\section{I. Introduction} The rapid development of superconducting quantum electronics has motivated a search for near quantum-limited microwave amplifiers for the low-noise readout of qubits and linear cavity resonators. It was long ago recognized that the dc Superconducting QUantum Interference Device (dc SQUID) can achieve noise performance approaching the fundamental quantum limit imposed on phase-insensitive linear amplifiers: namely, the amplifier must add at least half a quantum of noise to the signal it amplifies \cite{Koch80}. Yet while the SQUID is in principle capable of amplifying signals at frequencies approaching the Josephson frequency (typically of order tens of GHz), it remains challenging to embed the SQUID in a 50~$\Omega$ environment and to provide for efficient coupling of a microwave signal to the device. Recently, it was shown that near quantum-limited performance can be achieved with a microstrip SQUID amplifier, where the input coil is configured as a microstrip resonator with the SQUID washer acting as a groundplane \cite{Mueck98}. The noise temperature of a microstrip SQUID amplifier cooled to millikelvin temperatures has been measured to be 47 $\pm$ 10 mK and 48 $\pm$ 5 mK at frequencies of 519 MHz and 612 MHz, respectively, more than an order of magnitude lower than that of the best semiconductor amplifiers available and within a factor of 2 of the quantum limit \cite{Mueck01, Kinion11}. However, efforts to extend the operating frequencies of these amplifiers into the gigahertz range are hampered by the fact that reduction of the length of the input resonator is coupled to reduction of the mutual inductance between the input coil and the SQUID \cite{Mueck03}. Alternative approaches have included the integration of a high-gain SQUID gradiometer into a coplanar waveguide resonator at a current antinode \cite{Spietz08, Spietz09}. 
The current study was motivated by the development of a new device configuration that enables the efficient coupling of a GHz-frequency signal to a low-inductance, high gain SQUID that should achieve noise performance approaching the standard quantum limit. The gain element is more properly termed a SLUG (Superconducting Low-inductance Undulatory Galvanometer), as the signal is not coupled to the device inductively, but rather injected directly into the device loop as a current \cite{Clarke66}. The low-inductance design is straightforward to model at microwave frequencies, and the SLUG is readily incorporated into a microstrip line in such a way that the modes of the SLUG element and the input resonator remain cleanly resolved, greatly simplifying analysis of the circuit. In what follows we present a comprehensive theoretical study of the gain and noise performance of the SLUG microwave amplifier. Our goals are to clearly spell out the design tradeoffs, to outline a clear path to device optimization, and to identify the fundamental limits to performance. As we shall see, the scattering parameters of the SLUG are very similar to those of the more familiar symmetric dc SQUID, apart from a trivial shift in flux bias that arises from the asymmetric division of bias current between the two arms of the SLUG. However, while it is straightforward to fabricate a low-inductance ($\sim$~10~pH) SLUG and to embed the device in a 50~$\Omega$ environment, it is challenging to engineer a clean, purely inductive coupling to a conventional dc SQUID at microwave frequencies. For this reason we have chosen to focus our discussion of microwave amplifiers on the SLUG geometry. In this manuscript we will not consider phase-sensitive amplifiers based on parametrically modulated Josephson junctions operated in the supercurrent state \cite{Yurke88, Yurke89}. 
There has been significant recent development of low-noise Josephson parametric amplifiers \cite{Castellanos07, Yamamoto08, Hatridge11}, including such milestones as squeezing of vacuum noise \cite{Castellanos08} and observation of quantum jumps in a superconducting qubit \cite{Vijay11}. Because these amplifiers squeeze the input state, they can achieve added noise numbers for one field quadrature that are below the standard quantum limit \cite{Caves82, Clerk10}. Moreover, these devices operate with negligible dissipation, circumventing practical problems associated with hot-electron effects that are intrinsic to devices that operate in the finite-voltage regime. In related work, there have been efforts to develop low-noise phase-insensitive amplifiers based on parametrically modulated junctions in a ring modulator configuration \cite{Bergeal10}. Broadly speaking, advantages of the Josephson parametric amplifiers include unsurpassed noise performance and ease of fabrication, while potential disadvantages relative to SQUID-based dissipative amplifiers include modest gain-bandwidth product, limited dynamic range, and increased complexity of operation. Ultimately we suspect that there is a place in the superconducting quantum optician's toolbox for both ultralow noise phase-sensitive parametric amplifiers and robust, broadband phase-insensitive amplifiers operating near the standard quantum limit. This paper is organized as follows. In Section II we introduce the circuit models of the symmetric dc SQUID and the SLUG. In Section III we calculate the dc characteristics of the devices. In Section IV we evaluate SLUG scattering parameters, and examine the maximum achievable gain over the range of device parameters. Sections V and VI present an analysis of noise properties in the thermal and quantum regimes, respectively. 
In Section VII we describe the design and performance of practical SLUG amplifiers for GHz frequency operation, and in Section VIII we discuss amplifier dynamic range. In Section IX we describe the effect of the finite admittance of the input circuit on device characteristics, gain, and noise, and in Section X we discuss hot-electron effects. In Section XI we present our concluding remarks. \section{II. Device Model} To make contact with the earlier numerical studies of Tesche and Clarke \cite{Tesche77}, we begin by considering the familiar symmetric dc SQUID, shown in Fig. \ref{fig:circuits}a. The gain element consists of two overdamped Josephson junctions embedded in a superconducting loop with inductance $L$. The junctions (with gauge invariant phases $\delta_{1,2}$) have equal critical currents $I_0$, self-capacitances $C$, and shunt resistances $R$. The superconducting loop is formed from two equal branches with inductance $L/2$; we neglect the mutual inductance between the branches. A dc bias current $I_b$ and bias flux $\Phi_b$ establish a quasistatic operating point, and the signal is injected into an input coil that is coupled to the SQUID loop with mutual inductance $M$. The currents through the junctions are given by \begin{figure}[t] \begin{center} \includegraphics[width=.45\textwidth]{fig_1} \vspace*{-0.0in} \caption{Device geometries. (a) Symmetric dc SQUID. (b) Symmetric SLUG.} \label{fig:circuits} \end{center} \end{figure} \begin{align} I_1 &= I_0\sin{\delta_1} + \frac{(V_1 - V_{n,1})}{R} + C \, \frac{dV_1}{dt} \nonumber \\ I_2 &= I_0\sin{\delta_2} + \frac{(V_2 - V_{n,2})}{R} + C \, \frac{dV_2}{dt}, \end{align} where $V_{n,1}$ and $V_{n,2}$ are noise voltages associated with the resistive shunts, and where the voltages $V_{1,2}$ are related to the junction phases by the ac Josephson relation: \begin{align} V_1 &= \frac{\Phi_0}{2 \pi} \frac{d \delta_1}{dt} \nonumber \\ V_2 &= \frac{\Phi_0}{2 \pi} \frac{d \delta_2}{dt}. 
\end{align} Here, $\Phi_0~=~h/2e$ is the magnetic flux quantum. The SQUID loop supports a circulating current $J$ given by \begin{align} J = \frac{I_1 - I_2}{2}. \end{align} The voltage across the device is given by \begin{align} V &= V_1 + \frac{L}{2} \, \frac{dI_1}{dt} \nonumber \\ &= V_2 + \frac{L}{2} \, \frac{dI_2}{dt}. \end{align} The circulating current and the junction phases are related to the total flux in the loop $\Phi_T$ as follows: \begin{align} \Phi_T &= \Phi_b + LJ \nonumber \\ &= \frac{\Phi_0}{2 \pi} (\delta_2 - \delta_1). \end{align} \begin{figure}[t] \begin{center} \includegraphics[width=.49\textwidth]{fig_2} \vspace*{-0.0in} \caption{I-V characteristics of (a) symmetric dc SQUID and (b) symmetric SLUG for $\Phi_b = 0 \, \Phi_0$ (black), $ 0.25 \, \Phi_0$ (blue), and $0.5 \, \Phi_0$ (red). The device parameters are $\beta_L = 1$ and $\beta_C = 0.8$.} \label{fig:IV} \end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=.49\textwidth]{fig_3} \vspace*{-0.0in} \caption{V-$\Phi$ characteristics of (a) symmetric dc SQUID and (b) symmetric SLUG for $I_b = 1.8 \, I_0$ (black), $ 1.9 \, I_0$ (blue), and $2.0 \, I_0$ (red). The device parameters are $\beta_L = 1$ and $\beta_C = 0.8$.} \label{fig:VPhi} \end{center} \end{figure} We introduce dimensionless variables $i$, $v$, $\phi$, and $\theta$, defined as follows: $i~\equiv~I/I_0$, $v~\equiv~V/I_0 R$, $\phi~\equiv~\Phi/\Phi_0$, and $\theta~\equiv t~/~[\Phi_0/(2 \pi I_0 R)]$. In addition, we introduce the dimensionless reduced inductance $\beta_L = 2 I_0 L/\Phi_0$ and the damping parameter $\beta_C = (2 \pi/\Phi_0) I_0 R^2 C$. 
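As a quick numerical check of these definitions, the dimensionless parameters can be evaluated for the representative device values considered later in the text ($L$~=~10~pH, $I_0$~=~100~$\mu$A, $C$~=~50~fF, $T$~=~100~mK). The shunt resistance is not quoted directly, so the value $R \approx 7~\Omega$ below is an assumption inferred from $\beta_C = 0.8$. A minimal sketch:

```python
import math

PHI_0 = 2.067833848e-15  # magnetic flux quantum h/2e, in Wb
K_B = 1.380649e-23       # Boltzmann constant, in J/K

def beta_L(I0, L):
    """Reduced inductance beta_L = 2*I0*L/Phi_0."""
    return 2.0 * I0 * L / PHI_0

def beta_C(I0, R, C):
    """Damping parameter beta_C = (2*pi/Phi_0)*I0*R^2*C."""
    return (2.0 * math.pi / PHI_0) * I0 * R**2 * C

def gamma(I0, T):
    """Dimensionless noise parameter Gamma = 2*pi*k_B*T/(I0*Phi_0)."""
    return 2.0 * math.pi * K_B * T / (I0 * PHI_0)

# L = 10 pH, I0 = 100 uA, C = 50 fF, T = 100 mK are quoted in the text;
# R = 7.3 Ohm is our inferred (assumed) shunt resistance.
print(beta_L(100e-6, 10e-12))        # ~0.97, i.e. beta_L ~ 1
print(beta_C(100e-6, 7.3, 50e-15))   # ~0.81, i.e. beta_C ~ 0.8
print(gamma(100e-6, 0.100))          # ~4e-5
```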
The equations of motion for the junction phases are written as \begin{align} \beta_C \ddot{\delta_1} &= \frac{i_b}{2} + \frac{\delta_2 - \delta_1 - 2\pi \phi_b}{\pi \beta_L} - \sin{\delta_1} - \dot{\delta_1} + v_{n,1} \nonumber \\ \beta_C \ddot{\delta_2} &= \frac{i_b}{2} - \frac{\delta_2 - \delta_1 - 2\pi \phi_b}{\pi \beta_L} - \sin{\delta_2} - \dot{\delta_2} + v_{n,2}. \label{eqn:SQUID} \end{align} The quasistatic output voltage and circulating current are given by \begin{align} v_{out} &= \frac{1}{2}\left( \dot{\delta_1} + \dot{\delta_2} \right) \\ j &= \frac{1}{\pi \beta_L}\left( \delta_2 - \delta_1 - 2\pi \phi_b \right). \end{align} In the SLUG geometry of Fig. \ref{fig:circuits}b, the device loop is formed from two superconducting traces separated by a thin dielectric layer, and the input signal is injected directly into one of the traces. In the case where the SLUG is integrated into a microstrip transmission line, the device is realized in three metallization steps (corresponding to the circuit groundplane and the two arms of the SLUG), with two dielectric thin films separating the metal layers. The mutual inductance between the arms of the device is of order the self-inductance of the arms, and must be taken into account. Below for the sake of simplicity we consider the case where the two dielectric films separating the superconducting layers are of equal thickness, resulting in branch inductances $2L$ and $L$ with mutual inductance $L$. The total inductance of the device is then $L$, and the mutual coupling $M$ of the input current $I_\Phi$ to the device loop is also $L$. We refer to this configuration as the \textit{symmetric SLUG}. The total flux through the device becomes \begin{align} \Phi_T &= L\left(I_1 + I_\Phi \right) + \Phi_b \nonumber \\ &= \frac{\Phi_0}{2\pi}\left(\delta_2 - \delta_1\right). 
\end{align} We write the dimensionless equations of motion for $\delta_{1,2}$ as follows: \begin{align} \beta_C \ddot{\delta_1} &= \frac{\delta_2 - \delta_1 - 2\pi \phi_b}{\pi \beta_L} - i_\phi - \sin{\delta_1} - \dot{\delta_1} + v_{N,1} \nonumber \\ \beta_C\ddot{\delta_2} &= - \frac{\delta_2 - \delta_1 - 2\pi \phi_b}{\pi \beta_L} + i_b + i_\phi - \sin{\delta_2} - \dot{\delta_2} + v_{N,2}. \label{eqn:SLUG} \end{align} The output voltage and circulating current are given by \begin{align} v_{out} &= \dot{\delta_2} \\ j &= \frac{1}{\pi \beta_L}\left( \delta_2 - \delta_1 - 2\pi \phi_b \right) - i_\phi/2. \end{align} To operate the SQUID or the SLUG as an amplifier, one chooses $I_b$ and $\Phi_b$ to establish a quasistatic operating point where the transfer function $V_\Phi \equiv \partial V/\partial \Phi$ is large. In both cases, the device acts as a transimpedance element: the input signal is coupled to the device as a current, and the output signal is coupled from the device as a voltage. \section{III. dc Characteristics} Equations \eqref{eqn:SQUID} and \eqref{eqn:SLUG} were numerically integrated using a 4th order Runge-Kutta solver for $N \sim 2^{18}$ time steps $\Delta \theta$ over a range of bias points. In Figs. \ref{fig:IV}a-b we show the I-V characteristics of the symmetric dc SQUID and the symmetric SLUG with $\beta_L = 1$ and $\beta_C = 0.8$; in Figs. \ref{fig:VPhi}a-b we show the V-$\Phi$ characteristics of the same devices. For bias near $1.9 \, I_0$, the peak-to-peak voltage modulation is around 0.5~$I_0 R$. \begin{figure}[t] \begin{center} \includegraphics[width=.49\textwidth]{fig_4} \vspace*{-0.0in} \caption{Forward transfer function $V_\Phi$ of SLUG circuit \textit{versus} quasistatic bias flux for bias currents $I_b = 1.8 \, I_0$ (black), $ 1.9 \, I_0$ (blue), and $2.0 \, I_0$ (red). 
The device parameters are $\beta_L = 1$ and $\beta_C = 0.8$.} \label{fig:dVdPhi} \end{center} \end{figure} We observe that the dc characteristics of the SLUG closely match those of the symmetric dc SQUID, apart from a shift in flux bias point that arises from the asymmetric division of the SLUG bias current. Similarly, we have found that the scattering parameters and noise properties of the SLUG and the SQUID are closely matched, apart from this bias shift. Therefore for the sake of simplicity we choose to focus in the following on the device characteristics of the SLUG alone. We will consider the following set of SLUG parameters: $\beta_L=1$, $\beta_C = 0.8$, $L$~=~10~pH, and $C$~=~50~fF, corresponding to a junction with critical current 100 $\mu$A and area around 1~$\mu$m$^2$. Several considerations lead us to this choice. First, inductances of order 10~pH are realized in a reliable, controlled way using the SLUG geometry, and the resulting device is immune from stray reactances and straightforward to model at microwave frequencies. The required critical current density is 10~kA/cm$^2$, within the reach of standard Nb-AlO$_x$-Nb technology. While Joule heating in the shunt resistors is significant, the addition of large-volume normal-metal cooling fins should allow equilibration of the shunt resistors at temperatures below 100~mK (see Section X). Lower device inductances would require uncomfortably high junction critical currents to achieve comparable device performance, and fabrication yield and Joule heating of the shunts would become problematic. On the other hand, a significantly larger SLUG inductance would provide less gain and complicate the microwave engineering, owing to the larger device dimensions. \section{IV. Scattering Parameters} In order to optimize SLUG amplifier design, it is necessary to understand the forward transfer function and the complex input and output impedances of the device. 
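These quantities are extracted from the same numerical integration of the Langevin equations \eqref{eqn:SLUG} used for the dc characteristics of Section III. The following sketch illustrates the structure of such a solver; for brevity it uses a first-order semi-implicit stochastic Euler step in place of the 4th order Runge-Kutta scheme used in the text, and the step size and run length are illustrative choices:

```python
import numpy as np

def slug_mean_voltage(i_b, phi_b, beta_L=1.0, beta_C=0.8, Gamma=4e-5,
                      i_phi=0.0, dtheta=0.05, n_steps=1 << 16, seed=0):
    """Time-average of v_out = d(delta_2)/d(theta) from the dimensionless
    SLUG equations of motion (Eq. 10 region of the text), driven by
    Johnson-noise voltages of variance 2*Gamma/dtheta."""
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(2.0 * Gamma / dtheta)       # per-step noise voltage std
    vn = rng.normal(0.0, sigma, size=(n_steps, 2))
    d1 = d2 = 0.0                               # junction phases
    u1 = u2 = 0.0                               # phase velocities
    v_sum = 0.0
    for k in range(n_steps):
        circ = (d2 - d1 - 2.0 * np.pi * phi_b) / (np.pi * beta_L)
        a1 = (circ - i_phi - np.sin(d1) - u1 + vn[k, 0]) / beta_C
        a2 = (-circ + i_b + i_phi - np.sin(d2) - u2 + vn[k, 1]) / beta_C
        u1 += a1 * dtheta
        u2 += a2 * dtheta
        d1 += u1 * dtheta
        d2 += u2 * dtheta
        v_sum += u2                             # v_out = d(delta_2)/d(theta)
    return v_sum / n_steps
```

Sweeping $\phi_b$ at fixed $i_b$ and differencing the averaged voltage gives an estimate of $V_\Phi$ in the same spirit as Fig.~\ref{fig:dVdPhi}.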
To extract these from our model, we apply appropriate test signals and probe the complex response at the excitation frequency, chosen to be a small fraction of the Josephson frequency $\omega_J/2 \pi$. The forward transimpedance $V_I \equiv \partial V/\partial I$ is readily derived from the SLUG flux-to-voltage transfer function $V_\Phi$: \begin{align} V_I = M V_\Phi, \end{align} where again we have $M=L$ for the case of the symmetric SLUG. In Fig. \ref{fig:dVdPhi} we plot SLUG $V_\Phi$ \textit{versus} flux over a range of current bias points for $\beta_L = 1$ and $\beta_C = 0.8$. \begin{figure}[t] \begin{center} \includegraphics[width=.49\textwidth]{fig_5} \vspace*{-0.0in} \caption{$R/\mathcal{R}$ \textit{versus} flux for a SLUG with $\beta_L = 1$ and $\beta_C = 0.8$, for bias currents $I_b = 1.8 \, I_0$ (black), $ 1.9 \, I_0$ (blue), and $2.0 \, I_0$ (red).} \label{fig:Rin} \end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=.49\textwidth]{fig_6} \vspace*{-0.0in} \caption{$L/\mathcal{L}$ \textit{versus} flux for a SLUG with $\beta_L = 1$ and $\beta_C = 0.8$, for bias currents $I_b = 1.8 \, I_0$ (black), $ 1.9 \, I_0$ (blue), and $2.0 \, I_0$ (red).} \label{fig:Xin} \end{center} \end{figure} Next we consider the input return loss. The SLUG input is an inductive short to ground at low frequencies, and the complex input impedance $Z_i$ is frequency dependent. The input impedance is readily derived from the dynamic impedance $\mathcal{Z}$, defined in terms of the flux-to-current transfer function $J_\Phi \equiv \partial J/\partial \Phi$ as follows: \begin{align} -J_\Phi \equiv \frac{1}{\mathcal{Z}} = \frac{1}{\mathcal{L}} + \frac{j \omega}{\mathcal{R}}, \label{eqn:Zdyn} \end{align} where following Hilbert \textit{et al.} we have introduced the frequency-independent dynamic resistance $\mathcal{R}$ and dynamic inductance $\mathcal{L}$ \cite{Hilbert85a}. In Figs. 
\ref{fig:Rin} and \ref{fig:Xin} we plot $R/\mathcal{R}$ and $L/\mathcal{L}$, respectively, for a SLUG with $\beta_L = 1$ and $\beta_C = 0.8$ over a range of bias points. \begin{figure}[t] \begin{center} \includegraphics[width=.49\textwidth]{fig_7} \vspace*{-0.0in} \caption{SLUG output resistance $R_o$ \textit{versus} flux for bias currents $I_b = 1.8 \, I_0$ (black), $ 1.9 \, I_0$ (blue), and $2.0 \, I_0$ (red). The device parameters are $\beta_L = 1$ and $\beta_C = 0.8$.} \label{fig:Rout} \end{center} \end{figure} Finally, in Fig. \ref{fig:Rout} we show the device output impedance $R_o$ over a range of bias points. The output impedance is real and frequency independent, and the magnitude of $R_o$ is of order the junction shunt resistance $R$. \begin{figure}[t] \begin{center} \includegraphics[width=.47\textwidth]{fig_8} \vspace*{-0.0in} \caption{Maximum achievable power gain $G_m$ for a SLUG amplifier \textit{versus} flux for bias currents $I_b = 1.8 \, I_0$ (black), $ 1.9 \, I_0$ (blue), and $2.0 \, I_0$ (red). The device parameters are $\beta_L = 1$, $\beta_C = 0.8$, $L$~=~10~pH, and $C$~=~50~fF; the operating frequency is 5~GHz.} \label{fig:gain} \end{center} \end{figure} For the following discussion, it is convenient to work in terms of the bias-dependent dimensionless impedance parameters $\rho_{i,o}$, defined as follows: \begin{align} R_i &= \rho_i \frac{(\omega M)^2}{R} \nonumber \\ R_o &= \rho_o R \label{eqn:rho} \end{align} From the definition of $\mathcal{R}$ it follows that $\rho_i = R/\mathcal{R}$. As we will see, amplifier gain, bandwidth, and noise properties depend sensitively on $\rho_i$ and $\rho_o$. Power gain of the device is maximized when appropriate conjugate matching networks are employed to couple the signal to and from the device. The maximum available power gain $G_m$ is given as follows: \begin{align} G_m &= \frac{V_o^2/4 R_o}{I_i^2 R_i} \\ \nonumber \end{align} where $I_i$ is the input current and $V_o$ is the output voltage. Using Eq. 
\eqref{eqn:rho}, we find \begin{align} G_m &= \frac{1}{4 \rho_i \rho_o} \left(\frac{V_\Phi}{\omega}\right)^2. \end{align} In Fig. \ref{fig:gain} we plot $G_m$ for the symmetric SLUG with $\beta_L = 1$, $\beta_C = 0.8$, $L$~=~10~pH, and $C$~=~50~fF for an operating frequency of 5~GHz. Over a broad range of bias parameters gain in excess of 20 dB is readily achievable. It is important to note, however, that a conjugate match to a 50~$\Omega$ source does not yield best amplifier noise performance, due to the mismatch between the real part of the SLUG input impedance $R_i$ and the optimal noise-matched source impedance, which can be significantly larger than $R_i$. Amplifier optimization therefore involves a tradeoff between gain and noise performance, as discussed in detail below. \begin{figure}[t] \begin{center} \includegraphics[width=.47\textwidth]{fig_9} \vspace*{-0.0in} \caption{Dimensionless SLUG noises (a) $\gamma_V$, (b) $\gamma_J$, and (c) $\gamma_{VJ}$ \textit{versus} flux for bias currents $I_b = 1.8 \, I_0$ (black), $ 1.9 \, I_0$ (blue), and $2.0 \, I_0$ (red). The SLUG parameters are $\beta_L = 1$, $\beta_C = 0.8$, and $\Gamma = 4 \times 10^{-5}$.} \label{fig:gammas} \end{center} \end{figure} The bandwidth of the SLUG amplifier will be determined by the coupling to the low-impedance input port, as the device output is reasonably well-matched to typical transmission line impedances. To get a rough idea of amplifier bandwidth we consider a 50~$\Omega$ source impedance and assume that conjugate matching at the device input is accomplished via a simple quarter-wave transmission line section; for simplicity we neglect the imaginary part of the SLUG input impedance. The amplifier quality factor $Q$ is given by \begin{align} Q &\approx \frac{\pi}{8}\sqrt{\frac{50 \, \Omega}{R_i}} \nonumber \\ &= \frac{\pi}{8 \omega M} \sqrt{\frac{50 \, \Omega \times R}{\rho_i}}. 
\label{eqn:Q} \end{align} \begin{figure}[t] \begin{center} \includegraphics[width=.49\textwidth]{fig_10} \vspace*{-0.0in} \caption{Circuit for noise analysis.} \label{fig:noise_analysis} \end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=.49\textwidth]{fig_11} \vspace*{-0.0in} \caption{Real part of the optimal source impedance $R_{s,opt}$ \textit{versus} flux for bias currents $I_b = 1.8 \, I_0$ (black), $ 1.9 \, I_0$ (blue), and $2.0 \, I_0$ (red). The SLUG parameters are $\beta_L = 1$, $\beta_C = 0.8$, and $\Gamma = 4 \times 10^{-5}$. The operating frequency is 5~GHz.} \label{fig:Rs_Ri} \end{center} \end{figure} The bandwidth of an amplifier designed at an operating frequency $\omega/2 \pi$ is then $\omega/2 \pi Q$. For an operating frequency around 5~GHz, we find that $R_i$ is of order 0.1~$\Omega$. Therefore we expect $Q$ of order 10, and amplifier bandwidths of order hundreds of MHz. For current bias $I_b < 2 I_0$ and for a narrow range of fluxes corresponding to bias points near the supercurrent branch, we find that it is possible to achieve extremely high power gain (see Fig. \ref{fig:gain}). However, the high gains achieved at these bias points are due largely to vanishing $R_i$; an amplifier designed to operate in this regime would have a rather small bandwidth. It is important to note that Eq. \eqref{eqn:Q} presents only a rough guideline for the bandwidth rather than a fundamental limit. In particular, it is possible to obtain a larger bandwidth with no degradation in gain by employing either a tapered transmission line matching section or a multisection input transformer with stepped transmission line impedances. We postpone a more detailed discussion of amplifier bandwidth to Section VII. \section{V. 
Noise Properties in the Thermal Regime} The Johnson noise of the shunt resistors gives rise to a voltage noise at the device output and to a circulating current noise in the device loop; moreover, these noises are partially correlated, since the circulating current noise couples a flux noise to the loop, which in turn yields a voltage noise across the device. To incorporate noise in our model, we used a pseudorandom number generator to create a gaussian-distributed set of voltages $v_{N,1}$ and $v_{N,2}$ with zero mean and variance $2\Gamma/\Delta \theta$, where we have introduced the dimensionless noise parameter $\Gamma = 2\pi k_B T/I_0 \Phi_0$; this choice corresponds to the usual white power spectral density $S_v = 4\Gamma$ for Johnson noise in the thermal limit. The simulations were averaged over many ($\sim$~100) realizations of the random noise voltages. Following Clarke \textit{et al.}, we introduce the dimensionless noise parameters $\gamma_V$, $\gamma_J$, and $\gamma_{VJ}$, such that the voltage noise spectral density at the device output is given by $S_V=2 \gamma_V k_B T R$, the circulating current noise spectral density is $S_J = 2 \gamma_J k_B T/R$, and the cross noise spectral density is $S_{VJ} = 2 \gamma_{VJ} k_B T$; here $T$ is the electron temperature of the shunt resistors \cite{Tesche79, Hilbert85b}. These noises are calculated by solving the Langevin equations \eqref{eqn:SLUG}. The noise spectrum consists of a series of peaks at the Josephson frequency and its harmonics; the dimensionless noises $\gamma$ are evaluated at low frequency $f \ll \omega_J/2 \pi$ where the spectrum is white. The noises $\gamma$ do depend on the noise parameter $\Gamma$, due to the possibility of saturation and smearing of the device characteristics at elevated temperature. In Fig. 
\ref{fig:gammas} we plot the dimensionless noises over a range of bias parameters of the symmetric SLUG for $\beta_L = 1$, $\beta_C = 0.8$, and $\Gamma = 4 \times 10^{-5}$; this choice corresponds to a temperature of 100 mK for a junction critical current of 100 $\mu$A. We note that at high bias current $I_b \gg I_0$, $\gamma_{V, J}$ approach the expected Johnson noise limit of $1$ for the two shunt resistors in parallel. The device noise temperature $T_n$ can be evaluated from the circuit shown in Fig. \ref{fig:noise_analysis}. We assume a noiseless source impedance $Z_s = R_s + j X_s$ and equate the total noise of the amplifier to the noise contribution from a source resistance $R_s$ at an effective temperature $T_n$. We refer all noises to the device output. We find \begin{align} &4 k_B T_n R_s \, \frac{V_\Phi^2 M^2}{R_t^2 + X_t^2} \, = \nonumber \\ &2\gamma_V k_B T R \, + \, \frac{2 \gamma_J k_B T}{R} \, \frac{\omega^2 V_\Phi^2 M^4}{R_t^2 + X_t^2} \, + \, 4 \gamma_{VJ} k_B T \, \frac{\omega V_\Phi M^2 X_t}{R_t^2 + X_t^2}. \end{align} Here $R_t = R_s + R_i$ ($X_t = X_s + X_i$) is the sum of the real (imaginary) parts of the source impedance and the device input impedance. The noise temperature is thus given by \begin{align} T_n = \left[\frac{\gamma_V}{2}\, \frac{(R_t^2+X_t^2)R}{V_\Phi^2 M^2 R_s} \,+\, \frac{\gamma_J}{2} \,\frac{\omega^2 M^2}{R R_s} \,+\, \gamma_{VJ}\,\frac{\omega X_t}{V_\Phi R_s} \right] T . \label{eqn:Tnthermal} \end{align} \begin{figure}[t] \begin{center} \includegraphics[width=.49\textwidth]{fig_12} \vspace*{-0.0in} \caption{Optimal SLUG noise temperature \textit{versus} flux for bias currents $I_b = 1.8 \, I_0$ (black), $ 1.9 \, I_0$ (blue), and $2.0 \, I_0$ (red). The SLUG parameters are $\beta_L = 1$, $\beta_C = 0.8$, and $\Gamma = 4 \times 10^{-5}$. 
The operating frequency is 5~GHz.} \label{fig:Tn_thermal} \end{center} \end{figure} We use the condition $\partial T_n / \partial X_t = 0$ to solve for the imaginary part of the optimal source impedance. We find \begin{align} X_{s,opt} = -\frac{\gamma_{VJ}}{\gamma_V} \, \frac{\omega V_\Phi M^2}{R} - X_i. \end{align} Similarly, the condition $\partial T_n / \partial R_s = 0$ yields the real part of the optimal source impedance. We have \begin{align} R_{s,opt} = \left[1 + \frac{1}{\gamma_V^2 \rho_i^2}\left(\frac{V_\Phi}{\omega}\right)^2 \left(\gamma_V \gamma_J - \gamma_{VJ}^2\right) \right]^{1/2} R_i. \end{align} For bias points where $V_\Phi$ is highest, we have the following approximate expression for $R_{s,opt}$: \begin{align} R_{s,opt} \, \approx& \, \frac{1}{\gamma_V \rho_i} \frac{V_\Phi}{\omega} \left(\gamma_V \gamma_J - \gamma_{VJ}^2 \right)^{1/2} R_i \nonumber \\ =& \, \frac{\omega V_\Phi M^2}{\gamma_V R} \left(\gamma_V \gamma_J - \gamma_{VJ}^2 \right)^{1/2}. \end{align} In Fig. \ref{fig:Rs_Ri} we plot $R_{s,opt}/R_i$ \textit{versus} flux for various bias currents. For typical device parameters, we have $R_{s,opt} \gg R_i$. For this reason, it is not possible to achieve a simultaneous power match and noise match. It is worthwhile to note, however, that the ratio $R_{s,opt}/R_i$ scales with frequency as $\omega^{-1}$, facilitating simultaneous attainment of high gain and good noise performance at higher operating frequencies. When the signal is coupled to the device via a source with optimal impedance $R_{s,opt} + j X_{s,opt}$, the amplifier noise temperature becomes \begin{align} T_{n,opt} \, = \, \frac{\omega}{V_\Phi} \left(\gamma_V \gamma_J - \gamma_{VJ}^2 \right)^{1/2} T. \end{align} In Fig. \ref{fig:Tn_thermal} we show the optimal noise temperature $T_{n,opt}$ for a SLUG amplifier over a range of bias points at an operating frequency $\omega / 2 \pi = 5$~GHz. 
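The expression for $T_{n,opt}$ above is straightforward to evaluate numerically. In the sketch below, the transfer function and the $\gamma$ values are illustrative placeholders of plausible magnitude, not numbers read from Fig.~\ref{fig:gammas}:

```python
import math

def tn_opt(omega, V_phi, gamma_V, gamma_J, gamma_VJ, T):
    """Optimal noise temperature for a noise-matched source:
    T_n,opt = (omega/V_phi) * sqrt(gamma_V*gamma_J - gamma_VJ^2) * T."""
    return (omega / V_phi) * math.sqrt(gamma_V * gamma_J - gamma_VJ**2) * T

# Illustrative (assumed) inputs: f = 5 GHz, T = 100 mK, a flux-to-voltage
# transfer V_phi ~ 3.5e11 s^-1 (of order I0*R/Phi_0), and gamma values of
# order unity to ten with gamma_V*gamma_J > gamma_VJ^2.
omega = 2.0 * math.pi * 5e9
T_n = tn_opt(omega, 3.5e11, gamma_V=8.0, gamma_J=5.5, gamma_VJ=6.0, T=0.100)
print(T_n)  # ~0.025 K, i.e. a few tens of mK
```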
Note that every point in these plots corresponds to a different realization of the input matching network; in Section VII we will examine the bias- and frequency-dependent noise temperature of SLUG amplifiers operated with a fixed input network. \section{VI. Noise Properties in the Quantum Regime} At sufficiently low temperature, the zero-point fluctuations of the resistive shunts are expected to make the dominant noise contribution. The full expression for the spectral density of voltage noise produced by the resistors is written as $2 h f R \, \textrm{coth}(hf/2k_B T)$. \begin{figure}[th!] \begin{center} \includegraphics[width=.47\textwidth]{fig_13} \vspace*{-0.0in} \caption{Quantum noises (a) $S_V$, (b) $S_J$, and (c) $S_{VJ}$ \textit{versus} flux for bias currents $I_b = 1.8 \, I_0$ (black), $ 1.9 \, I_0$ (blue), and $2.0 \, I_0$ (red). The SLUG parameters are $\beta_L = 1$, $\beta_C = 0.8$, $L$~=~10~pH, and $C$~=~50~fF.} \label{fig:quantumnoise} \end{center} \end{figure} We have calculated the added noise of the symmetric SLUG in the zero-temperature limit, where the voltage spectral density of the shunt resistors becomes $2 h f R$. We generate a single-sided quantum spectral density by digitally filtering gaussian white noise. Using the quantum noise as a driving term in the Langevin equations \eqref{eqn:SLUG}, we evaluate the voltage power spectral density $S_V(f)$ at the device output, the circulating current spectral density $S_J(f)$, and the cross spectral density $S_{VJ}(f)$; in Fig. \ref{fig:quantumnoise} we plot these noises \textit{versus} flux for various bias currents. Once again, the device noise temperature $T_n$ can be evaluated from the circuit of Fig. \ref{fig:noise_analysis}. We assume a zero-temperature source impedance $Z_s = R_s + j X_s$, and equate the total noise of the amplifier to the noise contribution from a source resistance $R_s$ at a finite effective temperature $T_n$. 
The amplifier noise temperature is obtained from the relation \begin{align} &2 h f R_s \, \textrm{coth}\left(hf/2k_B T_n\right) \frac{V_\Phi^2 M^2}{R_t^2 + X_t^2} = \nonumber \\ &S_V + S_J \frac{\omega^2 V_\Phi^2 M^4}{R_t^2 + X_t^2} + 2 S_{VJ} \frac{\omega V_\Phi M^2 X_t}{R_t^2 + X_t^2} + 2 h f R_s \frac{V_\Phi^2 M^2}{R_t^2 + X_t^2}. \end{align} Alternatively, we can express the noise contribution of the device in terms of an added number of noise photons $n$, where $n$ and $T_n$ are related as follows: \begin{align} \textrm{coth}\left(hf/2k_B T_n\right) = 2 n + 1, \label{eqn:n_Tn} \end{align} so that \begin{align} n = \frac{1}{2 h f R_s} \left[\frac{S_V}{2} \, \frac{R_t^2 + X_t^2}{V_\Phi^2 M^2} + \frac{S_J}{2} \omega^2 M^2 + S_{VJ} \frac{\omega}{V_\Phi} X_t \right]. \label{eqn:n} \end{align} The optimal source impedance $Z_{s,opt} = R_{S,opt} + j X_{s,opt}$ is obtained from the relations $\partial n / \partial X_t~=~0$ and $\partial n / \partial R_s~=~0$. The imaginary part of the optimal source impedance is given as follows: \begin{align} X_{s,opt} = -\frac{S_{VJ}}{S_V} \omega V_\Phi M^2 - X_i. \end{align} Similarly, the real part of the optimal source impedance is written \begin{align} R_{s,opt} = \left[1 + \left(\frac{V_\Phi R}{\rho_i \omega S_V}\right)^2 \left(S_V S_J - S_{VJ}^2 \right)\right]^{1/2} R_i. \end{align} In the limit $V_\Phi \gg \omega$, we find \begin{align} R_{s,opt} \approx \frac{\omega V_\Phi M^2}{S_V} \left(S_V S_J - S_{VJ}^2 \right)^{1/2}. \label{eqn:rsquant} \end{align} In Fig. \ref{fig:Rsopt_quantum} we plot $R_{s,opt}/R_i$ in the quantum regime \textit{versus} flux for a range of bias currents. 
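The quantum noise drive used in these simulations was described above as digitally filtered gaussian white noise. One way to realize such a filter is to shape white noise in the frequency domain so that the power spectral density grows linearly with $f$, as required for the zero-temperature form $2hfR$; a minimal sketch, with an arbitrary normalization (only the frequency dependence is the point here):

```python
import numpy as np

def quantum_noise_trace(n, dt, scale=1.0, seed=0):
    """Real time series of length n whose power spectral density grows
    linearly with frequency, S(f) ~ scale * f, obtained by shaping white
    Gaussian noise with the amplitude filter sqrt(S(f))."""
    rng = np.random.default_rng(seed)
    white = rng.normal(size=n)
    W = np.fft.rfft(white)
    f = np.fft.rfftfreq(n, dt)
    W *= np.sqrt(scale * f)     # |W(f)|^2 now scales linearly with f
    return np.fft.irfft(W, n)
```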
For the optimally matched source, the added number of noise photons is given by \begin{figure}[t] \begin{center} \includegraphics[width=.49\textwidth]{fig_14} \vspace*{-0.0in} \caption{Real part of the optimal source impedance $R_{s,opt}$ in the quantum regime \textit{versus} flux for bias currents $I_b = 1.8 \, I_0$ (black), $ 1.9 \, I_0$ (blue), and $2.0 \, I_0$ (red). The operating frequency is 5~GHz and the SLUG parameters are $\beta_L = 1$, $\beta_C = 0.8$, $L$~=~10~pH, and $C$~=~50~fF.} \label{fig:Rsopt_quantum} \end{center} \end{figure} \begin{align} n_{opt} = \frac{1}{2 \hbar V_\Phi} \left(S_V S_J - S_{VJ}^2 \right)^{1/2}. \end{align} In Fig. \ref{fig:n_added} we plot $n_{opt}$ \textit{versus} flux, for various current biases. We see that for an appropriately noise-matched source the SLUG approaches a noise level that is close to the standard quantum limit $n_{SQL} = 1/2$, the minimum achievable added noise for a phase-insensitive linear amplifier \cite{Caves82}. \begin{figure}[t] \begin{center} \includegraphics[width=.49\textwidth]{fig_15} \vspace*{-0.0in} \caption{Minimum number of added noise photons in the quantum regime $n_{opt}$ \textit{versus} flux for bias currents $I_b = 1.8 \, I_0$ (black), $ 1.9 \, I_0$ (blue), and $2.0 \, I_0$ (red). The operating frequency is 5~GHz and the SLUG parameters are $\beta_L = 1$, $\beta_C = 0.8$, $L$~=~10~pH, and $C$~=~50~fF.} \label{fig:n_added} \end{center} \end{figure} \section{VII. Amplifier Design} The above analysis demonstrates that the SLUG is an attractive gain element for the realization of a low-noise microwave amplifier. We now consider concrete external networks used to embed the device in a 50~$\Omega$ environment. The tasks are to maximize power transfer to and from the device and to match the 50~$\Omega$ source to the optimal noise impedance at the desired operating frequency. For example, to maximize gain we design a conjugate matching network to transform the 50~$\Omega$ source to $R_i - j X_i$. 
On the other hand, optimal noise performance is achieved for an input matching network that transforms the 50~$\Omega$ generator to the complex optimal source impedance $Z_{s,opt} = R_{s,opt} + j X_{s,opt}$. Since $R_{s,opt} \gg R_i$ for typical parameters, it is generally not possible to achieve a simultaneous power match and noise match. However, it is possible to find a compromise where there is reasonable gain and good noise performance over a relatively broad bias range. Fig. \ref{fig:amplifier}a shows a schematic diagram of a SLUG-based microwave amplifier with transmission line matching sections at the input and output. To calculate amplifier gain and noise performance, we treat the SLUG as a ``black box" with scattering and noise parameters derived from the calculations of Sections IV-VI (Fig. \ref{fig:amplifier}b). \begin{figure}[t] \begin{center} \includegraphics[width=.49\textwidth]{fig_16} \vspace*{-0.0in} \caption{(a) Schematic of SLUG microwave amplifier. (b) Circuit for amplifier analysis.} \label{fig:amplifier} \end{center} \end{figure} As an example, we show in Fig. \ref{fig:quadplot} the frequency-dependent gain, noise temperature $T_n$, and added noise quanta $n$ for SLUG amplifiers operated with different single-section transmission line input couplers with characteristic impedance in the range from 1-3 $\Omega$. Here we have used the full expressions \eqref{eqn:Tnthermal} and \eqref{eqn:n} to calculate the frequency-dependent noise contribution of the amplifier in the thermal and quantum regimes, respectively. The length of the input coupler provides a bare quarter-wave resonance at 6.5 GHz; inductive loading by the SLUG pulls the operating frequency down to the desired value of 5 GHz. 
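Characteristic impedances of a few ohms follow naturally from thin-film microstrip geometry. As a rough check, one can use the wide-trace (parallel-plate) limit $Z_0 \approx (\eta_0/\sqrt{\epsilon_r})\,(d/w)$; the geometry in the sketch below is an illustrative example, not a fabricated device.

```python
import math

ETA_0 = 376.73  # impedance of free space [ohms]

def microstrip_z0(width, dielectric_thickness, eps_r):
    """Parallel-plate estimate of microstrip impedance, valid for width >> thickness."""
    return ETA_0 / math.sqrt(eps_r) * dielectric_thickness / width

# Example geometry: a 10 um trace over a 100 nm dielectric with eps_r = 4
z0 = microstrip_z0(10e-6, 100e-9, 4.0)  # roughly 1.9 ohms
```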
We remark that the transmission line impedances considered here are readily achieved with thin-film microstrip technology: for example, a trace width of 10~$\mu$m and a dielectric with $\epsilon_r = 4$ and thickness 100~nm corresponds to a characteristic impedance of 2~$\Omega$. \begin{figure}[t] \begin{center} \includegraphics[width=.49\textwidth]{fig_17} \vspace*{-0.0in} \caption{(a) Gain, (b) noise temperature, and (c) added noise quanta for a 5~GHz amplifier incorporating a 10~pH SLUG element with $\beta_L = 1$, $\beta_C = 0.8$, $C$~=~50~fF and $I_b = 1.8 \, I_0$. The input matching network is a single transmission line section with characteristic impedance as indicated. Gain and added noise are evaluated at the frequency where the quantum noise contribution of the SLUG is minimum.} \label{fig:quadplot} \end{center} \end{figure} In Fig. \ref{fig:singlesection} we consider the frequency-dependent gain and noise performance of SLUG amplifiers operated with different fixed single-section input coupling networks. Due to the nonvanishing cross spectral density $S_{VJ}$, the minimum noise temperature occurs at a frequency that is somewhat lower than the frequency of maximum gain. For a $Z_{0,i}=2$~$\Omega$ input coupler, we achieve noise within 50\% of the standard quantum limit at a frequency where amplifier gain is 15~dB, and noise within a factor of 2 of the standard quantum limit at a frequency where gain is 18~dB. \begin{figure}[t] \begin{center} \includegraphics[width=.49\textwidth]{fig_18} \vspace*{-0.0in} \caption{(a) Gain, (b) noise temperature in the thermal regime, and (c) added noise in the quantum regime for a 5~GHz SLUG amplifier. The device parameters are $\beta_L = 1$, $\beta_C = 0.8$, $L$~=~10~pH, $C$~=~50~fF, and $I_b = 1.8 \, I_0$. 
The input matching network is a single transmission line section with bare quarter-wave resonance at 6.5~GHz and characteristic impedance 2~$\Omega$.} \label{fig:singlesection} \end{center} \end{figure} Finally, we note that it is possible to increase amplifier bandwidth significantly by coupling the input signal to the device via a multisection transformer with stepped characteristic impedances. As an example, we show in Fig. \ref{fig:three_section} the frequency-dependent gain and added noise for amplifiers operated with different three-section matching networks. Here the length of each transmission line section is chosen to provide a bare quarter-wave resonance at 5 GHz, and the characteristic impedances were determined by numerical minimization of the quantum noise contribution of the SLUG in the frequency range from 4.5 to 5.5 GHz. \begin{figure}[t] \begin{center} \includegraphics[width=.49\textwidth]{fig_19} \vspace*{-0.0in} \caption{(a) Gain, (b) noise temperature in the thermal regime, and (c) added noise in the quantum regime for broadband amplifiers incorporating a 10~pH SLUG element with $\beta_L = 1$, $\beta_C = 0.8$, $I_b = 1.8 \, I_0$, and $\Phi_b = 0.35 \, \Phi_0$. The red traces correspond to a three-section input matching network with quarter-wave resonances at 5 GHz and with characteristic impedances of 24.3 $\Omega$, 17.4 $\Omega$, and 3.0 $\Omega$, derived from numerical minimization of the SLUG quantum noise over the band from 4.5 GHz to 5.5 GHz. The blue traces correspond to a matching network consisting of three sections with characteristic impedance 29.8 $\Omega$, 7.1 $\Omega$, and 1.1 $\Omega$ followed by a series capacitance of 38 pF to tune out the imaginary part of the SLUG input impedance at a frequency of 5 GHz.} \label{fig:three_section} \end{figure} \section{VIII. Dynamic Range} The strong nonlinearity of the SLUG leads to gain compression and harmonic generation when the device is driven with a large-amplitude signal.
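Input powers in this section are quoted in dBm (dB referenced to 1~mW) and dynamic range as a dB$\cdot$Hz figure of merit. The bookkeeping is elementary but worth pinning down; a minimal sketch:

```python
import math

def dbm_to_watts(p_dbm):
    """dBm is dB referenced to 1 mW: P [W] = 1e-3 * 10**(P_dBm/10)."""
    return 1e-3 * 10.0 ** (p_dbm / 10.0)

def dynamic_range_db(dr_db_hz, bandwidth_hz):
    """Dynamic range in a finite bandwidth from a dB*Hz figure of merit."""
    return dr_db_hz - 10.0 * math.log10(bandwidth_hz)
```

For example, $-110$~dBm corresponds to 10~fW, and a 130~dB$\cdot$Hz figure of merit corresponds to 40~dB of dynamic range in a 1~GHz bandwidth.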
It is important to verify that the SLUG dynamic range will be sufficient for the desired application. In Fig. \ref{fig:dynamic_range}a we plot normalized SLUG gain \textit{versus} signal power coupled to the device input over a range of bias parameters for $\beta_L = 1$, $\beta_C = 0.8$, $L$~=~10~pH and $C$~=~50~fF. These plots were generated by solving the SLUG equations of motion (\ref{eqn:SLUG}) with a sinusoidal driving term of varying amplitude. Depending on bias point, the 1 dB compression point occurs somewhere in the range from $-110$~dBm to $-90$~dBm, corresponding to input powers from 10~fW to 1~pW. These 1~dB compression points are comparable to those seen in other SQUID-based microwave amplifiers \cite{Spietz09} and 1-2 orders of magnitude higher than those achieved with typical Josephson parametric amplifiers \cite{Yamamoto08}. Amplifier dynamic range is determined by dividing the signal power at 1 dB compression by the noise power contributed by the SLUG over a given bandwidth. In Fig. \ref{fig:dynamic_range}b we plot SLUG dynamic range; here we have used the zero-temperature quantum spectral density for the shunt resistors of the SLUG. We find a typical value of 130~dB~Hz, corresponding to a dynamic range of 40~dB in an amplifier bandwidth of 1~GHz. For applications related to dispersive readout of qubits in a circuit QED architecture, where the focus is on measurement of signals at the level of single microwave quanta in bandwidths of order 100~MHz to 1~GHz, the dynamic range of the SLUG amplifier is more than adequate. \begin{figure}[t] \begin{center} \includegraphics[width=.49\textwidth]{fig_20} \vspace*{-0.0in} \caption{(a) Normalized gain \textit{versus} input power for a SLUG element with $\beta_L = 1$, $\beta_C = 0.8$, $L= 10$~pH, $C=50$~fF, and $I_b = 1.8 \, I_0$. The different traces correspond to various flux bias points.
(b) SLUG dynamic range \textit{versus} flux for various current bias points; the device parameters are as in (a), and we assume a zero-temperature quantum spectral density for the SLUG shunt resistors.} \label{fig:dynamic_range} \end{center} \end{figure} \section{IX. Effect of Input Circuit Admittance} In the above analysis, we have solved for the behavior of the isolated SLUG element, and then treated the device as a ``black box" with known scattering parameters for the purpose of designing appropriate matching networks. In reality, the nonvanishing admittance at the device input and output will modify the device characteristics, and a complete treatment must take loading by the external circuit into account. The scattering parameters will now depend on the particular realization of the matching network, and a full exploration of the space of design parameters becomes tedious. However, we find that the performance of the SLUG amplifier is not greatly affected by the nonvanishing input circuit admittance, particularly once modest steps are taken to decouple the SLUG element from the higher-order modes of the resonant input matching network. To take into account the admittance of the resonant input matching network, we modify the junction equations of motion \eqref{eqn:SLUG} to include an additional term representing the current drawn by the input circuit. The circuit model is shown in Fig. \ref{fig:filter_circuits}a. The input transmission line of impedance $Z_0$ can be exactly modeled as a pair of coupled, time dependent voltage sources $E_L$ and $E_S$. These are related to the voltages $V_{L,S}$ and currents $I_{L,S}$ at the two ends of the transmission line as follows: \begin{align} E_L(t) &= V_S(t-t_D) + Z_0 I_S(t - t_D) \nonumber \\ E_S(t) &= V_L(t-t_D) - Z_0 I_L(t - t_D), \end{align} where $t_D$ is the propagation delay along the transmission line. 
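Numerically, the delayed sources $E_L$ and $E_S$ can be realized as FIFO buffers holding the past $t_D/\Delta t$ samples of the port voltages and currents. The sketch below shows this bookkeeping under the assumption of a fixed integration step; the class and variable names are ours, not from any standard package.

```python
from collections import deque

class TLineDelay:
    """Two-port delay-line model of a lossless transmission line.

    Implements E_L(t) = V_S(t - t_D) + Z0*I_S(t - t_D) and
    E_S(t) = V_L(t - t_D) - Z0*I_L(t - t_D), with the delay t_D
    realized as a FIFO buffer of n_delay = t_D/dt past samples.
    """
    def __init__(self, z0, n_delay):
        self.z0 = z0
        # Buffers pre-filled with zeros: the line starts unexcited.
        self.hist_S = deque([(0.0, 0.0)] * n_delay, maxlen=n_delay)
        self.hist_L = deque([(0.0, 0.0)] * n_delay, maxlen=n_delay)

    def step(self, v_s, i_s, v_l, i_l):
        """Push the present port values; return the delayed sources (E_L, E_S)."""
        v_s_old, i_s_old = self.hist_S[0]   # sample from time t - t_D
        v_l_old, i_l_old = self.hist_L[0]
        self.hist_S.append((v_s, i_s))      # maxlen evicts the oldest sample
        self.hist_L.append((v_l, i_l))
        e_l = v_s_old + self.z0 * i_s_old
        e_s = v_l_old - self.z0 * i_l_old
        return e_l, e_s
```

With constant port values, the delayed sources settle to $E_L = V_S + Z_0 I_S$ and $E_S = V_L - Z_0 I_L$ after one line delay, as expected in steady state.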
The input current is then determined by the additional differential equation \begin{equation} \dot{I_L} = \frac{1}{L} \left[\frac{\Phi_0}{2 \pi} (\dot{\delta_2} - \dot{\delta_1}) - E_L + I_L Z_0 \right]. \end{equation} \begin{figure}[t] \begin{center} \includegraphics[width=.49\textwidth]{fig_21} \vspace*{-0.0in} \caption{(a) Model for circuit analysis with finite input circuit admittance. (b) Amplifier circuit with filter inductor $L_f$ to decouple SLUG from modes of the input circuit.} \label{fig:filter_circuits} \end{center} \end{figure} Using the modified equations of motion for the junction phases, we calculate the dc characteristics of the SLUG. The I-V and V-$\Phi$ curves of a 10~pH, $\beta_L = 1$ SLUG with a 10~GHz quarter-wave input transformer are shown in Fig. \ref{fig:filter_dc}a-b. We observe sharp Shapiro step-like structure at voltages corresponding to Josephson frequencies that are integer multiples of the half-wave resonance of the input circuit. While quantum fluctuations of the SLUG shunts smooth out this structure somewhat, it is clearly desirable to decouple the SLUG from the higher-order standing wave modes of the input circuit, as these modes will limit amplifier dynamic range and lead to excess noise. \begin{figure}[t] \begin{center} \includegraphics[width=.49\textwidth]{fig_22} \vspace*{-0.0in} \caption{(a) I-V curves of a SLUG operated with a transmission line input circuit with characteristic impedance $Z_0$~=~2~$\Omega$ and bare quarter-wave resonance at 10~GHz for $\Phi_b = 0 \, \Phi_0$ (black), $ 0.25 \, \Phi_0$ (blue), and $0.5 \, \Phi_0$ (red). (b) V-$\Phi$ curves of the same SLUG for $I_b = 1.8 \, I_0$ (black), $ 1.9 \, I_0$ (blue), and $2.0 \, I_0$ (red).
(c)-(d) As in (a)-(b), respectively, for a circuit incorporating a 60~pH filter inductor $L_f$ to decouple the modes of the SLUG from the modes of the input circuit.} \label{fig:filter_dc} \end{figure} To suppress the resonances of the input circuit, we insert a filter inductor $L_f$ of order tens of pH between the input coupler and the SLUG element, as shown in Fig. \ref{fig:filter_circuits}b. In Fig. \ref{fig:filter_dc}c-d we plot the SLUG characteristics with a 60~pH filter inductor in place. We see that the resonant structure is greatly suppressed. We can now calculate the gain and noise properties of the complete circuit of Fig. \ref{fig:filter_circuits}b by performing a full integration of the amplifier equations of motion. Power gain and bandwidth are determined by driving the amplifier with a sinusoidal input tone and monitoring the SLUG output at the excitation frequency. In Fig. \ref{fig:full_circuit}a we plot frequency-dependent gain for the SLUG circuit. The blue trace is the result of the full circuit simulation, where we have taken a transmission line input with characteristic impedance $Z_0 = 2~\Omega$ and a length corresponding to a bare quarter-wave resonance at 10~GHz, significantly higher than the amplifier operating frequency of 4.5~GHz in order to compensate for the additional reactive loading by the filter inductor. The red trace was obtained by treating the SLUG as a ``black box" with scattering parameters calculated as described above in Section IV. The agreement with the full circuit simulation is good, confirming that the filter inductance has effectively isolated the modes of the SLUG and the input circuit. \begin{figure}[t] \begin{center} \includegraphics[width=.49\textwidth]{fig_23} \vspace*{-0.0in} \caption{(a) Gain and (b) added noise in the quantum regime for SLUG amplifiers calculated using the ``black box" scattering parameters of the isolated SLUG (red traces) or by solving the full circuit model of Fig.
\ref{fig:filter_circuits}. The SLUG parameters are $\beta_L = 1$, $\beta_C = 0.8$, $L= 10$~pH, $C=50$~fF, $I_b = 1.8 \, I_0$, and $\Phi_b = 0.35 \Phi_0$. The input matching network consists of a 2 $\Omega$ transmission line section with bare quarter-wave resonance at 10 GHz followed by a filter inductor $L_f = 60$ pH.} \label{fig:full_circuit} \end{center} \end{figure} To calculate the frequency-dependent noise temperature $T_n(f)$, we simulate a ``hot load / cold load" experiment where we compare the power spectra $S_{V,cold}$ and $S_{V,hot}$ at the device output for source resistances at temperatures $T=0$ and $T_b$, respectively. In the thermal regime, we have \begin{align} T_n(f) = \frac{S_{V,cold}(f)}{S_{V,hot}(f) - S_{V,cold}(f)} \,\, T_b. \end{align} In the quantum regime, we find \begin{align} \frac{\textrm{coth}\left[hf/2k_B(T_b+T_n) \right]}{\textrm{coth}\left( hf/2k_B T_n \right)} = \frac{S_{V,hot}}{S_{V,cold}}. \end{align} The added noise number is then obtained from Eq. \ref{eqn:n_Tn}. In Fig. \ref{fig:full_circuit}b we plot the added noise of a 5~GHz SLUG amplifier calculated with the full circuit model and with the ``black box" scattering parameters of the isolated SLUG. The noise magnitude is similar in the two cases, although the full circuit solution predicts a higher frequency for the minimum in the amplifier noise contribution. We understand the shift in the frequency-dependent noise characteristics to be due to a modification of the circulating current spectral density $S_J$ by the nonvanishing admittance of the input network. \section{X. Hot Electron Effects} At millikelvin temperatures electrons decouple from the phonons, and the electron temperature of the SLUG shunts can be significantly higher than the bath temperature. 
Wellstood \textit{et al.} showed that the electron temperature $T_e$ in a metal thin film resistor is given by \begin{align} T_e = (P/\Sigma \Omega + T_p^5)^{1/5}, \end{align} where $P$ is the power deposited in the resistor, $\Sigma$ is a materials parameter equal to approximately $2~\times~10^9$~W/m$^3$K$^5$, $\Omega$ is the normal metal volume, and $T_p$ is the phonon temperature \cite{Wellstood94}. The elevated temperature of the shunt resistors translates directly to elevated noise temperature of the amplifier. For a device with fixed $\beta_C$, the power dissipation in the shunts scales as $1/R^3$. Hot electron effects will be particularly relevant for the microwave amplifiers discussed here, as optimal performance is achieved for small SLUG inductance, corresponding to large critical currents and small shunt resistances. A proven strategy to promote thermalization of the SLUG shunts at millikelvin temperatures is to fabricate large-volume normal metal cooling fins in metallic contact with the resistor element. At low temperatures, the inelastic diffusion length is of order several mm \cite{Wellstood94}; the cooling fins thus allow hot electrons generated in a localized region of the shunt resistor to diffuse over a large volume and thermalize with cold electrons and phonons. Wellstood \textit{et al.} demonstrated a significant reduction in the electron temperature of dc SQUIDs incorporating 400~$\times$~400~$\mu$m$^2$ CuAu cooling fins with thickness around 1~$\mu$m, with measured electron temperatures under 40~mK \cite{Wellstood89}. A similar approach has been used to suppress hot-electron effects and reduce the noise temperature of microstrip SQUID amplifiers operated in the radiofrequency regime \cite{Kinion11}. It will be straightforward to integrate normal metal cooling fins with area of order 1~mm$^2$ into a standard microwave SLUG amplifier geometry without compromising the microwave integrity of the circuit. 
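The impact of cooling fins can be estimated directly from the Wellstood \textit{et al.} expression. In the sketch below the dissipated power, shunt volume, and fin volume are assumed, order-of-magnitude numbers chosen for illustration, not measured device values.

```python
def electron_temperature(P, volume, T_phonon, sigma=2e9):
    """Hot-electron model T_e = (P/(Sigma*V) + T_p^5)**(1/5).

    P in watts, volume in m^3, temperatures in kelvin,
    sigma in W m^-3 K^-5 (approximately 2e9 for typical normal metals).
    """
    return (P / (sigma * volume) + T_phonon**5) ** 0.2

# Assumed operating point: ~1 nW dissipated in the shunts, 25 mK phonon bath
P, T_p = 1e-9, 0.025
T_bare = electron_temperature(P, 1e-17, T_p)  # micron-scale shunt volume alone
T_fin = electron_temperature(P, 1e-12, T_p)   # with a ~1 mm^2 x 1 um cooling fin
```

For these assumed numbers, the fin lowers the electron temperature from several hundred mK to below 100~mK, consistent with operation in the quantum regime at GHz frequencies.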
We anticipate that the addition of such cooling fins will make it possible to attain electron temperatures under 100~mK for the device parameters considered here, corresponding to operation far in the quantum regime for frequencies in the range from 5 to 10~GHz. \section{XI. Concluding Remarks} We have presented a comprehensive theoretical treatment of the SLUG microwave amplifier. Specific advantages of this approach over competing approaches to low-noise microwave amplification are as follows: \begin{enumerate} \item{The low-inductance device geometry is compact, straightforward to model at microwave frequencies, and readily integrated into a microwave transmission line.} \item{The device input and output are both reasonably well-matched to a 50~$\Omega$ transmission-line impedance, facilitating broadband operation. Moreover, multisection transmission-line input couplers provide a clear path to attaining bandwidths of order GHz while maintaining excellent gain and noise performance.} \item{It is straightforward to decouple the SLUG modes from the input modes, allowing separate optimization of the gain element and the input matching network.} \item{The dynamic range of the amplifier is large relative to that required for qubit readout or circuit QED applications.} \item{Due to its extremely small magnetic sensing area, the SLUG gain element is robust and immune to ambient magnetic field fluctuations.} \end{enumerate} We believe that we have identified the major technical obstacles and outlined a clear path to device optimization. We anticipate that these amplifiers will be attractive in the context of qubit readout in a circuit QED architecture \cite{Wallraff04}, either as a near quantum-limited first-stage amplifier or as an ultralow noise postamplifier following a Josephson paramp.
Other possible applications include fundamental studies of microwave photon counting statistics \cite{Bozyigit11} or ultralow noise amplification for dark-matter axion detection \cite{Asztalos10}. \begin{acknowledgments} We thank J.M. Martinis, M. M\"{u}ck, and B.L.T. Plourde for useful discussions, and we acknowledge support from IARPA under contracts W911NF-10-1-0324 and W911NF-10-1-0334. All statements of fact, opinion or conclusions contained herein are those of the authors and should not be construed as representing the official views or policies of the U.S. Government. \end{acknowledgments}
\section{Introduction} Multidimensional hydrodynamic instabilities play a central role in the explosion mechanism of most core-collapse supernovae (CCSNe). Though this has been repeatedly demonstrated in a variety of contexts \citep[][]{Hera92,Burr93,Hera94,Burr95a,Jank96,Mare09}, there is as yet no definitive understanding of the role of multidimensional effects in facilitating explosions. Important effects may include increased dwell times in the gain region \citep[][]{Murp08}, expansion of the shock due to turbulent pressure support from neutrino-driven convection \citep[][]{Hera94,Burr95a,Murp12} and/or the standing accretion shock instability (SASI) \citep[e.g.][]{Sche08}, simultaneous accretion and explosion \citep[][]{Burr95a}, suppression of cooling beneath the gain region \citep{Pejc12}, and other still unidentified processes. With the exception of a few preliminary results in 3D \citep{Brue09,Kuro12,Taki12}, the enormous computational expense has limited the most sophisticated supernova models, including the multi-species, multi-group neutrino transport, to 2D axisymmetric simulations \citep{Ott08,Mull12}. Unfortunately, the fundamentally 3D hydrodynamics in the post-shock turbulent flow is qualitatively different if axisymmetry is imposed, as we discuss in detail below. Understanding the differences between 2D and 3D behavior and how dimension affects the mechanism of explosion is critical in the interpretation of realistic 2D simulations and ultimately in elucidating how real stars explode. If we find explosions in axisymmetry, should we expect them in 3D? If so, are the conditions identified in 2D as crucial to producing explosions manifest in 3D? How are explosions triggered in 3D? In this work, we do not perform sophisticated radiation hydrodynamic simulations of realistic core-collapse supernovae.
Rather, we perform a series of simplified numerical experiments designed to clarify how dimension affects the hydrodynamics leading to explosions \citep[see also][]{Murp08,Nord10,Hank12}. We find that there are many differences between 2D and 3D models, some quite dramatic, and that conditions identified as important in 2D may not remain so in more realistic 3D models. Nevertheless, our 3D models explode whenever the 2D models explode and, moreover, 3D models explode earlier. We begin in Sec.~\ref{sec:setup} with a discussion of our numerical setup and solution technique. Section~\ref{sec:overview} gives an overview of the basic results of our simulations and discusses qualitative differences in the structures of 2D and 3D models. Section~\ref{sec:global} begins the quantitative analyses of the simulations, showing how the global structures of the flows are different between 2D and 3D models. Section~\ref{sec:turb} compares various measures of the 2D and 3D turbulence in the post-shock flows. Section~\ref{sec:expl} discusses popular explosion metrics and introduces a simple model for explosions based on the runaway growth of bubbles. Finally, Sec.~\ref{sec:conc} discusses our results and conclusions. \section{Numerical Setup}\label{sec:setup} Our setup is the same as that presented in \citet{Burr12} and \citet{Murp12}. We use the CASTRO adaptive mesh refinement hydrodynamics code \citep{Almg10} to evolve two-dimensional (axisymmetric) and three-dimensional models of the collapse, bounce, and subsequent evolution of the $15$-$\,{\rm M}_\odot$ non-rotating solar-metallicity progenitor of \citet{Woos95}. The adaptive mesh uses six levels of factor two refinement with $\approx0.5\,{\rm km}$ resolution in the inner $50\,{\rm km}$ and $2\,{\rm km}$ or better resolution everywhere behind and including the shock during the stalled pre-explosion phase.
We ensure that both two- and three-dimensional simulations utilize the same refinement criteria to minimize any differences that might arise from different grid structures. The domains include the inner $5000\,{\rm km}$ in radius. As in previous works, we adopt the ``light bulb'' prescription for neutrino heating and cooling \citep{Murp08,Nord10,Hank12,Burr12,Murp12,Couc12}. In this prescription, neutrino heating is parametrized by a constant driving electron neutrino luminosity $L_{\nu_e}$ (the electron neutrino and antineutrino luminosities are assumed to be equal) and we present results for three luminosities: $2.1\times10^{52}\,{\rm erg}\,{\rm s}^{-1}$, $2.2\times10^{52}\,{\rm erg}\,{\rm s}^{-1}$, and $2.3\times10^{52}\,{\rm erg}\,{\rm s}^{-1}$. Neutrino cooling occurs at a rate $\propto T^6$ \citep[][]{Beth90}. Since we do not explicitly treat the neutrino transport, the electron fraction $Y_e$ is evolved according to the prescription given by \citet{Lieb05}. Finally, the equation of state is based on the relativistic mean-field theory of \citet{Shen98a,Shen98b} and we assume nuclear statistical equilibrium. \section{Overview of Simulation Results}\label{sec:overview} We find that the structure and evolution of the post-bounce hydrodynamics can be clearly distinguished between simulations that differ only in dimension. In this section, we develop a qualitative view of the structure and evolution of 2D and 3D models. \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{f1.eps} \caption{Average shock radii for all six models considered in this work. The very early phases are quite similar, but the models diverge after $100\,{\rm ms}$ post-bounce. In the quasi-steady accretion phase, the average shock radius is $\sim$$30\,{\rm km}$ larger in 3D (solid) than in 2D (dashed) for a given driving neutrino luminosity. 
Finally, when explosions occur, they set in $\sim$$100$--$300\,{\rm ms}$ earlier in 3D than in 2D.} \label{fig:rshock} \end{figure} Figure~\ref{fig:rshock} shows the development of the average shock radius for all three driving neutrino luminosities in both 2D and 3D. For the first $100\,{\rm ms}$ after bounce, the models are nearly indistinguishable, but diverge thereafter. The stalled shock radii in 3D are generally larger than in 2D and are less variable in both angle and time. The models with driving luminosities of $2.2\times10^{52}\,{\rm erg}\,{\rm s}^{-1}$ and $2.3\times10^{52}\,{\rm erg}\,{\rm s}^{-1}$ have sufficient neutrino heating to explode within the simulated time in both 2D and 3D. The explosions, however, occur earlier in 3D than in 2D, in qualitative agreement with \citet{Nord10} despite deficiencies in that work \citep{Burr12}. Moreover, once explosions set in, the shock radii grow monotonically in 3D whereas they show oscillatory behavior in 2D, at least early in the explosion phase. \begin{figure}[htb] \centering \includegraphics[width=\columnwidth]{f2.eps} \caption{Snapshots of the entropy, radial velocity, magnitude of the vorticity ($|\nabla\times\vec{v}|$), and velocity divergence ($\nabla\cdot\vec{v}$) at $250\,{\rm ms}$ post-bounce for the 2D (left) and 3D (slice, right) $L_{\nu_e}=2.2\times10^{52}\,{\rm erg}\,{\rm s}^{-1}$ models. As evident in all four quantities and in contrast with the 2D models, 3D models show significantly more small scale structure, have no preferred axis, and tend to be more spherical in the early stalled accretion shock phase.} \label{fig:snap250} \end{figure} To develop a qualitative sense for some of the underlying hydrodynamic differences, we show snapshots of various quantities at select times in the evolution. 
Figure~\ref{fig:snap250} shows slices of the entropy, radial velocity, magnitude of the vorticity ($|\nabla\times\vec{v}|$), and velocity divergence ($\nabla\cdot\vec{v}$) at $250\,{\rm ms}$ post-bounce for the $L_{\nu_e}=2.2\times10^{52}\,{\rm erg}\,{\rm s}^{-1}$ models in 2D and 3D. Figure~\ref{fig:snap500} shows the same quantities at $500\,{\rm ms}$ post-bounce. Perhaps the most obvious difference between 2D and 3D is the clear presence of a preferred axis, leading to a distinctly prolate distortion of the post-shock flow. This is a generic result seen in all 2D simulations, even those that include sophisticated neutrino transport and relativistic effects \citep[][]{Ott08,Mare09,Mull12}. Importantly, this may be an \emph{artifact} of assuming axisymmetry, the consequences of which are difficult to clarify. Another important difference, identified previously, is the existence of more small-scale structure in the flow in 3D \citep{Hank12}, as can be seen in the entropy, radial velocity, vorticity, and velocity divergence. Interestingly, when comparing the entropy and radial velocity maps in both the 2D and 3D models, there is a strong correspondence between large-scale structures; high-entropy plumes are associated with outflow whereas low-entropy regions are associated with inflow, a natural consequence of buoyancy-driven convection, which it has been argued dominates the flow in the stalled accretion shock phase \citep{Burr12,Murp12}. In 3D, these rising plumes also have lower vorticities and larger velocity divergences than the surrounding flow, suggesting that these structures are relatively coherent and expanding. In 2D, these associations are more difficult to make by eye. \begin{figure}[htb] \centering \includegraphics[width=\columnwidth]{f3.eps} \caption{Same as Fig.~\ref{fig:snap250}, but at $500\,{\rm ms}$ post-bounce. As compared with the snapshot at $250\,{\rm ms}$, the 3D model is developing significant asymmetry as it evolves towards explosion. 
The 2D model is qualitatively similar at $500\,{\rm ms}$ post-bounce to $250\,{\rm ms}$ post-bounce, with its distinctive prolate distortion and characteristically larger structures compared with the 3D model.} \label{fig:snap500} \end{figure} The 2D and 3D post-shock flows have qualitative differences that are easily identified. The geometries of the flows are different and the characteristics of the turbulent, convective, post-shock flows are different. One consequence of these differences is a larger averaged stalled shock radius in 3D relative to 2D. In the remainder of this work, we undertake detailed analyses of the post-shock hydrodynamics to better understand how these qualitative differences manifest quantitatively in quantities that have been suggested to be of importance. \section{Global Structures}\label{sec:global} \subsection{Integral Quantities} \begin{figure}[htb] \centering \includegraphics[width=\columnwidth]{f4.eps} \caption{Integrated rate of neutrino heating in the volume between the neutrinosphere ($r_\nu$) and shock ($r_s$). The rates are quite comparable between 2D (dashed) and 3D (solid), but 3D tends to be a few percent ($\sim$$1$--$3\%$) higher.} \label{fig:total_heat} \end{figure} \begin{figure}[htb] \centering \includegraphics[width=\columnwidth]{f5.eps} \caption{Integrated rate of neutrino cooling in the volume between the neutrinosphere ($r_\nu$) and shock ($r_s$). The 3D (solid) models tend to cool more ($\sim$$10\%$) than their 2D (dashed) counterparts.} \label{fig:total_cool} \end{figure} \begin{figure}[htb] \centering \includegraphics[width=\columnwidth]{f6.eps} \caption{Net rate of neutrino heating in the volume between the neutrinosphere ($r_\nu$) and shock ($r_s$). Note that the heating rates are all negative, indicating that there is net neutrino cooling overall. 
In spite of the marginally higher heating rate, the 3D models (solid) show less ($\sim$$10\%$) \emph{net} heating per unit time than the 2D models (dashed).} \label{fig:total_net} \end{figure} \begin{figure}[htb] \centering \includegraphics[width=\columnwidth]{f7.eps} \caption{Net neutrino heating rate in the gain region between the gain radius ($r_g$) and the shock ($r_s$). After an initial transient phase lasting $\sim$$50\,{\rm ms}$, 2D models (dashed) show $\sim$$30\%$ higher heating rates in this region than the 3D models (solid) for a given driving luminosity. The specific heating rates, found by dividing the heating rates shown above by the mass in the gain region, are also larger in 2D than in 3D.} \label{fig:gain_net} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{f8.eps} \caption{Mass in the gain region for all six models in units of $10^{-2}\,{\rm M}_\odot$. Prior to explosion, the gain mass is generally larger in 2D (dashed) than in 3D (solid), which may contribute to the higher neutrino heating rates.} \label{fig:mgain} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{f9.eps} \caption{Average specific entropy in units of $k_B/{\rm baryon}$ in the gain region. In the stalled shock phase, the average entropies are higher in 3D (solid) by about one unit compared to 2D (dashed). The turndown in the $L_{\nu_e}=2.2\times10^{52}\,{\rm erg}\,{\rm s}^{-1}$ and $L_{\nu_e}=2.3\times10^{52}\,{\rm erg}\,{\rm s}^{-1}$ models is associated with the relatively low-entropy, unheated material that enters into the post-shock region in the exploding phase.} \label{fig:avgent} \end{figure} While unable to capture the full scope of the differences between 2D and 3D flows, integral quantities offer the advantages of simplicity and widespread usage. In the context of the neutrino mechanism, the integrated heating and cooling in the post-shock flow are naturally important quantities.
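In discrete form, these diagnostics are mass-weighted sums over the computational cells between the neutrinosphere and the shock, with the gain region selected cell-by-cell by the sign of the net heating. A schematic version follows; the cell layout and units are assumed for illustration, not taken from the CASTRO data structures.

```python
def integrated_rates(cells):
    """Total heating, cooling, and gain-region net rate from a list of cells.

    Each cell is (rho, dV, H, C): density [g/cm^3], cell volume [cm^3], and
    specific heating/cooling rates [erg/g/s]. The gain region is the subset
    of cells with net heating, H > C.
    """
    heat = sum(rho * dV * H for rho, dV, H, C in cells)
    cool = sum(rho * dV * C for rho, dV, H, C in cells)
    gain_net = sum(rho * dV * (H - C) for rho, dV, H, C in cells if H > C)
    return heat, cool, gain_net
```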
Figures~\ref{fig:total_heat}, \ref{fig:total_cool}, and \ref{fig:total_net} show the total integrated heating, cooling, and net (heating minus cooling) rates, respectively, in the region between the neutrinosphere (where the optical depth to neutrinos is approximately unity) and shock. While there tends to be marginally more heating in the 3D models, they have significantly more cooling and, therefore, less net heating than their corresponding 2D models. If we focus on the gain region (i.e., only the region with net neutrino heating), we see in Fig.~\ref{fig:gain_net} that the 2D models have significantly more heating than their 3D counterparts. This arises, in part, because, prior to explosion, there is more mass in the gain region in 2D than in 3D, as shown in Fig.~\ref{fig:mgain}. Finally, in spite of the higher heating rates in 2D, the average specific entropy in the gain region, \begin{equation} \langle s \rangle_{\rm gain} = \frac{1}{M_{\rm gain}}\int_{\rm gain} \rho s \, dV, \end{equation} is larger in 3D than in 2D, as shown in Fig.~\ref{fig:avgent} and suggested by \citet{Nord10}. This is consistent with the results shown in \citet{Hank12}, who found somewhat larger average entropies in 3D than in 2D. We note, however, that a higher average entropy is not directly related to earlier explosions, as is clearly demonstrated by the 3D $L_{\nu_e}=2.1\times10^{52}\,{\rm erg}\,{\rm s}^{-1}$ model, which has the highest average entropy of any model shown but does not explode within the simulated time. In any case, since entropy depends on the integrated heating, not the heating rate, this suggests that perhaps some material is exposed longer to heating in 3D than in 2D, to which we return in \S~\ref{sec:dwell}. \subsection{Radial Profiles} Moving beyond integral quantities, we can look at radial profiles, averaged over solid angle, to distinguish the global structures of 2D and 3D models.
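As an aside, the mass-weighted gain-region average used above reduces to a few lines of array arithmetic. A minimal sketch, in which the arrays are hypothetical stand-ins for actual simulation output:

```python
import numpy as np

def average_entropy_gain(rho, s, net_heat, dV):
    """Mass-weighted average entropy <s>_gain over cells with net
    neutrino heating (net_heat > 0). All arrays share one flat shape;
    dV holds the cell volumes. Hypothetical stand-ins for simulation data."""
    gain = net_heat > 0.0
    m_gain = np.sum(rho[gain] * dV[gain])               # M_gain = integral of rho dV
    return np.sum(rho[gain] * s[gain] * dV[gain]) / m_gain
```

With uniform entropy in the gain cells, the average reduces to that value, which provides a quick sanity check on any implementation.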
In order to highlight the trends with luminosity and dimension, we compute two sets of profiles, one time-averaged from $200\,{\rm ms}$ to $300\,{\rm ms}$ post-bounce and another time-averaged from $450\,{\rm ms}$ to $500\,{\rm ms}$ post-bounce. The $L_{\nu_e}=2.3\times10^{52}\,{\rm erg}\,{\rm s}^{-1}$ models are excluded from the latter set of plots because they are already well into explosion. Time averaging is essential for the 2D profiles to minimize the large fluctuations that obscure their underlying quasi-steady structure. \begin{figure*}[htb] \centering \subfigure{\includegraphics[width=\columnwidth]{f10a.eps}}\hfill \subfigure{\includegraphics[width=\columnwidth]{f10b.eps}} \caption{Time- and spherically-averaged entropy profiles in the inner $500\,{\rm km}$ for 2D (dashed) and 3D (solid) models. The left panel shows the profiles time-averaged from $200$--$300\,{\rm ms}$ post-bounce, while the right panel shows the profiles time-averaged from $450$--$500\,{\rm ms}$ post-bounce. The $L_{\nu_e}=2.3\times10^{52}\,{\rm erg}\,{\rm s}^{-1}$ models explode before $500\,{\rm ms}$ and are therefore not included in the latter. The 3D models generically have higher averaged entropies between $\sim$$100\,{\rm km}$ (near the gain radius) and the shock.} \label{fig:prof_ent} \end{figure*} \begin{figure*}[htb] \centering \subfigure{\includegraphics[width=\columnwidth]{f11a.eps}}\hfill \subfigure{\includegraphics[width=\columnwidth]{f11b.eps}} \caption{The same as Fig.~\ref{fig:prof_ent}, but for the time- and spherically-averaged radial velocity. The 3D models (solid) tend to have smaller radial velocities between $\sim$$100\,{\rm km}$ and the shock than the 2D models (dashed).} \label{fig:prof_vr} \end{figure*} \begin{figure*}[htb] \centering \subfigure{\includegraphics[width=\columnwidth]{f12a.eps}}\hfill \subfigure{\includegraphics[width=\columnwidth]{f12b.eps}} \caption{The same as Fig.~\ref{fig:prof_ent}, but for the time- and spherically-averaged net heating rate.
Overall, the 2D (dashed) and 3D (solid) models have very similar net heating profiles. Note that the ``gain radius'' is found at $\sim$$100\,{\rm km}$ where the net heating changes sign.} \label{fig:prof_netheat} \end{figure*} \begin{figure*}[htb] \centering \subfigure{\includegraphics[width=\columnwidth]{f13a.eps}}\hfill \subfigure{\includegraphics[width=\columnwidth]{f13b.eps}} \caption{The same as Fig.~\ref{fig:prof_netheat}, but zoomed into the region of significant net neutrino heating. The 2D models (dashed) tend to have more heating at smaller radii (near the gain radius where the net heating changes sign), but less heating at larger radii, compared to the 3D models (solid).} \label{fig:prof_netheat_zoom} \end{figure*} \begin{figure*}[htb] \centering \subfigure{\includegraphics[width=\columnwidth]{f14a.eps}}\hfill \subfigure{\includegraphics[width=\columnwidth]{f14b.eps}} \caption{The same as Fig.~\ref{fig:prof_ent}, but for the time- and spherically-averaged transverse kinetic energy per unit mass. In the early stalled accretion shock phase ($200$--$300\,{\rm ms}$ post-bounce), the 2D models (dashed) have $\sim$$50$--$100\%$ more specific transverse kinetic energy than the 3D models (solid). This early turbulent vigor seems to be an \emph{artifact} of assuming axisymmetry. At later times, the 2D/3D difference is reduced, but is still significant ($\sim$$30\%$).} \label{fig:prof_transke} \end{figure*} Figure~\ref{fig:prof_ent} shows the time- and spherically-averaged entropy profiles. Consistent with Fig.~\ref{fig:rshock}, the entropy profiles clearly show that the shock radii are systematically larger in 3D than in 2D prior to explosion. The 3D models also have higher peak entropies than their corresponding 2D models, consistent with Fig.~\ref{fig:avgent}. Interestingly, the peak entropies vary little over the luminosity range considered for models with a given number of dimensions, but there is a clear distinction between the 2D and 3D models. 
On the other hand, the entropy profiles between $\sim$$50\,{\rm km}$ and $\sim$$100\,{\rm km}$ are remarkably similar between all models. Figure~\ref{fig:prof_vr} shows the profiles of radial velocity. As with the entropy profiles, the radial velocities in the post-shock region vary more with dimension than over the range of luminosities of models with a given number of dimensions. The magnitudes of the radial velocities in the post-shock flow are generally larger in 2D than in 3D. This remains true when using mass-weighted spherical averages, though the profiles are somewhat closer. The larger post-shock radial velocities in 2D are associated with lower densities and, therefore, lower heating rates for radii beyond $\sim$$120$--$150\,{\rm km}$ as compared with 3D. This has the effect of lowering the temperatures, which suppresses cooling and moves the gain radius inward (relative to 3D) towards higher densities and neutrino fluxes. Figures~\ref{fig:prof_netheat} and \ref{fig:prof_netheat_zoom} show the radial profiles of the net neutrino heating rate and bear out these arguments. In the end, the smaller gain radius in 2D leads to a larger integrated net heating rate in the gain region in 2D, as shown in Fig.~\ref{fig:gain_net}, and a larger mass in the gain region, as shown in Fig.~\ref{fig:mgain}, even though the densities and net heating rates are larger in 3D at radii beyond $\sim$$120\,{\rm km}$ and $\sim$$150\,{\rm km}$, respectively. Finally, Fig.~\ref{fig:prof_transke} shows the radial profiles of the transverse kinetic energy. Between $200\,{\rm ms}$ and $300\,{\rm ms}$ post-bounce and between $\sim$$100\,{\rm km}$ and the shock, the 2D models have nearly twice as much transverse kinetic energy as the 3D models. Again, the profiles vary much more with dimension than between the different luminosity models of the same dimension.
At this stage in the evolution, the turbulence in the post-shock flow is \emph{artificially} vigorous in 2D axisymmetric models, consistent with multidimensional simulations of stellar convection \citep{Meak07}. At later times, the transverse kinetic energies in the 3D models have grown to be roughly comparable with the 2D models, though the $L_{\nu_e}=2.2\times10^{52}\,{\rm erg}\,{\rm s}^{-1}$ models still differ by $\sim$$30\%$. \section{Turbulence Diagnostics}\label{sec:turb} \subsection{Dwell-Time Distributions}\label{sec:dwell} \begin{figure}[htb] \centering \includegraphics[width=\columnwidth]{f15.eps} \caption{Distribution of dwell times in the gain region for tracer particles injected at $250\,{\rm ms}$ post-bounce in the 2D (blue) and 3D (red) $L_{\nu_e}=2.1\times10^{52}\,{\rm erg}\,{\rm s}^{-1}$ models. The 2D model has a longer mean dwell time, but the 3D distribution has a long shallow tail. Note that the data beyond $400\,{\rm ms}$ for the 2D model is dominated by shot noise associated with the finite number of tracer particles.} \label{fig:dwell} \end{figure} \begin{figure}[htb] \centering \includegraphics[width=\columnwidth]{f16.eps} \caption{One minus the cumulative distribution function of dwell times in the gain region for 2D (blue) and 3D (red). The curves cross at $\sim$$50\,{\rm ms}$, with the 3D values larger at later times, indicating that $\sim$$25\%$ of the accreted material spends more time in the gain region in 3D compared to 2D.} \label{fig:dwell_cum} \end{figure} During the stalled accretion shock phase, fluid elements advect through the gain region and eventually settle onto the proto-neutron star. In spherical symmetry, the time to advect through the gain region (the dwell time) is short and shared by all fluid elements belonging to the same mass shell. 
In multiple dimensions, the aspherical shock structure and post-shock turbulence lead to a distribution of dwell times, with some fraction of the mass being exposed longer to net neutrino heating \citep{Murp08}. This effect can increase the neutrino heating efficiency and has naturally been suggested to be one key element of the multidimensional picture \citep{Burr95a,Murp08}. Here, we focus on the $L_{\nu_e}=2.1\times10^{52}\,{\rm erg}\,{\rm s}^{-1}$ models since the dwell-time distribution is difficult to interpret once fluid elements begin to participate in explosion. We measure the dwell-time distributions by following the trajectories of Lagrangian tracer particles which move according to the time-dependent velocity field. We initialize the particles at a radius of $400\,{\rm km}$ at $250\,{\rm ms}$\footnote{We also looked at results based on particles injected at $500\,{\rm ms}$ post-bounce and the basic conclusions remain unchanged.} post-bounce and begin integrating their dwell times once they pass through the shock. In 3D, we use approximately $2^{18}$ particles distributed quasi-uniformly on the sphere. In 2D, the smaller number of degrees of freedom means that there are fewer independent trajectories for fluid elements of a given mass shell, even at the same resolution. To combat this, we use 16 shells of approximately $2^{12}$ particles injected over $2\,{\rm ms}$ for a total of $2^{16}$ particles. In 2D, the particles are distributed uniformly in angle and we weight each particle's contribution to the dwell-time distribution by $\sin\theta$ to account for the fact that each particle represents an angle-dependent annulus. The results are shown in Fig.~\ref{fig:dwell}. We find that there are two important effects. First, the mean dwell time, $\langle\tau\rangle$, is longer in 2D.
In steady state, the mass in the gain region is related to the mean dwell time simply by $M_{\rm gain}=\dot{M} \langle\tau\rangle$, and so a longer mean dwell time in 2D is consistent with the larger gain mass seen previously in Fig.~\ref{fig:mgain}. Second, the dwell-time distribution in 3D has a longer, shallower tail than the 2D distribution. Figure~\ref{fig:dwell_cum} shows one minus the cumulative distribution functions, from which it can be seen that about $25\%$ of the material spends more time in the gain region in 3D than in 2D. By recording the peak entropy reached by each tracer particle, we have found that there is a strong correlation between dwell time and peak entropy, suggesting that these long-lived trajectories may be responsible, at least in part, for the larger average entropies seen in the 3D models and discussed previously. \subsection{Turbulent Energy Spectra} Perhaps the single most distinguishing characteristic between the 2D and 3D turbulent post-shock flows is found in their energy spectra. As shown by \citet{Krai67} and later confirmed experimentally in various contexts \citep[see, e.g.,][]{Boff12}, turbulent cascades are different in 2D and 3D. In 3D turbulence, energy is the only constant of the motion and this leads to a single turbulent cascade that transfers energy from some driving wavenumber $k_d$ towards larger $k$ (smaller scales). In 2D, both energy and squared vorticity are constants of the motion, which leads to cascades of energy and enstrophy (proportional to the mean squared vorticity). The enstrophy cascade transports enstrophy in $k$-space from the driving wavenumber $k_d$ toward larger $k$ (smaller scales). This leads to a characteristic $k^{-3}$ scaling of the velocity energy spectrum for $k > k_d$ \citep{Krai67}.
The energy cascade, by contrast, transports energy from $k_d$ to \emph{smaller} $k$ (larger scales) and leads to a $k^{-5/3}$ scaling of the velocity energy spectrum for $k < k_d$, in direct analogy with the Kolmogorov theory of turbulence. This is the inverse energy cascade of 2D turbulence, which tends to exaggerate motions on the largest scales of the flow. These turbulence theories were developed within highly idealized setups, assuming, for example, steady isotropic turbulence, and do not necessarily apply directly to the turbulence seen in the core-collapse context. Nevertheless, we find, as did \citet{Hank12}, that the basic predictions of these theories---the predominance of energy at the largest scales in 2D, the excess energy at the largest scales in 2D relative to 3D, and the shallower slope of the velocity energy spectrum for $k>k_d$ (and therefore more energy for large $k$) in 3D relative to 2D---are all confirmed in our simulations. The most natural basis to represent the matter fields in the quasi-spherical post-shock flow is the basis of real spherical harmonics. We decompose the arbitrary scalar quantity $Q$ into spherical harmonics with time- and radially-dependent coefficients \begin{equation} a_{lm}(t,r) = \oint Q(t,r,\theta,\phi) Y_l^m(\theta,\phi) d\Omega, \end{equation} where \begin{equation} Y_l^m(\theta,\phi) = \begin{cases} \sqrt{2} N_l^m P_l^m(\cos\theta) \cos m\phi& m>0,\\ N_l^0 P_l^0(\cos\theta) & m=0,\\ \sqrt{2} N_l^{|m|} P_l^{|m|}(\cos\theta) \sin |m|\phi& m<0 \end{cases} \end{equation} and \begin{equation} N_l^m = \sqrt{\frac{2l+1}{4\pi}\frac{(l-m)!}{(l+m)!}}\;. \end{equation} In 2D, axisymmetry implies that all coefficients with $m\neq0$ are identically zero. Here, we consider $Q=\{\rho,P,s,\sqrt{\rho}v_i\}$, where $v_i$ represents the spherical velocity components $\{v_r,v_\theta\}$ in 2D and $\{v_r,v_\theta,v_\phi\}$ in 3D. 
We compute the discrete energy spectrum as a function of spherical harmonic degree $l$ as \begin{equation} E(l) = \sum_{m=-l}^l a_{lm}^2, \end{equation} where it should be understood that $a_{lm}$, and, therefore, $E(l)$, depend on time and radius \citep{Burr12}. \begin{figure*}[htb] \centering \subfigure{\includegraphics[width=\columnwidth]{f17a.eps}}\hfill \subfigure{\includegraphics[width=\columnwidth]{f17b.eps}} \caption{Discrete energy spectra of $\sqrt{\rho}v_\theta$ as a function of spherical harmonic degree $l$ for the $L_{\nu_e}=2.1\times10^{52}\,{\rm erg}\,{\rm s}^{-1}$ 2D (blue) and 3D (red) models, measured at $r=150\,{\rm km}$ and time-averaged between $450$--$500\,{\rm ms}$ post-bounce. The left panel includes power-laws $\propto l^{-2.6}$ and $\propto l^{-1}$, indicating the inertial range scaling in 2D and 3D, respectively. The spectra show that the 2D model has excess power for all $l\lesssim30$. Grid-scale dissipation sets in at $l\sim40$. The right panel reproduces the results shown in \citet{Hank12} for comparison (dashed). Both sets of results show the same inertial range scalings, but the excess power at low $l$ is even more extreme in the results from \citet{Hank12}.} \label{fig:psd} \end{figure*} Figure~\ref{fig:psd} shows the energy spectra of $\sqrt{\rho}v_\theta$ for the $L_{\nu_e}=2.1\times10^{52}\,{\rm erg}\,{\rm s}^{-1}$ models at a radius of $150\,{\rm km}$ and time-averaged between $450$--$500\,{\rm ms}$ after bounce. We also reproduce the results of \citet{Hank12} for comparison. While these curves change in detail for different quantities, over time, and at different radii, the qualitative trends and relation between 2D and 3D are quite robust. At $l=1$, 2D has an order of magnitude more energy in all quantities $Q=\{\rho,P,s,\sqrt{\rho}v_i\}$, consistent with the idea of an inverse energy cascade which pumps energy into the largest scales. 
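The decomposition and discrete spectrum defined above can be sketched with a simple quadrature over a latitude--longitude grid. This sketch assumes `scipy` is available; its associated Legendre functions carry the Condon--Shortley phase, which is an overall sign and drops out of $E(l)$:

```python
import numpy as np
from scipy.special import lpmv
from math import factorial

def real_Ylm(l, m, theta, phi):
    """Real spherical harmonics Y_l^m as defined in the text."""
    am = abs(m)
    N = np.sqrt((2*l + 1) / (4*np.pi) * factorial(l - am) / factorial(l + am))
    P = lpmv(am, l, np.cos(theta))          # associated Legendre P_l^|m|(cos theta)
    if m > 0:
        return np.sqrt(2) * N * P * np.cos(m * phi)
    if m < 0:
        return np.sqrt(2) * N * P * np.sin(am * phi)
    return N * P

def energy_spectrum(Q, theta, phi, wtheta, lmax):
    """E(l) = sum_m a_lm^2 with a_lm = surface integral of Q * Y_lm dOmega,
    evaluated by Gauss-Legendre quadrature in cos(theta) (nodes theta,
    weights wtheta) and a uniform grid in phi. Q has shape (ntheta, nphi)."""
    dphi = 2*np.pi / phi.size
    th, ph = np.meshgrid(theta, phi, indexing="ij")
    E = np.zeros(lmax + 1)
    for l in range(lmax + 1):
        for m in range(-l, l + 1):
            a_lm = np.sum(Q * real_Ylm(l, m, th, ph) * wtheta[:, None]) * dphi
            E[l] += a_lm**2
    return E
```

Feeding a single harmonic back through the transform should return unit energy in its own $l$ and essentially zero elsewhere, which is a convenient check of the normalization.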
The results of \citet{Hank12} are even more extreme, with a factor $\approx50$ more energy in the $l=1$ mode in 2D relative to 3D. In real space, this excess energy at the largest scales manifests as the characteristic ``sloshing'' always found in 2D simulations (including those with coupled neutrino transport). This sloshing is typically associated with the development of the SASI \citep{Blon03,Sche08,Fogl07}, but the inverse energy cascade will always produce excess energy at $l=1$ in 2D, even if the turbulence is driven by, for example, neutrino-driven convection. The SASI may be capable, in principle, of producing significant energy in $l=1$, but disentangling its effects from the inevitable $l=1$ energy associated with the inverse cascade would seem to require 3D simulations. Whether the low-$l$ energy is a result of the SASI or not, all of the 3D supernova simulations thus far presented in the literature show muted or nonexistent sloshing \citep{Iwak08,Hank12,Burr12}, though there has yet to be a fully self-consistent 3D radiation hydrodynamic simulation. Therefore, the results seen thus far suggest that these violent sloshing motions are an \emph{artifact} of assuming axisymmetry, not a feature that must be incorporated into 3D models as suggested by \citet{Hank12}. Importantly, this artifact is not a small effect; most of the energy in the flow in 2D simulations is at low-$l$. This artifact, in fact, dominates the flow and has poorly understood consequences for other coupled aspects of the problem, including the neutrino transport. At intermediate $l$, the spectra are significantly steeper in 2D ($\sim l^{-2.6}$) than in 3D ($\sim l^{-1}$). That these power-laws differ from those naively expected from simple theoretical arguments should not be surprising given that the turbulence analyzed here is, among other things, not steady-state or isotropic.
Qualitatively, however, the expectation of a steeper slope in 2D relative to 3D is confirmed and we find our results generally consistent with those of \citet{Hank12}\footnote{\citet{Hank12} report scalings of $l^{-3}$ and $l^{-5/3}$ in 2D and 3D, respectively, but Fig.~\ref{fig:psd} shows that their results are consistent with the shallower slopes reported here.}. The transition to these slopes occurs around $l\sim10$ in 2D and $l\sim4$ in 3D and reflects a characteristic scale for convective plumes, a scale that likely depends on the model-dependent size of the gain region. The 3D model has significantly more energy at small-scales than the 2D model. At $l\gtrsim40$ (spatial scales $\sim$$10\,{\rm km}$ at a radius of $150\,{\rm km}$), we begin to see the effects of grid-scale ($2\,{\rm km}$ at this radius) dissipation. \citet{Hank12} showed that their results, especially in 3D, were sensitive to resolution. The comparison of energy spectra in Fig.~\ref{fig:psd} affords us the opportunity to directly compare the effective resolutions of our independent calculations by comparing the scales at which dissipation begins to set in. As noted above, our 3D results begin to deviate from the $l^{-1}$ power-law at $l\approx40$, while the results in \citet{Hank12} begin to deviate at $l\approx25$, confirming our expectation that our models have nearly double the effective resolution. \citet{Hank12} argue that 3D models become less prone to explosion as resolution is increased, yet our higher resolution 3D models explode earlier than the 2D models. The source of this apparent discrepancy is unclear. \section{Multidimensional Explosions}\label{sec:expl} \subsection{Explosion Conditions} A number of quantities have been proposed in the literature that are meant to distinguish exploding from non-exploding models and, further, to define in a systematic way the time at which a model transitions into the exploding phase. 
The most widely used condition is based on the idea of a critical ratio of the advection to heating time scales \citep{Jank98,Thom00a,Thom03}. There are numerous ways of defining these timescales. Here, we define the advection time as \begin{equation} t_{\rm adv} = \int_{R_{s}}^{R_{\rm gain}} \frac{dr}{\langle v_r \rangle} \end{equation} where $\langle v_r\rangle$ is the spherically-averaged radial velocity. In multi-D models, the shock radius $R_s$ and the gain radius $R_{\rm gain}$ are not uniquely defined bounds\footnote{Indeed, the gain region need not even be bounded from below by a single closed surface.}. We appeal to the radial profiles shown previously and define the shock radius as the outermost zero in the radial velocity gradient and the gain radius as the first zero crossing in the net heating rate interior to the shock (always around $\sim$$100\,{\rm km}$). Furthermore, to minimize the large fluctuations in $t_{\rm adv}$ that appear in 2D due to transient and localized fluctuations in $v_r$, we time-average the velocity profiles over a $\pm15\,{\rm ms}$ window. We define the heating timescale as \begin{equation} t_{\rm heat} = \frac{\int_{R_{\rm gain}}^{R_{s}} (\langle\rho\varepsilon\rangle - \langle\rho\rangle\varepsilon_0(\langle\rho\rangle,\langle Y_e\rangle)) 4 \pi r^2 dr}{\int_{R_{\rm gain}}^{R_{s}} \langle\rho\rangle (\langle\mathcal{H}-\mathcal{C}\rangle) 4 \pi r^2 dr}, \end{equation} where angle brackets indicate solid-angle averaging, $\varepsilon$ is the specific internal energy, $\varepsilon_0$ is the zero-point energy of the EOS, and $\mathcal{H}-\mathcal{C}$ is the net heating rate per unit mass. The results are shown in Fig.~\ref{fig:adv_heat_ratio}. After an initial transient phase, the models settle on an approximately (model-dependent) constant value. 
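The advection time defined above is a simple quadrature over the angle-averaged radial velocity profile once the gain and shock radii are fixed. A minimal sketch with hypothetical arrays; the magnitude of $\langle v_r\rangle$ is used so that the result is positive for inflow:

```python
import numpy as np

def advection_time(r, vr_avg, r_gain, r_shock):
    """t_adv = integral of dr / |<v_r>| from r_gain to r_shock, by
    trapezoidal quadrature on the radial grid (r in cm, vr_avg in cm/s).
    The arrays here are hypothetical stand-ins for simulation profiles."""
    mask = (r >= r_gain) & (r <= r_shock)
    f = 1.0 / np.abs(vr_avg[mask])
    dr = np.diff(r[mask])
    return np.sum(0.5 * (f[1:] + f[:-1]) * dr)
```

For a constant inflow speed the quadrature reduces to $(R_s - R_{\rm gain})/|\langle v_r\rangle|$, a useful consistency check.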
Rather than there being a particular critical value for the ratio of advection to heating time scales, explosions seem to be robustly connected to rapid growth from these model-dependent quasi-steady values. Nonetheless, we define an explosion time in a systematic manner by measuring the time at which this ratio exceeds $0.5$ without later dropping below this value. \begin{figure}[htb] \centering \includegraphics[width=\columnwidth]{f18.eps} \caption{Ratio of advection to heating timescales (defined in the text) for the 3D (solid) and 2D (dashed) models. Explosions are associated with a rapid growth in this ratio, though it is difficult to identify a particular critical value for the onset of explosions. With a critical value of $0.5$, the $L_{\nu_e}=2.3\times10^{52}\,{\rm erg}\,{\rm s}^{-1}$ model explodes $159\,{\rm ms}$ earlier in 3D relative to 2D, while the $L_{\nu_e}=2.2\times10^{52}\,{\rm erg}\,{\rm s}^{-1}$ model explodes $333\,{\rm ms}$ earlier in 3D.} \label{fig:adv_heat_ratio} \end{figure} \begin{figure}[htb] \centering \includegraphics[width=\columnwidth]{f19.eps} \caption{Maximum of the ratio of the squared local sound speed to the squared escape speed for the 3D (solid) and 2D (dashed) models, smoothed for clarity, as discussed by \citet{Pejc12} in their antesonic condition. In agreement with \citet{Mull12}, we find that the critical value of $0.2$ suggested by \citet{Pejc12} is too low, but using a larger value ($\approx0.3$) may be a useful diagnostic of explosion. With a critical value of $0.3$, the $L_{\nu_e}=2.3\times10^{52}\,{\rm erg}\,{\rm s}^{-1}$ model explodes $87\,{\rm ms}$ earlier in 3D relative to 2D, while the $L_{\nu_e}=2.2\times10^{52}\,{\rm erg}\,{\rm s}^{-1}$ model explodes $309\,{\rm ms}$ earlier in 3D.} \label{fig:ante} \end{figure} An alternative explosion condition (the ``antesonic'' condition) was suggested by \citet{Pejc12}.
Based on parametrized 1D steady-state models, they suggested that explosions occur when ${\rm max}(c_s^2/v_{\rm esc}^2)$ reaches a critical value $\approx 0.2$. \citet{Murp11} tested this condition with their parametrized 2D simulations and found it to be consistent with their results. \citet{Mull12}, on the other hand, using 2D radiation hydrodynamic simulations, assessed the validity of the antesonic condition and suggested that it may not be a robust indicator of explosion, but that, in any case, the critical value should at least be somewhat larger ($\sim$$0.35$). Like \citet{Mull12}, we find that a larger value for the critical condition is required and we adopt $0.3$ as the critical ratio. Unfortunately, ${\rm max}(c_s^2/v_{\rm esc}^2)$ is a noisy quantity, with brief spikes as large as $\approx0.5$ that then return well below the critical value. The smoothed curves, however, seem to reliably indicate when explosions set in, but these smoothed curves are necessarily produced \textit{ex post facto}. In other words, the antesonic condition is not a reliable indicator of explosion given only the instantaneous value of the ratio. The evolutions for the 2D and 3D models, smoothed for clarity, are shown in Fig.~\ref{fig:ante}. As above, we identify the time of explosion as the time when the smoothed version of ${\rm max}(c_s^2/v_{\rm esc}^2)$ exceeds $0.3$ without later dropping below this value. A final, phenomenological, explosion condition is simply when the average shock radius exceeds $400\,{\rm km}$ without later receding below this value, as used in \citet{Nord10}. In this case, explosion times can be read directly from the shock radius evolution curves in Fig.~\ref{fig:rshock}.
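All three conditions assign $t_{\rm exp}$ by the same rule: the earliest time at which a diagnostic exceeds its critical value without later dropping below it. Applied to a sampled time series, that rule can be sketched as:

```python
import numpy as np

def explosion_time(t, x, critical):
    """Earliest time after which the diagnostic x stays above `critical`
    for the remainder of the series; returns None if x never permanently
    exceeds the critical value. t and x are sampled time-series arrays."""
    below = np.where(x <= critical)[0]
    if below.size == 0:
        return t[0]                  # above the threshold throughout
    last_below = below[-1]
    if last_below == x.size - 1:
        return None                  # still at or below threshold at the end
    return t[last_below + 1]
```

Transient spikes above the threshold that later recede (as with the raw antesonic ratio) are correctly ignored, since only the final upcrossing counts.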
\begin{deluxetable*}{lccccccc} \tabletypesize{\footnotesize} \tablewidth{0pt} \tablecaption{Explosion Times and Accretion Rates\label{tab:exp}} \tablecolumns{8} \tablehead{ \colhead{$L_{\nu_e}$\tablenotemark{a}} & \colhead{Dimension} & \multicolumn{6}{c}{Explosion Condition}\\ \cline{1-8}\\[-1.5ex] \colhead{} & \colhead{} &\multicolumn{2}{c}{$t_{\rm adv}/t_{\rm heat}$} & \multicolumn{2}{c}{${\rm max}(c_s^2/v_{\rm esc}^2)$} & \multicolumn{2}{c}{$\langle R_s\rangle$}\\[0.3ex] \cline{1-8}\\[-2ex] \colhead{} & \colhead{} & \colhead{$t_{\rm exp}$} & \colhead{$\dot{M}$} & \colhead{$t_{\rm exp}$} & \colhead{$\dot{M}$} & \colhead{$t_{\rm exp}$} & \colhead{$\dot{M}$}\\ \colhead{} & \colhead{} & \colhead{(ms)} & \colhead{($\,{\rm M}_\odot\,{\rm s}^{-1}$)} & \colhead{(ms)} & \colhead{($\,{\rm M}_\odot\,{\rm s}^{-1}$)} & \colhead{(ms)} & \colhead{($\,{\rm M}_\odot\,{\rm s}^{-1}$)} } \startdata \multirow{2}{*}{2.2} & 2D & 1031 & 0.182 & 920 & 0.196 & 954 & 0.188\\ & 3D & 698 & 0.213 & 611 & 0.224 & 695 & 0.214\\[1em] \multirow{2}{*}{2.3} & 2D & 571 & 0.231 & 501 & 0.241 & 557 & 0.233\\ & 3D & 412 & 0.269 & 414 & 0.268 & 434 & 0.262 \enddata \tablenotetext{a}{$10^{52}\,{\rm erg}\,{\rm s}^{-1}$} \end{deluxetable*} Table~\ref{tab:exp} shows the explosion times and corresponding mass accretion rates as determined by the three conditions above for the $L_{\nu_e}=2.2\times10^{52}\,{\rm erg}\,{\rm s}^{-1}$ and $L_{\nu_e}=2.3\times10^{52}\,{\rm erg}\,{\rm s}^{-1}$ models. All three conditions give comparable numbers, but the antesonic condition tends to give the earliest indication of explosion. We note, however, that this conclusion may be somewhat sensitive to the particular critical values adopted. That the three conditions give comparable explosion times is not surprising; the ratio of advection to heating times, the ratio of sound to escape speed, and the average shock radius all grow as models transition into explosion. 
None of these conditions is able to robustly predict when or even if an explosion will occur before the explosion begins. This suggests that while these conditions may be indicative of explosion, they are by no means the full story. With the results shown in Table~\ref{tab:exp} in hand, we can quantify the delay between the 2D explosions and the earlier 3D explosions. Taking the minimum difference found between the explosion conditions, we find that the $L_{\nu_e}=2.3\times10^{52}\,{\rm erg}\,{\rm s}^{-1}$ 3D model explodes at least $87\,{\rm ms}$ before the corresponding 2D model, while the $L_{\nu_e}=2.2\times10^{52}\,{\rm erg}\,{\rm s}^{-1}$ 3D model explodes at least $259\,{\rm ms}$ earlier than the corresponding 2D model. These delays may translate into appreciable differences in the explosion energy at infinity \citep{Yama12}, with earlier explosions (3D) being more energetic. \subsection{Explosion Trigger in 3D} What conditions trigger explosions in 3D? The multi-D nature of the hydrodynamics is generally agreed to be important in producing such conditions. Turbulence in the post-shock flow leads to a distribution of dwell times for accreting parcels of matter, increasing the mass in the gain region and, therefore, the efficiency of neutrino heating. In this way, turbulence plays a crucial, but secondary, role as an aid to neutrino heating. \citet{Murp11} argue that convection modifies the quasi-steady global structure of the flow by introducing turbulent fluxes of, for example, enthalpy and entropy. Their model focuses on how turbulent convection modifies averaged radial profiles, rather than on the effects of particular turbulent fluctuations, and was able to account for the differences in, for example, the radial profiles of entropy between 1D and 2D models. 
While these are important roles for the post-shock turbulent motions, here we suggest that the turbulent fluctuations themselves, driven by neutrino heating, may be instrumental in triggering explosions. The dominant hydrodynamic instability, aside from the explosion itself, in our parametrized 3D models is neutrino-heating-driven convection \citep{Burr12,Murp12}. In the nonlinear phase, the post-shock turbulent flow involves a complex interaction between buoyantly rising, neutrino-heating-driven plumes, negatively buoyant accretion streams, turbulent entrainment resulting from secondary Kelvin-Helmholtz instabilities, and dissipative interactions between rising plumes and the bounding shock. The evolution of a buoyant ``bubble'' depends on parameters, including the heating rate, bubble size, and background inflow velocity. Small bubbles tend to be shredded by turbulent entrainment, while large bubbles can rise all the way to the shock and locally push it outward. Our simulations suggest that some bubbles can continue rising, locally pushing the shock to ever larger radii until the global explosion is triggered. Figure~\ref{fig:volseq} shows a sequence of volume renderings illustrating that large buoyant features form in the flow and push the shock to larger radii. Figure~\ref{fig:shocksurf} shows the evolution of the shock surface where the same local growth of the shock radius can be clearly seen. Interestingly, the growth of these features occurs not on a dynamical time, but appears to proceed quasi-statically. The same basic picture holds in all of our 3D simulations, suggesting that this may be a generic feature of the transition to explosion. \begin{figure*}[htb] \centering \includegraphics[width=\textwidth]{f20.eps} \caption{Sequence of volume renderings of the specific entropy for the 3D $L_{\nu_e}=2.2\times10^{52}\,{\rm erg}\,{\rm s}^{-1}$ model. 
As seen at the bottom left of each image, a high-entropy plume forms around $\sim$$250$--$300\,{\rm ms}$ and persists for hundreds of milliseconds, eventually pushing the shock out far enough to seemingly trigger the global explosion. A similar structure appears around $\sim$$450\,{\rm ms}$ at the top right of each image, which leads to similar local shock expansion thereafter.} \label{fig:volseq} \end{figure*} \begin{figure*}[htb] \centering \includegraphics[width=\textwidth]{f21.eps} \caption{Evolution of the shock surface from the 3D $L_{\nu_e}=2.2\times10^{52}\,{\rm erg}\,{\rm s}^{-1}$ model. The colors indicate radius to emphasize the aspherical nature of the shock surface and the secularly growing feature associated with the high-entropy plume shown in Fig.~\ref{fig:volseq}.} \label{fig:shocksurf} \end{figure*} We can try to understand the conditions for the runaway growth of bubbles (and the associated triggered explosions) by considering the highly simplified toy model presented in the Appendix. In this model, the evolution of a bubble's outer radius ($R_b$) is determined by the competition between neutrino driving power ($L_\nu \tau$, where $L_\nu$ is the neutrino driving luminosity and $\tau$ is the effective optical depth of the bubble) and accretion power ($\alpha G M \dot{M}/R_b$, where $\alpha$ is a constant defined in the Appendix) associated with the ram pressure of material immediately behind the shock. The evolution follows \begin{equation} \frac{dR_b}{dt} = \frac{\Omega_0}{4 \pi} \frac{R_b^2}{G M M_b} \left(L_\nu \tau - \alpha \frac{G M \dot{M}}{R_b}\right), \end{equation} which has the solution \begin{equation}\label{eq:rbub} R_b(t) = \frac{\alpha G M \dot{M}}{L_\nu \tau + e^{\lambda t} (\alpha GM\dot{M}/R_0 - L_\nu \tau)}, \end{equation} where \begin{equation} \lambda = \alpha\frac{\Omega_0}{4\pi} \frac{\dot{M}}{M_b}. \end{equation} Here, $\Omega_0$ is the bubble's constant solid angle and $M_b$ is the bubble's fixed mass.
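As a consistency check, the bubble equation of motion can be integrated numerically and compared against the closed-form solution of Eq.~\ref{eq:rbub}. The parameter values below are purely illustrative, in scaled units with $GM = \dot{M} = M_b = 1$ (not the fitted values quoted later):

```python
import numpy as np

# Illustrative scaled parameters (assumptions, not fitted values): GM = Mdot = M_b = 1
GM, Mdot, Mb = 1.0, 1.0, 1.0
omega = 0.1          # Omega_0 / 4 pi
alpha = 0.25
Lnu_tau = 0.1        # neutrino driving term L_nu * tau
R0 = 1.0

def dRdt(R):
    # dR_b/dt = (Omega_0/4pi) R^2/(G M M_b) * (L_nu tau - alpha G M Mdot / R)
    return omega * R**2 / (GM * Mb) * (Lnu_tau - alpha * GM * Mdot / R)

def R_analytic(t):
    # Closed-form solution with lambda = alpha (Omega_0/4pi) Mdot / M_b
    lam = alpha * omega * Mdot / Mb
    B = alpha * GM * Mdot
    return B / (Lnu_tau + np.exp(lam * t) * (B / R0 - Lnu_tau))

# Fourth-order Runge-Kutta integration to t = 10 (scaled units)
dt, R, t = 0.01, R0, 0.0
for _ in range(1000):
    k1 = dRdt(R)
    k2 = dRdt(R + 0.5 * dt * k1)
    k3 = dRdt(R + 0.5 * dt * k2)
    k4 = dRdt(R + dt * k3)
    R += dt / 6.0 * (k1 + 2*k2 + 2*k3 + k4)
    t += dt
```

Since $L_\nu\tau < \alpha GM\dot{M}/R_0$ here, the bubble recedes and the numerical and analytic radii agree closely; flipping the inequality instead gives unbounded growth.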
Qualitatively, $R_b(t)$ has two types of behavior. When the neutrino driving term exceeds the ram pressure term, $R_b(t)$ grows without bound, presumably triggering the global explosion \citep{Thom00a}. When the ram pressure term dominates, the bubble recedes, perhaps allowing another bubble to take its place. \begin{figure}[htb] \centering \includegraphics[width=\columnwidth]{f22.eps} \caption{Maximum shock radii for the 3D $L_{\nu_e}=2.2\times10^{52}\,{\rm erg}\,{\rm s}^{-1}$ and $L_{\nu_e}=2.3\times10^{52}\,{\rm erg}\,{\rm s}^{-1}$ models, along with fitted solutions (black dashed lines) based on Eq.~\ref{eq:rbub}. See the text and Appendix for discussions.} \label{fig:rmax_model} \end{figure} This simple model makes two predictions. First, the model predicts a characteristic growth of the maximum shock radius, at least in the quasi-static growth phase. Figure~\ref{fig:rmax_model} shows the maximum shock radii ($R_{\rm max}$) from the $L_{\nu_e}=2.2\times10^{52}\,{\rm erg}\,{\rm s}^{-1}$ and $L_{\nu_e}=2.3\times10^{52}\,{\rm erg}\,{\rm s}^{-1}$ 3D models along with fitted model solutions. To compute the fits, we fix $M=1.6\,{\rm M}_\odot$, $\dot{M}=0.25\,{\rm M}_\odot\,{\rm s}^{-1}$, and $M_b/\Omega_0=2.5\times10^{30}\,{\rm g}$ and fit the model solution to the data between $250\,{\rm ms}$ post-bounce and the time when $R_{\rm max}=1000\,{\rm km}$, leaving $R_0$, $\tau$, and $\alpha$ as free parameters. Both fits give $\tau\approx0.04$ and $\alpha\approx0.25$. Given the simplicity of the model, the agreement between the hydrodynamical simulation results and the model predictions is quite surprising and encouraging. Second, there is a critical luminosity for a given mass accretion rate and shock radius above which a bubble will run away. If the stalled shock radius scales as $L^\beta$, then the critical luminosity for runaway bubble growth is proportional to $\dot{M}^{1/(1+\beta)}$.
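The scaling of the critical luminosity follows directly from balancing the two power terms. The sketch below solves $L\tau = \alpha G M \dot{M}/R_s(L)$ with $R_s \propto L^\beta$ in closed form and checks the $\dot{M}^{1/(1+\beta)}$ scaling; the shock-radius normalization ($R_s=200$ km at $L=2.2\times10^{52}\,{\rm erg}\,{\rm s}^{-1}$) is an illustrative assumption, so only the scaling, not the absolute value, is meaningful here.

```python
# Critical-luminosity scaling: balancing driving power (L*tau) against
# ram power (alpha*G*M*Mdot/R_s) with R_s = R_ref*(L/L_ref)**beta gives
# L_crit**(1+beta) = alpha*G*M*Mdot*L_ref**beta/(tau*R_ref), so
# L_crit scales as Mdot**(1/(1+beta)).  R_ref/L_ref are assumed values.
import math

G, Msun = 6.674e-8, 1.989e33           # cgs
M, tau, alpha, beta = 1.6 * Msun, 0.04, 0.25, 3.0
L_ref, R_ref = 2.2e52, 2.0e7           # assumed shock-radius normalization

def L_crit(Mdot):
    return (alpha * G * M * Mdot * L_ref**beta / (tau * R_ref)) ** (1.0 / (1.0 + beta))

Mdot = 0.25 * Msun
print(L_crit(2 * Mdot) / L_crit(Mdot))  # 2**(1/4) ~ 1.189, i.e. Mdot**(1/(1+beta))
```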
Empirically, $\beta\sim3$, and if we adopt the parameters used above we find \begin{equation} \begin{split} L_{\rm crit} \approx& 2.2 \left(\frac{M}{1.6\,{\rm M}_\odot}\right)^{1/4} \left(\frac{\dot{M}}{0.25\,{\rm M}_\odot\,{\rm s}^{-1}}\right)^{1/4}\\ & \times \left(\frac{\tau}{0.04}\right)^{-1/4}\times10^{52}\,{\rm erg}\,{\rm s}^{-1}, \end{split} \end{equation} which seems consistent with the critical explosion curves from parametrized multi-D models shown in other works \citep{Murp08,Nord10,Hank12,Couc12}. Inasmuch as runaway bubble growth is associated with explosions, this may be viewed as an alternative, albeit crude, derivation of the critical luminosity curve of \citet{Burr93a}. This may suggest that the reduction in the critical luminosity in going from 1D to 2D and 3D models might arise, in part, from the emergence of bubbles and their runaway growth. \section{Discussion and Conclusions}\label{sec:conc} We have presented analyses of parametrized 2D axisymmetric and 3D core-collapse supernova models. Our basic conclusion is neither surprising nor controversial---the hydrodynamics of core-collapse supernovae are different between 2D axisymmetric and 3D models. These differences are many and, while some are subtle and perhaps not crucial to understanding the mechanism, some are quite dramatic and make interpreting 2D supernova models problematic. Our parametrized models indicate that the global structures of the flows are different between 2D and 3D. We see this reflected in integral measures like the mass and average entropy in the gain region and in the radial profiles of various quantities. In spite of identical heating and cooling prescriptions between 2D and 3D, our analyses show how the different global structures affect the heating and cooling rates.
We find that the 2D models have significantly higher integrated net heating rates than their corresponding 3D models and attribute this to the smaller gain radius (and more mass in the gain region) in 2D, which may in turn be related to the larger radial velocities in 2D. On the other hand, the 3D models tend to have higher densities at large radii and larger average shock radii. The dwell time distributions for accreted parcels of matter offer another vantage point from which to distinguish 2D and 3D models. In both cases, the post-shock turbulence leads to a broad distribution of dwell times. In our simulations, the mean dwell time is somewhat longer in 2D than in 3D, consistent with the larger mass in the gain region in 2D than in 3D, but the 3D distribution has a relatively prominent tail towards long dwell times. About $25\%$ of the material spends more time in the gain region in 3D than in 2D, being exposed to more integrated heating and reaching higher peak entropies. Ultimately, many of the differences we see are plausibly associated with the character of the post-shock turbulent flow. In 2D, turbulent energy is pumped into the largest scales of the flow, which inevitably gives rise to the sloshing behavior manifest in all modern 2D core-collapse supernova simulations. This sloshing behavior has been identified as a crucial ingredient in nearly all successful neutrino-driven explosion models to date \citep{Bura06,Mare09,Mull12}. In 3D, this sloshing motion is muted or absent, at least in part, because the turbulent energy transport is predominantly towards small scales. Since we see explosions earlier in 3D than in 2D, vigorous sloshing is either not critical in any dimension or the explosion mechanism operates differently in 2D and 3D. Finally, we present a toy model that describes the evolution of buoyant bubbles driven by the competition between neutrino heating power and accretion power. 
The simple model is able to account for the quasi-static growth of the maximum shock radius in the exploding 3D models with neutrino luminosities of $L_{\nu_e}=2.2\times10^{52}\,{\rm erg}\,{\rm s}^{-1}$ and $L_{\nu_e}=2.3\times10^{52}\,{\rm erg}\,{\rm s}^{-1}$. It also predicts the existence of a critical luminosity for a given mass accretion rate and shock radius beyond which a bubble will have runaway growth. We speculate that this runaway growth triggers the global explosion of the star, and therefore that the critical luminosities for runaway bubble growth and explosion are effectively the same. The mean background and fluctuating turbulent components of the post-shock flow in 2D axisymmetric models are different from 3D models and, more importantly, likely not representative of the hydrodynamics of supernova cores in Nature. While the models presented in this work are incomplete, lacking adequate neutrino transport and feedback, this basic conclusion seems robust. It seems inevitable that we must await 3D models before drawing conclusions concerning the trigger of core-collapse supernovae. \acknowledgements The authors acknowledge stimulating interactions with Christian Ott, Rodrigo Fernandez, Thierry Foglizzo, John Blondin, Ann Almgren, and John Bell. A.B. acknowledges support from the Scientific Discovery through Advanced Computing (SciDAC) program of the DOE, under grant number DE-FG02-08ER41544, the NSF under the subaward No. ND201387 to the Joint Institute for Nuclear Astrophysics (JINA, NSF PHY-0822648), and the NSF PetaApps program, under award OCI-0905046 via a subaward No. 44592 from Louisiana State University to Princeton University. The authors thank the members of the Center for Computational Sciences and Engineering (CCSE) at LBNL for their invaluable support for CASTRO.
The authors employed computational resources provided by the TIGRESS high performance computer center at Princeton University, which is jointly supported by the Princeton Institute for Computational Science and Engineering (PICSciE) and the Princeton University Office of Information Technology; by the National Energy Research Scientific Computing Center (NERSC), which is supported by the Office of Science of the US Department of Energy under contract DE-AC03-76SF00098; and on the Kraken supercomputer, hosted at NICS and provided by the National Science Foundation through the TeraGrid Advanced Support Program under grant number TG-AST100001.
\section{Introduction} Despite much effort, the composition of high-energy (above $10^{15}$ eV) cosmic-rays is poorly known. Satellite and balloon experiments have insufficient exposure for high-statistics measurements. Terrestrial experiments have used the electromagnetic and low energy muon components of air showers for composition measurements. The ratio of electromagnetic to muon energy may be sensitive to the cosmic-ray composition, but it is also sensitive to the physics models used to simulate the air shower. One critical component of these models is the forward production of hadrons in high-energy interactions. Most of these hadrons are produced at low transverse momentum ($p_T$), where perturbative QCD (pQCD) is not applicable. A variety of non-perturbative calculations and extrapolations are used to model low $p_T$ particle production, and thereby make predictions about cosmic-ray composition. Here, we present a complementary approach to study air showers, using high energy, high $p_T$ muons to study cosmic-ray composition. The calculations use pQCD, and so should be well understood. High $p_T$ muons are produced predominantly in the initial cosmic-ray interaction \cite{prompt2}, from semileptonic decays of heavy quarks, and from decays of high $p_T$ pions and kaons produced in jet fragmentation. The rate for muons with $p_T$ above a few GeV is calculable in pQCD; the spectrum depends on the parton composition of the incident ions. For a fixed shower energy, the energy per nucleon drops as the atomic number rises, substantially altering the parton density, and so changing the muon spectrum. The composition may therefore be inferred from the high $p_T$ muon spectrum. The low-$x$ parton distributions in nitrogen also contribute in some kinematic areas; this may be an additional study topic. \section{Experimental Technique} The study of high $p_T$ muons requires a surface air shower array combined with a large underground muon detector.
The surface array measures the shower energy, core position and incident direction, and the underground detector measures the energy and position of high-energy muons. Previous experiments have studied high energy muons in air showers, but with relatively small underground detectors \cite{MACRO}\cite{SPASE}. These experiments did not make use of the distance between the muon and the air shower core. IceCube and IceTop comprise a 1 km$^2$ surface air shower array and a 1 km$^3$ muon detector \cite{ice}. The muon detector is big enough to observe muons far from the shower core. Together, the combination can determine the key elements of the event. IceTop will measure the shower energy, core position, and arrival direction. It will consist of 160 ice-filled tanks spread over a 1 km$^2$ area\cite{bai}. It has an energy threshold of about 300 TeV. IceCube will consist of up to 4800 optical modules in 1 km$^3$ of ice, at depths from 1450 to 2450 meters. The combined acceptance will be about 0.3 km$^2$ sr \cite{bai}. IceCube will measure the energy, position and direction of muons. For vertical muons, the energy threshold is about 500 GeV. The muon energy, $E_\mu$, is measured by determining the muon specific energy loss ($dE/dx$). For $E_\mu > 1$ TeV, $dE/dx$ scales with $E_\mu$. In a sufficiently high energy event, IceCube will observe multiple muons. Most of these muons will have low $p_T$, and so will cluster around the shower core. In this high-density region, it may not be possible to reconstruct individual muons; the bundle will be reconstructed as a single light source. However, far from the core, where the muon density is lower, it should be possible to reconstruct individual tracks. The separation required to resolve individual tracks is unknown, but 100 meters (comparable to the spacing between optical module strings) seems like a safe value. It is more than 3 times the effective light scattering length of about 30 m \cite{icepaper}.
With the muon energy and distance from the core determined, the muon $p_T$ may be calculated: \begin{equation} p_T = {E_\mu d_c \over h} \end{equation} where $E_\mu$ is the muon energy, $d_c$ is its distance from the core, and $h$ is the distance from IceCube to the site of the initial cosmic-ray interaction in the atmosphere. The value of $h$ is not determined on an event-by-event basis, but its average value ($\approx$ 30 km), zenith angle dependence and distribution are well known; the event-to-event variation and slight composition dependence can be considered in determining the $p_T$ spectrum. With the conservative $d_c> 100$ m requirement and a 1 TeV muon, IceCube can study the spectrum for $p_T > 3$ GeV/c, covering most of the interesting high $p_T$ region. As $E_\mu$ increases, so does the minimum accessible $p_T$; for $E_\mu=10$ TeV, $p_T > 30$ GeV/c, while for a 50 TeV muon, $p_T > 150$ GeV/c. Since higher energy muons produce more Cherenkov light, they are easier to track, and it is likely that, for higher energy muons, a smaller $d_c$ cut could be used. The systematic errors in these measurements remain to be studied. Some components are: 1) Uncertainty in the absolute cosmic-ray flux and energy scale. The flux factors out, since we will only use events observed by IceTop. The energy scale introduces an uncertainty in the scale of $x$ measurements. 2) Error on the core position and extrapolation to depth. For muons far from the core, the fractional error is small. In the first IceCube string, the systematic offset between IceCube and IceTop is less than 1 degree \cite{performance}. 3) Uncertainty in the muon energy and position, due to stochastic interactions, multiple scattering and other factors. Multiple scattering contributes to $d_c$, but, far from the core, $d_c$ is dominated by the muon $p_T$. 
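The accessible-$p_T$ estimates above follow directly from the small-angle relation; a minimal sketch, using the $d_c=100$ m requirement and the average production height $\langle h\rangle\approx30$ km quoted in the text (the text quotes rounded thresholds):

```python
# Back-of-envelope evaluation of p_T = E_mu * d_c / h for the examples
# in the text (d_c = 100 m, <h> ~ 30 km; all momenta in GeV/c).
def pt_gev(E_mu_gev, d_c_m, h_m=30.0e3):
    """Muon transverse momentum in GeV/c (small-angle approximation)."""
    return E_mu_gev * d_c_m / h_m

print(pt_gev(1000.0, 100.0))    # 1 TeV muon:  ~3.3 GeV/c (text: p_T > 3 GeV/c)
print(pt_gev(10000.0, 100.0))   # 10 TeV muon: ~33 GeV/c
print(pt_gev(50000.0, 100.0))   # 50 TeV muon: ~167 GeV/c
```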
\section{Rates} Thunman, Ingelman and Gondolo (TIG) calculated the prompt (charm only) and non-prompt muon production using a PYTHIA Monte Carlo simulation, with the MRS-G parton distribution functions, leading-order pQCD cross-sections for $q\overline q\rightarrow Q\overline Q$ and $gg\rightarrow Q\overline Q$, and standard charm quark hadronization and decay models \cite{prompt}. Particles were propagated in the atmosphere using transport equations. Table \ref{t1} shows the expected muon rate in the combined IceCube-IceTop acceptance for different energy thresholds. Calculations with a fixed $E_\mu^{-3.7}$ energy spectrum find more non-prompt muons at low energies \cite{gaisser}. \begin{table}[tb] \caption{Muon rates for the TIG95 calculation, for 1 year ($3\times10^7$ s) with 0.3 km$^2$ sr acceptance.} \label{t1} \begin {tabular}{lrr} \hline Energy & Prompt & Non-Prompt \\ Threshold & Rate & Rate \\ \hline $10^{15}$ eV & 1.5 & 0.5 \\ $10^{14}$ eV & 1,000 & 18,500 \\ $10^{13}$ eV & 56,000 & $10^{7}$ \\ $10^{12}$ eV & 600,000 & $7\times10^8$ \\ \hline \end{tabular} \end{table} A newer calculation uses next-to-leading-order pQCD calculations and the CTEQ3 parton distributions. It finds prompt rates that are comparable to the earlier calculation at 1 TeV, but are higher at higher energies; at 100 TeV, the prompt cross section is about 8 times higher \cite{prompt2}. The difference is due largely to the different low-$x$ behavior of the two parton distributions. Bottom quark production was not included. Calculations for the LHC (at a comparable energy) find that, for muon $p_T >2$ GeV/c, bottom contributes a larger muon signal than charm \cite{LHC}. For $E_\mu > 50$ TeV, the accompanying shower should almost always trigger IceTop; at lower muon energies, some of the showers may be too small to trigger IceTop, so will not be seen. The prompt signal is significant for $E_\mu < 10^{14}$ eV, although smaller than the non-prompt signal.
A $p_T$ cut should eliminate all of the soft (non-perturbative) non-prompt signal, leaving muons from heavy quarks and high $p_T$ $\pi$ and $K$. Based on calculations for the LHC, a cut on $p_T>2-4$ GeV/c will also eliminate 90-98\% of the prompt muons\cite{LHC}. Although a drastic reduction, this still leaves an interesting sample. The final cut on $p_T$ (or $d_c$) will depend on the detector 2-track (core + distant muon) separation capability. \section{Muon Spectrum Analysis} Measurement of the muon $p_T$ spectra has some similarities with the lepton spectra studies done at the Relativistic Heavy Ion Collider (RHIC). The $p_T$ spectra of leptons produced in proton-proton, deuteron-gold and heavy-ion collisions have been studied. \begin{figure}[tb] \center{\includegraphics[width=2.1 in,clip]{one}} \vskip -0.2 in \caption{Parton distribution for a $10^{17}$ eV cosmic ray for quarks (top), gluons (middle) and antiquarks (bottom). The solid lines are for hydrogen ($A=1$), while the dashed lines are for $A=10$ nuclei. These curves are based on the MRST99 parton distributions \cite{partons} at $Q^2 = 1000$ GeV$^2$.} \label{fig:partons} \end{figure} The RHIC experiments fit their data using a multi-component ``cocktail.'' For electrons, the cocktail consists of $\pi^0$ and $\eta$ Dalitz decays, $\gamma\rightarrow e^+e^-$ conversions, leptonic decays of vector mesons, plus semileptonic decays of heavy mesons and baryons \cite{RHICe}. For muons, the cocktail consists of $\pi$, $K$ and heavy quark decays \cite{RHICmu}. For muons, the $\pi$ and $K$ decay fraction is reduced with vertex cuts on the sample, so this result is not directly relevant to IceCube. However, the electron analysis seems quite relevant. The fraction of prompt electrons rises with the electron $p_T$; for $p_T >5$ GeV/c, prompt electrons are dominant. Additional confidence in these studies comes from the good agreement seen between high $p_T$ $\pi^0$ data and pQCD calculations\cite{pi0}.
Simple arguments predict this dominance. Light meson production is predominantly soft; $d\sigma/dp_T$ falls exponentially with $\langle p_T\rangle \approx m$, $m$ being the meson mass. In contrast, heavy quark production is described by pQCD, which gives a power law $p_T$ spectrum; at high enough $p_T$, this will dominate over any exponential. pQCD processes also produce high $p_T$ $\pi$ and $K$, but these mesons are only a small fraction of the total production. A similar approach could apply to the $p_T$ spectrum of muons from air showers, with a cocktail of perturbative and non-perturbative non-prompt muons, and prompt muons. For $p_T$ above a few GeV/c, the non-perturbative component will be gone, and the remaining events could be fit to a power law spectrum. Spectra could be measured for different muon energies and shower energies. The muon energies are related to the muon rapidity in the center-of-mass frame. \section{Composition Determination} Fig.~\ref{fig:partons} compares the parton energy spectrum (in the target frame) for a $10^{17}$ eV cosmic ray, for hydrogen ($A=1$) and an $A=10$ nucleus. The maximum parton energy scales as $1/A$; this determines the maximum parton-parton center of mass energy. For a given collision, the $x$ values of the parton densities that are probed depend on the kinematics of the produced partons. So, different muon energies and $p_T$ are sensitive to different $x$ regions. In any collision, the maximum possible muon energy is the maximum incident parton energy. The maximum $p_T$ is half the parton-parton center-of-mass energy, $W = \sqrt{2E x_P x_N m_p}$. Here $E$ is the cosmic-ray energy, $x_P$ and $x_N$ are the $x$ values of the projectile and target partons, and $m_p$ is the mass of the proton. The corresponding spectra are determined by a calculation that includes the kinematics of the parton production, fragmentation, and semileptonic decays.
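The relation $W=\sqrt{2E x_P x_N m_p}$ can be evaluated directly for the example kinematics discussed in the next paragraph (incident $10^{18}$ eV proton); the short sketch below confirms that the quoted $Q^2$ values are of order $W^2$:

```python
# Evaluation of W = sqrt(2 E x_P x_N m_p) for the example kinematics
# discussed in the text (incident 10^18 eV proton); energies in GeV.
import math

m_p = 0.938          # proton mass, GeV/c^2
E = 1.0e9            # 10^18 eV in GeV

def W(xP, xN):
    return math.sqrt(2.0 * E * xP * xN * m_p)

# soft example: x_P = 0.01, x_N = 1.5e-6 -> W^2 ~ 28 GeV^2 (Q^2 ~ 30 GeV^2)
print(W(0.01, 1.5e-6) ** 2)
# hard example: x_P = 1e-3, x_N = 8e-2  -> W^2 ~ 1.5e5 GeV^2 (Q^2 ~ 1e5 GeV^2)
print(W(1.0e-3, 8.0e-2) ** 2)
print(W(1.0e-3, 8.0e-2) / 2.0)   # maximum parton p_T ~ W/2 ~ 194 GeV/c
```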
It may be worth giving a few examples of the range of $x$ values probed, using a grossly simplified model where the muon takes half of the energy and $p_T$ of the parton produced in the collisions. For an incident $10^{18}$ eV proton producing a $10^{16}$ eV muon with $p_T = 1$ GeV (this muon would be in the core, and only distinguishable due to its huge $dE/dx$), $x_P$ = 0.01, $x_N=1.5\times10^{-6}$ and $Q^2\approx 30$ GeV$^2$. With the same incident particle and muon energy, a muon with $p_T = 100$ GeV would come from a collision with $x_P=10^{-3}$, $x_N = 8\times10^{-2}$ and $Q^2=10^5$ GeV$^2$. \section{Conclusions} Studies of high $p_T$ muon production in cosmic-ray air showers appear feasible with IceCube and IceTop combined. IceTop can measure the air shower energy, incident direction, and core position, while IceCube will measure muon energy and distance from the shower core; these data can be used to determine the muon $p_T$. For $p_T$ above a few GeV, the muon $p_T$ spectrum can be interpreted in terms of perturbative QCD and fragmentation functions. The muon $p_T$ spectrum should be sensitive to the composition of incident cosmic rays. It is a pleasure to acknowledge useful comments from Xinhua Bai, Tom Gaisser, and Teresa Montaruli.
\section*{Acknowledgements} \label{cha:acknowledgements} \addcontentsline{toc}{chapter}{Acknowledgements} We acknowledge financial support from the Bhabha Atomic Research Centre (BARC) and the Indian Institute of Technology Bombay, India; the Bundesministerium f\"ur Bildung und Forschung (BMBF), Germany; the Carl-Zeiss-Stiftung 21-0563-2.8/122/1 and 21-0563-2.8/131/1, Mainz, Germany; the Center for Advanced Radiation Technology (KVI-CART), Groningen, Netherlands; the CNRS/IN2P3 and the Universit\'{e} Paris-Sud, France; the Czech Ministry (MEYS) grants LM2015049, CZ.02.1.01/0.0/0.0/16 and 013/0001677, Czech Republic; the Deutsche Forschungsgemeinschaft (DFG), Germany; the Deutscher Akademischer Austauschdienst (DAAD), Germany; the European Union's Horizon 2020 research and innovation programme under grant agreement No 824093; the Forschungszentrum J\"ulich, Germany; the Gesellschaft f\"ur Schwerionenforschung GmbH (GSI), Darmstadt, Germany; the Helmholtz-Gemeinschaft Deutscher Forschungszentren (HGF), Germany; the INTAS, European Commission funding; the Institute of High Energy Physics (IHEP) and the Chinese Academy of Sciences, Beijing, China; the Istituto Nazionale di Fisica Nucleare (INFN), Italy; the Ministerio de Educación y Ciencia (MEC) under grant FPA2006-12120-C03-02, Spain; the Polish Ministry of Science and Higher Education (MNiSW) grant No. 2593/7, PR UE/2012/2, and the National Science Centre (NCN) DEC-2013/09/N/ST2/02180, Poland; the State Atomic Energy Corporation Rosatom, National Research Center Kurchatov Institute, Russia; the Schweizerischer Nationalfonds zur F\"orderung der Wissenschaftlichen Forschung (SNF), Switzerland; the Science and Technology Facilities Council (STFC), British funding agency, Great Britain; the Scientific and Technological Research Council of Turkey (TUBITAK) under the Grant No. 
119F094, Turkey; the Stefan Meyer Institut f\"ur Subatomare Physik and the \"Osterreichische Akademie der Wissenschaften, Wien, Austria; the Swedish Research Council and the Knut and Alice Wallenberg Foundation, Sweden. \section{Background Studies} \label{sec:BackgroundStudies} In addition to the study of the signal channel, a study of hadronic background events is performed. The most critical contributions to the background are processes ending in similar final states, e.g. \pbarp$\rightarrow \mt{p}\bar{\mt{p}}\pi^+\pi^+\pi^-\pi^-\mt{K}^+\mt{K}^-$ for \mychannelfs and \pbarp$\rightarrow \mt{p}\bar{\mt{p}}\pi^+\pi^+\pi^-\pi^-\pi^0$ for \channelalbrecht. In the latter case, the cross section is estimated to be on the order of $100\,\mu\mt{b}$ by extrapolating the results from \cite{Flaminio1984}. Here, background data samples were generated with the generator DPM \cite{Galoyan2005}, which is based on the Dual Parton Model \cite{Capella1994}, including only inelastic processes. The DPM event generator simulates all possible hadronic reactions for a given beam momentum. The cross-section of the \pbarp process is parameterized based on experimental data.\\ 100 million background events were subject to the same analysis strategy used for the signal events. In the case of \mychannelfs, no event out of these 100 million background events survived the analysis procedure.\\ In the study of $\bar{\mt{p}}\mt{p}\rightarrow$\cascasbarpinull, 7 events remained in the event sample after applying the full analysis procedure. Further studies showed that these events could be removed by restricting the distance between the \cascade and \anticascade decay vertices $d_{\Xi - \bar{\Xi}}$.
By requiring $d_{\Xi - \bar{\Xi}} > 1\unit{cm}$, the signal reconstruction efficiency is reduced to $3.1\,\%$.\\ The non-observation of background events corresponds to a $90\,\%$ confidence upper limit of 2.3 events, which is used to calculate a lower limit for the signal-to-background ratio as well as for the signal significance. The signal-to-background ratio is given by \begin{equation} \frac{S}{B} = \frac{\sigma_{\mt{sig}}\cdot \epsilon_{\mt{sig}}\cdot b_{\mt{sig}}}{\sigma_{\mt{bg}}\cdot \epsilon_{\mt{bg}}}, \end{equation} where $\sigma_{\mt{sig}}$ and $\sigma_{\mt{bg}}$ are the signal and inelastic \pbarp cross sections, respectively, $b_{\mt{sig}}$ is the total branching ratio of signal events, and $\epsilon_{\mt{sig}}$ and $\epsilon_{\mt{bg}}$ are the respective reconstruction efficiencies for signal and background. Since the signal cross sections have not yet been measured, the cross section for the \fs final state, including the continuum contribution, is assumed to be $\sigma_{\mt{sig}}=1\,\mu\mt{b}$, and that for \cascasbarpinull to be $2\,\mu\mt{b}$, since experimental studies determined the cross section for the \cascasbarpinull ground state to be higher than that for the \fs ground state \cite{Musgrave1965}. Furthermore, the inelastic \pbarp cross section at a beam momentum of $4.6\momentumunit$ is $\sigma_{\mt{bg}}=50\unit{mb}$ \cite{PDG2018}. During the generation of the signal events, the branching ratio of \lam and \alam was set to $100\,\%$ for the decay \mbox{\decay{\lam}{p}{\piminus}} and \mbox{\decay{\alam}{\aprot}{\piplus}}. For the following calculations this ratio has been corrected by the factor $b_{\mt{sig}}=b^2_\Lambda = 0.4083$ for both final states investigated here.\\ For the signal events, the reconstruction efficiency is $\epsilon_{\mt{sig}}=5.4\,\%$ for both \fs and \fscc, and $\epsilon_{\mt{sig}}=3.37\,\%$ for \cascasbarpinull.
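The signal-to-background formula can be evaluated directly with the numbers above; the sketch below uses the post-cut efficiency of $3.1\,\%$ for \cascasbarpinull (a choice on our part, which reproduces the tabulated limit) and the $90\,\%$ CL upper limit of 2.3 events for $\epsilon_{\mt{bg}}$:

```python
# Numerical evaluation of S/B with the values quoted in the text.
# For the Xi Xibar pi0 channel the post-distance-cut efficiency of 3.1%
# is used (an interpretation; it reproduces the tabulated limit).
sigma_bg = 50.0e-3        # inelastic pbar-p cross section, barn
eps_bg = 2.3 / 1.0e8      # 90% CL upper limit over generated background events
b_sig = 0.4083            # b_Lambda^2 branching correction

def s_over_b(sigma_sig, eps_sig):
    return sigma_sig * eps_sig * b_sig / (sigma_bg * eps_bg)

print(s_over_b(1.0e-6, 0.054))   # fs final state: ~19.2 (table: >19.1)
print(s_over_b(2.0e-6, 0.031))   # Xi Xibar pi0:   ~22.0 (table: >22.0)
```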
The significance of the signal $S_{\mt{sig}}$ is given by \begin{equation} S_{\mt{sig}} = \frac{N_{\mt{sig}}}{\sqrt{N_{\mt{sig}}+N_{\mt{bg}}\cdot F_{\mt{bg}}}}, \label{eq:Significance} \end{equation} where $F_{\mt{bg}}$ denotes a scaling factor which corrects the number of background events according to the number of signal events, since the generated ratio for signal and background does not reflect the cross sections. The scaling factor is given by \begin{equation} F_{\mt{bg}} = \frac{N_{\mt{sig}}^{\mt{gen}}\cdot \sigma_{\mt{bg}}}{N_{\mt{bg}}^{\mt{gen}}\cdot \sigma_{\mt{sig}}\cdot b_{\mt{sig}}}, \label{eq:ScalingFactor} \end{equation} where $N_{\mt{sig}}^{\mt{gen}}$ and $N_{\mt{bg}}^{\mt{gen}}$ are the number of generated signal and background events, respectively. With $N_{\mt{sig}}=N_{\mt{sig}}^{\mt{gen}}\cdot\epsilon_{\mt{sig}}$ and $N_{\mt{bg}}=N_{\mt{bg}}^{\mt{gen}}\cdot\epsilon_{\mt{bg}}$, \cref{eq:Significance,eq:ScalingFactor} transform to \begin{equation} S_{\mt{sig}} = \frac{\sqrt{N^{\mt{gen}}_{\mt{sig}}}\cdot \epsilon_{\mt{sig}}}{\sqrt{\epsilon_{\mt{sig}}+\frac{\epsilon_{\mt{bg}}\cdot \sigma_{\mt{bg}}}{\sigma_{\mt{sig}}\cdot b_{\mt{sig}}}}}. \end{equation} In the following, the signal significance is calculated with the expected number of events within 3 days of data taking. This is motivated by the beam time which is needed to collect the statistics necessary for a future partial wave analysis. Assuming a luminosity of $L=10^{31}\unit{cm}^{-2}\mt{s}^{-1}$, $\sigma_{\mt{sig}}=1\,\mu\mt{b}$ for \fs and $\sigma_{\mt{sig}}=2\,\mu\mt{b}$ for \cascasbarpinull, the expected number of events is $N^{\mt{gen}}_{\mt{sig}}\approx 12\cdot 10^6$ for \fs as well as for the \cc channel, and $N^{\mt{gen}}_{\mt{sig}}\approx 24\cdot 10^6$ for \cascasbarpinull. The calculated signal-to-background ratio and signal significance for each investigated channel are summarized in \cref{tab:SBandSignificance}. We also included the results based on a factor 10 smaller cross section to give an indication of the lower limit case.
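That the two-step form of \cref{eq:Significance,eq:ScalingFactor} reduces to the closed-form expression can be checked numerically. The sketch below assumes $N_{\mt{sig}}=N_{\mt{sig}}^{\mt{gen}}\epsilon_{\mt{sig}}$ and $N_{\mt{bg}}=N_{\mt{bg}}^{\mt{gen}}\epsilon_{\mt{bg}}$ (the substitution implied in the derivation); the numerical inputs are illustrative:

```python
# Consistency check: the two-step significance (Eqs. Significance and
# ScalingFactor) equals the closed-form expression, assuming the
# reconstructed counts are N = N_gen * efficiency.
import math

def s_two_step(Ngen_s, Ngen_b, eps_s, eps_b, sig_s, sig_b, b):
    N_s = Ngen_s * eps_s                            # reconstructed signal
    N_b = Ngen_b * eps_b                            # reconstructed background
    F_bg = Ngen_s * sig_b / (Ngen_b * sig_s * b)    # scaling factor
    return N_s / math.sqrt(N_s + N_b * F_bg)

def s_closed(Ngen_s, eps_s, eps_b, sig_s, sig_b, b):
    return math.sqrt(Ngen_s) * eps_s / math.sqrt(eps_s + eps_b * sig_b / (sig_s * b))

vals = dict(Ngen_s=1.2e7, Ngen_b=1.0e8, eps_s=0.054, eps_b=2.3e-8,
            sig_s=1.0e-6, sig_b=5.0e-2, b=0.4083)
print(s_two_step(**vals))
print(s_closed(**{k: v for k, v in vals.items() if k != 'Ngen_b'}))  # identical
```

As expected, $N_{\mt{bg}}^{\mt{gen}}$ cancels, so the closed form depends only on the generated signal statistics, the efficiencies, the cross sections, and $b_{\mt{sig}}$.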
\begin{table}[htb] \centering \caption{Signal-to-background ratio and signal significance. In addition to the assumed cross sections, calculations for cross sections a factor of 10 smaller are also shown.} \label{tab:SBandSignificance} \begin{tabular}{llcc} \hline & $\sigma_{\mt{sig}}$ & \fs ($\&$c.c.) & \cascasbarpinull\\ \hline $S/B$ & $\sim 1\,\mu\mt{b}$ & $>19.1$ & $>22.0$\\ $S_{\mt{sig}}$ & $\sim 1\,\mu\mt{b}$ & $>361$ & $>392$ \\ $S/B$ & $\sim 0.1\,\mu\mt{b}$ & $>1.91$ & $>2.2$\\ $S_{\mt{sig}}$ & $\sim 0.1\,\mu\mt{b}$ & $>95$ & $>105$ \\ \hline \end{tabular} \end{table} \subsection{The Full Decay Tree Fit Procedure} \label{sec:DecayTreeFit} In this section an overview of the method to perform a least-squares fit of a full decay chain is presented. For further information the reader can consult \cite{Hulsbergen2005}.\\ The presented least-squares fit allows a simultaneous extraction of all parameters in a decay chain. This method has been developed for the data analysis at the BaBar experiment \cite{Hulsbergen2005}. It uses a parameterization in terms of vertex position, momentum and decay time of a particle.\\ The parameterization of the decay tree is chosen as follows: \begin{itemize} \item Final state particles are represented by their momentum vectors ($p_x, p_y, p_z$). The mass of the final state particle is assigned by the particle hypothesis set in the decay tree. \item Intermediate states are modeled by a four-momentum vector ($p_x, p_y, p_z, E$) and a decay vertex position ($x, y, z$). If the intermediate state is not the initial particle, the decay time $\theta \equiv l/\left|\vec{p}\right|$, where $l$ is the decay length, is used as an additional parameter. \end{itemize} Furthermore, two types of constraints have to be distinguished: the internal constraints, i.e. vertex constraint and momentum conservation constraint, to remove redundant degrees of freedom, and the external constraint constituted by the reconstructed final state particles.
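To illustrate the linearized constrained least-squares machinery on which such a fit is built (this is a minimal sketch, not the actual BaBar/PANDA implementation), consider the simplest case of a single measured four-momentum projected onto a mass-constraint surface via a Lagrange multiplier; all numerical inputs (momenta, resolutions) are invented for the example:

```python
# Minimal illustration of one linearized constraint step, iterated to
# convergence: a measured four-momentum (diagonal covariance V) is pulled
# onto the surface h(x) = m^2 - m0^2 = 0 via the standard update
# dx = -V H^T (H V H^T)^-1 h(x).  Numbers are invented for the example.
import math

m0 = 1.32171                                # GeV/c^2, Xi- mass hypothesis
x = [1.10, 0.21, 2.05, 2.55]                # "measured" (px, py, pz, E)
V = [0.02**2, 0.02**2, 0.04**2, 0.05**2]    # diagonal covariance

def h(p):                                   # mass constraint
    px, py, pz, E = p
    return E * E - px * px - py * py - pz * pz - m0 * m0

for _ in range(10):                          # iterate the linearized update
    px, py, pz, E = x
    H = [-2 * px, -2 * py, -2 * pz, 2 * E]   # Jacobian dh/dx
    S = sum(Hi * Hi * Vi for Hi, Vi in zip(H, V))  # H V H^T (scalar: 1 constraint)
    lagr = h(x) / S                          # Lagrange multiplier
    x = [xi - Vi * Hi * lagr for xi, Vi, Hi in zip(x, V, H)]

px, py, pz, E = x
print(math.sqrt(E * E - px * px - py * py - pz * pz))   # converges to m0
```

The full decay tree fit generalizes this step to many parameters and many simultaneous vertex, momentum-conservation, and mass constraints, applied in the order described below.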
The degrees of freedom of the decay tree are formed by the vertex positions and momenta of all involved particles.\\ The constraints described above are the minimal set of constraints necessary to fit the decay tree starting with the reconstructed final state particles. In addition, other constraints, i.e. constraining the mass of composites and the four-momentum of the head, are implemented. In principle, missing particles could also be included, provided the decay tree does not become kinematically under-constrained.\\ The order in which the constraints are applied has an impact on the sum of the \chisq contributions, with one exception: if all applied constraints are linear, the sum of the \chisq contributions is not affected by the order of the constraints. Based on this, the external constraints are applied first, followed by all four-momentum conservation constraints. In the last step geometric constraints as well as mass constraints are applied.\\ In general, the decay tree fit is repeated until the total \chisq reaches a stable value. In each iteration the parameters are initialized with the results of the previous iteration. In contrast, the covariance matrix is reset for each iteration to its original value. \section{Event Generation and Track Reconstruction $\mathbf{\&}$ Filtering} In this section the event generation as well as the procedures for single track reconstruction and track filtering are presented. \subsection{Event generation} \begin{figure}[htb] \centering \includegraphics[width=0.46\textwidth]{./plots/decaytree_new.pdf} \caption{Decay tree for the simulation of \mychannel where $\Xi^*$ decays into \lam\kminus.} \label{fig:DecayTree} \end{figure} In this study, the events to be analyzed, called signal events in the following, were generated with the event generator EvtGen \cite{Lange2001} according to a defined decay chain. The decay chain for one of the channels simulated in this work is presented in \cref{fig:DecayTree}.
The antiproton momentum is chosen to be $p_{\bar{\mt{p}}}=4.6\momentumunit$ corresponding to a c.m. energy of $\sqrt{s}=3.25\unit{GeV}$. The chosen beam momentum allows the population of several resonant states of the $\Xi$ baryon, i.e. \excitedcascadefifteen, \excitedcascadesixteen and \excitedcascadetwenty as well as \excitedanticascadefifteen, \excitedanticascadesixteen and \excitedanticascadetwenty. \begin{table}[b] \centering \caption{Mass and width of the $\Xi$ resonances as implemented for the event generation. The values in parentheses were used for the event generation of the reaction \channelalbrecht.} \label{tab:XiResProps} \begin{tabular}{lcc} \hline State & Mass [MeV$/\mt{c}^2$] & $\Gamma$ [MeV$/\mt{c}^2$] \\ \hline \excitedcascadefifteen & 1535 & 9.9 \\ \excitedcascadesixteen & 1690 & 30 (25) \\ \excitedcascadetwenty & 1823 & 24 (25) \\ \hline \end{tabular} \end{table} The properties of the resonant states according to \cite{PDG2018} are summarized in \cref{tab:XiResProps}. Different decay channels of the $\Xi$ resonances are investigated: \begin{itemize} \item \mbox{\excitedcascade$\rightarrow$\lam\kminus}, \item \excitedcascade$\rightarrow \Xi^- \pi^0$, and \item their \cc channels. \end{itemize} The chosen decay channels allow a good test of the reconstruction of far-off vertices (\lam), PID of rare particles (\kplus, \kminus), the reconstruction of composite vertices, \cascade $\rightarrow\pi^-$\lam followed by \lam $\rightarrow\pi^-$p, and also the combination of charged particle information with photon reconstruction (\pinull$\rightarrow \gamma\gamma$).\\ A non-resonant contribution has been generated in addition to the $\Xi^*$ states mentioned. \begin{table*}[t] \centering \caption{Production and decay branches of the signal events. 
c.c. denotes the \cc.} \label{tab:channels} \begin{tabular}{lllll} \hline \pbarp $\rightarrow$ & & & $\rightarrow$ & \fs \\ \pbarp $\rightarrow$ & \anticascade \excitedcascadesixteen & & $\rightarrow$ & \fs \\ \pbarp $\rightarrow$ & \anticascade \excitedcascadetwenty& & $\rightarrow$ & \fs\\ \pbarp $\rightarrow$ & & & $\rightarrow$ & \fscc \\ \pbarp $\rightarrow$ & \excitedanticascadesixteen \cascade & & $\rightarrow$ & \fscc\\ \pbarp $\rightarrow$ & \excitedanticascadetwenty \cascade & & $\rightarrow$ & \fscc\\
% \hline
% \pbarp $\rightarrow$ & & & $\rightarrow$ & \anticascade \cascade \pinull\\
\pbarp $\rightarrow$ & \anticascade \excitedcascadefifteenmin & (+ c.c.)& $\rightarrow$ & \anticascade \cascade \pinull\\ \pbarp $\rightarrow$ & \anticascade \excitedcascadesixteen& (+ c.c.)& $\rightarrow$ & \anticascade \cascade \pinull\\ \pbarp $\rightarrow$ & \anticascade \excitedcascadetwenty& (+ c.c.)& $\rightarrow$ & \anticascade \cascade \pinull\\ \hline \end{tabular} \end{table*} A full overview of the generated samples is shown in \cref{tab:channels}. The ratio between the resonant and non-resonant contributions to the signal events is an assumption based on measured total production cross sections of both excited and ground states of single strange hyperons \cite{Flaminio1984}.\\ For each decay mode an isotropic angular distribution is chosen, since there are neither experimental data nor theoretical predictions for the reaction \mychannel and its \cc reaction. This simplification ensures that both the baryon and the anti-baryon are subject to the same detector acceptance. In addition, the decay of each resonance is assumed to be isotropic.\\ Furthermore, the production cross section for \mychannel as well as for \mychannelcc is unknown. For the production of \anticascade\cascade in \pbarp collisions at $p=3\momentumunit$ beam momentum a cross section of $\sigma\simeq 2\,\mu\mt{b}$ has been measured \cite{Musgrave1965}.
In case of single strange hyperons, the comparison of the ground state and the excited state production shows similar cross sections for both species \cite{Flaminio1984}. Therefore, the cross section $\sigma$(\mychannel) is assumed to be $1\,\mu\mt{b}$.\\\newline Since EvtGen does not take into account the curved trajectory in the magnetic field of the solenoid or the interaction of particles with the detector volume, the propagation of \anticascade and \cascade is passed to Geant4.\\ The branching ratio for both $\Xi$ baryons to $\Lambda\pi$ is BR($\Xi \rightarrow \Lambda \pi$)$=99.98\,\%$. In contrast, \lam as well as \alam have various decay modes with significant branching ratios. Since this study focuses on \decay{\lam}{p}{\piminus} and \decay{\alam}{\aprot}{\piplus}, the corresponding branching ratio ($BR=63.4\,\%$) is set to $100\,\%$. The final results have been scaled by the correct branching ratios for further calculations. \subsection{Track Reconstruction and Filtering} \label{ssec:trackfilter} A characteristic feature of ground state hyperons is their long decay time, so that they can propagate several centimeters before they decay. The decay lengths ($\ctau$) of the \lam and the $\Xi$ are $7.89$ and $4.91\unit{cm}$, respectively \cite{PDG2018}. This implies that their daughter particles are not produced close to the interaction point. As mentioned in \cref{ssec:SoftwareFramework}, the tracking algorithms in PandaRoot assume particles to come from the IP, meaning that the implemented algorithms are not able to reconstruct the charged final state particles of the reactions under study. Since no pattern recognition algorithm was available that takes into account particles that decay away from the IP, we used an ideal pattern recognition algorithm instead. As a consequence, even particles leaving only one hit in any sub-detector will be reconstructed.
To simulate more realistic conditions, a track filter is used to reject tracks with a low hit multiplicity in the tracking detectors. In the following, charged final state particles are only considered further if they leave at least four hits in one of the inner tracking detectors (MVD, STT or GEM). This selection criterion is motivated by the helix trajectory of a charged particle in a homogeneous magnetic field. Consider, for example, the case in which the particle moves along the $z$-axis. In that case, the projection of the trajectory onto the $x$-$y$-plane is a circle, which is defined by three hit points inside the detector part. A fourth hit is then a confirmation of the track hypothesis. \section{Event Reconstruction} \subsection[$\bar{\mt{p}}\mt{p}\rightarrow \bar{\Xi}^+ \Lambda \mt{K}^-$ + c.c.] {$\mathbf{\bar{\textbf{p}}\textbf{p}\rightarrow \bar{\Xi}^+ \Lambda \textbf{K}^-}$ + c.c.} In this study, in total about 10 million signal events of the reactions \mychannelfs and \mychannelfscc have been analyzed, containing $40\,\%$ \excitedcascadesixteen (\excitedanticascadesixteen), $40\,\%$ \excitedcascadetwenty (\excitedanticascadetwenty), and $20\,\%$ continuum. \input{FinalStates} \input{IntermediateStates} \input{FullTreeReco} \subsection[$\bar{\mt{p}}\mt{p}\rightarrow \bar{\Xi}^+ \Xi^- \pi^0$]{$\mathbf{\bar{\textbf{p}}\textbf{p}\rightarrow \bar{\Xi}^+ \Xi^- \pi^0}$} \begin{figure}[htb] \centering \includegraphics[width=0.4\textwidth]{plots/Albrecht/DecayTree} \caption{Schematic illustration of the decay tree for the process \pbarp$\rightarrow$ \cascasbarpinull.} \label{fig:DecayTreeAlbrecht} \end{figure} 9 million signal events, generated according to the decay tree shown in \cref{fig:DecayTreeAlbrecht}, have been analyzed, containing a continuum contribution as well as the resonant states \excitedcascadefifteen, \excitedcascadesixteen, \excitedcascadetwenty, and their \cc states.
\input{FinalStatesAlbrecht} \input{intermediateStatesAlbrecht} \input{fulltreeAlbrecht} \subsubsection*{\textbf{Final State Particles}} The reconstruction of the charged final state particles is similar to the reconstruction presented in the previous analysis. In addition, the neutral candidate list is filled whenever hits in the EMC cannot be associated with any charged track. Not using PID information leads to large combinatorics in the reconstruction process. Therefore, various selection criteria are used as a pre-filter for the candidates to reduce this combinatorics. The track filtering is already described in \cref{ssec:trackfilter}. In addition to the track filter, the PID information is used as a veto. The PID value is calculated by using information about the energy loss $dE/dx$ in the detector material, the Cherenkov angle and the EMC cluster energy.\\ Proton and antiproton candidates which have a PID probability of more than $90\,\%$ to be a pion are excluded. The same is applied to pion candidates with a PID probability of more than $90\,\%$ to be a proton. The achieved reconstruction efficiency of the charged final state particles is summarized in \cref{tab:RecoEff_FS_Albrecht}. \begin{table}[htbp] \centering \caption{Reconstruction efficiency of the charged final state particles. The statistical error is on the order of $0.05\,\%$.} \label{tab:RecoEff_FS_Albrecht} \begin{tabular}{lr} \hline Particle & Efficiency [\%] \\ \hline \piminus(\lam) & 78.63 \\ \piminus(\cascade) & 83.89 \\ \piplus(\alam) & 78.67 \\ \piplus(\anticascade) & 84.07 \\ p & 96.52 \\ \aprot & 93.21 \\ \hline \end{tabular} \end{table} For a further reduction of combinatorics, the candidates are subject to kinematic constraints on the transverse versus longitudinal momentum ($P_{t}$ vs. $P_{z}$) distribution.
The elliptic boundary of the kinematically allowed region is given by \begin{equation} \frac{\left(x-x_0\right)^2}{a^2}+ \frac{y^2}{b^2} = 1 \end{equation} with \begin{eqnarray*} x_0 &=& \left(p_{z,\max}+p_{z,\min}\right)/2 \\ a &=& \left(p_{z,\max}-p_{z,\min}\right)/2, \mt{ and} \\ b &=& p_{t,\max}. \end{eqnarray*} In this analysis, an event is marked as reconstructable if it contains the minimum number of entries corresponding to the charged final state, \fsalbrecht, as well as two neutral candidates. \subsubsection*{\textbf{Final State Particles}} \label{sssec:FinalStateSelection} After the track filtering, the final state particle candidates are filled into the corresponding candidate lists. For the selection of the possible candidates no PID information is used. This implies that for a given charge sign each of the corresponding candidate lists is filled with the same candidate. The single candidates differ only in the mass, which is set according to the hypothesis of the corresponding candidate list.\\ If at least three candidates for each charge sign are available per event, it is marked as \enquote{reconstructable}. This pre-selection avoids the reconstruction of incomplete signal events.\\ In the following, the reconstruction efficiency is defined as the ratio of MC matched candidates to the number of generated candidates. MC matched means that the reconstructed candidate has a partner in the MC truth list which has the correct event genealogy up to the initial \pbarp system. \begin{table}[b] \centering \caption{Reconstruction efficiency for the final state particles of \mychannelfs and \mychannelfscc (c.c.), respectively.} \label{tab:RecoEffFinalStates} \begin{tabular}{lrr} \hline particle type & eff.
[\%] & eff. [\%] (c.c.)\\ \hline \piminus & 71.2 & 70.6\\ \piplus (\alam) & 68.6 & 68.3 \\ \piplus (\anticascade) & 73.7 & 73.1\\ \kminus (resonance) & 84.9 & 86.7\\ \kminus (continuum) & 85.1 & 86.9\\ p & 88.7 & 86.2\\ \aprot & 82.3 & 83.4\\ \hline \end{tabular} \end{table} The reconstruction efficiencies achieved for the final state particles are listed in \cref{tab:RecoEffFinalStates}. The statistical error on the reconstruction efficiency is of the order of $0.1\,\%$. A systematic error, for example caused by the acceptance of the individual sub-detectors, is not included.\\ For each final state particle two-dimensional histograms of transverse momentum versus longitudinal momentum as well as absolute momentum versus polar angle are generated. As an example, the generated and the reconstructed transverse versus longitudinal momentum distributions for \piminus coming from the \lam decay are shown in \cref{fig:PiMinusLam}. Here, the generated distributions are used as reference plots to deduce the quality of the reconstruction. For all final state candidates the distributions contain entries outside the kinematically allowed region. This could be caused by interactions of the generated particles inside the detector material or with the beam pipe during the propagation. In addition, the generated distribution shows an ellipse of entries which corresponds to stopped \lam that subsequently decay into a p\piminus pair. \begin{figure}[t] \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=0.8\textwidth]{./plots/PiMinus_lam_pt_vs_pz_MCgen_cleaned.pdf} \caption{} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=0.8\textwidth]{./plots/piminus_lam_pt_vs_pz.pdf} \caption{} \end{subfigure} \caption{Transverse vs.
longitudinal momentum distribution for generated (a) and reconstructed (b) \piminus candidates from \lam, requiring that the generated \lam has only two daughters.} \label{fig:PiMinusLam} \end{figure} The comparison between the generated and the reconstructed distributions shows that the \piminus from the signal events are clearly identifiable.\\ The relative momentum resolution is obtained from \begin{equation} \frac{\Delta p}{p}=\frac{p^{\mt{reco}} - p^{\mt{MC}}}{p^{\mt{MC}}} \end{equation} where $p^{\mt{reco}}$ denotes the reconstructed and $p^{\mt{MC}}$ the generated momentum. The value of the resolution is determined by performing a double Gaussian fit to the resulting distribution and using the width of the inner, narrower Gaussian. Here, about $64\,\%$ of the yield is contained in the inner Gaussian and about $86\,\%$ within the second, wider Gaussian. By varying the fit parameters, the systematic error of the fit value is estimated to be $0.09$ percentage points. \begin{table}[t] \centering \caption{Momentum resolution for the final state particles of \mychannelfs and \mychannelfscc (c.c.), respectively. The error on the fit value is dominated by the systematic error, which is estimated to be $0.09$ percentage points.} \label{tab:MomResFinalStates} \begin{tabular}{lrr} \hline particle type & $\Delta p/p$ [\%] & $\Delta p/p$ [\%] (c.c.)\\ \hline \piminus & 1.61 & 1.61\\ \piplus (\alam) & 1.64 & 1.64\\ \piplus (\anticascade) & 1.48 & 1.48 \\ \kminus (res.) & 1.65 & 1.65 \\ \kminus (cont.) & 1.66 & 1.65 \\ p & 1.63 & 1.61 \\ \aprot & 1.59 & 1.60 \\ \hline \end{tabular} \end{table} The determined fit values are summarized in \cref{tab:MomResFinalStates}. \subsubsection*{\textbf{Reconstruction of the $\mathbf{\bar{\Xi}^+\Xi^-\pi^0}$ System}} In the last step of the analysis, the complete \cascasbarpinull system is combined. The combination of the three particles leads to a high amount of combinatorics.
To reduce the number of accidentally combined candidates, a selection on each momentum component is performed, corresponding to a selective cut on the four-momentum of the initial \pbarp system: \begin{eqnarray*} -0.14\unit{GeV/c} < & P_{x,y} & < 0.14\unit{GeV/c}\\ 4.2\unit{GeV/c} < & P_{z\,} &< 5.0\unit{GeV/c} \\ 5.3\unit{GeV} < & E & < 5.9\unit{GeV}\\ 3.155\unit{GeV/c}^2 < & M & < 3.35\unit{GeV/c}^2. \end{eqnarray*} All remaining candidates are then subject to a full decay tree fit. In addition to the standard fits (vertices, four-momentum, masses), constraints on the hyperon masses and the \pinull mass are applied.\\ The fit results showed that the mass constraint of the \pinull is not perfectly fulfilled. To reduce the number of candidates with a mass different from $M_{\pi^0}=0.135$\massunit \cite{PDG2018}, the decay tree fit is redone with a corrected energy component for the \pinull candidates with too low masses. Finally, a minimum fit probability threshold of $10^{-4}$ is required to select the candidates. The probability threshold was chosen to reach the best figure of merit in terms of reconstruction efficiency and signal purity of the final selected sample. The described selection scheme leads to a reconstruction efficiency of $3.6\,\%$. The most significant losses occur in the reconstruction of \pinull mesons. The signal purity of the final selected \cascasbarpinull candidates is $93.5\,\%$. In order to estimate the reconstructed signal event rate, the number of remaining signal events is multiplied by the product of all branching fractions within the decay tree ($0.4026$), the luminosity and the cross section.\\ The Dalitz plots for the final selected \cascasbarpinull are shown in \cref{fig:DalitzPlots_Albrecht}. In case of the continuum contribution, shown in \cref{fig:DalitzPlot_reco_cont}, the distribution differs from the expected uniform distribution.
A loss of efficiency towards low $\Xi\pi^0$ masses is observable. The reason for this efficiency loss has to be investigated in the future. Nevertheless, the loss of efficiency is smooth, so that this Dalitz plot could be analyzed. The contributing resonances are clearly observable as bands in \cref{fig:DalitzPlot_reco_res}. As an example, the mass distribution of the final selected \cascade\pinull sub-system is shown in \cref{fig:MassDist_Reco_final_XiP0}. \Cref{tab:MassWidth_Reco_final_XiP0} summarizes the masses and widths of the contributing resonances obtained by fitting the single peaks. In this study, the chosen input value for the \excitedcascadetwenty mass as well as the widths of \excitedcascadesixteen and \excitedcascadetwenty were slightly different compared to the former study. The determined resonance masses are in good agreement with the input values, while the widths of all resonances deviate from the input. Nevertheless, the fit values for the $\Xi$ and $\bar{\Xi}$ resonances are consistent with each other.
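The quoted widths can be cross-checked against the full width at half maximum of the fitted peaks. As a minimal illustration (assuming a simple non-relativistic Breit-Wigner line shape, which is not necessarily the exact fit model used here), the FWHM read off a sampled curve reproduces the width parameter $\Gamma$:

```python
import math

def breit_wigner(m, m0, gamma):
    """Non-relativistic Breit-Wigner line shape (arbitrary normalization)."""
    return 1.0 / ((m - m0) ** 2 + gamma ** 2 / 4.0)

M0, GAMMA = 1.690, 0.030  # GeV/c^2, cf. the Xi(1690) input values

# sample the curve and read off the full width at half maximum
grid = [M0 - 0.2 + i * 1e-5 for i in range(40001)]
values = [breit_wigner(m, M0, GAMMA) for m in grid]
half = max(values) / 2.0
above = [m for m, v in zip(grid, values) if v >= half]
fwhm = above[-1] - above[0]
```

For this line shape the FWHM equals $\Gamma$ exactly, which is why the width deviations quoted above point to resolution and selection effects rather than to the peak position.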
\begin{figure}[h] \begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.8\textwidth]{plots/Albrecht/hdalitzC_cont} \caption{} \label{fig:DalitzPlot_reco_cont} \end{subfigure}
%
\begin{subfigure}{0.5\textwidth} \centering \includegraphics[width=0.8\textwidth]{plots/Albrecht/hdalitzC_res} \caption{} \label{fig:DalitzPlot_reco_res} \end{subfigure} \caption{Dalitz plot for the final selected \cascasbarpinull candidates from the continuum contribution only (a) and from the resonance contributions only (b).} \label{fig:DalitzPlots_Albrecht} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.49\textwidth]{plots/Albrecht/hmXimPi0C} \caption{Mass distribution of the final selected \cascade\pinull sub-system.} \label{fig:MassDist_Reco_final_XiP0} \end{figure} \begin{table}[h] \caption{Fit results for the mass and width of the $\Xi$ resonances determined with a fit to the peaks in the \cascade\pinull and \anticascade\pinull invariant mass distributions.} \label{tab:MassWidth_Reco_final_XiP0}
%
\begin{tabular}{lcc} \hline & M [$\mt{MeV}/\mt{c}^2$] & $\Gamma$ [$\mt{MeV}/\mt{c}^2$]\\\hline \excitedcascadefifteen & $1535.9 \pm 0.3$ & $10.4 \pm 0.4$ \\ \excitedanticascadefifteen & $1536.0 \pm 0.3$ & $10.4 \pm 0.4$ \\ \excitedcascadesixteen & $1690.4 \pm 0.2$ & $21.7 \pm 0.5$ \\ \excitedanticascadesixteen & $1690.7 \pm 0.2$ & $21.1 \pm 0.5$ \\ \excitedcascadetwenty & $1819.8 \pm 0.3$ & $20.1 \pm 0.7 $\\ \excitedanticascadetwenty & $1820.3 \pm 0.3$ & $20.5 \pm 0.7$\\ \hline \end{tabular} \end{table} \subsubsection*{\textbf{Full Decay Tree}} In the following, the reconstruction of the full decay tree is described. Within this procedure, described in \cref{sec:DecayTreeFit}, the four-momentum of the initial state, \[ P_{\mt{ini}} = \left(0,0,4.6,5.633\right)\unit{GeV}, \] as well as the hyperon masses are constrained.
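The initial four-momentum vector follows directly from the beam momentum of $4.6\momentumunit$ and a proton target at rest. A short sketch (using $m_{\mt{p}}=0.9383\unit{GeV}/c^2$) reproduces both $P_{\mt{ini}}$ and the c.m. energy $\sqrt{s}=3.25\unit{GeV}$ quoted in the event generation section:

```python
import math

M_PROTON = 0.9383  # proton mass [GeV/c^2]
p_beam = 4.6       # antiproton beam momentum [GeV/c]

e_beam = math.sqrt(p_beam ** 2 + M_PROTON ** 2)  # antiproton energy
e_total = e_beam + M_PROTON                      # target proton at rest
p_ini = (0.0, 0.0, p_beam, e_total)              # (px, py, pz, E)

sqrt_s = math.sqrt(e_total ** 2 - p_beam ** 2)   # c.m. energy
```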
Unless otherwise indicated, the results listed below are for the \fs final state.\\ Since the $\Xi$ resonances decay promptly into a \lam\kminus pair, or into \alam\kplus in the \cc channel, the reconstruction of the full decay tree is done by combining \fs and \fscc, respectively. Subsequently, the candidates are fitted with the \DTF implemented in PandaRoot. The fit quality is represented by the \chisq value, and a fit probability is calculated. \begin{figure*}[htb] \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=0.8\textwidth]{./plots/DTFit_chisq.pdf} \caption{} \end{subfigure}
%
\begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=0.8\textwidth]{./plots/DTFit_prob.pdf} \caption{} \label{fig:DTF_prob} \end{subfigure} \caption{\chisq (a) and probability (b) distribution for the decay tree fit performed on the \fs sample. The rise of the probability distribution indicates that the errors are overestimated in some cases.} \label{fig:DTFQuality} \end{figure*} \Cref{fig:DTFQuality} shows the corresponding distributions. The probability distribution (Fig.~\ref{fig:DTF_prob}) shows a rising behaviour close to the value of one, which indicates that the errors are overestimated in some cases. For the final selection, only candidates which have been successfully fitted are taken into account. The fit probability ($P$) is used as selection criterion for the candidate selection. Here, a lower threshold of $P>1\cdot 10^{-4}$ is applied, corresponding to a selection on the \chisq value with $\chi^{2}<43$. The applied selection criterion was optimized to reach the best figure of merit in terms of reconstruction efficiency and signal purity of the final selected sample. The final selected sample contains 277,133 \fs events and 283,617 \fscc events. \Cref{tab:RecoEffFinal} summarizes the achieved reconstruction efficiency and the signal purity for the final selected signal samples.
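The correspondence between the probability threshold $P>10^{-4}$ and the cut $\chi^{2}<43$ can be illustrated with the $\chi^{2}$ survival function. The number of degrees of freedom of the fit is not quoted above; the sketch below assumes an even $\mt{ndf}=14$ purely for illustration (the closed form used is valid for even ndf only):

```python
import math

def chi2_sf(x, ndf):
    """P(chi^2 > x) for an even number of degrees of freedom (closed form)."""
    if ndf % 2 != 0:
        raise ValueError("closed form requires even ndf")
    term, total = 1.0, 1.0
    for k in range(1, ndf // 2):
        term *= (x / 2.0) / k
        total += term
    return math.exp(-x / 2.0) * total

p_value = chi2_sf(43.0, 14)  # ndf = 14 is an assumption, not from the text
```

For ndf in the low teens, a cut at $\chi^{2}=43$ indeed corresponds to a probability of order $10^{-4}$.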
In addition, the ratio between the resonant and \begin{table}[b] \centering \caption{Reconstruction efficiency and purity for the final selected signal samples.} \label{tab:RecoEffFinal} \begin{tabular}{lrr} \hline Sample & Reco. Eff. [\%] & Purity [\%] \\ \hline \fs & 5.4 & 97.7 \\ \fscc & 5.5 & 97.7 \\ \hline \end{tabular} \end{table} the non-resonant decay modes is determined, see \cref{tab:RatioDecayModes}. \begin{table}[b] \centering \caption{Channels and their fraction of the generated cross section for the \mychannelfs and \mychannelfscc final reconstructed sample (Reco.) and the generated sample (Input).} \label{tab:RatioDecayModes} \begin{tabular}{lrr} \hline Channel & Reco. [\%] & Input [\%]\\ \hline \anticascade \excitedcascadesixteen & $37.7 \pm 0.8$ & 40 \\ \anticascade \excitedcascadetwenty & $42.4 \pm 0.8$ & 40 \\ \fs & $19.9 \pm 0.5$ & 20 \\ \hline \vspace{.5pt}\\ \cascade \excitedanticascadesixteen & $37.8 \pm 0.8$ & 40\\ \cascade \excitedanticascadetwenty & $42.2 \pm 0.8$ & 40\vspace{2pt}\\ \fscc & $19.9 \pm 0.5$ & 20 \\ \hline \end{tabular} \end{table} A comparison with the input values shows that the fractions for \excitedcascadesixteen and \excitedanticascadesixteen are lower and the fractions for \excitedcascadetwenty and \excitedanticascadetwenty are higher than the input values, while the determined fraction for the continuum contribution is in good agreement with the input. \begin{figure}[t] \centering \includegraphics[width=0.49\textwidth]{plots/significance_vs_crosssection_beamtime} \caption{Signal significance as a function of the signal cross section. The gray band indicates the region in which the signal cross section is assumed to lie.} \label{fig:SigvsXSec} \end{figure} The signal significance is defined as $S_{\mt{Sig}}=S/\sqrt{S+B}$ and depends on the cross section of the reaction under study (see \cref{sec:BackgroundStudies}), called the signal cross section in the following. Here, $S$ and $B$ are the number of signal and background events, respectively.
\Cref{fig:SigvsXSec} shows the expected signal significance as a function of the signal cross section. The signal final state is clearly identifiable above the hadronic background, even if the cross section is an order of magnitude smaller than assumed here.\\ The \DTF uses the four-momentum constraint fit, which leads to a correction of the momentum and the energy of each involved candidate to match the initial four-momentum vector. This correction has an impact on the momentum resolution of the candidates. The momentum resolution is evaluated by performing a double Gaussian fit to the relative deviation of the reconstructed from the generated total momentum, as described in \cref{sssec:FinalStateSelection}. \Cref{tab:MomResFinal} summarizes the evaluated momentum resolution of the intermediate state particles.\\ The decay vertex resolution is determined from the deviation of the reconstructed from the generated decay vertex position in all three spatial coordinates. \begin{figure} \includegraphics[width=0.4\textwidth]{plots/lambda0_vtxres_final_x} \caption{Deviation of the reconstructed from the generated $x$ coordinate of the \lam decay vertex in the process \mychannelfs after the final selection.} \label{fig:VtxResFinal_Lam} \end{figure} \Cref{fig:VtxResFinal_Lam} shows the deviation of the decay vertex position of the final selected \lam for the $x$ coordinate as an example. The resulting distribution is clearly not Gaussian. Therefore, the decay vertex resolution is determined by evaluating \begin{equation} \sigma_{\mt{vtx}} = \frac{\mt{FWHM}}{2\cdot \sqrt{2\cdot \ln 2}}, \end{equation} where FWHM is the full width at half maximum of the distribution. The achieved resolutions for all intermediate state particles are listed in \cref{tab:VtxRes}.
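The FWHM-based estimator can be sketched as follows; for a purely Gaussian distribution it recovers the standard deviation. The toy residual sample and the histogram binning are of course illustrative, not taken from the analysis:

```python
import math
import random

random.seed(42)
SIGMA_TRUE = 0.10  # mm, assumed width of the toy residual distribution
data = [random.gauss(0.0, SIGMA_TRUE) for _ in range(200_000)]

# histogram the residuals
lo, hi, nbins = -0.5, 0.5, 100
width = (hi - lo) / nbins
counts = [0] * nbins
for x in data:
    i = int((x - lo) / width)
    if 0 <= i < nbins:
        counts[i] += 1

# full width at half maximum, converted to an equivalent Gaussian sigma
half = max(counts) / 2.0
above = [i for i, c in enumerate(counts) if c >= half]
fwhm = (above[-1] - above[0] + 1) * width
sigma_vtx = fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))
```

The advantage of this estimator for the non-Gaussian vertex residuals above is that it is insensitive to the tails of the distribution, at the price of a binning dependence, which is why the text estimates the uncertainty by varying the number of bins.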
\begin{table}[b] \caption{Relative momentum resolution for the intermediate state particles of \mychannelfs and \mychannelfscc.} \label{tab:MomResFinal} \centering \begin{tabular}[b]{lc} \hline Particle & $\sigma_{\mt{p}}\,\left[\%\right]$ \\ \hline \lam & $\left( 0.777\pm0.007\right)$\\ \alam & $\left( 0.803 \pm 0.007\right)$ \\ \anticascade & $\left( 1.30 \pm 0.01\right)$ \\ \hline \vspace{0.1pt}\\ \lam & $\left( 0.795 \pm 0.006\right)$ \\ \alam & $\left( 0.748 \pm 0.006\right)$ \\ \cascade & $\left( 1.29 \pm 0.01\right)$ \\ \hline \end{tabular} \end{table} \begin{table}[h] \centering \caption{Decay vertex resolution for each spatial direction of the final selected intermediate state particles of \mychannelfs and \mychannelfscc.} \label{tab:VtxRes} \begin{tabular}[t]{lrrr} \hline Particle & x [mm] & y [mm] & z [mm] \\ \hline \multicolumn{4}{c}{\mychannelfs}\\ \lam & 0.110 & 0.093& 0.544 \\ \alam & 0.127& 0.110& 0.595 \\ \anticascade & 0.119& 0.119& 0.510 \vspace{0.1pt}\\ \hline \multicolumn{4}{c}{\mychannelfscc}\\ \lam & 0.127& 0.110& 0.578 \\ \alam & 0.110& 0.110& 0.544 \\ \cascade & 0.119& 0.119& 0.510 \\ \hline \end{tabular} \end{table} Since the determined FWHM depends on the chosen bin size, the error on the FWHM is estimated by varying the number of bins of the corresponding histogram. With this procedure, the error on the vertex resolution is estimated to be about $8\,\mu\mt{m}$.\\ The decay products of the resonance together with the additional hyperon, \fs and \fscc, can be regarded as a three-body final state of the strong interaction, since the involved particles decay further weakly or electromagnetically. In this analysis, $M^2$(\lam\kminus) and $M^2$(\anticascade\kminus), as well as the squared masses for their \cc particles, are used as the axes of the corresponding Dalitz plot. The different decay modes of the reaction lead to different distributions within the Dalitz plot.
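The two Dalitz plot axes are not independent: for any three-body final state the three invariant mass squares obey $M^2_{12}+M^2_{13}+M^2_{23}=s+m_1^2+m_2^2+m_3^2$, so the third combination is fixed. A short sketch with arbitrary, purely illustrative four-vectors for \anticascade, \lam and \kminus verifies the identity:

```python
import math

def inv_mass2(*vectors):
    """Invariant mass squared of the sum of (E, px, py, pz) four-vectors."""
    e, px, py, pz = (sum(c) for c in zip(*vectors))
    return e * e - px * px - py * py - pz * pz

def on_shell(mass, px, py, pz):
    """Build an on-shell four-vector from a mass and a 3-momentum."""
    e = math.sqrt(mass ** 2 + px ** 2 + py ** 2 + pz ** 2)
    return (e, px, py, pz)

# illustrative momenta [GeV/c]; masses are the PDG values [GeV/c^2]
xi_bar = on_shell(1.32171, 0.3, -0.1, 0.8)
lam    = on_shell(1.11568, -0.2, 0.4, 0.5)
kaon   = on_shell(0.49368, -0.1, -0.3, 0.6)

lhs = (inv_mass2(lam, kaon)
       + inv_mass2(xi_bar, kaon)
       + inv_mass2(xi_bar, lam))
rhs = (inv_mass2(xi_bar, lam, kaon)
       + 1.32171 ** 2 + 1.11568 ** 2 + 0.49368 ** 2)
```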
For the continuum production of the three-body final state, the Dalitz plot shows a uniform distribution over the entire kinematically allowed region. For a contributing resonant process, the resonance will \begin{figure}[b] \centering \includegraphics[width=0.4\textwidth]{./plots/Dalitz_plot_aXiLK.pdf} \caption{Dalitz plot for the final selected \fs candidates from \mychannelfs.} \label{fig:DalitzReco} \end{figure} be visible as a structure in the Dalitz plot. The Dalitz plot for the \fs final state is shown in \cref{fig:DalitzReco}. Here, the $\Xi$ resonances are visible as vertical bands around the nominal squared mass values. \begin{figure}[t] \centering \includegraphics[width=0.4\textwidth]{./plots/Dalitz_ratio_plot_AXiLK.pdf} \caption{Ratio of the Dalitz plots for the MC truth partners of the final \fs sample and the generated sample.} \label{fig:DalitzRatio} \end{figure} To compare the reconstructed and the generated Dalitz plots, the ratio of the Dalitz plots for the MC truth partners of the reconstructed and the generated candidates is illustrated in \cref{fig:DalitzRatio}. The ratio plot shows a uniform distribution. \begin{figure}[htb] \centering \includegraphics[width=0.4\textwidth]{./plots/LamK_ratio.pdf} \caption{Reconstruction efficiency (black histogram) as a function of the invariant \lam\kminus mass in the process \mychannelfs.
The statistical error is shown in red.} \label{fig:LamKMassRatio} \end{figure} By illustrating the ratio of the generated and reconstructed mass distribution for the \lam\kminus sub-system, \cref{fig:LamKMassRatio}, one can observe a decrease of the reconstruction efficiency by about $20\,\%$ towards lower sub-system masses.\\ The mass and the width of the resonances \begin{table}[h] \centering \caption{Fit results for the mass and width of the $\Xi$ resonances determined with a fit function containing two Voigt functions and a polynomial.} \label{tab:ResFitValues} \begin{tabular}{lcc} \hline & M [MeV$/\mt{c}^{2}$] & $\Gamma$ [MeV$/\mt{c}^{2}$] \\ \hline \excitedcascadesixteen & $1689.99\pm 0.13$ & $30.1\pm 0.6$\\ \excitedanticascadesixteen & $1690.16\pm 0.12$ & $30.2\pm 0.6$ \\ \excitedcascadetwenty & $1822.98\pm 0.12$ & $22.9\pm 0.4$ \\ \excitedanticascadetwenty & $1823.12\pm 0.12$ & $22.7\pm 0.4$ \\ \hline \end{tabular} \end{table} are determined by fitting a function containing two Voigt functions \cite{armstrong1967spectrum} and a polynomial to the corresponding mass distributions. \begin{figure}[htb] \centering \includegraphics[width=0.4\textwidth]{./plots/XiResonances_mass_fit.pdf} \caption{Mass distribution (black histogram) of the final reconstructed \lam\kminus from \mychannelfs with fit function (red dashed curve) containing two Voigt functions and a polynomial.} \label{fig:LamKMass} \end{figure} The mass distribution of \lam\kminus is shown as an example in \cref{fig:LamKMass}. In this analysis, the best fit result is achieved by fixing the instrumental width $\sigma_{M}$ for both resonances to $\sigma_{M}=4\unit{MeV}/c^2$. This value was determined by calculating the FWHM for the deviation of the final reconstructed and the generated mass distribution. The resulting fit values for the $\Xi$ resonances are summarized in \cref{tab:ResFitValues}. 
Except for the widths of \excitedcascadetwenty and \excitedanticascadetwenty, the fitted values are consistent with the input values listed in \cref{tab:XiResProps}. The widths of \excitedcascadetwenty and \excitedanticascadetwenty agree within $2\,\sigma$.\\ \newline An isotropic angular distribution was assumed for the production of the \anticascade and \excitedcascade as well as for their \cc particles. From the ratio of the $\cos{\theta}$ distributions in the c.m. frame for the MC truth partners of the final selected candidates (MCT) and the generated (MC) candidates, shown in \cref{fig:CosThtRatio}, it is possible to deduce the reconstruction efficiency for any c.m. angular distribution. The ratio shows a reduced efficiency for particles emitted in forward and backward direction, which is due to the loss of propagated particles inside the beam pipe. \begin{figure}[h] \centering \includegraphics[width=0.4\textwidth]{./plots/XiPlus_costht_cms_ratio.pdf} \caption{Ratio of the $\cos\left(\Theta\right)$ distributions in the c.m. frame for the final selected \anticascade candidates in the process \mychannelfs. MC indicates the generated candidates and MCT the MC truth partners of the final selected candidates.} \label{fig:CosThtRatio} \end{figure} \subsubsection*{\textbf{Intermediate State Particles}} \label{ssec:intermediateStatesAlbrecht} The first step is to reconstruct the \lam and \alam particles. For \lam the lists of proton and \piminus candidates are combined, for \alam those of \aprot and \piplus. Apart from this, the procedures for \lam and \alam are identical. If not otherwise stated, the following description for \lam applies to the \alam reconstruction in the same way.\\ The \lam candidates are first filtered by requiring that the $\pi^-$p mass ($M_{\mt{raw}}$) is within the following range: $\left|M_{\mt{raw}}-M_\Lambda\right|<0.15$\massunit. Here, the lower bound of the mass window is given by the sum of the masses of the daughter particles.
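The effect of the coarse \lam mass window can be checked with a two-body decay sketch. Assuming PDG masses, a p\piminus pair from a \lam decaying at rest reproduces the nominal mass exactly and passes the window, whose lower edge $m_{\mt{p}}+m_{\pi}\approx 1.078$\massunit is indeed the daughter mass sum:

```python
import math

M_LAM, M_P, M_PI = 1.11568, 0.93827, 0.13957  # PDG masses [GeV/c^2]

# momentum of the daughters in the Lambda rest frame (two-body kinematics)
q = math.sqrt((M_LAM ** 2 - (M_P + M_PI) ** 2)
              * (M_LAM ** 2 - (M_P - M_PI) ** 2)) / (2.0 * M_LAM)

e_p = math.sqrt(M_P ** 2 + q ** 2)
e_pi = math.sqrt(M_PI ** 2 + q ** 2)

# back-to-back daughters: the pair momentum cancels, M_raw = E_p + E_pi
m_raw = e_p + e_pi
in_window = abs(m_raw - M_LAM) < 0.15
```

The daughter momentum comes out near $0.1\unit{GeV}/c$, which is why the soft \piminus from \lam decays dominates the low-momentum region of the final state spectra discussed earlier.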
At this stage of reconstruction it is possible to reconstruct $30.5\,\%$ of the generated \lam and about $29.4\,\%$ of the generated \alam.\\ \newline In order to reconstruct the \cascade and \anticascade \mbox{(anti-)}hyperons, candidate pairs of \piminus and \lam or \piplus and \alam are built, respectively. Pion candidates that were already used for the reconstruction of \lam and \alam are excluded. Unless otherwise stated, the description of the \cascade reconstruction implicitly includes the reconstruction of \anticascade as well. In principle, the same procedure as for the \lam and \alam reconstruction is used. In a first step, the \cascade candidates are filtered by a coarse mass window $\left|M_{\mt{raw}}-M_{\Xi^-}\right|< 0.15$\massunit, where the lower bound of the mass window is given by the sum of the daughter particle masses $M_\Lambda + M_{\pi^-}$. At this stage, the reconstruction efficiency is $27.9\,\%$ for \cascade and $27.0\,\%$ for \anticascade.\\ \newline The procedure to reconstruct the \pinull meson differs from the procedure for the hyperons.\\ In the first step of the reconstruction, all members of the neutral candidate list are required to have an energy of at least $15\unit{MeV}$. To improve the \pinull selection, a photon time cut is introduced to reject slow neutral particles. For each neutral candidate a flight time difference of $T -T_{\mt{v=c}} < 3\unit{ns}$ is required, where $T$ is the recorded time of the first hit in the EMC and $T_{\mt{v=c}}$ is the expected hit time for propagation at the speed of light.\\ All pairwise combinations from the neutral candidate list are entered into the \pinull candidate list if the invariant mass of the pair ($M_{\mt{cand}}$) is within the coarse mass window $\left|M_{\mt{cand}}-m_{\pi^0}\right| < 0.05$\massunit, with $m_{\pi^0} = 134.9768\unit{MeV}/c^2$ \cite{PDG2018}.
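The coarse pre-selection described above (symmetric mass windows, photon energy threshold, and flight-time cut) can be sketched as simple filters. The candidate record fields (`E`, `T`, `T_vc`) and the toy candidates are illustrative assumptions; the cut values and nominal masses follow the text.

```python
# Nominal masses in GeV/c^2 (PDG values).
M_LAMBDA = 1.115683
M_XI = 1.32171
M_PI0 = 0.1349768


def in_mass_window(m_raw, m_nominal, half_width):
    """Coarse symmetric mass window |M_raw - M_nominal| < half_width (GeV/c^2)."""
    return abs(m_raw - m_nominal) < half_width


def select_photons(neutrals, e_min=0.015, dt_max=3.0):
    """Keep neutral candidates with E >= 15 MeV whose flight-time difference
    T - T_{v=c} is below 3 ns (rejects slow neutral particles)."""
    return [c for c in neutrals if c["E"] >= e_min and c["T"] - c["T_vc"] < dt_max]


# Toy usage with hypothetical raw Lambda masses (GeV/c^2) ...
lam_candidates = [1.05, 1.116, 1.30]
kept = [m for m in lam_candidates if in_mass_window(m, M_LAMBDA, 0.15)]

# ... and hypothetical neutral candidates (E in GeV, times in ns).
photons = select_photons([{"E": 0.020, "T": 5.2, "T_vc": 4.0},   # passes both cuts
                          {"E": 0.010, "T": 5.0, "T_vc": 4.0},   # fails energy cut
                          {"E": 0.200, "T": 9.0, "T_vc": 4.0}])  # fails time cut
```

For the \pinull pairs the same `in_mass_window` filter would be applied with `M_PI0` and a half width of 0.05 GeV/$c^2$.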
All candidates are subject to a mass constraint fit. A minimum fit probability threshold of $10^{-3}$ is required. If more than one candidate passes the fit, the candidates with the highest and second highest fit probability are selected. We separately counted MC truth \pinull decays into two photons in which one or both of the photons have converted into an $e^+e^-$ pair in the material in front of the EMC. The sum of true and \enquote{conversion} \pinull candidates is counted as good candidates, leading to a \pinull signal fraction of $40.2\,\%$. The remaining candidates can be interpreted as combinatorial background. \subsubsection*{\textbf{Intermediate State Particles}} \begin{figure*} \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=0.8\textwidth]{plots/lambda0_mass_reco} \caption{} \end{subfigure} % \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=0.8\textwidth]{plots/xiplus_mass} \caption{} \end{subfigure} \caption{Invariant mass spectrum of \lam (a) and \anticascade (b) after the mass window selection. \lam and \anticascade are produced in the reaction \mychannelfs.} \label{fig:HyperonMassSpectraMassWindowCut} \end{figure*} The candidate selection of the intermediate state particles, i.e. \alam, \lam, \anticascade and \cascade, is similar for each particle type. In the first step, \alam and \lam are built by combining the daughter particles: \aprot and \piplus for \alam, p and \piminus for \lam. In the next stage of reconstruction, \alam and an additional \piplus are combined to form \anticascade, as well as \lam and \piminus to form \cascade in the \cc channel. Since the input for the \DTF are \enquote{raw} candidates, only a coarse pre-selection is done to reduce the number of wrongly combined candidates. For this, a symmetric mass window selection of $\pm 0.15$\massunit around the nominal hyperon mass is applied to the candidate masses.
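The best-candidate choice after the mass-constraint fit can be sketched as below. The `fit_prob` field and the toy probabilities are illustrative assumptions; the $10^{-3}$ threshold and the keep-best-two rule follow the text.

```python
def select_best_candidates(cands, prob_min=1e-3, n_keep=2):
    """Keep the n_keep candidates with the highest mass-constraint-fit
    probability, after discarding those below the probability threshold."""
    passed = [c for c in cands if c["fit_prob"] > prob_min]
    return sorted(passed, key=lambda c: c["fit_prob"], reverse=True)[:n_keep]


# Toy usage with hypothetical pi0 candidates.
pi0_cands = [{"id": 0, "fit_prob": 0.40},
             {"id": 1, "fit_prob": 5e-4},  # fails the 1e-3 threshold
             {"id": 2, "fit_prob": 0.72},
             {"id": 3, "fit_prob": 0.05}]
best = select_best_candidates(pi0_cands)  # candidates 2 and 0, in that order
```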
The \lam and \anticascade invariant mass spectra after the mass window selection are shown in \cref{fig:HyperonMassSpectraMassWindowCut}. This selection rejects candidates with a mass much higher than the input hyperon mass. All remaining candidates are passed to the next stage of reconstruction. \section{Introduction} \label{sec:introduction} The strong coupling constant $\alpha_s$ increases with decreasing momentum transfer, until at a scale of the proton radius the value of $\alpha_s$ is so large that perturbative methods are no longer applicable. Theoretical models used to quantitatively predict hadronic processes in this kinematic regime need to be constrained by experimental data. Two approaches are well established. One of them is Lattice Quantum Chromodynamics (LQCD) \cite{Wilson1974}, which solves non-perturbative QCD using numerical simulations. LQCD has given impressive results for hadron spectroscopy \cite{Lin2008,Horsley2011} during the last decades. The other approach is \textit{chiral perturbation theory}, which was proposed by Weinberg \cite{Weinberg1990} and exploits the chiral symmetry of QCD at low energies.\\ At low energy, the exchange of hadrons appears to describe the appropriate degrees of freedom for the excitation spectrum and the scattering cross section of baryonic resonances. For a deeper insight into the mechanisms of non-perturbative QCD, an understanding of the excitation pattern of baryons is essential. Hadrons are composite particles with internal degrees of freedom and thus an excitation spectrum. This leads to two possibilities to study hadrons in experiments. One possibility is to study reaction dynamics, i.e. the investigation of hadron-hadron interactions and hadron production, while the other is hadron spectroscopy, where the structure of hadrons is investigated. Most systematic experimental studies so far have focused on the nucleon excitation spectrum.
Recently, studies of the $\Delta$ and $N^*$ excited states with the hypercentral Constituent Quark Model (hCQM) have been performed \cite{Shah2019,Menapara2020}. In the hCQM the baryon is described as a system of three quarks or antiquarks bound by a confining interaction. In contrast, the knowledge of excited double or triple strange baryon states, also called hyperons, is poor. Based on the SU(3) flavor symmetry, the $\Xi$ spectrum should contain as many states as the $N^*$ and $\Delta$ spectra together.\\ Hyperons are unstable particles and thus unveil more information on their characteristics than nucleons. Hence, hyperons, especially their decays, are a powerful tool to address physics problems like the internal structure and fundamental symmetries.\\ For most hyperons the excitation spectra as well as the ground state properties are still not well understood. Antiproton-proton (\pbarp) induced reactions resulting in a baryon-antibaryon pair provide a good opportunity to access these properties and spectra, since a high fraction of the inelastic \pbarp cross section is associated with final states containing a baryon-antibaryon pair together with additional mesons. In the \pbarp entrance channel, the production of extra strange mesons is not needed to balance the strangeness in the production of strange or multi-strange baryons. In addition, it is possible to directly populate intermediate states in which one or both hyperons are in an excited state. The excited states will predominantly give rise to final states consisting of a baryon-antibaryon pair and one or more mesons, where the produced particles may further decay weakly or electromagnetically. If the resonant states in the combined (anti-)baryon-meson system are sufficiently narrow, it will be possible to measure their mass and width directly. A partial wave analysis will then give the opportunity to access those observables, e.g.
spin and parity quantum numbers, which are otherwise difficult to determine directly.\\ Comprehensive measurements require next generation experiments. For instance, Jefferson Lab recently approved the KLF proposal to construct a $K_L$ beam \cite{Amaryan2020}. This facility will be able to produce an estimated $5.3\cdot 10^6$ \excitedcascadetwenty events within the approved 100 days of beam on target. Furthermore, the future Antiproton Annihilation in Darmstadt (\panda) experiment located at the FAIR facility will be such an experiment \cite{Erni2009}. It will be a multi-purpose detector to study antiproton-proton induced reactions at beam momenta between $1.5\momentumunit$ and $15\momentumunit$. Therefore, \panda is well-suited for a comprehensive baryon spectroscopy program in the multi-strange and charm sector. The expected cross section for final states containing a \anticascade\cascade pair is on the order of $\mu\mt{b}$ \cite{Musgrave1965}, thus giving the possibility to produce $10^6$ (excited) \cascade events per day, which compares favorably to the $5.3\cdot 10^4$ produced events expected per day at KLF. The cross section of the reaction \pbarp$\rightarrow \Omega^-\bar{\Omega}^+$ has never been measured, but is predicted to be $\sigma\simeq2\unit{nb}$ at $p_{\bar{p}}=7\momentumunit$ \cite{Kaidalov1994}.\\ This work presents a feasibility study for the reconstruction of the reaction \mychannel and its \cc channel with the \panda detector, where $\Xi^*$ denotes the following intermediate resonances: \excitedcascadefifteen, \excitedcascadesixteen and \excitedcascadetwenty. Various decay modes of the resonance states are investigated to study the reconstruction of neutral and charged final state particles, for which the detector might have significantly different performance. \section{\pandabf} The \panda experiment \cite{Erni2009} will be part of the Facility for Antiproton and Ion Research (FAIR) \cite{FAIR2019}.
FAIR is an international accelerator facility for research with antiprotons and ions, which is currently under construction in Darmstadt, Germany. The facility will consist of a system of storage rings. One of these storage rings is the High Energy Storage Ring (HESR), which is optimized for high energy antiprotons and will provide a luminosity of about $10^{31}\unit{cm}^{-2}\unit{s}^{-1}$ in the first phase of operation \cite{Schuett2016}. The HESR can accelerate or decelerate the antiprotons, providing a phase-space cooled beam with a momentum between $1.5\momentumunit$ and $15\momentumunit$. In a later stage a peak luminosity of $2\cdot10^{32}\unit{cm}^{-2}\mt{s}^{-1}$ will be reached \cite{Lehrach2006}.\\ \newline The proposed \panda detector, shown in \cref{fig:PANDADetector}, is a multi-purpose detector and will be operated as an internal experiment at the HESR. \begin{figure*}[htb] \centering \includegraphics[width=0.8\textwidth]{./plots/PANDADetector.pdf} \caption{Schematic overview of the \panda detector setup. The components with black labels will be available for the initial configuration of \panda and the components with red labels will be added later. Figure taken from \cite{PANDADetector}.} \label{fig:PANDADetector} \end{figure*} It will be composed of two parts, the \textit{Target Spectrometer} (TS) surrounding the interaction point (IP) and the \textit{Forward Spectrometer} (FS). This modular design of \panda will lead to an almost $4\pi$ geometrical acceptance.\\ \panda will investigate interactions between the antiproton beam and fixed target protons and/or nuclei. Reactions of the antiproton beam on fixed target protons will have a center-of-mass (c.m.) energy between $2.25\unit{GeV}$ and $5.47\unit{GeV}$. The target protons will be provided either by a cluster-jet or by frozen hydrogen pellets \cite{PandaTDRTarget}.
In addition, targets of other elements can also be provided for $\bar{\mt{p}}A$ studies.\\ \panda will provide nearly complete angular coverage, high resolution for charged and neutral particles, as well as good particle identification. The Micro Vertex Detector (MVD) is the innermost part of the tracking system inside the \textit{Target Spectrometer} and uses two different detector technologies: hybrid pixel detectors and double-sided micro-strip detectors \cite{PandaTDRMVD}. Its main task is to tag events with open charm and strangeness. Therefore, the MVD will provide a maximum spatial resolution of $\mu\mt{m}$ perpendicular to and better than $100\,\mu\mt{m}$ along the beam axis.\\ The main tracking detector for charged particles in the TS is the Straw Tube Tracker (STT), which consists of 4224 single straw tubes arranged in a cylindrical volume around the IP and encloses the MVD \cite{PandaTDRSTT}. It is complemented by the MVD and the Gaseous Electron Multiplier (GEM) planes, which are located downstream of the STT. The STT is embedded inside the magnetic field of a $2\,\mt{T}$ solenoid \cite{PandaTDRMagnets}, making it possible to measure the momenta of charged particles. A momentum resolution for charged particles of $\sigma_p/p \sim 1 - 2\,\%$ will be provided by the tracking system of the target spectrometer.\\ The main charged particle tracking system in the FS is called the Forward Tracker (FTrk) and will consist of three pairs of tracking planes equipped with straw tubes \cite{PandaTDRFTS}. The planes will be placed before, inside and behind a $2\unit{T}\cdot\mt{m}$ dipole magnet. One of its main tasks is the measurement of particles with low transverse momentum.\\ Good particle identification (PID) is important for the event reconstruction. Therefore, the design of the \panda detector includes PID sub-detectors, i.e.
Cherenkov detectors, in particular the Detection of Internally Reflected Cherenkov Light (DIRC) \cite{PandaTDRDirc} and the Ring Imaging Cherenkov (RICH) detector, the Barrel Time of Flight (BarrelToF) \cite{PandaTDRBarrelToF} and the Forward Time of Flight (FToF) detector \cite{PandaTDRFToF}, and the Muon Detector System (MDS) \cite{PandaTDRMDS}.\\ Many channels that will be studied within the physics program of \panda contain photons or electron-positron pairs in the final state. The Electromagnetic Calorimeter (EMC) will provide an efficient reconstruction of positrons, electrons and photons, while background will be suppressed efficiently. In the TS the EMC, consisting of the Backward-Endcap EMC (BE EMC), the Barrel EMC and the Forward-Endcap EMC (FE EMC), will be equipped with more than 15,000 PbW$\mt{O}_4$ crystals \cite{PandaTDREMC}. In the FS, a shashlyk-type calorimeter is foreseen \cite{PandaTDRFWEMC}. The \textit{Forward Spectrometer} will be completed by a Luminosity Detector (LMD) to enable cross section normalization by measuring forward elastically scattered antiprotons in the Coulomb-nuclear interference region \cite{PandaTDRLumi}. \subsubsection*{\textbf{Software Framework}} \label{ssec:SoftwareFramework} The software framework used to analyze the data is called PandaRoot and is based on ROOT \cite{Brun1997} together with the Virtual Monte Carlo (VMC) package \cite{Hrivnacova2003}. The simulation and reconstruction code is implemented within the FairRoot software framework \cite{Al-Turany2012}, developed as a common computing structure for all future FAIR experiments \cite{Spataro2011}. The detector simulation is handled by VMC, which allows the usage of Geant3 \cite{brun1993geant} and Geant4 \cite{Agostinelli2003}. Several event generators, i.e.
EvtGen \cite{Lange2001EvtGen}, DPM \cite{Capella1994}, UrQMD \cite{bass1998microscopic}, Pythia \cite{sjostrand1997computer} and Fluka \cite{ferrari2005fluka}, can be used for the production of signal and background events. Subsequently, VMC sends these events to the transport model. The detector response after the simulation and propagation of the events is simulated by digitizers.\\ Charged particle tracks are formed by combining the hits from the tracking detectors. For the TS tracking system, the tracking algorithms assume a constant magnetic field and thus helix trajectories for charged particles. The Kalman filter GENFIT \cite{Rauch2015} and the track follower GEANE \cite{Fontana2008} are used to take magnetic field inhomogeneities, energy loss, small angle scattering and the error calculation for the different detector parts into account. Up to now, the tracking algorithms use the IP as the origin of the particle track. As a consequence, the tracking algorithms perform poorly for particles emitted far from the IP, and thus the standard algorithms are not well suited for the reconstruction of hyperons, which decay at displaced vertices due to their relatively long lifetimes. In this case, an ideal tracking algorithm is used, which groups the hit points into a track based on the generated particle information.\\ The information from the PID detectors is correlated with the reconstructed particle tracks to form charged particle candidates. EMC clusters that are not correlated with any particle track form neutral candidates. For fast particle identification, algorithms based on Bayesian approaches are implemented \cite{Spataro2011}.
\section{Results and Discussion} In the previous section the feasibility study of the reactions \mychannelfs, \mychannelfscc, and \channelalbrecht was described.\\ In the absence of experimental data and theoretical predictions for the angular distribution of the signal events, a uniform phase space distribution was assumed. This assumption is reasonable, since the amount of energy above the threshold is low for both channels and both strange valence quarks have been pair produced from the sea. This simplification also ensures that the produced \cascade and \anticascade hyperons are subject to the same detector acceptance. An ideal pattern recognition was used for the track reconstruction in both analyses, since a realistic tracking algorithm for secondary tracks is currently not available. Therefore, a track filter was introduced to make the charged final state particle selection more realistic.\\ The single particle reconstruction efficiency for the charged final state particles is between $68\,\%$ and $96\,\%$. The intermediate state particles are reconstructed by applying a coarse mass window symmetrically around the nominal hyperon mass. With the resulting candidates, the three-body systems \fs, \fscc, and \cascasbarpinull are reconstructed and fitted with the DecayTreeFitter. In the analysis of \fs and \fscc, a reconstruction efficiency of $\sim 5\,\%$ is achieved for each channel, while for \cascasbarpinull a reconstruction efficiency of $3.6\,\%$ is achieved. The obtained sample purity is $97.7\,\%$ for both \fs and \fscc and $93.5\,\%$ for \cascasbarpinull, implying that the decay genealogy of the signal suppresses the combinatorial background efficiently.\\ The decay tree includes six final state particles in the case of \mychannelfs (+ c.c.) and eight for \channelalbrecht. Here, the combined acceptance of the final state particles limits the reconstruction efficiency.
In the study of \channelalbrecht the most limiting factor is the reconstruction of \pinull$\rightarrow \gamma\gamma$, since the reconstruction efficiency for \pinull is only about $40\,\%$. An improvement of the neutral particle reconstruction will also improve the reconstruction of the \pinull candidates.\\ With the assumed cross section of $1\,\mu\mt{b}$ for each considered final state, \fs and \fscc, the determined reconstruction efficiencies and the initial luminosity of $L=10^{31}\unit{cm}^{-2}\mt{s}^{-1}$, the expected number of reconstructed events is 38,500 per day. For the \cascasbarpinull final state a cross section of $2\,\mu\mt{b}$ is assumed. With the corresponding reconstruction efficiency and the initial luminosity, 22,800 reconstructed events are expected per day. These rates correspond to about 15 days of data taking to collect data samples of the same size as the reconstructed samples shown in this report.\\ For the study of the hadronic background the same analysis strategies were used as for the signal, leading to no surviving event out of 100 million generated background events for the \fs and \fscc final states. For the \cascasbarpinull final state seven events survived the applied cuts. An additional selection based on the distance between the \anticascade and \cascade vertices removed all background, but also reduced the overall signal efficiency to $3.1\,\%$. The background studies showed that at a $90\,\%$ confidence level a signal-to-background ratio of $S/B>19.1$ for \fs, $S/B>19.5$ for \fscc and $S/B>22$ for \cascasbarpinull could be achieved. The lower limit for the signal significance is $S_{\mt{sig}}>364$ for \fs, $S_{\mt{sig}}>361$ for \fscc and $S_{\mt{sig}}>392$ for \cascasbarpinull. To further quantify the signal-to-background ratio and the signal significance for \cascasbarpinull, future studies have to be performed with a background sample at least a factor of 10 larger.
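The rate and zero-background-limit arithmetic used above can be sketched as follows. The cross section, luminosity, and efficiency inputs here are placeholders (the quoted 38,500 events per day additionally folds in factors, such as branching fractions, that are not spelled out in the text); only the Poisson upper limit for zero observed events is standard.

```python
import math

MICROBARN_CM2 = 1e-30  # 1 microbarn expressed in cm^2


def events_per_day(sigma_ub, lumi_cm2s, efficiency):
    """Expected reconstructed events per day: sigma * L * efficiency * 86400 s."""
    return sigma_ub * MICROBARN_CM2 * lumi_cm2s * efficiency * 86400.0


def poisson_upper_limit_zero(cl=0.90):
    """Upper limit on the Poisson mean when zero events are observed:
    solve exp(-mu) = 1 - CL, i.e. mu = -ln(1 - CL) (about 2.30 at 90% CL)."""
    return -math.log(1.0 - cl)


def significance(n_signal, n_background):
    """Simple significance estimate S / sqrt(S + B)."""
    return n_signal / math.sqrt(n_signal + n_background)


# Toy numbers: sigma = 1 microbarn, L = 1e31 cm^-2 s^-1, efficiency = 5%.
rate = events_per_day(1.0, 1e31, 0.05)  # = 43200.0 events per day
mu_ul = poisson_upper_limit_zero()      # 90% CL limit for zero surviving events
```

Scaling `mu_ul` by the ratio of expected to simulated background statistics gives the background estimate entering the $S/B$ and significance lower limits.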
From the limits that we obtained, we can already conclude that it is feasible to produce a clean data sample, which is necessary to perform a partial wave analysis. \newline Both analyses demonstrate that the experimental study of the process \mychannelfs, its \cc channel and \mbox{\channelalbrecht}, including also resonant baryon states, is feasible with the \panda detector. \section{Summary and Outlook} A first step has been taken in investigating the feasibility of studying the $\Lambda$K and the $\Xi\pi$ decays of $\Xi$ resonances with the \panda detector in the reaction \mychannel and its \cc channel at an antiproton beam momentum of $4.6\momentumunit$.\\ In the \fs study, a reconstruction efficiency of about $5\,\%$ has been achieved with a sample purity of $98\,\%$. The total reconstruction efficiency corresponds to 277,133 \fs events and 283,617 \fscc events. Assuming an initial luminosity of $L=10^{31}\unit{cm}^{-2}\mt{s}^{-1}$, this number of final selected signal events can be collected within 15 days of data taking. 100 million generated DPM background events were subjected to the same selection strategy. No background event survived, so that at a $90\,\%$ confidence level a lower limit for the signal significance of 364 for \fs and 361 for \fscc has been determined.\\ In the analysis of the \cascasbarpinull signal events the obtained total reconstruction efficiency is $3.6\,\%$, before the selection on the (anti-)hyperon decay vertex position with respect to the interaction point. The sample purity of the final selected sample is $\sim93\,\%$. The fake combinations in the sample are dominated by accidental combinations of neutral candidates in the reconstruction of the \pinull mesons. The total reconstruction efficiency of the signal events corresponds to about $3.2\cdot10^5$ events, which can be collected in 15 days of data taking at the luminosity of $10^{31}\unit{cm}^{-2}\mt{s}^{-1}$.
The identical analysis of 100 million DPM background events results in seven events surviving the applied cuts. These events can be removed by requiring a separation of more than $1\unit{cm}$ between the \cascade and \anticascade decay vertices. This additional restriction reduces the signal reconstruction efficiency to $3.1\,\%$. A lower limit for the signal-to-background ratio is deduced to be larger than 22, and for the signal significance to be larger than 392.\\ \newline The discussion in the previous section shows various steps that should be included in future iterations of the analyses presented here. One point refers to the usage of the ideal tracking algorithm. As soon as a realistic tracking algorithm for secondary particles is available, the results of both studies need to be confirmed. The second point is the selection of the final state particles. The impact of the various PID selection criteria on the total reconstruction efficiency and the sample purity, as well as on the signal-to-background ratio and the signal significance, should be investigated. Furthermore, the model dependence of the background estimate should be reduced by comparing the results to the output of different background generators.\\ A major goal of the $\Xi$ spectroscopy program at \panda is the determination of the spin and parity quantum numbers of the $\Xi$ states. Therefore, a partial wave analysis (PWA) of the reconstructed three-body systems has to be performed. First investigations of a PWA tool which can be combined with a PandaRoot simulation and analysis are ongoing \cite{Puetz2020}.
\section{Introduction} \label{intro} \PARstart{G}{ames} have existed in all human societies and many other animal species. While some of the oldest board games, such as Go, Backgammon, or Checkers, are still played today, video games have become one of the most relevant forms of entertainment in our society, with budgets and profits far exceeding those of huge related industries such as cinema \cite{wijman2020global}. However, since their origin, games have had intentions and benefits beyond entertainment, such as teaching social norms, strengthening social bonds, or developing imagination and planning skills. The rise of video games has had a remarkable social impact, transforming mentalities and helping to establish new patterns of social interaction \cite{rogers2016video}. A prominent example of this trend is the gamification that our lives have experienced \cite{Koivisto2019rise}, from the workplace (e.g., \textit{Habitica}, \textit{LifeUp}) to romantic relationships (e.g., \textit{Tinder}, \textit{Grindr}) or education (e.g., \textit{Kahoot!}, \textit{Duolingo}). One of the main reasons games have such a tremendous impact on players is interactivity, a feature almost unique among cultural and artistic media. This attribute encourages higher levels of motivation, engagement, and empathy than other media. It is noteworthy that at the same time as the video game industry is rising, the board game industry continues to grow as well \cite{arizton2020board}. We can draw a clear conclusion from all these facts: our society is highly gamified; we love to play games, and they have enormous potential to transform how we see the world in a much more profound way than we are often aware of. \begin{figure*}[hbt!] \centering \caption{Graphical overview of the paper. The blue boxes are the applications found for serious games and AI applied to them. The red box indicates the challenges faced by this union for its use as a research tool.
And the green box indicates promising lines of work in this direction.} \includegraphics[width=0.93\textwidth,keepaspectratio]{GraphAbstract.pdf} \label{fig:abstract} \end{figure*} Some games ---referred to as \emph{serious}~\cite{Abt1987serious}--- are explicitly designed for a primary purpose beyond pure entertainment (e.g., learning new skills, conveying values, awareness-raising), although being entertaining remains part of their attractiveness. The first serious games were released in a wide range of formats, from sports to board games (e.g., \textit{Monopoly}, \textit{Suffragetto}), so this concept precedes the digital era. The current re-emergence of serious games has coincided with the eruption of Artificial Intelligence (AI), one of the most impressive game-changers in the history of humanity. Nowadays, and increasingly so, almost every entertainment element and digital product is at the service of data analysis and AI algorithms, and games are no exception, especially given that the amount of data available via video games far exceeds that of any other artistic element. AI is already transforming society through large digital platforms, social networks, and recommender systems. The most widespread use of these tools is in marketing, meticulously analyzing our patterns and tastes to sell us products and capture our attention as much as possible. AI has demonstrated its potential to analyze and improve our understanding of the dynamics of our societies, social interactions, as well as individual and collective behaviors. For this reason, we firmly believe that the synergy between serious games and AI offers an exceptional window of opportunity for large-scale, non-invasive, and inexpensive social studies, leveraging their disinhibition and entertainment effects, along with interactivity, to collect large amounts of meaningful data.
Moreover, games' ``casual'' and playful nature can help break down conventional communication boundaries, encouraging participants to interact openly and discuss topics that might otherwise be complicated or too sensitive. Figure \ref{fig:abstract} provides a visual overview of this paper's content to guide and facilitate the reading of the document, including applications of serious games (Section \ref{sec:app}), the role of AI in them (Section \ref{sec:AI}), and the new lines of work and challenges opening up for employing them as research tools (Section \ref{sec:challenge}), especially in the computational social sciences \cite{ComputationalSS}. Finally, Section \ref{sec:conclusion} presents the main conclusions drawn from this research. \section{Applications of Serious Games} \label{sec:app} The upsurge that serious games have been experiencing in recent years may lead us to think this is a new phenomenon. However, the origin of serious games dates back to the 1970s. Clark C. Abt is credited with coining the term \emph{serious games}, defining them as ``\textit{games with an explicit and carefully thought-out educational purpose that are not intended to be played primarily for amusement}''. Abt studied the potential of games as a vehicle for political, educational, or marketing ideas. Another of the leading figures in the history of serious games is Ian Bogost, the author of seminal books on the theory behind them, such as ``\textit{Persuasive Games: The expressive power of video games}'' \cite{bogost2010persuasive}. His research has leveraged video games for political, educational, and business use in the 21st century. Even though both concepts mirror the same social phenomenon, it is relevant to highlight the distinction between gamification and serious games. Gamification consists of using and integrating game elements into non-game contexts, while serious games refer to the design of entire games for non-playful primary purposes.
Although both are concepts from the last century, they have resurfaced in the academic and commercial arenas in recent years. Among the first serious video games, we find examples employed to convey particular values (e.g., \textit{Captain Bible in the Dome of Darkness}, \textit{The Oregon Trail}, \textit{Mobility}), raise disease awareness (e.g., \textit{Captain Novolin}), or provide military training (e.g., \textit{Bradley Trainer}). Nevertheless, the line between ``normal'' and ``serious'' games is quite blurred regarding serious games used to convey specific beliefs or ideologies. Like any artistic or intellectual creation, video games always carry an implicit political and philosophical perspective. For example, popular video games such as \textit{Papers, Please} or \textit{This War of Mine} convey strong political messages that raise fundamental questions. Yet, they were not developed under the idea of being explicit ``serious games''. On the other hand, video games such as \textit{The Sims} or \textit{SimCity} are very politically charged. Still, they are not usually perceived as such since they represent a situation closer to our day-to-day life. Focusing on serious games that consider themselves as such and have been designed for that purpose, we find many fields where they have demonstrated their usefulness on numerous occasions: \subsection*{Education} In this section, we focus on serious games designed for the player to learn a series of concepts of a specific subject. To do so, the players must demonstrate their knowledge during the game, and their performance is scored. Education has been one of the main focuses of action for serious games, based on the principle that learning while having fun is possible and efficient. This field has been explored so extensively that success and failure factors have even been analyzed in depth \cite{zhonggen_meta_2019} \cite{ravyse_success_2017}.
Prominent examples of building STEM skills might include \textit{Garfield's Count Me In} \cite{Garfield2021}, \textit{Minecraft: Education Edition} \cite{Minecraft2018}, the \textit{Kahoot DragonBox} maths apps \cite{DragonBox} and the \textit{LightBot} coding apps \cite{Lightbot}. Serious games for educational purposes have also become popular in higher medical education \cite{sharifzadeh2020health}, although some authors question their usefulness at such high educational levels, possibly serving as complements to more traditional learning methods \cite{gorbanev2018systematic} \cite{haoran2019serious}. \subsection*{Training} Closely related to education, this category refers to games designed for players to learn and practice specific skills that will enable them to perform those actions in the real world with improved safety, confidence, and knowledge. This approach is widely used in companies where human failure is critical or costly. One of the best-known examples is flight simulators, such as \textit{Microsoft Flight Simulator} \cite{2020_microsoft}, where aspiring pilots must spend hours practicing before flying an actual commercial aircraft. There are also notable examples of training healthcare professionals \cite{wang_systematic_2016}, cybersecurity trainees \cite{hendrix2016game} \cite{TiohCyber2017}, and law enforcement agencies or military forces \cite{akhgar2019serious} \cite{samvcovic2018serious}. Another widespread use is training to manage complex business situations or the administration of teams and resources, used both in actual private companies \cite{gamelearn2015} \cite{larson_serious_2020} and universities \cite{HarvardSimulators2021} \cite{Zoomsim}. \subsection*{Awareness} Thanks to the almost unique characteristic of interactivity, games evoke deep levels of empathy, making them an ideal vehicle to convey an awareness of relevant social issues. 
A classic example is \textit{Darfur is Dying} \cite{Darfur2006}, which sought to tell the story of the humanitarian crisis in the Darfur region of Sudan. However, we can find examples on a wide range of topics, such as drug consumption and trafficking \cite{stapinski2018pure} \cite{code2018fight}, cyberbullying \cite{CALVOMORATA2020}, gender equality \cite{barrera2020review}, misinformation \cite{educsci9020155} \cite{Harmony2021}, climate change \cite{flood2018adaptive}, and environmental sustainability \cite{den2018evaluating} \cite{aubert2018review} \cite{stanitsas2019facilitating} \cite{johnson2017gamification}. \subsection*{Health Treatments} This category is framed in healthcare but focuses more on patients than professionals. Well-known examples are the \textit{Wii Fit} and \textit{Brain Training} games, which let players have fun and stay fit (physically and mentally) at the same time. Other notable examples can be found in the field of mental health therapy \cite{Ferguson2021} \cite{fleming2017serious}, increasing self-efficacy and physical activity in people with chronic diseases \cite{Holly2020} \cite{bossen2020effectiveness}, helping the learning process and support of children with autism \cite{fridenson2017emotiplay} \cite{khowaja2019serious}, palliative care and memory training for the elderly and/or people with dementia \cite{Huansheng2020Dementia} \cite{nguyen2017impact}, and guidance and motivation in rehabilitation processes \cite{lopes2018games} \cite{KARAMIANS2020885} \cite{AYED2019103909} \cite{MEIJER20181890}. Notably, in 2020 the US Food and Drug Administration approved the first video game-based treatment, \textit{EndeavorRx}, targeting children between the ages of eight and twelve with certain types of Attention Deficit Hyperactivity Disorder (ADHD) \cite{CNN2020game}.
\subsection*{Recruitment} \label{recruitment} If we combine games' interactivity with players' ability to make decisions in a well-designed environment, we can infer some behaviors or aspects of the players' abilities with reasonable confidence. For this reason, serious games have also been used to optimize the recruitment process in private companies \cite{buzady2019fligby} \cite{loreal2014} and even in military forces \cite{america2002army}. In these games, players are presented with complex situations where they must make decisions and act under certain constraints or pressures. A recent notable example is the \textit{CodinGame}\footnote{CodinGame \url{https://www.codingame.com}} platform, where users practice their programming skills while playing, and many tech companies recruit profiles they find interesting. Another great example is the \textit{GT Academy}\footnote{GT Academy \url{https://www.gran-turismo.com/es/academy/}}, a competition in which the best players of a car racing video game have the opportunity to become professional drivers. \subsection*{Marketing \& Propaganda} \label{marketing} When a game is developed primarily for marketing purposes, it is often known as an ``advergame''. This category of games aims to convey ideas and create desires in an unintrusive and easily customizable way. It should not be confused with games that introduce advertising during gameplay for economic profit. The principal medium for these advergames is the smartphone, due to its proliferation, ease of development, and everyday use among young people. Major brands such as \textit{Volkswagen}, \textit{Magnum}, \textit{Chupa Chups} or \textit{M\&Ms} have developed advergames. As with the \nameref{recruitment} category, in some cases companies seek to present and profile themselves through these games to attract new employees and trainees or to discover talent.
Likewise, there have also been attempts to use video games as a tool to disseminate electoral campaigns, such as the video game \textit{Corbyn Run} \cite{Corbyn2019}, or to encourage citizen participation in public decisions \cite{hazMadrid2017} \cite{schouten2016playful}. \subsection*{Science \& Human-based Computation} This category encompasses games designed to advance scientific knowledge in some way. One of the most common approaches is employing human players to perform seemingly trivial tasks that are either too costly, too complex, or unfeasible with finite computational resources. These tasks may include labelling data, transcribing text, using common sense, or activities based on the human experience. One of the first examples of this category was ``\textit{The ESP Game}'' \cite{von2004labeling}, in which players, grouped in pairs, had to guess the photo labels their partner had come up with. Google's reCAPTCHA\footnote{reCAPTCHA \url{https://www.google.com/recaptcha/about/}} is a recent example that has followed this approach of using human players to label images while identifying legitimate users for accessing online resources. Another successful example was ``\textit{EteRNA}'' \cite{eterna2010}, where players had to design RNA sequences that fold into a particular form. The solutions were evaluated to improve computer-based RNA folding prediction models. Other prominent examples might be ``\textit{Foldit}'' \cite{foldit2008} to predict protein structures, ``\textit{Eyewire}'' \cite{eyewire2012} to map retinal neurons, ``\textit{MalariaSpot}'' \cite{malariaspot2016} to help diagnose malaria cases, ``\textit{Phylo}'' \cite{kawrykow_phylo_2012} to optimize alignments of nucleotide sequences, or ``\textit{Quantum Moves}'' \cite{quantum2012} to improve how atoms move in a quantum computer.
\section{Role of AI and Data Science in Serious Games} \label{sec:AI} Games have long been the test bed for AI, as they provide a controlled environment with simple rules for algorithms to learn sophisticated strategies. However, in recent years data science and AI have found a different use for games as sources of vast amounts of player data, from which relevant information about the human players can be extracted that may be useful both outside and inside the game itself. Nevertheless, serious games are a particular branch of the gaming industry, so the AI and analysis techniques and their purposes differ noticeably. The significant heterogeneity in the goals of serious games also implies significant technical differences among them. Despite this heterogeneity, we can discern a few main branches encompassing all major applications of AI and analytics in serious games: \subsection*{Assessment} \label{assessment} Game-based assessment is a fruitful field in serious games \cite{KimAssessment2019}, primarily used in education, training, and recruitment. Players are scored based on their knowledge or skills in a particular subject. Pellegrino \textit{et al.} \cite{Pellegrino2001knowing} stipulate three primary purposes of assessment: (i) to assist learning (formative assessment), (ii) to evaluate the player's capabilities, and (iii) to evaluate programs. In general, collecting, analyzing, and extracting information through educational serious games is known as Game Learning Analytics \cite{FreireGLA2016}. The main difference with traditional evaluation methods or test gamification is that game-based assessment also uses in-game and interaction data (e.g., response times) to evaluate the player. Numerous authors have demonstrated the utility of using additional in-game data to evaluate students \cite{liu_learning_2017} \cite{kiili_evaluating_2018} \cite{AlonsoLessons2019} or to predict learning results \cite{AlonsoPredicting2020} \cite{hernandez_lara_applying_2019}.
It has also been successfully tested in recruitment processes \cite{LandersAssessment2022}, although nowadays these methods are more of a complement to traditional exam-based assessment. The techniques used are very diverse, from simple descriptive statistics and correlations to supervised machine learning algorithms (e.g., linear regression, decision trees, Naive Bayes, neural networks) \cite{AlonsoApplications2019} \cite{gomez2022systematic}. More rarely, some papers use knowledge inference with Bayesian networks \cite{RoyBayes2019} \cite{shute2016assessing}, which explicitly allows the application of psychological or mental state models, although a flawed model can significantly degrade the results. This branch of AI applications in serious games is one of the most researched and developed, thanks to the technology push changing how education is delivered. However, much work still needs to be done, especially in demonstrating that these methods can be better than traditional approaches \cite{PamelaAssessmentJungle2017}. \subsection*{Game Design \& Validation} Game design is the process of planning the content, rules, and mechanics of a game to create valuable interactive experiences. The large number of artistic and technical factors involved in this process makes any analytical information about the players extremely valuable. Game validation employs data and evidence to verify and calibrate the game tasks and their difficulty. In the case of serious games, in addition to maintaining engagement, we also want to ensure that the game meets its primary objective (e.g., to train players in a particular skill, increase awareness of an issue, etc.). Data-driven serious game design has flourished in academia in recent years, where we can find successful examples of the use of analytical techniques to design, improve, personalize, and validate these games \cite{Hicks_Puzzle2016} \cite{calvo_validation_2020} \cite{AlonsoLessons2019} \cite{Freire2016} \cite{Peddycord2017}.
This category is also closely related to the previous one (\nameref{assessment}), as it is almost essential to use data-driven validation during the game development stage to calibrate the players' evaluation \cite{tong2016serious} \cite{kowalewski_validation_2017}. Such analytics can go a step further to adapt the difficulty of the game in real time \cite{hendrix2018implementing} \cite{hocine2015adaptation} and even detect when the player is frustrated \cite{defalco2018detecting}. In this category, due to the particular aspects of design and validation of each game, the most commonly used techniques are descriptive statistics and visualizations \cite{kang_using_2017} \cite{alario_datadriven_2020} \cite{Cano_Using_2018}, Randomized Control Trials (to test the usefulness of the intervention) \cite{calvo_creating_2021} \cite{calvo_validation_2020}, and unsupervised machine learning algorithms (to find similar types of players and common patterns in the game) \cite{calvo_validation_2020}. Using these analytical techniques enables creators and researchers to ensure that their games are entertaining, engaging, and well-designed to fulfill their objectives. \subsection*{Player modeling \& Profiling} Player modeling is the creation of computational models to detect, predict, and characterize the human player attributes that manifest while playing a game \cite{yannakakis2018artificial}. These models can be any mathematical representation, rule set, or probability set that maps parameters to observable variables, and they are built on dynamic information obtained during game-player interaction. On the other hand, player profiling usually refers to categorizing players based on static information that does not alter during gameplay (e.g., personality, cultural background, gender, age). Despite their dissimilarities, these two concepts can complement each other, contributing to more reliable player models.
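Finding groups of similar players, whether to validate a design or to profile player types, is typically done with unsupervised clustering. The sketch below is a dependency-free k-means on hypothetical engagement features (sessions per week, average session length); the data and feature names are invented for illustration and are not drawn from any of the cited studies.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means (Lloyd's algorithm): returns (centroids, assignments)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    assign = [0] * len(points)
    for _ in range(iters):
        # Assignment step: each point goes to its nearest centroid
        # (squared Euclidean distance).
        assign = [min(range(k),
                      key=lambda j: sum((p - c) ** 2
                                        for p, c in zip(pt, centroids[j])))
                  for pt in points]
        # Update step: move each centroid to the mean of its cluster.
        for j in range(k):
            members = [pt for pt, a in zip(points, assign) if a == j]
            if members:
                centroids[j] = tuple(sum(col) / len(members)
                                     for col in zip(*members))
    return centroids, assign

# Hypothetical per-player features: (sessions per week, mean session length in minutes)
players = [(1, 10), (2, 12), (1, 8), (9, 55), (10, 60), (8, 50)]
_, labels = kmeans(players, k=2)
print(labels)  # the casual and heavy players end up in different clusters
```

With well-separated groups like these, the two clusters recovered correspond to casual and heavy players regardless of the (seeded) initialization; in practice one would also standardize the features before clustering.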
The main objective of studying players is to understand their cognitive, affective, and behavioral patterns. Recent advances in AI have demonstrated impressive abilities toward the same goals that player modeling sets out to achieve, although at the moment there is a significant lack of interpretability in complex models, so AI is a good fit for player modeling only when explainability is not a hard constraint. Hooshyar \textit{et al.} \cite{SurveyModeling2018} conducted a systematic literature review that profoundly analyzes the computational and data-driven techniques used for player modeling between 2008 and 2016. As this is such a broad and promising field, the variety of algorithms used is immense: descriptive statistics and correlations, path/network analysis, supervised learning (e.g., Neural Networks, Linear Regression, Hidden Markov Models, Decision Trees), unsupervised learning (e.g., k-means, Linear Discriminant Analysis, Self-Organizing Maps), probabilistic algorithms (e.g., Bayesian / Markov Networks), evolutionary methods (e.g., Genetic Algorithms), reinforcement learning methods (e.g., Multi-armed Bandits), etc. Most of the computational methods used are model-free, meaning they do not impose strict assumptions on the model. However, there are also some model-based approaches (e.g., Bayesian hierarchical models) \cite{Streicher_Bayes2022} \cite{Kim_Computational2018} that yield more interpretable and explicit models (e.g., psychological or cognitive) than model-free approaches. For instance, these models can infer the player's hidden parameters or mental states. Player modeling can be helpful both inside and outside the game itself. The most straightforward goal is to improve the game design, tailoring the content to increase engagement and enhance the gaming or learning experience \cite{DELIMA201832}.
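As a concrete illustration of the supervised, model-free end of this spectrum, the following sketch hand-rolls a Gaussian Naive Bayes classifier (implemented from scratch to stay dependency-free) that maps in-game features to a coarse player category. The features, category labels, and data are hypothetical, invented purely for illustration.

```python
import math

def fit_gnb(X, y):
    """Fit Gaussian Naive Bayes: class prior plus per-feature mean/variance."""
    model = {}
    for c in sorted(set(y)):
        rows = [x for x, label in zip(X, y) if label == c]
        n = len(rows)
        means = [sum(col) / n for col in zip(*rows)]
        varis = [max(sum((v - m) ** 2 for v in col) / n, 1e-9)
                 for col, m in zip(zip(*rows), means)]
        model[c] = (n / len(y), means, varis)
    return model

def predict_gnb(model, x):
    """Return the class with the highest log-posterior for feature vector x."""
    best, best_lp = None, -math.inf
    for c, (prior, means, varis) in model.items():
        lp = math.log(prior)
        for v, m, s2 in zip(x, means, varis):
            # Log of the Gaussian likelihood for one feature.
            lp += -0.5 * math.log(2 * math.pi * s2) - (v - m) ** 2 / (2 * s2)
        if lp > best_lp:
            best, best_lp = c, lp
    return best

# Hypothetical per-session features: [secrets found, deaths, minutes per level]
X = [[8, 1, 14.0], [7, 2, 12.5], [1, 6, 4.0], [0, 5, 3.5]]
y = ["explorer", "explorer", "rusher", "rusher"]
model = fit_gnb(X, y)
print(predict_gnb(model, [6, 2, 11.0]))  # classified as "explorer"
```

Real studies would of course use far more data and validated feature sets; the point is only that the pipeline from gameplay traces to a labeled player model is a few lines of standard supervised learning.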
Outside of serious games, we find some prominent examples, such as \textit{Left 4 Dead} \cite{valve2008left}, where an AI tracks player behavior and adapts future waves of enemies to maintain rhythm and tension. Perhaps the most famous example is the video game \textit{Silent Hill: Shattered Memories} \cite{climax2009silent}, which uses a psychological approach where an AI system tries to manipulate players' emotions using the \textit{Five Factor Model} of personality \cite{digman1990personality}. Outside the game itself, the most common use of player modeling in the gaming industry is for personalized marketing campaigns, since the commercial sector is very interested in understanding customer behaviors and preferences. In these cases, the games are often presented as free to play in exchange for an intrusion into personal privacy \cite{DREIER2017328}. Besides the ``advergames'' discussed in the \nameref{marketing} section, a famous example outside serious games is \textit{FarmVille} \cite{willson2015zynga}, which monitored the players' behavior to adapt \textit{Amazon} marketing campaigns to them. This business model is particularly hazardous for younger users, its main target. In academia, especially in psychology, experiments have been conducted using games (serious and non-serious) for research, but primarily focusing on analyzing how the player's personality is projected in the gameplay patterns \cite{GamesPersonality2011} \cite{Yee_Introverted_2011} \cite{Halim_Profilin_2019} \cite{denden_implicit_2018} \cite{MCCORD201995}. However, studying psychological characteristics or phenomenology using serious games seems to be an up-and-coming field, especially if we introduce AI techniques into the equation. \section{Challenges and New Horizons} \label{sec:challenge} In the previous sections, we have discussed the main applications of serious games and the current trends in their synergies with data science and AI.
In this section, we take up the argument outlined in the introduction about the great potential of serious games together with AI to serve as research tools, particularly in computational social sciences \cite{ComputationalSS}, examining the most critical challenges and promising new lines of work to meet this objective. As argued in the \nameref{intro} section, games allow research to be entertaining, provide high levels of empathy, and have a disinhibition effect that is highly sought after in social investigations. Games can evoke dynamic and complex emotions in players, the manifestations of which are difficult to capture with the traditional approaches of empirical psychology, affective computing, or cognitive modeling research. This is primarily due to their ability to introduce the player to a continuous mode of interaction, which could generate complex cognitive, emotional, and behavioral reactions \cite{yannakakis2018artificial}. Therefore, the use of serious games as research tools may contribute to the advancement of human-computer interaction and the progress of our knowledge of human experiences. We can already find some splendid examples of the use of games as large-scale social research tools, such as \textit{The Moral Machine Experiment} \cite{awad_moral_2018}, which uses a gamified environment to explore the moral dilemmas surrounding autonomous cars. To do so, they use the framework of the classic trolley problem and study participants' responses to variations in different parameters (e.g., number of people who would die, age, gender, etc.) and the cross-cultural differences in this decision-making \cite{AwadMoralVariations2019}. We can also find some noteworthy examples that use serious games to explore collaborative and trusting behaviors \cite{pereda_group_2019} \cite{PoncelaDyadic2016}, understand preferences for charity donations\footnote{MyGoodness! \url{https://www.my-goodness.net/}}, or even fight cybercrime \cite{RAYUELA}. 
On the other hand, the latest advances in AI allow us to analyze vast amounts of data and find patterns or behaviors that would be very difficult to observe with traditional analytical methods. So far, the main application of large AI models that study our interactions through social networks and personal data has been marketing and the generation of monetary value \cite{Ma_Marketing_2020}. This practice has existed almost since the beginning of social networks, without consideration of the negative social consequences it could have, particularly for children and adolescents \cite{Keles_SocialMedia_2020} \cite{Abi_Smartphones_2020}. With this paper, we also aim to contribute humbly to the ``AI for Good''\footnote{AI for Good Global Summit \url{https://aiforgood.itu.int/}}\textsuperscript{,}\footnote{AI for Good \url{https://ai4good.org/}} movements. We are at a critical social, cultural, and economic moment. We must start to consider the uses of AI that can benefit society and each individual in particular. We firmly believe that AI has the potential to help us live better and also to know ourselves better. Furthermore, to achieve great goals that improve our society, it is essential to unite forces between different branches of science (e.g., sociology, psychology, engineering, computer science, AI, etc.), and we believe that games represent an excellent vehicle for this purpose. However, in order for serious games to be able to meet these major goals, they must face some critical \textbf{challenges}: \begin{itemize} \item \textit{Game design}: Whether a game can serve as a valuable research tool depends strongly on whether it has good design and playability. Designing a game is a complex process involving many artistic and technical aspects that cannot be wholly rationalized from a scientific standpoint.
\item \textit{Validation and generalizability}: One of the most complicated aspects of using serious games as a means of research is demonstrating that their results are as valid as those of traditional methods. Although we already have numerous examples in some branches, such as game-based assessment or the reflection of player morality in in-game moral dilemmas \cite{Weaver_Mirrored_morality} \cite{Sven_influence_moral}, there is still a long way to go in this aspect. This is also because each game (and its purpose) is different from the others and therefore requires individual validation in most cases. \item \textit{Data scarcity}: In recent years, it has become clear that to take full advantage of AI, we need large amounts of data to feed it. Apart from a few exceptional cases \cite{awad_moral_2018}, academic experiments with serious games suffer from small, biased, and heterogeneous datasets. If we aspire to use them as social research tools, we must find ways to get more participants, make the best use of available data, or establish appropriate methods of sharing sensitive data. \item \textit{Explainability}: Many of today's AI tools can be highly complex, if not completely opaque (so-called black box models). This is also driven by the general trend in computer science to focus more on prediction than explanation. However, aspiring to use these tools to study human and social behavior implies a deep understanding of the outcomes that AI provides. While considerable progress has been made in explainable AI techniques, many hurdles still exist \cite{bruijn_perils_2022}. \item \textit{Ethical considerations}: When dealing with personal data (whether anonymized or not) and AI, we must seek unequivocal ethical standards. The potential benefits must outweigh the risks, as the participants' safety and well-being must be the top priority, especially when dealing with data from children or people at risk of exclusion.
Achieving these standards is genuinely complex because computer scientists and social scientists tend to have different approaches to research ethics \cite{salganik2019bit}. \end{itemize} Despite the challenges mentioned above, we can also find promising \textbf{new horizons} and future lines of work regarding the interplay between serious games and AI: \begin{itemize} \item \textit{Synthetic data}: The AI field has extensive experience in developing agents that aim to win a game \cite{Hasselt_DQN_2016}. However, in recent years we have also experienced the emergence of novel synthetic data generation techniques capable of modeling or mimicking human behavior in some aspects \cite{Hussein_Imitation_2017}, and impressive new data augmentation techniques such as Generative Adversarial Networks \cite{goodfellow_GAN}. Concerning the challenge of data scarcity, this is a promising line of work in which we could make the most of the limited data available and build models that help us better understand players' motivations in decision-making. \item \textit{Data sharing}: The field of computational social science has faced many difficulties in finding and sharing open data, especially from private companies \cite{Lazer_CSS_obstacles_2020}. However, the field of serious games is in a much more advantageous position in this respect, as it does not involve such large amounts of sensitive data. Moreover, the use of anonymization and privacy-preserving algorithms has proven to be very useful in recent years. With this promising line of work, we can address the small sample sizes that traditional social science has had to contend with and share meaningful data from serious games at scale to enhance collaboration and motivate research. \item \textit{Causality}: The social sciences have traditionally prioritized interpretable explanations of human behavior, mainly invoking causality through randomized controlled trials.
However, as powerful as these techniques are, they are also very costly in terms of resources and money. On the other hand, computer scientists have traditionally been more concerned with developing accurate predictive models, whether or not they correspond to causal/interpretable mechanisms. Nevertheless, in recent years we have experienced a resurgence of computational causality techniques \cite{mcelreath2020statistical} \cite{glymour2016causal}, even from observational data (i.e., quasi-experiments) \cite{liu_quantifying_2021}, allowing us to explain with greater robustness the workings of the systems under study. Besides, this approach makes explicit the assumptions of the computational model and of the scientist performing the analysis, helping us to make research more open to discussion and to rethink plausible alternatives for existing explanations. If our ultimate goal is to better understand individual and collective human behavior, it is critical to integrate predictive and explanatory approaches to scientific research \cite{hofman_integrating_2021}. \end{itemize} \section{Conclusions} \label{sec:conclusion} Gaming, both for entertainment and utility purposes, has been indispensable throughout the development of humankind. The flourishing of AI in recent times, coupled with the vast amounts of meaningful data that can be collected and transmitted through games, creates a unique window of opportunity to use serious games as tools for social research. In this paper, we have reviewed serious games' main applications and their synergies with AI. We can already find numerous successful examples of serious games in education, science, business, and social interests. The great potential of games to transform society should not be underestimated and deserves further and deeper inquiry. In addition, we have identified some challenges and promising new lines of work for using serious games as research tools.
By doing so, we aim to motivate researchers to pursue these lines of work and help them to identify potential applications of serious games for beneficial social objectives. We also want to encourage interdisciplinary research, which is essential in this field and which we firmly believe is how science should ultimately be conducted. We are at a critical juncture as a society, where we are beginning to realize that we need to change the motivations and goals by which we make progress. AI is a game changer that can bring immense benefits or harm to society. It is time to start breaking new ground in using these technologies for the common good. What better way to do it than by playing? \bibliographystyle{ieeetr}
\section{Introduction} Reaction systems, introduced in 2007 by Ehrenfeucht and Rozenberg \cite{ehrenfeucht2007reaction}, are elementary computational models inspired by biochemical reactions taking place within living cells. This study belongs to one of the diverse research lines initiated in \cite{ehrenfeucht2011functions} that pertains to the mathematical study of state transition functions specified by reaction systems, called $rs$ functions. For a motivational survey on reaction systems, we refer the reader to Ehrenfeucht, Petre, and Rozenberg \cite{ehrenfeucht2017reaction}. Minimal reaction systems \cite{ehrenfeucht2012minimal}, where the number of resources in each reaction is minimal, have been relatively well studied due to their simplicity. Salomaa \cite{salomaa2014compositions} initiated the study of the generative power under composition of $rs$ functions specified by minimal reaction systems, and later we showed that not every $rs$ function can be thus generated for the quaternary alphabet \cite{teh2018compositions}. On the other hand, Manzoni, Po\c{c}as, and Porreca \cite{manzoni2014simple} introduced the study of simulation by reaction systems and showed that every $rs$ function can be simulated by some minimal reaction system over an extended background set. Other studies on mathematical properties of minimal reaction systems include \cite{azimi2017steady, salomaa2014minimal, salomaa2017minimal, teh2017minimal}. This study refines and expands on the study of simulation by reaction systems initiated in \cite{manzoni2014simple}. We propose strictly minimal reaction systems as a new canonical class of reaction systems. We also introduce hybrid reaction systems, where the reactant set, inhibitor set, and product set of each reaction are allowed to contain entities from different background sets.
Then the result by Manzoni et al.~is revisited and strengthened by showing that the extended background set can be fixed ahead independent of the given $rs$ function. Next, we show that the number of extra resources needed in the fixed extended background set cannot be bounded polynomially in terms of the size of the original background set. Finally, a stronger version of simulation is studied and it will be shown that minimal reaction systems are in fact rich enough to strongly simulate every $rs$ function over a given background set. \section{Preliminaries} If $S$ is any finite set, then the cardinality of $S$ is denoted by $\vert S\vert$ and the power set of $S$ is denoted by $2^S$. From now onwards, unless stated otherwise, $S$ is a fixed finite nonempty set. \begin{definition} A \emph{reaction in $S$} is a triple $a=(R_a,I_a, P_a)$, where $R_a$ and $I_a$ are (possibly empty) disjoint subsets of $S$ and $P_a$ is a nonempty subset of $S$. The sets $R_a$, $I_a$, and $P_a$ are the \emph{reactant set}, \emph{inhibitor set}, and \emph{product set} respectively. The pair $(R_a, I_a)$ is the \emph{core of $a$}. \end{definition} \begin{definition}\label{2606a} A \emph{reaction system over $S$} is a pair $\mathcal{A}=(S,A)$ where $S$ is called the \emph{background set} and $A$ is a (possibly empty) set of reactions in $S$. We say that $\mathcal{A}$ is \emph{nondegenerate} if $R_a$ and $I_a$ are both nonempty for every $a\in A$ and $\mathcal{A}$ is \emph{maximally inhibited} if $I_a= S\backslash R_a$ for every $a\in A$. \end{definition} \begin{definition} Suppose \mbox{$\mathcal{A}=(S,A)$} is a reaction system. 
The \emph{state transition function} $\res_{\mathcal{A}} \colon 2^S\rightarrow 2^S$ is defined by $$\res_{\mathcal{A}}(X)= \bigcup \{\, P_a \mid a\in A \text{ such that } R_a\subseteq X\text{ and } I_a\cap X=\emptyset\,\},\quad \text{for all }X\subseteq S.$$ \end{definition} If $R_a\subseteq X$ and $I_a\cap X=\emptyset$, we say that the reaction $a$ is \emph{enabled by $X$}. Hence, $\res_{\mathcal{A}}(X)$ is the cumulative union of the product sets of all reactions enabled by $X$. For our purpose and without loss of generality, we may assume that distinct reactions in $A$ do not have the same core. \begin{definition}\label{311017b} Every function $f\colon 2^S\rightarrow 2^S$ is called an \emph{$rs$ function over $S$}. We say that $f$ can be \emph{specified by a reaction system} $\mathcal{A}$ over $S$ if $f=\res_{\mathcal{A}}$. \end{definition} Since every $rs$ function over $S$ can be canonically specified by a unique maximally inhibited reaction system over $S$, it follows that the class of $rs$ functions over $S$ is exactly the class of state transition functions over $S$. \begin{definition}\cite{ehrenfeucht2012minimal, teh2017minimal} \label{1507d} Suppose $\mathcal{A}=(S,A)$ is a reaction system. Then $\mathcal{A}$ is \emph{minimal} if $\vert R_a\vert\leq 1$ and $\vert I_a\vert\leq 1$ for every reaction $a\in A$. \end{definition} The elements in the reactant set or inhibitor set of a reaction $a$ are called the \emph{resources} of $a$. The classification of any reaction system according to the total number of resources allowed in each of its reactions was initiated by Ehrenfeucht, Main, and Rozenberg \cite{ehrenfeucht2011functions}. From time to time, nondegeneracy has been a naturally adopted convention. Hence, in the early studies every minimal reaction system satisfies $\vert R_a\vert=\vert I_a\vert=1$ for each $a\in A$.
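To make the definition of $\res_{\mathcal{A}}$ concrete, here is a minimal Python sketch of the state transition function, with reactions represented as triples of sets. This is our own illustration, not part of the formalism in the literature.

```python
def res(reactions, X):
    """Cumulative union of the product sets of all reactions enabled by state X.

    A reaction is a triple (R, I, P) of sets; it is enabled by X when
    R is a subset of X and I is disjoint from X.
    """
    result = set()
    for R, I, P in reactions:
        if R <= X and not (I & X):
            result |= P
    return result

# Example over S = {1, 2, 3}: two reactions with distinct cores.
A = [({1}, {2}, {3}),       # enabled iff 1 is present and 2 is absent
     (set(), {3}, {1, 2})]  # enabled iff 3 is absent (degenerate: empty reactant set)
print(res(A, {1}))        # {1, 2, 3}: both reactions enabled
print(res(A, {1, 2, 3}))  # set(): no reaction enabled
```

Note that the second reaction is degenerate (empty reactant set), which the definitions above permit; iterating `res` on its own output yields the interactive processes studied later.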
A characterization of $rs$ functions that can be specified by minimal reaction systems was obtained by Ehrenfeucht, Kleijn, Koutny, and Rozenberg \cite{ehrenfeucht2012minimal}. Later the same characterization was extended in \cite{teh2017minimal} to cover degenerate reaction systems as well. We present this characterization due to its historical significance. \begin{theorem} \cite{ehrenfeucht2012minimal, teh2017minimal} \label{2305b} Suppose $f$ is an $rs$ function over $S$. Then $f=\res_\mathcal{A}$ for some (possibly degenerate) minimal reaction system $\mathcal{A}$ if and only if $f$ satisfies the following two properties: \begin{itemize} \item (Union-subadditivity) $f(X\cup Y)\subseteq f(X)\cup f(Y)$ for all $X,Y\subseteq S$; \item (Intersection-subadditivity) $f(X\cap Y)\subseteq f(X)\cup f(Y)$ for all $X,Y\subseteq S$. \end{itemize} \end{theorem} The following definition of simulation was introduced by Manzoni et al.~\cite{manzoni2014simple}. \begin{definition} Suppose $f$ is an $rs$ function over $S$ and $k$ is a positive integer. Suppose $S\subseteq S'$ and $\mathcal{A}$ is a reaction system over $S'$. We say that $f$ can be \emph{\mbox{$k$-simulated} by} $\mathcal{A}$ if for every $X\subseteq S$, $$f^n(X)=\res_{\mathcal{A}}^{kn}(X)\cap S\quad \text{ for all positive integers } n.$$ \end{definition} The following observation says that $1$-simulation does not add to the expressive power of reaction systems. \begin{proposition}\label{230118b} Suppose $f$ is an $rs$ function over $S$ and $S\subseteq S'$. Suppose $f$ can be $1$-simulated by some reaction system $\mathcal{A}'=(S',A')$ over $S'$. Then $f$ can be specified by the reaction system $\mathcal{A}=(S,A)$ over $S$, where $$A=\{ \,(R_a, I_a \cap S, P_a\cap S)\mid a\in A'\text{ and } R_a\subseteq S \, \}.$$ \end{proposition} \begin{proof} The definition of $1$-simulation implies that $f(X)= \res_{\mathcal{A}'}(X)\cap S$ for every $X\subseteq S$. Fix an arbitrary $X\subseteq S$.
It suffices to show that $\res_{\mathcal{A}}(X)= \res_{\mathcal{A}'}(X)\cap S$. Suppose $x\in \res_{\mathcal{A}}(X)$. Then $(R_a, I_a \cap S, P_a\cap S)$ is enabled by $X$ for some $a\in A'$ with $x\in P_a \cap S$. It follows that $a=(R_a, I_a, P_a)$ is enabled by $X$ because $X\subseteq S$ and thus $x\in \res_{\mathcal{A}'}(X)\cap S$. Conversely, suppose $x\in \res_{\mathcal{A}'}(X)\cap S$. Then $a=(R_a, I_a, P_a)$ is enabled by $X$ for some $a\in A'$ with $x\in P_a$. Hence, $(R_a, I_a \cap S, P_a\cap S)\in A$ is enabled by $X$. Since $x\in P_a\cap S$, it follows that $x\in \res_{\mathcal{A}}(X)$. \end{proof} Manzoni~et al.~\cite{manzoni2014simple} showed that minimal reaction systems are rich enough for the purpose of simulation. Their result serves as the main motivation for this study. We observe that the number of resources in each reaction of the reaction system constructed in their proof is actually at most one. Therefore, we introduce the following definition before stating what they have actually shown. \begin{definition} \label{030118a} Suppose $\mathcal{A}=(S,A)$ is a reaction system. Then $\mathcal{A}$ is \emph{strictly minimal} if $\vert R_a \cup I_a\vert\leq 1$ for every reaction $a\in A$. \end{definition} \begin{theorem}\cite{manzoni2014simple}\label{1107a} Suppose $f$ is an $rs$ function over $S$. Then there exists a strictly minimal reaction system $\mathcal{B}$ over some $S'\supseteq S$ such that $f$ can be \mbox{$2$-simulated} by $\mathcal{B}$. \end{theorem} \section{Hybrid Reaction Systems} Some studies of mathematical properties of reaction systems, for example the totality of state transition functions and the functional completeness of reaction systems in Salomaa~\cite{salomaa2012functions}, concern properties that do not depend on the product sets being nonempty.
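To make $2$-simulation concrete, the following sketch (our own illustration, not part of the formal development) builds a strictly minimal reaction system over an enlarged background set that $2$-simulates a given $rs$ function $f$: every subset $X$ of $S$ receives a fresh name $N_X$, odd steps produce the names of all subsets other than the current state, and even steps let the unique missing name release $f(X)$. This mirrors the two-phase idea behind Theorem~\ref{1107a}.

```python
from itertools import combinations

def powerset(s):
    s = sorted(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def res(reactions, state):
    """Union of the product sets of all reactions enabled by `state`."""
    out = set()
    for R, I, P in reactions:
        if R <= state and I.isdisjoint(state):
            out |= P
    return frozenset(out)

def two_simulator(S, f):
    """Strictly minimal reaction system over S' = S plus the fresh names
    N_X (one per subset X of S) that 2-simulates f; every reaction uses
    at most one resource."""
    A = []
    for X in powerset(S):
        name = ("N", X)                    # fresh symbol N_X, outside S
        for x in X:                        # N_X is produced when some x of X is absent ...
            A.append((frozenset(), frozenset({x}), frozenset({name})))
        for y in S - X:                    # ... or when some y outside X is present,
            A.append((frozenset({y}), frozenset(), frozenset({name})))
        if f[X]:                           # the single missing name N_X releases f(X)
            A.append((frozenset(), frozenset({name}), frozenset(f[X])))
    return A

S = frozenset({"a", "b"})
f = {X: frozenset({"a"}) - X for X in powerset(S)}   # an arbitrary rs function over S
A = two_simulator(S, f)
assert all(len(R | I) <= 1 for R, I, _ in A)         # strictly minimal

for X in powerset(S):
    state, expected = X, X
    for _ in range(3):                               # compare n = 1, 2, 3
        state = res(A, res(A, state))                # two steps of the simulator
        expected = f[expected]                       # one step of f
        assert state & S == expected                 # f^n(X) = res^{2n}(X) ∩ S
```

Intermediate states mix elements of $S$ with the fresh names, but restricting every second state to $S$ recovers the trajectory of $f$ exactly, which is all that $2$-simulation requires.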
More importantly, the reactant set, inhibitor set, and product set of each reaction in the reaction system constructed in the proof of Theorem~\ref{1107a} appear to contain entities of a different nature. These observations motivate our definition of hybrid reaction system, where the output elements are allowed to come from a different background set whenever a reaction is enabled. \begin{definition} Suppose $S$ and $T$ are finite nonempty sets. An \emph{$(S,T)$-reaction} is a triple of sets $a=(R_a, I_a, P_a)$ such that $R_a$ and $I_a$ are (possibly empty) disjoint subsets of $S$ and $P_a$ is a nonempty subset of $T$. A \emph{hybrid reaction system} \emph{over $(S,T)$} is a triple $\mathcal{A}=(S,T,A)$ where $S$ and $T$ are the \emph{background sets} and $A$ is a (possibly empty) set of $(S,T)$-reactions. \end{definition} Obviously, a hybrid reaction system over $(S,T)$ becomes a (normal) reaction system when $S=T$. Basic terminology of reaction systems carries over to hybrid reaction systems analogously. Hence, the reader is assumed to know, for example, the definition of the state transition function $\res_{\mathcal{A}}$ and what it means for $\mathcal{A}$ to be maximally inhibited when $\mathcal{A}$ is a hybrid reaction system. Furthermore, every $rs$ function $f\colon 2^S\rightarrow 2^T$ can be canonically specified by a unique maximally inhibited hybrid reaction system over $(S,T)$. The following theorem says that every reaction system can be naturally decomposed into two strictly minimal hybrid reaction systems. This theorem is essentially extracted from the proof of Theorem~\ref{1107a}. \begin{theorem}\label{0403a} Suppose $\mathcal{A}=(S,A)$ is a reaction system. Let $T=\{ \,\bar{a}\mid a\in A\,\}$ where $\bar{a}$ is a distinguished symbol for each $a\in A$.
Let $$C=\{\,(\emptyset, \{x\}, \{\bar{a}\}) \mid a\in A \text{ and } x\in R_a\,\} \cup \{\,(\{y\}, \emptyset, \{\bar{a}\})\mid a\in A \text{ and } y\in I_a\,\}$$ and $D= \{\,(\emptyset, \{\bar{a}\}, P_a)\mid a\in A \,\}$. Then $\mathcal{C}=(S,T,C)$ and $\mathcal{D}=(T,S,D)$ are strictly minimal hybrid reaction systems such that $\res_{\mathcal{A}}=\res_{\mathcal{D}}\circ \res_{\mathcal{C}}$. \end{theorem} \begin{proof} Note that $\res_{\mathcal{D}}(Y) = \bigcup\{ \,P_a\mid \bar{a}\in T\backslash Y\,\}$ for every $Y\subseteq T$. Therefore, it suffices to show that $$\res_{\mathcal{C}}(X)=\{\,\bar{a}\in T\mid a \text{ is not enabled by } X\,\} \quad\text{ for all } X\subseteq S$$ because $\res_{\mathcal{A}}=\res_{\mathcal{D}}\circ \res_{\mathcal{C}}$ would then follow immediately. Suppose $X\subseteq S$ and $a\in A$. By definition, $a$ is not enabled by $X$ if and only if $R_a\nsubseteq X$ or $I_a\cap X\neq \emptyset$. If $x\in R_a\backslash X$, then $(\emptyset,\{x\}, \{\bar{a}\})\in C$ is enabled by $X$. Similarly, if $y\in I_a\cap X$, then $(\{y\},\emptyset, \{\bar{a}\})\in C$ is enabled by $X$. It follows that $\bar{a}\in \res_{\mathcal{C}}(X)$ whenever $a$ is not enabled by $X$. Conversely, suppose $\bar{a}\in \res_{\mathcal{C}}(X)$. Then some $c\in C$ such that $P_c=\{\bar{a}\}$ is enabled by $X$. If $c=(\emptyset,\{x\}, \{\bar{a}\})$ for some $x\in R_a$, then $x\in R_a\backslash X$ and so $R_a\nsubseteq X$. Similarly, if $c =(\{y\},\emptyset, \{\bar{a}\})$ for some $y\in I_a$, then $y\in I_a\cap X$ and so $I_a\cap X\neq \emptyset$. It follows that $a$ is not enabled by $X$ whenever $\bar{a}\in \res_{\mathcal{C}}(X)$. \end{proof} In view of the proof of Theorem~\ref{0403a}, it is intriguing to ask whether there are $\mathcal{C}$ and $\mathcal{D}$ such that $\res_{\mathcal{C}}(X)$ is the set of $\bar{a}$ such that $a$ is enabled by $X$ and $\res_{\mathcal{A}}=\res_{\mathcal{D}}\circ \res_{\mathcal{C}}$.
Trivially, we can take $C=\{\,(R_a, I_a, \{\bar{a}\}) \mid a\in A \,\}$ and $D= \{\,(\{\bar{a}\},\emptyset, P_a)\mid a\in A \,\}$. However, it will only be interesting if such $\mathcal{C}$ exists where its complexity is less than that of $\mathcal{A}$. By our next claim, this is not possible. \begin{claim} Suppose $\mathcal{C}=(S,T,C)$ is any hybrid reaction system such that $\res_{\mathcal{C}}(X)$ is the set of $\bar{a}$ such that $a$ is enabled by $X$ for each $X\subseteq S$. Then for every reaction $c\in C$, if $\bar{a}\in P_c$, then $R_a\subseteq R_c$ and $I_a\subseteq I_c$. \end{claim} \begin{proof} Suppose $c\in C$ and $\bar{a}\in P_c$. Clearly, $c$ is enabled by $R_c$ and thus $\bar{a}\in \res_{\mathcal{C}}(R_c)$. By the hypothesis, $a$ is enabled by $R_c$, implying that $R_a\subseteq R_c$. Similarly, $c$ is enabled by $S\backslash I_c$ and thus $\bar{a}\in \res_{\mathcal{C}}(S\backslash I_c)$. By the hypothesis again, $a$ is enabled by $S\backslash I_c$, implying that $(S\backslash I_c) \cap I_a=\emptyset$ and thus $I_a\subseteq I_c$. \end{proof} Theorem~\ref{0403a} justifies the canonical status of strictly minimal hybrid reaction systems, as functions specified by them can generate every $rs$ function $f$ over $S$ under composition. However, the hybrid reaction system $\mathcal{C}$ as in the theorem depends on $\mathcal{A}$ such that $\res_{\mathcal{A}}=f$. Therefore, the next theorem is a variation of Theorem~\ref{0403a} where the hybrid reaction system $\mathcal{C}$ is independent of $f$. This theorem is essentially implied by the proof of Theorem~4 in Salomaa~\cite{salomaa2015two}, although there every reaction system is required to be nondegenerate. The main idea is to give a name to each subset of the background set. An alternative, original proof of the next theorem can be found in \cite{teh2018note}. \begin{theorem}\label{0407a} Let $T=\{\,N_X \mid X\subseteq S \,\}$ where $N_X$ is a distinguished symbol for each $X\subseteq S$.
Let $$C= \{\,( \emptyset, \{x\},\{ N_X\})\mid X\subseteq S \text{ and } x\in X \,\} \cup \{\,( \{y\},\emptyset , \{N_X \} )\mid X\subseteq S\text{ and } y\in S\backslash X \,\}. $$ Suppose $f$ is an $rs$ function over $S$. Let $$D=\{\,(\emptyset, \{N_X\}, f(X))\mid X\subseteq S \text{ and } f(X)\neq \emptyset \,\}. $$ Then $\mathcal{C}=(S, T, C)$ and $\mathcal{D}=(T,S,D)$ are strictly minimal hybrid reaction systems such that $\res_{\mathcal{D}} \circ \res_{\mathcal{C}}= f$. \end{theorem} \begin{comment} \begin{proof} Suppose $X$ is a subset of $S$. It suffices to show that exactly $N_X$ is missing from $\res_{\mathcal{C}}(X)$ because then only the reaction $(\emptyset, \{N_X\}, f(X))$ is enabled by $\res_{\mathcal{C}}(X)$, provided $f(X)\neq \emptyset$, and thus $\res_{\mathcal{D}} ( \res_{\mathcal{C}}(X))=f(X)$. Suppose $Y$ is a subset of $S$ distinct from $X$. If $X\backslash Y\neq \emptyset$, say $x\in X\backslash Y$, then the reaction $( \{x\},\emptyset , \{N_Y\} )$ is enabled by $X$. Otherwise, if $Y\backslash X\neq \emptyset$, say $y\in Y\backslash X$, then the reaction $( \emptyset,\{y\}, \{N_Y\} )$ is enabled by $X$. In either case, $N_Y\in \res_{\mathcal{C}}(X)$. On the other hand, none of the reactions in $C$ with product set being $\{N_X\}$ is enabled by $X$. Therefore, $\res_{\mathcal{C}}(X)=T\backslash \{N_X\}$ as required. \end{proof} We discovered the following alternative proof long after the first one. \end{comment} \begin{proof} Let $\mathcal{A}=(S,A)$ be the canonical maximally inhibited reaction system such that $f=\res_{\mathcal{A}}$, that is, where $$A=\{\,(X,S\backslash X, f(X)) \mid X\subseteq S \text{ and } f(X)\neq \emptyset \,\}.$$ Every $X\subseteq S$ can be uniquely associated to the reaction $a_X=(X,S\backslash X, f(X))$. Let $N_X$ be the distinguished symbol $\overline{a_X}$ for each $X\subseteq S$. 
Then it can be verified that $$C=\{\,(\emptyset, \{x\}, \{\bar{a}\}) \mid a\in A \text{ and } x\in R_a\,\} \cup \{\,(\{y\}, \emptyset, \{\bar{a}\})\mid a\in A \text{ and } y\in I_a\,\}$$ and $D= \{\,(\emptyset, \{\bar{a}\}, P_a)\mid a\in A \,\}$. Therefore, by Theorem~\ref{0403a}, it follows that $\res_{\mathcal{D}} \circ \res_{\mathcal{C}}= \res_{\mathcal{A}}=f$. \end{proof} Using Theorem~\ref{0407a} and adapting the proof of Theorem~\ref{1107a}, we now strengthen Theorem~\ref{1107a} by showing that the extended background set for the simulating reaction system can be chosen in advance, independently of the given $rs$ function. Before that, we need a lemma. \begin{lemma}\label{2612a} Suppose $\mathcal{C}=(S,S',C)$ and $\mathcal{D}=(T,T',D)$ are hybrid reaction systems. Let $\mathcal{A}$ be the hybrid reaction system $(S\cup T, S' \cup T', C\cup D)$. Then $$\res_{\mathcal{A}}(X)=\res_{\mathcal{C}}(X\cap S)\cup \res_{\mathcal{D}}(X\cap T), \quad \text{ for all } X\subseteq S\cup T.$$ \end{lemma} \begin{proof} Suppose $X\subseteq S\cup T$. Then \begin{align*} \res_{\mathcal{A}}(X) ={} & \bigcup_{\substack{c\in C\\ R_c\subseteq X, I_c\cap X=\emptyset} } P_c \quad \cup \quad \bigcup_{\substack{d\in D\\ R_{d}\subseteq X, I_{d}\cap X=\emptyset} } P_{d} \\ ={} & \bigcup_{\substack{c\in C\\ R_c\subseteq X \cap S, I_c\cap (X\cap S)=\emptyset} } P_c \quad\cup \quad \bigcup_{\substack{d\in D\\ R_{d}\subseteq X\cap T, I_{d}\cap (X\cap T)=\emptyset} } P_{d} \\ ={}& \res_{\mathcal{C}}(X\cap S)\cup \res_{\mathcal{D}}(X\cap T). \end{align*} \end{proof} \begin{theorem}\label{0407b} There exists a fixed set $S'\supseteq S$ such that every $rs$ function over $S$ can be $2$-simulated by some strictly minimal reaction system over $S'$. \end{theorem} \begin{proof} Let $T=\{\,N_X\mid X\subseteq S\,\}$ as in Theorem~\ref{0407a} and $S'=S\cup T$. Then $S'$ is a fixed background set extending $S$. Suppose $f$ is an $rs$ function over $S$.
Let $\mathcal{C}$ and $\mathcal{D}$ be as in Theorem~\ref{0407a}. Consider the reaction system $\mathcal{A}=(S', C\cup D)$ over $S'$. Clearly, $\mathcal{A}$ is strictly minimal. \begin{claim} For every integer $n\geq 2$ and every $X\subseteq S'$, $$\res^{n}_{\mathcal{A}}(X)\cap S=( \res_{\mathcal{D}}\circ \res_{\mathcal{C}}) (\res_{\mathcal{A}}^{n-2}(X)\cap S),$$ where $\res^{0}_{\mathcal{A}}(X)=X$. \end{claim} \begin{proof}[Proof of the claim] We argue by mathematical induction. For the base step, suppose $X\subseteq S'$. By Lemma~\ref{2612a}, $\res_{\mathcal{A}}(X)=\res_{\mathcal{C}}(X\cap S)\cup \res_{\mathcal{D}}(X\cap T)$ and $$\res_{\mathcal{A}}^2(X)= \res_{\mathcal{A}}(\res_{\mathcal{A}}(X))= \res_{\mathcal{C}}( \res_{\mathcal{A}}(X) \cap S)\cup \res_{\mathcal{D}}( \res_{\mathcal{A}}(X) \cap T).$$ Since $\res_{\mathcal{C}}\colon 2^{S}\rightarrow 2^T$, $\res_{\mathcal{D}}\colon 2^{T}\rightarrow 2^S$, and $S\cap T=\emptyset$, it follows that $$\res_{\mathcal{A}}^2(X)\cap S =\res_{\mathcal{D}}( \res_{\mathcal{A}}(X) \cap T) = \res_{\mathcal{D}}(\res_{\mathcal{C}}(X\cap S)) = ( \res_{\mathcal{D}}\circ \res_{\mathcal{C}})(X\cap S).$$ Thus the base step is complete. For the induction step, suppose $X\subseteq S'$. Then $$\res_{\mathcal{A}}^{n+1}(X)\cap S = \res_{\mathcal{A}}^{2}( \res_{\mathcal{A}}^{n-1}(X) ) \cap S = (\res_{\mathcal{D}}\circ \res_{\mathcal{C}}) (\res_{\mathcal{A}}^{n-1}(X)\cap S) .$$ The last equality follows from the base step. The induction step is complete. \renewcommand{\qedsymbol}{} \end{proof} The fact that $f$ can be $2$-simulated by $\mathcal{A}$ follows immediately from the next claim. \begin{claim} For every positive integer $n$ and every $X\subseteq S'$, $$f^n(X\cap S)= \res^{2n}_{\mathcal{A}}(X)\cap S.$$ \end{claim} \begin{proof}[Proof of the claim] We argue by mathematical induction.
By Theorem~\ref{0407a} and the previous claim, $\res^2_{\mathcal{A}}(X)\cap S=(\res_{\mathcal{D}}\circ \res_{\mathcal{C}})(X\cap S)=f(X\cap S)$ for all $X\subseteq S'$, thus the base step is done. For the induction step, suppose $X\subseteq S'$. By the previous claim again, $\res^{2(n+1)}_{\mathcal{A}}(X)\cap S =( \res_{\mathcal{D}}\circ \res_{\mathcal{C}}) (\res_{\mathcal{A}}^{2n}(X)\cap S) $, which equals $ f ( f^n(X\cap S) ) =f^{n+1}(X\cap S)$ by Theorem~\ref{0407a} and the induction hypothesis. \renewcommand{\qedsymbol}{} \end{proof} Therefore, the proof is complete. \end{proof} \section{Extension of Background Set is Necessary} In view of Theorem~\ref{0407b}, the following theorem says that $S'$ needs to properly extend $S$ if every $rs$ function over $S$ is to be $k$-simulated for some positive integer $k$ by some strictly minimal reaction system over $S'$. \begin{theorem}\cite{manzoni2014simple}\label{140118a} Suppose $\vert S\vert\geq 3$. There exists an $rs$ function over $S$ that cannot be $k$-simulated for any positive integer $k$ by any minimal reaction system over $S$. \end{theorem} The next lemma is a generalization of what was actually shown by the first half of the proof of Theorem~\ref{140118a}. \begin{lemma}\label{060118a} Suppose $\vert S\vert=n\geq 2$ and let $X_1, X_2, \dotsc, X_{2^n}$ be any enumeration of all the subsets of $S$. Let $f$ be the $rs$ function over $S$ defined by $$f(X_i)=\begin{cases} X_{i+1} &\text{ if } 1\leq i<2^n\\ X_{2^n} &\text{ if } i= 2^n. \end{cases}$$ Suppose $S'$ is a finite background set extending $S$. Then $f$ cannot be $k$-simulated by any reaction system over $S'$ whenever $k > \frac{2^{\vert S'\vert}-2}{2^{\vert S\vert}-2 }$. \end{lemma} \begin{proof} We argue by contradiction. Suppose $\mathcal{A}$ is an arbitrary reaction system over $S'$ and $k$ is an arbitrary integer such that $k > \frac{2^{\vert S'\vert}-2}{2^{\vert S\vert}-2 }$. Assume $f$ can be $k$-simulated by $\mathcal{A}$. 
Consider the state sequence $X_1, \res_{\mathcal{A}}(X_1), \res^2_{\mathcal{A}}(X_1),\dotsc$ generated by $\mathcal{A}$ with initial state $X_1$. Since $k\cdot (2^{n} -2)+2 > 2^{\vert S'\vert}$, the following initial terms $$X_1, \res_{\mathcal{A}}(X_1), \res^2_{\mathcal{A}}(X_1), \dotsc,\res^{k(2^n-2)+1}_{\mathcal{A}}(X_1) $$ cannot be all distinct subsets of $S'$. Hence, $\res^{k(2^n-2)}_{\mathcal{A}}(X_1)$ is part of a cycle, say with period $p\geq 1$. Then $$ \res^{k(2^n-2+p)}_{\mathcal{A}}(X_1)=\res^{kp}_{\mathcal{A}}(\res^{k(2^n-2)}_{\mathcal{A}}(X_1))= \res^{k(2^n-2)}_{\mathcal{A}}(X_1)$$ and thus $$\res^{k(2^n-2+p)}_{\mathcal{A}}(X_1)\cap S= \res^{k(2^n-2)}_{\mathcal{A}}(X_1)\cap S= f^{2^n-2}(X_1)=X_{2^n-1}.$$ However, $\res^{k(2^n-2+p)}_{\mathcal{A}}(X_1)\cap S= f^{2^n-2+p}(X_1)=X_{2^n}$, which is a contradiction. \end{proof} The conclusion of Theorem~\ref{140118a} is true for $\vert S\vert=2$ as well. Let $S=\{a,b\}$ and consider the $rs$ function $f$ defined by $f(\{a\})=\{b\}$, $f( \{b\})=\emptyset$, $f(\emptyset)=S$, and $f(S)=S$. By Lemma~\ref{060118a}, $f$ cannot be $k$-simulated by any reaction system over $S$ whenever $k>1$. Since $f(S)\not\subseteq f(\{a\})\cup f( \{b\} )$, it follows that $f$ is not union-subadditive and thus cannot be $1$-simulated (equivalently, specified) by any minimal reaction system over $S$ by Theorem~\ref{2305b}. In view of Theorem~\ref{0407b}, we now show that when the background set $S$ is extended by a fixed finite number of elements, it is not generally sufficient to \mbox{$2$-simulate} every $rs$ function over $S$ by a strictly minimal reaction system over the extended background set. In fact, the following much stronger statement holds.
\begin{theorem}\label{120219} No polynomial $p$ has the property that for every set $S$, there exists a set $S'\supseteq S$ with $\vert S'\vert \leq p(\vert S\vert)$ such that every $rs$ function over $S$ can be $k$-simulated by some strictly minimal reaction system over $S'$ for some positive integer $k$. \end{theorem} \begin{proof} First of all, notice that the $rs$ function $f$ as defined in Lemma~\ref{060118a} is uniquely determined by the corresponding sequence $X_1, X_2, \dotsc, X_{2^{\vert S\vert}}$. Hence, by a simple counting argument, there are $2^{\vert S\vert}!$ distinct such $rs$ functions over $S$. On the other hand, there are $(2^{\vert S'\vert})^{2\vert S'\vert+1}$ distinct strictly minimal reaction systems over $S'$ because there are $2\vert S'\vert+1$ distinct possible cores (which include $(\emptyset, \emptyset)$) for reactions in such reaction systems. Fix a polynomial $p$. Suppose $S\subseteq S'$ and $\vert S'\vert \leq p(\vert S\vert)$. Since $2^{p(\vert S\vert)}\geq 2^{\vert S'\vert} > \frac{2^{\vert S'\vert}-2}{2^{\vert S\vert}-2 }$ (for $\vert S\vert\geq 2$), by Lemma~\ref{060118a}, any such $f$ cannot be $k$-simulated by any reaction system over $S'$ whenever $k > 2^{p(\vert S\vert)}$ (in fact, whenever $k>\frac{2^{\vert S'\vert}-2}{2^{\vert S\vert}-2 }$). Meanwhile, for every positive integer $k$, by definition, there are at most $(2^{p(\vert S\vert)})^{2p(\vert S\vert)+1 }$ $rs$ functions over $S$ that can be \mbox{$k$-simulated} by some strictly minimal reaction system over $S'$. Therefore, at most $2^{p(\vert S\vert)} \cdot (2^{p(\vert S\vert)})^{2p(\vert S\vert)+1 }$ $rs$ functions over $S$ can be $k$-simulated by some strictly minimal reaction system over $S'$ for some $k\leq 2^{p(\vert S\vert)}$.
When $\vert S\vert$ is sufficiently large, $$2^{p(\vert S\vert)} \cdot (2^{p(\vert S\vert)})^{2p(\vert S\vert)+1 }=2^{p(\vert S\vert) ( 2p(\vert S\vert)+2 ) } <2^{\vert S\vert}!,$$ and thus it follows that not all of the $2^{\vert S\vert}!$ $rs$ functions over $S$ as defined in Lemma~\ref{060118a} can be $k$-simulated by some strictly minimal reaction system over $S'$ for some positive integer $k$. \end{proof} \section{Strong $k$-Simulation by Reaction Systems} Now, we formally study a stronger version of $k$-simulation, which was first proposed in the conclusion section of \cite{manzoni2014simple}. \begin{definition} Suppose $f$ is an $rs$ function over $S$ and $k$ is a positive integer. Suppose $S\subseteq S'$ and $\mathcal{A}$ is a reaction system over $S'$. We say that $f$ can be \emph{strongly $k$-simulated by $\mathcal{A}$} if{f} $f(X)=\res_{\mathcal{A}}^{k}(X)$ for all $X\subseteq S$. \end{definition} Some direct computation shows that the $2$-simulating strictly minimal reaction system constructed in the proof of Theorem~\ref{0407b} generally does not strongly \mbox{$2$-simulate} the given $rs$ function. The following is a reformulation of Theorem~3 in \cite{salomaa2015two}, which is a strong-version analogue of Theorem~\ref{1107a}. \begin{theorem}\cite{salomaa2015two}\label{1407a} Suppose $f$ is an $rs$ function over $S$ such that $f(\emptyset)=\emptyset$. Then there exists a minimal reaction system $\mathcal{B}$ over some $S'\supseteq S$ such that $f$ can be strongly $2$-simulated by $\mathcal{B}$. \end{theorem} Salomaa adopted the convention that every reaction system is nondegenerate. Relaxing this constraint, we strengthen Theorem~\ref{1407a}, not only by having a fixed extended background set independent of the given $rs$ function, but also by generalizing it to every $rs$ function over $S$. First, we need a technical lemma, which is an adaptation of Theorem~\ref{0407a} to suit our purpose.
\begin{lemma}\label{0707a} Let $T=\{\,N_X \mid \emptyset \neq X\subseteq S \,\}\cup \{\ast, \diamond\}$, where $N_X$ is a distinguished symbol for each $\emptyset \neq X\subseteq S$. Let \begin{multline*} C=\{\,(\{y\}, \emptyset, \{N_X \})\mid \emptyset \neq X\subseteq S \text{ and } y\in S\backslash X\,\} \,\cup \, \{\,(\{s\},\emptyset, \{\ast\} )\mid s\in S\,\} \,\, \cup \\ \{\,(\{x\}, \{x'\}, \{N_X\}) \mid X\subseteq S \text{ and } x,x'\in X \text{ with } x\neq x'\,\} \,\cup\, \{ ( \emptyset, \{\diamond\}, \{ \diamond \} ) \}. \end{multline*} Suppose $f$ is an $rs$ function over $S$. Let $$D= \{\,(\{\ast\}, \{N_X\}, f(X))\mid \emptyset \neq X\subseteq S \text{ and } f(X)\neq \emptyset \,\}\,\cup\, \{ ( \{ \diamond \}, \{ \ast\}, f(\emptyset) ) \}.$$ Then $\mathcal{C}=(S\cup \{\diamond \}, T, C)$ and $\mathcal{D}=(T,S,D)$ are minimal hybrid reaction systems such that $(\res_{\mathcal{D}} \circ \res_{\mathcal{C}})(X)= f(X)$ for all $X\subseteq S$. \end{lemma} \begin{proof} Note that $\res_{\mathcal{D}} (\res_{\mathcal{C}}(\emptyset))= \res_{\mathcal{D}} (\{ \diamond \})= f(\emptyset)$. Suppose $X$ is a nonempty subset of $S$. It suffices to show that $\res_{\mathcal{C}}(X)= T \backslash \{N_X\} $ because then only the reaction $(\{\ast\}, \{N_X\}, f(X))$ is enabled by $\res_{\mathcal{C}}(X)$, provided $f(X)\neq \emptyset$, and thus $\res_{\mathcal{D}} (\res_{\mathcal{C}}(X))=f(X)$. Suppose $Y$ is a nonempty subset of $S$ distinct from $X$. If $X\nsubseteq Y$, say $x\in X\backslash Y$, then the reaction $( \{x\},\emptyset , \{N_Y\} )$ is enabled by $X$. Otherwise, if $X\subseteq Y$ and so $Y\nsubseteq X$, then the reaction $( \{y\},\{y'\}, \{N_Y\} )$ is enabled by $X$ for any $y\in X \,(\neq \emptyset)$ and $y'\in Y\backslash X$. In either case, $N_Y\in \res_{\mathcal{C}}(X)$. On the other hand, none of the reactions in $C$ with product set being $\{N_X\}$ is enabled by $X$.
Furthermore, $\{ \ast, \diamond \}\subseteq \res_\mathcal{C}(X)$ because $X$ is a nonempty subset of $S$. Therefore, $\res_{\mathcal{C}}(X)= T\backslash \{N_X\}$ as required. \end{proof} \begin{theorem}\label{0603a} There exists a fixed set $S'\supseteq S$ such that every $rs$ function $f$ over $S$ can be strongly $2$-simulated by some minimal reaction system over $S'$. \end{theorem} \begin{proof} Let $T=\{\,N_X \mid \emptyset \neq X\subseteq S \,\}\cup \{\ast, \diamond\}$ as in Lemma~\ref{0707a} and $S'= S\cup T$. Then $S'$ is a fixed background set extending $S$. Suppose $f$ is an $rs$ function over $S$. Let $\mathcal{C}$ and $\mathcal{D}$ be as in Lemma~\ref{0707a}. Consider the reaction system $\mathcal{A}= (S', C\cup D) $ over $S'$. Clearly, $\mathcal{A}$ is minimal. Suppose $X\subseteq S$. By Lemma~\ref{2612a}, $$\res_{\mathcal{A}}(X) =\res_{\mathcal{C}}(X \cap (S \cup \{\diamond\} ) ) \cup \res_{\mathcal{D}}(X \cap T )= \res_{\mathcal{C}}(X ) \cup \res_{\mathcal{D}}(\emptyset)= \res_{\mathcal{C}}(X ).$$ Hence, by Lemma~\ref{2612a} again, $$\res_{\mathcal{A} }^2(X) =\res_{\mathcal{A} } (\res_{\mathcal{C} }(X) )= \res_{\mathcal{C}}( \res_{\mathcal{C}}(X) \cap (S\cup \{\diamond\} ) ) \cup \res_{\mathcal{D}}( \res_{\mathcal{C}}(X) \cap T ).$$ Note that $\res_{\mathcal{C}}(X) \cap (S\cup \{\diamond\} )$ equals $\{\diamond\}$ because $\res_{\mathcal{C}}(X)\subseteq T$ and $\diamond \in \res_{\mathcal{C}}(X)$. It follows that $\res_{\mathcal{C}}( \res_{\mathcal{C}}(X) \cap (S\cup \{\diamond\} ) )=\res_{\mathcal{C}}( \{\diamond\})=\emptyset$. Therefore, $\res_{\mathcal{A}}^2(X) =\res_{\mathcal{D}}( \res_{\mathcal{C}}(X) )= f(X)$ by Lemma~\ref{0707a}. \end{proof} Finally, we address a question related to Lemma~\ref{060118a}. When $\vert S'\vert=\vert S\vert+l$, the lemma identifies certain $rs$ functions that cannot be $k$-simulated by any reaction system over $S'$ whenever $k> \frac{2^{\vert S\vert +l}-2}{2^{\vert S\vert}-2} >2^l$. 
Do any of those $rs$ functions behave similarly for some $k\leq 2^l$, that is, fail to be $k$-simulated by every reaction system over $S'$? The following theorem answers this negatively. \begin{theorem} Suppose $S\subseteq S'$ and $\vert S'\vert =\vert S\vert+l$ for a nonnegative integer $l$. Then every $rs$ function over $S$ can be strongly $k$-simulated by some reaction system over $S'$ whenever $k\leq 2^l$. \end{theorem} \begin{proof} Suppose $f$ is an arbitrary $rs$ function over $S$. Since $f$ can be canonically specified by a unique maximally inhibited reaction system over $S$, the case $k=1$ is trivial. Suppose $1<k\leq 2^l$. Let $T=S'\backslash S$. Note that $\vert 2^{T}\vert =2^l\geq k$. Let $\emptyset =L_1, L_2, \dotsc, L_k=T$ be any $k$ distinct subsets of $T$. Consider the reaction system $\mathcal{B}=(S',B)$, where \begin{multline*} B= \{\, (X\cup T, S\backslash X, f(X))\mid X\subseteq S \text{ and } f(X)\neq \emptyset\,\} \,\cup \\ \{\, (L_i, T\backslash L_i, L_{i+1})\mid 1\leq i\leq k-1 \,\}\cup \bigcup_{t\in T} \{ \,(\{s\}, \{t\}, \{s\})\mid s\in S \, \}. \end{multline*} We claim that $f$ can be strongly $k$-simulated by $\mathcal{B}$. Suppose $X$ is an arbitrary subset of $S$. It suffices to show that $\res_{\mathcal{B}}^i (X)= X\cup L_{i+1}$ for all $0\leq i\leq k-1$ because then $\res_{\mathcal{B}}^k (X)=\res_{\mathcal{B}}(\res_{\mathcal{B}}^{k-1}(X) )= \res_{\mathcal{B}}( X\cup T)= f(X)$. We argue by induction. Trivially, $\res_{\mathcal{B}}^0(X)=X= X\cup L_1$. For the induction step, suppose $1\leq i\leq k-1$. Then $\res_{\mathcal{B}}^{i} (X)=\res_{\mathcal{B}}(\res_{\mathcal{B}}^{i-1}(X) )= \res_{\mathcal{B}}(X\cup L_{i} ) $ by the induction hypothesis. Since $L_{i}\neq T$, it follows that $\res_{\mathcal{B}}(X\cup L_{i} )= X\cup L_{i+1}$. \end{proof} \section{Conclusion and Open Problems} The \emph{reaction system rank} of any $rs$ function $f$ over $S$ is the smallest possible size of a set $A$ of reactions such that $f$ can be specified by the reaction system $(S,A)$.
Through Theorem~\ref{0403a}, it can be shown that the number of extra background elements needed to simulate a given $rs$ function $f$ by a strictly minimal reaction system is bounded by the reaction system rank of $f$. However, since the upper bound $2^{\vert S\vert}$ for reaction system rank is effectively attainable by $rs$ functions over $S$ (see \cite{teh2017irreducible}), in view of Theorem~\ref{0407b} and Theorem~\ref{120219}, it would be intriguing but not surprising if no fixed $S'\supseteq S$ with $\vert S' \vert<\vert S\vert+2^{\vert S\vert}$ exists such that every $rs$ function over $S$ can be $2$-simulated by some strictly minimal reaction system over $S'$. In another direction, one can study the difference between $k$-simulation and strong $k$-simulation. With respect to Theorem~\ref{0603a}, one can ask whether the class of simulating reaction systems can be further restricted to the ones that are strictly minimal. Additionally, would an extended background set $S'$ of size $\vert S\vert+2^{\vert S\vert}$ be sufficient to strongly $2$-simulate every $rs$ function over $S$ by some minimal reaction system over $S'$? If either question has a negative answer, then this would mean that strong $k$-simulation is essentially weaker than $k$-simulation in terms of generative power. In conclusion, simulation of reaction systems and its strong version can be further studied and compared from the following perspectives: \begin{enumerate} \item the complexity of the simulating reaction system; \item the relative size of the extended background set; \item the order of $k$-simulation, that is, the value of $k$. \end{enumerate} Finally, every hybrid reaction system over $(S,T)$ can be viewed as a reaction system over $S\cup T$. Hence, it is not our intention to generalize the study of reaction systems by introducing hybrid reaction systems. It is simply natural and convenient to do so in this study.
\section*{Acknowledgment} This work is an extension of that published in the proceedings paper \cite{teh2018note}. The first author acknowledges the support of the Fundamental Research Grant Scheme \linebreak No.~203.PMATHS.6711644 of the Ministry of Education, Malaysia, and Universiti Sains Malaysia. Furthermore, this work was completed during his sabbatical leave from \mbox{15 Nov 2018} to 14 Aug 2019, supported by Universiti Sains Malaysia. \vspace{5mm}
\section{\label{sec:intro} Introduction} Recent studies of quasar absorption systems have suggested that the fine structure constant, $\alpha = e^2/\hbar c$, may have been smaller in the early universe than it is today on Earth \cite{webb99prl,webb01prl,murphy01mnrasA,webb03ass}. Other groups using different telescopes have shown zero variation \cite{quast04aap,srianand04prl,levshakov05aap}. All of these studies use the ``many multiplet'' method \cite{dzuba99prl}, which relies on the comparison of the wavelengths of many transitions in many ions to enhance the size of the effects and remove sources of systematic error. While this method offers an order-of-magnitude improvement in sensitivity over the previously used alkali-doublet method, a potential systematic error is introduced related to the isotope abundances of the absorbers. If the isotope abundances of the absorbers differ from the terrestrial abundances then there can be spurious shifts in the measured wavelengths which may be incorrectly interpreted as variation of $\alpha$. This problem can be resolved if both the relativistic shift and isotope shift of each transition is known, by using particular combinations of the transition frequencies as probes which are insensitive to either $\alpha$-variation or isotopic abundances \cite{kozlov04pra}. Changes in the isotope abundances and the fine-structure constant can then be measured directly. The measured isotopic abundance of carbon can be used to test models of chemical evolution of the Universe. Of particular importance are some chemical evolution models with enhanced populations of intermediate-mass stars that serve as factories for heavy Mg isotopes. The changes in the relative abundances of Mg isotopes could account for much of the evidence for variation in $\alpha$ at relatively low redshifts ($z \lesssim 2$) \cite{ashenfelter04prl,ashenfelter04apj}. 
However, these models also overproduce nitrogen, violating observed abundance trends in high-$z$ damped Lyman-$\alpha$ systems. Furthermore, it has been shown that such models would also increase the ratio of $^{13}$C to $^{12}$C \cite{fenner05mnras}. With the isotope shifts calculated in this paper it is possible to measure this ratio, and hence provide a diagnostic of these non-standard chemical evolution models. We also have one more reason to study carbon: it is a well studied atom, and we can compare the results of our method with those of other theoretical analyses, as well as a few experiments (unfortunately, isotope shifts for the majority of lines used in astrophysical applications have not been measured). In particular, much progress has been made to calculate isotope shifts using the multiconfigurational Hartree-Fock (MCHF) and configuration interaction (CI) approach \cite{carlsson95jpb,jonsson96jpb,jonsson99jpb,godefroid01jpb}. In Sections~\ref{sec:method} and \ref{sec:energy} we present our method of calculating the isotope shift. It uses the finite-field method to reduce the problem of isotope shift (or relativistic shift) to that of an energy calculation. The transition energies are calculated using a combination of the configuration interaction and many-body perturbation theory (MBPT) approaches, following the works of Refs.~\cite{dzuba96pra} and~\cite{dzuba98pra}. In Sec.~\ref{sec:calc} we show that our CI + MBPT method has accuracy comparable to that of the MCHF calculations, and are accurate enough to be used to measure isotope abundances. We believe that our method may have some advantages over the MCHF calculations; in particular, it is more readily applicable to heavier atoms (where we need to calculate isotope shifts for the study of the $\alpha$ variation and isotopic evolution) and has already been shown to be more accurate in the case of Mg\scaps{I} \cite{berengut05arXiv}. 
We also use this method to calculate the dependence of the carbon transition frequencies on $\alpha$, which is needed to study $\alpha$ variation. \section{\label{sec:method} Method} A calculation of the isotope shift using many-body perturbation theory in the residual Coulomb operator and the specific mass shift (SMS) operator shows poor convergence. Therefore, we are looking for an ``all order'' method of calculation. The finite-field scaling method is used, reducing the task to an energy calculation, and including the SMS in all parts of the calculation. A similar idea is used to calculate the relativistic shift. \subsection{Relativistic shift} To measure $\alpha$ in the distant past, we compare the frequency of transitions in the laboratory, $\omega_0$, with those measured in the quasar absorption spectra, $\omega$. The difference can be expressed as \begin{equation} \omega = \omega_0 + q x, \end{equation} where $x = (\alpha / \alpha_0 )^2 - 1$ and $\alpha_0$ is the laboratory value of the fine structure constant. We vary $\alpha$ directly in computer codes, and extract the relativistic shift $q$ from the calculated spectrum as \begin{equation} q = \frac{d \omega}{d x} \bigg|_{x = 0} . \end{equation} Thus we have reduced the problem to an accurate numerical calculation of the energy, and hence $\omega$. \subsection{Isotope shift} The isotope shifts of atomic transition frequencies come from two sources: the finite size of the nuclear charge distribution (the ``volume'' or ``field'' shift), and the finite mass of the nucleus (see, e.g., \Cite{sobelman79book}). In atoms as light as carbon, the field shift is negligible in comparison to the mass shift, so we will not consider it further. Because a real nucleus has a finite mass, there is a recoil effect from the movements of the electrons. 
The energy shift due to the recoil of the nucleus of mass $M$ is given by \begin{eqnarray} \frac{\vect{p}_N^2}{2M} &=& \frac{1}{2M} \left( \sum_i \vect{p}_i \right)^2 \nonumber \\ &=& \frac{1}{2M} \sum_i p_i^2 + \frac{1}{M} \sum_{i<j} \vect{p}_i \cdot \vect{p}_j . \label{eq:smsdef} \end{eqnarray} This ``mass shift'' is traditionally divided into the normal mass shift (NMS) and the specific mass shift (SMS), given by the first and second terms of \Eref{eq:smsdef}, respectively. The normal mass shift is easily calculated from the transition frequency; the specific mass shift is difficult to evaluate accurately. The shift in energy of any transition in an isotope with mass number $A'$ with respect to an isotope with mass number $A$ can be expressed as \begin{equation} \label{eq:is} \delta \nu^{A', A} = \nu^{A'} - \nu^{A} = \left( k_{\rm NMS} + k_{\rm SMS} \right)\left(\frac{1}{A'} - \frac{1}{A} \right), \end{equation} where the normal mass shift constant is \begin{equation} k_{\rm NMS} = -\frac{\nu}{1823} . \end{equation} The value 1823 is the ratio of the atomic mass unit to the electron mass. To calculate $k_{\rm SMS}$ we include a scaled specific-mass-shift operator directly into our energy calculation from the very beginning. We add the two-body SMS operator to the Coulomb potential, $\tilde{Q} = 1/\left| \vect{r}_1 - \vect{r}_2 \right| + \lambda \vect{p}_1 \cdot \vect{p}_2$. The operator $\tilde{Q}$ replaces the Coulomb operator everywhere that it appears in an energy calculation. We recover the specific mass shift constant as \begin{equation} k_{\rm SMS} = \frac{d \omega}{d \lambda} \bigg|_{\lambda = 0}. \end{equation} The operator $\tilde{Q}$ has the same symmetry and structure as the Coulomb operator (see Appendix~\ref{app:sms_operator}). 
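Both constants reduce to elementary numerics once the energy calculation is in place. The following sketch (not part of our actual codes; all numerical values are illustrative stand-ins, not results of this work) recovers $k_{\rm SMS}$ from a mock $\omega(\lambda)$ by central finite difference, mimicking the finite-field method, and then evaluates \Eref{eq:is} for a $^{13}$C--$^{12}$C shift:

```python
# Finite-field extraction of k_SMS and evaluation of the mass shift.
# All numerical values below are illustrative stand-ins only.

def k_sms_finite_field(omega_of_lambda, dlam=1e-4):
    """Central difference d(omega)/d(lambda) evaluated at lambda = 0."""
    return (omega_of_lambda(dlam) - omega_of_lambda(-dlam)) / (2.0 * dlam)

def mass_shift(nu, k_sms, A_prime, A):
    """delta nu^{A',A} = (k_NMS + k_SMS)(1/A' - 1/A)."""
    k_nms = -nu / 1823.0           # normal mass shift constant
    return (k_nms + k_sms) * (1.0 / A_prime - 1.0 / A)

# Hypothetical transition frequency responding linearly to the scaled
# SMS operator: omega(lambda) = omega_0 + k_SMS * lambda.
omega0, k_true = 64590.0, 1200.0
omega = lambda lam: omega0 + k_true * lam

k_sms = k_sms_finite_field(omega)                    # recovers k_true
shift = mass_shift(omega0, k_sms, A_prime=13, A=12)  # 13C - 12C shift
```

Because the mock $\omega(\lambda)$ is linear, the central difference is exact up to rounding; in a real calculation $\lambda$ must be small enough that the response stays linear but large enough to stand above numerical noise.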
\section{\label{sec:energy} Energy calculation} To calculate the energies we first solve the Dirac-Fock equations for the core and valence electrons, and we generate a basis set that includes the core and valence orbitals and a number of virtual orbitals. We then calculate the energy levels using a combination of CI and MBPT for many-valence-electron atoms, as was done for Mg\scaps{I} in \Cite{berengut05arXiv}. In this section we outline the procedure for the combined CI and MBPT method; it generally follows the work of \Cite{dzuba96pra}. Note that for single-valence-electron atoms, where the CI procedure is unnecessary, the method reduces to the addition of core-correlation effects to the Dirac-Fock energy using MBPT, as was shown to be highly successful for calculating the SMS in \Cite{berengut03pra}. Atomic units ($\hbar = m_e = e = 1$) are used throughout this paper. \subsection{Single particle basis} We first solve the Dirac-Fock equations for one-particle wavefunctions of the open-shell core $\left| m \right>$ \begin{equation} \label{eq:single_particle} h^{\rm DF} \left| m \right> = \epsilon_m \left| m \right> , \end{equation} where \begin{equation} h^{\rm DF} = c\, \vect{\alpha} \cdot \vect{p} + (\beta -1) m_e c^2 - \frac{Z}{r} + V^{\rm N_{DF}}(r) \end{equation} and $V^{\rm N_{DF}}$ is the potential (both direct and exchange) of the $N_{\rm DF}$ electrons included in the self-consistent Hartree-Fock procedure. Note that this is not necessarily the number of electrons in the closed-shell core; in fact $N_{\rm core} \leq N_{\rm DF} \leq N$, where $N$ is the total number of electrons. For the purposes of the CI calculation there are $N - N_{\rm core}$ valence electrons. We need to generate a basis set, $\left| i \right>$, that includes the core and valence states and a large number of virtual states. 
In this paper we have used a B-spline basis set, formed by diagonalizing the Dirac-Fock operator on the basis set of B-splines and excluding orbitals with high energy. This basis has been shown to be effective in energy calculations using this method of CI and MBPT \cite{dzuba98pra}. \subsection{Configuration interaction method} The many-electron Hilbert space is separated into subspaces $P$ and $Q$. $P$ corresponds to the frozen-core approximation; $Q$ is complementary to it and includes all states with core excitations. Using Slater determinants (denoted with capital letters) $\left| I \right>$ of the single-particle functions $\left| i \right>$ as a basis set in the many-electron space, we can define projection operators for $P$ and $Q$ by \begin{eqnarray} \mathcal{P} = \sum_{I \in P} \left| I \right> \left< I \right| \\ \mathcal{P} + \mathcal{Q} = 1 . \end{eqnarray} Determinants that have all core states fully occupied are in the $P$ subspace; all others are in the subspace $Q$. In the CI method, the many-electron wavefunction is expressed as a linear combination of Slater determinants from the subspace $P$: \begin{equation} \psi = \sum_{I \in P} C_I \left| I \right> , \end{equation} where the $C_I$ are obtained from the matrix eigenvalue problem \begin{equation} \label{eq:CI_matrix} \sum_{J \in P} H_{IJ} C_J = E C_I . \end{equation} In the frozen-core approximation, the determinants $\left| I \right>$ need include only the valence electrons. Although $P$ is infinite-dimensional, we can use a finite-dimensional model subspace by specifying the set of allowed configurations for the valence electrons, for example, by restricting the set of single-particle states. The restrictions we use are different for each ion, and are expressed more fully in the relevant parts of Section~\ref{sec:calc}. We will not distinguish here between the finite and infinite subspaces. 
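Numerically, \Eref{eq:CI_matrix} is a standard real symmetric eigenvalue problem. A minimal sketch (the $3\times3$ matrix is an arbitrary stand-in for $H_{IJ}$, not taken from any real configuration set):

```python
import numpy as np

# CI step: diagonalize H_IJ in the model subspace P.
# The matrix below is an arbitrary symmetric stand-in, not a real H.
H = np.array([[-1.00,  0.10, 0.02],
              [ 0.10, -0.40, 0.05],
              [ 0.02,  0.05, 0.30]])

E, C = np.linalg.eigh(H)    # eigenvalues ascending; columns of C are the C_I
E0, psi0 = E[0], C[:, 0]    # ground-state energy and CI expansion vector
```

Level repulsion pushes the lowest eigenvalue slightly below the smallest diagonal element, as expected from second-order perturbation theory.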
The Hamiltonian for the CI problem is a projection of the exact Hamiltonian $\mathcal{H}$ onto the model subspace. The core is frozen in the $P$ subspace, so our projected Hamiltonian is \begin{equation} \label{eq:PHP} \mathcal{PHP} = E_{\rm core} + \sum_{i > N_{\rm core}} h^{\rm CI}_i + \sum_{j > i > N_{\rm core}} \frac{1}{r_{ij}} \end{equation} where $E_{\rm core}$ is the total energy of the $N_{\rm core}$ core electrons and the single particle operator \begin{equation} \label{eq:h_CI} h^{\rm CI} = c\, \vect{\alpha} \cdot \vect{p} + (\beta - 1) m_e c^2 -\frac{Z}{r} + V^{\rm N_{core}} \end{equation} acts only on the valence electrons. Using the operator~\eref{eq:PHP} in \Eref{eq:CI_matrix} corresponds to the pure CI method in the frozen-core approximation. \subsection{Exact Hamiltonian expansion} We wish to construct, within the subspace $P$, an operator exactly equivalent to $\mathcal{H}$. This ``Feshbach operator'' yields the exact energy when operating on the model function $\Psi_P = \mathcal{P} \Psi$. Following Lindgren and Morrison \cite{lindgren86book}, we start from the many-body Schr\"odinger equation in the form \begin{equation} \label{eq:schrodinger} \mathcal{H(P+Q)} \Psi = E \Psi \end{equation} and operate from the left with $\mathcal{P}$ and $\mathcal{Q}$, respectively, to obtain \begin{eqnarray*} \mathcal{PHP}\, \Psi_P + \mathcal{PHQ}\, \Psi_Q &=& E \Psi_P \\ \mathcal{QHP}\, \Psi_P + \mathcal{QHQ}\, \Psi_Q &=& E \Psi_Q . \end{eqnarray*} Eliminating $\Psi_Q$, \begin{equation} \label{eq:exact_expansion} \left[ \mathcal{PHP} + \Sigma (E) \right] \Psi_P = E \Psi_P \end{equation} where \begin{equation} \label{eq:sigma_def} \Sigma (E) = \mathcal{PHQ} \frac{1}{E - \mathcal{QHQ}} \mathcal{QHP} . \end{equation} In \Cite{dzuba96pra} these expressions were also used to rewrite the orthonormality conditions for the solutions of \Eref{eq:schrodinger} in terms of the model functions $\Psi_P$ (the solutions of \Eref{eq:exact_expansion}). 
As they found, however, if an appropriate choice of the $P$ subspace is made, the usual orthonormality condition for $\Psi_P$ can be applied. In this case the standard CI procedure can be used to solve \Eref{eq:exact_expansion}. \subsection{Many-body perturbation theory} Here we will generate a perturbation expansion for $\Sigma$. Define a single particle Hamiltonian by \begin{eqnarray} h_0 a_i^\dag \left| 0 \right> &=& \epsilon_i a_i^\dag \left| 0 \right> \\ \epsilon_i & \equiv & \left< i \right| h^{\rm DF} \left| i \right> , \nonumber \end{eqnarray} where we introduce the operators $a^\dag_i$ ($a_i$) to create (annihilate) a particle. For particles in the core, the functions are those of \Eref{eq:single_particle}. The many-body zero-order Hamiltonian is \begin{equation} \mathcal{H}_0 = E_{\rm core} + \sum_i \{ a_i^\dag a_i \} \epsilon_i , \end{equation} where the brackets $\{...\}$ denote normal ordering with respect to the closed-shell core. The exact Hamiltonian is \begin{equation} \mathcal{H} = \sum_i h^{\rm nuc}_i + \sum_{i < j} \frac {1}{r_{ij}} \end{equation} where $h^{\rm nuc} = c\,\vect{\alpha} \cdot \vect{p} + (\beta-1) m_e c^2 - \frac{Z}{r}$. $\mathcal{H}$ can be separated into zero-, one-, and two-body parts: \begin{eqnarray} \mathcal{H}^{(0)} &=& E_{\rm core} \nonumber \\ \mathcal{H}^{(1)} &=& \sum_{ij} \{ a_i^\dag a_j \} \bigg[ \left< i \right| h^{\rm nuc} \left| j \right> \nonumber \\ & & + \sum_{m}^{\rm core} \left( \left< im \right| r_{12}^{-1} \left| jm \right> - \left< im \right| r_{12}^{-1} \left| mj \right> \right) \bigg] \nonumber \\ \label{eq:H_1} &=& \sum_{ij} \{ a_i^\dag a_j \} \left< i \right| h^{\rm CI} \left| j \right> \\ \label{eq:H_2} \mathcal{H}^{(2)} &=& \sum_{ijkl} \{ a_i^\dag a_j^\dag a_l a_k \} \left< ij \right| r_{12}^{-1} \left| kl \right> . 
\end{eqnarray} Expanding \Eref{eq:sigma_def} in the residual Coulomb interaction, $\mathcal{V} = \mathcal{H} - \mathcal{H}_0$, we obtain \begin{eqnarray} \Sigma (E) &=& \mathcal{PHQ}\frac{1}{E - \mathcal{H}_0}\mathcal{QHP} \nonumber \\ & & + \mathcal{PHQ}\frac{1}{E - \mathcal{H}_0}\mathcal{QVQ}\frac{1}{E - \mathcal{H}_0}\mathcal{QHP} + \ldots \label{eq:sigma_expansion} \end{eqnarray} One advantage of this formalism is that $h_0$ is not necessarily the same as $h^{\rm DF}$. Thus we may in principle use any set of functions we like in the virtual basis as long as they are orthogonal. In practice, however, it is important that $\mathcal{V}$ does not become too large, and this requires that \begin{equation} \label{eq:V_1} \mathcal{V}^{(1)} = \left< i \right| h^{\rm CI}-h_0 \left| j \right> \end{equation} is small. We can write $\Sigma$ in matrix form: \begin{eqnarray} \Sigma_{IJ} &=& \sum_{M \in Q} \frac{\left< I \right| H \left| M \right> \left< M \right| H \left| J \right>} {E - E_{M}} \nonumber \\ & & + \sum_{M,L \in Q} \frac{\left< I \right| H \left| M \right> \left< M \right| V \left| L \right> \left< L \right| H \left| J \right>}{(E - E_M)(E - E_L)} + \ldots \label{eq:sigma_matrix} \\ &=& \left( \Sigma_2 \right)_{IJ} + \left( \Sigma_3 \right)_{IJ} + \ldots \end{eqnarray} where $I$ and $J$ enumerate determinants from the subspace $P$, and $M$ and $L$ are determinants from the subspace $Q$. In this paper we calculate $\Sigma$ to second order in the perturbation expansion. For the one-valence-electron case it has been shown that this level of perturbation theory, when used with the finite-field scaling method, is sufficient to obtain accurate results for the isotope shift \cite{berengut03pra}. Furthermore, previous studies have shown that energies calculated to this order are very accurate \cite{dzuba96pra,dzuba98pra}. 
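Because $\Sigma(E)$ depends on the exact energy, \Eref{eq:exact_expansion} with the second-order term becomes a nonlinear eigenvalue problem; in practice a fixed-point iteration in $E$ converges quickly. A toy sketch with two determinants in $P$ and one in $Q$ (all matrix elements are arbitrary illustrative numbers, not from any real calculation):

```python
import numpy as np

# Toy Brillouin-Wigner iteration: solve [PHP + Sigma_2(E)] C = E C,
# with (Sigma_2)_IJ = <I|H|M><M|H|J> / (E - E_M), summed over M in Q.
# Two determinants in P, one in Q; all numbers are arbitrary.
H_P  = np.array([[-0.50,  0.05],
                 [ 0.05, -0.20]])   # PHP in the model space
h_PQ = np.array([0.10, 0.08])       # couplings <I|H|M>
E_M  = 1.50                         # energy of the Q-space determinant

def sigma2(E):
    return np.outer(h_PQ, h_PQ) / (E - E_M)

E = H_P[0, 0]                       # start from the unperturbed energy
for _ in range(50):                 # fixed-point iteration in E
    E_new = np.linalg.eigvalsh(H_P + sigma2(E))[0]
    if abs(E_new - E) < 1e-12:
        break
    E = E_new
```

Since $E - E_M < 0$ here, the second-order term lowers the ground-state energy, and the contraction factor $\sim |h_{PQ}|^2/(E - E_M)^2$ is small, so the iteration converges in a few steps.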
Substituting $\Sigma_2$ into \eref{eq:exact_expansion} we obtain the equation of the combined CI and MBPT method, which we write in the same form as \Eref{eq:CI_matrix}: \begin{equation} \label{eq:CI_MBPT_matrix} \sum_{J \in P} \left( H_{IJ} + \sum_{M \in Q} \frac{\left< I \right| H \left| M \right> \left< M \right| H \left| J \right>} {E - E_{M}} \right) C_J = E C_I . \end{equation} Thus our method includes the core-correlation effects by simply altering the matrix elements in the CI calculation. In practice, this involves adding the matrix element of the sigma operator to the one- and two-particle Coulomb integrals. \subsection{Diagrammatic technique for calculating $\Sigma$} Here we present all second-order diagrams for $\Sigma$, represented by Goldstone diagrams (see, e.g., \Cite{lindgren86book}). The diagrams are valid for any ion and any choice of core. Unlinked lines are omitted, since they correspond to states of the valence electrons that are not involved in the interaction. The omitted lines do affect the value of the MBPT correction via the Pauli principle; however, as noted in \Cite{dzuba96pra}, the Pauli principle can be ignored due to the exact cancellation of unphysical terms in different diagrams. This rule works for all orders of perturbation theory (this is Wick's theorem; see, e.g., \Cite{lindgren86book}). The second-order diagrams can be grouped according to how many external lines (valence electrons) they have. There are four one-valence-electron diagrams, all shown in \Fig{feyn:sigma1}. They correspond to the ``self-energy'' correction, which describes correlations between the valence electron and the core. Additionally, there are five so-called ``subtraction diagrams'' for the self-energy. These are diagrams that involve the external field $\mathcal{V}^{(1)}$ (\Eref{eq:V_1}), and are so named because the Hartree-Fock field enters $\mathcal{V}^{(1)}$ with a minus sign. 
In our formulation we instead use $\mathcal{H}^{(1)}$ as the external field (see Eqs.~\eref{eq:H_1}~and~\eref{eq:sigma_matrix}) which is equivalent when calculating $\Sigma_2$ because $\mathcal{PH}_0 \mathcal{Q} = 0$. Three of the $\Sigma^{(1)}$ subtraction diagrams are shown in \Fig{feyn:sigma1sub}; the other two are the mirror image partners of diagrams \ref{feyn:sigma1sub}.1 and \ref{feyn:sigma1sub}.2. \begin{figure}[htb] \caption{One-valence-electron diagrams of $\Sigma$ ($\Sigma^{(1)}$).} \begin{fmffile}{sigma1} \label{feyn:sigma1} \fmfframe(10,20)(10,30){ \begin{fmfgraph*}(80,25) \fmfstraight \fmfpen{thin} \fmfset{arrow_len}{3mm} \fmfleft{i2,i1} \fmfright{o2,o1} \fmffixedy{0}{i1,v1} \fmffixedy{0}{v1,v3} \fmffixedy{0}{v3,o1} \fmf{fermion,label=$a$,l.s=left,l.d=4}{i1,v1} \fmf{fermion,tension=0.5,label=$\beta$,l.s=left,l.d=4}{v1,v3} \fmf{fermion,label=$b$,l.s=left,l.d=4}{v3,o1} \fmffixedy{0}{i2,v2} \fmffixedy{0}{v2,v4} \fmffixedy{0}{v4,o2} \fmf{phantom}{i2,v2} \fmf{phantom}{v4,o2} \fmf{fermion,left=0.5,tension=0.25,label=$\alpha$,l.s=left,l.d=4}{v2,v4} \fmf{fermion,left=0.5,tension=0.25,label=$n$,l.s=right,l.d=4}{v4,v2} \fmf{dashes}{v1,v2} \fmf{dashes}{v3,v4} \fmfforce{(0.5w,-25)}{c} \fmfv{label=$1$,l.d=0}{c} \end{fmfgraph*} } \fmfframe(10,20)(10,30){ \begin{fmfgraph*}(80,25) \fmfstraight \fmfpen{thin} \fmfset{arrow_len}{3mm} \fmfleft{i2,i1} \fmfright{o2,o1} \fmffixedy{0}{i1,v1} \fmffixedy{0}{v1,v3} \fmffixedy{0}{v3,o1} \fmf{fermion,label=$a$,l.s=left,l.d=4}{i1,v1} \fmf{fermion,tension=0.5,label=$\beta$,l.s=left,l.d=4}{v1,v3} \fmf{phantom}{v3,o1} \fmffixedy{0}{i2,v2} \fmffixedy{0}{v2,v4} \fmffixedy{0}{v4,o2} \fmf{phantom}{i2,v2} \fmf{fermion,label=$b$,l.s=left,l.d=4}{v4,o2} \fmf{fermion,tension=0.5,label=$\alpha$,l.s=right,l.d=4}{v2,v4} \fmf{fermion,tension=0,label=$n$,l.s=right,l.d=3}{v3,v2} \fmf{dashes}{v1,v2} \fmf{dashes}{v3,v4} \fmfforce{(0.5w,-25)}{c} \fmfv{label=$2$,l.d=0}{c} \end{fmfgraph*} } \fmfframe(10,20)(10,30){ \begin{fmfgraph*}(80,50) \fmfstraight 
\fmfpen{thin} \fmfset{arrow_len}{3mm} \fmfleft{i3,i2,i1} \fmfright{o3,o2,o1} \fmffixedy{0}{i1,v1} \fmffixedy{0}{v1,v2} \fmffixedy{0}{v2,o1} \fmf{fermion,label=$a$,l.s=left,l.d=4}{i1,v1} \fmf{plain,tension=0.5}{v1,v2} \fmf{phantom}{v2,o1} \fmf{fermion,tension=0,label=$m$,l.s=right,l.d=3}{v2,v3} \fmffixedy{0}{i2,v3} \fmffixedy{0}{v3,v4} \fmffixedy{0}{v4,o2} \fmf{phantom}{i2,v3} \fmf{plain,tension=0.5}{v3,v4} \fmf{fermion,label=$b$,l.s=left,l.d=4}{v4,o2} \fmffixedy{0}{i3,v5} \fmffixedy{0}{v5,v6} \fmffixedy{0}{v6,o3} \fmf{phantom}{i3,v5} \fmf{phantom}{v6,o3} \fmf{fermion,left=0.5,tension=0.25,label=$\alpha$,l.s=left,l.d=4}{v5,v6} \fmf{fermion,left=0.5,tension=0.25,label=$n$,l.s=right,l.d=4}{v6,v5} \fmf{dashes}{v3,v5} \fmf{dashes}{v2,v6} \fmfforce{(0.5w,-25)}{c} \fmfv{label=$3$,l.d=0}{c} \end{fmfgraph*} } \fmfframe(10,20)(10,30){ \begin{fmfgraph*}(80,50) \fmfstraight \fmfpen{thin} \fmfset{arrow_len}{3mm} \fmfleft{i3,i2,i1} \fmfright{o3,o2,o1} \fmffixedy{0}{i1,v1} \fmffixedy{0}{v1,v2} \fmffixedy{0}{v2,o1} \fmf{fermion,label=$a$,l.s=left,l.d=4}{i1,v1} \fmf{plain,tension=0.5}{v1,v2} \fmf{phantom}{v2,o1} \fmf{fermion,tension=0,label=$m$,l.s=right,l.d=3}{v2,v3} \fmffixedy{0}{i2,v3} \fmffixedy{0}{v3,v4} \fmffixedy{0}{v4,o2} \fmf{phantom}{i2,v3} \fmf{fermion,tension=0.5,label=$\alpha$,l.s=left,l.d=4}{v3,v4} \fmf{phantom}{v4,o2} \fmf{fermion,tension=0,label=$n$,l.s=left,l.d=3}{v4,v5} \fmffixedy{0}{i3,v5} \fmffixedy{0}{v5,v6} \fmffixedy{0}{v6,o3} \fmf{phantom}{i3,v5} \fmf{plain,tension=0.5}{v5,v6} \fmf{fermion,label=$b$,l.s=left,l.d=4}{v6,o3} \fmf{dashes}{v3,v5} \fmf{dashes}{v2,v4} \fmfforce{(0.5w,-25)}{c} \fmfv{label=$4$,l.d=0}{c} \end{fmfgraph*} } \end{fmffile} \end{figure} \begin{figure}[htb] \caption{Subtraction diagrams of $\Sigma^{(1)}$. 
Diagrams 1 and 2 have complementary diagrams obtained by reflection in the vertical plane.} \begin{fmffile}{sigma1sub} \label{feyn:sigma1sub} \fmfframe(10,20)(10,30){ \begin{fmfgraph*}(80,50) \fmfstraight \fmfpen{thin} \fmfset{arrow_len}{3mm} \fmfleft{i3,i2,i1} \fmfright{o3,o2,o1} \fmffixedy{0}{i1,v1} \fmffixedy{0}{v1,v2} \fmffixedy{0}{v2,o1} \fmf{fermion,label=$a$,l.s=left,l.d=4}{i1,v1} \fmf{plain,tension=0.5}{v1,v2} \fmf{fermion,label=$b$,l.s=left,l.d=4}{v2,o1} \fmffixedy{0}{i2,v3} \fmffixedy{0}{v3,v4} \fmffixedy{0}{v4,o2} \fmf{phantom}{i2,v3} \fmf{fermion,left=0.5,tension=0.25,label=$\alpha$,l.s=left,l.d=4}{v3,v4} \fmf{fermion,left=0.5,tension=0.25,label=$n$,l.s=right,l.d=4}{v4,v3} \fmf{phantom}{v4,o2} \fmffixedy{0}{i3,v5} \fmffixedy{0}{v5,o3} \fmf{phantom,tension=1/3}{i3,v5} \fmf{phantom}{v5,o3} \fmfv{d.shape=cross,d.size=8}{v5} \fmf{dashes}{v4,v5} \fmf{dashes}{v1,v3} \fmfforce{(0.5w,-25)}{c} \fmfv{label=$1$,l.d=0}{c} \end{fmfgraph*} } \fmfframe(10,20)(10,30){ \begin{fmfgraph*}(80,50) \fmfstraight \fmfpen{thin} \fmfset{arrow_len}{3mm} \fmfleft{i3,i2,i1} \fmfright{o3,o2,o1} \fmffixedy{0}{i1,v1} \fmffixedy{0}{v1,v2} \fmffixedy{0}{v2,o1} \fmf{fermion,label=$a$,l.s=left,l.d=4}{i1,v1} \fmf{plain,tension=0.5}{v1,v2} \fmf{phantom}{v2,o1} \fmf{fermion,tension=0,label=$n$,l.s=right,l.d=3}{v2,v3} \fmffixedy{0}{i2,v3} \fmffixedy{0}{v3,v4} \fmffixedy{0}{v4,o2} \fmf{phantom}{i2,v3} \fmf{fermion,tension=0.5,label=$\alpha$,l.s=right,l.d=4}{v3,v4} \fmf{fermion,label=$b$,l.s=left,l.d=4}{v4,o2} \fmffixedy{0}{i3,v5} \fmffixedy{0}{v5,o3} \fmf{phantom}{i3,v5} \fmf{phantom,tension=1/3}{v5,o3} \fmfv{d.shape=cross,d.size=8}{v5} \fmf{dashes}{v3,v5} \fmf{dashes}{v2,v4} \fmfforce{(0.5w,-25)}{c} \fmfv{label=$2$,l.d=0}{c} \end{fmfgraph*} } \fmfframe(10,20)(10,30){ \begin{fmfgraph*}(80,75) \fmfstraight \fmfpen{thin} \fmfset{arrow_len}{3mm} \fmfleft{i3,i2,i1,i0} \fmfright{o3,o2,o1,o0} \fmffixedy{0}{i0,v0} \fmffixedy{0}{v0,o0} \fmf{phantom,tension=1/3}{i0,v0} \fmf{phantom}{v0,o0} 
\fmfv{d.shape=cross,d.size=8}{v0} \fmffixedy{0}{i1,v1} \fmffixedy{0}{v1,v2} \fmffixedy{0}{v2,o1} \fmf{fermion,label=$a$,l.s=left,l.d=4}{i1,v1} \fmf{plain,tension=0.5}{v1,v2} \fmf{phantom}{v2,o1} \fmf{fermion,tension=0,label=$n$,l.s=right,l.d=3}{v2,v3} \fmffixedy{0}{i2,v3} \fmffixedy{0}{v3,v4} \fmffixedy{0}{v4,o2} \fmf{phantom}{i2,v3} \fmf{plain,tension=0.5}{v3,v4} \fmf{fermion,label=$b$,l.s=left,l.d=4}{v4,o2} \fmffixedy{0}{i3,v5} \fmffixedy{0}{v5,o3} \fmf{phantom}{i3,v5} \fmf{phantom,tension=1/3}{v5,o3} \fmfv{d.shape=cross,d.size=8}{v5} \fmf{dashes}{v0,v2} \fmf{dashes}{v3,v5} \fmfforce{(0.5w,-25)}{c} \fmfv{label=$3$,l.d=0}{c} \end{fmfgraph*} } \end{fmffile} \end{figure} Diagrams with two external lines correspond to screening of the valence-valence interaction by the core electrons. There are nine diagrams represented in \Fig{feyn:sigma2}, and four subtraction diagrams represented in \Fig{feyn:sigma2sub}. The resulting corrections to the interactions between valence electrons can be written in the form of an effective radial integral, as is usually done for the Coulomb integrals. However, it is important to note that the box diagrams (\hbox{Figs. \ref{feyn:sigma2}.4 -- \ref{feyn:sigma2}.6}) have less symmetry than the Coulomb integrals, because swapping the initial and final states in either the upper or lower lines changes the integral. This approximately doubles the number of independent radial integrals that need to be stored for the CI problem. In addition, for the box diagrams the multipolarity of the Coulomb interaction need not follow the usual rule: $\xi(l_a + l_c + k)\xi(l_b + l_d + k)$ (see Appendix~\ref{app:sms_operator}). Instead, $k$ need only satisfy the triangle conditions and can be both odd and even. This would again double the number of independent radial integrals, except that we have found that the diagrams of ``wrong'' parity are unimportant for carbon and may be omitted in order to reduce the complexity of the calculations. 
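The selection rules just described are straightforward to encode. A sketch (the function names are ours, purely illustrative): the usual Coulomb-integral rule requires the parity factors $\xi(l_a + l_c + k)$ and $\xi(l_b + l_d + k)$, while for the box diagrams only the triangle conditions need be kept:

```python
def xi(n):
    """Parity factor: xi(n) = 1 if n is even, 0 if n is odd."""
    return 1 if n % 2 == 0 else 0

def triangle(l1, l2, k):
    """Triangle condition |l1 - l2| <= k <= l1 + l2."""
    return abs(l1 - l2) <= k <= l1 + l2

def allowed(k, la, lb, lc, ld, parity_rule=True):
    """Is multipole k allowed in a Coulomb-type integral <ab|r12^-1|cd>?
    With parity_rule=False only the triangle conditions are imposed,
    as for the 'wrong'-parity box-diagram contributions."""
    ok = triangle(la, lc, k) and triangle(lb, ld, k)
    if parity_rule:
        ok = ok and xi(la + lc + k) == 1 and xi(lb + ld + k) == 1
    return ok
```

For example, an $s$--$p$ pair in the upper line together with a $p$--$p$ pair in the lower line admits $k = 1$ only when the parity rule is relaxed.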
\begin{figure}[htb] \caption{$\Sigma^{(2)}$: Diagrams 1, 2 and 3 have complementary diagrams obtained by reflection in the vertical plane.} \begin{fmffile}{sigma2} \label{feyn:sigma2} \fmfframe(10,20)(10,30){ \begin{fmfgraph*}(80,50) \fmfstraight \fmfpen{thin} \fmfset{arrow_len}{3mm} \fmfleft{i3,i2,i1} \fmfright{o3,o2,o1} \fmffixedy{0}{i1,v1} \fmffixedy{0}{v1,v2} \fmffixedy{0}{v2,o1} \fmf{fermion,label=$a$,l.s=left,l.d=4}{i1,v1} \fmf{plain,tension=0.5}{v1,v2} \fmf{fermion,label=$c$,l.s=left,l.d=4}{v2,o1} \fmffixedy{0}{i2,v3} \fmffixedy{0}{v3,v4} \fmffixedy{0}{v4,o2} \fmf{phantom}{i2,v3} \fmf{fermion,left=0.5,tension=0.25,label=$\alpha$,l.s=left,l.d=4}{v3,v4} \fmf{fermion,left=0.5,tension=0.25,label=$n$,l.s=right,l.d=4}{v4,v3} \fmf{phantom}{v4,o2} \fmffixedy{0}{i3,v5} \fmffixedy{0}{v5,v6} \fmffixedy{0}{v6,o3} \fmf{fermion,label=$b$,l.s=left,l.d=4}{i3,v5} \fmf{plain,tension=0.5}{v5,v6} \fmf{fermion,label=$d$,l.s=left,l.d=4}{v6,o3} \fmf{dashes}{v1,v3} \fmf{dashes}{v4,v6} \fmfforce{(0.5w,-25)}{c} \fmfv{label=$1$,l.d=0}{c} \end{fmfgraph*} } \fmfframe(10,20)(10,30){ \begin{fmfgraph*}(80,50) \fmfstraight \fmfpen{thin} \fmfset{arrow_len}{3mm} \fmfleft{i3,i2,i1} \fmfright{o3,o2,o1} \fmffixedy{0}{i1,v1} \fmffixedy{0}{v1,v2} \fmffixedy{0}{v2,o1} \fmf{fermion,label=$a$,l.s=left,l.d=4}{i1,v1} \fmf{plain,tension=0.5}{v1,v2} \fmf{phantom}{v2,o1} \fmf{fermion,tension=0,label=$n$,l.s=right,l.d=3}{v2,v3} \fmffixedy{0}{i2,v3} \fmffixedy{0}{v3,v4} \fmffixedy{0}{v4,o2} \fmf{phantom}{i2,v3} \fmf{fermion,tension=0.5,label=$\alpha$,label.s=right,l.d=4}{v3,v4} \fmf{fermion,label=$c$,l.s=left,l.d=4}{v4,o2} \fmffixedy{0}{i3,v5} \fmffixedy{0}{v5,v6} \fmffixedy{0}{v5,o3} \fmf{fermion,label=$b$,l.s=left,l.d=4}{i3,v5} \fmf{plain,tension=0.5}{v5,v6} \fmf{fermion,label=$d$,l.s=left,l.d=4}{v6,o3} \fmf{dashes}{v2,v4} \fmf{dashes}{v3,v5} \fmfforce{(0.5w,-25)}{c} \fmfv{label=$2$,l.d=0}{c} \end{fmfgraph*} } \fmfframe(10,20)(10,30){ \begin{fmfgraph*}(80,50) \fmfstraight \fmfpen{thin} 
\fmfset{arrow_len}{3mm} \fmfleft{i3,i2,i1} \fmfright{o3,o2,o1} \fmffixedy{0}{i1,v1} \fmffixedy{0}{v1,v2} \fmffixedy{0}{v2,o1} \fmf{fermion,label=$a$,l.s=left,l.d=4}{i1,v1} \fmf{plain,tension=0.5}{v1,v2} \fmf{fermion,label=$c$,l.s=left,l.d=4}{v2,o1} \fmffixedy{0}{i2,v3} \fmffixedy{0}{v3,v4} \fmffixedy{0}{v4,o2} \fmf{phantom}{i2,v3} \fmf{fermion,tension=0.5,label=$\alpha$,l.s=left,l.d=4}{v3,v4} \fmf{fermion,label=$d$,l.s=left,l.d=4}{v4,o2} \fmf{fermion,tension=0,label=$n$,l.s=left,l.d=3}{v6,v3} \fmffixedy{0}{i3,v5} \fmffixedy{0}{v5,v6} \fmffixedy{0}{v6,o3} \fmf{fermion,label=$b$,l.s=left,l.d=4}{i3,v5} \fmf{plain,tension=0.5}{v5,v6} \fmf{phantom}{v6,o3} \fmf{dashes}{v1,v3} \fmf{dashes}{v4,v6} \fmfforce{(0.5w,-25)}{c} \fmfv{label=$3$,l.d=0}{c} \end{fmfgraph*} } \fmfframe(10,20)(10,30){ \begin{fmfgraph*}(80,50) \fmfstraight \fmfpen{thin} \fmfset{arrow_len}{3mm} \fmfleft{i3,i2,i1} \fmfright{o3,o2,o1} \fmffixedy{0}{i1,v1} \fmffixedy{0}{v1,v2} \fmffixedy{0}{v2,o1} \fmf{fermion,label=$a$,l.s=left,l.d=4}{i1,v1} \fmf{plain,tension=0.5}{v1,v2} \fmf{phantom}{v2,o1} \fmf{fermion,tension=0,label=$n$,l.s=right,l.d=3}{v2,v3} \fmffixedy{0}{i2,v3} \fmffixedy{0}{v3,v4} \fmffixedy{0}{v4,o2} \fmf{phantom}{i2,v3} \fmf{plain,tension=0.5}{v3,v4} \fmf{fermion,label=$c$,l.s=left,l.d=4}{v4,o2} \fmffixedy{0}{i3,v5} \fmffixedy{0}{v5,v6} \fmffixedy{0}{v5,o3} \fmf{fermion,label=$b$,l.s=left,l.d=4}{i3,v5} \fmf{fermion,tension=0.5,label=$\alpha$,l.s=left,l.d=4}{v5,v6} \fmf{fermion,label=$d$,l.s=left,l.d=4}{v6,o3} \fmf{dashes}{v2,v6} \fmf{dashes}{v3,v5} \fmfforce{(0.5w,-25)}{c} \fmfv{label=$4$,l.d=0}{c} \end{fmfgraph*} } \fmfframe(10,20)(10,30){ \begin{fmfgraph*}(80,50) \fmfstraight \fmfpen{thin} \fmfset{arrow_len}{3mm} \fmfleft{i3,i2,i1} \fmfright{o3,o2,o1} \fmffixedy{0}{i1,v1} \fmffixedy{0}{v1,v2} \fmffixedy{0}{v2,o1} \fmf{fermion,label=$a$,l.s=left,l.d=4}{i1,v1} \fmf{fermion,tension=0.5,label=$\alpha$,l.s=left,l.d=4}{v1,v2} \fmf{fermion,label=$c$,l.s=left,l.d=4}{v2,o1} \fmffixedy{0}{i2,v3} 
\fmffixedy{0}{v3,v4} \fmffixedy{0}{v4,o2} \fmf{phantom}{i2,v3} \fmf{plain,tension=0.5}{v3,v4} \fmf{fermion,label=$d$,l.s=left,l.d=4}{v4,o2} \fmf{fermion,tension=0,label=$n$,l.s=left,l.d=3}{v6,v3} \fmffixedy{0}{i3,v5} \fmffixedy{0}{v5,v6} \fmffixedy{0}{v6,o3} \fmf{fermion,label=$b$,l.s=left,l.d=4}{i3,v5} \fmf{plain,tension=0.5}{v5,v6} \fmf{phantom}{v6,o3} \fmf{dashes}{v1,v3} \fmf{dashes}{v2,v6} \fmfforce{(0.5w,-25)}{c} \fmfv{label=$5$,l.d=0}{c} \end{fmfgraph*} } \fmfframe(10,20)(10,30){ \begin{fmfgraph*}(80,75) \fmfstraight \fmfpen{thin} \fmfset{arrow_len}{3mm} \fmfleft{i4,i3,i2,i1} \fmfright{o4,o3,o2,o1} \fmffixedy{0}{i1,v1} \fmffixedy{0}{v1,v2} \fmffixedy{0}{v2,o1} \fmf{fermion,label=$a$,l.s=left,l.d=4}{i1,v1} \fmf{plain,tension=0.5}{v1,v2} \fmf{phantom}{v2,o1} \fmf{fermion,tension=0,label=$m$,l.s=left,l.d=3}{v2,v3} \fmffixedy{0}{i2,v3} \fmffixedy{0}{v3,v4} \fmffixedy{0}{v4,o2} \fmf{phantom}{i2,v3} \fmf{plain,tension=0.5}{v3,v4} \fmf{fermion,label=$c$,l.s=left,l.d=4}{v4,o2} \fmffixedy{0}{i3,v5} \fmffixedy{0}{v5,v6} \fmffixedy{0}{v6,o3} \fmf{fermion,label=$b$,l.s=left,l.d=4}{i3,v5} \fmf{plain,tension=0.5}{v5,v6} \fmf{phantom}{v6,o3} \fmf{fermion,tension=0,label=$n$,l.s=left,l.d=3}{v6,v7} \fmffixedy{0}{i4,v7} \fmffixedy{0}{v7,v8} \fmffixedy{0}{v8,o4} \fmf{phantom}{i4,v7} \fmf{plain,tension=0.5}{v7,v8} \fmf{fermion,label=$d$,l.s=left,l.d=4}{v8,o4} \fmf{dashes}{v3,v7} \fmf{dashes}{v2,v6} \fmfforce{(0.5w,-25)}{c} \fmfv{label=$6$,l.d=0}{c} \end{fmfgraph*} } \end{fmffile} \end{figure} \begin{figure}[htb] \caption{$\Sigma^{(2)}$ subtraction diagrams. 
Each has a complementary mirror-image partner.} \begin{fmffile}{sigma2sub} \label{feyn:sigma2sub} \fmfframe(10,20)(10,30){ \begin{fmfgraph*}(80,100) \fmfstraight \fmfpen{thin} \fmfset{arrow_len}{3mm} \fmfleft{i4,i3,i2,i1,i0} \fmfright{o4,o3,o2,o1,o0} \fmffixedy{0}{i0,v0} \fmffixedy{0}{v0,o0} \fmf{phantom,tension=1/3}{i0,v0} \fmf{phantom}{v0,o0} \fmfv{d.shape=cross,d.size=8}{v0} \fmffixedy{0}{i1,v1} \fmffixedy{0}{v1,v2} \fmffixedy{0}{v2,o1} \fmf{fermion,label=$a$,l.s=left,l.d=4}{i1,v1} \fmf{plain,tension=0.5}{v1,v2} \fmf{phantom}{v2,o1} \fmf{fermion,tension=0,label=$n$,l.s=right,l.d=3}{v2,v3} \fmffixedy{0}{i2,v3} \fmffixedy{0}{v3,v4} \fmffixedy{0}{v4,o2} \fmf{phantom}{i2,v3} \fmf{plain,tension=0.5}{v3,v4} \fmf{fermion,label=$c$,l.s=left,l.d=4}{v4,o2} \fmffixedy{0}{i3,v5} \fmffixedy{0}{v5,v6} \fmffixedy{0}{v5,o3} \fmf{fermion,label=$b$,l.s=left,l.d=4}{i3,v5} \fmf{plain,tension=0.5}{v5,v6} \fmf{fermion,label=$d$,l.s=left,l.d=4}{v6,o3} \fmf{dashes}{v0,v2} \fmf{dashes}{v3,v5} \fmfforce{(0.5w,-25)}{c} \fmfv{label=$1$,l.d=0}{c} \end{fmfgraph*} } \fmfframe(10,20)(10,30){ \begin{fmfgraph*}(80,100) \fmfstraight \fmfpen{thin} \fmfset{arrow_len}{3mm} \fmfleft{i4,i3,i2,i1,i0} \fmfright{o4,o3,o2,o1,o0} \fmffixedy{0}{i1,v1} \fmffixedy{0}{v1,v2} \fmffixedy{0}{v2,o1} \fmf{fermion,label=$a$,l.s=left,l.d=4}{i1,v1} \fmf{plain,tension=0.5}{v1,v2} \fmf{fermion,label=$c$,l.s=left,l.d=4}{v2,o1} \fmffixedy{0}{i2,v3} \fmffixedy{0}{v3,v4} \fmffixedy{0}{v4,o2} \fmf{phantom}{i2,v3} \fmf{plain,tension=0.5}{v3,v4} \fmf{fermion,label=$d$,l.s=left,l.d=4}{v4,o2} \fmf{fermion,tension=0,label=$n$,l.s=left,l.d=3}{v6,v3} \fmffixedy{0}{i3,v5} \fmffixedy{0}{v5,v6} \fmffixedy{0}{v6,o3} \fmf{fermion,label=$b$,l.s=left,l.d=4}{i3,v5} \fmf{plain,tension=0.5}{v5,v6} \fmf{phantom}{v6,o3} \fmffixedy{0}{i4,v7} \fmffixedy{0}{v7,o4} \fmf{phantom,tension=1/3}{i4,v7} \fmf{phantom}{v7,o4} \fmfv{d.shape=cross,d.size=8}{v7} \fmf{dashes}{v1,v3} \fmf{dashes}{v6,v7} \fmfforce{(0.5w,-25)}{c} \fmfv{label=$2$,l.d=0}{c} 
\end{fmfgraph*} } \end{fmffile} \end{figure} It is also worth noting that the two-body diagrams with the largest contribution to the energy (the direct diagram, \Fig{feyn:sigma2}.1, and its mirror) make no (linear) contribution to the SMS because they cancel. The exchange diagrams do contribute to the SMS; nevertheless, it is because of this cancellation that the contribution of $\Sigma^{(2)}$ to $k_{\rm SMS}$ is generally smaller than that of $\Sigma^{(1)}$. Figure~\ref{feyn:sigma3} shows a diagram with three external lines, $\Sigma^{(3)}$, where three valence electrons interact via the core. The diagrams of this type are easy to calculate (having only one internal summation and no summations over virtual states), but the number of corresponding effective radial integrals is huge. To include this diagram everywhere in the CI calculation would involve taking into account not only corrections to $\mathcal{H}^{(1)}$ and $\mathcal{H}^{(2)}$, but would also introduce an effective $\mathcal{H}^{(3)}$. Fortunately, as explained in \Cite{dzuba96pra}, it is possible to omit these diagrams entirely provided that one makes an appropriate choice of the atomic core. We have not included $\Sigma^{(3)}$ in this paper. 
\begin{figure}[htb] \caption{Effective three-valence-electron interaction of $\Sigma_2$.} \begin{fmffile}{sigma3} \label{feyn:sigma3} \fmfframe(10,10)(10,10){ \begin{fmfgraph}(80,100) \fmfstraight \fmfpen{thin} \fmfset{arrow_len}{3mm} \fmfleft{i4,i3,i2,i1,i0} \fmfright{o4,o3,o2,o1,o0} \fmffixedy{0}{i0,v0} \fmffixedy{0}{v0,v9} \fmffixedy{0}{v9,o0} \fmf{fermion}{i0,v0} \fmf{plain,tension=0.5}{v0,v9} \fmf{fermion}{v9,o0} \fmffixedy{0}{i1,v1} \fmffixedy{0}{v1,v2} \fmffixedy{0}{v2,o1} \fmf{fermion}{i1,v1} \fmf{plain,tension=0.5}{v1,v2} \fmf{phantom}{v2,o1} \fmf{fermion,tension=0}{v2,v3} \fmffixedy{0}{i2,v3} \fmffixedy{0}{v3,v4} \fmffixedy{0}{v4,o2} \fmf{phantom}{i2,v3} \fmf{plain,tension=0.5}{v3,v4} \fmf{fermion}{v4,o2} \fmffixedy{0}{i3,v5} \fmffixedy{0}{v5,v6} \fmffixedy{0}{v5,o3} \fmf{fermion}{i3,v5} \fmf{plain,tension=0.5}{v5,v6} \fmf{fermion}{v6,o3} \fmf{dashes}{v9,v2} \fmf{dashes}{v3,v5} \end{fmfgraph} } \end{fmffile} \end{figure} Analytical expressions for the diagrams of \hbox{Figs. \ref{feyn:sigma1} -- \ref{feyn:sigma2sub}} are given in Appendix~\ref{app:sigma}. A final point worth mentioning is the definition of the energy denominators in the expressions corresponding to the diagrams. Our formalism corresponds to Brillouin-Wigner perturbation theory, and so there is an explicit dependence on the energy of the whole atom: $E$ in Eqs.~\eref{eq:sigma_def} and \eref{eq:CI_MBPT_matrix}. From \Eref{eq:PHP}, $E = E_{\rm core} + E_{\rm val}$, and so the core part cancels in the energy denominator $E - \mathcal{H}_0$ (see \Eref{eq:sigma_expansion}). The energy $E_{\rm val}$ corresponds to the energy of all the valence electrons. To calculate $\mathcal{H}_0$, however, one should take into account the state of all valence electrons for each disconnected diagram. This is computationally impractical, but again one can simplify. In this paper all connected diagrams are evaluated at the energies which correspond to the main configuration. 
For example, when calculating \Fig{feyn:sigma1}.1 the usual Rayleigh-Schr\"odinger perturbation theory would give the denominator ($\epsilon_a + \epsilon_n - \epsilon_\alpha - \epsilon_\beta$); we instead use ($\epsilon_{2s} + \epsilon_n - \epsilon_\alpha - \epsilon_\beta$) if $a$ and $b$ are $s$-wave, or ($\epsilon_{2p_{1/2}} + \epsilon_n - \epsilon_\alpha - \epsilon_\beta$) if $a$ is $p$-wave, and so on. The Brillouin-Wigner formalism used in our method has two major advantages over Rayleigh-Schr\"odinger perturbation theory. Firstly, we wish to preserve the symmetry of the CI matrix when $\Sigma$ is added, so we cannot have an energy denominator that depends on which state is initial and which is final. Secondly, Rayleigh-Schr\"odinger theory does not allow a large $P$ space, as it produces small denominators for excited configurations. That is known to lead to ``intruder states'' -- unphysical states that can lie even below the ground state. By contrast, Brillouin-Wigner theory is known to have the wrong limit for an infinite number of valence electrons. It is possible to formulate the theory without these drawbacks by modifying $E$ (\Cite{kozlov99os}). However, the energy dependence of the effective Hamiltonian is a higher-order effect; compared to the prescription above, it leads to a relatively small energy correction that we neglect. \section{\label{sec:calc} Calculations and results} We present results for each species of carbon separately, in order of increasing number of valence electrons. For all calculations we use a relativistic B-spline single-electron basis set similar to that developed by Johnson \emph{et~al.} \cite{johnson86prl,johnson87pra,johnson88pra}. In all cases the MBPT calculation considers $1s^2$ as the core, and all other states as valence. We have used two different B-spline basis sets, each with a different number of shells included in the self-consistent Dirac-Fock procedure. 
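The SMS constants quoted below are obtained with the finite-field method described in the conclusions: the SMS operator is added to the Hamiltonian with a scaling factor $\lambda$, and $k_{\rm SMS} = dE/d\lambda$ is evaluated numerically at $\lambda = 0$. A minimal sketch of that final differentiation step, with a hypothetical `total_energy` routine standing in for the full CI + MBPT energy calculation (the toy coefficients are illustrative only):

```python
# Finite-field extraction of k_SMS = dE/d(lambda) at lambda = 0.
# `total_energy` is a hypothetical stand-in for the CI + MBPT energy;
# the numbers below are illustrative, not a real calculation.

def total_energy(lam):
    # toy energy: E(lam) = E0 + k_SMS*lam + small quadratic contamination
    return -64484.0 + (-4521.0) * lam + 3.0 * lam**2

def k_sms(energy, lam0=0.01):
    # central difference: even powers of lambda cancel exactly
    return (energy(lam0) - energy(-lam0)) / (2.0 * lam0)

print(k_sms(total_energy))  # ~ -4521, the linear coefficient
```

A central difference is used so that the quadratic (and all even) terms in $\lambda$ drop out, leaving the linear SMS coefficient directly.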
The first set, $\mathcal{B}_1$, is formed in the potential of the closed-shell $1s^2$ core; this corresponds to $N_{\rm DF} = N_{\rm core}$. For this set the subtraction diagrams of Figs.~\ref{feyn:sigma1sub} and \ref{feyn:sigma2sub} are zero. The second basis set, $\mathcal{B}_2$, is formed in the potential of the $1s^2 2s^2$ core; thus $N_{\rm DF} > N_{\rm core}$ and the subtraction diagrams must be included. The basis sets are described by giving the largest principal quantum number for each wave, e.g. $8spd6f$ includes the orbitals $1 - 8s_{1/2}$, $2 - 8p_j$, $3 - 8d_j$, and $4 - 6f_j$. For the MBPT we have used the basis $40spdfg$ for all ions. This basis is fully saturated in the sense that the energies and isotope shifts do not change with the addition of more basis functions. \subsection{\label{sec:CIV} C\scaps{IV}} C\scaps{IV} has one electron above a closed $1s^2$ core. It can therefore be treated as a single-valence-electron atom using MBPT, or as a three-electron atom using CI. We have used both methods; the results are presented for comparison in \Tref{tab:CIV}. In each case, the calculations were done using the $\mathcal{B}_1$ basis set. The isotope shift results have also been compared with previous theoretical approaches. Multiconfiguration Hartree-Fock -- CI calculations were presented in \Cite{carlsson95jpb}, which also combined the Hylleraas and full-core plus correlation (FCPC) calculations of Refs.~\cite{king89pra} and \cite{wang93pscr}, respectively. \begin{table}[htbp] \caption{\label{tab:CIV} Comparison of energies and $k_{\rm SMS}$ for transitions from the ground state in C\scaps{IV} using various methods. The transition energies presented do not include mass shift effects. Note that all results presented by other groups are non-relativistic and hence do not distinguish fine-structure.} \begin{tabular}{lcccc} \hline \hline Level & Energy & DF + $\Sigma^{(1)}$ & Full CI & Other works \\ & (expt.) 
& & & \\ \hline & & \multicolumn{3}{c}{Energy (cm$^{-1}$)} \\ $2p\ ^2\!P^o_{1/2}$ & 64484 & 64551 & 64594 & 64564\footnotemark[1] \\ $2p\ ^2\!P^o_{3/2}$ & 64592 & 64681 & 64725 & 64399\footnotemark[2] \\ & & & & 64449\footnotemark[3] \\ \\ & & \multicolumn{3}{c}{$k_{\rm SMS}$ (GHz$\cdot$amu)} \\ $2p\ ^2\!P^o_{1/2}$ & & -4511 & -4521 & -4526\footnotemark[1] \\ $2p\ ^2\!P^o_{3/2}$ & & -4504 & -4514 & -4527\footnotemark[2] \\ & & & & -4528\footnotemark[3] \\ \hline \hline \end{tabular} \footnotetext[1]{MCHF--CI: Carlsson \emph{et~al.}, 1995 \cite{carlsson95jpb}} \footnotetext[2]{MCHF: Godefroid \emph{et~al.}, 2001 \cite{godefroid01jpb}} \footnotetext[3]{Hylleraas + FCPC: Results of King, 1989 \cite{king89pra}, and Wang \emph{et~al.}, 1993 \cite{wang93pscr}, combined and presented in \Cite{carlsson95jpb}} \end{table} As noted previously, the MBPT basis was completely saturated; however, we have included only second-order diagrams in our calculation. By contrast, the CI calculations are complete (although they do not include the Breit interaction and QED effects), but the basis is not completely saturated. We have used an effective $22spd8f$ basis for the CI calculation, including only single and double excitations from the leading configurations (SD-CI). We included triple excitations for a smaller basis, $14spd8f$: this made a difference of less than 2~cm$^{-1}$ in the transition energy, and less than 1~GHz$\cdot$amu for $k_{\rm SMS}$. Other $f$-wave and higher partial waves were also found to be unimportant. All methods give the same value for the transition energies and SMS constants to within 0.5\%. \subsection{\label{sec:CIII} C\scaps{III}} The ground state for C\scaps{III} is $1s^2 2s^2\ ^1\!S_0$. We have done our calculations both as a four-electron CI problem (full CI) and by combining two-valence-electron CI with MBPT, considering $1s^2$ as the frozen core (CI + $\Sigma^{(1,2)}$). All results are presented in \Tref{tab:CIII}. 
Also included are CI results (the pure two-electron CI method) and CI + $\Sigma^{(1)}$ results (that do not include $\Sigma^{(2)}$). This allows us to examine the roles of the different parts of the core-correlation. The CI and CI + $\Sigma^{(1)}$ results are calculated with the $\mathcal{B}_2$ basis set; the complete CI + $\Sigma^{(1,2)}$ results have been calculated using both the $\mathcal{B}_1$ and $\mathcal{B}_2$ sets. Additionally we have presented the MCHF results of \Cite{jonsson99jpb}. \begin{table*}[htbp] \caption{\label{tab:CIII} Comparison of energies, $k_{\rm SMS}$, and $q$-values for transitions from the ground state in C\scaps{III} using various methods. Note that the MCHF results are non-relativistic and hence do not distinguish fine-structure.} \begin{tabular}{lccccccc} \hline \hline Level & Energy & CI & CI + $\Sigma^{(1)}$ & \multicolumn{2}{c}{CI + $\Sigma^{(1,2)}$} & Full CI & MCHF\footnotemark[1]\\ &(expt.) & & & $\mathcal{B}_2$ & $\mathcal{B}_1$ & & \\ \hline & \multicolumn{7}{c}{Energy (cm$^{-1}$)} \\ $2s2p\ ^3\!P^o_0$ & 52367 & 52750 & 52322 & 52349 & 52383 & 52506 & 52280 \\ $2s2p\ ^3\!P^o_1$ & 52391 & 52784 & 52357 & 52383 & 52418 & 52534 & \\ $2s2p\ ^3\!P^o_2$ & 52447 & 52852 & 52427 & 52453 & 52488 & 52592 & \\ $2s2p\ ^1\!P^o_1$ &102352 &103719 &103365 &102725 &102775 &103109 & 102598 \\ \\ & \multicolumn{7}{c}{$k_{\rm SMS}$ (GHz$\cdot$amu)} \\ $2s2p\ ^3\!P^o_0$ & & -3439 & -3478 & -3473 & -3470 & -3483 & -3475\\ $2s2p\ ^3\!P^o_1$ & & -3438 & -3476 & -3472 & -3468 & -3480 & \\ $2s2p\ ^3\!P^o_2$ & & -3434 & -3473 & -3468 & -3465 & -3474 & \\ $2s2p\ ^1\!P^o_1$ & & -2688 & -2759 & -2790 & -2784 & -2882 & -2817 \\ \\ & \multicolumn{7}{c}{$q$ (cm$^{-1}$)} \\ $2s2p\ ^3\!P^o_0$ & & 75 & 74 & 74 & & & \\ $2s2p\ ^3\!P^o_1$ & & 109 & 108 & 108 & & & \\ $2s2p\ ^3\!P^o_2$ & & 177 & 178 & 178 & & & \\ $2s2p\ ^1\!P^o_1$ & & 165 & 165 & 165 & & & \\ \hline \hline \end{tabular} \footnotetext[1]{J\"onsson \emph{et~al.}, 1999 \cite{jonsson99jpb}} 
\end{table*} For the full four-electron CI method we used a very large basis $16spdf$, in the SD-CI approximation. This was not enough to saturate the basis entirely, and we could go no further because the Hamiltonian matrix became too large. The error in $k_{\rm SMS}$ from the full CI calculation could be as large as 100~GHz$\cdot$amu. Nonetheless, the full CI results agree with the calculations of \Cite{jonsson99jpb}, as well as with our own CI + MBPT results. The CI + MBPT method is particularly accurate for C\scaps{III} for two reasons. Firstly, because there are only two valence electrons there are no triple excitations, which keeps the Hamiltonian small without the need for approximations (for ions with more valence electrons we have used the SD-CI approximation). Because the Hamiltonian is relatively small, we can completely saturate the basis at $20spdf$. Secondly, there is no $\Sigma^{(3)}$ (corresponding to \Fig{feyn:sigma3}); as stated before, in this paper we have not included it anyway. These points hold true for all two-valence-electron atoms; in particular we have previously shown that excellent results are attainable for Mg\scaps{I} (\Cite{berengut05arXiv}). In \Cite{jonsson99jpb} the MCHF results were given an error of 1\%; our CI + MBPT results are within this range, and so we believe that we have obtained a similar accuracy. It is also worth noting again that we have excluded the extra box diagrams with ``wrong'' parity from the results presented. The inclusion of these diagrams in $\Sigma^{(2)}$ makes a difference of around 0.1\% to the $k_{\rm SMS}$ constants. \subsection{\label{sec:CII} C\scaps{II}} We have treated C\scaps{II} as a three-valence-electron ion; its ground state is $2s^22p\ ^2\!P^o_{1/2}$. We have used the $\mathcal{B}_2$ basis $20spdf$, which corresponds to the $V^{N-1}$ potential, and we have restricted ourselves in the CI problem to single and double excitations from the leading configurations $2s^22p$ and $2s2p^2$. 
In \Tref{tab:CII} we present all results for C\scaps{II}. Again we have presented the breakdown of the various parts of the CI + MBPT method. We have also performed our calculations using the $\mathcal{B}_1$ basis: this changed the results by less than 1\% for all transitions except the $^2\!S_{1/2}$ transition, where the difference was around 3\%. For this transition, neither basis set was enough to completely saturate $k_{\rm SMS}$. Furthermore, the difference between the results of CI + MBPT and MCHF--CI is fairly large for this transition (around 7\%). Adding the next most important configuration, $2s^23s$, to the leading set changes the energy of the $^2\!S_{1/2}$ transition by 30~cm$^{-1}$~(0.03\%) and $k_{\rm SMS}$ by 14~GHz$\cdot$amu (around 1\%). The effect on all other transitions was much smaller. \begin{table*}[htbp] \caption{\label{tab:CII} Comparison of energies, $k_{\rm SMS}$, and $q$-values for transitions from the ground state in C\scaps{II}. Note that the MCHF--CI results are non-relativistic and hence do not distinguish fine-structure.} \begin{tabular}{lcccccc} \hline \hline Level & Energy & \multicolumn{3}{c}{This work} & MCHF--CI\footnotemark[1] & CI\footnotemark[2] \\ & (expt.) 
& CI & CI + $\Sigma^{(1)}$ & CI + $\Sigma^{(1,2)}$ & & \\ \hline & & \multicolumn{5}{c}{Energy (cm$^{-1}$)} \\ $2s2p^2\ ^4\!P_{1/2}$ & 43003 & 43118 & 42767 & 42782 && \\ $2s2p^2\ ^4\!P_{3/2}$ & 43025 & 43144 & 42794 & 42808 && \\ $2s2p^2\ ^4\!P_{5/2}$ & 43054 & 43186 & 42838 & 42852 && \\ $2s2p^2\ ^2\!D_{5/2}$ & 74930 & 75587 & 75350 & 75227 & 74842 & \\ $2s2p^2\ ^2\!D_{3/2}$ & 74933 & 75585 & 75347 & 75225 && \\ $2s2p^2\ ^2\!S_{1/2}$ & 96494 & 97095 & 96965 & 96960 & 96478 & \\ $2s2p^2\ ^2\!P_{1/2}$ & 110624 & 112135 & 111913 & 111205 & 110569 & \\ $2s2p^2\ ^2\!P_{3/2}$ & 110666 & 112187 & 111967 & 111259 && \\ \\ & & \multicolumn{5}{c}{$k_{\rm SMS}$ (GHz$\cdot$amu)} \\ $2s2p^2\ ^4\!P_{1/2}$ & & -2913 & -2956 & -2960 && \\ $2s2p^2\ ^4\!P_{3/2}$ & & -2912 & -2954 & -2958 && \\ $2s2p^2\ ^4\!P_{5/2}$ & & -2910 & -2952 & -2956 && \\ $2s2p^2\ ^2\!D_{5/2}$ & & -2604 & -2666 & -2672 & -2672 & \\ $2s2p^2\ ^2\!D_{3/2}$ & & -2604 & -2666 & -2671 && \\ $2s2p^2\ ^2\!S_{1/2}$ & & -1204 & -1301 & -1321 & -1411 & \\ $2s2p^2\ ^2\!P_{1/2}$ & & -1323 & -1410 & -1471 & -1531 & \\ $2s2p^2\ ^2\!P_{3/2}$ & & -1320 & -1407 & -1468 && \\ \\ & & \multicolumn{5}{c}{$q$ (cm$^{-1}$)} \\ $2s2p^2\ ^4\!P_{1/2}$ & & 132 & 132 & 132 && \\ $2s2p^2\ ^4\!P_{3/2}$ & & 157 & 158 & 158 && \\ $2s2p^2\ ^4\!P_{5/2}$ & & 200 & 202 & 202 && \\ $2s2p^2\ ^2\!D_{5/2}$ & & 179 & 181 & 181 && 179 (20)\\ $2s2p^2\ ^2\!D_{3/2}$ & & 176 & 178 & 178 && 176 (20)\\ $2s2p^2\ ^2\!S_{1/2}$ & & 165 & 167 & 168 && 161 (20)\\ $2s2p^2\ ^2\!P_{1/2}$ & & 162 & 163 & 163 && \\ $2s2p^2\ ^2\!P_{3/2}$ & & 215 & 217 & 217 && \\ \hline \hline \end{tabular} \footnotetext[1]{J\"onsson \emph{et~al.}, 1996 \cite{jonsson96jpb}} \footnotetext[2]{Berengut \emph{et~al.}, 2004 \cite{berengut04praB}} \end{table*} \subsection{\label{sec:CI} C\scaps{I}} In \Tref{tab:CI_energies} we present energies for transitions in neutral carbon. The ground state of C\scaps{I} is $2s^22p^2\ ^3P_0$. 
We used the $\mathcal{B}_2$ basis of size $16spdf$, and took all single and double excitations from several leading configurations. The energies obtained for this atom are not as good as those of the other ions. It is testimony to the power of the B-spline basis, however, that the levels appear in the correct order (with the exception of some fine structure), which is remarkable considering that the spectrum is very dense. \begin{table*}[htbp] \caption{\label{tab:CI_energies} Comparison of energies and $q$-values for transitions from the ground state in C\scaps{I} using various methods. Note that the MCHF--CI results are non-relativistic and hence do not distinguish fine-structure.} \begin{tabular}{lccccccc} \hline \hline Level & \multicolumn{5}{c}{Energy (cm$^{-1}$)} & \multicolumn{2}{c}{$q$ (cm$^{-1}$)} \\ & (expt.) & CI & CI + $\Sigma^{(1)}$ & CI + $\Sigma^{(1,2)}$ & MCHF--CI & CI + $\Sigma^{(1,2)}$ & CI\footnotemark[1] \\ \hline $2s^22p^2\ ^1\!S_0$ & 21648 & 22140 & 22213 & 22335 & 21753\footnotemark[2] & 38 & \\ $2s2p^3\ ^5\!S^o_2$ & 33735 & 33529 & 33234 & 33240 & 33498\footnotemark[2] & 146 & \\ $2s^22p3s\ ^3\!P^o_0$ & 60333 & 59806 & 60182 & 60144 & & -47 & \\ $2s^22p3s\ ^3\!P^o_1$ & 60353 & 59826 & 60202 & 60164 & & -24 & \\ $2s^22p3s\ ^3\!P^o_2$ & 60393 & 59866 & 60243 & 60206 & & 26 & \\ $2s^22p3s\ ^1\!P^o_1$ & 61982 & 61587 & 61975 & 61911 & 62002\footnotemark[2] & 1 &\\ $2s2p^3\ ^3\!D^o_3$ & 64087 & 64773 & 64628 & 64562 & & 144 & 151(60) \\ $2s2p^3\ ^3\!D^o_1$ & 64090 & 64762 & 64617 & 64551 & 64036\footnotemark[3] & 137 & 141(60) \\ $2s2p^3\ ^3\!D^o_2$ & 64091 & 64766 & 64622 & 64555 & & 140 & 145(60) \\ $2s2p^3\ ^3\!P^o_1$ & 75254 & 76209 & 76196 & 76153 & & 117 & 111(60) \\ $2s2p^3\ ^3\!P^o_2$ & 75255 & 76214 & 76202 & 76158 & & 121 & \\ $2s2p^3\ ^3\!P^o_0$ & 75256 & 76207 & 76194 & 76151 & & 115 & \\ $2s^22p3d\ ^1\!D^o_2$ & 77680 & 79297 & 79724 & 79643 & & 7 & \\ $2s^22p4s\ ^3\!P^o_0$ & 78105 & 79737 & 80152 & 80076 & & -33 & \\ $2s^22p4s\ 
^3\!P^o_1$ & 78117 & 79750 & 80166 & 80090 & & -21 & \\ $2s^22p4s\ ^3\!P^o_2$ & 78148 & 79787 & 80205 & 80130 & & 24 & \\ $2s^22p3d\ ^3\!F^o_2$ & 78199 & 79845 & 80271 & 80193 & & -31 & \\ $2s^22p3d\ ^3\!F^o_3$ & 78216 & 79862 & 80289 & 80211 & & -18 & \\ $2s^22p3d\ ^3\!D^o_1$ & 78293 & 79937 & 80354 & 80275 & & -13 & \\ $2s^22p3d\ ^3\!D^o_2$ & 78308 & 79954 & 80371 & 80293 & & 13 & \\ $2s^22p3d\ ^3\!D^o_3$ & 78318 & 79966 & 80385 & 80306 & & 29 & \\ $2s^22p4s\ ^1\!P^o_1$ & 78340 & 79983 & 80403 & 80327 & & 17 & \\ $2s^22p3d\ ^1\!F^o_3$ & 78530 & 80802 & 80626 & 80549 & & 15 & \\ $2s^22p3d\ ^1\!P^o_1$ & 78731 & 80402 & 80822 & 80746 & & 12 & \\ $2s^22p3d\ ^3\!P^o_2$ & 79311 & 80957 & 81307 & 81225 & & 18 & \\ $2s^22p3d\ ^3\!P^o_1$ & 79319 & 80967 & 81318 & 81237 & & 30 & \\ $2s^22p3d\ ^3\!P^o_0$ & 79323 & 80972 & 81323 & 81242 & & 35 & \\ $2s^22p4d\ ^1\!D^o_2$ & 83498 & 86521 & 86942 & 86858 & & 8 & \\ \hline \hline \end{tabular} \footnotetext[1]{Berengut \emph{et~al.}, 2004 \cite{berengut04praB}} \footnotetext[2]{Carlsson \emph{et~al.}, 1995 \cite{carlsson95jpb}} \footnotetext[3]{J\"onsson \emph{et~al.}, 1996 \cite{jonsson96jpb}} \end{table*} We have generated $q$-values for C\scaps{I} because the previous calculations (\Cite{berengut04praB}) had large uncertainties. We believe the new values, presented in \Tref{tab:CI_energies}, have errors of around 10~cm$^{-1}$. In \Tref{tab:CI_SMS} we present the SMS constants for C\scaps{I}. We are within 1\% of the values obtained using the MCHF--CI method (Refs.~\cite{carlsson95jpb} and \cite{jonsson96jpb}). For most transitions the effect of core-correlations on $k_{\rm SMS}$ is around 1 or 2\%, however in some cases they are larger (for example, in $2s2p^3\ ^3\!P^o$ the core correlations are 8\% of the total). \begin{table*}[htbp] \caption{\label{tab:CI_SMS} Comparison of $k_{\rm SMS}$ for transitions from the ground state in C\scaps{I} using various methods. 
Note that the MCHF--CI results are non-relativistic and hence do not distinguish fine-structure.} \begin{tabular}{lccccc} \hline \hline Level & Energy & \multicolumn{4}{c}{$k_{\rm SMS}$ (GHz$\cdot$amu)} \\ & (expt.) & CI & CI + $\Sigma^{(1)}$ & CI + $\Sigma^{(1,2)}$ & MCHF--CI \\ \hline $2s^22p^2\ ^1\!S_0$ & 21648 & 186 & 180 & 191 & 152\footnotemark[1] \\ $2s2p^3\ ^5\!S^o_2$ & 33735 & -2540 & -2579 & -2588 & -2583\footnotemark[1] \\ $2s^22p3s\ ^3\!P^o_0$ & 60333 & 1405 & 1405 & 1419 & \\ $2s^22p3s\ ^3\!P^o_1$ & 60353 & 1406 & 1406 & 1420 & \\ $2s^22p3s\ ^3\!P^o_2$ & 60393 & 1408 & 1408 & 1422 & \\ $2s^22p3s\ ^1\!P^o_1$ & 61982 & 1549 & 1551 & 1559 & 1553\footnotemark[1] \\ $2s2p^3\ ^3\!D^o_3$ & 64087 & -2165 & -2224 & -2227 & \\ $2s2p^3\ ^3\!D^o_1$ & 64090 & -2165 & -2224 & -2227 & -2222\footnotemark[2] \\ $2s2p^3\ ^3\!D^o_2$ & 64091 & -2165 & -2224 & -2227 & \\ $2s2p^3\ ^3\!P^o_1$ & 75254 & -1272 & -1390 & -1392 & \\ $2s2p^3\ ^3\!P^o_2$ & 75255 & -1272 & -1389 & -1391 & \\ $2s2p^3\ ^3\!P^o_0$ & 75256 & -1271 & -1390 & -1392 & \\ $2s^22p3d\ ^1\!D^o_2$ & 77680 & 1334 & 1320 & 1331 & \\ $2s^22p4s\ ^3\!P^o_0$ & 78105 & 1398 & 1392 & 1407 & \\ $2s^22p4s\ ^3\!P^o_1$ & 78117 & 1404 & 1397 & 1412 & \\ $2s^22p4s\ ^3\!P^o_2$ & 78148 & 1415 & 1408 & 1422 & \\ $2s^22p3d\ ^3\!F^o_2$ & 78199 & 1378 & 1368 & 1381 & \\ $2s^22p3d\ ^3\!F^o_3$ & 78216 & 1381 & 1372 & 1384 & \\ $2s^22p3d\ ^3\!D^o_1$ & 78293 & 1430 & 1422 & 1434 & \\ $2s^22p3d\ ^3\!D^o_2$ & 78308 & 1429 & 1421 & 1432 & \\ $2s^22p3d\ ^3\!D^o_3$ & 78318 & 1430 & 1420 & 1432 & \\ $2s^22p4s\ ^1\!P^o_1$ & 78340 & 1443 & 1435 & 1446 & \\ $2s^22p3d\ ^1\!F^o_3$ & 78530 & 1451 & 1440 & 1452 & \\ $2s^22p3d\ ^1\!P^o_1$ & 78731 & 1436 & 1426 & 1438 & \\ $2s^22p3d\ ^3\!P^o_2$ & 79311 & 948 & 998 & 1010 & \\ $2s^22p3d\ ^3\!P^o_1$ & 79319 & 956 & 1006 & 1018 & \\ $2s^22p3d\ ^3\!P^o_0$ & 79323 & 960 & 1009 & 1021 & \\ $2s^22p4d\ ^1\!D^o_2$ & 83498 & 1277 & 1258 & 1268 & \\ \hline \hline \end{tabular} \footnotetext[1]{Carlsson 
\emph{et~al.}, 1996 \cite{jonsson96jpb}} \end{table*} \section{Conclusions} In this paper we have presented, in detail, a method for calculating isotope shifts and relativistic shifts in atomic spectra. The method uses the finite-field approach to reduce the problem to an energy calculation, which is carried out using CI for the valence electrons combined with MBPT for the core correlations. Having previously tested the method in magnesium, we have now applied it to transitions in neutral carbon, C\scaps{II}, C\scaps{III}, and C\scaps{IV}. In all cases we have agreement with previous MCHF and MCHF--CI calculations to within a few percent. In \Tref{tab:experiment} we compare our calculations with the few experiments that exist for carbon ions; in all cases agreement is within around 0.005~cm$^{-1}$, which corresponds to an error in $k_{\rm SMS}$ of around 20~GHz$\cdot$amu. \begin{table}[htbp] \caption{\label{tab:experiment} Comparison of calculated $^{13}$C -- $^{12}$C isotope shifts with experiment.} \begin{tabular}{llccc} \hline \hline \multicolumn{2}{c}{Transition}& $\lambda$ & \multicolumn{2}{c}{$\delta \nu^{13, 12}$ (cm$^{-1}$)} \\ Lower Level & Upper Level & (\AA) & Expt. 
& This work \\ \hline C\scaps{I} \\ $2s^22p^2\ ^3\!P_2$ & $2s 2p^3\ ^5\!S_2^o$ & 2967 & 0.670(5)\footnotemark[1] & 0.674\footnotemark[2] \\ $2s^22p^2\ ^1\!S_0$ & $2s^22p3s\ ^1\!P_1^o$ & 2479 & -0.156(3)\footnotemark[3] & -0.151 \\ & & & -0.156(2)\footnotemark[4] & \\ C\scaps{II} \\ $2s2p^2\ ^2\!S_{1/2}$ & $2s^23p\ ^2\!P_{3/2}^o$ & 2837 & -0.612(2)\footnotemark[3] & -0.617 \\ $2s2p^2\ ^2\!S_{1/2}$ & $2s^23p\ ^2\!P_{1/2}^o$ & 2838 & -0.623(3)\footnotemark[3] & -0.617 \\ \hline \hline \end{tabular} \footnotetext[1]{Bernheim and Kittrell, 1980 \cite{bernheim80sab}} \footnotetext[2]{Actually, the $2s^22p^2\ ^3\!P_0$ -- $2s 2p^3\ ^5\!S_2^o$ transition was calculated.} \footnotetext[3]{Burnett, 1950 \cite{burnett50prev}} \footnotetext[4]{Holmes, 1951 \cite{holmes51osa}} \end{table} In \Tref{tab:results} we present total isotope shifts for some important transitions. These transitions can be observed in quasar absorption spectra, and can therefore be used to probe variation of $\alpha$ and isotope abundance evolution. The results are presented both in GHz and km/sec: the latter is the preferred form for use in astronomy. \begin{table*}[htbp] \caption{\label{tab:results} Total $^{13}$C -- $^{12}$C and $^{14}$C -- $^{12}$C isotope shifts for important transitions. 
We believe the errors are of the order of 0.1 GHz.} \begin{tabular}{llccccc} \hline \hline \multicolumn{2}{c}{Transition} & $\lambda$ & \multicolumn{2}{c}{$\delta \nu^{13, 12}$} & \multicolumn{2}{c}{$\delta \nu^{14, 12}$} \\ Ground State & Upper Level & (\AA)& (GHz) & (km/sec)\footnotemark[1] & (GHz) & (km/sec)\footnotemark[1] \\ \hline C\scaps{I} \\ $2s^2 2p^2\ ^3\!P_0$ & $2s^2 2p 3s\ ^3\!P_1^o$ & 1657 & -2.75 & 0.46 & -5.09 & 0.84 \\ & $2s 2p^3 \ ^3\!D_1^o$ & 1560 & 21.10 & -3.29 & 39.12 & -6.10 \\ & $2s 2p^3 \ ^3\!P_1^o$ & 1329 & 16.91 & -2.25 & 31.34 & -4.17 \\ & $2s^2 2p 4s\ ^3\!P_1^o$ & 1280 & -0.82 & 0.10 & -1.51 & 0.19 \\ & $2s^2 2p 3d\ ^3\!D_1^o$ & 1277 & -0.94 & 0.12 & -1.75 & 0.22 \\ & $2s^2 2p 4s\ ^1\!P_1^o$ & 1276 & -1.01 & 0.13 & -1.88 & 0.24 \\ & $2s^2 2p 3d\ ^3\!P_1^o$ & 1261 & 1.84 & -0.23 & 3.42 & -0.43 \\ \\ C\scaps{II} \\ $2s^2 2p\ ^2\!P_{1/2}^o$ & $2s 2p^2\ ^2\!D_{3/2}$ & 1336 & 25.10 & -3.35 & 46.53 & -6.21 \\ & $2s 2p^2\ ^2\!D_{5/2}$ & 1336 & 25.10 & -3.35 & 46.54 & -6.21 \\ & $2s 2p^2\ ^2\!S_{1/2}$ & 1037 & 18.70 & -1.94 & 34.66 & -3.59 \\ \\ C\scaps{III} \\ $2s^2\ ^1\!S_0$ & $2s2p\ ^1\!P_1^o$ & 977 & 28.76 & -2.81 & 53.33 & -5.21 \\ \\ C\scaps{IV} \\ $2s\ ^2\!S_{1/2}$ & $2p\ ^2\!P_{1/2}^o$ & 1551 & 35.89 & -5.57 & 66.54 & -10.32 \\ & $2p\ ^2\!P_{3/2}^o$ & 1548 & 35.79 & -5.54 & 66.35 & -10.27 \\ \hline \hline \end{tabular} \footnotetext[1]{$\delta \nu = \delta \lambda / \lambda \times c$ (km/sec). A negative velocity means that $^{14}$C (and $^{13}$C) are at lower wavelength than $^{12}$C.} \end{table*} \section{Acknowledgements} This work is supported by the Australian Research Council; Department of Energy, Office of Nuclear Physics, Contract No. W-31-109-ENG-38; Gordon Godfrey fund; and Russian Foundation for Basic Research, grant No.~05-02-16914. The authors would like to thank V. A. Dzuba for useful discussions. We are grateful to the APAC National Facility for providing computer time.
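As a rough consistency check, the entries of \Tref{tab:results} can be reassembled from the tabulated constants. The sketch below assumes the conventions $\delta\nu^{M',M} = (k_{\rm NMS}+k_{\rm SMS})\,(1/M' - 1/M)$ with $k_{\rm NMS} = -\nu\, m_e/u$ (i.e. $-\nu/1822.9$ in GHz$\cdot$amu); the masses and this normal-mass-shift convention are assumptions of the sketch, while $k_{\rm SMS}$ is the full CI value from \Tref{tab:CIV}:

```python
# Rough cross-check of the C IV 1551 A entry of the results table,
# assuming delta_nu = (k_NMS + k_SMS)(1/M' - 1/M) with
# k_NMS = -nu * m_e/u (GHz amu).  Masses and conventions are assumptions.
C_CM = 2.99792458e10          # speed of light (cm/s)
M12, M13 = 12.0, 13.003355    # atomic masses of 12C, 13C (amu)

E_cm = 64484.0                # 2s - 2p_1/2 transition energy (cm^-1)
nu_GHz = E_cm * C_CM / 1e9    # transition frequency, ~1.93e6 GHz
k_nms = -nu_GHz / 1822.888    # normal mass shift constant (GHz amu)
k_sms = -4521.0               # full CI value from the C IV table (GHz amu)

dnu = (k_nms + k_sms) * (1.0 / M13 - 1.0 / M12)  # isotope shift (GHz)
dv = -dnu / nu_GHz * C_CM / 1e5                  # same shift in km/s
print(f"{dnu:.2f} GHz, {dv:.2f} km/s")           # close to 35.89 GHz, -5.57 km/s
```

The agreement with the tabulated values (35.89 GHz, $-5.57$ km/s) is at the level of the rounding of the constants used here.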
\section{Introduction} The goal of this paper is to present a fully relativistic calculation of the two-particle two-hole (2p-2h) contributions to the inclusive $(e,e')$ response functions of nuclei for intermediate to high momentum transfers in a Fermi gas model. Consistency with perturbation theory is maintained and all diagrams with one-pion exchange in the nuclear current are considered, constructed by attaching a photon to all possible lines in the basic one-pion exchange Feynman diagram. In this way, not only do meson-exchange currents (MEC) arise (for example, where the photon is attached to the pion), but so do pionic correlation diagrams, where the virtual photon is absorbed by one of the two interacting nucleons. Both kinds of diagrams are considered in our model, together with the usual virtual $\Delta$-isobar electroexcitation and decay. We are motivated by previous work presented in~\cite{DePace03,DePace04}, where only the MEC were included in the 2p-2h transverse ($T$) response, together with earlier work both in the non-relativistic~\cite{Van80} and relativistic~\cite{Dek91,Dek92,Dek94,Dek95} regimes. The contribution found from the 2p-2h excitations is small at the quasielastic (QE) peak and increases with energy transfer, becoming more important in the dip region, where it is dominated by the $\Delta$ current. At the non-relativistic level, attempts were also made to evaluate the 2p-2h contribution of the MEC to the $T$ response for finite nuclei in a shell model~\cite{Ama93,Ama94}. The MEC are not the only two-body operators able to induce 2p-2h excitations. The correlation operators, arising from Feynman diagrams where the photon is attached to a nucleon line that exchanges a pion with another nucleon, are of the same order as the MEC in the perturbative expansion and should be included for consistency~\cite{Alb84,Alb91,Gil97}. These diagrams, however, present the problem of giving an infinite answer in a Fermi gas model. 
The reason is that there is a nucleon propagator that can be on-shell in the region of the quasielastic peak. Since the response function is the square of the amplitude, the resulting double pole gives an infinite result after integration. To deal with this problem, the prescription followed in~\cite{Alb84} was to keep the lines with a nucleon propagator strictly off the mass shell. A different approach was taken in~\cite{Alb91} by subtracting from the proper self-energy its value on the mass shell, with the unphysical shortcoming of obtaining negative results for the 2p-2h responses to the left of the QE peak. Finally, in \cite{Gil97} a nucleon self-energy in the medium was introduced in the nucleon propagator. In dealing with the seven-dimensional integrals appearing in the 2p-2h responses, some of the previous calculations have resorted to the approximation of setting both hole momenta equal to zero in some of the diagrams~\cite{Alb84}, or of taking into account only an average nucleon momentum~\cite{Gil97}. In this work we revisit the double-pole problem to analyze the nature of the divergence of the resulting contributions. By isolating the divergent terms we are able to link them to the infinite extension of the Fermi gas system. In fact, the double-pole term can be related to the probability of one-nucleon emission followed by nucleon re-scattering off another nucleon, with the final ejection of two particles. This probability is infinite, since it is proportional to the propagation time of a real nucleon in a Fermi gas. This fact was pointed out in \cite{Gil97}, where it was cured, as mentioned above, by introducing a nucleon self-energy with an imaginary part giving it a finite lifetime for collisions. In this paper we use a similar procedure by introducing a finite imaginary part $i\epsilon$ in the nucleon propagator, but with a new meaning for the free parameter $\epsilon$. 
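The nature of this divergence can be seen in a one-dimensional caricature: the squared modulus of a regulated propagator, $|1/(x+i\epsilon)|^2 = 1/(x^2+\epsilon^2)$, integrates to $\pi/\epsilon$, which grows without bound as the regulator is removed. A self-contained numerical illustration (this is not the actual seven-dimensional response integral, and the values of $\epsilon$ below are arbitrary):

```python
import numpy as np

# |1/(x + i*eps)|^2 = 1/(x^2 + eps^2): the double pole of an on-shell
# nucleon propagator squared.  Its integral scales as pi/eps, so the
# unregulated (eps -> 0) Fermi-gas result is infinite.
def regulated_norm(eps, cutoff=1e4, n=2_000_001):
    x, dx = np.linspace(-cutoff, cutoff, n, retstep=True)
    return np.sum(1.0 / (x**2 + eps**2)) * dx

for eps in (200.0, 100.0, 50.0):   # regulator values (MeV, say)
    print(eps, regulated_norm(eps), np.pi / eps)
# halving eps roughly doubles the integral
```

Halving $\epsilon$ doubles the integral: this is the linear growth regulated in the present model by the finite propagation time.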
Instead of being an imaginary part of the nucleon self-energy for collisions, we relate it to the time $T$ that a nucleon can travel across the nucleus before leaving it. Hence this term accounts for the finite size of a real nucleus, in contrast to an infinite system like the Fermi gas, where $T$ is infinite. The value of $\epsilon$ can be estimated to be roughly 200 MeV, appreciably larger than the usual values of the nucleon width for collisions. The structure of this work is as follows. In Sect.~II we present our model and define the 2p-2h response functions and the two-body current operators. We discuss in depth the divergence of the correlation diagrams and the need to introduce the parameter $\epsilon$ in Sect.~III (details of the numerical calculation are given in the appendices). In Sect.~IV we present results for the 2p-2h longitudinal and transverse response functions. In the case of the correlation diagrams we present results for several values of the parameter $\epsilon$. Finally, in Sect.~V we present our conclusions. \section{Model for 2p-2h response functions} We consider an electron that scatters off a nucleus, transferring four-momentum $Q^{\mu}=(\omega,{\bf q})$, with $\omega$ the energy transfer and ${\bf q}$ the momentum transfer. We follow closely the notation of \cite{Ama02}. Assuming plane waves for the electron, working in the laboratory system and taking the $z$ direction along the momentum transfer, the inclusive cross section is written as \begin{equation} \frac{d\sigma}{d\Omega'_e d\omega} =\sigma_M \left[ v_L R_L(q,\omega) +v_T R_T(q,\omega) \right] \, , \end{equation} where $\sigma_M$ is the Mott cross section, $v_L$ and $v_T$ are the lepton kinematic factors, and the relevant quantities are the longitudinal $R_L(q,\omega)$ and transverse $R_T(q,\omega)$ response functions, respectively. 
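For orientation, the assembly of the cross section from the two responses can be put in a few lines of code. The sketch below assumes ultrarelativistic electrons and the standard kinematic factors $v_L = (Q^2/q^2)^2$ and $v_T = \tan^2(\theta/2) - Q^2/2q^2$, with $Q^2 = \omega^2 - q^2 < 0$; the numerical inputs and the unit response values are placeholders, not model output:

```python
import math

ALPHA = 1.0 / 137.036          # fine-structure constant

def inclusive_xs(E, theta, omega, R_L, R_T):
    """d(sigma)/dOmega'/d(omega) = sigma_M [v_L R_L + v_T R_T].
    E, omega in MeV, theta in radians; the electron mass is neglected."""
    Ep = E - omega                                  # scattered electron energy
    q2 = E**2 + Ep**2 - 2.0*E*Ep*math.cos(theta)   # |q|^2 from kinematics
    Q2 = omega**2 - q2                             # four-momentum transfer squared (< 0)
    mott = (ALPHA * math.cos(theta/2.0))**2 / (2.0*E*math.sin(theta/2.0)**2)**2
    vL = (Q2 / q2)**2
    vT = math.tan(theta/2.0)**2 - Q2 / (2.0*q2)
    return mott * (vL * R_L + vT * R_T)

# placeholder responses, purely illustrative
print(inclusive_xs(E=1000.0, theta=math.radians(30.0), omega=300.0,
                   R_L=1.0, R_T=1.0))
```

In practice, $R_L$ and $R_T$ are of course the computed response functions; everything else here is elementary electron kinematics.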
These are defined as the following components of the hadronic tensor, \begin{eqnarray} R_L&=& W^{00}\\ R_T&=& W^{11}+W^{22} \, , \end{eqnarray} where \begin{equation} W^{\mu\nu}=\sum_f \langle f |J^{\mu}(Q)|i\rangle^* \langle f |J^{\nu}(Q)|i\rangle \delta(E_i+\omega-E_f) \end{equation} and $J^{\mu}(Q)$ is the nuclear current operator. In this paper we take the initial nuclear state as the relativistic Fermi gas (RFG) model ground state, $|i\rangle=|F\rangle$, with all states with momenta below the Fermi momentum $k_F$ occupied. The sum over final states can be decomposed as the sum of one-particle one-hole (1p-1h) plus two-particle two-hole (2p-2h) excitations plus additional channels. In the impulse approximation the 1p-1h channel gives the well-known response functions of the RFG. Here we focus on the 2p-2h channel where the final states are of the type $|f\rangle=|{\bf p}'_1 s'_1, {\bf p}'_2 s'_2, {\bf h}_1^{-1} s_1, {\bf h}_2^{-1} s_2 \rangle$, where ${\bf p}'_i$ are momenta of relativistic final nucleons above the Fermi sea, $p'_i>k_F$, with four-momenta $P'_i=(E'_i,{\bf p}'_i)$, and $H_i=(E_i,{\bf h}_i)$ are the four-momenta of the hole states with $h_i<k_F$. The spin indices are $s'_i$ and $s_i$. \subsection{2p-2h Response functions} Since we have two species of nucleons, the 2p-2h responses can be further decomposed as the sum of two-proton (PP), two-neutron (NN) and proton-neutron (PN) emission \begin{equation} R_K=R_K(PP)+R_K(NN)+R_K(PN) \, . \end{equation} For the PP channel we write down the L response as (likewise for the T response): \begin{eqnarray} R_L(PP)&=& \nonumber \\ &&\kern -1cm \frac14 \sum_{{\bf p}'_1s'_1} \sum_{{\bf p}'_2s'_2} \sum_{{\bf h}_1s_1} \sum_{{\bf h}_2s_2} \left| \langle {\bf p}'_1{\bf p}'_2{\bf h}_1^{-1}{\bf h}_2^{-1}|J^0(Q)|F\rangle \right|^2 \nonumber \\ && \kern -1cm \mbox{} \times\delta(E'_1+E'_2-\omega-E_1-E_2) \, , \end{eqnarray} where the spin indices are implicit in the matrix elements. 
The factor $\frac14$ comes from anti-symmetry of the wave functions, to avoid double counting of the final states under the interchange of the indices $1'\leftrightarrow 2'$ and $1\leftrightarrow 2$. Exploiting the anti-symmetry, the many-body matrix element of a two-body operator can be written as the direct minus exchange part of the two-body current matrix element \[ \langle{\bf p}'_1{\bf p}'_2{\bf h}_1^{-1}{\bf h}_2^{-1}|J^\mu|F\rangle= \langle{\bf p}'_1{\bf p}'_2 |J^\mu|{\bf h}_1{\bf h}_2\rangle -\langle{\bf p}'_1{\bf p}'_2 |J^\mu|{\bf h}_2{\bf h}_1\rangle \, , \] which we write in terms of the two-body current function $j^{\mu}({\bf p}'_1,{\bf p}'_2,{\bf h}_1,{\bf h}_2)$ to be specified below, \begin{eqnarray} &&\langle{\bf p}'_1{\bf p}'_2 |J^\mu|{\bf h}_1{\bf h}_2\rangle= (2\pi)^3\delta({\bf p}'_1+{\bf p}'_2-{\bf h}_1-{\bf h}_2-{\bf q}) \nonumber\\ &&\times\frac{m^2}{V^2(E_1E_2E'_1E'_2)^{1/2}} j^{\mu}({\bf p}'_1,{\bf p}'_2,{\bf h}_1,{\bf h}_2). \end{eqnarray} Going to the thermodynamic limit and integrating over the momentum ${\bf p}'_2$ we obtain \begin{eqnarray} &&R_L(PP)= \frac{V}{4} \sum_{s'_1s'_2s_1s_2} \int \frac{d^3p'_1}{(2\pi)^3} \frac{d^3h_1}{(2\pi)^3} \frac{d^3h_2}{(2\pi)^3} \nonumber\\ &&\times \frac{m^4}{E_1E_2E'_1E'_2} \left|j^{0}({\bf p}'_1,{\bf p}'_2,{\bf h}_1,{\bf h}_2)_A\right|^2 \nonumber\\ &&\mbox{}\times\delta(E'_1+E'_2-\omega-E_1-E_2) \theta(p'_2-k_F)\, , \label{integral} \end{eqnarray} where ${\bf p}'_2={\bf h}_1+{\bf h}_2+{\bf q}-{\bf p}'_1$, and the integration limits are $h_1,h_2<k_F$, $p'_1>k_F$. We have defined the anti-symmetrized current function \[ j^{\mu}(1',2',1,2)_A\equiv j^{\mu}(1',2',1,2)- j^{\mu}(1',2',2,1) \] \begin{figure}[tph] \begin{center} \includegraphics[scale=0.65, bb= 130 470 490 690]{diagmec.ps} \caption{ MEC diagrams considered in the present study. Diagrams (a,b) correspond to the seagull, (c) to the pionic, and (d-g) to the $\Delta$ current, respectively. 
} \label{Fig1} \end{center} \end{figure} with obvious meaning for the abbreviated arguments. Expanding the square inside the integral in Eq.~(\ref{integral}), three terms are obtained: \begin{eqnarray} \left|j^{\mu}(1',2',1,2)_A\right|^2&=& \left|j^{\mu}(1',2',1,2)\right|^2 +\left|j^{\mu}(1',2',2,1)\right|^2 \nonumber\\ &-& 2{\rm Re}\ j^{\mu}(1',2',2,1)^*j^{\mu}(1',2',1,2)\, . \nonumber\\ \end{eqnarray} Changing variables $1\leftrightarrow 2$ in the second term under the integral, we obtain the first term again. Hence we can finally write for the PP response \begin{eqnarray} R_L(PP)&=& \frac{V}{2} \sum_{s'_1s'_2s_1s_2} \int \frac{d^3p'_1}{(2\pi)^3} \frac{d^3h_1}{(2\pi)^3} \frac{d^3h_2}{(2\pi)^3} \nonumber\\ && \kern -1cm \times \frac{m^4}{E_1E_2E'_1E'_2} \left[ \left|j^{0}({\bf p}'_1,{\bf p}'_2,{\bf h}_1,{\bf h}_2)\right|^2 \right. \nonumber\\ && \kern -1cm \left. -{\rm Re}\ j^{0}({\bf p}'_1,{\bf p}'_2,{\bf h}_1,{\bf h}_2)^* j^{0}({\bf p}'_1,{\bf p}'_2,{\bf h}_2,{\bf h}_1) \right] \nonumber\\ && \kern -1cm \mbox{}\times\delta(E'_1+E'_2-\omega-E_1-E_2) \theta(p'_2-k_F) \,. \label{RL} \end{eqnarray} Note that the factor $\frac12$ in front of the sum comes from the anti-symmetry of the particles (protons). A similar expression is obtained for the NN response $R_L(NN)$. In the case of the PN channel we subtract the charge exchange contribution without any symmetry term because there are no additional isospin sums, and the result is \begin{eqnarray} &&R_L(PN)= V \sum_{s'_1s'_2s_1s_2} \int \frac{d^3p'_1}{(2\pi)^3} \frac{d^3h_1}{(2\pi)^3} \frac{d^3h_2}{(2\pi)^3} \nonumber\\ &&\times \frac{m^4}{E_1E_2E'_1E'_2} \left| \langle PN|j^{0}({\bf p}'_1,{\bf p}'_2,{\bf h}_1,{\bf h}_2)|PN\rangle \right. \nonumber\\ && \left. -\langle NP|j^{0}({\bf p}'_1,{\bf p}'_2,{\bf h}_2,{\bf h}_1)|PN\rangle \right|^2 \nonumber\\ &&\mbox{}\times\delta(E'_1+E'_2-\omega-E_1-E_2) \theta(p'_2-k_F) \, . 
\label{RLpn} \end{eqnarray} Finally, note that the 2p-2h response is proportional to the volume of the system V which is related to the number of particles ${\cal N}$ (protons or neutrons) by $V=3\pi^2 {\cal N}/k_F^3$. \subsection{Two-body current matrix elements} \begin{figure}[tph] \begin{center} \includegraphics[scale=0.65, bb= 130 580 490 690]{diagcor.ps} \caption{ Correlation diagrams considered in the present study. Diagrams (a,b) correspond to the forward, and (c-d) backward contributions, respectively. } \label{Fig2} \end{center} \end{figure} The MEC considered in this work are represented by the Feynman diagrams of Fig.~1. The pionic four-momenta $K_1$, $K_2$ are defined via $K^\mu_i=P'_i{}^\mu-H^\mu_i$ as the four-momenta given to the nucleons 1 and 2, respectively, by the exchanged pion. Assuming pseudo-vector nucleon-pion coupling, the fully relativistic two-body current matrix elements are given by~\cite{Ama02,Ama03}: \begin{itemize} \item (a-b) Seagull or contact: \end{itemize} \begin{eqnarray} j^{\mu}_{s}({\bf p}'_1, {\bf p}'_2,{\bf p}_1,{\bf p}_2) &=& \frac{f^2}{m_\pi^2} i\epsilon_{3ab} \overline{u}({\bf p}'_1)\tau_a\gamma_5\not{\!K}_1 u({\bf p}_1) \nonumber\\ &&\kern -3cm \times \frac{F_1^V}{K_1^2-m_\pi^2} \overline{u}({\bf p}'_2)\tau_b\gamma_5\gamma^{\mu}u({\bf p}_2) + (1 \leftrightarrow 2) \,. \label{s} \end{eqnarray} \begin{itemize} \item (c) Pion-in-flight: \end{itemize} \begin{eqnarray} j^{\mu}_{p}({\bf p}'_1, {\bf p}'_2,{\bf p}_1,{\bf p}_2) &=& \frac{f^2}{m_\pi^2} i\epsilon_{3ab} \frac{F_\pi(K_1-K_2)^\mu}{(K_1^2-m_\pi^2)(K_2^2-m_\pi^2)} \nonumber\\ &&\kern -3cm \times \overline{u}({\bf p}'_1)\tau_a\gamma_5\not{\!K}_1 u({\bf p}_1) \overline{u}({\bf p}'_2)\tau_b\gamma_5\not{\!K}_2 u({\bf p}_2) \, . \label{p} \end{eqnarray} In the above we use the Einstein convention for the sum over a repeated isospin index $a=1,2,3$. Moreover, $F_1^V$ and $F_\pi$ are the electromagnetic isovector nucleon and pion form factors, respectively. 
The spinors are normalized according to the Bjorken and Drell convention~\cite{Bjo65} and the pion-nucleon coupling constant is $f^2/4\pi = 0.08$. \begin{itemize} \item (d-g) Delta current: \end{itemize} \begin{eqnarray} j^{\mu}_{\Delta}({\bf p}'_1, {\bf p}'_2,{\bf p}_1,{\bf p}_2) &=& \frac{f_{\pi N\Delta} f}{m_\pi^2} \frac{1}{K_2^2-m_\pi^2} \overline{u}({\bf p}'_1)T_a^\mu(1) u({\bf p}_1) \nonumber\\ &&\kern -2cm \times \overline{u}({\bf p}'_2)\tau_a\gamma_5\not{\!K}_2 u({\bf p}_2) + (1 \leftrightarrow 2) \, . \label{d} \end{eqnarray} The vector $T_a^{\mu}(1)$ is related to the pion electroproduction amplitude \begin{eqnarray} T_a^\mu(1) &=& K_{2,\alpha} \Theta^{\alpha\beta} G^\Delta_{\beta\rho}(H_1+Q) S_f^{\rho \mu}(H_1) T_a T_3^{\dagger} \nonumber\\ &&+ T_3 T_a^{\dagger} S_b^{\mu \rho}(P'_1) G^\Delta_{\rho\beta}(P'_1-Q) \Theta^{\beta\alpha} K_{2,\alpha} \, . \end{eqnarray} The forward $\Delta$ electroexcitation tensor is~\footnote{Note that there is a sign error in Eq.~(15) of \cite{Ama03}.} \begin{eqnarray} S_f^{\rho \mu}(H_1) &=& \Theta^{\rho\mu} \left[ g_1 \not{\!Q} - g_2 H_1\cdot Q +g_3Q^2 \right] \gamma_5 \nonumber\\ &-& \Theta^{\rho\nu}Q_\nu \left[ g_1 \gamma^\mu -g_2 H_1^\mu + g_3Q^\mu \right] \gamma_5 \end{eqnarray} and the backward tensor amplitude is \begin{eqnarray} S_b^{\rho \mu}(P'_1) &=& \gamma_5 \left[ g_1 \not{\!Q} -g_2P'_1\cdot Q -g_3Q^2 \right] \Theta^{\mu\rho} \nonumber\\ &-& \gamma_5 \left[ g_1\gamma^\mu -g_2P'_1{}^{\mu} -g_3 Q^\mu \right] Q_\nu\Theta^{\nu\rho} \, . \end{eqnarray} The tensor $\Theta_{\mu\nu}$ is defined by \begin{equation} \Theta_{\mu\nu}=g_{\mu\nu}-\frac14\gamma_\mu \gamma_\nu \, . 
\label{Theta} \end{equation} For the $\Delta$ propagator we use the usual Rarita-Schwinger (RS) tensor \begin{eqnarray} G^{\Delta}_{\beta\rho}(P) &=& -\frac{ \not{\!P}+m_\Delta}{P^2-m_\Delta^2} \nonumber\\ && \kern -3cm \times \left[ g_{\beta\rho} - \frac{1}{3} \gamma_\beta\gamma_\rho - \frac{2}{3} \frac{P_\beta P_\rho}{m_\Delta^2} - \frac{\gamma_\beta P_\rho - \gamma_\rho P_\beta}{3m_\Delta} \right] \, . \end{eqnarray} In what follows we perform the substitution $m_\Delta\rightarrow m_\Delta+\frac{i}{2}\Gamma(P)$ in the denominator of the propagator to account for the $\Delta$ decay probability. Finally, the electromagnetic coupling constants $g_i$ are given by \begin{equation} g_1=\frac{G_1}{2 m_N}\, , \kern 1cm g_2=\frac{G_2}{4 m_N^2} \, , \kern 1cm g_3=\frac{G_3}{4 m_N^2} \, . \end{equation} Our approach for the $\Delta$ follows, as a particular case, from the more general form of the $\gamma N \Delta$ Lagrangian of Pascalutsa {\it et al.}~\cite{Pas95}. The $\Delta$ coupling constants used here are $G_1=4.2$, $G_2=4$, $G_3=1$, and $f_{\pi N\Delta} = 4\times 0.564$. The correlation current is defined in Fig.~2, and given by \begin{eqnarray} j^{\mu}_{cor}({\bf p}'_1, {\bf p}'_2,{\bf p}_1,{\bf p}_2) &=& \frac{f^2}{m_\pi^2} \overline{u}({\bf p}'_1)\tau_a\gamma_5\not{\!K}_1 u({\bf p}_1) \frac{1}{K_1^2-m_\pi^2} \nonumber\\ & & \kern -2cm \mbox{}\times \overline{u}({\bf p}'_2) \left[ \tau_a\gamma_5\not{\!K}_1 S_F(P_2+Q)\Gamma^\mu(Q) \right. \nonumber\\ &&\kern -1cm \left. + \Gamma^\mu(Q)S_F(P'_2-Q) \tau_a\gamma_5\not{\!K}_1 \right]u({\bf p}_2) \nonumber\\ & & \kern -2cm \mbox{}+ (1\leftrightarrow2)\, , \label{correlation} \end{eqnarray} where $S_F(P)$ is the Feynman propagator for the nucleon \begin{equation} S_F(P) =\frac{\not{\!P} + m}{P^2-m^2+i\epsilon} \end{equation} and $\Gamma^\mu(Q)$ is the electromagnetic nucleon vertex, \begin{equation} \Gamma^\mu(Q) = F_1\gamma^\mu+\frac{i}{2m}F_2\sigma^{\mu\nu}Q_\nu \, . 
\end{equation} The nucleon form factors $F_1$ and $F_2$ are given by the Galster parametrization~\cite{Gal71}. The isospin sums and isospin matrix elements must be performed separately for each isospin channel. Explicit expressions are given in Appendix~A. \section{Divergence of the correlation responses} The response functions computed using the correlation current in Eq.~(\ref{correlation}) are divergent in the Fermi gas. There are two sources for this divergence: the first comes from the double pole of the propagator that appears when the current is squared. This divergence can be shown to behave as $1/\epsilon$ plus principal value terms going as $\log\epsilon$. The second source is related to the principal values arising from the double and single poles near the RFG boundary of the quasielastic peak, where they diverge logarithmically. To illustrate the mathematical structure of this divergence we isolate, as an example, the singularities produced by the diagram of Fig.~2(a). The corresponding current operator can be written as \begin{equation} j^\mu=\frac{l^{\mu}}{E_1+\omega-E_{{\bf h}_1+{\bf q}}+i\epsilon} \, , \end{equation} where $E_{{\bf p}}=\sqrt{m^2+{\bf p}^2}$ is the on-shell energy. We have explicitly extracted the divergent part of the denominator, with a pole for \begin{equation} E_{{\bf h}_1+{\bf q}}=E_1+\omega \label{condition} \end{equation} in the limit $\epsilon\rightarrow 0$. This equation is equivalent to the quasielastic condition for emission of an on-shell nucleon with four-momentum $H_1+Q$. In fact, for a given value of $h_1$, Eq.~(\ref{condition}) holds when the angle between ${\bf h}_1$ and ${\bf q}$ is given by \begin{equation}\label{quasielastic} \cos\theta_1=\frac{Q^2+2E_1\omega}{2h_1q} \, . \end{equation} Since the condition $-1<\cos\theta_1< 1$ delimits the quasielastic peak region, the pole can always be reached inside that region.
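The reach of this pole is easy to check numerically. The following minimal sketch (illustrative only; it assumes MeV units, a nucleon mass of 939 MeV, and the metric convention $Q^2=\omega^2-q^2$ implicit in Eq.~(\ref{quasielastic})) evaluates the quasielastic angle and tests whether the on-shell condition can be satisfied:

```python
import math

M_N = 939.0  # nucleon mass in MeV (assumed value)

def cos_theta_pole(h1, q, omega):
    """Cosine of the angle between h1 and q at which the intermediate
    nucleon of diagram 2(a) goes on shell:
    cos(theta_1) = (Q^2 + 2 E_1 omega) / (2 h_1 q),
    with Q^2 = omega^2 - q^2 and E_1 = sqrt(m^2 + h1^2). MeV units."""
    e1 = math.sqrt(M_N ** 2 + h1 ** 2)
    q2 = omega ** 2 - q ** 2
    return (q2 + 2.0 * e1 * omega) / (2.0 * h1 * q)

def pole_reachable(h1, q, omega):
    """The pole lies inside the angular integration domain when |cos| <= 1."""
    return abs(cos_theta_pole(h1, q, omega)) <= 1.0
```

For instance, for $q=550$ MeV/c and a hole momentum of 200 MeV/c the pole is reachable near the quasielastic peak ($\omega\simeq 150$ MeV) but not at $\omega=500$ MeV, consistent with the statement that the double pole acts only inside the quasielastic region.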
To study the behavior of the response functions due to this pole, it is convenient to change the variable $\theta_1$ to a new variable defined by \begin{eqnarray} x_1\equiv E_1+\omega-E_{{\bf h}_1+{\bf q}} \end{eqnarray} in the integral over ${\bf h}_1$ in Eq.~(\ref{RL}). Then the components of the total current matrix element can be written as a function of $x_1$ in the general form \begin{equation} f(x_1)=\frac{\varphi(x_1)}{x_1+i\epsilon}+g(x_1) \, , \end{equation} where the first term comes from diagram 2(a) and the function $g(x_1)$ comes from the sum of the remaining diagrams, and is finite for $x_1=0$. Since the current appears squared in the response function, we are dealing with the integral of a function of the kind \begin{equation} |f(x_1)|^2=\frac{|\varphi(x_1)|^2}{x_1^2+\epsilon^2}+|g(x_1)|^2 +2 {\rm Re}\ \frac{\varphi^*(x_1)g(x_1)}{x_1-i\epsilon} \, . \label{square} \end{equation} When integrating this function over $x_1$, and taking the limit $\epsilon\rightarrow 0$, the first term has a double pole for $x_1=0$, while the third one has a single pole. To deal with the single pole we use the usual Plemelj relation, \begin{equation} \frac{1}{x+i\epsilon}= {\cal P}\ \frac{1}{x}-i\pi\delta(x) \, . \label{dirac} \end{equation} To apply a similar relation for the double pole term, we add and subtract the on-shell value $|\varphi(0)|^2/(x_1^2+\epsilon^2)$. Taking the limit $\epsilon\rightarrow 0$ we can use relations which are valid for any function $\psi(x)$ \begin{equation} \int^b_{-a} \frac{\psi(x)-\psi(0)}{x^2+\epsilon^2}dx \rightarrow {\cal P}\ \int^b_{-a} \frac{\psi(x)- \psi(0)}{x^2}dx \end{equation} and \begin{equation} \int^b_{-a} \frac{\psi(0)}{x^2+\epsilon^2}dx = \frac{1}{\epsilon} \left[\tan^{-1}\frac{b}{\epsilon}+\tan^{-1}\frac{a}{\epsilon} \right] \psi(0) \sim \frac{\pi}{\epsilon}\psi(0) \, .
\end{equation} Then Eq.~(\ref{square}) can be written in the form \begin{eqnarray} |f(x_1)|^2 &=& {\cal P}\ \frac{|\varphi(x_1)|^2-|\varphi(0)|^2}{x_1^2} +|g(x_1)|^2 \nonumber\\ &+&2 {\cal P}\ \frac{{\rm Re}\ \varphi^*(x_1)g(x_1)}{x_1} -2\pi{\rm Im}\ \varphi^*(0)g(0)\delta(x_1) \nonumber\\ &+& \frac{|\varphi(0)|^2}{\epsilon}\pi\delta(x_1) \, . \label{expansion} \end{eqnarray} The last $O(1/\epsilon)$ term in Eq.~(\ref{expansion}) provides the dominant contribution to the response function, being infinite for $\epsilon\rightarrow 0$. Due to the $\delta$ function, that term does not contribute outside the quasielastic-peak region, where $x_1$ is different from zero. The principal values present in Eq.~(\ref{expansion}) also diverge in the particular case in which one of the limits of integration is zero. In that case the principal value in Eq.~(\ref{dirac}) should be computed instead using \begin{equation} {\cal P}\int_{-a}^b\frac{\psi(x)}{x}dx= \int_{-a}^b\frac{\psi(x)-\psi(0)}{x}dx +\frac12\psi(0)\ln\frac{b^2+\epsilon^2}{a^2+\epsilon^2} \end{equation} and it gives a $\ln\epsilon$ term if $a$ or $b$ is zero. That situation in fact occurs throughout the quasielastic region, and in particular at the boundary of the quasielastic peak. Therefore one expects an additional divergence $\sim O(\ln\epsilon)$. The meaning of the term $\frac{|\varphi(0)|^2}{\epsilon}\pi\delta(x_1)$ is explained in what follows. Diagram 2(a), when the intermediate nucleon is on shell, gives the probability of a 1p-1h electroexcitation times the probability of quasielastic nucleon scattering. Since the interaction probability is proportional to the interaction time $T$, the probability of this re-scattering process is proportional to $T^2$. Therefore the cross section is proportional to $T$. In an infinite system such as the Fermi gas, the intermediate nucleon never leaves the nucleus and therefore $T\rightarrow \infty$. 
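The $\pi/\epsilon$ scaling of the double-pole term derived above can be verified numerically. The sketch below is purely illustrative (the interval limits $a=b=1$ and the midpoint rule are arbitrary choices, not part of the physical calculation): it integrates the Lorentzian $1/(x^2+\epsilon^2)$ and compares with the closed form $[\tan^{-1}(b/\epsilon)+\tan^{-1}(a/\epsilon)]/\epsilon\sim\pi/\epsilon$:

```python
import math

def lorentzian_integral(a, b, eps, n=200001):
    """Midpoint-rule integral of 1/(x^2 + eps^2) over [-a, b]."""
    h = (a + b) / n
    total = 0.0
    for i in range(n):
        x = -a + (i + 0.5) * h
        total += h / (x * x + eps * eps)
    return total

eps = 1.0e-3
approx = lorentzian_integral(1.0, 1.0, eps)
exact = (math.atan(1.0 / eps) + math.atan(1.0 / eps)) / eps
# Both approach pi/eps as eps -> 0, exhibiting the 1/eps divergence.
```

The grid spacing must resolve the Lorentzian width, i.e. $h\ll\epsilon$; with the values above the numerical and closed-form results agree at the per-mille level.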
However, in a finite nucleus one expects no divergence because a high-energy nucleon will leave the nucleus in a finite time. Therefore the interaction time is finite. The relation between $\epsilon$ and $T$ can also be obtained by inspection of the momentum-space propagator in quantum field theory~\cite{Man84}, computed as the vacuum expectation value of time-ordered fermion fields. The value $\epsilon$ in the denominator of the propagator can be seen as a regularization parameter in the Fourier transform of the time step function for a particle with four-momentum $P^\mu=(p_0,{\bf p})$ \begin{equation}\label{propagator} \int_{-T/2}^{T/2} dt \, {\rm e}^{i(p_0-E_{{\bf p}})t}\theta(t) = \frac{i}{p_0-E_{{\bf p}}+i\epsilon} \, , \end{equation} where $T\rightarrow\infty$ and $\epsilon\rightarrow0$. For a real particle, $p_0-E_{{\bf p}}=0$, the left-hand side of the above equation is $T/2$, and the right-hand side is $1/\epsilon$. Therefore \begin{equation} \frac{T}{2}= \frac 1\epsilon \, . \end{equation} Alternatively, this can be obtained by interpreting the on-shell value of the propagator in Eq.~(\ref{propagator}) as a delta function \begin{equation} \frac{1}{\epsilon}= \lim_{p_0\rightarrow E_{{\bf p}}} \frac{i}{p_0-E_{{\bf p}}+i\epsilon} = \pi\delta(0) \end{equation} and using the integral representation \begin{equation} \delta(0) = \lim_{T\rightarrow\infty} \frac{1}{2\pi}\int_{-T/2}^{T/2}dt= \frac{T}{2\pi}. \end{equation} In this paper we cure the divergence of the correlation diagram by a regularization procedure, using a finite value for $\epsilon$ to account for the finite propagation time of a high-energy nucleon in a nucleus before leaving it. To estimate the value of $\epsilon$ for a nucleus such as $^{12}$C, we assume that the nucleon moves at the speed of light and has to cross a distance equal to the nuclear radius $R\sim 2$~fm.
Then \begin{equation}\label{epsilon} \epsilon\simeq \frac{2\hbar}{T} \simeq \frac{2\hbar c}{R}\simeq \frac{400}{2}{\rm MeV}\simeq 200\, {\rm MeV} \, . \end{equation} Note that this value, $\epsilon\simeq 200$ MeV, is very different from the nucleon width $\Gamma \sim 10$ MeV which is usually obtained in nuclear matter as the width for nuclear inelastic interaction. In practice the value of $\epsilon$ can be taken as a parameter to be fitted to data. In the next section we perform a study of the dependence of our results upon $\epsilon$. Unless otherwise specified we assume $\epsilon=200$ MeV. At this point we should mention that the use of Eq.~(\ref{square}) to compute the 2p-2h response functions becomes impractical due to complications in the numerical calculation of principal values in multidimensional integrals including the four diagrams of Fig. 2 (and the corresponding exchange parts). Since we are forced to use a finite value of $\epsilon$, it becomes more convenient to keep from the beginning the $i\epsilon$ term in the denominator of the nucleon propagator in Eq.~(\ref{correlation}). \begin{figure}[tph] \begin{center} \includegraphics[scale=0.75, bb= 220 275 400 770]{figpap03.ps} \caption{ 2p-2h transverse response of $^{56}$Fe at $q=550$ MeV/c. Three values of the parameter $\epsilon$ are shown. Thin solid lines: Correlation only. Dotted lines: MEC only. Thick solid lines: total. Dashed: RFG OB results. } \label{Fig3} \end{center} \end{figure} \begin{figure}[th] \begin{center} \includegraphics[scale=0.75, bb= 220 275 400 770]{figpap04.ps} \caption{ As for Fig. 3, but now at $q=1140$ MeV/c.} \label{Fig4} \end{center} \end{figure} \section{Results} Here we present results for the longitudinal and transverse response functions for inclusive two-particle emission. We compute the 2p-2h response functions in the RFG model as the 9-dimensional integrals given by Eqs.~(\ref{RL},\ref{RLpn}). 
The energy delta function can be used to integrate over $p'_1$, fixing the energy $E'_1$ of the first particle. More details are given in Appendix~B. By rotational invariance, one of the azimuthal angles can be fixed, multiplying at the same time the result by a factor $2\pi$. We choose $\phi'_1=0$. At the end we have a 7-dimensional integration to be performed numerically. The usual procedure is to use a multi-dimensional Monte Carlo (MC) integration. Since the pole structure of the integrand is numerically delicate, in this work we use instead a mixed Monte Carlo-Simpson integration procedure. The Simpson algorithm is used for integration over the angles of the two holes $\theta_1,\theta_2$ and of the first particle $\theta'_1$. The remaining 4-dimensional integral over the hole momenta $h_1,h_2$ and their angles $\phi_1,\phi_2$ is performed by Monte Carlo. To keep the CPU times manageable we use a number of MC points of the order of $10^3$ for $q=1$ GeV/c. For other values of the momentum transfer the number of MC points is modified linearly with $q$. We have performed a study of the stability of the results with the number of MC points and have found that the error from the integration procedure is within a few percent. A pion-nucleon form factor is included in the 2-body currents: $F_{\pi NN}(K_\pi)=(\Lambda^2-m_\pi^2)/(\Lambda^2-K_\pi^2)$, with $\Lambda=1.3$ GeV. We use the same value of $\Lambda$ for the $\pi N\Delta$ form factor in the $\Delta$ current. The electromagnetic form factors are those of Galster for the nucleon, and those used in~\cite{Ama02,Ama03} for the MEC. \begin{figure}[th] \begin{center} \includegraphics[scale=0.75, bb= 220 275 400 770]{figpap05.ps} \caption{ As for Fig. 3, but now for $R_L$.} \label{Fig5} \end{center} \end{figure} \begin{figure}[th] \begin{center} \includegraphics[scale=0.75, bb= 220 275 400 770]{figpap06.ps} \caption{ As for Fig.
3, but now for $R_L$ at $q=1140$ MeV/c.} \label{Fig6} \end{center} \end{figure} To make contact with previous work, we apply our model to compute the 2p-2h longitudinal and transverse response functions for the nucleus $^{56}$Fe, and for momenta $q=550$ and $1140$ MeV/c. The results are presented in Figs.~3--6, where the separate contributions of the correlation and MEC currents to the 2p-2h responses are also shown. The 1p-1h responses produced by the one-body (OB) current in the relativistic Fermi gas (RFG) without interaction are also shown. A critical input for our model is the value of the parameter $\epsilon$ in the nucleon propagator, introduced to cure the divergence of the double pole. To see how the correlation contribution to the responses depends on $\epsilon$ we show results for three different values: $\epsilon=100, 200$ and 300 MeV. For $\epsilon=100$ MeV, the correlation 2p-2h contribution presents a maximum in the region of the quasielastic peak, with a long tail extending to high transferred energies. The maximum is reminiscent of the pole structure of the nucleon propagator, and therefore a resonance appears for kinematics corresponding to the quasielastic condition in Eq.~(\ref{quasielastic}). A shift to higher energies (of the order of $\sim 40$ MeV) is seen in the case of $q=550$ MeV/c (Figs.~3,~5). Indeed for this value of $q$ the phase space for two-particle emission causes a suppression of the low-energy side of the response function. The resonant structure produced by the 2p-2h correlation contribution diminishes significantly with increasing values of the parameter $\epsilon$. Notice that for $\epsilon\geq 200$ MeV there is no maximum located at the QE peak. For an even lower value of the escape width, say $\epsilon = 50$ MeV, the magnitude of the resonant peak is of the same size as the OB response function.
This correction coming from 2p-2h states is clearly too large to be compatible with experimental data, which are already of the order of the 1p-1h response in the region of the QE peak. It should be mentioned that, although the 2p-2h contribution should be added to the 1p-1h one, the latter should first be corrected for final-state interaction (FSI) contributions not included in the bare RFG results shown in the figures. In fact FSI contribute significantly to one-nucleon emission through the coupling of 1p-1h to 2p-2h states in the final nucleus~\cite{Smi88}. These processes involve, in particular, two-pion exchange, and are therefore of the same order as the 2p-2h response in the perturbative series, since the latter is the square of a one-pion-exchange matrix element. The inclusion of such contributions is beyond the scope of the present study. The dependence of the correlation responses on the parameter $\epsilon$ is better appreciated in Figs.~7 and 8, where we show the correlation contribution for the three chosen values of $\epsilon$ in the same plot. In the QE region the height of the responses is approximately halved when $\epsilon$ is doubled. This behavior follows from the leading $1/\epsilon$ dependence in Eq.~(\ref{expansion}), coming from the pole in the propagator. For large $\omega$ the results become more similar, and in the high-energy tail they are almost independent of $\epsilon$. In this region there is no pole in the integrand, and the contribution from the propagator is less sensitive to the precise value of $\epsilon$. \begin{figure}[th] \begin{center} \includegraphics[scale=0.75, bb= 220 490 400 770]{figpap07.ps} \caption{ 2p-2h correlation contribution to the L and T responses of $^{56}$Fe for $q=550$ MeV/c. Three values of the parameter $\epsilon$ are shown. Dotted lines, from top to bottom: $\epsilon=100, 200, 300$ MeV, respectively. Solid lines: RFG one-body responses.
} \label{Fig7} \end{center} \end{figure} \begin{figure}[th] \begin{center} \includegraphics[scale=0.75, bb= 220 490 400 770]{figpap08.ps} \caption{ As for Fig. 7, but now at $q=1140$ MeV/c. } \label{Fig8} \end{center} \end{figure} Let us return now to Figs.~3--6, where the separate MEC contribution is also shown. The transverse response (Figs.~3,~4) has a large peak with a maximum around $\omega= (m_\Delta^2+q^2)^{1/2}-m_N$, which comes from the $\Delta$ propagator appearing in the $\Delta$ current. It has the same resonant structure as the correlation current, but located in the region of the $\Delta$ peak, where the real pion emission cross section has a maximum. We do not include the pion emission channel in our calculation. Both channels should be summed to obtain the total inclusive $(e,e')$ cross section. The $\Delta$ peak is very small in the longitudinal response presented in Figs.~5 and 6. This is consistent with the predominantly transverse character of the $\Delta$ current, which hence provides only a small contribution to the longitudinal channel. For $q=550$ MeV/c the MEC 2p-2h contribution is large (small) in the T (L) response. However, for $q=1140$ MeV/c (Figs.~4,~6) we find a larger effect in $R_L$, coming from the seagull and pionic MEC at large energy transfer. Indeed, in a non-relativistic expansion in powers of $q/m_N$ the time component of the MEC is of higher order than the transverse one. However, for $q=1140$ MeV/c, $q/m_N$ is larger than one, and the L component of the MEC, relative to the T one, starts to increase. \begin{figure} \begin{center} \includegraphics[scale=0.75, bb= 220 275 400 790]{figpap09.ps} \caption{ Contributions to the transverse response of $^{56}$Fe for $q=550$ MeV/c. The dashed lines are the 1p-1h response with OB current only. The rest of the lines are 2p-2h contributions. (a) Upper panel: Thin solid: Correlation only. Dotted: MEC only. Thick solid: total. (b) Middle panel: Thin solid: Correlation only.
Dotted: seagull+pionic only. Thick solid: $\Delta$ only. (c) Bottom panel: Thin solid: pion-in-flight only. Dotted: seagull+pionic only. Thick solid: seagull only. } \label{Fig9} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.75, bb= 220 275 400 770]{figpap10.ps} \caption{ As for Fig. 9, but now at $q=1140$ MeV/c. } \label{Fig10} \end{center} \end{figure} In the case of the correlation current, we observe that its contribution, compared with the OB responses, is similar in the T and L channels. Note that in the correlation current (Fig.~2) the photon couples directly to a nucleon with the same interaction vertex $\Gamma^\mu$ as the OB current. The other side of the diagram, with a pion coupled to a second nucleon, is independent of the particular component of the current. \begin{figure} \begin{center} \includegraphics[scale=0.75, bb= 220 425 400 770]{figpap11.ps} \caption{ Transverse response of $^{56}$Fe at $q=550$ and 1140 MeV/c. Thin solid: Correlation only for $\epsilon=200$ MeV. Dotted: MEC only. Thick solid: total one- plus two-body responses. Dashed: RFG 1p-1h response with OB current only. } \label{Fig11} \end{center} \end{figure} The separate effects of the different currents contributing to the 2p-2h transverse responses are shown in Figs.~9,~10. The seagull plus pionic (SPP) currents alone give a small effect compared with the contributions from the $\Delta$ and correlations. In fact, for $\epsilon=200$ MeV the correlation response is much larger (by a factor of 2 or 3) than the SPP response function (middle panels in Figs.~9,~10). We also observe that the separate seagull contribution is larger in magnitude than the pionic one, which is negligible for $q=1140$ MeV/c. Note that the two currents interfere destructively and partially cancel when both are considered in the SPP responses (bottom panels).
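The position of the MEC peak quoted above, $\omega\simeq(m_\Delta^2+q^2)^{1/2}-m_N$, is simple to evaluate; a minimal sketch, assuming $m_\Delta=1232$ MeV and $m_N=939$ MeV:

```python
import math

M_DELTA = 1232.0  # Delta(1232) mass in MeV (assumed value)
M_N = 939.0       # nucleon mass in MeV (assumed value)

def delta_peak_omega(q):
    """Energy transfer (MeV) at which the Delta propagator becomes resonant
    for a struck nucleon at rest: omega = sqrt(m_Delta^2 + q^2) - m_N."""
    return math.sqrt(M_DELTA ** 2 + q ** 2) - M_N
```

For $q=550$ and 1140 MeV/c this formula gives $\omega\approx 410$ and 740 MeV, respectively, locating the broad transverse $\Delta$ peak discussed above.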
\begin{figure} \begin{center} \includegraphics[scale=0.75, bb= 220 485 400 770]{figpap12.ps} \caption{Transverse structure function $S_T$ of $^{56}$Fe at $q=370$ and 410 MeV/c. The parameter $\epsilon=200$ MeV. To compare with Figs. 11 and 12 of \cite{Alb84}, $S_T$ is defined as \cite{McC80} $S_T=\frac{M_A}{4\pi}R_T$. Thick lines: RFG 1p-1h results. Dashed lines: 2p-2h, MEC only. Thin solid lines: 2p-2h total, MEC plus correlations. } \label{Fig12} \end{center} \end{figure} In Fig.~11 we show the transverse response obtained by adding the total 2p-2h contributions to the OB response. A word of caution is in order when analyzing these results. First, we have added neither the correlation nor the MEC corrections to the 1p-1h response. Moreover, the two-pion-exchange interaction generates self-energy corrections to the OB current that lead to interference effects of the same order in the expansion as the corrections included here. As an example, FSI are known to redistribute the strength of the responses, producing a hardening, a reduction of the maximum and an increase of the high-energy tail~\cite{Ama07}. Recently, a large effect from both MEC and FSI has also been found in the 1p-1h channel at high momentum transfer~\cite{Ama10}, which should be added to the present results. Finally, the process of real pion emission (not included here) also gives a contribution to the transverse response, located mainly in the region of the $\Delta$ peak. \begin{figure} \begin{center} \includegraphics[scale=0.75, bb= 220 485 400 770]{figpap13.ps} \caption{ As for Fig. 12, but now for the longitudinal structure function $S_L=\frac{M_A}{4\pi}R_L$. } \label{Fig13} \end{center} \end{figure} So far we have presented results for intermediate to high momentum transfer. Results for lower values of $q=370$ and 410 MeV/c are shown in Figs.~12 and 13 for the T and L response functions. This allows us to compare the present results with previous non-relativistic calculations~\cite{Alb84}. In Fig.
12 the structure function $S_{T}= \frac{M_A}{4 \pi} R_{T}$ is presented, to allow a direct comparison with Figs. 11 and 12 of \cite{Alb84}. The separate MEC and correlation contributions to the 2p-2h T response shown in Fig.~12 are similar to the ones presented in \cite{Alb84}. The MEC produce a tail above the QE peak that increases with the energy transfer. The presence of correlations leads to an additional, significant rise of the tail. Note that our correlation results are obtained for $\epsilon=200$ MeV. In \cite{Alb84} another prescription to deal with the nucleon pole was adopted. From our results we conclude that both prescriptions are compatible numerically. The OB response of \cite{Alb84} included RPA correlations, producing a reduction and hardening of the response. The 2p-2h longitudinal responses were not computed in \cite{Alb84}, since the time components of the MEC are of higher order in the non-relativistic reduction and hence were expected to be very small. However, our prediction for the correlation 2p-2h contribution in the L response, presented in Fig.~13, shows an effect similar to that in the T response, {\it i.e.,} a correlation tail also appears at high energy transfer in the L response. In contrast to the T channel, the MEC give no contribution in the L response. \begin{figure} \begin{center} \includegraphics[scale=0.75, bb= 220 275 400 770]{figpap14.ps} \caption{ 2p-2h T response of $^{56}$Fe at $q=1140$ MeV/c. Thin lines: with the $\pi NN$ form factor. Thick lines: without form factor ($F_{\pi NN}=1$). (a) Total, (b) correlations only, (c) MEC only. } \label{Fig14} \end{center} \end{figure} Since the 2p-2h excitation is produced in this work by one-pion exchange, the results are strongly dependent on the details of this particular interaction. This is illustrated in Fig.~14, where we show how the results depend on the strong $\pi NN$ form factor for $q=1140$ MeV/c.
The results without a form factor, {\it i.e.,} with $F_{\pi NN}=1$, are about three times as large as the results with the form factor. This is different from the findings at low momentum transfer~\cite{Ama94}, where the pion form factor can be safely ignored. \begin{figure} \begin{center} \includegraphics[scale=0.75, bb= 220 425 400 770]{figpap15.ps} \caption{2p-2h MEC-only contribution to the transverse response of $^{56}$Fe. Solid lines: computed with the $\Delta$ form factors used in~\cite{Ama03}. Dashed lines: computed with the $\Delta$ form factors used in~\cite{DePace03}. Dotted: RFG OB response. } \label{Fig15} \end{center} \end{figure} Another issue is the dependence of the results on the $\Delta$ form factors used in this work, both the electromagnetic and the strong ones, which are somewhat different from the parametrization used in the 2p-2h MEC calculation of~\cite{DePace03}. Calculations done with both sets of parameters are compared in Fig.~15. Our calculation gives a larger contribution to the T response than the one of~\cite{DePace03}, and the use of the same form factors reduces the discrepancy between the two calculations. Some of the remaining differences could be linked to other details of the models, in particular to the different Lagrangian chosen for $\Delta$ electroexcitation. We should note that the two models are fully independent: while all the spin sums are performed analytically in~\cite{DePace03}, resulting in thousands of terms to be integrated numerically, in this work we first compute the spin matrix elements of the current and then evaluate the squares and perform the sums numerically. Before concluding, we would like to stress that the 2p-2h responses in the present model are crucially dependent on details of the pion interaction. A critical ingredient of the model is the value of the parameter $\epsilon$, identified with the escape width of a high-energy nucleon from the nucleus.
We have shown that a value $\epsilon\sim 200$ MeV leads to results in agreement with the previous calculation of~\cite{Alb84}. This parameter $\epsilon$ is different from the usual interaction width of particle states, usually associated with matrix elements of the phenomenological imaginary optical potential derived from elastic scattering data~\cite{Smi88}. It has also been computed in nuclear matter in a semi-phenomenological approach~\cite{Fer92}. The resulting width for 100 MeV nucleons is of the order of 10 MeV, which is too small to give reasonable results in our calculation. This is due to the $1/\epsilon$ divergence of the 2p-2h response in the QE region, where the pole is hit. Because of this divergent behaviour, for $\epsilon=5$ MeV the results would be almost one order of magnitude larger than the OB responses at the maximum. We have checked that the $1/\epsilon$ term in the forward diagrams is the main contribution to the 2p-2h correlations in the QE region for $\epsilon > 20$ MeV. The importance of correlations, for the same value of $\epsilon$, increases with the nuclear mass. We have checked that for $^{12}$C the sizes of the correlation responses, relative to the OB, are about 20\% smaller than for $^{56}$Fe. This is what one would expect, since the number of correlated pairs increases as $A(A-1)/2$. Moreover, since the estimated value of $\epsilon$ depends on the nuclear radius, Eq.~(\ref{epsilon}) indicates that one should use larger $\epsilon$-values for lighter nuclei, which in turn would reduce even further the size of the correlation responses. Thus we expect an important $A$-dependence of the correlation contribution to the nuclear responses, coming from the $A$-dependence of the escape width $\epsilon$. A more detailed study of this issue will be presented in forthcoming work. \section{Conclusions} In this work we have presented a fully relativistic model of inclusive two-particle emission reactions induced by electrons. 
Starting with the free relativistic Fermi gas, we have considered all Feynman diagrams in a perturbative expansion of the scattering amplitude with one photon and one pion exchange producing 2p-2h excitations. Those diagrams can be classified into two sets, namely MEC and correlation currents. In the latter there is a nucleon propagator that can be put on shell, giving a double pole, $(p_0-E_{{\bf p}}+i\epsilon)^{-2}$, when the square of the current matrix element is taken. The corresponding 2p-2h response function diverges as $1/\epsilon$ when $\epsilon\rightarrow 0$, plus additional $\ln\epsilon$ terms. Giving a physical meaning to $\epsilon$ as the escape width of the nucleon from the nucleus, namely twice the inverse of the nucleon propagation time, the fact that the corresponding response is infinite is related to the infinite extension of the Fermi gas. Using a finite value of $\epsilon$ we account for the finite size of the nucleus, hence obtaining a finite result. Having no way to compute $\epsilon$ in a Fermi gas, we take it as a parameter. From a crude estimate of a value around $\sim 200$ MeV, we have made an exploratory study of the results as a function of $\epsilon$. The correlation effects decrease with increasing $\epsilon$. Our analysis shows that the assumption $\epsilon\sim 200-300$ MeV is not unreasonable, whereas for smaller $\epsilon$-values the correlation contribution increases significantly in the QE region. Within this framework we have studied the properties and effects of the different 2p-2h contributions and other ingredients of the model on the transverse and longitudinal response functions of $^{56}$Fe for intermediate to high momentum transfer. The MEC give rise to a wide peak in the region of the $\Delta$ resonance that dominates the T response. In the L channel the MEC are small for low momentum transfer, but they increase significantly for high momentum transfer above the QE peak, where their contribution is of the same size as the OB longitudinal response. 
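The $1/\epsilon$ scaling has a simple schematic origin: squaring the on-shell nucleon pole of the correlation current produces a Lorentzian whose strength diverges as $\epsilon\to 0$,
\begin{equation*}
\left|\frac{1}{p_0-E_{{\bf p}}+i\epsilon}\right|^{2}
= \frac{1}{(p_0-E_{{\bf p}})^{2}+\epsilon^{2}}
\ \longrightarrow\ \frac{\pi}{\epsilon}\,\delta(p_0-E_{{\bf p}})
\qquad (\epsilon\to 0),
\end{equation*}
so the phase-space integration over the on-shell region picks up a contribution proportional to $\pi/\epsilon$, finite only for finite $\epsilon$; the interference with the off-shell pieces of the amplitude generates the additional $\ln\epsilon$ terms.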
Concerning the correlations, they add to the MEC in the high-energy tail and are of the same order of magnitude. The contribution of the correlations is similar in the L and T responses. The main goal of this paper has been to study the effect of 2p-2h pion correlations in the L and T responses, analyzing the properties of these effects as a function of a single parameter $\epsilon$. In future work we plan to investigate more physically founded ways to ``fine tune'' this parameter, including its dependence on kinematics and nuclear species. Finite-size calculations, in conjunction with the use of semi-phenomenological fits of the nucleon spreading width or fits to existing $(e,e')$ data, will also be explored. \section*{Acknowledgments} JEA thanks E. Ruiz-Arriola for useful discussions. This work was partially supported by DGI (Spain): FIS2008-01143, FPA2006-13807-C02-01, FIS2008-04189, FPA2007-62216, by the Junta de Andaluc\'{\i}a, by the INFN-MEC collaboration agreement, projects FPA2008-03770-E-INFN, ACI2009-1053, the Spanish Consolider-Ingenio 2000 programme CPAN (CSD2007-00042), and in part (TWD) by the U.S. Department of Energy under cooperative agreement DE-FC02-94ER40818.
\section*{Introduction} The iliopsoas muscles, predominantly made up of slow-twitch fibers, are a composite of the psoas major and iliacus muscles; they are anatomically separate in the abdomen and pelvis but merge together in the thigh. The iliopsoas is engaged during most day-to-day activities, including posture, walking and running. Together these muscles serve as the chief flexor of the hip and a dynamic stabiliser of the lumbar spine \cite{regev2011psoas}, with the psoas uniquely having a role in the movement of both the trunk and lower extremities \cite{hanson1999anatomical}. Given the key involvement of the iliopsoas muscles in daily activities, there is increasing interest in their potential as a health biomarker. This has most commonly taken the form of a cross-sectional area (CSA) through one (generally the right) or both iliopsoas muscles, with the most common measurement taken through the psoas muscle. This CSA can be used either as an independent measurement, as a ratio to vertebral body size \cite{swanson2015correlation, ebbeling2014psoas}, or in the form of the psoas muscle index, calculated as the psoas major CSA divided by the height squared \cite{mourtzakis2008practical}. Indeed, psoas CSA has been suggested as a predictor of sarcopenia \cite{jones2015simple}, surgical outcome and length of hospital stay post surgery \cite{durand2014prognostic,saitoh2017low, delitto2017clinically}, poor prognosis in response to cancer treatment \cite{kasahara2017low}, morbidity following trauma \cite{ebbeling2014psoas}, a surrogate marker of whole-body lean muscle mass \cite{morrell2016psoas}, cardiovascular fitness \cite{fitzpatrick2017psoas}, changes in cardiometabolic risk variables following lifestyle intervention \cite{maltais2019one} and even risk of mortality \cite{drudi2016psoas,huber2019predictors}. 
Measurements of the psoas major muscle are most commonly made from the CSA of axial MRI or CT images \cite{durand2014prognostic, fitzpatrick2017psoas}, with most studies relying on manual annotation of a single slice through the abdomen; these images tend to be retrospectively repurposed from clinical scans rather than acquired specifically for the purpose \cite{lee2011frailty, hervochon2017body, bukvic2019psoas}. However, the CSA of the psoas muscle varies considerably along its length \cite{hanson1999anatomical}; therefore, small differences in measurement position can potentially have a significant effect on its overall measured size. Moreover, there is a lack of consistency within the literature regarding the precise location at which measurement of the psoas CSA should be made, with researchers using a variety of approaches including: the level of the third lumbar vertebra (L3) \cite{jones2015simple, delitto2017clinically, kasahara2017low, hervochon2017body, bukvic2019psoas}, L4 \cite{swanson2015correlation,drudi2016psoas, lee2011frailty, ebbeling2014psoas}, between L4-L5 \cite{morrell2016psoas, maltais2019one}, as well as the level of the umbilicus \cite{gu2018clinical, durand2014prognostic, saitoh2017low}, the precise position of which is known to vary with obesity/ascites. There is further discrepancy between studies regarding whether the measurements should comprise one single \cite{kasahara2017low} or both psoas muscles \cite{hervochon2017body}, with the majority of publications combining the areas of both muscles. This lack of consistency, together with the relatively low attention given to the robustness and reproducibility of its measurement and the reliance on images from retrospective clinical scans, has led many to question its validity as a biomarker \cite{baracos2017psoas}. A more objective proposition may be to measure total psoas muscle volume \cite{modesto2020psoas, valero2015sarcopenia, amini2015impact, zargar2017change, suh2019effect} from dedicated images. 
A variety of approaches have been used thus far: inclusion of the muscle between L2-L5 \cite{modesto2020psoas}, psoas muscle volume from L3 to approximately the level of the iliopectineal arch (end point estimated from images in the publications) \cite{valero2015sarcopenia, amini2015impact}, from the origin of the psoas at the lumbar vertebrae (unspecified) to its insertion in the lesser trochanter \cite{zargar2017change}, or with no anatomical information provided at all \cite{suh2019effect}. Whilst all of these approaches include substantially more muscle than is included in simple CSA measurements, they are still incomplete volume measurements. Moreover, measuring the entire psoas muscle volume as a single entity is challenging, since even with 3D volumetric scans it is difficult to differentiate between the composite iliacus and psoas muscles once they merge at the level of the inguinal ligament. Therefore, to measure psoas volume as an independent muscle it is necessary either to assign an arbitrary cutoff and exclude a considerable proportion of the psoas muscle (estimated to be approximately 50\% in some studies \cite{valero2015sarcopenia}) or simply to include the iliacus muscle and measure the iliopsoas muscle volume in its entirety. The increasing use of whole-body imaging \cite{borga2015validation} in large cohort studies such as the UK Biobank (UKBB), which plans to acquire MRI scans from the neck to the knee in 100,000 individuals \cite{sudlow2015uk}, requires different approaches to image analysis. Manual image segmentation is time-consuming and unfeasible in a cohort as large as the UKBB. However, this dataset provides a unique opportunity to measure iliopsoas muscle volume in a large cross-sectional population. Therefore, the development of a robust and reliable automated method is essential. 
In this paper, we present an automated method to segment iliopsoas muscle volume based on a Convolutional Neural Network (CNN) and discuss results arising from 5,000 participants from the UKBB imaging cohort, balanced for BMI, age, and gender. \section*{Methods} \subsection*{Data} A total of 5,000 subjects were randomly selected for this study, while controlling for gender and age, from the UKBB imaging cohort. Age was discretised into four groups: 44 to 53, 54 to 63, 63 to 72 and 73 to 82 years. The eight strata were defined to cover both age and gender. Weights were used to maintain the proportion of subjects within each age group to match that of the larger UKBB population. Demographics for the study population (Tab.~\ref{tab:demographics}) were balanced for gender (female:male ratio of 49.9:50.1). The average age of the male subjects was $63.3 \pm 8.4$ years and that of the female subjects $63.3 \pm 8.3$ years. The average BMI of the male subjects was $27.0 \pm 3.9~\text{kg/m}^2$ (range: 17.6 to 50.9~$\text{kg/m}^2$) and for female subjects $26.2 \pm 4.7~\text{kg/m}^2$ (range: 16.1 to 55.2~$\text{kg/m}^2$), with the mean for both groups falling in the overweight category. The self-reported ethnicity was predominantly White European (96.76\%). \begin{table}[!htbp] \centering \begin{tabular}{l | r r} & Female & Male \\ \hline Participants & 2496 (49.9) & 2504 (50.1) \\ Ethnicity of total cohort & \\ ~~~~White European & 2422 (48.44) & 2416 (48.32) \\ ~~~~Asian & 18 (0.36) & 35 (0.70) \\ ~~~~Black & 17 (0.34) & 12 (0.24) \\ ~~~~Other & 15 (0.30) & 9 (0.18) \\ ~~~~Chinese & 11 (0.22) & 10 (0.20) \\ ~~~~Not Reported & 7 (0.14) & 12 (0.24) \\ ~~~~Mixed & 6 (0.12) & 10 (0.20) \\ Age (years) & $63.3 \pm 8.3$ & $63.3 \pm 8.4$ \\ BMI ($\text{kg/m}^2$) & $26.2 \pm 4.7$ & $27.0 \pm 3.9$ \\ Height (cm) & $162.5 \pm 6.1$ & $176.2 \pm 6.8$ \\ Weight (kg) & $69.3 \pm 13.3$ & $83.9 \pm 13.5$ \\ \hline \end{tabular} \caption{Demographics of the participants ($n=5000$). 
Reported values are counts with percentage (\%) for categorical variables and $\text{average} \pm \text{standard deviation (SD)}$ for continuous variables.} \label{tab:demographics} \end{table} Participant data from the UKBB cohort were obtained as previously described \cite{sudlow2015uk} through UKBB Access Application number 23889. The UKBB has approval from the North West Multi-Centre Research Ethics Committee (REC reference: 11/NW/0382), and obtained written consent from all participants prior to involvement. Researchers may apply to use the UKBB data resource by submitting a health-related research proposal that is in the public interest. More information may be found on the UKBB researchers and resource catalogue pages (https://www.ukbiobank.ac.uk/). Unprocessed MR images were obtained from the UKBB Abdominal Protocol~\cite{littlejohns2020biobank} and preprocessed as previously reported~\cite{basty2020image,liu2020systematic}. Of the four reconstructed Dixon MRI channels (fat, water, in-phase, opposed-phase), we performed all analyses using the water channel, because muscles are most discernible in that channel. \subsection*{Manual annotation} A single expert radiographer manually annotated both iliopsoas muscles for 90 subjects using the open-source software MITK \cite{mitk2013}. Each axial slice of the water images was examined, the iliopsoas identified, and the borders of the psoas and iliopsoas manually drawn. On average, manual annotation of both muscles took five to seven hours per subject. The annotated data covered a broad range of age and BMI from male and female UKBB participants. A typical Dixon abdominal MRI centred on the iliopsoas muscles is shown in Fig.~\ref{fig:data}, with the manual iliopsoas muscle annotations overlaid on the anatomical reference volume in red. A 3D rendering of the manual annotation is also provided. 
\begin{figure}[!htbp] \centering \includegraphics[width=\textwidth]{figs/dataabcd.png} \caption{Iliopsoas muscle manual annotations: (a) axial, (b) sagittal, and (c) coronal views of the segmentation (red) overlaid on the anatomical reference data, and (d) 3D rendering of the manual segmentation.} \label{fig:data} \end{figure} \subsection*{Model} We trained a model able to predict both muscles individually. The preprocessing steps for the training data (the cropping step is also required when applying the model to unseen data) are as follows. Two arrays of size $96 \times 96 \times 192$ were cropped around the hip landmarks \cite{basty2020image} to approximate the location of the muscles and facilitate the task. Each image was normalised after cropping. Thirty-two training samples were generated from each subject by separating the right and left muscles and introducing mirroring flips that exploit the symmetry of the structures. Further data augmentation, in addition to the original data, included seven random transformations consisting of translations by up to six voxels in-plane and up to 24 voxels out-of-plane, and random scaling ranging from $-50$\% to $+50$\% out-of-plane and from $-25$\% to $+25$\% in-plane. We chose larger factors for the out-of-plane transformations to account for the skewed variability in shape and position of the muscles, reflecting the fact that there is more variation in height than in width in the population. After data augmentation, 2,880 training samples were produced from the original 90 manually annotated pairs of iliopsoas muscles. The model used for three-dimensional iliopsoas muscle segmentation closely follows the U-Net \cite{ronneberger2015u} and V-Net \cite{milletari2016v} architectures, with a contractive part and an expansive part connected by skip connections at each resolution level. 
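The augmentation bookkeeping described above (two muscles, a mirrored copy of each, and seven random transforms per copy, giving $90 \times 32 = 2880$ samples) can be sketched in a few lines of Python. This is an illustrative sketch only; the function and parameter names are hypothetical and not taken from the study's analysis code:

```python
import random

def sample_transform(rng):
    """Draw one random transform: in-plane/out-of-plane translation and scaling,
    with the ranges quoted in the text (hypothetical parameterisation)."""
    return {
        "shift_xy": (rng.randint(-6, 6), rng.randint(-6, 6)),  # up to 6 voxels in-plane
        "shift_z": rng.randint(-24, 24),                       # up to 24 voxels out-of-plane
        "scale_xy": rng.uniform(0.75, 1.25),                   # +/- 25% in-plane
        "scale_z": rng.uniform(0.50, 1.50),                    # +/- 50% out-of-plane
    }

def augmentations_per_subject(n_random=7, rng=None):
    """Two muscles x {original, mirrored} x {identity + n_random transforms}."""
    rng = rng or random.Random(0)
    samples = []
    for muscle in ("left", "right"):
        for mirrored in (False, True):
            samples.append((muscle, mirrored, None))  # untransformed crop
            for _ in range(n_random):
                samples.append((muscle, mirrored, sample_transform(rng)))
    return samples
```

With seven random transforms this yields 32 samples per subject, and hence $90 \times 32 = 2880$ training samples, matching the count quoted above.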
These network architectures have been established as the gold standard for image segmentation over the last few years, as they require only modest amounts of training data, a consequence of operating on multiple resolution levels, while providing excellent results within seconds. Several convolution blocks are used in our model architecture. An initial block ($I$) contains a $5 \times 5 \times 5$ convolution with eight filters followed by a $2 \times 2 \times 2$ convolution with 16 filters and stride two. The down-sampling blocks in the contractive part ($D_{i,m}$) consist of $i$ successive $5 \times 5 \times 5$ convolutions with $m$ filters followed by a $2 \times 2 \times 2$ convolution with stride two, used to decrease the resolution. In the expansive part, the up-sampling blocks ($U_{j,n}$) mirror those in the contractive part, with transpose convolutions in place of the stride-two convolutions. The block ($L$) at the lowest resolution level of the architecture consists of three successive $5 \times 5 \times 5$ convolutions with 128 filters followed by a $2 \times 2 \times 2$ transpose convolution with stride two and 64 filters. The final block ($F$) contains a $5 \times 5 \times 5$ convolution with 16 filters followed by a single $1 \times 1 \times 1$ convolution and a final sigmoid activation classification layer. All blocks incorporate skip connections between their input and output, resulting in residual layers. The architecture follows: $I \rightarrow D_{2,32} \rightarrow D_{3,64} \rightarrow D_{3,128} \rightarrow L \rightarrow U_{3,128} \rightarrow U_{3,64} \rightarrow U_{3,32} \rightarrow F$ with skip connections between blocks at equivalent resolution levels. Padding and a stride of one are used for the convolutions throughout the network, unless otherwise specified for moving between the resolution levels. Other than the final sigmoid activation, scaled exponential linear units (SELU) are used throughout the network. 
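As a check on the multi-resolution bookkeeping, the spatial grid seen at each level of the contractive path can be tabulated: the initial block $I$ and the three down-sampling blocks each halve the resolution. This sketch covers the shape arithmetic only, not the network itself:

```python
def level_shapes(input_shape=(96, 96, 192), n_halvings=4):
    """Spatial size after each stride-two step of the contractive path.
    Block I plus the three D blocks each halve the resolution, so the
    lowest-level block L operates on the coarsest grid."""
    shapes = [input_shape]
    for _ in range(n_halvings):
        shapes.append(tuple(s // 2 for s in shapes[-1]))
    return shapes
```

Starting from the $96 \times 96 \times 192$ crops, this gives $48 \times 48 \times 96$, $24 \times 24 \times 48$, $12 \times 12 \times 24$, and finally $6 \times 6 \times 12$ at the level of block $L$.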
The SELU activation function was recently proposed \cite{klambauer2017self}; its self-normalising properties allow batch normalisation layers to be bypassed, enabling higher learning rates that lead to faster, more robust training. The model was trained by minimising the Dice Score Coefficient (DSC) loss \cite{milletari2016v} with a batch size of three, using the Adam optimiser with a learning rate of 0.0001 for 100 epochs until convergence. We performed all of the CNN development, training, and predictions using Keras \cite{chollet2015keras}. \subsection*{Validation} A common metric used to evaluate segmentation performance is the DSC, also known as the F1 score. It is defined as twice the intersection of the labels divided by the total number of elements. The intersection of the labels corresponds to the True Positives (TP), while the total number of elements is the sum of the False Positives (FP), the False Negatives (FN) and twice the number of TPs. \begin{equation} \text{DSC} = \frac{2\,\text{TP}}{\text{FP} + 2\,\text{TP} + \text{FN}} \end{equation} We evaluated the model by calculating the DSC for out-of-sample data held back during training. \subsection*{Statistical analysis} All summary statistics and hypothesis tests were performed using the \textsf{R} software environment for statistical computing and graphics \cite{r2020}. Pearson's product-moment method was used to compute correlations. Two-sample \emph{t}-tests were used to compare means between groups, paired when appropriate. Methods for segmenting the iliopsoas muscle volume were compared using a Bland-Altman plot. Given the exploratory nature of the research, \emph{p}-values < 0.05 were judged to be statistically significant. \section*{Results} \subsection*{Validation} To validate the approach, we trained a model using 70 of the manually annotated images. 
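The DSC defined in the Validation subsection can be computed directly from a pair of binary masks. A minimal stand-alone sketch, operating on flattened 0/1 sequences rather than the image arrays used in practice:

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two flattened binary masks."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)      # true positives
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)  # false positives
    fn = sum(1 for p, t in zip(pred, truth) if t and not p)  # false negatives
    return 2 * tp / (fp + 2 * tp + fn)
```

A perfect segmentation gives a DSC of 1, while disjoint masks give 0; the same expression, made differentiable over soft predictions, is what the DSC training loss minimises.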
The performance of the model was evaluated on 20 out-of-sample images, which gave a DSC of $0.912 \pm 0.018$ (range: 0.842 to 0.938). With these very consistent validation scores showing robust model performance on both muscles, we trained a final model using all 90 available manual annotations. The average bias was $-4.3~\text{ml}$, with upper and lower limits of agreement of $15.2~\text{ml}$ and $-23.4~\text{ml}$, respectively, when comparing the final model against the manual annotations (Fig.~\ref{fig:bland_altman}). \begin{figure}[!htbp] \centering \includegraphics[width=0.6\textwidth]{figs/bland_altman.png} \caption{Bland-Altman plot of iliopsoas muscle volumes determined with CNN-based and manual segmentations ($n=90$). Dotted lines represent the average bias ($-4.3~\text{ml}$) and the 95\% limits of agreement.} \label{fig:bland_altman} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.9\textwidth]{figs/24bold.jpg} \caption{Segmentations of the right (purple) and left (blue) iliopsoas muscles overlaid on images from a range of body types and iliopsoas muscle volumes: (a-c) small, (d-f) average, (g-i) large and (j-l) asymmetric.} \label{fig:output_fig} \end{figure} Example segmentations from our method are provided in Fig.~\ref{fig:output_fig}, displaying a sample of 12 subjects covering a variety of body sizes and habitus; the model performs well for all of them. \subsection*{Iliopsoas muscle volume} In each gender there was a small (approximately 2\%) yet statistically significant asymmetry between the left and right iliopsoas muscles (one-sample \emph{t}-test; male: $d = -7.3$~ml; female: $d = -6.5$~ml; both $p < 10^{-15}$). These differences were not significantly associated with the handedness of the participants. Significantly larger iliopsoas muscle volumes were measured in male than in female subjects (Tab.~\ref{tab:results}). 
\begin{table}[!htbp] \centering \begin{tabular}{l | r r | r r | r} & \multicolumn{2}{|c|}{Female} & \multicolumn{2}{|c|}{Male} & \\ & Mean $\pm$ SD & Range & Mean $\pm$ SD & Range & Significance\\ \hline Total Volume (ml) & $542.3 \pm 72.1$ & 307.5,~904.2 & $814.5 \pm 125.4$ & 467.3,~1311.5 & $p<10^{-15}$\\ Average Volume (ml) & $271.2 \pm 36.0$ & 153.8,~452.1 & $407.2 \pm 62.7$ & 233.7,~655.8 & $p<10^{-15}$\\ Left Volume (ml) & $267.9 \pm 36.8$ & 134.5,~457.2 & $403.6 \pm 63.5$ & 247.0,~639.2 & $p<10^{-15}$\\ Right Volume (ml) & $274.4 \pm 37.0$ & 159.4,~447.0 & $410.9 \pm 64.0$ & 220.3,~675.1 & $p<10^{-15}$\\ L-R Volume Difference (ml) & $-6.5 \pm 16.1$ & $-96.9$,~62.5 & $-7.3 \pm 22.8$ & $-95.6$,~184.4 & $p=0.1306$\\ Iliopsoas Muscle Index ($\text{ml/m}^2$) & $205.1 \pm 22.6$ & 124.2,~304.1 & $261.8 \pm 34.2$ & 157.6,~417.2 & $p<10^{-15}$\\ \hline \end{tabular} \caption{Iliopsoas muscle volumes ($n = 5,000$). Significance refers to the \emph{p}-value of a two-sample \emph{t}-test, where the null hypothesis is that the means of the two groups (male and female subjects) are equal.} \label{tab:results} \end{table} \subsection*{Relationship between iliopsoas muscle volume and physical characteristics} Significant correlations were observed between total iliopsoas muscle volume and height in both genders (male: $r = 0.52$; female: $r = 0.56$; both $p < 10^{-15}$) (Fig.~\ref{fig:total_volume_versus_height}). \begin{figure}[!htbp] \centering \includegraphics[width=0.6\textwidth]{figs/total_volume_versus_height.png} \caption{Scatterplot of total iliopsoas muscle volume (ml) by height (cm), separated by gender.} \label{fig:total_volume_versus_height} \end{figure} To account for the potential confounding effect of height on iliopsoas muscle volume, an iliopsoas muscle index (IMI) was defined \begin{equation} \text{IMI} = \frac{\text{total iliopsoas muscle volume}}{\text{height}^2}, \end{equation} with units $\text{ml}/\text{m}^2$. 
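The IMI defined above is straightforward to evaluate once height, recorded in cm in Tab.~\ref{tab:demographics}, is converted to metres; a small illustrative helper (hypothetical, not from the study code):

```python
def iliopsoas_muscle_index(total_volume_ml, height_cm):
    """IMI = total iliopsoas muscle volume (ml) / height (m) squared."""
    height_m = height_cm / 100.0  # convert cm to m
    return total_volume_ml / height_m ** 2
```

Applied to the female group means (542.3 ml, 162.5 cm), this gives roughly 205 $\text{ml}/\text{m}^2$, close to the tabulated mean IMI of 205.1 (which is the average of the per-subject ratios rather than the ratio of the averages).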
Significant correlations were observed between the IMI and BMI in both genders (male: $r = 0.49$; female: $r = 0.49$; both $p < 10^{-15}$) (Fig.~\ref{fig:imi_versus_bmi}). \begin{figure}[!htbp] \centering \includegraphics[width=0.6\textwidth]{figs/imi_versus_bmi.png} \caption{Scatterplot of iliopsoas muscle index ($\text{ml/m}^2$) by BMI ($\text{kg/m}^2$), separated by gender.} \label{fig:imi_versus_bmi} \end{figure} A significant negative correlation was observed between IMI and age in both genders (male: $r = -0.31$, $p < 10^{-15}$; female: $r = -0.12$, $p < 10^{-8}$). However, the relationship is not well captured by a simple linear model (Fig.~\ref{fig:imi_versus_age}): the decrease in IMI as a function of age accelerates for men starting in their early 60s, while for women the rate of decline remains relatively constant. \begin{figure}[!htbp] \centering \includegraphics[width=0.6\textwidth]{figs/imi_versus_age.png} \caption{Scatterplot of iliopsoas muscle index ($\text{ml/m}^2$) by age at recruitment (years), separated by gender. The curves are fit to the data using a generalised additive model with cubic splines.} \label{fig:imi_versus_age} \end{figure} \section*{Discussion} There is considerable interest in measuring psoas muscle size, primarily related to its potential as a sarcopenic marker, thereby making it an indirect predictor of conditions influenced by sarcopenia and frailty, including health outcomes such as morbidity and mortality \cite{jones2015simple, durand2014prognostic, saitoh2017low, delitto2017clinically, kasahara2017low, ebbeling2014psoas, drudi2016psoas, huber2019predictors}. The complexity of measuring total muscle directly, particularly in a frail population, has necessitated reliance on easily measured surrogates, and psoas muscle CSA is increasingly used for this purpose. However, there is little consistency in the field regarding how the psoas muscle is measured, with considerable variation between publications. 
An automated approach to analysis reduces the need for manual annotation, allows more of the muscle to be measured, and enables much larger cohorts to be studied; this is particularly important as large population-based biobanks become more common. In this paper we have described a CNN-based method to automatically extract and quantify iliopsoas muscle volume from MRI scans for 5,000 participants from the UKBB. Excellent agreement was obtained between the automated measurements and the manual annotation undertaken by a trained radiographer, as demonstrated by the high DSC on the testing data. CNNs have been established as the gold standard in automated image segmentation. The results, which can be produced with a modest amount of manual annotation as training data and smart data augmentation, are highly accurate, fast, and reproducible. Manual annotation becomes a bottleneck for large-scale population studies when the number of participants exceeds many thousands, as with the UKBB. Applying automated methods to vast amounts of data requires a thorough set of quality-control procedures beyond the out-of-sample testing data often used to validate new methods in machine learning studies. Large-scale quality control can include steps such as examining maximum and minimum values, asymmetric values (for symmetric structures such as the iliopsoas muscles), outliers, and the overall behaviour of the results. The vast majority of previous studies investigating psoas size have relied on CSA measurements, primarily because of data availability and time constraints \cite{durand2014prognostic, jones2015simple, delitto2017clinically, kasahara2017low, hervochon2017body, bukvic2019psoas, swanson2015correlation, drudi2016psoas, lee2011frailty, ebbeling2014psoas, morrell2016psoas, maltais2019one, gu2018clinical, saitoh2017low}. 
Analysis of CSA is considerably less labour-intensive than manually measuring tissue volumes; furthermore, many studies have repurposed clinical CT or MRI scans \cite{lee2011frailty, hervochon2017body, bukvic2019psoas}, which typically will not have been acquired in a manner that enables volume measurements. This has led to psoas muscle CSA being measured at a variety of positions relative to lumbar landmarks, including L3, L4 and between L4-5, as well as at more unreliable soft-tissue landmarks such as the umbilicus, with the CSA measurements used alone, relative to lumbar area, height, height squared or total abdominal muscle within the image at the selected level. While lumbar landmarks should provide a relatively consistent CSA in longitudinal studies, comparison between studies and cohorts becomes almost impossible. This is further compounded by studies showing considerable variation in psoas CSA along its length \cite{hanson1999anatomical, reid1994geometry}, and by regional differences in psoas CSA observed in athletes \cite{sanchis2011iliopsoas} and following exercise training or inactivity \cite{hides2007magnetic}. This suggests that CSA at a fixed position may not accurately reflect changes in psoas size elsewhere in response to health-related processes. It is clear that, to overcome these confounding factors, it is essential to measure total psoas volume. In this study, we have trained a CNN to segment the iliopsoas muscles, applied it to 5,000 UKBB subjects and measured their total volume. This measurement includes the psoas major and iliacus muscles and, as discussed below, the psoas minor muscle (if present). This reflects the practical difficulties of isolating the entire psoas muscle in images in a consistent and robust manner. The merging of the iliacus and psoas muscles below the inguinal ligament makes their separation not only impractical, but unachievable with standard imaging protocols. 
Similarly, it is not possible to separate the psoas major and minor muscles under these conditions, even if CSA measurements were to be made. Therefore, a standard operating procedure was required: either to measure a partial psoas volume, selecting an anatomical cut-off before the junction with the iliacus muscle, or to include the iliacus and measure the iliopsoas muscle volume in its entirety. In this study we have opted for the latter, as selecting an arbitrary set point would clearly introduce a significant confounding factor with unforeseeable impact on the subsequent results. Thus, we have measured the entire iliopsoas muscle, and although literature comparisons are limited, as there is a paucity of comparable volumetric studies within the general population, our average reported value for male subjects ($407.2 \pm 62.7$~ml) was within the range $351.1-579.5$~ml reported for a cohort that included male athletes and controls \cite{sanchis2011iliopsoas}. Furthermore, our CNN-based method performs very well, with a small but systematic underestimation of 4.3~ml when compared with the manual annotations. Incremental improvement of the model is possible using straightforward techniques, such as increasing the number and variety of training data or expanding the breadth of data augmentation \cite{lundervold2019overview}. These are currently under investigation. We observed a small (approximately 2\%) but significant asymmetry in iliopsoas muscle volume, with the right muscle being larger in both male and female subjects. Previous studies have examined muscle asymmetry in tennis players and found that the iliopsoas muscle was 13\% smaller on the non-dominant than on the dominant side of the body, whereas in inactive controls the dominant side was 4\% larger than the non-dominant \cite{sanchis2011iliopsoas}. Similarly, football players have significantly larger psoas CSA on their dominant kicking side \cite{stewart2010consistency}. 
The best equivalent to this within the UKBB phenotyping data was handedness, which we found not to be related to left-right differences in iliopsoas volume in the current study. An additional factor that may contribute towards iliopsoas asymmetry relates to the presence or absence of the psoas minor muscle, a long slim muscle typically found in front of the psoas major. This muscle can often fail to develop during embryonic growth \cite{hanson1999anatomical} and there can be considerable differences in the incidence of agenesis, which can be unilateral or bilateral, with ethnicity thought to be a factor \cite{ojha2016morphology}. Further work is required to understand whether this contributes to the left-right asymmetry observed in the present study, since it is not possible to resolve this muscle on standard MRI images. In line with previous studies of psoas CSA, male subjects had significantly larger iliopsoas muscles than females \cite{jones2015simple}. This is unsurprising, since gender differences in both total muscle and regional muscle volumes are well established \cite{gallagher1998muscle, janssen2000skeletal}. Indeed, some studies have suggested using gender-specific cut-offs of either psoas CSA alone or psoas muscle index to identify patients at risk of poorer health outcomes \cite{kasahara2017low}. Furthermore, some studies have suggested that the magnitude of gender differences in trunk muscle CSA varies depending on where the measurements are made. This adds weight to the argument that volumetric measurements are perhaps more robust than CSA measures for this comparison \cite{abe2003sex}. It has been proposed that the gender differences in psoas volume could in part relate to the impact of height on psoas volume \cite{fitzpatrick2017psoas}. Indeed, we found a significant correlation between iliopsoas muscle volume and height, similar to that reported by earlier studies \cite{janssen2000skeletal}. 
However, the gender differences observed in our study were still present after correcting for height. Interestingly, it has been reported that the relationship between muscle volume and body weight is curvilinear, since increases in body weight often reflect gains in fat as well as muscle mass. In the present study we observed a significant correlation between IMI and BMI. This is in agreement with previous studies of psoas CSA, which have also shown a significant correlation with BMI \cite{jones2015simple}; indeed, some studies have combined both metrics as a prognostic marker \cite{hervochon2017body}. We also found a significant correlation between IMI and age. It is widely reported that muscle mass declines with age, particularly beyond the fifth decade, a fundamental characteristic of sarcopenia \cite{rosenberg1997sarcopenia}. The magnitude of this decline was relatively small, but this may arise from the limited age range within the UKBB data set ($44-82$ years) compared to other studies that have investigated the impact of age on muscle volume across the entire adult age span ($18-88$ years), which tend to reveal a more dramatic decline in muscle volume \cite{janssen2000skeletal}. In conclusion, we have developed a robust and reliable model using a CNN to automatically segment iliopsoas muscles and demonstrated the applicability of this methodology in a large cohort, which will enable future population-wide studies of the utility of the iliopsoas muscle as a predictor of health outcomes.
\section{Introduction} \label{sec:intro} NLP systems that operate in more than one language have been proven effective in tasks such as cross-lingual natural language understanding and machine translation \cite{devlin-etal-2019-bert, conneau-etal-2020-unsupervised, aharoni-etal-2019-massively}. The performance of such systems is strongly connected to their use of an input space that can sufficiently represent all the considered languages \cite{sennrich-etal-2016-neural, wu-dredze-2019-beto, conneau-etal-2020-unsupervised}. Conceptually, an effective cross-lingual input space should exploit latent similarities between languages. State-of-the-art multilingual systems take advantage of cross-lingual similarities in their input spaces through the use of a shared vocabulary of subwords. This vocabulary is learned on the concatenation of multilingual training corpora, using heuristic subword segmentation algorithms \cite{sennrich-etal-2016-neural, WordPiece, Kudo2018-xx}, which handle the open vocabulary problem by identifying tokens at multiple granularity levels, based on character n-gram frequencies. Therefore, the embeddings of subwords that appear in several languages act as anchors between these languages and, thus, provide implicit cross-lingual information that leads to improved performance~\cite{xlm, pires-etal-2019-multilingual, conneau-etal-2020-emerging}. Cross-lingual transfer in joint subword models may be limited by false positives, i.e.\ identical subwords with different meanings in two languages, a phenomenon also known as `oversharing' \citep{Wang2020Cross-lingual, dhar-bisazza-2021-understanding}. Moreover, they do not benefit from false negatives, i.e.\ different subwords with identical meanings. Examples of false positives are: \textit{die}, a definite article in German and a verb in English; \textit{also}, meaning `so' or `therefore' in German, not `as well' as in English; or \textit{fast}, which in German means `almost', not `quick'. 
Examples of false negatives are \textit{and} and \textit{und}, \textit{very} and \textit{sehr}, \textit{people} and \textit{Menschen} -- all pairs being near synonyms that could benefit from a unique embedding rather than two. A unique embedding would not constrain the models to always represent or translate them in the same way, as representations are highly contextualized. In this paper, we address the problem of false positives and negatives by employing \textit{subword similarity to create cross-lingual anchors}. Specifically, using cross-lingual mapping, we determine subword alignments for a set of subwords, and then share their representations. In this way, we relax the requirements for isomorphism and common scripts between languages on which previous studies rely. We demonstrate that this can improve both cross-lingual transfer of language models and machine translation (MT). Our contributions are the following: \begin{enumerate} \setlength{\itemsep}{0pt} \item We propose a method for subword mapping and anchoring across two languages (SMALA), with no constraints on the availability of parallel data or the similarity of scripts (Section~\ref{sec:approach}). \item We show how SMALA can be used to extend an existing monolingual vocabulary and facilitate cross-lingual transfer of a pre-trained language model to an unseen language under a limited parameter budget (Section~\ref{sec:lm-transfer}). \item We demonstrate experimentally the benefits of SMALA for cross-language natural language inference (XNLI) (Section~\ref{sec:experiments-with-xnli}). \item We demonstrate how SMALA can be used to build a shared vocabulary for MT, and bring experimental evidence of its benefits (Section~\ref{sec:smala-for-mt}). \end{enumerate} We release our code online\footnote{\url{https://github.com/GeorgeVern/smala}}. \section{Related Work} \label{sec:related-work} \textbf{Cross-lingual representations}. 
A large body of work has attempted to harness the similarities of languages via cross-lingual word embeddings, i.e.\ continuous word vectors that can represent multiple languages in a shared vector space. A first approach to obtain these embeddings is offline mapping of pre-trained monolingual embeddings, where the mapping can be learned using supervision in the form of lexicons \citep{DBLP:journals/corr/MikolovLS13,xing-etal-2015-normalized,joulin-etal-2018-loss}, or by leveraging weak supervision in the form of identical seed words \citep{artetxe-etal-2017-learning,sogaard-etal-2018-limitations}, or in an unsupervised way \citep{artetxe-etal-2018-robust,lample2018word}. A second approach to obtain cross-lingual embeddings is joint training from scratch, by combining monolingual language modeling objectives with a cross-lingual objective -- with either strong, or weak, or no supervision \citep[see respectively][]{luong-etal-2015-bilingual,duong-etal-2016-learning,lample-etal-2018-phrase}. Despite their success, both approaches have certain limitations. On the one hand, alignment methods assume that the monolingual embedding spaces have comparable structures, i.e., that they are isomorphic to a certain extent. However, this assumption has been challenged, especially for etymologically distant languages, but also for related ones \citep{sogaard-etal-2018-limitations,patra-etal-2019-bilingual,ormazabal-etal-2019-analyzing}. Unsupervised joint training, on the other hand, relies on the assumption that identical tokens carry the same information across languages, which is not always true. To address the limitations of alignment and joint training (the isomorphism assumption and requirement for common script), combinations of the two methods have been proposed. 
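The offline-mapping approach described above can be illustrated with a minimal sketch: given embeddings for seed-dictionary word pairs, the orthogonal map minimizing the Frobenius distance between the two spaces has a closed-form Procrustes solution via SVD. This is a toy illustration only; practical systems such as VecMap add normalization, re-weighting and iterative self-learning on top of this core step.

```python
import numpy as np

def procrustes_map(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
    """Orthogonal W minimizing ||X W - Y||_F, where row i of X and Y
    holds the embeddings of the i-th seed-dictionary pair.
    Closed-form Procrustes solution: W = U V^T with X^T Y = U S V^T."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

# Toy check: recover a known rotation from perfectly 'aligned' embeddings.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 4))
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # ground-truth rotation
Y = X @ Q
W = procrustes_map(X, Y)
print(np.allclose(X @ W, Y))  # True
```

With noisy, real monolingual embeddings the recovered map is only approximate, which is precisely where the isomorphism assumption discussed above becomes a limitation.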
\citet{Wang2020Cross-lingual} jointly train embeddings on concatenated monolingual corpora and then ``unshare'' identical words across languages, reallocating the overshared word embeddings and subsequently aligning them. \citet{ormazabal2020offline} find word alignments that are used as anchors to create cross-lingual representations with a modified version of Skip-gram \citep{skip-gram}. Our approach shares a similar motivation, but instead of directly creating cross-lingual representations, we shape the input space (i.e.\ the vocabulary) of multilingual systems in a way that facilitates cross-lingual transfer. \noindent\textbf{Subword vocabularies}. Recently, multilingual language models have superseded cross-lingual word embeddings, not only because they produce contextualized representations, but also because they can handle the open vocabulary problem through the use of subwords as tokens \citep{sennrich-etal-2016-neural, WordPiece,Kudo2018-xx}. Multilingual subword vocabularies are simply obtained by learning the subwords on the concatenation of all used languages. Since each subword is assigned a unique embedding, identical subwords that appear in several languages serve as anchors between languages, providing implicit cross-lingual information \citep{wu-dredze-2019-beto,pires-etal-2019-multilingual,conneau-etal-2020-emerging}. Parameter sharing across languages makes subword models particularly suitable for multilingual NLP and machine translation. The number of shared tokens in multilingual vocabularies depends strongly on the similarity of scripts between languages; when languages do not share a script, transliteration can be applied \citep{nguyen-chiang-2017-transfer,muller2020unseen,Amrhein_2020}. In addition, shared subword vocabularies often produce inconsistent segmentations across languages that can hurt cross-lingual transfer. 
Regularization techniques that introduce randomness in the tokenization process \citep{Kudo2018-xx,provilkov2020bpe} can partially address this problem, or consistency between the different segmentations can be otherwise enforced \citep{MVR}. Still, there is no guarantee that shared (sub)words have identical meanings (false positives are not excluded) and, conversely, subwords with identical meanings but different spellings (false negatives) are missed. \noindent\textbf{Cross-lingual LM transfer}. The success of pretrained monolingual and multilingual language models raises the question of whether these models can be transferred to unseen languages. To transfer such a model, it is mostly necessary to add language-specific parameters in the form of a subword embedding layer, which can be learned from scratch \citep{artetxe-etal-2020-cross,devries2020good}. Alternatively, offline mapping can be used to initialize the new embedding layer, for faster convergence and improved zero-shot performance \citep{tran2020english}. Another option, which reduces the computational cost of this transfer but assumes similarity of scripts, is to leverage common subwords between languages \citep{chronopoulou-etal-2020-reusing,wang-etal-2020-extending}. Our proposal combines the two approaches without the requirement for a common script. Recent work has shown that cross-lingual transfer can still be achieved in the absence of anchors (i.e.\ subwords shared between languages), although the existence of anchors contributes to performance gains \citep{artetxe-etal-2020-cross, conneau-etal-2020-emerging, aji-etal-2020-neural}. Specifically, \citet{conneau-etal-2020-emerging} have shown that performance increases with the number of available anchors. However, these studies do not discuss the quality of anchors, or how they can be obtained, which is the main focus of our work. 
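Before turning to the method, the notions of false positives and false negatives from the introduction can be made concrete with a toy sketch. The vocabularies reuse the paper's own examples (\textit{die}, \textit{fast}, \textit{and}/\textit{und}, \textit{very}/\textit{sehr}, \textit{people}/\textit{Menschen}); the alignment set stands in for the output of a hypothetical similarity-based aligner.

```python
# Toy illustration of 'false positives' (identical subwords with different
# meanings) and 'false negatives' (different subwords with the same meaning)
# under surface-form sharing. The alignment set is illustrative.
vocab_en = {"and", "fast", "die", "people", "very"}
vocab_de = {"und", "fast", "die", "Menschen", "sehr"}

# Pairs a similarity-based aligner would (hypothetically) produce:
aligned = {("and", "und"), ("people", "Menschen"), ("very", "sehr")}

# Joint tokenization shares every identical surface form:
shared_by_surface = vocab_en & vocab_de

# False positives: identical forms that are not semantically aligned.
false_positives = {w for w in shared_by_surface if (w, w) not in aligned}

# False negatives: aligned pairs with different surface forms,
# which surface-based sharing misses entirely.
false_negatives = {(a, b) for a, b in aligned if a != b}

print(sorted(false_positives))  # ['die', 'fast']
print(sorted(false_negatives))  # [('and', 'und'), ('people', 'Menschen'), ('very', 'sehr')]
```

Surface-based sharing anchors exactly the wrong pairs in this toy example, which is the gap the similarity-based anchoring described next is designed to close.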
\section{SMALA: Subword Mapping and Anchoring across Languages} \label{sec:approach} Our motivation is to create cross-lingual vocabularies that are parameter-efficient and exploit the similarity of concepts between different languages. We propose a method for Subword Mapping and Anchoring across Languages (SMALA), which combines the powerful initialization of mapping methods with the anchoring properties of joint training, while attempting to alleviate the limitations of both. We first learn subwords separately for each language and then train the corresponding embeddings. We then apply a mapping method to obtain similarity scores between the embeddings, which we use to extract alignments between subwords of the two languages. We finally tie the parameters of the aligned subwords to create anchors during training. We describe the two main components of our approach in detail below. \subsection{Subword Mapping} \label{sec:alignment-of-subwords} As a first step, we aim to find subwords that have similar meanings or functions (morphological or syntactic) across languages, i.e.\ to extract subword alignments. To this end, we first learn separate subword vocabularies for each language from monolingual data using one of the existing subword segmentation algorithms (specified below for each series of experiments). Since we argue against using identical subwords as anchors between languages, we employ a distributional method to find the alignments: we obtain subword representations for each language by training FastText embeddings \citep{bojanowski-etal-2017-enriching} on monolingual data,\footnote{The use of subword co-occurrence and PCA appeared to underperform with respect to FastText.} and then align them using a state-of-the-art unsupervised alignment approach, VecMap \citep{artetxe-etal-2018-robust}. Our method can also exploit parallel data, when available. 
In this case, we tokenize both sides of the bitext with language-specific subwords and then use FastAlign \citep{dyer-etal-2013-simple} to estimate the alignment, similar to \citet{tran2020english}. Implementation details can be found in Appendix~\ref{appendix:impl_details}. \subsection{Anchoring of Similar Subwords} \label{sec:anchoring-of-similar-subwords} After the mapping step, we apply cosine similarity\footnote{We also experimented with CSLS retrieval \cite{lample2018word} but it produced more alignments of lower quality.} to compute a similarity matrix $S$: each of its coefficients $S_{i,j}$ is the cosine similarity between the embeddings of the $i^\mathrm{th}$ subword of language $\mathcal{L}_1$ and of the $j^\mathrm{th}$ subword of language $\mathcal{L}_2$. We use the similarity matrix $S$ to identify alignments between subwords in a completely unsupervised way. We extract subword alignments using the \textit{Argmax} method of \citet{jalili-sabet-etal-2020-simalign}, as follows. A subword $w^{L_1}_i$ from the $\mathcal{L}_1$ vocabulary is aligned to a subword $w^{L_2}_j$ from the $\mathcal{L}_2$ vocabulary, if and only if $w^{L_2}_j$ is the most similar subword to $w^{L_1}_i$ and vice versa: \begin{equation} \label{eq:mutual-argmax} i = \argmax_l(S_{l,j}) \ \text{and} \ j = \argmax_l(S_{i,l}) \end{equation} Each pair of subwords that satisfies this consistency condition forms an alignment, to which we assign a score: the average similarity $(S_{i,j} + S_{j, i})/2$. Thresholding this score allows us to select a subset of all alignments. We thus obtain a dictionary $D$ of aligned subwords that will function as anchors between languages during training, by tying their embeddings. The above definition implies that the aligned subwords are translations of one another. 
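This extraction step can be sketched as a toy implementation, assuming the two embedding matrices have already been mapped into a shared space; with a single cosine-similarity matrix the two directional scores coincide, so each pair's score reduces to $S_{i,j}$.

```python
import numpy as np

def extract_anchors(E1: np.ndarray, E2: np.ndarray):
    """Return [(i, j, score)] pairs satisfying the mutual-argmax condition
    j = argmax_l S[i, l] and i = argmax_l S[l, j], where S holds cosine
    similarities between L1 subword embeddings E1 and L2 embeddings E2."""
    # Cosine similarity: L2-normalize rows, then take inner products.
    A = E1 / np.linalg.norm(E1, axis=1, keepdims=True)
    B = E2 / np.linalg.norm(E2, axis=1, keepdims=True)
    S = A @ B.T
    best_j = S.argmax(axis=1)  # best L2 match for each L1 subword
    best_i = S.argmax(axis=0)  # best L1 match for each L2 subword
    anchors = [(i, j, S[i, j])
               for i, j in enumerate(best_j) if best_i[j] == i]
    # Sort by score so a threshold can keep only the most confident anchors.
    return sorted(anchors, key=lambda t: -t[2])
```

Sorting by score makes it straightforward to apply the threshold mentioned above and keep only the highest-confidence subset of alignments as anchors.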
Although this might seem quite limiting, the same issue arises for joint vocabulary construction, with the difference being the criterion according to which we choose to share subwords. We argue that our similarity measure is a more expressive criterion than the raw surface form. Our approach does not rely on the surface form for cross-lingual anchors and additionally removes the requirement for a common script. Furthermore, it prevents sharing subwords that are identical but differ in meaning (false positives) and allows sharing subwords that are spelled differently but are close to synonyms (false negatives). The (sub)words aligned by our method may or may not be identical, as long as they satisfy Equation~\ref{eq:mutual-argmax}. \section{Language\,Model\,Transfer\,with\,SMALA} \label{sec:lm-transfer} For the first set of experiments, we attempt to transfer a pretrained Language Model (LM) from one language ($\mathcal{L}_1$) to another language ($\mathcal{L}_2$), by leveraging the linguistic knowledge that was implicitly encoded in $\mathcal{L}_1$'s embedding layer. Following previous work \citep{artetxe-etal-2020-cross,tran2020english}, we create an embedding layer for $\mathcal{L}_2$ and initialize it by sharing parameters using SMALA. In this way, we aim to reduce the computational budget of cross-lingual transfer via parameter sharing without sacrificing performance, while removing the need for a common script and avoiding the pitfalls of false positives and false negatives. We transfer the model following the same steps as \citet{tran2020english}. We start from a pretrained LM that we continue training on masked language modeling (MLM) using monolingual data from both the original and the target languages ($\mathcal{L}_1$ and $\mathcal{L}_2$). The bilingual model has two \textit{separate embedding layers}, one for $\mathcal{L}_1$ and one for $\mathcal{L}_2$, while the rest of the encoder is common to $\mathcal{L}_1$ and $\mathcal{L}_2$. 
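The parameter-sharing initialization of the $\mathcal{L}_2$ embedding layer can be sketched as follows. Function and variable names are illustrative, not taken from the paper's code: anchored $\mathcal{L}_2$ subwords start from the corresponding $\mathcal{L}_1$ rows (and in the actual method their parameters stay tied during training), while the remaining rows are, in this simple variant, randomly initialized.

```python
import numpy as np

def init_l2_embeddings(E1, anchors, vocab2_size, dim, seed=0):
    """Initialize an L2 embedding matrix: anchored subwords reuse the
    corresponding L1 rows; the rest are randomly initialized.
    `anchors` maps L2 subword indices to L1 subword indices."""
    rng = np.random.default_rng(seed)
    E2 = rng.normal(0.0, 0.02, size=(vocab2_size, dim))
    for j, i in anchors.items():
        # Shared initialization; in practice the tied rows point to the
        # same underlying parameters, so updates affect both languages.
        E2[j] = E1[i]
    return E2
```

Since each anchored row is shared rather than duplicated, the effective size of the new embedding layer shrinks by one row per anchor, which is the source of the parameter savings reported later.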
Each language-specific embedding layer is used both as the first and last layer (tied embeddings). During this training phase, we keep including monolingual data from $\mathcal{L}_1$ to avoid degradation in performance in the original language and to maximize cross-lingual transfer \cite{pires-etal-2019-multilingual, conneau-etal-2020-emerging}. We update the weights of the whole model during this phase, since updating only the embeddings would not significantly reduce computation time (due to the need to calculate all activations for backpropagation) and actually has a negative impact on performance, as we observed in our initial experiments. At this stage, the transferred model could be used for any cross-lingual natural language understanding task \citep{pmlr-v119-hu20b} or for unsupervised machine translation \citep{xlm,chronopoulou-etal-2020-reusing, doi:10.1162/tacl/a00343}. In a second stage, we fine-tune the model for XNLI \citep{conneau-etal-2018-xnli} on labeled data in $\mathcal{L}_1$ (English), using $\mathcal{L}_1$ embeddings and freezing the embedding layer. Finally, we zero-shot transfer the model to $\mathcal{L}_2$ data by simply changing the language-specific embedding layer. \section{Experiments with XNLI} \label{sec:experiments-with-xnli} \subsection{Models} \label{sec:models} We compare several models in our experiments on cross-lingual natural language inference (textual entailment) with the XNLI dataset \citep{conneau-etal-2018-xnli}. We note that all models, with the exception of mBERT, follow the pipeline from the previous section to transfer the pretrained LM to a new language. The only difference between these models is the way the new embedding layer is created. \noindent\textbf{\textsc{joint}}. A system that employs parameter sharing based on surface form, that is, the union of the two language-specific vocabularies, similar to joint tokenization. 
The embeddings for the tokens that are not shared with the original embedding layer are initialized randomly. This model allows for a comparison between anchoring identical vs.\ semantically similar subwords identified by SMALA, as an inductive bias for cross-lingual vocabularies. Although this is not exactly the same as joint tokenization, previous works have suggested that performance is similar \citep{aji-etal-2020-neural, conneau-etal-2020-emerging} and that a language-specific embedding layer and tokenizer can have a positive impact on performance \citep{rust-etal-2021-good, pfeiffer2020unks}. \noindent\textbf{\textsc{ours}}. Our approach (SMALA) leverages similarity to find alignments between subwords. The parameters of the subwords are then tied, as explained above. Our system is directly comparable to \textsc{joint}, since we only use monolingual data to find the alignments, and the non-aligned subwords are randomly initialized. \noindent\textbf{\textsc{ours+align}}. Random initialization of the non-aligned subwords requires more computation to reach convergence \citep{artetxe-etal-2020-cross} and/or can lead to subpar performance\footnote{In our experiments, even a random alignment produced better results than random initialization.} \citep{tran2020english, aji-etal-2020-neural}. Therefore, we also propose a system which initializes the non-aligned subwords using the similarity matrix $S$ from which we calculated the subword alignments. Following \citet{tran2020english}, we use \textit{sparsemax} \citep{pmlr-v48-martins16} to initialize the non-shared $\mathcal{L}_2$ subwords as a sparse weighted sum of $\mathcal{L}_1$ subwords. We experiment with either monolingual or parallel data to learn the similarity matrix $S$ in this case. \noindent\textbf{\textsc{ramen}}. \textsc{ramen} \citep{tran2020english} leverages alignments learned from either monolingual or parallel data to initialize the $\mathcal{L}_2$ subword embeddings. 
Unlike our approach, with monolingual data \textsc{ramen} uses common words to initialize a supervised word alignment method \citep{joulin-etal-2018-loss}, and then transfers the word alignment to subwords using several approximations. In contrast to our method, \textsc{ramen} does not employ any parameter sharing but trains a full embedding layer for $\mathcal{L}_2$. \noindent\textbf{m\textsc{BERT}}. For comparison, we use multilingual BERT \citep{devlin-etal-2019-bert} in the same zero-shot cross-lingual transfer setting. However, results are not strictly comparable to the above models, since mBERT has a larger shared vocabulary, hence more parameters (178M compared to 133M for \textsc{ramen}), and is trained for more steps. We include mBERT in our experiments as a reference for high-performing multilingual models. \subsection{Data and Settings} For XNLI experiments, we select five target languages that vary in terms of language family, typology and script: Spanish (Es), German (De), Greek (El), Russian (Ru) and Arabic (Ar). We obtain monolingual corpora from the Wikipedia of each language using WikiExtractor\footnote{\href{https://github.com/attardi/wikiextractor}{https://github.com/attardi/wikiextractor}}. We use these corpora for MLM training, similar to \citet{devlin-etal-2019-bert}, and to extract subword alignments using SMALA. When parallel data is used, we use either Europarl~\cite{koehn-etal-2007-moses} or the United Nations Parallel Corpus~\cite{ziemski-etal-2016-united}. We use the same amount of parallel data for each pair and subsample the data if needed. Both monolingual and parallel data are lowercased and tokenized with the Moses tokenizer~\cite{koehn-etal-2007-moses}. For our implementation we use Hugging Face's Transformers library~\cite{HF}, and for \textsc{ramen} we use the author's public implementation. We choose \textsc{BERT-base} (110M parameters) as our pretrained LM. 
We further train all bilingual models on MLM for 120k steps with a batch size of 76 and a maximum sequence length of 256. Each batch contains equal numbers of samples from both languages, similar to \citet{tran2020english}. We optimize bilingual LMs using Adam \citep{DBLP:journals/corr/KingmaB14} with bias correction, a learning rate of 5e$-$5 and linear decay. \begin{table*}[ht!] \centering \begin{tabular}{lcccccc} \Xhline{2\arrayrulewidth} Method &Data &Es &De &El &Ru &Ar \\ \hline \textsc{joint} &mono &$70.0 \pm 0.2$ &$64.4 \pm 0.8$ &$61.2 \pm 0.9$ &$56.2 \pm 1.1$ &$45.8 \pm 0.4$ \\ \textsc{ours} &mono &$\mathbf{74.2} \pm 0.4$ &$\mathbf{70.6} \pm 0.1$ &$\mathbf{70.0} \pm 0.7$ &$\mathbf{65.4} \pm 0.9$ &$\mathbf{62.3} \pm 0.4$ \\ \hline \textsc{ours+align} &mono &$76.5 \pm 0.4$ &$72.8 \pm 0.5$ &$72.9 \pm 0.5$ &$70.2 \pm 0.6$ &$67.0 \pm 0.4$ \\ \textsc{ours+align} &para &$\mathbf{77.1} \pm 0.8$ &$\mathbf{74.1} \pm 0.5$ &$\mathbf{75.1} \pm 0.7$ &$\mathbf{71.9} \pm 0.4$ &$\mathbf{67.8} \pm 0.8$ \\ \hline \textsc{ramen} &mono &$76.5 \pm 0.6$ &$72.5 \pm 0.8$ &$72.5 \pm 0.8$ &$68.6 \pm 0.7$ &$66.1 \pm 0.8$ \\ \textsc{ramen} &para &$\mathbf{77.3} \pm 0.6$ &$\mathbf{74.1} \pm 0.9$ &$\mathbf{74.5} \pm 0.6$ &$\mathbf{71.6} \pm 0.8$ &$\mathbf{68.6} \pm 0.6$ \\ mBERT &mono &$74.9 \pm 0.4$ &$71.3 \pm 0.6$ &$66.6 \pm 1.2$ &$68.7 \pm 1.1$ &$64.7 \pm 0.6$ \\ \Xhline{2\arrayrulewidth} \end{tabular} \caption{\label{tab:xnli_results} Zero-shot classification scores (accuracy) on XNLI: mean and standard deviation over 5 runs, when either monolingual or parallel corpora are used for alignment (or token matching for \textsc{joint}). Systems in the first 4 rows use parameter sharing, while those in rows 5-6 train a full embedding layer. Moreover, rows 1-2 only share subwords, while rows 3-4 also use alignment for initialization. 
The best model in each subgroup is in \textbf{bold}.} \end{table*} We fine-tune the adapted bilingual LMs on the MultiNLI dataset \citep{N18-1101} in \textit{English}, using a batch size of 32 and a maximum sequence length of 256. We also use Adam with a learning rate of 2e$-$5, a linear warm up schedule over the 10\% initial steps, bias correction and linear decay. We fine-tune each model for five epochs and evaluate five times per epoch, as suggested by \citet{dodge2020finetuning}. We select the best model based on validation loss. We evaluate on the test data for $\mathcal{L}_2$ from the XNLI dataset \citep{conneau-etal-2018-xnli}, with no specific training for $\mathcal{L}_2$ (zero-shot). As in the robust evaluation scheme for zero-shot cross-lingual transfer used by \citet{wu-dredze-2020-explicit}, we report mean and variance over the systems resulting from five different runs of the fine-tuning stage, with the same hyper-parameters but different seeds. We did not perform any exhaustive hyper-parameter search for this task, and use the exact same settings for all model variants and languages. For each target language, we learn a new subword vocabulary using the WordPiece\footnote{As implemented at: \href{https://huggingface.co/docs/tokenizers/python/latest/}{https://huggingface.co/docs/tokenizers /python/latest/}.} algorithm \citep{WordPiece}. The bilingual models contain two language-specific embedding layers corresponding to these vocabularies.\footnote{Following \citet{tran2020english}, we initialize special tokens ([CLS], [SEP], [MASK], [PAD] and [UNK]) with their pretrained representations, in all methods except mBERT.} For \textsc{ramen}, which does not share parameters, the size of the $\mathcal{L}_2$ embedding layer is the same as the original one. For methods that employ sharing (\textsc{ours} and \textsc{joint}), the parameters of the shared subwords are tied, reducing the size of the new embedding layer. 
Table~\ref{tab:parameter_efficiency} presents the percentage of the $\mathcal{L}_2$ embeddings that are shared with $\mathcal{L}_1$ for all methods. \subsection{Results on XNLI} We present the results of our experiments on XNLI in Table~\ref{tab:xnli_results}. Our approach is significantly better than sharing based on surface form (\textsc{ours} vs.\ \textsc{joint}), and the improvement increases with the distance of $\mathcal{L}_2$ from English (for Greek, Russian and Arabic). This can be attributed to the erroneous sharing of non-informative subwords (e.g. letters and English words) in the \textsc{joint} model. Our approach is more parameter-efficient than \textsc{joint}, as shown in Table~\ref{tab:parameter_efficiency}, as it enables the sharing of a larger number of embeddings, especially for distant languages. Therefore, despite the smaller number of parameters, results are significantly improved. Moreover, the results also demonstrate the applicability of our approach to languages with different scripts. Among methods that do not make use of parallel data (rows 1-3 and 5 in Table~\ref{tab:xnli_results}), we notice a significant gap between the performance of anchoring based on surface form (\textsc{joint}) and training a full embedding layer, without sharing, initialized by alignment (\textsc{ramen} with mono). Our approach can sufficiently bridge this gap, with a smaller number of parameters, demonstrating the importance of the choice of anchors in cross-lingual vocabularies. 
\begin{table}[t] \centering \resizebox{\columnwidth}{!}{% \begin{tabular}{lcccccc} \Xhline{2\arrayrulewidth} Method &Data &Es &De &El &Ru &Ar \\ \hline \textsc{joint} &mono &$26\%$ &$25\%$ &$11\%$ &$9\%$ &$10\%$ \\ \textsc{ours} &mono &$44\%$ &$37\%$ &$33\%$ &$31\%$ &$30\%$ \\ \textsc{ours} &para &$32\%$ &$26\%$ &$21\%$ &$21\%$ &$15\%$ \\ \Xhline{2\arrayrulewidth} \end{tabular} } \caption{\label{tab:parameter_efficiency} Percentage of $\mathcal{L}_2$ embeddings that are shared with $\mathcal{L}_1$ (English) for each system and language.} \end{table} Among methods that use alignment (rows 3-6), our approach with additional alignment of the non-shared subwords (\textsc{ours+align}) performs on par or better than \textsc{ramen}. This trend is consistent across the use of monolingual and parallel data for the alignment. In the latter case, the alignment is learned with the same method and data in both systems. Our higher score supports our claim that better anchoring can lead to more parameter-efficient vocabularies without sacrificing performance. Finally, in Table~\ref{tab:xnli_results}, we observe that all methods that employ alignment outperform mBERT. In some cases, even our approach without alignment performs comparably (Es, De) or even better (El) than mBERT. These results show that our method -- which transfers a monolingual LM to an unseen language with minimal computation demands -- is a competitive alternative to using an off-the-shelf multilingual model. This is particularly useful when the considered language is not modeled well (e.g.\ Greek) or not covered at all by the multilingual model. 
\begin{table*}[t] \begin{center} \begin{tabular}{lcccccccc} \Xhline{2\arrayrulewidth} Languages & \multicolumn{2}{c}{En-Ru} & \multicolumn{2}{c}{En-De } & \multicolumn{2}{c}{En-Ro } & \multicolumn{2}{c}{En-Ar } \\ Data & \multicolumn{2}{c}{25M} & \multicolumn{2}{c}{5.85M} & \multicolumn{2}{c}{612k} & \multicolumn{2}{c}{239k} \\ & $\leftarrow$ & $\rightarrow$ & $\leftarrow$ & $\rightarrow$ & $\leftarrow$ & $\rightarrow$ & $\leftarrow$ & $\rightarrow$ \\ \hline \textsc{joint} & 30.0 & 26.1 & 32.1 & 27.1 & 30.9 & 23.2 & 29.0 & 11.8 \\ \textsc{ours} & 30.2 & \textbf{26.6} & 32.1 & 27.0 & 30.8 & 23.3 & 28.8 & 12.2 \\ \Xhline{2\arrayrulewidth} \end{tabular} \caption{\label{tab:MT_results_1col} BLEU scores of baseline and our system for machine translation. Language pairs are ordered by decreasing size of training data (numbers of sentences). \textbf{Bold} indicates statistical significance ($p<0.05$).} \end{center} \end{table*} \section{Experiments with Machine Translation} \label{sec:smala-for-mt} In the second set of experiments, we apply SMALA to MT by leveraging subword alignments to create shared bilingual vocabularies from scratch, instead of joint subword vocabularies learned on concatenated source and target corpora. \subsection{Applying SMALA to MT} The majority of current Transformer-based MT systems \citep{NIPS2017_3f5ee243} share the vocabulary and the corresponding embedding layer between the encoder and the decoder of a sequence-to-sequence architecture. To apply SMALA to MT, instead of jointly learning the subwords on the concatenated corpora, we learn separate subword vocabularies for each language, and then merge them into a joint one. We use SMALA to extract alignments from the available parallel data of each language pair, and use aligned pairs as unique subwords (shared entries), serving as anchors in the shared embedding layer. 
These anchors play the same role as identical subwords in joint vocabularies, and thus address the problem of false negatives. Conversely, identical subwords that are not aligned with SMALA remain two distinct language-specific entries, thus addressing the problem of false positives. To create a subword vocabulary of a given size $n$ using SMALA, we first learn two monolingual vocabularies of size $m > n$, one for the source and one for the target language. Then, we select the $\alpha = 2m - n$ alignments with the highest similarity scores, as defined in Section~\ref{sec:anchoring-of-similar-subwords}. This ensures that, when the two vocabularies are joined and the $\alpha$ pairs of anchors are merged, the size of the resulting vocabulary is $n$. \subsection{Data, Tools and Settings} We choose four language pairs that represent different levels of data availability and language relatedness, and run experiments in both directions: Russian, German, Romanian and Arabic, to and from English. Training and test data come from WMT17\footnote{\href{http://statmt.org/wmt17/translation-task.html}{http://statmt.org/wmt17/translation-task.html}} for En-Ru and En-De, WMT16\footnote{\href{http://statmt.org/wmt16/translation-task.html}{http://statmt.org/wmt16/translation-task.html}} for En-Ro, and IWSLT17\footnote{TED talks from: \href{https://wit3.fbk.eu/}{https://wit3.fbk.eu/}} for En-Ar. We pre-tokenize the data with the Moses Tokenizer~\cite{koehn-etal-2007-moses}, and learn subwords with the Unigram LM model \citep{Kudo2018-xx} as implemented in SentencePiece\footnote{\href{https://github.com/google/sentencepiece}{https://github.com/google/sentencepiece}}. We choose the size of the shared subword vocabulary based on the size of the data, following \citet{Kudo2018-xx}: 32k for high-resource pairs (En-Ru and En-De) and 16k for medium and low-resource pairs (En-Ro and En-Ar).
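The merging rule above ($\alpha = 2m - n$) can be sketched as follows. This is a minimal illustration of the bookkeeping only; the function name and the triple format for alignments are our own assumptions, not the paper's implementation.

```python
def merge_vocabularies(src_vocab, tgt_vocab, alignments, n):
    """Merge two monolingual subword vocabularies of equal size m into a
    shared vocabulary of size n by fusing alpha = 2m - n aligned pairs.

    `alignments` holds (src_subword, tgt_subword, similarity) triples,
    e.g. as extracted by an aligner such as SMALA (illustrative format).
    """
    m = len(src_vocab)
    assert len(tgt_vocab) == m, "both monolingual vocabularies have size m"
    alpha = 2 * m - n
    # Keep the alpha highest-scoring alignments as shared anchors.
    anchors = sorted(alignments, key=lambda t: t[2], reverse=True)[:alpha]
    anchored_src = {s for s, _, _ in anchors}
    anchored_tgt = {t for _, t, _ in anchors}
    shared = [(s, t) for s, t, _ in anchors]  # each pair becomes ONE entry
    src_only = [s for s in src_vocab if s not in anchored_src]
    tgt_only = [t for t in tgt_vocab if t not in anchored_tgt]
    merged = shared + src_only + tgt_only
    assert len(merged) == n  # 2m - alpha = n by construction
    return merged
```

With $m = 4$ and $n = 6$, for instance, $\alpha = 2$ pairs are merged, leaving two anchors plus four language-specific entries.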
We report BLEU scores \citep{papineni-etal-2002-bleu} obtained with SacreBLEU \citep{post-2018-call} on detokenized text.\footnote{Signature: BLEU+c.mixed+\#.1+.exp+tok.13a+v.1.5.1} We train OpenNMT-py~\cite{klein-etal-2017-opennmt} for a maximum of 100k steps on high-resource pairs and 40k steps on medium or low-resource ones. Our base model is Transformer-Base ($L$=6, $H$=512) \citep{NIPS2017_3f5ee243} with the same regularization and optimization procedures. We use a batch size of 4k tokens and evaluate every 5k steps. We select the best model based on validation loss. Final translations are generated with a beam width of five. \subsection{Results} \label{sec:results-nmt} We present the results for our method and the baseline in Table~\ref{tab:MT_results_1col}. Our method yields comparable results to the baseline across all conditions of data availability and language relatedness. This demonstrates the viability of SMALA as an alternative for the creation of shared bilingual vocabularies. We observe a slight increase in performance in distant language pairs (En-Ru and En-Ar), which could be explained by the difference in scripts. Indeed, joint tokenization (baseline system) is not able to identify anchors when the script is not shared between languages, resorting to a small number of shared subwords that are mostly uninformative, often due to the presence of English words in the other language. In this case, the anchors found by SMALA (subword pairs corresponding to false negatives in the baseline) help to improve the joint vocabulary. Comparing the results of Tables~\ref{tab:xnli_results} and \ref{tab:MT_results_1col} we see that our approach does not equally improve results in both settings. We attribute this difference to the amount of supervision available in MT in the form of bitext, and to the strong contextual constraints from the decoder. 
Although false positives and negatives are present in both scenarios, the availability of parallel data for training forces NMT models to disambiguate these subwords based on context in both languages at the same time. \section{Analysis} In this section we attempt to quantify the effect of false positives and false negatives on each of the tasks. \subsection{Ablation Study on XNLI}\label{sec:ablation_xnli} We begin with a model that creates cross-lingual anchors based on surface form (\textsc{joint}) and we address either false positives only (\textsc{$-$fp}) or false negatives only (\textsc{$-$fn}) among shared subwords. In the latter case, if a subword is both a false positive and a false negative, then we treat it as a false negative -- e.g., \textit{also} in English should not be aligned with \textit{also} in German but with \textit{auch}. We follow the pipeline of Section~\ref{sec:lm-transfer} and present the results on XNLI in Table~\ref{tab:only_fp_fn}. \begin{table}[ht] \centering \resizebox{\columnwidth}{!}{% \begin{tabular}{lcccccc} \Xhline{2\arrayrulewidth} Method &Es &De &El &Ru &Ar \\ \hline \textsc{joint} &$70.0$ &$64.4$ &$61.2$ &$56.2$ &$45.8$ \\ \textsc{$-$fp} &$68.5$ &$61.7$ &$62.6$ &$53.6$ &$44.8$ \\ \textsc{$-$fn} &$74.3$ &$70.0$ &$70.2$ &$65.8$ &$63.1$ \\ \textsc{ours ($-$fp$-$fn)} &$74.2$ &$70.6$ &$70.0$ &$65.4$ &$62.3$ \\ \Xhline{2\arrayrulewidth} \end{tabular} } \caption{\label{tab:only_fp_fn} Effect of removing false positives or false negatives in XNLI (accuracy).} \end{table} We observe that by only removing false positives (\textsc{$-$fp}), performance drops compared to \textsc{joint}. This can be attributed to the ability of the model to disambiguate false positives in the presence of context. It could also be due to a limitation of our method in identifying false positives with high precision, especially for (sub)words that have more than one sense.
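The distinction between the two error types can be made concrete with a toy classifier. This is our own illustrative sketch, not the paper's code; the set-based data structures are an assumption: identical subwords that the aligner does not align are false positives, while aligned pairs with different surface forms are false negatives.

```python
def classify_subwords(identical, aligned):
    """Split subwords into true anchors, false positives, false negatives.

    identical: set of subwords with the same surface form in both languages.
    aligned:   set of (src, tgt) pairs aligned by a method such as SMALA.
    (Toy structures. Note that a subword may appear both as a false
    positive and in a false-negative pair; the ablation above treats
    such cases as false negatives.)
    """
    aligned_same_form = {s for s, t in aligned if s == t}
    true_anchors = identical & aligned_same_form
    false_positives = identical - aligned_same_form        # same form, not aligned
    false_negatives = {(s, t) for s, t in aligned if s != t}  # aligned, different form
    return true_anchors, false_positives, false_negatives
```

For example, with English \textit{also} aligned to German \textit{auch}, the surface-identical pair (\textit{also}, \textit{also}) is flagged as a false positive and (\textit{also}, \textit{auch}) as a false negative.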
Conversely, the problem of false negatives appears to be the more important one: by addressing it (\textsc{$-$fn}), results improve significantly over \textsc{joint}. The similar performance of \textsc{$-$fn} and \textsc{ours} may be due to the removal of certain false positives along with many false negatives (see also Appendix~\ref{appendix:alignments_smala}). \subsection{False Positives and Negatives in MT} In order to quantify the effect of false positives and false negatives in MT, we compare the performance of joint tokenization with SMALA for cases where the presence of such subwords is significant. Table~\ref{tab:FP_FN_results} presents BLEU scores for sentences that contain a high percentage of false positives and/or negatives (more than 50\% of the tokens) on the source side, along with the number of such sentences. BLEU scores for percentages between 0\% and 60\% are represented graphically in the Appendix, Figure~\ref{fig:FP_FN_plot}. \begin{table}[ht] \begin{center} \resizebox{\columnwidth}{!}{% \begin{tabular}{lcccccccc} \Xhline{2\arrayrulewidth} Languages & \multicolumn{2}{c}{En-Ru} & \multicolumn{2}{c}{En-De} & \multicolumn{2}{c}{En-Ro} & \multicolumn{2}{c}{En-Ar} \\ & $\leftarrow$ & $\rightarrow$ & $\leftarrow$ & $\rightarrow$ & $\leftarrow$ & $\rightarrow$ & $\leftarrow$ & $\rightarrow$ \\ \hline Sentences & 49 & 2225 & 1674 & 2216 & 1249 & 1295 & 141 & 866 \\ \hline \textsc{joint} & 39.2 & 27.6 & 33.1 & 27.0 & 31.6 & 24.6 & 37.8 & 16.2 \\ \textsc{ours} & 42.2 & 28.0 & 33.0 & 27.0 & 32.0 & 24.8 & 40.4 & 16.6 \\ $\Delta$ & +3.0 & +0.4 & -0.1 & 0.0 & +0.4 & +0.2 & +2.6 & +0.3 \\ \Xhline{2\arrayrulewidth} \end{tabular} } \caption{\label{tab:FP_FN_results} BLEU scores for sentences where more than 50\% of tokens are false positives and/or false negatives.
The number of selected sentences (out of a total of 3,000) is given for each translation direction.} \end{center} \end{table} The results of Table~\ref{tab:FP_FN_results} show improved performance of our method over the baseline, confirming our original intuition regarding false positives and negatives. Although MT models with joint tokenization use context to disambiguate false positives -- much as context helps to disambiguate polysemous words to a certain extent \citep{rios-gonzales-etal-2017-improving,pu-etal-2018-integrating} -- performance tends to drop compared to SMALA when the number of such subwords increases. The gap in performance between \textsc{joint} and \textsc{ours} (using SMALA) is larger for pairs that do not share a script (En-Ru and En-Ar), which is a possible indication of the impact of false negatives, despite the smaller sample sizes. Overall, the results of Tables~\ref{tab:MT_results_1col} and \ref{tab:FP_FN_results} demonstrate that our approach is competitive with joint tokenization in most cases and superior in challenging cases with multiple false positives and negatives. \subsection{Cross-lingual Word Representations} In order to validate our claim that SMALA facilitates cross-lingual transfer, we perform an intrinsic evaluation of the obtained representations. We compare the quality of representations created using SMALA vs.\ joint tokenization for Bilingual Lexicon Induction (BLI), a standard evaluation task for cross-lingual word embedding methods. Specifically, we compare the performance of the bilingual models from the first setting (see Section~\ref{sec:lm-transfer}) after the bilingual MLM training step, but before the XNLI fine-tuning. We do not include methods that use alignment to initialize the embedding layer (for these results see Appendix \ref{appendix:bli}), in order to isolate the effect of anchors. We follow the setting of \citet{vulic-etal-2020-probing} to compute word-level representations.
We encode each word in isolation using the model, in the form [CLS] \textit{word} [SEP]. We extract the representations from the embedding layer, excluding the representations of special tokens. If a word is split into more than one subword, we average the obtained representations. We perform this operation for every word of the test set for both languages. We retrieve word translations using \textit{Cross-Domain Similarity Local Scaling} (CSLS) with $K$=10 neighbours \citep{lample2018word}. \begin{figure}[ht] \centering \includegraphics[height=5cm, keepaspectratio]{BLI3.pdf} \caption{Precision@1 results for the BLI task.} \label{fig:BLI_results} \end{figure} Our results on the MUSE benchmark~\cite{lample2018word}, a bilingual dictionary induction dataset, are presented in Figure~\ref{fig:BLI_results}, using precision-at-1 (P@1) scores, following standard practice. We observe that by using SMALA to create cross-lingual anchors (\textsc{ours}) we can greatly improve performance on BLI compared to methods that use identical subwords (\textsc{joint} and mBERT). Figure~\ref{fig:BLI_results} also shows that the performance of \textsc{joint} and mBERT decreases significantly as the two languages become more distant and their vocabularies no longer have considerable overlap, which points at the limitations of joint tokenization, and especially of false negatives, which are most frequent in this case. Similar to \citet{Wang2020Cross-lingual}, we also evaluate on words that are not shared, by removing test pairs with the same surface form (e.g.\ (\textit{epic}, \textit{epic}) as a test pair for en-es), and present the difference in performance in Figure~\ref{fig:BLI_drop}. We find that the performance of \textsc{joint} and mBERT decreases significantly, unlike that of \textsc{ours}.
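The retrieval procedure above can be sketched in a few lines of NumPy. This is our own sketch of the CSLS criterion of \citet{lample2018word} (csls $= 2\cos - r_{\mathrm{src}} - r_{\mathrm{tgt}}$, where $r$ is the mean cosine similarity to the $k$ nearest cross-lingual neighbours); array names and the averaging helper are illustrative assumptions, not the paper's code.

```python
import numpy as np

def word_vector(subword_embs, subword_ids):
    """Average the embedding-layer rows of a word's subwords (sketch)."""
    return subword_embs[subword_ids].mean(axis=0)

def csls_retrieve(src, tgt, k=10):
    """Return, for each source word vector, the index of the best target
    word under CSLS, which penalizes 'hub' vectors that are close to
    everything: csls = 2*cos - r_src - r_tgt."""
    src = src / np.linalg.norm(src, axis=1, keepdims=True)
    tgt = tgt / np.linalg.norm(tgt, axis=1, keepdims=True)
    sims = src @ tgt.T                                   # cosine similarities
    r_src = np.sort(sims, axis=1)[:, -k:].mean(axis=1)   # hubness of sources
    r_tgt = np.sort(sims, axis=0)[-k:, :].mean(axis=0)   # hubness of targets
    csls = 2 * sims - r_src[:, None] - r_tgt[None, :]
    return csls.argmax(axis=1)                           # best target per source
```

In practice such a routine is run over the word vectors of both test-set vocabularies, and P@1 is the fraction of source words whose retrieved target is the gold translation.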
For languages with different scripts (en-el, en-ru and en-ar), the performance of our approach even increases in this scenario, owing to the fact that our system is able to identify false positives and avoid retrieving them. This confirms our intuition that the use of surface form to create cross-lingual anchors leads to poorly aligned cross-lingual representations for the non-shared subwords. \begin{figure}[ht] \centering \includegraphics[height=5cm, keepaspectratio]{BLI_dif.pdf} \caption{Precision@1 difference on BLI when test pairs of same surface form are removed.} \label{fig:BLI_drop} \end{figure} \section{Conclusion} In this work we introduced SMALA, a novel approach to constructing shared subword vocabularies that leverages similarity instead of identical surface forms to create anchors. We demonstrate that our approach outperforms current methods for the joint construction of multilingual subword vocabularies in cases where there is no cross-lingual signal apart from the anchors. When cross-lingual supervision is available, our approach performs comparably to the baseline, while showing improved performance in cases with numerous false positives and negatives. In future work, we aim to extend our method to more than two languages. We also intend to explore the effectiveness of SMALA for closely related languages and compare SMALA to other approaches, such as those using transliteration. In addition, we aim to apply SMALA to settings with varying levels of cross-lingual supervision, such as unsupervised MT. \section*{Acknowledgments} We are grateful to the Swiss National Science Foundation for its support through grant n.\ 175693 for the DOMAT project: ``On-demand Knowledge for Document-level Machine Translation'', and to Armasuisse for the FamilyMT project. \bibliographystyle{acl_natbib}
\section{Introduction} \noindent The United States' constitution stipulates that the president serves a 4-year term, with a maximum of two terms. In 2016, Republican candidate Donald Trump and Democratic candidate Hillary Clinton both ran for the presidency. Donald Trump won, becoming the 45th President of the United States, beating Hillary Clinton in electoral votes 306 to 232; Clinton, however, won the popular vote by almost 3 million votes.\footnote{\url{https://www.nytimes.com/elections/2016/results/president}} Four years later, it is again time for the United States to return to the polling booths (or, in the current times, to the mail box) to vote for the individual who will be the president of the United States for the coming 4 years: incumbent Republican Donald Trump or the Democratic challenger, and former Vice-President, Joe Biden. Historically, the incumbent president is favored to win their party's nomination for president of the United States;\footnote{\url{https://time.com/5682760/incumbent-presidents-primary-challenges/}} although Trump did face a few challengers from the Republican party, it became increasingly clear that he would gain the Republican party's nomination. The Democratic primary, however, was a contentious race, eliciting one of the largest candidate pools in modern American politics, with 29 candidates vying to be the Democratic party's nominee for president.\footnote{\url{https://www.politifact.com/article/2019/may/02/big-democratic-primary-field-what-need/}} The large pool of candidates necessitated holding the Democratic debates on two separate nights in order to accommodate all the candidates, but as time wore on, many candidates began to drop out of the race. The advent of COVID-19 in the United States in March 2020, and the ensuing regulations to encourage social distancing, forced the remaining campaigns, in particular Joe Biden's and Bernie Sanders's campaigns, to shift to a virtual campaign model.
On April 8, 2020, Bernie Sanders dropped out, leaving Joe Biden as the presumptive Democratic party presidential nominee. Joe Biden announced his selection of Kamala Harris as his running mate on August 11, 2020, and he officially accepted the Democratic nomination on August 20, 2020, during the Democratic National Convention. Donald Trump officially accepted his nomination on August 27, 2020, during the Republican National Convention. As the final sprint to election day on November 3, 2020 begins, Americans are taking to online social platforms to voice their opinions and engage in conversation surrounding the upcoming elections. Twitter has historically been a platform used by politicians to reach their base \cite{jungherr2016twitter}. Inspired by the impact of our similar initiative to share a COVID-19 Twitter dataset \cite{chen2020tracking}, in this paper we briefly document the first public release of our election-related dataset, which we have been collecting for over one year. We hope that, in releasing this dataset, the research community can leverage its content to study and understand the dynamics of a highly contentious election held during a pandemic, particularly with reports of confirmed foreign interference already surfacing.\footnote{\url{https://home.treasury.gov/news/press-releases/sm1118}} \section{Data Collection} We began collecting election-related tweets on \textbf{May 20, 2019}, and have continued our collection efforts uninterrupted since then. We use Twitter's streaming API through Tweepy to follow specific mentions and accounts related to candidates who were running for their party's nomination for president of the United States, in addition to a manually-compiled, general election-related list of keywords and hashtags. As candidates officially announced the suspension of their campaigns, their respective accounts and mentions were removed from our real-time tracking list.
However, for a subset of these accounts, we decided to restart tracking at later dates, for reasons associated with real-world events, most notably political events; we also added supplemental keywords and accounts to our tracking list. This process is documented in Table~\ref{mention_table}. We will continue to collect election-related tweets through the elections and for a few months after the president-elect is declared (depending on when all the votes are counted, given the mail-in ballots), so as to capture the nation's activity during the election season and the reaction to the result. We have collected well over 600 million tweets, resulting in over 4 TB of raw data. Our first release is of tweets from 6/20/2020 through 9/06/2020, constituting about 240 million tweets and almost 2 TB of raw data. In future releases, we will continue processing and adding data we collected prior to 6/20/2020 and after 9/06/2020. We anticipate that our entire data collection will contain well over one billion tweets, as the data keeps growing rapidly as we approach the November 3, 2020 election. \textbf{Note}: Twitter's Developer Agreement \& Policy stipulates that we are unable to share any data specific to individual tweets except for a tweet's Tweet ID. As a result, we are releasing a collection of Tweet IDs that researchers can use in tandem with Twitter's API to retrieve the full tweet payload. We recommend using tools such as DocNow's Hydrator\footnote{\url{https://github.com/DocNow/hydrator}} or Twarc;\footnote{\url{https://github.com/DocNow/twarc}} we do note that if tweets have been deleted from Twitter's platform, researchers will be unable to retrieve the payloads for those tweets. In our repository, we provide ready-to-use Python code scripts to perform all the operations described above.
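Because the statuses-lookup endpoint used by hydration tools accepts at most 100 Tweet IDs per request, a rehydration script typically reads the released ID files in batches. A minimal sketch of that batching step (our own illustration, independent of the tools above; the hydration call itself is left to twarc or Hydrator):

```python
def batch_tweet_ids(path, batch_size=100):
    """Read one Tweet ID per line from a released ID file and yield
    batches sized for Twitter's lookup endpoint (100 IDs per request).
    Blank lines are skipped; IDs are kept as strings to avoid any
    integer-precision issues with 64-bit IDs."""
    batch = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line:
                batch.append(line)
            if len(batch) == batch_size:
                yield batch
                batch = []
    if batch:  # final, possibly short, batch
        yield batch
```

Each yielded batch can then be passed to the hydration tool of choice; deleted tweets will simply be missing from the response.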
\subsection{Tracked Keywords and Accounts} In order to capture the discourse surrounding the elections, we followed specific user mentions and accounts that are tied to the official and personal accounts of candidates running for president. Twitter's streaming API gives us access to an approximately 1\% sample of all tweets in real time; it takes in a list of keywords and returns any tweet within that sample stream that contains any of the keywords in the metadata or text of the tweet payload. As a result, we do not need to track every permutation of each keyword. We list a sample of the mentions and accounts that we tracked in Table \ref{mention_table} and a sample of the keywords we tracked in Table \ref{keyword_table}. A full list can be found in the accounts.txt file and keywords.txt file in our data repository. \begin{table}[t] \centering \footnotesize \begin{tabular}{cccc} \textbf{Mentions} & \textbf{Started Tracking} & \textbf{Stopped} & \textbf{Restarted}\\ \hline @realDonaldTrump & 5/20/19 & - & - \\ @GovBillWeld & 5/20/19 & - & - \\ @MarkSanford & 5/20/19 & 11/14/19 & 9/25/20 \\ @WalshFreedom & 5/20/19 & - & - \\ @MichaelBennet & 5/20/19 & - & - \\ @JoeBiden & 5/20/19 & - & - \\ @CoryBooker & 5/20/19 & 1/13/20 & 9/25/20 \\ @GovernorBullock & 5/20/19 & 12/2/19 & 9/25/20 \\ @PeteButtigieg & 5/20/19 & - & - \\ @JulianCastro & 5/20/19 & 1/2/20 & 9/25/20 \\ @BilldeBlasio & 5/20/19 & 11/14/19 & 9/25/20 \\ @JohnDelaney & 5/20/19 & - & - \\ @TulsiGabbard & 5/20/19 & - & - \\ @gillbrandny & 5/20/19 & 11/14/19 & 6/20/20 \\ @KamalaHarris & 5/20/19 & 12/3/19 & 6/20/20 \\ @SenKamalaHarris & 5/20/19 & 12/3/19 & 6/20/20 \\ @Hickenlooper & 5/20/19 & 11/14/19 & 9/25/20 \\ @JayInslee & 5/20/19 & 11/14/19 & 9/25/20 \\ @amyklobuchar & 5/20/19 & - & - \\ @SenAmyKlobuchar & 5/20/19 & 3/3/20 & 6/20/20 \\ @WayneMessam & 5/20/19 & 12/2/19 & 9/25/20 \\ @sethmoulton & 5/20/19 & 11/14/19 & 9/25/20 \\ @BetoORourke & 5/20/19 & 11/14/19 & 9/25/20 \\ @TimRyan & 5/20/19 & 11/14/19 &
9/25/20 \\ @BernieSanders & 5/20/19 & - & - \\ @ericswalwell & 5/20/19 & 11/14/19 & 9/25/20 \\ @ewarren & 5/20/19 & - & - \\ @SenWarren & 6/20/20 & - & - \\ @marwilliamson & 5/20/19 & - & - \\ @AndrewYang & 5/20/19 & - & - \\ @JoeSestak & 5/20/19 & 12/2/19 & 9/25/20 \\ @MikeGravel & 5/20/19 & 8/6/19 & 9/25/20 \\ @TomSteyer & 5/20/19 & - & - \\ @DevalPatrick & 5/20/19 & - & - \\ @MikeBloomberg & 5/20/19 & - & - \\ @staceyabrams & 6/20/20 & - & - \\ @SenDuckworth & 6/20/20 & - & - \\ @TammyforIL & 6/20/20 & - & - \\ @KeishaBottoms & 6/20/20 & - & -\\ @RepValDemings & 6/20/20 & - & - \\ @val\_demings & 6/20/20 & - & - \\ @AmbassadorRice & 6/20/20 & - & - \\ @GovMLG & 6/20/20 & - & - \\ @Michelle4NM & 6/20/20 & - & - \\ @SenatorBaldwin & 6/20/20 & - & - \\ @tammybaldwin & 6/20/20 & - & - \\ @KarenBassTweets & 6/20/20 & - & - \\ @RepKarenBass & 6/20/20 & - & - \\ @Maggie\_Hassan & 6/20/20 & - & - \\ @SenatorHassan & 6/20/20 & - & - \\ @GovRaimondo & 6/20/20 & - & - \\ @GinaRaimondo & 6/20/20 & - & - \\ @GovWhitmer & 6/20/20 & - & - \\ @gretchenwhitmer & 6/20/20 & - & - \\ \end{tabular} \caption{A sample of the mentions and accounts that we actively tracked (v1.0 --- October 1, 2020).} \label{mention_table} \end{table} \begin{table}[t] \centering \footnotesize \begin{tabular}{cc} \textbf{Keywords} & \textbf{Tracked Since}\\ \hline ballot & 6/20/20 \\ mailin & 6/20/20 \\ mail-in &6/20/20\\ mail in & 6/20/20\\ donaldtrump & 9/12/20 \\ donaldjtrump & 9/12/20 \\ donald j trump & 9/12/20 \\ donald trump & 9/12/20 \\ don trump & 9/12/20 \\ joe biden & 9/12/20 \\ joebiden & 9/12/20 \\ biden & 9/12/20 \\ mike pence & 9/12/20 \\ michael pence & 9/12/20 \\ mikepence & 9/12/20 \\ michaelpence & 9/12/20 \\ kamala harris & 9/12/20 \\ kamala & 9/12/20 \\ kamalaharris & 9/12/20 \\ trump & 9/13/20 \\ PresidentTrump & 9/13/20 \\ MAGA & 9/13/20 \\ trump2020 & 9/13/20 \\ Sleepy Joe & 9/13/20 \\ Sleepyjoe & 9/13/20 \\ HidenBiden & 9/13/20 \\ CreepyJoeBiden & 9/13/20 \\ NeverBiden & 9/13/20 
\\ BidenUkraineScandal & 9/13/20 \\ DumpTrump & 9/13/20 \\ NeverTrump & 9/13/20 \\ VoteRed & 9/13/20 \\ VoteBlue & 9/13/20 \\ RussiaHoax & 9/13/20 \\ \end{tabular} \caption{A sample of keywords that we actively tracked in our Twitter collection (v1.0 --- October 1, 2020).} \label{keyword_table} \end{table} \begin{table*}[t] \centering \begin{tabular}{cccc} \textbf{Conservative/Trump Campaign} & \textbf{Liberal/Biden Campaign} & \textbf{Conspiracy} & \textbf{Other}\\ \hline MAGA & DemConvention & WWG1WGA & COVID19 \\ Trump2020 & BidenHarris2020 & QAnon & coronavirus \\ Trump & Biden2020 & Obamagate & BlackLivesMatter \\ RNC2020 & Democrats & & BLM \\ KAG & VoteBlueToSaveAmerica & & WalkAway \\ MAGA2020 & JoeBiden & & BREAKING \\ Trump2020Landslide & Biden & & Hydroxychloroquine \\ AmericaFirst & WakeUpAmerica & & TrumpIsANationalDisgrace \\ KAG2020 & & & TrumpVirus\\ TulsaTrumpRally & & & TraitorTrump\\ VoteRedToSaveAmerica & & & TRE45ON \\ & & & BountyGate\\ & & & TrumpIsALaughingStock\\ \hline \end{tabular} \caption{Top 35 hashtags (v1.0 --- October 1, 2020).} \label{tab:ht_table} \end{table*} \begin{table*}[t] \centering \begin{tabular}{cccc} \textbf{Conservative/Trump Campaign} & \textbf{Liberal/ Biden Campaign} & \textbf{Ballots} & \textbf{Other} \\ \hline president @realdonaldtrump & joe biden & mail-in voting & law order\\ donald trump & @joebiden @kamalaharris & mail-in ballots & law enforcement \\ president trump & kamala harris & postal service & black lives\\ fake news & & post office & white house \\ @whitehouse @realdonaldtrump & & mail sorting & united states \\ @realdonaldtrump @trump & & sorting machines & american people\\ radical left & & & new york\\ @realdonaldtrump @foxnews& & & president united\\ @realdonaldtrump @potus & & & make sure\\ mr. 
president & & & god bless\\ @potus @realdonaldtrump & & & executive order\\ @gop @realdonaldtrump & & & @realdonaldtrump @joebiden \\ & & & vice president \\ & & & four years \\ & & & @hkrassenstein @realdonaldtrump \\ & & & @itsjefftiedrich @realdonaldtrump \\ & & & health care \\ & & & many people \\ \hline \end{tabular} \caption{The top 40 bigrams categorized by general topic (v1.0 --- October 1, 2020).} \label{tab:bigram_table} \end{table*} \section{Data \& Access Modalities} \subsection{Release v1.0 (October 1, 2020)} This initial dataset includes tweets collected from June 20, 2020 through September 6, 2020, containing \textbf{240,225,806} tweets in all. We note that this is only a small portion of well over one year of data at our disposal as of the time of this writing. As we continue our computational efforts to pre-process and clean the rest of our existing dataset, we will be uploading batches of past and future data as they become available. The mentions/accounts and keywords that we follow can be found in Tables \ref{mention_table} and \ref{keyword_table}, respectively. Furthermore, Tables \ref{tab:ht_table} and \ref{tab:bigram_table} show the most popular hashtags and bigrams in this dataset. Partisan trends emerge \cite{jiang2020political}, along with conspiracy theories \cite{ferrara2020types} and public health-related trends intertwined with COVID-19 \cite{chen2020tracking}. \textbf{Access}: The dataset is publicly available and continuously maintained on GitHub at this address: \textbf{{\url{https://github.com/echen102/us-pres-elections-2020}}} The dataset is released in compliance with Twitter's Terms \& Conditions and the Developer's Agreement and Policies.\footnote{\url{https://developer.twitter.com/en/developer-terms/agreement-and-policy}} This dataset is still being collected and will be periodically updated on our GitHub repository.
Researchers who wish to use this dataset must agree to abide by the stipulations stated in the associated license and conform to Twitter's policies and regulations. If you have technical questions about the data collection, please contact Emily Chen at \url{[email protected]}. \noindent If you have any further questions about this dataset, please contact Dr.\ Emilio Ferrara at \url{[email protected]}. \bibliographystyle{aaai}
\section{Introduction} The cell, an active agent, is the basic unit of life. The motion of a single cell has been studied intensively in the past few decades\cite{lauffenburger1996cell,horwitz1999cell,bray2000cell,ponti2004two,li2009actin,raab2012crawling,novikova2017persistence,van2018mechanoreciprocity}. However, cells interact with each other and move in a collective manner, which plays an essential role in many biological processes, such as embryonic morphogenesis, organ development, wound healing, and the spreading of cancer\cite{friedl2009collective,rorth2009collective,poujade2007collective}. Studies of the collective movement of cells using physical models, spanning sub-cellular to supra-cellular length scales\cite{hakim2017collective,camley2017physical,alert2020physical}, have revealed unexpected spatial and temporal correlations. Since the discovery that flow-induced stretching of a long polymer in a high-viscosity solution could produce flow patterns resembling those found in fully developed turbulence~{\cite{groisman2000elastic}}, a number of studies have shown that turbulent-like features emerge in a variety of biological systems \cite{alert2022active,dombrowski2004self,rossen2014long,bratanov2015new,wensink2012meso}. In these examples, the complex flow patterns are thought to arise from active forces, and hence the collective behavior is referred to as active turbulence, even though the scaling behavior is not universal. Hydrodynamic models that build on the Toner-Tu equations \cite{toner1998flocks,toner2005hydrodynamics,wensink2012meso,bratanov2015new}, as well as particle-based simulations of self-propelled rods \cite{wensink2012meso}, have generated turbulent-like patterns. In addition, numerical solutions and theoretical arguments based on active nematic liquid crystal models have shown that vortex formation and turbulent-like behavior emerge \cite{giomi2015geometry,thampi2015intrinsic,alert2020universal}.
In all these models, the active forces that cause the turbulent-like behavior are introduced phenomenologically (by ``hand'', as it were). Recent studies\cite{rossen2014long,doostmohammadi2015celebrating} suggest that cell division and apoptosis lead to turbulent-like velocity fields with long-range vortex structures in two-dimensional (2D) cell monolayers, upon averaging over hundreds of division events. In addition, it has been found that cell displacements exhibit super-diffusive behavior in a 2D confluent monolayer \cite{giavazzi2018tracking}, where cell division and apoptosis occur continuously. Building on our previous studies\cite{malmi2018cell,sinha2020spatially}, which showed that an imbalance between the rates of cell division and apoptosis leads to unusual dynamics in a growing tumor spheroid, we investigate whether a similar picture can be used to analyze the time-dependent cell trajectories in the 2D monolayer\cite{giavazzi2018tracking} and to uncover new physical properties. Here, we investigate the collective cell migration induced by division and apoptosis in a confluent 2D cell monolayer, using experimental data on Madin-Darby Canine Kidney (MDCK) cells\cite{giavazzi2018tracking} in combination with agent-based simulations \cite{malmi2018cell,sinha2020spatially}. We find that cell division leads to a directed motion of cells, which explains their super-diffusive behavior, with the mean-square displacement $\Delta(t) \propto t^{1.5}$. The directed motion of the newborn cells induces rapid rearrangements of their neighbors in the confluent tissue through topological (T1 and T2) transitions. The cell rearrangement induced by a single cell division leads to vortex formation. With the occurrence of multiple cell births and deaths, a turbulent-like behavior naturally emerges, with the mean size of the vortices being $\sim$ 100 $\mu m$.
Interestingly, we find two scaling regimes for the kinetic energy spectrum $E(k)$, with one following a Kolmogorov-like power law, $E(k) \sim k^{-5/3}$. The long-range correlated vortex motion leads to a long time tail (LTT) in the velocity auto-correlation function $C_v(t)$, which decays as $C_v(t) \sim t^{-1/2}$, in contrast to the well-known result $C_v(t) \sim \frac{1}{t \sqrt{\ln t}}$ found in classical hard-disc-model fluids. \section{Results} \textbf{Vortex motion in a confluent cell monolayer:} A confluent MDCK monolayer exhibits rich dynamics (see Figs.~S1-S5 for some general features of the monolayer as it evolves in time) even though the cells are jammed. To probe the motility of the cells in the collective, we created the velocity vector field by applying digital particle image velocimetry\cite{thielicke2014pivlab} to the experimental data\cite{giavazzi2018tracking}. Collective motion of cells, in which a group of cells move in the same direction with similar magnitudes of velocity (${\bf v}$)\cite{vicsek2012collective}, occurs in different regions of the monolayer (see Fig.~\ref{figmodel}a). Interestingly, the formation of several vortices is discernible in the velocity field, which indicates coherent motion of cells (see the dashed rectangle in Fig.~\ref{figmodel}a as an example). We calculated the vorticity field (${\boldsymbol {\omega}} \equiv \nabla \times {\bf v} $), which describes the local rotational motion of the velocity flow (see Fig.~\ref{figmodel}b). Positive vorticity (yellow) indicates clockwise rotation, while negative vorticity (blue) indicates anticlockwise rotation. The vortices in Fig.~\ref{figmodel}a are clearly elucidated in the vorticity field (see the bright yellow circle in the lower left of Fig.~\ref{figmodel}b and the dashed rectangle in Fig.~\ref{figmodel}a).
Outward and inward velocity fluxes, characterized by the divergence ($\text{div} ({\bf v}) \equiv \nabla \cdot {\bf v} $) of the velocity flow (Fig.~\ref{figmodel}c), are non-uniform in the monolayer. \textbf{Cell division as the driver of vortex motion:} Previous theoretical and experimental studies\cite{ranft2010fluidization,matoz2017cell,malmi2018cell,petridou2019fluidization} have shown that active forces generated by cell division drive tissue fluidization, leading to heterogeneous cell motility patterns both in two-dimensional monolayers and in three-dimensional spheroids. Cell division events (Fig.~S1c) occur frequently in the experiments, as shown in Figs.~\ref{figmodel}a-c. Therefore, we surmised that the complex flow patterns observed here are driven by cell birth and death. To test this notion, we first show the positions of newly generated cells (born about 30 minutes before the snapshot in Fig.~\ref{figmodel}a) in the divergence field (see the purple dots in Fig.~\ref{figmodel}c). The origin of the outward flux (colored in yellow) frequently coincides with the appearance of newly created cells nearby. In addition, new cells arising from cell divisions are close to the vortices (see Figs.~\ref{figmodel}b-c). Therefore, it is likely that the flow patterns (Figs.~\ref{figmodel}a-c) in the monolayer are driven by active forces produced by cell division. \textbf{Particle based simulations:} To further elucidate the relevance of cell division, we simulated the dynamics of a cell monolayer using an agent-based model (details are in the Methods section and the SI). The velocity flow patterns in the simulations and experiments are similar (compare Figs.~\ref{figmodel}a and \ref{figmodel}d). In addition, several pairs of vortices with opposite directions of rotation are also observed (see the lower middle of Fig.~\ref{figmodel}e). 
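The vorticity and divergence fields described above can be estimated from the PIV velocity field by finite differences on the measurement grid. A minimal sketch in Python/NumPy (the grid spacing and array layout are illustrative assumptions, not taken from the original analysis):

```python
import numpy as np

def vorticity_divergence(vx, vy, dx=1.0, dy=1.0):
    """Finite-difference vorticity (dv_y/dx - dv_x/dy) and divergence
    (dv_x/dx + dv_y/dy) for a velocity field sampled on a regular grid.
    Arrays are indexed [row = y, column = x]."""
    dvx_dy, dvx_dx = np.gradient(vx, dy, dx)
    dvy_dy, dvy_dx = np.gradient(vy, dy, dx)
    omega = dvy_dx - dvx_dy   # out-of-plane component of curl(v)
    div = dvx_dx + dvy_dy     # in-plane divergence
    return omega, div
```

For a rigid rotation ${\bf v} = \Omega \hat{z} \times {\bf r}$ this returns a uniform vorticity $2\Omega$ and zero divergence, a useful sanity check before applying it to PIV data.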
Note that in contrast to other studies\cite{belmonte2008self,basan2013alignment,bi2016motility}, we do not include self-propulsion in our model. Thus, the observed velocity flow and vortex formation can only arise from self-generated active forces (SGAFs) arising from cell division. Indeed, cell division (see the purple dots in Fig.~\ref{figmodel}f), which produces an outward flow in the divergence field (Fig.~\ref{figmodel}f), is close to the vortex location (see Figs.~\ref{figmodel}d-e). Hence, cell division and apoptosis generate complex cell motility patterns, resembling turbulent-like structures (Figs.~\ref{figmodel}a-f). To illustrate how cell division leads to vortex formation, we tracked the collective cell motion in the dashed rectangle region in Fig.~\ref{figmodel}a for one hour (see Figs.~\ref{figmodel}g-l). The mother cell (in pink, see Fig.~\ref{figmodel}g at $t =100$ min) divides into two daughter cells (orange and blue) at $t = 110$ min (see Fig.~\ref{figmodel}h). The vortex observed in Figs.~\ref{figmodel}a-b at $t = 120$ min is established at $t = 110$ min (see both the velocity and vorticity fields in Fig.~\ref{figmodel}h), immediately after cell division. The vortex persists for an additional 20 minutes (Figs.~\ref{figmodel}i-j) before vanishing (Figs.~\ref{figmodel}k-l). In contrast to previous experimental observations\cite{rossen2014long,doostmohammadi2015celebrating}, where vortices are discernible only upon averaging over 100 division events, we found that a single cell division event leads to a vortex that lasts for more than 30 minutes. In addition, multiple cell division events drive a turbulent-like flow (Figs.~\ref{figmodel}a-f). \textbf{Directed motion of new born cells leads to anomalous diffusion:} To study explicitly how cell division creates vortices, we monitored the trajectories of newly created cells during their first $100$ minutes (see Fig.~\ref{msddis}a). 
For illustration purposes, we set the initial positions of all the cells at the origin. There is no globally preferred direction of motion, which suggests that the cells move isotropically (Fig.~\ref{msddis}a). Surprisingly, individual cells move in a directed manner after cell division, over a distance of $\sim$ 10 $\mu m$ (see Fig.~\ref{msddis}a and the inset figure for a few representative trajectories). The simulations also show a similar directed motion of cells during the first 100 minutes after cell divisions (see Fig.~\ref{msddis}b). To illustrate the consequence of cell-division induced directed motion, we calculated the mean-square displacement ($\Delta(t, t_i) =\langle [\textbf{r}(t_i+t)-\textbf{r}(t_i)]^2 \rangle$) of cells at different initial times $t_i$, where \textbf{r} is the cell position. Both experiments and simulations (Figs.~\ref{msddis}c-d) exhibit superdiffusive behavior, $\Delta(t, t_i) \propto t^\alpha$, with an exponent $\alpha \approx 1.51 \pm 0.1$ and $1.44 \pm 0.1$, respectively, irrespective of the time windows used. The displacement distributions, ($P(\Delta x)$, $P(\Delta y)$), of cells calculated from both experiments and simulations (see Fig.~S6a-d in the SI) deviate from the Gaussian distribution (see the green solid lines). It follows that cell division and apoptosis are the origin of the superdiffusive motion of cells. \textbf{Cell division and apoptosis result in topological changes:} The SGAFs \cite{sinha2020self} due to cell division and apoptosis lead to a directed motion of newly generated cells, which influences the movement of cells nearby in a confluent tissue. Interestingly, in the MDCK cell experiments several T1 transitions, which are critical during morphogenesis\cite{irvine1994cell} and cancer invasion\cite{levayer2015cell}, occur during a single cell division. One example is shown in Figs.~\ref{msddis}e-g (see also Fig.~S7 in the SI). 
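The superdiffusive exponent quoted above can be extracted from tracked positions with a few lines of analysis. The sketch below (array shapes and the fitting range are illustrative assumptions) computes $\Delta(t, t_i)$ averaged over cells and fits $\alpha$ on a log-log scale:

```python
import numpy as np

def msd(traj, t_i=0):
    """Mean-square displacement Delta(t, t_i), averaged over cells.
    traj: array of shape (n_frames, n_cells, 2) of positions."""
    ref = traj[t_i]                   # positions at the initial time t_i
    disp = traj[t_i:] - ref           # displacements at all later frames
    return (disp ** 2).sum(axis=2).mean(axis=1)

def fit_exponent(delta, dt=1.0):
    """Power-law exponent alpha from a log-log fit of Delta(t) ~ t^alpha."""
    t = dt * np.arange(1, len(delta))  # skip t = 0 (log undefined)
    alpha, _ = np.polyfit(np.log(t), np.log(delta[1:]), 1)
    return alpha
```

For purely ballistic test trajectories the fit returns $\alpha = 2$, and for an uncorrelated random walk it approaches $\alpha = 1$, bracketing the measured $\alpha \approx 1.5$.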
The force produced by the division of cell $1$ at $t = 110$ min pushes the two neighboring cells (cells 2 and 4) away from each other but brings cells $1$ and $3$ together at later times. Such rearrangement of cells $1-4$ and other cells nearby finally leads to vortex formation (see the velocity vector field in Figs.~\ref{msddis}e-g and also Figs.~\ref{figmodel}g-l). In addition to a single vortex induced by one cell division (Figs.~1g-l), there is a pair of vortices, with opposite sense of rotation, in certain time frames (see the dashed rectangle in Fig.~\ref{clock}a). In the same region, there are three cell divisions, which lead to dramatic cell rearrangements nearby, 10 minutes before the vortex pair forms (the purple dots in the dashed rectangle). To describe the formation of one vortex pair quantitatively, we plot the vorticity value along the arrow in the dashed rectangle in Fig.~\ref{clock}a using digital particle image velocimetry\cite{thielicke2014pivlab}. A smooth transition from anticlockwise to clockwise vorticity is clearly depicted (see Fig.~\ref{clock}b). Similarly, several pairs of vortices with opposite directions of rotation are found in simulations (see the lower middle of Fig.~\ref{figmodel}e). In addition, there is evidence for a T2 transition (Fig.~S8 in the SI), which arises from the extrusion (apoptosis) of a cell from the confluent monolayer. Taken together, these results suggest that cell division and apoptosis not only regulate cell numbers during morphogenesis and cancer development, but also drive dramatic cell rearrangements, as discovered recently for chick gastrulation\cite{firmino2016cell} and germband extension of Drosophila\cite{da2007oriented}. Finally, the cell rearrangement through topological T1/T2 transitions leads to the formation of vortices and turbulent-like structure in the confluent MDCK tissue. 
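A T1 swap of the kind described for cells $1-4$ can be flagged automatically from tracked nuclei positions by comparing neighbor lists between frames. The sketch below uses a simple distance-cutoff adjacency as a stand-in for the Voronoi/Delaunay neighbor definition used in the analysis; the positions and cutoff are hypothetical:

```python
import numpy as np

def neighbor_pairs(pos, cutoff):
    """Set of index pairs (i, j), i < j, closer than `cutoff`."""
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=2)
    i, j = np.where((d < cutoff) & (d > 0))
    return {(a, b) for a, b in zip(i, j) if a < b}

def t1_edges(pos_before, pos_after, cutoff):
    """Neighbor edges lost and gained between two frames; a lost edge
    with a gained edge among the same four cells signals a T1 swap."""
    before = neighbor_pairs(pos_before, cutoff)
    after = neighbor_pairs(pos_after, cutoff)
    return before - after, after - before
```

For a four-cell diamond in which a division pushes cells 1 and 3 apart while bringing cells 0 and 2 into contact, the function reports the lost edge (1, 3) and the gained edge (0, 2), the signature of a T1 swap.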
\textbf{Spatial correlations:} To probe the spatial variations in the turbulent-like flow, we calculated the spatial correlation function of the velocity field, $C_{v}(R)$. From the cell position, $\textbf{r}_{i}(t)$, of the $i^{th}$ cell at different times, we calculated the velocity, $\textbf{v}_{i}(t) \equiv [\textbf{r}_{i}(t+\delta t/2)-\textbf{r}_{i}(t-\delta t/2)]/{\delta t}$, at time $t$, where $\delta t$ is the time interval. The spatial velocity correlation function ($C_{v}(R)$) is defined as, \begin{equation} C_{v}(R) = \left\langle \frac{\sum_{i}(\textbf{v}(\textbf{r}_{i})-\langle \textbf{v}(\textbf{r}_{i})\rangle)\cdot(\textbf{v}(\textbf{r}_{i}+\textbf{R})-\langle \textbf{v}(\textbf{r}_{i})\rangle)}{\sum_{i}(\textbf{v}(\textbf{r}_{i})-\langle \textbf{v}(\textbf{r}_{i})\rangle) \cdot (\textbf{v}(\textbf{r}_{i})-\langle \textbf{v}(\textbf{r}_{i})\rangle)} \right\rangle . \label{cdr} \end{equation} The correlation function at different times (see Fig.~\ref{cvt}a) has a negative minimum at a distance $R_{v} \approx 100~\mu m$, which is about $10$ times greater than the mean cell size (see Fig.~S3). The negative values of the correlation function are due to the anti-parallel velocities on opposite sides of the vortices (see Fig.~\ref{figmodel}b). Therefore, $R_{v}$ is roughly the size of the vortices (Fig.~\ref{figmodel}), which are about 100 $\mu m$. The $C_{v}(R)$ calculated from our simulations supports such a link (see Fig.~\ref{cvt}b). Interestingly, the collective motion of cells is quite robust, persisting for a long time, despite the increase in the cell density (Fig.~S4a). 
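The spatial correlation function defined above can be evaluated directly from tracked positions and velocities by binning pairwise dot products of the velocity fluctuations by separation. A schematic implementation (the binning choices are illustrative assumptions):

```python
import numpy as np

def spatial_velocity_correlation(pos, vel, r_bins):
    """Equal-time spatial correlation C_v(R) of velocity fluctuations:
    pairwise dot products binned by separation, normalized by the
    mean-square fluctuation. pos, vel: arrays of shape (n_cells, 2)."""
    dv = vel - vel.mean(axis=0)              # velocity fluctuations
    sep = np.linalg.norm(pos[:, None] - pos[None, :], axis=2)
    dots = dv @ dv.T                         # dv_i . dv_j for all pairs
    norm = np.mean(np.sum(dv ** 2, axis=1))  # mean-square fluctuation
    cv = np.empty(len(r_bins) - 1)
    for k in range(len(r_bins) - 1):
        mask = (sep >= r_bins[k]) & (sep < r_bins[k + 1])
        cv[k] = dots[mask].mean() / norm if mask.any() else np.nan
    return cv
```

By construction $C_v(R \rightarrow 0) = 1$, and the separation at which $C_v(R)$ reaches its negative minimum gives the estimate of the vortex size $R_v$.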
\textbf{Long time tails (LTT):} We calculated the velocity correlation function ($C^{i}_{v}(t)$) for cell $i$, \begin{equation} C^{i}_{v}(t) = \langle \frac{[\textbf{v}_{i}(t_{0}+t)- \langle\textbf{v}_{i} \rangle]\cdot [\textbf{v}_{i}(t_{0})- \langle\textbf{v}_{i} \rangle]}{[\textbf{v}_{i}(t_{0})- \langle\textbf{v}_{i} \rangle]\cdot [\textbf{v}_{i}(t_{0}) -\langle\textbf{v}_{i} \rangle]} \rangle , \end{equation} where $\langle\textbf{v}_{i} \rangle$ is the mean velocity of cell $i$, and the average is over time $t_{0}$ (24 hours). The ensemble average $C_{v}(t)$ for the monolayer is obtained by averaging $C^{i}_{v}(t)$ over all the cells. The decay of $C_{v}(t)$ follows $C_v(t) \sim t^{-\beta}$ with $\beta = 0.4 \pm 0.1$ (see the blue dash-dotted line in Fig.~\ref{cvt}c). Our simulations also show a power-law decay with $\beta = 0.59 \pm 0.03$ (see Fig.~\ref{cvt}d). Based on a dimensional argument, we expect that $C_{v}(t) \sim \Delta(t)/t^2 \sim t^{-0.5}$, which is fairly close to the experimental and simulation results. Using a theory that accounts for SGAFs, we obtain the exponent $\beta = 1/2$ in 2D (see the Methods section). A comparison of three different fit functions to the experimental and simulation data is shown in Fig.~S9 in the SI. The emergence of LTT in $C_{v}(t)$ shows that the motion of cells is persistent in a specific direction (Figs.~\ref{msddis}a-d), which in turn is linked to vortex motion. To ascertain whether the power-law relation depends on the time interval selected to calculate $C_{v}(t)$, we varied $\delta t$. The value of $\beta$ stays around 1/2 (see Fig.~S10 in the SI). A long time tail is also found for the current-current correlation functions in disordered systems, which decay as $t^{-d/2}$ ($d$ is the dimension of space)\cite{belitz1994anderson}. 
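The normalized autocorrelation defined above can be computed per cell and then averaged over the monolayer. A sketch for a single trajectory (uniform frame spacing is an assumption; the time-origin average runs over all admissible $t_0$):

```python
import numpy as np

def velocity_autocorrelation(v):
    """Normalized single-cell velocity autocorrelation C_v^i(t),
    averaged over time origins t0. v: array of shape (n_frames, 2)."""
    dv = v - v.mean(axis=0)        # subtract the mean velocity of the cell
    n = len(dv)
    c = np.empty(n)
    for t in range(n):
        # average of dv(t0 + t) . dv(t0) over all valid time origins t0
        c[t] = np.mean(np.sum(dv[:n - t] * dv[t:], axis=1))
    return c / c[0]
```

The exponent $\beta$ is then obtained from a log-log fit of the cell-averaged $C_v(t)$, in the same way as for $\Delta(t)$.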
Such a relation was first reported in simulations of classical hard-disc (sphere)-model fluids\cite{alder1970decay}, and was explained theoretically by correlated collision events\cite{ernst1970asymptotic,dorfman1970velocity}. The difference in the value of $\beta$ between abiotic systems and the MDCK monolayer is due to the generation of SGAFs by cell division, which results in persistent (almost ballistic) directional motion of cells for long times. \textbf{Energy spectrum:} It is clear that turbulent-like flow emerges naturally in the collective motion of cells (Figs.~\ref{figmodel}a-c). To characterize the nature of the turbulent-like motion in the epithelial cells, we calculated the wave vector ($k$) dependent energy spectrum, $E(k)$, at different times (see Fig.~\ref{cvt}e). There are two scaling regimes in $E(k)$ as a function of $k$. At intermediate values of $k$, $E(k)\sim k^{-1.58(0.2)}$. The value of the exponent is close to the Kolmogorov-Kraichnan prediction, $-5/3$, found in inertial turbulence\cite{kolmogorov1941local}. We found a similar exponent value ($-1.4\pm0.2$) from our simulations (see Fig.~\ref{cvt}f). In the MDCK cells, the Reynolds number is small ($\approx 0$)\cite{marchetti2013hydrodynamics}, which shows that the underlying mechanism for the turbulent-like motion must be quite different\cite{alert2022active}. At smaller scales (large $k$), the results (Fig.~4e) obtained by analyzing the experimental trajectories show that $E(k)\sim k^{-\lambda}$ ($\lambda = 3.5 \pm 0.2$). From the agent-based simulations, we deduce that $E(k)\sim k^{-3.7(0.2)}$. Two comments are worth making: $(i)$ The $\lambda$ values from experiments and simulations are fairly close. It is interesting that scaling anticipated in the context of inertial turbulence is obeyed in living systems, given that energy injection, which is a consequence of cell division and apoptosis, is autonomous. 
$(ii)$ The prediction for inertial 2D turbulence at large $k$ is $\lambda \approx 3$, which differs from the estimated value for MDCK cells and simulations. It is unclear if the large-$k$ finding in living tissues is simply a consequence of non-universal behavior or if the large-$k$ limit is not accessed in experiments and simulations. In active systems, such as microtubule-kinesin complexes and bacterial suspensions, $\lambda$ ranges from 2 to 4.5\cite{wensink2012meso,lin2021energetics,guillamat2017taming,alert2022active}. \section{Discussion:} We investigated the dynamics of cell rearrangement and collective motion in a biologically relevant tissue \cite{friedl2009collective,rorth2009collective,poujade2007collective} by analyzing data from confluent MDCK cell monolayers, combined with agent-based simulations and theory. Cell division and apoptosis lead to a directed motion of cells for hours, which results in a universal super-diffusive behavior of cells characterized by the mean-square displacement exponent, $\alpha \approx 1.51$, irrespective of the initial time $t_i$ considered in the experiments\cite{giavazzi2018tracking}. Our agent-based simulations yield $\alpha = 1.44\pm 0.1$. It is worth emphasizing that, in the absence of cell division, there is complete cessation of motion in the simulations. Therefore, the complex dynamics in the simulations, which capture the salient features of the MDCK cells, arise solely from self-generated active forces produced by cell division and apoptosis\cite{malmi2018cell,sinha2020spatially,sinha2020self}. The directed motion of newly generated cells induces rapid rearrangements of cells nearby through T1 (cell intercalation) and T2 transitions. Interestingly, cell rearrangements due to a single cell division result in vortex formation in confluent tissues, with lifetimes exceeding 30 minutes. 
More importantly, turbulent-like flow patterns emerge in regions where multiple divisions occur, a finding also supported by our agent-based simulations. From the spatial velocity correlation function, we estimate that the mean size of the vortices can be as large as 100 $\mu m$, which is nearly 10 times larger than the size of a single cell ($\sim 10~\mu m$). Finally, the kinetic energy spectrum ($E(k)$) of the collective cell movement exhibits two distinct scaling regimes. At intermediate values of $k$, $E(k) \sim k^{-1.58\pm 0.2}$, which is close to the predicted Kolmogorov-Kraichnan behavior for inertial turbulence. At large $k$, $E(k) \sim k^{-3.5}$, which does not seem to have a counterpart in inertial turbulence and is likely to be non-universal. The exponent characterizing the large-$k$ behavior of $E(k)$ shows substantial variation, depending on the system\cite{alert2022active,blanch2018turbulent,mueller2019emergence}. In classical hard-disc-model fluids\cite{alder1970decay,ernst1970asymptotic,dorfman1970velocity}, a long time tail is expected for the velocity autocorrelation function with $C_{v}(t) \propto \frac{1}{t \sqrt{\ln t}}$. Both experiments and simulations show a slower power-law decay, $C_{v}(t)\propto t^{-1/2}$, in the MDCK cell monolayer. To explain this finding, we developed a theory that accounts for active forces, leading to the prediction of the decay $C_{v}(t) \propto t^{-1/2}$. The theory and the agent-based simulations indicate that self-generated active forces lead to highly correlated responses. Given the simple model considered here, we expect that our studies can be used to describe the collective motion of other cell types or mixtures of different cell types, and even other active systems in general. 
In addition, we showed that the amplitude of the mean-square displacement (Fig.~2c) and the mean velocity of cells (Fig.~S4b) decrease as the initial time $t_i$ increases, which indicates a slowing down due to jamming or aging of the cell dynamics. Similar aging behavior has been noted in other types of cell monolayers\cite{garcia2015physics}. In addition, all the dynamical characteristics of the cells (superdiffusion, compressed exponential relaxation, and aging) are found in many soft glassy systems\cite{bouzid2017elastically,gnan2019microscopic,galloway2020scaling} in the absence of active forces. Thus, the physics used to describe abiotic glassy materials could be applied to understand living active matter\cite{kirkpatrick2015colloquium,tjhung2020analogies}. Finally, more detailed models incorporating size-reductive divisions\cite{puliafito2012collective}, maturation and strengthening of cell-cell adhesion\cite{garcia2015physics}, or other processes could be built on the present study to explain aging and other complex cell activities. \section{Methods:} \textbf{Summary of MDCK epithelial cell experiments\cite{giavazzi2018tracking}:} The MDCK (Madin-Darby Canine Kidney) epithelial cells were maintained in Dulbecco's Modified Eagle Medium, supplemented with $5\%$ fetal bovine serum and $1\%$ L-Glutamine. First, they were seeded in a 6-well plate and grown in complete medium until a confluent cell monolayer formed. Then, images of the cell monolayer were acquired with an Olympus IX81 inverted microscope (both in phase contrast and wide-field fluorescence) every minute for a 24-hour period. Phase-contrast microscopy was used to visualize the cell membrane and different organelles (see Fig.~S1a). 
The nuclei of the cells at each time frame were also captured by wide-field fluorescence microscopy of EGFP-H2B expressing cells (see the cell nuclei shown in Fig.~S1b for an example), and their positions were recorded by single particle tracking using ImageJ\cite{rueden2017imagej2}. A number of cell division events were observed in the confluent monolayer (see Fig.~S1c), while the cell apoptosis/extrusion process led to the removal of cells from the monolayer. From the cell nuclei positions at each time frame, the cell trajectories can be obtained (Fig.~S1d shows the first 150 minutes of recording for one of the fields of view (FOV)). The cell trajectories in the figure exhibit highly heterogeneous behavior. Some cells move in rather straight and regular trajectories, while others exhibit irregular curly motion. The diverse movement is even clearer if the cell trajectories are plotted over the whole 24-hour period (see the two examples shown in the insets of Fig.~S1d). The dynamically heterogeneous behavior is reminiscent of supercooled liquids, as was first shown by Angelini et al.\cite{angelini2011glass} in the context of cell monolayers. \textbf{Particle-based simulations:} We used a two-dimensional version of a particle-based model \cite{malmi2018cell,sinha2020spatially} to simulate cell dynamics. We mainly focused on investigating the roles of cell division and apoptosis in collective cell motion, in order to provide microscopic insights into the experiments\cite{giavazzi2018tracking}. A single cell is modeled as a two-dimensional deformable disk. We use periodic boundary conditions to mimic large systems. The cells interact with each other through repulsive (Hertzian contact mechanics) and attractive interactions (cell-cell adhesion). The motion of cells is described by an overdamped Langevin equation. The cells were allowed to grow and divide, as in the experiment. 
We also removed cells from the monolayer at a constant rate to model the cell apoptosis and extrusion events that were observed in experiments. Because we do not consider self-propulsion, cell motility can only arise from the self-generated active forces due to cell growth, division, and apoptosis\cite{sinha2020self}. The SI contains details of the model. \textbf{Theory of LTT driven by cell division:} The turbulent-like motion in the MDCK cells, in which active forces are generated by cell division, suggests that we use methods developed to treat turbulence in conventional fluids \cite{forster1977large,dedominicis1979energy,yakhot1986renormalization,smith1992yakhot}. The fundamental difference is that the random stirring that causes turbulence in the conventional case is replaced by the internally generated cell division process in the biological case. Our aim is to compute the turbulent renormalized transport coefficients in the biological case, and from that determine the diffusive velocity autocorrelation function and the mean-squared displacement. The relevant spatial dimension is $d=2$. The model dynamical equations are, \begin{equation} \partial_tu_i+u_j\nabla_ju_i=-\nabla_ip+\nu_0\nabla^2u_i+f_i \label{veloeq} \end{equation} \begin{equation} \partial_tc+u_j\nabla_jc=D_0\nabla^2c+g \end{equation} \begin{equation} \nabla\cdot\mathbf{u}=0 \label{ueq} \end{equation} Here, $u_i$ is the fluid velocity in the $i$-direction (repeated indices are summed over), $p$ is the pressure, $\nu_0$ is the bare kinematic viscosity, $c$ is the density of the active matter (MDCK cells), and $D_0$ is a bare diffusion coefficient. The equations given above couple vorticity in the fluid motion to the cell density. We have also imposed the incompressibility condition in Eq.~(\ref{ueq}). 
The statistical forces $f_i$ and $g$ are Gaussian noise terms with zero mean and nonzero two-point correlation functions that are local in space and time, with gradient-independent correlation strengths given by $\Delta_f$ and $\Delta_g$, respectively. We treat the nonlinearities in these equations as a perturbation and work in Fourier space in both position and time, $(\mathbf{k},\omega)$. Further, the equations for both $u_i$ and $c$ are diffusive \footnote{The pressure term in Eq.~(\ref{veloeq}) is only used to enforce the incompressibility condition given by Eq.~(\ref{ueq}).}, and because $f_i$ and $g$ have identical statistical properties, we focus only on the mathematical structures and scaling solutions. We denote the bare transport coefficient by $D_0$ and its renormalized value by $D$. Structurally, the one-loop contribution to $D$, denoted by $\delta D$, is given by, \begin{equation} \begin{split} &\delta D(\mathbf{k},\omega)\propto\Delta\int_{\mathbf{q},\omega_1}\frac{1}{[-i\omega-i\omega_1+D_0(\mathbf{k}+\mathbf{q})^2][\omega_1^2+D_0^2q^4]} \\ &\propto\Delta\int_{q>(\frac{\omega}{D_0})^{1/2}, \omega_1}\frac{1}{[-i\omega_1+D_0q^2][\omega_1^2+D_0^2q^4]} , \end{split} \label{deltaD} \end{equation} where we have assumed that $\Delta\propto\Delta_f\propto\Delta_g$. In going from the first to the second line in Eq.~(\ref{deltaD}), we have replaced the external frequency and wavenumber dependence in the integrand by a frequency cutoff. For $\mathbf{k}=0$, this can be justified in a scaling sense. Carrying out the frequency and wave number integrals in $d=2$ gives, \begin{equation} \delta D(\omega\rightarrow 0)\propto \frac{\Delta}{D_0\omega} \label{deltaD2} \end{equation} Since $\delta D$ diverges as $\omega\rightarrow 0$, we use $\delta D\approx D$ and self-consistently replace $D_0$ in Eq.~(\ref{deltaD2}) by $D$. We conclude that in $d=2$, \begin{equation} D(\omega\rightarrow 0)\propto\frac{1}{\omega^{1/2}}. 
\end{equation} This result in turn implies that the velocity autocorrelation function behaves as $C(t\rightarrow\infty)\propto 1/t^{1/2}$ and that the mean-squared displacement grows as $t^{3/2}$. These results are consistent with our numerical work as well as with experiments. In a similar manner, one obtains $D(\omega\rightarrow 0)\propto\frac{1}{\omega^{1/5}}$ in $d=3$, which implies a velocity auto-correlation function $\propto 1/t^{4/5}$. \bigskip \section*{} \noindent \textbf{Acknowledgements} \noindent We are grateful to Giorgio Scita et al. for providing us with the original experimental data. We thank Davin Jeong for help in producing figures. This work is supported by the National Science Foundation (PHY 17-08128), and the Collie-Welch Chair through the Welch Foundation (F-0019). \noindent \textbf{Competing financial interests} \noindent The authors declare no competing financial interests. \bibliographystyle{naturemag} \section*{} In the Supplementary Information (SI), we provide a summary of the experimental results, several additional findings, a description of the simulations using a particle-based model, and related discussions that are pertinent to the results presented in the main text. \textbf{Microscopy Experiments:} Because the present study uses results from the imaging experiments\cite{Giavazzi2018}, we describe them in some detail. Two types of microscopy, phase-contrast and wide-field fluorescence, were used in the MDCK cell experiments. A confluent monolayer, with cells distributed evenly over the whole space, was imaged using phase-contrast microscopy, allowing the capture of the cell membrane (see Fig.~\ref{exfigure}a and movie 2 in the supplementary materials of the original article\cite{Giavazzi2018}). In addition, Enhanced Green Fluorescent Protein (EGFP)-tagged H2B histones were constitutively expressed by retroviral infection using standard protocols\cite{Giavazzi2018}. 
The images of the nuclei of all the cells, at each time frame, were captured by wide-field fluorescence microscopy (see Fig.~\ref{exfigure}b and also movie 1 in the supplementary materials of the experiments\cite{Giavazzi2018}). Cell division occurred frequently (see Fig.~\ref{exfigure}c). The left panel of Fig.~\ref{exfigure}c shows an example of a cell division event imaged using phase-contrast microscopy, while the right panel illustrates the same event using wide-field fluorescence microscopy. The protocol in the experiments \cite{Giavazzi2018} was designed to avoid any directional bias in the cell motility patterns, in order to reduce the inhomogeneity and anisotropy of the cell density and shape in the monolayer. Using the MosaicSuite plugin of ImageJ\cite{Rueden2017,Schindelin12} for single particle tracking over the fluorescence images of the cell nuclei (one-minute resolution over 24 hours), the positions of all the cells were recorded in consecutive frames. Finally, the trajectories of all the cells in one field-of-view (FOV) could be reconstructed (see Fig.~\ref{exfigure}d). The data from the same FOV, as in Fig.~\ref{exfigure}d, was used for the analyses presented in the main text and the SI. \textbf{Temporal evolution of a cell monolayer:} First, we explore some general features of the temporal evolution of the MDCK monolayer using the experimentally generated trajectories\cite{Giavazzi2018}. Although the MDCK monolayer is confluent initially, it does not stay in a stationary state but evolves with time, exhibiting rich dynamics driven by cell division and apoptosis. {\it Cell divisions and apoptosis:} The number of cell divisions (see Fig.~\ref{exfigure}c) between two frames (one-minute interval) is quantified in Fig.~\ref{newcell}a (see the green circles). Cell apoptosis/extrusion events are also shown in the same figure by yellow stars. These two numbers fluctuate between 0 and 12. 
Therefore, the confluent cell monolayer is dynamic, undergoing continuous cell division and apoptosis, which generate active forces that lead to collective cell motion\cite{Sinha20,Doostmohammadi2015,Rossen2014}. The mean number of new cells per frame (5.14) is slightly higher than that of dead cells (5.05). Hence, the number of cells in the FOV increases with time, as shown in Fig.~\ref{newcell}b. A power-law relation describes the cell number as a function of time (green solid line; the expression is given in the figure caption). {\it Cell size distributions:} Due to cell growth, division, and apoptosis, the cell size is not uniformly distributed. In order to calculate the distribution of cell sizes, we used a Voronoi tessellation of the positions of all the cell nuclei in the FOV (Fig.~\ref{exfigure}d). The Voronoi cells at two different time frames are shown in Figs.~\ref{voronoicell}a-b, and the area of each cell is color coded as indicated in the bar on the right side of the figures. There is a large variation (from 100 to 400 $\mu m^2$) in the cell area. The cell area distributions, $P(A)$, at early (sampled over 20 images from $t$ = 180 to 199 minutes) and later (1350-1369 minutes) times are illustrated in Fig.~\ref{voronoicell}c (without considering the boundary Voronoi cells). A Gaussian distribution (see the solid lines and the functional forms shown in Fig.~\ref{voronoicell}c) describes $P(A)$ reasonably well. The mean value of the cell area decreases from 231 $\mu m^2$ at early times to 187 $\mu m^2$ at later times during the 24-hour period in the experiments. This is an indication of cell jamming \cite{Atia2018,Bi2016}. {\it Cell density and velocity:} As the number of cells increases (see Fig.~\ref{newcell}b) in the same FOV, so does the cell density (defined as the ratio of the cell number to the area of the FOV; see Fig.~\ref{velocityvstime}a). 
A similar power-law relation is found for the cell density and the cell number (see the solid lines and functional forms shown in Fig.~\ref{newcell}b and Fig.~\ref{velocityvstime}a). The mean velocity of cells (defined in the main text above Eq.~(1), with $\delta t = 10$ min) decreases with time (see the green line in Fig.~\ref{velocityvstime}b). Therefore, the overall motility of cells is inhibited as the cell density increases, even though the frequencies of active cell division and apoptosis are nearly constant (see Fig.~\ref{newcell}a). {\it Distribution of cell coordination number:} The number of neighbors (coordination number, $Z$) per cell in the MDCK monolayer varies from cell to cell (see Figs.~\ref{voronoicell}a-b) because of the generation of active forces. From the Voronoi tessellation (Fig.~\ref{voronoicell}), we calculated the cell coordination number distribution, $P(Z)$, for the MDCK cell monolayer (see the histograms in blue in Fig.~\ref{numofneighboringcells}). Each histogram is calculated using ten time frames from the experiments\cite{Giavazzi2018}. The value of $Z$ varies from 4 to 9, and its distribution is well described by a Gaussian with a mean value around 6 (see the solid lines), which remains robust during the 24-hour experiment (see Figs.~\ref{numofneighboringcells}a-b). The distribution, $P(Z)$, obtained from our simulations agrees well with the experimental results (see the histograms and also the solid lines in Fig.~\ref{numofneighboringcells}). Therefore, a predominantly hexagonal pattern can be driven purely by cell division and apoptosis, which leads to an emergent phenomenon of geometric order, similar to that found in proliferating metazoan epithelia\cite{gibson2006emergence}. \textbf{Displacement distribution of cells:} We calculated the displacement distributions ($P(\Delta x)$, $P(\Delta y)$) of cells using the trajectories from the MDCK cell experiments and our simulations (see Fig.~\ref{displacemsd}). 
Dramatic deviations from the Gaussian distribution (see the green solid lines) are found in both experiments and simulations. \textbf{T2 transition induced by cell apoptosis:} The T1 transition induced by cell divisions is illustrated in Figs.~2e-g in the main text. We also found evidence for another type of topological change, the T2 transition, caused by cell apoptosis/extrusion from the confluent monolayer (see Fig.~\ref{T2transitionfinal}). Through a Voronoi tessellation, using the cell nuclei position data, we found that cell $0$ (the one shown in red) is extruded from the cell monolayer at a later time (see Fig.~\ref{T2transitionfinal}b), leading to the formation of a new neighborhood consisting of cells $1-6$. For completeness, we point out that T1 and T2 transitions are well known in the dynamics of foams \cite{Weaire1984}, where the connection to cells was also explained. \textbf{Velocity autocorrelation at different $\delta t$:} We showed in the main text that the velocity autocorrelation function exhibits long time tail (LTT) behavior (see Fig.~4c). To ascertain whether the LTT depends on the time interval, $\delta t$, we calculated the velocity autocorrelation of the MDCK cells \cite{Giavazzi2018} at several values of $\delta t$ (2, 4, and 10 minutes). The result is robust (see the different symbols and the dash-dotted lines in Fig.~\ref{cvtm}). \textbf{Energy spectrum:} The kinetic energy $E$ is given by, \begin{equation} E = \frac{1}{2} \int |\textbf{v}(\textbf{r})|^2 d\textbf{r} = \int E(\textbf{k}) d\textbf{k}, \end{equation} where $\textbf{v}(\textbf{r})$ is the cell velocity at position $\textbf{r}$, and $k = |\textbf{k}|$ is the wave number. To calculate the energy spectrum $E(k)$ at a given time, we first use digital particle image velocimetry\cite{Thielicke2014} to obtain the velocity field $\textbf{v}(\textbf{r})$ (using two images with $\delta t = 10$ minutes) at regular grid points. 
Then, we use the fast Fourier transform (FFT) to convert the velocity field $\textbf{v}(\textbf{r})$ to $\tilde{\textbf{v}}(\textbf{k})$. Finally, the energy spectrum $E(k)$ is calculated using, \begin{equation} E(k) = \frac{kA_0}{4\pi}\sum \tilde{\textbf{v}}^*(\textbf{k}) \cdot \tilde{\textbf{v}}(\textbf{k}) , \end{equation} where $\tilde{\textbf{v}}^*(\textbf{k})$ is the complex conjugate of $\tilde{\textbf{v}}(\textbf{k})$ and $A_0$ is the area of the FOV. Two examples of $E(k)$ are illustrated here: one is Fig.~4e in the main text and the other is shown in Fig.~\ref{EK500}. In each figure, we calculated $E(k)$ at six different time frames. The dependence of $E(k)$ on $k$ shows that there are two scaling regimes (see the solid lines and the values of the slopes). The two solid lines give the best power-law fits over the data (all six time frames) for the different regimes of the wavevector $k$. \textbf{Agent-based simulations:} We use a two-dimensional (2D) version of the three-dimensional (3D) agent-based model\cite{Malmi2018,Sinha2020}. In 2D, the cells are discs. We give only a sketch of the model; additional details can be found elsewhere\cite{Malmi2018}. We use a deformable disc to represent each cell. The cell area can increase at a constant rate ($r_{A}$) as long as the pressure, $p$, that it experiences due to contact with the neighboring cells is less than a preassigned critical value $p_{c}$\cite{Malmi2018}. If $p$ exceeds $p_c$, the cell enters a dormant state and stops growing until the pressure drops below $p_{c}$, which could occur as the system evolves. The use of $p_{c}$ accounts for the inhibition of cell proliferation by a mechanical feedback mechanism\cite{Shraiman2005,Montel2011,Quail2013,Jain2014,Delarue2016}. After reaching the fixed mitotic radius $R_{m}$, with $p<p_{c}$, a cell divides into two cells with identical radius, $R_d = R_{m}2^{-1/2}$ ($\pi R_m^2 = 2 \pi R_d^2$). 
The new cells are placed randomly in the space that the mother cell occupied before division, at a center-to-center distance $d = 2R_{m}(1-2^{-1/2})$. The cell area grows at a rate $r_{A} = \pi R_m^2/(2\tau_{min})$, where $\tau_{min}$ is the cell cycle time. The cell radius ($R$) is updated by sampling from a Gaussian distribution with mean growth rate $\dot R = r_{A}/(2\pi R)$. The elastic force $F_{ij}^{el}$ between two cells of radii $R_{i}$ and $R_{j}$ is modeled by Hertzian contact mechanics~\cite{schaller2005multicellular, pathmanathan2009computational, drasdo2005single}, \begin{equation} \label{rep} F_{ij}^{el} = \frac{h_{ij}^{3/2}(t)}{\frac{3}{4}(\frac{1-\nu_{i}^2}{E_i} + \frac{1-\nu_{j}^2}{E_j})\sqrt{\frac{1}{R_{i}(t)}+ \frac{1}{R_{j}(t)}}}, \end{equation} where $E_{i}$ and $\nu_{i}$ are the elastic modulus and Poisson ratio of cell $i$, respectively. The overlap $h_{ij}$ between the two cells is defined as $\mathrm{max}[0, R_i + R_j - |\vec{r}_i - \vec{r}_j|]$, with $|\vec{r}_i - \vec{r}_j| \equiv r_{ij}$ being the center-to-center distance of the cells. In the simulations, we take $E_i$ and $\nu_{i}$ to be independent of $i$. The cell-cell adhesive interaction, $F_{ij}^{ad}$, is mediated by receptor and ligand proteins on the cell membrane and is calculated using\cite{schaller2005multicellular}, \begin{equation} \label{ad} F_{ij}^{ad} = L_{ij}f^{ad}\frac{1}{2}(c_{i}^{rec}c_{j}^{lig} + c_{j}^{rec}c_{i}^{lig}), \end{equation} where $c_{i}^{rec}$ ($c_{i}^{lig}$) is the receptor (ligand) concentration, which is considered to be constant here, and $f^{ad}$ is a coupling constant used to rescale the adhesion force. The length of the contact line, $L_{ij}$, between two cells is given by $L_{ij} = \sqrt{|4r_{ij}^2 R_{i}^2-(r_{ij}^2-R_{j}^2+R_{i}^2)^2 |}/{r_{ij}}$. The total force on the $i^{th}$ cell is the sum over its nearest neighbors ($NN(i)$), \begin{equation} \vec{F}_{i} = \Sigma_{j \epsilon NN(i)}(F_{ij}^{el}-F_{ij}^{ad})\vec{n}_{ij}, 
\end{equation} where $\vec{n}_{ij}$ is the unit vector pointing from the center of cell $j$ to the center of cell $i$. The spatial dynamics of the cells are calculated by integrating the equation of motion for a cell of mass $m_{i}$, \begin{equation} \label{eom} m_{i}\ddot{\vec{r}}_{i} = \vec{F}_{i}(t) - \gamma_{i} \dot{\vec{r}}_{i}(t), \end{equation} where $\gamma_{i}$ is the modified mobility coefficient of the cells, with $\gamma_{i}= \gamma_{i}^{visc} + \gamma_{i}^{ad}$. The first term is the cell-to-extracellular-matrix (ECM) friction, $\gamma_{i}^{visc}=\eta r_{i}$, with $\eta$ the mobility constant, and the second term describes the cell-to-cell friction, with \begin{eqnarray} \gamma_{i}^{ad} =&& \gamma^{max}\Sigma_{j \epsilon NN(i)} [L_{ij}\frac{1}{2}(1+\frac{\vec{F}_{i} \cdot \vec{n}_{ij}}{|\vec{F}_{i}|})\times \\ \nonumber &&\frac{1}{2}(c_{i}^{rec}c_{j}^{lig} + c_{j}^{rec}c_{i}^{lig})] \, . \end{eqnarray} Because the Reynolds number for cells in a tissue is low~\cite{dallon2004cellular}, the inertial term in Eq.~(\ref{eom}) can be neglected~\cite{schaller2005multicellular}. Therefore, the equation of motion becomes, \begin{equation} \label{eqforce} \dot{\vec{r}}_{i} = \frac{\vec{F}_{i}}{\gamma_i}. \end{equation} We calculated the pressure $p$ as in previous studies (Eq.~(8) in Ref.\cite{Malmi2018}), except that the contact line length, $L_{ij}$, is used instead of the contact area, $A_{ij}$, between two cells. We employed periodic boundary conditions instead of the free boundary used for a growing tumor spheroid\cite{Malmi2018,Sinha2020}. We start from 300 cells randomly distributed in a box of size $340~\mu m \times 340~\mu m$, similar to the FOV in the experiments\cite{Giavazzi2018} (see Fig.~1 in the main text). The parameters of the agent-based model used in the simulations are given in Table~\ref{tableparameter}. Note that the ratio of the rate of apoptosis to that of cell division ($0.54$) is comparable to our estimate from the experimental data. 
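A minimal sketch of one overdamped update of the model above is given below: it combines the Hertzian repulsion, the adhesion term, and a forward-Euler step of $\dot{\vec r}_{i} = \vec F_{i}/\gamma_i$. For brevity the cell-to-cell friction contribution $\gamma^{ad}$, growth, division, and apoptosis are omitted, the parameter values are only of the same order as those in the simulations, and all helper names are illustrative, not the authors' code.

```python
# Sketch: one overdamped Euler step for the 2D deformable-disc model.
# Only elastic repulsion + adhesion and ECM friction are included.
import numpy as np

E_MOD, NU = 1e-3, 0.5   # elastic modulus (MPa) and Poisson ratio (illustrative)
F_AD = 1e-4             # adhesive coupling f^ad (uN/um)
GAMMA = 0.1             # friction coefficient (kg / (um s)), ECM part only
DT = 10.0               # time step (s)

def hertz_force(rij, Ri, Rj):
    """Magnitude of the Hertzian repulsion for overlap h = max(0, Ri+Rj-rij)."""
    h = max(0.0, Ri + Rj - rij)
    denom = 0.75 * 2 * (1 - NU**2) / E_MOD * np.sqrt(1.0 / Ri + 1.0 / Rj)
    return h**1.5 / denom

def contact_length(rij, Ri, Rj):
    """Length of the contact line L_ij between two overlapping discs."""
    return np.sqrt(abs(4 * rij**2 * Ri**2 - (rij**2 - Rj**2 + Ri**2) ** 2)) / rij

def step(pos, rad):
    """Forward-Euler update of gamma * dr/dt = F over all cell pairs (naive O(n^2))."""
    forces = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = pos[i] - pos[j]
            rij = np.linalg.norm(d)
            if rij >= rad[i] + rad[j]:
                continue                      # not in contact
            nij = d / rij                     # unit vector from j to i
            fel = hertz_force(rij, rad[i], rad[j])
            fad = contact_length(rij, rad[i], rad[j]) * F_AD
            forces[i] += (fel - fad) * nij
    return pos + DT * forces / GAMMA

pos = np.array([[0.0, 0.0], [15.0, 0.0]])   # two overlapping cells (um)
rad = np.array([9.0, 9.0])
new = step(pos, rad)                         # the overlapping pair repels
```

With these illustrative parameters the repulsion dominates the adhesion, so the two overlapping cells move apart along their line of centers.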
\clearpage \newpage \begin{table}[ht!] \caption{The parameters used in the simulation.} \begin{tabular}{lcr} \toprule[1.5pt] \bf{Parameters} & \bf{Values} & \bf{References} \\ \hline Time step ($\Delta t$)& 10 $\mathrm{s}$ & ~\cite{Malmi2018} \\ Critical Radius for Division ($R_{m}$) & 11 $\mathrm{\mu m}$ & This paper\\ Mobility constant ($\eta$) & 0.1 $\mathrm{kg/ (\mu m~s)}$ & This paper \\ Benchmark Cell Cycle Time ($\tau_{min}$) & 54000 $\mathrm{s}$ & ~\cite{Freyer1986,Casciari1992,Landry1981}\\ Adhesive Coefficient ($f^{ad})$& $10^{-4} \mathrm{\mu N/\mu m}$ & ~\cite{schaller2005multicellular} \\ Mean Cell Elastic Modulus ($E_{i}) $ & $10^{-3} \mathrm{MPa}$ & ~\cite{Galle2005} \\ Mean Cell Poisson Ratio ($\nu_{i}$) & 0.5 & ~\cite{schaller2005multicellular} \\ Death Rate ($b$) & $10^{-5} \mathrm{s^{-1}}$ & This paper \\ Mean Receptor Concentration ($c^{rec}$) & 1.0 (Normalized) & ~\cite{schaller2005multicellular} \\ Mean Ligand Concentration ($c^{lig}$) & 1.0 (Normalized) & ~\cite{schaller2005multicellular} \\ Adhesive Friction $\gamma^{max}$ & $10^{-4} \mathrm{kg/ (\mu m~s)}$ & ~\cite{Malmi2018}\\ Threshold Pressure ($p_c$) & $5\times10^{-3} \mathrm{\mu N/\mu m}$ & This paper \\ \bottomrule[1.5pt] \end{tabular} \label{tableparameter} \end{table} \clearpage \newpage
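For reference, the FFT-based estimate of the energy spectrum $E(k)$ described earlier can be sketched as follows. The shell-binning details and the synthetic random velocity field are illustrative assumptions, not the exact PIV pipeline used in the analysis; the normalization is chosen so that the spectrum integrates to the mean-square velocity (Parseval's theorem), which makes the sketch self-checking.

```python
# Sketch: shell-averaged energy spectrum E(k) of a gridded 2D velocity field.
import numpy as np

def energy_spectrum(vx, vy, dx=1.0):
    """E(k) from an n x n velocity field; returns (shell centers, shell energies)."""
    n = vx.shape[0]
    vxk = np.fft.fft2(vx)
    vyk = np.fft.fft2(vy)
    # Pointwise spectral energy ~ v~*(k) . v~(k), normalized so the total
    # equals the mean-square velocity (Parseval).
    e2d = (np.abs(vxk) ** 2 + np.abs(vyk) ** 2) / n**4
    kx = np.fft.fftfreq(n, d=dx)
    ky = np.fft.fftfreq(n, d=dx)
    kmag = np.sqrt(kx[:, None] ** 2 + ky[None, :] ** 2)
    nbins = n // 2
    kedges = np.linspace(0.0, kmag.max() + 1e-9, nbins + 1)
    idx = np.digitize(kmag.ravel(), kedges) - 1        # shell index, 0..nbins-1
    Ek = np.bincount(idx, weights=e2d.ravel(), minlength=nbins)
    kcenters = 0.5 * (kedges[:-1] + kedges[1:])
    return kcenters, Ek

# Synthetic check with a random field (not real PIV data)
rng = np.random.default_rng(1)
vx = rng.standard_normal((64, 64))
vy = rng.standard_normal((64, 64))
k, Ek = energy_spectrum(vx, vy)
```

Fitting `log Ek` against `log k` over the two wavenumber ranges then gives the power-law slopes shown in the figures.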
\section{\bf Introduction} \label{S1} Einstein's general theory of relativity (GR) cannot by itself explain the accelerated expansion of our universe, established 20 years ago by various observations \cite{GR}--\cite{He}. Therefore, scientists have resorted to other theories that can account for this expansion rate. Among these is the approach in which a cosmological constant is added to the field equations of GR. The resulting model is dominated by this constant, which can explain the accelerated expansion at late times; it is known in the literature as $\Lambda$ cold dark matter ($\Lambda$CDM). However, $\Lambda$CDM suffers from a tension between gravity and quantum field theory \cite{Ws1}. This has led scientists to modify the structure of GR, either within Riemannian geometry or using another geometry. Among the modified theories that use a geometry other than the Riemannian one is $f(T)$ theory, where $T$ is the torsion scalar of teleparallel gravity. This theory has second-order differential field equations \cite{FF7}--\cite{BF9}, which makes its physics easy to analyze. Many applications of $f(T)$ have been carried out in the domain of cosmology \cite{LSB}--\cite{AG} and in the solar system \cite{Nprd3}--\cite{CGSV}. 
Other modified theories that use Riemannian geometry are:\vspace{0.1cm}\\ i- String theory, which is one of the most promising candidates for a quantum theory of gravitation \cite{KQU}.\vspace{0.1cm}\\ ii- Lovelock theories, which are the natural generalization of Einstein's GR to higher dimensions \cite{HD}.\vspace{0.1cm}\\ iii- Brans-Dicke theory, in which the gravitational interaction is described by the metric tensor of GR together with a scalar field \cite{KBS}.\vspace{0.1cm}\\ iv- The $f(R)$ theory, on which the present study focuses \cite{Gha,PS}.\vspace{0.1cm} (For recent reviews on the dark energy problem and on modified gravity theories that explain the late-time cosmic acceleration as well as inflation in the early universe, see, for instance,~\cite{Nojiri:2010wj, Capozziello:2011et, Capozziello:2010zz, Bamba:2015uma, Cai:2015emx, Nojiri:2017ncd, Bamba:2012cp}.)\\ $f(R)$ theory has many successful applications in the domain of cosmology \cite{AL}--\cite{AK}. However, it should be subjected to further tests to match the success of GR in the solar system \cite{Kk}. $f(R)$ gravity is a modification of Einstein's GR and is considered a novel geometrodynamical theory with extra degrees of freedom in the gravitational field equations \cite{BF9,UD}--\cite{MB}. The action integral of this theory contains an appropriate function of the Ricci scalar $R$, and the field equations are of fourth order. At lowest order, the field equations reduce to those of Einstein's GR, which are of second order. A correspondence between $f(R)$ and other modified gravitational theories through different frames can be found in \cite{NS}. A viable inflationary model in gravity that takes quantum corrections into account and includes $R+R^2$ gravity as a particular case was derived in \cite{Saa}. The precise forms of the scalar and tensor perturbations created during inflation in such a model are discussed in \cite{Saa1}. 
$f(R)$ theory can describe the inflationary stage as well as the dark energy dominated stage, in which the late-time cosmic acceleration is realized \cite{SF,FT1,GWF1}. Static spherically symmetric solutions are discussed in \cite{Ct,SZ}, while studies of gravitational collapse can be found in \cite{CB,ZTW}. Many black holes have been derived in the framework of $f(R)$ \cite{CDM}--\cite{CGL}, and their physical consequences are discussed in \cite{Aa17,AC7,Fv10}. To provide a good probe of $f(R)$ gravitational theory, one has to analyze its black hole solutions. Exact solutions of $f(R)$ are a hot topic, and an extensive literature covers them, from 3 dimensions \cite{HH92} to $N$ dimensions \cite{CO17,SSB}. Analytic solutions that describe rotating black holes are derived in \cite{Sa12,SSB,HS13}. The main purpose of this work is to derive, in detail and without any assumptions, the $N$-dimensional black string solutions having flat or cylindrical horizons in the framework of $f(R)$. To achieve this aim, we use a general $N$-dimensional metric that possesses a $k$-dimensional Euclidean submetric. The organization of the present paper is as follows. In Section \ref{S2}, the basics of $f(R)$ gravitational theory are given. Moreover, an $N$-dimensional spacetime with one unknown function is presented and applied to the quadratic form of the $f(R)$ field equations. In Section \ref{S2}, black string solutions are derived for two different cases, i.e., the 4-dimensional case and $N>4$. In Section \ref{S3}, we apply a coordinate transformation to the black string solutions derived in Section \ref{S2}, and it is demonstrated that the obtained rotating black string solutions satisfy the field equations of the quadratic form of $f(R)$ theory. We analyze the conserved quantities of the rotating solutions with the Komar method in Section \ref{S4}. In Section \ref{S5}, we calculate thermodynamic quantities such as the Hawking temperature and entropy. 
In addition, it is shown that the first law of thermodynamics is always satisfied for all of the solutions derived in this study, by calculating the Smarr-type formula for the total mass and angular momentum of the derived solutions. In Section \ref{S5}, the stability of the black string solutions is examined locally, and it is shown that the derived solutions are stable from the viewpoint of thermodynamics. Finally, our conclusions and discussion are presented in Section \ref{S6}. \section{Basics of $f(R)$ gravitational theory} \label{S2} We consider a gravitating field with a cosmological constant. The action of this field is given by \begin{equation} S:=\frac{1}{2\chi} \int d^Nx \sqrt{-g} (f(R)-\Lambda),\end{equation} where $\Lambda$ is the cosmological constant and $\chi$ is the $N$-dimensional gravitational constant, given by $\chi =2(N-3)\Omega_{N-1} G_N$, where $G_N$ is Newton's gravitational constant in $N$ dimensions. Here, $\Omega_{N-1}$ denotes the volume of the $(N-1)$-dimensional unit sphere, \begin{equation} \Omega_{N-1} = \frac{2\pi^{(N-1)/2}}{\Gamma((N-1)/2)},\end{equation} where $\Gamma$ is the gamma function. 
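The normalization factor $\Omega_{N-1}$ above is easy to sanity-check numerically. The short Python sketch below (an illustration, not part of the paper's derivation) evaluates the quoted formula with the standard gamma function; for $N=4$ it returns $4\pi$.

```python
# Numerical check of the unit-sphere volume factor Omega_{N-1} defined above.
import math

def omega(N):
    """Omega_{N-1} = 2 * pi**((N-1)/2) / Gamma((N-1)/2), as quoted in the text."""
    return 2.0 * math.pi ** ((N - 1) / 2) / math.gamma((N - 1) / 2)

print(omega(4))  # equals 4*pi
```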
Carrying out the variations of $S$ in terms of the metric tensor $g_{\mu \nu}$, the field equations for $f(R)$ can be derived in the form \cite{CENOZ,KK}: \begin{equation} \label{fe} E_{\mu \nu}\equiv R_{\mu \nu} f_R-\frac{1}{2}g_{\mu \nu}f(R)-2g_{\mu \nu}\Lambda +g_{\mu \nu} \Box f_R-\nabla_\mu \nabla_\nu f_R-2\kappa T_{\mu \nu}=0,\end{equation} where $R_{\mu \nu}$ is the Ricci tensor given by \begin{equation} \label{rt} R_{\mu \nu}=R^{\rho}{}_{\mu \rho \nu}=\partial_\rho\Gamma^\rho{}_{\mu \nu}-\partial_\nu\Gamma^\rho{}_{\mu \rho}+\Gamma^\rho{}_{\rho \beta}\Gamma^\beta{}_{\mu \nu}-\Gamma^\rho{}_{\nu \beta}\Gamma^\beta{}_{\mu \rho}= 2\Gamma^\rho{}_{\mu [\nu,\rho]}+2\Gamma^\rho{}_{\beta [\rho}\Gamma^\beta{}_{\nu] \mu},\end{equation} where $\Gamma^\rho{}_{\mu \nu}$ are the Christoffel symbols of the second kind and the square brackets denote antisymmetrization. The D'Alembert operator $\Box$ is defined as $\Box= \nabla_\alpha\nabla^\alpha$, where $\nabla_\alpha V^\beta$ is the covariant derivative of the vector $V^\beta$, $f_R=\frac{df(R)}{dR}$, and $T_{\mu \nu}$ is the energy-momentum tensor. The trace of the field equations (\ref{fe}), in the vacuum case, gives \begin{equation} \label{fe1} Rf_R-2f(R)-8\Lambda+3\Box f_R=0. \end{equation} Equation (\ref{fe1}) with $f(R)=R$ gives $R=-8\Lambda$. We apply the field equations (\ref{fe}) to the following metric \begin{equation} \label{m2} ds^2= -h(r)dt^2+\frac{1}{h(r)}dr^2+r^2\left(\sum_{i=1}^{\ell}d\phi^2_i+\sum_{k=1}^{N-\ell-2}dz_kdz_k\right). \end{equation} Here, $0\leq r< \infty$, $-\infty < t < \infty$, $0\leq \phi_{\ell}< 2\pi$, $-\infty < z_k < \infty$, and $h(r)$ is an unknown function of the radial coordinate $r$. By using Eq. (\ref{m2}), we obtain the Ricci scalar, given by \begin{equation} R=-\frac{r^2 h''+2(N-2)r h'+(N-2)(N-3)h}{r^2},\end{equation} where $h'=\frac{dh(r)}{dr}$ and $h''=\frac{d^2h(r)}{dr^2}$. The non-zero components of the $f(R)$ field equations, Eqs. 
(\ref{fe}), for $f(R)=R+b R^2$, where $b$ is a dimensionful parameter, and $T_\mu{}^\nu=0$, take the form\footnote{The detailed calculations of the Ricci curvature tensor are given in Appendix B.} \begin{eqnarray} \label{fe5} &&E_t{}^t= \frac{1}{2r^4}\Bigg(b r^3\Bigg\{2h'''[rh'+6h(N-2)]+4rh h''''-rh''^2\Bigg\}+2(N-2)r^2b h''(2[3N-11]h+rh')+2(N^2-7N+10)b r^2 h'^2\nonumber\\ &&-h'[(N-2)r^3-2b r h(N-2)(3N^2-29N+64)]-h(N^2-5N+6)r^2+2b(N^2-N-2)h^2+4 r^4\Lambda\Bigg)=0,\nonumber\\ && E_r{}^r= \frac{1}{2r^4}\Bigg(b r^3\Bigg\{2h'''[rh'+2h(N-2)]-rh''^2\Bigg\}+2(N-2)r^2b h''(4[N-2]h+rh')+2(N^2-7N+10)b r^2 h'^2\nonumber\\ &&-h'[(N-2)r^3+2\{4(N-2)-3(N-4)(N^2-5N+6)\}b r h]-(N-2)(N-3)[hr^2-b\{N^2-13N+22\}h^2]+4 r^4\Lambda\Bigg)=0,\nonumber\\ && E_{\phi_1}{}^{\phi_1}=E_{\phi_2}{}^{\phi_2}=\cdots=E_{\phi_{\ell}}{}^{\phi_{\ell}}=E_{z_1}{}^{z_1}=E_{z_2}{}^{z_2}=\cdots=E_{z_{N-\ell-2}}{}^{z_{N-\ell-2}} =\frac{1}{2r^4}\Bigg(b r^3\Bigg\{4h'''[rh'+h(3N-7)]+4rh h''''+rh''^2\Bigg\}\nonumber\\ &&-r^2 h''[r^2-2b \{2(3N-7)rh'-[2(N-2)-(N-4)(7N-15)]h\}]+4(N-2)[2N-9]b r^2 h'^2-(N-3)(N-4)r^2h+4 r^4\Lambda\nonumber\\ &&-h'(2(N-3)r^3+8(N-2)\{2(N-3)-(N-4)(N-5)\}b r h)-2b(N-6)[(N-3)(2N-1)-2(N-4)(N-5)]h^2\Bigg)=0.\nonumber\\ && \end{eqnarray} The above system does not admit a single general solution. Therefore, we deal with it in two separate cases. The first case is the 4-dimensional one, in which the solution of the above system is expressed as \begin{eqnarray} \label{4d} h(r)=\frac{2r^2\Lambda}{3}+\frac{c_1}{r}. \end{eqnarray} Equation (\ref{4d}) shows that the higher-curvature term has no effect, i.e., solution (\ref{4d}) is completely identical to that of GR. The second case is the one in which $N>4$, whose solution takes the form \begin{eqnarray} \label{nd} h(r)=\frac{(N-2)r^2\left[1\pm\sqrt{1-\frac{16N(N-4)b \Lambda}{(N-2)^2}}\right]}{2N(N-1)(N-4)b}+\frac{c_2}{r^{N-3}}, \end{eqnarray} where $c_1$ and $c_2$ are constants of integration. 
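The constant-curvature property of the four-dimensional solution can be verified symbolically. The sketch below (using sympy; an illustration, not the authors' code) substitutes the $N=4$ metric function appearing in Eq. (\ref{m5}), $h(r)=\frac{2r^2\Lambda}{3}+\frac{c_1}{r}$, into the Ricci-scalar formula quoted earlier and confirms $R=-8\Lambda$, in agreement with the trace equation for $f(R)=R$.

```python
# Symbolic check: for N = 4, h(r) = 2*Lambda*r**2/3 + c1/r gives a constant
# Ricci scalar R = -8*Lambda in the quoted formula
#   R = -(r^2 h'' + 2(N-2) r h' + (N-2)(N-3) h) / r^2.
import sympy as sp

r, Lam, c1 = sp.symbols('r Lambda c_1')
N = 4
h = 2 * Lam * r**2 / 3 + c1 / r
R = -(r**2 * sp.diff(h, r, 2) + 2 * (N - 2) * r * sp.diff(h, r)
      + (N - 2) * (N - 3) * h) / r**2
assert sp.simplify(R + 8 * Lam) == 0   # R = -8*Lambda, independent of r and c_1
```

Note that the $c_1/r$ piece drops out of the curvature entirely, which is why the $bR^2$ term leaves the four-dimensional solution unchanged.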
Equation (\ref{nd}) shows how the solution is affected by the parameter $b$. We also stress that the parameter $b$ must not vanish when we take the $+$ sign in front of the square root in Eq. (\ref{nd}). \begin{figure} \begin{center} \includegraphics[scale=0.4]{JF1} \caption{The function $h(r)$ versus the radial coordinate $r$ for $c_1 =1$, $N = 6$, $b=0.5$ (red curve), $b=1$ (black-dash curve), $b=-5$ (blue curve) and $b=-1$ (brown curve).} \end{center} \end{figure} Figure 1 shows that $h(r)$ is always positive under the restriction $\Lambda=\frac{(N-2)^2}{16N(N-4)b }$. The metric spacetimes of solutions (\ref{4d}) and (\ref{nd}) have the form \begin{eqnarray} \label{m5} ds_1{}^2&=&-\left\{\frac{2r^2\Lambda}{3}+\frac{c_1}{r}\right\}dt^2+\left\{\frac{2r^2\Lambda}{3}+\frac{c_1}{r}\right\}^{-1}dr^2+r^2\left(d\phi^2_1+dz^2_1\right)\;, \qquad \qquad \qquad N=4,\nonumber\\ ds_2{}^2&=&-\left\{\frac{(N-2)r^2\left[1\pm\sqrt{1-\frac{16N(N-4)b \Lambda}{(N-2)^2}}\right]}{2N(N-1)(N-4)b}+\frac{c_2}{r^{N-3}}\right\}dt^2+\left\{\frac{(N-2)r^2\left[1\pm\sqrt{1-\frac{16N(N-4)b \Lambda}{(N-2)^2}}\right]}{2N(N-1)(N-4)b}+\frac{c_2}{r^{N-3}}\right\}^{-1}dr^2\nonumber\\ &&+r^2\left(\sum_{i=1}^{\ell}d\phi^2_i+\sum_{k=1}^{N-\ell-2}dz_kdz_k\right)\;, \qquad \qquad \qquad \qquad \qquad \qquad \qquad N>4. \end{eqnarray} Asymptotically, the spacetimes (\ref{m5}) behave as AdS/dS. \section{Rotating black string solutions}\label{S3} In this section, we analyze the rotating solutions which satisfy the quadratic form of the field equations (\ref{fe}) of $f(R)$. 
To this end, we explore two cases separately: \vspace{0.1cm}\\ i-The rotating case when $N=4$, \hspace{3cm} ii-The rotating case when $N>4$.\vspace{0.3cm}\\ \underline{i-The rotating case when $N=4$ }:\vspace{0.2cm}\\ In this case, we apply the following coordinate transformations\footnote{From now on, we write the cosmological constant in the form $\Lambda=-\frac{(N-1)(N-2)}{2\lambda^2}.$} \begin{equation} \label{t3} {\phi}'_1 =\frac{ a_1}{\lambda^2}~t-\Xi~ {\phi_1},\qquad \qquad \qquad {t}'= \Xi~ t-a_1~ \phi_1, \end{equation} where $a_1$ is the rotation parameter and $\Xi$ is defined as \[\Xi:=\sqrt{1+\frac{a_1{}^2}{\lambda^2}}.\] Applying the transformation (\ref{t3}) to the metric (\ref{m5}) in the case $N=4$, we obtain \begin{eqnarray} \label{m11} && ds_1{}^2=-\Big(\frac{\Xi^2 \lambda^2 h(r)-a_1{}^2 r^2}{\lambda^2}\Big)dt'^2 +\frac{dr^2}{h(r)}+r^2\left(\Xi^2d\phi'^2_1+dz^2_1\right)-a_1{}^2h(r)d\phi'^2_1+\frac{2\Xi a_1[r^2+\lambda h(r)]d\phi'_1 dt'}{\lambda}\;,\nonumber\\ && \end{eqnarray} where $h(r)$ is given by Eq. (\ref{4d}). It is important to mention that for a vanishing rotation parameter, $a_1=0$, we recover the spacetime (\ref{m5}) with $N=4$. 
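A useful algebraic property of the transformation (\ref{t3}) can be checked symbolically: written as a matrix acting on $(t,\phi_1)$, it is an involution (its own inverse) with Jacobian determinant $-1$ whenever $\Xi^2 = 1 + a_1^2/\lambda^2$. The sympy sketch below is an illustration, not part of the paper's derivation.

```python
# Check: the coordinate transformation (t3) is an involution with |det| = 1
# when Xi^2 = 1 + a_1^2 / lambda^2.
import sympy as sp

a, lam = sp.symbols('a_1 lambda', positive=True)
Xi = sp.sqrt(1 + a**2 / lam**2)
# (t', phi'_1) = M (t, phi_1), read off from Eq. (t3)
M = sp.Matrix([[Xi, -a], [a / lam**2, -Xi]])
assert sp.simplify(M * M - sp.eye(2)) == sp.zeros(2)   # M is its own inverse
assert sp.simplify(M.det() + 1) == 0                    # det M = -1
```

This is consistent with the remark below that the transformation is admissible locally: it is invertible everywhere, and the global obstruction comes only from the compactness of $\phi_1$.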
\underline{ii-The rotating case when $N>4$}:\vspace{0.2cm}\\ For this case, we apply the following coordinate transformations, \begin{equation} \label{t1} {\phi}'_i =-\Xi~ {\phi}_i+\frac{ a_i}{\lambda^2}~t,\qquad \qquad \qquad {t}'= \Xi~ t-\sum\limits_{i=1}^{{\ell}} a_i~ \phi_i, \end{equation} where the $a_i$ are the rotation parameters, $\ell$ is their number, and $\Xi$ is defined as \[\Xi:=\sqrt{1+\sum\limits_{i=1}^{{\ell}}\frac{ a_i{}^2}{\lambda^2}}.\] Applying the transformation (\ref{t1}) to the metric (\ref{m5}) in the case $N>4$, we get \begin{eqnarray} \label{m1} && ds_2{}^2=-h(r)\left[\Xi d{t'} -\sum\limits_{i=1}^{\ell} a_{i}d{\phi'}_i \right]^2+\frac{dr^2}{h(r)}+\frac{r^2}{\lambda^4}\sum\limits_{i=1 }^{\ell}\left[a_{i}d{t'}-\Xi \lambda^2 d{\phi'}_i\right]^2+ r^2d\Sigma^2-\frac{r^2}{\lambda^2}\sum\limits_{i<j }^{\ell}\left(a_{i}d{\phi'}_j-a_{j}d{\phi'}_i\right)^2,\end{eqnarray} where $h(r)$ is given by Eq. (\ref{nd}), and $d\Sigma^2=dz^kdz^k$ is the Euclidean metric in $(N-\ell-2)$ dimensions, with $k = 1,2,\cdots, N-\ell-2$. We note that the static configuration (\ref{m5}) can be recovered as a special case when the rotation parameters $a_i$ all vanish. It is important to stress that when the physical quantities $c_1$ (for $N=4$) and $c_2$ (for $N>4$) vanish, we obtain an AdS spacetime. It is an easy task to analyze the resulting limiting metric by calculating its curvature components. Finally, it should be stressed that the coordinate transformations (\ref{t3}) are admitted locally but not globally \cite{Lj95, Aa2}, because the compactified angular coordinate $\phi_1$ is mixed with the temporal coordinate $t$ by the coordinate transformations. 
This fact has been discussed in \cite{Sj82}, where it was pointed out that if the first Betti number of the manifold is non-zero, the global diffeomorphisms by which two spacetimes can be connected do not exist, and hence one has a new manifold parameterized globally by the rotation parameters $a$. For the solution given by Eq. (\ref{m11}), the first Betti number is one for the cylindrical or toroidal horizons. The same analysis applies to the coordinate transformation (\ref{t1}), for which the non-zero first Betti number arises from the compactification of certain angular coordinates $\phi_i$ of the $(N-2)$-dimensional submanifold of the solution. In this paper, we call the corresponding parameters the rotation parameters of the solution. \section{Total conserved charge} We study the conserved quantities of the solutions found in the preceding section. For this purpose, we present the basics of the Einstein-Cartan geometry used for these calculations\footnote{Since the Ricci scalars of the solutions (\ref{m11}) and (\ref{m1}) are equal to $-8\Lambda$ and $\frac{-(N-2)\left[1+\sqrt{1-\frac{16N(N-4)b \Lambda}{(N-2)^2}}\right]}{2(N-4)b}$, respectively, we use the Komar formula to investigate the conserved quantities of the solutions obtained in the preceding section.}. The Lagrangian of this theory has the form \cite{OR6}\footnote{The basic notations are given in Appendix A.}: \begin{equation} \label{lag} {V(\vartheta^i, \ {R^j}_{k}):=-\frac{1}{2\kappa}\left(R^{i j}\wedge \eta_{i j}-2\Lambda \eta\right)},\end{equation} where ${\vartheta^i}$ is the coframe one-form and ${ R^{i j}}$ is the curvature two-form. Applying the variational principle to Eq. 
(\ref{lag}), we obtain \cite{OR6,Kw1} \begin{eqnarray} && { E_{i}:= -\frac{\partial V}{\partial \vartheta^i}=-\frac{1}{2\kappa}\left(R^{ j k}\wedge \eta_{i j k}-2\Lambda \eta_i\right) , \qquad \qquad B_{i j}:= -\frac{\partial V}{\partial R_{i j}}=\frac{1}{2\kappa}\lambda_{i j}},\end{eqnarray} where ${B_{i j}}$ and $ { E_{i}}$ refer to the rotational gauge momentum and the energy-momentum, respectively. We can also define the following quantities \begin{eqnarray} && E_{i j}:= -\vartheta_{[i}\wedge H_{j]}=0, \qquad H_{i}:=-\frac{\partial V}{\partial T^i}=0,\end{eqnarray} which refer to the spin and the translational momentum, respectively. The conserved quantity is represented in the form \cite{OR6} \begin{eqnarray} \label{con} && { \jmath[\xi]=\frac{1}{2\kappa}d\left\{{}^{*}\left[dk+\xi\rfloor\left(\vartheta^i\wedge T_i\right)\right]\right\}, \quad \textrm{where} \quad k=\xi_i \vartheta^i, \qquad \textrm{and} \qquad \xi^i=\xi\rfloor\vartheta^i}, \end{eqnarray} where $\xi$ is a vector expressed as $\xi=\xi^i\partial_i$, with $\xi^i$ being $N$ parameters, and $*$ denotes the Hodge duality. When the torsion one-form vanishes, i.e., $T_i= 0$, the total charge of Eq. (\ref{con}) reads \begin{equation} \label{con1} {{\cal Q}}[\xi]=\frac{1}{2\kappa}\int_{\partial S}{^*}dk. \end{equation} This is the conserved invariant quantity \cite{Ka2}--\cite{Ka3}. We apply Eq. (\ref{con1}) to the solutions (\ref{m11}) and (\ref{m1}). We first calculate the necessary components for the case $N=4$, with the co-frame given by \begin{eqnarray} \label{cof} && \vartheta^{0}=\sqrt{h(r)}[\Xi dt'-a_1d\phi'_1], \qquad \vartheta^{1}= \frac{dr}{\sqrt{h(r)}}, \qquad \vartheta^{2}= r dz_1, \qquad \vartheta^{3}=r\Xi d\phi'_1-\frac{r a_1}{\lambda^2}dt'.\nonumber\\ && \end{eqnarray} Using Eq. (\ref{cof}) in Eq. 
(\ref{con}), we obtain \begin{equation} \label{kfor} k=\frac{\lambda^4 h^2(r)(a_1\xi_3-\Xi\xi_0)(\Xi dt'-a_1d\phi'_1) +\lambda^4 \xi_1dr+r^2h(r)[\lambda^4 \xi_2dz_1 +(\lambda^4 \Xi^2 \xi_3-\lambda^2 a_1\Xi \xi_0)d\phi'_1+a_1( a_1\xi_0- \lambda^2 \Xi \xi_3)dt']}{\lambda^4 h(r)}.\end{equation} The total derivative of Eq. (\ref{kfor}) gives \begin{eqnarray} \label{dfor} && dk= \frac{1}{\lambda^4h(r)}\Bigg[\Bigg\{h'(r)\{\lambda^4\Xi( a_1\xi_3-\Xi \xi_0)h(r)+r^2a_1(\lambda^2\Xi \xi_3-a_1\xi_0)\}-ra_1(\lambda^2 \Xi \xi_3-a_1\xi_0)[rh'(r)+2h(r)]\Bigg\}(dr \wedge dt')\nonumber\\ && +2\lambda^4rh(r)\xi_2(dr \wedge dz_1)-\lambda^2\Bigg\{ h'(r)\{\lambda^2a_1( a_1\xi_3-\Xi \xi_0)h(r)+r^2\Xi(\lambda^2\Xi \xi_3-a_1\xi_0)\}\nonumber\\ && -r\Xi(\lambda^2\Xi \xi_3-a_1\xi_0)[rh'(r)+2h(r)]\Bigg\}(dr \wedge d\phi'_1)\Bigg]. \end{eqnarray} Inverting Eq. (\ref{cof}), we get \begin{eqnarray} \label{cof1} && dt'=\frac{1}{r}\left( \frac{\vartheta^{0}r\Xi}{\sqrt{h(r)}}+\vartheta^{3} a_1\right), \qquad \qquad d\phi'_1=\frac{1}{r}\left( \vartheta^{3}\Xi+\frac{\vartheta^{0} ra_1}{\lambda^2\sqrt{h(r)}}\right), \qquad dr= \vartheta^{1}\sqrt{h(r)}, \qquad dz_1=\frac{\vartheta^{2}}{ r} .\nonumber\\ && \end{eqnarray} By combining Eq. (\ref{dfor}) with Eqs. (\ref{cof1}) and (\ref{con1}) and applying the Hodge dual to $dk$, we obtain the following form for the total conserved charges \begin{equation} \label{4dcon} { {{\cal Q}}[\xi_t']=\frac{\Xi^2}{\lambda^2}(M-4r^3) , \qquad \qquad {{\cal Q}}[\xi_r]={{\cal Q}}[\xi_{z_k}]=0, \qquad {{\cal Q}}[\xi_{\phi'_1}]=\frac{a_1 \Xi}{\lambda^2}(M-4r^3)},\end{equation} where $M=-c_1$. Repeating the same calculations for the case $N>4$, with Eq. 
(\ref{m1}), we find \begin{eqnarray} \label{cofn} && \vartheta^{0}=\sqrt{h(r)}[\Xi dt'-\sum\limits_{i=1 }^{\ell}a_id\phi'_i], \qquad \vartheta^{1}= \frac{dr}{\sqrt{h(r)}}, \qquad \vartheta^{z_1}= r dz_1, \qquad \vartheta^{z_2}= r dz_2, \quad \cdots, \quad \vartheta^{z_{N-\ell-2}}= r dz_{N-\ell-2},\nonumber\\ && \vartheta^{\phi'_i}=r\Xi d\phi'_i-\frac{r a_i}{\lambda^2}dt',\nonumber\\ \end{eqnarray} where $i=1,\cdots,\ell$. By substituting Eq. (\ref{cofn}) into Eq. (\ref{con1}), we obtain \begin{eqnarray} \label{cofnn1} && k=\frac{1}{\lambda^4 h(r)}\Bigg[h^2(r)\lambda^4(\sum\limits_{i=1 }^{\ell}a_i\xi_{i+k+1}-\Xi\xi_0)(\Xi dt'-\sum\limits_{i=1 }^{\ell}a_id\phi'_i) +\lambda^4 \xi_1dr+r^2h(r)(\lambda^4 \sum\limits_{i=1 }^{k}\xi_{i+1}dz_i\nonumber\\ && +\sum\limits_{i=1 }^{\ell}(\lambda^4 \Xi^2 \xi_{{i+k+1}}-\lambda^2 a_i\Xi \xi_0)d\phi'_i+\sum\limits_{i=1 }^{\ell}( a_i{}^2\xi_0- \lambda^2 \Xi a_i\xi_{i+k+1})dt')\Bigg].\end{eqnarray} The total derivative of Eq. 
(\ref{cofnn1}) yields \begin{eqnarray} \label{cofnnn} && dk= \frac{1}{\lambda^4h(r)}\Bigg[\Bigg\{h'(r)\Bigg[\lambda^4\Xi( \sum\limits_{i=1 }^{\ell}a_i\xi_{i+k+1}-\Xi \xi_0)h(r)+r^2(\lambda^2\Xi\sum\limits_{i=1 }^{\ell}a_i\xi_{i+k+1} -\left(\sum\limits_{i=1 }^{\ell}a_i\right)^2\xi_0)\Bigg]\nonumber\\ &&-r(\lambda^2\Xi\sum\limits_{i=1 }^{\ell}a_i\xi_{i+k+1} -\left(\sum\limits_{i=1 }^{\ell}a_i\right)^2\xi_0)[rh'(r)+2h(r)]\Bigg\}(dr \wedge dt') +2\lambda^4rh(r)\sum\limits_{i=1 }^{k}\xi_{i+1}(dr \wedge dz_i)\nonumber\\ && -\lambda^2\sum\limits_{i=1 }^{\ell}(dr \wedge d\phi'_i) \Bigg\{ h'(r)\{\lambda^2 a_i( \sum\limits_{j=1 }^{\ell}a_j\xi_{j+k+1}-\Xi \xi_0)h(r)+r^2\Xi(\lambda^2\Xi \xi_{i+k+1}-a_i\xi_0)\}\nonumber\\ && -r\Xi(\lambda^2\Xi \xi_{i+k+1}-a_i\xi_0)[rh'(r)+2h(r)]\Bigg\}\Bigg]. \end{eqnarray} We calculate the inverse of Eq. (\ref{cofn}), as we did in the 4-dimensional case. By combining the result with Eq. (\ref{cofnnn}) and using the Hodge dual, we find that the conserved charges of the $N$-dimensional spacetime (\ref{m1}) are given by \begin{equation} \label{Ncon} {{\cal Q}}[\xi_t]=\frac{\Omega_{N-1}h'(r)}{16 \pi \lambda^2}, \qquad \qquad {{\cal Q}}[\xi_r]={{\cal Q}}[\xi_{z_i}]=0, \qquad {{\cal Q}}[\xi_{\phi_i}]=\frac{a_i h'(r) \Omega_{N-1} }{16 \pi \lambda^2},\end{equation} where $h(r)$ is given by Eq. (\ref{4d}) in the 4-dimensional case and by Eq. (\ref{nd}) for $N>4$, and $h'(r)=\frac{dh(r)}{dr}$. Equations (\ref{4dcon}) and (\ref{Ncon}) show that the conserved quantities of the spacetimes (\ref{m11}) and (\ref{m1}) are divergent. Thus, a regularization of Eq. (\ref{con1}) is necessary. \section{Regularization with the relocalization for the conserved charge}\label{S4} It is seen that Eq. (\ref{lag}) is invariant under general coordinate and local Lorentz transformations. 
In \cite{OR6}, however, it has been indicated that there is an additional ambiguity in the definition of the conserved quantities, beyond the diffeomorphism and local Lorentz freedom. This stems from the fact that the field equations always allow a relocalization of the gravitational field momenta. A relocalization is generated by changing the Lagrangian of the gravitational field by a total derivative, \begin{equation} \label{lag2} V'=V+d\Phi, \qquad \Phi=\Phi(\vartheta^{\alpha}, {\Gamma_\alpha}^\beta, T^\alpha, {R_\alpha}^\beta).\end{equation} The second term on the right-hand side of Eq. (\ref{lag2}) changes only the boundary part of the action, and hence the field equations are left unaltered \cite{OR6}. With the relocalization method, the conserved charge reads \cite{OR6} \begin{equation} \label{conr} {{\cal J}}[\xi]=-\frac{\lambda^2}{4\kappa }\int_{\partial S} \eta_{\alpha \beta \mu \nu}\Xi^{\alpha \beta} W^{\mu \nu}. \end{equation} Here, $W^{\mu \nu}$ is the Weyl 2-form, given by \begin{equation} W^{\alpha \beta}=\frac{1}{2}{C_{\mu \nu}}^{\alpha \beta}{\vartheta}^{\mu}\wedge {\vartheta}^{\nu},\end{equation} where ${C_{\mu \nu}}^{\alpha \beta}={h_\mu}^i{h_\nu}^j {h^\alpha}_k {h^\beta}_l{C_{i j}}^{k l}$ is the Weyl tensor, and $\Xi^{\alpha \beta}$ is represented as\footnote{The derivation of Eq. (\ref{conr}) is explained in \cite{OR6,OR7,OR8}.} \begin{equation} \Xi_{\alpha \beta}=\frac{1}{2}e_\beta\rfloor e_\alpha \rfloor dk.\end{equation} The conserved currents ${{\cal J}}[\xi]$ are invariant under both coordinate and local Lorentz transformations, and they are associated with a vector field $\xi$ on the spacetime manifold. To analyze the conserved quantities of the spacetimes (\ref{m11}) and (\ref{m1}), Eq. 
(\ref{conr}) is used. In the case of $N=4$ dimensions, with the metric (\ref{m11}), we examine the components of Eq. (\ref{conr}). The non-zero components of $\Xi^{\alpha \beta}$ read\footnote{The non-zero components of the Weyl tensor are described in the Appendix.} \begin{equation} \label{s1} \Xi_{01} =-\frac{[\Xi\xi_0+a_1\xi_3][c_1\lambda^2-4r^3]}{2r^2\lambda^2},\qquad \qquad \qquad \Xi_{1 3} =-\frac{[a_1\xi_0-\Xi\xi_3\lambda^2]\sqrt{h(r)}}{\lambda^2}. \end{equation} Using Eqs. (\ref{conr}) and (\ref{s1}), we get \begin{equation} \label{con4} \eta_{\alpha \beta \mu \nu}\Xi^{\alpha \beta} W^{\mu \nu}\cong-\frac{4c_1([a_1{}^2+2\lambda^2\Xi^2]\xi_0+a_1 \Xi\lambda^2\xi_3)}{\lambda^4}+O\left(\frac{1}{r^3}\right).\end{equation} The substitution of Eq. (\ref{con4}) into Eq. (\ref{conr}) leads to \begin{equation} \label{s2} {{\cal J}}[\xi_t]=M[3\Xi^2-1],\qquad {{\cal J}}[\xi_r]={{\cal J}}[\xi_\theta]=0, \qquad {{\cal J}}[\xi_{\phi_1}]=Ma_1\Xi,\end{equation} which is consistent with the results found in \cite{Aa02, Sa12}. Through the same procedure for the metric (\ref{m1}), we obtain the following non-zero components of $\Xi^{\alpha \beta}$: \begin{eqnarray} && \Xi_{1 t} =\left[\sum\limits_{i=0 }^{\ell}a_{1+i}\xi_{n-k+i}-\Xi\xi_0\right]h'(r),\qquad \Xi_{1 n-k+i} =-\frac{2[a_{1+i}\xi_0-\Xi\xi_{n-k+i}\lambda^2]\sqrt{h(r)}}{\lambda^2}, \end{eqnarray} where $h(r)$ is given by Eq. (\ref{nd}). Using Eq. (\ref{conr}), we have \begin{equation} \label{con5} \eta_{\alpha \beta \mu \nu}\Xi^{\alpha \beta} W^{\mu \nu}\cong\frac{4c_1\left([\sum\limits_{i=0 }^{\ell}a_i{}^2+(n-2)\lambda^2\Xi^2]\xi_0+\sum\limits_{i=0 }^{\ell}a_i \Xi\lambda^2\xi_3\right)}{\lambda^2}+O\left(\frac{1}{r^6}\right).\end{equation} By combining Eqs.
(\ref{con5}) and (\ref{conr}), we obtain \begin{equation} \label{conrr} {{\cal J}}[\xi_t]=M[(n-1)\Xi^2-1]\xi_0,\qquad {{\cal J}}[\xi_r]={{\cal J}}[\xi_\theta]=0, \qquad {{\cal J}}[\xi_{\phi_{1+i}}]=Ma_{1+i}\Xi\xi_{n-\ell+i},\end{equation} where $i=0,1, \cdots, \ell-1$. Equation (\ref{conrr}) is compatible with the results derived in \cite{Aa02}. \section{Thermodynamics for black holes}\label{S5} In this section, we investigate the thermodynamic quantities, such as the temperature and the entropy, of the solutions (\ref{m11}) and (\ref{m1}) derived in Sec. III. We calculate the Hawking temperature by requiring the absence of a conical singularity at the horizon in the Euclidean continuation of the black string solutions. For this purpose, we take $t \rightarrow i\tau$ and consider the Euclidean section of the metric (\ref{m11}), so that the inverse period of the Euclidean time, namely, the temperature of the outer event horizon at $r = r_h$, can be written as \begin{equation} T=\frac{1}{4\pi\Xi}\left(\frac{d g_{tt}(r)}{dr}\right)_{r_h}=\frac{3r_{h}}{4\lambda^2\Xi \pi},\end{equation} where $r_h$ is the largest root of $h(r)$, given by Eq. (\ref{4d}). In the Euclidean continuation, we put $a_1 \rightarrow \sqrt{-1}a_1$. By identifying $\tau \sim \tau +T^{-1}$, we have $\phi_1\sim \phi_1 +\sqrt{-1} T^{-1} \Omega$ \cite{HHR}. Here, the angular velocity at the horizon is given by \begin{equation} \omega_1=\frac{a_1}{\Xi \lambda^2}.
\end{equation} For the case of $N>4$, through the same calculation, we obtain \begin{equation} T=\frac{1}{4\pi \Xi}\left(\frac{d g_{tt}(r)}{dr}\right)_{r_h}=\frac{r_{h}(N-1)}{4\pi \Xi \lambda^2},\end{equation} and the angular velocities take the form \begin{equation} \omega_i=\frac{a_i}{\Xi \lambda^2}.\end{equation} Based on the investigations in \cite{BNOV}, we consider the entropy of black holes in the framework of $f(R)$ gravity\footnote{The interpretation of the relation between thermodynamics and gravitation as the equation of state has been studied in \cite{Jacobson:1995ab} for general relativity and in \cite{Eling:2006aw} for $f(R)$ gravity. The applications of these arguments to various modified gravity theories have been investigated in \cite{Bamba:2009gq, Bamba:2009id}.}. From the Noether method used to calculate the black hole entropy in $f(R)$ gravity with a constant Ricci scalar, we obtain \cite{CENOZ} \begin{equation} S=\frac{A}{4G}f'(R)\mid_{r=r_h},\end{equation} with $A$ the event horizon area. With the solution (\ref{m11}), for $N=4$, we have \begin{equation} \label{s4} S=\frac{(1-16b \Lambda )\Xi r_h{}^2}{4\lambda^2}, \end{equation} and for $N>4$, we get \begin{equation} \label{sn} S=\frac{\Xi {r_{h}{}^{N-2}(1-16b \Lambda) \Omega_{N-2}}}{4\lambda^2}, \end{equation} where $\Omega_{N-2}$ denotes the volume of the $(N-2)$-dimensional unit sphere. We examine whether the first law of thermodynamics for black strings is satisfied by the thermodynamic quantities of the system together with the conserved ones. With the mass and angular momentum given in Eq. (\ref{conrr}), the entropy given in Eqs. (\ref{s4}) and (\ref{sn}), and the horizon condition $h(r_h) = 0$, a Smarr-type formula is obtained as \begin{equation} M(S,J)=\frac{J(3\zeta-1)}{3\lambda \zeta\sqrt{\zeta-1}},\end{equation} where $\zeta=\Xi^2$ is the real positive root of Eqs.
(\ref{4d}) and (\ref{nd}), which can be rewritten as follows: \begin{eqnarray} && J-\frac{3(4\lambda^4S\sqrt{\zeta})^{3/2}\omega_1}{\sqrt{{\zeta}(\lambda^2+48b)^3}}=0, \qquad N=4,\nonumber\\ && J-\omega_{1+i}\sqrt{\zeta}\left(\frac{4\lambda^4S}{ \Omega_{N-2}\sqrt{\zeta}(\lambda^2+48b)}\right)^{(N-1)/(N-2)}=0, \qquad N>4. \end{eqnarray} The entropy $S$ and the angular momentum $J$ can be regarded as a complete set of extensive quantities for the mass $M(S, J)$. Hence, the conjugate intensive quantities, namely, the temperature and the angular velocities, should be defined as \cite{Sa12} \begin{equation} T=\left(\frac{\partial M}{\partial S}\right)_{J}, \qquad \omega_i=\left(\frac{\partial M}{\partial J_i}\right)_{S}.\end{equation} It can be demonstrated that the conserved and thermodynamic quantities satisfy the following first law of thermodynamics \begin{equation} dM=TdS+\sum\limits_{i=1 }^{\ell} \omega_i dJ_i.\end{equation} We emphasize that, in general, neither the temperature nor the other conserved quantities depend on the choice of coordinates. The thermodynamic and conserved quantities in the static case, in which the rotation parameters vanish, differ from those in the rotating spacetimes. Consequently, the static metric differs from the rotating one, and the two represent distinct spacetimes. Figures 2 and 3 show that our system is thermally stable for $N=4$ and $N>4$.
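The Hawking temperatures quoted above can be cross-checked symbolically. Since the explicit forms of $h(r)$ in Eqs. (\ref{4d}) and (\ref{nd}) are not reproduced in this section, the sketch below assumes the standard AdS black-string form $h(r)=r^2/\lambda^2-c_1/r^{N-3}$ implied by the quoted results; with this assumption one finds $h'(r_h)=(N-1)r_h/\lambda^2$, hence $T=h'(r_h)/(4\pi\Xi)=(N-1)r_h/(4\pi\Xi\lambda^2)$ for any $N\geq 4$:

```python
import sympy as sp

r, r_h, lam, Xi, c1 = sp.symbols('r r_h lambda Xi c_1', positive=True)

def hawking_T(Ndim):
    # Assumed AdS black-string metric function (a hypothetical stand-in for
    # Eqs. (4d)/(nd), which are not reproduced in this excerpt):
    # h(r) = r^2/lambda^2 - c1/r^(Ndim-3)
    h = r**2/lam**2 - c1/r**(Ndim - 3)
    # Fix the integration constant by the horizon condition h(r_h) = 0
    c1_h = sp.solve(h.subs(r, r_h), c1)[0]
    hp_h = sp.diff(h, r).subs(c1, c1_h).subs(r, r_h)
    # T = (1/(4 pi Xi)) dg_tt/dr |_{r_h} = h'(r_h)/(4 pi Xi)
    return sp.simplify(hp_h/(4*sp.pi*Xi))

print(hawking_T(4))  # should reduce to 3 r_h / (4 pi Xi lambda^2)
print(hawking_T(9))  # should reduce to (N-1) r_h / (4 pi Xi lambda^2), N = 9
```

Both cases reproduce the temperatures above, confirming that the two formulas are the $N=4$ and $N>4$ instances of the single relation $T=(N-1)r_h/(4\pi\Xi\lambda^2)$.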
\begin{figure*} \centering \includegraphics[scale=0.5]{JF2} \caption{The function $\left(\frac{\partial^2M}{\partial S^2}\right)_J$ versus $r_h$ for $N = 4$.} \label{Fig:1} \end{figure*} \begin{figure*} \centering \includegraphics[scale=0.5]{JF3} \caption{The function $\left(\frac{\partial^2M}{\partial S^2}\right)_J$ versus $r_h$ for $N = 9$.} \label{Fig:2} \end{figure*} \section{Summary and discussion}\label{S6} In this study, we have derived $N$-dimensional static black string solutions with flat or cylindrical horizons in $f(R)$ gravitational theory. We have applied a spacetime ansatz, possessing a $k$-dimensional Euclidean metric, $\ell$ angular coordinates, and one unknown function of the radial coordinate, to the gravitational field equations in $f(R)$ gravity with the quadratic form $f(R)=R+b R^2$. The resulting differential equations are solved exactly without any assumption, and general solutions for $N=4$ and $N>4$ have been derived. These solutions are classified as follows:\vspace{0.1cm}\\ i) The $N=4$ solution is completely identical with that of GR and behaves asymptotically as an AdS/dS spacetime.\vspace{0.1cm}\\ ii) The solutions with $N>4$ are affected by the higher-order curvature terms, i.e., the solutions contain the dimensional parameter $b$. In general, the solutions of the case $N>4$ cannot reduce to the GR solutions, because the parameter $b$ is not allowed to go to zero. To construct rotating black string solutions, we have applied a coordinate transformation that relates the temporal coordinate to the rotating one in the case $N=4$, and the temporal to the angular coordinates in the case $N>4$, and have derived the solutions of the rotating black strings that satisfy the gravitational field equations for $f(R)=R+bR^2$.
The topology of the resulting solution in the case $N=4$ is a cylindrical spacetime with $R\times S^1$, where $0\leq \phi_{1}< 2\pi$ and $-\infty < z_1 < \infty$, while in the case $N>4$ we have $0\leq \phi_{\ell}< 2\pi$ and $-\infty < z_k < \infty$. We have studied the physics of the rotating black string solutions by calculating their conserved quantities through the use of the Komar method \cite{Ka2}. This method gives divergent energy and angular momentum in both cases $N=4$ and $N>4$. We have therefore used a regularization method to obtain finite values of the energy and angular momentum. The regularization method used in this study is the relocalization, which is generated by adding a total derivative to the gravitational Lagrangian. From this method, the expressions for the energy and angular momentum have been acquired. It has also been confirmed that these results are consistent with those derived in \cite{Aa02,Sa12}. Finally, we have explored the entropy of the rotating black string solutions. We have also acquired the Smarr-type formula for the mass as a function of the entropy and angular momentum, i.e., $M(S, J)$. It has been verified that the first law of thermodynamics for the black strings is satisfied by the derived conserved and thermodynamic quantities \cite{Sa12}. Furthermore, we have analyzed the phases for the cases $N=4$ and $N>4$. It has been shown that in both cases the system can be thermodynamically stable, as demonstrated in Figs. 2 and 3. \vspace{0.3cm} {\centerline{\bf Appendix I: The symbols used in the calculations of conserved quantities}}\vspace{0.3cm} The indices ${ k, l, \cdots }$ are the coframe indices, and $\gamma$, $\delta$, $\cdots$ are the coordinate ones. The wedge product is represented by $\wedge$, and the interior product is given by $\xi \rfloor \Psi$.
The coframe ${\vartheta}^{i}$ is defined as ${ \vartheta}^{i}={e^i}_\mu dx^\mu$, and the frame $e_i$ is defined as ${ e_i={e_i}^\mu \partial_\mu}$, with ${ {e^i}_\mu}$ and ${ {e_i}^\mu} $ the covariant and contravariant components of the vielbein, respectively. The volume is expressed by $\eta:=\vartheta^{\hat{0}}\wedge \vartheta^{\hat{1}}\wedge \vartheta^{\hat{2}}\wedge\vartheta^{\hat{3}}$. In addition, we describe \[{ \eta}_i:=e_i \rfloor \eta = \ \frac{1}{3!} \ \epsilon_{i j k l} \ { \vartheta}^j \wedge { \vartheta}^k \wedge { \vartheta}^l,\] where $\epsilon_{ i j k l}$ is completely antisymmetric. \vspace{0.3cm} {\centerline{\bf Appendix II: Non-zero components of the Christoffel symbols of the second kind}} {\centerline{\bf and the Riemann curvature tensor}}\vspace{0.3cm} With Eq. (\ref{m2}), we find the non-zero components of the Christoffel symbols of the second kind and of the Riemann curvature tensor \begin{eqnarray*} && \Gamma^t{}_{t\; r}=-\Gamma^r{}_{r\; r}=\frac{h'}{2h},\qquad \qquad \Gamma^r{}_{t\; t}=\frac{hh'}{2}, \nonumber\\ && \Gamma^{r}{}_{\phi_1\; {\phi_1}}=\Gamma^{r}{}_{\phi_2\; {\phi_2}}=\cdots =\Gamma^{r}{}_{\phi_{N-\ell}\; {\phi_{N-\ell}}}=\Gamma^{r}{}_{z_1\; {z_1}}=\Gamma^{r}{}_{z_2\; {z_2}}=\cdots =\Gamma^{r}{}_{z_{N-\ell-2}\; {z_{N-\ell-2}}}=-r h,\nonumber\\ &&\Gamma^{\phi_1}{}_{r\; {\phi_1}}=\Gamma^{\phi_2}{}_{r\; {\phi_2}}=\cdots =\Gamma^{\phi_{N-\ell}}{}_{r\; {\phi_{N-\ell}}}=\Gamma^{z_1}{}_{r\; {z_1}}=\Gamma^{z_2}{}_{r\; {z_2}}=\cdots =\Gamma^{z_{N-\ell-2}}{}_{r\; {z_{N-\ell-2}}}=\frac{1}{r}.
\end{eqnarray*} \begin{eqnarray*} && R_{t\;r\;t\;r}=\frac{h''}{2}, \qquad R_{t\;\phi_1\;t\;\phi_1}= R_{t\;\phi_2\;t\;\phi_2}=\cdots = R_{t\;\phi_{N-\ell}\;t\;\phi_{N-\ell}}=R_{t\;z_1\;t\;z_1}= R_{t\;z_2\;t\;z_2}=\cdots = R_{t\;z_{N-\ell-2}\;t\;z_{N-\ell-2}}=\frac{r\;h h'}{2}, \nonumber\\ && R_{r\;\phi_1\;r\;\phi_1}= R_{r\;\phi_2\;r\;\phi_2}=\cdots = R_{r\;\phi_{N-\ell}\;r\;\phi_{N-\ell}}=R_{r\;z_1\;r\;z_1}= R_{r\;z_2\;r\;z_2}=\cdots = R_{r\;z_{N-\ell-2}\;r\;z_{N-\ell-2}}=-\frac{r h'}{2h},\nonumber\\ && R_{\phi_1\;\phi_2\;\phi_1\;\phi_2}= R_{\phi_1\;\phi_3\;\phi_1\;\phi_3}=\cdots = R_{\phi_1\;\phi_{N-\ell}\;\phi_1\;\phi_{N-\ell}}= R_{\phi_2\;\phi_3\;\phi_2\;\phi_3}=R_{\phi_2\;\phi_4\;\phi_2\;\phi_4}=\cdots = \nonumber\\ && R_{\phi_2\;\phi_{N-\ell}\;\phi_2\;\phi_{N-\ell}} =\cdots = R_{\phi_{N-\ell-1}\;\phi_{N-\ell}\;\phi_{N-\ell-1}\;\phi_{N-\ell}}= R_{z_1\;z_2\;z_1\;z_2}= R_{z_1\;z_3\;z_1\;z_3}=\cdots = R_{z_1\;z_{N-\ell-2}\;z_1\;z_{N-\ell-2}}=\nonumber\\ && R_{z_2\;z_{N-\ell-2}\;z_2\;z_{N-\ell-2}} =\cdots = R_{z_{N-\ell-3}\;z_{N-\ell-2}\;z_{N-\ell-3}\;z_{N-\ell-2}}=-r^2 h.\end{eqnarray*} \subsection*{Acknowledgments} This work was supported in part by the Egyptian Ministry of Scientific Research under project No. 24-2-12. Moreover, the work of KB was partially supported by the JSPS KAKENHI Grant Number JP 25800136 and Competitive Research Funds for Fukushima University Faculty (18RI009).
\section{Introduction} Dense neutrino gases in the early universe or emitted from core-collapse supernovae (SNe) represent unique cases in which to probe the effect of the neutrino-neutrino interactions on the flavor conversions. Indeed, in these environments the neutrino-neutrino interactions generate a large neutrino potential $\mu \sim \sqrt{2} G_F n_\nu$ that in some cases can exceed the ordinary matter term $\lambda= \sqrt{2} G_F n_e$ and the neutrino vacuum oscillation frequency $\omega= \Delta m^2/2E$. When this situation is encountered, the neutrino-neutrino potential dominates the flavor evolution, producing large self-induced flavor conversions (see~\cite{Duan:2010bg} for a review). A vivid activity on these effects in the context of SN neutrinos has flourished over the past decade~\cite{Duan:2005cp,Duan:2006an,Hannestad:2006nj,Fogli:2007bk}. Indeed, it has been realized that in the deepest SN regions self-induced effects can produce collective neutrino oscillations, leading to peculiar spectral features in the oscillated neutrino spectra, dubbed spectral swaps and splits~\cite{Raffelt:2007cb,Duan:2007bt,Dasgupta:2009mg,Friedland:2010sc,Dasgupta:2010cd}. The development of the self-induced flavor conversions is associated with \emph{instabilities} in the flavor space that are triggered by the interacting neutrinos. The first one to be noticed was the \emph{bimodal} instability, present even in a homogeneous and isotropic neutrino gas~\cite{Hannestad:2006nj}. In particular, it was shown that an ensemble initially composed of equal densities of $\nu_e$ and $\bar\nu_e$, in the presence of a dominant neutrino-neutrino interaction term, would exhibit in the inverted mass hierarchy ($\Delta m^2 <0$) large pair conversions of the type $\nu_e \bar\nu_e \leftrightarrow \nu_x \bar\nu_x$, even with a small mixing angle.
This behavior has been explained in terms of an unstable pendulum in flavor space, where the instability is associated with the tiny mixing angle~\cite{Hannestad:2006nj,Duan:2007fw}. Furthermore, it has been shown that introducing an anisotropy in the neutrino gas can dramatically change the previous solution. Indeed, in a non-isotropic neutrino ensemble, the neutrino-neutrino interaction term contains {\it multi-angle} effects, since the current-current nature of the low-energy weak interactions introduces an angle-dependent term $(1-{\bf v}_{\bf p} \cdot {\bf v}_{\bf q})$ between two interacting neutrino modes~\cite{Qian:1994wh,Duan:2006an}. In the case of a gas completely symmetric in the flavor content of $\nu$ and $\bar\nu$, even a small deviation from perfect isotropy is enough to produce a multi-angle {\it decoherence} leading to a flavor equilibrium among the different neutrino species in both mass hierarchies~\cite{Raffelt:2007yz}. Multi-angle effects have been extensively studied in the context of the flavor evolution of SN neutrinos~\cite{Mirizzi:2010uz}, whose emission is far from isotropic. It has been realized that in some cases they can destroy the collective behavior of the flavor evolution observed in an isotropic environment~\cite{Raffelt:2007yz,EstebanPretel:2007ec,Sawyer:2008zs}. Multi-angle effects can also lead to a trajectory-dependent matter term, which, if strong enough, suppresses the self-induced conversions~\cite{Chakraborty:2011nf,Chakraborty:2011gd,Saviano:2012yh,Sarikas:2011am}. In the context of SN neutrinos, an axially symmetric neutrino emission has often been assumed in order to integrate out the azimuthal angle in the multi-angle kernel.
However, it has been found that, lifting this assumption, neutrino-neutrino interactions can break the axial symmetry and lead to azimuthal-angle-dependent flavor conversions~\cite{Raffelt:2013rqa,Raffelt:2013isa,Duan:2013kba,Mirizzi:2013rla,Mirizzi:2013wda,Chakraborty:2014nma,Chakraborty:2014lsa}. The lesson gained from these situations is that self-interacting neutrinos can \emph{spontaneously break} the symmetries of the initial conditions, since small deviations from them can be dramatically amplified during the further flavor evolution. This insight has recently stimulated doubts about the validity of the solution of the SN neutrino equations of motion worked out in the so-called \emph{``bulb model''}~\cite{Duan:2006an,Fogli:2007bk,EstebanPretel:2007ec}. In this framework one assumes spherical symmetry about the center of the SN and axial symmetry about any radial direction. These two symmetries allow one to reduce the problem to a one-dimensional evolution along a radial direction. Remarkably, once the assumption of spherical symmetry is removed, it would be necessary to solve a challenging multi-dimensional problem to characterize the neutrino flavor evolution. In this context, in order to show how deviations from the spatial symmetries of a system affect the flavor evolution, a simple two-dimensional model has recently been proposed in~\cite{Duan:2014gfa}. Namely, monochromatic neutrinos streaming in a stationary way in two directions (``left'' $L$ and ``right'' $R$, respectively) from an infinite boundary plane at $z=0$, with periodic conditions on $x$ and translation invariance along the $y$ direction. Remarkably, there is a correspondence between the symmetries of the bulb model and those of this planar case. Indeed, the translational symmetry in the $x$ direction in the planar model corresponds to the spherical symmetry of the bulb model, and the $L$-$R$ symmetry is equivalent to the axial symmetry in the spherical case.
By means of a stability analysis of the linearized equations of motion, it has been shown in the planar model that if one perturbs the initial symmetries of the flavor content in both emission modes and along the boundary in the $x$ direction, then self-induced oscillations can spontaneously break both these spatial symmetries~\cite{Duan:2014gfa}. In~\cite{Mirizzi:2015fva} we have recently performed a numerical study of the flavor evolution for this case. We found that the initial small perturbations are amplified by the neutrino interactions, leading to non-trivial two-dimensional structures in the flavor content and lepton number of the neutrino ensemble, which exhibit large spatial fluctuations. The purpose of this paper is to develop a two-dimensional model that captures more closely some of the features of the SN environment. In particular, with respect to the planar model considered in~\cite{Duan:2014gfa,Mirizzi:2015fva}, we make the following improvements: \emph{(i)} $\nu$ emission from a ring mimicking the neutrino-sphere, \emph{(ii)} parameters inspired by the SN neutrino emissivity, \emph{(iii)} a declining neutrino density from the boundary, \emph{(iv)} multi-angle effects. We also assume that the self-induced flavor conversions develop without any hindrance caused by a large matter term. Perturbing the neutrino emission in the translational symmetry on the ring and in the emission directions, we find a spontaneous breaking of these symmetries in both the normal and inverted mass hierarchies. As a consequence, the flavor content and the lepton number of the neutrino ensemble acquire sizable variations along different lines of sight. These findings are presented as follows. In Sec.~II we describe the features of our two-dimensional model. We discuss the equations of motion that characterize the two-dimensional flavor evolution.
We show how it is possible to solve this problem by Fourier transforming these equations, obtaining a tower of ordinary differential equations for the different Fourier modes. In Sec.~III we present the numerical results of our study. We show how the breaking of the spatial symmetries produces direction-dependent variations in the flavor content of the ensemble. Finally, in Sec.~IV we discuss future developments and conclude. \section{Two-dimensional model} \subsection{Equations of motion} Characterizing the SN neutrino flavor dynamics amounts to following the spatial evolution of the neutrino fluxes. For a stationary neutrino emission, the Equations of Motion (EoMs) of the $\nu$ space-dependent occupation numbers ${\varrho} ({\bf r}, {\bf p})$ with momentum ${\bf p}$ at position ${\bf r}$ are~\cite{Sigl:1992fn,Strack:2005ux} \begin{eqnarray} && {\bf v} \cdot \nabla_{\bf r}\, {\varrho} = - i [{\sf\Omega}, \varrho] \,\ , \label{eq:eom} \end{eqnarray} where we indicate with sans-serif vectors in flavor space, while for those in real space we use boldface. On the left-hand side of Eq.~(\ref{eq:eom}) there is the Liouville operator, representing the drift term proportional to the neutrino velocity ${\bf v}$ due to particle free streaming. Note that we neglect external forces and an explicit time dependence of the occupation numbers. On the right-hand side of Eq.~(\ref{eq:eom}), the matrix ${\sf \Omega}$ is the full Hamiltonian, which reads \begin{equation} {\sf \Omega} = \frac{{\sf M}^2}{2 E} + \sqrt{2} G_F \left[{\sf N}_l + \int_{-\infty}^{+\infty} d E^{\prime} {E^{\prime}}^2 \int \frac{d {\bf v}^\prime}{(2 \pi)^3} \varrho^{\prime} (1- {\bf v} \cdot {\bf v}^{\prime}) \right] \,\ , \end{equation} where ${\sf M}^2$ is the mass-squared matrix, responsible for the vacuum oscillations. The ordinary matter effects on the neutrino flavor conversions are accounted for by the matrix of charged lepton densities ${\sf N}_l$.
Finally, the neutrino-neutrino interaction potential is represented by the last term on the right-hand side, where the integral in ${d {\bf v}^\prime}$ is over the unit sphere and the occupation numbers $\varrho^{\prime}$ depend on ${\bf r}, E^{\prime}, {\bf v}^\prime$. Note that we use negative $E$ to denote anti-neutrinos. In order to show the effect of the spontaneous breaking of spatial symmetries, we consider neutrinos emitted in a plane from a ring with radius $r=R$. We then have a two-dimensional model, for which it is natural to use a system of polar coordinates to describe the neutrino position vector ${\bf r}=(r, \phi)$, where $r$ is the radius and $\phi \in [0;2 \pi]$ is the polar angle, as shown in Fig.~\ref{bulb}. The neutrino velocity can be decomposed into the radial ($v_r$) and transverse ($v_t$) components, defined as ${\bf v} = (v_r, v_{\pm}) = (\cos \theta_r, \pm \sin \theta_r)$, where $\theta_r \in [0, \pi/2]$ is the angle between the radial direction and that of the neutrino propagation (see, e.g.,~\cite{Buras:2005rp}), and the $\pm$ signs indicate a transverse velocity in the clockwise ($v_+$) or anti-clockwise ($v_-$) direction with respect to the radial one, respectively. We mention that in the recent multi-angle study~\cite{Abbar:2015mca}, where neutrinos emitted from a plane were considered, the range of the emission angles was $\theta \in [0, \pi]$. With this choice it is not necessary to distinguish clockwise and anti-clockwise modes. However, in our work we prefer to take $\theta_r \in [0, \pi/2]$, in order to start with a situation symmetric in the two emission directions $\pm$ and to show the effect of the breaking of this discrete symmetry. Note that the local angle $\theta_r$ depends on the radius $r$. In order to avoid this dependence, in the literature it is customary to label the neutrino modes in terms of their emission angle $\vartheta_{R} \in [0, \pi/2]$ at the boundary $r=R$.
The two angles $\theta_r$ and $\vartheta_{R}$ are related by~\cite{EstebanPretel:2007ec} \begin{equation} R \sin \vartheta_R = r \sin \theta_r \,\ . \label{eq:geom} \end{equation} Furthermore, we introduce the angular variable $u=\sin \vartheta_R$, $u \in[0,1]$. With this choice the components of the neutrino velocity are~\cite{Raffelt:2013rqa} \begin{eqnarray} v_r & = & \cos \theta_r = \sqrt{1-\frac{R^2}{r^2} u^2} \,\ , \nonumber \\ v_t \equiv v_{\pm} &=& \pm \sin \theta_r = \pm \frac{R}{r} u \,\ . \end{eqnarray} \begin{figure}[!t]\centering \includegraphics[angle=0,width=1.\columnwidth]{bulb.eps} \caption{Two-dimensional model for the neutrino beams emitted from a ring with radius $r=R$. \label{bulb}} \end{figure} We assume that the neutrino distributions at $r=R$ in the energy $E$, in the angular variable $u$, and in the $\pm$ directions can be factorized as \begin{equation} F_{\pm}(E, u) = F_{\nu}(E) \times F_{\nu}(u) \times F_{\pm} \,\ . \end{equation} We assume the neutrino angular distributions to be flat in $u$ and equal for all flavors, i.e. $F_{\nu}(u)=1$. Concerning the distributions in the clockwise ($+$) and anti-clockwise ($-$) directions, we assume that these are given by \begin{equation} F_{\pm} = \frac{(1+\beta_{\pm})}{2+\beta_+ +\beta_-} \,\ , \label{eq:anguldistr} \end{equation} where the quantities $\beta_{\pm} \ll 1$ are introduced to slightly perturb the $\pm$ symmetry of a given $u$ mode at the boundary. The neutrino number flux $F_{\nu}(E)$ at the ring is given by \begin{equation} F_{\nu}(E) = \frac{1}{4 \pi R^2} \frac{L_{\nu}}{\langle E_\nu \rangle} f_{\nu}(E) \,\ , \end{equation} where we have normalized the neutrino emission on a sphere with radius $R$.
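As a minimal numerical sanity check of this parametrization (with illustrative values $R=10$~km, $r=40$~km, $u=0.6$, and small $\beta_\pm$, none of which are tied to the results of the paper), one can verify that the velocity is a unit vector, that Eq.~(\ref{eq:geom}) holds, and that the occupation fractions $F_\pm$ sum to unity:

```python
import math

R = 10.0          # ring (boundary) radius, e.g. in km -- illustrative
r = 40.0          # radial position, r >= R             -- illustrative
u = 0.6           # u = sin(theta_R) in [0, 1]          -- illustrative

# Velocity components in the u parametrization
v_r = math.sqrt(1.0 - (R/r)**2 * u**2)
v_t = (R/r) * u   # magnitude of the transverse component

# Free-streaming neutrinos have |v| = 1
assert abs(v_r**2 + v_t**2 - 1.0) < 1e-12

# Consistency with R sin(theta_R) = r sin(theta_r)
theta_r = math.asin(v_t)
assert abs(R*u - r*math.sin(theta_r)) < 1e-9

# The occupation fractions F_± sum to unity for any small beta_±
beta_p, beta_m = 1e-3, -5e-4
F_p = (1 + beta_p)/(2 + beta_p + beta_m)
F_m = (1 + beta_m)/(2 + beta_p + beta_m)
assert abs(F_p + F_m - 1.0) < 1e-12
print("geometry checks passed")
```

The normalization of $F_\pm$ guarantees that the perturbation $\beta_\pm$ redistributes neutrinos between the clockwise and anti-clockwise modes without changing the total emission.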
\subsection{Two-flavor case} In the following we will consider only a two-flavor system $(\nu_e,\nu_x)$, where $x=\mu, \tau$, and we will describe the neutrino energy modes in terms of the vacuum oscillation frequency $\omega= \Delta m^2/2E_0$, where $\Delta m^2 = m_2^2 - m_1^2$ is the mass-squared difference. We have assumed a monochromatic neutrino emission with $E=E_0$. In the two-flavor case the density matrices are projected onto the Pauli matrices $\sigma$, obtaining the polarization vectors in the usual way~\cite{EstebanPretel:2007ec}, where we normalize the (anti)neutrino polarization vectors to the difference of the anti-neutrino fluxes at the boundary. The Liouville operator on the left-hand side of the EoMs [Eq.~(\ref{eq:eom})] assumes the form \begin{equation} {\bf v} \cdot \nabla_{\bf r} = v_r \frac{d}{dr} + \frac{v_{\pm}}{r} \frac{d}{d\phi} \,\ , \end{equation} so that the EoMs read (see also~\cite{Raffelt:2013rqa}) \begin{eqnarray} \frac{d}{dr} {\sf P}_{\pm, u} &=& - \frac{v_{\pm}}{v_r r}\frac{d}{d\phi}{\sf P}_{\pm, u} \nonumber \\ &+& \left[\frac{\omega}{v_r} {\sf B} + \Omega^{\nu\nu}_{\pm} \right] \times {\sf P}_{\pm, u} \,\ , \label{eq:eompol} \end{eqnarray} where we indicate with sans-serif the vectors in flavor space. In particular, the unit vector ${\sf B}=({\sf B}^1, {\sf B}^2, {\sf B}^3)$ points in the mass-eigenstate direction in flavor space, such that ${\sf B}\cdot{\sf e}_3=-\cos \vartheta$, where $\vartheta$ is the vacuum mixing angle. For simplicity we neglect a possible matter effect, assuming that its only role would be to reduce the effective in-medium mixing angle, $\vartheta \ll 1$~\cite{Hannestad:2006nj}.
The neutrino-neutrino interaction term has a multi-angle kernel $(1-{\bf v}_{\bf p} \cdot {\bf v}_{\bf q})$, which takes the form \begin{eqnarray} & &\frac{1}{v_r}\int d{\theta^\prime}_r [1-v_r {v_r^\prime} -v_t {v_t^\prime}] {\sf D^\prime} \nonumber \\ &=&\frac{R}{r} \int d{\vartheta_R^\prime} \cos{\vartheta_R^\prime} \left[\frac{1}{v_r {v_r^\prime}} -1 - \frac{v_t {v_t^\prime}}{v_r {v_r^\prime}} \right] {\sf D^\prime} \label{eq:multian} \end{eqnarray} where ${\sf D}$ is the difference between the neutrino and anti-neutrino polarization vectors of a given mode, and where, from Eq.~(\ref{eq:geom}), we have used \begin{equation} d \theta_r =\frac{R}{r} \frac{\cos{\vartheta}_R}{\cos \theta_r} d{\vartheta}_R =\frac{R}{r} \frac{\cos{\vartheta}_R}{v_r} d{\vartheta}_R = \frac{R}{r} \frac{du}{v_r} \,\ . \end{equation} In the large-distance limit $(r \gg R)$ one can expand Eq.~(\ref{eq:multian}), obtaining \begin{equation} \frac{1}{2} \frac{R}{r}\int d{u^\prime} [v_t -{v_t^\prime}]^2 {\sf D^\prime}\,\ . \end{equation} We note that in the case we are studying the self-interaction term declines as $r^{-3}$, while in the SN case it declines as $r^{-4}$. Considering the contributions of the clockwise ($+$) and anti-clockwise ($-$) modes in the previous equation, one gets \begin{eqnarray} {v_t}^2 {\sf D^\prime }&=& \left(\frac{R}{r} \right)^2 u^2 ({\sf D}_{+, u^\prime} +{\sf D}_{-, u^\prime}) \,\ \nonumber \\ v_t {v_t^\prime} {\sf D^\prime} &=& \mp \left(\frac{R}{r} \right)^2 {u u^\prime} ({\sf D}_{+, u^\prime} -{\sf D}_{-, u^\prime}) \,\ .
\end{eqnarray} Then, the neutrino self-interaction term in the large-distance limit $r \gg R$ assumes the form \begin{eqnarray} \Omega^{\nu\nu}_{\pm} &=& \mu_r \int_{0}^{1} d u^\prime \large[(u^2 + {u^\prime}^2) \frac{({\sf D}_{+, u^\prime} +{\sf D}_{-, u^\prime})}{2} \nonumber \\ &\mp & {u u^\prime} ({\sf D}_{+, u^\prime} -{\sf D}_{-, u^\prime}) \large] \label{eq:mularge} \end{eqnarray} where the $\mp$ refers to the $\pm$ modes, respectively, and \begin{eqnarray} \mu_r &=& [F_{{\bar\nu}_e}(R)- F_{{\bar\nu}_x}(R)] \frac{R^3}{2 r^3} \nonumber \\ &=& {3.5 \times 10^{5}} \,\ \textrm{km}^{-1} \left(\frac{R}{r} \right)^3 \left(\frac{L_{\bar\nu_e}}{\langle E_{\bar\nu_e} \rangle} -\frac{L_{\bar\nu_x}}{\langle E_{\bar\nu_x} \rangle} \right) \nonumber \\ &\times & \frac{15 \,\ \textrm{MeV}}{10^{51} \,\ \textrm{MeV}/\textrm{s}}\left(\frac{10 \,\ \textrm{km}}{R} \right)^2 \,\ . \label{eq:mupotent} \end{eqnarray} An equation analogous to Eq.~(\ref{eq:multian}) can be written for the anti-neutrinos. One can define a conserved ``lepton current'' ${\textrm L}^\mu =({\textrm L}_0, {\bf L})$ whose components are (see also~\cite{Duan:2008fd}) \begin{eqnarray} {\textrm L}_0 &=& \int_{0}^{1} d u^\prime \frac{1}{2}({\sf D}_{+, u^\prime} + {\sf D}_{-, u^\prime}) \cdot{\sf B} \,\ , \label{eq:lepton0} \\ {\textrm L}_r &=& \int_{0}^{1} d u^\prime {v_r} \frac{1}{2}({\sf D}_{+, u^\prime} + {\sf D}_{-, u^\prime}) \cdot{\sf B} \,\ , \\ {\textrm L}_t &=& \int_{0}^{1} d u^\prime |{v_t}| \frac{1}{2}({\sf D}_{+, u^\prime} - {\sf D}_{-, u^\prime}) \cdot{\sf B} \,\ , \label{eq:lepton} \end{eqnarray} where ${\bf L}$ is the two-dimensional vector $({\textrm L}_r,{\textrm L}_t)$, and ${\sf D}_{\pm, u} \cdot{\sf B} \simeq {\sf D}^3_{\pm, u}$.
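A quick numerical check of the large-distance expansion of the multi-angle kernel (a sketch with illustrative mode values, not taken from the paper): the exact angular factor of Eq.~(\ref{eq:multian}) approaches $\frac{1}{2}\frac{R}{r}(v_t-v_t^\prime)^2$ for $r\gg R$, and the resulting term scales as $r^{-3}$, as stated above:

```python
import math

R = 10.0                   # ring radius (illustrative, in km)
u, up = 0.7, 0.3           # angular labels u = sin(theta_R) of two modes
s, sp_ = +1.0, -1.0        # clockwise / anti-clockwise signs of the modes

def kernel_exact(r):
    # Exact angular factor of the multi-angle kernel, per unit du'
    vr  = math.sqrt(1 - (R/r)**2 * u**2)
    vrp = math.sqrt(1 - (R/r)**2 * up**2)
    vt, vtp = s*(R/r)*u, sp_*(R/r)*up
    return (R/r) * (1.0/(vr*vrp) - 1.0 - vt*vtp/(vr*vrp))

def kernel_approx(r):
    # Large-distance expansion: (1/2)(R/r)(v_t - v_t')^2
    vt, vtp = s*(R/r)*u, sp_*(R/r)*up
    return 0.5 * (R/r) * (vt - vtp)**2

# The expansion reproduces the exact kernel for r >> R ...
for r in (100.0, 1000.0):
    print(r, kernel_exact(r)/kernel_approx(r))   # ratio approaches 1

# ... and the term falls off as r^-3: doubling r should give a ratio of 2^3
print(kernel_approx(1000.0)/kernel_approx(2000.0))
```

The ratio of the exact to the approximate kernel deviates from unity only by corrections of order $(R/r)^2$, and the factor-of-eight scaling confirms the $r^{-3}$ decline of the self-interaction strength in this two-dimensional geometry.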
From Eq.~(\ref{eq:eom}) one realizes that the lepton current satisfies a continuity equation \begin{equation} \partial_{0} {\textrm L}_0 + \nabla_{\bf r} \cdot {\bf L} =\nabla_{\bf r} \cdot {\bf L}=0 \,\ , \label{eq:leptcons} \end{equation} where the first equality follows since $\partial_0 {\textrm L}_0 =0$, as we have assumed a stationary solution. Eq.~(\ref{eq:leptcons}) generalizes the lepton-number conservation law of the one-dimensional case~\cite{Hannestad:2006nj}. \subsection{Equations of motion in Fourier space} The differential operators in Eq.~(\ref{eq:eompol}) imply that the flavor evolution is characterized by a partial differential equation problem in $r$ and $\phi$. In~\cite{Mangano:2014zda,Mirizzi:2015fva} (see also~\cite{Duan:2014gfa}) it has been shown how it is possible to solve such a problem by Fourier transforming the equations of motion with respect to the coordinate along which a perturbation is introduced. We assume a perturbation of the polarization vectors at $r=R$ with period $2 \pi$ in $\phi$, so that \begin{equation} {\sf P}_{\pm,u}(R, \phi) = {\sf P}^{0}_{\pm} +2 {\sf e }_z \delta \cos\phi \,\ , \label{eq:polarseed} \end{equation} where ${\sf P}^{0}_{\pm}$ is the unperturbed value of the polarization vector, and $\delta \ll 1$ is the amplitude of the perturbation. Up to the small difference in the emission modes $\pm$ [see Eq.~(\ref{eq:anguldistr})], the initial values of the polarization vectors are \begin{eqnarray} {\sf P}^{0}_{\pm} (\nu) & \simeq & (1 + \alpha) {\sf e}_z \,\ , \\ {{{\sf P}}}^{0}_{\pm} (\bar\nu) & \simeq & {\sf e}_z \,\ , \end{eqnarray} where the initial flavor asymmetry is given by \begin{equation} \alpha = \frac{F_{\nu_e} - F_{\bar{\nu}_e}}{F_{\bar{\nu}_e} - F_{\bar{\nu}_x}} \,\ . \label{eq:alpha} \end{equation} The functions ${\sf P}_{\pm, u}(r, \phi)$ are periodic in $\phi$ with period $2\pi$.
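Two quick numerical checks of the quantities just defined (a sketch; the grid size and the perturbation amplitude below are illustrative). First, the seed of Eq.~(\ref{eq:polarseed}) populates only the harmonics $n=0,\pm 1$, with amplitude $\delta$ in the $n=\pm 1$ modes, consistent with the reality condition ${\sf P}^\ast_{n}={\sf P}_{-n}$. Second, with the number fluxes $F\propto L/\langle E\rangle$ and the benchmark emission parameters adopted in Sec.~III, Eq.~(\ref{eq:alpha}) gives $\alpha\simeq 1.33$, matching the quoted $\alpha=1.34$ up to the rounding of the inputs:

```python
import numpy as np

# --- Harmonic content of the seed P(R, phi) = P0 + 2 delta cos(phi) ---
P0, delta = 1.0, 1e-3            # illustrative values, delta << 1
Nphi = 64
phi = 2*np.pi*np.arange(Nphi)/Nphi
P3 = P0 + 2*delta*np.cos(phi)    # 3rd (e_z) component of the seed

# P_n = (1/2 pi) \int_0^{2 pi} P(phi) exp(-i n phi) dphi
Pn = np.fft.fft(P3)/Nphi
assert abs(Pn[0] - P0) < 1e-12        # n = 0: unperturbed value
assert abs(Pn[1] - delta) < 1e-12     # n = +1: amplitude delta
assert abs(Pn[-1] - delta) < 1e-12    # n = -1: reality, P_{-n} = P_n^*
assert np.all(np.abs(Pn[2:-1]) < 1e-12)  # all other harmonics vanish

# --- Flavor asymmetry alpha for the Sec. III benchmark parameters ---
E_nue, E_anue, E_nux = 12.0, 15.0, 18.0   # mean energies [MeV]
L_nue, L_anue, L_nux = 2.40, 2.0, 1.50    # luminosities [10^51 erg/s]
# Number fluxes F ~ L/<E>; the common 1/(4 pi R^2) factor cancels in alpha
alpha = (L_nue/E_nue - L_anue/E_anue)/(L_anue/E_anue - L_nux/E_nux)
print(round(alpha, 2))   # alpha = 4/3, i.e. 1.33 (quoted as 1.34)
```

The first check shows why a single-harmonic seed is a convenient initial condition: only the $n=0,\pm1$ Fourier modes are excited at the boundary, and any power appearing in higher harmonics during the evolution is generated dynamically by the neutrino-neutrino interactions.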
Their Fourier transform is then \begin{equation} {\sf P}_{\pm, u,n}(r) = \frac{1}{2 \pi} \int_{0}^{2\pi} {\sf P}_{\pm, u}(r, \phi) e^{-i n \phi} d \phi \,\ , \end{equation} so that \begin{equation} {\sf P}_{\pm, u}(r, \phi) =\sum_{n={-\infty}}^{+\infty} {\sf P}_{\pm, u,n}(r) e^{+i n \phi} \,\ . \label{eq:invfour} \end{equation} The EoMs for the Fourier modes at large $r \gg R$ assume the form \begin{eqnarray} & &\frac{d}{dr} {{{\sf P}}_{\pm, u,n}}(r) = \mp i n u \,\ \frac{R}{r^2} {{{\sf P}}_{\pm,u,n}} \nonumber \\ &+& \frac{\omega}{v_r} {\sf B}\times {{{\sf P}}_{\pm, u,n}} \nonumber \\ &+& \mu_r \sum_{j=-\infty}^{+\infty} \int_{0}^{1} d u^{\prime} [(u^2 + {u^\prime}^2)\frac{({{{\sf D}}_{+, u^\prime,{n-j}}}+ {{{\sf D}}_{-, u^\prime,{n-j}}})}{2} \nonumber \\ & & \mp {u u^\prime} ({{{\sf D}}_{+, u^\prime,{n-j}}} - {{{\sf D}}_{-, u^\prime,{n-j}}}) ] \times {{{\sf P}}_{\pm, u,j}} \,\ . \label{eq:eompertradialb} \end{eqnarray} We stress that it is enough to follow the evolution for positive modes $n \geq 0$, since the ${\sf P}_{\pm, u}(r, \phi)$ are real functions and therefore \begin{equation} {{\sf P}^{\ast}_{\pm, u,n}}= {{\sf P}}_{\pm, u,-n} \,\ . \end{equation} Once the evolution of the harmonic modes is obtained from Eq.~(\ref{eq:eompertradialb}), the polarization vector in configuration space is recovered by the inverse Fourier transform [Eq.~(\ref{eq:invfour})]. \section{Numerical examples} We present the results of the flavor evolution in the two-dimensional model described above. To calculate the $\nu$-$\nu$ interaction strength in Eq.~(\ref{eq:mupotent}) and the flavor asymmetry parameter in Eq.~(\ref{eq:alpha}) we use benchmark values often used in previous studies of self-induced neutrino oscillations (see, e.g.~\cite{Mirizzi:2010uz}), i.e.
we take as average energies \begin{equation} (\langle E_{\nu_e} \rangle, \langle E_{{\bar\nu}_e} \rangle, \langle E_{\nu_x} \rangle) = (12, 15, 18) \,\ \textrm{MeV} \,\ , \end{equation} while for the neutrino luminosities (in units of $10^{51}$~erg/s) we assume \begin{equation} L_{\nu_e}=2.40 \,\ \,\ , \,\ \,\ L_{\bar\nu_e}=2.0 \,\ \,\ , \,\ \,\ L_{\nu_x}=1.50 \,\ . \end{equation} These values are typical of the early-time SN accretion phase, and correspond to an asymmetry parameter $\alpha=1.34$. As specified before, we work in a single-energy scheme, where we take as representative vacuum oscillation frequency the one corresponding to the average energy of the $\nu$ ensemble with the emissivity parameters chosen above (see~\cite{Mirizzi:2010uz}). Namely, we take $\omega= 0.68$~km$^{-1}$. Concerning the neutrino oscillation parameters, we choose a small mixing angle $\vartheta=10^{-2}$. Moreover, we assume $N_u=100$ modes for the angular variable $u \in [0;1]$ in order to reach numerical convergence of the results and to avoid spurious instabilities due to too few angular modes. \begin{figure}[!t]\centering \includegraphics[angle=0,width=1.\columnwidth]{pznht_low.eps} \vspace{2.cm} \includegraphics[angle=0,width=1.\columnwidth]{pziht_low.eps} \vspace{-2.cm} \caption{Two-dimensional evolution of the third component $P_3$ of the ${\bar\nu}$ polarization vector in the $r$-$\phi$ plane, and its map on the bottom plane, breaking only the $\pm$ symmetry. The upper plot refers to NH, while the lower panel is for IH. \label{pzt}} \end{figure} \begin{figure}[!t]\centering \includegraphics[angle=0,width=1.\columnwidth]{pznhnot_low.eps} \vspace{2.cm} \includegraphics[angle=0,width=1.\columnwidth]{pzihnot_low.eps} \vspace{-2.cm} \caption{Two-dimensional evolution of the third component $P_3$ of the ${\bar\nu}$ polarization vector in the $r$-$\phi$ plane, and its map on the bottom plane, breaking the azimuthal invariance for the $\pm$ modes and the translational symmetry on the ring.
The upper plot refers to NH, while the lower panel is for IH. \label{pznot}} \end{figure} It is known that when one enforces the $\pm$ symmetry (taking $\beta_+=\beta_-=0$ in Eq.~(\ref{eq:anguldistr})) and the translational symmetry on the ring (taking $\delta=0$ in Eq.~(\ref{eq:polarseed})), the ensemble is stable in normal mass hierarchy (NH, $\Delta m^2 >0$), while in inverted mass hierarchy (IH, $\Delta m^2 <0$) it exhibits large bimodal flavor changes in the form of \emph{pair conversions} $\nu_e \bar\nu_e \to \nu_x\bar\nu_x$~\cite{Hannestad:2006nj}. If we perturb the $\pm$ symmetry taking small seeds $\beta_+ = -\beta_-$ in the distributions of Eq.~(\ref{eq:anguldistr}), the system exhibits the analog of the so-called \emph{multi-azimuthal-angle} (MAA) instability of the bulb model~\cite{Raffelt:2013rqa,Mirizzi:2013rla,Mirizzi:2013wda}. The result of the flavor evolution is shown in Fig.~\ref{pzt}, which represents the behavior of the third component of the ($u$-integrated) anti-neutrino polarization vector ${\sf P}_3 = 1/2 ({\sf P_+}+{\sf P_-})$ in the $(r, \phi)$ plane for NH (upper panel) and IH (lower panel). In these numerical examples we have chosen $\beta_+= 10^{-2}$. The most striking effect of the MAA instability is that now also NH exhibits flavor conversions at $r \gtrsim 60$~km. The choice of the initial seed $\beta_{\pm}$ determines the onset radius of the flavor conversions: the larger the seed, the earlier flavor conversions start. In IH (lower panel) flavor conversions start, as in the $\pm$ symmetric case, at $r \gtrsim 50$~km, and MAA has a minor impact on the flavor evolution. From the figure we realize that the behavior of the flavor conversions is uniform in the $\phi$ variable, since the translational symmetry has remained unbroken. Indeed, in this case we have solved the EoMs [Eq.~(\ref{eq:eompertradialb})] only for the $n=0$ Fourier mode.
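The azimuthal Fourier machinery used above can be illustrated with a short numerical sketch. Using numpy conventions, it checks that the $\cos\phi$ seed of Eq.~(\ref{eq:polarseed}) excites only the $n=0$ and $|n|=1$ harmonics, and that the reality condition ${\sf P}^{\ast}_{n}={\sf P}_{-n}$ holds, so that only $n \geq 0$ modes need to be evolved. The grid size is arbitrary.

```python
import numpy as np

# Sample the seeded z-component of Eq. (polarseed) on the ring and take its
# azimuthal Fourier modes, P_n = (1/2pi) * integral P(phi) e^{-i n phi} dphi.
nphi = 256
phi = np.linspace(0.0, 2.0 * np.pi, nphi, endpoint=False)
delta = 3e-3                      # seed amplitude used in the text
P3 = 1.0 + 2.0 * delta * np.cos(phi)

P_n = np.fft.fft(P3) / nphi       # discrete approximation of the modes

# only n = 0 (the mean) and |n| = 1 are excited by the cos(phi) seed
assert abs(abs(P_n[0]) - 1.0) < 1e-12
assert abs(abs(P_n[1]) - delta) < 1e-12

# reality condition: P*_n = P_{-n}, so following n >= 0 modes suffices
assert np.allclose(np.conj(P_n[1]), P_n[-1])
```

In the full calculation the nonlinear $\nu$-$\nu$ term couples different $n$, which is how power spreads from the seeded harmonic to the others.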
\begin{figure}[!t]\centering \includegraphics[angle=0,width=0.8\columnwidth]{leptonhcart_low.eps} \vspace{2.cm} \includegraphics[angle=0,width=0.8\columnwidth]{leptoihcart_low.eps} \vspace{-2.cm} \caption{Component ${\textrm L}_r$ of the vector lepton number ${\bf L}$ in cartesian coordinates in NH (upper panel) and IH (lower panel), respectively. \label{leptocart}} \end{figure} The next step is to perturb also the translational symmetry on the ring, assuming a seed $\delta$ in the longitudinal distribution of the polarization vectors on the boundary [see Eq.~(\ref{eq:polarseed})]. In this case we consider the evolution of the first $N=100$ Fourier modes in Eq.~(\ref{eq:eompertradialb}). In this way we are sensitive to variations occurring at an angular scale $\Delta \phi \gtrsim 3^{\circ}$. Results are shown in Fig.~\ref{pznot} in the same format as the previous figure. To break the $\phi$-symmetry we used the seed $\delta = 3 \times 10^{-3}$. In both the NH (upper panel) and IH (lower panel) cases flavor conversions start as in the translationally invariant case, i.e. the planes of common oscillation phase are flat in the $\phi$ direction. However, this behavior is not stable. In the NH case, around $r \simeq 100$~km the azimuthal uniformity is lost: the ${\sf P}_3$ component is no longer flat in $\phi$, and it starts to acquire notable variations ($\sim 20~\%$) at different longitudes. In IH, after flavor conversions develop, the translational symmetry is also broken at $r \gtrsim 100$~km, with variations up to $30~\%$ at different $\phi$. In Fig.~\ref{leptocart} we represent the component ${\textrm L}_r$ of the vector lepton number ${\bf L}$ [Eq.~(\ref{eq:lepton})] in cartesian coordinates \begin{eqnarray} x &=& r \cos \phi \,\ , \nonumber \\ y &=& r \sin \phi \,\ .
\end{eqnarray} We realize that when the spherical symmetry is broken, the lepton number acquires significant variations in different directions at a given $r$ with respect to the initial uniform value ${\textrm L}_r=\alpha=1.34$. In particular, one finds $\sim 20~\%$ variations. \begin{figure}[!t]\centering \includegraphics[angle=0,width=1.\columnwidth]{normSNnh.eps} \vspace{2.5cm} \includegraphics[angle=0,width=1.\columnwidth]{normSNih.eps} \vspace{-3.cm} \caption{Contour plots of the first 100 Fourier modes $|{\sf P}_{n}|$ (in logarithmic scale) in the plane $n$-$r$ in NH (upper panel) and IH (lower panel) respectively. \label{normcart}} \end{figure} In order to better clarify this flavor dynamics, in Fig.~\ref{normcart} we show a contour plot representing the growth of the different Fourier modes $|{\sf P}_n|$ (in logarithmic scale) in the plane $n$-$r$ for NH (upper panel) and IH (lower panel). We consider the evolution of the first $N=100$ modes. We realize that the breaking of the translational symmetry at $r \simeq 60$~km in NH corresponds to the rapid excitation of the $n >0$ harmonics, which reach values $|{\sf P}_n| \lesssim 10^{-2}$. Instead, in IH for $r \gtrsim 50$~km the modes start to get excited and can grow to $|{\sf P}_n| \gtrsim 10^{-1.5}$ at $r \gtrsim 150$~km. This explains why the effect of the breaking of the translational symmetry is more pronounced in IH than in NH. \section{Conclusions} We have considered a simple two-dimensional toy model, namely neutrino beams emitted from a ring, to point out the effect of spontaneous breaking of axial and spherical symmetries in the self-induced flavor conversions of SN neutrinos. We found that if we slightly perturb the spatial symmetries on the boundary, these perturbation seeds are dramatically amplified, altering the flavor conversions found in the symmetric model. Therefore, the flavor content of the self-interacting SN neutrinos would acquire significant direction-dependent variations.
These results are qualitatively similar to what we found in the planar model studied in~\cite{Mirizzi:2010uz}. Our findings suggest that the characterization of the flavor conversions obtained before should be critically reconsidered, including these spontaneous symmetry breaking effects. In order to have a realistic characterization of the possible SN neutrino spectra, our simple toy model should be improved in several respects. In particular, one should extend this model to a realistic three-dimensional spherical case. In this case one would have the possibility to break the spherical symmetry in both the longitudinal and latitudinal directions. Moreover, in order to obtain our numerical solution we have considered $N=100$ Fourier modes. In this case we have not found flavor conversions at lower radii than in the spherically symmetric case. However, in~\cite{Duan:2014gfa} it has been shown that harmonics with sufficiently high $n$ could become unstable also at low radii. Increasing $N$ to 500, we have not found any sizable change in the onset of the flavor changes or in the subsequent flavor evolution in the non-linear regime. However, it remains to be seen whether low-radii effects could occur with a much higher number of harmonics. In this regard, a stability analysis performed along the lines of~\cite{Chakraborty:2015tfa} would be useful. Continuous energy spectra should also be taken into account to understand how the spectral splitting features found in the bulb model would be modified in this case. The role of matter effects, which would suppress self-induced flavor conversions during the accretion phase, should also be investigated. The final goal would be to study the self-induced neutrino flavor conversions in realistic multi-dimensional supernova models accounting for largely aspherical neutrino emission and matter profiles.
This objective is particularly timely, since in recent years SN model simulations have experienced several breakthroughs. After 1D~\cite{Fischer:2009af} and 2D~\cite{Buras:2005rp} models, the forefront has reached 3D SN simulations~\cite{Wongwathanarat:2014yda,Lentz}. Therefore, it seems the perfect juncture to connect realistic SN simulations with nonlinear neutrino oscillations. This open issue makes further dedicated studies compulsory in order to fully clarify the fascinating behavior of the interacting neutrino field. \section*{Acknowledgements} The author warmly thanks Pasquale Serpico for useful comments on this manuscript. This work is supported by the Italian Ministero dell'Istruzione, Universit\`a e Ricerca (MIUR) and Istituto Nazionale di Fisica Nucleare (INFN) through the ``Theoretical Astroparticle Physics'' projects.
\section{Introduction} Nowadays it has become customary when speaking about VLBI astrometry to implicitly assume differential astrometry \citep[see, for instance,][]{r:honma_redi14,r:rio20}. Differential VLBI astrometry is based on analysis of fringe phase differences between a target and a calibrator located within several degrees of each other. This approach allows one to estimate a position offset between a target and a calibrator with a precision of 30--50~$\mu$as. This high precision becomes feasible because the contribution of path delay in the atmosphere is diluted by a factor equal to the target--calibrator separation expressed in radians. At frequencies below approximately 5~GHz the contribution of the ionosphere usually dominates; at higher frequencies its contribution is less than the contribution of the neutral atmosphere. Since the number of potential calibrators grows with a decrease of frequency, observations at low frequencies of 1--2~GHz are usually made with shorter target--calibrator arcs. This better compensates for the residual ionospheric contribution, which is larger at low frequencies. Therefore, the accuracy of differential astrometry is approximately the same within the frequency range 1--43~GHz. I should stress a fact that is often omitted in the literature: differential astrometry alone cannot provide the position of a target. For some applications, such as parallax determination, measurement of proper motion, or estimation of the orbital motion of a galactic object, knowledge of a precise position is not required. But there are applications that do require knowledge of precise positions. A source position can be determined by observing either a target or a calibrator with the absolute astrometry method.
In the latter case, the position of the target can be derived from the position of the calibrator and the target/calibrator position offset from differential VLBI observations under the assumptions that a)~the position of the calibrator is stable between epochs of observations; b)~the position offset of the calibrator at the frequency of differential observations with respect to its position at the frequency of absolute astrometry observations is well known. The method of VLBI absolute astrometry involves observations of many active galactic nuclei approximately uniformly distributed over the sky. Data analysis of group delays derived from these observations is usually performed in the accumulative mode, i.e. all VLBI absolute astrometry and geodesy observations are processed in a single least squares solution for estimation of source coordinates, Earth orientation parameters, station positions, and nuisance parameters, such as the atmospheric path delay in the zenith direction and the clock function. The errors in source position adjustments are due to the thermal noise and the inaccuracy of modeling path delay in the atmosphere. They vary in a wide range from 0.05 to 50~mas depending on source flux density, network geometry, and the number of observables, with a 1~mas error being typical. Since the contribution of the ionosphere to group delay is reciprocal to the square of the observing frequency, absolute astrometry experiments are usually performed at two widely separated bands, 2.3/8.4 or 4.3/7.6~GHz \citep{r:wfcs}. Processing of dual-band data allows one to effectively eliminate the contribution of the ionosphere. However, there are situations when absolute astrometry observations are available only at one band. Compared with dual-band observations, single-band absolute astrometry observations are affected by the contribution of the path delay in the ionosphere.
Two questions arise: 1)~which data analysis strategy provides source coordinate estimates with the lowest uncertainties, and 2)~how to account for the contribution of errors in ionosphere modeling to the reported source coordinate uncertainties? The goal of this study is to provide answers to these questions. I took a trial dataset of over 4~million dual-band observations from twenty-four hour VLBI experiments publicly available at the US National Radio Astronomy Observatory (NRAO) archive and used it as a testbed. I dropped existing observations at the second band during data analysis and compared results of single-band data against the reference solution that used both bands. \section{VLBI dataset used for trials} I used a dataset of 263 twenty-four hour experiments at the Very Long Baseline Array (VLBA) network from April 1998 through May 2021. All these data are publicly available at the NRAO archive\footnote{\web{https://data.nrao.edu}}. The dataset consists of observing sessions under the regular VLBI geodesy program RDV \citep{r:rdv}, the astrometric VCS-II program \citep{r:vcs-ii}, its follow-ups VCS-III and VCS-IV, and the geodetic CONT17 campaign \citep{r:beh22}. The motivation of this choice is to have a long history of observations, a homogeneous network, and both short and long baselines. VLBI absolute astrometry programs at declinations above $-40^\circ$ are almost exclusively run with the VLBA. Therefore, conclusions made from processing trial runs at the VLBA can be propagated directly to the past and future astrometry programs at that network. \section{Modeling the ionospheric contribution to path delay} \subsection{Dual-band observations} The impact of the ionosphere dispersiveness on fringe phase is reciprocal to frequency in the first approximation.
Therefore, the fringe phase in channel $i$ in the presence of the ionosphere becomes \begin{eqnarray} \phi_i = 2\pi \left( \tau_{p} \, f_0 + \tau_{g} \, (f_i - f_0) - \Frac{\alpha}{f_i} \right), \eeq{e:i1} where $\tau_p$ and $\tau_g$ are phase and group delays, $f_i$ is the frequency of the $i$th spectral channel, $f_0$ is the reference frequency, and \begin{eqnarray} \alpha = \Frac{e^2}{ 8\, \pi^2 \, c \, m_e \, \epsilon_o } \left( \int N_v \, d s_1 - \int N_v \, d s_2 \right) , \eeq{e:i2} where $N_v$ is the electron density, $e$ is the charge of an electron, $m_e$ is the mass of an electron, $\epsilon_o$ is the permittivity of free space, and $c$ is the velocity of light in vacuum. Integration is carried out along the line of sight. Having substituted the values of the constants \citep{r:klo96} and expressing the total electron contents along the line of sight $\int N_v \, d s$ in $\flo{1}{16}$ electrons/$m^2$ (so-called TEC units or TECU), we arrive at $ \alpha = \flo{1.345}{9} \:\: \mbox{sec}$/TECU. Phase and group delays are computed from fringe phases $\phi_i$ with weights $w_i$ using least squares. The result can be expressed analytically after some algebra: \begin{eqnarray} \tau_{\rm gi} = \tau_{\rm if} + \Frac{\alpha}{f_e^2} \, \rm TEC, \eeq{e:i3} where $\tau_{\rm if}$ is the ionosphere-free group delay, $\rm TEC$ is $\int N_v \, d s$ expressed in TEC units, and $f_e$ is the effective ionospheric frequency \begin{eqnarray} f_e = \sqrt \Frac { \displaystyle\sum_i^n w_i \cdot \displaystyle\sum_i^n w_i ( f_i - f_0)^2 \: - \: \left( \displaystyle\sum_i^n w_i ( f_i - f_0 ) \right) ^2 \, } { \displaystyle\sum_i^n w_i ( f_i - f_0 ) \displaystyle\sum_i^n \frac{w_i}{f_i} \: - \: \displaystyle\sum_i^n w_i \cdot \displaystyle\sum_i^n w_i \frac{(f_i - f_0)}{f_i} } \eeq{e:i3a} that depends on the weights of the spectral channels $w_i$. Typically, the effective ionospheric frequency is within several percent of the central frequency of the observing band.
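Equation~(\ref{e:i3a}) is straightforward to evaluate numerically. The sketch below computes $f_e$ for an illustrative set of equally weighted channels; the channel frequencies are made up and do not correspond to a real VLBA frequency setup.

```python
import numpy as np

# Effective ionospheric frequency of Eq. (e:i3a) for spectral channels f_i
# with weights w_i and reference frequency f0. Channel values are illustrative.
def effective_iono_freq(f, w, f0):
    f = np.asarray(f, dtype=float)
    w = np.asarray(w, dtype=float)
    df = f - f0
    num = w.sum() * (w * df**2).sum() - (w * df).sum() ** 2
    den = (w * df).sum() * (w / f).sum() - w.sum() * (w * df / f).sum()
    return np.sqrt(num / den)

f = np.linspace(8.0e9, 8.8e9, 8)     # eight channels spanning 8.0-8.8 GHz
w = np.ones_like(f)                  # equal weights
fe = effective_iono_freq(f, w, f0=f.mean())

# f_e comes out within a fraction of a percent of the band center here
assert abs(fe / f.mean() - 1.0) < 0.01
```

Down-weighting channels at one edge of the band pulls $f_e$ away from the band center, which is why the effective frequency, not the nominal one, enters the ionospheric terms.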
The best way to mitigate the impact of the ionosphere on group delay is to observe simultaneously at two or more widely separated frequency bands. Then the following linear combination of the two group delays at the upper and lower bands, $\tau_{u}$ and $\tau_{l}$ respectively, is ionosphere free: \begin{eqnarray} \tau_{if} = \Frac{f_{u}^2}{f_{u}^2 - f_{l}^2} \tau_{u} - \Frac{f_{l}^2}{f_{u}^2 - f_{l}^2} \tau_{l}. \eeq{e:i4} Here $f_{u}$ and $f_{l}$ are the effective ionospheric frequencies at the upper and lower bands, respectively. The residual contribution of the ionosphere is caused by a)~higher order terms in the expansion of the dispersiveness on frequency \citep{r:iono2nd}; b)~the contribution of frequency-dependent source structure; and c)~the dispersiveness in the signal chain. These contributions affect group delay at a level of several picoseconds and are considered insignificant with respect to other systematic errors. \subsection{The use of TEC maps for computation of ionospheric path delay} TEC maps \citep{r:igs-gim}, also known as Global Ionosphere Maps (GIM), derived from analysis of Global Navigation Satellite System (GNSS) data are used for the reduction of single-band observations. In particular, I used the CODE TEC time series \citep{r:schaer99} available at \href{ftp://ftp.aiub.unibe.ch/CODE}{ftp://ftp.aiub.unibe.ch/CODE} since January~01, 1995 with a spatial resolution of $5^\circ \times 2.5^\circ$. The time resolution varied: it was $24^h$ from January~01, 1995 through March~27, 1998; $2^h$ from March~28, 1998 through October~18, 2014; and $1^h$ after that date. The ionosphere is considered as a thin shell at the constant height $H_i$ of 450~km above the mean Earth's radius.
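As a numerical sanity check of the dual-band combination in Eq.~(\ref{e:i4}), the sketch below applies it to synthetic group delays built from the $\alpha/f^2$ dispersion law; the delay, TEC, and frequency values are made up.

```python
# Verify that the ionosphere-free combination of Eq. (e:i4) removes the
# dispersive term. ALPHA is the constant quoted in the text; all other
# values are synthetic.
ALPHA = 1.345e9                 # s * Hz^2 per TECU

def iono_free(tau_u, tau_l, f_u, f_l):
    """Ionosphere-free combination of upper- and lower-band group delays."""
    return (f_u**2 * tau_u - f_l**2 * tau_l) / (f_u**2 - f_l**2)

tau_nd = 1.0e-8                 # non-dispersive (geometric + clock) delay, s
tec = 30.0                      # slant TEC, TECU
f_u, f_l = 8.4e9, 2.3e9         # X/S effective ionospheric frequencies, Hz

tau_u = tau_nd + ALPHA * tec / f_u**2
tau_l = tau_nd + ALPHA * tec / f_l**2

# the combination recovers the non-dispersive delay
assert abs(iono_free(tau_u, tau_l, f_u, f_l) - tau_nd) < 1e-18
```

Note that the two coefficients sum to one, so the non-dispersive part of the delay passes through the combination unchanged.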
The ionospheric contribution is expressed via TEC as \begin{eqnarray} \tau_i = \Frac{\alpha}{f^2_e} \: M(e) \: \mbox{TEC}, \eeq{e:i4a} where $M(e)$ is the so-called thin shell ionospheric mapping function \begin{eqnarray} M(e) = \Frac{1}{\sqrt{1 - \left( \Frac{ R_\oplus} {R_\oplus + H_i} \right) ^2 \, \cos^2{ e_{\rm gc}} } }, \eeq{e:i5} and $R_\oplus$ is the mean Earth's radius and $e_{\rm gc}$ is the geocentric elevation angle with respect to the radius vector between the geocenter and the station. Computation of the TEC value at a given moment of time is reduced to computing the position of the ionosphere piercing point at a given azimuth and elevation and interpolating TEC at that latitude, longitude, and time. TEC maps from GNSS are a coarse model of the ionosphere. Errors of $\tau_i$ computed according to expression~\ref{e:i4a} are greater than the residual ionosphere contribution of ionosphere-free linear combinations of group delays. Therefore, dual-band delay observables are preferred when they are available. However, there are two cases when they are not available: a)~dual-band observing sessions with some source detected only at one band; b)~single-band observing sessions. In these two cases we resort to computation of the ionospheric contribution to path delays using GNSS TEC maps and evaluation of the uncertainties of these contributions. \subsection{Ionospheric contribution in dual-band observing sessions when a source is detected at one band only} The simplest way to deal with a mixture of dual- and single-band data is to process experiments three times: 1)~using dual-band data of those observations that detected a source in both bands, 2)~using low-band data, and 3)~using upper-band data, applying the ionospheric path delay computed from GNSS TEC maps. However, typically only a fraction, 2 to 20\%, of observations is detected at only one band; the rest of the observations are detected at both bands.
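The reduction of Eqs.~(\ref{e:i4a}) and (\ref{e:i5}) can be sketched in a few lines. The sketch below ignores the distinction between geodetic and geocentric elevation angles and the interpolation of the TEC maps themselves, and the TEC value is made up.

```python
import math

# Thin-shell mapping function M(e) of Eq. (e:i5) and the slant ionospheric
# group delay of Eq. (e:i4a). Geocentric/geodetic elevation differences and
# TEC-map interpolation are ignored in this sketch.
R_EARTH = 6371.0      # mean Earth radius, km
H_IONO = 450.0        # thin-shell height, km
ALPHA = 1.345e9       # s * Hz^2 per TECU

def mapping(elev):
    """M(e) for elevation angle elev in radians."""
    c = (R_EARTH / (R_EARTH + H_IONO)) * math.cos(elev)
    return 1.0 / math.sqrt(1.0 - c * c)

def iono_delay(tec, freq_hz, elev):
    """Slant ionospheric group delay in seconds, Eq. (e:i4a)."""
    return ALPHA / freq_hz**2 * mapping(elev) * tec

# at zenith M = 1; toward the horizon the slant factor grows
assert abs(mapping(math.pi / 2) - 1.0) < 1e-12
assert mapping(math.radians(20.0)) > mapping(math.radians(60.0))
```

For 20 TECU at 8 GHz the zenith delay evaluates to about 0.4~ns, consistent with the roughly 21~ps per TECU quoted in the caption of Figure~\ref{f:iono_bias}.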
Therefore, we can use available dual-band observations at a given observing session to improve the TEC model. I represent the ionospheric path delay at stations $j,k$ as \begin{eqnarray} \begin{array}{lcl} \tau_i(t) & = & b_j(t) - b_k(t) \: + \: \\ & & \Frac{\alpha}{f^2_e} \biggl( \Bigl(\rm TEC_j(\phi_j,\lambda_j,t) + a_j(t) \Bigr) M(e_j) \: - \: \\ & & \phantom{aaaa} \Bigl(\rm TEC_k(\phi_k,\lambda_k,t) + a_k(t) \Bigr) M(e_k) \biggr), \end{array} \eeq{e:i6} where $b_j(t) = \sum_i^n B^0_i(t) b_{ij}$ is a delay bias expanded over the B-spline basis of the 0th degree and $a_j(t) = \sum_i^n B^3_i(t) a_{ij}$ is the TEC bias expanded over the B-spline basis of the 3rd degree. $\phi, \lambda$ are coordinates of the ionosphere piercing point that depend on the positions of the observing stations as well as on the azimuths and elevations of the observed sources. The clock bias occurs due to path delay in the VLBI hardware that is different at different bands. This bias is constant for most of the experiments; however, occasionally breaks may happen at some stations. Epochs of these breaks coincide with the epochs of breaks in the clock function. Expansion over the B-spline basis of the 0th degree accounts for these breaks. (A B-spline of the 0th degree is 1 within the range of knots [i, i+1] and 0 otherwise.) I estimated parameters $a_j$ for all the stations and $b_j$ for all the stations except the one taken as a reference, using all available dual-band observations of a given experiment, with least squares and weights \begin{eqnarray} w_i = \Frac{1}{\sqrt{y^2 + \frac{ f_{u}^4 \sigma^2(\tau_u) + f_{l}^4 \sigma^2(\tau_l) } {(f_{u}^2 - f_{l}^2)^2} } }, \eeq{e:i7} where $\sigma(\tau_u)$ and $\sigma(\tau_l)$ are group delay uncertainties and $y$ is the error floor, 12~ps, introduced to prevent observations with very high signal-to-noise ratios from dominating the solution. The time span of the B-spline knot sequence for the TEC bias in my solutions was 15~minutes.
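The weighting of Eq.~(\ref{e:i7}) follows from propagating the per-band delay uncertainties through the ionosphere-free combination and adding the error floor in quadrature. A short sketch with made-up per-band uncertainties:

```python
import math

# Weights of Eq. (e:i7). The 12 ps error floor is the value quoted in the
# text; the per-band delay uncertainties below are made up.
FLOOR = 12.0e-12      # error floor y, s

def weight(sig_u, sig_l, f_u, f_l):
    """Weight of one dual-band observation from its band uncertainties."""
    var_if = (f_u**4 * sig_u**2 + f_l**4 * sig_l**2) / (f_u**2 - f_l**2) ** 2
    return 1.0 / math.sqrt(FLOOR**2 + var_if)

# a 10 ps X-band / 30 ps S-band observation at 8.4 / 2.3 GHz
w = weight(10.0e-12, 30.0e-12, 8.4e9, 2.3e9)
sigma_eff = 1.0 / w   # effective uncertainty, s

# the effective uncertainty is bounded below by the 12 ps floor
assert sigma_eff > FLOOR
```

Because $f_u^4$ dominates the numerator, the upper-band uncertainty drives the weight even when the lower-band delay is noisier.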
I applied constraints on the value of the B-spline coefficients and on the first and the second derivative with reciprocal weights $5 \cdot 10^{-10}$ $s$, $4 \cdot 10^{-14}$, and $2 \cdot 10^{-18}$ $s^{-1}$ respectively. These constraints were applied to ensure the continuity of biases and to prevent a singularity in rare cases when too few available observations at a given station could be used for bias estimation at a given spline segment. Figure~\ref{f:iono_bias} illustrates estimates of the ionospheric bias. \begin{figure} \noindent\includegraphics[width=0.47\textwidth]{iono_adj.pdf} \caption{Adjustment to the ionosphere path delay bias at 8.4 GHz with respect to the path delay derived from GNSS TEC maps at {\sc mk-vlba} station from processing dual-band observations on April 22, 2015. 1~TECU causes group delay 21~ps at 8.4~GHz. } \label{f:iono_bias} \end{figure} The resulting total electron contents model $\rm TEC_j(t) + a_j(t)$ is more precise than the a~priori $\rm TEC_j(t)$ taken from GNSS maps because it uses additional information. Using estimates of $a_j$ and $b_j$ spline coefficients, I compute $\tau_i(t)$ and its uncertainty according to the law of error propagation using the full variance-covariance matrix of spline estimate coefficients. In order to evaluate the realism of these errors, I processed the trial dataset and computed $\tau_i(t)$ using the estimates of clock and TEC biases and compared them with the ionospheric contribution derived from dual-band observations. I removed clock biases from VLBI dual-band ionospheric contributions $\tau_{\rm vi}$, formed the differences $\tau_i - \tau_{\rm vi}$, and then divided them by $\sigma(\tau_i)$ derived from the variance-covariance matrix of $a_j, b_j$. I generated the normalized histogram from the dataset of 4,343,782 differences and computed the first two moments of the empirical distribution shown in Figure~\ref{f:iono_est_mod_dstr}. 
The fitted first and second moments of the distribution are 0.003 and 0.889, respectively. Two factors cause a deviation of the second moment from 1.0 in opposite ways: a)~TEC variations not accounted for by the parametric model; b)~statistical dependence of the estimates of $a_j, b_j$ and VLBI path delay. After scaling the variance-covariance matrix by the square of $0.889$, the distribution of the normalized residuals becomes close to Gaussian. The closeness of the empirical distribution to the normal distribution gives us confidence that the extra noise introduced by the mismodeled ionospheric path delay after applying clock and TEC biases is properly accounted for. \begin{figure} \includegraphics[width=0.47\textwidth]{iono_mod_est_distr.pdf} \caption{Empirical distribution of the normalized differences of the ionosphere path delay computed from the GNSS TEC maps adjusted for clock and TEC biases (green dots). The normal distribution with $\sigma=1$ is shown as a reference (solid blue line). } \label{f:iono_est_mod_dstr} \end{figure} The closeness of the distribution of normalized differences to Gaussian with $\sigma=1$ is encouraging, but it does not guarantee that the residual errors of the sum of TEC from GNSS and TEC bias adjustments cause no systematic error in estimates of source position. To characterize the impact of residual errors of the ionospheric contribution on source position, I ran a solution XIA that had the following differences with respect to the reference solution: 1)~it used X-band group delays; 2)~data reduction for the ionosphere accounted for both the a~priori TEC from GNSS maps and the ionosphere bias adjustment (the expression in parentheses in Eq.~\ref{e:i6}); 3)~errors of the ionospheric biases $\sigma = \sigma(a_{\rm ij}) \, \alpha/f^2_e$ were added in quadrature to the reciprocal weights of the observables. I also ran a solution SIA that differed from XIA by using S-band group delays and the S-band effective ionospheric frequency $f_e$.
For control, I ran solutions XIN and SIN that used the ionosphere-free combinations of group delays and the same weights as the XIA and SIA solutions respectively; zero-mean Gaussian noise with $\sigma = \sigma(a_{\rm ij}) \, \alpha/f^2_e$ was added to each observable. The differences of the XIA and SIA solutions with respect to the reference solution provide us a measure of the impact of residual ionospheric errors on source position. The differences of the XIN and SIN solutions characterize the impact of the residual ionospheric errors if they were Gaussian and totally uncorrelated. Figure~\ref{f:post_iono_resid} shows the distribution of differences in declinations of source position estimates from the XIA and XIN solutions. There is no noticeable deviation from the Gaussian shape. Table~\ref{t:post_iono_resid} lists the first two moments of the distribution of position differences in both right ascension and declination. The second moment from the SIN solution is close to 1.0, while the second moment from the XIN solution is 0.56. The errors of the ionospheric contribution at S-band dominate the error budget. These errors are 14 times less at X-band and are only a fraction of the overall group delay errors. The mean biases in right ascension and declination are negligible. The second moment of position estimates from the XIA and SIA solutions is 15\% higher than the moments from the XIN and SIN solutions, respectively. This increase occurs due to the non-randomness of residual ionospheric errors and can be viewed as a measure of unaccounted systematic errors in source positions due to the ionosphere. Analysis of these trial solutions demonstrates that we are able to predict the impact of ionospheric errors on source position with an accuracy of 15\% when the TEC bias is estimated using dual-band observations.
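The error-calibration procedure described above (normalize the differences by their formal uncertainties, measure the second moment of the empirical distribution, and rescale the covariance accordingly) can be sketched on synthetic data; the sample here is random, not the actual 4.3 million VLBI differences.

```python
import numpy as np

# Sketch of the error-rescaling step: if formal errors are over- or
# under-stated by a common factor, the second moment of the normalized
# differences recovers that factor, and the covariance is rescaled by its
# square. Synthetic data with an injected factor of 0.9.
rng = np.random.default_rng(1)
n = 100_000
sigma_formal = rng.uniform(5.0e-12, 5.0e-11, n)        # formal errors, s
diff = 0.9 * sigma_formal * rng.standard_normal(n)     # actual scatter

z = diff / sigma_formal          # normalized differences
m1, m2 = z.mean(), z.std()       # first two empirical moments
cov_scale = m2**2                # factor applied to the covariance matrix

assert abs(m2 - 0.9) < 0.02      # the injected factor is recovered
assert abs((z / m2).std() - 1.0) < 1e-9
```

In the real analysis the second moment deviates from unity for the two competing reasons discussed above, so the recovered factor calibrates, rather than perfectly explains, the error model.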
\begin{table}[h] \begin{center} \begin{tabular}{|l|rr|rr|} \hline Parameter & \nntab{l|}{TEC biases adjusted} & \nntab{l|}{Gaussian noise added} \\ & mean & $\sigma$ & mean & $\sigma$ \\ \hline $\Delta \alpha$ X-band & -0.03 & 0.63 & -0.01 & 0.56 \\ $\Delta \delta$ X-band & 0.02 & 0.64 & 0.01 & 0.56 \\ $\Delta \alpha$ S-band & -0.05 & 1.14 & -0.01 & 1.03 \\ $\Delta \delta$ S-band & 0.06 & 1.13 & 0.02 & 1.02 \\ \hline \end{tabular} \end{center} \caption{The first and second moments of position differences of trial solutions XIA, SIA, XIN, and SIN with respect to the position estimates derived from the reference solution. The a~priori TEC from GNSS and bias adjustment from dual-band solutions were applied in XIA and SIA solutions (the 2nd column). The zero-mean Gaussian noise with $\sigma$ equal to the uncertainty in path delay from the TEC bias adjustment was added in XIN and SIN solutions (the 3rd column). } \label{t:post_iono_resid} \end{table} \begin{figure*} \includegraphics[width=0.48\textwidth]{ds_estmod_est.pdf} \hspace{0.03\textwidth} \includegraphics[width=0.47\textwidth]{ds_estmod_noi.pdf} \caption{The distribution of normalized differences in declination from the trial VLBI solutions with respect to the reference dual-band solution. {\it Left:} solution with GNSS TEC maps + adjustments of TEC biases using dual-band VLBI group delays (XIA). {\it Right:} solution with added Gaussian noise with $\sigma$ equal to errors in path delay that correspond to TEC bias adjustments uncertainties (XIN). Blue thick lines show fitted Gaussian distributions with the first and second moments listed in Table~\ref{t:post_iono_resid}. } \label{f:post_iono_resid} \end{figure*} \subsection{Ionospheric contribution in single-band observing sessions} When an entire session is observed only at one band, TEC biases cannot be computed. Therefore, we have to resort to deriving a regression model to provide estimates of these errors. 
In the past, \citet{r:lcs2} derived a regression against the so-called global TEC, the integral of TEC over the entire Earth, following ideas of \citet{r:gtec}, while \citet{r:kra21} derived a regression against the root mean square (rms) of the total ionospheric path delay from GNSS TEC. In this study I use the second approach with slight modifications. Following general results of turbulence theory \citep[see][]{r:tat71}, we can expect that fluctuations at scale $x$ are related to fluctuations at scale $y$ via a power law. I processed the same dataset of 263 twenty-four hour VLBI experiments that was used in the previous section and computed the residual ionospheric path delay for each observation as \begin{eqnarray} \tau_r = (\tau_{\rm gi} - \tau_{\rm vi} - (b_j - b_k)) \; \tilde{M}, \eeq{e:i8} where $\tau_{\rm gi}$ is the vertical ionospheric path delay from GNSS TEC maps, $\tau_{\rm vi}$ is the vertical ionospheric path delay from VLBI, $b_j - b_k$ is the contribution of the clock biases, and $\tilde{M}$ is the averaged ionosphere mapping function between stations 1 and 2 of a given baseline: $\tilde{M} = (M(e_1) + M(e_2))/2$. The clock biases are routinely adjusted during analysis of VLBI observations and therefore, their contribution to VLBI results, such as source positions, is entirely eliminated. By subtracting them in expression \ref{e:i8}, I eliminate their impact on the statistics as well. I used only twenty-four hour VLBI experiments for deriving the statistics because the ionospheric path delay strongly depends on Solar time, especially at low latitudes, and statistics derived over shorter time intervals are not representative. \begin{table} \caption{Coefficients of the B-spline expansion of the dependence of the rms of residual ionospheric path delay derived from GNSS TEC maps on the rms of the total ionospheric path delay from GNSS TEC maps at 8~GHz. 
} \begin{center} \begin{tabular}{crr} knot index & knot argument & B-spline value \\ & (ps) & (ps) \\ \hline -2 & & 6.3 \\ -1 & & 14.8 \\ \phantom{-} 0 & & 23.5 \\ \phantom{-} 1 & 0.0 & 114.0 \\ \phantom{-} 2 & 35.0 & 114.0 \\ \phantom{-} 3 & 120.0 & 114.0 \\ \phantom{-} 4 & 1300.0 & \\ \hline \end{tabular} \end{center} \label{t:iono_rms_rms} \end{table} Figure~\ref{f:iono_rms_rms} shows the dependence of the rms of residuals $\tau_r$ on the rms of the total ionospheric path delay from GNSS TEC maps $\tau_{\rm gi}$. Each point on the plot corresponds to the rms for a given baseline and a given observing session. I confirm the earlier result of \citet{r:kra21}, here using a much larger dataset. The result reported in \citet{r:kra21} was slightly affected by an error in the computation of ionospheric group delays for cases when some data were flagged because of radio interference. This error has been fixed and the affected experiments have been reprocessed from scratch. This dependence can be coarsely described as the square root of the rms of the total ionospheric path delay. For a better approximation, I sought a regression in the form of an expansion over B-splines of the 3rd degree. The spline coefficients computed using least squares are listed in Table~\ref{t:iono_rms_rms}. \begin{figure} \includegraphics[width=0.47\textwidth]{iono_rms_rms.pdf} \caption{Dependence of the rms of residual ionospheric path delay derived from GNSS TEC maps $\sigma_{\rm gr}$ on the rms of the total ionospheric path delay from these maps $\sigma_{\rm gt}$. No adjustment to TEC has been applied. Path delay is computed for the reference frequency 8~GHz. The blue smooth line shows the regression model in the form of a B-spline that fits the data. } \label{f:iono_rms_rms} \end{figure} Using that regression, I developed the following algorithm for computation of errors of the ionospheric path delay from GNSS TEC. 
First, coordinates of $K$ points uniformly distributed over the sphere are computed using a random number generator. Then, for each baseline and each time epoch, the azimuth and elevation angles of that point are computed at both stations of the baseline, and if the elevations above the horizon are greater than $5^\circ$ at both stations, the point is selected for further computations. If not, the next point is drawn. Then the total ionospheric path delay $\tau_i(A_1,e_1,A_2,e_2)$ is computed using GNSS TEC maps. It is worth mentioning here that, unlike the troposphere path delay, $\tau_i(A,e) \neq \tau_i(A,\pi/2) \cdot M(e)$, since path delay depends on positions of the ionosphere piercing points. It is not sufficient to compute the ionospheric path delay in the zenith direction and then map it via $M(e)$: the latitude and longitude of the piercing point can be as far as 1000~km from the station. Following this approach, we sample piercing points uniformly distributed within the mutual visibility zone. The process is repeated for 1440 time epochs that cover the time interval of the VLBI experiment under consideration with a step of 1~minute. Then for each baseline $\sigma(\tau_{\rm gt})$ is computed over a time series of 1440 $\tau_i$ values. Finally, the estimate of the rms of residual ionospheric path delay derived from GNSS TEC maps is computed from the regression via the rms of the total ionospheric path delay as \begin{eqnarray} \sigma_{\rm rr} = \sum_k^n B^3_k(\sigma(\tau_{\rm gt})) \, \sqrt{M^2(e_1) + M^2(e_2)}. \eeq{e:i9} Baseline-dependent datasets are considered independent for this computation: mutual visibility at all the stations of the network at a given moment of time is not enforced. For several baselines longer than 96\% of the Earth's diameter, this algorithm performs poorly at selecting points above $5^\circ$. 
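The point-selection step of this algorithm (uniform sampling of shell points and the mutual-visibility test) can be sketched as follows; the Earth radius, shell height, and station positions are illustrative assumptions, and the subsequent evaluation of $\tau_i$ from TEC maps is omitted.

```python
import numpy as np

R_E = 6371.0   # mean Earth radius, km (assumed value)
H_I = 450.0    # ionosphere shell height, km (assumed value)

def random_points_on_sphere(n, rng):
    """Uniformly distributed unit vectors on the sphere."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def elevation_deg(station, point):
    """Elevation of a shell point seen from a station on the surface."""
    los = point - station
    up = station / np.linalg.norm(station)
    return np.degrees(np.arcsin(los @ up / np.linalg.norm(los)))

def sample_mutually_visible(st1, st2, k, rng, cutoff=5.0):
    """Keep shell points seen above 'cutoff' elevation at both stations."""
    pts = (R_E + H_I) * random_points_on_sphere(k, rng)
    keep = [p for p in pts
            if elevation_deg(st1, p) > cutoff and elevation_deg(st2, p) > cutoff]
    return np.array(keep)

rng = np.random.default_rng(1)
st1 = R_E * np.array([1.0, 0.0, 0.0])
st2 = R_E * np.array([np.cos(0.3), np.sin(0.3), 0.0])   # ~1900 km baseline
visible = sample_mutually_visible(st1, st2, 5000, rng)
```

Only a small fraction of the shell is above $5^\circ$ at both stations, which is why the extreme long-baseline case discussed next requires special handling.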
Therefore, a minor modification is made for such an extreme case: the elevation angle is fixed to $5^\circ$, mutual visibility is not enforced, and azimuths are selected randomly within a range of $[0, 2\pi]$ independently for both stations. In order to evaluate the validity of this regression model of residual ionospheric path delay errors, I computed $\tau_r$ from dual-band observations and $\sigma_{\rm gt}$ following the algorithm described above for 263 twenty-four hour experiments, and then computed the histograms of normalized residuals $\tau_r/\sigma_{\rm rr}$. The histogram is presented in Figure~\ref{f:iono_regr_mod_dstr}. The first two moments of the distribution are -0.083 and 1.214 respectively. Since the regression $\sigma_{\rm rr}$ was found using least squares, the numbers of observations with $\sigma(\tau_{\rm gr})$ less and greater than $\sigma_{\rm rr}$ for a given $\sigma(\tau_{\rm gt})$ are approximately equal: the thick blue line cuts the cloud of green points in Figure~\ref{f:iono_regr_mod_dstr} almost in half. However, the variance of the contribution of those points with $\sigma(\tau_r) > \sigma_{\rm rr}$ outweighs the contribution of those points with $\sigma(\tau_r) < \sigma_{\rm rr}$ because the variance depends on $\tau_r$ quadratically. This causes a positive bias. After multiplying $\sigma_{\rm rr}$ by 1.214, the distribution of normalized residuals becomes almost Gaussian. One can notice that $\sqrt{M^2(e_1) + M^2(e_2)}$ in expression \ref{e:i9} is not the same as $\tilde{M} = (M(e_1) + M(e_2))/2$ used for computation of the regression. I found that using $\tilde{M}$ instead of $\sqrt{M^2(e_1) + M^2(e_2)}$ decreases the second moment from 1.214 to 1.196, which is a negligible change. \begin{figure} \includegraphics[width=0.47\textwidth]{iono_mod_regr_distr.pdf} \caption{The distribution of the normalized differences of the ionospheric path delay computed from the GNSS TEC maps against the VLBI ionospheric path delay with clock biases subtracted (green dots). 
The normal distribution with $\sigma=1$ (solid blue line) is shown as a reference. } \label{f:iono_regr_mod_dstr} \end{figure} The distributions shown in Figures~\ref{f:iono_est_mod_dstr} and \ref{f:iono_regr_mod_dstr} are computed for the entire dataset of 4.3 million path delays, and they represent the general population over the interval of 23~years. Statistics for an individual observing session may differ. In order to evaluate the scatter of the statistics, I computed the time series of second moments of the distribution of normalized residuals of ionospheric path delays and their uncertainties, with and without TEC biases adjusted, for each observing session separately. I divided the normalized residuals by scaling factors of 0.889 and 1.196 respectively. I computed the distribution of second moment estimates and show it in Figure~\ref{f:iono_mod_sess_nrml_dstr}. The scatter of the second moments is small when TEC biases are adjusted, which means this statistic is robust. When TEC is not adjusted, the scatter is significantly larger, but even in that case 90\% of the second moment estimates deviate from 1.0 by no more than 30\%. This provides us with a measure of the uncertainties in computation of ionospheric path delay errors in individual single-band observing sessions. \begin{figure} \includegraphics[width=0.47\textwidth]{iono_mod_sess_nrml_dstr} \caption{The distribution of second moment estimates of the normalized differences of ionospheric path delays derived from VLBI dual-band observations and GNSS TEC maps among individual observing sessions. The narrow green curve shows the statistics of the normalized residuals with TEC biases adjusted and the wide blue curve shows the statistics of normalized residuals without TEC adjustment. 
} \label{f:iono_mod_sess_nrml_dstr} \end{figure} \subsection{The impact of the residual ionospheric errors in source position in the case of single-band observing sessions} The error analysis presented in the previous section characterizes our ability to predict the first and second moments of the distribution, but it does not guarantee that residual errors due to the ionosphere do not cause systematic errors in source positions. I ran trial solutions XIT and SIT that used X- and S-band group delays respectively, applied ionospheric path delays derived from GNSS TEC maps, and inflated reciprocal weights of observables by adding in quadrature the errors in ionospheric path delays from TEC maps. Analysis of differences in source positions from the trial XIT and SIT solutions with respect to the reference dual-band solution revealed no peculiarities in right ascensions, but significant systematic errors in declinations (see Figure~\ref{f:x_bias_1000}). The pattern of systematic errors in declination from the S-band solution is similar but greater by a factor of $f^2_x/f^2_s \approx 14$. We cannot consider a solution with such errors satisfactory. This was unexpected because prior work of \citet{r:sek03,r:hob06,r:det11,r:mot22} reported good agreement between TEC derived from VLBI and GNSS. And indeed, the plot of ionospheric contributions in the zenith direction from VLBI after removal of clock biases against the ionospheric contributions from GNSS (Figure~\ref{f:gv_reg}) shows no peculiarities and fits the straight line $\tau_{\rm vi} = -3.9 \:{\rm ps} + 1.06 \cdot \tau_{\rm gi}$. Although the residuals of this dependence {\it look} random, they still cause systematic errors in source position. 
\begin{figure} \includegraphics[width=0.47\textwidth]{bias_1000_north.pdf} \caption{Differences in declinations from the X-band single-band solution XIT, with the data reduction for the ionosphere using ionospheric path delay from GNSS TEC maps applied, with respect to the dual-band reference solution. The thick blue line shows the differences smoothed with the Gaussian filter. } \label{f:x_bias_1000} \end{figure} \begin{figure} \includegraphics[width=0.35\textwidth]{gv_reg.pdf} \caption{Dependence of the VLBI ionospheric group delay at 8~GHz on the ionospheric group delay from GNSS. The blue straight line is the least squares fit of this dependence. } \label{f:gv_reg} \end{figure} I made a number of trial solutions. The following leads turned out to be productive: 1)~to modify the mapping function and 2)~to scale ionospheric path delays from TEC. \citet{r:schaer99} suggested the following modification of the ionospheric mapping function: \begin{eqnarray} M(e) = k \, \Frac{1}{\sqrt{1 - \left( \Frac{ \bar{R}_\oplus} {R_\oplus + H_i + \Delta H} \right) ^2 \, \cos^2{ \alpha e_{\rm gc}} } }, \eeq{e:i10} arguing that by varying the parameters $\alpha$ and $\Delta H$ one can account for a more realistic electron density distribution with height than the thin shell model. Here $\Delta H$ is an increment in the ionosphere height and $\alpha$ is a fudge factor. I ran trial solutions: XIE0 with $k=1.0,\, \Delta H = 0,\, \alpha = 1.00$; XIE1 with $k=1.0,\, \Delta H = 56.7$~km, $\alpha = 0.9782$; and XIE2 with $k=1.0,\, \Delta H = 150.0$~km, $\alpha = 0.9782$. The choice of $k, \Delta H, \alpha$ in the XIE0 solution corresponds to the thin shell model and the choice in the XIE1 solution corresponds to the so-called JPL modified single layer ionospheric mapping function. That flavor of the mapping function was used in a number of papers \citep{r:li19,r:xiang19,r:shao21,r:wie21}; however, no details on how these values of $k,\, \Delta H,$ and $\alpha$ were derived have been provided. 
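Expression \ref{e:i10} can be written down directly. In the sketch below I assume $\bar{R}_\oplus = R_\oplus$, treat the argument as the geocentric elevation $e_{\rm gc}$, and use illustrative values for the Earth radius and the nominal shell height.

```python
import numpy as np

R_E = 6371.0   # mean Earth radius, km (assumed value)
H_I = 450.0    # nominal ionosphere shell height, km (assumed value)

def mapping_function(e_gc, k=1.0, dH=0.0, alpha=1.0):
    """Modified thin-shell ionospheric mapping function.

    e_gc  : geocentric elevation angle, radians
    k     : overall scaling factor
    dH    : increment of the ionosphere height, km
    alpha : fudge factor
    """
    ratio = R_E / (R_E + H_I + dH)
    return k / np.sqrt(1.0 - ratio**2 * np.cos(alpha * e_gc)**2)

# the three flavors used in the XIE0/XIE1/XIE2 trial solutions
m_thin = mapping_function(np.radians(10.0))                          # XIE0
m_jpl  = mapping_function(np.radians(10.0), dH=56.7, alpha=0.9782)   # XIE1
m_high = mapping_function(np.radians(10.0), dH=150.0, alpha=0.9782)  # XIE2
```

Raising the effective shell height flattens the function at low elevations, i.e. $M$ decreases for the same elevation, which is the direction of the effect seen in the admittance factor estimates discussed next.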
I estimated admittance factors of the ionospheric path delay from GNSS maps using all X-band group delays in a number of least squares solutions. No ionospheric contribution was applied in these trials. The partial derivative of the observed group delay with respect to the admittance factor is the GNSS ionospheric path delay itself. Table~\ref{t:adm} shows the estimates of the admittance factors. \begin{table} \caption{Estimates of the admittance factors of the ionospheric path delay from GNSS TEC maps from X-band group delays. } \begin{tabular}{ll} \hline Solution & Admittance \\ \hline XIE0 & $ 0.750 \pm 0.002 $ \\ XIE1 & $ 0.802 \pm 0.002 $ \\ XIE2 & $ 0.835 \pm 0.002 $ \\ \hline \end{tabular} \label{t:adm} \end{table} The admittance factor estimates deviate from 1.0 at a level of $\sim\!\! 100 \sigma$, i.e., at an extremely high significance level. The admittance factor becomes closer to 1.0 when the ionosphere height parameter in the mapping function is increased. I interpret this result as a deficiency of GNSS TEC maps, and I associate the origin of this deficiency with an over-simplification of the mapping function that was used for derivation of the TEC maps from processing of GNSS observations. Unfortunately, it does not appear possible to {\it fix} the deficiency of the GNSS TEC maps without re-processing GNSS observations, which is well beyond the scope of this article; nevertheless, it is still feasible to {\it mitigate} the impact of the imperfection of GNSS TEC maps on source position estimates. I ran 12 trial solutions with mapping functions with parameters 1)~$\Delta H=0.0,\, \alpha=1.0$; 2)~$\Delta H=56.7~{\rm km},\, \alpha=0.9782$; and 3)~$\Delta H=150.0~{\rm km},\, \alpha=0.9782$. I varied the mapping function scaling factor $k$, setting it to 0.7, 0.8, 0.9, and 1.0, and computed declination biases with respect to the reference solution. The results are presented in Figures~\ref{f:bias_4_map}--\ref{f:bias_4_mhi}. 
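The admittance estimate itself is a one-parameter weighted least squares problem, since the partial derivative of the observable with respect to the admittance is the GNSS ionospheric path delay. A minimal synthetic sketch (the true admittance, the noise level, and the delay distribution are invented for the example):

```python
import numpy as np

def estimate_admittance(tau_obs, tau_gi, sigma):
    """Weighted LSQ estimate of a single admittance factor and its error.

    tau_obs : observed group delays with no ionospheric reduction applied
    tau_gi  : GNSS ionospheric path delays (the partial derivatives)
    sigma   : group delay uncertainties used for weighting
    """
    w = 1.0 / sigma**2
    a_hat = np.sum(w * tau_gi * tau_obs) / np.sum(w * tau_gi**2)
    a_err = 1.0 / np.sqrt(np.sum(w * tau_gi**2))
    return a_hat, a_err

# synthetic check: a "true" admittance of 0.80 buried in 30 ps noise
rng = np.random.default_rng(2)
tau_gi = rng.uniform(20.0, 400.0, size=200_000)   # ps, invented range
sigma = np.full_like(tau_gi, 30.0)                # ps
tau_obs = 0.80 * tau_gi + rng.normal(0.0, sigma)
a_hat, a_err = estimate_admittance(tau_obs, tau_gi, sigma)
```

With millions of real observables the formal error shrinks to the $\sim\!0.002$ level quoted in Table~\ref{t:adm}, so a deviation of the estimate from 1.0 at the 0.2 level is indeed enormously significant.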
\begin{figure} \includegraphics[width=0.47\textwidth]{bias_4_map.pdf} \caption{Smoothed declination biases from four solutions with respect to the dual-band reference solution using X-band observables and the ionospheric mapping function with parameters $\Delta H=0$, $\alpha=1.0$ with different scaling factors $k= 0.7,\, 0.8,\, 0.9,\, {\rm and}~1.0$. } \label{f:bias_4_map} \end{figure} \begin{figure} \includegraphics[width=0.47\textwidth]{bias_4_mod.pdf} \caption{Smoothed declination biases from four solutions with respect to the dual-band reference solution using X-band observables and the ionospheric mapping function with parameters $\Delta H=56.7$~km and $\alpha=0.9782$ with different scaling factors $k= 0.7,\, 0.8,\, 0.9,\, {\rm and}~1.0$. } \label{f:bias_4_mod} \end{figure} \begin{figure} \includegraphics[width=0.47\textwidth]{bias_4_mhi.pdf} \caption{Smoothed declination biases from four solutions with respect to the dual-band reference solution using X-band observables and the ionospheric mapping function with parameters $\Delta H=150$~km and $\alpha=0.9782$ with different scaling factors $k= 0.7,\, 0.8,\, 0.9,\, {\rm and}~1.0$. } \label{f:bias_4_mhi} \end{figure} Close analysis of these figures reveals that the declination bias has three components: 1)~a constant bias; 2)~a linear increase in the declination bias with a decrease of declination; 3)~a feature $\delta (x \, \sin \delta + y \, \cos \delta)$ with the minimum at approximately $-12^\circ$, where $\delta$ is declination. All three components depend on the parameters of the mapping function $\Delta H,\, \alpha$ and on the scaling factor $k$. By selecting appropriate $\Delta H,\, \alpha,\, k$, we can reduce the declination biases. I selected $\Delta H = 56.7~{\rm km},\, \alpha=0.9782,$ and $k=0.85$. This choice makes the weighted mean declination bias over all sources 0.013~mas. 
There is an element of subjectivity in this specific choice, since there exist other combinations of $\Delta H, \alpha, k$ that provide a zero mean bias. \citet{r:wie21} showed that the mean rms of the difference between GNSS TEC maps and GNSS slant TEC was reduced by 2 to 6\% when this modified mapping function was used. That choice of $\Delta H$ and $\alpha$ led me to a selection of the specific scaling factor $k$. Although scaling and modification of the ionospheric mapping function result in a substantial reduction of the declination bias, the remaining bias is still worrisome. To mitigate the bias even further, I introduce an ad~hoc correction for the ionospheric bias in the data reduction model. I smoothed the biases $D(\delta)$ with the Gaussian filter with $\sigma=8^\circ$ --- these smoothed biases are shown in Figures~\ref{f:bias_4_map}--\ref{f:bias_4_mhi} --- and expanded them over the basis of B-splines of the 3rd degree with 13 equi-distant knots in the range of $[-50^\circ, 90^\circ]$. I added the correction \begin{eqnarray} \der{\tau}{\delta} \, D(\delta) \, f^2_d/f^2_e \eeq{e:i11} to the observables. Here $f_d$ is the reference frequency of the bias model and $f_e$ is the effective ionospheric frequency of a given observation. Figure~\ref{f:debias_corr} shows the effect of applying the de-bias correction. The bias is gone. I ran two solutions using X-band only data (XIS) and S-band only data (SIS), applied the ionospheric path delay derived from the a~priori TEC from GNSS maps using the modified mapping function with parameters $\Delta H=56.7~{\rm km},\, \alpha =0.9782,\, k=0.85$, and applied the declination bias correction. The reciprocal weights of observables were adjusted by adding in quadrature the residual ionospheric errors $\sigma_{\rm rr}$ modeled according to regression expression \ref{e:i9}. The first and the second moments of the normalized differences in source positions are presented in Table~\ref{t:post_iono_sba}. 
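The de-bias correction of expression \ref{e:i11} reduces to evaluating the smoothed bias model at the source declination and rescaling it with the squared frequency ratio. In this sketch the B-spline expansion of $D(\delta)$ is replaced by a stand-in callable; the names and numbers are illustrative, not the actual fitted model.

```python
import numpy as np

def debias_correction(dtau_ddelta, delta, f_d, f_e, bias_model):
    """Ad hoc ionospheric de-bias correction added to an observable.

    dtau_ddelta : partial derivative of group delay w.r.t. declination
    delta       : source declination, radians
    f_d, f_e    : reference frequency of the bias model and effective
                  ionospheric frequency of the observation, Hz
    bias_model  : callable returning the smoothed declination bias D(delta)
    """
    return dtau_ddelta * bias_model(delta) * (f_d / f_e)**2

def bias_model(delta):
    # Stand-in for the cubic B-spline expansion of the smoothed biases;
    # the shape below is purely illustrative.
    return 0.05e-3 * np.sin(delta)

corr_x = debias_correction(1.0, np.radians(-12.0), 8.4e9, 8.4e9, bias_model)
corr_s = debias_correction(1.0, np.radians(-12.0), 8.4e9, 2.3e9, bias_model)
```

The $f^2_d/f^2_e$ factor makes the same declination-dependent model applicable at any observing band, with the S-band correction about 13 times the X-band one.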
The second moments are less than 1.0 in the X-band solution. This indicates that the ionospheric contribution is not the dominating error source. \begin{table}[h] \begin{center} \begin{tabular}{|l|rr|rr|} \hline Parameter & \nntab{l|}{ X-band solution} & \nntab{l|}{S-band solution} \\ & mean & $\sigma$ & mean & $\sigma$ \\ \hline $\Delta \alpha$ & -0.07 & 0.45 & -0.11 & 0.82 \\ $\Delta \delta$ & -0.02 & 0.50 & -0.06 & 1.00 \\ \hline \end{tabular} \end{center} \caption{The first and second moments of position differences of trial solutions XIS and SIS with respect to the position estimates derived from the reference solution. The a~priori ionospheric contribution was computed from GNSS TEC maps using the modified mapping function with parameters $\Delta H=56.7~{\rm km},\, \alpha =0.9782,\, k=0.85$, and the declination bias correction was applied, but no reduction for TEC bias adjustment has been applied. } \label{t:post_iono_sba} \end{table} \begin{figure*} \includegraphics[width=0.47\textwidth]{bias_850_mod.pdf} \hspace{0.03\textwidth} \includegraphics[width=0.47\textwidth]{bias_850_moda.pdf} \caption{The declination bias with the ionospheric path delay using the mapping function with parameters $\Delta H=56.7~{\rm km}, \alpha =0.9782, k=0.85$ with ({\it Right}) and without ({\it Left}) applying the empirical de-bias correction. } \label{f:debias_corr} \end{figure*} \section{Discussion} \subsection{Inaccuracy of GNSS TEC maps} A number of authors compared the ionospheric contribution from GNSS, VLBI, DORIS, radio occultation observations, and dual-band satellite altimeters \citep{r:sek03,r:hob06,r:det11,r:para17,r:cokrlic18,r:li18,r:li19, r:xiang19,r:wie21,r:shao21,r:mot22}. The message these publications convey is that there is reasonable agreement between GNSS TEC maps and other techniques and that there are no major problems. Therefore, large systematic errors driven by mismodeling of the ionospheric contribution derived from GNSS maps came as a big surprise. 
Do the results presented in this study contradict prior publications? Comparisons in the past were often performed using very short continuous datasets of VLBI observations, from 5 to 15 days \citep{r:hob05,r:det11,r:ete21,r:mot22}. The level of the agreement was characterized in terms of additive errors of the GNSS TEC model. Unfortunately, when data from a short time interval are analyzed, the distinction between additive and multiplicative errors becomes blurry. Characterizing the differences in terms of biases and the rms of their scatter did not prove productive in revealing systematic errors. Moreover, an unconscious bias towards assessing agreement rather than investigating disagreements, which were merely noticed and reported, diverted attention from studying the differences in depth. However, reading between the lines of published papers, we can find pieces of evidence supporting the findings of this work. The distribution of differences in VLBI vertical TEC with respect to TEC derived from GNSS TEC maps presented in Figure~6 of \citet{r:hob06} shows a very significant skew. The distribution has a negative mean and a much greater left tail than right tail. VLBI vertical TEC from that study appeared smaller than the TEC from GNSS TEC maps, in agreement with what I have found. The authors did not investigate that stark deviation of the distribution of over one million differences from the Gaussian shape, only noting that the bias is less than 3~TECU. If the errors were normally distributed and {\it additive}, and the derivation of TEC from VLBI did not introduce serious errors, one would expect to arrive at a Gaussian distribution of residuals. For instance, the distribution of differences in Figure~\ref{f:iono_regr_mod_dstr} indeed does not show any measurable deviation from the Gaussian shape. However, if the errors are {\it multiplicative}, the residuals will be non-Gaussian and their distribution will be skewed. 
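The distinction drawn here between additive and multiplicative errors is easy to demonstrate with a toy simulation (all distributions and parameters below are invented for illustration): residuals of an additive Gaussian error stay Gaussian, while residuals of a multiplicative error inherit the skewed, strictly positive distribution of the TEC itself.

```python
import numpy as np

rng = np.random.default_rng(3)

def skewness(x):
    """Sample skewness (third standardized moment)."""
    d = x - x.mean()
    return np.mean(d**3) / np.mean(d**2)**1.5

# "true" slant ionospheric delays: positive, with a heavy right tail
tau_true = rng.lognormal(mean=4.0, sigma=0.6, size=500_000)   # ps

# additive Gaussian error: residuals are just the noise itself
resid_add = (tau_true + rng.normal(0.0, 10.0, tau_true.size)) - tau_true

# multiplicative error with a mean scale below 1 (cf. the admittance
# factors found above): residuals are skewed and biased negative
scale = rng.normal(0.9, 0.05, tau_true.size)
resid_mul = scale * tau_true - tau_true
```

The multiplicative residuals have a negative mean and a pronounced negative skew, qualitatively reproducing the skewed distribution in Figure~6 of \citet{r:hob06}.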
Satellite altimetry using Jason satellites \citep[][and references therein]{r:jason} provides an independent way to assess the level of disagreement between direct vertical TEC measurements and GNSS TEC maps. \citet{r:li18} showed that comparisons of the differences revealed significant systematic biases that depend on geomagnetic latitude. However, an attempt to characterize these additive biases in terms of rms was not very productive. \citet{r:liu18} presented the spatial distribution of the differences (Figures 6--7). That distribution strikingly resembles the average distribution of TEC itself, suggesting the differences are multiplicative. A more recent paper of \citet{r:det22} revealed a highly significant scaling factor between GNSS TEC maps and four altimeter missions. The scaling factors varied from 0.809 to 0.919, which is very close to what has been found in my analysis of astrometric observations of extragalactic radio sources. \citet{r:li19} performed a comparison of TEC maps with Jason satellite altimetry and with ionospheric radio occultations \citep[see the overview of this technique in][]{r:ro}. They characterized the differences as a superposition of an additive bias and a multiplicative scale factor. The scale factors, defined as TEC${}_{\rm Jason}$/TEC${}_{\rm GIM}$ and TEC${}_{\rm COSMIC}$/TEC${}_{\rm GIM}$, vary with time and latitude and stay in a range of 0.7--0.9 at night and 0.9--1.0 during daytime. Since Jason orbits have an altitude of 1,350~km and GNSS orbits have altitudes of about 20,000~km, Jason altimetry misses the contribution from the upper layers of the ionosphere and plasmasphere. However, the electron density at altitudes above 1,350~km is too low to account for the 15--20\% of the TEC that is not present in Jason or COSMIC data. If the electron content above an altitude of 1,350~km were really high enough to contribute to the TEC at that level, the thin shell model with an altitude of 450~km would be grossly inadequate. 
\begin{figure*} \includegraphics[width=0.47\textwidth]{src_tec_alpha.pdf} \hspace{0.05\textwidth} \includegraphics[width=0.47\textwidth]{src_tecest_alpha.pdf} \includegraphics[width=0.47\textwidth]{src_tec_delta.pdf} \hspace{0.05\textwidth} \includegraphics[width=0.47\textwidth]{src_tecest_delta.pdf} \caption{The rms of the differences in right ascensions ({\it upper row}) and declinations ({\it lower row}) when single-band group delays are used with respect to the positions from the dual-band reference solution. The a~priori ionospheric path delay using GNSS TEC with the modified mapping function ($\Delta H=56.7~{\rm km}, \alpha=0.9782, k=0.85$) was used, with TEC biases adjusted from dual-band observations ({\it right}) and without adjusting biases but with the de-bias correction from expression~\ref{e:i11} applied ({\it left}). The upper blue band shows the differences for S-band, the next red line shows the differences for C-band, the next green line shows the differences for X-band, and the bottom purple line shows the differences for K-band. } \label{f:err_src_pos} \end{figure*} It is essential to note that analysis of Jason and COSMIC data provides estimates of vertical TEC, while analysis of GNSS observations provides slant TEC that is converted to vertical TEC using the mapping function. The dependence of the admittance factor estimate on the mapping function found in this study suggests that a simple thin shell model may not be adequate. \citet{r:schaer99} discussed the dependence of the effective ionosphere height on solar zenith angle. \citet{r:xiang19} studied it in more detail. The height of the peak electron density has annual and diurnal variations. The latter variations have an amplitude of about 100~km, the height being lower at daytime. They showed that an instantaneous mapping function that accounts for the height of the electron content maximum achieves an 8\% reduction of mapping errors. 
It should be mentioned that if the effective height of the ionosphere is changed, the latitude and longitude of the ionosphere piercing point for an observation with a given elevation and azimuth will change as well, which will produce an additional change in path delay. These works strengthen the argument that the thin shell model used for derivation of TEC maps since the mid-1990s is oversimplified. The effective height of the ionosphere changes with time and latitude and therefore, a realistic mapping function should also vary with time and latitude. Omission of this complexity results in a deficiency of GNSS TEC maps that manifests itself in multiplicative (scale) and additive (bias) errors that vary with time and latitude. The non-linear dependence of the declination bias on declination in the form of $\delta (x \, \sin \delta + y \, \cos \delta)$ may be caused by the dependence of the GNSS TEC scaling factor on latitude reported by \citet{r:li19}. Radio waves from southern hemisphere sources observed at an array located in the Northern Hemisphere propagate through regions of the ionosphere with systematically lower latitudes than those from northern hemisphere sources. Since a fixed mapping function is used both for computation of TEC maps and for computation of slant ionospheric path delay from these maps, no modification of that fixed mapping function for data reduction of astronomical data is able to account for this kind of complexity, but as was shown earlier, it is still possible to mitigate it. The remaining bias can be eliminated by applying an empirical de-bias correction. \subsection{Omitted refinements of the ionosphere contribution} \citet{r:ete21} processed 60 days of VLBI data and estimated not only TEC for each site using B-splines of the 1st degree, but also TEC derivatives over longitude and latitude that were considered constant over 24~hour periods. 
They claim that estimating TEC partial derivatives decreases the discrepancies between TEC from VLBI and GNSS TEC maps by 36\%. Inspired by these results, I introduced estimation of latitude and longitude TEC gradients in the form of B-splines in addition to TEC estimation using dual-band data. I considered the weighted rms (wrms) of the differences between the parameterized model of the TEC adjustment and the vertical TEC from dual-band data as a metric of improvement. I did not find a reduction of the rms greater than several percent and abandoned this approach. I should note that this result does not disprove the findings of \citet{r:ete21} since they used a different metric. Expression \ref{e:i1} for the ionospheric contribution is an approximation. \citet{r:iono2nd} considered the impact of the higher-order terms of the expansion on group delay, namely the term proportional to $f^{-3}$. They found that the maximum contribution of the 3rd degree term varied from 3 to 9~ps at 8~GHz depending on the baseline. This contribution is approximately one order of magnitude less than the contribution of residual ionospheric errors after taking into account GNSS TEC maps. It should be noted that the 3rd degree term affects both single-band group delays and ionosphere-free linear combinations of group delays at two bands and therefore, cannot be retrieved in analysis of the differences with respect to the dual-band reference solution. 
\subsection{The impact of remaining errors in modeling of the ionosphere on source positions} \begin{figure*} \includegraphics[width=0.49\textwidth]{iono_k_da_errors.pdf} \hspace{0.01\textwidth} \includegraphics[width=0.49\textwidth]{iono_k_dd_errors.pdf} \caption{An increase in source position errors in right ascension ({\it Left}) and declination ({\it Right}) due to the residual ionospheric contribution at K-band (22~GHz) after applying data reduction based on GNSS TEC maps with the modified mapping function ($\Delta H=56.7~{\rm km}, \alpha=0.9782, k=0.85$). } \label{f:k_err} \end{figure*} As was shown before, our ability to model the ionospheric contribution using single-band observations is limited. After applying the de-bias data reduction, the declination bias is virtually eliminated. The residual ionospheric contribution causes additional random errors. In order to investigate their impact, I ran four trial solutions. I used ionosphere-free linear combinations of dual-band data and added to them the ionospheric contribution derived from VLBI data scaled to the specific frequency of a trial solution. Then I applied the data reduction for the ionosphere to this dataset of modified observables using GNSS TEC maps, as if I were processing single-band observations, and estimated source positions. The differences in source positions from these solutions with respect to the reference solution are interpreted as the impact of the residual ionospheric errors at different frequencies. The declination dependence of the differences in right ascensions and declinations for the four solutions at frequencies of 2.3~GHz (S-band), 4.3~GHz (C-band), 8.4~GHz (X-band), and 23.7~GHz (K-band) is shown in Figure~\ref{f:err_src_pos}. These plots help to quantify the additional errors in source positions that would arise if the dataset of 263 twenty-four hour experiments used for this study had been observed at one band only. 
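The frequency scaling used to construct these trial datasets follows directly from the $f^{-2}$ dispersion law; a one-line helper (frequencies in GHz, values illustrative):

```python
def scale_iono_delay(tau_ref, f_target_ghz, f_ref_ghz=8.0):
    """Scale an ionospheric group delay from the reference frequency
    to a target band using the f**-2 dispersion law."""
    return tau_ref * (f_ref_ghz / f_target_ghz)**2

# e.g. an ionospheric delay extracted at 8 GHz, re-scaled to S-band;
# 21 ps is the 1 TECU example value quoted later in the text
tau_s = scale_iono_delay(21.0, 2.3)
```

The same scaling applied in the opposite direction is what suppresses the ionospheric errors at K-band relative to S-band by roughly two orders of magnitude.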
We see that estimation of TEC biases reduces the ionosphere-driven errors by a factor of 2--4. Still, even after estimation of TEC biases, the dependence of the ionosphere-driven errors on declination remains. It is known that unaccounted source structure affects source position estimates. In general, the contribution of source structure at higher frequencies is smaller because jets become optically thin. Therefore, there was an expectation that by observing at high frequencies, such as 22--24~GHz (K-band), one may obtain more precise source positions. So far, observational evidence does not support that prediction \citep{r:kq_astro,r:icrf3}. A detailed comparison of K-band absolute astrometry versus dual-band astrometry by \citet{r:kar19} did not reveal an improvement. Therefore, it is instructive to examine the contribution of the ionospheric errors at K-band after applying data reduction from GNSS TEC maps. Figure~\ref{f:k_err} shows the additional errors in right ascensions and declinations due to the residual contribution of the ionosphere. We see that the additional errors in right ascension are about 0.1~mas. Declination errors of northern hemisphere sources are at the same level. However, these errors grow approximately linearly with a decrease of declination for sources in the southern hemisphere and reach 0.3~mas at declination $-40^\circ$. Therefore, the unmodeled ionospheric contribution sets the error floor in position accuracy. This error floor is not accounted for in the ICRF3 catalogue, and the ionospheric contribution is 300\% greater than the noise floor at K-band adopted by \citet[]{r:icrf3} according to their Table~6. The potential of K-band astrometry cannot be utilized unless the accuracy of ionosphere modeling is substantially improved. 
\subsection{The rms of residual atmospheric errors} I investigated how the rms of the residual ionospheric contributions for 45 VLBA baselines for the dataset of 263 twenty-four hour observing sessions varied with time. I computed three statistics for each observing session: 1)~$\tau_v - b_i$; 2)~$\tau_v - \tau_g - b_i$; and 3)~$\tau_v - \tau_g - a_i - b_i$, and re-scaled them to 8~GHz. Here $\tau_v$ and $\tau_g$ are ionospheric path delays from VLBI and TEC maps, respectively, $a_i$ is the adjusted TEC bias, and $b_i$ is the clock bias. The statistics are shown in Figure~\ref{f:iono_mod_stat}. In order to improve readability, the time series were smoothed using a Gaussian kernel with parameter $\sigma = 1$~year. The first statistic characterizes the impact of the total ionosphere on group delay. The second statistic characterizes the rms of the impact of the residual ionospheric errors on group delay after applying the a~priori GNSS TEC model. The third statistic characterizes the impact of residual errors after applying the a~priori GNSS TEC and adjusting TEC biases using dual-band VLBI data. \begin{figure} \par\bigskip\par \includegraphics[width=0.47\textwidth]{iono_mod_stat.pdf} \caption{The rms of the mean residual ionospheric contribution at VLBA baselines at 8~GHz for three cases: 1)~no a~priori ionospheric contribution is applied (upper red curve); 2)~the a~priori ionospheric path delay computed using GNSS TEC maps is applied (middle blue curve); and 3)~the a~priori ionospheric path delay computed using GNSS TEC maps is applied and TEC biases are adjusted (lower green curve). 1~TECU causes a group delay of 21~ps at 8~GHz. } \label{f:iono_mod_stat} \end{figure} \subsection{A combined use of dual-band and single-band data} The absence of biases and the realistic assessment of errors due to the residual ionospheric contribution allow us to use a mixture of dual-band and single-band group delays in a single least squares solution.
Source positions derived from single-band data are in general less precise than those derived from dual-band data because an additional factor affects the uncertainties of group delays. These additional errors can be computed and accounted for in deriving weights of observables. An increase in the uncertainties of group delay observables propagates to the uncertainties of source positions. Combined analysis of heterogeneous datasets provides a significant advantage because all the data are used. We can fuse dual-band and single-band data into one dataset and use it for the estimation of source positions. A fused dataset consists of observables of three types: \begin{enumerate} \item Dual-band ionosphere-free linear combinations of group delays. \item Single-band delays from dual-band experiments detected at one band only. Data reduction accounts for the ionospheric path delay computed from the sum of GNSS TEC and TEC bias adjustments. The uncertainty of the bias adjustment is added in quadrature to the group delay uncertainty for the reciprocal weights of such observables. \item Group delays from single-band experiments. Data reduction accounts for the ionospheric path delay computed from GNSS TEC and applies the de-bias correction. The uncertainty of the ionospheric model computed from the regression model is added in quadrature to the group delay uncertainty for the reciprocal weights of such observables. \end{enumerate} Solving for source coordinates using a fused dataset provides estimates with minimum uncertainties and minimum correlations since it uses all available data, under the condition that the frequency-dependent position biases due to source structure are negligible. This condition may be violated for some strong sources with significant structure. Analysis of position offsets between single-band and dual-band solutions will help to identify such sources \citep[see, for example,][]{r:vgaps}, but these cases are encountered infrequently.
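The quadrature rule used in items 2 and 3 above can be written compactly; the helper below is an illustrative sketch (not part of the actual solution software), with the reciprocal weight taken as the combined uncertainty:

```cpp
#include <cmath>

// Combined uncertainty of a group delay observable: the ionosphere-related
// uncertainty (of the TEC bias adjustment or of the regression model) is
// added in quadrature to the group delay uncertainty. Illustrative helper.
double combined_sigma(double sigma_delay, double sigma_iono) {
    return std::sqrt(sigma_delay * sigma_delay + sigma_iono * sigma_iono);
}

// The reciprocal weight of the observable is the combined uncertainty,
// i.e. the least squares weight is 1 / combined_sigma.
double observable_weight(double sigma_delay, double sigma_iono) {
    return 1.0 / combined_sigma(sigma_delay, sigma_iono);
}
```

For a dual-band observable the ionospheric term vanishes and the weight reduces to the reciprocal of the group delay uncertainty alone.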
\subsection{Future work} The improvement in modeling of ionospheric path delay when TEC biases are adjusted is quite impressive. However, this requires additional information about the state of the ionosphere: VLBI ionospheric path delay from dual-band data was used for the computation of these biases. This information is not available when processing single-band observations. \citet{r:ros00} considered the use of dual-band GPS observations from collocated receivers for the analysis of radio astronomy observations. They have shown that the GPS determination of TEC from ground receivers alone, without the use of GNSS TEC maps, can be successfully applied to the astrometric analysis of VLBI observations. There are ongoing efforts to install advanced GNSS receivers within a hundred meters of each VLBA antenna. The use of geometry-free pseudo-ranges at 1.2 and 1.5~GHz, in the same way as I used VLBI ionosphere-free group delays for adjustments of TEC biases, promises a similar level of improvement. \section{Conclusions} Single-band group delays are affected by the contribution of the ionosphere. This contribution is noticeable at frequencies below 30~GHz and becomes a dominant source of errors at frequencies below 5--8~GHz. Compared with astrometric solutions based on the ionosphere-free linear combination of dual-band observables, an absolute source position catalogue based on a single-band solution is affected by additional random and systematic errors caused by mismodeling the contribution of the ionosphere to path delay. I explored two approaches to modeling the ionospheric path delay using GNSS TEC and assessed the residual ionospheric errors.
The findings can be summarized as follows: \begin{enumerate} \item When an experiment was recorded at two widely separated bands, but a fraction of sources was detected at one band only, estimating the TEC bias in the form of an expansion over the B-spline basis using dual-band data and then applying that bias in addition to GNSS TEC provides unbiased estimates of source positions. The stochastic model that describes residual errors of the TEC bias adjustment predicts the increase of position errors with an accuracy of 15\%. No remaining systematic errors were found. This approach provides source positions with the lowest uncertainties with respect to other approaches. \item In the case of single-band observations, path delay is computed using GNSS TEC maps. Using the thin spherical shell model at an ionosphere height of 450~km above the mean Earth's radius causes a strong systematic bias in declination that reaches 1~mas at 8~GHz and 12~mas at 2.3~GHz. This bias can be virtually eliminated when a)~the modified ionospheric mapping function with parameters $\Delta H=56.7~{\rm km},\, \alpha=0.9782,\, k=0.85$ is used; and b)~the empirical de-bias correction is applied. \item Ionospheric errors are mainly multiplicative. \item The scaling factor of GNSS TEC that provides a zero mean average declination bias, 0.85, is in good agreement with a totally independent comparison of vertical TEC determined from satellite altimetry with GNSS TEC maps. Since VLBI is sensitive to the delay incurred in the total ionosphere, the interpretation of a scaling factor of TEC from altimetry and radio occultation as a contribution of the upper layers of the ionosphere at altitudes above the Jason orbit, i.e. 1,350~km, suggested by \citet{r:liu18} and \citet{r:det22}, is ruled out. \item I have found that the scaling factor of GNSS TEC maps that provides a zero mean declination bias depends on the ionospheric mapping function used.
Therefore, I surmise that the established deficiency of GNSS TEC maps is caused by the over-simplification of the ionospheric mapping function used for their derivation, which considers the ionosphere as a thin spherical shell. The electron content in the real ionosphere varies not only in latitude and longitude, but also in height. Diurnal variations of the effective ionosphere height at a level of 100~km, i.e. over 20\%, are large enough to cause errors of the magnitude that was found in the analysis of VLBI data. \item The impact of the ionosphere on path delay depends on the Solar cycle. Modeling ionospheric path delay with GNSS TEC maps reduces the residuals at 8~GHz by a factor of 2 during the Solar maximum and only by 10\% during the Solar minimum. Estimation of the TEC bias reduces ionospheric errors further by a factor of 2 regardless of the phase of the Solar cycle. \item The impact of the ionosphere on source position errors can be modeled with an accuracy of 15\%. It remains noticeable even at frequencies as high as 22--24~GHz (K-band). In particular, the ionospheric errors exceed 0.1~mas even after applying data reduction based on GNSS TEC with the modified mapping function and the de-bias correction. Declination errors of sources with declinations in the range [$-40^\circ, 0^\circ$] are in the range of 0.1 to 0.3~mas. The assertion that K-band astrometry is able to provide results more precise than 0.1~mas does not hold at the current state of our ability to model ionospheric path delay. Considering ongoing efforts to install advanced GNSS receivers in the close vicinity of VLBA stations and other radio telescopes, the situation may change in the future. \end{enumerate} \par\vspace{-1ex}\par This study lays the foundation of single-band absolute astrometry. Dual-band astrometric observations still provide the best accuracy.
The use of single-band data with the procedure of data reduction and weighting described above allows us to get unbiased positions with {\it known} added errors. \par\vspace{-2ex}\par \begin{acknowledgments} This work was done with datasets RDV, RV, CN, UF001, UG003, BL122, BL166, BP133, BP138, GC073, BC204, BG219, and V17 collected with the VLBA instrument of the NRAO and available at \web{https://archive.nrao.edu/archive}. The NRAO is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. This work made use of the Swinburne University of Technology software correlator, developed as part of the Australian Major National Research Facilities Programme and operated under license. It is my pleasure to thank Yuri Y.~Kovalev, Urs Hugentobler, and Robert Heinkelmann for discussions of the presented results. \end{acknowledgments} \facility{VLBA} \software{PIMA} \bibliographystyle{aasjournal}
\section{Introduction} Despite huge efforts of researchers and industry put into identifying vulnerable software, many software systems still suffer from various security weaknesses. The concept of a code property graph (CPG) \cite{yamaguchi2014modeling} has been introduced to simplify the identification of vulnerabilities and bugs in the source code of programs. A CPG is a super-graph covering properties of an abstract syntax tree (AST), a control flow graph and data flow graphs, among others, thus containing all information relevant for a security analysis. The CPG enables its user to identify vulnerabilities or bugs by performing reusable graph queries. This perk led to a widespread adoption of the technique with several implementations \cite{xiaomeng2018cpgva,yamaguchi2014modeling,Graft,Plume,Joern,CPG,banse2021cloud,weiss2022languageindependent}. Even if the graphs mimic the source code with a minimal loss of information, the graph provides an abstraction of the actual code. This abstraction is suitable to support a language-agnostic analysis of software. Unfortunately, the implementations are limited with respect to the supported programming languages since each language requires a separate translation. As compilers suffer from a similar problem, the use of intermediate representations (IR) has become popular. The IR abstracts from the programming language but, in many cases, still contains a significant amount of high-level information such as the types of variables, which is lost in the compiled binary and can barely be recovered \cite{mantovani2022the}. The lack of such information can worsen the analysis results. A very popular IR is LLVM-IR \cite{llvmir}, which is part of the LLVM project. Numerous compiler frontends exist to translate programming languages to LLVM-IR. E.g., clang \cite{clang} translates the languages C, C++ and Objective-C to LLVM-IR and has been extended by Apple to support Swift \cite{swiftClang}.
Other frontends exist to support a wide range of programming languages (e.g., Rust). While LLVM-IR was designed for compilers, it is also frequently used by binary lifters. E.g., RetDec \cite{RetDec}, McSema \cite{mcsema}, llvm-mctoll \cite{mctoll} and reopt \cite{reopt} can lift a binary to LLVM-IR. Since some lifters support multiple architectures and types of binary files, this avoids implementing the translation for different flavors of assembly code or application binary interfaces. Recent research \cite{liu2022sp} showed that binary lifting has meanwhile become a stable technique and that its output is suitable for a security analysis of the program. Another use-case for analyzing LLVM-IR is to consider the effects of compiler optimizations during the analysis. As an example, recent research showed that side channel vulnerabilities introduced by the compiler are still a major concern of developers of cryptographic libraries \cite{jancar2022sp} and that the source code and the final binary files can differ significantly \cite{balakrishnan2010wysinwyx}. Since the LLVM-IR can be emitted after the optimizations, it can already contain the vulnerabilities or bugs which stem from the compiler and is therefore an interesting analysis target. In this paper, we present an approach to overcome shortcomings of existing CPG tools by enabling the analysis of LLVM-IR in a code property graph. This bridges the gap between the analysis of source code written in higher-level programming languages and the analysis of programs (or dependencies) that may only exist in binary form. While supporting LLVM-IR in a CPG seems to be straightforward, several challenges arise from the static single assignment (SSA) form, the exception handling routine, instructions which do not exist in high-level programming languages and significantly different syntactic representations of some concepts in LLVM-IR and other languages.
Contrary to prior work, we do not require running any LLVM passes beforehand, which helps us to keep the graph smaller. At the same time, we aim to retrieve as much high-level information as possible and map the code to high-level concepts whenever possible. Rather than handling LLVM-IR-specific instructions, e.g., \textit{cmpxchg}\footnote{The \textit{cmpxchg} instruction compares a given argument against a value stored in a memory address. If they are equal, a new value, specified in a second argument, is stored in memory. This is similar to \texttt{if (*addr == arg0) \{*addr = arg1;\}} in C/C++.}, only as generic function calls, we translate the concepts into existing CPG node types that represent the behavior of a higher-level programming language. This allows us to re-use existing concepts in queries, such as if-statements or pointer referencing. Overall, integrating LLVM-IR into a CPG makes it possible to support more programming languages, to analyze binary files and to validate that a compiler did not introduce new bugs. It enables existing analysis queries for source code to be applied to the LLVM-IR without any modifications. In summary, our contributions are as follows: \begin{itemize} \item We are the first to present a mapping of all LLVM-IR instructions to existing CPG nodes with full compatibility to the existing structure. This ensures that existing analyses are fully compatible with the representation. \item We show how we can keep the size of the CPG minimal. \item We are the first to include LLVM-IR's exception handling routines in a CPG. \item We extended the open source project \textit{cpg} \cite{weiss2022languageindependent,CPG} to support our concepts. \end{itemize} \section{Background} \label{sec:background} \subsection{The Code Property Graph} The \textit{cpg} project \cite{weiss2022languageindependent,CPG} enables a graph-based representation of source code of different programming languages.
To date, the focus lies on Java and C/C++, but experimental support for Python, Go and TypeScript is also available. The goal of the project is to provide a language-agnostic representation of the source code. This enables a security expert to identify vulnerabilities or bugs. Furthermore, the \textit{cpg} library comprises a way to store the graph in neo4j\footnote{\url{https://neo4j.com/}}, and makes the graph accessible via a command line interface. For some cases, the library can also evaluate the value that a node can hold. All this allows a security expert to write custom queries either to the graph database or the in-memory representation of the CPG. The \textit{cpg} library is designed in a way to allow reusing these queries among all supported programming languages. To fulfill this goal, the \textit{cpg} library implements a thorough class hierarchy which accounts for various types of statements and expressions. The CPG encodes information such as the class hierarchy of the code under analysis, the control flow graph, and the call graph in a single graph. The current design mainly targets object-oriented programming languages. To deal with a possible lack of some code fragments or errors in the code, the library is resilient to incomplete, non-compilable and, to a certain extent, even incorrect code. \subsection{The LLVM Intermediate Representation} \noindent\textbf{The Instructions.}~ The LLVM-IR is used as the IR of the LLVM project. Its main purpose lies in providing an abstraction of code to ease the optimization and analysis of the program in a language- and architecture-independent way. The LLVM-IR holds values in global variables (prefixed with \texttt{@}) and local variables (prefixed with \texttt{\%}), both of which can be named or unnamed. The LLVM-IR follows the static single assignment (SSA) form. Hence, every variable can be written to exactly once.
This limitation does not affect global variables as they are represented as memory locations and are accessed via store or load operations. Overall, the LLVM-IR differentiates between 65 instructions. Of these, 13 are arithmetic operations, 6 are bitwise operations, and 13 instructions cast types. The remaining instructions call functions, handle exceptions, load from or store to memory, manipulate aggregated types or jump to other program locations. The instructions can be enhanced with metadata to note the calling convention, optimizations or desired properties of functions and parameters, among others. Besides the basic instructions, LLVM-IR contains numerous so-called ``intrinsics''. Those are functions which model certain standard library functionality, or model frequent actions which have to be represented differently on different architectures. Some intrinsics repeat or refine basic instructions, others insert functionality such as the automated memory management in Objective-C. The LLVM-IR supports a simple type system and differentiates between a set of primitive types and aggregated types such as structs, arrays and vectors. Additionally, LLVM-IR has a type for labels (i.e., jump targets), metadata and a so-called token which is used by certain instructions to transport information. Overall, the type system resembles C rather than object-oriented programming languages. In fact, object-oriented concepts are handled by the respective language frontend in LLVM. The frontend translates the object-oriented properties to concepts such as VTables for overriding methods, and method name mangling to support overloaded functions. In the case of Objective-C, it uses the dynamic dispatching strategy. Other languages make use of similar concepts. \noindent\textbf{Accessing LLVM-IR.}~ The LLVM project offers a C++ and a C API to parse LLVM-IR and LLVM bitcode files.
As the CPG project is mainly implemented in Java, access to the API has to be provided via the Java Native Interface (JNI). We use the open source project javacpp-presets\footnote{\url{https://github.com/bytedeco/javacpp-presets/tree/master/llvm/src/gen/java/org/bytedeco/llvm}} which provides access to the C API via JNI. Unfortunately, the C API has a flat type hierarchy in its functions to access the LLVM-IR's AST, which makes the parsing of instructions and the extraction of their elements error-prone\footnote{Typically, an incorrect API call leads to a segfault.}. However, as our evaluation in Section \ref{sec:eval} shows, our implementation works in a stable way. \section{Related Work} \label{sec:related_work} \noindent\textbf{Code Property Graphs.}~ Researchers and industry proposed multiple use cases and implementations of CPGs and analysis tools \cite{xiaomeng2018cpgva,click1995a,yamaguchi2014modeling,Graft,Plume,Joern,CPG,banse2021cloud,weiss2022languageindependent,schuette2019lios}. All of these tools differ in their support for programming languages. Closest to our work is the tool \texttt{llvm2cpg} \cite{llvm2cpg} which uses Joern \cite{Joern} as graph representation. The respective CPG represents most instructions as function calls and does not try to infer any of the high-level information. Furthermore, it uses the \textit{reg2mem} LLVM pass to address the $\varphi$ instruction of LLVM-IR, which significantly increases the code base. This results in additional instructions present in the graph and thus slows down the analysis and makes it more error-prone. liOS \cite{schuette2019lios} constructs a CPG holding assembly instructions and the function bodies lifted to LLVM-IR to analyze iOS apps. The graph model cannot be used to represent source code. Furthermore, liOS does not specifically address LLVM-IR instructions since the analyses mainly operate on assembly code.
Plume \cite{Plume} and Graft \cite{Graft,keirsgieter2020graft} only support Java bytecode, a different low-level language. Plume builds the graph incrementally to analyze data flows and has been merged into Joern in a revised version. Graft follows a similar goal. Other tools \cite{click1995a,yamaguchi2014modeling,Joern,CPG,weiss2022languageindependent} analyze source code and differ in their level of abstraction and supported languages. Some tools extend CPGs for specific use cases, e.g., analyzing cloud apps \cite{banse2021cloud} or finding vulnerabilities with deep learning \cite{xiaomeng2018cpgva}. \noindent\textbf{Graph-based Security Analysis.}~ Various other works investigated the usage of other graph-based representations of the source code to identify bugs or vulnerabilities \cite{urma2015source,yamaguchi2012generalized,yamaguchi2015automatic} or similar code fragments \cite{gascon2013structural,baxter1998clone}, traverse the graph \cite{Rodriguez2015} or improve the analysis \cite{lam2005context}. These works aim to provide a rich basis for analyzing the graphs. Many of the proposed techniques operate on other graph structures (e.g. the AST). However, the CPG combines a multitude of information and includes the respective relations, thus making the required information available for the analysis. Hence, these approaches can still be applied to the CPG. \noindent\textbf{Static Analysis of Multiple Programming Languages.}~ Other works target the analysis of multiple programming languages \cite{caracciolo2014pangea,flores2015cross,flores2011towards,angerer2014variability,mushtaq2017multilingual,mayer2012cross}. Some of the frameworks rely on language-agnostic ASTs \cite{schiewe2022advancing,zugner2021language} or aim to provide a common pattern for the AST of multiple languages \cite{rakic2013language,strein2006cross}. However, ASTs cannot be used to find all kinds of bugs as they do not contain the required information \cite{yamaguchi2014modeling}.
Teixeira et al. \cite{teixeira2021multi} even propose to translate source code to a custom language. Furthermore, various intermediate representations (IRs) have been proposed either for compilers (e.g., LLVM \cite{lattner2004llvm}, GIMPLE \cite{gimple}, HIR \cite{hir} or CIL \cite{ecmaecma}), or specifically targeting code analysis (e.g. VEX IR \cite{nethercote2007valgrind,vex}, jimple \cite{vallee1998jimple}, BIL \cite{brumley2011bap}, REIL \cite{dullien2009reil}, ESIL \cite{esil}, DBA \cite{bardin2011bincoa,david2016binsec} or RASCAL \cite{klint2009rascal}). Since the IRs are often tailored to a specific use case or language, they differ in the information available in the instructions and their abstractions. Many of the IRs are integrated in analysis toolchains whose analyses are often specific to a use case and cannot easily be ported to other tools. Therefore, integrating such IRs in an abstract analysis platform like the CPG can enable further generalized security analysis. Numerous tools \cite{fbinfer,sonarqube,checkmarx,appscreener,codacy,codeql,avgustinov2016ql,de2007keynote,codechecker,coverity,deepsource,lgtm,wala} can analyze multiple programming languages. However, they can often barely share analyses between languages. The CPG representation allows reusing analyses across languages. \section{Mapping LLVM-IR to CPG nodes} \label{sec:mapping} We aim to include LLVM-IR in the CPG while reusing only the existing node types and representing LLVM-specific constructs similar to their equivalents in languages which are already supported by the CPG. We also want to keep the number of nodes minimal. In this section, we present how we represent 1) arithmetic and logical instructions, 2) access to aggregate types, 3) the $\varphi$ instruction with a minimal increase of nodes, and 4) LLVM-IR's exception handling routine. \subsection{Basic Instructions} Many instructions are known from other programming languages.
We can coarsely differentiate between arithmetic and logical operations, operations which enforce specific interpretations of types, and operations which are composed of numerous steps but are often performed atomically on the CPU. In this section, we explain how we include those respective instructions in the CPG. Almost all programming languages have a common subset of instructions or operations. This includes arithmetic, bitwise and logic operations, or comparisons which we map to their representation in high-level languages (\texttt{+}, \texttt{-}, \texttt{*}, \texttt{/}, \texttt{\%}, \texttt{$\hat{ }$} , \texttt{\&}, \texttt{|}, \texttt{<<}, \texttt{>>}, \texttt{<}, \texttt{<=}, etc.). Other instructions like jumps, calls, and return instructions are modeled with their representation in C code. For if- and switch/case-statements, the branches or cases contain a simple goto statement. Later, a CPG pass removes such indirections whenever possible to reduce the size of the graph. For some instructions, LLVM-IR can enforce a specific interpretation of the types of the arguments. E.g., the instructions \texttt{udiv}, \texttt{sdiv} and \texttt{fdiv} represent a division and are mapped to the binary operator \texttt{/}. However, they interpret the values as unsigned (\texttt{udiv}), signed (\texttt{sdiv}) or as floating point values (\texttt{fdiv}). In the CPG, we add typecasts to the arguments to enforce the correct interpretation. In addition, some comparators of floating point values check whether the operands are ordered or not (i.e., whether one of them is \texttt{NAN}). We split these comparisons into a check if the operands are ordered and then the actual comparison. E.g., the comparators \texttt{ult} and \texttt{olt} compare two floating point values and are mapped to the \texttt{<} operator. However, the \texttt{ult} comparison checks if the operands are unordered or if \texttt{a} is less than \texttt{b} and thus is modeled as the statement \texttt{std::isunordered(a,b)||a<b}.
Similarly, we model the \texttt{olt} comparison with \texttt{!std::isunordered(a,b)\&\&a<b}. Some of LLVM's instructions like \texttt{atomicrmw} and \texttt{cmpxchg} are known from assembly code rather than high-level languages and perform multiple operations atomically. The \texttt{cmpxchg} instruction loads a value from memory and replaces it with an operand if the value equals another operand. In the CPG, we model this by a block of statements holding the comparison, an if-statement and the assignment in the then-branch. We annotate the block to keep the information that all of this is performed atomically. Similarly, we model \texttt{atomicrmw} as a block of statements performing a load, an optional comparison and if-statement, and an assignment to a variable. By modeling these instructions with a representation similar to source code, we simplify subsequent analyses. In contrast, prior work \cite{llvm2cpg} models these instructions as calls to custom functions. \subsection{Handling Aggregate Types} High-level languages provide syntactic means to access elements of complex types like arrays, structs or objects. In LLVM-IR, arrays and structs are still present and their values can be accessed by special instructions. For arrays which are represented as a vector, the instructions \texttt{extractelement} and \texttt{insertelement} provide access to the elements. \begin{figure}[t] \centering \includegraphics[width=0.78\columnwidth]{figures/insertvalue.png} \caption{The graph representing the insertvalue instruction. We can see the literal struct which is generated as well as the access to the field.} \label{fig:insertvalue} \end{figure} Both instructions are represented as an \texttt{ArraySubscriptionExpression} in the CPG, one being the left-hand side of the assignment and one the right-hand side. Note that \texttt{insertelement} returns the modified vector and does not modify the existing one.
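The expansions described in the previous subsection can be summarized in a small C++ sketch (function names are ours; the atomicity of \texttt{cmpxchg}, which the CPG records as an annotation on the block, is elided here):

```cpp
#include <cmath>

// fcmp ult: true if the operands are unordered (either is NaN) or a < b.
bool fcmp_ult(double a, double b) {
    return std::isunordered(a, b) || a < b;
}

// fcmp olt: true only if the operands are ordered and a < b.
bool fcmp_olt(double a, double b) {
    return !std::isunordered(a, b) && a < b;
}

// cmpxchg expanded into a comparison, an if-statement and an assignment
// in the then-branch; the instruction also yields the loaded value.
int cmpxchg_model(int *addr, int expected, int desired) {
    int old = *addr;      // load the current value from memory
    if (old == expected)  // comparison
        *addr = desired;  // assignment in the then-branch
    return old;
}
```

The two predicates differ only on NaN inputs, which is exactly the distinction between the \texttt{u}- and \texttt{o}-prefixed comparison predicates.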
For all other aggregated types, the instructions \texttt{getelementptr}, \texttt{extractvalue}, and \texttt{insertvalue} model the access to the element either by the index inside an array or by the position of a field inside a structure. The code \texttt{\%b = insertvalue \{i32, i8\} \%a, i8 7, 1} shows how the second element of the variable \texttt{a} is set to \texttt{7}. We model the instruction as a copy of \texttt{a} to the variable \texttt{b} and an assignment of the value \texttt{7} to the accessed \texttt{field\_1}. Figure \ref{fig:insertvalue} shows the resulting graph with the initialization of \texttt{b} on the bottom right, and the access to the field on the left. The example uses an interesting concept of LLVM-IR: a so-called literal structure, a struct whose layout is defined in the instruction. For such structs, we generate a type which is identified by the types of its fields. Hence, all literal structs with the same list of fields are regarded as the same type. In our example, the struct is named \texttt{literal\_i32\_i8} and has the fields \texttt{field\_0} of type \texttt{i32} and \texttt{field\_1} of type \texttt{i8}. The top left of Figure \ref{fig:insertvalue} shows the declaration of the type. While the instructions \texttt{insertvalue} and \texttt{extractvalue} read or write the values themselves, it is sometimes desirable to retrieve a pointer to an element of a structure. For this case, the instruction \texttt{getelementptr} computes a memory address without accessing memory. Listing \ref{lst:getelementptr} illustrates the usage of this instruction on a named struct. Listing \ref{lst:cGetelementptr}, in turn, shows the same code written in C. Figure \ref{fig:getelementptr} shows the definition of the named struct and the connections between the fields for the graph retrieved from LLVM-IR. The result is remarkably similar to the graph in Figure \ref{fig:cGetelementptr} which represents the C code.
This similarity lets us reuse existing analyses for the graphs retrieved from LLVM-IR and shows that the graphs are structurally identical. In fact, the relations between variables and fields could be better resolved, which can lead to improved analysis results. \begin{minipage}{\linewidth} \begin{lstlisting}[caption={The instruction getelementptr for a named struct},label={lst:getelementptr}]
define i32* @foo(%struct.ST* %s) {
  %t = getelementptr %struct.ST, %struct.ST* %s,
                     i64 1, i32 2, i32 1, i64 5, i64 13
  ret i32* %t
}
\end{lstlisting} \end{minipage} \begin{minipage}{\linewidth} \begin{lstlisting}[caption={The C code for the example in Listing \ref{lst:getelementptr}},label={lst:cGetelementptr}]
struct RT { char A; int B[10][20]; char C; };
struct ST { int X; double Y; struct RT Z; };
int *foo(struct ST *s) {return &s[1].Z.B[5][13];}
\end{lstlisting} \end{minipage} \begin{figure}[htb!] \centering \begin{subfigure}{\textwidth} \includegraphics[width=\columnwidth]{figures/graph_getelementptr_comparison_llvm.png} \caption{The graph representing the getelementptr instruction.} \label{fig:getelementptr} \end{subfigure} \begin{subfigure}{\textwidth} \includegraphics[width=\columnwidth]{figures/graph_getelementptr_comparison_c.png} \caption{The graph representing the C code.} \label{fig:cGetelementptr} \end{subfigure} \caption{Comparison of the LLVM code using getelementptr and the respective C code. The graph contains structs (light pink) and their fields (light brown), the access to fields (dark pink), access to elements in arrays (brown) and the return instruction (purple). The green nodes are constant values, the yellow node is the method's argument. The structure of both graphs is nearly identical.} \end{figure} \subsection{The $\varphi$-Instruction} The SSA form enforces that each variable is assigned exactly once in LLVM-IR. However, in some cases, it is required to assign a value multiple times. A frequent example is a loop counter which is set before executing the loop and is updated on each iteration.
To allow such behavior without duplicating code and without storing the values in memory, the $\varphi$-instruction is used. It assigns the target variable one of the inputs based on the previously executed basic block (BB). As most programming languages do not have such an instruction, there is no fitting node to represent this in the CPG. To address this issue, prior work \cite{llvm2cpg} relied on the LLVM reg2mem pass\footnote{\url{https://llvm.org/doxygen/Reg2Mem_8cpp_source.html}} which translates the instruction to multiple load and store operations. However, this pass also transforms the access to other variables and thus significantly increases the size of the resulting CPG. As this reduces the scalability of subsequent analyses, we avoid this LLVM pass. We collect all $\varphi$-instructions during the translation. Finally, we parse the instructions to identify the predecessor BBs and add an assignment to the target variable at the end of each predecessor BB. To keep the CPG clean, we further insert a declaration of the variable at the beginning of the function which contains the $\varphi$-instruction and all its BBs\footnote{For all other variables, the statement of the assignment performs the declaration.}. This, however, breaks the SSA form. The snippet in Listing \ref{lst:phi} contains the $\varphi$-instruction while Listing \ref{lst:phiAfter} shows the function's model in the CPG.
\begin{figure}[tb] \noindent\begin{minipage}{.45\textwidth} \begin{lstlisting}[caption={Code snippet using the $\varphi$-instruction},label={lst:phi}]
define i32 @main(i32 %a) {
  %cond = icmp eq i32 %a, 0
  br i1 %cond, label %BB1, label %BB2
BB1:
  br label %BB3
BB2:
  br label %BB3
BB3:
  %b = phi i32 [ 1, %BB1 ], [ 2, %BB2 ]
  ret i32 %b
}
\end{lstlisting} \end{minipage} \hfill \begin{minipage}{0.45\textwidth} \begin{lstlisting}[caption={Snippet using the $\varphi$-instruction as modeled in the CPG},label={lst:phiAfter}]
define i32 @main(i32 %a) {
  ; VariableDeclaration of %b
  %cond = icmp eq i32 %a, 0
  br i1 %cond, label %BB1, label %BB2
BB1:
  %b = 1 ; assignment at the end of the BB
  br label %BB3
BB2:
  %b = 2 ; assignment at the end of the BB
  br label %BB3
BB3:
  ret i32 %b
}
\end{lstlisting} \end{minipage} \end{figure} \subsection{Exception handling} LLVM-IR offers a rich system for exception handling. The CPG represents exception handling routines with try-catch statements. To make the LLVM-IR fit into this pattern, we need to identify which instructions form a try-block and which ones form a catch-block. Concerning the try-block, we represent the \texttt{invoke} instruction as a try-block surrounding a function call and a goto-statement. For the catch-blocks, however, such a straightforward model is not possible. In LLVM, the \texttt{catchswitch} instruction selects a matching \texttt{catchpad} based on the signature of the catchpad-instruction of a basic block. The catchpad contains the code of the catch-block and is ended by a \texttt{catchret} instruction. However, the matching and signature cannot easily be transferred to a high-level name. Therefore, we model this construct as a catch-block which catches all exceptions and contains if-statements representing the signature matching. If none of them matches, the exception is thrown again. The remaining constructs such as the \texttt{cleanuppad} and its \texttt{cleanupret} instruction are not modeled specifically. Another way to mark a catch-block is the \texttt{landingpad}-instruction which, again, filters for the right object to catch. Once more, the matching is specific to the programming language and thus, modeling this is left to future work.
If we cannot translate the instructions to concepts supported by the CPG, we model them as special functions similar to the LLVM intrinsics. \section{LLVM-Specific CPG Passes} \label{sec:pass} During the translation of the LLVM instructions to CPG nodes, the frontend generates various instructions which later turn out to be unnecessary and thus can be removed. This clean-up phase takes place in a pass over the CPG nodes. First, neither the conditional jumps nor the switch/case-statements initially incorporate a meaningful body of statements. Instead, they are modeled as goto statements to another basic block. The pass identifies all basic blocks which have only a single predecessor and replaces the respective goto-statement with the basic block itself. Note that we do not perform this transformation if multiple predecessors exist because it would unnecessarily increase the number of nodes in the graph. Second, the pass removes the instructions which serve as intermediate steps during the generation of catch-blocks and propagates the caught exception to the final throw statement if none of the catchpad instructions matches. As we explicitly aim to handle lifted or decompiled code, a second pass can remove method stubs, i.e., methods whose only purpose is to call a library method. The main purpose of this pass is to simplify subsequent analyses. \section{Experimental Evaluation} \label{sec:eval} To reuse the same analyses for the graphs constructed from source code as well as the ones containing LLVM-IR, we carefully designed the translation to mimic the concepts used in source code as closely as possible. In this section, we first show a case study which demonstrates that we can reuse queries that aim to identify security concerns in source code to query LLVM-IR. Second, we test the implementation against the Rust standard library to show the applicability of the approach to large-scale projects.
All measurements were performed on Ubuntu 20.04 running on an Intel i5-6200U CPU with 20 GB of RAM. \begin{table}[tb] \centering \caption{Results for detecting misuse of cryptographic libraries.} \label{tab:ssl_res} \begin{tabular}{|l|c|c|c|c|}\hline & \textbf{Analysis time [ms]} & \textbf{\# Nodes} & \textbf{\# Functions} & \textbf{Problem found} \\\hline \multicolumn{5}{|c|}{\cellcolor{gray}Source Code} \\\hline Original file & 171 & 328 & 38 & Yes \\\hline \multicolumn{5}{|c|}{\cellcolor{gray}macOS M1 using XCode} \\\hline Compiled ll & 1091 & 5279 & 151 & Yes \\\hline Lifted ll & 256 & 1743 & 76 & Yes \\\hline Decompiled & 179 & 971 & 149 & No \\\hline \multicolumn{5}{|c|}{\cellcolor{gray}Ubuntu x86-64 clang} \\\hline Compiled ll & 163 & 1371 & 57 & Yes \\\hline Lifted ll & 127 & 911 & 48 & Yes \\\hline Decompiled & 80 & 594 & 101 & Yes \\\hline \multicolumn{5}{|c|}{\cellcolor{gray}Ubuntu x86-64 g++} \\\hline Lifted ll & 242 & 1702 & 89 & Yes \\\hline Decompiled & 148 & 1137 & 200 & Yes \\\hline \multicolumn{5}{|c|}{\cellcolor{gray}Linux AArch64 (cross compiled)} \\\hline Lifted ll & 250 & 1891 & 93 & Yes \\\hline Decompiled & 158 & 1176 & 209 & Yes \\\hline \multicolumn{5}{|c|}{\cellcolor{gray}Linux arm 32 bit (cross compiled)} \\\hline Lifted ll & 132 & 1123 & 51 & Yes \\\hline Decompiled & 71 & 626 & 102 & Yes \\\hline \end{tabular} \end{table} \subsection{Case Study: Cryptographic Misuse} \label{sec:crypto} This case study is driven by the anticipated use cases of the CPG on LLVM-IR. First, it should enable a security analysis of the LLVM-IR without the need to rewrite existing analyses. Second, it should be scalable by introducing a minimal number of nodes. The toolchain should be able to operate on LLVM-IR emitted during the compilation of a program (subsequently, we call this ``compiled LLVM-IR'') or when lifting a binary (we call this ``lifted LLVM-IR'').
To show that these properties are fulfilled, we 1) compare the sizes of graphs retrieved from compilers and lifters, 2) compare the runtime of the analysis, and 3) show that the weakness can be identified with the same analysis in all samples. We implemented a TLS-client in C++ which uses the \texttt{openssl} library. It accepts the insecure hashing algorithm MD5 as one of the options. First, we tested the toolchain against the original C++ file, where it identified the respective issue. Next, we used XCode on macOS with the M1 chip and clang on Ubuntu running on an x86-64 CPU to emit the LLVM-IR which can be retrieved during compilation. As LLVM-IR also serves as the target language for many lifters, we lifted binaries of the test file which had been compiled on the Mac and on Ubuntu with various compilers. We use RetDec \cite{RetDec} to lift the binaries to LLVM-IR and also decompiled them to a C-style file\footnote{We compiled a custom version of RetDec to update the disassembler and support the \texttt{endbr64} instruction which had not been supported at the time of the experiments.}. Table \ref{tab:ssl_res} summarizes the analysis time, how many nodes and functions are included in the graph, and whether the problem could be found successfully. We discuss the observations in the following paragraphs. \noindent\textbf{Size of the graphs.}~ One of our goals is to keep the sizes of the graphs small. Therefore, we compare the size of the graphs retrieved from compiled and lifted LLVM-IR and when decompiling a binary file. One observation is the significant increase in functions contained in the LLVM-IR compared to the original C++ file. This can be explained by stubs introduced by the compiler. Note, however, that RetDec seems to remove some of the functions which have been introduced during compilation. This reduction facilitates and speeds up a subsequent security analysis on the resulting graph.
Not only does RetDec reduce the number of functions contained in the binary, but it also reduces the number of nodes compared to compiled LLVM-IR. This observation is in line with recent research which found that some lifters, including RetDec, can reduce the complexity of the code represented by LLVM-IR as well as the number of elements it contains \cite{liu2022sp} while keeping the main functionality of the code available. The authors further observed that RetDec's output is not suitable for recompiling in most cases. However, as the CPG library aims to handle incomplete, non-compilable, and to a certain extent even incorrect code, this limitation should not affect the representation and further analysis. Compared to the lifted LLVM-IR, the decompiled C files contain more functions but fewer nodes. This is explained by the possibility of summarizing multiple LLVM-IR instructions in a single C statement. Overall, the reduction of nodes can be explained by RetDec's passes which aim to eliminate unnecessary code. \noindent\textbf{Runtime of the analysis.}~ We ran the translation to the CPG and the bug detection query 100 times for each of the files and report the average runtimes in Table \ref{tab:ssl_res}. First, it is interesting to note that the analysis time of the decompiled files is comparable to that of the original C++ file. The reduced number of nodes explains the speedup in some cases. The overall analysis time for the LLVM-IR files ranges between 0.74 and 11.1 times that of the original file. It is notable that the graphs retrieved from the LLVM-IR files contain 2.8 to 16.1 times as many nodes as the original file's graph, yet the runtime grows considerably less than the node count. \noindent\textbf{Identification of weaknesses.}~ To detect the misconfiguration in the test file, we implemented a query to identify the arguments of calls to the function \texttt{SSL\_CTX\_set\_cipher\_list}.
To implement this analysis, we use the constant propagation implemented in the analysis module included in the CPG library\footnote{\url{https://github.com/Fraunhofer-AISEC/cpg/tree/master/cpg-analysis}}. With the query, we are able to identify the flaw in the original C file and in the compiled and lifted LLVM-IR files. However, when decompiling the binary compiled on macOS using the M1 chip, we failed to identify the misuse. We manually investigated the case and found that the CDT library\footnote{\url{https://www.eclipse.org/cdt/}} which the CPG library uses for parsing the C file fails to identify the name of a field correctly. Therefore, the data flow between the field and the method call is not resolved. \noindent\textbf{Stability of the translation.}~ All samples could be represented in the CPG without crashes. However, the LLVM-IR retrieved during compilation of a program exhibits much richer semantics and uses a wider variety of instructions. This results in warnings, some of which show that nested instructions are not yet handled. The others indicate that a different scoping for variables in a try-catch block is expected because LLVM-IR's scoping differs from that of other languages.
\begin{table}[tb] \centering \caption{Performance when analyzing Rust libraries.} \label{tab:rust} \resizebox{\textwidth}{!}{ \begin{tabular}{|l|l|c|c|c|c|c|}\hline \# & \textbf{Filename} & \textbf{LoC} & \textbf{\# Nodes} & \textbf{\# Functions} & \textbf{\# Errors} & \textbf{Analysis time [ms]} \\\hline 1 & addr2line & 879 & 2327 & 29 & 9 & 3641 \\\hline 2 & adler & 488 & 1707 & 25 & 2 & 507 \\\hline 3 & alloc & 4925 & 13482 & 253 & 91 & 6505 \\\hline 4 & cfg\_if & 9 & 1 & 0 & 0 & 23 \\\hline 5 & compiler\_builtins & 9990 & 34304 & 338 & 0 & 23670 \\\hline 6 & core & 80193 & 263729 & 3608 & 1879 & 2872096 \\\hline 7 & gimli & 23702 & 72845 & 411 & 43 & 112269 \\\hline 8 & hashbrown & 276 & 529 & 26 & 0 & 193 \\\hline 9 & libc & 1477 & 3619 & 130 & 0 & 646 \\\hline 10 & memchr & 11063 & 40602 & 257 & 108 & 32639 \\\hline 11 & miniz\_oxide & 15760 & 54868 & 294 & 166 & 79863 \\\hline 12 & object & 14174 & 50060 & 277 & 5 & 47806 \\\hline 13 & panic\_abort & 71 & 87 & 9 & 0 & 124 \\\hline 14 & panic\_unwind & 927 & 2619 & 67 & 25 & 610 \\\hline 15 & proc\_macro & 92115 & 244010 & 5488 & 2570 & 15260350 \\\hline 16 & rustc\_demangle & 14669 & 44069 & 437 & 309 & 43281 \\\hline 17 & rustc\_std\_workspace\_alloc & 9 & 1 & 0 & 0 & 107 \\\hline 18 & rustc\_std\_workspace\_core & 9 & 1 & 0 & 0 & 102 \\\hline 19 & std & 157377 & 468223 & 5923 & 2629 & 9303378 \\\hline 20 & std\_detect & 558 & 1921 & 15 & 0 & 659 \\\hline 21 & unwind & 106 & 230 & 2 & 0 & 273 \\\hline \end{tabular}} \end{table} \subsection{Application to the Rust Runtime} To assess the applicability to real-world programs, we retrieved the LLVM-IR from the standard and core libraries of Rust. We chose Rust since it is not yet supported by the CPG implementation and provides the option to compile to LLVM-IR. Overall, the test set includes 21 distinct LLVM files which are listed in Table \ref{tab:rust} together with their size and the results. 
We report the time it took to translate the file (including various CPG passes) as well as the number of nodes which could not be parsed accurately. For the latter, we need to extend the LLVM-specific translation to include more cases of ``nested'' LLVM expressions. \begin{figure}[tb] \centering \resizebox{0.9\textwidth}{!}{ \begin{tikzpicture} \begin{axis}[ xlabel={Nodes/LoC}, ylabel={ProblemNodes/Nodes [\%]}, ] \addplot[color=blue, mark=*, only marks] coordinates {(2.647326507,0.3867641) (3.49795082,0.1171646) (2.7374619,0.674974) (0.11111111,0) (3.433833934,0) (3.288678563,0.71247379) (3.073369336,0.05902944) (1.91166667,0) (2.450236967,0) (3.670071409,0.265996749) (3.481472081,0.30254428811) (3.531818823,0.00998801438) (1.225352113, 0) (2.825242718,0.95456281023) (2.648971394,1.05323552313) (3.0042266,0.70117316027) (0.11111111,0) (0.11111111,0) (2.97516791,0.56148459174) (3.44265233,0) (2.169811321,0)}; \end{axis} \begin{axis}[ xlabel={LoC}, ylabel={ProblemNodes/Nodes [\%]}, xticklabel pos=right, xlabel near ticks, legend pos=outer north east ] \addplot[color=blue, mark=*, only marks] coordinates {(1,0)}; \addlegendentry{Nodes/LoC vs. ProblemNodes/Nodes}; \addplot[color=red, mark=o, only marks] coordinates {(879,0.3867641) (488,0.1171646) (4925,0.674974) (9,0) (9990,0) (80193,0.71247379) (23702,0.05902944) (276,0) (1477,0) (11063,0.265996749) (15760,0.30254428811) (14174,0.00998801438) (71, 0) (927,0.95456281023) (92115,1.05323552313) (14669,0.70117316027) (9,0) (9,0) (157377,0.56148459174) (558,0) (106,0)}; \addlegendentry{LoC vs. ProblemNodes/Nodes}; \end{axis} \end{tikzpicture}} \caption{Relation between lines of code, nodes in the CPG and the fraction of ProblemNodes. For non-trivial samples, the error-rates are randomly distributed.} \label{fig:errors} \end{figure} \noindent\textbf{Stability.}~ We want to assess the maturity level of the translation step against a large and unknown codebase consisting of a total of $428,777$ lines of LLVM-IR. 
To measure this, the graph includes specific nodes, called \texttt{ProblemNode}, for each expression which could not be parsed correctly. While we handle all types of instructions, some arguments of the instructions can be computed in line by type casts or simple arithmetic operations, among others. Overall, we observed $7,836$ such \texttt{ProblemNode}s, which accounts for $0.60\%$ of all $1,299,234$ nodes. This result is encouraging and indicates that the current implementation is already capable of handling the vast majority of all combinations of statements\footnote{We will manually investigate the ProblemNodes to parse the statements in the future.}. The fraction of nodes which cannot be handled differs significantly among the samples and ranges between $0\%$ and $1.05\%$. One could suspect that larger files are more likely to lead to an error during the translation. In addition, it is possible that the varying complexity of the code could trigger more errors. To validate this, we use the average number of CPG nodes per line of code as a measure of the complexity of the LLVM instructions. Among the samples, this ratio ranges between $1.22$ and $3.67$. We plot this relation in Figure \ref{fig:errors}. Neither of the plots gives a strong indication for this hypothesis since the error rates seem to be randomly distributed for all non-trivial samples. Hence, neither the size nor the complexity of the samples leads to a conceptual limitation. Instead, some samples use unsupported expressions more frequently, which can easily be addressed in the implementation.
\begin{figure}[tb] \centering \resizebox{0.6\textwidth}{!}{ \begin{tikzpicture} \begin{axis}[ xlabel={\# Nodes in the CPG}, ylabel={Analysis time [s]}, ymode=log, legend pos=south east ] \addplot[color=blue, mark=square] coordinates {(1,0.023) (1,0.102) (1,0.107) (87,0.124) (230,0.273) (529,0.193) (1707,0.507) (1921,0.659) (2327,3.641) (2619,0.610) (3619,0.646) (13482,6.505) (34304,23.670) (40602,32.639) (44069,43.281) (50060,47.806) (54868,79.863) (72845,112.269) (244010,15260.350) (263729,2872.096) (468223,9303.378)}; \addlegendentry{Including CPG passes}; \addplot[color=red, mark=o] coordinates {(1,0.012) (1,0.181) (1,0.519) (87,0.137) (230,0.22) (529,0.129) (1707,0.885) (1921,0.349) (2327,5.108) (2619,0.296) (3619,0.404) (13482,2.057) (34304,5.203) (40602,3.828) (44069,7.850) (50060,3.846) (54868,10.354) (72845,24.883) (244010,248.362) (263729,132.879) (468223,515.532)}; \addlegendentry{Only LLVM-related CPG pass}; \end{axis} \end{tikzpicture}} \caption{Analysis time vs. \# Nodes. Note the logarithmic y scale.} \label{fig:at_nodes} \end{figure} \noindent\textbf{Scalability.}~ Another goal is to assess the scalability of the implementation on real-world software with many lines of code. Two factors can impact the analysis time: the lines of code and the number of nodes in the graph. According to Table \ref{tab:rust}, an increase in LoC leads to more nodes in the graph in most cases. Figure~\ref{fig:at_nodes} plots the time of the analysis (i.e., the translation to the CPG and all CPG passes but the \texttt{ControlFlowSensitiveDFGPass}) against the number of nodes. With the exception of one sample, the analysis time seems to grow linearly with the number of nodes in the graph. Interestingly, when we only consider the analysis time of the LLVM-specific translation and pass of the CPG, the outlier is no longer present.
This shows that the LLVM-related translation and CPG pass do scale well even for larger samples but that some of the other CPG passes seem to perform poorly in the presence of a specific combination of nodes. \begin{figure}[h!] \centering \begin{minipage}{.5\textwidth} \centering \begin{tikzpicture} \begin{axis}[ xbar=0pt, width = \textwidth, axis y line*=left, axis x line=bottom, height = 300pt, enlarge y limits=0.025, bar width=5pt, xmajorgrids = true, ylabel = {File}, xlabel = {Analysis time [ms]}, ytick = data, xmode=log, scaled y ticks = true, axis line style={-}, legend columns=2, legend cell align=left, legend style={ at={(0.5,-0.11)}, anchor=north, column sep=1ex, nodes={scale=0.75, transform shape} }, nodes near coords, nodes near coords style={font=\tiny}, nodes near coords align=horizontal, point meta=rawx ] \addplot+[xbar] coordinates {(3641, 1) (507, 2) (6505, 3) (0023, 4) (23670, 5) (2872096, 6) (112269, 7) (0193, 8) (0646, 9) (32639, 10) (79863, 11) (47806, 12) (0124, 13) (0610, 14) (15260350, 15) (43281, 16) (0107, 17) (0102, 18) (9303378, 19) (0659, 20) (0273, 21)}; \addplot+[xbar] coordinates {(3847, 1) (1118, 2) (13347, 3) (0011, 4) (108124, 5) (41934277, 6) (635146, 7) (2477, 8) (1687, 9) (184776, 10) (476777, 11) (64749, 12) (0051, 13) (0762, 14) (19795079, 15) (182274, 16) (0034, 17) (0020, 18) (30598095, 19) (0382, 20) (0067, 21)}; \legend{Our approach, reg2mem} \end{axis} \end{tikzpicture} \end{minipage}% \begin{minipage}{.5\textwidth} \begin{tikzpicture} \begin{axis}[ xbar=0pt, width=\textwidth, axis x line=bottom, height=300pt, bar width=5pt, enlarge y limits=0.025, xmajorgrids=true, xlabel={\#Nodes}, ylabel={File}, ytick=data, axis line style={-}, nodes near coords, nodes near coords style={font=\tiny}, nodes near coords align=horizontal, point meta=rawx, legend columns=2, legend cell align=left, legend style={ at={(0.5,-0.11)}, anchor=north, column sep=1ex, nodes={scale=0.75, transform shape} }, ] \addplot+[xbar, style=black!60!green,
fill=black!40!green] coordinates {(2327, 1) (1707, 2) (13482, 3) (1, 4) (34304, 5) (263729, 6) (72845, 7) (529, 8) (3619, 9) (40602, 10) (54868, 11) (50060, 12) (87, 13) (2619, 14) (244010, 15) (44069, 16) (1, 17) (1, 18) (468223, 19) (1921, 20) (230, 21)}; \addplot+[xbar, style=black!40!orange, fill=black!20!orange] coordinates {(3695, 1) (2952, 2) (21858, 3) (1, 4) (59673, 5) (400775, 6) (102504, 7) (687, 8) (4111, 9) (70026, 10) (102923, 11) (56010, 12) (103, 13) (4288, 14) (385270, 15) (78289, 16) (1, 17) (1, 18) (770152, 19) (2554, 20) (369, 21)}; \legend{Our approach, reg2mem} \end{axis} \end{tikzpicture} \end{minipage} \caption{Performance comparison of our approach and prior work. The analysis time and the number of nodes are typically much smaller with our improvements.} \label{fig:comparison} \end{figure} \noindent\textbf{Comparison to prior work.}~ To compare our approach to prior work which relied on LLVM's reg2mem pass to remove $\varphi$ nodes, we ran our toolchain but first executed the respective pass. As Figure \ref{fig:comparison} shows, our approach leads to a significant reduction of nodes and time required to generate the graph. \section{Discussion} \label{sec:discussion} Our evaluation suggests that our translation and CPG model can unify source code and low-level representations such as LLVM-IR in a single graph representation. This increases the reusability of analyses and queries on the graph. We found that the LLVM-IR retrieved from binary lifters is significantly easier to represent in the graph. This is due to the fact that most lifters tend to use rather conservative steps for their translation. This results in the LLVM-IR being closer to assembly code with comparably simple types of instructions. The LLVM-IR retrieved during the compilation, in contrast, features numerous highly specialized instructions which typically make the translation more difficult.
Furthermore, the graphs retrieved from lifted binaries are typically smaller than those obtained when the LLVM-IR is emitted during compilation. This makes lifting an interesting application since it simplifies and speeds up the analysis. Last, we found that the graph of the decompiled binary is only marginally smaller than the one holding the lifted LLVM-IR instructions. In most scenarios, this small advantage will, however, not outweigh the error-prone and time-consuming decompilation step required to retrieve the code. \noindent\textbf{Validity of the Results.}~ The main threat to the validity of the findings is the set of test samples. In particular, as we could see in Section \ref{sec:crypto}, the compiler has a significant impact on the generated LLVM-IR and the resulting complexity which needs to be handled by our toolchain. Hence, testing the toolchain against different compilers and configurations might lead to different results. To address this potential issue, we used XCode on macOS and clang on Ubuntu, and we also generated the LLVM-IR with Rust's crates build system. Furthermore, we used a binary lifter to showcase a possible application to such a scenario. \noindent\textbf{Limitations.}~ As our evaluation against the Rust standard library showed, a small number of instructions could not be parsed correctly. This is explained by LLVM-IR's ability to hold sub-statements for the arguments. While we do handle the concepts and operators (e.g., casts), their potential usage in a specific sub-statement needs to be added to the translation step. To identify all possible combinations, more extensive testing is required. \noindent\textbf{Future Work and Research Directions.}~ The resulting graph can be used as an entry point for further research to better include specifics of certain platforms. One example is the analysis of the LLVM-IR emitted by XCode for apps written in Apple's programming languages Swift or Objective-C.
Their calling conventions differ significantly from those of other programming languages. As an example, Objective-C makes use of a dynamic dispatching routine which requires extensive tracing of a method's arguments to recover type information and the method name as a string \cite{schuette2019lios,egele2011pios}. This information is present in the CPG but has to be combined to identify the calls. Similarly, it is necessary to model Swift's calling conventions and memory model since they differ significantly from those of C++ \cite{tiganov2020swan,kraus2018the}. However, to date, the differences are not fully explored. Future work should identify these differences and integrate this knowledge into the CPG. Furthermore, software written in C or C++ can rely on macros which are used similarly to function calls in the source code and represented as such, but are replaced with their specific implementation in LLVM-IR. This discrepancy needs to be addressed appropriately to better analyze such programs. In the current stage, addressing such inconsistencies between source code and the binary is left to manual efforts of the user of the CPG library. Additional work is necessary to reduce these manual efforts and ease the usability of the analysis toolchain. Last, adapting the solution to the analysis of closed-source software is promising. Recent research \cite{liu2022sp} showed that lifting is a stable technique for many applications. However, lifted or decompiled binaries still suffer from a lack of information that is crucial for a security analysis \cite{mantovani2022the}. Hence, further research should study which gaps still need to be closed to apply existing tools to lifted binaries. \noindent\textbf{Generalizability.}~ Since the SSA form is also used by other IRs (e.g., Shimple \cite{shimple}, WALA \cite{wala}, SIL \cite{sil}), some of the challenges generalize to those IRs.
Hence, the concepts presented in this paper can be reused to add further code representations using the SSA form to the CPG. Furthermore, some parts of our concept could be ported to other projects which suffer from similar issues. However, the applicability and impact depend on the projects' data models. \section{Conclusion} \label{sec:conclusion} We showed how we extended an open source CPG implementation to handle LLVM-IR. While the majority of instructions can easily be mapped to their high-level equivalents, the $\varphi$-instruction and the LLVM exception handling instructions pose challenges for the translation. However, we could transform the program to the CPG representation with a reasonable increase in nodes, while prior work suffered from huge performance penalties. The similarity between the resulting graphs and those of code written in high-level languages allows reusing existing analyses which detect security weaknesses or bugs. Our evaluation suggests that the approach scales to larger projects. Future work is necessary to include characteristics of some programming languages (e.g., Swift), to add analyses for further use cases, and to study the gaps of binary lifting. \textbf{Acknowledgements.}~ This work was partially funded by the Horizon 2020 project MEDINA, grant agreement ID 952633. \bibliographystyle{splncs04} \urlstyle{tt}
\section{Introduction} \label{sec:intro} Most molecular electronic structure methods rely on different descriptions for ground and excited states. The ground state is described first, at a given level of theory, providing a baseline for later accessing the excited states, which in turn makes use of another approach or a distinct formalism altogether. For example, Kohn-Sham (KS) density-functional theory (DFT) is a ground-state method, \cite{Hohenberg_1964,Kohn_1965,Parr_book} whereas the excited states are obtained later with a linear response treatment of time-dependent density-functional theory (TDDFT). \cite{Runge_1984,Burke_2005,Casida_2012,Huix-Rotllant_2020} Similarly, the coupled-cluster (CC) \cite{Cizek_1966,Crawford_2000,Bartlett_2007,Shavitt_2009} equations are solved for the ground state, whereas a diagonalization of the similarity-transformed Hamiltonian is implied in excited-state calculations based on the equation-of-motion (EOM) \cite{Rowe_1968,Stanton_1993} or linear-response \cite{Monkhorst_1977,Koch_1994} formalisms. Within configuration interaction (CI) methods, \cite{Szabo_book} the underlying formalism is the same for ground and excited states, but typical implementations also rely on a special treatment for the ground state, given the use of ground-state Hartree-Fock (HF) orbitals and the fact that the truncated CI space is spanned by excitations from the ground-state HF determinant. \alert{Whereas the above-mentioned methods rely on a single determinant reference, enlarging the reference space with more than one determinant gives rise to multi-reference approaches. In multiconfigurational self-consistent field (MCSCF), \cite{Das_1966,Roos_1980,Roos_1980a,book_multiconfigurational} the wave function is expanded as a linear combination of an arbitrary set of determinants, and the orbitals (and the coefficients of these determinants) are optimized to make the energy stationary. 
The most employed type of MCSCF is complete active space self-consistent field (CASSCF), \cite{Roos_1980} which allows for all determinants generated by distributing a given number of electrons in a given number of active orbitals. Multireference CI (MRCI) offers a route to go beyond MCSCF by considering excited determinants generated from the reference space, which in practice is limited to single and double excitations (MRCISD). The MRCISD energy can be further improved with so-called Davidson corrections. \cite{Langhoff_1974,Pople_1977,Szalay_2012} } Apart from multireference approaches, \cite{Szalay_2012,Lischka_2018} \alert{single-reference} excited state methods entail a formal distinction between the targeted excited states and the ground state. It is thus important to devise methods that minimize this imbalance as much as possible, aiming at a more unified description of ground and excited states, while keeping a modest computational cost. This also means a more balanced description among the excited states, and here we highlight the case of singly- and doubly-excited states, which differ in the number of excited electrons involved in the electronic transition. Most excited-state methodologies either fail to properly describe doubly-excited states or require higher-order excitations to be accounted for. \cite{Loos_2019} In this sense, a methodology that offers a comparable accuracy for singly- and doubly-excited states would be equally desirable. \alert{MCSCF methods can be either state-averaged, when the reference space is optimized for an ensemble of (typically equally weighted) states, or state-specific, when the optimization is performed for one targeted state. The state-averaged strategy is much more used in practice, mostly because of the more straightforward and reliable orbital optimization and the easier calculation of transition properties (given the common set of orbitals), when compared to the state-specific approach.
\cite{Das_1973,Dalgaard_1978,Lengsfield_1980,Bauschlicher_1980,Bauschlicher_1980a,Werner_1981,Golab_1983,Olsen_1983} However, state-averaged MCSCF faces several issues. It struggles to describe higher-lying states or a large number of states, the orbitals may favor some states to the detriment of others, \cite{Meyer_2014,Segado_2016,Tran_2019,Tran_2020} the potential energy curves can become discontinuous, \cite{Zaitsevskii_1994,Glover_2014,Tran_2019} and the calculation of energy derivatives is complicated by the energy averaging. \cite{Dudley_2006,Granovsky_2015,Snyder_2017} Many if not all of these problems do not appear in state-specific MCSCF, which in turn has to deal with a more challenging orbital optimization problem. } In light of these motivations, there has been an ever-growing interest in \alert{state-specific MCSCF, \cite{Shea_2018,Tran_2019,Tran_2020,Burton_2022} and state-specific methods in general.} The general principle is to employ a single formalism, approaching each state of interest independently, and without resorting to any prior knowledge about the other states. The first and probably the best-known state-specific method is $\Delta$SCF, \cite{Ziegler_1977,Kowalczyk_2011} where excited states are described by a single determinant and represent higher-lying solutions of the HF or KS equations. By optimizing the orbitals for a non-Aufbau determinant, $\Delta$SCF attempts to recover relaxation effects already at a mean-field level. There is a growing body of evidence showing that DFT-based $\Delta$SCF usually outperforms TDDFT, \cite{Kowalczyk_2013,Gilbert_2008,Barca_2018,Hait_2020,Hait_2021,Shea_2018,Shea_2020,Hardikar_2020,Levi_2020,Carter-Fenk_2020,Toffoli_2022} most notably for doubly-excited and charge transfer states. \cite{Hait_2020,Hait_2021} However, $\Delta$SCF still suffers from a major limitation for open-shell singlet states, because of the strong spin contamination associated with the single-determinant ansatz.
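The problem can be made explicit for an open-shell singlet dominated by an $i \to a$ transition: at fixed orbitals, the mixed-spin determinant is an exact 50/50 mixture of the singlet and triplet spin-adapted components,
\begin{equation}
\ket{\Phi_{i}^{a}} = \frac{1}{\sqrt{2}} \qty( \ket{^1\Psi_{i}^{a}} + \ket{^3\Psi_{i}^{a}} ),
\qquad
\expval{\hat{S}^2} = 1
\end{equation}
(in units of $\hbar^2$), so that its energy is the average of the two spin-pure energies, with no variational mechanism to disentangle them.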
Restricted open-shell Kohn-Sham (ROKS) \cite{Filatov_1999,Kowalczyk_2013} offers one way around this problem by optimizing the orbitals for a Lagrangian that considers both the mixed-spin determinant and the triplet determinant with spin projection $m_s$ = 1. In wave-function-based methods, excited-state mean field (ESMF) theory \cite{Shea_2018,Shea_2020,Hardikar_2020} has been proposed as a state-specific \alert{MCSCF} alternative for excited states. In the ESMF approach, excited-state orbitals are optimized for a CI with single excitations (CIS) ansatz, \cite{Szabo_book} and energies can be further corrected with second-order M{\o}ller-Plesset (MP2) perturbation theory. \cite{Shea_2018,Clune_2020} An extension of ESMF to DFT has also been proposed. \cite{Zhao_2020} Variants of CC methods that directly target excited states have also been actively pursued. \cite{Piecuch_2000,Mayhall_2010,Lee_2019,Kossoski_2021} An important practical question for all the aforementioned methods concerns the optimization of orbitals for excited states, which typically appear as saddle point solutions in the orbital parameter space, \cite{Kossoski_2021,Marie_2021,Damour_2021,Burton_2021,Burton_2022} therefore being more difficult to obtain than ground-state solutions. \cite{Cuzzocrea_2020,Otis_2020,Shepard_2022} In this sense, specialized algorithms for obtaining excited-state orbitals have been proposed and developed by several groups. \cite{Gilbert_2008,Barca_2018,Hait_2020,Levi_2020,Carter-Fenk_2020} Related methods that aim at describing multiple states within the same theoretical framework, though usually in a state-averaged fashion, include CASSCF, \cite{Roos_1980} ensemble DFT, \cite{Cernatic_2022,Gould_2022,Gould_2021,Marut_2020,Loos_2020c,Deur_2019,Gould_2018,Deur_2017,Filatov_2016,Senjean_2015} and multi-state TDDFT.
\cite{Gao_2016,Lu_2022,Ren_2016,Horbatenko_2019,Horbatenko_2021} \section{State-specific CI} \label{sec:ssCI} \alert{Here we propose a particular realization of state-specific MCSCF and MRCI as a route for excited-state calculations.} First, the orbitals are optimized \alert{at the MCSCF level, for a reference comprising} a minimal set of configuration state functions (CSFs), as illustrated in Fig.~\ref{fig:determinants}, which provides a state-specific reference. By running separate calculations for the ground state and for a targeted excited state, excitation energies can be obtained as the energy difference between the individual total energies. We label this approach $\Delta$CSF, in close parallel to the $\Delta$SCF method. \alert{When compared with larger MCSCF choices, the compactness of $\Delta$CSF avoids redundant solutions and is expected to facilitate the convergence toward excited states. For a single CSF ansatz in particular, the CI coefficients are fixed by the desired spin eigenstate, eliminating the redundancies associated with the coupling between CI coefficients and orbital rotations.} Furthermore, because its wave function is a proper eigenstate of the total spin operator, $\Delta$CSF cures the spin-contamination problem of $\Delta$SCF, thus leading to truly state-specific orbitals and an improved reference, particularly for singlet excited states. Finally, being a mean-field method [with an $\order*{N^5}$ computational cost associated with the integral transformation, where $N$ is the number of basis functions], $\Delta$CSF is intended to provide a balanced set of reference wave functions for a subsequent, more accurate calculation. At this second stage, correlation effects are captured by performing separate \alert{MRCI} calculations for each state.
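For instance, for an open-shell state built from the determinants $\Phi_{i}^{a}$ and $\Phi_{\bar{i}}^{\bar{a}}$ (see Fig.~\ref{fig:determinants}), the single-CSF ansatz reads, in the usual phase convention,
\begin{equation}
\ket{^{1,3}\Psi_{i}^{a}} = \frac{1}{\sqrt{2}} \qty( \ket{\Phi_{i}^{a}} \pm \ket{\Phi_{\bar{i}}^{\bar{a}}} ),
\end{equation}
where the plus and minus signs correspond to the singlet and triplet CSFs, respectively: the coefficients $\pm 1/\sqrt{2}$ are fixed by spin symmetry, leaving the orbital rotations as the only variational parameters.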
\alert{Since ground- and excited-state references are of mean-field quality and are state-specific, this particular type of MRCI calculation is labeled $\Delta$CI here.} When accounting for all single and double excitations, we obtain the $\Delta$CISD model, which is expected to provide decent excitation energies with an $\order*{N^6}$ computational scaling. Notice that, because we perform all singles and doubles with respect to each reference determinant, the maximum excitation degree is potentially higher than two (except of course for a one-determinant reference). This also applies to higher-order CI calculations. In this way, each state is described as much as possible in a state-specific way, with a different set of orbitals as well as determinants. We can further compute the renormalized second-order Epstein-Nesbet (EN2) perturbation correction \cite{Garniron_2019} from the determinants left outside the truncated CISD space of each calculation, giving rise to the $\Delta$CISD+EN2 model. The EN2 perturbative correction involves a single loop over external determinants that are connected to the internal determinants via at most double excitations, thus entailing an overall $\order*{N^8}$ scaling associated with the number of quadruply-excited determinants. Despite this unfavorable scaling, the corresponding prefactor of the EN2 perturbative correction is rather small, making such calculations still affordable. Alternatively, we could calculate one of the several types of a posteriori Davidson corrections \cite{Langhoff_1974,Pople_1977,Szalay_2012} in a state-specific fashion, leading to a $\Delta$CISD+Q approach. We recall that computing Davidson corrections is virtually free, such that $\Delta$CISD+Q presents the same computational cost and $\order*{N^6}$ scaling as $\Delta$CISD.
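Schematically (omitting the renormalization factor of Ref.~\onlinecite{Garniron_2019}), the EN2 correction involves the external determinants $\ket{\alpha}$ connected to the CISD space,
\begin{equation}
E_{\text{EN2}} = \sum_{\alpha} \frac{ \abs{ \mel{\Psi_{\text{CISD}}}{\hat{H}}{\alpha} }^2 }{ E_{\text{CISD}} - \mel{\alpha}{\hat{H}}{\alpha} },
\end{equation}
while the simplest Davidson-type estimate scales the correlation energy by the weight of the reference in the CI vector,
\begin{equation}
E_{+\text{Q}} = \qty( 1 - \norm{\boldsymbol{c}}^2 ) \qty( E_{\text{CISD}} - E_{\text{ref}} ),
\end{equation}
where $\boldsymbol{c}$ gathers the coefficients of the reference determinants in the normalized CISD wave function; the several variants considered here are detailed in Ref.~\onlinecite{Szalay_2012}.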
\footnote{It is worth noting that the $\Delta$CSF, $\Delta$CISD, and $\Delta$CISD+Q excitation energies are invariant under rotations within the doubly-occupied and virtual blocks, although the $\Delta$CISD+EN2 energies are not, which is well-known for the EN2 perturbative correction. \cite{Garniron_2019}} \begin{figure} \includegraphics[width=1.0\linewidth]{fig1.pdf} \caption{Types of configuration state functions (CSFs) employed as a reference for different classes of excited states in our $\Delta$CSF and $\Delta$CISD approaches.} \label{fig:determinants} \end{figure} The remaining question is how to build an appropriate reference for each state of interest. Our general guideline is to select the smallest set of CSFs that provides a qualitatively correct description of the state of interest, as shown in Fig.~\ref{fig:determinants}. Here we adopted the spin-restricted formalism. The HF determinant is the obvious choice for the ground state of closed-shell singlets. For singly-excited states of closed-shell systems, we chose either one or two CSFs, depending on each particular excited state. For most cases, a single CSF associated with two unpaired electrons should be enough. Some excited states, however, display a strong multireference character, such as those of \ce{N2}, \ce{CO2}, and acetylene, thus requiring two CSFs. For genuine doubly-excited states where a pair of opposite-spin electrons is promoted from the same occupied to the same virtual orbital, we selected a single determinant associated with the corresponding double excitation. \alert{In turn, open-shell doubly-excited states were described with a single open-shell CSF (just as for most singly-excited states).} For ground and excited states of open-shell doublets, a single-determinant restricted open-shell HF reference is adopted as well. As mentioned before, our $\Delta$CISD approach can be seen as a type of MRCI, though with two key differences with respect to typical MRCI implementations.
\cite{Szalay_2012} First, it relies on a minimal set of CSFs as the reference space, whereas in typical applications of MRCI the reference is built from a much larger complete active space. This means that the CI space remains much more compact in the former approach, enabling calculations on larger systems. The second important difference is that the reference in $\Delta$CISD is state-specific, which is expected to favor the overall fitness of the orbitals when compared with state-averaged orbitals of standard MRCI (whenever excited states are involved). $\Delta$CISD also resembles the ESMF theory \cite{Shea_2018,Shea_2020,Hardikar_2020} of Neuscamman and coworkers in its underlying motivation: a state-specific mean-field-like starting point, subject to a subsequent treatment of correlation effects. In $\Delta$CISD, however, the starting point is much more compact and arguably closer to a mean-field description than the CIS-like ansatz of ESMF. This makes the CI expansion up to double excitations feasible in our approach, though not in ESMF, which in turn resorts to generalized MP2 to describe correlation. \cite{Shea_2018,Clune_2020} This $\Delta$CSF ansatz has already been suggested as a more compact alternative to the ESMF one, \cite{Shea_2020} but again in the spirit of recovering correlation at the MP2 level, whereas we propose a state-specific CISD expansion, which can be further improved by Davidson or EN2 corrections. \section{Computational Details} \label{sec:comp_det} Our state-specific CI approach was implemented in {\textsc{quantum package}}, \cite{Garniron_2019} whose determinant-driven framework provides a very convenient platform for including arbitrary sets of determinants in the CI expansion. In this way, we can easily select only the determinants that are connected to the reference determinants according to a given criterion provided by the user.
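As a minimal illustration of such a connectivity-based selection (a schematic sketch with hypothetical helper names, not the actual {\textsc{quantum package}} code), determinants can be encoded as bit strings of spin-orbital occupations, the excitation degree between two determinants of equal electron number being half the population count of their XOR:

```python
# Schematic sketch of determinant-driven selection (illustrative only, not
# the actual Quantum Package implementation). Determinants are bit strings
# of spin-orbital occupations; two determinants are "connected" when their
# excitation degree does not exceed a chosen threshold (2 for CISD).

def excitation_degree(det1: int, det2: int) -> int:
    """Number of orbital substitutions relating two same-electron-number
    determinants: half the number of bits that differ."""
    return bin(det1 ^ det2).count("1") // 2

def select_connected(candidates, references, max_degree=2):
    """Keep candidates within `max_degree` excitations of any reference."""
    return [d for d in candidates
            if any(excitation_degree(d, r) <= max_degree for r in references)]

# Example: 4 spin orbitals, reference |1100>; all three candidates are
# singles or doubles with respect to it.
ref = 0b1100
print(select_connected([0b1100, 0b1010, 0b0011], [ref]))  # -> [12, 10, 3]
```

In practice, $\alpha$ and $\beta$ occupation strings are handled separately and the connections are evaluated with hardware population counts, but the selection logic is the same: with several reference determinants, the union of their singly- and doubly-excited spaces is retained.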
On top of that, the state-specific implementation further profits from the \textit{configuration interaction using a perturbative selection made iteratively} (CIPSI) algorithm \cite{Huron_1973,Giner_2013,Giner_2015,Garniron_2018} implemented in {\textsc{quantum package}}, which allows for a large reduction of the CI space without loss of accuracy. At each iteration of the CIPSI algorithm, the CI energies are obtained with the Davidson iterative algorithm, \cite{Davidson_1975} and the selection is stopped when the EN2 perturbation correction computed in the truncated CI space lies below \SI{0.01}{\milli\hartree}. \cite{Garniron_2018} Our state-specific CI implementation supports different selection criteria for the excited determinants, based for example on the seniority number, \cite{Bytautas_2011} the hierarchy parameter, \cite{Kossoski_2022} or the excitation degree. Here, we considered the more traditional excitation-based CI. After the CI calculation, we computed the renormalized EN2 perturbation correction \cite{Garniron_2019} from the determinants left outside the truncated CI space, which is relatively cheap because of the semi-stochastic nature of our algorithm. \cite{Garniron_2017} We also evaluated the seven variants of Davidson corrections discussed in Ref.~\onlinecite{Szalay_2012}. To obtain state-specific orbitals, we first ran a CIS calculation and computed the natural transition orbitals (NTOs), \cite{Martin_2003} which proved to be more suitable guess orbitals than the canonical HF orbitals. The dominant hole and particle NTOs were taken as the singly-occupied orbitals, and for states with pronounced multireference character, the second most important pair of NTOs was also considered (as illustrated in Fig.~\ref{fig:determinants}).
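Following Ref.~\onlinecite{Martin_2003}, the NTOs are obtained from a singular value decomposition of the CIS transition density matrix $\mathbf{T}$, whose elements are the CIS coefficients $T_{ia} = c_{i}^{a}$,
\begin{equation}
\mathbf{T} = \mathbf{U} \boldsymbol{\Sigma} \mathbf{V}^{\dagger},
\end{equation}
the columns of $\mathbf{U}$ and $\mathbf{V}$ associated with the largest singular values defining the dominant hole and particle NTOs.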
\alert{For the doubly-excited states, a non-Aufbau occupation of the canonical HF orbitals was used as guess orbitals, based on the expected character of the excitation.} The orbital optimization was performed with the Newton-Raphson method, also implemented in {\textsc{quantum package}}. \cite{Kossoski_2021,Damour_2021} Having presented our state-specific approaches, our main goal is to assess their performance in describing electronic excited states. For that, we calculated vertical excitation energies for an extensive set of 294 electronic transitions, for systems, states, and geometries provided in the QUEST database. \cite{Veril_2021} We considered small- \cite{Loos_2018,Loos_2021a} and medium-sized \cite{Loos_2020,Loos_2022} organic compounds, radicals and ``exotic'' systems, \cite{Loos_2020a} and doubly-excited states. \cite{Loos_2020,Loos_2021a,Loos_2022} The set of excited states comprises closed-shell (singlets and triplets) and open-shell (doublets) systems, ranging from one up to six non-hydrogen atoms, and of various characters (valence and Rydberg states, singly- and doubly-excited states). We employed the aug-cc-pVDZ basis set for systems having up to three non-hydrogen atoms, and the 6-31+G(d) basis set for the larger ones. We compared the excitation energies obtained with our state-specific approaches against more established alternatives, such as CIS, \cite{DelBene_1971} CIS with perturbative doubles [CIS(D)], \cite{Head-Gordon_1994,Head-Gordon_1995} CC with singles and doubles (CCSD) \cite{Purvis_1982,Scuseria_1987,Koch_1990,Stanton_1993} and the second-order approximate CC with singles and doubles (CC2), \cite{Christiansen_1995,Hattig_2000} the latter two in their EOM-CC formulation. The excitation energies obtained with the different methodologies were gauged against very accurate reference values, of high-order CC or extrapolated full CI quality.
\cite{Loos_2018,Loos_2021a,Loos_2020,Loos_2022,Loos_2020a,Loos_2020} The complete set of reference methods and energies is provided in the {\textcolor{blue}{Supporting Information}}. \section{Results and discussion} \label{sec:res} \subsection{Orbital optimization} Our first important result is that the combination of the Newton-Raphson method with NTO guess orbitals proved quite reliable in converging the $\Delta$CSF ansatz to excited-state solutions. \alert{To a great extent, this is assigned to the compact reference of $\Delta$CSF, which avoids redundant solutions associated with larger MCSCF references.} In most cases, the orbitals converge in relatively few iterations (typically fewer than 10), and to the correct targeted state. A second-order method such as Newton-Raphson is required if the targeted solution is a saddle point in the orbital rotation landscape, which is expected to be the case for excited states. \cite{Burton_2021,Burton_2022} At convergence, the number of negative eigenvalues of the orbital Hessian matrix, i.e., the saddle point order, can provide further insights into the topology of the solutions for a given CSF ansatz. The full list of saddle point orders is shown in the {\textcolor{blue}{Supporting Information}}. For the closed-shell systems, we found that the lowest-lying solution (global minimum) obtained with the open-shell CSF is always an excited state since it cannot properly describe the closed-shell character of the ground state. In turn, higher-lying excited states tend to appear as saddle points, with increasing order as one goes up in energy, even though this behavior is not very systematic. It was not uncommon, for example, to encounter two different excited states as local minima or sharing the same saddle point order.
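In terms of the orbital gradient $\boldsymbol{g}$ and Hessian $\mathbf{H}$ of the energy with respect to the rotation parameters $\boldsymbol{\kappa}$, each Newton-Raphson iteration solves
\begin{equation}
\mathbf{H} \boldsymbol{\kappa} = -\boldsymbol{g},
\end{equation}
the saddle point order quoted above being simply the number of negative eigenvalues of $\mathbf{H}$ at the stationary point (zero for a minimum).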
For some systems, we searched for symmetry-broken solutions of excited states by rotating the orbitals along the direction associated with a negative eigenvalue of the orbital Hessian, but this procedure led to solutions representing a different state. We did not explore this exhaustively though, and we cannot rule out the existence of symmetry-broken excited-state solutions. \alert{It is also worth mentioning that the starting orbitals typically presented a much larger number of negative Hessian eigenvalues, which decreased in the course of the orbital optimization. This means that the saddle point order cannot be anticipated based on information about the unrelaxed orbitals or the expected ordering of the states.} Importantly, state-specific solutions could be found for different types of states, including singly- and doubly-excited states, for closed-shell singlets and open-shell doublets, and for the first as well as higher-lying states of a given point group symmetry. \alert{For this last class of states, however, our single CSF approach is not always reliable, especially for fully symmetric higher-lying states. In some cases, a closed-shell determinant is also important (as revealed by the subsequent CISD calculation) but remains outside the open-shell CSF reference. In these situations, employing both open- and closed-shell determinants in the reference is expected to improve the description of these higher-lying excited states, and we plan to explore this approach in the future. More generally, convergence issues would be expected in energy regions displaying a high density of excited states.} The excited-state reference could also be based on single-determinant $\Delta$SCF orbitals, rather than the $\Delta$CSF orbitals we have adopted. However, the former method is heavily spin-contaminated, being an exact mixture of singlet and triplet, whereas the latter method targets one spin multiplicity at a time.
In this way, the excitation energies obtained with $\Delta$CSF appear above (for singlets) and below (for triplets) the unique energy obtained with $\Delta$SCF, overall improving the comparison with the reference values. In turn, we compared $\Delta$SCF and $\Delta$CSF excited-state orbitals for $\Delta$CISD calculations, and found overall little difference in the excitation energies. Still, we think $\Delta$CSF is preferable because it delivers truly state-specific orbitals, whereas $\Delta$SCF produces the same orbitals for the singlet and triplet states, and is thus less state-specific. \subsection{State-specific \textit{vs} standard CI} The state-specific $\Delta$CI approach offers a well-defined route towards full CI by increasing the excitation degree, in analogy with standard ground-state-based CI methods. We explored both routes by calculating 16 excitation energies for small systems, considering up to quadruple excitations (the full set of results is available in the {\textcolor{blue}{Supporting Information}}). Even though this is a small set for obtaining significant statistics, it is enough to showcase the main trends when comparing state-specific and ground-state-based CI methods. The mean signed error (MSE), mean absolute error (MAE), \alert{and root-mean-square error (RMSE)} are shown in Table \ref{tab:DCIvsCI}. The convergence for standard CI is quite slow, with CISD largely overestimating the excitation energies and CISDT leading to more reasonable results, which are further improved at the CISDTQ level. In turn, we found that $\Delta$CI displays much more accurate results and faster convergence than its ground-state-based counterparts. Already at the $\Delta$CISD level, the accuracy is far superior to that of standard CISD, being comparable to CISDT.
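For $N$ transitions with errors $\Delta_k = \omega_k^{\text{calc}} - \omega_k^{\text{ref}}$ in the computed excitation energies, these statistical measures are defined in the usual way,
\begin{equation}
\text{MSE} = \frac{1}{N} \sum_{k=1}^{N} \Delta_k,
\qquad
\text{MAE} = \frac{1}{N} \sum_{k=1}^{N} \abs{\Delta_k},
\qquad
\text{RMSE} = \sqrt{ \frac{1}{N} \sum_{k=1}^{N} \Delta_k^2 }.
\end{equation}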
Going one step further ($\Delta$CISDT) does not lead to a visible improvement, whereas the state-specific quadruple excitations of $\Delta$CISDTQ recover much of the remaining correlation energy \alert{of each state, hence the very accurate excitation energies.} \alert{These observations parallel the common knowledge that the ground-state correlation energy is mostly affected by the double excitations, and that quadruples are more important than triples, meaning that the state-specific $\Delta$CI approach manages to capture correlation effects in a reasonably balanced way for ground and excited states.} \alert{This also motivates us to investigate various flavors of Davidson correction, which attempt to capture the missing contribution from the quadruple excitations.} \alert{As will be discussed in more detail later, the popular Pople correction, \cite{Pople_1977} labeled $\Delta$CISD+PC hereafter, was found to be somewhat more accurate than the others.} The comparable MAEs of $\Delta$CISD and CISDT can be understood from the observation that the doubly-excited determinants accessed from the excited-state reference can only be reached via triple excitations from the ground-state reference. The comparison between state-specific and ground-state-based CI for a given excitation degree ($\Delta$CISD against CISD and $\Delta$CISDTQ against CISDTQ) shows that the MAEs are reduced by one order of magnitude in the former route, when compared with the latter. No gain is observed from CISDT to $\Delta$CISDT, though. \begin{table}[ht!] \caption{Mean Signed Error (MSE), Mean Absolute Error (MAE), and Root-Mean-Square Error (RMSE) in Units of eV, with Respect to Reference Theoretical Values, for the Set of 16 Excitation Energies Listed in the {\textcolor{blue}{Supporting Information}}.
} \label{tab:DCIvsCI} \begin{ruledtabular} \begin{tabular}{lddd} \multicolumn{1}{l}{Method} & \multicolumn{1}{c}{MSE} & \multicolumn{1}{c}{MAE} & \multicolumn{1}{c}{RMSE} \\ \hline CISD & +3.91 & 3.91 & 16.63 \\ CISDT & +0.07 & 0.17 & 0.04 \\ CISDTQ & +0.13 & 0.13 & 0.02 \\ \hline $\Delta$CISD & -0.14 & 0.18 & 0.05 \\ $\Delta$CISDT & -0.20 & 0.20 & 0.05 \\ $\Delta$CISDTQ & -0.02 & 0.02 & 0.00 \\ $\Delta$CISD+EN2 & -0.00 & 0.03 & 0.00 \\ $\Delta$CISD+PC & -0.10 & 0.14 & 0.02 \\ \hline CIS(D) & -0.03 & 0.35 & 0.16 \\ CC2 & -0.05 & 0.32 & 0.13 \\ CCSD & +0.01 & 0.08 & 0.01 \\ CC3 & +0.01 & 0.03 & 0.00 \\ CCSDT & -0.01 & 0.02 & 0.00 \\ \end{tabular} \end{ruledtabular} \end{table} \subsection{State-specific CI \textit{vs} other methods} \begin{figure*} \includegraphics[width=1.0\linewidth]{fig2.pdf} \caption{Distribution of Errors in Excitation Energies with Respect to Reference Theoretical Values and Corresponding Mean Signed Error (MSE) and Mean Absolute Error (MAE), for Various Excited-State Methodologies.} \label{fig:PDF} \end{figure*} We now start the discussion on how well our state-specific CI approaches compare with more established methods by presenting in Fig.~\ref{fig:PDF} \alert{and in Table \ref{tab:singly}} the distribution of errors and statistical measures associated with a set of 237 singly-excited states of closed-shell systems. At the $\Delta$CSF level, the excitation energies are systematically underestimated, hence the substantially negative MSE. This happens because the CSF reference for the excited states (typically containing two determinants) already recovers some correlation, whereas the one-determinant HF reference of the ground state does not. The MAE of the $\Delta$CSF approach (\SI{0.62}{\eV}) is comparable to that of CIS (\SI{0.65}{\eV}). The overall similar performance of these two methods is somewhat expected since the orbital relaxation that takes place in the state-specific CSF is partially described via the single excitations of CIS.
Moving to the $\Delta$CISD level, we find that correlation effects are described in a reasonably balanced way for ground and excited states. The MAE is significantly reduced (\SI{0.18}{\eV}) with respect to that of $\Delta$CSF, being smaller than that of CIS(D) (\SI{0.21}{\eV}) and comparable to CC2 (\SI{0.17}{\eV}). The MSE also decreases in magnitude, but remains negative, whereas the other CI- or CC-based methods present positive MSEs. This shows that there is still some bias toward a better description of excited states at the $\Delta$CISD level, probably reflecting the two-determinant reference (compared to the one-determinant reference of the ground state). \alert{In addition, higher-lying fully symmetric states are not as well described at the $\Delta$CISD level, reflecting the lack of a closed-shell determinant in the reference, as discussed above. We did not discard these states from the statistics though.} \begin{table}[ht!] \caption{Mean Signed Error (MSE), Mean Absolute Error (MAE), and Root-Mean-Square Error (RMSE), in Units of eV, with Respect to Reference Theoretical Values, for the Set of 237 Excitation Energies for Singly-Excited States of Closed-Shell Systems Listed in the {\textcolor{blue}{Supporting Information}}. } \label{tab:singly} \begin{ruledtabular} \begin{tabular}{lddd} \multicolumn{1}{l}{Method} & \multicolumn{1}{c}{MSE} & \multicolumn{1}{c}{MAE} & \multicolumn{1}{c}{RMSE} \\ \hline $\Delta$CSF & -0.55 & 0.62 & 0.74 \\ $\Delta$CISD & -0.11 & 0.18 & 0.23 \\ $\Delta$CISD+EN2 & +0.04 & 0.08 & 0.12 \\ $\Delta$CISD+PC & -0.07 & 0.10 & 0.17 \\ \hline CIS & +0.19 & 0.65 & 0.68 \\ CIS(D) & +0.09 & 0.21 & 0.27 \\ CC2 & +0.01 & 0.17 & 0.25 \\ CCSD & +0.05 & 0.08 & 0.11 \\ \end{tabular} \end{ruledtabular} \end{table} The perturbative correction introduced in the $\Delta$CISD+EN2 approach reduces the statistical errors even further, resulting in the same MAE as CCSD (\SI{0.08}{\eV}).
At times, however, the EN2 correction leads to erroneous results due to the presence of intruder states, \alert{which sometimes appear for the more problematic higher-lying states of a given symmetry.} \alert{We have discarded 10 out of 294 problematic cases when evaluating the statistics of the $\Delta$CISD+EN2 results.} Instead of relying on perturbation theory to correct the CISD total energies, we can resort to one of the Davidson corrections. \cite{Szalay_2012} Even though this correction is not as accurate as the EN2 perturbative energy, more often than not it improves upon $\Delta$CISD, and with virtually no additional computational cost. \alert{For the $\Delta$CISD+Q statistics, we discarded 12 out of 294 data points for which $\norm{\boldsymbol{c}} < 0.9$, where $\boldsymbol{c}$ gathers the coefficients of the reference determinants in the CI expansion.} We found that all seven $\Delta$CISD+Q variants provide MAEs in the \SIrange{0.10}{0.12}{\eV} range, \alert{with the individual distribution of errors and statistical measures presented in Fig.~\ref{fig:PDF_Q}.} \alert{As alluded to before, the Pople-corrected flavor, $\Delta$CISD+PC, is arguably the most well-behaved one, with fewer outlier excitation energies and the lowest MAE (\SI{0.10}{\eV}).} \begin{figure*} \includegraphics[width=1.0\linewidth]{fig3.pdf} \caption{Distribution of Errors in Excitation Energies with Respect to Reference Theoretical Values and Corresponding Mean Signed Error (MSE) and Mean Absolute Error (MAE), for Various Forms of Davidson-Corrected $\Delta$CISD+Q Models. The Different Types of Davidson Corrections Can Be Found in Ref.~\onlinecite{Szalay_2012}. } \label{fig:PDF_Q} \end{figure*} We also surveyed the performance of our state-specific methods for 10 genuine doubly-excited states \cite{Loos_2019} and 47 excited states of open-shell doublets (doublet-doublet transitions), \cite{Loos_2020a} both sets being extracted from the QUEST database.
\cite{Veril_2021} The statistical measures are shown in Table \ref{tab:all_sets}, together with those for singly-excited states of closed-shell systems. The important finding in this comparison is that the state-specific methods provide reasonably similar MAEs for the three types of excited states. For instance, $\Delta$CISD has MAEs of \SI{0.18}{\eV} for singly-excited states of closed-shell singlets, \SI{0.17}{\eV} for doublet-doublet transitions, and \SI{0.16}{\eV} for doubly-excited states. This contrasts with the case of more familiar methods, which cannot describe doubly-excited states unless higher-order excitations are included. \cite{Loos_2019} We notice that the MSE of $\Delta$CSF is more negative for singly-excited states of closed-shell molecules (\SI{-0.55}{\eV}) than for doubly-excited states (\SI{-0.20}{\eV}), being closer to zero for doublet-doublet transitions (\SI{+0.07}{\eV}), which reflects the one-determinant reference adopted for both the excited and ground states in the latter cases. This difference does not translate into comparatively smaller errors in the correlated results though. \begin{table*}[ht!] \caption{Mean Signed Error (MSE), Mean Absolute Error (MAE), and Root-Mean-Square Error (RMSE), in Units of eV, with Respect to Reference Theoretical Values, for the Excitation Energies of 237 Singly-Excited States (Set A) and 10 Doubly-Excited States (Set B) from Closed-Shell Singlets, and of 47 Singly-Excited States from Open-Shell Doublets (Set C) Listed in the {\textcolor{blue}{Supporting Information}}.
} \label{tab:all_sets} \begin{ruledtabular} \begin{tabular}{lddddddddddddd} & & \multicolumn{3}{c}{$\Delta$CSF} & \multicolumn{3}{c}{$\Delta$CISD} & \multicolumn{3}{c}{$\Delta$CISD+EN2} & \multicolumn{3}{c}{$\Delta$CISD+PC} \\ \cline{3-5} \cline{6-8} \cline{9-11} \cline{12-14} & \multicolumn{1}{c}{\# States} & \multicolumn{1}{c}{MSE} & \multicolumn{1}{c}{MAE} & \multicolumn{1}{c}{RMSE} & \multicolumn{1}{c}{MSE} & \multicolumn{1}{c}{MAE} &\multicolumn{1}{c}{RMSE} & \multicolumn{1}{c}{MSE} & \multicolumn{1}{c}{MAE} & \multicolumn{1}{c}{RMSE} & \multicolumn{1}{c}{MSE} & \multicolumn{1}{c}{MAE} &\multicolumn{1}{c}{RMSE} \\ \hline Set A & 237 & -0.55 & 0.62 & 0.74 & -0.11 & 0.18 & 0.23 & +0.04 & 0.08 & 0.12 & -0.07 & 0.10 & 0.17 \\ Set B & 10 & -0.20 & 0.46 & 0.64 & +0.03 & 0.16 & 0.22 & -0.05 & 0.13 & 0.16 & -0.10 & 0.13 & 0.16 \\ Set C & 47 & +0.07 & 0.49 & 0.64 & +0.07 & 0.17 & 0.25 & -0.02 & 0.07 & 0.11 & -0.03 & 0.09 & 0.16 \\ \hline All sets & 294 & -0.44 & 0.61 & 0.72 & -0.07 & 0.17 & 0.23 & +0.02 & 0.08 & 0.12 & -0.06 & 0.10 & 0.17 \\ \end{tabular} \end{ruledtabular} \end{table*} For the doubly-excited states, we further compare in Table \ref{tab:doubly} the performance of state-specific CI against higher-order CC methods. The accuracy of the $\Delta$CSF mean-field model is superior to CC3 and approaches CCSDT, which highlights the importance of orbital relaxation for doubly-excited states. $\Delta$CISD is significantly more accurate than CCSDT, whereas the perturbative and the Davidson corrections bring a small improvement. \alert{It is worth mentioning recent developments and promising results with state-specific CC \cite{Lee_2019,Levi_2020,Kossoski_2021,Marie_2021} and DFT \cite{Hait_2020,Hait_2021} for doubly-excited states. However, these approaches are still restricted to states dominated by a single closed-shell determinant, whereas the $\Delta$CISD approach can handle both closed- and open-shell doubly-excited states. 
Out of the 10 doubly-excited states we investigated, only 5 (beryllium, ethylene, formaldehyde, nitroxyl, and nitrosomethane) can be qualitatively described with a single closed-shell determinant, whereas at least two determinants are needed for the remaining 5 states: two closed-shell determinants for glyoxal and carbon dimer (\ce{C2}), and four closed-shell determinants for carbon trimer (\ce{C3}). } \begin{table}[ht!] \caption{Mean Signed Error (MSE) and Mean Absolute Error (MAE), in Units of eV, with Respect to Reference Theoretical Values, for the Set of 10 Doubly-Excited States Listed in the {\textcolor{blue}{Supporting Information}}. } \label{tab:doubly} \begin{ruledtabular} \begin{tabular}{ldd} Method & \multicolumn{1}{c}{MSE} & \multicolumn{1}{c}{MAE} \\ \hline $\Delta$CSF & -0.20 & 0.46 \\ $\Delta$CISD & +0.03 & 0.16 \\ $\Delta$CISD+EN2 & -0.05 & 0.13 \\ $\Delta$CISD+PC & +0.02 & 0.13 \\ \hline CC3 & +0.85 & 0.85 \\ CCSDT & +0.38 & 0.38 \\ CCSDTQ & +0.03 & 0.03 \\ \end{tabular} \end{ruledtabular} \end{table} \subsection{Types of excitation} The performance of our state-specific methods can also be assessed for specific types of excited states, e.g., for $\pi\pi^*$ transitions or for systems of a given size. This is shown in Table \ref{tab:mae_singly}, which compares the MAEs across different categories, \alert{whereas the corresponding MSEs and RMSEs can be found in Tables S1 and S2 of the {\textcolor{blue}{Supporting Information}}.} Many trends can be identified, but here we highlight the most noteworthy ones. Starting with spin multiplicity, we found that the $\Delta$CISD results are comparable for singlets and triplets, whereas the perturbative correction has a more pronounced effect for the triplets, bringing the MAE down to \SI{0.06}{\eV}, the same as obtained with CCSD.
To some extent, the worse performance of the EN2 correction for the singlets stems from intruder states (most noticeably the ground state when it shares the same point group symmetry as the targeted excited state). We also found that the Davidson corrections bring a somewhat larger improvement for triplets than for singlets, with some flavors having MAEs of \SI{0.07}{\eV} for the triplets, which makes them essentially as accurate as CCSD (MAE of \SI{0.06}{\eV}). For the doublet-doublet transitions, the EN2 and +Q corrections are as helpful as they are for the triplets (see Tables \ref{tab:all_sets} and \ref{tab:mae_singly}). \begin{table*}[ht!] \caption{Mean Absolute Error, in Units of eV, with Respect to Reference Theoretical Values, for Different Types of Singly-Excited States of Closed-Shell Systems Listed in the {\textcolor{blue}{Supporting Information}}. } \label{tab:mae_singly} \begin{ruledtabular} \begin{tabular}{lddddddddd} & \multicolumn{1}{c}{\# States} & \multicolumn{1}{c}{$\Delta$CSF} & \multicolumn{1}{c}{$\Delta$CISD} & \multicolumn{1}{c}{$\Delta$CISD+EN2} & \multicolumn{1}{c}{$\Delta$CISD+PC} & \multicolumn{1}{c}{CIS} & \multicolumn{1}{c}{CIS(D)} & \multicolumn{1}{c}{CC2} & \multicolumn{1}{c}{CCSD} \\ \hline All states & 237 & 0.62 & 0.18 & 0.08 & 0.10 & 0.65 & 0.21 & 0.17 & 0.08 \\ \hline Singlet & 127 & 0.56 & 0.17 & 0.10 & 0.12 & 0.68 & 0.22 & 0.19 & 0.10 \\ Triplet & 110 & 0.69 & 0.18 & 0.06 & 0.09 & 0.61 & 0.19 & 0.15 & 0.06 \\ \hline Valence & 155 & 0.65 & 0.21 & 0.08 & 0.10 & 0.61 & 0.19 & 0.14 & 0.08 \\ Rydberg & 82 & 0.56 & 0.12 & 0.10 & 0.11 & 0.72 & 0.24 & 0.22 & 0.08 \\ \hline $n\pi^*$ & 56 & 0.60 & 0.16 & 0.06 & 0.07 & 0.54 & 0.10 & 0.09 & 0.06 \\ $\pi\pi^*$ & 80 & 0.75 & 0.26 & 0.10 & 0.12 & 0.74 & 0.27 & 0.20 & 0.10 \\ $\sigma\pi^*$ & 18 & 0.39 & 0.13 & 0.03 & 0.06 & 0.26 & 0.11 & 0.07 & 0.04 \\ $n$ Rydberg & 40 & 0.61 & 0.12 & 0.12 & 0.14 & 1.17 & 0.37 & 0.39 & 0.07 \\ $\pi$ Rydberg & 38 & 0.46 & 0.10 & 0.08 & 0.09 & 0.29 & 0.11 & 0.06 & 0.10 \\
\hline 1-2 non-H atoms & 69 & 0.83 & 0.18 & 0.06 & 0.12 & 0.71 & 0.24 & 0.23 & 0.06 \\ 3-4 non-H atoms & 122 & 0.57 & 0.18 & 0.09 & 0.10 & 0.70 & 0.20 & 0.16 & 0.08 \\ 5-6 non-H atoms & 46 & 0.41 & 0.15 & 0.12 & 0.07 & 0.43 & 0.18 & 0.12 & 0.10 \\ \end{tabular} \end{ruledtabular} \end{table*} Regarding the character of the excitations, we found that $\Delta$CISD is considerably better for Rydberg (MAE of \SI{0.12}{\eV}) than for valence (MAE of \SI{0.21}{\eV}) excited states. In turn, the EN2 correction has a larger impact on valence excitations, making little difference for the Rydberg states, such that the $\Delta$CISD+EN2 results become comparable for both types of excitation, \alert{with MAEs of \SIrange{0.08}{0.10}{\eV}.} Additional trends can be observed when dividing the valence excitations into $n\pi^*$, $\pi\pi^*$, or $\sigma\pi^*$, and the Rydberg excitations as taking place from $n$ or $\pi$ orbitals. Our state-specific methods are typically more accurate for $n\pi^*$ excitations than for $\pi\pi^*$ excitations. $\Delta$CISD+EN2, for example, is as accurate as CCSD for $n\pi^*$ transitions, with corresponding MAEs of \SI{0.06}{\eV}. We also found that the less common $\sigma\pi^*$ excitations are much better described across all methods than the more typical $n\pi^*$ and $\pi\pi^*$ transitions. For this type of state, $\Delta$CISD+EN2 is the best performing method, with MAEs as small as \SI{0.03}{\eV}. When separating the Rydberg states by the character of the hole orbital, $n$ or $\pi$, additional interesting features can be seen. Except for CCSD, all the other methods considered here provide more accurate results for the Rydberg excitations involving the $\pi$ orbitals. Not only that, but the MAEs are quite small and comparable across all methods (except for $\Delta$CSF and CIS), ranging from \SIrange{0.06}{0.11}{\eV}. 
Surprisingly, CIS is much more accurate for $\pi$ Rydberg (MAE of \SI{0.29}{\eV}) than for $n$ Rydberg (MAE of \SI{1.17}{\eV}) excitations. The third and most important line of comparison concerns the system size. Under this criterion, we divided the excited states into three groups (small, medium, and large), depending on the number of non-hydrogen atoms (see Table \ref{tab:mae_singly}). We found that $\Delta$CSF becomes more accurate as the system size increases, which we assign to a diminishing effect of the one- \textit{vs} two-determinant imbalance discussed above. As the system size increases, the correlation energy recovered by the two determinants of the excited states \alert{(at the $\Delta$CSF level)} is expected to become smaller in comparison to the total correlation energy \alert{(associated with the full Hilbert space)}, thus alleviating this imbalance. In contrast, CISD should provide less accurate total energies for larger systems, due to its well-known lack of size-consistency. This issue would be expected to affect excitation energies to some degree, even though they are relative rather than absolute energies. However, a more balanced reference provided by $\Delta$CSF might compensate for the lack of size-consistency when larger systems are targeted. Indeed, $\Delta$CISD presents comparable MAEs across the three sets of system size (\SIrange{0.15}{0.18}{\eV}). In contrast, $\Delta$CISD+Q and $\Delta$CISD+EN2 seem to go opposite ways: the former becomes more accurate and the latter less accurate as a function of system size. Similarly, CC2 becomes more accurate and CCSD loses accuracy as the system size increases, \cite{Loos_2020,Loos_2021} to the point where the theoretically more approximate CC2 becomes the favored methodological choice.
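For clarity, the three statistical measures quoted throughout (where MSE denotes the mean \emph{signed} error, not the mean squared error) are computed from the signed deviations, method minus reference. The following minimal Python sketch illustrates them; the `error_stats` helper and the deviations used in the example are hypothetical placeholders, not benchmark data.

```python
# Illustrative sketch of the error statistics used in the tables:
# MSE  = mean signed error (detects systematic bias),
# MAE  = mean absolute error (mean magnitude of the deviations),
# RMSE = root-mean-square error (penalizes outliers quadratically).
# Helper name and numbers below are made up, not data from the paper.
import math

def error_stats(errors):
    """Return (MSE, MAE, RMSE) for a list of signed errors in eV."""
    n = len(errors)
    mse = sum(errors) / n
    mae = sum(abs(e) for e in errors) / n
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    return mse, mae, rmse

# fabricated deviations (method minus reference) in eV
mse, mae, rmse = error_stats([-0.10, 0.05, -0.20, 0.15])
```

By construction, RMSE $\geq$ MAE $\geq \vert$MSE$\vert$, consistent with the values reported in the tables.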
\alert{It remains to be seen how the absence of size-consistency in $\Delta$CISD impairs the results for even larger systems than those considered here, and the extent to which Davidson or perturbative corrections reduce this problem.} For molecules containing five or six non-hydrogen atoms, $\Delta$CISD+EN2 becomes practically as accurate as CCSD, with MAEs in the \SIrange{0.10}{0.12}{\eV} range. The $\Delta$CISD+Q models turn out to be the most accurate choice for systems of this size, with MAEs ranging from \SIrange{0.07}{0.09}{\eV} (see Table S3 of the {\textcolor{blue}{Supporting Information}}), \alert{with $\Delta$CISD+PC displaying an MAE of only \SI{0.07}{\eV}.} \alert{In particular, it is more accurate than CCSD while sharing the same $\order*{N^6}$ computational scaling, and more accurate than CC2, although it remains less black-box and more expensive than the latter, which scales as $\order*{N^5}$. Overall, the present statistics place our state-specific approaches as encouraging alternatives for describing larger systems, despite the remaining issues regarding higher-lying excited states.} \alert{The MAEs of the seven variants of Davidson corrections, separated by type of excitation, are presented in Table S3 of the {\textcolor{blue}{Supporting Information}}.} \alert{We recall that different basis sets have been used, the aug-cc-pVDZ basis set for systems with up to three non-hydrogen atoms, and the 6-31+G(d) basis set for the larger ones, which could have some impact on the trends as a function of system size, for a given method. Despite the different basis sets, the comparison between different methods, for a given system size, remains valid.} \subsection{Specific applications} Butadiene, glyoxal, \ce{C2} and \ce{C3} are particularly interesting and challenging systems that deserve a dedicated discussion. \alert{The excitation energies are gathered in Tables \ref{tab:butadiene}, \ref{tab:glyoxal}, \ref{tab:c2}, and \ref{tab:c3}, respectively.} \begin{table}[ht!]
\caption{Excitation Energies of Butadiene, in Units of eV, According to Different Methodologies. The reference method is CCSDTQ \cite{Loos_2022}. Only the lowest-lying optically bright ($1 {}^1 B_u$) and dark ($2 {}^1 A_g$) states and their energy gap are compared here. Seven more states have been computed, which can also be found in the {\textcolor{blue}{Supporting Information}}. } \label{tab:butadiene} \begin{ruledtabular} \begin{tabular}{lddddd} \multicolumn{1}{l}{Method} & \multicolumn{1}{c}{$1 {}^1 B_u$} & \multicolumn{1}{c}{$2 {}^1 A_g$} & \multicolumn{1}{c}{MSE} & \multicolumn{1}{c}{MAE} & \multicolumn{1}{c}{$1 {}^1 B_u$/$2 {}^1 A_g$ gap} \\ \hline $\Delta$CSF & 6.53 & 7.18 & +0.37 & 0.37 & 0.65 \\ $\Delta$CISD & 6.54 & 6.93 & +0.25 & 0.25 & 0.39 \\ $\Delta$CISD+EN2 & 6.51 & 5.59\footnotemark[1] & - & - & - \\ $\Delta$CISD+PC & 6.48 & 1.61\footnotemark[1] & - & - & - \\ \hline CIS & 6.30 & 7.56 & +0.44 & 0.44 & 1.25 \\ CIS(D) & 6.15 & 7.53 & +0.49 & 0.49 & 1.12 \\ CC2 & 6.32 & 7.26 & +0.31 & 0.40 & 0.94 \\ CCSD & 6.55 & 7.20 & +0.39 & 0.39 & 0.65 \\ \hline Reference method & 6.41 & 6.56 & - & - & 0.15 \\ \end{tabular} \end{ruledtabular} \footnotetext[1]{Intruder state problem for this state.} \end{table} The dark $2 {}^1 A_{g}$ excited state of butadiene is a notoriously challenging example, having received much attention (see Ref.~\onlinecite{Loos_2019} and references therein). Prior studies had assigned it as a doubly-excited state, due to important contributions from doubly-excited determinants. \cite{Serrano-Andres_1993,Starcke_2006} More recently, though, it has been re-assigned as a singly-excited state, \cite{Barca_2018a} meaning that the doubly-excited determinants actually represent strong orbital relaxation effects \alert{(single excitations from the dominant singly-excited determinant)}.
Here, our state-specific results \alert{(shown in Table \ref{tab:butadiene})} support this interpretation, since one CSF associated with a single excitation provided reasonable excitation energies, whereas attempts to employ a doubly-excited reference produced much higher-lying solutions. Already at the $\Delta$CSF level, we obtained an excitation energy (\SI{7.18}{\eV}) comparable to the much more expensive CCSD (\SI{7.20}{\eV}), though still overestimating the CCSDTQ reference value of \SI{6.56}{\eV}. \cite{Loos_2020} This result demonstrates the ability of $\Delta$CSF to capture orbital relaxation effects at only a mean-field cost, whereas EOM-CC requires at least double excitations to describe them. Inclusion of correlation at the $\Delta$CISD level brings the excitation energy down to \SI{6.93}{\eV}. An important question in butadiene concerns the energy gap between the $2 {}^1 A_g$ dark state and the lower-lying $1 {}^1 B_u$ bright state, whose correct ordering has only recently been settled. \cite{Watson_2012} Taking the CCSDTQ value of \SI{0.15}{\eV} as the reference for the energy gap, \cite{Loos_2020} we observe that EOM-CC methods considerably overestimate it (\SI{0.94}{\eV} in CC2 and \SI{0.65}{\eV} in CCSD), whereas the state-specific methods deliver improved results (\SI{0.65}{\eV} in $\Delta$CSF and \SI{0.39}{\eV} in $\Delta$CISD). \begin{table*}[htb!] \caption{Excitation Energies of Glyoxal, in Units of eV, According to Different Methodologies. The number in parentheses represents the number of CSFs considered in the reference. The reference method is CCSDTQ for the singlets \cite{Loos_2022} and CCSDT for the triplets.
\cite{Loos_2020} } \label{tab:glyoxal} \begin{ruledtabular} \begin{tabular}{ldddddddd} \multicolumn{1}{l}{Method}& \multicolumn{1}{c}{$1 {}^1 A_u$} & \multicolumn{1}{c}{$1 {}^1 B_g$} & \multicolumn{1}{c}{$1 {}^3 A_u$} & \multicolumn{1}{c}{$1 {}^3 B_g$} & \multicolumn{1}{c}{$1 {}^3 B_u$} & \multicolumn{1}{c}{$1 {}^3 A_g$} & \multicolumn{1}{c}{MSE} & \multicolumn{1}{c}{MAE} \\ \hline $\Delta$CSF(1) & 4.11 & 5.88 & 3.53 & 5.50 & 4.69 & 7.44 & +0.97 & 1.14 \\ $\Delta$CSF(2) & 3.34 & 4.56 & 2.70 & 3.91 & 3.86 & 5.06 & -0.31 & 0.58 \\ $\Delta$CISD(1) & 3.49 & 5.27 & 3.00 & 4.85 & 5.37 & 7.23 & +0.65 & 0.65 \\ $\Delta$CISD(2) & 3.12 & 4.51 & 2.59 & 3.97 & 4.75 & 5.90 & -0.08 & 0.22 \\ $\Delta$CISD(1)+EN2 & 3.16 & 4.86 & 2.80 & 4.46 & 5.57 & 7.00 & +0.42 & 0.42 \\ $\Delta$CISD(2)+EN2 & 3.17 & 4.57 & 2.61 & 4.08 & 5.10 & 6.27 & +0.08 & 0.15 \\ $\Delta$CISD(1)+PC & 3.10 & 4.72 & 2.70 & 4.32 & 5.35 & 6.74 & +0.27 & 0.27 \\ $\Delta$CISD(2)+PC & 2.92 & 4.33 & 2.50 & 3.91 & 5.03 & 6.19 & -0.07 & 0.08 \\ \hline Reference method & 2.94 & 4.31 & 2.55 & 3.95 & 5.20 & 6.35 & - & - \\ \end{tabular} \end{ruledtabular} \end{table*} Another challenging system is glyoxal, which presents excited states of genuine multireference character. \cite{Hollauer_1991} While the first pair of NTOs has a dominant weight, the second pair is non-negligible. In this sense, most of the first excited states of glyoxal lie in between the cases of most singly-excited states (that can be qualitatively described with one CSF) and those that need two CSFs. Being an intermediate case, here we performed $\Delta$CISD calculations with references containing one or two CSFs, for the first two singlet states and first four triplet states \alert{(results presented in Table \ref{tab:glyoxal})}. With one CSF only, $\Delta$CSF typically overestimates the reference excitation energies, with the corresponding $\Delta$CISD improving the overall comparison. 
For this set of six excited states, the associated MAEs are \SI{1.14}{\eV} for $\Delta$CSF and \SI{0.65}{\eV} for $\Delta$CISD when using a single CSF as reference. Despite the improvement brought at the CISD level, this is still limited by the lack of an actual multiconfigurational reference for these states. When two CSFs are employed as the reference for the excited states, the MAEs are reduced to \SI{0.58}{\eV} ($\Delta$CSF) and \SI{0.22}{\eV} ($\Delta$CISD), which can be further decreased to \SI{0.08}{\eV} with $\Delta$CISD+PC. We thus recommend augmenting the excited-state reference whenever it displays at least some multireference character, and the weight of the first pairs of NTOs could serve as an easy proxy for this. \begin{table}[ht!] \caption{Excitation Energies of \ce{C2}, in Units of eV, According to Different Methodologies. The coupled-cluster and reference (extrapolated full configuration interaction) results are from Ref.~\onlinecite{Loos_2021a}. } \label{tab:c2} \begin{ruledtabular} \begin{tabular}{ldddd} \multicolumn{1}{l}{Method} & \multicolumn{1}{c}{$1 {}^1 \Delta_g$} & \multicolumn{1}{c}{$2 {}^1 \Sigma_g^+$} & \multicolumn{1}{c}{MSE} & \multicolumn{1}{c}{MAE} \\ \hline $\Delta$CSF & 0.83 & 1.37 & -1.26 & 1.26 \\ $\Delta$CISD & 1.80 & 2.33 & -0.29 & 0.29 \\ $\Delta$CISD+EN2 & 2.16 & 2.35 & -0.10 & 0.10 \\ $\Delta$CISD+PC & 2.09 & 2.18 & -0.24 & 0.24 \\ \hline CC3 & 3.11 & 3.28 & +0.84 & 0.84 \\ CCSDT & 2.63 & 2.87 & +0.40 & 0.40 \\ CC4 & 2.34 & 2.60 & +0.11 & 0.11 \\ CCSDTQ & 2.24 & 2.52 & +0.02 & 0.02 \\ \hline Reference method & 2.21 & 2.50 & - & - \\ \end{tabular} \end{ruledtabular} \end{table} \begin{table}[ht!] \caption{Excitation Energies of \ce{C3}, in Units of eV, According to Different Methodologies. The coupled-cluster and reference (extrapolated full configuration interaction) results are from Ref.~\onlinecite{Loos_2019}.
} \label{tab:c3} \begin{ruledtabular} \begin{tabular}{ldddd} \multicolumn{1}{l}{Method}& \multicolumn{1}{c}{$1 {}^1 \Delta_g$} & \multicolumn{1}{c}{$2 {}^1 \Sigma_g^+$} & \multicolumn{1}{c}{MSE} & \multicolumn{1}{c}{MAE} \\ \hline $\Delta$CSF & 5.10 & 5.88 & -0.06 & 0.06 \\ $\Delta$CISD & 5.17 & 5.93 & +0.01 & 0.04 \\ $\Delta$CISD+EN2 & 5.29 & 5.29\footnotemark[1] & - & - \\ $\Delta$CISD+PC & 5.12 & 4.19\footnotemark[1] & - & - \\ \hline CC3 & 6.65 & 7.20 & +1.38 & 1.38 \\ CCSDT & 5.82 & 6.49 & +0.61 & 0.61 \\ CCSDTQ & 5.31 & 6.00 & +0.11 & 0.11 \\ \hline Reference method & 5.21 & 5.88 & - & - \\ \end{tabular} \end{ruledtabular} \footnotetext[1]{Intruder state problem for this state.} \end{table} Finally, we comment on the lowest-lying $1 ^1 \Delta_g$ and higher-lying $2 ^1 \Sigma_g^+$ doubly-excited states of \ce{C2} and \ce{C3}, which would require at least CCSDTQ quality calculations to become accurate to within \SI{0.1}{\eV}. \cite{Loos_2019} \ce{C2} displays a strong multireference ground state, and thus we employed two CSFs as the reference, the closed-shell HF and the determinant associated with the $(\sigma_{2s}^*)^2 \to (\sigma_{2p_z})^2$ transition. For its doubly-excited states, we employed the two CSFs needed to describe both doubly-excited states, generated from the HF determinant through the $(\pi_{2p_x})^2 \to (\sigma_{2p_z})^2$ and $(\pi_{2p_y})^2 \to (\sigma_{2p_z})^2$ excitations, $\pi_{2p_x}$ and $\pi_{2p_y}$ being degenerate orbitals. In \ce{C3}, the multireference character of the ground state is weaker, and thus we adopted a single HF determinant as a reference. In turn, four CSFs are needed for its doubly-excited states, built from the HF determinant by performing $(\sigma_g)^2 \to (\pi_{2p_x}^*)^2$, $(\sigma_g)^2 \to (\pi_{2p_y}^*)^2$, $(\sigma_u)^2 \to (\pi_{2p_x}^*)^2$, and $(\sigma_u)^2 \to (\pi_{2p_y}^*)^2$ transitions, where $\pi_{2p_x}^*$ and $\pi_{2p_y}^*$ are degenerate orbitals. 
We therefore re-assign the doubly-excited states of \ce{C3} as $(\sigma)^2 \to (\pi^*)^2$, which had been first assigned as $(\pi)^2 \to (\sigma^*)^2$. \cite{Loos_2019} Notice that, for both systems, what differentiates $1 ^1 \Delta_g$ and $2 ^1 \Sigma_g^+$ is essentially the phase between the two CSFs differing by the occupation of the degenerate orbitals ($\pi$ in \ce{C2}, $\pi^*$ in \ce{C3}). Thus, the higher-lying state orbitals were obtained by optimizing for the second CI root associated with the reference (two CSFs in \ce{C2}, four in \ce{C3}). \alert{The computed excitation energies of \ce{C2} and \ce{C3} are shown in Table \ref{tab:c2} and \ref{tab:c3}, respectively.} We found that $\Delta$CISD is more accurate than CCSDT for \ce{C2}, and even more accurate than CCSDTQ for \ce{C3}. \section{Conclusions} \label{sec:ccl} \alert{To summarize, here we have presented and benchmarked a particular state-specific realization of MCSCF and MRCI as a route to perform excited-state calculations.} The orbitals are optimized for a targeted state with a minimal set of CSFs, serving as the reference wave function for the CI calculations, which can be further corrected with Epstein-Nesbet perturbation theory or with a posteriori Davidson corrections. We have surveyed these methods against more established alternatives by computing excitation energies for a vast class of molecules and types of excitations from the QUEST database. State-specific CI was found to be substantially more accurate than the standard CI methods based on a ground-state reference. Importantly, it delivers reliable results across different types of excited states, most notably when comparing singly- and doubly-excited states, and can easily handle ground and excited states of multireference nature. 
The overall accuracy of $\Delta$CISD rivals that of CC2 (MAEs of \SIrange{0.17}{0.18}{\eV}), whereas $\Delta$CISD+EN2 is comparable to CCSD (MAEs of \SI{0.08}{\eV}), with $\Delta$CISD+Q lying in-between (MAEs of \SIrange{0.10}{0.12}{\eV}). For larger systems, $\Delta$CISD+Q leads to more accurate results (MAEs of \SIrange{0.07}{0.09}{\eV}) than CC2 and CCSD (MAEs of \SIrange{0.10}{0.12}{\eV}). There are many exciting possibilities to be pursued from this work. One is to develop analogous state-specific coupled-cluster methods. In light of the huge improvement we have observed when going from ground-state-based to state-specific CI, we expect a similar gain when comparing EOM-CC to state-specific CC methods where tailored CSFs are employed as the reference wave function. \cite{Piecuch_1992,Piecuch_1994,Lutz_2018} One could also develop state-specific implementations of seniority-based \cite{Bytautas_2011} and hierarchy-based \cite{Kossoski_2022} CI for excited states. It would be important to assess the performance of our state-specific approaches for \alert{charge-transfer states} and even larger systems, which would require switching from a determinant-driven to an integral-driven implementation. Although evaluating nonorthogonal matrix elements is more challenging than their orthogonal analogs, the calculation of static properties such as dipole moments and oscillator strengths is possible thanks to the recent generalized extension of the nonorthogonal Wick's theorem proposed by Burton. \cite{Burton_2021a,Burton_2022a} Yet another exciting possibility is to move from a state-specific to a state-averaged reference, while contemplating only a small set of important determinants for describing a given set of states. This would be expected to solve some of the issues encountered here when two states of the same symmetry are strongly coupled. \begin{acknowledgements} This work was performed using HPC resources from CALMIP (Toulouse) under allocation 2021-18005.
This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant agreement No.~863481). \end{acknowledgements} \section*{Supporting information available} \label{sec:SI} Additional statistical measures for different sets of excited states and for all flavors of $\Delta$CISD+Q models. For the full set of 294 excited states, total energies and excitation energies obtained at the $\Delta$CSF, $\Delta$CISD, $\Delta$CISD+EN2, and seven variants of $\Delta$CISD+Q models, excitation energies computed with CIS, CIS(D), CC2, and CCSD, number of determinants in the reference, saddle point order associated with the $\Delta$CSF solutions, the reference excitation energies and corresponding method, and additional statistical measures. For a subset of 16 excited states, total energies and excitation energies obtained at the $\Delta$CISDT, $\Delta$CISDTQ and ground-state-based CISDT and CISDTQ levels of theory. For the subset of 10 doubly-excited states, excitation energies obtained at the CC3, CCSDT, CC4, and CCSDTQ levels of theory.
\section{Introduction} The study of submanifolds of a given ambient space is a natural and interesting problem which enriches our knowledge and understanding of the geometry of the space itself, see \cite{Berger, Carmo}. The theory of ruled surfaces in $\mathbb{R}^3$ is a classical subject in differential geometry, and ruled hypersurfaces in higher dimensions have also been studied by many authors. For ruled surfaces and their study, one can see \cite{Dillen, Divjak, Flory, Guler}.\\ A 2-ruled hypersurface in $\mathbb{R}^4$ is a one-parameter family of planes in $\mathbb{R}^4$. This is a generalization of ruled surfaces in $\mathbb{R}^3$.\\ In \cite{Saji}, K. Saji studies singularities of 2-ruled hypersurfaces in Euclidean 4-space. After defining a non-degenerate 2-ruled hypersurface, he gives a necessary and sufficient condition for such a map germ to be right-left equivalent to the cross cap $\times$ interval, and he discusses the behavior of a generic 2-ruled hypersurface map.\\ In \cite{Mustapha}, the authors obtain the Gauss map (unit normal vector field) of a 2-ruled hypersurface in Euclidean 4-space with the aid of its general parametric equation. They also obtain the Gaussian and mean curvatures of the 2-ruled hypersurface and give some characterizations of its minimality. Finally, they deal with the first and second Laplace-Beltrami operators of 2-ruled hypersurfaces in $\mathbb{E}^4$. In \cite{yayli,yayli1}, Aslan et al. characterize ruled surfaces through quaternions in $\mathbb{E}^3$ and $\mathbb{E}_{1}^3$. In three dimensions, quaternions can thus be used to characterize ruled surfaces. Analogously, 2-ruled hypersurfaces can be constructed by means of octonions; for more information about octonions, see \cite{John}.\\ Motivated by the above two works, we study in this paper the 2-ruled hypersurfaces in the Minkowski 4-space $\mathbb{E}^4_1$.
We define three types of 2-ruled hypersurfaces in $\mathbb{E}^4_1$, obtain the Gaussian and mean curvatures of these 2-ruled hypersurfaces, and give some characterizations of their minimality. Moreover, we construct these surfaces via octonions in $\mathbb{E}^4_1$. We also deal with the first Laplace-Beltrami operators of these types of 2-ruled hypersurfaces in $\mathbb{E}^4_1$. At the end, as an application, we investigate the geometric evolution of a linearly polarized light wave along an optical fiber by means of the 2-ruled hypersurfaces in a four-dimensional Minkowski space. \section{Preliminaries} Let $\mathbb{R}^4 = \lbrace (x_0, x_1, x_2, x_3) | x_i \in\mathbb{R} (i = 0, 1, 2, 3) \rbrace$ be a 4-dimensional Cartesian space. For any $x = (x_0, x_1, x_2, x_3)$, $y = (y_0, y_1, y_2, y_3) \in\mathbb{R}^4$, the pseudo-scalar product of $x$ and $y$ is defined by \begin{eqnarray}\label{1} \langle x, y\rangle=-x_0y_0+\sum_{i=1}^3 x_iy_i. \end{eqnarray} We call $(\mathbb{R}^4, \langle , \rangle)$ the Minkowski 4-space. We shall write $\mathbb{R}^4_1$ instead of $(\mathbb{R}^4, \langle , \rangle)$. We say that a non-zero vector $x\in\mathbb{R}^4_1$ is spacelike, lightlike or timelike if $\langle x , x\rangle>0$, $\langle x, x\rangle=0$ or $\langle x,x \rangle <0$, respectively. The norm of the vector $x\in\mathbb{R}^4_1$ is \begin{eqnarray}\label{2} \Vert x\Vert=\sqrt{\vert\langle x,x \rangle\vert }. \end{eqnarray} We now define the hyperbolic 3-space by \begin{eqnarray}\label{3} H^3_+(-1)=\lbrace x\in\mathbb{R}^4_1 \vert \langle x , x\rangle=-1, x_0>0\rbrace, \end{eqnarray} and the de Sitter 3-space by \begin{eqnarray}\label{4} S^3_1=\lbrace x\in\mathbb{R}^4_1 \vert \langle x , x\rangle=1\rbrace. \end{eqnarray} We also define the light cone at the origin by \begin{eqnarray}\label{5} LC=\lbrace x\in\mathbb{R}^4_1 \vert x_0\neq 0, \langle x , x\rangle=0\rbrace.
\end{eqnarray} If $\overrightarrow{x}=(x_0,x_1,x_2,x_3)$, $\overrightarrow{y}=(y_0,y_1,y_2,y_3)$ and $\overrightarrow{z}=(z_0,z_1,z_2,z_3)$ are three vectors in $\mathbb{R}^4_1$, then their vector product is defined by \begin{eqnarray}\label{eq-(1.2)} \overrightarrow{x}\times\overrightarrow{y}\times\overrightarrow{z}=\det \left[ \begin{array}{cccc} -e_1 & e_2 & e_3 & e_4 \\ x_0 & x_1 & x_2 & x_3 \\ y_0 & y_1 & y_2 & y_3 \\ z_0 & z_1 & z_2 & z_3 \end{array} \right]. \end{eqnarray} If \begin{align*} \varphi : \mathbb{R}^3&\longrightarrow \mathbb{R}^4_1 \cr (x_0,x_1,x_2)&\longmapsto \varphi(x_0,x_1,x_2)=(\varphi_1(x_0,x_1,x_2),\varphi_2(x_0,x_1,x_2),\varphi_3(x_0,x_1,x_2),\varphi_4(x_0,x_1,x_2)) \end{align*} is a hypersurface in the Minkowski 4-space $\mathbb{R}^4_1$, then the Gauss map (i.e., the unit normal vector field) and the matrix forms of the first and second fundamental forms are \begin{align}\label{eq-(1.4)} G=\frac{\varphi_{x_0}\times\varphi_{x_1}\times\varphi_{x_2}}{\Vert\varphi_{x_0}\times\varphi_{x_1}\times\varphi_{x_2}\Vert}, \end{align} \begin{align}\label{eq-(1.5)} [g_{ij}]= \left[ \begin{array}{ccc} g_{11} & g_{12} & g_{13} \\ g_{21} & g_{22} & g_{23} \\ g_{31} & g_{32} & g_{33} \end{array} \right] \end{align} and \begin{align}\label{eq-(1.6)} [h_{ij}]= \left[ \begin{array}{ccc} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{array} \right], \end{align} respectively, where the coefficients are $g_{ij}=\langle \varphi_{x_i},\varphi_{x_j}\rangle$, $h_{ij}=\langle \varphi_{x_ix_j},G\rangle$, $\varphi_{x_i}=\frac{\partial \varphi(x_0,x_1,x_2)}{\partial x_i}$, $\varphi_{x_ix_j}=\frac{\partial^2\varphi(x_0,x_1,x_2)}{\partial x_i\partial x_j}$, $i,j \in \{0,1,2\}$.\\ Also, the matrix of the shape operator of the hypersurface $\varphi$ is \begin{eqnarray}\label{eq-(1.7)} S=[a_{ij}]=[g^{ij}]\cdot[h_{ij}], \end{eqnarray} where $[g^{ij}]$ is the inverse matrix of $[g_{ij}]$.\\ With the aid of (\ref{eq-(1.5)})-(\ref{eq-(1.7)}), the Gaussian curvature and
mean curvature of a hypersurface in $\mathbb{R}^4_1$ are given by \begin{eqnarray}\label{eq-(1.8)} K=\frac{\det[h_{ij}]}{\det[g_{ij}]} \end{eqnarray} and \begin{eqnarray}\label{eq-(1.9)} 3H=\mathrm{trace}(S), \end{eqnarray} respectively. Let the octonion be parameterized by \begin{eqnarray} q=a_0+a_1e_1+a_2e_2+a_3e_3+a_4e_4+a_5e_5+a_6e_6+a_7e_7, \end{eqnarray} where $a_i, i=0,1,...,7$ are real numbers and the $e_i, i=1,...,7$ satisfy the following: \begin{itemize} \item $e_1,...,e_7$ are square roots of $-1$, \item $e_i$ and $e_j$ anticommute when $i\neq j$: $$e_ie_j=-e_je_i,$$ \item the index cycling identity holds: $$e_ie_j=e_k\Rightarrow e_{i+1}e_{j+1}=e_{k+1},$$ where we think of the indices as living in $\mathbb{Z}_7$, and \item the index doubling identity holds: $$e_ie_j=e_k\Rightarrow e_{2i}e_{2j}=e_{2k}.$$ \end{itemize} Now we assume that $a_5=a_6=a_7=0$, and we get the expression \begin{eqnarray} Q=a_0+a_1e_1+a_2e_2+a_3e_3+a_4e_4, \end{eqnarray} called a particular octonion.\\ This particular octonion can also be given in the form \begin{eqnarray} Q=S(Q) + V (Q), \end{eqnarray} where $S(Q) = a_0$ is the scalar part and $V (Q)= a_1e_1+a_2e_2+a_3e_3+a_4e_4$ is the vector part of $Q$. If $S(Q) = 0$, then $Q = a_1e_1+a_2e_2+a_3e_3+a_4e_4$ is called a pure particular octonion.
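The anticommutation, index-cycling, and index-doubling identities above determine the entire basis multiplication table once a single product is fixed. As a minimal illustrative sketch (not part of the original construction), the following Python snippet generates the table from the assumed seed relation $e_1e_2=e_4$, one common convention (cf. \cite{John}); other conventions differ by signs and index labels.

```python
# Build the 8x8 multiplication table of the octonion basis 1 (= e_0), e_1..e_7
# from one seed triple and the index-cycling rule of the text:
# e_i e_j = e_k  implies  e_{i+1} e_{j+1} = e_{k+1}, indices cycling in 1..7.
# The seed e_1 e_2 = e_4 is an assumed convention, not fixed by the paper.

def cyc(i):
    return i % 7 + 1  # next index in 1..7

triples = set()
t = (1, 2, 4)  # assumed seed: e_1 e_2 = e_4
for _ in range(7):
    triples.add(t)
    t = tuple(cyc(i) for i in t)

# entries map (i, j) -> (sign, k), meaning e_i e_j = sign * e_k
table = {}
for i in range(8):
    table[(0, i)] = (1, i)   # e_0 = 1 is the identity
    table[(i, 0)] = (1, i)
for i in range(1, 8):
    table[(i, i)] = (-1, 0)  # each imaginary unit is a square root of -1
for (i, j, k) in triples:
    # each triple multiplies cyclically and anticommutes, like (i, j, k)
    for a, b, c in ((i, j, k), (j, k, i), (k, i, j)):
        table[(a, b)] = (1, c)
        table[(b, a)] = (-1, c)

# sanity check: anticommutativity for distinct imaginary units
assert all(table[(i, j)][0] == -table[(j, i)][0]
           for i in range(1, 8) for j in range(1, 8) if i != j)
```

Each of the seven triples generated this way spans a quaternionic subalgebra, which is the structure inherited by the particular octonions considered above.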
The particular octonion product of two particular octonions $Q = S(Q) + V(Q)$ and $P = S(P) + V(P)$ is defined by \begin{eqnarray} Q\star P\star I=S(Q)S(P)-\langle V(Q), V(P)\rangle+S(Q)V(P)\nonumber\\ + S(P)V(Q) + V(Q)\times V(P)\times I, \end{eqnarray} where $\langle ,\rangle$ and $\times$ denote the usual scalar and vector products in $\mathbb{R}_{1}^4$, respectively, and $I$ is a unitary element of the particular octonions.\\ Now we denote the set of all dual numbers by \begin{eqnarray} \mathbb{D}=\lbrace A=a+\varepsilon a^*/a,a^*\in\mathbb{R}\rbrace, \end{eqnarray} where $\varepsilon$ is the dual unit satisfying $$\varepsilon\neq 0, \,\,\varepsilon^2=0\,\,\,\,\,\text{and}\,\,\,\,\, r\varepsilon=\varepsilon r,\,\,\,\,\forall r\in\mathbb{R}.$$ For any dual numbers $A=a+\varepsilon a^*$ and $B=b+\varepsilon b^*$, the addition and the multiplication are expressed by $$A+B=(a+b)+\varepsilon(a^*+b^*)$$ and $$AB=ab+\varepsilon(a^*b+ab^*),$$ respectively.\\ Dual vectors form the module \begin{eqnarray} \mathbb{D}^4=\lbrace \tilde{A}=a+\varepsilon a^*/a,a^*\in\mathbb{R}^4\rbrace \end{eqnarray} over the commutative and associative ring $\mathbb{D}$. The element $\tilde{A}\in\mathbb{D}^4$ is called a dual vector. The scalar and vector products of any dual vectors $\tilde{A}=a+\varepsilon a^*$ and $\tilde{B}=b+\varepsilon b^*$ are defined by \begin{eqnarray} \langle\tilde{A}, \tilde{B}\rangle_D=\langle a,b\rangle+\varepsilon(\langle a,b^*\rangle+\langle a^*,b\rangle) \end{eqnarray} and \begin{eqnarray} \tilde{A}\times_D\tilde{B}\times_D I=a\times b\times I+\varepsilon(a\times b^*\times I+a^*\times b\times I), \end{eqnarray} respectively. In the last two equalities, $\langle,\rangle$ and $\times$ denote the usual scalar and vector products in $\mathbb{R}_{1}^4$, respectively. The norm of a dual vector $\tilde{A}=a+\varepsilon a^*$ is defined to be \begin{eqnarray} N_{\tilde{A}}=\langle\tilde{A}, \tilde{A}\rangle_D=\vert a\vert^2+2\varepsilon\langle a,a^*\rangle\in\mathbb{D}.
\end{eqnarray} The unit dual sphere is defined by \begin{eqnarray} \mathbb{S}^3_\mathbb{D}=\lbrace \tilde{A}=a+\varepsilon a^*/ \vert\tilde{A}\vert=1, \tilde{A}\in\mathbb{D}^4\rbrace. \end{eqnarray} \section{2-Ruled hypersurfaces of type-1 in $\mathbb{R}^4_1$} A $2$-ruled hypersurface of type-1 in $\mathbb{R}^4_1$ means (the image of) a map $\varphi:I_1\times I_2\times I_3\longrightarrow \mathbb{R}^4_1$ of the form \begin{eqnarray}\label{eq-(2.1)} \varphi(x,y,z)=\alpha(x)+y\beta(x)+z\gamma(x), \end{eqnarray} where $\alpha: I_1\longrightarrow \mathbb{R}^4_1$, $\beta: I_1\longrightarrow S^3_1$, $\gamma :I_1\longrightarrow S^3_1$ are smooth maps, $S^3_1$ is the de Sitter 3-space of $\mathbb{R}^4_1$ and $I_1, I_2, I_3$ are open intervals.\\ We call $\alpha$ the base curve and $\beta$, $\gamma$ the director curves. The planes $ (y,z)\longmapsto \alpha(x)+y\beta(x)+z\gamma(x)$ are called rulings.\\ \quad So, if we take \begin{eqnarray}\label{eq-(2.2)} \left. \begin{array}{llllll} \alpha(x)&=(&\alpha_{1}(x),&\alpha_2(x),&\alpha_3(x),&\alpha_4(x)) \\ \beta(x)&=(&\beta_{1}(x),&\beta_2(x),& \beta_3(x),&\beta_4(x) ) \\ \gamma(x)&=(&\gamma_{1}(x),&\gamma_2(x),&\gamma_3(x),&\gamma_4(x)) \end{array} \right\rbrace, \end{eqnarray} in (\ref{eq-(2.1)}), then we can write the 2-ruled hypersurface of type-1 as \begin{align}\label{eq-(2.3)} \varphi(x,y,z)&= \alpha(x)+y\beta(x)+z\gamma(x)\cr &=(\varphi_1(x,y,z),\varphi_2(x,y,z),\varphi_3(x,y,z),\varphi_4(x,y,z))\cr &=\left( \begin{array}{cc} \alpha_1(x)+y\beta_1(x)+z\gamma_1(x), & \alpha_2(x)+y\beta_2(x)+z\gamma_2(x),\\ \alpha_3(x)+y\beta_3(x)+z\gamma_3(x), & \alpha_4(x)+y\beta_4(x)+z\gamma_4(x) \end{array} \right).
\end{align} We see that $\displaystyle\left(-\beta_1^2+\sum^{4}_{i=2}\beta_i^2\right)=\left(-\gamma_1^2+\sum^{4}_{i=2}\gamma_i^2\right)=1$ and we state $\alpha_i=\alpha_i(x)$, $\beta_i=\beta_i(x)$, $\gamma_i=\gamma_i(x)$, $\varphi_i=\varphi_i(x,y,z)$, $f'=\frac{\partial f(x)}{\partial x}$, $f''=\frac{\partial^2f(x)}{\partial x^2}$, $i\in \{1,2,3,4\}$ and $f\in \{\alpha,\beta,\gamma\}$.\\ We denote \begin{eqnarray} E_{ij}=\gamma_i(\alpha'_j+y\beta'_j+z\gamma'_j)\\ F_{ij}=\beta_i(\alpha'_j+y\beta'_j+z\gamma'_j). \end{eqnarray} Now, let us prove the following theorem, which contains the Gauss map of the 2-ruled hypersurface of type-1 (\ref{eq-(2.3)}). \begin{theorem}\label{thm-(2)} The Gauss map of the 2-ruled hypersurface of type-1 $(\ref{eq-(2.3)})$ is \begin{eqnarray}\label{eq-(2.12)} G(x,y,z)=\frac{G_1(x,y,z)e_1+G_2(x,y,z)e_2+G_3(x,y,z)e_3+G_4(x,y,z)e_4}{D}, \end{eqnarray} where \begin{eqnarray}\label{eq-(2.13)} G_1(x,y,z)=\beta_2(E_{43}-E_{34})+\beta_3(E_{24}-E_{42})+\beta_4(E_{32}-E_{23})\nonumber\\ G_2(x,y,z)=\beta_1(E_{43}-E_{34})+\beta_3(E_{14}-E_{41})+\beta_4(E_{31}-E_{13})\nonumber\\ G_3(x,y,z)=\beta_1(E_{24}-E_{42})+\beta_2(E_{41}-E_{14})+\beta_4(E_{12}-E_{21})\nonumber\\ G_4(x,y,z)=\beta_1(E_{32}-E_{23})+\beta_2(E_{13}-E_{31})+\beta_3(E_{21}-E_{12}) \end{eqnarray} and \begin{eqnarray}\label{eq-(2.15)} D=\sqrt{-G^2_1(x,y,z)+\sum^4_{i=2}G^2_i(x,y,z)}. \end{eqnarray} \end{theorem} \begin{proof} If we differentiate (\ref{eq-(2.3)}), we get \begin{eqnarray*} {\left\{ \begin{array}{ccc} \varphi_x(x,y,z)&=&\big(\alpha'_1+y\beta'_1+z\gamma'_1,\alpha'_2+y\beta'_2+z\gamma'_2, \alpha'_3+y\beta'_3+z\gamma'_3, \alpha'_4+y\beta'_4+z\gamma'_4\big)\\ \varphi_y(x,y,z) &=& \big(\beta_1, \beta_2, \beta_3, \beta_4\big)\\ \varphi_z(x,y,z) &=& \big(\gamma_1, \gamma_2, \gamma_3, \gamma_4\big).
\end{array}\right.} \end{eqnarray*} By using the vector product in (\ref{eq-(1.2)}), we get \begin{eqnarray*} \varphi_x\times\varphi_y\times\varphi_z &=&\Big(\beta_2(E_{43}-E_{34})+\beta_3(E_{24}-E_{42})+\beta_4(E_{32}-E_{23})\Big)e_1\\ &&+\Big(\beta_1(E_{43}-E_{34})+\beta_3(E_{14}-E_{41})+\beta_4(E_{31}-E_{13}) \Big)e_2\\ &&+\Big( \beta_1(E_{24}-E_{42})+\beta_2(E_{41}-E_{14})+\beta_4(E_{12}-E_{21})\Big)e_3\\ &&+\Big( \beta_1(E_{32}-E_{23})+\beta_2(E_{13}-E_{31})+\beta_3(E_{21}-E_{12}) \Big)e_4. \end{eqnarray*} Now, using the unit normal vector formula in (\ref{eq-(1.4)}), we get the result. \end{proof} From (\ref{eq-(1.5)}) we obtain the matrix of the first fundamental form \begin{align}\label{eq-(1.5(1))} [g_{ij}]= \left[ \begin{array}{ccc} -(\alpha'_1+y\beta'_1+z\gamma'_1)^2+\sum_{i=2}^4(\alpha'_i+y\beta'_i+z\gamma'_i)^2 & -F_{11}+\sum_{i=2}^4F_{ii} & -E_{11}+\sum_{i=2}^4E_{ii} \\ -F_{11}+\sum_{i=2}^4F_{ii} & 1 & -\beta_1\gamma_1+\sum_{i=2}^4\beta_i\gamma_i \\ -E_{11}+\sum_{i=2}^4E_{ii} & -\beta_1\gamma_1+\sum_{i=2}^4\beta_i\gamma_i & 1 \end{array} \right]. \end{align} The inverse matrix $[g^{ij}]$ of $[g_{ij}]$ is \begin{align}\label{eq-(1.5(2))} [g^{ij}]=\frac{1}{\det[g_{ij}]} \left[ \begin{array}{ccc} 1-e^2 & ce-b & be-c \\ ce-b & a-c^2 &bc-ae \\ be-c & bc-ae & a-b^2 \end{array} \right], \end{align} where \begin{equation}\label{eq-(2.22)} \left. \begin{aligned} a=&-(\alpha'_1+y\beta'_1+z\gamma'_1)^2+\sum_{i=2}^4(\alpha'_i+y\beta'_i+z\gamma'_i)^2,\cr b=&-F_{11}+\sum_{i=2}^4F_{ii},\cr c=& -E_{11}+\sum_{i=2}^4E_{ii},\cr e=&-\beta_1\gamma_1+\sum_{i=2}^4\beta_i\gamma_i \end{aligned} \right\rbrace \end{equation} and\\ \begin{align}\label{eq-(2.23)} \det[g_{ij}]=-b^2+2cbe-c^2-ae^2+a=D.
\end{align} Furthermore, from (\ref{eq-(1.6)}), the matrix form of the second fundamental form of the 2-ruled hypersurface (\ref{eq-(2.3)}) is obtained as \begin{equation}\label{eq-(2.24)} [h_{ij}]= \left[ \begin{array}{ccc} h_{11} & h_{12} & h_{13} \cr h_{21} & 0 & 0 \cr h_{31} & 0 & 0 \end{array} \right], \end{equation} where \begin{equation}\label{eq-(2.25)} \left. \begin{aligned} h_{11}&=\displaystyle \frac{-G_1(\alpha_1''+y\beta_1''+z\gamma_1'')+\sum^{4}_{i=2}G_i(\alpha_i''+y\beta_i''+z\gamma_i'')}{\sqrt{-G^2_1(x,y,z)+\sum^4_{i=2}G^2_i(x,y,z)}},\cr h_{12}&=h_{21}=\displaystyle\frac{-G_1\beta'_1+\sum_{i=2}^4 G_i\beta'_i}{\sqrt{-G^2_1(x,y,z)+\sum^4_{i=2}G^2_i(x,y,z)}},\cr h_{13}&=h_{31}=\displaystyle\frac{-G_1\gamma'_1+\sum_{i=2}^4 G_i\gamma'_i}{\sqrt{-G^2_1(x,y,z)+\sum^4_{i=2}G^2_i(x,y,z)}}. \end{aligned} \right\rbrace \end{equation} We can easily see that $\det[h_{ij}]=0$. \\ Then, using (\ref{eq-(1.8)}), we can give the following theorem. \begin{theorem} The 2-ruled hypersurface of type-1 defined in (\ref{eq-(2.3)}) is flat. \end{theorem} Now we prove the following theorem about the mean curvature. \begin{theorem} The 2-ruled hypersurface of type-1 defined in (\ref{eq-(2.3)}) is minimal in $\mathbb{R}^4_1$ if \begin{eqnarray}\label{eq-2.26(1)} (1-e^2)\left[-G_1(\alpha_1''+y\beta_1''+z\gamma_1'')+\sum^{4}_{i=2}G_i(\alpha_i''+y\beta_i''+z\gamma_i'')\right]\nonumber\\ +(ce-b)\left[-G_1\beta'_1+\sum_{i=2}^4 G_i\beta'_i\right]+(be-c)\left[-G_1\gamma'_1+\sum_{i=2}^4 G_i\gamma'_i\right]\nonumber\\ +(ce-b)\left[-G_1\beta'_1+\sum_{i=2}^4 G_i\beta'_i\right] +(be-c)\left[-G_1\gamma'_1+\sum_{i=2}^4 G_i\gamma'_i\right]=0. \end{eqnarray} \end{theorem} \begin{proof} By (\ref{eq-(1.7)}), the matrix of the shape operator is \begin{align*} S= \left[ \begin{array}{ccc} 1-e^2 & ce-b & be-c \\ ce-b & a-c^2 &bc-ae \\ be-c & bc-ae & a-b^2 \end{array} \right]\left[ \begin{array}{ccc} h_{11} & h_{12} & h_{13} \\ h_{21} & 0 &0 \\ h_{31} & 0 & 0 \end{array} \right].
\end{align*} Then the coefficients of $S$ are given by \begin{eqnarray*} S_{11}&=&(1-e^2)h_{11}+(ce-b)h_{21}+(be-c)h_{31}\\ S_{22}&=&(ce-b)h_{12}\\ S_{33}&=& (be-c)h_{13}. \end{eqnarray*} Using (\ref{eq-(2.25)}) and (\ref{eq-(1.9)}), we see that the 2-ruled hypersurface is minimal if \begin{eqnarray*} S_{11}+S_{22}+S_{33}=0, \end{eqnarray*} which ends the proof. \end{proof} \begin{corollary} If the curves $\beta$ and $\gamma$ are orthogonal, then the 2-ruled hypersurface of type-1 defined in (\ref{eq-(2.3)}) is minimal if \begin{eqnarray}\label{eq-2.27} \left[-G_1(\alpha_1''+y\beta_1''+z\gamma_1'')+\sum^{4}_{i=2}G_i(\alpha_i''+y\beta_i''+z\gamma_i'')\right]\nonumber\\ -b\left[-G_1\beta'_1+\sum_{i=2}^4 G_i\beta'_i\right]-c\left[-G_1\gamma'_1+\sum_{i=2}^4 G_i\gamma'_i\right]\nonumber\\ -b\left[-G_1\beta'_1+\sum_{i=2}^4 G_i\beta'_i\right] -c\left[-G_1\gamma'_1+\sum_{i=2}^4 G_i\gamma'_i\right]=0. \end{eqnarray} \end{corollary} The Laplace-Beltrami operator of a smooth function $f=f(x^1, x^2, x^3)$ of class $C^3$ with respect to the first fundamental form of a hypersurface is defined as follows: \begin{eqnarray}\label{eq-3.1} \Delta f=\frac{1}{\sqrt{\det[g_{ij}]}}\sum_{i,j}^3\frac{\partial}{\partial x^i}\left(\sqrt{\det[g_{ij}]}g^{ij}\frac{\partial f}{\partial x^j}\right).
\end{eqnarray} Using (\ref{eq-3.1}), we get the Laplace-Beltrami operator of the 2-ruled hypersurface of type-1 (\ref{eq-(2.3)}) as \begin{eqnarray*} \Delta\varphi=(\Delta\varphi_1,\Delta\varphi_2,\Delta\varphi_3,\Delta\varphi_4), \end{eqnarray*} where \begin{eqnarray}\label{eq-3.2} \Delta\varphi_i=\frac{1}{\sqrt{ D} }\left[ \begin{aligned} & \frac{\partial}{\partial x}\left(\frac{(1-e^2)\varphi_{ix}+(ce-b)\varphi_{iy}+(be-c)\varphi_{iz}}{\sqrt{\det[g_{ij}]}}\right)\\ & + \frac{\partial}{\partial y}\left(\frac{(ce-b)\varphi_{ix}+(a-c^2)\varphi_{iy}+(bc-ae)\varphi_{iz}}{\sqrt{\det[g_{ij}]}}\right) \\ &+ \frac{\partial}{\partial z}\left(\frac{(be-c)\varphi_{ix}+(bc-ae)\varphi_{iy}+(a-b^2)\varphi_{iz}}{\sqrt{\det[g_{ij}]}}\right) \end{aligned} \right]. \end{eqnarray} That is, \begin{eqnarray}\label{eq-3.3} \Delta\varphi_i=\frac{1}{\sqrt{ D} }\left[ \begin{aligned} & \frac{\partial}{\partial x}\left(\frac{(1-e^2)(\alpha'_i+y\beta'_i+z\gamma'_i)+(ce-b)\beta_i+(be-c)\gamma_i}{\sqrt{\det[g_{ij}]}}\right)\\ & + \frac{\partial}{\partial y}\left(\frac{(ce-b)(\alpha'_i+y\beta'_i+z\gamma'_i)+(a-c^2)\beta_i+(bc-ae)\gamma_i}{\sqrt{\det[g_{ij}]}}\right) \\ &+ \frac{\partial}{\partial z}\left(\frac{(be-c)(\alpha'_i+y\beta'_i+z\gamma'_i)+(bc-ae)\beta_i+(a-b^2)\gamma_i}{\sqrt{\det[g_{ij}]}}\right) \end{aligned} \right]. \end{eqnarray} If we suppose that $\beta$ and $\gamma$ are orthogonal, then the Laplace-Beltrami operator of the 2-ruled hypersurface of type-1 is given by \begin{eqnarray}\label{eq-3.4} \Delta\varphi_i=\frac{1}{\sqrt{ a-b^2-c^2} }\left[ \begin{aligned} & \frac{\partial}{\partial x}\left(\frac{(\alpha'_i+y\beta'_i+z\gamma'_i)-b\beta_i-c\gamma_i}{\sqrt{a-b^2-c^2}}\right)\\ & + \frac{\partial}{\partial y}\left(\frac{-b(\alpha'_i+y\beta'_i+z\gamma'_i)+(a-c^2)\beta_i+bc\gamma_i}{\sqrt{a-b^2-c^2}}\right) \\ &+ \frac{\partial}{\partial z}\left(\frac{-c(\alpha'_i+y\beta'_i+z\gamma'_i)+bc\beta_i+(a-b^2)\gamma_i}{\sqrt{a-b^2-c^2}}\right) \end{aligned} \right].
\end{eqnarray} \begin{theorem} Suppose that $\beta$ and $\gamma$ are orthogonal. Then the components of the Laplace-Beltrami operator of the 2-ruled hypersurface of type-1 are \begin{eqnarray}\label{eq-3.5(2)} \Delta\varphi_i=\frac{1}{\sqrt{Q} }\left[ \begin{aligned} &\frac{\left((\alpha''_i+y\beta''_i+z\gamma''_i)-(b\beta_i)_x-(c\gamma_i)_x\right)Q-P_1(\alpha'_i+y\beta'_i+z\gamma'_i-b\beta_i-c\gamma_i )}{Q^{\frac{3}{2}}}\\ & +\frac{(-b\beta'_i+((a-c^2)\beta_i)_y+(bc\gamma_i)_y)Q-P_2(-b(\alpha'_i+y\beta'_i+z\gamma'_i)+(a-c^2)\beta_i+bc\gamma_i)}{Q^\frac{3}{2}} \\ &+ \frac{(-c\gamma'_i+(bc\beta_i)_z+((a-b^2)\gamma_i)_z)Q-P_3(-c(\alpha'_i+y\beta'_i+z\gamma'_i)+bc\beta_i+(a-b^2)\gamma_i)}{Q^\frac{3}{2}} \end{aligned} \right], \end{eqnarray} where $i = 1, 2, 3, 4$, $Q=a-b^2-c^2$, $P_1=a_x-2bb_x-2cc_x$, $P_2=a_y-2bb_y-2cc_y$ and $P_3=a_z-2bb_z-2cc_z$. \begin{example} Let $\varphi$ be the 2-ruled hypersurface of type-1 defined by \begin{eqnarray*} \varphi(x,y,z)=\Big(3x+7+\frac{y}{\sqrt{7}}, -5x+1+\frac{z}{\sqrt{5}}, x+\frac{2y\sqrt{2}}{\sqrt{7}}, -4x-1+\frac{2z}{\sqrt{5}}\Big). \end{eqnarray*} We take $\alpha(x)=(3x+7, -5x+1, x, -4x-1)$, $\beta(x)=(\frac{1}{\sqrt{7}}, 0, \frac{2\sqrt{2}}{\sqrt{7}}, 0)$, $\gamma(x)=(0,\frac{1}{\sqrt{5}},0, \frac{2}{\sqrt{5}})$.\\ An easy computation shows that $\varphi$ is minimal and that the Laplace-Beltrami operator of $\varphi$ is zero.
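Indeed, the computation can be carried out directly from the formulas above. In this example $\alpha''=0$ and the directors $\beta$ and $\gamma$ are constant, so $\beta'=\gamma'=0$. Hence every entry of the second fundamental form in (\ref{eq-(2.25)}) vanishes,
\begin{eqnarray*}
h_{11}=h_{12}=h_{21}=h_{13}=h_{31}=0,
\end{eqnarray*}
so $S=0$ and $\varphi$ is minimal. Moreover, the coefficients $a$, $b$, $c$, $e$ in (\ref{eq-(2.22)}) are constant, with $e=-\beta_1\gamma_1+\sum_{i=2}^4\beta_i\gamma_i=0$, so every numerator appearing in (\ref{eq-3.4}) is constant in $x$, $y$ and $z$, and therefore $\Delta\varphi=0$.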
\end{example} \end{theorem} \section{2-Ruled hypersurfaces of type-2 in $\mathbb{R}^4_1$} A $2$-ruled hypersurface of type-2 in $\mathbb{R}^4_1$ means (the image of) a map $\varphi:I_1\times I_2\times I_3\longrightarrow \mathbb{R}^4_1$ of the form \begin{eqnarray}\label{eq-(4.1)} \varphi(x,y,z)=\alpha(x)+y\beta(x)+z\gamma(x), \end{eqnarray} where $\alpha: I_1\longrightarrow \mathbb{R}^4_1$, $\beta: I_2\longrightarrow H^3_+(-1)$, $\gamma :I_3\longrightarrow H^3_+(-1)$ are smooth maps, $H^3_+(-1)$ is the hyperbolic 3-space of $\mathbb{R}^4_1$ and $I_1, I_2, I_3$ are open intervals.\\ We call $\alpha$ a base curve, and $\beta$ and $\gamma$ director curves. The planes $ (y,z)\longrightarrow \alpha(x)+y\beta(x)+z\gamma(x)$ are called rulings.\\ \quad So, if we take \begin{eqnarray}\label{eq-(4.2)} \left. \begin{array}{llllll} \alpha(x)&=(&\alpha_{1}(x),&\alpha_2(x),&\alpha_3(x),&\alpha_4(x)) \\ \beta(x )&=(&\beta_{1} (x),&\beta_2 (x),& \beta_3(x),&\beta_4(x) ) \\ \gamma(x)&=(&\gamma_{1}(x),&\gamma_2(x),&\gamma_3(x),&\gamma_4(x)) \end{array} \right\rbrace \end{eqnarray} in (\ref{eq-(4.1)}), then we can write the 2-ruled hypersurface of type-2 as \begin{align}\label{eq-(4.3)} \varphi(x,y,z)&= \alpha(x)+y\beta(x)+z\gamma(x)\cr &=(\varphi_1(x,y,z),\varphi_2(x,y,z),\varphi_3(x,y,z),\varphi_4(x,y,z))\cr &=\left( \begin{array}{cc} \alpha_1(x)+y\beta_1(x)+z\gamma_1(x), & \alpha_2(x)+y\beta_2(x)+z\gamma_2(x),\\ \alpha_3(x)+y\beta_3(x)+z\gamma_3(x), & \alpha_4(x)+y\beta_4(x)+z\gamma_4(x) \end{array} \right).
\end{align} We see that $\displaystyle\left(-\beta_1^2+\sum^{4}_{i=2}\beta_i^2\right)=\left(-\gamma_1^2+\sum^{4}_{i=2}\gamma_i^2\right)=-1$ and we state $\alpha_i=\alpha_i(x)$, $\beta_i=\beta_i(x)$, $\gamma_i=\gamma_i(x)$, $\varphi_i=\varphi_i(x,y,z)$, $f'=\frac{\partial f(x)}{\partial x}$, $f''=\frac{\partial^2f(x)}{\partial x^2}$, $i\in \{1,2,3,4\}$ and $f\in \{\alpha,\beta,\gamma\}$.\\ From (\ref{eq-(1.5(1))}) we obtain the matrix of the first fundamental form \begin{align}\label{eq-(4.4)} [g_{ij}]= \left[ \begin{array}{ccc} -(\alpha'_1+y\beta'_1+z\gamma'_1)^2+\sum_{i=2}^4(\alpha'_i+y\beta'_i+z\gamma'_i)^2 & -F_{11}+\sum_{i=2}^4F_{ii} & -E_{11}+\sum_{i=2}^4E_{ii} \\ -F_{11}+\sum_{i=2}^4F_{ii} & -1 & -\beta_1\gamma_1+\sum_{i=2}^4\beta_i\gamma_i \\ -E_{11}+\sum_{i=2}^4E_{ii} & -\beta_1\gamma_1+\sum_{i=2}^4\beta_i\gamma_i & -1 \end{array} \right]. \end{align} The inverse matrix $[g^{ij}]$ of $[g_{ij}]$ is \begin{align}\label{eq-(4.5)} [g^{ij}]=\frac{1}{\det[g_{ij}]} \left[ \begin{array}{ccc} 1-e^2 & ce+b & be+c \\ ce+b & -a-c^2 &bc-ae \\ be+c & bc-ae & -a-b^2 \end{array} \right], \end{align} where $a$, $b$, $c$ and $e$ are the same as in (\ref{eq-(2.22)}) and \begin{align}\label{eq-(4.6)} \det[g_{ij}]=b^2+2cbe+c^2-ae^2+a=D. \end{align} Furthermore, from (\ref{eq-(1.6)}), the matrix form of the second fundamental form of the 2-ruled hypersurface (\ref{eq-(4.3)}) is the same as given in (\ref{eq-(2.24)}) and (\ref{eq-(2.25)}). Since $\det [h_{ij}]=0$, we have the following theorem. \begin{theorem} The 2-ruled hypersurface of type-2 defined in (\ref{eq-(4.3)}) is flat.
\end{theorem} For the mean curvature we have the following theorem. \begin{theorem} The 2-ruled hypersurface of type-2 defined in (\ref{eq-(4.3)}) is minimal in $\mathbb{R}^4_1$ if \begin{eqnarray}\label{eq-2.26} (1-e^2)\left[-G_1(\alpha_1''+y\beta_1''+z\gamma_1'')+\sum^{4}_{i=2}G_i(\alpha_i''+y\beta_i''+z\gamma_i'')\right]\nonumber\\ +(ce+b)\left[-G_1\beta'_1+\sum_{i=2}^4 G_i\beta'_i\right]+(be+c)\left[-G_1\gamma'_1+\sum_{i=2}^4 G_i\gamma'_i\right]\nonumber\\ +(ce+b)\left[-G_1\beta'_1+\sum_{i=2}^4 G_i\beta'_i\right] +(be+c)\left[-G_1\gamma'_1+\sum_{i=2}^4 G_i\gamma'_i\right]=0. \end{eqnarray} \end{theorem} \begin{proof} By (\ref{eq-(1.7)}), the matrix of the shape operator is \begin{align*} S= \left[ \begin{array}{ccc} 1-e^2 & ce+b & be+c \\ ce+b & -a-c^2 &bc-ae \\ be+c & bc-ae & -a-b^2 \end{array} \right]\left[ \begin{array}{ccc} h_{11} & h_{12} & h_{13} \\ h_{21} & 0 &0 \\ h_{31} & 0 & 0 \end{array} \right]. \end{align*} Then the coefficients of $S$ are given by \begin{eqnarray*} S_{11}&=&(1-e^2)h_{11}+(ce+b)h_{21}+(be+c)h_{31}\\ S_{22}&=&(ce+b)h_{12}\\ S_{33}&=& (be+c)h_{13}. \end{eqnarray*} Using (\ref{eq-(2.25)}) and (\ref{eq-(1.9)}), we see that the 2-ruled hypersurface of type-2 is minimal if \begin{eqnarray*} S_{11}+S_{22}+S_{33}=0, \end{eqnarray*} which ends the proof. \end{proof} \begin{corollary} If the curves $\beta$ and $\gamma$ are orthogonal, then the 2-ruled hypersurface of type-2 defined in (\ref{eq-(4.3)}) is minimal if \begin{eqnarray}\label{eq-2.27(1)} \left[-G_1(\alpha_1''+y\beta_1''+z\gamma_1'')+\sum^{4}_{i=2}G_i(\alpha_i''+y\beta_i''+z\gamma_i'')\right]\nonumber\\ +b\left[-G_1\beta'_1+\sum_{i=2}^4 G_i\beta'_i\right]+c\left[-G_1\gamma'_1+\sum_{i=2}^4 G_i\gamma'_i\right]\nonumber\\ +b\left[-G_1\beta'_1+\sum_{i=2}^4 G_i\beta'_i\right] +c\left[-G_1\gamma'_1+\sum_{i=2}^4 G_i\gamma'_i\right]=0.
\end{eqnarray} \end{corollary} To end this section, we give the Laplace-Beltrami operator in the following theorem. \begin{theorem} Suppose that $\beta$ and $\gamma$ are orthogonal. Then the components of the Laplace-Beltrami operator of the 2-ruled hypersurface of type-2 are \begin{eqnarray} \label{eq-3.5} \Delta\varphi_i=\frac{1}{\sqrt{Q} }\left[ \begin{aligned} &\frac{\left((\alpha''_i+y\beta''_i+z\gamma''_i)+(b\beta_i)_x+(c\gamma_i)_x\right)Q-P_1(\alpha'_i+y\beta'_i+z\gamma'_i+b\beta_i+c\gamma_i )}{Q^{\frac{3}{2}}}\\ & +\frac{(b\beta'_i+((-a-c^2)\beta_i)_y+(bc\gamma_i)_y)Q-P_2(b(\alpha'_i+y\beta'_i+z\gamma'_i)+(-a-c^2)\beta_i+bc\gamma_i)}{Q^\frac{3}{2}} \\ &+ \frac{(c\gamma'_i+(bc\beta_i)_z+((-a-b^2)\gamma_i)_z)Q-P_3(c(\alpha'_i+y\beta'_i+z\gamma'_i)+bc\beta_i+(-a-b^2)\gamma_i)}{Q^\frac{3}{2}} \end{aligned} \right], \end{eqnarray} where $i = 1, 2, 3, 4$, $Q=a+b^2+c^2$, $P_1=a_x+2bb_x+2cc_x$, $P_2=a_y+2bb_y+2cc_y$ and $P_3=a_z+2bb_z+2cc_z$. \end{theorem} \begin{example}\label{E1} Let $\varphi$ be the 2-ruled hypersurface of type-2 defined by \begin{eqnarray*} \varphi(x,y,z)=\Big(\frac{x^4}{4}-\frac{2y}{\sqrt{3}}+\sqrt{2}, 2x+1+\frac{z}{\sqrt{7}}, -3x+\frac{y}{\sqrt{3}}, \frac{x^3}{3}+\frac{z\sqrt{6}}{\sqrt{7}}\Big). \end{eqnarray*} An easy computation shows that $\varphi$ is minimal and that the Laplace-Beltrami operator of $\varphi$ is zero. \end{example} \section{2-ruled hypersurfaces constructed by particular octonions} Now we give the definition of the 2-ruled hypersurface constructed by particular octonions. \begin{definition} Let $\tilde{\gamma}=a(t)+\varepsilon a^*(t)$ and $\tilde{\beta}=b(t)+\varepsilon b^*(t)$ be two curves on the unit dual sphere $\mathbb{S}^3_\mathbb{D}$; the 2-ruled hypersurface corresponding to these curves is \begin{eqnarray} \varphi(r,s,t)=\alpha(t)+sa(t)+rb(t), \end{eqnarray} where $\alpha(t)=a(t)\times a^*(t)\times I+b(t)\times b^*(t)\times I$. \end{definition} Let $u(t)$ be a curve in $\mathbb{R}^4$.
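Here it is useful to record a simple observation, which follows immediately from the norm of a dual vector defined above: a dual vector $\tilde{A}=a+\varepsilon a^*$ belongs to $\mathbb{S}^3_\mathbb{D}$ if and only if
\begin{eqnarray*}
\vert a\vert^2=1 \quad\text{and}\quad \langle a,a^*\rangle=0,
\end{eqnarray*}
since $N_{\tilde{A}}=\vert a\vert^2+2\varepsilon\langle a,a^*\rangle$ and a dual number equals $1$ exactly when its real part is $1$ and its dual part is $0$. This observation is used in the proofs below.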
We can define two particular octonions \begin{eqnarray*} Q(s,t)=s+u(t), \,\,\,\,\,P(r,t)=r+u(t), \end{eqnarray*} where $S(Q(s,t))=s$, $S(P(r,t))=r$ and $V(Q(s,t))=V(P(r,t))=u(t)$. \begin{theorem} Let $v(t)$ and $w(t)$ be two curves on the unit sphere of $\mathbb{R}^4$ whose position vectors are perpendicular to the position vector of the curve $u(t)$ (i.e., $\vert v(t)\vert=\vert w(t)\vert=1$ and $\langle v(t), u(t)\rangle=\langle w(t), u(t)\rangle=0$). Then the sum defined by \begin{eqnarray} \varphi(s,r,t)=\alpha(t)+sw(t)+rv(t), \end{eqnarray} where $\alpha(t)=u(t)\times v(t)\times I+u(t)\times w(t)\times I$, is a 2-ruled hypersurface constructed by the two particular octonions. \end{theorem} \begin{proof} Since $S(Q(s,t))=s$, $S(P(r,t))=r$ and $V(Q(s,t))=V(P(r,t))=u(t)$, using the octonion product we have \begin{eqnarray}\label{p1} Q(s,t)\star w(t)\star I&=&(s + u(t))\star w(t)\star I\nonumber\\ &=&-\langle u(t),w(t)\rangle + sw(t) + u(t)\times w(t)\times I\nonumber\\ &=& sw(t) + u(t)\times w(t)\times I, \end{eqnarray} and the same calculation gives \begin{eqnarray}\label{p2} P(r,t)\star v(t)\star I&=&(r + u(t))\star v(t)\star I\nonumber\\ &=&rv(t) + u(t)\times v(t)\times I. \end{eqnarray} Adding (\ref{p1}) and (\ref{p2}), we get \begin{eqnarray*} \varphi(s,r,t)&=&u(t)\times v(t)\times I+u(t)\times w(t)\times I+sw(t)+rv(t)\\ &=& \alpha(t)+sw(t)+rv(t), \end{eqnarray*} where $\alpha(t)=u(t)\times v(t)\times I+u(t)\times w(t)\times I$. \end{proof} \begin{corollary} Let $\tilde{\gamma}_1=a+\varepsilon a^*$ and $\tilde{\gamma}_2=b+\varepsilon b^*$ be dual vectors in $\mathbb{S}^3_\mathbb{D}$. Then the surface \begin{eqnarray} \varphi(s,r,t)=\alpha(t)+sa(t)+rb(t), \end{eqnarray} where $\alpha(t)=a(t)\times {a}^{\ast}(t)\times I+b(t)\times {b}^{\ast}(t)\times I$, is a 2-ruled hypersurface constructed by the two particular octonions.
\end{corollary} \begin{proof} Let $\tilde{\gamma}_1=a+\varepsilon a^*$ and $\tilde{\gamma}_2=b+\varepsilon b^*$ be dual vectors in $\mathbb{S}^3_\mathbb{D}$. We know that $$Q(s,t)=s+a^*(t)$$ and $$P(r,t)=r+b^*(t)$$ are two particular octonions, where $S(Q(s,t))=s$, $S(P(r,t))=r$, $V(Q(s,t))= a^*(t)$ and $V(P(r,t))=b^*(t)$. So, using the octonion product, we have \begin{eqnarray*} a(t)\star Q(s,t)\star I&=&a(t)\star(s+a^*(t))\star I\nonumber\\ &=&-\langle a(t), a^*(t)\rangle+sa(t)+a(t)\times a^*(t)\times I. \end{eqnarray*} Since $\vert\tilde{\gamma}_1\vert=1$ we have $\langle a(t), a^*(t)\rangle=0$. Then \begin{eqnarray} a(t)\star Q(s,t)\star I=a(t)\times a^*(t)\times I+sa(t).\label{p3} \end{eqnarray} The same calculation also gives \begin{eqnarray} b(t)\star P(r,t)\star I=b(t)\times b^*(t)\times I+rb(t).\label{p4} \end{eqnarray} Adding (\ref{p3}) and (\ref{p4}) and denoting $\varphi(s,r,t)=a(t)\star Q(s,t)\star I+b(t)\star P(r,t)\star I$, we get \begin{eqnarray*} \varphi(s,r,t)=\alpha(t)+sa(t)+rb(t). \end{eqnarray*} \end{proof} \begin{example}\label{Ex3} Let us take the particular octonions $Q(s,t)=s+u(t)$ and $P(r,t)=r+u(t)$ defined by $u(t)=(-\cos t\cos 2 t,\cos t\sin 2 t, 0, 0)\in \mathbb{R}^{4}_1$. Then, we can find \begin{align*} w(t)=(\sin t\sin 2 t,\sin t\cos 2 t, \cos t, \sin t)\textit{ and }v(t)=(\cos t\sin 2 t,\sin t\sin 2 t, \sin t, -\cos t). \end{align*} Thus, we can compute \begin{align*} \alpha(t)&=u(t)\times w(t)\times I+u(t)\times v(t)\times I\\ &=(0,0,\sin 2 t(\frac{1}{2}\sin 2t-\cos^{2} t),\sin 2 t(\frac{1}{2}\sin 2t+\cos^{2} t)). \end{align*} Then, we reach the following 2-ruled hypersurface of type-1, \begin{displaymath} \varphi(s,t,r)=\left( \begin{array}{c} s \sin t\sin 2t+r\cos t\sin 2 t \\ s \sin t\cos 2t+r\sin t\sin 2 t \\ \sin 2 t(\frac{1}{2}\sin 2t-\cos^{2} t)+s\cos t+r\sin t\\ \sin 2 t(\frac{1}{2}\sin 2t+\cos^{2} t)+s\sin t+r\cos t \end{array}\right).
\end{displaymath} \end{example} Next, the projections of the 2-ruled hypersurface of type-1 in Example \ref{Ex3}, constructed by particular octonions, onto $\mathbb{R}_{1}^{3}$ are visualized in Figure \ref{F1}. \begin{figure}[H] \begin{center} \subfigure[]{\includegraphics[width=3.3in]{Figure1.jpg}} \subfigure[]{\includegraphics[width=3.1in]{Figure2.jpg}} \subfigure[]{\includegraphics[width=3.3in]{Figure3.jpg}} \subfigure[]{\includegraphics[width=3.1in]{Figure4.jpg}} \end{center} \caption{Some projections of the 2-ruled hypersurface of type-1 constructed by particular octonions in $\mathbb{R}_{1}^{4}$} \label{F1} \end{figure} \section{Some discussions related to the electromagnetic theory} By identifying an optical fiber with a curve, we can give a geometric interpretation of the motion of a linearly polarized light wave through the Frenet frame elements. As the linearly polarized light wave moves along the optical fiber, the polarization plane rotates, and the image of the polarization vector (the electric field) in the plane is a straight line.\\ Therefore, we can use ruled surfaces to model this movement geometrically. In particular, it would be very advantageous to use ruled surface equations instead of standard calculations when expressing the motion of a linearly polarized light wave along the optical fiber in four dimensions.\\ In this study, we defined three types of 2-ruled hypersurfaces in 4-dimensional Minkowski space $\mathbb{R}_{1}^{4}$. In this section, we give an interpretation of the motion of the polarized light wave along these surfaces in 4-dimensional Minkowski space, give some motivating examples, and visualize them with the MAPLE program.\\ We demonstrate that the evolution of a linearly polarized light wave is associated with the movement of the parameter curve, which is the line segment in the formation of the ruled surface.
If we match the parameter curve that forms the line segment of the ruled surface with the polarization vector, then the optical fiber is matched with the other parameter curve. Hence, the polarization vector moves in parallel along an optical fiber. This allows us to interpret the movement of the polarization vector (the electric field) along an optical fiber geometrically in 4-dimensional space. \section{Conclusions} In this paper, we gave the definition of three types of 2-ruled hypersurfaces and we calculated the mean curvature, the Gauss curvature and the Laplace-Beltrami operator of the first two types of 2-ruled hypersurfaces. Afterwards, we constructed such 2-ruled hypersurfaces by using particular octonions. In this construction, we gave an example and visualized the images with the MAPLE program. This construction is new and original. Then, we presented some discussions related to the 2-ruled hypersurfaces and the electromagnetic theory. As a perspective, the same can also be done in Riemannian 4-manifolds and pseudo-Riemannian 4-manifolds.
\section{Introduction} The axion, which was initially introduced as a solution to the strong CP problem \cite{Peccei:1977hh, Peccei:1977ur, Weinberg:1977ma, Wilczek:1977pj}, has turned out to have many interesting phenomenological consequences \cite{Kim:2008hd,DiLuzio:2020wdo,Choi:2020rgn}. After recognizing that axions provide a compelling candidate for the dark matter in the Universe \cite{Preskill:1982cy,Abbott:1982af,Dine:1982ah}, a lot of efforts have been made to search for axions over the parameter space of the representative axion models \cite{Kim:1979if,Shifman:1979if,Dine:1981rt,Zhitnitsky:1980tq}. Since the viable parameter region is in the very weakly coupled regime, most of the laboratory experiments searching for axions proceed in the direction of precision measurements using, for example, resonant cavities, nuclear magnetic resonance, light shining through the walls, and polarization of light in magnetic fields (see \cite{Graham:2015ouw,Irastorza:2018dyq,Semertzidis:2021rxs} for comprehensive reviews). A complementary approach which can severely constrain the couplings of light axions is to use astrophysical objects forming a hot and dense environment, e.g., supernovae, stars on the horizontal and red giant branches, neutron stars, and even white dwarfs (see \cite{Raffelt:2006cw} for a review and also \cite{DiLuzio:2021ysg} for a recent overview). Axions can be produced abundantly from those objects, thereby altering their evolution. One can then derive constraints on the couplings of axions by requiring that the axion emission does not significantly alter the standard evolution scenario which is consistent with the observational data. A core-collapse supernova, e.g., SN1987A, is known to provide stringent constraints on the axion couplings to hadrons, particularly on the couplings to nucleons \cite{Turner:1987by,Raffelt:1987yt}.
The observation of the neutrino flux from SN1987A, which is consistent with the standard scenario \cite{Burrows:2000mk,Woosley:2005cha}, suggests that the additional cooling by axion emission from the associated proto-neutron star is constrained as $L_a \lesssim L_\nu = {\cal O}(1- 10) \times 10^{51}\,{\rm erg}/\sec$, where $L_a$ and $L_\nu$ denote the axion and neutrino luminosities around $1-10\,\sec$ after the formation of the proto-neutron star \cite{Raffelt:2006cw}. Among the processes producing axions from supernovae, the nucleon bremsstrahlung process $N+N\rightarrow N+N+a$ ($N=n,p$) has been considered the dominant process for many years \cite{Iwamoto:1984ir,Brinkmann:1988vi,Raffelt:1993ix,Iwamoto:1992jp,Carenza:2019pxu}. However, recently it has been noticed that the number density of negatively charged pions inside supernovae can be significantly enhanced by pion-nucleon interactions \cite{Fore:2019wib}. Based on this observation, the pion-induced Compton-like process $\pi^-+p \rightarrow n + a$, which was originally studied in \cite{Turner:1991ax,Keil:1996ju}, has been revisited. Taking into account medium effects, Refs.~\cite{Carenza:2020cis,Fischer:2021jfm} show that the process dominates over the nucleon-nucleon bremsstrahlung for a wide range of astrophysical conditions encountered inside supernovae.\footnote{ The medium effects also modify the axion-nucleon couplings. The modification is expected to be an ${\cal O}(1)$ effect in general, while it could result in a $\sim$10-fold enhancement of the axion-neutron coupling in the KSVZ model because the accidental cancellation of that coupling in vacuum is spoiled \cite{Balkin:2020dsr}. Here we presume the values of axion couplings in vacuum for our numerical estimation.
} Motivated by the importance of the process $\pi^-+p \rightarrow n + a$, in this paper we extend the previous analysis of axion emission from supernovae with a complete set of relevant axion couplings including the axion-pion-nucleon and axion-pion {\it contact interactions} which were ignored in the previous studies. Our primary concern is how significantly the contact interactions can affect the axion emissivity. We start with a general axion Lagrangian above the QCD confinement scale, which determines the axion couplings to hadrons below that scale. To highlight the coupling dependence of the axion emissivity more clearly, we compare a new contribution including the effect of contact terms to that from the axion-nucleon couplings only and take the ratio between the two contributions. It is expected that the ratios can lead to cancellation of the uncertainties in nuclear physics and the medium effect. Thus, as a first step towards understanding the contributions of the contact interactions, we consider the tree-level diagrams in the leading order pion-nucleon couplings and the one-pion exchange diagrams for the nucleon-nucleon bremsstrahlung. We also ignore the background matter effect, which should be included in future work. In such an approximation, two processes are affected by the axion-pion-nucleon contact interaction, $\pi^-+p \rightarrow n + a$ and $n+p\rightarrow n+p+a$. We find that the axion-pion-nucleon contact interaction can enlarge the emission rate of $\pi^-+p \rightarrow n + a$ by a factor of $2-4$ depending on the pattern of axion couplings, while the effect on $n+p\rightarrow n+p+a$ is negligible. We also examine other pion-induced processes such as $\pi^0+ n\rightarrow n+a$ and $\pi^-+\pi^0\rightarrow \pi^-+a$, where the latter process is induced by the axion-pion contact interaction. 
We then find that $\pi^0+ n\rightarrow n+a$ can be as important as $\pi^-+p \rightarrow n + a$, again depending on the pattern of axion couplings, while $\pi^-+\pi^0\rightarrow \pi^-+a$ is negligible compared to $\pi^-+p \rightarrow n + a$ over the entire axion parameter space for astrophysical conditions encountered inside proto-neutron stars. This paper is organized as follows. In Sec.~\ref{Sec:UVmodel}, we introduce the relevant axion couplings to nucleons and pions in the context of a generic axion model and discuss the model dependence of couplings for a simple class of axion models. In Sec.~\ref{Sec:Calc}, we investigate the axion emission from supernovae by a variety of pion-induced processes and the nucleon-nucleon bremsstrahlung processes, with a complete set of relevant axion couplings. Sec.~\ref{Sec:Conc} is a summary and conclusion. \section{Axion Couplings to Nucleons and Pions \label{Sec:UVmodel}} In this section, we briefly discuss the axion couplings to nucleons and pions for generic axions whose couplings are constrained only by the (approximate) global $U(1)$ Peccei-Quinn (PQ) symmetry \cite{Peccei:1977hh, Peccei:1977ur, Weinberg:1977ma, Wilczek:1977pj}. Without loss of generality, at scales below the axion decay constant $f_a$, one can always choose a field basis for which {\it only} the axion field transforms under the PQ symmetry as \begin{eqnarray} U(1)_{\rm PQ}:\quad a \, \to \,a+ {\rm constant}, \end{eqnarray} while all other fields are invariant \cite{Georgi:1986df}. 
In such a field basis, the axion couplings at low energy scales around $\mu={\cal O}(1)$ GeV include \begin{eqnarray} \label{Lquark} {\cal L}_{\rm eff} &=& c_G\frac{g_s^2}{32\pi^2}\frac{a}{f_a} G_{\mu\nu}^a \tilde G^{a\mu\nu} + \frac{\partial_\mu a}{2 f_a} \Big( C_u \bar u \gamma^\mu \gamma_5 u + C_d\bar d \gamma^\mu\gamma_5 d \Big), \end{eqnarray} where the axion decay constant $f_a$ defines the axion field range as $a\cong a+2\pi f_a$, $G^a_{\mu\nu}$ is the gluon field strength, and $u$ and $d$ are the up and down quarks. Here $c_G$ is an integer-valued parameter describing the $U(1)_{\rm PQ}$ breaking by the QCD anomaly, while $C_u$ and $C_d$ are continuous real-valued parameters describing the $U(1)_{\rm PQ}$-preserving axion couplings to the light quarks renormalized at $\mu={\cal O}(1)$ GeV. For axion models which have a UV completion with a linearly realized $U(1)_{\rm PQ}$, the low energy parameters $c_G$ and $C_{u,d}$ in Eq.~\eqref{Lquark} are determined mainly by the $U(1)_{\rm PQ}$ charges defined in the UV model.\footnote{ For string-theoretic axions that arise from the zero modes of higher-dimensional $p$-form gauge fields, there is no UV completion with a linearly realized $U(1)_{\rm PQ}$. It has been noted that the tree-level values of $C_{u,d}$ for string-theoretic axions are of the order of $\alpha_{\rm GUT}/2\pi$ \cite{Choi:2021kuy}.
} As an illustrative example, let us consider axion models in which the first generation quark masses are generated by the following Yukawa couplings\footnote{Here for simplicity we ignore the effects of flavor mixings.}: \begin{eqnarray} \label{yukawa} {\cal L}_{\rm Yukawa} = \lambda_u \left(\frac{\sigma}{\Lambda}\right)^{n_u}Q_1 u^c_1 H_u + \lambda_d \left(\frac{\sigma}{\Lambda}\right)^{n_d} Q_1 d^c_1 H_{d} + h.c., \end{eqnarray} where $\sigma$ is a PQ-charged gauge-singlet scalar field whose vacuum expectation value determines the axion decay constant as \begin{eqnarray} \langle \sigma\rangle =\frac{1}{\sqrt{2}}f_a e^{ia/f_a},\end{eqnarray} $Q_i$ and $u^c_i, d^c_i$ ($i=1,2,3$) denote the three generations of the left-handed $SU(2)_L$-doublet quarks and the left-handed $SU(2)_L$-singlet antiquarks, respectively, $H_u$ and $H_d$ are $SU(2)_L$-doublet Higgs fields, and finally $\Lambda$ is a cutoff scale of the model. To derive the low energy axion couplings in this model, we first make the following axion-dependent field redefinition at a scale around $f_a$: \begin{eqnarray} \label{redef} \Phi\,\rightarrow \, e^{iq_\Phi a/f_a} \Phi\quad (\Phi=\psi, H_{u,d}),\end{eqnarray} and subsequently integrate out all massive fields heavier than $\mu={\cal O}(1)$ GeV, where $q_\Phi$ is the PQ charge of $\Phi$ (in the normalization convention with $q_\sigma=1$) for the linearly realized $U(1)_{\rm PQ}$, and $\psi$ stands for all chiral fermions in the model. 
Then the axion-gluon coupling $c_G$, which arises as a consequence of the axion-dependent field redefinition of $\psi$, corresponds to the coefficient of the $U(1)_{\rm PQ}$-$SU(3)_c$-$SU(3)_c$ anomaly, while the couplings $C_{u,d}$ to the light quarks are determined by (i) a contribution from the axion-dependent field-redefinition of $\{Q_1,u^c_1,d^c_1\}$, (ii) the tree-level threshold correction from the axion mixing with the $Z$ boson which is induced by the field redefinition of $H_{u,d}$, and finally (iii) the radiative corrections caused by the gauge and Yukawa couplings in the model \cite{Choi:2021kuy}. Putting these together, one finds \begin{eqnarray} \label{cucd} c_G& =&2\sum_\psi q_\psi {\rm Tr}(T^2_c(\psi)),\nonumber \\ C_u &=& -n_u - (q_{H_u}+q_{H_d})\cos^2\beta+\Delta C_u,\nonumber \\ C_d&=& -n_d -(q_{H_u}+q_{H_d})\sin^2\beta+\Delta C_d, \end{eqnarray} where $T_c(\psi)$ is the color charge of $\psi$, $\tan\beta =\langle H_u\rangle/\langle H_d\rangle$, and the radiative corrections $\Delta C_{u,d}={\cal O}(10^{-2}-10^{-3})$ can be safely ignored if the tree level values of $C_{u,d}$ are of order unity \cite{Choi:2021kuy}. The above results indicate that a variety of different patterns of $c_G$ and $C_{u,d}$ are possible even within the framework of relatively simple axion models. Let us present explicitly the parameter values for some examples. In the KSVZ model \cite{Kim:1979if,Shifman:1979if}, $H_u=(i\sigma_2 H_d)^*$ and all SM fields are neutral under the linearly realized $U(1)_{\rm PQ}$, and therefore $n_u=n_d=q_{H_u}=q_{H_d}=0$. The model also involves a heavy PQ-charged exotic quark ${\cal Q}$ generating the $U(1)_{\rm PQ}$-$SU(3)_c$-$SU(3)_c$ anomaly with $c_G=1$.
The resulting couplings of the KSVZ axion at $\mu={\cal O}(1)$ GeV are given by \begin{eqnarray} \label{model:KSVZ} {\rm KSVZ:} \quad c_G=1, \quad C_u=\Delta C_u={\cal O}(10^{-2}), \quad C_d=\Delta C_d ={\cal O}(10^{-2}),\end{eqnarray} where $\Delta C_{u,d}$ are induced mostly by the axion-gluon coupling $c_G$ causing a running of $C_{u,d}$ over the scales from the mass of the exotic quark ${\cal Q}$ to $\mu={\cal O}(1)$ GeV \cite{Choi:2021kuy}. On the other hand, the minimal DFSZ model \cite{Dine:1981rt,Zhitnitsky:1980tq} has $n_u=n_d=0$, $q_{H_u}=q_{H_d}=-1$ and all chiral fermions in the SM have $q_\psi=1/2$, resulting in \begin{eqnarray} \label{model:DFSZ} {\rm DFSZ:}\quad c_G=6,\quad C_u=2\cos^2\beta +\Delta C_u, \quad C_d=2\sin^2\beta+\Delta C_d\end{eqnarray} with $\Delta C_{u,d}={\cal O}(10^{-3})$ which are smaller than those of the KSVZ model because in the DFSZ model the running of $C_{u,d}$ starts from a lower scale around the top quark mass \cite{Choi:2021kuy}. It is an interesting possibility that $U(1)_{\rm PQ}$ plays the role of a flavor symmetry which explains the fermion mass hierarchies~\cite{Ema:2016ops,Calibbi:2016hwq,Bjorkeroth:2017tsz}. In such a case, $n_{u,d}$ can be non-zero integers and the model can have a more diverse pattern of $c_G$ and $C_{u,d}$. Note that, while $n_{u,d}$ in the Yukawa couplings (Eq.~\eqref{yukawa}) are required to be non-negative, the sign in front of $n_{u,d}$ in Eq.~\eqref{cucd} can be flipped by replacing $\sigma$ in Eq.~\eqref{yukawa} with $\sigma^*$. One can further generalize the model by introducing an additional $U(1)_{\rm PQ}$-charged Higgs doublet, and then $C_{u,d}$ receive additional contributions depending on the vacuum expectation value of the added Higgs field. With this observation, in the following we regard $C_u$ and $C_d$ as real-valued free parameters, and $c_G$ as an integer-valued additional free parameter, without specifying the underlying UV model.
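As an illustrative cross-check of the benchmark values quoted above (our own sketch, not part of the original derivation), the tree-level couplings can be evaluated directly from Eq.~\eqref{cucd}; the function name and the explicit anomaly counting for the DFSZ case are our own.

```python
def tree_level_couplings(n_u, n_d, q_Hu, q_Hd, tan_beta, c_G):
    """Tree-level (c_G, C_u, C_d) from Eq. (cucd), neglecting Delta C_{u,d}."""
    cos2b = 1.0 / (1.0 + tan_beta**2)   # cos^2(beta)
    sin2b = 1.0 - cos2b                 # sin^2(beta)
    C_u = -n_u - (q_Hu + q_Hd) * cos2b
    C_d = -n_d - (q_Hu + q_Hd) * sin2b
    return c_G, C_u, C_d

# KSVZ: all SM fields PQ-neutral; the anomaly of one exotic quark gives c_G = 1
cG_k, Cu_k, Cd_k = tree_level_couplings(0, 0, 0, 0, 1.0, c_G=1)   # -> (1, 0, 0)

# DFSZ: c_G = 2 sum_psi q_psi Tr(T_c^2) with q_psi = 1/2 for all SM chiral
# fermions; per generation Q_1 gives two color triplets, u^c and d^c one each,
# and Tr(T_c^2) = 1/2 per triplet
c_G_dfsz = 2 * 0.5 * 3 * (2 * 0.5 + 0.5 + 0.5)   # = 6.0
cG_d, Cu_d, Cd_d = tree_level_couplings(0, 0, -1, -1, 5.0, c_G=6)
print(cG_d, round(Cu_d, 3), round(Cd_d, 3))   # 6 0.077 1.923, i.e. 2cos^2(b), 2sin^2(b)
```

For $\tan\beta=5$ this reproduces $C_u=2\cos^2\beta$ and $C_d=2\sin^2\beta$ of Eq.~\eqref{model:DFSZ}.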
From the couplings in Eq.~(\ref{Lquark}) defined at $\mu={\cal O}(1)$ GeV, we can derive the axion couplings to nucleons and pions which are relevant for the axion emission from supernova. Including the conventional pion-nucleon couplings, the interactions are given by \cite{Chang:1993gm,DiLuzio:2020wdo} \begin{eqnarray} \label{Lhadron} {\cal L}_{\rm int} &=& \frac{g_A}{2f_\pi} \left( \partial_\mu\pi^0 (\bar p\gamma^\mu\gamma_5 p - \bar n \gamma^\mu \gamma_5 n) + \sqrt{2}\partial_\mu\pi^+\bar p \gamma^\mu \gamma_5 n + \sqrt{2}\partial_\mu\pi^- \bar n \gamma^\mu\gamma_5 p \right) \nonumber\\ && + \frac{\partial_\mu a}{2 f_a} \left( C_{ap} \bar p \gamma^\mu\gamma_5 p + C_{an} \bar n \gamma^\mu \gamma_5 n + \frac{C_{a\pi N}}{f_\pi} ( i \pi^+ \bar p \gamma^\mu n - i \pi^- \bar n\gamma^\mu p) \right) \nonumber\\ &&+ \frac{\partial_\mu a}{2 f_a} \frac{C_{a\pi}}{f_\pi} \Big( \pi^0 \pi^+ \partial^\mu\pi^- +\pi^0\pi^-\partial^\mu\pi^+ - 2\pi^+\pi^-\partial^\mu\pi^0 \Big), \end{eqnarray} where $f_\pi= 92.4$ MeV is the pion decay constant and \begin{eqnarray} \label{Coeffs} C_{ap}- C_{an} &=& g_A \left(C_u - C_d + \Big(\frac{m_u - m_d}{m_u+ m_d} \Big)c_G\right),\nonumber\\ C_{ap} + C_{an} &=& g_0 \Big(C_u + C_d - c_G \Big), \nonumber\\ C_{a\pi N} &=& \frac{C_{ap} - C_{an}}{\sqrt{2} g_A}, \quad C_{a\pi} = \frac{2(C_{ap} -C_{an})}{3 g_A} \end{eqnarray} with the nucleon matrix elements of the light quark axial vector currents given by \begin{eqnarray} \label{n_matrix} g_A&=&\Delta u -\Delta d \simeq 1.2723(23), \nonumber \\ g_0&=&\Delta u+\Delta d \simeq 0.521(53),\end{eqnarray} where $s^\mu \Delta q=\langle N|\bar q\gamma^\mu\gamma_5 q|N\rangle$ ($q=u,d$) for the nucleon spin four vector $s^\mu$. Here the numerical value of $g_0$ is chosen for the axion-quark couplings $C_{u,d}$ renormalized at $\mu=2$ GeV in the $\overline{\rm MS}$ scheme \cite{diCortona:2015ldu}, and the small contributions from the axion couplings to the heavier quarks $Q=\{s,c,b,t\}$ are ignored. 
The above results show that the entire axion couplings to nucleons and pions, including the axion-pion-nucleon contact interaction $C_{a\pi N}$ and the axion-pion contact interaction $C_{a\pi}$, are determined by the two free parameters $C_{an}$ and $C_{ap}$. An interesting feature of these parameters is that in some axion models they can have a hierarchical pattern such as $|C_{ap}|\gg |C_{an}|$ or $|C_{ap}-C_{an}|\gg |C_{ap}+C_{an}|$ without fine tuning of any continuous parameter in the underlying UV model. For instance, including the radiative corrections induced by the axion-gluon coupling $c_G$, the KSVZ and string-theoretic axions have $|C_{u,d}|={\cal O}(10^{-2} c_G)$ \cite{Choi:2021kuy}, which results in \begin{eqnarray} |C_{ap}|\simeq 0.48 |c_G|\,\gg\, |C_{an}| = {\cal O}(10^{-2}|c_G|)\end{eqnarray} for the nucleon matrix elements in Eq.~\eqref{n_matrix} and the light quark mass ratio $m_u/m_d=0.48(3)$. Also, for the axion couplings in Eq.~\eqref{cucd}, the anomaly coefficient $c_G$ and the tree level value of $C_u+C_d$ are all quantized parameters. Then, for a model with $U(1)_{\rm PQ}$-charges yielding \begin{eqnarray} c_G=-\left(n_u+n_d+q_{H_u}+q_{H_d}\right)={\cal O}(1)\,, \label{eq:maximalcase} \end{eqnarray} the model predicts \begin{eqnarray} C_{ap}-C_{an}={\cal O}(1),\quad C_{ap}+C_{an}=g_0(\Delta C_u+\Delta C_d)\lesssim{\cal O}(10^{-2})\,. \end{eqnarray} At any rate, axions generically have the axion-pion-nucleon contact interaction given by $C_{a\pi N}=(C_{ap}-C_{an})/\sqrt{2}g_A$ ($g_A\simeq 1.27$) and the axion-pion contact interaction $C_{a\pi}=2(C_{ap}-C_{an})/3g_A$. On the other hand, these contact interactions were not taken into account in the previous studies of axion emission from supernovae. In Sec.~\ref{Sec:Calc}, we will examine the effects of those contact interactions on the axion emission rates to see how important they can be. 
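The hierarchy quoted above can be verified by elementary arithmetic on Eqs.~\eqref{Coeffs} and \eqref{n_matrix}; the sketch below is our own cross-check (not part of the analysis), with the central values $g_A=1.2723$, $g_0=0.521$ and $m_u/m_d=0.48$ as inputs.

```python
g_A, g_0 = 1.2723, 0.521   # nucleon matrix elements, Eq. (n_matrix)
r = 0.48                   # light quark mass ratio m_u/m_d

def nucleon_couplings(C_u, C_d, c_G):
    """(C_ap, C_an) from Eq. (Coeffs)."""
    diff = g_A * (C_u - C_d + (r - 1) / (r + 1) * c_G)   # C_ap - C_an
    summ = g_0 * (C_u + C_d - c_G)                       # C_ap + C_an
    return (summ + diff) / 2, (summ - diff) / 2

# KSVZ-like case: C_u = C_d = 0 at tree level, c_G = 1
C_ap, C_an = nucleon_couplings(0.0, 0.0, 1)
print(round(C_ap, 2), round(C_an, 2))   # -0.48 -0.04
```

The output confirms $|C_{ap}|\simeq 0.48\,|c_G|$ while $|C_{an}|={\cal O}(10^{-2}|c_G|)$.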
\section{Axion Emission from Supernovae by Hadronic Processes\label{Sec:Calc}} In this section, we examine the axion production by hadron collisions inside a newly born proto-neutron star. We consider three types of processes, the pion-nucleon scattering $\pi+N\rightarrow N+a$ ($N=n,p$), the nucleon-nucleon bremsstrahlung $N+N\rightarrow N+N+a$, and the pion-pion scattering $\pi+\pi\rightarrow \pi+a$. The relative importance of each process depends on the pattern of axion couplings, as well as on the density and temperature of the corresponding astrophysical environment. Our prime goal is to examine the effects of the two contact interactions, the axion-pion-nucleon contact coupling $C_{a\pi N}$ and the axion-pion contact coupling $C_{a\pi}$ in Eq.~(\ref{Lhadron}), which were not taken into account before except for the nucleon-nucleon bremsstrahlung \cite{Carena:1988kr}. We will examine this question in a simple approximation, keeping only the leading order in the pion-nucleon couplings and ignoring medium effects. In this approximation the coupling dependence takes a simple form, which provides a rough estimate of the relative importance of each coupling's contribution. \subsection{Pion-nucleon scattering} Let us first discuss the pion-nucleon scattering process $\pi+N\rightarrow N+a$. For $T\sim 40$ MeV and the nucleon mass density $\rho\sim 10^{14} \,{\rm g}/{\rm cm}^3$ encountered inside a proto-neutron star, the pion and nucleon number densities roughly obey~\cite{Fore:2019wib} \begin{eqnarray} \frac{n_{\pi^0}}{n_{\pi^-}}\sim \frac{n_{\pi^+}}{n_{\pi^0}}\sim \frac{n_p}{n_n}= {\cal O}(0.1)\,. \end{eqnarray} It is then expected that $\pi^-+p\rightarrow n+a$ and $\pi^0+n\rightarrow n+a$ are the dominant processes, depending on the axion couplings involved.
\begin{figure}[t] \centering \includegraphics[width=\textwidth]{figures/proton_pion.pdf} \caption{Diagrams for $\pi^-+p\to n+a$ from the axion couplings in Eq.~(\ref{Lhadron}).} \label{diagram:pionproton} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.65\textwidth]{figures/npi0na.pdf} \caption{Diagrams of axion production for the process $\pi^0+ n\to n+ a$.} \label{diagram:npi0na} \end{figure} The Feynman diagrams for these processes are depicted in Fig.~\ref{diagram:pionproton} and Fig.~\ref{diagram:npi0na}, showing that at leading order in pion-nucleon couplings $\pi^0+n\rightarrow n+a$ involves only the axion-neutron coupling $C_{an}$, while $\pi^-+p\rightarrow n+a$ depends on three axion couplings, $C_{ap}$, $C_{an}$, and the axion-pion-nucleon contact interaction $C_{a\pi N}$. Recently, the process $\pi^-+p\rightarrow n+a$ has been argued to be the dominant axion production process for a wide range of astrophysical conditions encountered inside supernovae~\cite{Carenza:2020cis,Fischer:2021jfm}. The axion emissivity (the energy loss induced by axion emission per unit volume and time) of this process is given by \begin{eqnarray} Q_a^{p\pi^-} &=&\int \prod_{\alpha=\pi, p, n, a}\frac{d^3 {\bf p}_\alpha}{(2\pi)^3 2 E_\alpha} \Big[ (2\pi)^4 \delta^{(4)}(p_{\pi}+p_{p} - p_n-p_a) \nonumber\\ &&\quad \times\, f_{\pi} (p_{\pi}) f_p(p_p) (1- f_n(p_n)) \, \sum_{s_p, s_n} |{\cal M}_{\pi^-+p\to n+a}|^2 \, E_a \Big], \end{eqnarray} where $p_\alpha = (E_\alpha , {\bf p}_\alpha)$ are the particle four-momenta, $f_\alpha(p_\alpha)$ are the Fermi-Dirac or Bose-Einstein distribution functions, and $s_N$ ($N=p,n$) denotes the nucleon spin. Although the integrand acquires angular dependence after applying energy-momentum conservation, the nucleon distribution functions can be approximated as independent of those angles in the non-relativistic limit.
Then, the squared matrix element can first be integrated over the relative angle of $\mathbf{p}_p$ with respect to $\mathbf{p}_{a}$, while the integration over the solid angle of $\mathbf{p}_a$ amounts to a factor of $4\pi$.\footnote{The squared matrix element depends on two independent angles, each from the Mandelstam variables $s$ and $u$. We approximate these Mandelstam variables as $s= (p_p+p_{\pi^-})^2 \simeq m_p^2 + m_{\pi^-}^2 + 2 m_p E_{\pi^-}$ and $u= (p_p-p_a)^2 \simeq m_p^2 - 2 m_p E_a$.} Taking the non-relativistic limit for the initial proton and integrating over the relative angle between $\mathbf{p}_{\pi^{-}}$ and $\mathbf{p}_a$, we find \begin{eqnarray} \int d\Omega_{\pi^-} \sum_{s_p, s_n} |{\cal M}_{\pi^-+p\to n+ a}|^2 =\frac{8\pi m_N^4}{f_a^2 f_\pi^2} {\cal C}_a^{p\pi^-}, \end{eqnarray} where $m_N$ is the nucleon mass and ${\cal C}_a^{p\pi^-}$ is a dimensionless quantity which can be expanded in powers of $1/m_N$ as \begin{eqnarray} \label{Ca2} {\cal C}_a^{p\pi^-}&\simeq& \frac{2}{3}g_A^2 \left(\frac{|{\bf p}_{\pi}|}{m_N}\right)^2\left( 2C_+^2+C_{-}^2\right) + \left(\frac{E_{\pi}}{m_N}\right)^2 C_{a\pi N}^2 \nonumber\\ && + \sqrt{2}g_A \left(\frac{E_{\pi}}{m_N}\right)^3 \left(1 -\frac{1}{3}\left( \frac{|{\bf p}_{\pi}|}{E_{\pi}}\right)^2 \right) C_{a\pi N}C_{-}, \end{eqnarray} where \begin{eqnarray} C_{\pm}=\frac{1}{2}\left(C_{ap}\pm C_{an}\right), \quad E_{\pi} = \sqrt{m_{\pi^-}^2 + |{\bf p}_{\pi}|^2}.\end{eqnarray} We remark that the above expression of ${\cal C}_a^{p\pi^-}$ corresponds to the leading order result (in $1/m_N$) for which the three axion coupling combinations, i.e., $2C_+^2+C^2_-$, $C_{a\pi N}C_-$ and $C_{a\pi N}^2$, are treated as independent parameters. As already noted, $C_{a\pi N}$ is not an independent parameter, but is determined as $C_{a\pi N}=\sqrt{2}C_-/g_A$ (see Eq.~(\ref{Coeffs})).
Then the third term can be interpreted as a higher-order term, as it is suppressed compared to the other terms by an additional power of $E_\pi/m_N$. However, our numerical estimate gives ${\cal C}_a^{p\pi^-} \simeq 0.02\, (2C_+^2+C_-^2) + 0.04 \,C_{a\pi N}^2+0.01 \,C_{a\pi N}C_-$ for typical parameter values, e.g., $T=40\,{\rm MeV}$, $|{\bf p}_{\pi}|\simeq \sqrt{3 m_{\pi^-} T} \simeq 130\,{\rm MeV}$. Note that the third term is comparable to the others. The relative importance of each term carries over to the final axion emissivity in Eq.~\eqref{fit:ppi}, up to a small enhancement from the phase space integration. For $|\mathbf{p}_p|\gg |\mathbf{p}_{\pi, a}|$, the axion emissivity can be further approximated as \begin{eqnarray} \hskip -1cm Q_a^{p\pi^-} &\simeq & \frac{ z_p z_{\pi^-} }{f_a^2 f_\pi^2} \sqrt{\frac{m_N^7 T^{11}}{128\pi^{10}}} \int d x_p \left( \frac{x_p^2 e^{x_p^2} }{ (e^{x_p^2} + z_n) (e^{x_p^2} + z_p)}\right) \int dx_\pi \left(\frac{x_\pi^2 \epsilon_\pi {\cal C}_a^{p\pi^-} }{e^{\epsilon_\pi -y_\pi} - z_{\pi^-}} \right), \label{eq:Qapip} \end{eqnarray} where $z_i= e^{(\mu_i-m_i)/T}$ are the fugacities, $\epsilon_\pi = E_\pi/T$, $y_\pi= m_{\pi^-}/T$, $x_\pi = |{\bf p}_{\pi}|/T$, and $x_p =|{\bf p}_p|/\sqrt{2m_N T}$. The emissivity Eq.~\eqref{eq:Qapip} depends on many astrophysical parameters which are related to each other by the equation of state, e.g., the temperature $T$ and the chemical potentials $\mu_i$ ($i=n,p,\pi^-$). To parameterize the astrophysical conditions in terms of $T$ and the total mass density $\rho$, we use the fugacities obtained in \cite{Fore:2019wib} and numerically calculate the integral in Eq.~(\ref{eq:Qapip}) around $T\sim 40$ MeV and $\rho\sim 10^{14}\,{\rm g}/{\rm cm}^3$.
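The numerical coefficients quoted above follow from elementary arithmetic on Eq.~\eqref{Ca2}; the short sketch below is our own cross-check, with $m_N\simeq 939$ MeV and $m_{\pi^-}\simeq 139.6$ MeV as assumed inputs.

```python
import math

g_A, m_N, m_pi, T = 1.2723, 939.0, 139.57, 40.0   # masses and T in MeV (assumed)
p_pi = math.sqrt(3 * m_pi * T)                    # ~ 130 MeV
E_pi = math.sqrt(m_pi**2 + p_pi**2)               # ~ 190 MeV

c1 = (2 / 3) * g_A**2 * (p_pi / m_N)**2                                 # coeff of 2C_+^2 + C_-^2
c2 = (E_pi / m_N)**2                                                    # coeff of C_{a pi N}^2
c3 = math.sqrt(2) * g_A * (E_pi / m_N)**3 * (1 - (p_pi / E_pi)**2 / 3)  # coeff of C_{a pi N} C_-
print(round(c1, 2), round(c2, 2), round(c3, 2))   # 0.02 0.04 0.01
```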
We then find \begin{align} \frac{Q_a^{p\pi^-}}{\text{erg}\cdot\text{cm}^{-3}\,\text{s}^{-1}} &\simeq 1.4\times 10^{33} \, T_{40}^{7.2}\rho_{14}^{1.1} \left(\frac{10^9 \, {\rm GeV}}{f_a}\right)^2 \left(2C_+^2+C_{-}^2\right) \nonumber \\ & \quad + 2.3\times 10^{33} \, T_{40}^{6.6}\rho_{14}^{1.1} \left(\frac{10^9 \, {\rm GeV}}{f_a}\right)^2 C_{a\pi N}^2 \nonumber \\ & \quad + 1.2\times 10^{33} \, T_{40}^{7.5}\rho_{14}^{1.1} \left(\frac{10^9 \, {\rm GeV}}{f_a}\right)^2 C_{a\pi N}C_- , \label{fit:ppi} \end{align} where $T_{40} \equiv T/(40\,{\rm MeV})$ and $\rho_{14} \equiv \rho/(10^{14}\,{\rm g}/{\rm cm}^3)$. We stress that the above approximation is valid only for a narrow range of $T$ and $\rho$, i.e., for $T_{40}\in [0.9,1.1]$ and $\rho_{14}\in [1, 3]$, which is enough for our purpose of examining the effect of the axion-pion-nucleon contact interaction for ambient conditions inside supernovae.\footnote{The emissivity of $\pi^-+p\rightarrow n+a$ obtained in \cite{Carenza:2020cis} for $C_{a\pi N}=0$ is bigger than ours by a factor $\sim 2$. As the analysis of \cite{Carenza:2020cis} takes into account leading order medium effects, while ours does not, it is likely that this difference originates from medium effects.} The second and third terms on the RHS of Eq.~\eqref{fit:ppi} represent the contributions to the axion emissivity from the axion-pion-nucleon contact interaction $C_{a\pi N}$. Since these ratios are expected to be less sensitive to nuclear-physics uncertainties, we take the ratio of each of these terms to the first as \begin{align} \frac{\Delta Q_{a,\,C_{a\pi N}^2}^{p\pi^-}}{\Delta Q_{a,\,(2C_+^2+C_{-}^2)}^{p\pi^-}} \simeq 2.0\left(\frac{C_-^2}{2C_+^2+ C_-^2}\right), \quad \frac{\Delta Q_{a,\,C_{a\pi N}C_-}^{p\pi^-}}{\Delta Q_{a,\,(2C_+^2+C_{-}^2)}^{p\pi^-}} \simeq 0.9\left( \frac{C_-^2}{2C_+^2+C_-^2}\right). \end{align} Here we use the relation $C_{a\pi N}=\sqrt{2}C_-/g_A$ in Eq.~\eqref{Coeffs}.
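These ratios follow from simple arithmetic on the fitted prefactors of Eq.~\eqref{fit:ppi}; the sketch below (our own cross-check, not part of the analysis) evaluates them at $T_{40}=\rho_{14}=1$.

```python
g_A = 1.2723
capin2 = 2 / g_A**2   # C_{a pi N}^2 / C_-^2 = 2 / g_A^2 ~ 1.24

# ratios of the second and third terms of Eq. (fit:ppi) to the first,
# per unit C_-^2 / (2C_+^2 + C_-^2), at T_40 = rho_14 = 1
r2 = (2.3e33 / 1.4e33) * capin2              # ~ 2.0
r3 = (1.2e33 / 1.4e33) * (2 / g_A**2)**0.5   # ~ 0.95
print(round(r2, 2), round(r3, 2))
```

Within the rounding of the fitted prefactors, these reproduce the quoted factors of 2.0 and 0.9.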
Therefore, the contributions from the contact interactions enhance the axion emissivity by an $\mathcal{O}(1)$ factor in general. We highlight in Fig.~\ref{fig:ppiCapiNratio} how much $C_{a\pi N}$ enhances the axion emissivity for three benchmark axion models with $f_{a9}\equiv (f_a/c_G)/10^9\, {\rm GeV}=1$: the KSVZ model of Eq.~\eqref{model:KSVZ} (red), the DFSZ model of Eq.~\eqref{model:DFSZ} with $\tan\beta=5$ (blue), and a model (green) realizing $|C_-|\gg |C_+|\simeq 0$ by satisfying the condition in Eq.~\eqref{eq:maximalcase} for the PQ charges. For the third model, we choose $c_G=2$, $n_u=n_d=0$, $q_{H_u}=q_{H_d}=-1$, $\tan\beta=5$ for the model parameters in Eq.~\eqref{cucd}, which result in $C_{ap}\simeq -C_{an}\simeq -1.62$.\footnote{A simple way to realize such a case is to introduce PQ-charged exotic quarks in the minimal DFSZ model, which generate $\Delta c_G=-4$.} Note that in our convention, the axion decay constant $f_a$ is defined by the axion field range $a\cong a+2\pi f_a$, and the axion-gluon coupling is given by $c_G/f_a$ for an integer-valued parameter $c_G$. As the figure shows, the contact interaction can enhance the axion emissivity by a factor of $2-4$, depending on the pattern of axion couplings, and this conclusion will not change significantly when corrections such as the medium effects \cite{Carenza:2020cis,Fischer:2021jfm} are included. \begin{figure}[t!] \includegraphics[width=0.496\textwidth]{figures/Qappi1.pdf} \includegraphics[width=0.496\textwidth]{figures/Qappi2.pdf} \caption{Axion emissivities of $\pi^-+p\rightarrow n+a$ for the KSVZ, DFSZ, and a model with $|C_-|\gg |C_+|$. All models are assumed to have $f_{a9}\equiv (f_a/c_G)/10^9\, {\rm GeV}=1$. The solid curves represent the total emissivity including the effect of the contact interaction $C_{a\pi N}$, while the dashed curves are the emissivity without including the contribution from $C_{a \pi N}$.
} \label{fig:ppiCapiNratio} \end{figure} \begin{comment} \renewcommand{\arraystretch}{1.5} \begin{table} \begin{center} \begin{tabular}{ |c||c|c| } \hline Models & $Q_a^{p\pi^-} ~$\big[erg$\,\cdot\,$cm$^{-3}$\,s$^{-1}$\big] & $Q_a^{p\pi^-} / (Q_a^{p\pi^-})_{C_{a\pi N}=0}$ \\ \hline\hline KSVZ & $4.9\times10^{32}\, T_{40}^{7.2} \rho_{14}^{1.1}f_{a9}^{-2}$ & 1.7 $T_{40}^{-0.20}$ \\[0.5ex] \hline DFSZ ($\tan\beta=1$) & $3.7\times10^{32}\, T_{40}^{7.1} \rho_{14}^{1.1}f_{a9}^{-2}$ & 2.1 $T_{40}^{-0.27}$ \\[0.5ex] \hline DFSZ ($\tan\beta=50$) & $1.1\times10^{33}\, T_{40}^{7.0} \rho_{14}^{1.1}f_{a9}^{-2}$ & 2.8 $T_{40}^{-0.33}$ \\[0.5ex] \hline $C_{ap}=-C_{an}=1/2$ & $1.4\times10^{33}\, T_{40}^{7.0} \rho_{14}^{1.1}f_{a9}^{-2}$ & 3.4 $ T_{40}^{-0.36}$ \\[0.5ex] \hline \end{tabular} \end{center} \caption{The axion emissivity $Q^{p\pi^-}_a$ with the contributions from $C_{a\pi N}=(C_{ap}-C_{an})/\sqrt{2}g_A$ and the ratio $(Q_a^{p\pi^-})_{C_{a\pi N}=0}$ for four benchmark axion models. Here $T_{40} \equiv T/(40\,{\rm MeV})$, $\rho_{14} \equiv \rho/(10^{14}{\rm g}/{\rm cm}^3)$, $f_{a9}=(f_a/10^9\, {\rm GeV})$, and $\tan\beta = \langle H_u\rangle/\langle H_d\rangle$, where $H_{u,d}$ are the two Higgs doublets in DFSZ axion model. The presented scaling relations are valid only for $T_{40}\in [0.9,1.1]$ and $\rho_{14}\in [1, 3]$.} \label{ratio} \end{table} \end{comment} Since $z_p/z_n\sim z_{\pi^0}/z_{\pi^-}={\cal O}(0.1)$ inside proto-neutron star \cite{Fore:2019wib}, the process $\pi^0+n\rightarrow n+a$ shown in Fig.~\ref{diagram:npi0na} can be as important as $\pi^-+p\rightarrow n+a$. 
Taking the same approach as Eq.~(\ref{eq:Qapip}), the axion emissivity of $\pi^0 + n \to n + a$ can be approximated as \begin{eqnarray} Q_a^{n\pi^0 } &\simeq & \frac{1}{2}\frac{ z_n z_{\pi^0} }{f_a^2 f_\pi^2} \sqrt{\frac{m_N^7 T^{11}}{128\pi^{10}}} \int_0^\infty d x_{n} \frac{x_n^2 e^{x_n^2} }{ (e^{x_n^2} + z_n)^2} \int_0^\infty dx_\pi \left(\frac{x_\pi^2 \epsilon_{\pi^0} \, {\cal C}_a^{n\pi^0} }{e^{\epsilon_{\pi^0} -y_{\pi^0}} - z_{\pi^0}} \right), \label{eq:Qapin} \end{eqnarray} where $\epsilon_{\pi^0} = E_{\pi^0}/T$, $y_{\pi^0} = m_{\pi^0}/T$, and \begin{eqnarray} {\cal C}_a^{n\pi^0}&\simeq& \frac{4}{3}g_A^2 \left(\frac{|{\bf p}_{\pi}|}{m_N}\right)^2 C_{an}^2 . \end{eqnarray} Using again the fugacities obtained in \cite{Fore:2019wib}, $Q_a^{n\pi^0 }$ can be further approximated as \begin{align} \frac{Q_a^{n\pi^0}}{\text{erg}\cdot\text{cm}^{-3}\,\text{s}^{-1}} \simeq 1.5\times 10^{33} \, T_{40}^{7.5}\rho_{14}^{1.0} \left(\frac{10^9 \, {\rm GeV}}{f_a}\right)^2 C_{an}^2 \label{fit:npi0} \end{align} for $T_{40}\in [0.9,1.1]$ and $\rho_{14}\in [1, 3]$. This shows that $Q_a^{n\pi^0}$ can be comparable to $Q_a^{p\pi^-}$ for $T\sim 40$ MeV and $\rho\sim 10^{14}\,{\rm g}/{\rm cm}^3$, {\it unless} $|C_{an}|\ll |C_{ap}|$. It is also straightforward to confirm that the other pion-nucleon scattering processes, i.e. $\pi^0+p\rightarrow p+a$ and $\pi^++n\rightarrow p+a$, give subleading contribution relative to $\pi^-+p\rightarrow n+a$ and $\pi^0+n\rightarrow n+a$ for $T\sim 40$ MeV and $\rho\sim 10^{14}\,{\rm g}/{\rm cm}^3$. 
For instance, for the process $\pi^0+p\rightarrow p+a$, we find \begin{align} \frac{Q_a^{p\pi^0}}{\text{erg}\cdot\text{cm}^{-3}\,\text{s}^{-1}} &\simeq 2.7\times 10^{32} \, T_{40}^{10.3}\rho_{14}^{0.56} \left(\frac{10^9 \, {\rm GeV}}{f_a}\right)^2 C_{ap}^2 &\simeq \frac{z_p}{z_n}\frac{C_{ap}^2}{C_{an}^2} \frac{Q_a^{n\pi^0}}{\text{erg}\cdot\text{cm}^{-3}\,\text{s}^{-1}} , \end{align} where the fugacities of \cite{Fore:2019wib} are used for the last expression. This shows that for $T_{40}\in [0.9,1.1]$ and $\rho_{14}\in [1, 3]$, $Q_a^{p\pi^0}<Q_a^{p\pi^-}$ over the entire axion parameter space. \begin{comment} \renewcommand{\arraystretch}{1.5} \begin{table} \begin{center} \begin{tabular}{ |c||c| } \hline Models & $Q_a^{n\pi^0} ~$\big[erg$\,\cdot\,$cm$^{-3}$\,s$^{-1}$\big] \\ \hline\hline KSVZ & $3.0\times10^{30}\, T_{40}^{7.5} \rho_{14}^{1.0}f_{a9}^{-2}$ \\[0.5ex] \hline DFSZ ($\tan\beta=1$) & $5.4\times10^{30}\, T_{40}^{7.5} \rho_{14}^{1.0}f_{a9}^{-2}$ \\[0.5ex] \hline DFSZ ($\tan\beta=50$) & $1.5\times10^{32}\, T_{40}^{7.5} \rho_{14}^{1.0}f_{a9}^{-2}$ \\[0.5ex] \hline $C_{ap}=-C_{an}=1/2$ & $5.5\times10^{32}\, T_{40}^{7.5} \rho_{14}^{1.0}f_{a9}^{-2}$ \\[0.5ex] \hline \end{tabular} \end{center} \caption{ The axion emissivity $Q^{n\pi^0}_a$ is estimated for four benchmark axion models. We define the parameters as $T_{40} \equiv T/(40\,{\rm MeV})$, $\rho_{14} \equiv \rho/(10^{14}{\rm g}/{\rm cm}^3)$, $f_{a9}=(f_a/10^9\, {\rm GeV})$, and $\tan\beta = \langle H_u\rangle/\langle H_d\rangle$. As before, the fitting formulae are valid in the ranges of $T_{40}\in [0.9,1.1]$ and $\rho_{14}\in [1, 3]$. } \label{table:npi0} \end{table} \end{comment} \subsection{Nucleon-nucleon bremsstrahlung} \begin{figure}[t!] 
\centering \includegraphics[width=0.85\textwidth]{figures/others.pdf} \caption{Other diagrams involving the contact interactions.} \label{diagram:others} \end{figure} For many years, the nucleon-nucleon bremsstrahlung has been considered to be the dominant process for axion emission from supernovae. Although a recent study indicates that the axion emissivity of the bremsstrahlung process is sensitive to the corrections to the one-pion exchange as well as the medium effects~\cite{Carenza:2019pxu}, here we perform a simpler analysis ignoring these corrections, since we are mainly concerned with the relative importance of the axion-pion-nucleon contact interaction $C_{a\pi N}$ compared to the other axion-nucleon interactions. In \cite{Carena:1988kr}, a similar analysis of the nucleon-nucleon bremsstrahlung including the contact interaction was performed, showing that the contribution from the contact interaction is negligible at the level of the squared matrix element. In this subsection, we examine the contribution from the contact interaction to the final axion emissivity including the phase space integration, and confirm that it is still negligible for the environmental parameters of SN~1987A.\footnote{In appendix~\ref{appendixA}, we estimate the axion emissivity in the degenerate limit by applying the analytic method presented in \cite{Iwamoto:1992jp}. In that estimation, the contact interaction appears to contribute to the axion emissivity at the same order of magnitude, but the environmental parameters given in \cite{Carenza:2020cis,Fore:2019wib} turn out not to be degenerate enough to apply the method.} Among the three possible nucleon-nucleon bremsstrahlung processes, $n+n\rightarrow n+n+a$, $n+p\rightarrow n+p+a$, and $p+p\rightarrow p+p+a$, at leading order in pion-nucleon couplings only the second process is affected by $C_{a\pi N}$ through the first two diagrams of Fig.~\ref{diagram:others}.
The axion emissivity of the three bremsstrahlung processes is given by \begin{align} \label{eq:NNbremgeneral} Q_a^I &= \int \prod_{\substack{\alpha=N_1,N_2,\\N_3,N_4,\,a}} \frac{d^3 {\bf p}_\alpha}{(2\pi)^3 2 E_\alpha} \Big[(2\pi)^4 \delta^{(4)}(p_1+p_2-p_3-p_4-p_a) \nonumber\\ & \times f_1(p_1) f_2(p_2) (1-f_3(p_3)) (1-f_4(p_4)) \sum_\text{spins} S_I |\mathcal{M}_I|^2 E_a \Big] \quad (I=nn, \, np, \, pp) , \end{align} where $p_{1,2}=(E_{1,2}, \mathbf{p}_{1,2})$ and $p_{3,4}=(E_{3,4}, \mathbf{p}_{3,4})$ denote the initial and final nucleon four-momenta, $p_a=(E_a, \mathbf{p}_a)$ is the axion four-momentum, and $S_I$ is a symmetry factor for identical particles in the initial and final states, i.e., $S_{nn}=S_{pp}=1/4$ and $S_{np}=1$. In the supernova environments, $|\mathbf{p}_a|\sim T\ll|\mathbf{p}_N| \sim \text{max}\left[\sqrt{m_N T}, \, p_F\right]$ where $p_F$ is the nucleon Fermi momentum. Therefore we take the following approximation \begin{align} \label{eq:assumptionNN} \mathbf{p}_1+\mathbf{p}_2 \simeq \mathbf{p}_3 + \mathbf{p}_4\,, \end{align} which simplifies the kinematics significantly. With this approximation and also at leading order in $1/m_N$, the squared matrix elements averaged over the axion momentum direction are given by \begin{align} \left\langle\sum_\text{spins} |\mathcal{M}_{nn}|^2\right\rangle &= \frac{16}{3} \frac{g_A^4}{f_\pi^4} \frac{m_N^4}{f_a^2} C_{an}^2 \left( \frac{|\mathbf{k}|^4}{(|\mathbf{k}|^2+m_\pi^2)^2} + \frac{|\mathbf{l}|^4}{(|\mathbf{l}|^2+m_\pi^2)^2} \right. \nonumber\\ & \left. \hspace{3.5cm} + (1-\beta) \frac{|\mathbf{k}|^2 |\mathbf{l}|^2}{(|\mathbf{k}|^2+m_\pi^2)(|\mathbf{l}|^2+m_\pi^2)}\right) , \\ \left\langle\sum_\text{spins} |\mathcal{M}_{pp}|^2 \right\rangle &= \frac{16}{3} \frac{g_A^4}{f_\pi^4} \frac{m_N^4}{f_a^2} C_{ap}^2 \left( \frac{|\mathbf{k}|^4}{(|\mathbf{k}|^2+m_\pi^2)^2} + \frac{|\mathbf{l}|^4}{(|\mathbf{l}|^2+m_\pi^2)^2} \right. \nonumber\\ & \left. 
\hspace{3.5cm} + (1-\beta) \frac{|\mathbf{k}|^2 |\mathbf{l}|^2}{(|\mathbf{k}|^2+m_\pi^2)(|\mathbf{l}|^2+m_\pi^2)}\right) , \\ \left\langle\sum_\text{spins}|\mathcal{M}_{np}|^2 \right\rangle &= \frac{16}{3} \frac{g_A^2}{f_\pi^4} \frac{m_N^4}{f_a^2} \left[ g_A^2 \left\{ (4C_+^2+2C_-^2) \frac{|\mathbf{k}|^4}{(|\mathbf{k}|^2+m_\pi^2)^2} + (C_+^2+C_-^2) \frac{|\mathbf{l}|^4}{(|\mathbf{l}|^2+m_\pi^2)^2} \right. \right. \nonumber \\ & \left. \hspace{2.2cm} - 2 \left( (C_+^2+C_-^2)- (3C_+^2+C_-^2)\,\frac{\beta}{3}\right) \frac{|\mathbf{k}|^2 |\mathbf{l}|^2}{(|\mathbf{k}|^2+m_\pi^2)(|\mathbf{l}|^2+m_\pi^2)} \right\} \nonumber \\ & \hspace{2.5cm} \left. + 3 C_{a\pi N}^2 \frac{|\mathbf{p}_a|^2|\mathbf{k}|^2}{(|\mathbf{k}|^2+m_\pi^2)^2} \right], \label{np:matrixelem} \end{align} where \begin{align} \mathbf{k} \equiv \mathbf{p}_1 - \mathbf{p}_3, \quad \mathbf{l} \equiv \mathbf{p}_1 - \mathbf{p}_4, \quad \beta \equiv 3\left(\frac{{\mathbf{k}}\cdot{\mathbf{l}}}{\left|\mathbf{k}\right|\left|\mathbf{l}\right|}\right)^2. \end{align} For the neutron-proton bremsstrahlung, we define the momentum exchanges as $\mathbf{k} = \mathbf{p}_n^i - \mathbf{p}_p^f$ and $\mathbf{l} = \mathbf{p}_n^i - \mathbf{p}_n^f$. While the squared matrix elements of $n+n\rightarrow n+n+a$ and $p+p\rightarrow p+p+a$ are the same as the previous results~\cite{Carenza:2019pxu}, the squared matrix element of $n+p\rightarrow n+p+a$ includes an additional contribution from the contact interaction $C_{a\pi N}$. We remark that we have only displayed the leading-order contribution (in $1/m_N$) for each coupling term in the angle-averaged squared matrix elements. Then, compared to other terms, the term induced by $C_{a\pi N}$ in Eq.~\eqref{np:matrixelem} is intrinsically higher order as it is suppressed by $|\mathbf{p}_a|^2/ |\mathbf{p}_N|^2\sim T/m_N$ for $|\mathbf{k}|\sim |\mathbf{l}|\sim |\mathbf{p}_N|$. 
This indicates that the contribution from $C_{a\pi N}$ to the axion emissivity of $n+p\rightarrow n+p+a$ is likely to be negligible as pointed out in \cite{Carena:1988kr}. If typical values of the kinematic parameters are taken, e.g., $T= 40\,{\rm MeV}$, $\beta \simeq 1.3$ (non-degenerate limit), $|\mathbf{k}|\sim |\mathbf{l}|\sim |\mathbf{p}_N|\simeq \sqrt{3 m_N T}\simeq 340\,{\rm MeV}$, $|\mathbf{p}_a|\simeq E_a \simeq |\mathbf{p}_N|^2/(2m_N) \simeq 60\,{\rm MeV}$, we obtain a numerical estimate of the square bracket in Eq.~\eqref{np:matrixelem}: $\left[\,\cdots \right]\simeq 6.6 \,C_+^2 + 2.2\,C_-^2+0.07 \,C_{a\pi N}^2$. This estimate predicts the relative importance of each term in Eq.~\eqref{fit:np}, although the contribution from the contact interaction is somewhat enhanced after the phase space integration. The axion emissivity in Eq.~(\ref{eq:NNbremgeneral}) can be simplified by taking the non-relativistic limit for the nucleons together with the approximation Eq.~(\ref{eq:assumptionNN}). Following \cite{Brinkmann:1988vi,Raffelt:1993ix}, we can write the axion emissivities in a form which allows a numerical calculation of the phase space integration: \begin{align} Q_a^I &\simeq \sqrt{\frac{m_NT^{13}}{2^{9}\pi^{16}}} \int_0^\infty du_+ \int_0^\infty du_- \int_{-1}^1 d\gamma_{+-} \int_0^{u_-} d u_{3c} \int_{4\pi} d\Omega_{3c} \sqrt{u_+u_- u_{3c}} (u_- - u_{3c})^2 \nonumber\\ & \hspace{1.5cm} \times f_1 f_2 (1-f_3) (1-f_4) \, \sum_\text{spins} S_I \left\langle |\mathcal{M}_I|^2 \right\rangle_{_{\mathbf{p}_{4c}=-\mathbf{p}_{3c},\, E_a = 2T(u_- - u_{3c})}} \,, \label{eq:QaInumerics} \end{align} where \begin{align} u_i \equiv \frac{\mathbf{p}_i^2}{2m_N T}, \quad \mathbf{p}_{\pm} \equiv \frac{\mathbf{p}_1 \pm \mathbf{p}_2}{2}, \quad \mathbf{p}_{j c} \equiv \mathbf{p}_j- \mathbf{p}_+, \quad \gamma_{kl} \equiv \frac{\mathbf{p}_k\cdot \mathbf{p}_l}{|\mathbf{p}_k| |\mathbf{p}_l|}.
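The bracket estimate can be cross-checked by elementary arithmetic on Eq.~\eqref{np:matrixelem}; the sketch below is our own, with $m_\pi\simeq 139.6$ MeV as an assumed input and the typical kinematic values quoted above.

```python
g_A, m_pi = 1.2723, 139.57        # m_pi in MeV (assumed)
k2 = l2 = 340.0**2                # |k|^2 ~ |l|^2 ~ 3 m_N T, in MeV^2
pa2 = 60.0**2                     # |p_a|^2
beta = 1.3                        # non-degenerate limit
Pk = k2 / (k2 + m_pi**2)          # pion propagator factor for k
Pl = l2 / (l2 + m_pi**2)          # and for l

# coefficients of C_+^2, C_-^2, C_{a pi N}^2 in the square bracket of Eq. (np:matrixelem)
c_plus = g_A**2 * (4 * Pk**2 + Pl**2 - 2 * (1 - beta) * Pk * Pl)
c_minus = g_A**2 * (2 * Pk**2 + Pl**2 - 2 * (1 - beta / 3) * Pk * Pl)
c_contact = 3 * pa2 * k2 / (k2 + m_pi**2)**2
print(round(c_plus, 1), round(c_minus, 1), round(c_contact, 2))   # 6.6 2.2 0.07
```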
\end{align} Again, we use the fugacities of nucleons from \cite{Fore:2019wib} to numerically calculate the above axion emissivities, which results in\footnote{Our numerical results agree well with the analytic results of the previous works~\cite{Brinkmann:1988vi,Iwamoto:1992jp}; for the contributions from $C_\pm$, the agreement is at the level of ${\cal O}(10)\%$ discrepancy in both degenerate ($z_{n/p}\gg1$) and non-degenerate ($z_{n/p}\ll1$) limits. We also confirm that the contribution from $C_{a\pi N}$ agrees well with an analytic result in the degenerate limit \cite{Iwamoto:1992jp}. See appendix~\ref{appendixA}.} \begin{align} \frac{Q_a^{nn}}{\text{erg}\cdot\text{cm}^{-3}\,\text{s}^{-1}} &\simeq 3.5\times 10^{33}\, T_{40}^{3.9}\rho_{14}^{2.1} \left(\frac{10^9 \,{\rm GeV}}{f_a}\right)^2 C_{an}^2 , \label{fit:nn} \\ \frac{Q_a^{np}}{\text{erg}\cdot\text{cm}^{-3}\,\text{s}^{-1}} &\simeq 7.5\times 10^{33}\, T_{40}^{6.9}\rho_{14}^{1.5} \left(\frac{10^9 \,{\rm GeV}}{f_a}\right)^2 C_+^2 \nonumber\\ & \hspace{0.5cm} + 2.5\times 10^{33}\, T_{40}^{6.9}\rho_{14}^{1.5} \left(\frac{10^9 \,{\rm GeV}}{f_a}\right)^2 C_-^2 \label{fit:np} \\ &\hspace{0.5cm} + 2.7\times 10^{32} \, T_{40}^{7.9}\rho_{14}^{1.5} \left(\frac{10^9 \,{\rm GeV}}{f_a}\right)^2 C_{a\pi N}^2 , \nonumber \\ \frac{Q_a^{pp}}{\text{erg}\cdot\text{cm}^{-3}\,\text{s}^{-1}} &\simeq 9.9\times 10^{31}\, T_{40}^{9.9}\rho_{14}^{0.92} \left(\frac{10^9 \,{\rm GeV}}{f_a}\right)^2 C_{ap}^2\, \label{fit:pp} \end{align} for $T_{40}\in [0.9,1.1]$ and $\rho_{14}\in [1, 3]$. The above result shows that, as anticipated from the structure of the squared matrix element, the contribution to $Q^{np}_a$ from $C_{a\pi N}$ is indeed about one order of magnitude smaller than the contribution from $C_-$ for astrophysical environments with $T\sim 40$ MeV and $\rho\sim 10^{14} \, {\rm g}/{\rm cm}^3$. 
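The fitted coefficients in Eq.~\eqref{fit:np} can be combined directly to estimate the relative importance of the contact term at $T_{40}=\rho_{14}=1$. The value $g_A \simeq 1.27$ below is an assumption, and the coupling relation used is the one quoted in the following paragraph.

```python
# Coefficients of the C_+^2, C_-^2 and C_{a pi N}^2 terms in Eq. (fit:np)
# at T_40 = rho_14 = 1, in erg cm^-3 s^-1
q_plus, q_minus, q_contact = 7.5e33, 2.5e33, 2.7e32

g_A = 1.27  # assumed value of the nucleon axial coupling

# Suppression of the contact term relative to the C_+^2 term, per unit coupling squared
ratio_plus = q_contact / q_plus

# Relative to the C_-^2 term, using C_{a pi N} = sqrt(2) C_- / g_A,
# i.e. C_{a pi N}^2 = (2 / g_A^2) C_-^2
ratio_minus = (q_contact * 2.0 / g_A**2) / q_minus

print(ratio_plus, ratio_minus)  # 0.036 and 0.134, comparable to the quoted 0.04 and 0.1
```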
Using the relation $C_{a\pi N}=\sqrt{2}C_-/g_A$, the ratios between the contribution from the contact interaction and the other terms become, respectively, \begin{align} \frac{\Delta Q_{a,\,C_{a\pi N}^2}^{np}}{\Delta Q_{a,\,C_+^2}^{np}} \simeq 0.04 \left(\frac{C_{a\pi N}^2}{C_+^2}\right),\quad \frac{\Delta Q_{a,\,C_{a\pi N}^2}^{np}}{\Delta Q_{a,\,C_{-}^2}^{np}} \simeq 0.1. \end{align} As in the case of $Q_a^{p\pi^-}$, these ratios are expected to be less sensitive to the corrections beyond the one-pion exchange and the medium effects, so the effect of the contact interaction $C_{a\pi N}$ on the nucleon-nucleon bremsstrahlung is negligible. In Fig.~\ref{fig:npCapiNratio}, we compare the total value of the axion emissivity $Q^{np}_a$ (solid curves) with the piece $\Delta Q_{a,\,C_{a\pi N}^2}^{np}$ (dotted curves) induced only by $C_{a\pi N}$ for the three benchmark models considered in Fig.~\ref{fig:ppiCapiNratio}. The result shows that the contribution from $C_{a\pi N}$ is negligible for $30\lesssim T/{\rm MeV} \lesssim 50$ and $1\lesssim \rho/(10^{14} \,{\rm g}/{\rm cm}^3)\lesssim 3$, which is expected to hold over an even wider range of $T$ and $\rho$. Note that $Q^{np}_a$ is comparable to (or even larger than) $Q^{nn}_a$, although the proton number density is significantly smaller than the neutron number density. This is partly due to the symmetry factor $S$ compensating the small proton fraction.\footnote{The relative importance of the neutron-proton bremsstrahlung is discussed within the framework of the neutrino emission through the nucleon-nucleon bremsstrahlung~\cite{Yakovlev:2000jp}.} \begin{figure}[t!] \includegraphics[width=0.496\textwidth]{figures/Qanp1.pdf} \includegraphics[width=0.496\textwidth]{figures/Qanp2.pdf} \caption{The axion emissivity $Q^{np}_a$ for the three benchmark models considered in Fig.~\ref{fig:ppiCapiNratio}. The solid (dotted) curves correspond to the total value of $Q^{np}_a$ (the piece induced only by $C_{a\pi N}$). 
} \label{fig:npCapiNratio} \end{figure} \subsection{Pion-pion scattering: $\pi+\pi\rightarrow \pi+a$} Let us finally consider the possible consequences of the axion-pion contact interaction $C_{a\pi}$ in Eq.~(\ref{Lhadron}). Axions can be produced by this coupling through the pion-pion scattering process $\pi^-+\pi^0\to \pi^- + a$ (see the third diagram in Fig.~\ref{diagram:others}). The corresponding emissivity can be simplified without any kinematic approximation as follows: \begin{align} Q_a^{\pi^-\pi^0} =& \frac{9}{2^{12}\pi^7 } \frac{C_{a\pi}^2}{f_\pi^2f_a^2}z_{\pi^-} z_{\pi^0} T^9\int dx_\text{in} dx_0 d\Omega_{\pi^-}^\text{in} d\Omega_{\pi^0} \frac{x_\text{in}^2}{\sqrt{y_\pi^2+x_\text{in}^2}}\frac{x_0^2}{\sqrt{y_\pi^2+x_0^2}}\frac{x_a^2}{E_{\rm out}/T} \left(\frac{\mathbf{p}_{\pi^0}\cdot \mathbf{p}_a}{T^2}\right)^2 \nonumber\\ &\hspace{1.5cm}\times \frac{1}{e^{\sqrt{y_\pi^2+x_\text{in}^2}-y_\pi}-z_{\pi^-}}\frac{1}{e^{\sqrt{y_\pi^2+x_0^2}-y_\pi}-z_{\pi^0}} \frac{e^{E_{\rm out}/T-y_\pi}}{e^{E_{\rm out}/T-y_\pi}-z_{\pi^-}} , \label{eq:Qpionpionprocess} \end{align} where $x_{\rm in} = {|\mathbf{p}_{\pi^-}^\text{in}|}/{T}$ for the incoming $\pi^-$, $x_0 = {|\mathbf{p}_{\pi^0}|}/{T}$, $x_a = {|\mathbf{p}_a|}/{T}$, $y_\pi = {m_\pi}/{T}$, and finally $E_{\rm out} = \sqrt{m_\pi^2+(\mathbf{p}_{\pi^-}^{\text{in}}+\mathbf{p}_{\pi^0}-\mathbf{p}_a)^2}$ is the energy of the outgoing $\pi^-$. As for the other processes, we use the fugacities of pions from \cite{Fore:2019wib} and calculate the integral in Eq.~(\ref{eq:Qpionpionprocess}) numerically to find \begin{align} \frac{Q_a^{\pi^-\pi^0}}{\text{erg}\cdot\text{cm}^{-3}\,\text{s}^{-1}} & = 3.6\times 10^{31}\, T_{40}^{10.1}\rho_{14}^{1.1} \left(\frac{10^9\, {\rm GeV}}{f_a}\right)^2 C_{a\pi}^2. 
\label{fit:pipi} \end{align} The above result shows that the axion emissivity of the pion-pion scattering $\pi^-+\pi^0\to \pi^- + a$ is negligible compared to that of $\pi^-+p\to n+a$ for $T\sim 40$ MeV, $\rho\sim 10^{14}\,{\rm g}/{\rm cm}^3$, and $C_{a\pi} = 2(C_{ap} -C_{an})/(3 g_A)$ (see Eq.~(\ref{Coeffs})). \section{Conclusions and Discussion\label{Sec:Conc}} In this paper, we have studied the axion emission from supernovae with a complete set of relevant axion couplings, including the axion-pion-nucleon contact interaction $C_{a\pi N}$ and the axion-pion contact interaction $C_{a\pi}$ in Eq.~(\ref{Lhadron}). A recent study suggests that the abundance of negatively charged pions inside supernovae is significantly enhanced by the strong interactions \cite{Fore:2019wib}, indicating that the pion-induced process $\pi^-+p\rightarrow a+n$ is the dominant process for a wide range of astrophysical conditions encountered inside supernovae \cite{Carenza:2020cis,Fischer:2021jfm}. We thus examined how this pion-induced process is affected by $C_{a\pi N}$. We also examined the effect of $C_{a\pi N}$ on the nucleon-nucleon bremsstrahlung, which had been considered the dominant process for many years. Since we are mainly concerned with the role of the two previously ignored couplings $C_{a\pi N}$ and $C_{a\pi}$, we have focused on the axion coupling dependence of the axion emissivity within a simple approximation that keeps only the leading order in pion-nucleon couplings and ignores medium effects. This approximation allows us to display the axion coupling dependence explicitly and to examine the previously ignored couplings for three processes. Two processes, $\pi^-+p\rightarrow a+n$ and $n+p\to n+p+a$, are affected by $C_{a\pi N}$, and the pion-pion scattering, $\pi+\pi\rightarrow \pi+a$, is affected by $C_{a\pi}$. 
We found that $C_{a\pi N}$ can enhance the axion emissivity of $\pi^-+p\rightarrow a+n$ by a factor of $2$--$4$, depending on the pattern of axion couplings determined by the underlying axion model, while there is no substantial effect on $n+p\rightarrow n+p+a$. Although it is independent of $C_{a\pi N}$, we have also examined the axion emissivity of $\pi^0+n\to n+a$ and found that it can be comparable to the emissivity of $\pi^-+p\rightarrow a+n$ over a wide range of axion parameter space. For the axion-pion contact interaction $C_{a\pi}$, we find that the corresponding axion emissivity is always negligible compared to that of $\pi^-+p\rightarrow a+n$ for ambient conditions encountered inside supernovae. Let us close with some remarks on the approximations we made. For the matrix elements, higher-order diagrams could give comparable contributions owing to the strength of the strong interaction. Moreover, medium effects can significantly change the axion emissivity, particularly for the nucleon-nucleon bremsstrahlung~\cite{Carenza:2019pxu, Fischer:2021jfm}. However, even including these effects, the relative contribution of the axion-pion-nucleon contact interaction $C_{a\pi N}$ to the axion emissivity would remain similar, because the ratio $\Delta Q^I_{a,\,C_{a\pi N}}/Q^I_a$ is likely less sensitive to the corrections than the emissivity itself, where $Q^I_a$ is the axion emissivity of the $I$-th process and $\Delta Q^I_{a,\,C_{a\pi N}}$ is the part of $Q^I_a$ induced by $C_{a\pi N}$. One of the purposes of this work is to call attention to the possible importance of the contact interactions, which have been neglected so far. It would be interesting to perform the analysis taking into account more precise matrix elements and medium effects with a complete set of axion couplings. We will investigate this issue in a self-consistent way for both the pion-nucleon scattering and the nucleon-nucleon bremsstrahlung in future work. 
\section*{Acknowledgments} This work was supported by IBS under project code IBS-R018-D1. We are grateful to S. Yun for helpful discussions, especially on the nucleon-nucleon bremsstrahlung process in the early stages of this project.
\section{Introduction} Observations with the {\it Extreme Ultraviolet Explorer} (EUVE) have provided evidence that a number of clusters of galaxies produce intense EUV emission (e.g., Bowyer et al. 1997). The initial explanation for this emission was that it is produced by a diffuse (5--10) $\times 10^5$ K thermal gas component of the intracluster medium (ICM). Gas at these temperatures cools very rapidly, however, and there is no obvious energy source to re-heat it (Fabian 1996). Consequently, a number of other mechanisms have been investigated as the source of the emission. Inverse Compton (IC) scattering of cosmic microwave background photons by relativistic electrons present in the ICM was proposed as the source of the observed EUV emission in the Coma cluster (Hwang 1997; En{\ss}lin \& Biermann 1998). However, Bowyer \& Bergh\"ofer (1998) have shown that the spatial distribution of the EUV emission in this cluster is not consistent with IC emission from the observed population of relativistic electrons. A variety of alternative explanations have been advanced that dismiss the EUV excess in clusters of galaxies. Most recently, Arabadjis \& Bregman (1999) argue that the EUV excess can be explained away by a different cross section for absorption by hydrogen and helium in the foreground ISM column. Bowyer, Bergh\"ofer \& Korpela (1999) find that in some clusters this may account for some of the excess present in the ROSAT PSPC data; however, this cannot explain the intense EUV excesses found with EUVE. Bowyer, Bergh\"ofer \& Korpela (1999) reexamined EUVE data of the clusters Abell 1795, Abell 2199, and Coma. They demonstrated that the initially reported results are based on an improper background subtraction. In previous analyses a flat background had been assumed. 
However, a detailed investigation of blank field observations with the EUVE Deep Survey (DS) instrument shows that the background consists of two independent components, a non-photonic background and a background due to photons. The non-photonic background level can be determined in obscured regions of the detector and can be directly subtracted from the raw data. However, the photonic background is affected by telescope vignetting and must be treated differently. In the case of Abell 1795 and Abell 2199, Bowyer, Bergh\"ofer \& Korpela (1999) show that the extent of the diffuse EUV emission is much smaller than previously reported. Furthermore, the radial EUV emission profiles of these two clusters show a flux deficit compared to the soft energy tail of the X-ray emitting intracluster gas. These findings are consistent with the presence of strong cooling flows in Abell 1795 and Abell 2199. In this paper we apply our new reduction method to EUVE archival data of the central part of the Virgo cluster. We compare our results with results derived from radio observations of this region. We consider the possibility that the observed diffuse EUV excess emission is due to inverse Compton scattering by the known population of relativistic electrons in the ICM near M87. Furthermore, we investigate the emission originating from the jet in M87 and compare our results with observations at other wavelengths. \section{Data and Data Analysis} The Virgo cluster has been observed for 36,000 s with the Deep Survey (DS) Telescope of EUVE (Bowyer \& Malina 1991). Data reduction was carried out with the EUVE package built into IRAF, which is specifically designed to process EUVE data. In order to reduce the non-photonic (non-astronomical) background contribution to the data we investigated the pulse-height spectrum of all detected events. 
A large number of EUVE DS observations of all kinds of astronomical targets have shown that a typical pulse-height spectrum consists of two components, a Gaussian profile representing the source events and an exponential background distribution. More details about the different background contributions to the DS data and the method of pulse-height thresholding can be found in Bergh\"ofer et al. (1998). From our experience with stellar and extragalactic observations with EUVE we know that the pulse-height selection effectively reduces the non-astronomical background in the data without any significant reduction of the source signal. By comparing the source signal with and without pulse-height selection we find that the effect on the source signal is less than 4\%. We then applied corrections for detector dead time and for telemetry limitations (primbsching) to the screened event list and produced a DS EUV image of the Virgo cluster. We then determined the non-photonic background level in the image from highly obscured regions at the outermost parts of the field of view near the Lexan/B filter frame bars. This non-astronomical background contribution is assumed to be constant over the entire detector field and was subtracted from the image. In order to subtract the (vignetted) photonic background we computed the azimuthally averaged radial emission profile centered on M87. We used the EUVE DS sensitivity map provided by Bowyer, Bergh\"ofer \& Korpela (1999) to determine a radial sensitivity profile centered on the detector position of M87. This was then fit to the outer part (15--20\arcmin) of the radial emission profile to determine the scaling factor between the sensitivity profile and the photonic background in the data. The radial emission profile and the best fit background model are shown in Figure\ \ref{m87rad}. 
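The two-step background subtraction described above (a constant non-photonic level, plus a vignetted photonic component scaled to the source-free outer annulus by a least-squares fit) can be sketched on synthetic data. Everything here is a hypothetical stand-in: the radial profile, the sensitivity curve, and all numerical values are illustrative, not the actual DS data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical radial bins [arcmin] and a stand-in sensitivity (vignetting) profile
r = np.linspace(0.5, 20.0, 40)
sens = 1.0 / (1.0 + (r / 15.0) ** 2)

# Mock observed profile: a central source, a vignetted photonic background
# proportional to the sensitivity, and a flat non-photonic background.
A_true, nonphot = 2.0, 0.3
source = 1.5 * np.exp(-r / 3.0)
profile = source + A_true * sens + nonphot + rng.normal(0.0, 0.01, r.size)

# Step 1: subtract the constant non-photonic level measured in obscured regions.
profile = profile - nonphot

# Step 2: scale the sensitivity profile to the source-free outer annulus
# (15-20 arcmin) by linear least squares, then subtract the fitted
# photonic background to obtain the net emission profile.
outer = r >= 15.0
A_fit = np.sum(profile[outer] * sens[outer]) / np.sum(sens[outer] ** 2)
net = profile - A_fit * sens
```

With this setup the fitted scale recovers the true photonic normalization and the net profile is consistent with zero in the outer annulus, leaving only the central source.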
\section{Results} \label{results} The data in Figure\ \ref{m87rad} demonstrate the presence of diffuse EUV emission in the vicinity of M87 which extends to a radius of $\approx$13\arcmin. At larger radii the radial profile is well fit by the background model demonstrating the absence of any significant cluster emission beyond this. The initial publication on the diffuse EUV emission from Virgo (Lieu et al. 1996) claimed to detect excess emission to 20\arcmin. In Figure\ \ref{m87sig} we plot the background subtracted radial EUV emission profile (solid line). The dashed line shows the expected EUV emission of the low energy tail of the X-ray emitting diffuse intracluster gas as derived in the following. Note that the inner 1\arcmin\ bin is dominated by the core and jet of M87 and must be ignored for the discussion of the diffuse emission. To determine the diffuse X-ray contribution to the observed EUV emission we processed ROSAT PSPC archival data of the Virgo cluster. We used standard procedures implemented in the EXSAS software package to produce an image from the photon event list. Then a vignetting corrected exposure map was computed for this data set and a PSPC count rate image was generated by dividing the PSPC image by the exposure map. We point out that the background in the ROSAT PSPC hard energy band is dominated by the photonic (vignetted) background and the contribution of the non-photonic background is minor. Therefore, a similar analysis as described for the EUVE DS data including a separation of the photonic and non-photonic background contributions is not essential. However, in the case of detectors with low effective areas (e.g., BeppoSAX), and less efficient rejection mechanisms for non-photonic events, this background contribution must be treated separately. For our analysis of the ROSAT PSPC data we selected only photon events in the hardest energy band, channels 90--236. 
This channel selection has several advantages: First, any contamination by a possible steep-spectrum source at soft X-ray energies is excluded, which ensures that this band pass represents only thermal contributions to the overall diffuse emission in Virgo. Second, this part of the ROSAT band pass is nearly unaffected by interstellar absorption. This minimizes errors due to possible differential ISM absorption effects when modeling conversion factors between DS and PSPC counts. Third, the count rate conversion factor between DS and PSPC is nearly temperature independent in the range of X-ray temperatures measured in the central Virgo region and, thus, ROSAT count rates of the diffuse X-ray emission can be converted into DS count rates using a single conversion factor. To convert PSPC counts into DS counts we modeled conversion factors for a range of plasma temperatures (0.1--2.7 keV) employing the MEKAL plasma emission code with abundances of 0.34 solar (Hwang et al. 1997). These calculations include absorption by the interstellar medium. We used an ISM absorption column density of $1.72 \times 10^{20}$cm$^{-2}$ (Hartmann \& Burton 1997) and an absorption model including cross sections and ionization ratios for the ISM as described in Bowyer, Bergh\"ofer \& Korpela\ (1999). In Figure\ \ref{m87theo} we show the DS to PSPC count rate conversion factor. The left-hand scale and the solid curve give the plasma temperature as a function of the DS to PSPC count rate ratio. As can be seen, for a wide range of temperatures (0.6--2.7 keV) the model conversion factor is constant within 15\%. According to B\"ohringer et al.\ (1995) and B\"ohringer (1999), the temperature of the X-ray emitting intracluster gas in the Virgo cluster is $\approx$2 keV. In addition to this thermal gas component these authors detected several diffuse emission features near M87 which are significantly softer than the average Virgo cluster gas temperature. 
However, spectral fits to the ROSAT data do not provide any evidence for gas at temperatures below 1 keV (B\"ohringer, private communication). For temperatures near 1 keV the modeled conversion factor for a thermal gas is slightly lower than for higher temperatures. Therefore, the contribution of the lower temperature components to the overall diffuse X-ray emission in the EUV band pass is lower than that of the dominant 2 keV cluster gas component. Using the conversion factor appropriate for the mean cluster gas temperature of 2 keV for the entire emission, including the softer thermal enhancements, therefore slightly overestimates the low energy X-ray contribution to the EUV emission. We also modeled the DS to PSPC conversion factor for a non-thermal power law type spectrum including ISM absorption. The right-hand scale and dashed curve give the power law spectral index as a function of the modeled conversion factor. In Figure\ \ref{m87ratio} we show the observed ratio between azimuthally averaged radial intensity profiles observed with the EUVE DS and PSPC. Within the error bars the ratio is constant (reduced $\chi^2$ = 0.9). The best fit value is $0.0186 \pm 0.0057$. The ratio for the inner 1\arcmin\ bin is consistent with this value; however, we excluded this point due to the presence of emission from the core and jet of M87. Sarazin \& Lieu (1998) have suggested that an increasing EUV to X-ray emission ratio towards larger distances from the cluster center is an indication of an inverse Compton process producing the EUV emission in the cluster. However, the data in Figure\ \ref{m87ratio} demonstrate that this is not observed in the central Virgo region. Our best fit value of 0.0186 is $\approx$4.3 times larger than expected for the low energy tail of the X-ray emitting gas in the Virgo cluster. Therefore, the X-ray contribution to the observed EUV excess in the central part of the Virgo cluster must be minor. 
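The constant fit to the DS/PSPC ratio profile described above amounts to an inverse-variance weighted mean plus a reduced $\chi^2$ test, which can be sketched as follows. The ratio values and error bars below are hypothetical placeholders, not the measured profile.

```python
import numpy as np

# Hypothetical DS/PSPC ratio profile with per-bin 1-sigma uncertainties
# (illustrative values only, not the published measurements)
ratio = np.array([0.020, 0.017, 0.022, 0.015, 0.019, 0.018, 0.021, 0.016])
err = np.array([0.004, 0.004, 0.005, 0.005, 0.006, 0.006, 0.007, 0.007])

w = 1.0 / err**2
best = np.sum(w * ratio) / np.sum(w)          # inverse-variance weighted mean
best_err = 1.0 / np.sqrt(np.sum(w))           # uncertainty of the weighted mean

# Reduced chi-squared of the constant model (dof = number of bins - 1)
chi2_red = np.sum(w * (ratio - best) ** 2) / (ratio.size - 1)
```

A reduced $\chi^2$ near unity (or below) indicates that a constant ratio is an acceptable description of the profile, which is the criterion applied in the text.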
It is clear that the ratio between observed EUV flux and modeled X-ray contribution cannot directly be used to determine the physical parameters of the source. Instead, one must first subtract the X-ray contribution from the observed EUV emission. In Figure\ \ref{imaeuv} we show the spatial distribution of the EUV excess emission in the central Virgo region; the background and the contribution of the low energy tail of the X-ray emitting ICM have been subtracted. The central emission peak at the position of M87 is surrounded by a diffuse EUV emission structure which is asymmetric in shape. Its extent varies between 1\arcmin\ and 7\arcmin. Several arm-like features are visible. At larger radii the EUV emission results from a number of apparently discrete and extended diffuse features in the M87 radio halo region. These emission features are consistent with the emission seen in the surface brightness profile (Figure\ \ref{m87sig}) between 9--13\arcmin. These asymmetric features show the flux is not produced by a gravitationally bound thermal gas. For the diffuse EUV emission within 7\arcmin\ (excluding the core + jet emission in the inner 1\arcmin) we determine a total count rate of $(0.036 \pm 0.006)$ counts\,s$^{-1}$. Assuming an extraction radius of 13\arcmin\ results in a total count rate of $(0.066 \pm 0.009)$ counts\,s$^{-1}$. We also investigated the EUV emission peak at the position of M87. X-ray observations with the {\em Einstein} and ROSAT HRIs have demonstrated that the central X-ray emission peak splits into two major components which are associated with the core and mainly knots A+B+C of the jet in M87. The spatial resolution of the EUVE DS ($\approx$ 20\arcsec) is not sufficient to completely resolve the jet from the galaxy core. However, the central peak indicates emission slightly elongated by about one resolution element in the direction from the core to the jet. 
The central emission peak (core + jet) provides a total count rate of $(4.9 \pm 0.6) \times 10^{-3}$ counts\,s$^{-1}$ in excess of the diffuse emission component. \section{Discussion and Conclusions} \label{discuss} \subsection{Diffuse EUV emission} \label{disdiffuse} The results of our reanalysis show a clear EUV excess in the central Virgo region around M87. Compared to previous studies the azimuthally averaged extent of this emission is smaller and extends only to $\approx$13\arcmin\ from the center of M87. To explore the nature of the EUV excess we compare this emission with a 90 cm radio map of the central Virgo region near M87 (Owen, Eilek \& Kassim 1999). If the diffuse EUV emission is due to inverse Compton processes in the ICM, one would expect to see similar emission features in both the EUV and radio image. In Figure\ \ref{radeuv} we show a contour plot of the EUV emission superposed on the 90 cm radio map. As can be seen, the EUV emission peaks at the position of the radio emission of the core and jet of M87. EUV excess emission features are, however, not directly coincident with any of the other brighter features visible in the radio map. The EUV emission is also not associated with the higher temperature X-ray emission features seen in the ROSAT PSPC images in Virgo (cf. B\"ohringer 1999 and Harris 1999). We next investigate whether the integrated flux of the diffuse EUV emission is compatible with an inverse Compton origin of the observed EUV excess in the central Virgo region. We use the observed radio synchrotron power law spectrum of the M87 halo ($\alpha = 0.84$, Herbig \& Readhead 1992) to compute the underlying distribution of relativistic electrons in this region and its inverse Compton flux. Note that the radio spectrum needs to be extrapolated into the low frequency range near 1 MHz which is not observable due to ionospheric effects. 
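The ingredients of this estimate are standard. As a reference sketch (a textbook summary under the simplifying assumption of a single power-law electron population, not the authors' explicit formulas), an electron distribution $N(\gamma)=K\gamma^{-p}$ in a field $B$ produces synchrotron and inverse Compton spectra with the same spectral index $\alpha=(p-1)/2$, with luminosities in the ratio of the photon and magnetic energy densities:
\begin{align}
\frac{L_{\rm IC}}{L_{\rm syn}} \simeq \frac{U_{\rm CMB}}{U_B}, \qquad U_B = \frac{B^2}{8\pi}, \qquad U_{\rm CMB} = a\, T_{\rm CMB}^4 \simeq 4\times 10^{-13}\,{\rm erg\,cm^{-3}}.
\end{align}
For a fixed observed radio flux, a larger $B$ implies a smaller normalization $K$, so the predicted inverse Compton flux decreases with increasing field strength.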
The conversion from the synchrotron spectrum into an electron energy distribution depends on the magnetic field strength in the ICM. We derive a relation between magnetic field strength and the inverse Compton flux produced by the relativistic electrons; the results are shown in Figure\ \ref{icrates}. The flux is folded with the EUVE DS response and given in units of DS counts\,s$^{-1}$ which allows a direct comparison to the observed integrated DS count rate of the diffuse emission (horizontal line in Figure\ \ref{icrates}). As can be seen, for a magnetic field strength of $\approx 3\mu$G the observed flux matches the model flux. Note that this value would also be consistent with Faraday rotation measurements in the M87 halo (Dennison 1980). However, with $\alpha = 0.84$ the radio synchrotron spectrum is inconsistent with the required steep EUV to X-ray power law spectrum. In Figure\ \ref{m87theo} we show three dotted vertical lines labeled with 100\%, 10\%, and 5\%. These lines indicate relative contributions of the hard energy tail of the EUV excess component to the overall X-ray emission in the ROSAT band. A contribution of 100\% is obviously not realistic since this would require that no emission is seen from the gravitationally bound intracluster gas. The other two dotted lines show 10\% and 5\% contributions, respectively. No other emission component in excess of the thermal component has been detected in the ROSAT PSPC data of Virgo and only an upper limit can be derived from this data. A determination of an accurate upper limit for the EUV excess component in the ROSAT band is highly model dependent. However, from our experience with ROSAT data of diffuse sources we estimate that a contribution of 10\% should be detectable. 
If we assume a 10\% contribution as the upper limit for the EUV excess component in the ROSAT band, according to Figure\ \ref{m87theo} a power law photon number index of $\alpha \geq 3.2$ is required to explain the observed EUV flux and the upper limit in the ROSAT PSPC hard band (channels 90--236) by a non-thermal power law source. Therefore, inverse Compton emission from the known population of relativistic electrons in the M87 halo cannot account for the observed EUV excess in the central Virgo region. We compute the total luminosity of the diffuse EUV emission for a steep non-thermal power law spectrum and for a low temperature thermal plasma spectrum since these have been discussed in the literature, but we make no claim that either of these is the correct spectral distribution for the emission. Assuming a power law spectrum with $\alpha = 3.2$ results in a luminosity of $5.2 \times 10^{42}$ erg\,s$^{-1}$ in the 0.05--0.2 keV band. For a thermal plasma with a temperature of 0.15 keV we obtain a luminosity of $5.7 \times 10^{42}$ erg\,s$^{-1}$. These values were derived from the total count rate of the diffuse EUV emission within 7\arcmin. Including the apparently discrete and extended diffuse EUV emission detected between 7\arcmin\ and 13\arcmin\ increases the luminosity by 80\%. Assuming larger power law indices or lower plasma temperatures would result in higher luminosities. For the luminosity calculations we assume a distance of 17\,Mpc. \subsection{EUV emission of the jet in M87} Since the core and jet of M87 cannot be resolved in the EUVE image of M87, we assume that the X-ray flux ratio between core and jet, which can be determined from the ROSAT HRI observations, is also valid for the EUV fluxes. Harris, Biretta \& Junor (1997) give a ratio of $\approx$ 1.5 for the core/jet X-ray flux ratio. Based on their compilation of measurements for the jet in M87, Meisenheimer et al. (1996) derived a spectral index of 0.65 for the radio to near-UV spectrum. 
In order to explain the X-ray emission of the jet in M87 by the same spectrum, these authors introduced a spectral cut-off near 10$^{15}$Hz. The spectral index of the UV to X-ray power law spectrum then has to be $\alpha \approx 1.4$ to explain the UV and X-ray data. Based on these assumptions we compute a flux of $3.4 \times 10^{-12}$\,erg\,cm$^{-2}$\,s$^{-1}$ ($6.5 \times 10^{-6}$\,Jy) and a luminosity of $1.2 \times 10^{41}$\,erg\,s$^{-1}$ for the emission of the M87 jet in the EUVE DS bandpass. For the luminosity calculation we assume a distance of 17\,Mpc. In Figure\ \ref{jetspec} we show the radio-to-X-ray spectrum of the jet in M87 including the EUVE data point. As can be seen, the spectral model provided by Meisenheimer et al. (1996) also fits the EUVE observations. This confirms the suggested cut-off in the UV and further supports the conclusion that the entire jet emission, from the radio to the X-ray band, is synchrotron radiation produced by relativistic electrons in the jet. \section{Summary} The observed EUV excess in the central Virgo region is not spatially coincident with either the distribution of the radio emission or the observed high temperature thermal X-ray emission seen in the ROSAT images. This provides strong evidence that a separate source mechanism is present. In addition, due to the required steep EUV to X-ray spectrum, this emission cannot be produced by an extrapolation to lower energies of the observed synchrotron radio emitting electrons. If the observed EUV excess is inverse Compton emission, a new population of relativistic electrons is required. Therefore, the same difficulties that arise in explaining the EUV excess of the Coma cluster (cf. Bowyer \& Bergh\"ofer 1998) exist in the central Virgo region. The EUVE observations of M87 are consistent with the spectral cut-off in the spectrum of the jet in M87 as suggested by Meisenheimer et al. (1996). 
This further supports the idea that the EUV and X-ray emission of the jet is synchrotron radiation. \acknowledgments We thank Jean Eilek for providing us with a PostScript file of the M87 radio map. We acknowledge useful discussions with John Vallerga, Jean Dupuis, and Hans B\"ohringer. This work was supported in part by NASA contract NAS 5-30180. TWB was supported in part by a Feodor-Lynen Fellowship of the Alexander-von-Humboldt-Stiftung.
\subsubsection*{\bibname}} \bibliographystyle{abbrvnat} \usepackage{bm} \usepackage{placeins} \usepackage{epsf} \usepackage{fancyhdr} \usepackage{graphics} \usepackage{graphicx} \usepackage{psfrag} \usepackage{microtype} \usepackage{subfigure} \usepackage{algorithmic} \usepackage[linesnumbered,ruled]{algorithm2e \DontPrintSemicolon \usepackage{color} \usepackage{amsthm} \usepackage{amsfonts} \usepackage{amsmath} \usepackage{amssymb,bbm} \usepackage{url} \usepackage{hyperref} \begin{document} \newcommand{\mathbb{R}}{\mathbb{R}} \newcommand{\mathbb{B}}{\mathbb{B}} \newcommand{\mathbb{E}}{\mathbb{E}} \newcommand{\mathbb{I}}{\mathbb{I}} \newcommand{\top}{\top} \newcommand{\textnormal{s.t.}}{\textnormal{s.t.}} \newcommand{\textnormal{Tr}\,}{\textnormal{Tr}\,} \newcommand{\textnormal{Tr}}{\textnormal{Tr}} \newcommand{\textnormal{conv}}{\textnormal{conv}} \newcommand{\textnormal{diag}}{\textnormal{diag}} \newcommand{\textnormal{Diag}\,}{\textnormal{Diag}\,} \newcommand{\textnormal{Prob}}{\textnormal{Prob}} \newcommand{\textnormal{var}}{\textnormal{var}} \newcommand{\textnormal{rank}}{\textnormal{rank}} \newcommand{\textnormal{sign}}{\textnormal{sign}} \newcommand{\textnormal{cone}\,}{\textnormal{cone}\,} \newcommand{\textnormal{cl}\,}{\textnormal{cl}\,} \newcommand{\textnormal{vec}\,}{\textnormal{vec}\,} \newcommand{\textnormal{sym}\,}{\textnormal{sym}\,} \newcommand{\matr}{\boldsymbol M}\, \newcommand{\vect}{\boldsymbol V}\, \newcommand{\mathscr{T}\,}{\mathscr{T}\,} \newcommand{\textnormal{feas}\,}{\textnormal{feas}\,} \newcommand{\textnormal{opt}\,}{\textnormal{opt}\,} \newcommand{\mathbf\Sigma}{\mathbf\Sigma} \newcommand{\mathbf S}{\mathbf S} \newcommand{\mathbf R}{\mathbf R} \newcommand{\mathbf x}{\mathbf x} \newcommand{\mathbf y}{\mathbf y} \newcommand{\mathbf s}{\mathbf s} \newcommand{\mathbf a}{\mathbf a} \newcommand{\mathbf g}{\mathbf g} \newcommand{\mathbf e}{\mathbf e} \newcommand{\mathbf z}{\mathbf z} \newcommand{\mathbf w}{\mathbf w} \newcommand{\mathbf 
u}{\mathbf u} \newcommand{\mathbf v}{\mathbf v} \newcommand{\mathop{\rm argmin}}{\mathop{\rm argmin}} \newcommand{\mathop{\rm argmax}}{\mathop{\rm argmax}} \newcommand{\mathcal{L}}{\mathcal{L}} \newcommand{\mathcal{C}}{\mathcal{C}} \newcommand{\mathcal{D}}{\mathcal{D}} \newcommand{\mathcal{E}}{\mathcal{E}} \newcommand{\mathcal{G}}{\mathcal{G}} \newcommand{\mathcal{K}}{\mathcal{K}} \newcommand{\mathcal{O}}{\mathcal{O}} \newcommand{\mathcal{Q}}{\mathcal{Q}} \newcommand{\mathcal{S}}{\mathcal{S}} \newcommand{\mathcal{X}}{\mathcal{X}} \newcommand{\mathcal{Y}}{\mathcal{Y}} \newcommand{\mathcal{I}}{\mathcal{I}} \newcommand{\mathcal{J}}{\mathcal{J}} \newcommand{\frac{1}{2}}{\frac{1}{2}} \newcommand{\mbox{$\mathbb K$}}{\mbox{$\mathbb K$}} \newcommand{\mbox{$\mathbb Z$}}{\mbox{$\mathbb Z$}} \newcommand{\textnormal{card}}{\textnormal{card}} \newcommand{\textnormal{trace}}{\textnormal{trace}} \newcommand{\textnormal{prox}}{\textnormal{prox}} \newcommand{\textnormal{diam}}{\textnormal{diam}} \newcommand{\textnormal{dom}}{\textnormal{dom}} \newcommand{\textnormal{proj}}{\textnormal{proj}} \newcommand{{\textbf{E}}}{{\textbf{E}}} \newcommand{\mathcal L}{\mathcal L} \newcommand{\mathbb{R}}{\mathbb{R}} \newcommand{\mathbb{S}}{\mathbb{S}} \newcommand{\mathbb{N}}{\mathbb{N}} \newcommand{\begin{array}}{\begin{array}} \newcommand{\end{array}}{\end{array}} \newcommand{\mathcal{A}}{\mathcal{A}} \newcommand{\mathcal{B}}{\mathcal{B}} \newcommand{\mathcal{Z}}{\mathcal{Z}} \newcommand{\mathcal{F}}{\mathcal{F}} \newcommand{\mathcal{R}}{\mathcal{R}} \newcommand{\mathcal{X}}{\mathcal{X}} \newcommand{\mathbb{P}}{\mathbb{P}} \newcommand{\mathbb{Q}}{\mathbb{Q}} \newcommand{\mathcal{N}}{\mathcal{N}} \newcommand{\mathbb{I}}{\mathbb{I}} \newcommand{\mathcal{U}}{\mathcal{U}} \newcommand{\color{red}}{\color{red}} \newcommand{\mathbb{R}}{\mathbb{R}} \newcommand{\textbf{1}}{\textbf{1}} \newcommand{\mathcal{O}}{\mathcal{O}} \newcommand{\widetilde{\mathcal{O}}}{\widetilde{\mathcal{O}}} \newcommand{:=}{:=} 
\newcommand{:=}{:=} \newcommand{\icol}[1] \left(\begin{smallmatrix}#1\end{smallmatrix}\right)% } \newtheorem{theorem}{Theorem}[section] \newtheorem{corollary}[theorem]{Corollary} \newtheorem{lemma}[theorem]{Lemma} \newtheorem{proposition}[theorem]{Proposition} \newtheorem{condition}{Condition} \newtheorem{definition}{Definition} \newtheorem{example}{Example} \newtheorem{question}{Question} \newtheorem{remark}[theorem]{Remark} \newtheorem{assumption}[theorem]{Assumption} \twocolumn[ \aistatstitle{Fast Algorithms for Computational Optimal Transport and Wasserstein Barycenter} \aistatsauthor{Wenshuo Guo \And Nhat Ho \And Michael I. Jordan } \aistatsaddress{ UC Berkeley \And UC Berkeley \And UC Berkeley } ] \begin{abstract} We provide theoretical complexity analysis for new algorithms to compute the optimal transport (OT) distance between two discrete probability distributions, and demonstrate their favorable practical performance compared to state-of-art primal-dual algorithms. First, we introduce the \emph{accelerated primal-dual randomized coordinate descent} (APDRCD) algorithm for computing the OT distance. We show that its complexity is $\widetilde{\mathcal{O}}(\frac{n^{5/2}}{\varepsilon})$, where $n$ stands for the number of atoms of these probability measures and $\varepsilon > 0$ is the desired accuracy. This complexity bound matches the best known complexities of primal-dual algorithms for the OT problems, including the adaptive primal-dual accelerated gradient descent (APDAGD) and the adaptive primal-dual accelerated mirror descent (APDAMD) algorithms. Then, we demonstrate the improved practical efficiency of the APDRCD algorithm through comparative experimental studies. We also propose a greedy version of APDRCD, which we refer to as \emph{accelerated primal-dual greedy coordinate descent} (APDGCD), to further enhance practical performance. 
Finally, we generalize the APDRCD and APDGCD algorithms to distributed algorithms for computing the Wasserstein barycenter for multiple probability distributions. \end{abstract} \section{Introduction} Optimal transport has become an important topic in statistical machine learning. It finds the minimal cost couplings between pairs of probability measures and provides a geometrically faithful way to compare two probability distributions, with diverse applications in areas including Bayesian nonparametrics \citep{Nguyen-2013-Convergence, Nguyen-2016-Borrowing}, scalable Bayesian inference \citep{Srivastava-2015-WASP, Srivastava-2018-Scalable}, topic modeling~\citep{Lin-2018-Sparsemax}, isotonic regression~\citep{Rigollet-2019-Uncoupled}, and deep learning~\citep{Courty-2017-Optimal, Arjovsky-2017-Wasserstein, Tolstikhin-2018-Wasserstein}. Nevertheless, the practical impact of OT has been limited by its computational burden. By viewing the computation of the OT distance as a linear programming problem, one can employ interior-point methods as a computational solver, with a best known practical complexity of $\widetilde{\mathcal{O}}(n^{3})$~\citep{Pele-2009-Fast}. Recently, \citet{Lee-2014-Path} proposed to use a Laplace linear system solver to improve the theoretical complexity of interior-point methods to $\widetilde{\mathcal{O}}(n^{5/2})$. However, it remains a practical challenge to develop efficient interior-point implementations in the high-dimensional setting of OT applications in machine learning. Several algorithms have been proposed to circumvent the scalability issue of interior-point methods, including the Sinkhorn algorithm~\citep{Sinkhorn-1974-Diagonal, Knight-2008-Sinkhorn, Kalantari-2008-Complexity,Cuturi-2013-Sinkhorn}, which has a complexity bound of $\widetilde{\mathcal{O}}(\frac{n^{2}}{\varepsilon^2})$, where $\varepsilon > 0$ is the desired accuracy~\citep{Dvurechensky-2018-Computational}.
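As a point of reference for the algorithms discussed here, the Sinkhorn iteration can be sketched in a few lines of NumPy. The function name, cost matrix, marginals, and iteration count below are our own illustrative choices, not taken from the works cited:

```python
import numpy as np

def sinkhorn(C, r, l, eta, n_iter=500):
    # Sinkhorn iterations: alternately rescale the rows and columns of the
    # Gibbs kernel K = exp(-C / eta) to match the target marginals r and l.
    K = np.exp(-C / eta)
    u = np.ones_like(r)
    v = np.ones_like(l)
    for _ in range(n_iter):
        v = l / (K.T @ u)
        u = r / (K @ v)
    # Transportation plan diag(u) K diag(v).
    return u[:, None] * K * v[None, :]

rng = np.random.default_rng(0)
n = 4
C = rng.random((n, n))
r = np.full(n, 1.0 / n)
l = np.full(n, 1.0 / n)
X = sinkhorn(C, r, l, eta=0.5)
# The row marginals are matched exactly after the final u-update;
# the column marginals converge linearly in the number of iterations.
assert np.allclose(X.sum(axis=1), r)
assert np.allclose(X.sum(axis=0), l, atol=1e-6)
```

Each iteration costs $\mathcal{O}(n^2)$ for the two matrix-vector products, which is what makes the per-iteration cost of this family attractive despite the $\varepsilon^{-2}$ dependence in the iteration count.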
The Greenkhorn algorithm~\citep{Altschuler-2017-Near} further improves the performance of the Sinkhorn algorithm, with a theoretical complexity of $\widetilde{\mathcal{O}}(\frac{n^{2}}{\varepsilon^2})$~\citep{lin2019efficient}. However, for large-scale applications of the OT problem, particularly in randomized and asynchronous scenarios such as computational Wasserstein barycenters, existing literature has shown that neither the Sinkhorn algorithm nor the Greenkhorn algorithm is sufficiently scalable and flexible~\citep{Cuturi-2014-Fast, dvurechenskii2018decentralize}. Recent research has demonstrated the advantages of the family of accelerated primal-dual algorithms over the Sinkhorn algorithm. This family includes the adaptive primal-dual accelerated gradient descent (APDAGD) algorithm~\citep{Dvurechensky-2018-Computational} and the adaptive primal-dual accelerated mirror descent (APDAMD) algorithm~\citep{lin2019efficient}, with complexity bounds of $\widetilde{\mathcal{O}}(\frac{n^{2.5}}{\varepsilon})$ and $\widetilde{\mathcal{O}}(\frac{n^{2}\sqrt{\gamma}}{\varepsilon})$, respectively, where $\gamma \leq \sqrt{n}$ is the inverse of the strong convexity constant of the Bregman divergence with respect to the $\ell_{\infty}$-norm. In addition, the primal-dual algorithms possess the requisite flexibility and scalability compared to the Sinkhorn algorithm, which is crucial for computational OT problems in large-scale applications~\citep{Cuturi-2014-Fast,Ho-2018-Probabilistic}. Specifically, they are flexible enough to be generalized to the computation of the Wasserstein barycenter for multiple probability distributions in decentralized and asynchronous settings~\citep{Cuturi-2014-Fast, dvurechenskii2018decentralize}. In the optimization literature, primal-dual methods have served as standard techniques that are readily parallelized for high-dimensional applications~\citep{combettes2012primal, wainwright2005map}.
On the other hand, coordinate descent methods have been shown to be well suited to the solution of large-scale machine learning problems~\citep{nesterov2012efficiency, richtarik2016parallel, fercoq2015accelerated}. \textbf{Our contributions.} The contributions of the paper are three-fold. \begin{enumerate} \item We introduce an \emph{accelerated primal-dual randomized coordinate descent} (APDRCD) algorithm for solving the OT problem. We provide a complexity upper bound of $\widetilde{\mathcal{O}}(\frac{n^{5/2}}{\varepsilon})$ for the APDRCD algorithm, which is comparable to the complexity of state-of-the-art primal-dual algorithms for OT problems, such as the APDAGD and APDAMD algorithms~\citep{Dvurechensky-2018-Computational, lin2019efficient}. To the best of our knowledge, this is the first accelerated primal-dual coordinate descent algorithm for the OT problem. \item We show that the APDRCD algorithm outperforms the APDAGD and APDAMD algorithms in experiments on both synthetic and real image datasets. To further improve the practical performance of the APDRCD algorithm, we present a greedy version of it, which we refer to as the \emph{accelerated primal-dual greedy coordinate descent} (APDGCD) algorithm. To the best of our knowledge, the APDRCD and APDGCD algorithms achieve the best performance among state-of-the-art accelerated primal-dual algorithms for solving the OT problem. \item We demonstrate that the APDRCD and APDGCD algorithms are suitable and parallelizable for other large-scale problems besides the OT problem, e.g., approximating the Wasserstein barycenter for a finite set of probability measures stored over a distributed network. \end{enumerate} \textbf{Organization.} The remainder of the paper is organized as follows. In Section~\ref{Sec:setup}, we provide the formulation of the entropic OT problem and its dual form.
In Section~\ref{Sec:APDRCD_algorithm}, we introduce the APDRCD algorithm for solving the regularized OT problem and provide a complexity upper bound for it. We then present the greedy APDGCD algorithm. In Section~\ref{Sec:experiments}, we present comparative experiments between the APDRCD and APDGCD algorithms and other primal-dual algorithms, including the APDAGD and APDAMD algorithms. We conclude the paper with a few future directions in Section~\ref{Sec:discussion}. Finally, the proofs of all the results are included in Appendix~\ref{sec:supplementary_material}, and the generalized APDRCD and APDGCD algorithms for approximating Wasserstein barycenters are presented in Appendix~\ref{Sec:WB}. Additional experiments are presented in Appendix~\ref{subsec:further_exp}. \textbf{Notation.} We denote the probability simplex by $\Delta^n := \{u = \left(u_1, \ldots, u_n\right) \in \mathbb{R}^n: \sum_{i = 1}^{n} u_{i} = 1, \ u \geq 0\}$ for $n \geq 2$. Furthermore, $[n]$ stands for the set $\{1, 2, \ldots, n\}$ while $\mathbb{R}^n_+$ stands for the set of all vectors in $\mathbb{R}^n$ with nonnegative components for any $n \geq 1$. For a vector $x \in \mathbb{R}^n$ and $1 \leq p \leq \infty$, we denote by $\|x\|_p$ its $\ell_p$-norm and by $\text{diag}(x)$ the diagonal matrix with $x$ on the diagonal. For a matrix $A \in \mathbb{R}^{n \times n}$, the notation $\text{vec}(A)$ stands for the vector in $\mathbb{R}^{n^2}$ obtained by concatenating the rows of $A$. $\textbf{1}$ stands for a vector with all of its components equal to $1$. $\partial_x f$ refers to a partial gradient of $f$ with respect to $x$. Lastly, given the dimension $n$ and accuracy $\varepsilon$, the notation $a = \mathcal{O}\left(b(n,\varepsilon)\right)$ stands for the upper bound $a \leq C \cdot b(n, \varepsilon)$, where $C > 0$ is a constant independent of $n$ and $\varepsilon$.
Similarly, the notation $a = \widetilde{\mathcal{O}}(b(n, \varepsilon))$ indicates that the constant $C$ in the previous bound may additionally depend on logarithmic factors of $n$ and $\varepsilon$. \section{Problem Setup} \label{Sec:setup} In this section, we provide the necessary background for the entropic regularized OT problem. The objective function for the problem is presented in Section~\ref{Sec:reg_OT} while its dual form as well as the key properties of that dual form are given in Section~\ref{Sec:dual_reg_OT}. \subsection{Entropic Regularized OT} \label{Sec:reg_OT} As shown in~\citep{Kantorovich-1942-Translocation}, the problem of approximating the OT distance between two discrete probability distributions with at most $n$ components is equivalent to the following linear programming problem: \begin{align}\label{prob:OT} &\min\limits_{X \in \mathbb{R}^{n \times n}} \left\langle C, X\right\rangle \\ &\ \ \ \textnormal{s.t.} \ \ X\textbf{1} = r, \ X^\top\textbf{1} = l, \ X \geq 0 \nonumber \end{align} where $X$ is a \textit{transportation plan}, $C = (C_{ij}) \in \mathbb{R}_+^{n \times n}$ is a cost matrix with nonnegative elements, and $r$ and $l$ refer to two known probability distributions in the probability simplex $\Delta^n$. The best known practical complexity bound for~\eqref{prob:OT} is $\widetilde{\mathcal{O}}(n^3)$~\citep{Pele-2009-Fast}, while the best theoretical complexity bound is $\widetilde{\mathcal{O}}(n^{2.5})$~\citep{Lee-2014-Path}, achieved via interior-point methods. However, these methods are not efficient in the high-dimensional setting of OT applications in machine learning.
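For small instances, the linear program above can be solved directly with an off-the-shelf solver. The following sketch uses SciPy's \texttt{linprog} (HiGHS backend) on illustrative data; the cost matrix, marginals, and row-major vectorization convention are our own choices:

```python
import numpy as np
from scipy.optimize import linprog

n = 3
rng = np.random.default_rng(1)
C = rng.random((n, n))            # illustrative cost matrix
r = np.array([0.5, 0.3, 0.2])     # source marginal
l = np.array([0.2, 0.3, 0.5])     # target marginal

# Equality constraints A vec(X) = (r, l) with row-major vec:
# the first n rows encode X 1 = r, the last n rows encode X^T 1 = l.
A = np.zeros((2 * n, n * n))
for i in range(n):
    A[i, i * n:(i + 1) * n] = 1.0
    A[n + i, i::n] = 1.0

res = linprog(C.ravel(), A_eq=A, b_eq=np.concatenate([r, l]),
              bounds=(0, None), method="highs")
X_star = res.x.reshape(n, n)
assert res.status == 0
assert np.allclose(X_star.sum(axis=1), r) and np.allclose(X_star.sum(axis=0), l)
```

Any feasible coupling, e.g.\ the independent coupling $r l^\top$, upper-bounds the optimal value, which gives a quick sanity check on the solver output.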
This motivates the \emph{entropic regularized OT} problem~\citep{Cuturi-2013-Sinkhorn}: \begin{align}\label{prob:regOT} &\min\limits_{X \in \mathbb{R}_{+}^{n \times n}} \left\langle C, X\right\rangle - \eta H(X) \\ & \ \ \ \textnormal{s.t.} \ \ X\textbf{1} = r, \ X^\top\textbf{1} = l, \nonumber \end{align} where $\eta > 0$ is the \textit{regularization parameter} and $H(X)$ is the entropic regularization given by $H(X) \ : = \ - \sum_{i, j = 1}^n X_{ij} \log(X_{ij})$. The main focus of the paper is to determine an \emph{$\varepsilon$-approximate transportation plan} $\hat{X} \in \mathbb{R}_{+}^{n \times n}$ such that $\hat{X}\textbf{1} = r$ and $\hat{X}^\top\textbf{1} = l$ and the following bound holds: \begin{equation}\label{Criteria:Approximation} \langle C, \hat{X}\rangle \ \leq \ \langle C, X^*\rangle + \varepsilon, \end{equation} where $X^*$ is an optimal solution; i.e., an optimal transportation plan for the OT problem~\eqref{prob:OT}. To ease the ensuing presentation, we let $\langle C, \hat{X}\rangle$ denote an \emph{$\varepsilon$-approximation} for the OT distance. Furthermore, we define matrix $A$ such that $A\text{vec}(X) : = \begin{pmatrix} X\textbf{1} \\ X^\top\textbf{1} \end{pmatrix}$ for any $X \in \mathbb{R}^{n \times n}.$ \subsection{Dual Entropic Regularized OT} \label{Sec:dual_reg_OT} The Lagrangian function for problem~\eqref{prob:regOT} is given by \begin{align*} \mathcal{L}(X, \alpha, \beta) : = &\left\langle C, X\right\rangle - \eta H(X)+ \langle\alpha, r\rangle\\ & + \langle\beta, l\rangle - \langle\alpha, X\textbf{1}\rangle - \langle\beta, X^\top\textbf{1}\rangle. \end{align*} Given the Lagrangian function, the dual form of the entropic regularized OT problem can be obtained by solving the optimization problem $\min_{X \in \mathbb{R}^{n \times n}} \mathcal{L}(X, \alpha, \beta)$. 
Since the Lagrangian function $\mathcal{L}(\cdot, \alpha, \beta)$ is strictly convex, that optimization problem can be solved by setting $\partial_X \mathcal{L}(X, \alpha, \beta) = 0$, which leads to the following form of the transportation plan: $X_{ij} \ = \ e^{\frac{-C_{ij} + \alpha_i + \beta_j}{\eta} - 1}$ for all $i, j \in [n]$. With this solution, we have $\min_{X \in \mathbb{R}^{n \times n}} \mathcal{L}(X, \alpha, \beta) \ = \ -\eta\sum_{i,j=1}^n e^{- \frac{C_{ij} - \alpha_i - \beta_j}{\eta}-1} + \left\langle \alpha, r\right\rangle + \left\langle \beta, l\right\rangle$. The \emph{dual entropic regularized OT} problem is therefore equivalent to the following optimization problem: \begin{align} \label{eq:dual_entropic} \lefteqn{\min_{\alpha, \beta \in \mathbb{R}^{n}} \varphi(\alpha, \beta)} \nonumber \\ &:= \eta \sum_{i,j=1}^n e^{- \frac{C_{ij} - \alpha_i - \beta_j}{\eta}-1} - \left\langle \alpha, r\right\rangle - \left\langle \beta, l\right\rangle. \end{align} Building on Lemma 4.1 in~\citep{lin2019efficient}, the dual objective function $\varphi(\alpha, \beta)$ can be shown to be smooth with respect to the $\|\cdot\|_{2}$ norm: \begin{lemma} \label{lemma:dual_smooth} The dual objective function $\varphi$ is smooth with respect to the $\|\cdot\|_{2}$ norm: \begin{equation*} \varphi(\lambda_1)-\varphi(\lambda_2)-\left \langle \nabla \varphi(\lambda_2), \lambda_1 - \lambda_2 \right \rangle \leq \frac{2}{\eta}||\lambda_1-\lambda_2||^2_2. \end{equation*} \end{lemma} \begin{proof} The proof is a straightforward application of Lemma 4.1 in~\citep{lin2019efficient}; we provide the details for completeness. Indeed, invoking Lemma 4.1 in~\citep{lin2019efficient}, we find that \begin{equation*} \varphi(\lambda_1)-\varphi(\lambda_2)-\left \langle \nabla \varphi(\lambda_2), \lambda_1 - \lambda_2 \right \rangle \leq \frac{||A||_1^2}{2\eta}||\lambda_1-\lambda_2||^2_\infty.
\end{equation*} Since $||A||_1$ is equal to the maximum $\ell_1$-norm of a column of $A$, and each column of $A$ contains exactly two nonzero elements, both equal to one, we have $||A||_1 = 2$. Combining this with the fact that $||\lambda_1-\lambda_2||^2_\infty \leq ||\lambda_1-\lambda_2||^2_2$, we establish the result. \end{proof} \section{Accelerated Primal-Dual Coordinate Descent Algorithms} \label{Sec:APDRCD_algorithm} In this section, we present and analyze accelerated primal-dual coordinate descent algorithms for obtaining an $\varepsilon$-approximate transportation plan for the OT problem~\eqref{prob:OT}. First, in Section~\ref{sec:algorithmic_framework}, we introduce the accelerated primal-dual randomized coordinate descent (APDRCD) method for the entropic regularized OT problem. Then, following the approximation scheme of~\citep{Altschuler-2017-Near}, we show how to approximate the OT distance within the APDRCD algorithm; see Algorithm~\ref{Algorithm:ApproxOT_APDCD} for the pseudocode. Furthermore, we provide a theoretical analysis establishing a complexity bound of $\mathcal{O}(\frac{n^{\frac{5}{2}}\sqrt{||C||_\infty\log(n)}}{\varepsilon})$ for the APDRCD algorithm to achieve an $\varepsilon$-approximate transportation plan for the OT problem in Section~\ref{sec:complexity}. This complexity upper bound of the APDRCD algorithm matches the best known complexity bounds of the APDAGD~\citep{Dvurechensky-2018-Computational} and APDAMD~\citep{lin2019efficient} algorithms. Finally, to further improve the practical performance of the algorithm, we present a greedy variant---the accelerated primal-dual greedy coordinate descent (APDGCD) algorithm---in Section~\ref{Sec:APDGCD}.
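The dual derivation above can be verified numerically: the partial gradient of $\varphi$ in $\alpha_i$ is the row-marginal residual $(X\mathbf{1} - r)_i$ of the closed-form plan $X_{ij} = e^{(-C_{ij} + \alpha_i + \beta_j)/\eta - 1}$, which is exactly the quantity the coordinate updates of the algorithms below act on. A small finite-difference check, with all data illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, eta = 4, 1.0
C = rng.random((n, n))
r = rng.dirichlet(np.ones(n))
l = rng.dirichlet(np.ones(n))

def phi(alpha, beta):
    # Dual entropic OT objective phi(alpha, beta).
    X = np.exp((-C + alpha[:, None] + beta[None, :]) / eta - 1.0)
    return eta * X.sum() - alpha @ r - beta @ l

alpha = rng.standard_normal(n)
beta = rng.standard_normal(n)
X = np.exp((-C + alpha[:, None] + beta[None, :]) / eta - 1.0)
# Analytic gradient in alpha: row-marginal residual X 1 - r.
g_alpha = X.sum(axis=1) - r
# Central finite difference on the first coordinate of alpha.
h = 1e-6
e0 = np.zeros(n)
e0[0] = h
fd = (phi(alpha + e0, beta) - phi(alpha - e0, beta)) / (2.0 * h)
assert abs(fd - g_alpha[0]) < 1e-5
```

The analogous identity holds for $\beta$, with the column-marginal residual $X^\top\mathbf{1} - l$.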
\begin{algorithm}[!ht] \caption{APDRCD ($C, \eta, A, b, \varepsilon'$)} \label{Algorithm:APDCD} \textbf{Input:} $\{\theta_i \,|\, \theta_0=1, \frac{1-\theta_{i+1}}{\theta_{i+1}^2}=\frac{1}{\theta_i^2}\}, C_0 = 1, \lambda^0 = z^0 = 0, k = 0, L = \frac{4}{\eta}$ \\ \While{$||Ax^k-b||_1 > \varepsilon'$} {Set $y^k = (1-\theta_k)\lambda^k+\theta_k z^k$ \\ Compute $x^{k} = \frac{1}{C_{k}} \biggl(\sum_{j = 0}^{k} \dfrac{x(y^j)}{\theta_{j}} \biggr)$ \\ \textbf{Randomly sample one coordinate} $i_k$ \textbf{where} $i_k \in \{1, 2, \ldots, 2n\}$:\\ Update \begin{equation}\label{a2} \lambda^{k+1}_{i_k} = y^k_{i_k} - \frac{1}{L}\nabla_{i_k}\varphi(y^k) \end{equation} \\ Update \begin{equation}\label{a3} z^{k+1}_{i_k} = z^k_{i_k}-\frac{1}{2n L \theta_k} \nabla_{i_k} \varphi(y^k) \end{equation} \\ Update $k = k+1$ and $C_k = C_{k-1} + \frac{1}{\theta_k}$ } \textbf{Output:} $X^k$ where $x^k = \text{vec}(X^k)$ \end{algorithm} \subsection{Accelerated Primal-Dual Randomized Coordinate Descent (APDRCD)} \label{sec:algorithmic_framework} We denote by $L := \frac{4}{\eta}$ the smoothness constant of the dual objective function $\varphi$ from Lemma~\ref{lemma:dual_smooth}, and define $x(\lambda) : = \mathop{\arg \max}\limits_{x \in \mathbb{R}^{n \times n}} \biggl\{ \eta H(x) - \left\langle C, x\right\rangle + \left\langle A^{\top}\lambda, x\right\rangle \biggr\}$. The APDRCD algorithm is initialized with the auxiliary sequence $\{\theta_i\}$ and two auxiliary dual variable sequences $\{\lambda_i\}$ and $\{\mathbf z_i\}$: the auxiliary sequence $\{\theta_k\}$ is used for the key averaging step, while the two dual variable sequences are used to perform accelerated randomized coordinate descent on the dual objective function $\varphi$ as a subroutine. The APDRCD algorithm is composed of two main parts. First, exploiting the convexity of the dual objective function, we perform a randomized accelerated coordinate descent step on the dual objective function as a subroutine in steps \eqref{a2} and \eqref{a3}.
In the second part, we take a weighted average over the past iterations to obtain a good approximate solution for the primal problem from the approximate solutions to the dual problem~\eqref{eq:dual_entropic}. Notice that the auxiliary sequence $\{\theta_k\}$ is decreasing, so the primal solutions corresponding to the more recent dual solutions have larger weight in this average. \begin{algorithm}[!ht] \caption{Approximating OT by APDRCD} \label{Algorithm:ApproxOT_APDCD} \begin{algorithmic} \STATE \textbf{Input:} $\eta = \dfrac{\varepsilon}{4\log(n)}$ and $\varepsilon'=\dfrac{\varepsilon}{8\left\|C\right\|_\infty}$. \STATE \textbf{Step 1:} Let $\tilde{r} \in \Delta^n$ and $\tilde{l} \in \Delta^n$ be defined as \begin{equation*} \left(\tilde{r}, \tilde{l}\right) = \left(1 - \frac{\varepsilon'}{8}\right) \left(r, l\right) + \frac{\varepsilon'}{8n}\left(\textbf{1}, \textbf{1}\right). \end{equation*} \STATE \textbf{Step 2:} Let $A \in \mathbb{R}^{2n \times n^2}$ and $b \in \mathbb{R}^{2n}$ be defined by \begin{equation*} A\text{vec}(X) = \icol{X\boldsymbol{1}\\X^\top\boldsymbol{1}} \ \ \text{and} \ \ b = \icol{\tilde{r}\\\tilde{l}}. \end{equation*} \STATE \textbf{Step 3:} Compute $\tilde{X} = \text{APDRCD}\left(C, \eta, A, b, \varepsilon'/2\right)$ with $\varphi$ defined in~\eqref{eq:dual_entropic}. \STATE \textbf{Step 4:} Round $\tilde{X}$ to $\hat{X}$ by Algorithm 2 of~\citep{Altschuler-2017-Near} such that $\hat{X}\textbf{1} = r$, $\hat{X}^\top\textbf{1} = l$. \STATE \textbf{Output:} $\hat{X}$.
\end{algorithmic} \end{algorithm} \subsection{Complexity Analysis of APDRCD} \label{sec:complexity} Given the updates of the APDRCD algorithm in Algorithm~\ref{Algorithm:APDCD}, we have the following result regarding the difference of the values of $\varphi$ at $\lambda^{k + 1}$ and $y^{k}$: \begin{lemma}\label{lemma2} Given the updates $\lambda^{k + 1}$ and $y^{k}$ from the APDRCD algorithm, we have the following inequality: \begin{equation*} \varphi(\lambda^{k+1})-\varphi(y^k) \leq -\frac{1}{2L}|\nabla_{i_k} \varphi (y^k)|^2, \end{equation*} where $i_{k}$ is the coordinate chosen in the APDRCD algorithm. \end{lemma} \begin{proof} For convenience, we define a vector-valued function $h(i_k)\in \mathbb{R}^{2n}$ such that $h(i_k)_i=1$ if $i=i_k$, and $h(i_k)_i=0$ otherwise. By the update in Eq.~\eqref{a2} of Algorithm~\ref{Algorithm:APDCD}, we obtain: \begin{equation} \label{lemma1eq1} \varphi(\lambda^{k+1})-\varphi(y^k) = \varphi \biggl( y^k-h(i_k) \cdot \frac{1}{L}\nabla_{i_k}\varphi(y^k) \biggr)-\varphi(y^k). \end{equation} Due to the smoothness of $\varphi$ with respect to the $\|\cdot\|_{2}$ norm in Lemma~\ref{lemma:dual_smooth}, the following inequalities hold: \begin{align} \label{lemma1eq2} \lefteqn{\varphi \biggl( y^k-h(i_k)\frac{1}{L}\nabla_{i_k}\varphi(y^k) \biggr)-\varphi(y^k)} \nonumber \\ \leq & \left \langle \nabla \varphi(y^k),- h(i_k)\frac{1}{L}\nabla_{i_k}\varphi(y^k)\right\rangle \nonumber \\ & \ + \frac{L}{2}\bigl\|h(i_k)\frac{1}{L}\nabla_{i_k}\varphi(y^k)\bigr\|^2 \nonumber \\ =& -\frac{1}{L}\left \langle \nabla\varphi(y^k),\nabla_{i_k}\varphi(y^k)h(i_k)\right \rangle + \frac{1}{2L}(\nabla_{i_k}\varphi(y^k))^2 \nonumber \\ =& -\frac{1}{L}(\nabla_{i_k}\varphi(y^k))^2 + \frac{1}{2L}(\nabla_{i_k}\varphi(y^k))^2 \nonumber \\ =& -\frac{1}{2L}(\nabla_{i_k}\varphi(y^k))^2. \end{align} Combining the results of Eq.~\eqref{lemma1eq1} and Eq.~\eqref{lemma1eq2} completes the proof of the lemma.
\end{proof} The result of Lemma~\ref{lemma2} is vital for establishing an upper bound for $\mathbb{E}_{i_k}\varphi(\lambda^{k+1})$, as shown in the following lemma. \begin{lemma}\label{lemma 3} For each iteration ($k>0$) of the APDRCD algorithm, we have \begin{align*} \lefteqn{\mathbb{E}_{i_k} \big[\varphi(\lambda^{k+1}) \big]} \\ & \leq (1-\theta_k)\varphi(\lambda^k) +\theta_k \big[ \varphi(y^k)+(\lambda-y^k)^\top\nabla\varphi(y^k) \big] \\ & \ +2 L^2 n^2 \theta_k^2 \biggl(||\lambda-z^k||^2 - \mathbb{E}_{i_k} \big[||\lambda-z^{k+1}||^2 \big]\biggr), \end{align*} where the expectation in the above display is taken with respect to the random coordinate $i_{k}$ in Algorithm~\ref{Algorithm:APDCD}. \end{lemma} The proof of Lemma~\ref{lemma 3} is in Appendix~\ref{subsec:proof:lemm_3}. Now, equipped with the result of Lemma~\ref{lemma 3}, we are ready to provide the convergence guarantee and complexity bound of the APDRCD algorithm for approximating the OT problem. We start with the following result regarding an upper bound on $k$ to reach the stopping rule $||A \text{vec}(X^k)-b||_1 \leq \varepsilon'$ for $\varepsilon' = \dfrac{\varepsilon}{8\left\|C\right\|_\infty}$. Here, the expectation is taken with respect to the random coordinates $i_{j}$ in Algorithm~\ref{Algorithm:APDCD} for $1 \leq j \leq k$. \begin{theorem} \label{thm: OTconvergence} The APDRCD algorithm for approximating optimal transport (Algorithm~\ref{Algorithm:ApproxOT_APDCD}) returns an output $X^k$ that satisfies the stopping criterion $\mathbb{E} \big[||A \text{vec}(X^k)-b||_1\big] \leq \varepsilon'$ in a number of iterations $k$ bounded as follows: \begin{equation*} k \leq 12 n^{\frac{3}{2}}\sqrt{\frac{R+1/2}{\varepsilon}} +1, \end{equation*} where $R := \frac{||C||_\infty}{\eta}+ \log(n) - 2 \log(\min \limits_{1\leq i,j\leq n} \{r_{i}, l_{j}\})$, and $\varepsilon'$ and $\eta$ are chosen as in Algorithm~\ref{Algorithm:ApproxOT_APDCD}.
\end{theorem} The proof of Theorem~\ref{thm: OTconvergence} is provided in Appendix~\ref{subsec:proof:thm: OTconvergence}. Given the upper bound on $k$ for the stopping rule in Theorem~\ref{thm: OTconvergence} with $\varepsilon' = \dfrac{\varepsilon}{8\left\|C\right\|_\infty}$, we obtain the following complexity bound for the APDRCD algorithm. \begin{theorem} \label{theorem:complex_APDRCD} The APDRCD algorithm for approximating optimal transport (Algorithm \ref{Algorithm:ApproxOT_APDCD}) returns $\hat{X}\in \mathbb{R}^{n\times n}$ satisfying $\hat{X}\textbf{1} = r$, $\hat{X}^\top\textbf{1} = l$ and $\mathbb{E} [\langle C, \hat{X} \rangle ] - \left \langle C, X^\ast \right \rangle \leq \varepsilon$ in a total number of \begin{equation*} \mathcal{O} \biggl(\frac{n^{\frac{5}{2}}\sqrt{||C||_\infty\log(n)}}{\varepsilon}\biggr) \end{equation*} arithmetic operations. \end{theorem} The proof of Theorem~\ref{theorem:complex_APDRCD} is provided in Appendix~\ref{subsec:proof:theorem:complex_APDRCD}, where we also show that Theorem~\ref{theorem:complex_APDRCD} directly implies a complexity bound for obtaining an $\varepsilon$-optimal solution with high probability. Theorem~\ref{theorem:complex_APDRCD} indicates that the complexity upper bound of APDRCD matches the best known complexity $\widetilde{\mathcal{O}}(\frac{n^{5/2}}{\varepsilon})$ of the APDAGD~\citep{Dvurechensky-2018-Computational} and APDAMD~\citep{lin2019efficient} algorithms. Furthermore, the complexity of APDRCD is better than that of the Sinkhorn and Greenkhorn algorithms, which is $\widetilde{\mathcal{O}}(\frac{n^{2}}{\varepsilon^2})$, in terms of the desired accuracy $\varepsilon$. In Section~\ref{Sec:experiments}, we demonstrate that the APDRCD algorithm also has better practical performance than the APDAGD and APDAMD algorithms on both synthetic and real datasets.
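To make the iteration concrete, here is a compact NumPy sketch of Algorithm~\ref{Algorithm:APDCD} under our reading of the updates: $\text{vec}$ is row-major, the duals start at zero, coordinates of $\lambda^{k+1}$ other than $i_k$ are copied from $y^k$, $x(\lambda)$ is evaluated via the closed-form entropic plan, and the $\theta_k$ recursion is solved in closed form. This is an illustrative reimplementation, not the authors' code, and it favors clarity over efficiency (it evaluates the full gradient at each step):

```python
import numpy as np

def apdrcd(C, r, l, eta, eps_prime, max_iter=20000, seed=0):
    """Illustrative sketch of APDRCD for entropic OT (vec is row-major)."""
    rng = np.random.default_rng(seed)
    n = C.shape[0]
    b = np.concatenate([r, l])
    L = 4.0 / eta                          # smoothness constant of phi

    def plan(dual):                        # closed-form x(lambda)
        a, bet = dual[:n], dual[n:]
        return np.exp((-C + a[:, None] + bet[None, :]) / eta - 1.0)

    def grad(dual):                        # grad phi = stacked marginal residuals
        X = plan(dual)
        return np.concatenate([X.sum(axis=1) - r, X.sum(axis=0) - l])

    lam = np.zeros(2 * n)
    z = np.zeros(2 * n)
    theta, Ck = 1.0, 1.0
    x_sum = np.zeros((n, n))
    err0 = None
    for _ in range(max_iter):
        y = (1.0 - theta) * lam + theta * z
        x_sum += plan(y) / theta           # weighted primal averaging
        Xk = x_sum / Ck
        err = np.abs(np.concatenate([Xk.sum(axis=1), Xk.sum(axis=0)]) - b).sum()
        err0 = err if err0 is None else err0
        if err <= eps_prime:
            break
        i = rng.integers(2 * n)            # sample one of the 2n dual coordinates
        g_i = grad(y)[i]
        lam = y.copy()
        lam[i] -= g_i / L                  # coordinate step on lambda
        z[i] -= g_i / (2 * n * L * theta)  # coordinate step on z
        theta = theta * (np.sqrt(theta ** 2 + 4.0) - theta) / 2.0
        Ck += 1.0 / theta
    return Xk, err, err0
```

On a small random instance, the stacked-marginal error of the averaged plan decreases from its initial value toward the tolerance, while the averaged iterate remains entrywise nonnegative by construction.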
\begin{algorithm}[h] \caption{APDGCD ($C, \eta, A, b, \varepsilon'$)} \label{Algorithm:APDGCD} \textbf{Input:} $\{\theta_i \,|\, \theta_0=1, \frac{1-\theta_{i+1}}{\theta_{i+1}^2}=\frac{1}{\theta_i^2}\}, C_0 = 1, \lambda^0 = z^0 = 0, k = 0, L = \frac{4}{\eta}$ \\ \While{$||Ax^k-b||_1 > \varepsilon'$} {Set $y^k = (1-\theta_k)\lambda^k+\theta_k z^k$ \\ Compute $x^{k} = \frac{1}{C_{k}} \biggl(\sum_{j = 0}^{k} \dfrac{x(y^j)}{\theta_{j}} \biggr)$ \\ \textbf{Select the coordinate} $i_k = \mathop{\rm argmax} \limits_{i \in \{1, 2, \ldots, 2n\}} |\nabla_{i}\varphi(y^k)|$: Update \begin{align*} \lambda^{k+1}_{i_k} = y^k_{i_k} - \frac{1}{L}\nabla_{i_k}\varphi(y^k) \end{align*} Update \begin{align*} z^{k+1}_{i_k} = z^k_{i_k}-\frac{1}{2n L \theta_k} \nabla_{i_k} \varphi(y^k) \end{align*} Update $k = k+1$ and $C_k = C_{k-1} + \frac{1}{\theta_k}$ } \textbf{Output:} $X^k$ where $x^k = \text{vec}(X^k)$ \end{algorithm} \begin{figure*}[!ht] \begin{minipage}[b]{.5\textwidth} \includegraphics[width=65mm,height=48mm]{rcd-agd-syn-1.jpg} \end{minipage} \quad \begin{minipage}[b]{.5\textwidth} \includegraphics[width=65mm,height=48mm]{rcd-agd-syn-2.jpg} \end{minipage} \\ \begin{minipage}[b]{.5\textwidth} \includegraphics[width=65mm,height=48mm]{rcd-agd-syn-3.jpg} \end{minipage} \quad \begin{minipage}[b]{.5\textwidth} \includegraphics[width=65mm,height=48mm]{rcd-agd-syn-4.jpg} \end{minipage} \caption{\small Performance of the APDRCD and APDAGD algorithms on the synthetic images. In the top two images, the comparison is based on the distance $d(P)$ to the transportation polytope, and the maximum, median and minimum of competitive ratios on ten random pairs of images. In the bottom left image, the comparison is based on varying the regularization parameter $\eta\in\{1, 5, 9\}$ and reporting the optimal value of the original optimal transport problem without entropic regularization. Note that the foreground covers 10\% of the synthetic images here.
In the bottom right image, we compare the algorithms using the median of competitive ratios with the coverage ratio of the foreground varying in the range $\{0.1, 0.5, 0.9\}$.} \label{fig:rcd-agd-syn} \end{figure*} \subsection{Accelerated Primal-Dual Greedy Coordinate Descent (APDGCD)} \label{Sec:APDGCD} We next present a greedy version of the APDRCD algorithm, which we refer to as the \emph{accelerated primal-dual greedy coordinate descent} (APDGCD) algorithm. The detailed pseudocode of that algorithm is given in Algorithm~\ref{Algorithm:APDGCD}, while an approximation scheme for OT based on the APDGCD algorithm is summarized in Algorithm~\ref{Algorithm:ApproxOT_APDGCD}. Both the APDGCD and APDRCD algorithms follow the general accelerated primal-dual coordinate descent framework. Similar to the APDRCD algorithm, the algorithmic framework of APDGCD is composed of two main parts: First, instead of performing randomized accelerated coordinate descent on the dual objective function as a subroutine, the APDGCD algorithm chooses, among all the coordinates, the coordinate that maximizes the absolute value of the gradient of the dual objective function of the regularized OT problem. In the second part, we follow the key averaging step of the APDRCD algorithm by taking a weighted average over the past iterations to obtain a good approximate solution for the primal problem from the approximate solutions to the dual problem. Since the auxiliary sequence is decreasing, the primal solutions corresponding to the more recent dual solutions have larger weight in this average. We demonstrate that the APDGCD algorithm enjoys better practical performance than the APDRCD algorithm on both synthetic and real datasets (cf.\ Appendix~\ref{subsec:further_exp}). \begin{algorithm}[h] \caption{Approximating OT by APDGCD} \label{Algorithm:ApproxOT_APDGCD} \begin{algorithmic} \STATE \textbf{Input:} $\eta = \dfrac{\varepsilon}{4\log(n)}$ and $\varepsilon'=\dfrac{\varepsilon}{8\left\|C\right\|_\infty}$.
\STATE \textbf{Step 1:} Let $\tilde{r} \in \Delta^n$ and $\tilde{l} \in \Delta^n$ be defined as \begin{equation*} \left(\tilde{r}, \tilde{l}\right) = \left(1 - \frac{\varepsilon'}{8}\right) \left(r, l\right) + \frac{\varepsilon'}{8n}\left(\textbf{1}, \textbf{1}\right). \end{equation*} \STATE \textbf{Step 2:} Let $A \in \mathbb{R}^{2n \times n^2}$ and $b \in \mathbb{R}^{2n}$ be defined by \begin{equation*} A\text{vec}(X) = \icol{X\boldsymbol{1}\\X^\top\boldsymbol{1}} \ \ \text{and} \ \ b = \icol{\tilde{r}\\\tilde{l}}. \end{equation*} \STATE \textbf{Step 3:} Compute $\tilde{X} = \text{APDGCD}\left(C, \eta, A, b, \varepsilon'/2\right)$ with $\varphi$ defined in~\eqref{eq:dual_entropic}. \STATE \textbf{Step 4:} Round $\tilde{X}$ to $\hat{X}$ by Algorithm 2 of~\citep{Altschuler-2017-Near} such that $\hat{X}\textbf{1} = r$, $\hat{X}^\top\textbf{1} = l$. \STATE \textbf{Output:} $\hat{X}$. \end{algorithmic} \end{algorithm} \begin{figure*}[!ht] \begin{minipage}[b]{.5\textwidth} \includegraphics[width=65mm,height=48mm]{rcd-amd-syn-1.jpg} \end{minipage} \quad \begin{minipage}[b]{.5\textwidth} \includegraphics[width=65mm,height=48mm]{rcd-amd-syn-2.jpg} \end{minipage} \\ \begin{minipage}[b]{.5\textwidth} \includegraphics[width=65mm,height=48mm]{rcd-amd-syn-3.jpg} \end{minipage} \quad \begin{minipage}[b]{.5\textwidth} \includegraphics[width=65mm,height=48mm]{rcd-amd-syn-4.jpg} \end{minipage} \caption{\small Performance of the APDRCD and APDAMD algorithms on the synthetic images.
The organization of the images is similar to that in Figure~\ref{fig:rcd-agd-syn}.} \label{fig:rcd-amd-syn} \end{figure*} \section{Experiments} \label{Sec:experiments} We carry out comparative experiments between the APDRCD and APDGCD algorithms and state-of-the-art primal-dual algorithms for the OT problem, including the APDAGD and APDAMD algorithms, on both synthetic images and the MNIST Digits dataset.\footnote{\href{http://yann.lecun.com/exdb/mnist/}{http://yann.lecun.com/exdb/mnist/}} Due to space constraints, the comparative experiments between the APDGCD algorithm and the APDAGD/APDAMD algorithms, as well as further experiments for the APDRCD algorithm on larger synthetic datasets and the CIFAR10 dataset, are deferred to Appendix~\ref{subsec:further_exp}. Note that for the above comparisons, we also utilize the default linear programming solver in MATLAB to obtain the optimal value of the original optimal transport problem without entropic regularization. \subsection{APDRCD Algorithm with Synthetic Images} \label{subsec:APDRCD_synthetic} We compare the performance of the APDRCD algorithm with the APDAGD and APDAMD algorithms on synthetic images. The generation of synthetic images follows the procedure of~\citep{Altschuler-2017-Near, lin2019efficient}. The images are of size $20\times20$ and generated by randomly placing a foreground square on an otherwise black background. For the intensities of the background pixels and foreground pixels, we choose uniform distributions on $[0,1]$ and $[0,50]$, respectively. We vary the proportion of the size of the foreground square in $\{0.1, 0.5, 0.9\}$ of the full size of the image and implement all the algorithms on the different kinds of synthetic images. \textbf{Evaluation metric:} We utilize the metrics from~\citep{Altschuler-2017-Near}.
The first metric is the distance between the output of the algorithm and the transportation polytope, $d(X) := \|r(X)-r\|_1+\|l(X)-l\|_1$, where $r(X)$ and $l(X)$ are the row and column marginal vectors of the output matrix $X$ while $r$ and $l$ stand for the true row and column marginal vectors. The second metric is the competitive ratio, defined by $\log\left(d(X_1)/d(X_2)\right)$, where $d(X_1)$ and $d(X_2)$ are the distances between the outputs of the two algorithms and the transportation polytope. \begin{figure*}[!ht] \begin{minipage}[b]{.3\textwidth} \includegraphics[width=52mm,height=39mm]{rcd-agd-mnist-1.jpg} \end{minipage} \quad \begin{minipage}[b]{.3\textwidth} \includegraphics[width=52mm,height=39mm]{rcd-agd-mnist-2.jpg} \end{minipage} \quad \begin{minipage}[b]{.3\textwidth} \includegraphics[width=52mm,height=39mm]{rcd-agd-mnist-3.jpg} \end{minipage} \\ \begin{minipage}[b]{.3\textwidth} \includegraphics[width=52mm,height=39mm]{rcd-amd-mnist-1.jpg} \end{minipage} \quad \begin{minipage}[b]{.3\textwidth} \includegraphics[width=52mm,height=39mm]{rcd-amd-mnist-2.jpg} \end{minipage} \quad \begin{minipage}[b]{.3\textwidth} \includegraphics[width=52mm,height=39mm]{rcd-amd-mnist-3.jpg} \end{minipage} \caption{\small Performance of the APDRCD, APDAGD and APDAMD algorithms on the MNIST images. In the first row of images, we compare the APDRCD and APDAGD algorithms in terms of iteration counts. The leftmost image specifies the distances $d(P)$ to the transportation polytope for the two algorithms; the middle image specifies the maximum, median and minimum of competitive ratios on ten random pairs of MNIST images; the rightmost image specifies the values of regularized OT with varying regularization parameter $\eta \in \{1,5,9\}$.
In addition, the second row of images presents comparative results for APDRCD versus APDAMD.} \label{fig:rcd-agd-amd-mnist} \end{figure*} \textbf{Experimental settings and results:} We perform two pairwise comparative experiments for the APDRCD algorithm versus the APDAGD and APDAMD algorithms by running these algorithms on ten randomly selected pairs of synthetic images. We also evaluate all the algorithms with varying regularization parameter $\eta \in \{1, 5, 9\}$ and against the optimal value of the original OT problem without entropic regularization, as suggested by~\citep{Altschuler-2017-Near, lin2019efficient}. We present the results in Figure~\ref{fig:rcd-agd-syn} and Figure~\ref{fig:rcd-amd-syn}. The APDRCD algorithm outperforms the APDAGD and APDAMD algorithms in terms of iteration count. When the number of iterations is small, the APDRCD algorithm achieves faster and more stable decrements than the other two algorithms with regard to both the distance to the polytope and the value of OT, which is beneficial for tuning purposes and illustrates the advantage of using randomized coordinate descent on the dual regularized problem. \subsection{APDRCD Algorithm with MNIST Images} \label{subsec:APDRCD_mnist} We compare the APDRCD algorithm with the APDAGD and APDAMD algorithms on MNIST images using the same set of evaluation metrics. The image pre-processing follows the procedure suggested in~\citep{lin2019efficient}; we omit the details for the sake of brevity. We present the results on the MNIST images in Figure~\ref{fig:rcd-agd-amd-mnist} for regularization parameter values $\eta \in \{1, 5, 9\}$. We also evaluate the algorithms against the optimal value of the original OT problem without entropic regularization.
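Both evaluation metrics used throughout these comparisons can be written down directly; a minimal NumPy sketch (function and variable names are ours, not from the paper's code):

```python
import numpy as np

def dist_to_polytope(X, r, l):
    """d(X) = ||r(X) - r||_1 + ||l(X) - l||_1, where r(X), l(X) are the
    row and column marginals of the transport matrix X."""
    return np.abs(X.sum(axis=1) - r).sum() + np.abs(X.sum(axis=0) - l).sum()

def competitive_ratio(X1, X2, r, l):
    """log(d(X1)/d(X2)): negative values favor the first algorithm."""
    return np.log(dist_to_polytope(X1, r, l) / dist_to_polytope(X2, r, l))
```

As a sanity check, the product coupling $rl^\top$ (via `np.outer(r, l)`) has marginals exactly $r$ and $l$, so its distance to the polytope is zero, while perturbing a single entry by $\delta$ increases $d$ by $2\delta$.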
As shown in Figure~\ref{fig:rcd-agd-amd-mnist}, the APDRCD algorithm outperforms both the APDAGD and APDAMD algorithms on the MNIST dataset in terms of the number of iterations. Additionally, the APDRCD algorithm displays faster and smoother convergence than the other algorithms at small iteration numbers with regard to both evaluation metrics, which suggests that it is easier to tune in practice. \FloatBarrier \section{Discussion} \label{Sec:discussion} We have proposed and analyzed new accelerated primal-dual coordinate descent algorithms for approximating the optimal transport distance between two discrete probability measures. These algorithms have theoretical complexity comparable to that of existing accelerated primal-dual algorithms while enjoying better experimental performance. Furthermore, we show that the APDRCD and APDGCD algorithms are suitable for other large-scale problems beyond computing the OT distance; we propose extensions that approximate the Wasserstein barycenters of multiple probability distributions. There are several directions for future work. Given the favorable practical performance of the APDRCD and APDGCD algorithms over existing primal-dual algorithms, it is of interest to carry out further experiments with the distributed APDRCD and APDGCD algorithms for computing Wasserstein barycenters. Another important direction is to construct fast distributed algorithms for the case of time-varying and directed machine networks to approximate the Wasserstein barycenters. It remains an interesting and important open question how the dynamics of the network affect the performance of these algorithms. \subsubsection*{Acknowledgements} We thank Tianyi Lin for many helpful discussions. This work was supported in part by the Mathematical Data Science program of the Office of Naval Research under grant number N00014-18-1-2764. \newpage
\section{Introduction} \label{Intro} Several theories based on the picture of particle-hole excitations have successfully described the nuclear spin-isospin response. Not only the lowest-order one-particle one-hole (1p1h) excitation but also many-particle many-hole excitations have been observed to play a significant role. For neutrino-nucleus quasielastic scattering, the inclusion of the two-particle two-hole (2p2h) configuration enhances the cross section to the point where a theoretical calculation is consistent with the experimental data~\cite{MartiniPhysRevC.80.065501}. For a nuclear spin-isospin response, the 2p2h-configuration mixing partly explains the Gamow-Teller (GT) quenching problem, which is known not to follow the Ikeda sum rule~\cite{Ikeda01031964,FUJITA1965145,Gaarde1983} and is one of the long-standing problems in nuclear physics. Other reasons for this quenching are the existence of the $\Delta$-$h$ excitation and the coupling of the GT-$1^+$ state to the spin-quadrupole (SQ) $1^+$ state mediated by the tensor force \cite{BAI2009}. For a more detailed review, see Ref.~\cite{Prog.Part.Nucl.Phys.56.446}. Recently the effect of 2p2h mixing on the GT-transition strength $B({\rm GT})$ employing the fully self-consistent second Tamm-Dancoff approximation (STDA) was investigated~\cite{MinatoPhysRevC.93.044319}. The calculated $B({\rm GT})$ distribution of $^{48}{\rm Ca}$ was compared with its experimentally measured value~\cite{YakoPhysRevLett.103.012503}, which was derived through an analysis of the charge-exchange reaction $^{48}{\rm Ca}(p,n)^{48}{\rm Sc}$ by means of the distorted-wave impulse approximation (DWIA). It was shown that the STDA (where the 2p2h model space was confined to seven single-particle levels) described the experimental data better than the 1p1h Tamm-Dancoff approximation (TDA). 
It was also confirmed that the broadening of the $B({\rm GT})$ distribution by the 2p2h effect was essential to account for the observed experimental behavior, as preceding works have shown using different models~\cite{Phys.Lett.166B.18,Phys.Rev.C90.054328}. Charge-exchange reactions such as $(p,n)$ and $(^3{\rm He},t)$ excite target nuclei through the one-body operator, which populates 1p1h states in target nuclei. Therefore, the 2p2h configuration is not directly involved in the above reactions. However, it would have an indirect influence on the cross sections through the transition from 1p1h states to 2p2h states. Even though the importance of the 2p2h configuration in $B({\rm GT})$ has been pointed out by many authors~\cite{Phys.Lett.166B.18, Wakasa2012, Prog.Part.Nucl.Phys.56.446, Wambach1988}, a microscopic understanding of how such a higher-order configuration enters the charge-exchange cross section is not yet transparent. In the present paper we therefore investigate the effect of the 2p2h configuration on the angular-distributed cross section, which is directly comparable with experimental data. In order to calculate the cross section of spin-flip transitions, we work with the distorted-wave Born approximation (DWBA) with the microscopic transition density obtained by the STDA. The $\Delta l=2$ transition also generates $1^+$ resonance states at excitation energies equal to those of GT-$1^+$ states. When the experimental $B({\rm GT})$ of $^{48}{\rm Ca}$ was evaluated~\cite{YakoPhysRevLett.103.012503}, it was assumed that the zero-degree charge-exchange cross section is proportional to $B({\rm GT})$ and that the $\Delta l=2$ transition plays a negligible role. This assumption, however, was confirmed only for the $^{13}{\rm C}(p,n)^{13}{\rm N}$ reaction~\cite{Taddeucci1987}. Therefore we investigate whether the assumption also holds for the present system, $^{48}{\rm Ca}(p,n)^{48}{\rm Sc}$.
In this article both the spin-flip and non-spin-flip transitions are surveyed. Note that, in the Fermi transition, the nucleus populates the isobaric analogue state (IAS), which has the same isospin $T_A$ as that of the ground state of the parent nucleus. It does not couple to other 1p1h states having $T_A \pm 1$; i.e., the 2p2h effect is expected to be small for the Fermi transition as seen in outgoing neutron spectra of $(p,n)$ reactions (see, for example, Ref.~\cite{PhysRevC.31.1147}). Therefore we investigate the Fermi transition just to demonstrate our theoretical framework. This paper is organized as follows. Section~\ref{formulation} is dedicated to the formulation of the structure and reaction models as well as the form factor. In Sec.~\ref{result}, results on the non-spin-flip (Fermi-type) transition are shown. The 2p2h effect on the cross section of the spin-flip transition is also discussed. A summary is given in Sec.~\ref{summary}. \section{Theoretical framework} \label{formulation} Convolution of the DWBA and the random phase approximation (which includes a correlation in ground states in addition to the TDA) is widely used to analyze experimental data at intermediate energies, as in the measurements of charge-exchange cross sections at zero degree~\cite{Taddeucci1987,Phys.Rev.Lett.59.1401,Phys.Rev.C91.064316} accompanied by the multipole decomposition technique~\cite{Prog.Part.Nucl.Phys.56.446}. Here we use DWBA+TDA/STDA based on the Skyrme energy density functional~\cite{VautherinBrink} in order to discuss the effect of the 2p2h configuration on charge-exchange cross sections. Because the ground state correlation is expected to be small for the Fermi- and GT-type charge-exchange reactions~\cite{NISHIZAKI1988231,NGUYENDINHDANG1997719}, the use of the TDA will not affect our results. 
We briefly describe the STDA as well as the TDA in Sec.~\ref{formulation1} and illustrate the formulations of the form factor and the DWBA in Secs.~\ref{formulation2} and~\ref{formulation3}, respectively. \subsection{Structure model} \label{formulation1} We consider the transition $A \rightarrow B$ induced by the charge-exchange reaction $A(p,n)B$. To describe the GT and $\Delta l=2$ transitions as well as other $1^+$ multipole spin-flip transitions relevant to the reaction studied, we adopt the STDA as explained in Ref.~\cite{MinatoPhysRevC.93.044319}. In the STDA the many-body wave function $\ket{B_{\alpha}}$, which is a resonance state of $B$ with respect to the $A$'s ground state $\ket{A}$, is written as \begin{align} \ket{B_\alpha} &= \left[ \sum_{mi}X_{mi} a_m^\dagger a_i +\sum_{mnij}\mathcal{X}_{mnij} a_m^\dagger a_n^\dagger a_i a_j \right] \ket{A} \nonumber\\ &\equiv \sum_{mi}X_{mi} \ket{m(i)^{-1}} + \sum_{mnij}\mathcal{X}_{mnij} \ket{mn(ij)^{-1}}, \label{STDAOpe} \end{align} where $a_\nu^\dagger$ ($a_\nu$) is the creation (annihilation) operator in the single-particle state $\nu$, and $\nu=m,n,p,q$ ($\nu=i,j,k,l$) for the particle (hole) states. We introduce the index $\alpha$ to express the non-spin-flip transition ($\alpha=s0$), the spin-flip transition ($\alpha=s1$), and the $\Delta l=2$ transition ($\alpha=l2$). We work with the Skyrme-Hartree-Fock model to obtain $\ket{A}$. The coefficients $X_{mi}$ and $\mathcal{X}_{mnij}$ are determined by solving the so-called STDA equation~\cite{Yannouleas1987}, \begin{align} \left( \begin{array}{cc} A & \mathcal{A}_{12} \\ \mathcal{A}_{21} & \mathcal{A}_{22} \\ \end{array} \right) \left( \begin{array}{c} X \\ \mathcal{X}\\ \end{array} \right) = \varepsilon \left( \begin{array}{c} X \\ \mathcal{X}\\ \end{array} \right). \label{STDAeq} \end{align} Here the matrix elements in Eq.~\eqref{STDAeq} are given by Ref.~\cite{Yannouleas1987} and $\varepsilon$ is the phonon energy. 
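Structurally, the STDA equation above is an ordinary Hermitian eigenvalue problem in the combined 1p1h + 2p2h space, with $\mathcal{A}_{21}=\mathcal{A}_{12}^\dagger$ for a Hermitian Hamiltonian. A schematic numerical sketch using random symmetric toy matrices (stand-ins only; the actual matrix elements come from the Skyrme functional):

```python
import numpy as np

# Toy illustration of the block structure of the STDA equation.
rng = np.random.default_rng(0)
n1, n2 = 4, 6                            # toy sizes of the 1p1h and 2p2h spaces
A = rng.normal(size=(n1, n1)); A = (A + A.T) / 2
A22 = rng.normal(size=(n2, n2)); A22 = (A22 + A22.T) / 2
A12 = rng.normal(size=(n1, n2))

H = np.block([[A, A12], [A12.T, A22]])   # full STDA matrix
eps, vec = np.linalg.eigh(H)             # phonon energies and amplitudes
X, Xcal = vec[:n1, 0], vec[n1:, 0]       # 1p1h / 2p2h amplitudes of lowest state
P_2p2h = np.sum(Xcal ** 2)               # 2p2h probability, sum of Xcal^2
```

Setting the off-diagonal block $\mathcal{A}_{12}$ to zero decouples the spaces and recovers the plain TDA spectrum from the upper block.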
A value of $\mathcal{X}_{mnij}=0$ corresponds to the standard TDA, which does not include 2p2h-configuration mixing. In the TDA, $\Ket{B_\alpha}$ is given by \begin{align} \Ket{B_\alpha} = \sum_{mi} X_{mi} a_m^\dagger a_i \ket{A}. \label{TDAOpe} \end{align} The coefficients $X_{mi}$ are obtained from the so-called TDA equation; see Refs.~\cite{DJRowe,RingandSchuck}. Since the 2p2h effect on the IAS originating from the Fermi transition is known to be negligible (see Sec.~\ref{Intro}), we describe $\Ket{B_{s0}}$ by Eq.~\eqref{TDAOpe}. The transition density, which is employed to calculate the form factor shown later, is given by \begin{align} g_\alpha(r_{i_t}) &= \frac{1}{\hat{j}_B} \sum_{mi} X_{mi}R_m(r_{i_t})R_i(r_{i_t}) \left\langle j_ml_m\left|\left| \mathcal{G}_\alpha \right|\right|j_il_i\right\rangle, \label{traden} \end{align} where $R_m$ $(R_i)$ is the radial part of the single-particle wave function and $j_m = l_m \pm 1/2$ ($j_i = l_i \pm 1/2$), with $l_m$ ($l_i$) the magnitude of the orbital angular momentum of state $m$ $(i)$. The coordinate of the $i_t$th nucleon in the target is $\boldsymbol{r}_{i_t}$ and $j_B$ is the magnitude of the spin of $B$. We use the abbreviation $\hat{j}_B=\sqrt{2j_B+1}$. The transition operator for the non-spin-flip transition is \begin{align} \mathcal{G}_{s0}=\boldsymbol{\tau}Y_{l=0,0}(\hat{\boldsymbol{r}}_{i_t}), \label{GF} \end{align} and those of the spin-flip and $\Delta l=2$ transitions are, respectively, \begin{align} \mathcal{G}_{s1}&=\boldsymbol{\tau}Y_{l=0,0}(\hat{\boldsymbol{r}}_{i_t})\boldsymbol{\sigma}, \label{GGT}\\ \mathcal{G}_{l2}&=\boldsymbol{\tau}\left[Y_{l=2}(\hat{\boldsymbol{r}}_{i_t})\otimes\boldsymbol{\sigma}\right]_{1M}, \label{Gl2} \end{align} where $\boldsymbol{\sigma}$ ($\boldsymbol{\tau}$) is the Pauli spin (isospin) operator. Here $l$ corresponds to the orbital angular momentum transfer of the relative motion [see Eq.~\eqref{transAM}], and $M=0,\pm 1$.
\subsection{Form factor} \label{formulation2} The form factor is expressed by \begin{align} \mathcal{F}_{\alpha}(\boldsymbol{R}) =\Braket{nB\left|v_{\alpha}\right|pA}, \label{FF1} \end{align} where $\boldsymbol{R}$ is the relative coordinate of the $p$-$A$ and $n$-$B$ system. The ket (bra) vector represents the product of the spin-wave function of the projectile (ejectile) and the many-body wave function of $A$ ($B$). The transitions of non-spin-flip (the spin transfer $\Delta s=0$), spin-flip ($\Delta s=1$), and $\Delta l=2$ components are respectively caused by the interactions, \begin{align} v_{s0}&=\sum_{i_pi_t}V_{s0}(\rho)\boldsymbol{\tau}_{i_p}\cdot\boldsymbol{\tau}_{i_t}, \label{vF}\\ v_{s1}&=\sum_{i_pi_t}V_{s1}(\rho)\left(\boldsymbol{\sigma}_{i_p}\cdot\boldsymbol{\sigma}_{i_t}\right) \left(\boldsymbol{\tau}_{i_p}\cdot\boldsymbol{\tau}_{i_t}\right), \label{vGT}\\ v_{l2}&=\sum_{i_pi_t}V_{l2}(\rho)\left([\boldsymbol{\sigma}_{i_p}Y_2]_1\cdot[\boldsymbol{\sigma}_{i_t}Y_2]_1\right) \left(\boldsymbol{\tau}_{i_p}\cdot\boldsymbol{\tau}_{i_t}\right), \label{vl2} \end{align} where $\rho=\left|\boldsymbol{r}_{i_p}-\boldsymbol{r}_{i_t}\right|$ and the sums over $i_p$ ($i_t$) run over the nucleons of the projectile (target nucleus) up to its mass number. We assume that the radial parts $V_{s0}$, $V_{s1}$, and $V_{l2}$ are one-range Gaussian functions given by \begin{align} V_{s0}(\rho)=\bar{V}_{0}e^{-\left(\frac{\rho}{\rho_0}\right)^2}, \quad V_{s1}(\rho)=V_{l2}(\rho)=\bar{V}_{1}e^{-\left(\frac{\rho}{\rho_1}\right)^2}, \end{align} the parameters of which are determined phenomenologically. Since the present work focuses on the investigation of the 2p2h effect, we use this phenomenological interaction rather than a microscopic one. Following the formalism of Refs.~\cite{Petrovich1977,Cook1984}, the form factor is obtained through the partial-wave expansion.
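For the one-range Gaussian radial shape above, the $l=0$ momentum-space transform used in the partial-wave expansion has the closed form $\tilde V(K)=\bar V\,\pi^{3/2}\rho_0^3\,e^{-(K\rho_0)^2/4}$, which can be verified by quadrature; a sketch with illustrative parameter values quoted later in the text:

```python
import numpy as np
from scipy.integrate import quad

# Check that the momentum-space transform of the one-range Gaussian,
#   Vtilde(K) = 4*pi * Int rho^2 [sin(K rho)/(K rho)] V(rho) d rho,
# reduces to Vbar * pi^{3/2} * rho0^3 * exp(-(K rho0)^2 / 4).
Vbar, rho0 = -275.8, 1.484   # MeV and fm; illustrative values from the text

def V(rho):
    return Vbar * np.exp(-(rho / rho0) ** 2)

def Vtilde_numeric(K):
    # np.sinc(x) = sin(pi x)/(pi x), so sin(K rho)/(K rho) = sinc(K rho / pi)
    f = lambda rho: 4 * np.pi * rho ** 2 * np.sinc(K * rho / np.pi) * V(rho)
    return quad(f, 0.0, 20.0)[0]         # integrand dies off well before 20 fm

def Vtilde_analytic(K):
    return Vbar * np.pi ** 1.5 * rho0 ** 3 * np.exp(-(K * rho0) ** 2 / 4)
```

The Gaussian shape is convenient precisely because this transform stays Gaussian in $K$, so the momentum integral in the form factor converges rapidly.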
The radial part of $\mathcal{F}_{\alpha}$ is calculated as \begin{align} F_{lsj}^\alpha(R)=\frac{i^l}{\pi^2}\hat{j}\int \tilde{V}_\alpha (K) \tilde{g}_{\alpha} (K)j_l(KR)K^2 dK , \label{radFF} \end{align} where $s$ and $j$ are the transferred angular momenta defined by \begin{align} \boldsymbol{j}=\boldsymbol{j}_B-\boldsymbol{j}_A,\quad \boldsymbol{s}=\boldsymbol{j}_p-\boldsymbol{j}_n,\quad \boldsymbol{l}=\boldsymbol{j}-\boldsymbol{s}, \label{transAM} \end{align} with the spin $\boldsymbol{j}_x$ of the particle $x$~$(=p,n,A,~{\rm and}~B)$. The interaction and transition density in momentum space, with $K$ the momentum conjugate to $R$, are respectively defined by \begin{align} \tilde V_\alpha (K) &= 4\pi \int d\rho \rho^2 \frac{\sin(K\rho)}{K\rho} V_\alpha(\rho), \label{momv}\\ \tilde g_{\alpha} (K) &= 4\pi \int dr_{i_t} \, r_{i_t}^2 j_{l} (Kr_{i_t}) g_\alpha(r_{i_t}) \label{momg}, \end{align} with the spherical Bessel function $j_l$ ($l=0$ for the non-spin-flip and spin-flip transitions, and $l=2$ for the $\Delta l=2$ case). Expanding the spherical Bessel function of Eq.~\eqref{momg} in terms of $K$, we obtain integrands proportional to $g_\alpha, r_{i_t}^2g_\alpha$, and so on. The lowest-order terms for $\alpha=s1$ and $l2$ correspond to the GT- and SQ-$1^+$ transitions, respectively. Incidentally, the first-order term for $\alpha=s1$ corresponds to the spin-monopole transition, which is difficult to distinguish from the GT transition experimentally~\cite{YakoPhysRevLett.103.012503}. Higher-order contributions can be safely ignored, as they are negligibly small at the excitation energies studied in this work. \subsection{Reaction model} \label{formulation3} The following expression is based on the formalism in Ref.~\cite{Cook1984} but is generalized to include the spin-orbit interaction, i.e., the coupling between the projectile's (ejectile's) spin and the $p$-$A$ ($n$-$B$) orbital angular momentum in the initial (final) channel.
The transition matrix with the DWBA under the partial-wave expansion is given by \begin{align} T^{\rm (DWBA)}_{\alpha;m_p m_n m_A m_B} &=\frac{4\pi}{K_pK_n}(-)^{j_n+m_n} \hat{j}_n \nonumber\\ &\times \sum_{jm_j}\left( j_A m_A j m_j | j_B m_B \right) \mathcal{S}_{\alpha;jm_j}^{m_pm_n}, \label{TmatGen} \end{align} where $m_j$ and $m_x$ respectively correspond to the $z$-projections of $\boldsymbol{j}$ and $\boldsymbol{j}_x$. The magnitude of the wave number of the projectile (ejectile) is expressed by $K_p$ ($K_n$). The function $\mathcal{S}_{\alpha;jm_j}^{m_pm_n}$ is defined by \begin{align} \mathcal{S}_{\alpha;jm_j}^{m_pm_n} &= \left(4\pi\right)^{-\frac{1}{2}} \sum_{\substack{J_iJ_f\\L_iL_f\\ls}} i^{L_i-L_f-l} \hat{s}\hat{J}_i\hat{J}_f\hat{L}_i^2\hat{L}_f^2 I_{J_iJ_fL_iL_f}^{\alpha;lsj} \nonumber\\ &\times \left( L_i 0 L_f 0 | l 0 \right) \begin{Bmatrix} L_f & L_i & l \\ j_n & j_p & s \\ J_f & J_i & j \end{Bmatrix} f_{jj_pj_nL_iL_f}^{m_jm_pm_n} \left(\cos\theta\right), \label{redTmatGen} \end{align} where $L_i$ ($L_f$) is the magnitude of the orbital angular momentum regarding the relative $p$-$A$ ($n$-$B$) motion, and its coupled spin with $j_p$ ($j_n$) is expressed by $J_i$ ($J_f$). The conservation of the total angular momentum is given by \begin{align} \left[\left[\boldsymbol{j}_p \otimes \boldsymbol{L}_i\right]_{\boldsymbol{J}_i} \otimes \boldsymbol{j}_A\right] = \left[\left[\boldsymbol{j}_n \otimes \boldsymbol{L}_f\right]_{\boldsymbol{J}_f} \otimes \boldsymbol{j}_B\right]. 
\label{totalJ} \end{align} The overlap integral $I_{J_iJ_fL_iL_f}^{\alpha;lsj}$ and the function $f_{jj_pj_nL_iL_f}^{m_jm_pm_n}$ are respectively defined as \begin{align} I_{J_iJ_fL_iL_f}^{\alpha;lsj}&=\int dR \tilde \xi_{n;J_fL_f}(K_n, R)F_{lsj}^\alpha(R)\tilde \xi_{p;J_iL_i}(K_p, R), \label{ovlI} \end{align} \begin{align} &f_{jj_pj_nL_iL_f}^{m_jm_pm_n} \left(\cos\theta\right) \nonumber\\ &\quad= \left( J_i m_p J_f, m_j-m_p | j m_j \right) \left( j_p m_p L_i 0 | J_i m_p \right) \nonumber\\ &\quad\times \left( j_n, -m_n L_f, m_j-m_p+m_n | J_f, m_j-m_p \right) \nonumber\\ &\quad\times \left[ \frac{(L_f-\left|m_j-m_p+m_n \right|)!}{(L_f+\left|m_j-m_p+m_n\right|)!} \right]^{\frac{1}{2}} P_{L_f,m_j-m_p+m_n} \left(\cos\theta\right), \label{fmmm} \end{align} with the Legendre function $P_{L_f,m_j-m_p+m_n}$ as a function of the emission angle $\theta$. The partial wave $\tilde\xi_{\gamma;JL}~=~P_{\rm NL}^{(\gamma)}\xi_{\gamma;JL}$ ($\gamma=p~{\rm or}~n$) is given as the solution of the Schr\"odinger equation, \begin{align} \left[\frac{d^2}{dR^2} +K_\gamma^2 -\frac{L(L+1)}{R^2} -\frac{2\mu_\gamma}{\hbar^2}U_\gamma(R) \right] \xi_{\gamma;JL}(K_\gamma,R) =0, \label{ScheqLS} \end{align} where the reduced mass is represented by $\mu_\gamma$ and the distorting potential $U_\gamma$ involves the central, spin-orbit, and Coulomb terms. In order to take into account the nonlocality of the nucleon optical potential, we multiply the distorted wave $\xi_{\gamma;JL}$ by the so-called Perey factor $P_{\rm NL}^{(\gamma)}$~\cite{PEREY1962353}, \begin{align} P_{\rm NL}^{(\gamma)}(R) &= \left[ 1-\frac{\mu_\gamma\beta^2}{2\hbar^2}U_\gamma^{\rm (N)}(R) \right]^{-\frac{1}{2}}, \label{Pereyfac} \end{align} with the nonlocal parameter $\beta$ and the nuclear part $U_\gamma^{\rm (N)}$ of the distorting potential.
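As an illustration of solving the radial equation, the following sketch integrates an s-wave distorted wave with the standard Numerov method, using a schematic Woods-Saxon well in place of the full optical potential (no spin-orbit or Coulomb terms; all parameter values here are illustrative assumptions, not the ones used in the calculations):

```python
import numpy as np

# Minimal Numerov integration of xi'' + k^2(R) xi = 0 for an s-wave.
hbar2_2mu = 20.9                # hbar^2/(2 mu) in MeV fm^2, roughly a nucleon
E, L = 25.0, 0                  # MeV, s-wave
K2 = E / hbar2_2mu              # asymptotic K^2 in fm^-2

def U(R):                       # toy Woods-Saxon well (assumed parameters)
    return -45.0 / (1.0 + np.exp((R - 4.6) / 0.65))

R = np.linspace(1e-4, 20.0, 4001)
h = R[1] - R[0]
# local wave number squared: k^2(R) = K^2 - L(L+1)/R^2 - U(R)/(hbar^2/2mu)
k2 = K2 - L * (L + 1) / R**2 - U(R) / hbar2_2mu

xi = np.zeros_like(R)
xi[0], xi[1] = 0.0, h ** (L + 1)        # regular boundary condition near origin
for i in range(1, len(R) - 1):          # Numerov recursion
    f0 = 1 + h**2 * k2[i - 1] / 12
    f1 = 1 + h**2 * k2[i] / 12
    f2 = 1 + h**2 * k2[i + 1] / 12
    xi[i + 1] = ((12 - 10 * f1) * xi[i] - f0 * xi[i - 1]) / f2
```

The resulting wave oscillates with a shorter local wavelength inside the attractive well than outside, which is the distortion that the overlap integral of Eq.~(21) picks up.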
The cross section is calculated as \begin{align} \frac{d\sigma_\alpha}{d\Omega} &= \frac{\mu_p\mu_n}{\left(2\pi\hbar^2\right)^2}\frac{K_n}{K_p}\frac{1}{\left(\hat{j}_p\hat{j}_A\right)^2} \sum_{\substack{m_pm_n\\m_Am_B}}\left|T^{\rm (DWBA)}_{\alpha;m_p m_n m_A m_B}\right|^2 \nonumber\\ &= \frac{1}{E_pE_n}\frac{K_n}{K_p}\left(\frac{\hat{j}_n\hat{j}_B}{\hat{j}_p\hat{j}_A}\right)^2 \sum_{jm_j}\frac{1}{\hat{j}^2}\sum_{m_pm_n}\left|\mathcal{S}_{\alpha;jm_j}^{m_pm_n}\right|^2 \label{csGen}, \end{align} with $E_\gamma=\left(\hbar K_\gamma\right)^2/(2\mu_\gamma)$. \section{Results and discussion} \label{result} \subsection{Model setting} \label{result1} The ground state wave function of $^{48}{\rm Ca}$ is calculated by the Skyrme-Hartree-Fock approach~\cite{VautherinBrink} with the SGII effective interaction \cite{Phys.Lett.B106_379}. To obtain the non-spin-flip $0^+$ and spin-flip $1^+$ excited states, we solve the STDA and TDA equations with the same force in a self-consistent manner, and the transition density given by Eq.~\eqref{traden} is calculated for each state. The model space of the STDA and TDA calculations consists of single-particle orbits up to $100$~MeV for the 1p1h configuration and $1d_{5/2}, 1d_{3/2}, 2s_{1/2}, 1f_{7/2}, 2p_{3/2}, 2p_{1/2}$, and $1f_{5/2}$ orbits for the 2p2h configuration as performed in Ref.~\cite{MinatoPhysRevC.93.044319}. The neutron and proton orbits are assumed to be fully occupied up to $1f_{7/2}$ and $2s_{1/2}$, respectively. To calculate the form factor, we adjust the strengths $\bar{V}_0$ and $\bar{V}_1$, while keeping the range parameter fixed at $\rho_0=\rho_1=1.484$~fm~\cite{Ohmura1970}. For the non-spin-flip transition, we let $\bar V_0=-1762.4$~MeV in order to fit the calculated cross section to the measured data at forward angle. 
For the spin-flip transitions, we use $\bar{V}_1=-275.8$~MeV and $-153.9$~MeV for the low-lying and giant resonances, respectively, so that the calculated cross section with the STDA transition density reproduces the measured data at 0.2$^\circ$. The same parameters $\bar{V}_1$ and $\rho_1$ are used in the calculation of the form factor with the TDA transition density. For $U_\gamma^{\rm (N)}$, we adopt the phenomenological optical potential~\cite{KONING2003231} and the ``Fit 1'' parameter set of the Dirac phenomenology~\cite{HamaPhysRevC.41.2737} for the non-spin-flip and the spin-flip transitions, respectively. We also include the prescription~\cite{SatchlerPhysRev.136.B637} that the optical potential for the Fermi transition should be evaluated at the energy $E_{\rm lab}-Q/2$, where $E_{\rm lab}$ is the incident energy in the laboratory frame and $Q$ stands for the $Q$ value. The nonlocal range parameter is $\beta=0.85$~fm~\cite{PEREY1962353}, and the Coulomb potential is chosen to be a uniformly charged sphere with a charge radius of 4.61~fm~\cite{KONING2003231}. The partial wave $\xi_{\gamma}$ is calculated up to $J=20.5$ ($J=100.5$) for the non-spin-flip (spin-flip) transition. For each transition the integration in Eq.~\eqref{ovlI} is performed up to 20~fm, and relativistic kinematics is assumed. \subsection{Non-spin-flip transitions} \label{result2} \begin{figure}[!b] \begin{center} \includegraphics[width=0.48\textwidth,clip]{./strength.eps} \caption{(a) Strength functions of the GT and IAS resonances of $^{48}{\rm Ca}$ calculated with the STDA and TDA. The filled and slash-shaded bars are the results of the GT transition calculated with the TDA and STDA, respectively, and the cross-shaded bars are for the IAS resonance.
(b) Strength functions of the SQ-1$^+$ transition.} \label{strength} \end{center} \end{figure} To demonstrate our model, we first discuss the Fermi transition measured in the $^{48}{\rm Ca}(p,n)^{48}{\rm Sc(IAS)}$ reaction. Figure \ref{strength} shows the strength functions of the Fermi and GT transitions of $^{48}{\rm Ca}$ calculated with the STDA and TDA. The corresponding excitation energies of the resonance states in question are written explicitly in Fig.~\ref{strength}(a). The TDA calculation gives the $0^+$ IAS of $^{48}{\rm Sc}$ at $\varepsilon=7.0$~MeV. In the reaction calculation the $Q$ value is calculated with the experimental excitation energy 6.7~MeV~\cite{BURROWS20061747} of the IAS; we have confirmed numerically that the TDA and experimental excitation energies produce identical cross sections. In addition to the TDA form factor given in Eq.~\eqref{radFF}, we carry out a phenomenological calculation using the Lane model~\cite{Lane1962676}, which is conventionally adopted to compare theoretical charge-exchange cross sections for the Fermi transition with experimental data. In the Lane model the radial form factor $F_{000}^{s0 {\rm(Lane)}}$ is given as the difference of the optical potentials between the final and initial channels: \begin{align} F_{000}^{s0 {\rm(Lane)}}(R)=\frac{\left(8\pi T_A\right)^{\frac{1}{2}}}{2T_A-1}\left[U_n^{\rm (N)}(R)-U_p^{\rm (N)}(R)\right], \label{LaneFF} \end{align} where the phenomenological optical potential~\cite{KONING2003231} is used. In Fig.~\ref{fig1}, the calculated cross sections of the charge-exchange reaction $^{48}{\rm Ca}(p,n)^{48}{\rm Sc(IAS)}$ at incident proton energies $E_{\rm lab}=$~25, 35, and 45~MeV as a function of the neutron emission angle $\theta$ are compared with experimental data~\cite{DoeringPhysRevC.12.378,JonPhysRevC.62.044609}. The cross sections calculated with the TDA (Lane) form factor are shown by the solid (dashed) lines.
Note that the theoretical results and experimental data at 35 and 45~MeV are multiplied by $10^{-2}$ and $10^{-4}$, respectively, in order to make them distinguishable from the cross section at 25~MeV. \begin{figure}[!t] \begin{center} \includegraphics[width=0.48\textwidth,clip]{./fig1.eps} \caption{The cross sections of the $^{48}{\rm Ca}(p,n)^{48}{\rm Sc(IAS)}$ reaction at $E_{\rm lab}=$~25,~35,~and~45~MeV. The solid lines are the calculated results from the TDA form factor, while the dashed lines are those from the Lane form factor. The measured data are taken from Refs.~\cite{DoeringPhysRevC.12.378,JonPhysRevC.62.044609}. The lines and the dots are multiplied by $10^{-2}$ ($10^{-4}$) at 35 (45)~MeV.} \label{fig1} \end{center} \end{figure} In Fig.~\ref{fig1}, the results using the TDA form factor agree reasonably well with the experimental angular distributions at 35 and 45~MeV. Although at 25~MeV the TDA result underestimates the data at $\theta \gtrsim 30^\circ$, it accounts for the measured behavior better than the Lane model does. While the Lane model roughly describes the experimental data, it is less predictive than the TDA calculation. It should be mentioned that a different choice of optical potential for the Lane model may improve its prediction, because the Lane form factor depends strongly on the optical potential used, as reported in Ref.~\cite{KhoaPhysRevC.76.014603}, for example. \subsection{Spin-flip transitions} \label{result3} We have shown that our framework adequately describes the differential cross section of the $^{48}{\rm Ca}(p,n)^{48}{\rm Sc}({\rm IAS})$ reaction. We now investigate the 2p2h effect on the $^{48}{\rm Ca}(p,n)^{48}{\rm Sc}(1^+)$ reaction.
As seen in Fig.~\ref{strength}(a), the GT strengths manifest themselves in two well-separated regions: one is around 3~MeV, which we refer to as the low-lying resonance, and the other is around 11~MeV, which is the giant GT resonance. In the STDA, the GT strength is distributed widely due to the 2p2h effect, as discussed in Ref.~\cite{MinatoPhysRevC.93.044319}. Note that we choose the most prominent strength from each region of the low-lying and giant GT resonances when calculating the differential cross sections. The strengths of the SQ-$1^+$ transition $B({\rm SQ})$, which are the leading part of the $\Delta l=2$ transition, are shown in Fig.~\ref{strength}(b). When we compare cross sections calculated with the STDA and TDA transition densities, the experimental resonance energy of $\varepsilon=2.6$~MeV (11.0~MeV)~\cite{YakoPhysRevLett.103.012503} is used for the low-lying (giant) resonance. As in the case of the Fermi transition, this slight shift of the $Q$ value from the theoretical one does not vary the calculated cross section significantly; the effect on the cross section at $\theta=0^\circ$ is less than 1\%. Let us first focus on the low-lying resonance. Figure~\ref{fig2} shows the differential cross section of the $^{48}{\rm Ca}(p,n)^{48}{\rm Sc}$ reaction at $E_{\rm lab}=$~295~MeV for the low-lying $1^+$ resonance as a function of $\theta$ up to $40^\circ$. The cross section calculated by the DWBA with the STDA transition density is indicated by the solid line, whereas the cross section calculated with the TDA is represented by the dashed line. Here the theoretical cross section includes both the GT-type and $\Delta l=2$ transitions. \begin{figure}[!t] \begin{center} \includegraphics[width=0.48\textwidth,clip]{./fig2.eps} \caption{The differential cross section of the $^{48}{\rm Ca}(p,n)^{48}{\rm Sc(GT)}$ reaction at $E_{\rm lab}=$~295~MeV for the low-lying $1^+$ resonance state.
The calculated result with (without) the 2p2h configuration, shown by the solid (dashed) line, is compared with the experimental data (open circles) taken from Ref.~\cite{YakoPhysRevLett.103.012503}.} \label{fig2} \end{center} \end{figure} Our calculation reproduces the diffraction pattern of the measured cross section reasonably well for both the STDA and TDA. A difference between them is observed only in magnitude. Using the same value of $\bar{V}_1$ for the TDA and STDA, the cross section at $\theta=0^\circ$ of the TDA is higher than that of the STDA by about 20\%, and the difference remains almost the same at other angles. The reduction of the cross section by the 2p2h configurations within the STDA is associated with the reductions of $B({\rm GT})$, $B({\rm SQ})$, and so on. We obtained $B({\rm GT})=4.726$ for the STDA and 5.681 for the TDA, as shown in Fig.~\ref{strength}(a). The missing strength is brought to a higher energy region \cite{MinatoPhysRevC.93.044319}. The difference in $B({\rm GT})$ between the TDA and STDA is approximately 20\% and is equivalent to the reduction of the cross section due to the 2p2h effect. This proportionality is consistent with the conclusion of Taddeucci {\it et al}.~\cite{Taddeucci1987}, although they neglected the $\Delta l=2$ transition. This fact implies that the $\Delta l=2$ contributions are negligibly small (this point will be addressed later). \begin{figure}[!t] \begin{center} \includegraphics[width=0.48\textwidth,clip]{./fig3.eps} \caption{The transition density of the low-lying $1^+$ resonance state of $^{48}{\rm Sc}$ calculated with the STDA (thick solid line for $l=0$, thin solid line for $l=2$) and TDA (thick dashed line for $l=0$, thin dashed line for $l=2$).} \label{fig3} \end{center} \end{figure} We plot the transition density $g_{\alpha}$ in Fig.~\ref{fig3} to investigate in detail the difference between the calculated cross sections of the TDA and STDA. 
The thick (thin) solid and thick (thin) dashed lines are respectively the results of the STDA and TDA for $l=0$ ($l=2$), corresponding to the GT ($\Delta l=2$) transition. One finds a difference in the amplitudes of the transition density between the STDA and TDA. Taking the ratio of the STDA amplitude for $l=0$ at the peak around $r_{i_t} \sim 4$~fm to the corresponding TDA amplitude, we obtain $0.156/0.173\sim0.902$. Because $B({\rm GT})$ is proportional to $g^2_\alpha$, one obtains $(0.902)^2=0.814$, which is consistent with the reduction of $B({\rm GT})$. \begin{table}[!b] \caption{Leading 1p1h configurations of the $1^+$ resonances and their amplitudes, defined by $X_{mi}^2$, calculated by the TDA and STDA. The 2p2h amplitude $P_{\rm 2p2h}$ is calculated by $P_{\rm 2p2h}=\sum_{mnij}\mathcal{X}_{mnij}^2$.} \begin{tabular}{cccc} & Configuration & TDA & STDA \\ \hline \hline \multirow{4}{*}{Low-lying GT} &$\pi(1f_{7/2})\nu(1f_{7/2})^{-1}$& 0.954 & 0.858 \\ &$\pi(1f_{5/2})\nu(1f_{7/2})^{-1}$& 0.043 & 0.047 \\ &$\pi(2f_{7/2})\nu(1f_{7/2})^{-1}$& 0.001 & 0.001 \\ &$P_{\rm 2p2h}$ & 0.000 & 0.091 \\ \hline \multirow{4}{*}{Giant GT} &$\pi(1f_{7/2})\nu(1f_{7/2})^{-1}$& 0.042 & 0.043 \\ &$\pi(1f_{5/2})\nu(1f_{7/2})^{-1}$& 0.950 & 0.483 \\ &$\pi(2f_{5/2})\nu(1f_{7/2})^{-1}$& 0.004 & 0.002 \\ &$P_{\rm 2p2h}$ & 0.000 & 0.470 \\ \end{tabular} \label{collectivity} \end{table} The diffraction pattern of the cross section is sensitive to the shape of the transition density rather than to its amplitude, because the angular distribution is determined by the region where the incident proton interacts with the target nucleus. In Fig.~\ref{fig3}, the STDA and TDA lines have a similar $r_{i_t}$ dependence for each $l$. Inclusion of the 2p2h configuration does not significantly change the shape of the transition density, although the amplitudes are about 10\% (7\%) smaller than in the TDA for $l=0$ $(l=2)$. 
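The amplitude-ratio arithmetic above can be cross-checked directly. The following snippet is a verification sketch only; the four input numbers are simply the values quoted in the text and in Fig.~\ref{strength}(a). It confirms that the squared ratio of the transition-density peaks tracks the direct $B({\rm GT})$ ratio to within a few percent.

```python
# Cross-check of the quoted reductions for the low-lying GT resonance.
amp_stda, amp_tda = 0.156, 0.173   # peaks of r^2 g_alpha (l = 0) near 4 fm
bgt_stda, bgt_tda = 4.726, 5.681   # B(GT) values, STDA vs TDA

ratio_amp = amp_stda / amp_tda     # ~0.902
ratio_bgt2 = ratio_amp ** 2        # ~0.813, since B(GT) is proportional to g_alpha^2
ratio_bgt = bgt_stda / bgt_tda     # ~0.832, direct B(GT) ratio

print(round(ratio_amp, 3), round(ratio_bgt2, 3), round(ratio_bgt, 3))
```

The few-percent spread between the two ratios is within the "consistent" margin stated in the text.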
In Table~\ref{collectivity}, the 1p1h configurations contributing to the low-lying GT resonance and their amplitudes, defined by $X_{mi}^2$, are listed. The main configurations are $\pi(1f_{7/2})\nu(1f_{7/2})^{-1}$ and $\pi(1f_{5/2})\nu(1f_{7/2})^{-1}$ for both the TDA and STDA. While the amplitude of $\pi(1f_{5/2})\nu(1f_{7/2})^{-1}$ is almost the same for both, the amplitude of $\pi(1f_{7/2})\nu(1f_{7/2})^{-1}$ for the STDA is about 0.1 smaller than that for the TDA. This difference might change the shape of the transition density if the radial dependences of the wave functions of $\pi(1f_{7/2})$ and $\pi(1f_{5/2})$ were different. However, they are almost the same because they are spin-orbit partners. Therefore, unless another configuration intervenes, the shape of the transition density will not change significantly. As a consequence, we obtained differential cross sections of similar shape for the STDA and TDA. Figure~\ref{fig4} shows the cross sections calculated with $g_{s1}$ and $g_{l2}$ (solid line), only with $g_{s1}$ (dashed line), and only with $g_{l2}$ (dotted line) by means of the STDA, as well as the experimental data~\cite{YakoPhysRevLett.103.012503} (open circles). Throughout the observed region of $\theta$, the result including only the $\Delta l=2$ transition is about two orders of magnitude smaller than the others. At $\theta=0^\circ$, in particular, it is about five orders of magnitude smaller than that of the GT transition alone, even though $r_{i_t}^2g_{l2}$ has a peak amplitude only about 36\% smaller than that of $r_{i_t}^2g_{s1}$ (see Fig.~\ref{fig3}). This indicates that dynamical factors, such as the angular-momentum coupling coefficients and the coherent summation in Eq.~\eqref{redTmatGen}, hinder the $\Delta l=2$ components, and thus the effect of the $\Delta l=2$ transition on the transition density does not translate quantitatively into its effect on the cross section. 
\begin{figure}[!t] \begin{center} \includegraphics[width=0.48\textwidth,clip]{./fig4.eps} \caption{The cross section of $^{48}{\rm Ca}(p,n)^{48}{\rm Sc}$ at 295~MeV for the low-lying $1^+$ resonance state calculated with the STDA transition density of the GT and $\Delta l=2$ transitions (solid line), the GT transition only (dashed line), and the $\Delta l=2$ transition only (dotted line).} \label{fig4} \end{center} \end{figure} \begin{figure}[!t] \begin{center} \includegraphics[width=0.48\textwidth,clip]{./fig5.eps} \caption{Same as Fig.~\ref{fig2} but for the giant resonance.} \label{fig5} \end{center} \end{figure} Next we discuss the 2p2h effect on the giant GT resonance. In Fig.~\ref{fig5} the lines and open circles are defined in the same way as in Fig.~\ref{fig2}, but for the giant resonance with $\theta$ up to $20^\circ$. The result of the STDA reasonably traces the first two points of the experimental data but fails to reproduce the third. Owing to the 2p2h effect, the cross section of the STDA is smaller than that of the TDA by about 43\% at $\theta=0^\circ$, although its shape does not change significantly. Again, comparing $B({\rm GT})$ of the STDA and TDA shown in Fig.~\ref{strength}(a), the 2p2h effect on $B({\rm GT})$ of the giant resonance is about a 42\% reduction, which agrees with the value of its effect on the cross section. \begin{figure}[!b] \begin{center} \includegraphics[width=0.48\textwidth,clip]{./fig6.eps} \caption{Same as Fig.~\ref{fig3} but for the giant resonance.} \label{fig6} \end{center} \end{figure} Figure~\ref{fig6} shows the transition density of the giant resonance. From the difference between the STDA and TDA, we find that the 2p2h configuration reduces the amplitude of $r_{i_t}^2g_{\alpha}$ at the peak position around $r_{i_t}=4$~fm by about 25\% (43\%) for $l=0$ ($l=2$). 
As for the low-lying resonance, calculating the squared ratio of the STDA amplitude to that of the TDA, one obtains $(0.211/0.281)^2\sim0.564$, which is almost consistent with the reductions of $B(\rm{GT})$ and the cross section. From Table~\ref{collectivity}, the 1p1h configurations mainly contributing to the giant GT resonance are $\pi(1f_{7/2})\nu(1f_{7/2})^{-1}$ and $\pi(1f_{5/2})\nu(1f_{7/2})^{-1}$ both for the TDA and STDA, as in the case of the low-lying resonance. While the amplitude of $\pi(1f_{7/2})\nu(1f_{7/2})^{-1}$ remains almost the same for both the TDA and STDA, that of $\pi(1f_{5/2})\nu(1f_{7/2})^{-1}$ for the STDA is half of that for the TDA. However, this difference does not make a significant change in the shape of the transition density and accordingly in the diffraction pattern of the cross section, similar to the low-lying resonance, as seen in Fig.~\ref{fig5}. \begin{figure}[!t] \begin{center} \includegraphics[width=0.48\textwidth,clip]{./fig7.eps} \caption{Same as Fig.~\ref{fig4} but for the giant resonance.} \label{fig7} \end{center} \end{figure} Figure~\ref{fig7} shows the cross section at the giant GT resonance. The result with the $\Delta l=2$ transition only is negligibly small compared to the others. It is about two orders of magnitude smaller than that with the GT transition for $\theta > 0$, and the ratio of their cross sections at $\theta=0^\circ$ is approximately $10^{-5}$, similar to the result for the low-lying resonance. As a consequence, qualitatively, the 2p2h effect reduces the amplitude of the cross section but does not change the diffraction pattern. The magnitude of the decrease in the cross section due to the 2p2h configurations is essentially consistent with that obtained from the structural calculation. Lastly, we comment on the tensor-force contribution, which was reported~\cite{MinatoPhysRevC.93.044319} to change the excitation energy of the spin-flip resonance states and the corresponding $B({\rm GT})$ values. 
However, we have confirmed numerically that the inclusion of the tensor force does not change the diffraction pattern of the cross section. \section{Summary} \label{summary} The charge-exchange reaction $^{48}{\rm Ca}(p,n)^{48}{\rm Sc}$ has been investigated theoretically to clarify the effect of 2p2h-configuration mixing on the GT-resonance states. We have carried out the STDA calculation in order to prepare the transition density, and the form factor has been obtained by employing the phenomenological nucleon-nucleon interaction. The angular distribution of the cross section has been computed by means of the DWBA with the microscopic form factor. The Fermi transition has also been calculated to demonstrate the effectiveness of our framework. The calculated cross sections of the Fermi transition caused by the $^{48}{\rm Ca}(p,n)^{48}{\rm Sc(IAS)}$ reaction at $E_{\rm lab}=$~25, 35, and 45~MeV agree well with the measured data~\cite{DoeringPhysRevC.12.378,JonPhysRevC.62.044609}. It has been found that the 2p2h effect on the cross section of the $^{48}{\rm Ca}(p,n)^{48}{\rm Sc}$ reaction at 295~MeV decreases the amplitude of the cross section but does not change the angular distribution for either the low-lying or the giant resonance. This feature is consistent with the result of the structural calculation. However, the 2p2h effect on the angular distribution may become important for other multipole transitions, because the transition densities of the isovector monopole and quadrupole states of $^{16}$O were reported to change significantly~\cite{Phys.Rev.C81.054312}. Quantitatively, the reduction of the cross section due to the 2p2h effect can be explained by the reductions of $B({\rm GT})$ and the corresponding transition density. The role of the $\Delta l=2$ transition in the $1^+$-resonance states has also been surveyed and found to give a negligibly small contribution. 
This supports the proportionality relation~\cite{Taddeucci1987} between $B({\rm GT})$ and the charge-exchange cross section at zero degrees. Note that, in our model, the form factor of the $\Delta l=2$ transition has been calculated using the same nucleon-nucleon interaction as that of the GT transition. A different nucleon-nucleon interaction should be tested, for example, the $t$ matrix of Franey and Love~\cite{FLPhysRevC.31.488} or the $g$ matrix of Jeukenne-Lejeune-Mahaux~\cite{JeukennePhysRevC.16.80}, as adopted in previous studies~\cite{Taddeucci1987,KERMAN1959551,BE0034-4885-50-6-001,KhoaPhysRevC.76.014603}. A systematic comparison of reaction models such as the DWBA, DWIA, and the coupled-channels method for the charge-exchange reaction at several incident energies and with several target nuclei will provide important guidance for analyses of experimental data. \begin{acknowledgments} The authors thank K.~Hagino, O.~Iwamoto, and K.~Minomo for constructive comments and suggestions. They also thank E.~Olsen for helpful advice and for refining our discussion. \end{acknowledgments} \nocite{*}
\section{Introduction} This paper concerns the problem of long-time convergence to an equilibrium distribution for discrete systems. For one-dimensional chains of harmonic oscillators the problem is analyzed in \cite{BPT, SL}: in \cite{SL} for initial measures which have distinct temperatures to the left and to the right, and in \cite{BPT} for a more general class of initial measures which are characterized by a mixing condition of Rosenblatt or Ibragimov type and are asymptotically translation-invariant to the left and to the right. For multidimensional harmonic crystals the convergence has been proved in \cite{LL} for initial measures which are absolutely continuous with respect to the canonical Gaussian measure. In \cite{DKKS}--\cite{DK2} we started the convergence analysis for partial differential equations of hyperbolic type in $\R^d$, $d\ge1$. In \cite{DKS1}--\cite{DS} we extended the results to harmonic crystals. In the harmonic approximation the crystal is characterized by the displacements $u(z,t)\in\R^n$, $z\in\Z^d$, of the crystal atoms from their equilibrium positions. The field $u(z,t)$ is governed by a discrete wave equation. In the papers mentioned above the lattice dynamics has been studied in the whole space $\Z^d$. In the present work the dynamics of harmonic crystals is studied in the half-space $\Z^d_+$, $d\ge 1$, \begin{equation}\label{1+} \ddot u(z,t)=-\sum\limits_{z'\in \Z^d_+}\left(V(z-z')-V(z-\tilde z')\right) u(z',t),\,\,\,\,z\in\Z^d_+,\,\,\,\,t\in\R, \end{equation} where $\tilde z:=(-z_1,\bar z)$, $\bar z=(z_2,\dots,z_d)\in \Z^{d-1}$, with zero boundary condition, \begin{equation}\label{2+} u(z,t)|_{z_1=0}=0, \end{equation} and with the initial data \begin{equation}\label{3+} u(z,0)=u_0(z),\quad \dot u(z,0)=u_1(z),\quad z\in\Z^d_+. 
\end{equation} Here $\Z^d_+=\{z\in \Z^d:\,z_1>0\}$, $V(z)$ is the interaction (or force) matrix $\left(V_{kl}(z)\right)$, $k,l=1,\dots,n$, $u(z,t)=(u_1(z,t),\dots,u_n(z,t))$, $u_0(z)=(u_{01}(z),\dots,u_{0n}(z))\in\R^n$, and correspondingly for $u_1(z)$. To make the boundary and initial conditions consistent, we suppose that $u_0(z)=u_1(z)=0$ for $z_1=0$. Denote $Y(t)=(Y^0(t),Y^1(t))\equiv (u(\cdot,t),\dot u(\cdot,t))$, $Y_0=(Y_0^0,Y_0^1)\equiv (u_0(\cdot),u_1(\cdot))$. Then (\ref{1+})--(\ref{3+}) takes the form of the evolution equation \begin{equation}\label{CP1} \dot Y(t)={\cal A}_+Y(t),\quad t\in\R,\,\,z\in\Z^d_+, \quad Y^0(t)|_{z_1=0}=0,\quad Y(0)=Y_0. \end{equation} Here ${\cal A}_+=\left(\begin{array}{cc}0&1\\-{\cal V}_+&0\end{array}\right)$, with ${\cal V}_+u(z):= \sum\limits_{z'\in\Z^d_+}(V(z-z')-V(z-\tilde z'))u(z')$. It is assumed that the initial state $Y_0$ is given by a random element of the Hilbert space ${\cal H}_{\alpha,+}$ of real sequences; see Definition \ref{d1.1} below. The distribution of $Y_0$ is a probability measure $\mu_0$ satisfying conditions {\bf S1}--{\bf S4} below. In particular, the initial correlation function $Q_0(z,z')$ is asymptotically translation-invariant as $z_1,z'_1\to+\infty$ (see Condition {\bf S2}) and the measure $\mu_0$ has some mixing properties (see Condition {\bf S4}). Given $t\in\R$, denote by $\mu_t$ the probability measure on ${\cal H}_{\alpha,+}$ giving the distribution of the random solution $Y(t)$ to the problem (\ref{CP1}). Our main result gives the weak convergence of the measures $\mu_t$ on the space ${\cal H}_{\alpha,+}$, with $\alpha<-d/2$, to a limit measure $\mu_{\infty}$, \begin{equation}\label{1.8i} \mu_t \,\buildrel {\hspace{2mm}{\cal H}_{\alpha,+}}\over {- \hspace{-2mm} \rightharpoondown } \mu_\infty\quad{\rm as}\,\,\,\, t\to \infty, \end{equation} where $\mu_\infty$ is an equilibrium Gaussian measure on ${\cal H}_{\alpha,+}$. 
This means the convergence $$ \int f(Y)\mu_t(dY)\rightarrow \int f(Y)\mu_\infty(dY),\quad t\to \infty, $$ for any bounded continuous functional $f$ on ${\cal H}_{\alpha,+}$. Explicit formulas for the correlation functions of the limit measure $\mu_\infty$ are given in (\ref{1.13})--(\ref{1.15}). The paper is organized as follows. The conditions on the interaction matrix $V$ and the initial measure $\mu_0$ are given in Section~2. The main result is stated in Section 3. Examples of harmonic crystals and of initial measures satisfying all conditions imposed are constructed in Section 4. The convergence of the correlation functions of $\mu_t$ is established in Section 5; the compactness of the family $\mu_t$, $t\ge0$, and the convergence of the characteristic functionals of $\mu_t$ are proved in Sections 6 and 7, respectively. \setcounter{equation}{0} \section{Conditions on the system and the initial measure} \subsection{Dynamics} Let us assume that \begin{equation}\label{condE0} V(z)=V(\tilde z),\quad \mbox{where }\,\tilde z:=(-z_1,\bar z), \quad\bar z=(z_2,\dots,z_d)\in \Z^{d-1}. \end{equation} Then the solution to the problem (\ref{1+})--(\ref{3+}) can be represented as the restriction to the half-space of the solution to the Cauchy problem with odd initial data. More precisely, consider the following Cauchy problem for the harmonic crystal in the whole space $\Z^d$: \begin{eqnarray}\label{CP1'} \left\{\begin{array}{l} \ddot v(z,t)=-\sum\limits_{z'\in \Z^d}V(z-z')v(z',t),\,\,\,\,z\in\Z^d, \,\,\,\,t\in\R,\\ v(z,0)=v_0(z),\quad \dot v(z,0)=v_1(z),\quad z\in\Z^d. \end{array}\right. \end{eqnarray} Denote $X(t)=(X^0(t),X^1(t))\equiv (v(\cdot,t),\dot v(\cdot,t))$, $X_0=(X_0^0,X_0^1)\equiv (v_0(\cdot),v_1(\cdot))$. Then (\ref{CP1'}) takes the form \begin{equation}\label{CP1''} \dot X(t)={\cal A}X(t),\quad t\in\R,\quad X(0)=X_0. \end{equation} Here ${\cal A}=\left(\begin{array}{cc}0&1\\-{\cal V}&0\end{array}\right)$, where ${\cal V}$ is a convolution operator with the matrix kernel $V$. 
Let us assume that the initial data $X_0(z)$ is an odd function of $z_1\in\Z^1$, i.e., $X_0(z)=-X_0(\tilde z)$. Then the solution $v(z,t)$ of (\ref{CP1'}) is also an odd function of $z_1\in\Z^1$. Let us restrict the solution $v(z,t)$ to the domain $\Z^d_+$ and put $u(z,t)=v(z,t)|_{z_1\ge0}$. Then $u(z,t)$ is the solution to the problem (\ref{CP1}) with the initial data $Y_0(z)=X_0(z)|_{z_1\ge0}$. Assume that the initial data $Y_0$ of the problem (\ref{CP1}) belongs to the phase space ${\cal H}_{\alpha,+}$, $\alpha\in\R$, defined below. \begin{definition} \label{d1.1} $ {\cal H}_{\alpha,+}$ is the Hilbert space of $\R^n\times\R^n$-valued functions of $z\in\Z^d_+$ endowed with the norm \begin{eqnarray}\nonumber \Vert Y\Vert^2_{\alpha,+} = \sum_{z\in\Z^d_+}\vert Y(z)\vert^2(1+|z|^2)^{\alpha} <\infty. \end{eqnarray} \end{definition} In addition, it is assumed that $Y_0(z)=0$ for $z_1=0$. We impose the following conditions {\bf E1}--{\bf E6} on the matrix $V$. \medskip\\ {\bf E1}. There exist positive constants $C,\gamma$ such that $\|V(z)\|\le C e^{-\gamma|z|}$ for $z\in \Z^d$, $\|V(z)\|$ denoting the matrix norm. \medskip Let $\hat V(\theta)$ be the Fourier transform of $V(z)$, with the convention $$ \hat V(\theta)= \sum\limits_{z\in\Z^d}V(z)e^{iz\cdot\theta}\,,\quad\theta \in {\rm\bf T}^d, $$ where "$\cdot$" stands for the scalar product in the Euclidean space $\R^d$ and ${\rm\bf T}^d$ denotes the $d$-torus $\R^d/(2\pi \Z)^d$. \medskip\\ {\bf E2}. $ V$ is real and symmetric, i.e., $V_{lk}(-z)=V_{kl}(z)\in \R$, $k,l=1,\dots,n$, $z\in \Z^d$. \medskip\\ Both conditions imply that $\hat V(\theta)$ is a real-analytic Hermitian matrix-valued function of $\theta\in {\rm\bf T}^d\!$. \medskip\\ {\bf E3}. The matrix $\hat V(\theta)$ is non-negative definite for every $\theta \in {\rm\bf T}^d$. \medskip Let us define the Hermitian non-negative definite matrix \begin{equation}\label{Omega} \Omega(\theta)=\big(\hat V(\theta )\big)^{1/2}\ge 0. 
\end{equation} $\Omega(\theta)$ has the eigenvalues $0\leq\omega_1(\theta)<\omega_2(\theta)<\ldots <\omega_s(\theta)$, $s\leq n$, and the corresponding spectral projections $\Pi_\sigma(\theta)$ with multiplicity $r_\sigma=\mathop{\rm tr}\nolimits\Pi_\sigma(\theta)$. The map $\theta \mapsto\omega_\sigma(\theta)$ is the $\sigma\!$-th band function. There are special points in ${\rm\bf T}^d$ where the bands cross, which means that $s$ and $r_\sigma$ jump to some other value. Away from such crossing points $s$ and $r_\sigma$ are independent of $\theta$. More precisely, one has the following lemma. \begin{lemma}\label{lc*} (see \cite[Lemma 2.2]{DKS1}). Let the conditions {\bf E1} and {\bf E2} hold. Then there exists a closed subset ${\cal C}_*\subset {\rm\bf T}^d$ such that we have the following:\\ (i) The Lebesgue measure of ${\cal C}_*$ is zero.\\ (ii) For any point $\Theta\in {\rm\bf T}^d\setminus{\cal C}_*$ there exists a neighborhood ${\cal O}(\Theta)$ such that each band function $\omega_\sigma(\theta)$ can be chosen as a real-analytic function in ${\cal O}(\Theta)$.\\ (iii) The eigenvalue $\omega_\sigma(\theta)$ has constant multiplicity in ${\rm\bf T}^d\setminus{\cal C}_*$.\\ (iv) The spectral decomposition holds, \begin{equation}\label{spd'} \Omega(\theta)=\sum_{\sigma=1}^s \omega_\sigma (\theta)\Pi_\sigma(\theta),\quad \theta\in {\rm\bf T}^d\setminus{\cal C}_*, \end{equation} where $\Pi_\sigma(\theta)$ is the orthogonal projection in $\R^n$. $\Pi_\sigma$ is a real-analytic function on ${\rm\bf T}^d\setminus{\cal C}_*$. \end{lemma} For $\theta\in {\rm\bf T}^d\setminus{\cal C}_*$, we denote by Hess$(\omega_\sigma)$ the matrix of second partial derivatives. The next condition on $V$ is the following: \smallskip\\ {\bf E4}. Let $D_\sigma(\theta)=\det\big(\rm{Hess}(\omega_\sigma(\theta))\big)$. Then $D_\sigma(\theta)$ does not vanish identically on ${\rm\bf T}^d\setminus{\cal C}_*$, $\sigma=1,\ldots,s$. 
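As a numerical illustration of the band functions, $\Omega(\theta)=\hat V(\theta)^{1/2}$ can be diagonalized directly. The sketch below (Python/NumPy) does this for a toy $n=2$, $d=1$ nearest-neighbor model of the kind constructed in Section 4; the couplings $\gamma_k$ and masses $m_k^2$ are illustrative assumptions only, not values used in the paper.

```python
import numpy as np

def V_hat(theta, gammas=(1.0, 1.0), m2=(0.2, 1.0)):
    # hat V(theta) for a d = 1 nearest-neighbor matrix as in Section 4:
    # diagonal entries 2*gamma_k*(1 - cos(theta)) + m_k^2
    return np.diag([2.0 * g * (1.0 - np.cos(theta)) + m
                    for g, m in zip(gammas, m2)])

def bands(theta):
    # Omega(theta) = hat V(theta)^{1/2} >= 0; its eigenvalues (sorted)
    # are the band functions omega_1(theta) <= ... <= omega_s(theta)
    return np.sqrt(np.sort(np.linalg.eigvalsh(V_hat(theta))))

thetas = np.linspace(-np.pi, np.pi, 201)
omega = np.array([bands(t) for t in thetas])   # shape (201, 2)
```

With these (assumed) distinct masses the two bands never cross, so $s=2$ and $r_\sigma=1$ for all $\theta$; degenerate parameters would instead produce band crossings on a set of the type ${\cal C}_*$ of the lemma.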
\medskip Let us denote \begin{equation}\label{c0ck} {\cal C}_0=\{\theta\in {\rm\bf T}^d:\det \hat V(\theta)=0\}\,\, \mbox{and }\, {\cal C}_\sigma=\{\theta\in {\rm\bf T}^d\setminus {\cal C}_*:\,D_\sigma(\theta)=0\},\,\,\, \sigma=1,\dots,s. \end{equation} Then the Lebesgue measure of ${\cal C}_\sigma$ vanishes, $\sigma=0,1,...,s$ (see \cite[Lemma 2.3]{DKS1}). The last conditions on $V$ are the following: \medskip\\ {\bf E5}. For each $\sigma\ne \sigma'$, the identities $\omega_\sigma(\theta) \pm\omega_{\sigma'}(\theta)\equiv\mathop{\rm const}\nolimits_\pm$, $\theta\in {\rm\bf T}^d\setminus {\cal C}_*$, do not hold with $\mathop{\rm const}\nolimits_\pm\ne 0$. \medskip\\ This condition holds trivially in the case $n=1$. \medskip\\ {\bf E6}. $\Vert \hat V^{-1}(\theta)\Vert\in L^1({\rm\bf T}^d)$.\medskip\\ If ${\cal C}_0=\emptyset$, then $\|\hat{V}^{-1}(\theta)\|$ is bounded and {\bf E6} holds trivially. \medskip Denote by $ {\cal H}_\alpha$ the Hilbert space of $\R^n\times \R^n$-valued functions of $z\in\Z^d$ endowed with the norm $$ \Vert X\Vert^2_{\alpha} = \sum_{z\in\Z^d}\vert X(z)\vert^2 (1+|z|^2)^{\alpha} <\infty. $$ \begin{pro} \label{p1.1} (see \cite[Proposition 2.5]{DKS1}). Let conditions {\bf E1} and {\bf E2} hold, and choose some $\alpha\in\R$. Then (i) for any $X_0 \in {\cal H}_\alpha$, there exists a unique solution $X(t)\in C(\R, {\cal H}_\alpha)$ to the Cauchy problem (\ref{CP1''}).\\ (ii) The operator $U(t):X_0\mapsto X(t)$ is continuous in ${\cal H}_\alpha$. \end{pro} \begin{cor}\label{c1} Let conditions (\ref{condE0}), {\bf E1} and {\bf E2} hold. Then (i) for any $Y_0 \in {\cal H}_{\alpha,+}$, there exists a unique solution $Y(t)\in C(\R, {\cal H}_{\alpha,+})$ to the mixed problem (\ref{CP1}).\\ (ii) The operator $U_+(t):Y_0\mapsto Y(t)$ is continuous in ${\cal H}_{\alpha,+}$. \end{cor} {\bf Proof}. Corollary \ref{c1} follows from Proposition \ref{p1.1}. 
Indeed, the solution $X(z,t)$ of (\ref{CP1''}) admits the representation \begin{equation}\label{solGr} X(z,t)=\sum\limits_{z'\in\Z^d}{\cal G}_t(z-z')X_0(z'), \end{equation} where the Green function ${\cal G}_t(z)$ has the Fourier representation \begin{equation}\label{Grcs} {\cal G}_t(z):= F^{-1}_{\theta\to z}[ \exp\big(\hat{\cal A}(\theta)t\big)] =(2\pi)^{-d}\int\limits_{{\rm\bf T}^d}e^{-iz\cdot\theta} \exp\big(\hat{\cal A}(\theta)t\big)\,d\theta \end{equation} with \begin{equation}\label{hA} \hat{\cal A}(\theta)=\left( \begin{array}{cc} 0 & 1\\ -\hat V(\theta) & 0 \end{array}\right),\,\,\,\,\theta\in {\rm\bf T}^d. \end{equation} Therefore, the solution to the problem (\ref{CP1}) has the form \begin{equation}\label{sol} Y(z,t)=\sum\limits_{z'\in\Z^d_+} {\cal G}_{t,+}(z,z') Y_0(z'),\quad z\in\Z^d_+, \end{equation} where ${\cal G}_{t,+}(z,z'):= {\cal G}_t(z-z')-{\cal G}_t(z-\tilde z')$. Corollary \ref{c1} follows. {\hfill\hbox{\enspace{\vrule height 7pt depth 0pt width 7pt}}} \subsection{Random initial data and statistical conditions} Denote by $\mu_0$ a Borel probability measure on ${\cal H}_{\alpha,+}$ giving the distribution of $Y_0$. Expectation with respect to $\mu_0$ is denoted by $E_0$. Assume that the initial measure $\mu_0$ has the following properties. \medskip\\ {\bf S1}. $Y_0(z)$ has zero expectation value, $E_0 \big(Y_0(z)\big) = 0$, $z\in\Z^d_+$. For $a,b,c \in \C^n$, denote by $a\otimes b$ the linear operator $(a\otimes b)c=a\sum^n_{j=1}b_j c_j$.\\ {\bf S2}. The correlation matrices of the measure $\mu_0$ have the form \begin{eqnarray} \label{1.9'} Q^{ij}_0(z,z')= E_0\big(Y^i_0(z)\otimes {Y^j_0(z')} \big)= q^{ij}_0(z_1,z'_1,\bar z-\bar z'),\,\,\,z,z'\in\Z^d_+,\,\,\,i,j=0,1, \end{eqnarray} where \\ (i) $q^{ij}_0(z_1,z'_1,\bar z)=0$ for $z_1=0$ or $z_1'=0$;\\ (ii) $\lim_{y\to+\infty} q^{ij}_0(z_1+y,y,\bar z) ={\bf q}_0^{ij}(z)$, $z=(z_1,\bar z)\in\Z^d$. 
Here ${\bf q}_0^{ij}(z)$ are correlation functions of some translation invariant measure $\nu_0$ with zero mean value in ${\cal H}_{\alpha}$. \begin{definition} A measure $\nu$ is called translation invariant if $\nu(T_h B)= \nu(B)$, $B\in{\cal B}({\cal H}_{\alpha})$, $h\in\Z^d$, where $T_h X(z)= X(z-h)$, $z\in\Z^d$. \end{definition} {\bf S3}. The measure $\mu_0$ has a finite variance and finite mean energy density, \begin{eqnarray}\label{med} e_0(z)=E_0 \big(\vert Y_0^0(z)\vert^2 + \vert Y_0^1(z)\vert^2\big) =\mathop{\rm tr}\nolimits\left[Q_0^{00}(z,z)+Q_0^{11}(z,z)\right]\le e_0<\infty,\,\,\,z\in\Z^d_+. \end{eqnarray} Finally, it is assumed that the measure $\mu_0$ satisfies a mixing condition. To formulate this condition, let us denote by $\sigma ({\cal A})$, ${\cal A}\subset \Z^d_+$, the $\sigma $-algebra in ${\cal H}_{\alpha,+}$ generated by $Y_0(z)$ with $z\in{\cal A}$. Define the Ibragimov mixing coefficient of the probability measure $\mu_0$ on ${\cal H}_{\alpha,+}$ by the rule (cf. \cite[Definition 17.2.2]{IL}) \begin{eqnarray} \label{ilc} \varphi(r)= \sup_{\scriptsize{\begin{array}{cc} {\cal A},{\cal B}\subset \Z^d_+\\ \mathop{\rm dist}\nolimits({\cal A},\,{\cal B})\geq r \end{array}}} \sup_{\scriptsize{ \begin{array}{cc} A\in\sigma({\cal A}),B\in\sigma({\cal B})\\ \mu_0(B)>0\end{array}}} \frac{|\mu_0(A\cap B) - \mu_0(A)\mu_0(B)|}{ \mu_0(B)}. \end{eqnarray} \begin{definition} A measure $\mu_0$ satisfies the strong uniform Ibragimov mixing condition if $\varphi(r)\to 0$ as $r\to\infty$. \end{definition} {\bf S4}. The measure $\mu_0$ satisfies the strong uniform Ibragimov mixing condition with \begin{equation}\label{1.12} \int\limits_{0}^{\infty} r^{d-1}\varphi^{1/2}(r)\,dr <\infty\,. \end{equation} This condition can be considerably weakened (see Remarks \ref{remi-iii} (i), (ii)). 
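The image representation ${\cal G}_{t,+}(z,z')={\cal G}_t(z-z')-{\cal G}_t(z-\tilde z')$ from the proof of Corollary \ref{c1} can also be checked numerically: odd initial data stay odd under the whole-space dynamics, so the point $z_1=0$ remains at rest and the restriction to $z_1\ge0$ solves the mixed problem. A minimal sketch for $d=n=1$ follows (the chain parameters $\gamma$, $m^2$ and the Gaussian initial profile are illustrative assumptions):

```python
import numpy as np

def accel(u, gamma=1.0, m2=0.5):
    # u'' = -(V u) for the d = 1 nearest-neighbor chain:
    # (V u)(z) = -gamma*(u(z+1) - 2u(z) + u(z-1)) + m2*u(z),
    # with u taken to be 0 outside the finite window
    lap = np.zeros_like(u)
    lap[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]
    lap[0] = u[1] - 2.0 * u[0]
    lap[-1] = u[-2] - 2.0 * u[-1]
    return gamma * lap - m2 * u

def evolve(u0, v0, dt=0.02, nsteps=1000):
    # velocity-Verlet (leapfrog) time stepping
    u, v = u0.copy(), v0.copy()
    a = accel(u)
    for _ in range(nsteps):
        v += 0.5 * dt * a
        u += dt * v
        a = accel(u)
        v += 0.5 * dt * a
    return u, v

N = 200
z = np.arange(-N, N + 1)
# odd initial displacement, u0(-z) = -u0(z): two Gaussian bumps at z = +-20
u0 = np.sign(z) * np.exp(-(np.abs(z) - 20.0) ** 2 / 25.0)
v0 = np.zeros_like(u0)
u, v = evolve(u0, v0)   # u stays antisymmetric, so u(0, t) = 0 up to round-off
```

The restriction of $u$ to the indices $z\ge0$ then coincides with the half-space solution (\ref{sol}) with zero boundary condition at $z=0$.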
\setcounter{equation}{0} \section{Main results} \begin{definition} \label{dmut} (i) We define $\mu_t$ as the Borel probability measure on ${\cal H}_{\alpha,+}$ which gives the distribution of the random solution $Y(t)$, \begin{eqnarray}\nonumber \mu_t(B) = \mu_0(U_+(-t)B),\,\,\,\, B\in {\cal B}({\cal H}_{\alpha,+}),\,\,\,t\in \R\,. \end{eqnarray} (ii) The correlation functions of the measure $\mu_t$ are defined by \begin{equation}\label{qd} Q_t^{ij}(z,z')= E \Big(Y^i(z,t)\otimes Y^j(z',t)\Big),\,\,\,i,j= 0,1,\,\,\,\,z,z'\in\Z^d_+. \end{equation} Here $Y^i(z,t)$ are the components of the random solution $Y(t)=(Y^0(\cdot,t),Y^1(\cdot,t))$ to the problem (\ref{CP1}). \end{definition} The main result of the paper is the following theorem. \medskip\\ {\bf Theorem A} {\it Let $d,n\ge 1$, $\alpha<-d/2$, and assume that the conditions (\ref{condE0}), {\bf E1}--{\bf E6} and {\bf S1}--{\bf S4} hold. Then\\ (i) the convergence in (\ref{1.8i}) holds. \smallskip\\ (ii) The limit measure $ \mu_\infty$ is a Gaussian measure on ${\cal H}_{\alpha,+}$.\smallskip\\ (iii) The correlation matrices of the measures $\mu_t$ converge to a limit, for $i,j=0,1$, \begin{equation}\label{corf} Q^{ij}_t(z,z')=\int\big( Y^i(z)\otimes Y^j(z')\big) \,\mu_t(dY)\to Q^{ij}_\infty(z,z'),\,\,\,\,t\to\infty,\quad z,z'\in\Z^d_+. \end{equation} The correlation matrix $Q_\infty(z,z')=(Q^{ij}_\infty(z,z'))_{i,j=0}^1$ of the limit measure $\mu_{\infty}$ has the form \begin{equation}\label{1.13} Q_\infty(z,z')=q_\infty(z-z')-q_\infty(z-\tilde z')-q_\infty(\tilde z-z')+ q_\infty(\tilde z-\tilde z'),\quad z,z'\in\Z^d_+. 
\end{equation} Here $q_\infty(z)= q^+_{\infty}(z)+ q^-_{\infty}(z)$, where in the Fourier transform we have \begin{eqnarray} \hat q^+_{\infty}(\theta)&=&\frac{1}{4} \sum\limits_{\sigma=1}^s \Pi_\sigma(\theta)\left(\hat {\bf q}_0(\theta) +C(\theta)\hat {\bf q}_0(\theta)C(\theta)^*\right)\Pi_\sigma(\theta), \label{1.14}\\ \hat q^-_{\infty}(\theta)&=&\frac{i}{4}\sum\limits_{\sigma=1}^s {\rm sign} \left(\partial_{\theta_1}\omega_\sigma(\theta)\right) \Pi_\sigma(\theta) \left(C(\theta)\hat {\bf q}_0(\theta) - \hat {\bf q}_0(\theta)C(\theta)^*\right)\Pi_\sigma(\theta), \,\,\,\theta\in{\rm\bf T}^d\setminus {\cal C}_*,\,\label{1.15} \end{eqnarray} $\Pi_\sigma(\theta)$ is the spectral projection from Lemma \ref{lc*} (iv) and \begin{equation}\label{C(theta)} C(\theta)=\left(\begin{array}{cc} 0&\Omega(\theta)^{-1}\\ -\Omega(\theta)&0 \end{array}\right)\,,\quad C(\theta)^*=\left(\begin{array}{cc} 0&-\Omega(\theta)\\ \Omega(\theta)^{-1}&0 \end{array}\right). \end{equation} (iv) The measure $\mu_\infty$ is time stationary, i.e., $[U_+(t)]^*\mu_\infty=\mu_\infty$, $t\in\R$.\\ (v) The group $U_+(t)$ is mixing with respect to the measure $\mu_\infty$, i.e., $$ \lim_{t\to\infty} \int f(U_+(t)Y)g(Y)\,\mu_{\infty}(dY)=\int f(Y)\,\mu_{\infty}(dY) \int g(Y)\,\mu_{\infty}(dY) $$ for any $f,g\in L^2({\cal H}_{\alpha,+},\mu_\infty)$. } \medskip The assertions {\it (i)}, {\it(ii)} of Theorem~A can be deduced from Propositions \ref{l2.1} and \ref{l2.2} below. \begin{pro}\label{l2.1} The family of measures $\{\mu_t,\,t\in \R\}$ is weakly compact on the space ${\cal H}_{\alpha,+}$ for any $\alpha<-d/2$, and the bounds $\sup\limits_{t\ge 0} E \Vert U_+(t)Y_0\Vert^2_{\alpha,+} <\infty$ hold. \end{pro} Set ${\cal S}=[S(\Z^d_+)\otimes \R^n]^2$, where $S(\Z^d_+)$ stands for the space of rapidly decreasing real sequences. 
Denote $\langle Y,\Psi\rangle_+ =\langle Y^0,\Psi^0\rangle_+ +\langle Y^1,\Psi^1\rangle_+$ for $Y=(Y^0,Y^1)\in {\cal H}_{\alpha,+}$ and $\Psi=(\Psi^0,\Psi^1)\in {\cal S}$, where $$ \langle Y^i,\Psi^i \rangle_+=\sum\limits_{z\in\Z^d_+}Y^i(z)\cdot\Psi^i(z), \quad i=0,1. $$ \begin{pro}\label{l2.2} For every $\Psi\in {\cal S}$, the characteristic functionals converge to a Gaussian one, \begin{equation}\label{2.6i} \hat\mu_t(\Psi): = \int e^{i\langle Y,\Psi\rangle_+}\mu_t(dY) \rightarrow \exp\left\{-\frac{1}{2}{\cal Q}_\infty (\Psi ,\Psi)\right\},\,\,\, t\to\infty, \end{equation} where ${\cal Q}_\infty$ is the quadratic form defined as $$ {\cal Q}_\infty (\Psi ,\Psi)=\sum\limits_{i,j=0}^1 \sum\limits_{z,z'\in\Z^d_+} \Bigl(Q_\infty^{ij}(z, z'),\Psi^i(z)\otimes \Psi^j(z') \Bigr). $$ \end{pro} Proposition \ref{l2.1} ensures the existence of the limit measures of the family $\{\mu_t,\,t\in \R\}$, while Proposition \ref{l2.2} provides the uniqueness. They are proved in Sections 6 and 7, respectively. The assertion {\it (iii)} of Theorem~A is proved in Section 5, item {\it(iv)} follows from (\ref{1.8i}) and item {\it(v)} can be proved using a method of \cite{D2}. \begin{remarks}\label{remi-iii} {\rm (i) The {\it uniform Rosenblatt mixing condition} \cite{Ros} also suffices, together with a higher power $>2$ in the bound (\ref{med}): there exists $\delta >0$ such that \begin{equation}\label{med'} E \Big( \vert Y^0_0(z)\vert^{2+\delta}+\vert Y^1_0(z)\vert^{2+\delta} \Big) \le C <\infty,\quad z\in\Z^d_+. \end{equation} Then (\ref{1.12}) requires a modification: $$ \displaystyle\int_0^{+\infty}\displaystyle r^{d-1}\alpha^{p}(r)dr <\infty,\quad\mbox{with }\, p=\min(\delta/(2+\delta), 1/2). 
$$ Here $\alpha(r)$ is the Rosenblatt mixing coefficient defined as in (\ref{ilc}) but without $\mu_0(B)$ in the denominator: $$ \alpha(r)=\sup\{\alpha_Y({\cal A},{\cal B}): \,{\cal A},{\cal B}\subset \Z^d_+,\,\,\mathop{\rm dist}\nolimits({\cal A},\,{\cal B})\geq r\}, $$ where $$ \alpha_Y({\cal A},{\cal B})=\sup\{|\mu_0(A\cap B)-\mu_0(A)\mu_0(B)|:\, A\in\sigma({\cal A}),\,B\in\sigma({\cal B})\}. $$ Under these modifications, the statements of Theorem A and their proofs remain essentially unchanged. (ii) The uniform Rosenblatt mixing condition could also be weakened. Let $K(z,s)=\prod\limits_{i=1}^d[z_i-s,z_i+s]$, with $s>0$ and $z\in\Z^d$, denote the cube in $\Z^d$, and set $\bar K_r=\Z^d\setminus K(z,s+r)$. Let us define the mixing coefficient $\alpha_l(r)$ by the rule $$ \alpha_l(r)=\sup\left\{ \alpha_Y\left(K(z,s)\cap \Z^d_+,\bar K_r\cap \Z^d_+\right): \,z\in\Z^d_+,\,0\le s\le l\right\}. $$ To prove Theorem A, it suffices to assume, together with (\ref{med'}), that $$ \alpha_l(r)\le \frac{C~l^\kappa}{(1+r)^{\kappa'}}, $$ with some constants $C,\kappa, \kappa'>0$. See \cite{Bu, Do} for a more detailed discussion of the different mixing conditions. (iii) The condition {\bf E5} could be considerably weakened. Namely, it suffices to assume the following condition:\\ {\bf E5'} If for some $\sigma\not=\sigma'$ one has $\omega_\sigma(\theta)\pm\omega_{\sigma'}(\theta)\equiv \mathop{\rm const}\nolimits_\pm$ with $\mathop{\rm const}\nolimits_\pm\not=0$, then \begin{equation}\label{psisi'} \left\{ \begin{array}{rr} p_{\sigma\sigma'}^{11}(\theta)\mp \omega_{\sigma}(\theta)\omega_{\sigma'}(\theta) p_{\sigma\sigma'}^{00}(\theta)\equiv 0,\\ \omega_\sigma(\theta)p_{\sigma\sigma'}^{01}(\theta)\pm \omega_{\sigma'}(\theta)p_{\sigma\sigma'}^{10}(\theta)\equiv 0. \end{array}\right.
\end{equation} Here $p_{\sigma\sigma'}^{ij}(\theta)$ stand for the matrices $$ p_{\sigma\sigma'}^{ij}(\theta):= \Pi_\sigma(\theta)\hat {\bf q}_0^{ij}(\theta)\Pi_{\sigma'}(\theta),\,\,\,\, \theta\in {\rm\bf T}^d,\,\,\,\, \sigma,\sigma'=1,\dots,s,\,\,\,\, i,j=0,1, $$ $\hat {\bf q}_0^{ij}(\theta)$ are Fourier transforms of the correlation functions ${\bf q}^{ij}_0(z)$. Note that the condition {\bf E5'} is fulfilled, for instance, if ${\bf q}_0(z)$ is a covariance matrix of a Gibbs measure $\nu_0$ on ${\cal H}_\alpha$, $\alpha<-d/2$. Formally, the Gibbs measure $\nu_0$ is $$ \nu_{0}(dv_0, dv_1)= \frac{1}{Z_\beta} \displaystyle e^{-\displaystyle \frac{\beta}{2}\displaystyle \sum_{z\in\Z^d} (|v_1(z)|^2 +{\cal V}v_0(z)\cdot v_0(z))}\prod_{z} dv_0(z)dv_1(z), $$ where $Z_\beta$ is the normalization factor, $\beta= T^{-1}$, and $T>0$ is the absolute temperature. Let us introduce the Gibbs measure $\nu_{0}$ as the Gaussian measure with the correlation matrices defined by their Fourier transform as $$ \hat {\bf q}_0^{00}(\theta)= T\hat V^{-1}(\theta),~~~ \hat {\bf q}_0^{11}(\theta)= T \left(\delta_{kl}\right)_{k,l=1}^n,~~~ \hat {\bf q}_0^{01}(\theta)= \hat {\bf q}_0^{10}(\theta)= 0. $$ Then $p_{\sigma\sigma'}^{ij}=0$ for $\sigma\not=\sigma'$ and (\ref{psisi'}) holds.} \end{remarks} \setcounter{equation}{0} \section{Examples} Let us give examples of equations (\ref{1+}) and measures $\mu_0$ which satisfy the conditions {\bf E1}--{\bf E6} and {\bf S1}--{\bf S4}, respectively. \subsection{Harmonic crystals} The conditions {\bf E1}--{\bf E6} are fulfilled, in particular, in the case of {\it the nearest neighbor crystal}, i.e., when the interaction matrix $V(z)=(V_{kl}(z))_{k,l=1}^n$ has the form: \begin{equation}\label{V} \begin{array}{ccl} V_{kl}(z)&=&0\mbox{ for }k\not=l,\\ V_{kk}(z)&=&\left\{\begin{array}{ll}-\gamma_k&\mbox{for }\, |z|=1,\\ 2d\gamma_k+m_k^2& \mbox{for }\, z=0,\\ 0&\mbox{for }\, |z|\ge2,\end{array}\right.
\quad k=1,\dots,n, \end{array} \end{equation} with $\gamma_k>0$, $m_k\ge0$. Then the equation (\ref{1+}) becomes $$ \ddot u_k(z,t)=(\gamma_k\Delta_L-m_k^2)u_k(z,t),\quad k=1,\dots,n. $$ Here $\Delta_L$ stands for the discrete Laplace operator on the lattice $\Z^d$, $$ \Delta_L u(z):= \sum\limits_{e,|e|=1}(u(z+e)-u(z)). $$ Then the eigenvalues of $\hat V(\theta)$ are $$ \tilde{\omega}_k(\theta)= \sqrt{\, 2 \gamma_k(1-\cos\theta_1)+\dots+2 \gamma_k (1-\cos\theta_d)+m_k^2}\,,\quad k=1,\dots,n. $$ These eigenvalues have to be labelled as follows \begin{eqnarray} \tilde\omega_1(\theta)\equiv\dots\equiv\tilde\omega_{r_1}(\theta),\,\,\, \tilde\omega_{r_1+1}(\theta)\equiv\dots\equiv\tilde\omega_{r_2}(\theta), \,\,\, \dots,\,\,\,\, \tilde\omega_{r_{s-1}+1}(\theta)\equiv\dots\equiv \tilde\omega_{r_s}(\theta) \nonumber\\ \omega_1(\theta)\equiv \tilde\omega_{r_1}(\theta)< \omega_2(\theta)\equiv \tilde\omega_{r_2}(\theta)<\dots< \omega_s(\theta)\equiv \tilde\omega_{r_s}(\theta). \nonumber \end{eqnarray} Clearly, conditions {\bf {E1}}--{\bf {E5}} hold with ${\cal C}_*=\emptyset$. In the case where all $m_k>0$, the set ${\cal C}_0$ is empty and condition {\bf E6} holds automatically. Otherwise, if $m_k=0$ for some $k$, ${\cal C}_0=\{0\}$. Then {\bf E6} is equivalent to the condition $\omega_\sigma^{-2}(\theta)\in L^1({\rm\bf T}^d)$, which holds if $d\ge 3$. Therefore, the conditions {\bf E1}--{\bf E6} hold provided either (i) $d\ge 3$, or (ii) $d=1,2$ and all $m_k >0$. In the case (\ref{V}), formulas (\ref{1.14}) and (\ref{1.15}) can be rewritten as follows. Denote $$ \chi_{kl}(\sigma)= \left\{ \begin{array}{rl} 1 &{\rm if} \,\,\,\, k,l\in[r_{\sigma-1}+1, r_{\sigma}]\\ 0& {\rm otherwise} \end{array} \right.
\,\,\, \sigma=1,...,s, $$ Then $$ \hat q^{ij}_{\infty\,kl}=\frac14\sum\limits_{\sigma=1}^s \chi_{kl}(\sigma)M_{kl}^{ij},\quad i,j=0,1,\quad k,l=1,\dots,n, $$ where \begin{eqnarray} M_{kl}^{11}&=& \omega_\sigma^2 M_{kl}^{00}= \left[ \omega_\sigma^2 \hat {\bf q}_0^{00}+ \hat {\bf q}_0^{11} -i\mathop{\rm sign}\nolimits(\sin\theta_1)\omega_\sigma(\hat {\bf q}^{01}_0-\hat {\bf q}^{10}_0) \right]_{kl},\nonumber \\ M_{kl}^{01}&=&-M_{kl}^{10}=\Big[\hat {\bf q}^{01}_0-\hat {\bf q}^{10}_0 +i\, \frac{\mathop{\rm sign}\nolimits(\sin\theta_1)}{\omega_\sigma(\theta)} \left(\omega_\sigma^2 \hat {\bf q}_0^{00}+ \hat {\bf q}_0^{11} \right) \Big]_{kl}.\nonumber \end{eqnarray} \subsection{Gaussian measures} We consider $n=1$ and construct Gaussian initial measures $\mu_0$ satisfying {\bf S1}--{\bf S4}. Let us define $\nu_0$ in ${\cal H}_\alpha$ by the correlation functions ${\bf q}_0^{ij}(z-z')$ which are zero for $i\not= j$, while for $i=0,1$, \begin{equation}\label{S04} \hat {\bf q}_0^{ii}(\theta):=F_{z\to\theta} [ {\bf q}_0^{ii}(z)]\in L^1({\rm\bf T}^d),\,\,\,\, \hat {\bf q}_0^{ii}(\theta) \ge 0. \end{equation} Then, by the Minlos theorem, there exists a unique Borel Gaussian measure $\nu_0$ on ${\cal H}_\alpha$, $\alpha<-d/2$, with the correlation functions ${\bf q}^{ij}_0(z-z')$, because $$ \int\Vert X\Vert^2_\alpha\nu_0(dX) =\sum\limits_{z\in\Z^d}(1+|z|^2)^\alpha (\mathop{\rm tr}\nolimits {\bf q}^{00}_0(0)\!+\!\mathop{\rm tr}\nolimits {\bf q}_0^{11}(0)) =C(\alpha,d)\int\limits_{{\rm\bf T}^d} \mathop{\rm tr}\nolimits( \hat {\bf q}^{00}_0(\theta)\!+\!\hat {\bf q}_0^{11}(\theta))\,d\theta <\infty. $$ The measure $\nu_0$ satisfies {\bf S1} and {\bf S3}. Let us take a function $\zeta\in C(\Z)$ such that $$ \zeta(s)= \left\{ \begin{array}{ll} 1,~~\mbox{for }~ s>\,a,\\ 0,~~\mbox{for }~ s\le0\end{array}\right. \quad\mbox{for some }\, a>0. $$ Let us introduce $X(z)$ as a random function on the probability space $({\cal H}_\alpha,\nu_0)$.
Define a Borel probability measure $\mu_0$ on ${\cal H}_{\alpha,+}$ as the distribution of the random function $Y_0(z)= \zeta(z_1)X(z)$, $z\in\Z^d_+$. Then the correlation functions of $\mu_0$ are $$ Q_0^{ij}(z,z')= {\bf q}_0^{ij}(z-z')\zeta(z_1)\zeta(z'_1),~~i,j= 0,1, $$ where $z= (z_1,\dots,z_d)$, $z'= (z'_1,\dots,z'_d)\in \Z^d_+$, and ${\bf q}_0^{ij}$ are the correlation functions of the measure $\nu_0$. The measure $\mu_0$ satisfies {\bf S1}--{\bf S3}. Further, let us require, in addition to (\ref{S04}), that \begin{equation}\label{S5} {\bf q}_0^{ii}(z)=0,\,\,\,|z|\geq r_0. \end{equation} Then the mixing condition {\bf S4} follows with $\varphi(r)=0$, $r\geq r_0$. For instance, (\ref{S04}) and (\ref{S5}) hold if we set ${\bf q}_0^{ii}(z)= f(z_1)f(z_2)\cdot\dots\cdot f(z_d)$, where $f(z)=N_0-|z|$ for $|z|\le N_0$ and $f(z)=0$ for $|z|> N_0$ with $N_0:=[r_0/\sqrt d]$ (the integer part). Then $ \hat f(\theta)=(1-\cos N_0\theta)/(1-\cos\theta)$, $\theta\in {\rm\bf T}^1$, and (\ref{S04}) holds. \setcounter{equation}{0} \section{Convergence of correlation functions} \subsection{Bounds for initial covariance} \begin{definition} By $l^p\equiv l^p(\Z^d)\otimes \R^n$ $(l^p_+\equiv l^p(\Z^d_+)\otimes \R^n)$, $p\ge 1$, $n\ge 1$, we denote the space of sequences $f(z)=(f_1(z),\dots,f_n(z))$ endowed with the norm $\Vert f\Vert_{l^p}=\Big(\sum\limits_{z\in\Z^d}|f(z)|^p\Big)^{1/p}$ ($\Vert f\Vert_{l^p_+}:=\Big(\sum\limits_{z\in\Z^d_+}|f(z)|^p\Big)^{1/p}$, resp.). \end{definition} The next proposition reflects the mixing property of the initial correlation functions. \begin{pro} \label{l4.1} (i) Let conditions {\bf S1}--{\bf S4} hold. Then for $i,j=0,1$, the following bounds hold \begin{eqnarray} \sum\limits_{z'\in\Z^d_+} |Q^{ij}_0(z,z')| &\le& C<\infty\,\,\,\mbox{ for all }\,z\in\Z^d_+, \label{pr1}\\ \sum\limits_{z\in\Z^d_+} |Q^{ij}_0(z,z')| &\le& C<\infty\,\,\,\mbox{ for all }\,z'\in\Z^d_+.
\label{pr2} \end{eqnarray} Here the constant $C$ does not depend on $z,z'\in \Z^d_+$.\\ (ii) $\hat {\bf q}^{ij}_0\in C({\rm\bf T}^d)$, $i,j=0,1$. \end{pro} {\bf Proof}. (i) By \cite[Lemma 17.2.3]{IL}, conditions {\bf S1}, {\bf S3} and {\bf S4} imply \begin{equation}\label{4.9'} |Q^{ij}_0(z,z')| \le C e_0\,\varphi^{1/2}(|z-z'|),~~ z,z'\in\Z^d_+. \end{equation} Hence, condition (\ref{1.12}) implies (\ref{pr1}) and (\ref{pr2}); for instance, \begin{equation}\label{qp} \sum\limits_{z\in\Z^d_+}|Q^{ij}_0(z,z')| \le C e_0 \sum\limits_{z\in\Z^d} \varphi^{1/2}(|z|) <\infty. \end{equation} (ii) The bound (\ref{4.9'}) and condition {\bf S2} imply the following bound: \begin{equation}\label{4.9} |{\bf q}^{ij}_0(z)|\le C e_0\,\varphi^{1/2}(|z|),~~ z\in\Z^d. \end{equation} Hence, from (\ref{1.12}) it follows that ${\bf q}^{ij}_0(z)\in l^1$, which implies $\hat {\bf q}^{ij}_0\in C({\rm\bf T}^d)$.{\hfill\hbox{\enspace{\vrule height 7pt depth 0pt width 7pt}}} \begin{cor}\label{c4.10} Proposition \ref{l4.1} (i) implies, by the Schur lemma, that for any $\Phi,\Psi\in l^2_+$ the following bound holds: $$ |\langle Q_0(z,z'),\Phi(z)\otimes\Psi(z')\rangle_+|\le C\Vert\Phi\Vert_{l^2_+} \Vert\Psi\Vert_{l^2_+}. $$ \end{cor} \subsection{Proof of the convergence (3.2)} From condition (\ref{condE0}), formulas (\ref{Grcs}) and (\ref{hA}) it follows that ${\cal G}_t(z)={\cal G}_t(\tilde z)$ with $\tilde z=(-z_1,z_2,\dots,z_d)$. Then, by the explicit representation (\ref{sol}), the covariance $Q_t(z,z')$ can be decomposed into a sum of four terms: $$ Q_t(z,z')=\sum\limits_{y,y'\in\Z^d_+}{\cal G}_{t,+}(z,y)Q_0(y,y') {\cal G}^T_{t,+}(z',y') =R_t(z,z')-R_t(z,\tilde z')-R_t(\tilde z,z')+R_t(\tilde z,\tilde z'), $$ where $$ R_t(z,z'):=\sum\limits_{y,y'\in\Z^d_+} {\cal G}_t(z-y)Q_0(y,y'){\cal G}^T_t(z'-y'). $$ Therefore, (\ref{corf}) follows from the convergence \begin{equation}\label{5.6} R_t(z,z')\to q_\infty(z-z')\quad \mbox{as }\,t\to\infty,\quad z,z'\in\Z^d.
\end{equation} To prove (\ref{5.6}), let us define $$ Q_*(z,z')=\left\{ \begin{array}{cl} Q_0(z,z')& \mbox{for }\,z,z'\in\Z^d_+,\\ 0& \mbox{otherwise}. \end{array}\right. $$ First, we split the function $Q_*(z,z')$ into the following three matrices \begin{eqnarray} Q^+(z,z')&:=&\frac12{\bf q}_0(z-z'),\label{d1'}\\ Q^-(z,z')&:=&\frac12{\bf q}_0(z-z')\mathop{\rm sign}\nolimits (z'_1),\label{d1''}\\ Q^r(z,z')&:=&Q_*(z,z')-Q^+(z,z')-Q^-(z,z').\label{d1'''} \end{eqnarray} Next, introduce the matrices \begin{equation}\label{Qta} R^a_{t}(z,z')= \sum\limits_{y,y'\in\Z^d} \Big({\cal G}_t(z-y) Q^a(y,y') {\cal G}_t^T(z'-y')\Big),\,\,\,\,z,z'\in \Z^d,\,\,\,\,t>0, \end{equation} for each $a=\{+,-,r\}$, and split $R_t(z,z')$ into three terms: $R_t(z,z')= R^+_{t}(z,z')+R^-_{t}(z,z')+R^r_{t}(z,z')$. Then the convergence (\ref{5.6}) follows from the following lemma. \begin{lemma}\label{Qt1} (i) $\lim\limits_{t\to\infty} R_t^+(z,z')= q^+_\infty(z-z')$, $z,z'\in\Z^d$, with the matrix $q^+_{\infty}$ defined in (\ref{1.14}),\\ (ii) $\lim\limits_{t\to\infty} R_t^-(z,z')= q^-_\infty(z-z')$, $z,z'\in\Z^d$, with the matrix $q^-_{\infty}$ defined in (\ref{1.15}).\\ (iii) $\lim\limits_{t\to\infty} R_t^r(z,z')=0$, $z,z'\in\Z^d$. \end{lemma} This lemma can be proved using the technique of \cite[Proposition 7.1]{DKM}. To illustrate the main idea of the proof, we sketch the proof of Lemma \ref{Qt1} (i) in the Appendix. \setcounter{equation}{0} \section{Compactness of the family of measures} Proposition \ref{l2.1} follows from the bound (\ref{20.1}) by the Prokhorov compactness theorem \cite[Lemma II.3.1]{VF} by a method used in \cite[Theorem XII.5.2]{VF}, since the embedding ${\cal H}_{\alpha,+}\subset {\cal H}_{\beta,+}$ is compact if $\alpha>\beta$. \begin{lemma}\label{lcom} Let conditions {\bf S1}, {\bf S3}, {\bf S4} hold and $\alpha<-d/2$. Then the following bound holds \begin{eqnarray} \label{20.1} \sup\limits_{t\ge 0} E\Vert U_+(t)Y_0\Vert^2_{\alpha,_+}<\infty. \end{eqnarray} \end{lemma} {\bf Proof}.
Definition \ref{d1.1} implies $$ E \Vert Y(\cdot,t)\Vert^2_{\alpha,_+}= \!\sum\limits_{z\in \Z^d_+} (1+|z|^2)^\alpha \Big({\rm tr}\,Q_t^{00}(z,z)+{\rm tr}\,Q_t^{11}(z,z)\Big)<\infty. $$ Since $\alpha<-d/2$, it remains to prove that $$ \sup\limits_{t\in\R} \sup\limits_{z,z'\in \Z^d_+} \Vert Q_t(z,z')\Vert\le C<\infty. $$ The representation (\ref{sol}) gives \begin{eqnarray} Q^{ij}_t(z,z')&=&E\Big(Y^i(z,t)\otimes Y^j(z',t)\Big) = \sum\limits_{y,y'\in \Z^d_+} \sum\limits_{k,l=0,1} {\cal G}^{ik}_{t,+}(z,y)Q^{kl}_0(y,y'){\cal G}^{jl}_{t,+}(z',y')\nonumber\\ &=& \langle Q_0(y,y'), \Phi^i_{z}(y,t)\otimes \Phi^j_{z'}(y',t)\rangle_+,\nonumber \end{eqnarray} where \begin{eqnarray} \Phi^i_{z}(y,t)&:=&\Big( {\cal G}^{i0}_{t,+}(z,y),{\cal G}^{i1}_{t,+}(z,y)\Big)\nonumber\\ &=&({\cal G}_t^{i0}(z-y)-{\cal G}_t^{i0}(z-\tilde y), {\cal G}_t^{i1}(z-y)-{\cal G}_t^{i1}(z-\tilde y)), \,\,\,\,\,i=0,1.\nonumber \end{eqnarray} Note that the Parseval identity, formula (\ref{hatcalG}) and condition {\bf E6} imply $$ \Vert\Phi^i_{z}(\cdot,t)\Vert^2_{l^2}= (2\pi)^{-d} \int\limits_{{\rm\bf T}^d} |\hat\Phi^i_{z}(\theta,t)|^2\,d\theta \le C\int\limits_{{\rm\bf T}^d} \Big( |\hat{\cal G}^{i0}_t(\theta)|^2 +|\hat{\cal G}^{i1}_t(\theta)|^2\Big) \,d\theta \le C_0<\infty. $$ Then Corollary \ref{c4.10} gives $$ |Q^{ij}_t(z,z')|= |\langle Q_0(y,y'), \Phi^i_{z}(y,t)\otimes \Phi^j_{z'}(y',t)\rangle_+| \le C\Vert\Phi^i_{z}(\cdot,t)\Vert_{l^2_+}\, \Vert\Phi^j_{z'}(\cdot,t)\Vert_{l^2_+}\le C_1<\infty, $$ where the constant $ C_1$ does not depend on $z,z'\in\Z^d_+$, $t\in\R$. {\hfill\hbox{\enspace{\vrule height 7pt depth 0pt width 7pt}}} \setcounter{equation}{0} \section{Convergence of characteristic functionals} We derive (\ref{2.6i}) by using the explicit representation (\ref{sol}) of the solution $Y(t)$, the Bernstein `room - corridor' technique and a method of \cite{DKKS}--\cite{DKM}. 
The method gives a representation of $\langle Y(t),\Psi\rangle_+$ as a sum of weakly dependent random variables (see formula (\ref{razli}) below). Then (\ref{2.6i}) follows from the central limit theorem under a Lindeberg-type condition. A similar technique is applied in \cite[Sections 9, 10]{DKM}; therefore, we outline only the main steps of the proof. \subsection{Asymptotics of $U'_+(t)\Psi$} First, let us evaluate the scalar product $\langle Y(t),\Psi\rangle_+$. Let us introduce a function $\Psi_*(z)$ as $$ \Psi_*(z)=\left\{\begin{array}{cl} \Psi(z),&\mbox{if }\,z_1>0,\\ 0,&\mbox{if }\,z_1=0,\\ -\Psi(\tilde z),&\mbox{if }\,z_1<0. \end{array}\right. $$ Therefore \begin{eqnarray}\label{YP} \langle Y(z,t),\Psi(z)\rangle_+=\langle Y(z,t),\Psi_*(z)\rangle_+= \langle Y_0(z'),\Phi(z',t)\rangle_+, \end{eqnarray} where \begin{eqnarray}\label{7.2} \Phi(z',t)&:=&U'_+(t)\Psi_*(z') =\sum\limits_{z\in\Z^d_+} {\cal G}_{t,+}^{T}(z,z')\Psi_*(z) =\sum\limits_{z\in\Z^d} {\cal G}_{t}^{T}(z-z')\Psi_*(z)\nonumber\\ &=&(2\pi)^{-d}\int\limits_{{\rm\bf T}^d} e^{-iz'\cdot\theta}\hat{\cal G}^*_t (\theta)\hat\Psi_*(\theta)\,d\theta. \end{eqnarray} \begin{definition}\label{dC} (i) The critical set ${\cal C}:={\cal C}_0\cup{\cal C}_* \cup\Big(\cup_1^s{\cal C}_\sigma\Big)$ (see {\bf E4}).\\ (ii) ${\cal S}^0:=\{\Psi\in{\cal S}=[S(\Z^d)\otimes\R^n]^2: \hat\Psi(\theta)=0\,\, \mbox{\rm in a neighborhood of}\,\,{\cal C}\}$. \end{definition} Note that {\rm mes}\,${\cal C}=0$ (see \cite[Lemma 7.3]{DKM}) and it suffices to prove (\ref{2.6i}) for $\Psi_*\in {\cal S}^0$ only. For the function $\Phi(z,t)$ the following lemma holds. \begin{lemma}\label{l5.3} (cf.\ Lemma 9.1 from \cite{DKM}) Let conditions {\bf E1}--{\bf E4} and {\bf E6} hold.
Then for any fixed $\Psi_*\in {\cal S}^0$, the following bounds hold:\\ (i) $\sup_{z\in\Z^d}|\Phi(z,t)| \le C~t^{-d/2}$.\\ (ii) For any $p>0$ there exist $C_p,\gamma>0$ such that \begin{equation}\label{conp} |\Phi(z,t)|\le C_p(1+|z|+|t|)^{-p},\quad |z|\ge\gamma t. \end{equation} \end{lemma} This lemma follows from (\ref{7.2}), (\ref{hatcalG}), Definition \ref{dC} and the standard stationary phase method. \subsection{Bernstein's argument} Let us introduce a `room - corridor' partition of the half-ball $\{z\in\Z^d_+:~|z|\le \gamma t\}$, with $\gamma$ from (\ref{conp}). For $t>0$, we choose $\Delta_t,\rho_t\in\N$ as follows: fix $0<\delta<1$ and set \begin{equation}\label{rN} \rho_t\sim t^{1-\delta}, ~~~\Delta_t\sim\frac t{\log t},~~~~\,\,\,t\to\infty. \end{equation} Let us set $h_t=\Delta_t+\rho_t$ and $$ a^j=jh_t,\,\,\,b^j=a^j+\Delta_t,\,\,\, j=0,1,2,\dots,\,N_t=[(\gamma t)/h_t]. $$ We call the slabs $R_t^j=\{z\in\Z^d_+:|z|\le N_t h_t,\,a^j\le z_1< b^j\}$ the `rooms', $C_t^j=\{z\in\Z^d_+: |z|\le N_t h_t,\, b^j\le z_1<a^{j+1}\}$ the `corridors' and $L_t=\{z\in\Z^d_+: |z|> N_t h_t\}$ the `tail'. Here $z=(z_1,\dots,z_d)$, $\Delta_t$ is the width of a room, and $\rho_t$ that of a corridor. Let us denote by $\chi_t^j$ the indicator of the room $R_t^j$, $\xi_t^j$ that of the corridor $C_t^j$, and $\eta_t$ that of the tail $L_t$. Then $$ {\sum}_t [\chi_t^j(z)+\xi_t^j(z)]+ \eta_t(z)=1,\,\,\,z\in\Z^d_+, $$ where the sum ${\sum}_t$ stands for $\sum\limits_{j=0}^{N_t-1}$. Hence, we get the following Bernstein-type representation: \begin{equation}\label{res} \langle Y_0,\Phi(\cdot,t)\rangle_+ = {\sum}_t \left[\langle Y_0,\chi_t^j\Phi(\cdot,t)\rangle_+ + \langle Y_0,\xi_t^j\Phi(\cdot,t)\rangle_+ \right]+ \langle Y_0,\eta_t\Phi(\cdot,t)\rangle_+.
\end{equation} Let us define the random variables $r_{t}^j$, $c_{t}^j$, $l_{t}$ by $$ r_{t}^j= \langle Y_0,\chi_t^j\Phi(\cdot,t)\rangle_+,~~ c_{t}^j= \langle Y_0,\xi_t^j\Phi(\cdot,t)\rangle_+, \,\,\,l_{t}= \langle Y_0,\eta_t\Phi(\cdot,t)\rangle_+. $$ Therefore, from (\ref{YP}) and (\ref{res}) it follows that \begin{equation}\label{razli} \langle Y(t),\Psi\rangle_+=\langle Y_0,\Phi(\cdot,t)\rangle_+ = {\sum}_t (r_{t}^j+c_{t}^j)+l_{t}. \end{equation} \begin{lemma} \label{l5.1} Let {\bf S1}--{\bf S4} hold and $\Psi_*\in{\cal S}^0$. The following bounds hold for $t>1$: \begin{eqnarray} E|r^j_{t}|^2&\le& C(\Psi)~\Delta_t/ t,\,\,\,\forall j,\nonumber\\ E|c^j_{t}|^2&\le& C(\Psi)~\rho_t/ t,\,\,\,\forall j,\nonumber\\ E|l_{t}|^2&\le& C_p(\Psi)~t^{-p},\,\,\,\,\forall p>0. \nonumber \end{eqnarray} \end{lemma} The proof is based on Lemma \ref{l5.3} and Proposition \ref{l4.1} (i) (see \cite[Lemma 9.2]{DKM}). \medskip Further, to prove (\ref{2.6i}) we use a version of the central limit theorem developed by Ibragimov and Linnik. If ${\cal Q}_{\infty}(\Psi,\Psi)=0$, the convergence (\ref{2.6i}) follows from (\ref{corf}). Thus, we may assume that for a given $\Psi_*\in{\cal S}^0$, \begin{equation}\label{5.*} {\cal Q}_{\infty}(\Psi,\Psi)\not=0. \end{equation} First, we obtain $$ | E\exp\{i \langle Y_0,\Phi(\cdot,t)\rangle_+\} - \hat\mu_{\infty}(\Psi)| =\left|E\exp\left\{i{\sum}_t r_t^j\right\} -\exp\left\{-\frac12 {\sum}_t E|r_t^j|^2\right\}\right|+o(1),\,\,\,t\to\infty. $$ This fact follows from Lemma \ref{l5.1}, convergence (\ref{corf}), condition {\bf S4} and (\ref{rN}) (cf.\ \cite[pp.\,1073--1075]{DKM}). Secondly, by the mixing condition {\bf S4}, we derive that $$ \left|E\exp\left\{i{\sum}_t r_t^j\right\}-\prod\limits_{0}^{N_t-1} E\exp\left\{i r_t^j\right\}\right| \le C N_t\varphi(\rho_t)\to 0,\quad t\to\infty. $$ Hence, it remains to check that $$ \left|\prod\limits_{0}^{N_t-1} E\exp\left\{ir_t^j\right\} -\exp\left\{-\frac12{\sum}_{t} E|r_t^j|^2\right\}\right| \to 0,~~t\to\infty.
$$ According to the standard statement of the central limit theorem (see, e.g., \cite[Theorem 4.7]{P}), it suffices to verify the Lindeberg condition: $\forall\delta>0$, $$ \frac{1}{\sigma_t}{\sum}_t E_{\delta\sqrt{\sigma_t}} |r_t^j|^2 \to 0,~~t\to\infty. $$ Here $\sigma_t\equiv {\sum}_t E |r^j_t|^2$, and $E_\varepsilon f\equiv E (X_\varepsilon f)$, where $X_\varepsilon$ is the indicator of the event $|f|>\varepsilon^2$. Note that (\ref{corf}) and (\ref{5.*}) imply that $\sigma_t \to{\cal Q}_{\infty}(\Psi, \Psi)\not= 0$, $t\to\infty$. Hence it remains to verify that $$ {\sum}_t E_{\varepsilon} |r_t^j|^2 \to 0,~~t\to\infty, ~~ \mbox{ for any }\, \varepsilon>0. $$ This condition is checked using the technique from \cite[Section 10]{DKM}. {\hfill\hbox{\enspace{\vrule height 7pt depth 0pt width 7pt}}} \setcounter{equation}{0} \section{Appendix. Outline of the proof of Lemma \ref{Qt1} (i)} Obviously, the assertion of Lemma \ref{Qt1} (i) is equivalent to the next proposition. \begin{pro} Let conditions {\bf E1}--{\bf E6} and {\bf S1}--{\bf S4} hold. Then for any $\Psi\in{\cal S}$, \begin{equation}\label{8.1} \lim\limits_{t\to\infty}\langle R_t^+(z,z'),\Psi(z)\otimes\Psi(z')\rangle = \langle q^+_\infty(z-z'),\Psi(z)\otimes\Psi(z')\rangle. \end{equation} \end{pro} {\bf Proof}. It suffices to prove (\ref{8.1}) for $\Psi\in{\cal S}^0$ only. It can be proved similarly to \cite[Lemma 7.6]{DKM}. First, let us apply the Fourier transform to the matrix $R_{t}^+(z,z')$ defined by (\ref{Qta}): $\hat R^{+}_{t}(\theta,\theta'):= F\!\!\!_{\scriptsize {\begin{array}{ll}z\to\theta\\ z'\to \theta' \end{array}}}\!\! R^{+}_{t}(z,z')= \hat {\cal G}_t(\theta)\hat Q^{+}(\theta,\theta')\hat {\cal G}_t^T(\theta')$, where $\hat Q^{+}(\theta,\theta'):=F\!\!\!_{\scriptsize {\begin{array}{ll} z\to\theta\\ z'\to \theta' \end{array}}}\!\! Q^+(z,z')$. From (\ref{d1'}) it follows that $\hat Q^+(\theta,\theta')= \delta(\theta+\theta')~(2\pi)^{d}~\hat{\bf q}_0(\theta)/2$.
Hence, $$ \hat R^+_{t}(\theta,\theta')=(2\pi)^d\frac12\delta(\theta+\theta') \hat {\cal G}_t(\theta)\hat{\bf q}_0(\theta) \hat{\cal G}_t^T(-\theta). $$ Secondly, $\hat{\cal G}_t( \theta)$ has the form \begin{equation}\label{hatcalG} \hat{\cal G}_t( \theta)= \left( \begin{array}{cc} \cos\Omega t &~ \sin \Omega t~\Omega^{-1} \\ -\sin\Omega t~\Omega & \cos\Omega t\end{array}\right), \end{equation} where $\Omega=\Omega(\theta)$ is the Hermitian matrix defined by (\ref{Omega}). Let $C(\theta)$ be defined by (\ref{C(theta)}) and $I$ be the identity matrix. Then \begin{equation}\label{Gtdec} \hat{\cal G}_t( \theta)=\cos\Omega t\, I+\sin\Omega t\, C(\theta). \end{equation} Moreover, by condition {\bf E2}, $\hat {\cal G}_t^T(-\theta)=\hat {\cal G}_t^*(\theta)= \cos\Omega t\, I+\sin\Omega t\, C(\theta)^*$. Therefore, \begin{eqnarray}\label{Qt,1} \langle R_t^+(z,z'),\Psi(z)\otimes\Psi(z')\rangle &=&(2\pi)^{-2d}\langle \hat R_t^+(\theta,\theta'),\hat \Psi(\theta) \otimes\hat\Psi(\theta')\rangle \nonumber\\ &=&\frac1{2(2\pi)^{d}}\langle\hat {\cal G}_t(\theta) \hat {\bf q}_0(\theta)\hat {\cal G}_t^*(\theta), \hat \Psi(\theta)\otimes\overline{\hat \Psi}(\theta)\rangle. \end{eqnarray} Further, let us choose certain smooth branches of the functions $\Pi_\sigma(\theta)$ and $\omega_\sigma(\theta)$ to apply the stationary phase arguments, which require smoothness in $\theta$. We choose a finite partition of unity \begin{equation}\label{part} \sum_{m=1}^M g_m(\theta)=1,\,\,\,\,\theta\in \mathop{\rm supp}\nolimits\hat\Psi, \end{equation} where $g_m$ are nonnegative functions from $C_0^\infty({\rm\bf T}^d)$ which vanish in a neighborhood of the set ${\cal C}$ defined in Definition \ref{dC} (i). Further, using (\ref{part}), we rewrite the RHS of (\ref{Qt,1}).
Applying formula (\ref{Gtdec}) for $\hat {\cal G}_t(\theta)$, one obtains $$ \langle R_t^+(z,z'),\Psi(z)\otimes\Psi(z')\rangle =\frac1{2(2\pi)^{d}}\sum_{m}\sum\limits_{\sigma,\sigma'=1}^s \int\limits_{{\rm\bf T}^d} g_m(\theta) \Big(\Pi_\sigma(\theta) R_{t,\sigma\sigma'}(\theta) \Pi_{\sigma'}(\theta), \hat \Psi(\theta)\otimes\overline{\hat \Psi}(\theta)\Big)\,d\theta, $$ where $R_{t,\sigma\sigma'}(\theta)$ stands for the $2n\times 2n$ matrix, \begin{eqnarray}\label{7.9} R_{t,\sigma\sigma'}(\theta) &=&\frac{1}{2}\sum\limits_{\pm}\Big\{ \cos\big(\omega_\sigma(\theta)\pm\omega_{\sigma'}(\theta)\big)t \Big[\hat {\bf q}_0(\theta)\mp C(\theta)\hat {\bf q}_0(\theta) C(\theta)^*\Big] \nonumber\\ &&+\sin\big(\omega_\sigma(\theta)\pm\omega_{\sigma'}(\theta)\big)t ~\Big[ C(\theta)\hat {\bf q}_0(\theta)\pm\hat {\bf q}_0(\theta) C(\theta)^*\Big]\Big\}. \end{eqnarray} If $\sigma=\sigma'$, then \begin{eqnarray}\label{8.7} R_{t,\sigma\sigma}(\theta) &=&\frac{1}{2}\Big[\hat {\bf q}_0(\theta)+ C(\theta)\hat {\bf q}_0(\theta) C(\theta)^*\Big] +\frac12\cos\big(2\omega_\sigma(\theta)t\big)\Big[\hat {\bf q}_0(\theta)- C(\theta)\hat {\bf q}_0(\theta) C(\theta)^*\Big]\nonumber\\ &&+\frac12\sin\big(2\omega_\sigma(\theta)t\big) \Big[C(\theta)\hat {\bf q}_0(\theta)+ \hat {\bf q}_0(\theta) C(\theta)^*\Big]. \end{eqnarray} By Lemma \ref{lc*} and compactness arguments, we can choose the eigenvalues $\omega_\sigma(\theta)$ and the matrices $\Pi_\sigma(\theta)$ as real-analytic functions inside $\mathop{\rm supp}\nolimits g_m$ for every $m$; we do not mark these functions by the index $m$, so as not to overburden the notation. Let us analyze the Fourier integrals with $g_m$. First, note that the identities $\omega_\sigma(\theta)+\omega_{\sigma'}(\theta)\equiv\mathop{\rm const}\nolimits_+$ or $\omega_\sigma(\theta)-\omega_{\sigma'}(\theta)\equiv\mathop{\rm const}\nolimits_-$ with $\mathop{\rm const}\nolimits_\pm\ne 0$ are impossible by condition {\bf E5}.
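The decomposition (\ref{8.7}) of $\hat{\cal G}_t\hat{\bf q}_0\hat{\cal G}_t^*$ into a stationary part plus terms oscillating at frequency $2\omega_\sigma(\theta)$ can be verified numerically in the scalar case $n=1$, $\Pi_\sigma=I$ (a sanity check of the algebra only, not part of the proof; $\omega$, $t$ and $\hat{\bf q}_0$ below are arbitrary sample values):

```python
import numpy as np

w, t = 0.7, 2.34                               # sample omega_sigma(theta) and time
I2 = np.eye(2)
C  = np.array([[0.0, 1.0/w], [-w, 0.0]])       # C(theta)
Cs = np.array([[0.0, -w], [1.0/w, 0.0]])       # C(theta)^*

rng = np.random.default_rng(1)
q0 = rng.normal(size=(2, 2)); q0 = q0 + q0.T   # a sample symmetric q_0 hat

G  = np.cos(w*t)*I2 + np.sin(w*t)*C            # propagator symbol G_t hat
Gs = np.cos(w*t)*I2 + np.sin(w*t)*Cs           # its transpose at -theta

lhs = G @ q0 @ Gs                              # the conjugated covariance
rhs = (0.5*(q0 + C @ q0 @ Cs)                  # stationary part
       + 0.5*np.cos(2*w*t)*(q0 - C @ q0 @ Cs)  # oscillating parts
       + 0.5*np.sin(2*w*t)*(C @ q0 + q0 @ Cs))
print(np.allclose(lhs, rhs))                   # True
```

The check is the usual double-angle rewriting of $\cos^2$, $\sin^2$ and $\cos\sin$; only the constant part survives time averaging.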
Furthermore, the oscillatory integrals with $\omega_\sigma(\theta)\pm \omega_{\sigma'}(\theta)\not\equiv \mathop{\rm const}\nolimits$ vanish as $t\to\infty$. Hence, only the integrals with $\omega_\sigma(\theta)-\omega_{\sigma'}(\theta)\equiv 0$ contribute to the limit, since $\omega_\sigma(\theta)+\omega_{\sigma'}(\theta)\equiv 0$ would imply $\omega_\sigma(\theta)\equiv\omega_{\sigma'}(\theta)\equiv 0$, which is impossible by {\bf E4}. By formulas (\ref{7.9}) and (\ref{8.7}), one obtains \begin{eqnarray} &&\langle R_t^+(z,z'),\Psi(z)\otimes\Psi(z')\rangle\nonumber\\ &=& (2\pi)^{-d}\sum\limits_m\sum\limits_{\sigma=1}^s\frac14 \int\limits_{{\rm\bf T}^d} g_m(\theta)\Bigl(\Pi_\sigma(\theta) \Big[\hat {\bf q}_0(\theta)+ C(\theta)\hat {\bf q}_0(\theta) C(\theta)^*\Big] \Pi_\sigma(\theta)+\dots, \hat \Psi(\theta)\otimes\overline{\hat \Psi}(\theta)\Big)\,d\theta \nonumber\\ &=&(2\pi)^{-d}\int\limits_{{\rm\bf T}^d} \Bigl(\hat q^+_{\infty}(\theta), \hat \Psi(\theta)\otimes\overline{\hat \Psi}(\theta)\Big)\,d\theta +\dots, \nonumber \end{eqnarray} where ``$\dots$'' stands for the oscillatory integrals which contain $\cos(\omega_\sigma(\theta)\pm\omega_{\sigma'}(\theta))t$ and $\sin(\omega_\sigma(\theta)\pm\omega_{\sigma'}(\theta))t$ with $\omega_\sigma(\theta)\pm\omega_{\sigma'}(\theta)\not\equiv\mathop{\rm const}\nolimits$. The oscillatory integrals converge to zero by the Riemann--Lebesgue theorem, since all the integrands in ``$\dots$'' are summable, and $\nabla(\omega_\sigma(\theta)\pm\omega_{\sigma'}(\theta))=0$ only on a set of Lebesgue measure zero. The summability follows from Proposition \ref{l4.1} (ii) and {\bf E6} (if ${\cal C}_0\not=\emptyset$), since the matrices $\Pi_\sigma(\theta)$ are bounded. The zero-measure property follows as in Lemma \ref{lc*} (i), since $\omega_\sigma(\theta)\pm\omega_{\sigma'}(\theta)\not\equiv\mathop{\rm const}\nolimits$. Lemma \ref{Qt1} (i) is proved.
{\hfill\hbox{\enspace{\vrule height 7pt depth 0pt width 7pt}}} \medskip \begin{center} {\bf Acknowledgment} \end{center} The author would like to thank Prof.~H.~Spohn for helpful discussions.
\section*{Abstract} We propose a new approach for solving a class of discrete decision making problems under uncertainty with positive cost. This issue concerns multiple and diverse fields such as engineering, economics, artificial intelligence, cognitive science and many others. Basically, an agent has to choose a single action or a series of actions from a set of options, without knowing for sure their consequences. Schematically, two main approaches have been followed: either the agent learns which option is the correct one to choose in a given situation by trial and error, or the agent already has some knowledge on the possible consequences of his decisions; this knowledge being generally expressed as a conditional probability distribution. In the latter case, several optimal or suboptimal methods have been proposed to exploit this uncertain knowledge in various contexts. In this work, we propose a different approach, based on the geometric intuition of distance. More precisely, we define a goal-independent quasimetric structure on the state space, taking into account both the cost function and the transition probability. We then compare precision and computation time with classical approaches. \section*{Introduction} It's Friday evening, and you are in a hurry to get home after a hard day's work. Several options are available. You can hail a taxi, but it's costly and you're worried about traffic jams, common at this time of day. Or you might go on foot, but it's slow and tiring. Moreover, the weather forecast predicted rain, and of course you forgot your umbrella. In the end you decide to take the subway, but unfortunately, you have to wait half an hour for the train at the connecting station due to a technical incident. Situations like this one are typical in everyday life. It is also undoubtedly a problem encountered in logistics and control. The initial state and the goal are known (precisely or according to a probability distribution).
The agent has to make a series of decisions about the best transport means, taking into account both uncertainty and cost. This is what we call \emph{optimal control under uncertainty}. Note that he might also have an intuitive notion of some abstract distance: how far am I from home? To what extent will it be difficult or time consuming to take a given path? The problem might become even more difficult if you do not know precisely what state you are in. For instance, you might be caught in a traffic jam in a completely unknown neighborhood. The problem we propose to deal with in this paper can be viewed as sequential decision making, usually expressed as a Markovian Decision Process (MDP) \cite{Bellman1957, Howard1960, Puterman1994,Boutilier1999} and its extension to Partially Observable cases (POMDP) \cite{Drake1962,Astrom1965}. Knowing the transition probability of switching from one state to another by performing a particular action, as well as the associated instantaneous cost, the aim is to define an optimal policy, either deterministic or probabilistic, which maps the state space to the action space in order to minimize the mean cumulative cost from the initial state to a goal (goal-oriented MDPs). This class of problems is usually solved by the Dynamic Programming method, using Value Iteration (VI) or Policy Iteration (PI) algorithms and their numerous refinements. In contrast to this model-based approach, various learning algorithms have also been proposed to progressively build either a value function, a policy, or both, from trial to trial. Reinforcement learning is the most widely used, especially when transition probabilities and cost function are unknown (model-free case), but it suffers from the same tractability problem \cite{Sutton1998}. Moreover, one significant drawback of these approaches is that they do not take advantage of the preliminary knowledge of the cost function and transition probability.
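As a reference point for the classical model-based approach just described, a Value Iteration solver for a small goal-oriented MDP can be sketched in a few lines (an illustrative sketch only; the toy states, actions, costs and transition probabilities below are made up):

```python
def value_iteration(states, A, p, g, goal, eps=1e-9):
    """Bellman backups V(x) <- min_u [ g(x,u) + sum_y p(y|x,u) V(y) ],
    with the goal state absorbing and cost-free (V(goal) = 0)."""
    V = {x: 0.0 for x in states}
    while True:
        delta = 0.0
        for x in states:
            if x == goal:
                continue
            best = min(g[(x, u)] + sum(pr * V[y] for y, pr in p[(x, u)].items())
                       for u in A[x])
            delta = max(delta, abs(best - V[x]))
            V[x] = best                 # in-place (Gauss-Seidel) update
        if delta < eps:
            return V

# A toy 3-state chain 0 -> 1 -> 2 (goal): from state 0, a cheap but
# unreliable action competes with a costly but reliable one.
states = [0, 1, 2]
A = {0: ['risky', 'safe'], 1: ['step']}
g = {(0, 'risky'): 1.0, (0, 'safe'): 3.0, (1, 'step'): 1.0}
p = {(0, 'risky'): {1: 0.5, 0: 0.5},
     (0, 'safe'):  {1: 1.0},
     (1, 'step'):  {2: 1.0}}
V = value_iteration(states, A, p, g, goal=2)
print(round(V[0], 6), round(V[1], 6))   # 3.0 1.0
```

The in-place sweeps converge geometrically here; the exhaustive minimization over states and actions at each sweep is exactly what becomes intractable as the state space grows.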
MDPs have generated a substantial amount of work in engineering, economics, artificial intelligence and neuroscience, among others. Indeed, in recent years, Optimal Feedback Control theory has become quite popular in explaining certain aspects of human motor behavior \cite{Todorov2002,Todorov2004}. This kind of method results in feedback laws, which allow for closed-loop control. However, aside from certain classes of problems with a convenient formulation, such as the Linear Quadratic case and its extensions \cite{Stengel1986}, or through linearization of the problem, achieved by adapting the immediate cost function \cite{Todorov2009}, the exact total optimal solution in the discrete case is intractable due to the curse of dimensionality \cite{Bellman1957}. Thus, a lot of work in this field is devoted to finding approximate solutions and efficient methods for computing them. Heuristic search methods try to speed up optimal probabilistic planning by considering only a subset of the state space (e.g. knowing the starting point and considering only reachable states). These algorithms can provide offline optimal solutions for the considered subspace \cite{Barto1995,Hansen2001,Bonet2003}. Monte-Carlo planning methods that do not manipulate probabilities explicitly have also proven very successful in dealing with problems with large state spaces \cite{Peret2004b,Kocsis2006}. Some methods try to reduce the dimensionality of the problem in order to avoid memory explosion by mapping the state space to a smaller parameter space \cite{Buffet2006,Kolobov2009} or decomposing it hierarchically \cite{Hauskrecht1998, Dietterich1998, Barry2011}. Another family of approximation methods which has recently proven very successful \cite{Little2007} is ``determinization''. Indeed, transforming the probabilistic problem into a deterministic one optimizing another criterion allows the use of very efficient deterministic planners \cite{Yoon2007,Yoon2008,Teichteil-Konigsbuch2010}.
What we propose here is to do something rather different, by considering goal-independent distances between states. To compute the distance we propose a kind of determinization of the problem using a one-step transition ``mean cost per successful attempt'' criterion, which can then be propagated by triangle inequality. The obtained distance function thus confers a quasimetric structure on the state space, which can be viewed as a Value function between all states. These distances can then be used to compute an offline policy using a gradient-descent-like method. We show that in spite of being formally suboptimal (except in the deterministic case and in a particular case described below), this method exhibits several good properties. We demonstrate the convergence of the method and the possibility of computing distances using standard deterministic shortest path algorithms. Comparison with the optimal solution is described for different classes of problems, with a particular look at problems with \emph{prisons}. Prisons, or absorbing sets of states, have recently been shown to be difficult cases for state of the art methods \cite{Kolobov2012} and we show how our method naturally deals with these cases. \section*{Materials and Methods} \subsection*{Quasimetric} Let us consider a dynamic system described by its state $x \in X$ and $u \in U(x)$, the action applied at state $x$, leading to an associated instantaneous cost $g(x,u)$. The dynamics can then be described by the Markov model: \[ P(X^{t+1}|X^t,U^t) \] where the state of the system is a random variable $X$ defined by a probability distribution. Assuming stationary dynamics, a function $p\colon X \times X \times U \to [0,1]$ exists, satisfying \[ P([X^{t+1}=y]|[X^t=x],[U^t=u])=p(y|x,u) \] This model enables us to capture uncertainties in the knowledge of the system's dynamics, and can be used in the Markov Decision Process (MDP) formalism. The aim is to find the optimal policy $U(x)$ allowing a goal state to be reached with minimum cumulative cost.
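As a concrete illustration, the stationary model $p(y|x,u)$ and the cost $g(x,u)$ can be encoded as simple lookup tables. The following Python sketch uses a hypothetical three-state commuting example; all names and values are illustrative assumptions, not taken from the paper:

```python
# Illustrative encoding of a stationary model p(y|x,u) and cost g(x,u)
# as lookup tables.  The three-state commuting example and all values
# are hypothetical, chosen only to make the structures concrete.
P = {  # P[(x, u)] -> {y: p(y|x,u)}
    ("home", "walk"):   {"street": 1.0},
    ("street", "walk"): {"work": 0.8, "street": 0.2},
    ("street", "taxi"): {"work": 1.0},
}
G = {  # g(x, u): instantaneous positive cost of applying u in x
    ("home", "walk"): 1.0,
    ("street", "walk"): 1.0,
    ("street", "taxi"): 5.0,
}

def transition_prob(y, x, u):
    """p(y|x,u): probability of landing in y after applying u in x."""
    return P.get((x, u), {}).get(y, 0.0)
```

With this representation, each row of the transition table sums to one over the arrival states, as required of a conditional probability distribution.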
The classic method of solving this is to use dynamic programming to build an optimal Value function $v\colon X \to \mathbb{R}$, minimizing the total expected cumulative cost using the Bellman equation: \begin{equation}\label{eq:stoch_bellman1} v(x)=\min_u \left\{ g(x,u)+\sum_y v(y)p(y|x,u) \right\} \end{equation} which can be used to specify an optimal \emph{control policy}\\ $\pi\colon X\to U(X)$ \begin{equation}\label{eq:stoch_bellman2} \pi(x) \in \argmin_u \left\{ g(x,u)+\sum_y v(y)p(y|x,u) \right\} \end{equation} In general, this method is tied to a specific goal state or requires a discount factor. Here we propose a different approach by defining a goal-independent \emph{quasimetric} structure on the state space, defining for each couple of states a distance function $d(x,y)$ reflecting a minimum cumulative cost. This distance has to verify the following properties: \[ \left\{ \begin{array}{l l l} \forall x,y\colon d(x,y) &\geq& 0\\ \forall x\colon d(x,x) &=& 0\\ \forall x,y\colon d(x,y) & =&0 \implies x=y\\ \forall x,y\colon d(x,y)& =&\displaystyle \min_z \left\{d(x,z)+d(z,y)\right\}\\ \end{array} \right. \] leading to the triangle inequality \[ \forall x,y,z\colon d(x,y)\leq d(x,z)+d(z,y) \] Therefore, the resulting quasi-distance function $d\colon X\times X \to \mathbb{R}^+$ confers the structure of a quasimetric space on $X$. Notice that this metric need not be symmetric (in general $d(x,y) \neq d(y,x)$). This asymmetry is in fact rather natural: e.g., climbing stairs is (usually) harder than going down. By choosing a cost function $g(x,u)>0$, this distance can then be computed iteratively (like the Value function).\\ For a deterministic problem, we initialize with: \[ \left\{ \begin{array}{l l l} d^0(x,x)&=&0\\ d^0(x,y\neq x)&=&+\infty\\ d^1(x,y)&=&\min\left\{d^0(x,y),\min\limits_{u|y=next(x,u)}\{g(x,u)\}\right\} \end{array} \right. \] with $next(x,u)$ the discrete dynamic model giving the next state $y$ obtained by applying action $u$ in state $x$.
Then we apply the recurrence: \begin{equation}\label{eq:qd_rec} d^{i+1}(x,y)=\min_{z} \left\{d^i(x,z)+d^i(z,y)\right\} \quad \forall i>0 \end{equation} We can show that this recurrence is guaranteed to converge in finite time for a finite state-space problem. \begin{proof} \mbox{} \begin{enumerate} \item by recurrence $\forall (x,y), \forall i \colon d^i(x,y) \geq 0$ as: \begin{itemize} \item $\forall (x,y)\colon$\\ $d^1(x,y)= \min\left\{d^0(x,y),\min\limits_{u|y=next(x,u)}\{g(x,u)\}\right\} \geq 0$ as\\ $\forall (x,y) \colon d^0(x,y)\geq 0$ and $g(x,u)>0$ by definition. \item and if $d^i(x,y) \geq 0$ then\\ $d^{i+1}(x,y) = \min_{z} \left\{d^i(x,z)+d^i(z,y)\right\} \geq 0$ \end{itemize} \item $\forall (x,y), \forall i \colon d^{i+1}(x,y) \leq d^{i}(x,y)$ as:\\ $d^{i+1}(x,y)=\min_{z} \left\{d^i(x,z)+d^i(z,y)\right\}$ then $d^{i+1}(x,y) \leq d^i(x,z)+d^i(z,y)$; in particular, if we take $z=x$ we have $d^{i+1}(x,y) \leq d^{i}(x,y)$. \item $\forall (x,y) \colon d^{i}(x,y)$ is a decreasing monotone sequence bounded below by $0$. \hfill \ensuremath{\blacksquare} \end{enumerate} \end{proof} However, finding a way to initialize $d(x,y)$ (more precisely $d^1(x,y)$) while taking uncertainty into account presents a difficulty in probabilistic cases, as we cannot use the cumulative expected cost as in the Bellman equation. For example, we can choose: \[ d^1(x,y)=\min\left\{d^0(x,y),\min\limits_{u}\left\{\frac{g(x,u)}{p(y|x,u)}\right\}\right\} \] for the first iteration, with $\frac{g(x,u)}{p(y|x,u)}$ as the \emph{one-step distance}. The quotient of cost over transition probability is chosen as it provides an estimate of the \emph{mean cost per successful attempt}. If we attempt the action $u$ in state $x$ $N$ times, the total cost will be $N.g(x,u)$ and the objective $y$ will be reached $N.p(y|x,u)$ times on average. The mean cost per successful attempt is: \[ \frac{N.g(x,u)}{N.p(y|x,u)}=\frac{g(x,u)}{p(y|x,u)} \] This choice of metric is therefore simple and fairly convenient.
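The construction above (one-step distances $g(x,u)/p(y|x,u)$ followed by triangle-inequality propagation) can be sketched in a few lines of Python. The tiny model and its values are illustrative assumptions, not an example from the paper:

```python
import math

# Sketch of the quasi-distance construction: initialize with the
# one-step "mean cost per successful attempt" g(x,u)/p(y|x,u),
# then propagate by triangle inequality until a fixed point.
states = ["A", "B", "goal"]
P = {("A", "try"): {"B": 0.5, "A": 0.5},   # illustrative model
     ("B", "go"):  {"goal": 1.0}}
G = {("A", "try"): 1.0, ("B", "go"): 2.0}

def quasi_distance(states, P, G):
    # d^0: zero on the diagonal, infinite elsewhere
    d = {(x, y): (0.0 if x == y else math.inf)
         for x in states for y in states}
    for (x, u), dist in P.items():          # one-step distances d^1
        for y, p in dist.items():
            if p > 0 and y != x:
                d[(x, y)] = min(d[(x, y)], G[(x, u)] / p)
    changed = True
    while changed:  # d^{i+1}(x,y) = min_z { d^i(x,z) + d^i(z,y) }
        changed = False
        for x in states:
            for y in states:
                for z in states:
                    cand = d[(x, z)] + d[(z, y)]
                    if cand < d[(x, y)]:
                        d[(x, y)] = cand
                        changed = True
    return d

d = quasi_distance(states, P, G)
# d[("A","B")] = 1/0.5 = 2 and d[("A","goal")] = 2 + 2/1 = 4
```

Each sweep can only decrease entries that are bounded below by zero, so the loop terminates, in line with the convergence proof above.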
All the possible consequences of actions are clearly not taken into account here, thus inducing a huge computational gain but at the price of losing optimality. In fact, we are looking at the minimum over actions of the \emph{mean cost per successful attempt}, which can be viewed as using the best mean cost while disregarding unsuccessful attempts, i.e. neglecting the probability of moving to an unwanted state. In a one-step decision, this choice is a reasonable approximation of the optimal choice, which takes both cost and probability into account. This cost-probability quotient was used before to determinize probabilistic dynamics and extract plans \cite{Keyder2008,Barry2011,Kaelbling2011}. Here we generalize this method to construct an entire metric on the state space using the triangle inequality. We also notice that, contrary to the dynamic programming approach, the quasimetric is not linked to a specific goal but instead provides a distance between any pair of states. Moreover, using this formalism, the instantaneous cost function $g(x,u)$ is also totally goal independent and can more easily represent any objective \emph{physical} quantity, such as consumed energy. This interesting property allows for much more adaptive control, since the goal can be changed without any recomputation. As shown in the following, it is even possible to replace the goal state by a probability distribution over states. Another interesting property of the quasi-distance ${d\colon X\times X \to \mathbb{R}^+}$ is that it has no local minima from the action point of view. In fact, for any couple $(x,y)$, $\left\{d^0(x,y), d^1(x,y),\ldots d^n(x,y) \right\}$ is a decreasing finite series of non-negative numbers (finite number of states), which therefore converges to a non-negative number \[ d(x,y)=\lim_{n\to \infty}\left\{ d^n(x,y)\right\} \]\\ Note that if we multiply the cost function by any positive constant, the quasimetric is also multiplied by the same constant.
This multiplication has no consequence on the structure of the state space and leaves the optimal policy unchanged; therefore we can choose a constant such that: \[ \min_{x,u}\left\{g(x,u)\right\}=1 \] Let $D_k^n(y)$ be the subset of $X$ associated with a goal $y$ such that: \[ x \in D_k^n(y) \Leftrightarrow d^n(x,y)<k \] and let $D_k(y)=D_k^{\infty}(y)$ be the subset of $X$ associated with the goal $y$ such that: \[ x \in D_k(y) \Leftrightarrow d(x,y)<k \] The subset $D_{\infty}(y)$ is the set of states from which the goal $y$ can be reached in a finite time with a finite cost. Starting from $x \notin D_{\infty}(y)$, the goal $y$ will never be reached, either because some step between $x$ and $y$ requires an action with an infinite cost, or because there is a transition probability equal to $0$. The quasimetric thus defined admits no local minimum with respect to a given goal, in the sense that for a given $k$, if $x \in D_k(y)$ is such that: \[ \forall z \in D_k(y), \forall u \in U\colon P(z|x,u)>k^{-1} \text{ and } d(z,y)>d(x,y) \] then $x=y$ \begin{proof} \mbox{} \begin{enumerate} \item if $x \neq y$ and $x \in D_k^1(y)$, then $\exists u\colon P(y|x,u)>\frac{g(x,u)}{k}\geq k^{-1}$ and\\ $d(x,y)>d(y,y)=0$. As $y \in D_k(y)$ it is a counterexample of the definition. \item if $x \neq y$ and $x \notin D_k^1(y)$, then $\exists n,z\colon d(x,y)=d^n(x,z)+d^n(z,y)$. As $d^n(x,z)\geq 0$, $d(z,y)\leq d^n(z,y) \leq d(x,y) < k$, therefore $z \in D_k(y)$. \begin{itemize} \item If $n=1$, $\exists u\colon P(z|x,u)>\frac{g(x,u)}{k}\geq k^{-1}$: it is a counterexample of the definition. \item If $n>1$, $\exists z'\colon d^n(x,z)=d^{n-1}(x,z')+d^{n-1}(z',z)$. As $d^{n-1}(x,z')\geq 0$ we still have $d(z',y)\leq d^{n-1}(z',z)+d^{n}(z,y)\leq d(x,y)<k$ and therefore $z' \in D_k(y)$. \begin{itemize} \item if $n-1=1$ it is a counterexample. \item else we repeat the search for an intermediary state.
Thus by recurrence, there exists some state $z_1 \in D_k(y)$ such that $x \in D_k^1(z_1)$, which gives a counterexample to the definition. \end{itemize} \end{itemize} \end{enumerate} Consequently, if $x \in D_{\infty}(y)$, one can set $k=d(x,y)$ (a finite distance) and apply the above property to show that there exists at least one action $u$ transforming the state $x$ to some state $z$ with a transition probability $P(z|x,u)>k^{-1}$ such that $d(z,y)<d(x,y)$. \hfill \ensuremath{\blacksquare} \end{proof}\\ \subsection*{HMM case} In the real world, the state of the system is never really known. The only available knowledge we have consists of a series of observations reflecting \emph{hidden} states. Probabilistic inference based on the transition probability and the observation likelihood allows us to compute the probability distribution over the hidden states. This class of systems is usually modeled as Hidden Markov Models (HMM), and the problem of controlling such a system becomes a Partially Observable Markov Decision Process (POMDP). Extending the quasimetric method to the POMDP case does not, however, come without cost. Ideally, as with the theoretical POMDP, we should define a quasi-distance not just on the state space but on the belief space (estimated distribution over states), which is continuous and consequently difficult to deal with \cite{Kaelbling1998}. A possible approximation is to compute the policy not on the belief space, but on the observations-actions space, obtaining $P([U^t=u]|o_{0:t},u_{0:t-1})$. Let us assume that we have a state observer maintaining a distribution over states, knowing all previous observations and actions: $P([X^t=x]|o_{0:t},u_{0:t-1})$.
At time $t$ we know all the observations $o_{0:t}$ and all the previous actions $u_{0:t-1}$, thus the distribution for the state can be recursively updated by the forward HMM equations: \begin{align*} P([X^t=x]|o_{0:t},u_{0:t-1})&\propto P([O^t=o]|[X^t=x])\\ &\times \sum_{y} \big [P([X^t=x]|[X^{t-1}=y],u^{t-1})\\ &\times P([X^{t-1}=y]|o_{0:t-1},u_{0:t-2}) \big ] \end{align*} with $P(O^t|X^t)$ the observation model. Then the distribution over the action space can be computed by marginalizing over the state space: \begin{align}\label{eq:pomdp} \nonumber P([U^t=u]|o_{0:t},u_{0:t-1})=& \sum_x \big [P([X^t=x]|o_{0:t},u_{0:t-1})\\ &\times P([U^t=u]|[X^t=x]) \big ] \end{align} assuming we have already computed the state-dependent action policy $P([U^t=u]|[X^t=x])$ (see below). Following this, a decision must be made based on this distribution. The chosen action can be random \[ u_t^{random}\sim P([U^t=u]|o_{0:t},u_{0:t-1}) \] the most probable \[ u_t^{max}= \mathop{\mathrm{arg\,max}}_u P([U^t=u]|o_{0:t},u_{0:t-1}) \] or the \emph{mean} \[ u_t^{mean}= \sum_u u.P([U^t=u]|o_{0:t},u_{0:t-1}) \] Here we assume a separation between state estimation and control, considerably reducing the computational cost compared to the optimal POMDP solution, which is intractable for most real-life problems. One drawback, however, is that the resulting policy may be suboptimal and lack information-gathering behavior, for example. \subsection*{Probabilistic policy} As we have seen, in the classic MDP formalism, the policy $\pi(x)$ is a \emph{deterministic} mapping of the state space $X$ toward the action space $U$ (using $\displaystyle \argmin_u$). Pure MDP formalism only considers the optimal action (greedy policy), so a choice is made during the computation of the policy to only consider the single action that minimizes the cost.
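The separation between state estimation and control described above can be sketched as follows: one forward-filter update of the belief, followed by the marginalization of eq.~\ref{eq:pomdp}. The transition, observation and policy tables below are illustrative placeholders, not values from the paper:

```python
# Sketch: forward-HMM update of P(X^t | o_{0:t}, u_{0:t-1}) followed by
# marginalization to obtain P(U^t | o_{0:t}, u_{0:t-1}).
# All model tables are hypothetical, for illustration only.
states, actions = ["A", "B"], ["u1", "u2"]
trans = {("A", "u1"): {"A": 0.2, "B": 0.8}, ("A", "u2"): {"A": 1.0},
         ("B", "u1"): {"B": 1.0},           ("B", "u2"): {"A": 0.5, "B": 0.5}}
obs_model = {"A": {"o1": 0.9, "o2": 0.1}, "B": {"o1": 0.2, "o2": 0.8}}
policy = {"A": {"u1": 0.7, "u2": 0.3}, "B": {"u1": 0.1, "u2": 0.9}}  # P(U|X)

def filter_step(belief, u_prev, o):
    """One forward-HMM update of the belief over hidden states."""
    new = {}
    for x in states:
        pred = sum(trans[(y, u_prev)].get(x, 0.0) * belief[y] for y in states)
        new[x] = obs_model[x][o] * pred        # observation likelihood
    z = sum(new.values())
    return {x: v / z for x, v in new.items()}  # normalize

def action_distribution(belief):
    """P(U^t | o_{0:t}, u_{0:t-1}) by marginalizing over states."""
    return {u: sum(belief[x] * policy[x][u] for x in states) for u in actions}

belief = filter_step({"A": 0.5, "B": 0.5}, "u1", "o2")
pu = action_distribution(belief)
```

A decision can then be drawn at random from `pu`, or taken as its most probable element, matching the three decision rules listed above.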
However, this method could be viewed as arbitrary to a certain extent, especially for multimodal cases where the choice of a unique optimal action may lead to loss of information or blocking behavior. In the field of reinforcement learning, the greedy policy is usually avoided in order to maintain exploratory behavior. To do so, methods such as $\epsilon$-greedy, soft $\epsilon$-greedy and soft-max action selection are employed \cite{Sutton1998}. Here we propose building $P(U^t|X^t, Y^t)$, with $Y$ the goal, using a Gibbs distribution (soft-max like form): \begin{equation}\label{eq:smpolicy} P([U^t=u]|[X^t=x],[Y^t=y]) = \frac{e^{-\beta D_u d(x,y)}}{\displaystyle \sum_{u_i} e^{-\beta D_{u_i} d(x,y)}} \end{equation} with $\beta$ a parameter modulating the \emph{sharpness} of the distribution (and consequently the exploration rate), and $D_u d(x,y)$ a \emph{probabilistic gradient} of the quasi-distance: \begin{equation}\label{eq:probgrad} D_ud(x,y)=g(x,u)+\sum_zd(z,y)p(z|x,u)-d(x,y) \end{equation} This gradient takes the immediate cost of the action into account, as well as the difference between the expected and current quasi-distances. The resulting distribution depends on a goal $y$, which can be fixed or even an evolving distribution $P(Y^t)$. The latter distribution can represent multiple objectives or just uncertainty with respect to the goal. We can then obtain the state-dependent action policy by marginalizing: \begin{equation}\label{eq:probgoal} P(U^t|X^t)=\sum_y P(U^t|X^t,Y^t).P(Y^t) \end{equation} This way of building a policy can be applied to any \emph{potential}, such as the Bellman Value function. Similarly to reinforcement learning methods, actions are weighted according to their ``value estimate'' which, in our case, is the gradient of the expected quasi-distance.
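A minimal sketch of the soft-max policy of eq.~\ref{eq:smpolicy}, built on the probabilistic gradient of eq.~\ref{eq:probgrad}, might look as follows; the small model at the bottom (states, costs, transition probabilities and quasi-distances) is an illustrative assumption:

```python
import math

# Sketch of the Gibbs (soft-max) policy over the probabilistic gradient
# D_u d(x,y) = g(x,u) + sum_z d(z,y) p(z|x,u) - d(x,y).
def gradient(x, u, y, g, p, d, states):
    return g[(x, u)] + sum(d[(z, y)] * p[(x, u)].get(z, 0.0)
                           for z in states) - d[(x, y)]

def softmax_policy(x, y, actions, g, p, d, states, beta=1.0):
    """P(u | x, goal y) proportional to exp(-beta * D_u d(x,y))."""
    w = {u: math.exp(-beta * gradient(x, u, y, g, p, d, states))
         for u in actions}
    z = sum(w.values())
    return {u: v / z for u, v in w.items()}

# Hypothetical model: from A, u1 reaches B (cost 1), u2 reaches the
# goal directly (cost 3); quasi-distances chosen consistently.
states = ["A", "B", "goal"]
g = {("A", "u1"): 1.0, ("A", "u2"): 3.0}
p = {("A", "u1"): {"B": 1.0}, ("A", "u2"): {"goal": 1.0}}
d = {("A", "goal"): 2.0, ("B", "goal"): 1.0, ("goal", "goal"): 0.0}

pi = softmax_policy("A", "goal", ["u1", "u2"], g, p, d, states, beta=1.0)
# u1 has gradient 0 and u2 has gradient 1, so pi["u1"] > pi["u2"]
```

Increasing `beta` sharpens the distribution toward the greedy choice, while a small `beta` yields a more exploratory policy, as discussed below.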
In MDP, the current state is known, so the probability distribution over the action space is directly given by the state-dependent action policy (eq.~\ref{eq:smpolicy} or eq.~\ref{eq:probgoal}). In POMDP, the current state is not known, but by marginalization over the state space, one can also compute the distribution over the action space (eq.~\ref{eq:pomdp}). We can see that if $\beta \to \infty$, the policy tends toward a Dirac delta distribution (if a unique action minimizing the value exists). This extreme case reduces to the MDP optimal policy where a unique optimal action is mapped to each state, similarly to the Value Iteration or Policy Iteration methods. The knowledge of a distribution on $U$ allows a decision to be made by random draw from the distribution, which can be useful to avoid blocking behavior or even for learning. According to the $\beta$ value, the soft-max policy associated with the random draw decision generates either a more optimal behavior (large $\beta$) or a more exploratory behavior (small $\beta$). \section*{Results and discussion} \subsection*{Comparison with dynamic programming} \subsubsection*{Convergence and complexity} First, as we have shown, computation of the quasi-distance is guaranteed to converge even for an infinite horizon (in finite time for a finite state space), while the standard Value Iteration algorithm is not. In fact, it is usually necessary to introduce a \emph{discount factor} $\gamma$ in Bellman's equation to ensure convergence, but at the price of suboptimality, whereas no such factor is needed with the quasimetric. Constructing the initial \emph{one-step distance} $d^1(x,y)$ for all state couples is in $O(|U|.|X|^2)$. Then, directly applying the recurrence in equation \ref{eq:qd_rec} leads to a complexity of $O(|X|^3.log|X|)$ for the whole state space (all-to-all states). However, the quasimetric construction uses probabilities only at the first iteration (i.e.
the \emph{one-step distance}) and then propagates these distances with the triangle inequality. This propagation of \emph{one-step distances} is completely deterministic, and no probabilities appear afterward. Thus, computing the quasi-distance can be reformulated in a graph theory framework as a deterministic shortest path problem. Let us consider the weighted directed graph (or network) $G=(V,A)$ with vertices $V=X$ and arcs $A$ the set of ordered pairs of vertices. We can assign to each oriented arc $e=(x,y)$ the weight $w_{x,y}=d^1(x,y)$ (the \emph{one-step distance}). Note that, for the sake of efficiency, it is preferable to consider an arc only if the associated weight $w_{x,y} \neq +\infty$, i.e. if an action $u$ exists with a finite cost and for which the transition probability $p(y|x,u) \neq 0$. Constructing this graph has the same complexity as the $d^1(x,y)$ initialization, and it is computed only once. Then, the problem of computing the quasi-distance from $x \to y$ becomes the problem of finding the length of the shortest path between vertices $x$ and $y$. One can compute the whole quasimetric (all-to-all states) by computing the all-pairs shortest paths, using for example the Floyd-Warshall algorithm \cite{Roy1959,Floyd1962,Warshall1962}. However, considering the usual MDP problem with a fixed goal, one would prefer to compute the quasi-distance for only one goal, which can be viewed as a multiple-source shortest path problem. An efficient way to solve this is to consider the transposed graph $G^T$, in which arcs are inverted, and to solve the single-source shortest path problem (from the goal vertex) using for example Dijkstra's algorithm \cite{Dijkstra1959} or $A^*$, depending on the problem \cite{Hart1968}. From a computational point of view, using Dijkstra's algorithm to solve the one-goal problem can be done with a worst-case complexity of $O(|A| + |V|.log|V|)$ using the appropriate data structure \cite{Fredman1987}.
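As a sketch of this goal-directed computation, the following Python code builds the transposed one-step-distance graph and runs a heap-based Dijkstra from the goal vertex, yielding $d(x, goal)$ for every $x$ in one pass (the weights are illustrative assumptions):

```python
import heapq
import math

# Sketch: single-source shortest paths on the transposed graph G^T,
# which gives the quasi-distance from every state to one goal.
def distances_to_goal(states, one_step, goal):
    """one_step[(x, y)] = d^1(x, y); returns {x: d(x, goal)}."""
    rev = {x: [] for x in states}              # transposed graph G^T
    for (x, y), w in one_step.items():
        if math.isfinite(w):                   # skip infinite-weight arcs
            rev[y].append((x, w))
    dist = {x: math.inf for x in states}
    dist[goal] = 0.0
    heap = [(0.0, goal)]
    while heap:
        dy, y = heapq.heappop(heap)
        if dy > dist[y]:                       # stale heap entry
            continue
        for x, w in rev[y]:                    # arc x -> y in G
            if dy + w < dist[x]:
                dist[x] = dy + w
                heapq.heappush(heap, (dist[x], x))
    return dist

# Illustrative one-step distances (not from the paper):
one_step = {("A", "B"): 2.0, ("B", "goal"): 2.0, ("A", "goal"): 5.0}
dist = distances_to_goal(["A", "B", "goal"], one_step, "goal")
# dist["A"] == 4.0 (via A -> B -> goal), dist["B"] == 2.0
```

Since the one-step weights are non-negative by construction ($g>0$ and $p\leq 1$), Dijkstra's correctness conditions are satisfied.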
Knowing that $|V|=|X|$ and $|A|\leq |X|^2$, this is $O(|X|^2 + |X|.log|X|)=O(|X|^2)$ for a fully connected graph. This is to be compared to the classical discounted Value Iteration method, whose complexity is $O(|U|.|X|^2)$ for one iteration (or sweep), with the worst-case number of iterations to converge proportional to $\frac{1}{1-\gamma}log(\frac{1}{1-\gamma})$, $\gamma$ being the discount factor \cite{Littman1995}. Notice that transition probabilities are usually sparse, allowing the graph to be equally sparse. Hence, considering the mean vertex out-degree $\hat{D}^{+}$, the complexity using Dijkstra's method becomes $O(\hat{D}^{+}.|X|+|X|.log|X|)$, $\hat{D}^{+}$ depending on the dispersion of the transition probabilities. Therefore, the quasi-distance can be computed efficiently using standard deterministic graph theory methods. \subsubsection*{Equivalence} The question is, how much does the quasimetric method diverge from dynamic programming? In other words, how can we compare the quasi-distance with the value function in order to discuss the optimality approximation? To be able to compare, we first have to consider only a subset of the quasimetric by looking at the quasi-distances from all states to one unique state (a goal). If the quasi-distance and the value function are equal for a specific goal (strong equivalence) then clearly the policies obtained with both methods will lead to the same behavior. But it is also possible that the quasi-distance differs from the value function and still yields the same policy (weak equivalence). In the deterministic case, the quasi-distance and the value function are trivially equal. But there is at least one other class of problems where these two approaches are strictly equivalent, which we call the \emph{probabilistic maze}. Let us consider a probabilistic system where the uncertainties concern the success of actions.
If an action $u$ succeeds, it drives the system from one state $x$ to another state $next(x,u)$; if it fails, the system remains in state $x$. We can then call $p_{suc}(u,x)$ the probability that action $u$ is successful from state $x$. This function determines all the transition probabilities, which are null except for: \begin{equation}\label{eq:probmazepsuc} \left\{ \begin{array}{l l l} P(next(x,u)|x,u)&=&p_{suc}(u,x)\\ P(x|x,u)&=&1-p_{suc}(u,x) \end{array} \right. \end{equation} This kind of system was also described as ``self-loop MDPs'' and used for MDP determinization \cite{Keyder2008}. For this class of systems -- which includes those that are deterministic -- the value function and the quasi-distance are strictly equivalent and lead to the same optimal policy. Indeed, we can inject these probabilities into Bellman's equation: \begin{align}\label{eq:probmazebellman} \nonumber V(x)=&\min_u\big\{ g(x,u)+V(x)(1-p_{suc}(u,x))\\ &+V(next(x,u))p_{suc}(u,x)\big\} \end{align} So \begin{equation}\label{eq:probmazebellman2} \min_u\big\{ g(x,u)-p_{suc}(u,x)(V(x)-V(next(x,u)))\big\}=0 \end{equation} In each state $x$ there exists at least one optimal action $u^*(x)$ such that: \begin{align}\label{eq:probmazeuopt} \nonumber u^*(x) \in \argmin_u\big\{& g(x,u)+V(x)(1-p_{suc}(u,x))\\ &+V(next(x,u))p_{suc}(u,x)\big\} \end{align} In a probabilistic maze, an action can only succeed -- thus driving the system from $x$ to $next(x,u)$ -- or fail, leaving the system unchanged. So starting from any initial state $x_0$, the optimal policy $u^*$ describes a unique optimal trajectory $\{x_0 \to x_1 \to \dots \to x_n=goal\}$. If the optimal action fails in some state $x_i$, the system remains in $x_i$ and the optimal action to apply is still the same. So $u^*(x_i)$ is repeatedly chosen until it succeeds in moving the system to $x_{i+1}=next(u^*(x_i),x_i)$, and the probability of succeeding in exactly $k$ tries is $p_{suc}(u^*(x_i),x_i).(1-p_{suc}(u^*(x_i),x_i))^{k-1}$.
Therefore, the mean cost for the transition $x_i \to x_{i+1}$ is: \begin{align}\label{eq:probmazemeancost} \nonumber g(x_i \to x_{i+1})=&p_{suc}(u^*(x_i),x_i).g(u^*(x_i),x_i)\\ &\times \sum_{k=1}^{\infty}k.(1-p_{suc}(u^*(x_i),x_i))^{k-1} \end{align} and as $\displaystyle \sum_{k=1}^{\infty}k.(1-p_{suc}(u^*(x_i),x_i))^{k-1}=\frac{1}{p_{suc}(u^*(x_i),x_i)^2}$ we have: \begin{equation}\label{eq:probmazemeancost2} g(x_i \to x_{i+1})=\frac{g(u^*(x_i),x_i)}{p_{suc}(u^*(x_i),x_i)} \end{equation} So the optimal policy is the one which minimizes this mean cost per successful attempt \begin{equation}\label{eq:probmazeu} u^*(x_i) \in \argmin_u\bigg \{\frac{g(u,x_i)}{p_{suc}(u,x_i)}\bigg\} \end{equation} Finally we have: \begin{equation}\label{eq:probmazevd} V(x_0)=\sum_{i=0}^{n-1}\min_u \bigg \{ \frac{g(u,x_i)}{p_{suc}(u,x_i)} \bigg \} = d(x_0,x_n) \end{equation}\\ Figure \ref{fig:laby} shows an example of such a maze, with the corresponding quasi-distance and policy. This type of problem may appear somewhat artificial, but it can, for example, refer to a \emph{compressed} modeling of a deterministic system, exploiting the \emph{structure} of the state space. Let us imagine a mobile agent in a corridor. In a discretized space, the corridor can have length $n$, each action moving the agent one cell forward with probability $p \simeq 1$. In order to exit the corridor, the action has to be applied $n$ times. Alternatively, this discrete space can be \emph{compressed} by representing the corridor with a single cell and a probability of success (i.e. of exiting the corridor) $p=\frac{1}{n}$. The resulting model is the probabilistic maze described above. \subsubsection*{Non-equivalence} In the general case, the quasimetric approach will differ from the dynamic programming method. These differences arise when the transition probabilities are \emph{spread out} over several arrival states.
This dispersion of arrival states can produce differences between the quasi-distance and the value function with -- or without -- differences in the optimal policy obtained. \paragraph*{Systems yielding a quasi-distance different to the value function} Here is a simple case illustrating the difference between the two methods. Let us consider a system with five states $\{A,B,C,D,E\}$ where $A$ is the starting state and $E$ the goal (cf. Fig.~\ref{fig:exemples}A). This system is almost deterministic, since the only uncertainty relates to one action in state $A$. For $A$ there are two possible actions, one driving the transition $A \to B$ with a probability of $1$ and a cost of $3$ (action $u_1$) and one driving either $A \to C$ or $A \to D$ with a probability of $0.5$ and a cost of $2$ ($u_2$). Then from $B$, $C$ and $D$ the transitions are deterministic, with associated costs of $2$, $2.5$ and $2.5$ respectively. The corresponding computed quasi-distances can be found in table~\ref{tab:ex1_qd_v}. The shortest path according to the quasi-distance is $A \to B \to E$. The optimal policy in $A$, however, is to choose the action $u_2$ leading to either $C$ or $D$ with a probability of $0.5$ and a cost of $2$. Indeed, for the action $u_1$ leading to $B$ we have $g(A,u_1)+V(B)=5$ whereas $g(A,u_2)+0.5.V(C)+0.5.V(D)=4.5$. The value function of state $A$ is slightly lower than $d(A,E)$, but both methods lead to the same optimal choice of $u_2$ while in $A$. In this example, the quasi-distance yields an inaccurate estimate of the mean cost when starting from state $A$. In fact, the quasi-distance computation tends to favor actions with low dispersion in transition probabilities (low uncertainty). So here, the quasi-distance obtained differs from the Value function for state $A$, but generates the same optimal policy. The policy can also differ in the general case. In fact, replacing all the costs in the same example with $1$ leads to $V(B)=V(C)=V(D)=d(B,E)=d(C,E)=d(D,E)=1$.
However, due to the uncertainty of action $u_2$ we have $d(A,C)=d(A,D)=\frac{1}{0.5}=2$ and $d(A,B)=1$, thus clearly biasing the policy obtained with the quasi-distance in favor of $u_1$. On the contrary, as the Value function takes all of the consequences of actions into account, $u_2$ leads to $1+0.5V(C)+0.5V(D)=2$ and $u_1$ to $1+V(B)=2$, so the two actions are equivalent. Roughly speaking, the quasi-distance yields an uncertainty-averse policy, resulting from the $\frac{g}{p}$ form of the one-step distance. \paragraph*{Systems with \emph{prison-like} states} Control under uncertainty can be viewed as continuous decision making in which both cost and uncertainty must be dealt with. The trade-off between cost and uncertainty can be illustrated by the spider problem \cite{Kappen2005}, where an agent can reach a goal quickly by crossing a narrow bridge or by slowly walking around a lake. In a deterministic case, crossing the bridge is the obvious optimal action, but when there is uncertainty as to whether the spider is able to cross the narrow bridge, the optimal action could be to walk safely around the lake (as falling into the water may be fatal). As regards the spider problem, falling into the water may bear a sufficiently large cumulative cost to justify choosing to walk around the lake hazard-free. But then, what happens when confronted with a choice between a very costly but certain action and a low-cost action with a small probability of death? Clearly this problem may be much more difficult, as death may not be associated with a high cost per se. The cost of ``walking'' when in state ``bridge'' has no objective reason to be higher than that of ``walking'' when in state ``lakeside'' if we consider energy consumption. Instead, the problem with death does not lie in the cost but in the fact that it is an irreversible state. In our modeling, death can be represented as a \emph{prison} state from which one cannot escape.
So this singularity is slightly different from a state-action with a high cost. In general these prisons can be a subset of the state space rather than a unique state. These particular states, sometimes referred to as ``dead-ends'', are known to be problematic for MDPs and are implicitly excluded from the standard definition, as their existence may prevent the solution from converging \cite{Bertsekas1995}. Moreover, recent works in the domain of planning have identified these problems as interesting and difficult, recognizing the need to find specific methods to deal with them \cite{Little2007}. Note that a prison is an absorbing set of states whose individual states are not necessarily absorbing, as it may be possible to ``move'' inside the prison. In fact, a prison is a set of states that does not contain the goal and from which the goal cannot be reached. To illustrate this class of problems, let us imagine that the spider is unable to swim. Falling into the water now leads to a prison state (death of the spider). Figure \ref{fig:exemples}B models this problem considering four states $\{A,B,C,D\}$ where $A$ is the initial state and $D$ the goal. There is a choice of actions in state $A$. The first action $u_1$ drives the transition $A \to B$ with a probability $p=1$ and a cost of $1$. Then from $B$, the unique action can lead to $D$ with $p=1-\varepsilon$ or to $C$ with a probability $p=\varepsilon$, and a cost of $1$. The second action $u_2$ allows for a transition $A \to D$ with a probability $p=1$ but a cost $\Omega$. In this case what should the spider do? The computed value function and the quasi-distance can be found in table~\ref{tab:exfrogger_qd_val}. According to Bellman's equations, the action $u_1$ should never be attempted. Indeed, for state $A$ and action $u_1$ we have $g(A,u_1)+p(B|A,u_1)V(B)=+\infty$ and for $u_2$ we have $g(A,u_2)+p(D|A,u_2)V(D)=\Omega$.
So clearly, according to the Value function, the optimal choice here is $u_2$, independently of the cost $\Omega$, which can be seen as rather radical. In contrast, the action policy obtained with the quasi-distance depends on the relative values of $\Omega$ and $\varepsilon$ (cost vs. uncertainty), which are parameters of the problem. Indeed, $d(A,D)=\min\{1+\frac{1}{1-\varepsilon},\Omega\}$ yields: \[ \left\{ \begin{array}{l l l} Du_1(A)&=&1+\frac{1}{1-\varepsilon}-\min\{1+\frac{1}{1-\varepsilon},\Omega \}\\ Du_2(A)&=&\Omega-\min\{1+\frac{1}{1-\varepsilon},\Omega \} \end{array} \right. \] So if $1+\frac{1}{1-\varepsilon} < \Omega$ we have $Du_1(A)=0$ and $Du_2(A)=\Omega-1-\frac{1}{1-\varepsilon}>0$, so $u_1$ is chosen. But if $1+\frac{1}{1-\varepsilon} > \Omega$, then $Du_1(A)=1+\frac{1}{1-\varepsilon}-\Omega>0$ and $Du_2(A)=0$, so $u_2$ is chosen. We see that different policies can be chosen depending on the problem, whereas dynamic programming will always avoid $u_1$. It is then also possible to modify the cost function in order to tune the degree of risk-taking, by changing the cost of $u_2$. \newline We can formalize these \emph{prison-like} states further in order to better control these effects. Let us define state $x$ as belonging to the prison $J(y)$ of state $y$ if there is no policy allowing the transition from $x$ to $y$ with a non-zero probability. We notice that if we compute the reaching set $Q(y)=\{x \in X : \exists \; path(x \to y)\}$, we can obtain $J(y)=X-Q(y)$. Then, by definition, $\forall x \in J(y), \; d(x,y)=\infty$. With our method, these prison states are states for which the quasi-distance to a specific (goal) state is infinite. Moreover, contrary to dynamic programming methods, for a finite cost function (and in a finite state space) the prison states are the only states with infinite quasi-distance to the goal, making them easy to identify. 
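The switch between the two behaviors can be expressed directly (a minimal sketch; the function name is ours):

```python
def choice_at_A(eps, Omega):
    """Quasi-distance-based choice between the risky action u1 and the
    costly but certain action u2, following d(A,D) = min{1 + 1/(1-eps), Omega}."""
    d_u1 = 1 + 1 / (1 - eps)   # A -> B (cost 1), then B -> D: g/p = 1/(1-eps)
    d_u2 = Omega               # direct A -> D with certainty
    return 'u1' if d_u1 < d_u2 else 'u2'
```

For a small failure probability and a large $\Omega$ the risky bridge is chosen; as $\varepsilon$ grows or $\Omega$ shrinks, the policy flips to the safe detour.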
In fact, as described, we initialize all distances with: \[ d^1(x,y)=\min\left\{d^0(x,y),\min\limits_{u}\left\{\frac{g(x,u)}{p(y|x,u)}\right\}\right\} \] So any ``one-step'' distance between two states $x$ and $y$ will be finite if at least one action $u$ with a non-zero probability $p(y|x,u)$ exists. Then, these ``one-step'' distances are propagated by triangle inequality, ensuring that $d^{i+1}(x,y)=\min_{z} \left\{d^i(x,z)+d^i(z,y)\right\}$ is infinite iff the probability of reaching $y$ from $x$ is zero, i.e. there is no path between $x$ and $y$. Thus with our method, considering a finite cost function and a finite state space, every prison state has an infinite distance and every state with an infinite distance is a prison state. As described, the proposed general quasi-metric iterative algorithm can detect all the possible prison states, and for a goal-directed MDP, the proposed deterministic shortest path algorithm for computing the quasi-distance will also naturally detect these prisons without propagating to other states. Furthermore, there are also \emph{risky} states that do not belong to $J(y)$ but are still associated with an infinite Value function. Obviously, all states in $J(y)$ have an infinite Value but, contrary to the quasimetric, the reciprocal is not true. Indeed, any state for which every action has a non-zero probability of leading to a prison state also has an infinite Value (propagated by the conditional expectation in Bellman's equation): \[ z \in J(y) \Rightarrow V(z)=\infty \] so we have \begin{equation*} \left(\forall u\ \exists z \in J(y)\colon p(z|x,u) > 0\right) \Rightarrow \forall u\colon \sum_z V(z)\, p(z|x,u) = \infty \Rightarrow V(x)=\infty \end{equation*} Thus the infinite value can propagate to the whole state space, depending on the distributions. This property of the dynamic programming method makes prison states indistinguishable from other risky states if there is no ``complete proper policy'' (a policy leading to the goal with a probability of 1). 
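The initialization and triangle-inequality propagation above amount to an all-pairs shortest-path closure; a minimal sketch (Floyd-Warshall-style, with our own data layout) also shows how prison states fall out as infinite distances:

```python
import math

def quasi_distance(states, actions, g, p):
    """Quasi-distance closure: d^1(x,y) = min_u g(x,u)/p(y|x,u), then
    d(x,y) = min_z { d(x,z) + d(z,y) } until stable (Floyd-Warshall order).
    p maps (y, x, u) -> transition probability, g maps (x, u) -> cost > 0."""
    d = {(x, y): 0.0 if x == y else math.inf for x in states for y in states}
    for x in states:
        for y in states:
            for u in actions:
                prob = p.get((y, x, u), 0.0)
                if x != y and prob > 0.0:
                    d[x, y] = min(d[x, y], g[x, u] / prob)
    for z in states:                       # triangle-inequality propagation
        for x in states:
            for y in states:
                d[x, y] = min(d[x, y], d[x, z] + d[z, y])
    return d

# Prison states are exactly the states at infinite distance from the goal:
states, actions = ['A', 'B', 'C'], ['u']
p = {('B', 'A', 'u'): 1.0,                 # A reaches the goal B for certain
     ('C', 'C', 'u'): 1.0}                 # C only loops on itself: a prison
g = {(x, 'u'): 1.0 for x in states}
d = quasi_distance(states, actions, g, p)
```

After the closure, `d['A','B']` is finite while `d['C','B']` stays infinite, identifying `C` as a prison for goal `B`.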
This may also prevent any policy from being computed. Indeed, all possible policies appear equivalent when all states have an infinite Value. This case arises if we remove the action $u_2$ from this example; the prison then becomes ``unavoidable''. In recent years, a number of works have been devoted to formalizing, detecting and dealing with these prisons in the planning domain, in particular ``unavoidable'' ones \cite{Little2007,Kolobov2010,Teichteil-Konigsbuch2011,Kolobov2012}. The straightforward method we propose here allows for a finer-grained management of this risk. The set $K(y)$ of these risky states can be constructed iteratively or by looking at $N^-(J(y))$, the set of predecessors of $J(y)$. We can observe that $N^-(J(y))$ is the set of states for which at least one action leads to a prison state with $p>0$. We call this set the \emph{weakly risky} states $K'(y)$: \[ K'(y)=\{ x \notin J(y) \ | \ \exists u \ \exists z \in J(y)\colon p(z|x,u)>0 \} = N^-(J(y)). \] The risky set is: \[ K(y)=\{ x \in N^-(J(y)) \ | \ \forall u \ \exists z \in J(y)\colon p(z|x,u)>0 \}. \] We can even decide on a minimal acceptable risk $\varepsilon$ (the $\varepsilon$-risky set), such that: \[ K_{\varepsilon}(y)=\{ x \in N^-(J(y)) \ | \ \forall u \ \exists z \in J(y)\colon p(z|x,u)>\varepsilon \} \] As seen in the previous example, from a quasi-distance point of view a risky state can be very close to the objective. If in a particular state $x$ all actions carry a probability $p>\varepsilon$ of entering a prison state, but at least one (say $u$) also has $p>0$ of going directly to the objective $y$, we have $d(x,y) \leq \frac{g(x,u)}{p}$. In order to avoid these risky states it can be decided that $\forall x \in K(y)\colon d(x,y)=\Omega$ with $\Omega$ an arbitrarily large value (possibly infinite). 
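These set constructions can be sketched directly (assuming the same `(z, x, u) -> probability` encoding as before; the function name is ours):

```python
def prison_and_risky_sets(states, actions, p, goal):
    """Compute J(goal) (prison), K'(goal) (weakly risky) and K(goal) (risky).
    p maps (z, x, u) -> probability of reaching z from x under action u."""
    # Reaching set Q(goal): backward fixed point over non-zero transitions
    Q = {goal}
    changed = True
    while changed:
        changed = False
        for x in set(states) - Q:
            if any(p.get((z, x, u), 0.0) > 0 for u in actions for z in Q):
                Q.add(x)
                changed = True
    J = set(states) - Q                         # prison: no path to the goal
    def reaches_prison(x, u):
        return any(p.get((z, x, u), 0.0) > 0 for z in J)
    K_weak = {x for x in Q if any(reaches_prison(x, u) for u in actions)}
    K = {x for x in K_weak if all(reaches_prison(x, u) for u in actions)}
    return J, K_weak, K

# Spider example: falling into the water (state C) is a prison
states, actions, eps = ['A', 'B', 'C', 'D'], ['u1', 'u2'], 0.1
p = {('B', 'A', 'u1'): 1.0, ('D', 'A', 'u2'): 1.0,
     ('D', 'B', 'u1'): 1 - eps, ('C', 'B', 'u1'): eps,
     ('D', 'B', 'u2'): 1 - eps, ('C', 'B', 'u2'): eps,
     ('C', 'C', 'u1'): 1.0, ('C', 'C', 'u2'): 1.0}
J, K_weak, K = prison_and_risky_sets(states, actions, p, 'D')
```

On this instance the prison is `{'C'}` and the bridge state `B` is risky: every action from `B` carries probability $\varepsilon$ of falling in.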
This ability to deal with risk contrasts with the classic dynamic programming method, according to which one should \emph{never} cross the road or use a car, considering that there is always a non-zero probability of an unavoidable and irremediable accident (prison state). However, by crossing the road or using a car we put ourselves in a risky state, not in a prison state! Consequently, the distinction between $J(y)$ and $K(y)$, along with the ability to parametrize the risk/cost trade-off enabled by the quasimetric approach, is essential and may be interesting for modeling human behavior. \subsection*{Applications} \subsubsection*{Under-actuated pendulum} Let us consider an under-actuated pendulum driven by the following equation: \begin{equation}\label{eq:pendulum} mr^2\ddot{\theta}=C+mgr\sin(\theta) \end{equation} with $m$ the mass, $r$ the radius, $C$ the torque, $\theta$ the angular position and $g=9.81\ m.s^{-2}$. The problem is to reach and maintain the unstable equilibrium $\theta=0$ (upward vertical) from the starting stable one $\theta=\pi$ (downward vertical) with a minimum cumulative cost, knowing that we can only apply a torque $C<mgr$. If we use as the time unit $\tau=\sqrt{\frac{r}{g}}$, the time constant of the pendulum, we can reduce equation \ref{eq:pendulum} to the dimensionless one: \begin{equation}\label{eq:pendulum_dless} \ddot{\theta}=u+\sin(\theta) \end{equation} with the normalized torque $u=\frac{C}{mgr}$ such that $|u| \leq u_{max} < 1$. 
Then, considering $X$, $Y$ and $U$ as discrete variables representing respectively $\theta$, $\dot{\theta}$ and $u$, we can decompose the probabilities as follows: \begin{equation}\label{eq:pendulum_probs} P(X^{t+\Delta t},Y^{t+\Delta t}|X^t,Y^t,U^t)=P(X^{t+\Delta t}|X^t,Y^t,U^t) \times P(Y^{t+\Delta t}|X^t,Y^t,U^t) \end{equation} with the following discrete Gaussian forms: \begin{equation}\label{eq:pendulum_xy} \left\{ \begin{array}{l l l} P(X^{t+\Delta t}|[X^t=x],[Y^t=y],[U^t=u])&\propto&\mathcal{N}(\mu_x=x+\Delta t \cdot y +\frac{1}{2}\Delta t^2(u+\sin(x)),\sigma_x)\\ P(Y^{t+\Delta t}|[X^t=x],[Y^t=y],[U^t=u])&\propto&\mathcal{N}(\mu_y=y+\Delta t \cdot (u+\sin(x)),\sigma_y) \end{array} \right. \end{equation} with $\Delta t$ the discrete time step. Here, we thus approximate the dynamics in discrete time under a Gaussian uncertainty hypothesis, described by the parameters $\sigma_x$ and $\sigma_y$. Simulations were done for a state space of $|X|=|Y|=51$ and $|U|=21$ with $\sigma_x=\sigma_y=0.2$. Using the following cost function: \[ g(x,u)= \left\{ \begin{array}{l} 0\quad \text{if}\quad x=\text{goal and } u=0\\ 1\quad \text{otherwise} \end{array} \right. \] the obtained Value function and quasi-distance are similar but not equal (cf. Fig.~\ref{fig:pendul_bel_qd_all}). For the Value function (without discount factor), the zero cost for the goal state (needed for convergence) propagates only very slightly, and distant states have almost the same expected cost. In contrast, the quasi-distance exhibits larger variations over states because it is not smoothed by the mean cost expectation computed in the Value Iteration method. The constant cost chosen in this problem results in minimizing both the ``path length'' (number of state transitions) and the uncertainty (as the quasi-distance results in $\displaystyle \sum \min_u \{ \frac{1}{p}\}$). 
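The discrete Gaussian transition model above can be built as follows (a sketch: the time step, the velocity grid bounds and the truncation of the Gaussians to the grid are our assumptions):

```python
import numpy as np

def discrete_gaussian(grid, mu, sigma):
    """Gaussian weights over a 1-D grid of states, renormalized to sum to 1."""
    w = np.exp(-0.5 * ((grid - mu) / sigma) ** 2)
    return w / w.sum()

def pendulum_transition(x, y, u, x_grid, y_grid, dt=0.1, sx=0.2, sy=0.2):
    """P(X', Y' | x, y, u) for theta'' = u + sin(theta), written as the
    product of the two discrete Gaussians (x = theta, y = theta_dot)."""
    acc = u + np.sin(x)
    mu_x = x + dt * y + 0.5 * dt ** 2 * acc
    mu_y = y + dt * acc
    # Independence assumption: the joint is the outer product of the marginals
    return np.outer(discrete_gaussian(x_grid, mu_x, sx),
                    discrete_gaussian(y_grid, mu_y, sy))

x_grid = np.linspace(-np.pi, np.pi, 51)
y_grid = np.linspace(-3.0, 3.0, 51)       # velocity bounds are assumptions
P = pendulum_transition(np.pi, 0.0, 0.5, x_grid, y_grid)
```

The resulting `P` is a proper $51\times 51$ distribution over the next discrete state.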
We computed the optimal policy with Value iteration: \[ \pi(x) = \argmin_u \left\{ g(x,u)+\sum_z \gamma V(z)p(z|x,u) \right\} \] Similarly, for the quasi-metric method we computed the argmin policy as the policy minimizing the expected distance: \[ \pi(x) = \argmin_u \left\{ g(x,u)+\sum_zd(z,y)p(z|x,u)-d(x,y) \right\} \] Figure \ref{fig:pendul_bel_qd_all} shows the deterministic policies obtained for both dynamic programming and the quasi-distance. Here again, small differences occur even though the policies are mostly \emph{bang-bang}; some of them are due to the border effect caused by discretization. Despite these differences in both the Value function and the policy, the overall behavior is very similar. The results of these policies can be seen in figure \ref{fig:pendul_pol_bel_qd_ctrl}, starting from position $\pi$ with a velocity of $0$. We can see that the trajectory obtained with the quasimetric method is slightly longer than that obtained with undiscounted Value iteration (optimal), but still better than that obtained with discounted Value iteration (suboptimal with $\gamma=0.95$). We also compared computation time as a function of state space size ($|X|\times|Y|$ with a constant action space size $|U|=21$) between discounted Value iteration and the quasimetric method. Figure \ref{fig:pendul_timing_comp} shows the results obtained for the single-goal quasi-distance (quasi-distance from all states to the goal) and its associated $P(U|X)$ policy, along with Value iteration with different discount factors. These results were computed from the same input transition probabilities with single-threaded C++ implementations of the algorithms on an Intel Core2 Duo E6700 @ 2.66GHz desktop computer. We can see that Value iteration depends heavily on the chosen discount factor and is much slower than the quasimetric method for discounts close to 1. 
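Both argmin policies share the same structure and differ only in the state evaluation they plug in; since the $-d(x,y)$ term is constant in $u$, it drops out of the argmin. A vectorized sketch (the array layout is our assumption):

```python
import numpy as np

def greedy_policy(g, P, W, gamma=1.0):
    """argmin_u { g(x,u) + gamma * sum_z W(z) P(z|x,u) }.
    With W = V this is the Value-iteration policy; with W = d(., goal) it is
    the quasi-metric policy (the constant -d(x,goal) term does not affect
    the argmin).  g: (n_states, n_actions), P: (n_actions, n_states, n_states)."""
    Q = g + gamma * np.einsum('uxz,z->xu', P, W)
    return Q.argmin(axis=1)

# Toy check: two states (0 = start, 1 = goal), two actions
P = np.zeros((2, 2, 2))
P[0, 0, 1] = 1.0; P[0, 1, 1] = 1.0     # action 0: go to the goal
P[1, 0, 0] = 1.0; P[1, 1, 1] = 1.0     # action 1: stay put
g = np.array([[1.0, 1.0], [0.0, 0.0]])
W = np.array([1.0, 0.0])               # V or d(., goal) for this toy case
pi = greedy_policy(g, P, W)
```

From the start state the policy picks action 0 (head for the goal), as expected.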
Notice that the computation time for the quasimetric method includes graph construction (which need only be done once), the quasi-distance and the policy. As an illustration, for a state space of $|X|\times|Y|=91\times91=8281$, graph construction takes 129s, the quasi-distance (Dijkstra shortest path) takes 3s and the policy takes 31s, while Value iteration with $\gamma=0.95$ takes 1469s. \subsubsection*{Nonholonomic system} A slightly more complicated system is the Dubins car model \cite{Dubins1957}. This system is interesting because it exhibits nonholonomic constraints for which optimal control is difficult. However, it has generated a large amount of work over the last decades, and several studies have provided an in-depth understanding and formal solutions of such systems and successfully applied optimal methods to real-world robots (see \cite{Laumond1998,Soueres1998}). The Dubins car nonholonomic system is described by: \begin{equation}\label{eq:nh} \begin{pmatrix} \dot{x}_t \\ \dot{y}_t \\ \dot{\theta}_t \end{pmatrix} = \begin{pmatrix} \cos \theta_t \\ \sin \theta_t \\ 0 \end{pmatrix} u_l + \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} u_a \end{equation} with control inputs $u_l$ and $u_a$, respectively the linear and angular velocity. For the sake of simplicity, we constrain the linear velocity to the constant value $u_l=1.0$ and the angular velocity to $u_a=u \in \left[-1;1 \right]$. 
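One Euler step of these dynamics, with optional Gaussian noise on each component mirroring the pendulum model above, can be sketched as follows (the angle wrapping and default noise level are our assumptions):

```python
import numpy as np

def dubins_step(state, u, dt=0.25, sigma=0.05, rng=None):
    """One noisy Euler step of the Dubins car with unit linear velocity:
    x' = cos(theta), y' = sin(theta), theta' = u, each component perturbed
    by Gaussian noise of s.d. sigma (sigma=0 gives the deterministic step)."""
    rng = rng if rng is not None else np.random.default_rng()
    x, y, theta = state
    x += np.cos(theta) * dt + rng.normal(0.0, sigma)
    y += np.sin(theta) * dt + rng.normal(0.0, sigma)
    theta += u * dt + rng.normal(0.0, sigma)
    theta = (theta + np.pi) % (2 * np.pi) - np.pi   # keep theta in [-pi, pi)
    return x, y, theta

# Deterministic check: straight motion along the x axis
x, y, theta = dubins_step((0.0, 0.0, 0.0), u=0.0, sigma=0.0)
```

Iterating this step with actions drawn from the policy reproduces the noisy trajectories discussed below for the probabilistic version of the model.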
If we consider a probabilistic version of this system, the transition probabilities for the dynamic model are: \begin{equation}\label{eq:nh_fullprob} P(X^{t+\Delta t},Y^{t+\Delta t},\Theta^{t+\Delta t}|X^{t},Y^{t},\Theta^{t},U^t) \end{equation} which can be rewritten, with some independence assumptions, as the product: \begin{equation}\label{eq:nh_indprob} P(X^{t+\Delta t}|X^{t},\Theta^{t})P(Y^{t+\Delta t}|Y^{t},\Theta^{t})P(\Theta^{t+\Delta t}|\Theta^{t},U^t) \end{equation} with the following discrete Gaussian forms: \begin{equation}\label{eq:nh_xytheta} \left\{ \begin{array}{l l l} P(X^{t+\Delta t}|[X^{t}=x],[\Theta^{t}=\theta])&\propto&\mathcal{N}(\mu_x=x+\cos(\theta)\,\Delta t,\sigma_x)\\ P(Y^{t+\Delta t}|[Y^{t}=y],[\Theta^{t}=\theta])&\propto&\mathcal{N}(\mu_y=y+\sin(\theta)\,\Delta t,\sigma_y)\\ P(\Theta^{t+\Delta t}|[\Theta^{t}=\theta],[U^t=u])&\propto&\mathcal{N}(\mu_{\theta}=\theta+u\,\Delta t,\sigma_{\theta}) \end{array} \right. \end{equation} We computed the quasimetric for this system in a discretized state space with the following parameters: \[ X=[-5,5] \quad Y=[-5,5] \quad \Theta=[-\pi,\pi] \quad U=[-1,1] \] \[ |X|=|Y|=|\Theta|=51 \quad |U|=11 \] \[ \sigma_x=\sigma_y=\sigma_{\theta}=0.05 \quad \Delta t=0.25 \] The accessibility volume obtained from the origin -- i.e. the volume of the state space that can be reached with a path length smaller than a given value -- is very similar to the optimal deterministic one from \cite{Laumond1998} (cf. Fig.~\ref{fig:nonholo_all}). Although it is not obvious what meaning the accessibility volume has for a probabilistic model, these two methods behave quite similarly. Note that this result was obtained with our general method, without any problem-specific adaptation. If we simulate this system by adding noise to the position according to the described model, and use a random draw policy as described before, we can see that the average trajectories obtained correspond mostly to direct loops (cf. 
Fig. \ref{fig:nonholo_all}). These two symmetrical loops are comparable to the optimal deterministic behavior. On average, the behavior of the system closely matches the theoretical deterministic optimum. Due to the noise, it is almost impossible to reach the goal in one trial. Most of the time the goal is missed (with one trajectory passing very close), and the controller then starts another loop or a figure of eight (trajectories on the left side of the red curves). \section*{Conclusion} We propose a new general method for control or decision making under uncertainty. This method applies to the discrete MDP case with positive costs and infinite or indefinite horizon. The principle of this approach is to define a goal-independent quasimetric on the state space, which can then be used to compute a policy for a chosen goal or set of goals. Thus, each set of distances from all states to one state, a slice of the whole quasimetric, can be viewed as an approximation of the Value function. To compute the distances between states we proposed to use the ``mean cost per successful attempt'' of a direct transition, which we propagate by triangle inequality. We showed that this distance computation can be reformulated as a standard deterministic shortest path problem, allowing the use of efficient algorithms. Thanks to this property, we have shown that the quasimetric approach may lead to a very significant gain in terms of computational cost compared to dynamic programming. The illustrative examples treated above show very good results. We have demonstrated that for systems with possible prison states (excluding the goal), the quasimetric solution can differ significantly from the optimal solution when prisons are ``avoidable''. Moreover, this method is still able to produce a solution for problems with ``unavoidable'' prisons, where the standard dynamic programming approach cannot. 
We proposed a way to finely tune risk sensitivity and the risk/cost trade-off, defining risky states and a possible threshold on risk-taking. Interestingly, this kind of risk-related behavior is reminiscent of that present in humans and remains to be compared to classic models of human decision making. We also proposed a soft-max-like way to compute a policy, which provides an entire distribution rather than a unique optimal deterministic action. Dealing with a probability distribution over actions provides, in our view, a less restrictive way of considering control under uncertainty. With this method it is, for instance, possible to make a decision when faced with multiple equivalent actions, thus introducing variability in actions and allowing exploration. This soft-max method, along with the random draw of actions, is also applicable to the Value function. Extending this method to the HMM cases we described is computationally very cheap compared to solving the optimal POMDP, which is usually intractable. Moreover, one can question whether solving the POMDP is relevant when the model is imperfect or may change over time. Although our method is not optimal in the general case and lacks information-gathering behavior, we think it could be a useful bootstrap for learning using available prior knowledge, even if the latter is very coarse. Indeed, it can sometimes be more interesting to use a simple model with uncertainties than a very complicated model which is nonetheless rarely perfect. We could therefore consider our method as a trade-off between solving the POMDP and learning from scratch. Finally, this very general approach can be applied to a wide range of problems involving control under uncertainty. Although it is currently restricted to discrete-space, infinite/indefinite-horizon cases, we hope to see contributions from the control and planning community, as many of the techniques developed for dynamic programming can be applied to this method. 
\section*{Acknowledgments} We wish to express our gratitude to Daniel Bennequin and Julien Diard for their very useful comments.
\subsection*{\hbox{}\hfill{\normalsize\sl #1}\hfill\hbox{}}} \textheight 23truecm \textwidth 15truecm \addtolength{\oddsidemargin}{-1.05truecm} \addtolength{\topmargin}{-1.5truecm} \makeatletter \def\l@section{\@dottedtocline{1}{0em}{1.2em}} \makeatother \begin{document} \title{Moduli of pre-${\cal D}$-modules, perverse sheaves\\ and the Riemann-Hilbert morphism -I} \author{Nitin Nitsure\thanks{Tata Institute of Fundamental Research, Bombay} \and Claude Sabbah\thanks{CNRS, URA D0169, Ecole Polytechnique, Palaiseau}} \date{March 28, 1995} \maketitle \begin{abstract} We construct a moduli scheme for semistable pre-${\cal D}$-modules with prescribed singularities and numerical data on a smooth projective variety. These pre-${\cal D}$-modules are to be viewed as regular holonomic ${\cal D}$-modules with `level structure'. We also construct a moduli scheme for perverse sheaves on the variety with prescribed singularities and other numerical data, and represent the de Rham functor (which gives the Riemann-Hilbert correspondence) by an analytic morphism between the two moduli schemes. \end{abstract} \vfill \tableofcontents \vfill \newpage \section{Introduction} This paper is devoted to the moduli problem for regular holonomic ${\cal D}$-modules and perverse sheaves on a complex projective variety $X$. It treats the case where the singular locus of the ${\cal D}$-module is a smooth divisor $S$ and the characteristic variety is contained in the union of the zero section $T^*_XX$ of the cotangent bundle of $X$ and the conormal bundle $N^*_{S,X}$ of $S$ in $X$ (also denoted $T_S^*X$). The sequel (part II) will treat the general case of arbitrary singularities. A moduli space for ${\cal O}$-coherent ${\cal D}$-modules on a smooth projective variety was constructed by Simpson [S]. These are vector bundles with integrable connections, and they are the simplest case of ${\cal D}$-modules. 
In this moduli construction, the requirement of semistability is automatically fulfilled by all the objects. Next in order of complexity are the so-called `regular meromorphic connections'. These ${\cal D}$-modules can be generated by vector bundles with connections which have logarithmic singularities on divisors with normal crossing. These ${\cal D}$-modules are not ${\cal O}$-coherent, but are torsion-free as ${\cal O}$-modules. A moduli scheme does not exist for these ${\cal D}$-modules themselves (see section 1 of [N]), but it is possible to define a notion of stability and construct a moduli space for vector bundles with logarithmic connections. This was done in [N]. Though many logarithmic connections give rise to the same meromorphic connection, the choice of a logarithmic connection is infinitesimally rigid if its residual eigenvalues do not differ by nonzero integers (see section 5 of [N]). In the present paper and its sequel, we deal with general regular holonomic ${\cal D}$-modules. Such modules are in general neither ${\cal O}$-coherent, nor ${\cal O}$-torsion-free, nor pure-dimensional. We define objects called pre-${\cal D}$-modules, which play the same role for regular holonomic ${\cal D}$-modules that logarithmic connections played for regular meromorphic connections. We define a notion of (semi-)stability, and construct a moduli scheme for (semi-)stable pre-${\cal D}$-modules with prescribed singularity stratification and other numerical data. We also construct a moduli scheme for perverse sheaves with prescribed singularity stratification and other numerical data on a nonsingular variety, and show that the Riemann-Hilbert correspondence defines an analytic morphism between (an open set of) the moduli of pre-${\cal D}$-modules and the moduli of perverse sheaves. The contents of this paper are as follows. Let $X$ be a smooth projective variety, and let $S$ be a smooth hypersurface on $X$. 
In section 2, we define pre-${\cal D}$-modules on $(X,S)$, which may be regarded as ${\cal O}_X$-coherent descriptions of those regular holonomic ${\cal D}_X$-modules whose characteristic variety is contained in $T^*_XX\cup T^*_SX$. The pre-${\cal D}$-modules form an algebraic stack in the sense of Artin, which is a property that does not hold for the corresponding ${\cal D}$-modules. In section 3, we define a functor from pre-${\cal D}$-modules to ${\cal D}$-modules (in fact we mainly use the presentation of holonomic ${\cal D}$-modules given by Malgrange [Mal], which we call Malgrange objects). This is a surjective functor, and though not injective, it has an infinitesimal rigidity property (see proposition \ref{prop4}) which generalizes the corresponding result for meromorphic connections. In section 4, we introduce a notion of (semi-)stability for pre-${\cal D}$-modules, and show that semistable pre-${\cal D}$-modules with fixed numerical data form a moduli scheme. Next, we consider perverse sheaves on $X$ which are constructible with respect to the stratification $(X-S)\cup S$. These have finite descriptions through the work of Verdier, recalled in section 5. We observe that these finite descriptions are objects which naturally form an Artin algebraic stack. Moreover, we show in section 6 that S-equivalence classes (Jordan-H\"older classes) of finite descriptions with given numerical data form a coarse moduli space which is an affine scheme. Here, no hypothesis about stability is necessary. In section 7, we consider the Riemann-Hilbert correspondence. When a pre-${\cal D}$-module has an underlying logarithmic connection whose residual eigenvalues do not differ by nonzero integers, we functorially associate to it a finite description, which is the finite description of the perverse sheaf associated to the corresponding ${\cal D}$-module by the Riemann-Hilbert correspondence from regular holonomic ${\cal D}$-modules to perverse sheaves. 
We show that this gives an analytic morphism of stacks from the analytic open subset of the stack (or moduli) of pre-${\cal D}$-modules on $(X,S)$ where the `residual eigenvalues are good', to the stack (or moduli) of finite descriptions on $(X,S)$. In section 8, we show that the above morphism of analytic stacks is in fact a spread (surjective local isomorphism) in the analytic category. We also show that it has removable singularities in codimension 1, that is, it can be defined outside codimension two on any parameter space which is smooth in codimension 1. \paragraph{Acknowledgement} The authors thank the exchange programme in mathematics of the Indo-French Center for the Promotion of Advanced Research, New Delhi, the Ecole Polytechnique, Paris, and the Tata Institute of Fundamental Research, Bombay, for supporting their collaboration. The first author also thanks ICTP Trieste and the University of Kaiserslautern for their hospitality while part of this work was done. \section{Pre-${\cal D}$-modules} Let $X$ be a nonsingular variety and let $S\subset X$ be a smooth divisor (reduced). Let ${\cal I}_S\subset {\cal O}_X$ be the ideal sheaf of $S$, and let $T_X[\log S]\subset T_X$ be the sheaf of all tangent vector fields on $X$ which preserve ${\cal I}_S$. Let ${\cal D}_X[\log S]\subset {\cal D}_X$ be the algebra of all partial differential operators which preserve ${\cal I}_S$; it is generated as an ${\cal O}_X$-algebra by $T_X[\log S]$. The ${\cal I}_S$-adic filtration on ${\cal O}_X$ gives rise to a (decreasing) filtration of ${\cal D}_X$ as follows: for $k\in Z\!\!\!Z$ define $V^k{\cal D}_X$ as the subsheaf of ${\cal D}_X$ whose local sections consist of operators $P$ which satisfy $P\cdot {\cal I}_S^j\subset {\cal I}_S^{k+j}$ for all $j$. By construction, one has ${\cal D}_X[\log S]=V^0{\cal D}_X$ and every $V^k({\cal D}_X)$ is a coherent ${\cal D}_X[\log S]$-module. Let $p:N_{S,X}\to S$ denote the normal bundle of $S$ in $X$. 
The graded ring $\mathop{\rm gr}\nolimits_V{\cal D}_X$ is naturally identified with $p_*{\cal D}_{N_{S,X}}$. Its $V$-filtration (corresponding to the inclusion of $S$ in $N_{S,X}$ as the zero section) is then split. There exists a canonical section $\theta$ of the quotient ring ${\cal D}_X[\log S]/{\cal I}_S{\cal D}_X[\log S]=\mathop{\rm gr}\nolimits^0_V{\cal D}_X$, which is locally induced by $x\partial_x$, where $x$ is a local equation for $S$. It is a central element in $\mathop{\rm gr}\nolimits^0_V{\cal D}_X$. This ring contains ${\cal O}_S$ as a subring and ${\cal D}_S$ as a quotient (one has ${\cal D}_S=\mathop{\rm gr}\nolimits^0_V{\cal D}_X/\theta\mathop{\rm gr}\nolimits^0_V{\cal D}_X$). One can identify locally on $S$ the ring $\mathop{\rm gr}\nolimits^0_V{\cal D}_X$ with ${\cal D}_S[\theta ]$. A coherent $\mathop{\rm gr}\nolimits^0_V{\cal D}_X$-module on which $\theta$ acts by $0$ is a coherent ${\cal D}_S$-module. The locally free rank one ${\cal O}_S$-module ${\cal N}_{S,X}={\cal O}_X(S)/{\cal O}_X$ is a $\mathop{\rm gr}\nolimits^0_V{\cal D}_X$-module on which $\theta$ acts by $-1$. \begin{definition}\rm A {\sl logarithmic module} on $(X,S)$ will mean a sheaf of ${\cal D}_X[\log S]$-modules which is coherent as an ${\cal O}_X$-module. A {\sl logarithmic connection} on $(X,S)$ will mean a logarithmic module which is coherent and torsion-free as an ${\cal O}_X$-module. \end{definition} \refstepcounter{theorem}\paragraph{Remark \thetheorem} It is known that when $S$ is nonsingular, any logarithmic connection on $(X,S)$ is locally free as an ${\cal O}_X$-module. \begin{definition}[Family of logarithmic modules]\rm Let $f:Z\to T$ be a smooth morphism of schemes. Let $Y\subset Z$ be a divisor such that $Y\to T$ is smooth. Let $T_{Z/T}[\log Y]\subset T_{Z/T}$ be the sheaf of germs of vertical vector fields which preserve the ideal sheaf of $Y$ in ${\cal O}_Z$. This generates the algebra ${\cal D}_{Z/T}[\log Y]$. 
A family of logarithmic modules on $Z/T$ is a ${\cal D}_{Z/T}[\log Y]$-module which is coherent as an ${\cal O}_Z$-module, and is flat over ${\cal O}_T$. When $f:Z\to T$ is the projection $X\times T\to T$, and $Y=S\times T$, we get a {\sl family of logarithmic modules on $(X,S)$ parametrized by $T$}. \end{definition} \refstepcounter{theorem}\paragraph{Remark \thetheorem} The restriction to $S$ of a logarithmic module is acted on by $\theta$: for a logarithmic connection, this is the action of the residue of the connection, which is an ${\cal O}_S$-linear morphism. \refstepcounter{theorem}\paragraph{Remark \thetheorem} There is an equivalence (restriction to $S$) between logarithmic modules supported on the reduced scheme $S$ and $\mathop{\rm gr}\nolimits^0_V{\cal D}_X$-modules which are ${\cal O}_S$-coherent (hence locally free ${\cal O}_S$-modules, since they are locally ${\cal D}_S$-modules). In the following, we shall not make any distinction between the corresponding objects. \bigskip We give two definitions of pre-${\cal D}$-modules. The two definitions are `equivalent' in the sense that they give not only equivalent objects, but also equivalent families, or more precisely, the two definitions give rise to isomorphic algebraic stacks. To give a familiar example of such an equivalence: this is the way in which vector bundles and locally free sheaves are `equivalent'. Note also that mere equivalence of objects is not enough to give equivalence of families --- for example, the category of flat vector bundles is equivalent to the category of $\pi_1$ representations, but an algebraic family of flat bundles gives in general only a holomorphic (not algebraic) family of $\pi_1$ representations. In their first version, pre-${\cal D}$-modules are objects that live on $X$, and the functor from pre-${\cal D}$-modules to ${\cal D}$-modules has a direct description in their terms. 
The second version of pre-${\cal D}$-modules is more closely related to the Malgrange description of ${\cal D}$-modules and the Verdier description of perverse sheaves, and the Riemann-Hilbert morphism to the stack of perverse sheaves has a direct description in its terms. \begin{definition}[Pre-${\cal D}$-module of first kind on $(X,S)$]\rm Let $X$ be a nonsingular variety, and $S\subset X$ a smooth divisor. A pre-${\cal D}$-module ${\bf E} = (E,F,t,s)$ on $(X,S)$ consists of the following data: (1) $E$ is a logarithmic connection on $(X,S)$. (2) $F$ is a logarithmic module on $(X,S)$ supported on the reduced scheme $S$ (hence a flat connection on $S$). (3) $t:(E\vert S) \to F$ and $s:F \to (E\vert S)$ are ${\cal D}_X[\log S]$ linear maps, which satisfy the following conditions: (4) On $E\vert S$, we have $st = R$ where $R\in End(E\vert S)$ is the residue of $E$. (5) On $F$, we have $ts = \theta_F$ where $\theta_F:F\to F$ is the ${\cal D}_X[\log S]$ linear endomorphism induced by any Eulerian vector field $x\partial /\partial x$. \end{definition} If $(E,F,t,s)$ and $(E',F',t',s')$ are two pre-${\cal D}$-modules, a morphism between them consists of ${\cal D}_X[\log S]$ linear morphisms $f_0:E\to E'$ and $f_1:F\to F'$ which commute with $t,t'$ and with $s,s'$. \refstepcounter{theorem}\paragraph{Remark \thetheorem} It follows from the definition of a pre-${\cal D}$-module $(E,F,t,s)$ that $E$ and $F$ are locally free on $X$ and $S$ respectively, and the vector bundle morphisms $R$, $s$ and $t$ all have constant ranks on irreducible components of $S$. \paragraph{Example} Let $E$ be a logarithmic connection on $(X,S)$. We can associate functorially to $E$ the following three pre-${\cal D}$-modules. Take $F_1$ to be the restriction of $E$ to $S$ as an ${\cal O}$-module. Let $t_1 = R$ (the residue) and $s_1 = 1_F$. 
Then ${\bf E}_1=(E,F_1,t_1,s_1)$ is a pre-${\cal D}$-module, which under the functor from pre-${\cal D}$-modules to ${\cal D}$-modules defined later will give rise to the meromorphic connection corresponding to $E$. For another choice, take $F_2 = E\vert S$, $t_2=1_F$ and $s_2=R$. This gives a pre-${\cal D}$-module ${\bf E}_2 = (E,F_2,t_2,s_2)$ which will give rise to a ${\cal D}$-module which has nonzero torsion part when $R$ is not invertible. For the third choice (which is in some precise sense the minimal choice), take $F_3$ to be the image vector bundle of $R$. Let $t_3 =R:(E\vert S)\to F_3$, and let $s_3:F_3\hookrightarrow (E\vert S)$. This gives a pre-${\cal D}$-module ${\bf E}_3 = (E,F_3,t_3,s_3)$. We have functorial morphisms ${\bf E}_3\to {\bf E}_2 \to {\bf E}_1$ of pre-${\cal D}$-modules. \begin{definition}[Families of pre-${\cal D}$-modules]\rm Let $T$ be a complex scheme. A family ${\bf E}_T$ of pre-${\cal D}$-modules on $(X,S)$ parametrized by the scheme $T$, a morphism between two such families, and the pullback of a family under a base change $T'\to T$ have obvious definitions (starting from the definition of families of ${\cal D}_X[\log S]$-modules), which we leave to the reader. This gives us a fibered category $PD$ of pre-${\cal D}$-modules over the base category of $C\!\!\!\!I$ schemes. Let $\cal PD$ be the largest (nonfull) subcategory of $PD$ in which all morphisms are isomorphisms. This is a groupoid over $C\!\!\!\!I$ schemes. \end{definition} \begin{proposition} The groupoid $\cal PD$ is an algebraic stack in the sense of Artin. \end{proposition} \paragraph{Proof} It can be directly checked that $\cal PD$ is a sheaf, that is, descent and effective descent are valid for faithfully flat morphisms of parameter schemes of families of pre-${\cal D}$-modules. Let $Bun_X$ be the stack of vector bundles on $X$, and let $Bun_S$ be the stack of vector bundles on $S$. 
Then $\cal PD$ has a forgetful morphism to the product stack $Bun_X\times_{C\!\!\!\!I} Bun_S$. The latter stack is algebraic and the forgetful morphism is representable, hence the desired conclusion follows. \bigskip Before giving the definition of pre-${\cal D}$-modules of the second kind, we observe the following. \refstepcounter{theorem}\paragraph{Remark \thetheorem}\label{rem1} Let $N$ be any line bundle on a smooth variety $S$, and let $\ov{N} = P(N\oplus {\cal O}_S)$ be its projective completion, with projection $\pi : \ov{N} \to S$. Let $S^{\infty} = P(N)$ be the divisor at infinity. For any logarithmic connection $E$ on $(\ov{N} ,S\cup S^{\infty})$, the restriction $E\vert S$ is of course a ${\cal D}_{\ov{N}}[\log S\cup S^{\infty}]$-module. But conversely, for any ${\cal O} $-coherent ${\cal D}_{\ov{N}}[\log S\cup S^{\infty}]$-module $F$ scheme theoretically supported on $S$, there is a natural structure of a logarithmic connection on $(\ov{N} ,S\cup S^{\infty})$ on its pullback $\pi ^*(F)$ to $\ov{N}$. The above correspondence is well behaved in families, giving an isomorphism between the algebraic stack of ${\cal D}_{\ov{N}}[\log S\cup S^{\infty}]$-modules $F$ supported on $S$ and the algebraic stack of logarithmic connections $E$ on $(\ov{N} ,S\cup S^{\infty})$ such that the vector bundle $E$ is trivial on the fibers of $\pi :\ov{N} \to S$. The functors $\pi ^*(-)$ and $(-)\vert S$ are fully faithful. \refstepcounter{theorem}\paragraph{Remark \thetheorem}\label{rem2} Let $S\subset X$ be a smooth divisor, and let $N=N_{S,X}$ be its normal bundle. Then the following are equivalent in the sense that we have fully faithful functors between the corresponding categories, which give naturally isomorphic stacks. (1) ${\cal D}_X[\log S]$-modules which are scheme theoretically supported on $S$. (2) ${\cal D}_N[\log S]$-modules which are scheme theoretically supported on $S$.
(3) ${\cal D}_{\ov{N}}[\log S \cup S^{\infty}]$-modules which are scheme theoretically supported on $S$. The equivalence between (2) and (3) is obvious, while the equivalence between (1) and (2) is obtained as follows. The Poincar\'e residue map $res:\Omega ^1_X[\log S] \to {\cal O}_S$ gives the following short exact sequence of ${\cal O}_S$-modules. $$0\to \Omega ^1_S \to \Omega ^1_X[\log S]|S \to {\cal O}_S\to 0$$ By taking duals, this gives $$0 \to {\cal O}_S \to T_X[\log S]|S \to T_S\to 0.$$ It can be shown that there exists a unique isomorphism $T_X[\log S]\vert S \to T_N[\log S]\vert S$ which makes the following diagram commute, where the rows are exact. $$\matrix{ 0 & \to & {\cal O}_S & \to & T_N[\log S]|S & \to & T_S & \to & 0 \cr & & \Vert & & \downarrow & & \Vert & & \cr 0 & \to & {\cal O}_S & \to & T_X[\log S]|S & \to & T_S & \to & 0 \cr }$$ \refstepcounter{theorem}\paragraph{Remarks \thetheorem} (1) Observe that the element $\theta$ is just the image of $1$ under the map ${\cal O}_S \to T_X[\log S]\vert S$. (2) Using the notations of the beginning of this section, one can identify the ring $\pi_*{\cal D}_{\ov{N}}[\log S \cup S^{\infty}]$ with $\mathop{\rm gr}\nolimits^0_V{\cal D}_X$. Hence $\theta$ is a global section of ${\cal D}_{\ov{N}}[\log S \cup S^{\infty}]$. \bigskip We now make the following important definition. \begin{definition}[Specialization of a logarithmic module]\rm Let $E$ be a logarithmic module on $(X,S)$. Then the specialization $\mathop{\rm sp}\nolimits_SE$ will mean the logarithmic connection $\pi ^*(E\vert S)$ on $(\ov{N_{S,X}} , S\cup S^{\infty})$. \end{definition} Now we are ready to define the second version of pre-${\cal D}$-modules. 
\begin{definition}[Pre-${\cal D}$-modules of the second kind on $(X,S)$]\label{def1}\rm A pre-${\cal D}$-mo\-dule (of the second kind) ${\bf E} = (E_0,E_1,c,v)$ on $(X,S)$ consists of the following data: (1) $E_0$ is a logarithmic connection on $(X,S)$, (2) $E_1$ is a logarithmic connection on $(\ov{N_{S,X}},S\cup S^\infty)$, (3) $c:\mathop{\rm sp}\nolimits_SE_0 \to E_1$ and $v:E_1 \to \mathop{\rm sp}\nolimits_SE_0$ are ${\cal D}_{\ov{N_{S,X}}}[\log S\cup S^\infty]$-linear maps, which satisfy the following conditions: (4) on $\mathop{\rm sp}\nolimits_SE_0$, we have $cv = \theta_{\mathop{\rm sp}\nolimits_SE_0}$, (5) on $E_1$, we have $vc = \theta_{E_1}$, (6) the vector bundle underlying $E_1$ is {\sl trivial} in the fibers of $\pi:\ov{N_{S,X}}\to S$ (that is, $E_1$ is locally over $S$ isomorphic to $\pi^*(E_1|S)$). \end{definition} If $(E_0,E_1,c,v)$ and $(E'_0,E'_1,c',v')$ are two pre-${\cal D}$-modules, a morphism between them consists of ${\cal D}_X[\log S]$ linear morphisms $f_0:E_0\to E'_0$ and $f_1:E_1\to E'_1$ such that $\mathop{\rm sp}\nolimits_Sf_0$ and $f_1$ commute with $v,v'$ and with $c,c'$. \begin{definition}[Families of pre-${\cal D}$-modules of the second kind]\rm Let $T$ be a complex scheme. A family ${\bf E}_T$ of pre-${\cal D}$-modules on $(X,S)$ parametrized by the scheme $T$, a morphism between two such families, and pullback of a family under a base change $T'\to T$ have obvious definitions which we leave to the reader. This gives us a fibered category $PM$ of pre-${\cal D}$-modules of the second kind over the base category of $C\!\!\!\!I$ schemes. \end{definition} \begin{proposition} The functor which associates to each family of pre-${\cal D}$-modules $(E_0,E_1,c,v)$ of the second kind parametrized by $T$ the family of pre-${\cal D}$-modules of the first kind $(E_0,E_1|S, c|S, v|S)$ is an equivalence of fibered categories. \end{proposition} \paragraph{Proof} This follows from remarks \ref{rem1} and \ref{rem2} above.
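\paragraph{Example} To fix ideas, here is a minimal rank one illustration (a straightforward verification from the definitions). Take $X = C\!\!\!\!I$ with coordinate $x$ and $S = \{0\}$, and let $E$ be the trivial line bundle ${\cal O}_X$ with the logarithmic connection $$\nabla = d + \lambda\, {dx\over x},\qquad \lambda \in C\!\!\!\!I ,$$ whose residue is the scalar $R=\lambda$. Taking $F={\cal O}_S$ with $t=\lambda$ and $s=1$, we get $st=\lambda=R$ on $E\vert S$ and $ts=\lambda=\theta_F$ on $F$, so $(E,F,t,s)$ is a pre-${\cal D}$-module of the first kind; it is the pre-${\cal D}$-module ${\bf E}_1$ functorially associated to $E$ in the example above. When $\lambda=0$ the three choices ${\bf E}_1$, ${\bf E}_2$, ${\bf E}_3$ already differ: $F_3=\mathop{\rm im}\nolimits R=0$, while $F_1=F_2={\cal O}_S$.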
\section{From pre-${\cal D}$-modules to ${\cal D}$-modules} In this section we first recall the description of regular holonomic ${\cal D}$-modules due to Malgrange [Mal], and we associate a `Malgrange object' to a pre-${\cal D}$-module of the second kind which has good residual eigenvalues (Proposition \ref{prop2}), that is, one whose residual eigenvalues along each component of $S$ do not differ by nonzero integers (definition \ref{goodres}). Having such a direct description of the Malgrange object enables us to prove that every regular holonomic ${\cal D}$-module with characteristic variety contained in $T^*_XX\cup T^*_SX$ arises from a pre-${\cal D}$-module (Corollary \ref{cor3}), and also helps us to prove an infinitesimal rigidity property for the pre-${\cal D}$-modules over a given ${\cal D}$-module (Proposition \ref{prop4}). \inter{Malgrange objects} Regular holonomic ${\cal D}$-modules on $X$ whose characteristic variety is contained in $T^*_XX\cup T^*_SX$ have an equivalent presentation due to Malgrange and Verdier, which we now describe. Let us recall the definition of the {\sl specialization} $\mathop{\rm sp}\nolimits_S(M)$ of a regular holonomic ${\cal D}_X$-module $M$: fix a section $\sigma$ of the projection $C\!\!\!\!I \to C\!\!\!\!I /Z\!\!\!Z$ and denote by $A$ its image; every such module admits a unique (decreasing) filtration $V^kM$ ($k\in Z\!\!\!Z$) by ${\cal D}_X[\log S]$-submodules which is good with respect to $V{\cal D}_X$ and satisfies the following property: on $\mathop{\rm gr}\nolimits^k_VM$, the action of $\theta$ admits a minimal polynomial all of whose roots are in $A+k$. Then by definition one puts $\mathop{\rm sp}\nolimits_SM=\oplus_{k\in Z\!\!\!Z}\mathop{\rm gr}\nolimits^k_VM$. One has $(\mathop{\rm sp}\nolimits_SM)[*S]=\mathop{\rm sp}\nolimits_S(M[*S])=(\mathop{\rm gr}\nolimits_{V}^{\geq k}M)[*S]$ for all $k\geq 1$, if we put $\mathop{\rm gr}\nolimits_{V}^{\geq k}M=\oplus_{\ell\geq k}\mathop{\rm gr}\nolimits^\ell_VM$.
The $p_*{\cal D}_{N_{S,X}}$-module $\mathop{\rm sp}\nolimits_SM$ does not depend on the choice of $\sigma$ (if one forgets its gradation). If $\theta$ acts in a locally finite way on a $\mathop{\rm gr}\nolimits^0_V{\cal D}_X$ or a $p_*{\cal D}_{N_{S,X}}$-module, we denote by $\Theta$ the action of $\exp(-2i\pi\theta)$. Given a regular holonomic ${\cal D}_X$-module $M$, we can functorially associate to it the following modules: (1) $M[*S]={\cal O}_X[*S]\otimes_{{\cal O}_X}M$ is the $S$-localized ${\cal D}_X$-module; it is also regular holonomic; (2) $\mathop{\rm sp}\nolimits_S M$ is the specialized module; this is a regular holonomic $p_*{\cal D}_{N_{S,X}}$-module, which is also {\sl monodromic}, i.e. the action of $\theta$ on each local section is locally (on $S$) finite. The particular case that we shall use of the result proved in [Mal] is then the following: \begin{theorem} There is an equivalence between the category of regular holonomic ${\cal D}_X$-modules and the category whose objects are triples $({\cal M},\overline M,\alpha)$, where ${\cal M}$ is an $S$-localized regular holonomic ${\cal D}_X$-module, $\overline M$ is a monodromic regular holonomic $p_*{\cal D}_{N_{S,X}}$-module and $\alpha$ is an isomorphism (of localized $p_*{\cal D}_{N_{S,X}}$-modules) between $\mathop{\rm sp}\nolimits_S{\cal M}[*S]$ and $\overline M[*S]$. \end{theorem} In fact, the result of [Mal] mentions neither holonomicity nor regularity. Nevertheless, using standard facts of the theory, one obtains the statement above. Regularity includes here regularity at infinity, i.e. along $S^\infty$. This statement can be simplified when restricted to the category of regular holonomic ${\cal D}$-modules whose characteristic variety is contained in the union $T^*_XX\cup T^*_SX$.
\begin{definition}\rm A {\sl Malgrange object} on $(X,S)$ is a tuple $(M_0,M_1,C,V)$ where (1) $M_0$ is an $S$-localized regular holonomic ${\cal D}_X$-module which is a regular meromorphic connection on $X$ with poles on $S$, (2) $M_1$ is an $S$-localized monodromic regular holonomic $p_*{\cal D}_{N_{S,X}}$-module which is a regular meromorphic connection on $N_{S,X}$ (or $\ov{N_{S,X}}$) with poles on $S$ (or on $S\cup S^\infty$), (3) $C,V$ are morphisms (of $p_*{\cal D}_{N_{S,X}}$-modules) between $\mathop{\rm sp}\nolimits_SM_0$ and $M_1$ satisfying $VC=\Theta-\mathop{\rm id}\nolimits$ on $\mathop{\rm sp}\nolimits_SM_0$ and $CV=\Theta-\mathop{\rm id}\nolimits$ on $M_1$. \end{definition} The morphisms between two Malgrange objects are defined in an obvious way, making them an abelian category. The previous result can be translated in the following way, using [Ve]: \begin{corollary} There is an equivalence between the category of regular holonomic ${\cal D}$-modules whose characteristic variety is contained in $T^*_XX\cup T^*_SX$ and the category of Malgrange objects on $(X,S)$. \end{corollary} \inter{From pre-${\cal D}$-modules to Malgrange objects} \begin{definition}\label{goodres}\rm (1) We say that a logarithmic connection $F$ on $(X,S)$ has {\sl good residual eigenvalues} if for each connected component $S_a$ of the divisor $S$, the residual eigenvalues $(\lambda _{a,k})$ of $F$ along $S_a$ do not include a pair $\lambda _{a,i},\lambda _{a,j}$ such that $\lambda _{a,i}-\lambda _{a,j}$ is a nonzero integer. (2) We say that a pre-${\cal D}$-module ${\bf E} =(E_0,E_1,c,v)$ has {\sl good residual eigenvalues} if the logarithmic connection $E_0$ has good residual eigenvalues as defined above. \end{definition} We now functorially associate a Malgrange object ${\bf M}=\eta ({\bf E})=(M_0,M_1,C,V)$ to each pre-${\cal D}$-module ${\bf E} = (E_0,E_1,c,v)$ on $(X,S)$ with $E_0$ having good residual eigenvalues.
\refstepcounter{theorem}\paragraph{Remark \thetheorem}\label{rem3} By definition of a pre-${\cal D}$-module it follows that the nonzero eigenvalues of $\theta_a$ on $E_0|S_a$ (the residue along $S_a$) are the same as the nonzero eigenvalues of $\theta_a$ on $E_{1,a}$. \begin{proposition}{\bf(The Malgrange object associated to a pre-${\cal D}$-module with good residual eigenvalues)}\quad\label{prop2} Let ${\bf E}=(E_0,E_1,c,v)$ be a pre-${\cal D}$-module on $(X,S)$ of the second kind (definition \ref{def1}), such that $E_0$ has good residual eigenvalues. Let $\eta ({\bf E} ) = (M_0,M_1,C,V)$ where (1) $M_0=E_0[*S]$, (2) $M_1=E_1[*S]$, (3) $C = c\circ \displaystyle{e^{-2i\pi\theta_{E_0}}- 1\over\theta_{E_0}}$, (4) $V=v$. Then $\eta ({\bf E})$ is a Malgrange object, and $\eta$ is functorial in an obvious way. \end{proposition} \paragraph{Proof} Because $E_0$ has good residual eigenvalues, one can use the filtration $V^kE_0[*S]=I_{S}^{k}E_0\subset E_0[*S]$ in order to compute $\mathop{\rm sp}\nolimits_SE_0[*S]$. It follows that the specialization of $E_0[*S]$ when restricted to $N_{S,X}-S$ is canonically isomorphic to the restriction of $\mathop{\rm sp}\nolimits_SE_0=\pi ^*(E_0\vert S)$ to $N_{S,X}-S$. \inter{Essential surjectivity} \begin{proposition}\label{prop3} Every Malgrange object $(M_0,M_1,C,V)$ on $(X,S)$ can be obtained in this way from a pre-${\cal D}$-module. \end{proposition} \paragraph{Proof} This follows from [Ve]: one chooses Deligne lattices in $M_0$ and $M_1$. One uses the fact that every ${\cal D}$-linear map between holonomic ${\cal D}$-modules is compatible with the $V$-filtration, so sends the specialized Deligne lattice of $M_0$ to the one of $M_1$. Moreover, the map $v$ can be obtained from $V$ because the only integral eigenvalue of $\theta$ on the Deligne lattice is $0$, so $\displaystyle{e^{-2i\pi\theta}- 1\over\theta}$ is invertible on it. The previous two propositions give the following.
\begin{corollary}\label{cor3} The functor from pre-${\cal D}$-modules on $(X,S)$ to regular holonomic ${\cal D}$-modules on $X$ with characteristic variety contained in $T^*_XX\cup T^*_SX$ is essentially surjective. \end{corollary} \inter{Infinitesimal rigidity} For a regular holonomic ${\cal D}$-module ${\bf M}$ with characteristic variety $T^*_XX\cup T^*_SX$, there may exist several nonisomorphic pre-${\cal D}$-modules ${\bf E}$ which give rise to the Malgrange object associated to ${\bf M}$. However, we have the following infinitesimal rigidity result, which generalizes the corresponding results in [N]. \begin{proposition}[Infinitesimal rigidity]\label{prop4} Let $T=\mathop{\rm Spec}\nolimits\displaystyle{C\!\!\!\!I [\epsilon ]\over (\epsilon ^2)}$. Let ${\bf E}_T$ be a family of pre-${\cal D}$-modules on $(X,S)$ parametrized by $T$. Let the associated family ${\bf M}_T$ of ${\cal D}$-modules on $X$ be constant (pulled back from $X$). Let ${\bf E}$ (which is the specialization at $\epsilon =0$) be of the form ${\bf E} = (E,F,t,s)$ where along any component of $S$, no two distinct eigenvalues of the residue of the logarithmic connection $E$ differ by an integer. Then the family ${\bf E}_T$ is also constant. \end{proposition} \paragraph{Proof} By [N], the family $E_{T}$ is constant, as well as the specialization $\mathop{\rm sp}\nolimits_SE_{T}$. As a consequence, the residue $\theta_{E_T}$ is constant. Let us now prove that the family $F_T$ is constant. Let $S_a$ be a component of $S$ along which the only possible integral eigenvalue of $\theta_E$ is $0$. Then it is also the only possible integral eigenvalue of $\theta_F$ along $S_a$ because the generalized eigenspaces of $\theta_E$ and $\theta_F$ corresponding to a nonzero eigenvalue are isomorphic by $s$ and $t$ (see remark \ref{rem3}). We also deduce from [N] that $F_T$ is constant as a logarithmic module along this component.
Assume now that $0$ is not an eigenvalue of $\theta_E$ along $S_a$ but is an eigenvalue of $\theta_F$ along this component. Then $\theta_F$ may have two distinct integral eigenvalues, one of which is $0$. Note that, in this case, $\theta_E$ is an isomorphism (along $S_a$), as well as $\theta_{E_T}$ which is obtained by pullback from $\theta_E$. It follows that on $S_a$ we have an isomorphism $F_T\simeq E_T|S_a\oplus \mathop{\rm Ker}\nolimits\theta_{F_T}$. Consequently $\mathop{\rm Ker}\nolimits\theta_{F_T}$ is itself a family. It is enough to show that this family is constant. But the corresponding meromorphic connection on $N_{S,X}-S$ is constant, being the cokernel of the constant map $C_T:M_{0T}\to M_{1T}$. We can then apply the result of [N] because the only eigenvalue on $\mathop{\rm Ker}\nolimits\theta_F$ is $0$. The maps $s_T$ and $t_T$ are constant if and only if for each component $S_a$ of $S$ and for some point $x_a\in S_a$ their restrictions to $F_T|\{x_a\}\times T$ and $E_T|\{x_a\}\times T$ are constant. This fact is a consequence of the following lemma. \begin{lemma} Let $E$ and $F$ be finite dimensional complex vector spaces, and let $\theta_E\in End (E)$ and $\theta_F\in End (F)$ be given. Let $V\subset W=Hom(F,E) \times Hom(E,F)$ be the closed subscheme consisting of $(s,t)$ with $st=\theta_E$ and $ts=\theta_F$. Let $\phi :W\to W$ be the holomorphic map defined by $$\phi (s,t) = (s, t {e^{st} -1 \over st}).$$ Then the differential $d\phi$ is injective on the Zariski tangent space to $V$ at any closed point $(s,t)$. \end{lemma} \paragraph{Proof} Let $(a, b)$ be a tangent vector to $V$ at $(s,t)$. Then by definition of $V$, we must have $at+sb=0$ and $ta+bs=0$. Using $at+sb=0$, we can see that $d\phi (a, b) = (a, bf(st))$ where $f$ is the entire function on $End(E)$ defined by the power series $(e^x-1)/x$. Suppose $(a,bf(st))=0$. Then $a=0$ and so the condition $ta+bs=0$ implies $bs=0$.
As the constant term of the power series $f$ is $1$ and as $bs=0$, we have $bf(st)=b$. Hence $b=0$, and so $d\phi$ is injective. \section{Semistability and moduli for pre-${\cal D}$-modules.} In order to construct a moduli scheme for pre-${\cal D}$-modules, one needs a notion of semistability. This can be defined in more than one way. What we have chosen below is a particularly simple and canonical definition of semistability. (In an earlier version of this paper, we had employed a definition of semistability in terms of parabolic structures, in which we had to fix the ranks of $s:E_1\to E_0|S$ and $t:E_0|S \to E_1$ and a set of parabolic weights.) Let $S_a$ be the irreducible components of the smooth divisor $S\subset X$. For a pre-${\cal D}$-module ${\bf E} =(E_0,E_1,s,t)$, we denote by $E_a$ the restriction of $E_1$ to $S_a$, and we denote by $s_a$ and $t_a$ the restrictions of $s$ and $t$. \inter{Definition of semistability} We fix an ample line bundle on $X$, and denote the resulting Hilbert polynomial of a coherent sheaf $F$ by $p(F,n)$. For constructing a moduli, we fix the Hilbert polynomials of $E_0$ and $E_a$, which we denote by $p_0(n)$ and $p_a(n)$. Recall (see [S]) that an ${\cal O} _X$-coherent ${\cal D} _X[\log S]$-module $F$ is by definition {\sl semistable} if it is pure dimensional, and for each ${\cal O} _X$ coherent ${\cal D} _X[\log S]$ submodule $F'$, we have the inequality $p(F',n)/rank (F') \le p(F,n)/rank (F) $ for large enough $n$. We call $p(F,n)/rank (F)$ the {\sl normalized Hilbert polynomial} of $F$. \begin{definition}\rm We say that the pre-${\cal D}$-module ${\bf E}$ is {\sl semistable} if the ${\cal D}_X[\log S]$-modules $E_0$ and $E_a$ are semistable. 
\end{definition} \refstepcounter{theorem}\paragraph{Remarks \thetheorem} (1) It is easy to prove that the semistability of the ${\cal D} _X[\log S]$-module $E_a$ is equivalent to the semistability of the logarithmic connection $\pi ^*_a(E_a)$ on $P(N_{S_a,X}\oplus 1)$ with respect to a natural choice of polarization. (2) When $X$ is a curve, a pre-${\cal D}$-module ${\bf E}$ is semistable if and only if the logarithmic connection $E_0$ on $(X,S)$ is semistable, for then $E_1$ is always semistable. (3) Let the dimension of $X$ be more than one. Then even when a pre-${\cal D}$-module ${\bf E}$ is a pre meromorphic connection (equivalently, when $s:E_1 \to E_0\vert S$ is an isomorphism), the definition of semistability of ${\bf E}$ does not reduce to the semistability of the underlying logarithmic connection $E_0$ on $(X,S)$. This is to be expected because we do not fix the rank of $s$ (or $t$) when we consider families of pre-${\cal D}$-modules. Also note that even in dimension one, meromorphic connections are not a good subcategory of the abelian category of all regular holonomic ${\cal D}$-modules with characteristic variety contained in $T^*_XX\cup T^*_SX$, in the sense that a submodule or a quotient module of a meromorphic connection is not necessarily a meromorphic connection. \inter{Boundedness and local universal family} We let the index $i$ vary over $0$ and over the $a$. \begin{proposition}[Boundedness] Semistable pre-${\cal D}$-modules with given Hilbert po\-lynomials $p_i$ form a bounded set, that is, there exists a family of pre-${\cal D}$-modules parametrized by a noetherian scheme of finite type over $C\!\!\!\!I$ in which each semistable pre-${\cal D}$-module with given Hilbert polynomials occurs. \end{proposition} \paragraph{Proof} This is obvious as each $E_i$ (where $i=0,a$) being semistable with fixed Hilbert polynomial, is bounded. Next, we construct a local universal family. 
By boundedness, there exists a positive integer $N$ such that for all $n\ge N$, the sheaves $E_0(n)$ and $E_1(n)$ are generated by global sections and have vanishing higher cohomology. Let $\Lambda = {\cal D}_X[\log S]$. Let ${\cal O} _X =\Lambda_0 \subset \Lambda_1 \subset \cdots \subset \Lambda$ be the increasing filtration of $\Lambda$ by the order of the differential operators. Note that each $\Lambda_k$ is an ${\cal O}_X$-bimodule, coherent on each side. Let $r$ be a positive integer larger than the ranks of the $E_i$. Let $Q_i$ be the Quot scheme of quotients $q_i:\Lambda_r\otimes {\cal O}_X (-N)^{p_i(N)}\to\!\!\!\!\to E_i$ where the right ${\cal O}_X$-module structure on $\Lambda_r$ is used for making the tensor product. Note that $G_i=PGL(p_i(N))$ has a natural action on $Q_i$. Simpson defines a locally closed subscheme $C_i\subset Q_i$ which is invariant under $G_i$, and a local universal family $E$ of $\Lambda$-modules parametrized by $C_i$ with the property that for two morphisms $T\to C_i$, the pulled-back families are isomorphic over an open cover $T'\to T$ if and only if the two morphisms define $T'$ valued points of $C_i$ which are in a common orbit of $G_i(T')$. On the product $C_0\times C_a$, consider the linear schemes $A_a$ and $B_a$ which respectively correspond to $Hom_{\Lambda}(E_a,E_0|S_a)$ and $Hom_{\Lambda}(E_0|S_a,E_a)$ (see Lemma 2.7 in [N] for the existence and universal property of such linear schemes). Let $F_a$ be the fibered product of $A_a$ and $B_a$ over $C_0\times C_a$. Let $H_a$ be the closed subscheme of $F_a$ where the tuples $(q_0,q_1,t,s)$ satisfy $st=\theta$ and $ts=\theta$. Finally let $H$ be the fibered product of the pullbacks of the $H_a$ to $C= C_0 \times \prod_a C_a$. Note that $H$ parametrizes a tautological family of pre-${\cal D}$-modules on $(X,S)$ in which every semistable pre-${\cal D}$-module with given Hilbert polynomials occurs.
The group $${\cal G} = G_0 \times \prod_a (G_a \times GL(1))$$ has a natural action on $H$, with $$(q_0,q_a,t_a,s_a)\cdot (g_0,g_a,\lambda_a) = (q_0g_0,q_ag_a,(1/\lambda_a)t_a,\lambda_a s_a).$$ It is clear from the definitions of $H$ and this action that two points of $H$ parametrise isomorphic pre-${\cal D}$-modules if and only if they lie in the same ${\cal G}$ orbit. The morphism $H\to C_0\times \prod _aC_a$ is an affine morphism which is ${\cal G}$-equivariant, and by Simpson's construction of moduli for $\Lambda$-modules, the action of ${\cal G}$ on $C_0\times \prod _aC_a$ admits a good quotient in the sense of geometric invariant theory. Hence a good quotient $H//{\cal G}$ exists by Ramanathan's lemma (see Proposition 3.12 in [Ne]), which by construction and universal properties of good quotients is the coarse moduli scheme of semistable pre-${\cal D}$-modules with given Hilbert polynomials. Note that under a good quotient in the sense of geometric invariant theory, two different orbits can in some cases get mapped to the same point (get identified in the quotient). In the rest of this section, we determine the closed points of the quotient $H//{\cal G}$. \refstepcounter{theorem}\paragraph{Remark \thetheorem} For simplicity of notation, we assume in the rest of this section that $S$ has only one connected component. It will be clear to the reader how to generalize the discussion when $S$ has more components. \inter{Reduced modules} Assuming for simplicity that $S$ has only one connected component, so that ${\cal G} = {\cal H} \times GL(1)$ where ${\cal H}=G_0 \times G_1$, we can construct the quotient $H//{\cal G}$ in two steps: first we go modulo the factor $GL(1)$, and then take the quotient of $R=H//GL(1)$ by the remaining factor ${\cal H}$. The following lemma is obvious. \begin{lemma}\label{lem4.5} Let $T$ be a scheme of finite type over $k$, and let $V\to T$ and $W\to T$ be linear schemes over $T$.
Let $V\times W$ be their fibered product (direct sum) over $T$, and let $V\otimes W$ be their tensor product. Let $\phi :V\times W\to V\otimes W$ be the tensor product morphism. Then its schematic image $D\subset V\otimes W$ is a closed subscheme which (i) parametrizes all decomposable tensors, and (ii) base changes correctly. Let $GL(1,k)$ act on $V\times W$ by the formula $\lambda \cdot (v,w) = (\lambda v, (1/\lambda )w)$. Then $\phi :V\times W\to D$ is a good quotient for this action. \end{lemma} \paragraph{Proof} The statement is local on the base, so we can assume that (i) the base $T$ is an affine scheme, and (ii) both the linear schemes are closed linear subschemes of trivial vector bundles on the base, that is, $V\subset A^m_T$ and $W\subset A^n_T$ are subschemes defined respectively by homogeneous linear equations $f_p(x_i)=0$ and $g_q(y_j)=0$ in the coordinates $x_i$ on $A^m_T$ and $y_j$ on $A^n_T$. Let $z_{i,j}$ be the coordinates on $A^{mn}_T$, so that the map $\otimes :A^m_T\times _T A^n_T \to A^{mn}_T$ sends $(x_i,y_j) \mapsto (z_{i,j})$ where $z_{i,j}=x_iy_j$. Its schematic image is the subscheme $C$ of $A^{mn}_T$ defined by the equations $z_{a,b}z_{c,d} - z_{a,d}z_{c,b} = 0$, that is, the matrix $(z_{i,j})$ should have rank $ < 2$. Take $D$ to be the subscheme of $C$ defined by the equations $f_p(z_{1,j},\ldots ,z_{m,j}) = 0$ and $g_q(z_{i,1},\ldots ,z_{i,n}) = 0$. Now lemma \ref{lem4.5} follows from this local coordinate description. \paragraph{} The above lemma implies the following. To get the quotient $H//GL(1)$, we just have to replace the fibered product $A\times B$ over $C_0\times C_1$ by the closed subscheme $Z\subset D\subset A\otimes B$, where $D$ is the closed subscheme consisting of decomposable tensors $u$, and $Z$ is the closed subscheme of $D$ defined as follows.
Let $\mu _0$ and $\mu _1$ be the canonical morphisms from $A\otimes B$ to the linear schemes representing $End_{\Lambda} (E_0|S)$ and $End_{\Lambda} (E_1)$ respectively. Then $Z$ is defined to consist of all $u$ such that $\mu _0(u)=\theta \in End _{\Lambda}(E_0|S) $ and $\mu _1(u) = \theta \in End_{\Lambda}(E_1)$. There is a canonical $GL(1)$ quotient morphism $A\times B \to D$ over $C_0\times C_1$, which sends $(s,t)\mapsto u=s\otimes t$. These give the $GL(1)$ quotient map $H\to Z$. Note that the map $H\to C_0\times C_1$ is ${\cal G}$-equivariant, and the action of $GL(1)$ on $C_0\times C_1$ is trivial, so we get an ${\cal H}$-equivariant map $Z\to C_0\times C_1$. In order to describe the identifications brought about by the above quotient, we make the following definition. \begin{definition}\rm A {\sl reduced module} is a tuple $(E_0,E_1,u)$ where $E_0$ and $E_1$ are as in a pre-${\cal D}$-module, and $u\in Hom_{\Lambda}(E_1,E_0|S)\otimes Hom_{\Lambda}(E_0,E_1)$ is a decomposable tensor, such that the canonical maps $\mu _0:Hom_{\Lambda}(E_1,E_0|S)\otimes Hom_{\Lambda}(E_0,E_1) \to End_{\Lambda}(E_0|S)$ and $\mu _1: Hom_{\Lambda}(E_1,E_0|S)\otimes Hom_{\Lambda}(E_0,E_1) \to End_{\Lambda}(E_1)$ both map $u$ to the endomorphism $\theta$ of the appropriate module. In other words, there exist $s$ and $t$ such that $(E_0,E_1,s,t)$ is a pre-${\cal D}$-module, and $u=s\otimes t$. We say that the reduced module $(E_0,E_1,s\otimes t)$ is the associated reduced module of the pre-${\cal D}$-module $(E_0,E_1,s,t)$. Moreover, we say that a reduced module is semistable if it is associated to a semistable pre-${\cal D}$-module. \end{definition} \begin{lemma} Let $V$ and $W$ be two vector spaces, and let $v,v'\in V$ and $w,w'\in W$. Then (1) if $v\otimes w=0$ then $v=0$ or $w=0$; (2) if $v\otimes w=v'\otimes w'\ne 0$, then there exists a scalar $\lambda \ne 0$ such that $v=\lambda v'$ and $w = (1/\lambda ) w'$.
\end{lemma} \refstepcounter{theorem}\paragraph{Remark \thetheorem} The above lemma shows that if ${\bf E}$ and ${\bf E} '$ are two non-isomorphic pre-${\cal D}$-modules whose associated reduced modules are isomorphic, then we must have $s\otimes t =s'\otimes t'=0$. In particular, $\theta$ will act by zero on $E_0|S$ and also on $E_1$ for such pre-${\cal D}$-modules as $st=0$ and $ts=0$. \inter{S-equivalence and stability} \begin{definition}\rm By a {\sl filtration} of a logarithmic connection $E$ we shall mean an increasing filtration $E_p$ indexed by $Z\!\!\!Z$ by subvector bundles which are logarithmic connections. Similarly, a filtration on a ${\cal D} _X[\log S]$-module $F$ supported on $S$ will mean a filtration of the vector bundle $F\vert S$ by subbundles $F_p$ which are ${\cal D} _X[\log S]$-submodules. A filtration of a pre-${\cal D}$-module $(E_0,E_1,s,t)$ is an increasing filtration $(E_i)_p$ of the logarithmic connection $E_i$ ($i=0,1$) such that $s$ and $t$ are filtered morphisms with respect to the specialized filtration of $E_0$ and the filtration of $E_1$. A filtration of a reduced module $(E_0,E_1,u)$, with $u=s\otimes t$ where we take $s=0$ and $t=0$ if $u=0$, is a filtration of the pre-${\cal D}$-module $(E_0,E_1,s,t)$. We shall always assume that this filtration is exhaustive, that is, $(E_i)_p=0$ for $p\ll0$ and $(E_i)_p=E_i$ for $p\gg0$. A filtration is {\sl nontrivial} if some $(E_i)_p$ is a proper subbundle of $E_i$ for $i=0$ or $1$. \end{definition} For a filtered pre-${\cal D}$-module (or reduced module), each step of the filtration as well as the graded object are pre-${\cal D}$-modules (or reduced modules). 
\refstepcounter{theorem}\paragraph{Remark \thetheorem}\label{deform} There is a natural family $({\bf E}_\tau)_{\tau\in A^1}^{}$ of pre-${\cal D}$-modules or reduced modules parametrized by the affine line $A^1=\mathop{\rm Spec}\nolimits C\!\!\!\!I[\tau]$, whose fibre at $\tau=0$ is the graded object ${\bf E}'$ and whose fibre at $\tau_0\neq0$ is isomorphic to the original pre-${\cal D}$-module or reduced module ${\bf E}$: put (for $i=0,1$) ${\cal E}_i=\oplus_{p\in Z\!\!\!Z}^{}(E_i)_p\tau^p\subset E_i\otimes C\!\!\!\!I[\tau,\tau_{}^{-1}]$ and the relative ${\cal D}\log$-structure is the natural one. \begin{definition}\rm A {\sl special filtration of a coherent ${\cal O} _X$-module} $E$ is a filtration for which each $E_p$ has the same normalized Hilbert polynomial as $E$. A {\sl special filtration of a reduced module} $(E_0,E_1,u)$ is a filtration of this reduced module which is special on $E_0$ and on $E_1$. \end{definition} The graded reduced module ${\bf E}'$ associated with a special filtration of a semistable reduced module ${\bf E}$ is again semistable. \begin{definition}\rm The equivalence relation on the set of isomorphism classes of all semistable reduced modules generated by this relation (by which ${\bf E} '$ is related to ${\bf E}$) will be called S-equivalence. \end{definition} \begin{definition}\label{defstable}\rm We say that a semistable reduced module is {\sl stable} if it does not admit any nontrivial special filtration. \end{definition} \refstepcounter{theorem}\paragraph{Remarks \thetheorem} (1) Note in particular that if each $E_0$, $E_a$ is stable as a $\Lambda$-module, then the reduced module ${\bf E} $ is stable. Consequently we have the following.
Though the definition of stability depends on the choice of an ample line bundle $L$ on $X$, for any pre-${\cal D}$-module such that the monodromy representation of $E_0\vert (X-S)$ is irreducible, and the monodromy representation of $\pi _a ^*E_a \vert (N_{S_a,X}-S_a)$ is irreducible for each component $S_a$, the corresponding reduced module is stable irrespective of that choice. The converse is not true -- a pre-${\cal D}$-module whose reduced module is stable need not have irreducible monodromies. Example 2.4.1 in [N] gives a logarithmic connection whose associated pre-${\cal D}$-module (in which $s$ is the identity and $t$ is the residue) gives a stable reduced module, but the monodromies are not irreducible. (2) If $u=0$, the reduced module is stable if and only if $E_0$ and each $E_a$ is stable. (3) When $X$ is a curve, a reduced module with $u\ne 0$ is stable if and only if the logarithmic connection $E_0$ is stable. If $u=0$, each $E_a$ must moreover have length at most one as an ${\cal O}_X$-module. Hence over curves, there is a plentiful supply of stable reduced modules. \begin{lemma}\label{uisflat} Let $(E_0,E_1,u)$ be a reduced module and let $(E_i)_p$ be filtrations of $E_i$ ($i=0,1$). Then $s$ and $t$ are filtered morphisms with respect to the specialized filtrations if and only if there exists some point $P\in S$ such that the restrictions of $s$ and $t$ to the fibre $E_{i,P}$ at $P$ are filtered with respect to the restricted filtrations. \end{lemma} \paragraph{Proof} This follows from the fact that if a section $\sigma$ of a vector bundle with a flat connection has a value $\sigma(P)$ in the fibre at $P$ of a flat subbundle, then it is a section of this subbundle: we apply this to $s$ (resp. $t$) as a section of $Hom((E_0)_{p|S},(E_1)_{|S})$ (resp. $Hom((E_1)_{p|S},(E_0)_{|S})$). \inter{A criterion for stability} Let ${\bf E}=(E_0,E_1,u=s\otimes t)$ be a reduced module.
Assume that we are given filtrations $0=F_0(E_i)\subset F_1(E_i)\subset\cdots\subset F_{\ell_i}(E_i)=E_i$ of $E_i$ ($i=0,1$) by vector subbundles which are ${\cal D}_X[\log S]$-submodules. For $j=0,\ldots,\ell_0$ let $k(j)$ be the smallest $k$ such that $s(\mathop{\rm sp}\nolimits_SF_j(E_0))\subset F_k(E_1)$ and let $J(s)$ be the graph of the map $j\to k(j)$. A {\sl jump point} is a point $(j,k(j))$ on this graph such that $k(j-1)<k(j)$. Consider also the set $G_s$ consisting of the points on or below the graph: $G_s=\{ (j,k)\mid k\leq k(j)\}$. For $t$ there is an analogous construction: we have a map $k\to j(k)$ and a set $G_t$ consisting of the points on or to the left of the graph $J(t)$: $G_t=\{ (j,k)\mid j\leq j(k)\}$. \begin{definition}\rm $u=s\otimes t$ is {\sl compatible} with the filtrations if the two sets $G_s$ and $G_t$ intersect at most at (common) jump points (where if $u=0$, take $s=0$ and $t=0$). \end{definition} \begin{proposition}\label{nonstable} Let ${\bf E}=(E_0,E_1,u)$ be a semistable reduced module. The following conditions are equivalent: (1) ${\bf E}$ is not stable, (2) there exist nontrivial special filtrations $F_j(E_i)$ ($j=0,\ldots,\ell_i$) of each $E_i$ where all inclusions are proper and $u$ is compatible with these filtrations. \end{proposition} \paragraph{Proof} $(1)\Rightarrow(2)$: If ${\bf E}$ is not stable, we can find two nontrivial special filtrations $(E_0)_p$ and $(E_1)_q$ such that $s$ and $t$ are filtered morphisms. Let $p_j$ ($j=1,\ldots ,\ell_0$) be the set of jumping indices for $(E_0)_p$ and $q_k$ ($k=1,\ldots ,\ell_1$) for $(E_1)_q$. For each $j_0$ and $k_0$ we have $j(k(j_0))\leq j_0$ and $k(j(k_0))\leq k_0$. We define $F_j(E_0)=(E_0)_{p_j}$ and $F_k(E_1)=(E_1)_{q_k}$. We get nontrivial filtrations of $E_0$ and $E_1$ where all inclusions are proper. 
Moreover there cannot exist two distinct points $(j_0,k(j_0))$ and $(j(k_0),k_0)$ with $j_0\leq j(k_0)$ and $k_0\leq k(j_0)$, otherwise we would have $j_0\leq j(k_0)\leq j(k(j_0))\leq j_0$ and the same for $k_0$, so the two points would be equal. Consequently $u$ is compatible with these filtrations. $(2)\Rightarrow(1)$: We shall construct a special filtration $((E_0)_p,(E_1)_q)$ of the reduced module from the filtrations $F_j(E_i)$ of each $E_i$. Choose a polygonal line with only positive slopes, going through each jump point of $G_s$ and for which each jump point of $G_t$ is on or above this line (see figure \ref{fig1}). \setlength{\unitlength}{.5truecm} \begin{figure}[htb] \begin{center} \begin{picture}(10,8)(0,0) \put(0,0){\line(1,0){10}} \put(0,0){\line(0,1){8}} \put(9.5,-.7){$j$} \put(-.5,7.5){$k$} \put(3,4){\circle*{.2}} \put(6,6){\circle*{.2}} \put(8,8){\circle*{.2}} \put(3,4){\line(1,0){3}} \put(6,6){\line(1,0){2}} \put(8,8){\line(1,0){2}} \put(3,0){\line(0,1){4}} \put(6,4){\line(0,1){2}} \put(8,6){\line(0,1){2}} \put(1,1){\circle{.2}} \put(2,3){\circle{.2}} \put(3,4){\circle{.2}} \put(5,5){\circle{.2}} \put(7,7){\circle{.2}} \put(0,1){\line(1,0){1}} \put(1,3){\line(1,0){1}} \put(2,4){\line(1,0){1}} \put(3,5){\line(1,0){2}} \put(5,7){\line(1,0){2}} \put(1,1){\line(0,1){2}} \put(2,3){\line(0,1){1}} \put(3,4){\line(0,1){1}} \put(5,5){\line(0,1){2}} \put(7,7){\line(0,1){1}} \put(6,3){$G_s$} \put(2,6){$G_t$} \multiput(.2,.2)(.2,.2){4}{\circle*{.1}} \multiput(1,1)(.2,.3){10}{\circle*{.1}} \multiput(3,4)(.4,.2){5}{\circle*{.1}} \multiput(5,5)(.2,.2){15}{\circle*{.1}} \end{picture} \caption{\label{fig1}$\bullet=$ jump points of $s$, $\circ=$ jump points of $t$} \end{center} \end{figure} Choose increasing functions $p(j)$ and $q(k)$ such that $p(j)-q(k)$ is identically $0$ on this polygonal line, is $<0$ above it and $>0$ below it (for instance, on each segment $[(j_0,k_0),(j_1,k_1)]$ of this polygonal line, parametrised by $j=j_0+m\varepsilon_1$, 
$k=k_0+m\varepsilon_2$, put $p(j)=p(j_0)+\varepsilon_2(j-j_0)$ and $q(k)=q(k_0)+\varepsilon_1(k-k_0)$, and $p(0)=q(0)=0$). For $p(j)\leq p<p(j+1)$ put $(E_0)_p=F_j(E_0)$ and for $q(k)\leq q<q(k+1)$ put $(E_1)_q=F_k(E_1)$. The filtration $((E_0)_p,(E_1)_q,u)$ is then a nontrivial special filtration of the reduced module ${\bf E}$. \begin{proposition}\label{open} Semistability and stability are Zariski open conditions on the parameter scheme of any family of reduced modules. \end{proposition} \paragraph{Proof} As semistability is an open condition on ${\cal D} _X[\log S]$-modules, it follows that it is an open condition on reduced modules. Now, for any family of semistable reduced modules parametrised by a scheme $T$, all possible special filtrations of the form given by \ref{nonstable} on the specializations of the family are parametrised by a scheme $U$ which is projective over $T$. The image of $U$ in $T$ is the set of non-stable points in $T$, hence its complement is open. \inter{Points of the moduli} We are now ready to prove the following theorem. \begin{theorem} Let $X$ be a projective variety together with an ample line bundle, and let $S\subset X$ be a smooth divisor. (1) There exists a coarse moduli scheme ${\cal P}$ for semistable pre-${\cal D}$-modules on $(X,S)$ with given Hilbert polynomials $p_i$. The scheme ${\cal P}$ is quasiprojective, in particular, separated and of finite type over $C\!\!\!\!I$. (2) The points of ${\cal P}$ correspond to S-equivalence classes of semistable pre-${\cal D}$-modules. (3) The S-equivalence class of a semistable reduced module ${\bf E} $ equals its isomorphism class if and only if ${\bf E} $ is stable. (4) ${\cal P}$ has an open subscheme ${\cal P} ^s$ whose points are the isomorphism classes of all stable reduced modules. This is a coarse moduli for (isomorphism classes of) stable reduced modules. \end{theorem} \paragraph{Proof} Let ${\cal P} = H//{\cal G}$. Then (1) follows by the construction of ${\cal P}$. 
To prove (2), first note that by the existence of the deformation ${\bf E} _t$ (see \ref{deform}) of any reduced module ${\bf E}$ corresponding to a weighted special filtration, and by the separatedness of ${\cal P}$, the reduced module ${\bf E} $ and its limit ${\bf E} '$ go to the same point of ${\cal P}$. Hence an S-equivalence class goes to a common point of ${\cal P}$. For the converse, first recall that ${\cal G} = {\cal H} \times GL(1)$, and the quotient ${\cal P}$ can be constructed in two steps: ${\cal P} = R//{\cal H}$ where $R=H/GL(1)$. The scheme $R$ parametrizes a canonical family of reduced modules. Let the ${\cal H}$-orbit of a point $x$ of $R$ corresponding to the reduced module ${\bf E} $ not be closed in $R$. Let $x_0$ be any of its limit points. Then there exists a 1-parameter subgroup $\lambda$ of ${\cal H}$ such that $x_0 = \lim _{t\to 0} \lambda (t) x$. This defines a map from the affine line $A^1$ to $R$, which sends $t\mapsto \lambda (t)x$. Let ${\bf E} _t$ be the pullback of the tautological family of reduced modules parametrized by $R$. Then from the description of the limits of the actions of 1-parameter subgroups on a quot scheme given in section 1 of Simpson [S], it follows that ${\bf E}$ has a special filtration such that the family ${\bf E} _t$ is isomorphic to a deformation of the type constructed in \ref{deform} above. Hence the reduced modules parametrized by $x$ and $x_0$ are S-equivalent. This proves (2). If the orbit of $x$ is not closed, then it has a limit $x_0$ outside it under a 1-parameter subgroup, which by the above represents a reduced module ${\bf E} '$ which is the limit of ${\bf E} $ under a special filtration. As $x_0$ lies outside the orbit of $x$, the reduced module ${\bf E} '$ is not isomorphic to ${\bf E} $, so the special filtration must be nontrivial. Hence ${\bf E} $ is not stable. Hence stable points have closed orbits in $R$. If $x$ represents a stable reduced module, then $x$ cannot be the limit point of any other orbit. 
For, if $x$ is a limit point of the orbit of $y$, then by openness of stability (see \ref{open}), $y$ should again represent a stable reduced module. But then by above, the orbit of $y$ is closed. This proves (3). Let $H^s\subset H$ be the open subscheme where the corresponding pre-${\cal D}$-module is stable. By (2) and (3) above, $H^s$ is saturated under the quotient map $H \to {\cal P}$, hence by properties of a good quotient, its image ${\cal P} ^s$ is open in ${\cal P}$. Moreover by (2) and (3) above, $H^s$ is the inverse image of ${\cal P}^ s$. Hence $H^s \to {\cal P} ^s$ is a good quotient, which again by (2) and (3) is an orbit space. Hence points of ${\cal P} ^s$ are exactly the isomorphism classes of stable reduced modules, which proves (4). \section{Perverse sheaves, Verdier objects and finite descriptions} Let $X$ be a nonsingular projective variety and let $S$ be a smooth divisor. The abelian category of perverse sheaves constructible with respect to the stratification $(X-S,S)$ of $X$ is equivalent to the category of `Verdier objects' on $(X,S)$. Before defining this category, let us recall the notion of specialization along $S$. Let ${\cal E}$ be a local system (of finite dimensional vector spaces) on $X-S$. The {\sl specialization} $\mathop{\rm sp}\nolimits_S{\cal E}$ is a local system (of the same rank) on $N_{S,X}^{}-S$ equipped with an endomorphism $\tau_{\cal E}$. It is constructed using the nearby cycle functor $\psi$ defined by Deligne applied to the morphism which describes the canonical deformation from $X$ to the normal bundle $N_{S,X}^{}$. A local system ${\cal F}$ on $N_{S,X}^{}-S$ equipped with an endomorphism $\tau_{\cal F}$ is said to be {\sl monodromic} if $\tau_{\cal F}$ is equal to the monodromy of ${\cal F}$ around $S$. Then $\mathop{\rm sp}\nolimits_S{\cal E}$ is monodromic. 
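In the simplest local model the specialization reduces to familiar linear algebra (a hedged illustration, not used in the sequel): take for $X$ a disk with coordinate $z$ and $S=\{z=0\}$.

```latex
% A local system {\cal E} on X-S is then a vector space V together with a
% monodromy automorphism T; the canonical deformation identifies
% N_{S,X}-S with a punctured line, and the nearby cycle construction
% returns the same data:
\[
\mathop{\rm sp}\nolimits_S{\cal E}\ \longleftrightarrow\ (V,\ \tau_{\cal E}=T),
\]
% so the endomorphism \tau_{\cal E} is precisely the monodromy of
% sp_S{\cal E} around S, i.e. the specialization is monodromic
% by construction.
```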
\begin{definition}\rm A {\sl Verdier object} on $(X,S)$ is a tuple ${\bf V}=({\cal E},{\cal F},C,V)$ where (1) ${\cal E}$ is a local system on $X-S$, (2) ${\cal F}$ is a monodromic local system on $N_{S,X}^{}-S$, (3) $C:\mathop{\rm sp}\nolimits_S{\cal E}\to{\cal F}$ and $V:{\cal F}\to\mathop{\rm sp}\nolimits_S{\cal E}$ are morphisms of (monodromic) local systems on $N_{S,X}^{}-S$ satisfying (4) $CV=\tau_{\cal F}-\mathop{\rm id}\nolimits$ and $VC=\tau_{\cal E}-\mathop{\rm id}\nolimits$. \end{definition} \refstepcounter{theorem}\paragraph{Remark \thetheorem} The morphisms between Verdier objects on $(X,S)$ are defined in an obvious way, and the category of Verdier objects is an abelian category in which each object has finite length. Hence the following definition makes sense. \begin{definition}\rm We say that two Verdier objects are {\sl S-equivalent} if they admit Jordan-H\"older filtrations such that the corresponding graded objects are isomorphic. \end{definition} \refstepcounter{theorem}\paragraph{Remark \thetheorem} Let $B$ be a tubular neighbourhood of $S$ in $X$, diffeomorphic to a tubular neighbourhood of $S$ in $N_{S,X}^{}$. Put $B^*=B-S$. The specialized local system $\mathop{\rm sp}\nolimits_S{\cal E}$ can be realized as the restriction of ${\cal E}$ to $B^*$, its monodromy $\tau_{\cal E}$ at some point $x\in B^*$ being the monodromy along the circle normal to $S$ going through $x$. Hence a Verdier object can also be described as a tuple ${\bf V}$ where ${\cal F}$ is a local system on $B^*$ and $C$, $V$ are morphisms between ${\cal E}|B^*$ and ${\cal F}$ subject to the same condition (4). \bigskip The notion of a family of perverse sheaves is not straightforward. We can however define the notion of a family of Verdier objects. Let us first define a family of local systems on $X-S$ (or on $N_{S,X}^{}-S$) parametrized by a scheme $T$. This is a locally free $p^{-1}{\cal O}_T$-module of finite rank, where $p$ denotes the projection $(X-S)\times T\to T$. 
Morphisms between such objects are $p^{-1}{\cal O}_T$-linear. The notion of a family of Verdier objects is then straightforward. In order to construct a moduli space for Verdier objects, we shall introduce the category of `finite descriptions' on $(X,S)$. Let us fix the following data (D): (D1) finitely generated groups $G$ and $G_a$ for each component $S_a$ of $S$, (D2) for each $a$ an element $\tau_a$ which lies in the center of $G_a$ and a group homomorphism $\phi_a:G_a\to G$. \begin{definition}\label{def2}\rm A finite description ${\bf D}$ (with respect to the data (D)) is a tuple $(E,\rho,F_a,\rho_a,C_a,V_a)$ where (1) $\rho:G\to GL(E)$ is a finite dimensional complex representation of the group $G$; for each $a$ we will regard $E$ as a representation of $G_a$ via the homomorphism $\phi_a:G_a\to G$; (2) for each $a$, $\rho_a:G_a\to GL(F_a)$ is a finite dimensional complex representation of the group $G_a$; (3) for each $a$, $C_a:E\to F_a$ and $V_a: F_a\to E$ are $G_a$-equivariant linear maps such that $V_aC_a=\rho(\tau_a)-\mathop{\rm id}\nolimits$ in $GL(E)$ and $C_aV_a=\rho_a(\tau_a)-\mathop{\rm id}\nolimits$ in $GL(F_a)$. \end{definition} A morphism between two finite descriptions has an obvious definition. \refstepcounter{theorem}\paragraph{Remark \thetheorem}\label{rem4} Let $P_0\in X-S$ and let $P_a$ be a point in the component $B^*_a$ of $B^*$. Choose paths $\sigma_a:[0,1]\to X-S$ with $\sigma_a(0)=P_0$ and $\sigma_a(1)=P_a$. Let $G$ be the fundamental group $\pi_1(X-S,P_0)$, and let $G_a = \pi_1 (B^*_a,P_a )$. Let $\tau_a \in G_a $ be the positive loop based at $P_a $ in the fiber of $B^*_a\to S_a $. Finally, let $\phi_a:G_a\to G$ be induced by the inclusion $B^*_a\hookrightarrow X-S$ by using the path $\sigma_a$ to change base points. Then, under the equivalence between representations of the fundamental group and local systems, the category of finite descriptions with respect to the previous data is equivalent to the category of Verdier objects on $(X,S)$. 
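Condition (3) of definition \ref{def2} is internally consistent: for arbitrary linear maps $C_a$ and $V_a$, the operators $\mathop{\rm id}+V_aC_a$ and $\mathop{\rm id}+C_aV_a$ are invertible simultaneously (Sylvester's determinant identity), so they can serve as the actions $\rho(\tau_a)$ and $\rho_a(\tau_a)$, for which $C_a$ and $V_a$ are then automatically equivariant. A minimal numerical sketch (the dimensions and matrices are hypothetical, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_a = 4, 3                        # dim E, dim F_a (hypothetical data)
C = rng.standard_normal((n_a, n))    # C_a : E -> F_a
V = rng.standard_normal((n, n_a))    # V_a : F_a -> E

rho_tau   = np.eye(n)   + V @ C      # rho(tau_a)   = id_E   + V_a C_a
rho_a_tau = np.eye(n_a) + C @ V      # rho_a(tau_a) = id_F_a + C_a V_a

# Sylvester's determinant identity det(id + VC) = det(id + CV):
# tau_a acts invertibly on E if and only if it acts invertibly on F_a.
assert np.isclose(np.linalg.det(rho_tau), np.linalg.det(rho_a_tau))

# C_a and V_a are equivariant for these actions:
# C(id + VC) = C + CVC = (id + CV)C, and (id + VC)V = V(id + CV).
assert np.allclose(C @ rho_tau, rho_a_tau @ C)
assert np.allclose(rho_tau @ V, V @ rho_a_tau)
```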
\refstepcounter{theorem}\paragraph{Remark \thetheorem} The category of finite descriptions is an abelian category in which each object has finite length. Therefore the notion of S-equivalence as in definition 5.3 above makes sense for finite descriptions. \begin{definition}\rm A family of finite descriptions parametrized by a scheme $T$ is a tuple $(E_T, \rho_T,F_{T,a}, \rho_{T,a}, C_{T,a}, V_{T,a})$ where $E_T$ and the $F_{T,a}$ are locally free sheaves on $T$, $\rho_T$ and $\rho_{T,a}$ are families of representations into these, and the $C_{T,a}$ and $V_{T,a}$ are ${\cal O}_T$-homomorphisms of sheaves satisfying the analogues of condition \ref{def2}.3 over $T$. The pullback of a family under a morphism $T'\to T$ is defined in an obvious way, giving a fibered category. Let $PS$ denote the corresponding groupoid. \end{definition} \refstepcounter{theorem}\paragraph{Remark \thetheorem} It can be checked (we omit the details) that the groupoid $PS$ is an Artin algebraic stack. \section{Moduli for perverse sheaves} Let us fix data (D) as above. \begin{theorem} There exists an affine scheme of finite type over $C\!\!\!\!I$, which is a coarse moduli scheme for finite descriptions ${\bf D}=(E,\rho,F_a,\rho_a,C_a,V_a)$ relative to {\rm (D)} with fixed numerical data $n=\dim E$ and $n_a=\dim F_a$. The closed points of this moduli scheme are the S-equivalence classes of finite descriptions with given numerical data $(n,n_a)$. \end{theorem} Using remark \ref{rem4} we get \begin{corollary} There exists an affine scheme of finite type over $C\!\!\!\!I$, which is a coarse moduli scheme for Verdier objects ${\bf V}=({\cal E},{\cal F},C,V)$ (or perverse sheaves on $(X,S)$) with fixed numerical data $n={\rm rank} {\cal E}$ and $n_a={\rm rank}{\cal F}|B^*_a$. The closed points of this moduli scheme are the S-equivalence classes of Verdier objects with given numerical data $(n,n_a)$. 
\end{corollary} The above corollary and its proof do not need $X$ to be a complex projective variety, and the algebraic structure of $X$ does not matter. All that is needed is that the fundamental group of $X-S$ and that of each $S_a$ are finitely generated. The rest of this section contains the proof of the above theorem. \begin{proposition}\label{prop5} (1) Let ${\bf D}$ be a finite description, and let $\mathop{\rm gr}\nolimits({\bf D})$ be its semisimplification. Then there exists a family ${\bf D}_T$ of finite descriptions parametrized by the affine line $T=A^1$ such that the specialization ${\bf D}_0$ at the origin $0\in T$ is isomorphic to $\mathop{\rm gr}\nolimits({\bf D})$, while ${\bf D}_t$ is isomorphic to ${\bf D}$ at any $t\ne 0$. (2) In any family of finite descriptions parametrized by a scheme $T$, each S-equiva\-len\-ce class (Jordan-H\"older class) is Zariski closed in $T$. \end{proposition} \paragraph{Proof} The statement (1) has a proof by standard arguments which we omit. To prove (2), first note that if ${\bf D}_T$ is any family and ${\bf D}'$ a simple finite description, then the condition that ${\bf D}' \times \{ t \}$ is a quotient of ${\bf D}_t$ defines a closed subscheme of $T$. From this, (2) follows easily. \paragraph{Construction of Moduli} Let $E$ and $F_a$ be vector spaces with $\dim(E)=n$ and $\dim(F_a) =n_a$. Let $\cal R$ be the affine scheme of all representations $\rho$ of $G$ in $E$, constructed as follows. Let $h_1,\ldots , h_r$ be generators of $G$. Then $\cal R$ is the closed subscheme of the product $GL(E)^r$ defined by the relations between the generators. Similarly, choose generators for each $G_a$, and let ${\cal R}_a$ be the corresponding affine scheme of all representations $\rho_a$ of $G_a$ in $F_a$. Let $$A \subset {\cal R} \times \prod_a ({\cal R}_a \times Hom(E,F_a) \times Hom(F_a,E))$$ be the closed subscheme defined by condition \ref{def2}.3 above. 
Its closed points are tuples $(\rho,\rho_a, C_a, V_a)$ where the linear maps $C_a:E\to F_a$ and $V_a:F_a\to E$ are $G_a$-equivariant under the representations $\rho \phi_a: G_a\to GL(E)$ and $\rho_a: G_a\to GL(F_a)$, and satisfy $V_aC_a = \rho (\tau_a )-1$ in $GL(E)$, and $C_aV_a = \rho_a(\tau_a) -1$ in $GL(F_a)$ for each $a$. The product group ${\cal G} =GL(E) \times (\prod_a GL(F_a))$ acts on the affine scheme $A$ by the formula $$(\rho ,\rho_a, C_a,V_a)\cdot (g,g_a) = (g^{-1}\rho g, g_a^{-1}\rho_a g_a,g_a^{-1}C_ag, g^{-1}V_ag_a).$$ The orbits under this action are exactly the isomorphism classes of finite descriptions. The moduli of finite descriptions is the good quotient ${\cal F} =A//{\cal G}$, which exists as $A$ is affine and ${\cal G}$ is reductive. It is an affine scheme of finite type over $C\!\!\!\!I$. It follows from \ref{prop5}.1 and \ref{prop5}.2 and properties of a good quotient that the Zariski closures of two orbits intersect if and only if the two finite descriptions are S-equivalent. Hence closed points of ${\cal F}$ are S-equivalence classes (Jordan-H\"older classes) of finite descriptions. \section{Riemann-Hilbert morphism} To any Malgrange object ${\bf M}$, there is an obvious associated Verdier object ${\bf V}({\bf M})$ obtained by applying the de~Rham functor to each component of ${\bf M}$. This defines a functor, which is in fact an equivalence of categories from Malgrange objects to Verdier objects. We have already defined a functor $\eta$ from pre-${\cal D}$-modules with good residual eigenvalues to Malgrange objects. Composing, we get an exact functor from pre-${\cal D}$-modules with good residual eigenvalues to Verdier objects. Choosing base points in $X$ and paths as in remark \ref{rem4} we get an exact functor ${\cal R\!\!H}$ from pre-${\cal D}$-modules to finite descriptions. 
This construction works equally well for families of pre-${\cal D}$-modules, giving us a holomorphic family ${\cal R\!\!H} ({\bf E}_T)$ of Verdier objects (or finite descriptions) starting from a holomorphic family ${\bf E}_T$ of pre-${\cal D}$-modules with good residual eigenvalues. \refstepcounter{theorem}\paragraph{Remark \thetheorem} Even if ${\bf E}_T$ is an algebraic family of pre-${\cal D}$-modules with good residual eigenvalues, the associated family ${\cal R\!\!H} ({\bf E}_T)$ of Verdier objects may not be algebraic. \refstepcounter{theorem}\paragraph{Remark \thetheorem} If a semistable pre-${\cal D}$-module has good residual eigenvalues, then any other semistable pre-${\cal D}$-module in its S-equivalence class has (the same) good residual eigenvalues. Hence the analytic open subset $T_g$ of the parameter space $T$ of any analytic family of semistable pre-${\cal D}$-modules defined by the condition that residual eigenvalues are good is saturated under S-equivalence. \begin{lemma} If two semistable pre-${\cal D}$-modules with good residual eigenvalues are S-equivalent (in the sense defined above for reduced modules), then the associated finite descriptions are S-equivalent (that is, Jordan-H\"older equivalent). \end{lemma} \paragraph{Proof} Let ${\bf E} =(E_0,E_1,s,t)$ be a pre-${\cal D}$-module with good residual eigenvalues (that is, the logarithmic connection $E_0$ has good residual eigenvalues on each component of $S$) such that $s\otimes t=0$. Then one can easily construct a family of pre-${\cal D}$-modules parametrized by the affine line $A^1$ which is the constant family ${\bf E} $ outside some point $P\in A^1$, and specializes at $P$ to ${\bf E} '=(E_0,E_1,0,0)$. Let $\phi :A^1 \to {\cal F}$ be the resulting morphism to the moduli ${\cal F}$ of finite descriptions. By construction, $\phi$ is constant on $A^1 -\{P\}$, and so as ${\cal F}$ is separated, $\phi$ is constant. 
As the points of ${\cal F}$ are the S-equivalence classes of finite descriptions, it follows that the finite descriptions corresponding to ${\bf E} $ and ${\bf E} '$ are S-equivalent. Hence the S-equivalence class of the finite description associated to a pre-${\cal D}$-module depends only on the reduced module made from the pre-${\cal D}$-module. Now we must show that any two S-equivalent (in the sense defined above) reduced semistable modules have associated finite descriptions which are again S-equivalent (Jordan-H\"older equivalent). This follows from the deformation given in \ref{deform} by using the separatedness of ${\cal F}$ as above. \bigskip Now consider the moduli ${\cal P} = H//{\cal G}$ of semistable pre-${\cal D}$-modules. Let $H_g$ be the analytic open subspace of $H$ where the family parametrized by $H$ has good residual eigenvalues. By the above remark, $H_g$ is saturated under $H\to {\cal P}$. Hence its image ${\cal P} _g\subset {\cal P}$ is analytic open. Let $\phi :H_g \to {\cal F}$ be the classifying map to the moduli ${\cal F}$ of finite descriptions for the tautological family of pre-${\cal D}$-modules parametrized by $H$, which is defined because of the above lemma. By the analytic universal property of GIT quotients (see Proposition 5.5 of Simpson [S] and the remark below), $\phi$ factors through an analytic map ${\cal R\!\!H} :{\cal P} _g \to {\cal F}$, which we call the {\sl Riemann-Hilbert morphism}. \refstepcounter{theorem}\paragraph{Remark \thetheorem} In order to apply Proposition 5.5 of [S], note that a ${\cal G}$-linear ample line bundle can be given on $H$ such that all points of $H$ are semistable. Moreover, though Proposition 5.5 in [S] is stated for semisimple groups, its proof works for reductive groups. 
\refstepcounter{theorem}\paragraph{Remark \thetheorem} The Riemann-Hilbert morphism can also be thought of as a morphism from the analytic stack of pre-${\cal D}$-modules with good residual eigenvalues to the analytic stack of perverse sheaves. \section{Some properties of the Riemann-Hilbert morphism} In this section we prove some basic properties of the morphism ${\cal R\!\!H}$, which can be interpreted either at the stack level or at the moduli level. \begin{lemma}[Relative Deligne construction]\label{lemdel} (1) Let $T$ be the spectrum of an Artin local algebra of finite type over $C\!\!\!\!I$, and let $\rho_T$ be a family of representations of $G$ (the fundamental group of $X-S$ at base point $P_0$) parametrized by $T$. Let $E$ be a logarithmic connection with residual eigenvalues not differing by nonzero integers, such that the monodromy of $E$ equals $\rho$, the specialization of $\rho_T$. Then there exists a family $E_T$ of logarithmic connections parametrized by $T$ such that $E_0=E$ and $E_T$ has monodromy $\rho_T$. (2) A similar statement is true for analytic germs of $G$-representations. \end{lemma} \paragraph{Proof} For each $a$, choose a fundamental domain $\Omega_a$ for the exponential map ($z\mapsto \exp (2\pi \sqrt{-1} z)$) such that the eigenvalues of the residue $R_a(E)$ of $E$ along $S_a$ are in the interior of the set $\Omega_a$. As the differential of the exponential map $M(n,C\!\!\!\!I )\to GL(n,C\!\!\!\!I)$ is an isomorphism at all those points of $M(n,C\!\!\!\!I )$ where the eigenvalues do not differ by nonzero integers, using the fundamental domains $\Omega_a$ we can carry out the Deligne construction locally to define a family $E_T$ of logarithmic connections on $(X,S)$ with $E_0=E$, which has the given family of monodromies. Note that for the above to work, we needed the inverse function theorem, which is valid over Artin local algebras. 
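A hedged local sketch of the construction in the proof (the notation is local and illustrative): if $z$ is a local equation of $S_a$ and $M_T$ denotes the family of local monodromies around $S_a$ determined by $\rho_T$, then, with one standard sign convention, one may take on a polydisk

```latex
\[
E_T={\cal O}^n\otimes{\cal O}_T,\qquad
\nabla_T=d-R_{a,T}\,\frac{dz}{z},\qquad
R_{a,T}=\frac{1}{2\pi\sqrt{-1}}\,\log_{\Omega_a}M_T,
\]
```

where $\log_{\Omega_a}$ is the branch of the logarithm with eigenvalues in $\Omega_a$; then $\exp(2\pi\sqrt{-1}\,R_{a,T})=M_T$, so $\nabla_T$ has the prescribed family of monodromies and $R_{a,0}$ is the residue $R_a(E)$.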
\refstepcounter{theorem}\paragraph{Remark \thetheorem} If in the above, the family $\rho_T$ of monodromies is a constant family (that is, pulled back under $T\to \mathop{\rm Spec}\nolimits (C\!\!\!\!I )$), then $E_T$ is also a constant family as follows from Proposition 5.3 of [N]. \begin{proposition}[`Injectivity' of ${\cal R\!\!H} $]\label{propinj} Let ${\bf E}=(E,F,t,s )$ and ${\bf E}'=(E',F',t',s')$ be pre-${\cal D}$-modules having good residual eigenvalues, such that for each $a$, the eigenvalues of the residues of $E$ and $E'$ over $S_a$ belong to a common fundamental domain $\Omega_a$ for the exponential map $\exp :C\!\!\!\!I \to C\!\!\!\!I ^*$, $z\mapsto \exp (2\pi \sqrt{-1}z)$. Then ${\bf E}$ and ${\bf E}'$ are isomorphic if and only if the finite descriptions ${\cal R\!\!H}({\bf E})$ and ${\cal R\!\!H}({\bf E}')$ are isomorphic. \end{proposition} \paragraph{Proof} It is enough to prove that if the associated Malgrange objects ${\bf M}$ and ${\bf M}'$ are isomorphic, then so are the pre-${\cal D}$-modules ${\bf E}$ and ${\bf E}'$. First use the fact that, in a given meromorphic connection $M$ on $X-S$ (or on $N_{S,X}^{}-S$), there exists one and only one logarithmic connection having its residue along $S_a$ in $\Omega_a$ for each $a$, to conclude that $E$ and $E'$ (resp. $F$ and $F'$) are isomorphic logarithmic modules. To obtain the identification between $s$ and $s'$ (resp. $t$ and $t'$), use the fact that these maps are determined by their value at a point in each connected component $N_{S_a,X}^{}-S_a$ of $N_{S,X}^{}-S$ and this value is determined by the corresponding $C_a$ or $C'_a$ (resp. $V_a$ or $V'_a$). \begin{proposition}[Surjectivity of ${\cal R\!\!H}$]\label{propsurj} Let ${\bf D}$ be a finite description, and let $\sigma_a:C\!\!\!\!I ^*\to C\!\!\!\!I $ be set theoretic sections of $z\mapsto \exp (2\pi \sqrt{-1}z)$. 
Then there exists a pre-${\cal D}$-module ${\bf E}$ whose eigenvalues of residue over $S_a$ are in the image of $\sigma_a$, for which ${\cal R\!\!H}({\bf E})$ is isomorphic to ${\bf D}$. \end{proposition} \paragraph{Proof} This follows from proposition \ref{prop3}. \refstepcounter{theorem}\paragraph{Remark \thetheorem} The propositions \ref{propinj} and \ref{propsurj} together say that the set-theoretic fibre of ${\cal R\!\!H}$ over a given finite description is in bijection with the choices of `good' logarithms for the local monodromies of the finite description (here `good' means eigenvalues do not differ by nonzero integers). \begin{proposition}[Tangent level injectivity for ${\cal R\!\!H}$]\label{propinfinj} Let $(E,F,t,s)_T$ be a family of pre-${\cal D}$-modules having good residual eigenvalues parametrized by the spectrum $T$ of an Artinian local algebra. Let the family ${\cal R\!\!H}(E,F,t,s)_T$ of finite descriptions parametrized by $T$ be constant (pulled back under $T\to \specC\!\!\!\!I$). Then the family $(E,F,t,s)_T$ is also constant. \end{proposition} \paragraph{Proof} This is just the rigidity result of proposition \ref{prop4}. \begin{proposition}[Infinitesimal surjectivity for ${\cal R\!\!H}$]\label{propinfsurj} Let $T$ be the spectrum of an Artin local algebra of finite type over $C\!\!\!\!I$, and let ${\bf D}$ be a family of finite descriptions parametrized by $T$. Let ${\bf E}$ be a pre-${\cal D}$-module having good residual eigenvalues such that ${\cal R\!\!H}({\bf E})={\bf D}_{\xi}$, the restriction of ${\bf D}$ over the closed point $\xi$ of $T$. Then there exists a family ${\bf E}'_T$ of pre-${\cal D}$-modules having good residual eigenvalues with ${\bf E}'_{\xi}={\bf E}$ and ${\cal R\!\!H}({\bf E}'_T)={\bf D}_T$. \end{proposition} \paragraph{Proof} This follows from lemma \ref{lemdel} and the proof of proposition \ref{prop3} which works for families over Artin local algebras. 
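For instance, in a hedged rank-one illustration (not taken from the text): if $\dim E=1$ and the local monodromy of ${\bf D}$ around $S_a$ is $\lambda_a\in C\!\!\!\!I ^*$, then every choice of logarithm is good, since a single eigenvalue cannot differ from itself by a nonzero integer, and the fibre of ${\cal R\!\!H}$ described in the remark above becomes

```latex
\[
{\cal R\!\!H}^{-1}({\bf D})\ \longleftrightarrow\
\prod_a\left\{\alpha_a\in C\!\!\!\!I \ \mid\
\exp(2\pi\sqrt{-1}\,\alpha_a)=\lambda_a\right\}
\ \cong\ \prod_a\left(\alpha_a^0+Z\!\!\!Z\right),
\]
```

where $\alpha_a^0$ is any fixed choice; the different residues are realized by twisting a fixed logarithmic extension by the line bundles ${\cal O}_X(\sum_a k_aS_a)$.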
\begin{theorem} The analytic open substack of the stack (or analytic open subset of the moduli) of pre-${\cal D}$-modules on $(X,S)$, where ${\bf E} $ has good residual eigenvalues, is an analytic spread over the stack (or moduli) of perverse sheaves on $(X,S)$ under the Riemann-Hilbert morphism. \end{theorem} \paragraph{Proof} This follows from propositions \ref{propsurj}, \ref{propinfinj} and \ref{propinfsurj} above. Note that we have not defined ${\cal R\!\!H}$ on the closed analytic subset $T_o$ of the parameter space of a family where ${\bf E} $ does not have good residual eigenvalues. Note that $T_o$ is defined by a `codimension one' analytic condition, that is, if $T$ is nonsingular, and if $T_o$ is a nonempty and proper subset of $T$, then $T_o$ has codimension 1 in $T$. However, it follows from Proposition \ref{propremov} below that the morphism ${\cal R\!\!H}$ on $T-T_o$ can be extended to an open subset of $T$ whose complement has codimension at least two. However, on the extra points to which it gets extended, it may not represent the de Rham functor. \begin{proposition}[Removable singularities for ${\cal R\!\!H}$]\label{propremov} Let $T$ be an open disk in $C\!\!\!\!I$ centered at $0$. Let ${\bf E}_T=(E,F,t,s)_T$ be a holomorphic family of pre-${\cal D}$-modules parametrized by $T$. Let the restriction $E_z$ have good residual eigenvalues for all $z\in T-\{0\}$. Then there exists a holomorphic family ${\bf D}_U$ of finite descriptions parametrized by a neighbourhood $U$ of $0\in T$ such that on $U-\{0\}$, the families ${\cal R\!\!H}({\bf E}_U\vert U-\{0\}) $ and ${\bf D}_{U- \{0\}}$ are isomorphic. \end{proposition} If at $z=0$ the logarithmic connection $E$ does not have good residual eigenvalues, it is possible to change it to obtain a new logarithmic connection having good residual eigenvalues. This is done by the classical `shearing transformation' that we adapt below ({\sl inferior and superior modifications} for pre-${\cal D}$-modules). 
This can be done in families, and has no effect on the Malgrange object, at least locally. \begin{definition}\rm If $E$ is a vector bundle on $X$, and $V$ a subbundle of the restriction $E\vert S$, then the inferior modification ${_VE}$ is the sheaf of all sections of $E$ which lie in $V$ at points of $S$. This is a locally free subsheaf of $E$ (but not generally a subbundle). The superior modification $^VE$ is the vector bundle ${\cal O}_X(S)\otimes {_VE}$. \end{definition} \refstepcounter{theorem}\paragraph{Remark \thetheorem}\label{rem6} If $E\vert S =V\oplus V'$, then we have a canonical isomorphism $${_VE}\vert S \to V \oplus ({\cal N}^*_{S,X}\otimes V')$$ and hence also a canonical isomorphism $$^VE|S\to ({\cal N}_{S,X}\otimes V)\oplus V'$$ \refstepcounter{theorem}\paragraph{Remark \thetheorem}\label{rem7} If $(E,\nabla)$ is a logarithmic connection on $(X,S)$ and $V$ is invariant under the residue, then it can be seen that $_VE$ is invariant under $\nabla$, so is again a logarithmic connection. We call it the inferior modification of the logarithmic connection $E$ along the residue-invariant subbundle $V\subset E\vert S$. It has the effect that the residual eigenvalues on the quotient $(E\vert S)/V$ get increased by $1$ when going from $E$ to $_VE$, while those along $V$ are unchanged. As ${\cal O}_X(S)$ is canonically a logarithmic connection, the superior modification $^VE$ is also a logarithmic connection, with the residual eigenvalues along $V$ getting decreased by $1$. \bigskip Let $(E,F,t,s)$ be a pre-${\cal D}$-module on $(X,S)$ such that $E$ has good residual eigenvalues. Let us for simplicity of writing assume that $S$ is connected. Let $E|S = \oplus_\alpha E^\alpha$ and $F=\oplus_\alpha F^\alpha$ be the respective direct sum decompositions into generalized eigen-subbundles for the action of $\theta$. Then (see also remark \ref{rem3}) as $\theta$ commutes with $s$ and $t$, it follows that $t(E^\alpha) \subset F^\alpha$ and $s(F^\alpha)\subset E^\alpha$. 
Moreover, when $\alpha\ne 0$, the maps $s$ and $t$ are isomorphisms between $E^\alpha$ and $F^\alpha$. Now let $\alpha\ne 0$. Let $V=E^\alpha$ and $V'=\oplus_{\beta\ne \alpha}E^\beta$. Let $F'' = \oplus_{\beta\ne \alpha}F^\beta$. Let $F' = F^\alpha \oplus {\cal N}^*_{S,X}\otimes F''$. Let $E'={_VE}$. Then using \ref{rem6} and the above, we get maps $t':E'|S \to F'$ and $s':F'\to E'|S$ such that $(E',F',s',t')$ is a pre-${\cal D}$-module. \begin{definition}\label{definfmod}\rm We call the pre-${\cal D}$-module $(E',F',s',t')$ constructed above the inferior modification of $(E,F,s,t)$ along the generalized eigenvalue $\alpha\ne 0$. \end{definition} Similarly, we can define the superior modification along a generalized eigenvalue $\alpha\ne 0$ by tensoring with ${\cal O}_X(S)$. \refstepcounter{theorem}\paragraph{Remark \thetheorem} The construction of inferior or superior modification of pre-${\cal D}$-modules can be carried out over a parameter space $T$ (that is, for families) provided the subbundles $V$ and $V'$ form vector subbundles over the parameter space $T$ (their ranks are constant). \paragraph{Proof of \protect\ref{propremov}} If the restriction $E= E_{T\vert z=0}$ has good residual eigenvalues, then ${\cal R\!\!H} {\bf E}_T$ has the desired property. So suppose $E$ does not have good residual eigenvalues. We first assume for simplicity of writing that $E$ fails to have good residual eigenvalues because its residue $R_a$ on $S_a$ has exactly one pair $(\alpha,\alpha-1)$ of distinct eigenvalues which differ by a positive integer, with $\alpha-1\ne 0$. Let $f_T$ be the characteristic polynomial of $R_{a,T}$. Then $f_0$ has a factorization $f_0 = gh$ such that the polynomials $g$ and $h$ are coprime, $g(\alpha)=0$ and $h(\alpha-1)=0$. On a neighbourhood $U$ of $0$ in $T$ we get a unique factorization $f_T\vert U = g_Uh_U$ where $g_U$ specializes to $g$ and $h_U$ specializes to $h$ at $0$. 
By taking $U$ small enough, we may assume that $g_U$ and $h_U$ have coprime specializations at all points of $U$. Let $V_U$ be the kernel of the endomorphism $g_U(R_{a,U})$ of the bundle $E_{a,U}$. If $U$ is small enough then $V_U$ is a subbundle. Now take the inferior modification ${\bf E}'= ({_V}E_U, F'_U,t'_U,s'_U)$ of the family $(E,F,t,s)_U$ as given by Definition \ref{definfmod}. Then ${_VE}_U$ is a family of logarithmic connections having good residual eigenvalues, so by definition ${\bf E}'$ has good residual eigenvalues. If $(0,1)$ are the eigenvalues, then use superior modification along the eigenvalue $1$. If $R_a$ has eigenvalues $(\alpha,\alpha-k)$ for some integer $k\ge 1$, then repeat the above inferior (or superior) modification $k$ times (whether to choose an inferior or superior modification is governed by the following restriction: the multiplicity of the generalized eigenvalue $0$ should not decrease at any step). By construction, we arrive at the desired family $(E',F',s',t')$. \bigskip Addresses: School of Mathematics, Tata Institute of Fundamental Research, Homi Bhabha Road, Bombay 400 005, India. 
e-mail: [email protected] Centre de Math\'ematiques, CNRS URA 169, \'Ecole Polytechnique, Palaiseau Cedex, France. e-mail: [email protected] \end{document} 
\section{Introduction} Let $G$ be a finite group, and let \({\rm cd}(G)\) denote the \emph{degree set} of $G$, i.e., the set of degrees of the irreducible complex characters of $G$. In the paper ``Research in Representation Theory at Mainz (1984--1990)'' (\cite{mainz}), Bertram Huppert writes: ``Several special results, known since some time, made it clear that the structure of a finite group $G$ is controlled to a large extent by the type of the prime-number-decomposition of the degrees of the irreducible characters of $G$ over $\mathbb{C}$''. That work contributed greatly to boosting the interest of many authors, and the study of the arithmetical properties of the degree set, both on their own account and in connection with the structure of the group, is nowadays a well-established and classical research topic in the representation theory of finite groups. Part of the discussion in \cite{mainz} concerns the \emph{character degree graph} \(\Delta(G)\) of a finite group $G$ (\emph{degree graph} for short), a tool that has been devised in order to investigate the arithmetical structure of the degree set \({\rm cd}(G)\). This is the simple undirected graph whose vertex set \(\V G\) consists of the primes dividing the numbers in $\cd{G}$, and such that two distinct vertices $p$ and $q$ are adjacent if and only if $pq$ divides some number in $\cd{G}$. As discussed, for instance, in the survey \cite{overview}, many results in the literature illustrate the deep link between graph-theoretical properties of \(\Delta(G)\) (in particular, connectivity properties) and the group structure of \(G\). The series of three papers including the present one (together with \cite{DPSS,DPS3}) is a contribution in this framework. Namely, our purpose is to characterize the finite non-solvable groups whose degree graph has a \emph{cut-vertex}, i.e., a vertex whose removal increases the number of connected components of the graph. 
We mention that an analysis of the solvable case is carried out in \cite{LM}. The main results of the whole series are Theorem~A, Theorem~B and Theorem~C of \cite{DPSS}: they are stated in full detail and commented on in \cite[Section~2]{DPSS}, where the relevant graphs are also described in some figures (we refer the reader to \cite[Section~2]{DPSS}, as well as to the Introduction of \cite{DPSS}, for a more exhaustive presentation of the problem). In particular, Theorem~C of \cite{DPSS}, which provides a characterization of the finite non-solvable groups whose degree graph has a cut-vertex \emph{and it is disconnected}, is entirely proved in that paper and will not be discussed further. As regards Theorem~A and Theorem~B of \cite{DPSS}, they deal with finite non-solvable groups whose degree graph has \emph{connectivity degree~$1$} (i.e., it has a cut-vertex \emph{and it is connected}), and they are only partially proved in \cite{DPSS}. As we said, we do not reproduce the full statements of these theorems here but, for the convenience of the reader, we summarize next some properties of the relevant class of groups. If $G$ is a finite non-solvable group whose degree graph has connectivity degree $1$, then $G$ has a unique non-solvable composition factor $S$, belonging to a short list of isomorphism types: ${\rm{PSL}}_2(t^a)$, ${\rm{Sz}}(2^a)$, ${\rm{PSL}}_3(4)$, ${\rm{M}}_{11}$, ${\rm{J}}_1$; moreover, if $S \not\cong {\rm{PSL}}_2(t^a)$, then $G$ has a (minimal) normal subgroup isomorphic to $S$. Finally, denoting by $R$ the solvable radical of $G$, the vertex set of $\Delta(G)$ consists of the primes in \(\pi(G/R)\) (the set of prime divisors of the order of the almost-simple group $G/R$) and, if not already there, the cut-vertex of $\Delta(G)$. 
We also remark that in all cases, except possibly when the non-abelian simple section $S$ is isomorphic to the Janko group ${\rm{J}}_1$, the cut-vertex $p$ of $\Delta(G)$ is a complete vertex (i.e., it is adjacent to all other vertices) of $\Delta(G)$; moreover, the graph obtained from $\Delta(G)$ by removing the vertex $p$ (and all the edges incident to $p$) has exactly two connected components, which are complete graphs, and one of them consists of a single vertex. While in \cite{DPSS} we classify the finite non-solvable groups $G$ such that \(\Delta(G)\) has connectivity degree $1$ \emph{and whose unique non-solvable composition factor $S$ is not isomorphic to \(\PSL{t^a}\)} for any prime power \(t^a\geq 4\) (which covers cases~(a)--(d) of \cite[Theorem~A]{DPSS}), here we consider the situation when $S\cong\PSL{t^a}$ \emph{for an odd prime \(t\) with \(t^a>5\)}, thus covering case~(e) of \cite[Theorem~A]{DPSS}. Note that in conclusion (b) of the following theorem, by the ``natural module'' for \(K/L\cong\SL{t^a}\) we mean the standard \(2\)-dimensional module for $K/L$ over the field with \(t^a\) elements, or any of its Galois conjugates, seen as a \(2a\)-dimensional $K/L$-module over the field with \(t\) elements. \begin{Main} \label{main} Let \(G\) be a finite group and let \(R\) be its solvable radical. Assume that \(G\) has a composition factor isomorphic to $\PSL{t^a}$, for a suitable odd prime \(t\) with \(t^a> 5\), and let \(p\) be a prime number. Then, denoting by $K$ the last term in the derived series of $G$, the graph \(\Delta(G)\) is connected and has cut-vertex \(p\) if and only if: \(G/R\) is an almost-simple group, $\V G=\pi(G/R)\cup\{p\}$, the order of \(G/KR\) is not a multiple of \(t\), the prime $p$ is not $t$, and one of the following holds. \begin{enumeratei} \item \(K\) is isomorphic to \(\PSL{t^a}\) or to \(\SL{t^a}\), and $\V{G/K}=\{p\}$. 
\item \(K\) contains a minimal normal subgroup \(L\) of \(G\) such that \(K/L\) is isomorphic to \(\SL{t^a}\) and \(L\) is the natural module for \(K/L\); moreover, $\V{G/K}=\{p\}$. \item \(t^a=13\) and $p=2$. \(K\) contains a minimal normal subgroup \(L\) of \(G\) such that \(K/L\) is isomorphic to \(\SL{13}\), and \(L\) is one of the two \(6\)-dimensional irreducible modules for \(\SL{13}\) over the field with three elements. Moreover, $\V{G/K}\subseteq\{2\}$. \end{enumeratei} In all cases, \(p\) is a complete vertex and the unique cut-vertex of \(\Delta(G)\), and it is the unique neighbour of \(t\) in \(\Delta(G)\) with the only exception of case {\rm(c)} (see Figure~\ref{c}). \end{Main} \begin{figure}[h] \caption{\(\Delta(G)\) for \(G\) as in (c) of the Main Theorem}\label{c} \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=1.0cm,y=1.0cm] \clip(0.86,-0.7) rectangle (4.82,2.18); \draw [line width=0.8pt] (1.18,0.98)-- (2.28,0.98); \draw [line width=0.8pt] (3.5,1.56)-- (2.28,0.98); \draw [line width=0.8pt] (3.5,0.40)-- (2.28,0.98); \draw [line width=0.8pt] (3.5,1.56)-- (3.5,0.40); \draw (0.98,1.6) node[anchor=north west] {$3$}; \draw (2.06,1.6) node[anchor=north west] {$2$}; \draw (3.55,0.6) node[anchor=north west] {$7$}; \draw (3.5,1.85) node[anchor=north west] {$13$}; \begin{scriptsize} \draw [fill=black] (1.18,0.98) circle (2.5pt); \draw [fill=black] (2.28,0.98) circle (2.5pt); \draw [fill=black] (3.5,1.56) circle (2.5pt); \draw [fill=black] (3.5,0.40) circle (2.5pt); \end{scriptsize} \end{tikzpicture}\end{figure} As already mentioned, the even-characteristic case is treated in \cite{DPS3}, and it covers the remaining case (f) of \cite[Theorem~A]{DPSS} together with \cite[Theorem~B]{DPSS}. As regards the ``only if'' part of the above theorem, the main difference from the situation treated in \cite{DPSS} is that here the subgroup \(K\) need not be minimal normal in $G$. 
In fact, \(K\) can be isomorphic to \(\SL{t^a}\), or it is even possible to have a non-trivial normal subgroup \(L\) of \(G\) such that \(K/L\cong\SL{t^a}\). Much of the work carried out in this paper consists in showing that such a subgroup $L$ is minimal normal in $G$, and in controlling the (conjugation) action of $K/L$ on~$L$. To this end, two results concerning orbit properties in certain actions of \(\SL{t^a}\) (Theorem~\ref{TipoIeIIPieni}, which should be compared with Lemma~3.10 of \cite{DPSS}, and Theorem~\ref{TipoIeII}) turn out to be crucial in our analysis; these might be of interest on their own. To conclude, we point out that the structure of the groups appearing in the Main Theorem (as well as of the corresponding graphs), quite surprisingly, does not fall too far from the structure of the finite non-solvable groups whose degree graph has two connected components (see Theorem~\ref{LewisWhite}). We will comment on this fact in Remark~\ref{comparison}. In the following discussion, every group is tacitly assumed to be a finite group. \section{Preliminaries} Given a group \(G\), we denote by \(\Delta(G)\) the degree graph of \(G\) as defined in the Introduction. Our notation concerning character theory is standard, and we will freely use basic facts and concepts such as Ito-Michler's theorem, Clifford's theory, Gallagher's theorem, character triples and results about extension of characters (see \cite{Is}). As customary (and already used), given a positive integer \(n\), the set of prime divisors of \(n\) will be denoted by \(\pi(n)\), but we simply write \(\pi(G)\) for \(\pi(|G|)\). Moreover, for a prime power \(q\), we use the symbol \(\mathbb{F}_q\) to denote the field having \(q\) elements. \smallskip We start by recalling some structural properties of the \(2\)-dimensional special linear or projective special linear groups. 
Although this paper treats groups of this kind in odd characteristic, most of the results in this section and in the following one will be stated and proved with no restrictions about the characteristic. \begin{rem}\label{Subgroups} Recall that, for an odd prime \(t\), the group \(\PSL{t^a}\) has order \(\dfrac{t^a(t^a-1)(t^a+1)}{2}\). Moreover, as stated in \cite[II.8.27]{Hu}, the proper subgroups of this group are of the following types. \begin{itemize} \item[(i$_+$)] Dihedral groups of order $t^a + 1$ and their subgroups. \item[(i$_-$)] Dihedral groups of order $t^a - 1$ and their subgroups. \item[(ii)] Frobenius groups with elementary abelian kernel of order $t^a$ and cyclic complements of order $(t^a -1)/2$, and their subgroups; \item[(iii)] $A_4$, $S_4$ or $A_5$; \item[(iv)] $\PSL{t^b}$ or ${\rm PGL}_2(t^b)$, where $b$ divides $a$. \end{itemize} \end{rem} In our discussion, we will freely refer to the above labels when dealing with a subgroup of \(\PSL{t^a}\). By a subgroup of type (i) we will mean a subgroup that is either of type (i$_-$) or of type (i$_+$). \begin{lemma} \label{PSL2} Let \(G\cong\SL{t^a}\) or \(G\cong\PSL{t^a}\), where \(t\) is a prime and \(t^a\geq 4\). Let \(r\) be an odd prime divisor of \(t^a-1\), and let \(R\) be a subgroup of \(G\) with \(|R|=r^b\) for a suitable \(b\in{\mathbb N}-\{0\}\). Then \(R\) lies in the normalizer in \(G\) of precisely two Sylow \(t\)-subgroups of \(G\). \end{lemma} \begin{proof} We start by observing that the number of Sylow \(t\)-subgroups of \(G\) is \(t^a+1\) and, for \(T\in{\rm{Syl}}_t(G)\), the number of subgroups of order \(r^b\) lying in \(\norm G T\) is \(t^a\). Moreover, the total number of subgroups of \(G\) having order \(r^b\) is \(t^a(t^a+1)/2\). 
Now, consider the set \[X=\{(R_0,T)\; \mid\; T\in\syl t G,\, |R_0|=r^b,\, R_0\subseteq\norm G T\}.\] On the one hand, we get \[|X|=\sum_{T\in\syl t G}t^a=t^a(t^a+1);\] on the other hand, if \(n\) denotes the number of Sylow \(t\)-subgroups of \(G\) that are normalized by a given subgroup of order \(r^b\), we also have \[|X|=\sum_{|R_0|=r^b}n=\frac{t^a(t^a+1)}{2}\cdot n.\] The desired conclusion is then readily achieved. \end{proof} The following results are more specific to the context we will analyze. \begin{lemma}[\mbox{\cite[Theorem~5.2]{W}}] \label{PSL2bis} Let $S \cong \PSL{t^a}$ or $S \cong \SL{t^a}$, with $t$ prime and $a \geq 1$. Let $\rho_{+} = \pi(t^a+1)$ and $\rho_{-} = \pi(t^a-1)$. For a subset $\rho$ of vertices of $\Delta(S)$, we denote by $\Delta_{\rho}$ the subgraph of $\Delta = \Delta(S)$ induced by the subset $\rho$. Then \begin{enumeratei} \item if $t=2$ and $a \geq 2$, then $\Delta(S)$ has three connected components, $\{t\}$, $\Delta_{\rho_{+}}$ and $\Delta_{\rho_{-}}$, and each of them is a complete graph. \item if $t > 2$ and $t^a > 5$, then $\Delta(S)$ has two connected components, $\{t\}$ and $\Delta_{\rho_{+} \cup \rho_{-}}$; moreover, both $\Delta_{\rho_{+}}$ and $\Delta_{\rho_{-}}$ are complete graphs, no vertex in $\rho_{+} - \{ 2 \}$ is adjacent to any vertex in $\rho_{-} - \{ 2\}$ and $2$ is a complete vertex of $\Delta_{\rho_{+} \cup \rho_{-}}$. \end{enumeratei} \end{lemma} \begin{theorem}[\mbox{\cite[Theorem~3.9]{DPSS}}]\label{MoretoTiep} Let $G$ be an almost-simple group with socle $S$, and let $\delta = \pi(G) - \pi(S)$. If $\delta \neq \emptyset$, then $S$ is a simple group of Lie type, and every vertex in \(\delta\) is adjacent to every other vertex of $\Delta(G)$ that is not the characteristic of $S$. \end{theorem} \begin{lemma}\label{InfiniteCommutator} Let \(G\) be a group and let \(R\) be its solvable radical. 
Assume that \(G/R\) is an almost-simple group with socle isomorphic to $\PSL{t^a}$, for a prime \(t\) with \(t^a> 4\) and \(t^a\neq 9\). If \(K\) is the last term in the derived series of \(G\), then one of the following conclusions holds. \begin{enumeratei} \item \(K\) is isomorphic to \(\PSL{t^a}\) or to \(\SL{t^a}\); \item \(K\) has a non-trivial normal subgroup \(L\) such that \(K/L\) is isomorphic to \(\PSL{t^a}\) or to \(\SL{t^a}\), and no non-principal irreducible character of \(L/L'\) is invariant in \(K\). \end{enumeratei} \end{lemma} \begin{proof} Note that \(K\) is clearly non-trivial because \(G\) is non-solvable, so there exists a normal subgroup \(N\) of \(G\) such that \(K/N\) is a chief factor of \(G\). As \(K\) is perfect, it is easy to see that \(KR/NR\cong K/N\) is a non-solvable chief factor of \(G/R\); now, denoting by \(M/R\) the socle of the almost-simple group \(G/R\), we get \(NR=R\) (i.e., \(N=K\cap R\)) and \(KR=M\). Thus, by our hypothesis, \(K/N\cong M/R\) is isomorphic to \(\PSL{t^a}\) and we get conclusion (a) if \(N\) is trivial. Therefore we can assume \(N\neq 1\), and we can also assume that there exists a non-principal irreducible character \(\mu\) of \(N/N'\) such that \(\mu\) is invariant in \(K\) (otherwise we get (b) by setting \(L=N\)). So, let us define \(L\) to be \(\ker\mu\). We have that \(L\) is a normal subgroup of \(K\) and \(N/L\) is contained in \(\zent{K/L}\); thus, since \(K\) is perfect, we see that \(N/L\) embeds in the Schur multiplier of \(K/N\) (see \cite[Theorem~11.19]{Is}). Under our assumptions, this Schur multiplier is trivial if \(t=2\) and it has order \(2\) if \(t\neq 2\), so, in the present situation, we have \(t\neq 2\), \(|N/L|=2\) and \(K/L\cong\SL{t^a}\). Again we reach conclusion (a) if \(L\) is trivial. 
But if \(L\neq 1\), taking into account that the Schur multiplier of \(K/L\) is trivial and arguing as above, we see that \(L/L'\) does not have any non-principal irreducible character that is invariant in \(K\). We have reached conclusion (b) in this case, and the proof is complete. \end{proof} Recall that, for $a$ and $n$ integers larger than $1$, a prime divisor $q$ of $a^n-1$ is called a \emph{primitive prime divisor} if $q$ does not divide $a^b -1$ for all $1 \leq b <n$. In this case, $n$ is the order of $a$ modulo $q$, so $n$ divides $q-1$. It is known (\cite[Theorem~6.2]{MW}) that $a^n - 1$ always has primitive prime divisors except when $n = 2$ and $a= 2^c -1$ for some integer $c$, or when $n=6$ and $a= 2$. \begin{lemma}\label{singer} Let \(G\) be a group and let \(R\) be its solvable radical. Assume that \(G/R\) is an almost-simple group with socle isomorphic to $\PSL{t^a}$, for a prime \(t\) with \(t^a\geq 4\). Denoting by \(K\) the last term in the derived series of \(G\), assume that \(L\) is a minimal normal subgroup of \(G\), contained in \(K\), such that \(K/L\cong\SL{t^a}\) acts non-trivially on \(L\). Setting \(|L|=q^d\), where \(q\) is a suitable prime, let \(U\) be a Sylow \(u\)-subgroup of \(R\) for an odd prime \(u\) that does not divide \(q^d-1\). If there exists a primitive prime divisor \(p\) of \(q^d-1\) such that \(|K/L|\) is a multiple of \(p\), then \(U\subseteq\cent G L\).\end{lemma} \begin{proof} Set \(C=\cent G L\). Since \(L\) is an elementary abelian \(q\)-group of order \(q^{d}\), the factor group \(G/C\) embeds in \({\rm{GL}}_{d}(q)\). Denoting by \(S/C\) a subgroup of \(KC/C\) such that \(|S/C|=p\), by \cite[II, 7.3]{Hu} we see that \(S/C\) is contained in a Singer subgroup of \({\rm{GL}}_{d}(q)\): this is a cyclic group of order \(q^{d}-1\), which is a maximal cyclic subgroup of \({\rm{GL}}_{d}(q)\), and which acts fixed-point freely on \(L\). 
Now \(S/C\) acts irreducibly on \(L\), thus, by Schur's Lemma and by the fact that the ring of endomorphisms of \(L\) that commute with the action of \(S/C\) is a finite field, \(\cent{{\rm{GL}}_{d}(q)}{S/C}\) is a cyclic group and it is then forced to have order \(q^{d}-1\). On the other hand, setting \(N/L=\zent{K/L}\) and observing that \(N=K\cap R\) (so, \([K/N,R/N]=1\)), by coprimality we have \(K/N\cong(K/L)/(N/L)=\cent{K/L}{U}/(N/L)\), whence \(K/L=\cent{K/L}{U}\). In particular, as \(L\subseteq C\), the factor group \(UC/C\) centralizes \(KC/C\), which forces \(U\subseteq C\) by the discussion above. The proof is complete. \end{proof} The above result can be applied when \(L\) is isomorphic to the natural module for \(K/L\cong\SL{t^a}\) as far as \(t^{2a}-1\) has a primitive prime divisor. Nevertheless, we remark that the conclusion of Lemma~\ref{singer} is true in this situation even if \(t^{2a}-1\) does not have any primitive prime divisor, i.e., if \(a=1\) and \(t\) is a Mersenne prime, or if \(t^{2a}=2^6\): in the former case, in fact, the \(u\)-part of \(|KC/C|\) already exhausts the full \(u\)-part of \(|{\rm{GL}}_{2}(t)|\) because \(u\) does not divide \(|{\rm{GL}}_{2}(t):\SL{t}|=t-1\), therefore a Sylow \(u\)-subgroup of \(R\) is forced to be contained in~\(\cent G L\). In the latter case, we observe that \({\rm{GL}}_{6}(2)\) has a unique conjugacy class of elements of order \(2^3+1\); therefore, defining \(S/C\) to be a subgroup of order \(9\) of \(G/C\), the action of \(S/C\) on \(L\) is again fixed-point free and obviously irreducible, so the previous argument goes through. For later use, we stress that Lemma~\ref{singer} also applies when \(K/L\cong\SL{13}\) and \(L\) is isomorphic to an irreducible \(K/L\)-module of dimension \(6\) over \({\mathbb F}_3\), because \(7\) is a primitive prime divisor of \(3^6-1\) dividing \(|\SL{13}|\). 
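Several of the counting and divisibility facts used above are elementary enough to be machine-checked. The sketch below is plain illustrative arithmetic, not part of any proof: it verifies that $7$ is a primitive prime divisor of $3^6-1$ dividing $|\SL{13}|=13(13^2-1)=2184$, that the only failures of existence of primitive prime divisors in a small range are the two stated exceptions, and that the double count in the proof of Lemma~\ref{PSL2} forces $n=2$ (the subgroup counts used there are the ones quoted in that proof, and are simply assumed here).

```python
# Illustrative arithmetic checks; the SL_2/PSL_2 subgroup counts are the
# ones quoted in the proof of Lemma [PSL2] and are assumed here.

def prime_factors(n):
    """Set of prime divisors of n (trial division)."""
    ps, d = set(), 2
    while d * d <= n:
        if n % d == 0:
            ps.add(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        ps.add(n)
    return ps

def primitive_prime_divisors(a, n):
    """Primes dividing a**n - 1 but no a**b - 1 with 1 <= b < n."""
    return {q for q in prime_factors(a**n - 1)
            if all((a**b - 1) % q for b in range(1, n))}

# 7 is a primitive prime divisor of 3^6 - 1 and divides |SL_2(13)| = 2184.
assert 7 in primitive_prime_divisors(3, 6)
assert (13 * (13**2 - 1)) % 7 == 0

# In this range the only failures of existence are n = 2 with a = 2^c - 1
# (detected by a & (a + 1) == 0) and (a, n) = (2, 6), as stated above.
for a in range(2, 20):
    for n in range(2, 10):
        if not primitive_prime_divisors(a, n):
            assert (n == 2 and a & (a + 1) == 0) or (a, n) == (2, 6)

def sylows_normalized(q):
    """Double count from Lemma [PSL2]: q + 1 Sylow t-subgroups, q subgroups
    of order r^b in each Sylow normalizer, q(q + 1)/2 such subgroups in all;
    returns the number n of Sylow t-subgroups each of them normalizes."""
    n, rem = divmod((q + 1) * q, q * (q + 1) // 2)
    assert rem == 0
    return n

for q in (5, 7, 9, 13, 25, 27):
    assert sylows_normalized(q) == 2   # each normalizes exactly two
```

The count $n=2$ comes out independently of $q=t^a$, exactly as Lemma~\ref{PSL2} asserts.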
\begin{theorem}{\cite[Theorem~3.15]{DPSS}}\label{0.2} Let \(G\) be a non-solvable group such that \(\Delta(G)\) is connected and has a cut-vertex \(p\). Then, denoting by \(R\) the solvable radical of \(G\), we have that \(G/R\) is an almost-simple group and \(\V G=\pi(G/R)\cup\{p\}\). \end{theorem} As an important reference for our discussion, we recall here the characterization of the finite non-solvable groups whose degree graph has two connected components, provided by M.L. Lewis and D.L. White in \cite{LW}. We will discuss the relationship between this and our main result in the last section of the present paper. \begin{theorem}{\cite[Theorem~6.3]{LW}} \label{LewisWhite} Let \(G\) be a non-solvable group. Then \(\Delta(G)\) has two connected components if and only if there exist normal subgroups \(N\subseteq K\) of \(G\) such that, setting \(C/N=\cent{G/N}{K/N}\), the following conditions hold. \begin{enumeratei} \item \(K/N\simeq\PSL{t^a}\), where \(t\) is a prime with \(t^a\geq 4\). \item \(G/K\) is abelian. \item If \(t^a\neq 4\), then \(t\) does not divide \(|G/CK|\). \item If \(N\neq 1\), then either \(K\cong\SL{t^a}\) or there exists a minimal normal subgroup \(L\) of \(G\) such that \(K/L\cong\SL{t^a}\) and \(L\) is isomorphic to the natural module for \(K/L\). \item If \(t=2\) or \(t^a=5\), then either \(CK\neq G\) or \(N\neq 1\). \item If \(t=2\) and \(K\) is as in {\rm{(d)}} in the case \(K\not\cong\SL{t^a}\), then every non-principal character in \(\irr L\) extends to its inertia subgroup in \(G\). \end{enumeratei} \end{theorem} We also recall that if $G$ satisfies the hypotheses of Theorem~\ref{LewisWhite}, then the solvable radical \(R\) of \(G\) coincides with \(C\) and \(\V G=\pi(G/R)\) (see \cite[Remark~3.8]{DPSS}). \section{Some orbit theorems} In this third preliminary section, we focus on some module actions of \(2\)-dimensional special linear groups that will be crucial for our discussion. 
The main results of the section are Theorem~\ref{TipoIeIIPieni}, which deals with irreducible modules for \(\SL{t^a}\) in cross characteristic, and Theorem~\ref{TipoIeII} concerning actions on \(t\)-groups. We will also prove another result on orbit sizes in this kind of linear actions (Lemma~\ref{brauer}), that will turn out to be useful. To begin with, we recall some special types of actions of groups on modules. Let \(H\) and \(V\) be groups, and assume that \(H\) acts by automorphisms on \(V\). Given a prime number \(q\), we say that \emph{the pair \((H,V)\) satisfies the condition \(\mathcal{N}_q\) } if $q$ divides $|H: \cent HV|$ and, for every non-trivial \(v\in V\), there exists a Sylow \(q\)-subgroup \(Q\) of \(H\) such that \(Q\trianglelefteq \cent H v\) (see \cite{C}). If $(H, V)$ satisfies $\mathcal{N}_q$ then \(V\) turns out to be an elementary abelian \(r\)-group for a suitable prime \(r\), and \(V\) is in fact an \emph{irreducible} module for \(H\) over the field $\mathbb{F}_r$ (see~Lemma~4 of~\cite{Z}). For \(H\cong\SL{t^a}\), we have the following result. \begin{lemma}{\cite[Lemma~3.10]{DPSS}} \label{SL2Nq} Let $t$, $q$, $r$ be prime numbers, let $H = {\rm SL}_2(t^a)$ (with $t^a \geq 4$) and let $V$ be an $H$-module over the field $\mathbb{F}_r$. Then $(H, V)$ satisfies $\mathcal{N}_q $ if and only if either $t^a = 5$ and $V$ is the natural module for $H/\cent HV \cong {\rm SL}_2(4)$ or $V$ is faithful and one of the following holds. \begin{enumeratei} \item $t = q = r$ and $V$ is the natural $\mathbb{F}_r[H]$-module (so $|V| = t^{2a}$); \item $q = r = 3$ and $(t^a, |V|) \in \{(5, 3^4), (13, 3^6)\}$. \end{enumeratei} \end{lemma} Theorem~\ref{TipoIeIIPieni} is introduced by the following Lemma~\ref{brauer2}. Note that the numbers indicated in Table~\ref{Brauer3} of this lemma are the \emph{possible} dimensions of the relevant modules (only some of them actually show up). 
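Before the lemma, a concrete illustration of the dimension pattern in Table~\ref{Brauer3} may be helpful. The degrees listed below are the ordinary character degrees of \(\SL 5\), recalled from its classical character table purely for illustration (they are not taken from the text): every degree lies in the set \(\{1,\,(t^a-1)/2,\,(t^a+1)/2,\,t^a-1,\,t^a,\,t^a+1\}\) for \(t^a=5\), matching the column headings of the table for \(\ell=1\).

```python
# Ordinary character degrees of SL_2(5) (from its classical character
# table; recalled here only as an illustration) compared with the
# pattern {1, (q-1)/2, (q+1)/2, q-1, q, q+1} for q = t^a = 5.
q = 5
degrees = [1, 2, 2, 3, 3, 4, 4, 5, 6]

# Sum of squares of the degrees equals |SL_2(5)| = 5*(5^2 - 1) = 120.
assert sum(d * d for d in degrees) == q * (q**2 - 1)

pattern = {1, (q - 1) // 2, (q + 1) // 2, q - 1, q, q + 1}
assert set(degrees) <= pattern
```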
\begin{lemma}\label{brauer2} Let \(V\) be an irreducible module for \(G=\SL{t^a}\) over the field \({\mathbb F}_q\), where \(t^a\geq 4\) and \(q\) is a prime in \(\pi(G)-\{t\}\). Also, let \({\mathbb F}\) be a finite-degree field extension of \({\mathbb F}_q\) such that \({\mathbb F}\) is a splitting field for \(G\) and all its subgroups, and let \(\ell\) denote the composition length of the \({\mathbb F}[G]\)-module \(V\otimes {\mathbb F}\). Then, for \(T\in\syl t G\), the maximal dimensions of \(\cent V T\) over \({\mathbb F}_q\) (depending on \(\dim_{{\mathbb F}_q} V\)) are listed in Table~\ref{Brauer3}. \begin{table}[htbp] \caption{\label{Brauer3} Maximal dimension of the centralizers of \(T\) in \(V\).} \renewcommand{\arraystretch}{.95} $$ \begin{array}{llllll} \hline\hline\\ \dim_{{\mathbb F}_q} V\quad\qquad\quad& {\ell\cdot t^a}\quad\quad & \ell\cdot(t^a+1) \quad\quad& \ell\cdot(t^a-1)\quad\quad& \ell\cdot\left(\dfrac{t^a+1}{2}\right)\quad\quad& \ell\cdot\left(\dfrac{t^a-1}{2}\right)\\ &&&& \text{\rm{(for }}t\neq 2)& \text{\rm{(for }}t\neq 2) \\ \\ \hline \\ \dim_{{\mathbb F}_q}\cent V T & \ell & 2\ell & 0 & \ell & 0 \\ \\ \hline\hline \end{array} $$ \end{table} \end{lemma} \begin{proof} We start by claiming that, for any field extension \({\mathbb K}\) of \({\mathbb F}_q\), we have \(\dim_{{\mathbb K}}\cent{V\otimes{\mathbb K}}{T}=\dim_{{\mathbb F}_q}\cent V T\). In fact, denote by \(U\) an \({\mathbb F}_q[T]\)-submodule of \(V\) that is a direct complement for \(\cent V T\) in \(V\) (such a submodule \(U\) exists by Maschke's theorem); then, the \({\mathbb K}[T]\)-module \(V\otimes {\mathbb K}\) decomposes as \((\cent V T\otimes {\mathbb K})\oplus (U\otimes {\mathbb K})\), and our claim easily follows from the fact that the $1$-dimensional trivial \({\mathbb K}[T]\)-module is not an irreducible constituent of \(U\otimes{\mathbb K}\) (this is ensured, for instance, by Lemma~1.12 in \cite[VII]{HB}). 
Now, \cite[VII, Theorem~2.6b)]{HB} yields the existence of a field \({\mathbb F}\) as in our hypothesis. Since \(V\otimes{\mathbb F}\) is the direct sum of Galois conjugates of a suitable irreducible \({\mathbb F}[G]\)-module \(W\), we get \(\dim_{{\mathbb F}}(V\otimes{\mathbb F})=\ell\cdot \dim_{{\mathbb F}}W\) and \(\dim_{{\mathbb F}}\cent{V\otimes{\mathbb F}}{T}=\ell\cdot \dim_{{\mathbb F}}\cent W T\); therefore we will assume (absolute) irreducibility for the \({\mathbb F}[G]\)-module \(V\otimes{\mathbb F}\) and the general bounds concerning the dimension of \(\cent V T\) over \({\mathbb F}_q\) will follow at once. Given that, if we extend the field further to the algebraic closure \(\overline{{\mathbb F}}\) of \({\mathbb F}\), then the module \(V\otimes{\overline{{\mathbb F}}}\) clearly remains irreducible. Let \(\mathfrak{T}\) be an \(\overline{{\mathbb F}}\)-representation associated with \(V\otimes\overline{{\mathbb F}}\); by Maschke's theorem and by the fact that \(\overline{{\mathbb F}}\) is a splitting field for \(T\), the restriction \(\mathfrak{T}_T\) (up to equivalence) maps each element of \(T\) to a diagonal matrix, and \(\dim_{\overline{{\mathbb F}}}\cent{V\otimes\overline{{\mathbb F}}}T\) is the multiplicity of the $1$-dimensional trivial representation as a constituent of \(\mathfrak{T}_T\). Following the notation established in the paragraph preceding \cite[Lemma~15.1]{Is}, let \(R\) be the full ring of algebraic integers in the complex field, and let \(U\) be the subgroup of elements of order not divisible by \(q\) in the multiplicative group of \({\Bbb C}\) (clearly, \(U\subseteq R\)); we can identify \(\overline{{\mathbb F}}\) with the factor ring \(R/M\), where \(M\) is a maximal ideal of \(R\) containing \(qR\), and we consider the natural homomorphism \(*:R\rightarrow\overline{{\mathbb F}}\). By \cite[Lemma~15.1]{Is}, the restriction \(*_U\) maps \(U\) isomorphically onto the multiplicative group of \(\overline{{\mathbb F}}\). 
Now, consider the composite map \(\mathfrak{D}\) given by \(\mathfrak{T}_T\) followed by the map which applies the inverse of \(*_{U}\) to each entry of the relevant matrix: it is easy to see that \(\mathfrak{D}\) is a complex representation of \(T\) whose character \(\delta\) is the restriction to \(T\) of the Brauer character \(\theta\) afforded by \(\mathfrak{T}\), and what we want to determine is in fact the multiplicity \([1_T,\delta]\) of the trivial character of \(T\) as a constituent of \(\delta\). In view of the discussion of \cite[Sections 9.2--9.4]{Bo}, it turns out that \(\theta\) has a lift \(\chi\in\irr G\) (i.e., the restriction \(\widehat{\chi}\) of \(\chi\) to the set of \(q\)-regular elements of \(G\) coincides with \(\theta\)). Now, as we have \(\delta=\chi_T\), we can refer to the ordinary character table of \(G\) (see for example \cite[Page 58]{Bo}) and compute \([1_T,\chi_T]\). The result of this computation can be found (setting \(\ell=1\), as we are assuming irreducibility for \(V\otimes{\mathbb F}\)) in the second row of Table~\ref{Brauer3}, where the maximal values for \(\dim_{{\mathbb F}_q}\cent V T\) are displayed according to the possible dimensions of \(V\) over \({\mathbb F}_q\). \end{proof} We are ready to prove the first main result of this section. \begin{theorem} \label{TipoIeIIPieni} Let \(V\) be a non-trivial irreducible module for \(G=\SL{t^a}\) over the field \({\mathbb F}_q\), where \(t^a\geq 4\) and \(q\neq t\) is a prime number. For odd primes \(r\in\pi(t^a-1)\) and \(s\in\pi(t^a+1)\) (possibly \(r=q\) or \(s=q\)) let \(R\), \(S\) be respectively a Sylow \(r\)-subgroup and a Sylow \(s\)-subgroup of \(G\), and let \(T\) be a Sylow \(t\)-subgroup of \(G\). 
Then, considering the sets \[V_{I_-}=\{v\in V\mid {\textnormal{ there exists }} z\in G {\textnormal{ such that }}R ^z\trianglelefteq\cent G v\},\] \[V_{I_+}=\{v\in V\mid {\textnormal{ there exists }} z\in G {\textnormal{ such that }}S ^z\trianglelefteq\cent G v\},\] \[V_{II}=\{v\in V\mid {\textnormal{ there exists }} z \in G {\textnormal{ such that }}T^z\trianglelefteq\cent G v\},\] we have that \(V-\{0\}\) strictly contains \(V_{I_-}\cup V_{II}\), \(V_{I_+}\cup V_{II}\), and \(V_{I_-}\cup V_{I_+}\), unless one of the following holds. \begin{enumeratei} \item \(G\cong\SL 5\), \(s=3\), \(|V|=3^4\) and \(V-\{0\}=V_{I_+}\), \item \(G\cong\SL {13}\), \(r=3\), \(|V|=3^6\) and \(V-\{0\}=V_{I_-}\). \end{enumeratei} \end{theorem} \begin{proof} Observe first that, if \(V-\{0\}\) is covered by just one of the sets \(V_{I_-}\), \(V_{I_+}\) or \(V_{II}\), then we get conclusions (a) or (b) by Lemma~\ref{SL2Nq}; therefore we will show that \(V-\{0\}\) cannot be covered as in the statement assuming that none of the relevant sets is empty. Given that, we start by ruling out the case when \(q\) is coprime to \(|G|\); in fact, in this situation, Theorem~2.3 of \cite{KP} ensures that there exists \(v\in V\) whose centralizer in \(G\) is trivial or of order \(2\), thus \(v\neq 0\) does not lie in any of the sets \(V_{I_+}\), \(V_{I_-}\) and \(V_{II}\). We will then assume \(q\in\pi(G)-\{t\}\), and we will show first that \(V-\{0\}\) cannot be covered by \(V_{I_-}\) and \(V_{II}\). So, for a proof by contradiction, let us assume \(V-\{0\}=V_{I_-}\cup V_{II}\) (both the sets on the right-hand side being non-empty, as we said). If, for \(g\in G\), the element \(v\in V-\{0\}\) is centralized by both \(R\) and \(R^g\) with \(R^g\neq R\), then there exists a Sylow \(t\)-subgroup of \(G\) that is normalized by both \(R\) and \(R^g\). 
Since, by Lemma~\ref{PSL2}, \(R\) is contained in the normalizer of precisely two Sylow \(t\)-subgroups of \(G\), and since these normalizers contain \(t^a\) conjugates of \(R\) each, there are at most \(2(t^a-1)\) choices for \(R^g\). On the other hand, the total number of conjugates of \(R\) in \(G\) is \(\dfrac{t^{a}\cdot(t^a+1)}{2}\), so there certainly exists an element \(h\in G\) such that no element of \(V-\{0\}\) is centralized by both \(R\) and \(R^h\). As a consequence, we get \(\dim_{{\mathbb F}_q} V\geq 2\cdot \dim_{{\mathbb F}_q}\cent V R\). Now, the possible dimensions of \(V\) over \({\mathbb F}_q\) are listed in Table~\ref{Brauer3}. Note that, \(V_{II}\) being non-empty, the dimension over \({\mathbb F}_q\) of \(\cent V T\) cannot be \(0\), so the relevant dimensions in the present situation are \(\ell\cdot t^a\), \(\ell\cdot(t^a+1)\) and \(\ell\cdot\left(\dfrac{t^a+1}{2}\right)\). \medskip Let us consider the case when \(\dim_{{\mathbb F}_q} V=\ell\cdot t^a\). Taking into account Table~\ref{Brauer3} together with the conclusions of the second-last paragraph above (and the fact that the number of Sylow \(t\)-subgroups of \(G\) is \(t^a+1\)), in order to get a contradiction it will be enough to show that the following inequality holds:\[q^{\ell\cdot t^a}-1>\dfrac{t^a\cdot(t^a+1)}{2}\cdot(q^{\ell\cdot t^{a}/2}-1)+(t^a+1)\cdot(q^{\ell}-1).\] Since \((t^a+1)\cdot(q^\ell-1)\) is clearly smaller than \(\dfrac{t^a\cdot(t^a+1)}{2}\cdot(q^{\ell\cdot t^{a}/2}-1)\), it is enough to analyze the inequality \[q^{\ell\cdot t^a}-1>t^a\cdot (t^{a}+1)\cdot (q^{\ell\cdot t^a/2}-1),\] which is in turn satisfied if \[q^{\ell\cdot t^a}>t^a\cdot (t^{a}+1)\cdot q^{\ell\cdot t^a/2}\] holds. 
The last inequality is obviously satisfied for every value of \(\ell\) provided it is satisfied for \(\ell=1\), and this happens whenever \(t^a\geq 17\); for smaller values of \(t^a\) we go back to the original inequality, and we see that it is not satisfied only by the following triples \((q,t^a,\ell)\): \((2,5,1)\), \((2,7,1)\), \((2,9,1)\), \((2,11,1)\), \((3,4,1)\). In any case, since the value of \(\ell\) is always \(1\), the \({\mathbb F}_q[G]\)-module \(V\) is absolutely irreducible, and we are in a position to apply Corollary~1.2 of \cite{L}: there exists a regular orbit for the action of \(G\) on \(V\), a contradiction for us, \emph{except for the triple \((3,4,1)\)}. As regards \((3,4,1)\) (which is, by the way, the unique triple among those under consideration that corresponds to an existing module), a direct computation via GAP \cite{GAP} shows that \(\SL 4\) generates an orbit of size \(30\) in the action on its absolutely irreducible module of order \(3^4\). This contradiction completes the proof for the case \(\dim_{{\mathbb F}_q} V=\ell\cdot t^a\). \medskip We treat next the case \(\dim_{{\mathbb F}_q} V=\ell\cdot (t^a+1)\), so we analyze the inequality \[q^{\ell\cdot(t^a+1)}-1>\dfrac{t^a\cdot(t^a+1)}{2}\cdot(q^{\ell\cdot{\frac{t^a+1}{2}}}-1)+(t^a+1)\cdot(q^{2\ell}-1).\] As above, the second summand of the right-hand side is smaller than the first summand, so it is enough to have \[q^{\ell\cdot(t^a+1)}-1>t^a\cdot(t^a+1)\cdot(q^{\ell\cdot{\frac{t^a+1}{2}}}-1),\] which reduces to \[q^{\ell\cdot(t^a+1)}>t^a\cdot(t^a+1)\cdot q^{\ell\cdot{\frac{t^a+1}{2}}}.\] If the last inequality holds for \(\ell=1\), which happens as soon as \(t^a\geq 16\), then it clearly holds for every value of \(\ell\). On the other hand, for smaller values of \(t^a\), the original inequality is always satisfied except when the triple \((q,t^a,\ell)\) lies in \(\{(2,5,1),(2,7,1),(2,9,1),(2,11,1)\}\). 
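These exceptional lists lend themselves to a mechanical double-check. The following sketch (Python, not part of the argument; function and variable names are ours) tests the two displayed inequalities with exact integer arithmetic, comparing squares so that the possibly half-integer exponent \(\ell t^a/2\) never has to be evaluated; it recovers precisely the failing triples quoted above for the cases \(\dim_{{\mathbb F}_q} V=\ell\cdot t^a\) and \(\dim_{{\mathbb F}_q} V=\ell\cdot(t^a+1)\).

```python
from itertools import product

def strict_gt(q, e, A, B):
    # Decide q**e - 1 > A*(q**(e/2) - 1) + B exactly, even when e is odd:
    # the condition is C > A*sqrt(q**e) with C = q**e - 1 + A - B,
    # i.e. C > 0 and C**2 > A**2 * q**e.
    C = q**e - 1 + A - B
    return C > 0 and C * C > A * A * q**e

# pairs (t, t^a) with t prime and t^a >= 4, and small primes q != t
prime_powers = [(2, 4), (5, 5), (7, 7), (2, 8), (3, 9), (11, 11), (13, 13), (2, 16)]
primes = [2, 3, 5, 7, 11, 13]

fail_first, fail_second = set(), set()
for (t, ta), q, l in product(prime_powers, primes, range(1, 5)):
    if q == t:
        continue
    A = ta * (ta + 1) // 2          # the number of conjugates of R in G
    # dim V = l * t^a, tail term (t^a + 1)(q^l - 1):
    if not strict_gt(q, l * ta, A, (ta + 1) * (q**l - 1)):
        fail_first.add((q, ta, l))
    # dim V = l * (t^a + 1), tail term (t^a + 1)(q^(2l) - 1):
    if not strict_gt(q, l * (ta + 1), A, (ta + 1) * (q**(2 * l) - 1)):
        fail_second.add((q, ta, l))

print(sorted(fail_first))   # failing triples for dim V = l * t^a
print(sorted(fail_second))  # failing triples for dim V = l * (t^a + 1)
```

Only \(\ell=1\) triples survive in either list, which is what allows the appeal to the regular-orbit result of \cite{L} in the text.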
Making use of \cite[Corollary~1.2]{L} as above, only the triple \((2,7,1)\) has to be checked: a computation via GAP \cite{GAP} shows that \(\SL 7\) generates an orbit of size \(21\) in the action on its absolutely irreducible module of order \(2^8\), the final contradiction for this case. \medskip To conclude, we consider the case \(\dim_{{\mathbb F}_q} V=\ell\cdot\left(\dfrac{t^a+1}{2}\right)\), only for \(t\neq 2\). Thus, we analyze the inequality \[q^{\ell\cdot\frac{t^a+1}{2}}-1>\dfrac{t^a\cdot(t^a+1)}{2}\cdot(q^{\ell\cdot\frac{t^a+1}{4}}-1)+(t^a+1)\cdot(q^\ell-1).\] Similarly to the previous cases, we reduce to \[q^{\ell\cdot\frac{t^a+1}{2}}>t^a\cdot(t^a+1)\cdot q^{\ell\cdot\frac{t^a+1}{4}},\] which is satisfied for every value of \(\ell\) as soon as it is satisfied for \(\ell=1\); this happens whenever \(t^a\geq 43\). Looking at smaller values in the original inequality for \(\ell=1\), and using Corollary~1.2 of \cite{L}, it turns out that only the triples \((3,11,1)\) and \((3,13,1)\) have to be checked. A direct computation via GAP \cite{GAP} shows that \(\SL{11}\) has two absolutely irreducible modules of order \(3^7\): in both of them, the three sets \(V_{I_-}\) (for \(r=5\)), \(V_{I_+}\) (for \(s=3\)) and \(V_{II}\) are all non-empty, so our assumptions are not satisfied. As regards the absolutely irreducible module of order \(3^7\) for \(\SL {13}\), it has elements whose centralizer in \(\SL {13}\) is a \(2\)-group, again not our case. For larger values of \(\ell\) we note that, if \(\ell=2\), then the present inequality is the same as the one discussed for the case \(\dim_{{\mathbb F}_q} V=\ell\cdot (t^a+1)\), and the triples that have to be checked are \((2,5,2)\), \((2,7,2)\), \((2,9,2)\), \((2,11,2)\). Again by GAP \cite{GAP}, none of these triples corresponds to an existing module, the final contradiction for this case. 
\medskip We omit the proof that \(V-\{0\}\neq V_{I_+}\cup V_{II}\), which is totally analogous to the previous one, and we focus next on the remaining claim that \(V-\{0\}\) is not covered by \(V_{I_-}\) and \(V_{I_+}\). Assuming the contrary, Table~\ref{Brauer3} yields that the dimension of \(V\) over \({\mathbb F}_q\) is \(\ell\cdot(t^a-1)\) or \(\ell\cdot\left(\dfrac{t^a-1}{2}\right)\) (only for \(t\neq 2\)), and we will start by considering the former possibility. In view of the fact that both \(\dim_{{\mathbb F}_q}\cent V R\) and \(\dim_{{\mathbb F}_q}\cent V S\) are at most half of \(\dim_{{\mathbb F}_q} V\), it is enough to analyze the inequality \[q^{\ell\cdot(t^a-1)}-1>\dfrac{t^a\cdot(t^a+1)}{2}\cdot\left(q^{\ell\cdot\left(\frac{t^a-1}{2}\right)}-1\right)+\dfrac{t^a\cdot(t^a-1)}{2}\cdot\left(q^{\ell\cdot\left(\frac{t^a-1}{2}\right)}-1\right).\] As usual we reduce to \[q^{t^a-1}>t^a\cdot(t^a+1)\cdot q^{\frac{t^a-1}{2}},\] which is satisfied for \(t^a\geq 19\). For smaller values of \(t^a\), it can be checked (via the original inequality for various values of \(\ell\) and taking into account Corollary~1.2 of \cite{L}) that the possible exceptions are the triples \((2,5,1)\), \((2,11,1)\), \((3,5,1)\), \((3,7,1)\), \((5,4,1)\) and \((2,5,2)\); among the corresponding modules, the first one has elements whose centralizers in \(\SL 5\) do not have a normal Sylow \(3\)-subgroup, the remaining four absolutely irreducible modules all have elements whose centralizers in the relevant group are \(2\)-groups, whereas the last one does not exist. The proof for this case is complete. 
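The four ``simplified'' inequalities used so far all have the shape \(q^{E}>t^a(t^a+1)\,q^{E/2}\), i.e. \(q^{E/2}>t^a(t^a+1)\), with \(E\in\{t^a,\,t^a+1,\,(t^a+1)/2,\,t^a-1\}\) and with \(q=2\) as the worst case. As a hedged sanity check (Python; naming ours, not part of the proof), one can clear the half-integer exponents by raising both sides to the power \(2\) or \(4\) and locate the last integer value of \(t^a\) violating each inequality; the outcome is consistent with the thresholds \(17\), \(16\), \(43\) and \(19\) quoted above, since none of the integers in the gaps is an odd prime power of the relevant shape.

```python
def simplified_holds(n, num, den):
    # Decide 2**(E/2) > n*(n+1) for E = (n + num)/den without floating
    # point: raise both (positive) sides to the power 2*den.
    return 2**(n + num) > (n * (n + 1))**(2 * den)

# E = t^a, t^a + 1, (t^a + 1)/2, t^a - 1  <->  (num, den) pairs
cases = {"t^a": (0, 1), "t^a+1": (1, 1), "(t^a+1)/2": (1, 2), "t^a-1": (-1, 1)}
last_fail = {name: max(n for n in range(4, 200) if not simplified_holds(n, *nd))
             for name, nd in cases.items()}
print(last_fail)
```

For instance, the case \(E=(t^a+1)/2\) fails for the last time at \(n=42\), so it holds for every \(t^a\geq 43\) as claimed.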
\medskip Finally, we focus on \(\dim_{{\mathbb F}_q} V=\ell\cdot\left(\dfrac{t^a-1}{2}\right)\), thus the inequality that has to be considered is \[q^{\ell\cdot\left(\frac{t^a-1}{2}\right)}-1>\dfrac{t^a\cdot(t^a+1)}{2}\cdot\left(q^{\ell\cdot\left(\frac{t^a-1}{4}\right)}-1\right)+\dfrac{t^a\cdot(t^a-1)}{2}\cdot\left(q^{\ell\cdot\left(\frac{t^a-1}{4}\right)}-1\right).\] With the usual argument, this reduces to \[q^{\frac{t^a-1}{2}}>t^{a}\cdot (t^a+1)\cdot q^{\frac{t^a-1}{4}}\] which is always satisfied for \(t^a\geq 47\). For smaller values of \(t^a\) (and for various \(\ell\)), from the original inequality we see that the triples to be checked are: \((2,t^a,1)\) for \(t^a\in\{7,9,17,23,25,31\}\), \((3,11,1)\), \((3,13,1)\), \((5,9,1)\), \((5,11,1)\), \((2,5,2)\), \((2,7,2)\), \((2,9,2)\), \((2,11,2)\), \((2,13,2)\), \((3,5,2)\), \((3,7,2)\), \((2,5,3)\), \((2,7,3)\), \((2,9,3)\) and \((2,5,4)\). None of these correspond to modules that satisfy our assumptions, the final contradiction that concludes the proof. \end{proof} Next, the aforementioned result concerning actions of \(\SL{t^a}\) by automorphisms on a \(t\)-group. \begin{theorem} \label{TipoIeII} Let \(T\) be a Sylow \(t\)-subgroup of \(G\cong\SL{t^a}\) (where \(t^a\geq 4\)) and, for a given odd prime divisor \(r\) of \(t^{2a}-1\), let \(R\) be a Sylow \(r\)-subgroup of \(G\). Assuming that \(V\) is a \(t\)-group such that \(G\) acts by automorphisms (not necessarily faithfully) on \(V\) and \(\cent V G=1\), consider the sets \[V_I=\{v\in V\mid {\textnormal{ there exists }} x\in G {\textnormal{ such that }}R^x\trianglelefteq\cent G v\},{\textnormal{ \emph{and}}}\] \[V_{II}=\{v\in V\mid {\textnormal{ there exists }} x \in G {\textnormal{ such that }}T^x\trianglelefteq\cent G v\}.\] Then the following conditions are equivalent. \begin{enumeratei} \item \(V_I\) and \(V_{II}\) are both non-empty and \(V-\{1\}=V_I\cup V_{II}\). 
\item \(G\cong\SL {4}\), and \(V\) is an irreducible \(G\)-module of dimension \(4\) over \(\mathbb{F}_2\). More precisely, \(V\) is the restriction to $G$, embedded as \(\Omega_4^-(2)\) into ${\rm{SL}}_4(2)$, of the standard module of ${\rm{SL}}_4(2)$. \end{enumeratei} \end{theorem} \begin{proof} Set \(V^{\sharp}=V-\{1\}\) and, as in our hypothesis, assume \(V^{\sharp}=V_I\cup V_{II}\) where \(V_I\) and \(V_{II}\) are both non-empty; we start by observing that \(V_I\) and \(V_{II}\) are disjoint, because our assumptions ensure that no non-trivial element of \(V\) is centralized by the whole \(G\) and, on the other hand, no proper subgroup of \(G\) contains both a conjugate of \(R\) and a conjugate of \(T\) as a normal subgroup. For similar reasons, it is also clear that \(V_{II}\) is partitioned by the subsets \(\cent{V^{\sharp}}{T^x}\) as \(x\) runs in \(G\). As regards the set \(V_{I}\), we see that it is certainly covered by the union of the subsets \(\cent {V^{\sharp}}{R^x}\) as \(x\) runs in \(G\) (in fact, every element of \(V_{I}\) lies in precisely one of those subsets); however, only in the case when \(r\) is a divisor of \(t^a-1\) can the subsets \(\cent {V^{\sharp}}{R^x}\) have a non-empty intersection also with \(V_{II}\), and for our purposes it will be important to determine these intersections. To this end, assuming \(r\in\pi(t^a-1)\), we will show next that \(\cent {V^{\sharp}}{R}\cap V_{II}\) is the (disjoint) union of \(\cent {V^{\sharp}}{T_1R}\) and \(\cent {V^{\sharp}}{T_2R}\) where \(T_1\), \(T_2\) are suitable conjugates of \(T\). In fact, these \(T_1\) and \(T_2\) turn out to be the two Sylow \(t\)-subgroups of \(G\) that are normalized by \(R\) (see Lemma~\ref{PSL2}). So, let \(T_0\in\syl t G\) be such that \(R\) lies in \(\norm G {T_0}\), and let \(v\) be an element of \(\cent {V^{\sharp}}{T_0R}\). 
Then \(v\) is in \(V_{II}\), as otherwise it would lie in \(V_I\) and \(\cent G v\) would contain a unique Sylow \(r\)-subgroup together with a Sylow \(t\)-subgroup of \(G\), which does not happen for any proper subgroup of \(G\). Also, it is clear that \(v\) is centralized by \(R\). As a consequence, if \(T_1\) and \(T_2\) are the two Sylow \(t\)-subgroups of \(G\) that are normalized by \(R\), then we get \(\cent{V^{\sharp}}{T_1R}\cup\cent{V^{\sharp}}{T_2R}\subseteq\cent{V^{\sharp}}{R}\cap V_{II}\). On the other hand, let \(v\) be an element of \(\cent {V^{\sharp}} R\cap V_{II}\), and let \(T_0\) be the (unique) Sylow \(t\)-subgroup of \(G\) that centralizes \(v\). As \(T_0\) is a normal subgroup of \(\cent G v\) and \(R\subseteq\cent G v\), we clearly get \(R\subseteq\norm G{T_0}\). This yields \(T_0\in\{T_1,T_2\}\) and, together with the discussion in the previous paragraph, \(\cent{V^{\sharp}}{T_1R}\cup\cent{V^{\sharp}}{T_2R}=\cent{V^{\sharp}}{R}\cap V_{II}\), as wanted. \smallskip Still in the case \(r\in\pi(t^a-1)\), set \(|V|=t^d\), \(|\cent V R|=t^e\), \(|\cent V T|=t^f\) and, assuming \(R\subseteq\norm G T\) (as we may), \(|\cent V {TR}|=t^g\). Note that \(e\) and \(f\) are positive integers because \(V_I\) and \(V_{II}\) are non-empty; moreover, \(f\geq g\), and \(t^e>2t^g-1\). In view of the above discussion, we get the following equality. \[t^d-1=|V^{\sharp}|=\left(\sum_{T\in\syl t G}|\cent{V^{\sharp}} T|\right)+\left(\sum_{R\in\syl r G}|\cent {V^{\sharp}} R-V_{II}|\right)=\] \[=(t^a+1)(t^f-1)+\dfrac{t^a(t^a+1)}{2}(t^e-2t^g+1),\] therefore, \[2t^d+t^a+2t^{2a+g}+2t^{a+g}=2t^{a+f}+2t^f+t^{2a+e}+t^{2a}+t^{a+e}.\] It is not difficult to check that, by the uniqueness of the \(t\)-adic expansion, the above equality is never satisfied if \(t\) is odd. As regards the case \(t=2\), the equality is indeed satisfied if and only if \(d=2a\), \(g+1=e\) and \(a=f+1\), so, in particular, we get \(|V|=2^{2a}\). 
We focus next on the subgroup \(\Omega=\Omega(\zent V)\) of \(V\) generated by all the central elements of \(V\) having order \(2\). Since \(\Omega\) is a characteristic subgroup of \(V\) and it is an elementary abelian \(2\)-group, we can view it as a \(G\)-module over \(\mathbb{F}_2\) and we can consider an irreducible submodule \(W\) of it. Of course no non-trivial element of \(W\) is centralized by the whole \(G\), and we have \(W=(W\cap V_I)\cup(W\cap V_{II})\). Moreover, if \(W\cap V_{II}\) is empty, then \cite[Lemma~3.3]{ACDPS} yields a contradiction; on the other hand, if \(W\cap V_I\) is empty or if both \(W\cap V_I\) and \(W\cap V_{II}\) are non-empty, then we get \(|W|=2^{2a}(=|V|)\) via \cite[Lemma~5.2]{LWc} or via the discussion in this proof, respectively. In any case we conclude that \(V=W\) is an irreducible \(G\)-module of dimension \(2a\) over \(\mathbb{F}_2\). Now, by Lemma~3.12 in \cite{PR}, \(a\) is even (\(a=2b\), say) and \(V\) is necessarily the restriction to \(\mathbb{F}_2\) of the ``natural'' (\(4\)-dimensional) \(\mathbb{F}_{2^b}\)-module for the group \(\Omega_4^-(2^b)\) or one of its Galois twists; furthermore, in this situation we get \(|\cent V T|=2^b\), i.e., \(f=a/2\). But we observed that \(a=f+1\), whence \(a=2\) and the proof that (a) implies (b) is complete in this case. The converse statement is also true, because the centralizers of the non-trivial elements in the module described in (b) are isomorphic either to \(S_3\) or to \(A_4\). \smallskip It remains to show that condition (a) cannot hold if, under our hypotheses, \(r\) lies in \(\pi(t^a+1)\). In fact, using the notation introduced for the case \(r\in\pi(t^a-1)\), we get \[t^d-1=(t^a+1)(t^f-1)+\dfrac{t^a(t^a-1)}{2}(t^e-1),\] therefore \[2t^d+t^a+t^{2a}+t^{a+e}=2t^{a+f}+2t^f+t^{2a+e}.\] Again by the uniqueness of the \(t\)-adic expansion, it is not difficult to see that the above equality is never satisfied. 
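Both displayed counting identities are elementary enough to be explored by brute force. The sketch below (Python; names and search bounds ours, not part of the argument) encodes the two identities verbatim, imposes the constraints recorded above (\(e,f\geq 1\), \(f\geq g\), \(t^e>2t^g-1\)), and confirms that within the searched range the first identity is solvable only for \(t=2\), always with \(d=2a\), \(e=g+1\) and \(f=a-1\), while the second identity has no solutions at all.

```python
from itertools import product

def eq_minus(t, a, d, e, f, g):
    # 2t^d + t^a + 2t^(2a+g) + 2t^(a+g) = 2t^(a+f) + 2t^f + t^(2a+e) + t^(2a) + t^(a+e)
    return (2 * t**d + t**a + 2 * t**(2 * a + g) + 2 * t**(a + g)
            == 2 * t**(a + f) + 2 * t**f + t**(2 * a + e) + t**(2 * a) + t**(a + e))

def eq_plus(t, a, d, e, f):
    # 2t^d + t^a + t^(2a) + t^(a+e) = 2t^(a+f) + 2t^f + t^(2a+e)
    return (2 * t**d + t**a + t**(2 * a) + t**(a + e)
            == 2 * t**(a + f) + 2 * t**f + t**(2 * a + e))

groups = [(2, 2), (2, 3), (2, 4), (3, 2), (5, 1), (7, 1)]   # (t, a) with t^a >= 4

sols = [(t, a, d, e, f, g)
        for (t, a), d, e, f, g in product(groups, range(1, 15), range(1, 9),
                                          range(1, 9), range(0, 7))
        if f >= g and t**e > 2 * t**g - 1 and eq_minus(t, a, d, e, f, g)]
print(sols)   # only t = 2, always with (d, e, f) = (2a, g+1, a-1)

no_sols = [(t, a, d, e, f)
           for (t, a), d, e, f in product(groups, range(1, 15), range(1, 9), range(1, 9))
           if eq_plus(t, a, d, e, f)]
print(no_sols)  # empty
```

The constraint \(f\geq g\) is essential here: dropping it admits stray solutions of the first identity (for example \(t=2\), \(a=2\), \(d=8\), \(e=5\), \(f=1\), \(g=2\)) that the \(t\)-adic argument excludes.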
\end{proof} We conclude the section with Lemma~\ref{brauer}, that deals with one specific orbit condition in linear actions of \(\SL{t^a}\), for \(t\neq 2\), on modules over \(\mathbb{F}_2\). \begin{lemma}\label{brauer} Let \(V\) be a non-trivial irreducible module for \(G=\SL{t^a}\) over the field \({\mathbb F}_2\), where \(t\) is an odd prime with \(t^a>3\). Let \(r\neq t\) be a prime in \(\{3,5\}\). Then there exists \(v\in V\) such that \(\cent G v\) does not contain any element of order \(r\), except in the following cases: \(t^a=5\) and \(\dim_{{\mathbb F}_2} V=4\), or \(t^a=7\) and \(\dim_{{\mathbb F}_2} V=3\) (in both cases \(r=3\)). \end{lemma} \begin{proof} We will prove the statement as follows. Given an element \(x\in G\) of order \(r\), we will establish an upper bound for the dimension over \({\mathbb F}_2\) of the subspace \(\cent V x\); we will then see that, with the possible exceptions mentioned in the statement, this dimension is too small to get a covering of \(V\) with subspaces of the form \(\cent V x\) where \(x\) ranges over the elements of order \(r\) in \(G\). The bound for \(\dim_{{\mathbb F}_2}\cent V x\) can be obtained by the same argument used in Lemma~\ref{brauer2}. Observing that \(V\) is in fact a module for \(S=\PSL{t^a}\) (because the element of order \(2\) of \(G\) acts trivially on \(V\)), we consider the irreducible \(2\)-Brauer characters of \(S\). In view of the discussion of \cite[Section VIII]{B}, it turns out that every non-principal irreducible \(2\)-Brauer character \(\theta\) of \(S\) is unique in its \(2\)-block and it has a lift \(\chi\in\irr S \), except when \(t^a-1\) is divisible by \(2^2\) and \(\theta\) lies in the principal \(2\)-block of \(S\); in the latter case, however, there exists \(\gamma\in\irr S\) such that \(\gamma-1_S\) is a lift for \(\theta\). In particular, the degrees of the irreducible \(2\)-Brauer characters of \(S\) are \(t^a-1\), \((t^a-1)/2\) and \(t^a+1\). 
Now we can refer to the ordinary character table of \(S\) (see for example~\cite[Pages~80, 82]{B}) and compute \([1_{\langle x\rangle},\chi_{\langle x\rangle}]\), where \(\chi\in\irr S\) ranges over a set of lifts for the non-principal irreducible \(2\)-Brauer characters of \(S\). In Table~\ref{lifts3} and Table~\ref{lifts5} the parameter \(\ell\) denotes (as in Lemma~\ref{brauer2}) the composition length of the \({\mathbb F}[G]\)-module \(V\otimes {\mathbb F}\), where \({\mathbb F}\) is a splitting field for \(G\) and all its subgroups; the maximal value of \([1_{\langle x\rangle},\chi_{\langle x\rangle}]\) is shown in the second and third row of the tables (dealing with the cases \(r\mid t^a-1\) and \(r\mid t^a+1\), respectively). \begin{table}[htbp] \caption{\label{lifts3} Maximal dimension of the centralizer of an element of order \(3\).} \renewcommand{\arraystretch}{.95} $$ \begin{array}{llll} \hline\hline\\ \dim_{{\mathbb F}_2} V\qquad\qquad & {\ell\cdot (t^a-1)}\qquad\quad&\ell\cdot\left(\dfrac{t^a-1}{2}\right)\qquad\quad& \ell\cdot(t^a+1)\\ \\ \hline \\ \dim_{{\mathbb F}_2}\cent V x&\ell\cdot\left( \dfrac{t^a-1}{3} \right)&\ell\cdot\left( \dfrac{t^a-1}{6} \right)&\ell\cdot\left( \dfrac{t^a+5}{3} \right)\\ \text{\rm{for }}t^a\equiv 1 {\textnormal{ (mod } 3)}&&&\\ \\ \hline \\ \dim_{{\mathbb F}_2}\cent V x&\ell\cdot\left( \dfrac{t^a+1}{3} \right)&\ell\cdot\left( \dfrac{t^a-5}{6} \right)&\ell\cdot\left( \dfrac{t^a+1}{3} \right)\\ \text{\rm{for }}t^a\equiv -1 {\textnormal{ (mod } 3)}&&&\\ \\ \hline\hline \\ \\ \end{array} $$ \end{table} \begin{table}[htbp] \caption{\label{lifts5} Maximal dimension of the centralizer of an element of order \(5\).} \renewcommand{\arraystretch}{.95} $$ \begin{array}{llll} \hline\hline\\ \dim_{{\mathbb F}_2} V\qquad\qquad& {\ell\cdot (t^a-1)}\qquad\quad&\ell\cdot\left(\dfrac{t^a-1}{2}\right)\qquad\quad& \ell\cdot(t^a+1)\\ \\ \hline \\ \dim_{{\mathbb F}_2}\cent V x&\ell\cdot\left( \dfrac{t^a-1}{5} \right)&\ell\cdot\left( 
\dfrac{t^a-1}{10} \right)&\ell\cdot\left( \dfrac{t^a+9}{5} \right)\\ \text{\rm{for }}t^a\equiv 1 {\textnormal{ (mod } 5)}&&&\\ \\ \hline \\ \dim_{{\mathbb F}_2}\cent V x&\ell\cdot\left( \dfrac{t^a+1}{5} \right)&\ell\cdot\left( \dfrac{t^a-9}{10} \right)&\ell\cdot\left( \dfrac{t^a+1}{5} \right)\\ \text{\rm{for }}t^a\equiv -1 {\textnormal{ (mod } 5)}&&&\\ \\ \hline\hline \end{array} $$ \end{table} Next, assume that every element of \(V\) is centralized by an element of order \(r\) of \(G\); then, denoting by \(k\) the number of Sylow \(r\)-subgroups of \(G\) and choosing a non-trivial element \(x_i\) (\(i\in\{1,\ldots, k\}\)) from each of those subgroups, we have \[V-\{0\}=\bigcup_{i=1}^k(\cent V {x_i}-\{0\}).\] Observe that \(k=t^a\cdot(t^a\pm 1)\), where the plus or minus sign occurs according to whether \(t^a\equiv 1{\textnormal{ (mod }} r)\) or \(t^a\equiv -1{\textnormal{ (mod }} r)\) respectively. So, setting \(m\) to be the maximal dimension of \(\cent V x\) corresponding to each \(d=\dim_{{\mathbb F}_2}V\) as shown in Table~\ref{lifts3} and Table~\ref{lifts5}, we consider the inequality \begin{equation}\label{ineq} 2^d-1> t^a\cdot(t^a+ 1)\cdot(2^m-1), \end{equation} and we discard all the modules whose corresponding pair \((t^a,d)\) satisfies Inequality~(1) (the appropriate value of \(m\) is deduced by the tables from the values of \(t^a\) and \(d\)). Note that (1) is true whenever we have \begin{equation} \label{ineq2} 2^d> t^a\cdot(t^a+ 1)\cdot 2^m; \end{equation} furthermore, writing \(d=\ell d_0\), \(m=\ell m_0\), and applying the function \(\log_2\) to both sides, it is also easy to see that Inequality~(2) is in turn satisfied if \(2^{d_0}> t^a\cdot(t^a+ 1)\cdot 2^{m_0}\) is. In other words, if an absolutely irreducible module (i.e., an irreducible module for which the parameter \(\ell\) is \(1\)) has a corresponding pair \((t^a,d_0)\) satisfying~(2), then we can discard every irreducible module corresponding to a pair of the kind \((t^a,\ell d_0)\). 
Now, the list of pairs \((t^a,d)\) that do not satisfy (2) and that correspond to absolutely irreducible modules is the following. \begin{enumeratei} \item For \(r=3\): \((5,2)\), \((5,4)\), \((5,6)\), \((7, 3)\), \((7,6)\), \((7, 8)\), \((11, 5)\), \((11, 10)\), \((13, 6)\), \((17, 8)\), \((19, 9)\), \((23, 11)\), \((25, 12)\). \item For \(r=5\): \((9,4)\), \((9,8)\), \((11,5)\), \((19,9)\). \end{enumeratei} This can be refined further using Corollary~1.2 of \cite{L} (so, discarding the modules on which the action of \(G\) generates regular orbits); as a result, the pairs that still need to be analyzed for the case \(\ell=1\) are the following. \begin{enumeratei} \item For \(r=3\): \((5,4)\), \((7,3)\), \((7, 8)\), \((11, 10)\), \((17, 8)\), \((23, 11)\), \((25, 12)\). \item For \(r=5\): \((9,4)\). \end{enumeratei} A direct computation with GAP \cite{GAP} shows that, in all of the above cases, there are elements of the relevant module whose centralizer in \(G\) does not have an order divisible by \(r\), except for the pairs \((5,4)\) and \((7,3)\) as claimed in the statement. As regards the cases when \(\ell>1\), only three pairs do not satisfy inequality (2), the value of \(\ell\) being \(2\) for all of them: these are \((5,4)\), \((5,8)\) and \((7,6)\). The non-absolutely irreducible pair \((5,4)\) is associated to the natural module of \(\SL 4\cong \PSL 5\) and obviously does not satisfy the condition about the centralizers, whereas the pairs \((5,8)\) and \((7,6)\) are not associated to any existing module. On the other hand, if \(V\) is an absolutely irreducible module corresponding to the pairs \((5,4)\) or \((7,3)\), then every element of \(V\) is actually centralized by an element of order \(3\) (the sets of orbit sizes are \(\{5,10\}\) and \(\{7\}\) respectively). The proof is complete. 
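The numerical part of this argument is easy to reproduce. The sketch below (Python; helper names ours, not part of the proof) encodes the \(\ell=1\) centralizer bounds of Table~\ref{lifts3} and Table~\ref{lifts5}, runs over the small odd prime powers \(t^a>3\) with \(t\neq r\) and \(t^a\equiv\pm 1\) (mod \(r\)), and over the three possible degrees \(t^a-1\), \((t^a-1)/2\), \(t^a+1\); it recovers exactly the two lists of pairs \((t^a,d)\) violating inequality (2) displayed above.

```python
def odd_prime_powers(bound):
    # pairs (t, t^a) with t an odd prime and 3 < t^a < bound
    out = []
    for n in range(5, bound, 2):
        p = next(d for d in range(3, n + 1) if n % d == 0)  # smallest prime factor
        m = n
        while m % p == 0:
            m //= p
        if m == 1:
            out.append((p, n))
    return out

def max_cent_dim(ta, d, r):
    # Tables "lifts3"/"lifts5" with l = 1; the shift is 5 for r = 3, 9 for r = 5
    shift = {3: 5, 5: 9}[r]
    if ta % r == 1:
        table = {ta - 1: (ta - 1) // r, (ta - 1) // 2: (ta - 1) // (2 * r),
                 ta + 1: (ta + shift) // r}
    else:  # ta % r == r - 1
        table = {ta - 1: (ta + 1) // r, (ta - 1) // 2: (ta - shift) // (2 * r),
                 ta + 1: (ta + 1) // r}
    return table[d]

failing = {3: set(), 5: set()}
for r in (3, 5):
    for t, ta in odd_prime_powers(50):
        if t == r or ta % r not in (1, r - 1):
            continue
        for d in (ta - 1, (ta - 1) // 2, ta + 1):
            m = max_cent_dim(ta, d, r)
            if not 2**d > ta * (ta + 1) * 2**m:   # inequality (2)
                failing[r].add((ta, d))

print(sorted(failing[3]))
print(sorted(failing[5]))
```

Raising the bound beyond \(50\) produces no further failures, in accordance with the exponential-versus-polynomial nature of inequality (2).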
\end{proof} \section{The structure of the last term in the derived series} Let \(G\) be a non-solvable group having a normal section isomorphic to \(\PSL{t^a}\), where \(t\) is an odd prime with \(t^a> 5\), and assume that \(\Delta(G)\) is connected with a cut-vertex. Our aim in this section is to describe the structure of the last term \(K\) in the derived series of \(G\). In particular we will see that, except for one sporadic case, either we have \(K\cong\PSL{t^a}\), or \(K\cong\SL{t^a}\), or \(K\) contains an abelian minimal normal subgroup \(L\) of \(G\) such that \(K/L\cong\SL{t^a}\) and \(L\) is the natural module for \(K/L\). We will actually prove that the dual group of \(L\) is the natural module for \(K/L\), but the desired conclusion follows taking into account that this module is self-dual. \begin{theorem}\label{MainAboutK} Assume that the group \(G\) has a composition factor isomorphic to $\PSL{t^a}$, for an odd prime \(t\) with \(t^a> 5\), and let \(p\) be a prime number. Assume also that \(\Delta(G)\) is connected with cut-vertex \(p\). Then, denoting by \(K\) the last term in the derived series of \(G\), one of the following conclusions holds. \begin{enumeratei} \item \(K\) is isomorphic to \(\PSL{t^a}\) or to \(\SL{t^a}\); \item \(K\) contains a minimal normal subgroup \(L\) of \(G\) such that \(K/L\) is isomorphic to \(\SL{t^a}\) and \(L\) is the natural module for \(K/L\); \item \(t^a=13\), and \(K\) contains a minimal normal subgroup \(L\) of \(G\) such that \(K/L\) is isomorphic to \(\SL{13}\), and \(L\) is isomorphic to one of the two \(6\)-dimensional irreducible modules for \(\SL{13}\) over \({\mathbb F}_3\). \end{enumeratei} \end{theorem} Our analysis concerning Theorem~\ref{MainAboutK} splits into two parts, depending on whether one of the sets of primes \(\pi(t^a-1)\), \(\pi(t^a+1)\) reduces to the prime \(2\) or not. Theorem~\ref{pallasgonfia}, which is introduced by Proposition~\ref{ThreeVertices}, deals with the former situation. 
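It may be worth illustrating numerically how restrictive the former situation is. The hedged sketch below (Python; helper names and the search bound are ours) enumerates the odd prime powers \(t^a\) for which one of \(\pi(t^a-1)-\{2\}\), \(\pi(t^a+1)-\{2\}\) is empty, and singles out those with \(|\pi(\PSL{t^a})|=3\): up to \(2000\), one finds only \(t^a\in\{5,7,9,17\}\).

```python
def prime_divisors(n):
    out, p = set(), 2
    while p * p <= n:
        if n % p == 0:
            out.add(p)
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        out.add(n)
    return out

special = []
for n in range(5, 2000, 2):
    if len(prime_divisors(n)) != 1:
        continue                        # keep only odd prime powers t^a
    pi_minus = prime_divisors(n - 1) - {2}
    pi_plus = prime_divisors(n + 1) - {2}
    if not pi_minus or not pi_plus:
        # primes dividing |PSL(2, t^a)| = t^a(t^(2a) - 1)/2; halving does not
        # change the prime set, since 8 divides t^(2a) - 1 for odd t^a
        pi_G = prime_divisors(n) | prime_divisors(n * n - 1)
        if len(pi_G) == 3:
            special.append(n)
print(special)
```

The one-side-empty condition alone already confines \(t^a\) to \(9\) together with Mersenne and Fermat primes (here \(5, 7, 17, 31, 127, 257\)); the extra requirement \(|\pi(G)|=3\) then discards \(31\), \(127\) and \(257\).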
\begin{proposition} \label{ThreeVertices} Set \(G\simeq\PSL{t^a}\), where \(t\) is an odd prime, and assume that one among the sets of primes \(\pi_{-}=\pi(t^a-1)-\{2\}\) and \(\pi_{+}=\pi(t^a+1)-\{2\}\) is empty. Then the following conclusions hold. \begin{enumeratei} \item \(a=1\), unless \(t^a=9\). \item If \(|\pi(G)|=3\), then \(t^a\in\{5,7,9,17\}\). \end{enumeratei} \end{proposition} \begin{proof} Our assumptions, together with Proposition~3.1 of \cite{MW}, yield that \(a=1\) and \(t\) can be written as \(2^k-1\) or \(2^k+1\) for a suitable integer \(k>2\), with the only exception of the case \(t^a=9\) (if \(t=2^k-1\), then \(t\) is a Mersenne prime and \(k\) is a prime number, whereas if \(t=2^k+1\), then \(t\) is a Fermat prime and \(k\) is a \(2\)-power). This proves conclusion~(a). Assuming now in addition that \(|\pi(G)|=3\), we write \(\pi(G)=\{2,t,u\}\) and we consider first the case when \(\pi_{+}\) is empty, so \(t=2^k-1\) for a suitable prime number \(k\). Then \(t-1=2^k-2=2\cdot (2^{k-1}-1)\) is divisible only by \(2\) and \(u\), which means that \(u=2^{k-1}-1\) is in turn a Mersenne prime and \(k-1\) is a prime number. This can happen only for \(k=3\), thus \(t=7\). On the other hand, if \(\pi_{-}\) is the empty one, then either \(t^a=9\) or \(t=2^k+1\) is a Fermat prime and \(k\) is a \(2\)-power. Consider the latter case: the fact that \(\pi_{+}\) consists of the single prime \(u\) yields that \((2^{k-1}+1)=(t+1)/2\) is a power of \(u\), so either \(u=2^{k-1}+1\) is in turn a Fermat prime (and \(k-1\) is a \(2\)-power as well) or \(u^2=9=2^{k-1}+1\). If both \(k\) and \(k-1\) are \(2\)-powers, then necessarily we get \(k=2\) and \(t=5\); the remaining possibility is \(u^2=9\), thus \(k-1=3\) and \(t=2^k+1=2^4+1=17\). \end{proof} \begin{theorem}\label{pallasgonfia} Assume that the group \(G\) has a composition factor isomorphic to $\PSL{t^a}$, for an odd prime \(t\) with \(t^a> 5\), and let \(p\) be a prime number. 
Assume also that \(\Delta(G)\) is connected with cut-vertex \(p\), and that one among the sets of primes \(\pi_{-}=\pi(t^a-1)-\{2\}\) and \(\pi_{+}=\pi(t^a+1)-\{2\}\) is empty. Then, denoting by \(K\) the last term in the derived series of \(G\), one of the following conclusions holds. \begin{enumeratei} \item \(K\) is isomorphic to \(\PSL{t^a}\) or to \(\SL{t^a}\); \item \(K\) contains a minimal normal subgroup \(L\) of \(G\) such that \(K/L\) is isomorphic to \(\SL{t^a}\) and \(L\) is the natural module for \(K/L\). \end{enumeratei} \end{theorem} \begin{proof} Let $R$ be the solvable radical of $G$. Observe that, by Theorem~\ref{0.2}, the factor group \(G/R\) is an almost-simple group (with socle isomorphic to \(\PSL{t^a}\)) and \(\V G=\V{G/R}\cup\{p\}\); furthermore, Proposition~\ref{ThreeVertices} yields that \(a=1\) with the only exception of the case \(t^a=9\), therefore \(\V G\) consists of \(p\) and the primes dividing the order of the socle of \(G/R\). As the subgraph of \(\Delta(G)\) induced by \(\V G-\{t,p\}\) is then a complete subgraph, it is clear that \(p\neq t\) and that the set of neighbours of \(t\) in \(\Delta(G)\) is \(\{p\}\). As another consequence of the fact that \(a=1\) or \(t^a=9\), the socle of \(G/R\) does not have any proper subgroup of type~(iv) except when \(t^a=9\), in which case the relevant subgroups (isomorphic to \(A_4\) or \(S_4\)) are in fact also of type~(iii). Consider now the last term \(K\) in the derived series of \(G\) and assume \(t^a\neq 9\). By Lemma~\ref{InfiniteCommutator} we can assume that, for a suitable non-trivial normal subgroup \(L\) of \(K\), we have \(K/L\cong\PSL{t^a}\) or \(K/L\cong\SL{t^a}\); moreover, for every non-principal irreducible character \(\lambda\) of \(L/L'\), the inertia subgroup \(I_K(\lambda)\) is a proper subgroup of \(K\) (and the same clearly holds also for \(I_K(\lambda)N\), where \(N/L=\zent{K/L}\)). 
We claim that the conclusions of Lemma~\ref{InfiniteCommutator} actually hold even if \(t^a=9\). In fact, as in the proof of that lemma, consider a chief factor \(K/N\cong\PSL{9}\) of \(G\) and suppose (as we may) \(N\neq 1\); if \(\mu\) is a non-principal \(K\)-invariant irreducible character of \(N/N'\) and \(L=\ker\mu\), then (\(L\trianglelefteq K\) and) \(N/L\) lies in the Schur multiplier of \(K/N\), that has order \(6\). But if \(K/L\) is a central extension of \(N/L\) by \(K/N\cong\PSL{9}\) such that \(3\) divides \(|N/L|\), then it can be checked that the set of irreducible character degrees of \(K/L\) contains both \(6\) and \(15\), thus \(\Delta(K/L)\) is a triangle of vertices \(2,3,5\) and \(\Delta(G)\) does not satisfy the hypothesis. The conclusion so far is that \(K/L\cong\SL{9}\), and again we can assume \(L\neq 1\). Now, since the Schur multiplier of \(\SL{9}\) has order \(3\), but the Schur cover of \(\SL{9}\) has irreducible characters of degree \(6\) and \(15\), arguing as above we see that \(L/L'\) does not have any \(K\)-invariant irreducible character. Given that, the proof is organized as follows. First, we will show that in the present setting \(K/L\) is isomorphic to \(\SL{t^a}\) and \(L/L'\) is the natural module for \(K/L\); we will treat separately four cases, depending on whether \(|\pi(K/N)|\) is larger than \(3\) or not, and whether \(p\neq 2\) or \(p=2\). After that, we will prove that \(L'\) is in fact trivial. This, together with the observation that in this situation \(L=\oh t K\) (therefore \(L\) is actually a normal subgroup of \(G\)) and \(L\) is already a minimal normal subgroup of \(K\), will conclude the proof. \medskip Consider first the case when \(|\pi(K/N)|\) is larger than \(3\), i.e., the non-empty set among \(\pi_{-}\) and \(\pi_{+}\) does not consist of a single prime (observe that the case \(t^a=9\) is then excluded, so we always have \(a=1\)), and \(p\neq 2\). 
In what follows, we denote by \(\pi\) the non-empty set among \(\pi_+\) and \(\pi_-\) (in other words, we set \(\pi=\pi_+\cup\pi_-\)). Let \(\lambda\) be a non-principal irreducible character of \(L/L'\), and assume that \(t\) divides \(|K:I_K(\lambda)|\). Then \(I_K(\lambda)/L\) must contain a Sylow \(2\)-subgroup of \(K/L\), as otherwise \(t\) would be adjacent to \(2\) in \(\Delta(G)\) (recall that \(p\neq 2\) is the only neighbour of \(t\) in \(\Delta(G)\)); in particular, \(N\) is contained in \(I_K(\lambda)\). In this situation, if \(I_K(\lambda)/N\) is of type~(i), then it is easy to see that \(t\) is adjacent in \(\Delta(G)\) to every prime in \(\pi\), which implies \(\pi=\{p\}\) and \(|\pi(K/N)|=3\), not the case we are considering. It is also clear that \(I_K(\lambda)/N\) cannot be of type~(ii). Finally, if \(I_K(\lambda)/N\) is of type (iii), then the largest power of \(2\) that divides \(|K/N|\) is \(2^2\) or \(2^3\); since it is easily seen that, writing \(t=2^k\pm 1\), the \(2\)-part of \(|K/N|\) is \(2^k\), we get either \(k=2\) (but recall that \(k>2\) in the present situation) or \(k=3\), \(t=7\) and \(\V G=\{2,3,7\}\), in any case a contradiction. On the other hand, assume that \(t\) does not divide \(|K:I_K(\lambda)|\). Then \(I_K(\lambda)N/N\) is not of type~(i); moreover, if this factor group is of type~(iii), then we get \(t=3\) or \(t=5\), against our hypothesis. The only possibility is then the type (ii): in particular, \emph{\(I_K(\lambda)/L\) contains a Sylow \(t\)-subgroup of \(K/L\) as a normal subgroup.} Since this holds for every non-principal irreducible character \(\lambda\) of \(L/L'\), by Lemma~\ref{SL2Nq} we conclude that \(K/L\cong\SL{t}\) and \(L/L'\) is the natural module, as desired. \smallskip Still assuming \(|\pi(K/N)|>3\), we move now to the case \(p=2\). Again, let \(\lambda\) be a non-principal irreducible character of \(L/L'\). 
If \(t\) divides \(|K:I_K(\lambda)|\), then the factor group \(I_K(\lambda)N/N\) is clearly not a subgroup of \(K/N\) of type (ii) with non-trivial \(t\)-part. But this factor group is not of type (iii) either: in fact, if \(I_K(\lambda)N/N\) is isomorphic to \(A_4\) or \(S_4\), then \(\pi\) would consist of the prime \(3\) only, whereas if \(I_K(\lambda)N/N\cong A_5\), then (by Gallagher's theorem or by the theory of character triples, according to whether \(\lambda\) has an extension to its inertia subgroup or not) we see that \(t\) is adjacent to \(3\) in \(\Delta(G)\), in any case a contradiction. Therefore \(I_K(\lambda)N/N\) must be of type (i), containing a Hall \(\pi\)-subgroup of \(K/N\) as a normal subgroup; as is easily checked, we then have that \(I_K(\lambda)/L\) contains a Hall \(\pi\)-subgroup of \(K/L\) as a normal subgroup. On the other hand, if \(t\) does not divide \(|K:I_K(\lambda)|\), then \(I_K(\lambda)N/N\) is a subgroup of type (ii) of \(K/N\), and \(I_K(\lambda)/L\) contains a Sylow \(t\)-subgroup of \(K/L\) as a normal subgroup. Now, assume that \(L/L'\) is not a \(t\)-group. Then there exists a chief factor \(L/Y\) of \(K\) that is a \(q\)-group for a suitable prime \(q\neq t\). Denoting by \(V\) the dual group of \(L/Y\), the discussion carried out so far yields that for every non-trivial element \(\lambda\) of \(V\), the factor group \(I_K(\lambda)/L\) contains either a unique Hall \(\pi\)-subgroup or a unique Sylow \(t\)-subgroup of \(K/L\). This is against Theorem~\ref{TipoIeIIPieni}, thus we conclude that \(L/L'\) is a \(t\)-group, and we are in a position to apply Theorem~\ref{TipoIeII} obtaining that the subgroups \(I_K(\lambda)/L\) (for \(\lambda\) a non-principal character in \(\irr{L/L'}\)) are all of the same kind: either they all contain a unique Hall \(\pi\)-subgroup of \(K/L\) or they all contain a unique Sylow \(t\)-subgroup of \(K/L\). 
However, by Lemma~\ref{SL2Nq} (and recalling \cite[Lemma~4]{Z}) the former condition is impossible and \(L/L'\) is the natural module for \(K/L\cong\SL{t}\), as desired. \smallskip Now, let us consider the case when \(|\pi(K/N)|=3\), so \(\pi(K/N)=\{2,u,t\}\) for a suitable prime \(u\). Recall that, by Proposition~\ref{ThreeVertices}, \(K/N\) is then isomorphic to one of the following groups: \(\PSL 7\), \(\PSL 9\) or \(\PSL{17}\). We assume first that, under the hypothesis \(|\pi(K/N)|=3\), we have \(p\neq 2\). Thus, if \(\lambda\) is a non-principal irreducible character of \(L/L'\) and \(t\) divides \(|K:I_K(\lambda)|\), then \(2\) cannot divide the degree of any irreducible character of \(K\) lying over \(\lambda\). In particular, \(K/L\) should have an abelian Sylow \(2\)-subgroup by Theorem~A of \cite{NT}, and this is not the case. Looking at \cite{ATLAS}, the only remaining possibility is that \(I_K(\lambda)/L\) contains a Sylow \(t\)-subgroup of \(K/L\) as a normal subgroup. This holds for every non-principal \(\lambda\in\irr{L/L'}\), and again by Lemma~\ref{SL2Nq} we have that \(L/L'\) is the natural module for \(K/L\cong\SL {t^a}\). \smallskip Finally let \(|\pi(K/N)|=3\) and \(p=2\). Assume first \(K/N\cong\PSL 7\): in this case we have \(\V G=\{2,3,7\}\) and the only non-adjacency in \(\Delta(G)\) is between \(3\) and \(7\). Now if, for every non-principal \(\lambda\) in \(\irr{L/L'}\), we have that \(7\) does not divide \(|K:I_K(\lambda)N|\), then \(I_K(\lambda)/L\) contains a Sylow \(7\)-subgroup of \(K/L\) as a normal subgroup and \(L/L'\) is the natural module for \(K/L\cong\SL 7\) (by Lemma~\ref{SL2Nq}); therefore, in view of the structure of the maximal subgroups of \(\PSL{7}\), we can assume that there exists \(\lambda_0\in\irr{L/L'}\) such that \(I_K(\lambda_0)N/N\) is isomorphic to a subgroup of \(S_4\). 
Of course the order of \(I_K(\lambda_0)N/N\) must be a multiple of \(3\), so either \(I_K(\lambda_0)/L\) contains a Sylow \(3\)-subgroup of \(K/L\) as a normal subgroup, or it has a normal section isomorphic to \(A_4\). In the latter case \(\lambda_0\) cannot extend to \(I_K(\lambda_0)\), as otherwise we would get the adjacency between \(3\) and \(7\) by Clifford's theory; hence \(|L/L'|\) is an even number, and there exists a chief factor \(L/Y\) of \(K\) that is a \(2\)-group. But it can be checked (via GAP \cite{GAP}, for instance) that all three irreducible modules of \(\SL 7\) over \({\mathbb F}_2\) (which have dimensions \(3\), \(3\), \(8\) respectively) produce factor groups \(K/Y\) having irreducible characters of degree divisible by \(21\), and this is a contradiction. By the Ito-Michler theorem and Gallagher's theorem we conclude that, for every non-principal \(\lambda\in\irr{L/L'}\), \(I_K(\lambda)/L\) contains either a Sylow \(7\)-subgroup or a Sylow \(3\)-subgroup of \(K/L\) as a normal subgroup. If \(L/L'\) is not a \(7\)-group, then we can consider a chief factor \(L/Y\) of \(K\) such that \(L/Y\) is a \(q\)-group for a suitable prime \(q\neq 7\), and Theorem~\ref{TipoIeIIPieni} applied to the action of \(K/L\) on the dual group of \(L/Y\) yields a contradiction. Therefore \(L/L'\) is a \(7\)-group, and now by Theorem~\ref{TipoIeII} (together with Lemma~\ref{SL2Nq}) we get that \(L/L'\) is in fact the natural module for \(K/L\cong\SL 7\). Assume now \(K/N\cong\PSL 9\), thus \(\V G=\{2,3,5\}\) and the only non-adjacency in \(\Delta(G)\) is between \(3\) and \(5\). 
If, for every non-principal \(\lambda\) in \(\irr{L/L'}\), we have that \(3\) does not divide \(|K:I_K(\lambda)N|\), then \(I_K(\lambda)/L\) contains a Sylow \(3\)-subgroup of \(K/L\) as a normal subgroup and \(L/L'\) is the natural module for \(K/L\cong\SL 9\); therefore, we can assume that there exists \(\lambda_0\in\irr{L/L'}\) such that \(I_K(\lambda_0)N/N\) is isomorphic to a subgroup of \(A_5\). Of course the order of \(I_K(\lambda_0)N/N\) must be a multiple of \(5\), so either \(I_K(\lambda_0)/L\) contains a Sylow \(5\)-subgroup of \(K/L\) as a normal subgroup, or it is isomorphic to \(A_5\). In the latter case \(\lambda_0\) cannot extend to \(I_K(\lambda_0)\), as otherwise we would get the adjacency between \(3\) and \(5\) by Clifford's theory; hence \(|L/L'|\) is an even number, and there exists a chief factor \(L/Y\) of \(K\) that is a \(2\)-group. But it can be checked (via GAP \cite{GAP}, for instance) that all the three irreducible modules of \(\SL 9\) over \(\mathbb{F}_2\) (that have dimensions \(4\), \(4\), \(16\) respectively) produce factor groups \(K/Y\) having irreducible characters of degree divisible by \(15\), and this is a contradiction. We conclude that, for every non-principal \(\lambda\in\irr{L/L'}\), \(I_K(\lambda)/L\) contains either a Sylow \(3\)-subgroup or a Sylow \(5\)-subgroup of \(K/L\) as a normal subgroup; the same argument as in the paragraph above shows that \(L/L'\) is the natural module for \(K/L\cong\SL 9\). Finally, consider the case \(K/N\cong\PSL {17}\), hence \(\V G=\{2,3,17\}\) and the only non-adjacency in \(\Delta(G)\) is between \(3\) and \(17\). 
If, for every non-principal \(\lambda\) in \(\irr{L/L'}\), we have that \(17\) does not divide \(|K:I_K(\lambda)N|\), then \(I_K(\lambda)/L\) contains a Sylow \(17\)-subgroup of \(K/L\) as a normal subgroup and \(L/L'\) is the natural module for \(K/L\cong\SL {17}\); therefore, we can assume that there exists \(\lambda_0\in\irr{L/L'}\) such that \(I_K(\lambda_0)N/N\) has order not divisible by \(17\), but divisible by \(3^2\). An inspection of the subgroups of \(\PSL {17}\) yields that \(I_K(\lambda_0)N/N\) is either a cyclic group of order \(3^2\) or a dihedral group of order \(18\); in any case, it contains a Sylow \(3\)-subgroup of \(K/N\) as a normal subgroup, and we can get the conclusion that \(L/L'\) is the natural module for \(K/L\cong\SL{17}\) as in the previous cases. \medskip The proof that \(L/L'\) is the natural module for \(K/L\cong\SL{t^a}\) is then concluded. It remains to show that, setting \(U=L'\), we have \(U=1\). This will also imply that \(L\) is normal in \(G\), because if \(U=1\) then we have \(L=\oh t K\). Again we will treat separately the cases \(p\neq 2\) and \(p=2\). \smallskip Assume \(p\neq 2\) and, aiming at a contradiction, assume \(U\neq 1\). Since \(U/U'\) is an abelian (non-trivial) normal subgroup of \(L/U'\) having \(t\)-power index, any non-linear irreducible character \(\phi\) of \(L/U'\) is such that \(t\) divides \(\phi(1)\). Now, we know that every irreducible character of \(K\) lying over \(\phi\) must have odd degree, and so \cite[Theorem~A]{NT} yields that \(K/L\) has abelian Sylow \(2\)-subgroups. This is clearly a contradiction, and the proof of this case is complete. \smallskip Finally, let us consider the case \(p=2\) and, again assuming \(U\neq 1\), let us take a subgroup \(Z\) of \(K\) such that \(U/Z\) is a chief factor of \(K\). We treat three situations, which are exhaustive and all lead to a contradiction. 
{\bf{(i)}} \(U/Z\not\leq\zent{L/Z}.\) \noindent(Note that, in this case, \(U/Z\) is a faithful \(K/U\)-module.) Consider the normal subgroup \(\cent{U/Z}{L/U}\) of \(K/Z\); since \(U/Z\) is a chief factor of \(K\) and it is not centralized by \(L/U\), we deduce that \(\cent{U/Z}{L/U}\) is trivial. We are in a position to apply the proposition appearing in the Introduction of \cite{Cu}, which ensures that the second cohomology group \({\rm {H}}^2(K/U,U/Z)\) is trivial, and therefore \(K/Z\) is a split extension of \(U/Z\); in particular, every irreducible character of \(U/Z\) extends to its inertia subgroup in \(K/Z\). Now, let \(\lambda\) be any non-principal character in \(\irr{U/Z}\): if \(\xi\in\irr{L/Z}\) lies over \(\lambda\), then \(\xi(1)\neq 1\) (as \(U/Z=(L/Z)'\)) and \(\xi(1)\) is a divisor of \(|L/U|=t^2\). In particular, every irreducible character of \(K\) lying over \(\lambda\) has a degree divisible by \(t\). Since \(\lambda\) extends to \(I_K(\lambda)/Z\), our assumptions together with Gallagher's theorem imply that \(I_K(\lambda)/U\) contains a unique Hall \(\pi\)-subgroup of \(K/U\). But this yields a contradiction via, for example, Proposition~3.13 of \cite{DPSS}; in fact, according to that result, \(K/U\) should have a cyclic solvable radical (whereas \(L/U=\oh t{K/U}\) is non-cyclic). This concludes the proof in this case. {\bf{(ii)}} \(U/Z\leq\zent{L/Z}\), but \(U/Z\not\leq\zent{K/Z}.\) \noindent Let \(\lambda\) be a non-principal character in \(\irr{U/Z}\), and let \(\xi\in\irr{L/Z}\) be a character lying over \(\lambda\). Clearly \(\xi(1)\) is a multiple of \(t\) and, assuming for the moment \(t^a\neq 9\), \(\xi\) extends to its inertia subgroup in \(K\) because \(K/L\) has cyclic Sylow \(t\)-subgroups (\cite[8.16, 11.22, 11.31]{Is}). It follows by Gallagher's theorem that \(I_K(\xi)/L\) contains a Hall \(\pi\)-subgroup of \(K/L\) as a normal subgroup. 
Now, \(\xi_U=\xi(1)\lambda\) and \(\xi(1)^2\leq t^2\), therefore \(\xi(1)=t\) and \(\lambda\) is fully ramified with respect to \(L/U\). In particular, \(\xi\) vanishes outside \(U\), thus \(I_K(\lambda)=I_K(\xi)\) and so \(I_K(\lambda)/L\) contains a Hall \(\pi\)-subgroup of \(K/L\) as a normal subgroup. This is against Lemma~\ref{SL2Nq}, and we are done under the additional assumption \(t^a\neq 9\). As regards the remaining case, we proceed as follows. Observe first that \(U/Z\) is a \(3\)-group, as otherwise \(L/Z\) would be the direct product of \(U/Z\) with an abelian \(3\)-group, and it would be abelian. Moreover, if \(\lambda\) is a non-principal character in \(\irr{U/Z}\), then any irreducible character of \(K\) lying over \(\lambda\) has a degree divisible by \(3\); as a consequence, \(|K:I_K(\lambda)|\) is not divisible by \(5\). But a direct computation on the non-trivial irreducible modules for \(\SL{9}\) over \({\mathbb F}_{3}\) (there are five of them, of dimensions \(4\), \(4\), \(6\), \(9\), \(12\)) shows that in every such module there are elements lying in orbits of size divisible by \(5\), the final contradiction that concludes the proof for this case. {\bf{(iii)}} \(U/Z\leq\zent{K/Z}.\) \noindent Let \(\lambda\) be a non-principal character in \(\irr{U/Z}\). We have that \((L/Z,U/Z,\lambda)\) is a character triple for which the factor group \((L/Z)/(U/Z)\cong L/U\) is abelian, thus we can apply Lemma~2.2 of \cite{Wo}: there exists a unique subgroup \(W/Z\) of \(L/Z\), containing \(U/Z\), maximal with respect to the fact that \(\lambda\) has an \(L/Z\)-invariant extension to \(W/Z\). By the uniqueness of this \(W/Z\) and by the fact that \(\lambda\) is invariant in \(K\), it follows that \(W/Z\) is normal in \(K/Z\) and, since it is properly contained in \(L/Z\) (because \(\lambda\) does not extend to \(L\)), we get \(W=U\). 
Now part (b) of the same lemma yields that \(\lambda\) is fully ramified with respect to \(L/U\), and therefore the unique \(\xi\) in \(\irr{L/Z\mid\lambda}\) is such that \(I_K(\xi)=I_K(\lambda)=K\). Now, if \(\xi\) (whose degree is divisible by \(t\)) extends to \(K\), then we reach a contradiction via Gallagher's theorem. This always happens when the Schur multiplier of \(K/L\) is trivial, i.e., when \(t^a\neq 9\). As for the remaining case, working with character triples, we see that there exists an irreducible character of \(K\) lying over \(\xi\) whose degree is divisible by \(5\) (actually, by \(15\)) even if \(\xi\) does not extend to \(K\). This is impossible in our setting, and the proof is complete. \end{proof} We consider now, still for \(t\) odd and \(t^a>5\), the complementary situation with respect to the previous result. This will conclude the proof of Theorem~\ref{MainAboutK}. \begin{theorem}\label{pallepiene} Assume that the group \(G\) has a composition factor isomorphic to $\PSL{t^a}$, for an odd prime \(t\) with \(t^a> 5\), and let \(p\) be a prime number. Assume also that \(\Delta(G)\) is connected with cut-vertex \(p\), and that both the sets of primes \(\pi_{-}=\pi(t^a-1)-\{2\}\) and \(\pi_{+}=\pi(t^a+1)-\{2\}\) are non-empty. Then, denoting by \(K\) the last term in the derived series of \(G\), one of the following conclusions holds. \begin{enumeratei} \item \(K\) is isomorphic to \(\PSL{t^a}\) or to \(\SL{t^a}\); \item \(K\) contains a minimal normal subgroup \(L\) of \(G\) such that \(K/L\) is isomorphic to \(\SL{t^a}\) and \(L\) is the natural module for \(K/L\); \item \(t^a=13\), and \(K\) contains a minimal normal subgroup \(L\) of \(G\) such that \(K/L\) is isomorphic to \(\SL{13}\), and \(L\) is isomorphic to one of the two \(6\)-dimensional irreducible modules for \(\SL{13}\) over \({\mathbb F}_3\). 
\end{enumeratei} \end{theorem} \begin{proof} Denoting by \(R\) the solvable radical of \(G\), Theorem~\ref{0.2} yields that \(G/R\) is an almost-simple group with socle isomorphic to \(\PSL{t^a}\), and \(\V G=\pi(G/R)\cup\{p\}\). Note that Lemma~\ref{InfiniteCommutator} applies here, because \(t^a=9\) is excluded by our assumption that \(\pi_-\) is non-empty; so, either we get conclusion (a), or \(K\) has a non-trivial normal subgroup \(L\) such that \(K/L\) is isomorphic to \(\PSL{t^a}\) or to \(\SL{t^a}\), and no non-principal irreducible character of \(L/L'\) is invariant in \(K\). Therefore, we can assume that the latter condition holds. Consider then a non-principal \(\lambda\) in \(\irr{L/L'}\): as we said, the inertia subgroup \(I_K(\lambda)\) is a proper subgroup of \(K\) and the same clearly holds for \(I_K(\lambda)N\), where we set \(N/L=\zent{K/L}\). Thus \(I_K(\lambda)N/N\) is a suitable proper subgroup of \(K/N\cong\PSL{t^a}\). If \(t\) is not a divisor of \(|K:I_K(\lambda)|\), then \(I_K(\lambda)N/N\) is of type (ii) and it is easy to see that \(I_K(\lambda)/L\) contains a Sylow \(t\)-subgroup of \(K/L\) as a normal subgroup. Assuming for the moment that this happens for every non-principal \(\lambda\in\irr{L/L'}\), an application of Lemma~\ref{SL2Nq} yields that \(K/L\) is isomorphic to \(\SL{t^a}\) and \(L/L'\) is the natural module for \(K/L\). So, in order to get conclusion (b), we only have to show that \(L'\) is trivial (note that, once this is proved, \(L=\oh t K\) is a minimal normal subgroup of \(G\)), and this is what we do next. 
Writing \(U\) for \(L'\) we observe that, if \(K/L\cong\SL{t^a}\) and \(L/U\) is the natural module for \(K/L\), then \(\Delta(K/U)\) has two complete connected components, \(\{t\}\) and \(\V {K/L}-\{t\}\); recalling also that every prime in \(\pi(G/R)-\pi(K/L)\) is adjacent in \(\Delta(G/R)\) to all the other primes in \(\pi(G/R)-\{t\}\) (Theorem~\ref{MoretoTiep}), we see that the subgraph of \(\Delta(G)\) induced by the set \(\pi(G/R)-\{t\}\) is a clique, and therefore the set of neighbours of \(t\) in \(\Delta(G)\) consists of \(p\) only. Given that, for a proof by contradiction, assume \(U\neq 1\) and consider the non-abelian factor group \(L/U'\); since \(U/U'\) is an abelian normal subgroup of \(L/U'\) having \(t\)-power index, any non-linear irreducible character \(\phi\) of \(L/U'\) has a degree divisible by \(t\). Now, if \(p\neq 2\), then the non-adjacency between \(t\) and \(2\) yields that every irreducible character of \(K\) lying over \(\phi\) has odd degree, which implies (via Theorem~A of \cite{NT}) that \(K/L\) has abelian Sylow \(2\)-subgroups. Since this is not the case, we deduce that \(p=2\) and so \(|K:I_K(\phi)|\) is divisible only by \(2\) and \(t\). Note that \(\phi\) cannot be invariant in \(K\), because the Schur multiplier of \(K/L\) is trivial and we would easily get a contradiction by Gallagher's theorem. It is also not difficult to see that \(I_K(\phi)N/N\) cannot be a subgroup of type (i), (ii) or (iv) of \(K/N\). On the other hand, \(I_K(\phi)N/N\) cannot be isomorphic to \(S_4\) or to \(A_4\) either, because we know that no prime in \(\pi_-\cup\pi_+\) divides \(|K:I_K(\phi)N|\) and we would have \(\pi_-\cup\pi_+\subseteq\{3\}\), against the assumption that both \(\pi_-\) and \(\pi_+\) are non-empty. It remains to consider the case \(I_K(\phi)N/N\cong A_5\). In this situation, \(\pi_-\cup\pi_+\) is forced to coincide with \(\{3,5\}\), hence \(t\not\in\{3,5\}\). 
Moreover, \(\phi\) does not have an extension to \(I_K(\phi)\), as otherwise we would get (again by Gallagher's theorem) that \(t\) is adjacent to \(3\) and to \(5\) in \(\Delta(G)\). Now, if \(I_K(\phi)\) contains \(N\), then \(I_K(\phi)/L\) is isomorphic to \(\SL 5\) (because \(K/L\cong\SL {t^a}\) has a unique involution); but the Schur multiplier of this group is trivial, so \(\phi\) would have an extension to \(I_K(\phi)\). It follows that \(I_K(\phi)\) does not contain \(N\) and, using character triples, we see that there exists an irreducible character of \(I_K(\phi)\) lying over \(\phi\) whose degree is divisible by \(3\). This again produces the adjacency between \(t\) and \(3\) in \(\Delta(G)\), the final contradiction that yields conclusion (b) in this case. \smallskip So, our assumption will be henceforth that there exists a non-principal \(\lambda\in\irr{L/U}\) such that \(t\) divides \(|K:I_K(\lambda)|\). Our aim will be to get conclusion (c) under this hypothesis. We claim that, if this is the setting, then the vertices \(2\) and \(t\) are adjacent in \(\Delta(K)\) (thus, in \(\Delta(G)\)). Assuming the contrary, first of all we observe that \(I_K(\lambda)/L\) is forced to contain a Sylow \(2\)-subgroup of \(K/L\), and \cite[Theorem~A]{NT} ensures that \(K/L\) has abelian Sylow \(2\)-subgroups, which yields \(N=L\); in particular, \(I_K(\lambda)/N\) cannot be a subgroup of type~(ii) of \(K/N\), and it cannot be isomorphic to \(S_4\) or to ${\rm PGL}_2(t^b)$ for any \(b\). The remaining possibilities for \(I_K(\lambda)/N\) are then to be either of type~(i) or isomorphic to a group in the following list: \(A_4\), \(A_5\), \(\PSL{t^b}\) for a suitable divisor $b$ of $a$. 
Suppose first that \(\lambda\) does not have an extension to \(I_K(\lambda)\): then, working in a Schur cover of \(I_K(\lambda)/N\) for any of its possible isomorphism types and using the theory of character triples, we see that there exists an irreducible character of \(I_K(\lambda)\) lying over \(\lambda\) and having an even degree. In fact, if \(I_K(\lambda)/N\) is (non-cyclic) of type~(i), then any central extension of \(I_K(\lambda)/N\) has an abelian normal subgroup of index \(2\) and all irreducible character degrees in \(\{1,2\}\); the other cases can be easily checked. This is against our assumption that \(2\) and \(t\) are non-adjacent in \(\Delta(K)\). On the other hand, if \(\lambda\) does have an extension to \(I_K(\lambda)\), then Gallagher's theorem yields a contradiction for all the possible types of \(I_K(\lambda)/N\), except for the type \(A_4\). In this last case, however, by Gallagher's theorem the primes in \(\pi(K/N)-\{2\}\) are the vertices of a clique in \(\Delta(K)\): taking into account Lemma~\ref{PSL2bis} and Theorem~\ref{MoretoTiep} (together with the fact that \(\V G=\pi(G/R)\cup\{p\}\)), it is easily seen that \(\Delta(G)\) cannot have a cut-vertex, against our hypothesis. The claim concerning the adjacency between \(2\) and \(t\) is then proved. As a consequence, again in view of Lemma~\ref{PSL2bis} and Theorem~\ref{MoretoTiep}, we get \(p=2\). Still assuming that there exists a non-principal \(\lambda\in\irr{L/U}\) such that \(t\) divides \(|K:I_K(\lambda)|\), we also claim that \(t\) must be adjacent in \(\Delta(K)\) to some odd prime divisor of \(t^{2a}-1\). This is clearly true if \(I_K(\lambda)N/N\) is of type (i), (ii) or (iv). As regards type (iii), assume that our claim is false. 
Then \(I_K(\lambda)N/N\) cannot be isomorphic to \(A_4\) or \(S_4\) by our assumption that both \(\pi_+\) and \(\pi_-\) are non-empty; on the other hand, if \(I_K(\lambda)N/N\cong A_5\), then we get \(\pi_-\cup\pi_+=\{3,5\}\), and Gallagher's theorem or the theory of character triples (according to whether \(\lambda\) extends to \(I_K(\lambda)\) or not) yields the contradiction that \(t\) is adjacent to \(3\) in \(\Delta(K)\). Finally, given our assumption that \(\Delta(G)\) has a cut-vertex (and still having Lemma~\ref{PSL2bis} and Theorem~\ref{MoretoTiep} in mind), it is easy to see that the neighbours of \(t\) in \(\Delta(G)\) belonging to \(\pi(t^{2a}-1)-\{2\}\) must be either all in \(\pi_-\) or all in \(\pi_+\); moreover, there are no adjacencies in \(\Delta(G)\) between vertices lying in \(\pi_-\) and vertices lying in \(\pi_+\). \smallskip Taking into account the structure of the graph \(\Delta(G)\) as described so far, we consider now a chief factor \(V=L/Y\) of \(K\) and discuss the action of \(K/L\) on its irreducible module \(V\). We will actually work on the dual module of \(V\), analyzing the subgroups \(I_K(\mu)N/N\) for \(\mu\) in \(\irr V-\{1_V\}\); recall that, since \(\mu\) can be regarded as a character of \(L/U\), the factor group \(I_K(\mu)N/N\) is a proper subgroup of \(K/N\). \smallskip Let us start by assuming that \(t\) is adjacent in \(\Delta(K)\) to some vertex in \(\pi_-\). Then we know that \(t\) has no neighbours in \(\pi_+\) (as a vertex of \(\Delta(G)\)), so we can immediately exclude that \(I_K(\mu)N/N\) is a subgroup of a dihedral group of order \(t^a-1\); on the other hand, \(I_K(\mu)N/N\) can be a subgroup of a dihedral group of order \(t^a+1\), and in that case it contains a Sylow subgroup of \(K/N\) (as a normal subgroup) for every prime in \(\pi_+\). However, note that the latter situation cannot occur \emph{for all} non-principal \(\mu\in\irr V\), as otherwise we get a contradiction by Lemma~\ref{SL2Nq}. 
Suppose that \(I_K(\mu)N/N\) is of type (ii): in this case, \(I_K(\mu)N/N\) must contain a Sylow subgroup of \(K/N\) both for the prime \(t\) and for all the primes in \(\pi_-\), therefore it is a Frobenius group whose complements have order divisible by every prime in \(\pi_-\) and it has irreducible characters whose degree is a multiple of all those primes. It is not difficult to see that \(I_K(\mu)/L\) has a normal \(2\)-complement \(H/L\), which enjoys the same properties mentioned above for \(I_K(\mu)N/N\). If \(\mu\) extends to \(H\), then Clifford's theory yields the adjacency in \(\Delta(K)\) of every prime in \(\pi_-\) with every prime in \(\pi_+\), not our case. On the other hand, if \(\mu\) does not extend to \(H\), then \(V=L/Y\) is a \(t\)-group (recall \cite[8.16, 11.22, 11.31]{Is}) and we reach a contradiction as well: in fact, in this case \(I_K(\mu)/Y\) has a normal Sylow \(t\)-subgroup \(T_0/Y\) and \(\mu\) does not extend to \(T_0\). Now, the restriction of any \(\phi\in\irr{I_K(\mu)|\mu}\) to \(T_0\) must have an irreducible constituent whose degree is divisible by \(t\), as otherwise some linear constituent of \(\phi_{T_0}\) would lie over \(\mu\) and would be an extension to \(T_0\) of it, so Clifford's theory would imply the adjacency of \(t\) with every prime in \(\pi_+\). We exclude next that \(I_K(\mu)N/N\) is of type (iv). 
If this is the case, certainly \(|K:I_K(\mu)|\) is divisible by \(t\) (thus not divisible by any prime in \(\pi_+\)), and therefore no irreducible character of \(I_K(\mu)\) lying over \(\mu\) has a degree divisible by a prime in \(\pi_+\); it is not difficult to check that, whether \(\mu\) extends to \(I_K(\mu)\) or not, we obtain a contradiction using Gallagher's theorem or character triples, respectively (note that \(I_K(\mu)N/N\) cannot be isomorphic to \(\PSL{9}\) or \({\rm{PGL}}_2(9)\) in this case, because any primitive prime divisor of \(3^{2a}-1\) lies in \(\pi_+\) and clearly divides \(|K:I_K(\mu)|\)). Our conclusion so far is that, for every non-principal \(\mu\in\irr V\), the factor group \(I_K(\mu)N/N\) is either of type (i) or of type (iii); as observed, there must be some \(\mu\) as above for which the latter case holds. Now, if \(I_K(\mu)N/N\) is isomorphic to either \(S_4\) or \(A_4\), then \(t\) cannot be \(3\): otherwise the \(t\)-part of \(|K/N|\) would be larger than \(t\), so \(t\) would divide \(|K:I_K(\mu)|\) and no prime in \(\pi_+\) could be a divisor of \(|K:I_K(\mu)N|\). But this would force \(\pi_+\) to be empty, which is not the case. Thus, \(t\) is a divisor of \(|K:I_K(\mu)|\) and \(\pi_+\) is then forced to be \(\{3\}\). If \(\mu\) extends to \(I_K(\mu)\), then Clifford's theory yields the adjacency in \(\Delta(K)\) between \(t\) and \(3\), which is not allowed. The only remaining possibility is that \(\mu\) does not extend to \(I_K(\mu)\), and this can happen only if \(V\) is a \(2\)-group. On the other hand, if \(I_K(\mu)N/N\cong A_5\), a similar argument shows that either \(t\) is larger than \(5\) and \(\pi_+\subseteq\{3,5\}\), or \(\{t\}\cup\pi_+=\{3,5\}\); in any case \(t\) divides \(|K:I_K(\mu)|\) and, if \(\mu\) extends to \(I_K(\mu)\), then we get the adjacency of \(t\) with the primes in \(\pi_+\). This cannot happen, so again we have that \(V\) is a \(2\)-group. 
To sum up, we can assume that \(V\) is an irreducible \(K/L\)-module over the field \({\mathbb F}_2\). Moreover, there exists \(r\in\{3,5\}\) (dividing \(t^a+1\)) such that every non-principal irreducible character of \(V\) is centralized by an element of order \(r\). An application of Lemma~\ref{brauer} yields the final contradiction for this case. \medskip Let us move to the case when \(t\) is adjacent in \(\Delta(K)\) to some vertex in \(\pi_+\). Then we know that \(t\) has no neighbours in \(\pi_-\) (as a vertex of \(\Delta(G)\)), so we can immediately exclude that \(I_K(\mu)N/N\) is of type~(i$_+$). We can also exclude that \(I_K(\mu)N/N\) is of type (iv) arguing as in the case when \(t\) is adjacent to a prime in \(\pi_-\), unless \(I_K(\mu)N/N\) is isomorphic to \(\PSL{9}\) or \({\rm{PGL}}_2(9)\). In the latter case, it is not difficult to see that \(I_K(\mu)/L\) has a normal subgroup \(H/L\) isomorphic either to \(\PSL{9}\) or to \(\SL{9}\); recalling that \(\ker \mu\) is a normal subgroup of \(I_K(\mu)\) with \(L/\ker \mu\subseteq\zent{I_K(\mu)/\ker\mu}\), we consider the factor group \(H/\ker\mu\). If this factor group splits over \(L/\ker\mu\), then \(\mu\) extends to \(H\), and there certainly exists an irreducible character of \(I_K(\mu)\) lying over \(\mu\) with a degree divisible by \(5\); now, Gallagher's theorem yields the contradiction that \(t=3\) is adjacent to \(5\in\pi_-\). On the other hand, if \(H/\ker\mu\) does not split over \(L/\ker\mu\), then \(L/\ker\mu\) embeds in the Schur multiplier of \(H/L\) (so, \(|L/\ker\mu|\in\{2,3\}\)), and using character triples we see that \(H/\ker\mu\) has irreducible characters lying over \(\mu\) having a degree divisible by \(5\). As a consequence, \(I_K(\mu)\) has irreducible characters lying over \(\mu\) having a degree divisible by \(5\), again producing the same contradiction as above. 
Next, \(I_K(\mu)N/N\) can be of type (i$_-$), and in that case it contains a Sylow subgroup of \(K/N\) (as a normal subgroup) for every prime in \(\pi_-\). Clearly, \(I_K(\mu)/L\) contains a Sylow subgroup of \(K/L\) as a normal subgroup for every prime in \(\pi_-\) as well. The factor group \(I_K(\mu)N/N\) can also be of type (ii) with an order divisible by \(t\): if so, then it must contain a Sylow subgroup of \(K/N\) for all the primes in \(\pi_-\) (of course \(I_K(\mu)/L\) contains a Sylow subgroup of \(K/L\) for every prime in \(\pi_-\) as well), and it is a Frobenius group whose kernel is a \(t\)-group and whose complements have an order divisible by every prime in \(\pi_-\). We claim that \(I_K(\mu)N/N\) must contain a full Sylow \(t\)-subgroup of \(K/N\). To show this, we can clearly assume \(a>1\); if \(t^a-1\) has a primitive prime divisor \(q\), then \(q\) lies in \(\pi_-\) and so the Sylow \(t\)-subgroup of \(I_K(\mu)N/N\) (on which an element of order \(q\) of \(I_K(\mu)N/N\) acts fixed-point freely) cannot have order less than \(t^a\). On the other hand, assume that \(t^a-1\) does not have a primitive prime divisor and, for a proof by contradiction, that \(I_K(\mu)N/N\) does not contain a full Sylow \(t\)-subgroup of \(K/N\). Then \(a=2\), \(t\) is a Mersenne prime, and \(I_K(\mu)N/N\) has a (normal) subgroup \(H/N\) that is a Frobenius group whose kernel has order \(t\) and whose complements are cyclic of odd order \(\frac{t-1}{2}\). Setting \(H_0=H\cap I_K(\mu)\), we then get that the normal subgroup \(H_0/L\) of \(I_K(\mu)/L\) has irreducible characters of degree divisible by every prime in \(\pi_-\), and it has a trivial Schur multiplier because every Sylow subgroup of \(H_0/L\) is cyclic. Now \(\mu\) extends to \(H_0\), which easily yields that \(I_K(\mu)\) has irreducible characters lying over \(\mu\) and having a degree divisible by every prime in \(\pi_-\). As usual, we get a contradiction via Gallagher's theorem. 
It remains to consider type (iii). Assume that \(\mu\in\irr V\) is such that \(I_K(\mu)N/N\) is isomorphic to either \(S_4\) or \(A_4\). Then \(t\) cannot be \(3\): otherwise the \(t\)-part of \(|K/N|\) would be larger than \(t\), so \(t\) would divide \(|K:I_K(\mu)|\) and no prime in \(\pi_-\) could be a divisor of \(|K:I_K(\mu)N|\). But this would force \(\pi_-\) to be empty, which is not the case. Thus, \(t\) is a divisor of \(|K:I_K(\mu)|\) and \(\pi_-\) is then forced to be \(\{3\}\). If \(\mu\) extends to \(I_K(\mu)\), then Clifford's theory yields the adjacency in \(\Delta(K)\) between \(t\) and \(3\), which is not allowed. The only remaining possibility is that \(\mu\) does not extend to \(I_K(\mu)\), and this can happen only if \(V\) is a \(2\)-group. On the other hand, if \(I_K(\mu)N/N\cong A_5\), a similar argument shows that either \(t\) is larger than \(5\) and \(\pi_-\subseteq\{3,5\}\), or \(\{t\}\cup\pi_-=\{3,5\}\); in any case \(t\) divides \(|K:I_K(\mu)|\) and, if \(\mu\) extends to \(I_K(\mu)\), then we get the adjacency of \(t\) with the primes in \(\pi_-\). This cannot happen, so again we have that \(V\) is a \(2\)-group. To sum up, if there exists \(\mu\in\irr V\) such that \(I_K(\mu)N/N\) is of type (iii), then \(V\) is a non-trivial irreducible \(K/L\)-module over the field \(\mathbb{F}_2\) and the set \(\pi_-\) is contained in \(\{3,5\}\). But now, for \(r\in\pi_-\), the discussion in the last three paragraphs above ensures that the centralizer in \(K/L\) of every element of $V$ contains an element of order \(r\): an application of Lemma~\ref{brauer} yields a contradiction (note that \(t^a=7\) is excluded here by the fact that \(\pi_+\) is assumed to be non-empty). Our conclusion so far is that, for every non-trivial \(\mu\in\irr V\), the factor group \(I_K(\mu)/L\) either contains a unique Sylow subgroup of \(K/L\) for every prime in \(\pi_-\), or it contains a unique Sylow \(t\)-subgroup of \(K/L\). 
Recall, however, that \(V\) is not the natural module for \(K/L\) because the factor groups \(I_K(\mu)/L\) are never Sylow \(t\)-subgroups of \(K/L\) in the present situation. By (Lemma~\ref{SL2Nq} and) Theorem~\ref{TipoIeIIPieni} or Theorem~\ref{TipoIeII}, depending on whether \(V\) has order coprime to \(t\) or is a \(t\)-group, we then get that \(t^a=13\), \(K/L\) is isomorphic to \(\SL{13}\), and \(V\) is one of the two \(6\)-dimensional irreducible modules for \(\SL{13}\) over \({\mathbb F}_3\). \smallskip In order to complete the proof of conclusion (c), it remains to show that actually \(|L|=3^6\) (which also implies \(L=\oh 3 K\trianglelefteq G\)). Observe first that, by our argument so far, \(|L/W|=3^6\) \emph{for every choice} of a chief factor \(L/W\) of \(K\); this implies that \(L/U\) is actually a \(3\)-group. If \(U\neq 1\), we see that every non-linear irreducible character \(\phi\) of \(L/U'\) has a degree divisible by \(3\). Now, such a \(\phi\) is not \(K\)-invariant, as otherwise it would have an extension to \(K\) (because the Schur multiplier of \(K/L\) is trivial) and we would get a contradiction via Gallagher's theorem; on the other hand, every maximal subgroup of \(K/N\cong\PSL{13}\) has an index in \(K/N\) that is divisible either by \(7\) or by \(13\), thus \(I_K(\phi)N\) cannot be a proper subgroup of \(K\) and we have a contradiction. Our conclusion so far is that \(U=1\), i.e., \(L\) is an abelian \(3\)-group. Taking into account that (again by \cite[8.16, 11.22, 11.31]{Is}) every irreducible character of \(L\) extends to its inertia subgroup in \(K\), we are now ready to finish our argument. Let \(\theta\) be a non-principal irreducible character of \(L\), and observe that \(\theta\) cannot be \(K\)-invariant (as the Schur multiplier of \(\SL{13}\) is trivial); therefore, \(I_K(\theta)N/N\) is contained in a maximal subgroup of \(K/N\). 
Now, the index of a maximal subgroup of \(\PSL{13}\) lies in the set \(\{14,78,91\}\), but the non-adjacency between \(3\) and \(13\) in \(\Delta(G)\) rules out the index \(78\) for a maximal subgroup containing \(I_K(\theta)N/N\). On the other hand, in all the remaining cases we must have that \(3\) divides the order of \(I_K(\theta)N/N\); but now, looking at the structure of the possible maximal subgroups containing \(I_K(\theta)N/N\), we see that \(I_K(\theta)/L\) has an irreducible character of degree \(3\), yielding contradictory adjacencies via Gallagher's theorem, except when it contains a Sylow \(3\)-subgroup of \(K/L\) as a normal subgroup. Since this holds for every non-principal \(\theta\in\irr L\), \cite[Lemma~4]{Z} yields that \(L\) is an irreducible \(K/L\)-module, completing the proof. \end{proof} \section{A proof of the main result} In this section we prove the Main Theorem stated in the Introduction, which provides a characterization of the groups having a normal section isomorphic to \(\PSL {t^a}\) (for \(t\neq 2\) and \(t^a>5\)) and whose degree graph is connected with a cut-vertex. \begin{proof}[Proof of the Main Theorem] We start by proving the ``only if" part of the statement: we assume that the group \(G\) has a composition factor isomorphic to $\PSL{t^a}$ for a suitable odd prime \(t\) with \(t^a> 5\), and the graph \(\Delta(G)\) is connected with cut-vertex \(p\). As usual, this implies (via Theorem~\ref{0.2}) that \(G/R\) is an almost-simple group whose socle is isomorphic to \(\PSL{t^a}\), and \(\V G=\pi(G/R)\cup\{p\}\). Moreover, by Theorem~A of \cite{W1}, if \(t\) is a divisor of \(|G/KR|\) then both \(t\) and \(2\) turn out to be complete vertices of the graph \(\Delta(G/R)\), against our hypothesis on \(\Delta(G)\). Also, if \(p=t\), then we get \(\V G=\pi(G/R)\); but again \cite[Theorem~A]{W1} yields that the subgraph of \(\Delta(G/R)\) induced by \(\pi(G/R)-\{t\}\) is connected, which is not the case here. 
It remains to prove that one among (a), (b) and (c) holds and, in this respect, much of the work has been done in Theorem~\ref{MainAboutK}. Namely, we will only have to show that \(\V{G/K}=\{p\}\) in cases (a) and (b), whereas \(p=2\) and \(\V {G/K}\subseteq\{2\}\) in case (c). Set \(N=K\cap R\) (so, \(K/N\cong\PSL{t^a}\)): we start by proving that \(t\) cannot lie in \(\V{G/K}\). In fact, assuming the contrary and recalling that \(t\) does not divide \(|G/KR|\), we would have (by Ito-Michler's theorem) that \(t\) divides the degree of some irreducible character \(\phi\) of \(R/N\cong KR/K\). Now, as \(KR/N\) is the direct product of \(K/N\) and \(R/N\), it easily follows that \(t\) is adjacent in \(\Delta(G)\) to every prime in \(\pi(K/N)-\{t\}=\pi(KR/R)-\{t\}\). But an application of (Clifford's theory and) Proposition~2.6(a) in \cite{ACDPS} yields that \(t\) is adjacent in \(\Delta(G)\) to every prime \(r\) in \(\pi(G/R)-\pi(KR/R)\) as well: in fact, according to that proposition, there exists an irreducible character \(\theta\) of \(K/N\) such that \(|G:I_G(\theta)|\) is divisible by \(r\), and our claim follows considering any irreducible character of \(G\) lying over \(\phi\theta\in\irr{KR}\). So, \(t\) turns out to be a complete vertex in the subgraph of \(\Delta(G)\) induced by \(\pi(G/R)\); this forces \(p=t\), against what was observed above. Next, we show that any prime \(q\in\V{G/K}\) is adjacent in \(\Delta(G)\) to all the primes in \(\pi(G/R)-\{q\}\). Take then \(q\in\V{G/K}\) (recall that \(q\neq t\) by the previous paragraph): it is well known that the Steinberg character ${\sf St}$ of \(K/N\) has an extension to \(G\) (see for instance \cite{S}). Therefore, recalling that ${\sf St}(1)=t^a$, Gallagher's theorem yields that \(q\) is adjacent to \(t\) in \(\Delta(G)\). 
Now, if \(q\) divides \(|G/KR|\), then \(q\) is adjacent in \(\Delta(G/R)\) to every prime in \(\pi(G/R)-\{q,t\}\) by \cite[Theorem~A]{W1}; but since \(q\) is also adjacent to \(t\) in \(\Delta(G)\), we are done in this case. On the other hand, if \(q\) does not divide \(|G/KR|\), then \(q\) divides the degree of some irreducible character of \(KR/K\) (hence of \(R/N\)). Since \(KR/N=K/N\times R/N\), clearly \(q\) is adjacent in \(\Delta(G)\) to every prime in \(\pi(K/N)-\{q\}=\pi(KR/R)-\{q\}\); but $q$ is also adjacent in \(\Delta(G)\) to every prime in \(\pi(G/R)-\pi(KR/R)\), by the same argument involving \cite[Proposition~2.6(a)]{ACDPS} that was used in the previous paragraph. The desired conclusion follows. As a consequence of the paragraph above, if there exists a prime \(q\in\V{G/K}-\{p\}\), then $q$ is a complete vertex in the subgraph of \(\Delta(G)\) induced by \(\pi(G/R)\), which contradicts the fact that \(p\neq q\) is a cut-vertex of \(\Delta(G)\). In particular, we get \(\V{G/K}\subseteq\{p\}\). Another immediate consequence is that, if (conversely) \(p\) lies in \(\V{G/K}\), then \(p\) is a complete vertex of \(\Delta(G)\). For cases (a) and (b), we actually see that \(p\in\V{G/K}\). Assuming the contrary, we would have that \(G/K\) is abelian, which implies \(R/N\subseteq\zent{G/N}\). Now Theorem~\ref{LewisWhite} yields that \(\Delta(G)\) is disconnected, a contradiction (observe that the subgroup \(C\) of Theorem~\ref{LewisWhite} coincides with \(R\) in our situation). As for case (c), it can be checked via GAP \cite{GAP} that the set of irreducible character degrees of \(K\) is \(\{1,\;2\cdot 3,\;7,\,2^2\cdot 3,\;13,\;2\cdot 7,\;2^3\cdot 7\cdot 13\}\), therefore \(\pi(G/R)=\{2,3,7,13\}\) induces a connected subgraph of \(\Delta(G)\) and we get \(\V G=\pi(G/R)\); as \(2\) is a complete vertex of \(\Delta(G)\), we necessarily have \(p=2\) and the ``only if" part of the statement is proved. 
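For the reader's convenience, the factorizations behind this adjacency check are \[6=2\cdot 3,\qquad 12=2^2\cdot 3,\qquad 14=2\cdot 7,\qquad 728=2^3\cdot 7\cdot 13;\] hence \(2\) is adjacent to \(3\) (via the degrees \(6\) and \(12\)), to \(7\) (via \(14\)) and to \(13\) (via \(728\)), and \(7\) is adjacent to \(13\) (via \(728\)), whereas the only neighbour of \(3\) is \(2\).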
Observe that, still in case (c), \(t=13\) is adjacent in \(\Delta(G)\) to the cut-vertex \(2\) but also to \(7\), so the exception pointed out in the last sentence of the statement is a genuine one. Note also that, by the last observation of the previous paragraph, \(p\) is a complete vertex of \(\Delta(G)\) in cases (a) and (b) because it lies in \(\V{G/K}\), but \(p=2\) is a complete vertex in case (c) as well, thus proving the first claim in the sentence that concludes the statement. \medskip We move now to the ``if" part. Observe first that, by Theorem~\ref{LewisWhite} and Theorem~4.1 of \cite{LW}, \(\Delta(G)\) is a connected graph under our hypotheses. Setting \(N=K\cap R\) as above, consider the Steinberg character ${\sf St}$ of \(K/N\); we already observed that ${\sf St}$, viewed by inflation as an irreducible character of \(K\) having degree \(t^a\), has an extension to \(G\). Thus, by Gallagher's theorem, \(t^a\cdot \psi(1)\) is the degree of an irreducible character of \(G\) for every \(\psi\in\irr{G/K}\). Since one of our assumptions is that \(p\) divides the degree of some irreducible character of \(G/K\), we get the adjacency of \(t\) with \(p\) in \(\Delta(G)\). Furthermore, if \(K\) is as in (c) then, as noted in the paragraph above, \(2\) is a complete vertex of \(\Delta(G)\), thus it is the only possible cut-vertex of \(\Delta(G)\) and it is also adjacent to the vertex \(3\) in \(\Delta(G)\). In order to conclude the proof, we will show that there exists \(v\in\V G\) which is adjacent \emph{only} to \(p\) in \(\Delta(G)\): this \(v\) is the vertex \(t\) in cases (a) and (b) (which also settles the last claim of the statement), whereas it is the vertex \(3\) in case (c). 
To this end we will first prove that, for every \(\chi\in\irr G\) such that \(v\in\pi(\chi(1))\) and \(\ker\chi\supseteq L\), we have \(\pi(\chi(1))\subseteq\{v,p\}\) (where \(L\) is set to be the trivial group in case (a)); secondly, we will see that \(v\) does not divide \(\chi(1)\) for every \(\chi\in\irr G\) with \(\ker\chi\not\supseteq L\). So, let us start with \(\chi\in\irr G\) such that \(v\in\pi(\chi(1))\) and \(\ker\chi\supseteq L\). Consider first cases (a) and (b) (so, we set \(v=t\)). Since \(t\) does not divide \(|G/KR|\), the degree of an irreducible constituent \(\xi\) of \(\chi_{KR}\) is necessarily divisible by \(t\); moreover, taking into account that \(KR/L\) is a central product of \(K/L\) and \(R/L\), we have that \(KR/L\) is isomorphic to a quotient of the direct product \(K/L\times R/L\), hence \(\xi\) can be viewed as the product of two suitable irreducible characters \(\alpha\), \(\beta\) of \(K/L\) and \(R/L\), respectively. Since \(\V{G/K}=\{p\}\) and \(p\neq t\), it easily follows that \(t\) does not divide any irreducible character degree of \(R/L\), so \(t\) necessarily divides \(\alpha(1)\). But \(K/L\) has a unique irreducible character of degree divisible by \(t\) (see \cite[Theorem~2.1(i)]{GGLMNT}), that is ${\sf St}$: hence \(\alpha={\sf St}\) and, as \(\alpha\) has an extension to \(G\), \(\chi(1)\) is the product of \(\alpha(1)=t^a\) with the degree of some irreducible character of \(G/K\). We conclude that \(\pi(\chi(1))\subseteq\{p,t\}\), as wanted. As regards case (c) (for which we set \(v=3\)), we have \(G=KR\) and \(\chi\) can be thought of as an irreducible character of \(K/L\times R/L\), as above; since \(3\) is only adjacent to \(p=2\) in \(\Delta(K/L)\), and \(\V{R/L}\subseteq\{2\}\), we easily deduce that \(\pi(\chi(1))\subseteq\{3,p\}\) in this case as well. Finally, let \(\chi\in\irr G\) be such that \(\chi_L\) has a non-principal irreducible constituent \(\lambda\) (obviously case~(a) is not involved here). 
Denoting by \(V_0\) a Sylow \(v\)-subgroup of \(R\), the hypothesis \(\V{G/K}\subseteq\{p\}\) implies that the factor group \(V_0N/N\) is an abelian normal Sylow \(v\)-subgroup of \(R/N\cong KR/K\). It is then easily seen that \(V_0/L\) is an abelian normal Sylow \(v\)-subgroup of \(R/L\) (in particular, \(V_0\trianglelefteq G\)), and recall also that, by Lemma~\ref{singer}, we have \(L\subseteq\zent {V_0}\). Now, consider the normal subgroup \(KV_0\) of \(G\): taking into account that, both in case (b) and in case (c), \(I_K(\lambda)/L\) is a Sylow \(v\)-subgroup of \(K/L\), we have that \(I_{KV_0}(\lambda)=I_K(\lambda)V_0\) is a Sylow \(v\)-subgroup of \(G\). Furthermore, again in both cases (b) and (c), \(N/L\) acts fixed-point freely on \(L\); thus the abelian subgroup \(L\) has a complement in \(G\) by the proposition in the Introduction of \cite{Cu}. It follows that \(\lambda\) extends to \(I_{KV_0}(\lambda)\) and, as \(I_{KV_0}(\lambda)/L\cong (I_K(\lambda)/L)\times (V_0/L)\) is abelian, every irreducible constituent of \(\lambda^{I_{KV_0}(\lambda)}\) is linear. Now Clifford's theory (together with the fact that \(v\) does not divide \(|KV_0:I_{KV_0}(\lambda)|\)) yields that \(v\) does not divide the degree of any irreducible character of \(KV_0\) lying over \(\lambda\). But then \(v\) does not divide \(\psi(1)\), where \(\psi\) is an irreducible constituent of \(\chi_{KV_0}\) lying over \(\lambda\). Since \(v\) does not divide \(|G:KV_0|\) either, it follows that \(v\) does not divide \(\chi(1)\), and the proof is complete. \end{proof} We conclude the paper with the following remark, which compares the structure of the groups appearing in the Main Theorem with the structure of the groups whose degree graph has two connected components. 
\begin{rem}\label{comparison} Let \(G\) be a group as in the Main Theorem: denoting by \(R\) the solvable radical of \(G\), the factor group \(G/R\) is almost-simple with socle isomorphic to \(\PSL{t^a}\) for a suitable odd prime \(t\) (with $t^a>5$), and \(\Delta(G)\) is connected with cut-vertex \(p\). Then, in view of that theorem, the structure of \(G\) is very similar to the structure of a non-solvable group \(G\) whose graph \(\Delta(G)\) has two connected components (see Theorem~\ref{LewisWhite}). This is actually true with the only exception of case (c) in the Main Theorem, that we exclude from the following considerations. In fact, in cases (a) and (b), we know that there exist normal subgroups \(K\supseteq N\) such that \(K/N\cong\PSL{t^a}\); these subgroups are respectively the last term in the derived series of \(G\), and \(N=K\cap R\). Also, observing that the subgroup denoted by \(C\) in the statement of Theorem~\ref{LewisWhite} is in fact \(R\), we have that \(t\) does not divide \(|G/KR|\). Finally, if \(N\neq 1\), then either \(K\cong\SL{t^a}\) or there exists a minimal normal subgroup \(L\) of \(G\) such that \(K/L\cong\SL{t^a}\) and \(L\) is isomorphic to the natural module for \(K/L\). In other words, \(G\) has precisely the structure prescribed by Theorem~\ref{LewisWhite} except for two aspects: rather than being empty, the set \(\V{G/K}\) consists of a single prime which is the cut-vertex \(p\), and the vertex set of \(G\) is \(\pi(G/R)\cup\{p\}\) rather than \(\pi(G/R)\). This can be somewhat surprising, since the graph-theoretical condition expressed in Theorem~\ref{LewisWhite} is in principle much stronger than that of the Main Theorem. 
\smallskip Of course, the two relevant classes of groups behave similarly also from the point of view of the graphs: the only difference is that the groups of the Main Theorem have a degree graph with a complete vertex \(p\) (which may or may not already lie in \(\pi(G/R)\)), which is the unique neighbour of \(t\) and which guarantees the connectedness of the graph. For a description of the relevant graphs, we refer the reader to \cite[Section~2]{DPSS}. On the other hand, the class of groups as in case (c) of the Main Theorem is different and does not appear in Theorem~\ref{LewisWhite}. As already pointed out, this is the only exception to the fact that \(p\) (which is \(2\)) is the unique neighbour of \(t=13\) in \(\Delta(G)\) (see Introduction, Figure~\ref{c}). \end{rem}
\section{Introduction} The epoch of reionization (EoR) is a major transition period for the Universe. During this time, the first luminous sources form, which begin to reionize the intergalactic medium (IGM), which is mostly neutral hydrogen as a result of recombination. Eventually, as dark matter haloes grow, galaxies begin to form, completing reionization. Clues from recent indirect measurements have constrained the EoR to be an extended process, the bulk of which likely spans the range $6\lesssim z \lesssim 10$ \citep[e.g.][]{Robe15a,Bouw15a,Mitr15a}. These observations include high-redshift quasar spectra \citep[e.g.][]{Fan06, Mort11a, Bolt11a,McGr15a}, the cosmic microwave background (CMB) polarization \citep{Koma11a,Plan15a}, IGM temperature measurements \citep[e.g.][]{Theu02,Rask12a,Bolt12a}, and the decline of Lyman-$\alpha$ (Ly$\alpha$) emission in high-redshift galaxies \citep[e.g.][]{Star10a,Sche12a,Pent14a,Tilv14a}. Direct observations of the main sources of reionization (presumably star-forming galaxies) remain elusive, but the boundaries are being pushed to ever higher redshift \citep[e.g.][]{Bouw15b}. The best constraints are likely to result from redshifted 21-cm emission from neutral hydrogen present in the IGM. Current experiments with low-frequency radio interferometers, including the Giant Metrewave Radio Telescope (GMRT)\footnote{\url{http://gmrt.ncra.tifr.res.in/}} \citep{Paci11a}, the Low Frequency Array (LOFAR)\footnote{\url{http://www.lofar.org/}} \citep[e.g.][]{Hark10a}, the Murchison Widefield Array (MWA)\footnote{\url{http://www.mwatelescope.org/}} \citep{Lons09a}, and the Precision Array for Probing the Epoch of Reionization (PAPER)\footnote{\url{http://eor.berkeley.edu/}} \citep{Pars10a}, are attempting to measure the 21-cm radiation from the EoR. The next generation of telescopes, such as the Square Kilometre Array (SKA)\footnote{\url{http://www.skatelescope.org/}}, will have higher sensitivity and will measure further back in time. 
The goal of these experiments is to produce three-dimensional information about the morphology and evolution of reionization. Much theoretical work has gone into understanding how the underlying physical processes will shape the 21-cm signal \citep[see e.g.][]{Prit12a}. A number of analytic methods \citep[e.g.][]{Furl04}, semi-numerical models \citep[e.g.][]{Mesi07}, and numerical simulations \citep[e.g.][]{Ilie06a, McQu07a,Ilie14a} have been developed to model the 21-cm signal, but capturing the small-scale physics and large-scale structure simultaneously is difficult. There is a significant disconnect between the large scales -- from a few up to tens, or even hundreds, of Mpc -- at which reionization is patchy \citep{Ilie14a} and at which most observations are done, and the much smaller scales at which galaxy formation and radiative feedback occur \citep[e.g.][]{Wise14a}. Therefore, detailed modelling is required to connect these very disparate scales and to gain a better understanding of the early galaxies based on the large-scale observational signatures. In this paper, we focus on modelling the sources and their evolution during the EoR. By straightforward theoretical arguments, heating the IGM suppresses the cooling necessary to form stars, and recent hydrodynamical simulations show that radiative feedback from ionizing sources suppresses star formation in dwarf galaxies \citep[e.g.][]{Simp13a,Ocvi15a}. No general consensus exists in the literature on what this may mean for reionization or, more generally, the escape of ionizing radiation into the IGM, especially when all the complicated processes of star and galaxy formation are considered. On scales less than 10~Mpc, some recent studies find that large galaxies may not be the dominant contributor to the ionizing photon budget, as is often assumed in reionization models where every dark matter halo emits radiation proportional to its mass \citep{Wise14a,Paar15a}. 
This information paints a complicated picture that indicates the need for sophisticated treatments of ionizing sources. In our previous work, we have implemented some simplified models for suppression, mostly instantaneous full suppression of star formation for dwarf galaxies in ionized regions, and found significant differences from a model with no suppression present \citep{Frie11a,Ilie12a}. \citet{McQu07a}, using an $N$-body and radiative transfer code for a $<100$~Mpc box, and \citet{Soba13a}, applying a semi-numeric approach to reionization informed by 1D collapse simulations, find radiative feedback to have a minimal effect on the progress of reionization, though we are primarily interested in observational signatures and not just the timing of reionization. We apply detailed radiative transfer (RT) modelling to track the ionized regions and their evolution in cosmological volumes, with structures provided by large-scale $N$-body simulations (up to $369$~Mpc on a side) to make statistically meaningful predictions of observable signatures. In this work we are interested in what imprints the radiative feedback on low-mass galaxies might have left, and what we can learn about high-redshift galaxies. The results from detailed radiative hydrodynamical simulations and theoretical considerations suggest that radiative feedback from photoionizing radiation, which heats the gas to at most a few tens of thousands of degrees, affects mostly smaller galaxies and leaves larger ones unchanged. We, therefore, separate the ionizing sources into two distinct populations: those with masses above $10^9\,M_\odot$ (high-mass, atomic-cooling haloes, or `HMACHs') and those between $10^8$ and $10^9\,M_\odot$ (low-mass, atomic-cooling haloes, or `LMACHs'). The $10^8\,M_\odot$ mass limit roughly corresponds to a virial temperature of $10^4$\,K, below which the halo gas is unable to radiatively cool through hydrogen and helium atomic lines. 
Haloes with a virial temperature less than $10^4$ K, or minihaloes, collapse much earlier than HMACHs and even LMACHs. Because of their early formation epoch, many of these haloes have zero or very low metallicity, and stars inside them can form only through H$_2$ molecular cooling. Even though H$_2$ cooling is slower than atomic cooling, some fraction of these haloes can host very metal-poor stars, which can have a non-negligible impact on the early stages of cosmic reionization through photo-ionization \citep{Wyit03b, Haim06a, Ahn12a} or X-ray heating from their by-products \citep{Mira11a, Fial14a, Xu14a, Jeon14a, Chen15a}. We do not consider such small haloes in this work, because we mainly focus on relatively late stages of reionization, believed to be dominated by LMACHs and HMACHs. HMACHs are above the Jeans mass in the ionized and heated medium and, thus, are assumed to be unaffected by radiative feedback. Given our current incomplete understanding of the effects of radiative feedback on the star formation in early galaxies, we employ several physically motivated models for the suppression of LMACHs, as discussed in detail in Section~\ref{sec:supp} below. In this work, we present two new models that build on our previous efforts. These models can be characterised by how aggressively we suppress star formation in LMACHs, from complete suppression at all times, to full suppression in ionized regions, to partial suppression, where LMACHs remain active sites of star formation with a diminished efficiency. These cases, therefore, sample much of the available parameter space and provide clues on the observational signatures to be expected. The outline of the paper is as follows. In Section~\ref{sec:theory}, we outline in detail the theoretical underpinnings of our source models. We present our suite of simulations, including $N$-body and RT, in Section~\ref{sec:sims}. 
Section~\ref{sec:results} contains our results, which include the reionization history and the morphology and various statistics of the 21-cm signal. We then conclude in Section~\ref{sec:summary}. The background cosmology is based on \emph{Wilkinson Microwave Anisotropy Probe} (\emph{WMAP}) 5-year data combined with constraints from baryonic acoustic oscillations and high-redshift supernovae ($\Omega_{\rm m} = 0.27, \Omega_\Lambda=0.73, h=0.701, \Omega_{\rm b}=0.044, \sigma_8 =0.8, n=0.96$). \section{Theory of reionization sources} \label{sec:theory} The exact nature of the sources of ionizing radiation during the EoR is still quite uncertain, although most likely the majority of the ionizing radiation was produced by massive stars in galaxies. In this section, we outline the physical processes we consider in our source modelling. \subsection{Source suppression by Jeans-mass filtering} \label{sec:supp} During photoionization, the excess photon energy above the Lyman limit heats the gas to temperatures above $\sim\!10^4$\,K. The exact temperature reached depends on the local level of the ionizing flux, its spectrum, and the relevant cooling mechanism \citep[see e.g.][for detailed numerical calculations]{Shap04a}. Typical values are $T_{\rm IGM}=10,000$--$20,000$~K, but it could be as high as $\sim\!40,000$~K for a hot black-body spectrum, such as that of Population~III (Pop.~III) stars present in the early Universe. However, hydrogen-line radiative cooling is highly efficient for $T_{\rm IGM}>8,000$~K, particularly at high redshift where the gas is denser on average. This cooling would typically bring the temperature down to $T_{\rm IGM}\sim10^4$~K, possibly somewhat lower due to the adiabatic cooling from the expansion of the Universe. The increase in the IGM temperature caused by photoheating results in a corresponding increase in the Jeans mass. 
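The magnitude of this Jeans-mass increase can be checked with a short numerical sketch (illustrative only, not part of the simulation pipeline; the coefficient and fiducial parameter values are those of the linear-theory expression quoted in this section):

```python
# Illustrative evaluation of the linear-theory Jeans mass for photoheated
# gas; coefficient and fiducial parameters follow the expression in the text.

def jeans_mass(T_igm=1.0e4, z=9.0, omega_m_h2=0.1327, omega_b_h2=0.02162):
    """Linear-theory Jeans mass in solar masses."""
    return (4.1e9
            * (T_igm / 1.0e4) ** 1.5
            * (omega_m_h2 / 0.1327) ** -0.5
            * (omega_b_h2 / 0.02162) ** -0.6
            * ((1.0 + z) / 10.0) ** 1.5)

if __name__ == "__main__":
    # ~4.1e9 Msun for 1e4 K gas at z = 9, i.e. of order 1e9 Msun
    print(f"M_J = {jeans_mass():.2e} Msun")
```

For the fiducial parameters this recovers the $\sim\!10^9\,M_\odot$ scale quoted below, and doubling $T_{\rm IGM}$ raises $M_{\rm J}$ by a factor $2^{3/2}\approx 2.8$.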
In linear theory, the instantaneous Jeans mass is given by \begin{eqnarray} M_{\rm J}&=&4.1\times10^9\,M_\odot\left(\frac{T_{\rm IGM}}{10^4\,{\rm K}}\right)^{3/2}\left(\frac{\Omega_{\rm m} h^2}{0.1327}\right)^{-1/2}\nonumber\\ & &\times\left(\frac{\Omega_{\rm b}h^2}{0.02162}\right)^{-3/5}\left(\frac{1+z}{10}\right)^{3/2} \end{eqnarray} or roughly $M_{\rm J} \sim 10^9\,M_\odot$ \citep[e.g.][and references therein]{Shap94a,Ilie02a,Ilie08a}. Even in linear theory, the actual filter mass differs somewhat from this instantaneous Jeans mass, since the mass scale at which baryons successfully collapse out of the IGM is determined by integrating the differential equation for perturbation growth over time for the evolving IGM \citep{Shap94a,Gned98a,Gned00a}. In full, non-linear cosmological simulations, the situation is still more complicated. A halo collapsing inside an ionized and heated region can only acquire enough gas to form stars if it is sufficiently massive. The minimum mass depends on the detailed gas dynamics of the process and on radiative heating and cooling. No sharp cutoff exists above which a collapsing halo retains all its gas and below which the gas does not collapse with the dark matter. Instead, simulations show that in haloes with mass $M_{\rm halo} \lesssim 10^9\,M_\odot$ the cooled gas fraction decreases gradually with decreasing halo mass \citep{Efst92a,Thou96a,Nava97a, Dijk04a, Shap04a, Okam08a, Finl11a, Hase13a}. The exact halo mass threshold for the onset of suppression from photoionization heating and the dependence of suppression on the halo mass below that threshold depend on the assumed physical processes. For simplicity, we assume that star formation is suppressed in haloes with masses below $10^9\,M_\odot$ and not suppressed in larger haloes, in rough agreement with the linear Jeans mass estimate for $10^4$\,K gas and the above dynamical studies. \subsection{Source efficiencies and the Pop. III to Pop. 
II transition} \label{sec:eff} For the majority of our source models, we assume that the source emissivities are proportional to the host halo mass with an effective mass-to-light ratio, with different values adopted for LMACHs and HMACHs. Each halo in the simulation volume that is not suppressed by Jeans-mass filtering is an ionizing source. For a source with halo mass $M_{\rm halo}$ and lifetime $t_{\rm s}$, we assign an ionizing photon emissivity $\dot{N}_\gamma$ according to \begin{equation} \dot{N}_\gamma=g_\gamma\frac{M_{\rm halo}\Omega_{\rm b}}{m_{\rm p}(10\,\rm Myr)\Omega_0}, \end{equation} where $m_{\rm p}$ is the proton mass and the proportionality coefficient, $g_{\gamma}$, reflects the ionizing photon production efficiency of the stars per stellar atom, $N_{\rm i}$, the star formation efficiency, $f_*$, and the escape fraction, $f_{\rm esc}$ \citep[e.g.][]{Haim03a,Ilie12a}: \begin{equation} g_\gamma=f_*f_{\rm esc}N_{\rm i}\left(\frac{10 \;\mathrm{Myr}}{t_{\rm s}}\right). \end{equation} The factor $g_\gamma$, as defined, has the advantage that it is independent of the length of the source lifetime as long as the ionizing luminosity ($N_{\rm i}/t_{\rm s}$) is a constant and, as such, allows a direct comparison between different runs with varying source luminosities. All quantities determining the source efficiencies remain quite uncertain, especially at high redshift; see e.g. \citet{Ilie05a} for a discussion. Recent theoretical studies have indicated that the first, metal-free (Pop.~III) stars might have been quite massive \citep[e.g.][]{Abel00a,Brom02a,OShe07a}, even when these stars are formed as multiples inside minihaloes \citep{Turk09a,Grei12a,Hira14a}. Massive stars are more efficient producers of ionizing photons, emitting up to $N_{\rm i}\sim10^5$ ionizing photons per stellar atom \citep{Brom01a,Scha02a,Venk03a}. 
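The two equations above can be combined into a short numerical sketch (a minimal illustration, not the actual simulation code; the physical constants and the gram/second unit conversions are ours):

```python
# Illustrative evaluation of the source-emissivity prescription above.
M_SUN_G = 1.989e33        # solar mass in grams (standard value)
M_PROTON_G = 1.6726e-24   # proton mass in grams (standard value)
MYR_S = 3.156e13          # one megayear in seconds

def g_gamma(f_star, f_esc, n_i, t_s_myr=10.0):
    """Efficiency factor g_gamma = f_* * f_esc * N_i * (10 Myr / t_s)."""
    return f_star * f_esc * n_i * (10.0 / t_s_myr)

def ionizing_emissivity(m_halo_msun, g_gamma_val, omega_b=0.044, omega_0=0.27):
    """Ionizing photons per second:
    Ndot_gamma = g_gamma * M_halo * Omega_b / (m_p * (10 Myr) * Omega_0)."""
    n_atoms = m_halo_msun * M_SUN_G * (omega_b / omega_0) / M_PROTON_G
    return g_gamma_val * n_atoms / (10.0 * MYR_S)
```

For instance, $f_*=f_{\rm esc}=0.1$ and $N_{\rm i}=10^4$ give $g_\gamma=100$ for $t_{\rm s}=10$~Myr, and $\dot{N}_\gamma$ scales linearly with halo mass by construction.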
Integrating over a top-heavy IMF for Pop.~III stars leads to estimates of $N_{\rm i}\sim25,000$--$90,000$ \citep{Scha02a}. As supernovae enrich the Universe with metals, Population~II (Pop.~II) stars form and become dominant, and the Salpeter IMF for these stars gives $N_{\rm i}=3,000$--$10,000$ \citep{Leit99a}. The values of $f_*$ and $f_{\rm esc}$ are even less certain, ranging from $\sim\!0.01$ to 1 for each of these quantities. Several recent studies have found that the photon escape fraction is mass-dependent and significantly higher for small galaxies that are more typical at high redshift than for large galaxies that form at later times \citep{Kita04a,Alva06a,Wise14a,Paar15a}, though \citet{Yaji14a} finds the ionizing radiation escape fraction to be $\sim\!0.2$ and to be independent of redshift and galaxy properties. Although the details are complicated, we include reionization scenarios with a higher $g_{\gamma}$ assigned to smaller haloes than to larger ones to capture the basic consensus. \subsection{Mass-dependent feedback} \label{sec:grad_theory} Recent high-resolution, cosmological hydrodynamics simulations of galaxy formation suggest that source emissivities are mass-dependent, with smaller haloes being more susceptible to radiative feedback \citep[][in prep.]{Wise09a,Ocvi15a,Sull15a}. The sharp distinction between LMACHs and HMACHs described above is, therefore, a simplified picture. In particular, the largest LMACHs behave nearly as HMACHs, while the smallest LMACHs have highly suppressed star formation in ionized regions. The transition between unsuppressible and highly suppressible is likely to be gradual and proportional to the mass of the halo. 
Loosely following \citet{Wise09a} and \citet{Sull15a} (in prep.), we assume that the mass-dependence of our efficiency in ionized regions is: \begin{equation} g_\gamma = g_{\gamma,\rm HMACH} \times \left[\frac{M_{\rm halo}}{9\times10^8 M_\odot}-\frac{1}{9}\right], \label{eq:grad_eff} \end{equation} linear in halo mass, with $g_\gamma = g_{\gamma,\rm HMACH}$ at $10^9\,M_\odot$ and $g_\gamma = 0$ at $10^8\,M_\odot$. The precise formula for the suppression of ionizing photon production in smaller haloes is not important to our conclusions, since we are comparing \emph{methods} of suppression. Our main requirements are a simple relation and mass boundaries that match our other source models. The important characteristics here are that star formation in ionized regions is suppressed in a mass-dependent manner and that the smallest haloes are affected the most. Although such a simplified model is unable to capture all the expected halo-to-halo variation in physical quantities, we aim to capture the general behaviour of ionizing radiation escaping haloes. \section{The Simulations} \label{sec:sims} Our basic simulation methodology has been previously described in \citet{Ilie06a}, \citet{Mell06b}, and \citet{Ilie07a}, with the current, massively parallel generation of the codes used here described in \citet{Ilie12a}. Hence, we will mainly focus on the new features we introduce, as well as outline the main simulation parameters. \subsection{$N$-body simulations} \begin{table*} \caption{$N$-body simulation parameters. Background cosmology is based on the \emph{WMAP} 5-year results and constraints from baryonic acoustic oscillations and high-redshift supernovae and is consistent with the \citet{Plan15a} results.
} \label{summary_N-body_table} \begin{center} \begin{tabular}{@{}llllll} \hline box size & $N_{\rm part}$ & mesh & force softening & $m_{\rm particle}$ & $M_{\rm halo,min}$ \\[2mm]\hline 47$\,h^{-1}$~Mpc & $1728^3$ & $3456^3$ & $1.36\,h^{-1}$~kpc & $2.153\times10^6\,M_\odot$ & $1.076\times10^8\,M_\odot$ \\[2mm] 244$\,h^{-1}$~Mpc & $4000^3$ & $8000^3$ & $3.05\,h^{-1}$~kpc & $2.429\times10^7\,M_\odot$ & $0.971\times10^9\,M_\odot$ \\ \hline \end{tabular} \end{center} \end{table*} We start by performing very high-resolution $N$-body simulations of the formation of high-redshift structures. We use the \textsc{\small CubeP$^3$M} $N$-body code \citep{Harn13a}\footnote{\url{http://wiki.cita.utoronto.ca/mediawiki/index.php/CubePM} \url{https://github.com/jharno/cubep3m}}. This code uses a two-level, particle-mesh grid to calculate the long-range gravitational forces, kernel-matched to a local direct particle-particle interaction. The distance from a given particle up to which the direct forces are calculated is a code parameter. In the current simulations, we set this to eight fine grid cells, or two coarse grid cells, which we found provides the best tradeoff between precision and speed. Extending this further makes the calculations much more expensive, while providing little additional accuracy. The basic $N$-body simulation parameters are listed in Table~\ref{summary_N-body_table}. The force-smoothing length in both cases is set at 1/20th of the mean interparticle spacing. The larger computational volume, with a box size $L_{\rm box}=244\,h^{-1}\,{\rm Mpc}=349$~Mpc, is chosen to recreate the large-scale reionization patchiness \citep{Ilie14a}. The smaller volume, $L_{\rm box}=47\,h^{-1}\,{\rm Mpc}=67$~Mpc, provides significantly better resolution, which is useful for method validation purposes and provides faster radiative transfer simulation runtimes.
The corresponding particle numbers, $4000^3$ for the large box and $1728^3$ for the small box, are chosen to ensure reliable halo identification down to $10^9\,M_\odot$ (with 40~particles) and $10^8\,M_\odot$ (with 50~particles), respectively. As discussed above, $M_{\rm halo}\sim10^8\,M_\odot$ roughly corresponds to the atomic-cooling limit, while $M_{\rm halo}\sim10^9\,M_\odot$ is roughly the mass below which Jeans-filtering occurs in intergalactic gas at a temperature of $10^4$\,K, which is typical for the post-reionization IGM. The unresolved haloes are added using a sub-grid model, as discussed in detail in \citet{Ahn15a}. This model provides the mean local halo abundance based on the cell density and, here, is used to include haloes with masses $10^8\,M_\odot<M_{\rm halo}<10^9\,M_\odot$ in the larger-volume ($244\,h^{-1}$Mpc) simulation. The correlation between the halo abundance and the cell density is in reality stochastic, but we do not include this scatter here. In the current simulations, we also do not include the effects of minihaloes, with masses $M_{\rm halo}<10^8\,M_\odot$. These sources could be included using the same sub-grid model coupled with radiative transfer for the H$_2$ molecule-destroying Lyman-Werner band photons \citep{Ahn09a,Ahn12a}. However, while these sources drive the early reionization process and can contribute significantly to the integrated electron-scattering optical depth derived from the CMB, $\tau_{\rm es}$, their contribution at the later times of interest here is more limited; thus, we leave them for future work. The linear power spectrum of density fluctuations was calculated with the code \textsc{\small CAMB} \citep{Lewi00a}. Initial conditions were generated using the Zel'dovich approximation at redshifts high enough to employ linear theory and low enough to avoid numerical artefacts, with $z_{\rm i}=150$ for the $244\,h^{-1}$Mpc volume and $z_{\rm i}=300$ for $47\,h^{-1}$Mpc \citep{Croc06a}.
\subsection{Radiative transfer simulations} The radiative transfer simulations are performed with our code \textsc{\small C$^2$-Ray} (Conservative Causal Ray-Tracing) \citep{Mell06a}. The method is explicitly photon-conserving in both space and time for individual sources and, to a good approximation, for multiple sources. This ensures accurate tracking of ionization fronts, independent of the spatial and time resolution, with corresponding gains in efficiency. The code has been tested in detail against a number of exact analytical solutions \citep{Mell06a}, as well as in direct comparison with a number of other independent radiative transfer methods on a standardised set of benchmark problems \citep{Ilie06b,Ilie09a}. The ionizing radiation is ray-traced from every source to every grid cell using the short-characteristics method, whereby the neutral column density between the source and a given cell is found by interpolating the column densities of the intervening cells, in addition to the neutral column density through the cell itself. The contribution of each source to the local photoionization rate of a given cell is first calculated independently. Then, all contributions are added together, and a nonequilibrium chemistry solver is used to calculate the resultant ionization state. Typically, multiple sources contribute to the local photoionization rate of each cell. Changes in the rate from additional sources modify the neutral fraction and, therefore, the neutral column density, which in turn changes the local photoionization rates themselves. Consequently, an iteration procedure is required in order to converge -- within a certain tolerance -- to the correct, self-consistent solution.
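The need for iteration can be illustrated with a deliberately simplified, single-cell toy model (grey opacity and equilibrium ionization at $10^4$\,K; this is \emph{not} the \textsc{C$^2$-Ray} solver, and the rate coefficients and scenario values below are textbook numbers used purely for illustration):

```python
import math

SIGMA_HI = 6.30e-18   # H I photoionization cross-section at 13.6 eV [cm^2]
ALPHA_B = 2.59e-13    # case-B recombination coefficient at 1e4 K [cm^3/s]

def cell_ionized_fraction(gamma0, n_h, cell_size_cm, tol=1e-8, max_iter=200):
    """Toy fixed-point iteration for one cell: the incident photoionization
    rate gamma0 [1/s] is attenuated by the cell's own neutral column, which
    itself depends on the ionized fraction x being solved for."""
    x = 1.0  # optimistic starting guess: fully ionized
    for _ in range(max_iter):
        tau = SIGMA_HI * (1.0 - x) * n_h * cell_size_cm  # neutral optical depth
        gamma = gamma0 * math.exp(-tau)                   # attenuated rate
        # photoionization equilibrium: gamma * (1 - x) = alpha_B * n_h * x^2
        a = ALPHA_B * n_h
        x_new = (-gamma + math.sqrt(gamma * gamma + 4.0 * a * gamma)) / (2.0 * a)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

With these illustrative numbers, a strongly irradiated cell at IGM density converges to $x\approx1$ within a few iterations, while a weakly irradiated cell shields itself behind its own neutral column and converges to $x\approx0$; the full solver performs an analogous iteration over all cells and sources simultaneously.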
The $N$-body simulations discussed above provide us with the spatial distribution of cosmological structures and their evolution in time, including the locations and masses of galactic haloes, lists of the $N$-body particles which belong to each halo, and the intergalactic gas density field. We then use this information as the input to a full, 3D radiative transfer simulation of the reionization history, as follows. We have saved a series of time-slices, including halo particle lists, halo catalogues, and the density field smoothed to a grid of the intended resolution of the \textsc{\small C$^2$-Ray} simulation, from redshift 50 down to 6. These time-slices are uniformly spaced in time, every $\Delta t=11.53$~Myr, for a total of 82~slices. Simulating the transfer of ionizing radiation with the same spatial resolution as the underlying $N$-body simulation (fine grid of $8000^3$, dynamic range $\sim\!10^5$) is still not feasible with current computational capacity. We therefore use an SPH-style smoothing scheme based on nearest neighbours to transform the data to lower resolution, with $306^3$ or $612^3$ cells for 47\,$h^{-1}$~Mpc and $250^3$ or $500^3$ cells for 244\,$h^{-1}$~Mpc, for the radiative transfer simulations. We combine sources which fall into the same coarse cell, which slightly reduces the number of sources to be considered compared to the total number of haloes. All simulations presented here include an approximate treatment of Lyman-limit systems (LLS), which are small, dense neutral regions that act as absorbers. During the early stages of reionization, the photon mean free path is set by the large neutral patches, making LLS unimportant, while at late times they set a mean free path of several tens of Mpc \citep{Song02a}. In the current simulations, we roughly model this mean free path by imposing a hard limit on the distance an ionizing photon can travel, set at 40 comoving Mpc.
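The two gridding steps described above -- down-sampling the density field and merging sources that share a coarse cell -- can be sketched with a much simpler mass-conserving average standing in for the SPH-style smoothing (array shapes and function names are ours, not from the pipeline):

```python
import numpy as np

def coarsen_density(rho, factor):
    """Average a cubic fine density grid down to a coarser one; for
    equal-volume cells this conserves total mass. A simple stand-in for
    the nearest-neighbour SPH-style smoothing actually used."""
    n = rho.shape[0] // factor
    return rho.reshape(n, factor, n, factor, n, factor).mean(axis=(1, 3, 5))

def merge_sources(positions, luminosities, n_coarse, box_size):
    """Combine all sources falling into the same coarse cell by summing
    their ionizing luminosities (positions in box units, shape (N, 3))."""
    grid = np.zeros((n_coarse,) * 3)
    idx = np.floor(positions / box_size * n_coarse).astype(int) % n_coarse
    np.add.at(grid, tuple(idx.T), luminosities)  # unbuffered accumulation
    return grid
```

`np.add.at` is used rather than plain fancy-indexed assignment so that several sources landing in the same cell all accumulate instead of overwriting one another.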
We consider more detailed LLS models and their effects on reionization in \citet{Shuk15a}. All identified haloes are potential sources of ionizing radiation, with different suppression criteria and ionizing photon production efficiencies imposed depending on the source model. We present a series of radiative transfer simulations with varying source models, summarized in Table~\ref{tab:summary}, as follows: \begin{itemize} \item{\raggedright{}\textit{HMACHs only:}} In this scenario, we assume that only large haloes produce ionizing photons, corresponding to reionization being driven exclusively by relatively large galaxies. In terms of source suppression, this model could be considered an extreme case where all LMACHs are fully suppressed (or never formed) at all times. This situation may also arise when mechanical feedback from supernovae quickly (on time-scales shorter than our time-step) quenches the star formation in low-mass haloes. Though not considered the most realistic option, this model provides a good baseline to gauge the absolute contributions to observables from HMACHs alone, as well as facilitating comparison to older simulations with lower resolution \citep[e.g.][]{Ilie06a,Seme07a}. All HMACHs have a source efficiency of $g_\gamma = 1.7$. \\ \item{\raggedright{}\textit{Fully suppressed LMACHs (S):}} This model was proposed previously \citep{Ilie07a,Ilie12a}. HMACHs are once again assigned $g_\gamma = 1.7$, while LMACHs are assigned a higher efficiency $g_\gamma = 7.1$ in neutral regions to mimic the properties of early galaxies. These galaxies likely had higher photon production from massive Pop.~III stars and/or higher escape fractions, and therefore higher overall photon production efficiencies, as detailed in Section~\ref{sec:eff}. We assume that LMACHs in ionized regions are completely suppressed, producing no ionizing photons.
This scenario corresponds to the case of aggressive suppression of LMACHs from either mechanical or radiative feedback or a combination thereof.\\ \item{\raggedright{}\textit{Partially suppressed LMACHs (pS):}} For this model, first introduced in this paper, LMACHs are assumed to contribute to reionization at all times. In neutral regions, we assign LMACHs a higher efficiency as in the previous model, $g_\gamma = 7.1$. In ionized regions, these small galaxies are suppressed, and we set their efficiency equal to that of the HMACHs, $g_\gamma = 1.7$. Here, star formation remains ongoing, but at a lower rate. Physically, this situation could arise if the fresh gas supply is cut off or diminished by the photoheating of surrounding gas, but a gas reservoir within the galaxy itself remains available for star formation. In this model, HMACHs are again given $g_\gamma = 1.7$.\\ \item{\raggedright{}\textit{Mass-dependent suppression of LMACHs (gS):}} This model is also introduced in this paper for the first time. Instead of the sharp decrease in ionizing efficiency of the previous two cases, this model applies a gradual, mass-dependent suppression of sources in ionized regions. As before, HMACHs are assigned $g_\gamma = 1.7$ everywhere, and LMACHs have that same efficiency when residing in neutral regions. In ionized patches, LMACHs are suppressed in a mass-dependent manner, described by equation~(\ref{eq:grad_eff}), where larger galaxies are less susceptible to any kind of suppression. \end{itemize} As discussed above, there are two series of radiative transfer simulations based on the structure formation data from the 244\,$h^{-1}$~Mpc (349~Mpc) and the 47\,$h^{-1}$~Mpc (67~Mpc) volumes, with the first having a sufficiently large volume to faithfully represent the reionization observables and the second affording better mass resolution.
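For reference, the four source prescriptions can be condensed into a single efficiency-assignment rule (a schematic restatement of the models above, not code from \textsc{C$^2$-Ray}; the mass thresholds and $g_\gamma$ values are those quoted in the text):

```python
def g_gamma(m_halo, ionized, model):
    """Ionizing efficiency g_gamma for a resolved halo of mass m_halo
    (in M_sun; minihaloes below 1e8 are not treated as sources), given
    whether its cell is ionized, under each source model."""
    if m_halo >= 1e9:               # HMACH: never suppressed
        return 1.7
    if model == "HMACHs only":      # LMACHs produce no photons
        return 0.0
    if model == "S":                # fully suppressed LMACHs
        return 0.0 if ionized else 7.1
    if model == "pS":               # partially suppressed LMACHs
        return 1.7 if ionized else 7.1
    if model == "gS":               # gradual, mass-dependent suppression
        if not ionized:
            return 1.7
        return max(0.0, 1.7 * (m_halo / 9e8 - 1.0 / 9.0))
    raise ValueError(model)
```

In the gS branch the expression reproduces equation~(\ref{eq:grad_eff}) with $g_{\gamma,\rm HMACH}=1.7$: it vanishes at $10^8\,M_\odot$ and reaches 1.7 at $10^9\,M_\odot$.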
These two very different computational volumes also allow us to investigate the effects of resolution and the sub-grid model and to evaluate which features of reionization and observable signatures are predicted robustly. We label all runs by a short label (listed in the first column of Table~\ref{tab:summary}) for more compact notation. Large-box runs are labelled LB1--LB4, while small-box ones are labelled SB1--SB4. The radiative transfer grid resolutions are $250^3$ and $306^3$ for the large and small volumes, with LB1, LB3, and SB2 also run at higher grid resolutions of $500^3$, $500^3$, and $612^3$ (LB1\_HR, LB3\_HR, and SB2\_HR), respectively. Note that LB3\_HR was not run through the end of reionization, but still provides a useful comparison. Our full simulation notation reads $Lbox\_gI\_(J)(Supp)$ (the bracketed quantities are listed only when needed), where `$Lbox$' is the simulation box size in Mpc, and `$I$' and `$J$' are the values of the $g_{\gamma}$ factor for HMACHs and LMACHs, respectively. The symbol `Supp' indicates the suppression model: S (fully suppressed), pS (partially suppressed), or gS (mass-dependent suppression), with no symbol meaning HMACHs only. For example, 67Mpc\_g1.7\_7.1pS indicates that large sources have an efficiency $g_\gamma=1.7$, while small sources have an efficiency $g_\gamma=7.1$ in neutral regions and are suppressed to $g_\gamma=1.7$ in ionized regions. Most of these simulations were run on Curie at GENCI, France under the PRACE4LOFAR Tier-0 (Petascale) project, which was awarded time under the 5th and 9th Partnership for Advanced Computing in Europe (PRACE) calls. The rest of the simulations were run on computers in Germany (SuperMUC at LRZ, Hermit and Hornet at HLRS), Sweden (Triolith at NSC and Abisko at HPC2N), Finland (Sisu at CSC), the United States (Lonestar at TACC), and the UK (Archer at EPCC and Apollo at the University of Sussex).
The $N$-body simulations were run on 864 cores (47\,$h^{-1}$~Mpc) and 8,000 cores (244\,$h^{-1}$~Mpc) and required 89k and 456k core-hours, respectively, to complete. The radiative transfer simulations were run on a variable number of computing cores, up to 32,000. The lower-resolution runs required between 100k and 1M core-hours (47\,$h^{-1}$~Mpc volume) and between 256k and 3M core-hours (244\,$h^{-1}$~Mpc volume). The high-resolution runs required 4M (47\,$h^{-1}$~Mpc volume) and 2M (244\,$h^{-1}$~Mpc volume) core-hours, respectively. The resources required for each radiative transfer run depend on the grid resolution used \emph{and} the number of active sources, with the latter varying significantly depending on the source suppression model -- from relatively low (LB1, LB2, SB1, SB2) to extremely high (LB3, LB4). In the latter cases, all grid cells contain active sources at late times. \begin{table*} \caption{Reionization simulation parameters and global reionization history results.} \label{tab:summary} \begin{center} \begin{tabular}{@{}llllllllllll}\hline \hline label & run & box size & $g_{\gamma}$ & $g_{\gamma}$ & $g_{\gamma}$ &mesh & $\tau_{\rm es}$ & $z_{10\%}$&$z_{50\%}$&$z_{90\%}$&$z_{\rm reion}$ \\ & & \tiny{[$h^{-1}$Mpc]} & \tiny{HMACH} & \tiny{LMACH} & \tiny{LMACH$_{\rm supp}$} & & & &&& \\[1.5mm] \hline LB1 & 349Mpc\_g1.7\_0 & 244 & 1.7 & 0 & 0 & $250^3$ & 0.049 & 8.515 & 7.059 & 6.483 & 6.231 \\[2mm] LB1\_HR & 349Mpc\_g1.7\_0\_HR & 244 & 1.7 & 0 & 0 & $500^3$ & 0.049 & 8.456 & 7.059 & 6.483 & 6.201 \\[2mm] LB2 & 349Mpc\_g1.7\_7.1S & 244 & 1.7 & 7.1 & 0 & $250^3$ & 0.055 & 10.290 & 7.263 & 6.617 & 6.323 \\[2mm] LB3 & 349Mpc\_g1.7\_7.1pS & 244 & 1.7 & 7.1 & 1.7 & $250^3$ & 0.068 & 11.200 & 8.636 & 7.859 & 7.525 \\[2mm] LB3\_HR$^1$ & 349Mpc\_g1.7\_7.1pS & 244 & 1.7 & 7.1 & 1.7 & $500^3$ & & 10.673 & 8.515 & & \\[2mm] LB4 & 349Mpc\_g1.7\_gS & 244 & 1.7 & 1.7 & eqn.~\ref{eq:grad_eff} & $250^3$ & 0.057 & 9.938 & 7.712 & 6.981 & 6.721 \\[2mm] \\ SB1 &
67Mpc\_g1.7\_0 & 47 & 1.7 & 0 & 0 & $306^3$ & 0.052 & 8.762 & 7.348 & 6.721 & 6.418 \\[2mm] SB2 & 67Mpc\_g1.7\_7.1S & 47 & 1.7 & 7.1 & 0 & $306^3$ & 0.054 & 9.308 & 7.480 & 6.793 & 6.483 \\[2mm] SB2\_HR & 67Mpc\_g1.7\_7.1S\_HR & 47 & 1.7 & 7.1 & 0 & $612^3$ & 0.053 & 9.235 & 7.435 & 6.757 & 6.418 \\[2mm] SB3 & 67Mpc\_g1.7\_7.1pS & 47 & 1.7 & 7.1 & 1.7 & $306^3$ & 0.064 & 10.383 & 8.515 & 7.809 & 7.480 \\[2mm] SB4 & 67Mpc\_g1.7\_gS & 47 & 1.7 & 1.7 & eqn.~\ref{eq:grad_eff} & $306^3$ & 0.058 & 9.382 & 7.760 & 7.099 & 6.793 \\[2mm] \hline \end{tabular} \end{center} \begin{flushleft} $^1$ Simulation not run beyond $x_{\rm m} = 0.78$. \end{flushleft} \end{table*} \section{Results} \label{sec:results} \subsection{Comparison to observations} \label{sec:obs_comp} \begin{figure*} \begin{center} \hspace{-0.4in} \includegraphics[height=1.8in]{xv_244Mpc_250.eps} \vspace{-0.2in} \hspace{-0.09in} \includegraphics[height=1.8in]{tau_244Mpc_250.eps} \vspace{-0.2in} \hspace{-0.082in} \includegraphics[height=1.8in]{Gamma_mean_244Mpc_250_vol.eps} \vspace{-0.2in} \hspace{-0.25in} \vspace{+1.1cm} \caption{The four source models in the 244\,$h^{-1}$~Mpc box compared to observational constraints. \emph{Left:} The volume-weighted mean neutral fraction of hydrogen compared to observational inferences from Ly$\alpha$~ forest transmission (squares) \citep{Fan06}, dark Ly$\alpha$~ forest pixels (triangles) \citep{McGr11a,McGr15a}, quasar near zones (circles) \citep{Schr13a}, GRB damping wing absorption (diamonds) \citep{McQu08a,Chor13a}, the decline in Ly$\alpha$~ emitters (hexagons) \citep{Ota08a,Ouch10a}, and Ly$\alpha$~ clustering (pentagons) \citep{Ouch10a}, following the discussion in \citet{Robe15a}. \emph{Middle:} The integrated electron-scattering optical depth compared to the \emph{Planck}TT+lowP+lensing+BAO 2015 results (dashed horizontal line) and the 1$\sigma$ error interval (shaded region) \citep{Plan15a}.
\emph{Right:} The mean volume-weighted hydrogen photoionization rate compared to the observational constraint of \citet{Wyit11a} (hexagon). \label{fig:obs}} \end{center} \end{figure*} The current observational constraints on the timing and duration of reionization are still not tight. The main observables include the integrated electron-scattering optical depth derived from the cosmic microwave background polarization power spectra, which suggests an extended process \citep[e.g.][]{Robe15a}, and observations of the galaxies and intergalactic medium towards the end of reionization, which indicate that it ended around redshift $z\sim6$ \citep[e.g.][]{McGr15a}. Our models yield a range of results for these quantities, generally consistent with these constraints (Fig.~\ref{fig:obs} and Table~\ref{tab:summary}). The left panel of Fig.~\ref{fig:obs} shows the volume-weighted mean neutral fraction of hydrogen, $x_{\rm \ion{H}{i}}^{\rm v}$, inferred from a variety of observations. The best-known results for $x_{\rm \ion{H}{i}}^{\rm v}$ come from measurements of the effective optical depth evolution of the Ly$\alpha$~ forest (including higher-order transitions, if available) along many lines of sight to high-redshift quasars in \citet{Fan06}, represented by squares and labelled Ly$\alpha$~ forest transmission. Interpreting the transmission as a neutral fraction requires significant modelling, so the resultant neutral fraction is somewhat uncertain \citep{Mesi10}. Nearly model-independent upper limits on the neutral fraction come from the fraction of dark pixels in the Ly$\alpha$~ forest, shown as triangles \citep{McGr11a,McGr15a}. Gamma-ray burst (GRB) damping wings, though rare, provide upper limits in this redshift range (diamonds) \citep{McQu08a,Chor13a}. The sizes of near zones around quasars give some indication of the minimum neutral fraction (circles), but these measurements are dependent on uncertain intrinsic quasar properties \citep{Bolt11a,Schr13a}.
Ly$\alpha$~ emitters \citep{Ota08a,Ouch10a} and their clustering \citep{Ouch10a} provide further constraints, shown as hexagons and pentagons, respectively. Our late reionization models (LB1, LB2 and LB4) agree well with the fast rise in the neutral hydrogen fraction observed at $z\sim6-7$. The early reionization model (LB3) does not: its numerous, weakly suppressed sources lead to an earlier end of reionization. However, tuning down the assumed source efficiencies in LB3 would bring the neutral fraction evolution into agreement with these data. The integrated electron-scattering optical depth from the CMB last scattering surface to the present era, $\tau_{\rm es}$, measured from our simulations is listed in Table~\ref{tab:summary} and plotted in the middle panel of Fig.~\ref{fig:obs}. Current constraints from \emph{Planck}TT+lowP+lensing+BAO data give $\tau_{\rm es} = 0.066\pm0.013$ \citep{Plan15a}, shown as the shaded region in Fig.~\ref{fig:obs}. In contrast to the end-of-reionization data, the simulated $\tau_{\rm es}$ is highest for the LB3 model, making it the closest to the central observed $\tau_{\rm es}$ value. The late reionization models correspond to lower values, albeit still in agreement within $1\sigma$ (LB2 and LB4) and within $2\sigma$ (LB1). We note that these data do not independently constrain the exact start, finish, or duration of reionization. An earlier beginning to reionization generally gives a larger $\tau_{\rm es}$, since the early Universe is denser and larger densities amplify $\tau_{\rm es}$. The very beginning of reionization is likely driven by minihaloes \citep{Ahn12a}, which form much earlier than the LMACHs and HMACHs that we consider here. Since we do not include any contribution from minihaloes, we expect our results to be $\sim\!0.02$ too low compared to those cases with very active star formation inside minihaloes \citep{Ahn12a}.
More massive haloes dominate the later stages of reionization, which are the focus of this work. Finally, the right panel of Fig.~\ref{fig:obs} shows the volume-averaged hydrogen photoionization rate, $\Gamma$. Our late reionization simulations all predict $\Gamma\sim10^{-12}\,\mathrm{s}^{-1}$ at $z=6$, while the observations (hexagon) find a slightly lower value of $\Gamma_{\rm obs}=10^{-13}-10^{-12.4}\,\mathrm{s}^{-1}$ \citep{Wyit11a,Calv11a}. This discrepancy might be resolved by, for example, including small-scale gas clumping, which is not done in the simulations presented here. This clumping delays the late stages of reionization, while not having a very significant effect on the optical depth (Mao et al., in prep.). On the other hand, the early reionization model LB3 yields a significantly higher value for the photoionization rate and can likely be excluded for the current efficiency parameters. Once again, this model can be reconciled with the observational data by tuning down the assumed ionizing photon efficiencies. \subsection{Reionization history} \label{sec:reion_hist} The mean global reionization histories derived from our simulations can be characterised by several basic parameters, as detailed in Table~\ref{tab:summary}. The first of these parameters is the end of the reionization epoch, $z_{\rm reion}$, which we define as the time when the mass-weighted ionized fraction of the gas, $x_{\rm m}$, first surpasses 99 per cent. This value also quantifies the overall duration of reionization, since the start of reionization is determined by when the first resolved haloes form in our simulations, which is fixed by structure formation alone. The second global parameter is $\tau_{\rm es}$, as discussed in the previous section. Finally, the redshifts at which $x_{\rm m}$ reaches 10 per cent, 50 per cent, and 90 per cent are listed, corresponding to the early, middle, and late stages of reionization.
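Given a tabulated global history $x_{\rm m}(z)$, these summary parameters can be extracted as sketched below (hydrogen-only, flat $\Lambda$CDM with illustrative parameters; the real $\tau_{\rm es}$ calculation integrates the simulated electron density directly, including helium):

```python
import math

SIGMA_T = 6.652e-25   # Thomson cross-section [cm^2]
C_CM = 2.998e10       # speed of light [cm/s]
MPC_CM = 3.086e24     # 1 Mpc [cm]

def milestone_z(z, x_m, level):
    """Redshift at which x_m (tabulated on decreasing z) first crosses
    `level`, by linear interpolation; e.g. level=0.5 gives z_50%."""
    for i in range(1, len(z)):
        if x_m[i - 1] < level <= x_m[i]:
            f = (level - x_m[i - 1]) / (x_m[i] - x_m[i - 1])
            return z[i - 1] + f * (z[i] - z[i - 1])
    return None

def tau_es(z, x_i, h=0.7, omega_m=0.27, omega_b=0.044):
    """Electron-scattering optical depth for an ionized-fraction history
    x_i(z) tabulated on decreasing z down to 0 (hydrogen only)."""
    H0 = h * 100.0 * 1.0e5 / MPC_CM                           # [1/s]
    n_h0 = 0.76 * omega_b * 1.8788e-29 * h * h / 1.6726e-24   # [1/cm^3]
    tau = 0.0
    for i in range(1, len(z)):
        zm = 0.5 * (z[i - 1] + z[i])
        xm = 0.5 * (x_i[i - 1] + x_i[i])
        Hz = H0 * math.sqrt(omega_m * (1 + zm) ** 3 + 1 - omega_m)
        # dtau = c * sigma_T * n_e * |dt/dz| dz, with n_e = x_i n_H0 (1+z)^3
        tau += (C_CM * SIGMA_T * n_h0 * (1 + zm) ** 3 * xm
                / ((1 + zm) * Hz)) * abs(z[i - 1] - z[i])
    return tau
```

As a sanity check, a step-function history that becomes fully ionized at $z\approx7$ gives $\tau_{\rm es}\approx0.04$ in this hydrogen-only approximation, comparable to the values in Table~\ref{tab:summary}.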
The middle redshift, when 50 per cent of the gas mass is ionized for the first time, is of particular interest for observations, since it is a good, if rather rough, indicator of the epoch when the ionization fluctuations reach a maximum \citep[e.g.][]{Mell06b}. This maximum corresponds to a maximum in many observables, such as the redshifted 21-cm fluctuations. \begin{figure*} \includegraphics[width=3.2in]{xfrac_244Mpc_250_comb.eps} \includegraphics[width=3.2in]{xfrac_47Mpc_306_comb.eps} \caption{Redshift evolution of the mass-weighted ionized fraction (lower panels) and the corresponding ratios of mass-weighted and volume-weighted ionized fractions (top panels), which are equal to the mean density of the ionized regions in units of the mean density of the Universe. \emph{Left:} the $244\,h^{-1}$~Mpc box shows the evolution for source models LB1 (solid), LB2 (dotted), LB3 (dashed), and LB4 (dot-dashed) (bottom panel). \emph{Right:} for the $47\,h^{-1}$~Mpc box, SB1 (solid), SB2 (dotted), SB3 (dashed), and SB4 (dot-dashed) are displayed. \emph{Insets:} the same reionization histories in linear scale, as opposed to logarithmic. \label{fig:fracs}} \end{figure*} The effect of the source model on the progression of reionization is demonstrated by the globally averaged reionization histories as a function of redshift, shown in Fig.~\ref{fig:fracs}. Predictably, reionization starts significantly earlier in models where LMACHs are present, as these haloes form earlier. The first HMACHs in the $244\,h^{-1}$~Mpc volume form at $z\sim21$, well after the first LMACHs. Accordingly, the SB1/LB1 HMACH-only models start reionizing with a significant delay with respect to the other models. Cases with high-efficiency LMACHs (LB2 and LB3) naturally reionize faster than the low-efficiency one (LB4).
Initially, the method of LMACH suppression, either the full, aggressive one (LB2) or the partial one (LB3), makes essentially no difference in the global history, because the exponential growth of the halo collapsed fraction drives the exponential rise in the ionized fraction. However, once the ionized fraction reaches a few per cent, these two models begin to diverge, as the LMACH suppression becomes more pronounced. The ionized fraction in LB3 continues to grow quickly, with a change in slope due to the decreasing efficiency of the LMACHs. In contrast, LB2 results in a considerable slowdown and flattening of the reionization history until the HMACHs become dominant at $z\sim9-10$, after which the exponential growth resumes. In the global reionization history, the gradual, mass-dependent suppression model (LB4) follows the same trends as LB3, but with some delay due to its lower-efficiency LMACHs. Accordingly, the ionization fraction in LB4 overtakes the one in LB2 at redshifts just below $z\sim10$, as the lack of full LMACH suppression compensates for their lower efficiencies. The end of reionization, $z_{\rm reion}$, is dictated by the surviving sources and their efficiencies in each case, with little influence from the previous history, which is related to a process we refer to as self-regulation \citep{Ilie07a}. Consequently, LB1 and LB2 reach the end of reionization at approximately the same time, since only HMACHs remain at late times in either case. In contrast, LMACHs survive, albeit at lower efficiencies, in LB3 and LB4 and, thus, still contribute significantly to the entire evolution, leading to an earlier completion of reionization. The effective efficiency of LMACHs in LB4 is lower than in LB3 (though increasing over time due to the growing average source mass), slowing reionization. Reionization is fastest in models LB1 and LB3 and relatively slow and extended in LB2 and LB4.
Accordingly, in the former cases, detecting an all-sky `global step' in the 21-cm emission due to the relatively fast transition of the IGM from fully neutral to ionized will be easier \citep{Shav99a}. However, the EoR is fairly extended in \emph{all} cases, so such a measurement remains very difficult. The smaller, $47\,h^{-1}$~Mpc volumes are based on a higher underlying $N$-body resolution (eliminating the need for sub-grid halo modelling) \emph{and} a higher radiative transfer grid resolution ($\sim\!154\,h^{-1}$~kpc cells vs. $\sim\!976\,h^{-1}$~kpc). The main drawback is that the volume is 1/140th of that of the $244\,h^{-1}$~Mpc boxes. The reionization process starts later in smaller volumes, because the earliest sources are very rare and are statistically unlikely to exist at very high redshift. For the same reason, the transition between LMACH-dominated and HMACH-dominated evolution is quicker and less pronounced. Regardless of these underlying dissimilarities, the overall trends in the global reionization histories discussed above remain the same for both simulation sizes, indicating the overall robustness of the results. The resolution also plays a small role in whether a source can ionize its cell and immediate surroundings, which is most evident in the fully suppressed model, as SB2 appears depressed compared to LB2 (dotted lines on the right and left, respectively). Though a minor effect overall, the higher resolution of the smaller box means a smaller cell is more easily ionized, suppressing more sources for a given suppression threshold. For a fixed simulation volume, increasing the RT grid resolution for the $47\,h^{-1}$~Mpc box does not have an appreciable effect on the reionization history or the number of photons emitted, indicating full convergence.
For the larger, $244\,h^{-1}$~Mpc volume, the RT grid resolution has very little effect on the reionization history in case LB1, but in LB3 reionization is slightly delayed (by $\Delta z<0.5$), particularly in the intermediate stages where the LMACH suppression is more prominent. We do not show these comparisons due to their similarity, but the main change at lower resolution is a decrease in the suppression of LMACHs. In contrast to the reionization histories, the ratio of mass-weighted to volume-weighted ionized fraction, which indicates the character (inside-out or outside-in) of the reionization process \citep{Ilie06a}, mostly shows only minor variations between models. The only exception here is model LB1 (HMACH-only), where the ionized regions are comparatively more overdense. However, this difference is largely due to numerical resolution, rather than a physical effect, as we discuss below. For the larger box, the ratio of mass-weighted to volume-weighted ionized fraction is always lower in the suppression models than in the HMACH-only model, indicating that reionization has a less pronounced inside-out character. In other words, ionized regions are less correlated with the highest density peaks in models with LMACHs, since reionization is driven by a wider range of sources, including low-mass, less-biased ones. The ratio in the gradual suppression model LB4 is somewhat higher than in the other two suppression models once reionization is underway ($x_{\rm m}\gsim0.01$). This model is more biased, because the largest LMACH sources are more strongly clustered and have a higher efficiency on average compared to the LMACHs in the fully or partially suppressed models. For the $47\,h^{-1}$~Mpc box (right) in Fig.~\ref{fig:fracs}, we can see that $x_{\rm m}/x_{\rm v}$ is nearly converged. The higher-resolution run (SB2\_HR, not shown) differs in the ratio only slightly, indicating a robust inside-out nature of all models.
In the context of the small, high-resolution boxes versus the large, low-resolution boxes, the high-resolution ratios take a somewhat different shape: rising initially, then falling towards unity at the end of reionization, by definition. The values reached are significantly higher, due to the better radiative transfer grid resolution, which amplifies the inside-out nature of the process. For the higher resolution runs in the larger volume (LB1\_HR and LB3\_HR, not shown), $x_{\rm m}/x_{\rm v}$ peaks at higher values for the same reason. As noted above, only the HMACH-only model at low resolution achieves a similar shape to the results of the smaller boxes, and at high resolution, the ratio exceeds three. The partially suppressed, high-resolution model has a flatter shape, indicating a lack of convergence. Whether a source can ionize its own cell and immediate surroundings is resolution dependent; therefore, models with suppression generally require higher resolution than models with straightforward sources with a constant mass-to-light ratio. In summary, the different LMACH suppression models result in significant variations in the duration and shape of the reionization history, even for the same underlying cosmological structures and the same efficiencies for HMACHs. For all models, HMACHs dominate during the late stages of reionization, which are the focus of this work. However, the inside-out nature of the process, in the sense of denser structures being ionized earlier on average, remains robust and roughly independent of the source suppression model, depending somewhat on the resolution. 
\begin{figure*} \includegraphics[width=3.2in]{244Mpc_250_flux_comb.eps} \includegraphics[width=3.2in]{47Mpc_306_flux_comb.eps} \caption{\label{fig:source_lum} Number of ionizing photons emitted by all active sources in the computational volume per time-step renormalized to a ($100\,h^{-1}$\,Mpc)$^3$ volume (bottom panels) and cumulative number of photons per total gas atom released into the IGM (top panels). Shown are the 244\,$h^{-1}$~Mpc box (left) with LB1 (solid), LB2 (dotted), LB3 (dashed), and LB4 (dot-dashed) and the 47\,$h^{-1}$~Mpc box (right) with SB1 (solid), SB2 (dotted), SB3 (dashed), and SB4 (dot-dashed). The open circles indicate $z_{\rm reion}$.} \end{figure*} These reionization histories are a direct consequence of the overall number of ionizing photons being emitted by all active sources, shown in Fig.~\ref{fig:source_lum}. For ease of comparison, the numbers of photons emitted in both the large and small boxes are renormalized to a ($100\,h^{-1}$\,Mpc)$^3$ volume. In case LB1, where only HMACHs contribute, the number of photons emitted per time-step simply rises in proportion to the halo collapsed fraction, which grows roughly exponentially. In all cases with LMACHs, reionization begins earlier, and all models initially have similar slopes. In fact, cases LB2 and LB3 are nearly identical until sufficient reionization occurs to produce significant self-regulation, around $x_{\rm m} = 0.20$. In the full-suppression case LB2, the initial exponential rise is halted around redshift $z\sim15$, after which the emissivity increases only slowly (and moderately non-monotonically) until $z\sim9$, when high-mass, non-suppressible sources become dominant and the low-mass sources become highly suppressed. Therefore, similarly to our earlier results in \citet{Ilie07a}, the late phase of reionization and the end of the epoch, $z_{\rm reion}$, are dominated by HMACHs, while the LMACHs dominate the early phase of reionization and provide a significant boost to $\tau_{\rm es}$. 
Similarly to the reionization histories above, the resolution and box-size effects play a minor role, making the main trends in the evolution of the number of ionizing photons robust. The HMACH-only model is minimally affected by resolution, with the high-resolution flux appearing nearly identical to the low-resolution case. The models with suppression are somewhat affected by resolution, particularly through the sub-grid modelling of the LMACHs (in the large boxes) and the lower RT grid resolution. The combined effect is more significant suppression in the higher-resolution case, since the cells are smaller and therefore easier to ionize, which suppresses the LMACHs. However, once the ionization fraction grows sufficiently, the amount of suppression in the low-resolution case reaches and then surpasses that in the high-resolution case, since the sub-grid LMACHs are more strongly clustered than the resolved ones \citep{Ahn15a}. With more nearby sources increasing the ionizing radiation experienced by an LMACH, the source cell is ionized more easily, resulting in earlier suppression and the majority of ionizing photons being produced by HMACHs at late times. All models require just under two photons per atom to reach the end of reionization, independent of resolution and simulation volume. Therefore, increasing the resolution by a factor of 13, from $0.98$ to $0.078\,h^{-1}$\,Mpc, resolves more gas clumping, but does not substantially increase the impact of recombinations on reionization. Gas clumping at much smaller scales is needed to increase the number of photons per atom to higher values (Mao et al., in prep). The LLSs, only partly included here, may also increase the number of photons required to complete reionization \citep{Shuk15a}. 
\subsection{Ionization morphology} \label{sec:morph} \begin{figure*} \begin{center} \includegraphics[width=1.7in]{xy306_ion_47Mpc_8.2S_612_8.012.eps} \includegraphics[width=1.7in]{xy306_ion_47Mpc_8.2S_612_7.435.eps} \includegraphics[width=1.7in]{xy306_ion_47Mpc_8.2S_612_7.059.eps} \includegraphics[width=1.7in]{xy306_ion_47Mpc_8.2S_612_6.757.eps} \vspace{-0.3cm} \end{center} \caption{Spatial slices of the ionized and neutral gas density from our radiative transfer simulation SB2\_HR at box-averaged mass-weighted ionized fractions $x_{\rm m}= 0.3, 0.5, 0.7,$ and 0.9 from left to right. The density field is shown in blue, with lighter shades corresponding to denser regions and vice versa, and overlaid with the ionization field, where dark is neutral and light is fully ionized. \label{fig:images_gradual}} \end{figure*} In Fig.~\ref{fig:images_gradual}, we illustrate the evolution of the reionization geometry at several key stages of the process, corresponding to mass-weighted ionized fractions of $x_{\rm m}= 0.3, 0.5, 0.7,$ and 0.9 from left to right. We use a small-box, high-resolution simulation here, specifically SB2\_HR, which allows for better discrimination of any differences between models, as small-scale structure is more discernible. Note that these smaller volumes are missing the large-scale density modes, which introduce additional large-scale fluctuations \citep{Ilie14a}. Even the new, mass-dependent source suppression model shares the basic features of the source models considered in previous work \citep[e.g.][]{Ilie12a}, which we will explore in detail. Initially, a large number of fairly small, Mpc-size \ion{H}{ii} regions form. These regions are strongly clustered on small scales, following the clustering of the sources. Locally, these small \ion{H}{ii} regions quickly start merging into larger ones, with sizes between a few and $\sim\!10$~Mpc across. 
We note that, of course, these are 2D cuts of the ionization field and that \ion{H}{ii} regions can, and do, have different sizes depending on the direction considered, as quantified e.g. in \citet{Ilie08b}. Significant large-scale percolation of the \ion{H}{ii} regions only occurs when the universe reaches $\sim\!50$ per cent ionization by mass, at which point, many ionized regions reach sizes of tens of Mpc and become connected by bridges to other nearby, large ionized regions of similar size. At the same time, different regions of similar size still remain neutral. The \ion{H}{ii} regions continue percolating up to still larger scales, and by $x_{\rm m}=0.7$, some reach tens of Mpc across, with significant neutral regions remaining between them. These large ionized and neutral regions both reflect the large-scale fluctuations of the underlying density field, as the densest regions are also sources. Finally, when the mass is 90 per cent ionized, most \ion{H}{ii} regions have percolated into one, though significant neutral regions remain even in this late phase. \begin{figure*} \begin{center} \includegraphics[width=1.7in]{xy153_ion_47Mpc_0_306_7.348.eps} \includegraphics[width=1.7in]{xy153_ion_47Mpc_8.2S_306_7.480.eps} \includegraphics[width=1.7in]{xy153_ion_47Mpc_8.2pS_306_8.515.eps} \includegraphics[width=1.7in]{xy153_ion_47Mpc_gS_306_7.760.eps} \vspace{-0.3cm} \end{center} \caption{Spatial slices of the ionized and neutral gas density from our 47\,$h^{-1}$~Mpc box. Models SB1, SB2, SB3, and SB4 (from left to right) are shown at the same mass-weighted ionized fraction, $x_{\rm m}\approx0.5$. The density field is shown in blue, with lighter shades corresponding to denser regions and vice versa, and overlaid with the ionization field, where dark is neutral and light is fully ionized. 
\label{fig:images_47Mpc}} \end{figure*} Direct comparison of all four simulations at the same ionized fraction illustrates the differences in morphology caused by the various LMACH suppression models, shown as SB1, SB2, SB3, and SB4 from left to right in Fig.~\ref{fig:images_47Mpc} at $x_{\rm m}\approx0.50$. In all cases, the large-scale structures of the ionization field strongly correlate with the underlying distribution of density and clustered haloes and are, thus, quite similar. There are, however, significant differences in the smaller scale structures among the range of simulations. Naturally, the HMACH-only SB1 has larger, smoother ionized patches and few small-scale ones. The aggressive suppression case (SB2) has more widespread relic \ion{H}{ii} regions, where the local sources have switched off, compared to SB3 and SB4, where most LMACHs remain active, albeit at a lower emissivity. Cases SB2 and SB3 have much more fine, small-scale structure compared to SB1 and (to a lesser extent) SB4. Finally, we compare different RT grid resolutions for the same source models in Fig.~\ref{fig:images_res}. Apart from (obviously) much sharper images in the high-resolution cases, the reionization morphology is largely the same. This is especially true for the smaller, $47\,h^{-1}$~Mpc volume, but the two distributions are quite close in both volumes. Clearly, more small-scale structure is revealed at higher resolution, but that is likely too small to make a difference for the first generation of observations, which will have relatively low resolution. At least visually, the overall differences are small between the four source models and depend weakly on the RT resolution. We quantify these differences in more detail below. 
\begin{figure*} \begin{center} \includegraphics[width=2.2in]{xy153_ion_47Mpc_8.2S_306_7.139.eps} \vspace{-0.0in} \hspace{-0.055in} \includegraphics[width=2.2in]{xy306_ion_47Mpc_8.2S_612_7.059.eps}\\ \includegraphics[width=2.2in]{xy125_ion_244Mpc_8.2pS_250_8.172.eps} \hspace{-0.055in} \includegraphics[width=2.2in]{xy250_ion_244Mpc_8.2pS_500_8.118.eps} \end{center} \caption{Spatial slices of the ionized and neutral gas density from our radiative transfer simulations with box sizes 47\,$h^{-1}$~Mpc (upper panels) and 244\,$h^{-1}$~Mpc (lower panels), all at mass-averaged ionized fraction $x_{\rm m}\sim0.70$. The density field is shown in blue, with lighter shades corresponding to denser regions and vice versa, and overlaid with the ionization field, where dark is neutral and light is fully ionized. The left panels are low resolution, 306$^3$ for 47\,$h^{-1}$~Mpc and 250$^3$ for 244\,$h^{-1}$~Mpc, and the right panels are high resolution, 612$^3$ for 47\,$h^{-1}$~Mpc and 500$^3$ for 244\,$h^{-1}$~Mpc. Shown are cases SB2, SB2\_HR, LB3, and LB3\_HR (left to right and top to bottom). \label{fig:images_res}} \end{figure*} A more quantitative measure of the size distributions of the ionized regions, based on the spherical average method \citep[SPA,][]{Zahn07a,McQu07a}, supports the qualitative conclusions drawn from the slices. Fig.~\ref{fig:R_dist} shows the probability distributions for the radius of ionized regions, $R_{\rm \ion{H}{ii}}$, at $x_{\rm m}$ = 0.3 and 0.5 (left and middle panel, respectively) for LB1 (solid), LB2 (dotted), LB3 (dashed), and LB4 (dot-dashed). To investigate the sizes of ionized regions, we rely on the $244\,h^{-1}$~Mpc volume, since small simulation volumes severely constrain the abundance and sizes of large \ion{H}{ii} regions \citep{Ilie14a}. The distributions reflect both the suppression mechanism and the epoch at which the corresponding reionization stage is reached. 
As expected, the size of the ionized bubbles grows during reionization, starting mainly at the Mpc scale for $x_{\rm m}=0.3$ (left). At this stage, LB1 has the flattest distribution, whereas the models with LMACHs produce predominantly small bubbles. By the midpoint ($x_{\rm m}=0.5$, middle), ionized regions of at least $\sim\!10$~Mpc begin to emerge for all source models. Here, LB3 has the most numerous and uniform sources, yielding the flattest distribution. As more ionized regions merge together, large bubbles of $\gtrsim\!10$~Mpc begin to dominate. Throughout reionization, the partially suppressed model (LB3) always has smaller bubbles on average, since the smallest, abundant sources are never fully suppressed. Conversely, the HMACH-only model (LB1) has the largest bubbles on average. As expected from visual inspection of the spatial slices, LB1 and LB3 are at the two extremes during the early stages of reionization ($x_{\rm m}=0.3$, left), with distributions skewed towards very large patches for the former and small patches for the latter. This behaviour reflects the size of the sources: large sources -- which cannot be suppressed -- create large bubbles by emitting more photons. Conversely, highly efficient, small sources create small bubbles, are then suppressed, and just maintain the ionized region. The other two cases, LB2 and LB4, show almost identical distributions at this time, intermediate between the two extremes. Around 50 per cent ionized (middle panel), the bubble sizes for all models have grown, and the distributions have become increasingly similar. LB2 is becoming dominated by the large sources that drive LB1, narrowing the gap between the distributions from early times. By $x_{\rm m} = 0.7$ (not shown here), the distributions have nearly converged for all models, with log$_{10}(R_{\rm \ion{H}{ii}}^{\rm max})$ ranging from $\sim\!1.1 - 1.4$. 
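The SPA statistic underlying these size distributions can be illustrated with a toy implementation: for randomly chosen ionized cells, grow a sphere until the mean ionized fraction within it drops below a threshold, and record the largest acceptable radius. This is a minimal sketch on a periodic grid, not the published pipeline; the sample size, threshold, and loop bounds are illustrative choices.

```python
import numpy as np

def spa_radii(xion, n_samples=200, threshold=0.9, rmax=None, seed=0):
    """Spherical-average (SPA) bubble sizes: for random ionized cells,
    return the largest radius (in cells) at which the mean ionized
    fraction within the sphere still exceeds `threshold`.
    Toy implementation on a periodic grid."""
    rng = np.random.default_rng(seed)
    N = xion.shape[0]
    rmax = rmax or N // 2
    # precompute cell offsets out to the maximum trial radius
    ax = np.arange(-rmax, rmax + 1)
    dx, dy, dz = np.meshgrid(ax, ax, ax, indexing="ij")
    dist = np.sqrt(dx**2 + dy**2 + dz**2)
    ion_cells = np.argwhere(xion > threshold)
    picks = ion_cells[rng.integers(len(ion_cells), size=n_samples)]
    radii = []
    for c in picks:
        r = 0
        for rtry in range(1, rmax + 1):
            sel = dist <= rtry
            xs = (c[0] + dx[sel]) % N   # periodic wrapping
            ys = (c[1] + dy[sel]) % N
            zs = (c[2] + dz[sel]) % N
            if xion[xs, ys, zs].mean() < threshold:
                break
            r = rtry
        radii.append(r)
    return np.array(radii)
```

Histogramming the returned radii (converted from cells to $h^{-1}$~Mpc) gives a size distribution of the kind shown in Fig.~\ref{fig:R_dist}.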
The rightmost plot of Fig.~\ref{fig:R_dist} shows the probability distributions for the radius of neutral islands, $R_{\rm \ion{H}{i}}$, at $x_{\rm m} = 0.9$, since, at this late time, the ionized patches have all topologically merged and only the neutral islands are distinct. As before, LB3 is the most uniform with the smallest neutral regions, and LB1 is the most stochastic with the largest neutral regions. The remaining models (LB2 and LB4) are very similar at this point. The neutral regions are also more Gaussian than the ionized regions, especially in the large-$R_{\rm \ion{H}{i}}$ tail. \begin{figure*} \begin{center} \vspace{+0.1in} \includegraphics[height=1.7in]{zahn_244Mpc_250_x30.eps} \vspace{-0.2in} \includegraphics[height=1.7in]{zahn_244Mpc_250_x50.eps} \vspace{-0.2in} \includegraphics[height=1.7in]{zahn_244Mpc_250_n90.eps} \vspace{+0.8cm} \caption{ \label{fig:R_dist} Size distributions of ionized or neutral regions for the $244\,h^{-1}$~Mpc box. Distributions are shown at different stages of the reionization process with ionized fraction by mass as $x_{\rm m} = 0.3$ (\ion{H}{ii} regions), 0.5 (\ion{H}{ii} regions), and 0.9 (\ion{H}{i} regions) from left to right. The LB1 (solid), LB2 (dotted), LB3 (dashed), and LB4 (dot-dashed) simulations are represented.} \end{center} \end{figure*} \subsection{21-cm background} \label{sec:21cm} \subsubsection{Calculating redshifted 21-cm emission} The differential brightness temperature of the redshifted 21-cm emission with respect to the CMB is determined by the density of neutral hydrogen, $\rho_{\rm \ion{H}{i}}$, and its spin temperature, $T_{\rm S}$, and is given by \citep{Fiel59a}: \begin{eqnarray} \delta T_{\rm b}&=&\frac{T_{\rm S} - T_{\rm CMB}}{1+z}(1-e^{-\tau})\nonumber\\ &\approx& \frac{T_{\rm S} - T_{\rm CMB}}{1+z} \frac{3\lambda_0^3A_{10}T_*n_{\rm \ion{H}{i}}(z)}{32\pi T_{\rm S} H(z)}. 
\label{eq:dT0} \end{eqnarray} Here, $T_{\rm CMB}$ is the temperature of the CMB radiation at that time, $\tau$ is the corresponding 21-cm optical depth (assumed to be small when writing equation~\ref{eq:dT0}), $\lambda_0=21.1$~cm is the rest-frame wavelength of the 21-cm line, $A_{10}=2.85\times10^{-15}\,\rm s^{-1}$ is the Einstein A-coefficient, and $T_*=0.068$~K corresponds to the energy difference between the two levels. The mean number density of neutral hydrogen, $\langle n_{\rm \ion{H}{i}} \rangle(z)$, at redshift $z$ is: \begin{eqnarray} \langle n_{\rm \ion{H}{i}} \rangle(z)&=& \frac{\Omega_{\rm b}\rho_{\rm crit,0}}{\mu_{\rm H}m_{\rm p}}(1+z)^3\nonumber\\ &=&1.909\times10^{-7}\rm cm^{-3}\left(\frac{\Omega_{\rm b}}{0.042}\right)(1+z)^3, \end{eqnarray} where $\mu_{\rm H}=1.22$ is the corresponding mean molecular weight (assuming 24 per cent He abundance), and $H(z)$ is the redshift-dependent Hubble parameter, \begin{eqnarray} H(z)&=& H_0[\Omega_{\rm m}(1+z)^3+\Omega_{\rm k}(1+z)^2+\Omega_\Lambda]^{1/2} \nonumber\\ &=&H_0E(z)\approx H_0\Omega_{\rm m}^{1/2}(1+z)^{3/2}. \end{eqnarray} Here, $H_0$ is the Hubble constant at present, and the last approximation in the above equation is valid for $z\gg 1$. Throughout this work, we assume that $T_{\rm S} \gg T_{\rm CMB}$, i.e., that all of the neutral IGM gas is Ly-$\alpha$-pumped by the background of UV radiation below 13.6~eV from early sources and heated well above the CMB temperature (due to, e.g., a small amount of X-ray heating). Therefore, the 21-cm line is seen in emission. These assumptions are generally well-justified, except possibly at the earliest times \citep[see e.g.][and references therein]{Furl06a}. 
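As a quick numerical cross-check of the prefactor in the expression for $\langle n_{\rm \ion{H}{i}} \rangle(z)$ (a hypothetical snippet, not simulation code; the Hubble parameter $h\simeq0.7$ is an assumed input, since the quoted coefficient fixes it implicitly):

```python
import math

# Numerical check of the <n_HI>(z) prefactor. Assumed inputs:
# Omega_b = 0.042, mu_H = 1.22, and h ~ 0.7 (h is not shown explicitly
# in the equation above, so its value is an assumption here).
G = 6.674e-8        # gravitational constant, cm^3 g^-1 s^-2
m_p = 1.6726e-24    # proton mass, g
Mpc = 3.0857e24     # cm per Mpc

def mean_nHI(z, Omega_b=0.042, mu_H=1.22, h=0.7):
    """Mean neutral-hydrogen number density in cm^-3 at redshift z."""
    H0 = 100.0 * h * 1.0e5 / Mpc                   # Hubble constant, s^-1
    rho_crit0 = 3.0 * H0**2 / (8.0 * math.pi * G)  # critical density, g cm^-3
    return Omega_b * rho_crit0 / (mu_H * m_p) * (1.0 + z) ** 3

print(mean_nHI(0.0))  # ~1.9e-7 cm^-3, close to the coefficient quoted above
```

The result reproduces the $1.909\times10^{-7}\,\rm cm^{-3}$ coefficient to within about one per cent for $h=0.7$, and the $(1+z)^3$ scaling follows directly from the volume dilution.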
In the high-$T_{\rm S}$ limit, equation~(\ref{eq:dT0}) becomes \begin{eqnarray} \delta T_{\rm b}&=&{28.5\,\rm mK}\left(\frac{1+z}{10}\right)^{1/2}(1+\delta)\nonumber\\ & &\times\left(\frac{\Omega_{\rm b}}{0.042}\frac{h}{0.73}\right)\left(\frac{0.24}{\Omega_{\rm m}}\right)^{1/2}, \label{eq:dT} \end{eqnarray} where $1+\delta={n_{\rm \ion{H}{i}}}/{ \langle n_{\rm H} \rangle}$ is the density of the neutral hydrogen in units of the mean gas density. Since our simulations take place in real space, we need to transform our data to redshift space, where the observations of lines occur. If the redshift is caused only by the Hubble expansion, then the redshift space position, $\mathbf{s}$, of some emitter will be the same as its comoving real space position, $\mathbf{r}$. However, if there is also a peculiar velocity along the line of sight, $v_{\parallel}$, then an emitter at position $\mathbf{r}$ in real space will be shifted to a position $\mathbf{s}$ in redshift space: \begin{equation} \mathbf{s} = \mathbf{r} + \frac{1 + z_{\mathrm{obs}}}{H(z_{\mathrm{obs}})} v_{\parallel} (t, \mathbf{r}) \hat{r}, \label{eq:reddist} \end{equation} where $1+z_{\mathrm{obs}} = (1+z_{\mathrm{cos}})(1-v_{\parallel}/c)^{-1}$, $z_{\mathrm{obs}}$ is the observed redshift, and $z_{\mathrm{cos}}$ is the cosmological redshift \citep[e.g.][]{Mao12a}. In other words, an emitter with a peculiar velocity away from the observer (i.e., $v_{\parallel}>0$) will be more redshifted than one with no velocity and will appear to be farther away than is really the case, and vice versa. \cite{Mao12a} describe several ways to calculate the redshift-space signal from a real-space simulation volume with brightness temperature and velocity information. Here, we use a slightly different method, introduced in \citet{Jens13a}, which splits each cell along the line-of-sight into $n$ sub-cells, each with a brightness temperature $\delta T (\mathbf{r})/n$. 
We then interpolate the velocity and density fields on to the sub-cells, move them around according to equation~(\ref{eq:reddist}), and re-grid to the original resolution. This scheme is valid only in the optically thin, high-$T_{\rm s}$ case, when equation~(\ref{eq:dT}) holds and each parcel of gas can be treated as an independent emitter of 21-cm radiation. For this paper, we use $40$ sub-cells, which converges the result to better than one per cent. In Fig.~\ref{fig:nu_box_raw}, we show position-redshift slices cut through the simulated image cube. The vertical scale is the spatial dimension, and the horizontal is redshift (equivalently, observed frequency). Images are of the differential brightness temperature (colour scale at right) for simulations LB1, LB2, LB3, and LB4 from top to bottom, continuously interpolated in frequency and including the redshift-space distortions. At low frequency (high redshift), all \ion{H}{ii} regions are small and mostly isolated, though the exact redshift where this is no longer true depends on the reionization history and, therefore, on the source model. As these bubbles begin to merge, larger structures ($\sim\!10$~Mpc) begin to form, culminating in hundreds of such bubbles that all merge together towards the end of reionization. The intervening period, when the 21-cm fluctuations peak, varies significantly in duration between models. This period is most extended in model LB2, due to the combination of an early start and aggressive LMACH suppression exclusive to that model. Conversely, in LB1 and LB3, where all sources are always active, reionization proceeds relatively fast. Finally, LB4 is intermediate between these two extremes. In that case, reionization starts early, but the lowest-mass sources, which initially dominate the photon budget, are quickly suppressed and form only small ionized patches. Only when the larger sources become more common do the \ion{H}{ii} regions become larger. 
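The sub-cell scheme described above can be sketched in a few lines for a single periodic line of sight. This is a simplified stand-in for the production code: here $v_\parallel$ is taken in km\,s$^{-1}$ and $H(z)$ in km\,s$^{-1}$ per cell width, so the displacement $(1+z)v_\parallel/H$ comes out directly in cell units.

```python
import numpy as np

def to_redshift_space(dT, v_par, z, H, n_sub=40):
    """Map a periodic 1-D line of sight of brightness temperature dT (mK)
    from real to redshift space with the sub-cell scheme: split each cell
    into n_sub sub-cells carrying dT/n_sub each, shift them by
    (1+z)/H * v_par, and re-grid to the original cells.
    Units: v_par in km/s, H in km/s per cell width (toy convention)."""
    N = len(dT)
    # sub-cell centres, in cell units
    pos = (np.arange(N * n_sub) + 0.5) / n_sub
    # interpolate the LOS peculiar velocity to the sub-cell centres (periodic)
    v_sub = np.interp(pos, np.arange(N) + 0.5, v_par, period=N)
    # shifted (redshift-space) positions, wrapped periodically
    s = (pos + (1.0 + z) / H * v_sub) % N
    # deposit each sub-cell's dT/n_sub into its new cell
    out = np.zeros(N)
    np.add.at(out, s.astype(int) % N, np.repeat(dT, n_sub) / n_sub)
    return out
```

With zero peculiar velocity the mapping is the identity, and the total brightness temperature along the line of sight is conserved by construction, which provides two simple sanity checks.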
\begin{figure*} \includegraphics[height=4in]{lc_pv_z_244Mpc_all.eps} \caption{ \label{fig:nu_box_raw} Position-redshift slices from our 244 $h^{-1}$ Mpc boxes. These slices illustrate the large-scale geometry of reionization and the significant local variations in reionization history as seen in the redshifted 21-cm line. From top to bottom, the images show the differential brightness temperature (colour scale at right) at the full grid resolution in linear scale for LB1, LB2, LB3, and LB4. The spatial (vertical) scale is comoving Mpc.} \vspace{-0.5cm} \end{figure*} \begin{figure*} \includegraphics[height=4in]{lc_pv_b3nu44_244Mpc_all.eps}\caption{ \label{fig:nu_box_smooth} Position-frequency slices from our 244 $h^{-1}$ Mpc boxes. These slices illustrate the large-scale geometry of reionization and the significant local variations in reionization history as seen in the redshifted 21-cm line with a realistic 3~arcmin (Gaussian FWHM) beam size and 0.44~MHz (top-hat) bandwidth filter. From top to bottom, the images show the smoothed differential brightness temperature in linear scale for LB1, LB2, LB3, and LB4. The spatial (vertical) scale is comoving Mpc.} \vspace{-0.5cm} \end{figure*} Fig.~\ref{fig:nu_box_smooth} indicates how these fluctuations will be seen as a function of observed frequency, $\nu_{\rm obs}$, at a resolution similar to that of the first-generation experiments, such as LOFAR. The same volume and simulations are shown, but the entire image cube is smoothed with a 3~arcmin Gaussian beam and a 0.44~MHz (top-hat) bandwidth filter. The early, small-scale structure is effectively erased given probable noise levels and foreground signals for the current experiments (to be presented in detail in a companion paper), but might become detectable in future, more sensitive experiments, such as SKA. However, the large-scale patches remain clearly visible even with this relatively large smoothing. 
The smoothing also somewhat diminishes the visual distinction between models, although the overall trends and features remain the same. \subsubsection{Mean and rms} \begin{figure*} \includegraphics[width=3.2in]{dTinset_244_rsd_b3nu44_sources.eps} \includegraphics[width=3.2in]{dTinset_47_rsd_b3nu44_sources.eps} \caption{The evolution of the rms and mean (inset) of the 21-cm background. \emph{Left:} for the $244\,h^{-1}$~Mpc box, four models are shown, LB1 (solid), LB2 (dotted), LB3 (dashed), and LB4 (dot-dashed). \emph{Right:} for the $47\,h^{-1}$~Mpc box, four models are shown, SB1 (solid), SB2 (dotted), SB3 (dashed), and SB4 (dot-dashed). \label{fig:mean_rms}} \end{figure*} The evolution of the mean differential brightness temperature, $\overline{\delta T_{\rm b}}$, as a function of observed frequency for all low-resolution cases is shown in Fig.~\ref{fig:mean_rms} (insets), with the $244\,h^{-1}$~Mpc ($47\,h^{-1}$~Mpc) box on the left (right). In all cases, the evolution is gradual over time. With regard to experiments looking for rapid changes in the 21-cm signal as the Universe reionizes \citep{Shav99a,Bowm10a}, this behaviour means that all the suppression models are difficult to detect and likely impossible to distinguish at this aggregate level. The HMACH-only and fully suppressed models show a steeper drop in signal at late times than the gradual and partially suppressed models, but the effect is weak. \begin{figure*} \includegraphics[height=1.7in]{dTrmsxi_244_rsd_b2nu29_sources.eps} \includegraphics[height=1.7in]{dTrmsxi_244_rsd_b3nu44_sources.eps} \includegraphics[height=1.7in]{dTrmsxi_244_rsd_b5nu73_sources.eps} \caption{The evolution of the rms fluctuations in the 21-cm background versus mean mass-weighted ionized fraction for different instrument realizations. 
The left, middle, and right panels are smoothed with a Gaussian beam of size 2, 3, and 5~arcmin and bandwidth 0.29, 0.44, and 0.73~MHz, respectively, for the 244\,$h^{-1}$~Mpc box and all source models LB1 (solid), LB2 (dotted), LB3 (dashed), and LB4 (dot-dashed). \label{fig:dTrms_xi}} \end{figure*} Comparing the root-mean-square (rms) fluctuations in $\delta T_{\rm b}$ with respect to the mean, $\langle (\delta T_{\rm b} - \overline{\delta T_{\rm b}})^2 \rangle^{1/2}$, averaged over a LOFAR-like beam and bandwidth (3~arcmin Gaussian and 0.44~MHz bandwidth filter), shown in Fig.~\ref{fig:mean_rms}, we see that the overall evolution follows similar paths in all cases, with some variations. Since very little gas is ionized at early times, the 21-cm fluctuations track the underlying density fluctuations. First, consider the larger box in the left panel. As reionization progresses, the rms curves begin to diverge from being purely density-driven, with LB2 and LB3 diverging the earliest, at $\nu_{\rm obs} > 85$~MHz. The higher efficiency of the LMACHs drives this behaviour by more rapidly ionizing the universe as compared to the other models. The mass-dependent suppression model (LB4) deviates from the density fluctuations later, at $\nu_{\rm obs} > 95$~MHz, since most LMACHs have very low efficiency after suppression. Of course, the rms for the model lacking LMACHs entirely (LB1) turns over the latest, at $\nu_{\rm obs} \sim110$~MHz. Essentially, this feature gives no information about the effects of source suppression, just the mean source efficiency and the type of sources. As the highest density peaks are ionized and the mean $\delta T_{\rm b}$ decreases, the rms curves dip, because the \ion{H}{ii} regions are still smaller than the smoothing scale and, therefore, do not contribute to the fluctuations. As reionization proceeds further, the size of the \ion{H}{ii} regions increases, eventually outgrowing the smoothing size and producing the peak in the signal. 
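The beam-plus-bandwidth averaging used for these rms curves can be sketched as follows. This is a simplified instrument model: converting 3~arcmin and 0.44~MHz to grid cells depends on redshift and cosmology, so the cell-unit widths are left as inputs here rather than computed.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter1d

def smoothed_rms(dT_cube, beam_fwhm_cells, bw_cells):
    """rms of 21-cm fluctuations about the mean after smoothing with a
    Gaussian beam (transverse axes 0 and 1) and a top-hat bandwidth
    filter (frequency axis 2). Both widths are in cells; periodic
    boundaries are assumed, as appropriate for a coeval simulation cube."""
    sigma = beam_fwhm_cells / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> sigma
    sm = gaussian_filter(dT_cube, sigma=(sigma, sigma, 0.0), mode="wrap")
    sm = uniform_filter1d(sm, size=max(int(round(bw_cells)), 1),
                          axis=2, mode="wrap")
    return float(np.sqrt(np.mean((sm - sm.mean()) ** 2)))
```

Because the smoothing averages out structure below the beam and bandwidth scales, the rms of a smoothed cube is always below the raw cell-level rms, which is the origin of the dip discussed above when the \ion{H}{ii} regions are still sub-beam in size.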
The position of this peak is largely dictated by the reionization history and the typical bubble size as compared to the beam size; hence, the fastest-ionizing model (LB3), which makes large ionized patches more quickly, peaks first at 155~MHz, with LB4 following at 173~MHz. LB1 and LB2 share nearly identical sources at the end of reionization, because LMACHs are completely or nearly (respectively) suppressed. These models peak latest, at $\nu_{\rm obs} \sim 185$~MHz. Larger fluctuations result from a later global reionization, benefitting from greater density fluctuations. The peak of LB2 is lower than that of LB1, because more reionization occurred earlier, front-loading the fluctuations. The peak height depends on the typical ionized bubble size at maximum fluctuations and how this size compares to the smoothing scale. Although the rms shapes are similar across the simulations, the fully suppressed model (LB2) differs the most. For 110~MHz $< \nu_{\rm obs} < 135$~MHz, the signal is flatter. Specifically, the high-$\nu_{\rm obs}$ trough is significantly less distinct, aligning with the peak in LB3, and the signal does not begin dropping until $\sim\!140$~MHz. The aggressive suppression of LMACHs in this model limits the growth of ionized bubbles early on as compared to the other suppression models. Comparing these fluctuations versus $x_{\rm m}$ (Fig.~\ref{fig:dTrms_xi}) removes the dependence on the reionization timing from the comparison of the models (see the middle panel for the same smoothing as above). The peak position occurs later in the reionization process in models with more numerous and brighter LMACHs, with LB4 peaking the latest. The trough has similar behaviour, with LB1 bottoming out the earliest. The full and partial suppression models look very similar in the early universe, dominated by high-efficiency LMACHs, so the trough position is nearly identical for LB2 and LB3, around $x_{\rm m}\sim0.3$. 
Since LB4 has low-efficiency LMACHs, the trough appears between the high-efficiency LMACH models and the HMACH-only model. As above, the largest differences in shape occur during the early stages of reionization, before $x_{\rm m} \sim 0.2$. The flattening of the signal in LB2 is more pronounced, with all the other models showing a steep initial drop in the magnitude of fluctuations. The RT grid resolution has essentially no effect in SB2 vs. SB2\_HR and LB1 vs. LB1\_HR and a minor effect in LB3 vs. LB3\_HR, where the peak and trough values change by up to 10 per cent, but their position in frequency remains the same (not shown). This consistency demonstrates the robustness of our results to changes in RT grid resolution. In Fig.~\ref{fig:dTrms_xi}, we also compare various levels of smoothing with a 2, 3, and 5~arcmin Gaussian beam (left to right) and a corresponding bandwidth filter of 0.29, 0.44, and 0.73~MHz, respectively. The different levels of smoothing have only a mild effect on the 21-cm rms fluctuations. The overall shapes and relative levels for the different source models are robust to these toy models of smoothing. The larger beam size slightly decreases the peak fluctuations, from $\sim\!4.5-5.5$~mK for the 2~arcmin beam to $\sim\!3.5-4.5$~mK for the 5~arcmin beam. This larger smoothing also moves the peak to slightly later times (higher $x_{\rm m}$), since ionized patches grow over time and better match the larger beam and bandwidth sizes. \begin{figure*} \includegraphics[height=1.7in]{ps_244Mpc_Mao_x03.eps} \includegraphics[height=1.7in]{ps_244Mpc_Mao_x05.eps} \includegraphics[height=1.7in]{ps_244Mpc_Mao_x07.eps} \caption{The power spectra for the $244\,h^{-1}$~Mpc box and all source models at $x_{\rm m} = 0.3, 0.5,$ and 0.7 from left to right. The LB1 (solid), LB2 (dotted), LB3 (dashed), and LB4 (dot-dashed) simulations are represented. These power spectra are calculated for coeval cubes, including redshift-space distortions. 
\label{fig:244_ps}} \end{figure*} \subsubsection{Power spectra} Another key statistical quantity is the autocorrelation power spectrum of the 21-cm differential brightness temperature fluctuations, referred to as the 21-cm power spectrum. The power spectrum, $P_{21}(\mathbf{k})$, is defined as: \begin{equation} \langle \widetilde{\delta T_{\rm b}^*}(\mathbf{k}) \widetilde{\delta T_{\rm b}}(\mathbf{k'}) \rangle \equiv (2 \upi)^3 P_{21}(\mathbf{k}) \delta_{\rm D}^3(\mathbf{k}-\mathbf{k'}), \label{eq:ps_def} \end{equation} where $\widetilde{\delta T_{\rm b}}$ is the Fourier transform of $\delta T_{\rm b}$ and $\delta_{\rm D}^3$ is the three-dimensional Dirac delta function. Throughout this work, we will use the (spatially) dimensionless power spectrum, $\Delta^2_{21}(k)$, where \begin{equation} \Delta^2_{21}(k) = \frac{k^3}{2 \upi^2} P_{21}(k), \label{eq:delta_21} \end{equation} which has units of mK$^2$. We follow the methodology in \cite{Mao12a}, which includes redshift-space distortions. In Fig.~\ref{fig:244_ps}, we compare $\Delta^2_{21}(k)$ for simulations LB1 (solid), LB2 (dotted), LB3 (dashed), and LB4 (dot-dashed) at $x_{\rm m} = 0.3$ (left), 0.5 (middle), and 0.7 (right). We do not discuss the power spectrum results from the smaller volume simulations, since these volumes are unable to represent the large-scale fluctuations that are important during reionization. At $x_{\rm m} = 0.3$ (left), the power is dominated by small scales, which is expected in the early stages of reionization, when the ionized bubbles have yet to grow large and the density field is a significant contributor to the power spectrum. As reionization progresses (see $x_{\rm m} = 0.5$, middle), the power spectrum flattens, with larger modes contributing significantly. These large scales come from the large ionized bubbles that are beginning to form, with the density contributing mainly on small scales. 
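A minimal estimator of $\Delta^2_{21}(k)$ from a coeval cube, consistent with the definitions above, can be sketched as follows. The FFT normalization follows the continuous-transform convention implied by equation~(\ref{eq:ps_def}); the linear $k$-binning is an illustrative choice, and redshift-space distortions are assumed to have been applied to the cube beforehand.

```python
import numpy as np

def delta2_21(dT, box_len, nbins=15):
    """Spherically averaged Delta^2_21(k) = k^3 P_21(k) / (2 pi^2), in
    mK^2, from a coeval brightness-temperature cube dT (mK) with
    comoving side box_len (e.g. in h^-1 Mpc)."""
    N = dT.shape[0]
    d = dT - dT.mean()
    # discrete FFT scaled to approximate the continuous Fourier transform
    dk = np.fft.fftn(d) * (box_len / N) ** 3
    P = np.abs(dk) ** 2 / box_len**3            # P_21(k), mK^2 (length)^3
    kf = 2.0 * np.pi * np.fft.fftfreq(N, d=box_len / N)
    kx, ky, kz = np.meshgrid(kf, kf, kf, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    # bin from the fundamental mode to half the corner mode
    kbins = np.linspace(2.0 * np.pi / box_len, kmag.max() / 2.0, nbins + 1)
    counts, _ = np.histogram(kmag, bins=kbins)
    psum, _ = np.histogram(kmag, bins=kbins, weights=P)
    kcen = 0.5 * (kbins[1:] + kbins[:-1])
    Pk = np.where(counts > 0, psum / np.maximum(counts, 1), 0.0)
    return kcen, kcen**3 * Pk / (2.0 * np.pi**2)
```

Mean subtraction ensures the $k=0$ mode vanishes, so a uniform cube returns zero power on all scales, which serves as a basic consistency check.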
At later times, the power spectrum weakens on small scales, as seen at $x_{\rm m} = 0.7$ (right panel), and LB3 and LB4 have more power on the largest scales than on the smallest. Here, $\Delta^2_{21}(k)$ is dominated by the ionization field contribution, and with more and more ionized regions overlapping, the small-scale structure diminishes. Generically, the power is higher for models that reionize later, since the increased density fluctuations at later times increase the 21-cm fluctuations at a particular ionized fraction. Therefore, LB1 is the highest at all scales, and LB4 is the lowest, except when all LMACH models converge at the latest times (as seen on the right, $x_{\rm m}=0.7$). Since LB1 has only large sources, all ionized bubbles are larger on average, and LB1 has increased power on all scales, especially at large scales and at early times before significant overlap occurs. Distinguishing between the three suppression methods is easiest at late times and at small scales (high $k$). As seen in the left panel, a significant spread in the high-$k$ tail is present. More numerous sources, as in LB3 where LMACHs are never fully suppressed, create a more uniform ionization field, which suppresses small-scale power. Unfortunately, such small scales ($k>1\,h$Mpc$^{-1}$) are below the resolution of the current 21-cm experiments. At intermediate scales ($0.1<k<1\,h$Mpc$^{-1}$), all models yield largely identical power spectra, while at large scales the differences are somewhat greater, but likely also difficult to detect. \subsubsection{PDFs and non-Gaussianity} The 21-cm power spectra would fully characterise the emission field if the differential brightness distribution were purely Gaussian, which is manifestly not the case during reionization \citep{Mell06b,Ilie08a,Hark09a,Watk14a}.
The probability distribution functions (PDFs) of $\delta T_{\rm b}$ could be significantly non-Gaussian, particularly at the later stages of reionization \citep{Mell06b}, and their measured skewness can be used to discriminate between different reionization scenarios \citep{Hark09a}. The PDFs and their evolution could also be used to derive the reionization history of the IGM \citep{Ichi10a,Glus10a}. \begin{figure*} \begin{center} \vspace{+0.1in} \includegraphics[height=1.7in]{dT_PDF_rsd_b3nu44_244Mpc_250_x30.eps} \vspace{-0.2in} \includegraphics[height=1.7in]{dT_PDF_rsd_b3nu44_244Mpc_250_x50.eps} \vspace{-0.2in} \includegraphics[height=1.7in]{dT_PDF_rsd_b3nu44_244Mpc_250_x70.eps} \vspace{+1cm} \caption{ \label{fig:dT_PDF} The probability distribution function of $\delta T_{\rm b}$ from our 244$\,h^{-1}$~Mpc simulations. Distributions are shown at different stages of the reionization process, with ionized fraction by mass $x_{\rm m} = 0.3, 0.5,$ and 0.7 from left to right. The LB1 (solid), LB2 (dotted), LB3 (dashed), and LB4 (dot-dashed) simulations are represented. In order to mimic the behaviour of an interferometer, we apply a 3~arcmin Gaussian beam and a 0.44~MHz (top-hat) bandwidth filter, and the mean signal has been subtracted. } \end{center} \end{figure*} The 21-cm PDFs smoothed over a 3~arcmin Gaussian beam and a 0.44~MHz (top-hat) bandwidth for all suppression models in our large box at three representative stages of reionization ($x_{\rm m}=0.3, 0.5,$ and 0.7, from left to right) are shown in Fig.~\ref{fig:dT_PDF}. Early on (see $x_{\rm m}=0.3$ on the left), the distributions mostly follow the underlying density field and are, therefore, the closest to Gaussian. Non-linear density evolution introduces non-Gaussianity that increases over time.
Reionization itself introduces some non-Gaussianity at low $\delta T_{\rm b}$, as the first \ion{H}{ii} regions form around the highest density peaks, and moves the corresponding cells into the extreme left of the distributions. This effect slightly skews the distribution towards below-average (i.e., negative in $\delta T_{\rm b}-\overline{\delta T_{\rm b}}$) temperature values, since the low-density regions remain more neutral on average. The HMACH-only model (LB1) produces the largest skew and distribution width, because the high-mass sources reside in the densest regions that are strongly clustered as a consequence of the Gaussian density field statistics \citep[see, e.g., Figure~4 in][]{Ilie14a}. LB3 has the narrowest distribution, because the smallest sources, which are less biased and more common, are never fully suppressed. The remaining two models lie in between, with full suppression, LB2, producing a wider distribution than the gradual suppression, LB4. As reionization progresses (see $x_{\rm m}=0.5$ in the middle), the hierarchy of the PDF widths among the models remains, but stronger non-Gaussianity develops. The significant negative tail in the PDF is due to the ionized regions ($\delta T_{\rm b}-\overline{\delta T_{\rm b}}<0$). The remaining neutral regions are mostly voids that have low, but positive, $\delta T_{\rm b}-\overline{\delta T_{\rm b}}$. During the latest stages of reionization (see $x_{\rm m}=0.7$ on the right), the four simulations are most similar to each other, as most regions are fully ionized. As before, LB1, with the rarest sources and latest $z_{\rm reion}$, has the most high-$\delta T_{\rm b}$ cells and LB3 the fewest, with the most uniform sources and earliest $z_{\rm reion}$.
This effect is due to the uniformity of sources, as more uniform sources make more uniform ionization fields, and partially due to the increased density fluctuations at later times, i.e., the density fluctuations are larger for LB1 than LB3 at the same ionized fraction. Therefore, the statistics of bright peaks, particularly at late times, provide a promising way to discriminate between the different suppression scenarios and to learn about the nature of the ionizing sources. For example, at $x_{\rm m}=0.7$, there are no 10~mK peaks in LB3, very few in LB4, and orders of magnitude more in LB2 and LB1. Although not shown here, the RT grid resolution again has essentially no effect in SB2 vs. SB2\_HR and LB1 vs. LB1\_HR and only a minor effect in LB3 vs. LB3\_HR. In the latter case, the PDF distribution exhibits more low-$\delta T_b$ pixels at early times and more high-$\delta T_b$ pixels at late times for the high-resolution grid. \begin{figure*} \includegraphics[width=3.2in]{dTskewkurt_244_rsd_b3nu44_sources.eps} \includegraphics[width=3.2in]{dTskewkurtxi_244_rsd_b3nu44_sources.eps} \caption{The evolution of the skewness (top) and kurtosis (bottom) in the 21-cm PDFs for the 244\,$h^{-1}$~Mpc box and all source models. The models LB1 (solid), LB2 (dotted), LB3 (dashed), and LB4 (dot-dashed) are smoothed with a 3~arcmin (Gaussian FWHM) beam and a 0.44~MHz bandwidth. The left (right) panel shows the evolution as a function of frequency (ionized fraction). \label{fig:dTskewkurt}} \end{figure*} The level of non-Gaussianity of the PDFs can be quantified to first and second order by the skewness and kurtosis, respectively. Fig.~\ref{fig:dTskewkurt} shows the evolution of the skewness (upper panel) and kurtosis (lower panel) vs. observed frequency (left) for simulations LB1 (solid), LB2 (dotted), LB3 (dashed), and LB4 (dot-dashed).
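The skewness and kurtosis used here are the third and fourth standardized moments of the smoothed, mean-subtracted map; a minimal pure-Python sketch (the helper name and the particular conventions, sample skewness and excess kurtosis, are our own illustrative choices):

```python
def skew_kurt(pixels):
    """Skewness and excess kurtosis of a (beam-smoothed, mean-subtracted)
    dT_b map, flattened to a sequence of pixel values in mK."""
    n = len(pixels)
    mean = sum(pixels) / n
    m2 = sum((p - mean) ** 2 for p in pixels) / n   # central moments
    m3 = sum((p - mean) ** 3 for p in pixels) / n
    m4 = sum((p - mean) ** 4 for p in pixels) / n
    skewness = m3 / m2 ** 1.5
    kurtosis = m4 / m2 ** 2 - 3.0   # excess kurtosis: 0 for a Gaussian
    return skewness, kurtosis
```

A long positive tail of rare bright pixels (as in LB1 at late times) drives the skewness up, while the kurtosis responds to both tails, which is one reason it differentiates the models more strongly.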
As expected from the above discussion, the skewness is very large at high frequencies, or late times, and is very low and slightly negative at low frequencies, or early times. The major feature of the skewness is the dip at intermediate frequencies. The position of this feature is mainly determined by the timing of reionization, where the dip occurs earlier for faster reionization scenarios. When plotted against the ionization fraction, all scenarios produce almost identical evolution (Fig.~\ref{fig:dTskewkurt}, right top panel), with the dip occurring at $x_{\rm m} \approx 0.35$. The depth of this trough is weakly dependent on the distribution of the ionizing sources, where the model with the most uniform sources (LB3) has the deepest trough. Similarly, LB1 and LB2 are roughly the same at the frequency of this feature, because there are only HMACHs and many fully suppressed LMACHs, respectively. The variations between the models are very minor, however. Skewness also proves insensitive to the RT grid resolution, which can be seen in the upper panel of Fig.~\ref{fig:dTskewkurt_res}. The two large-volume simulations with high resolution available are shown, with LB1 (solid), LB1\_HR (dashed), LB3 (solid), and LB3\_HR (dashed), where the HMACH-only models have a higher-frequency trough as compared to the partially suppressed LMACH models. The trough is narrower and shifted slightly to higher frequency for the high-resolution simulations, though the effect is minimal. The kurtosis differentiates the models more than the skewness. We will focus on the two main features: the peak, which always occurs first and roughly coincides in frequency with the trough of the skewness, and the trough, which happens later and occurs at approximately the same frequency as the peak in the rms fluctuations. Accordingly, the frequency position of the peak (trough) depends mostly on the timing of reionization, with earlier reionization producing a lower-frequency peak (trough).
Also, the timing of reionization affects the size of density fluctuations at a given frequency. This effect, plus the general uniformity of the sources, determines the height of the peak, where LB3 (earliest and most uniform) is the highest and LB1 (latest and least uniform) is the lowest. For the trough, the kurtosis turns slightly negative. All models reach approximately the same depth at different frequencies, dependent on the speed of reionization, but at a similar ionized fraction, $x_{\rm m}\sim0.7-0.8$. The RT grid resolution has no significant effect on the kurtosis evolution for case SB2 vs. SB2\_HR, but for the larger boxes the higher grid resolution shifts the peak to higher frequency (by a few MHz), as shown in the lower panel of Fig.~\ref{fig:dTskewkurt_res}. Interestingly, this frequency shift also aligns the kurtosis peak and skewness trough more closely in frequency. This correlation suggests that measuring both quantities at the same time might serve as a check and validation of the measurements. These higher order statistics show the greatest promise for differentiating between possible models of suppression. \begin{figure} \includegraphics[width=3.2in]{dTskewkurt_244_rsd_b3nu44_res.eps} \caption{The evolution of the skewness (top) and kurtosis (bottom) in the 21-cm PDFs for the 244\,$h^{-1}$~Mpc box for the high-resolution RT grid (dashed) vs. corresponding low-resolution models (solid) with a Gaussian beam size 3~arcmin and bandwidth 0.44~MHz. Two suppression models are shown: HMACHs only (higher-frequency trough, LB1) and partial suppression of LMACHs (lower-frequency trough, LB3). \label{fig:dTskewkurt_res}} \end{figure} \section{Discussion} \label{sec:summary} We presented a suite of full radiative transfer simulations of the epoch of reionization designed to investigate the observational signatures of star-formation suppression in dwarf galaxies due to radiative feedback (or, possibly, mechanical feedback).
We considered four different, physically motivated suppression models, all with a large and (comparatively) small box size and with two RT grid resolutions. We investigated mainly the large-scale effects of reionization and addressed the robustness of our results to numerical effects, namely simulation volume and grid resolution. Specifically, we sought to discover which observational signatures are most sensitive to the method of radiative feedback and which are the most robust. We primarily focused on the redshifted 21-cm signatures that can probe the full reionization history and the detailed morphology thereof. The 21-cm signals can provide a wealth of information, including the mean history, rms evolution, power spectra, PDFs, imaging, and higher-order statistics. We find that the morphology of reionization and the overall shape of the observational features addressed here are generally insensitive to the source suppression model, box size, and resolution. The exact timing of reionization varies among the suppression models, where the degree of survival for small sources determines how quickly reionization progresses. Despite these differences, the evolution of the 21-cm signal is very similar. The same is true for the evolution of the fluctuations in the signal, where the characteristic rms peak and trough locations depend on the reionization history. A closer examination does reveal important differences, however, especially in the higher-order statistics. The small-scale power spectrum is affected by the typical mass of the dominant ionizing sources. More aggressive suppression of LMACHs (low-mass sources) effectively removes them and, therefore, causes differences in the power spectrum at small scales between the suppression models, especially at late times. Similar differences can be seen in the PDFs of the differential brightness temperature. Here, larger (and rarer) sources cause more high-$\delta T_{\rm b}$ regions to form.
Therefore, the HMACH-only model (LB1) will always have a brighter tail, and the least-suppressed LMACH model (LB3) will have the narrowest distribution. The kurtosis of the 21-cm PDF distribution shows significant variation between the models, while the skewness is quite insensitive and has a largely universal shape. Using the fact that the trough in the kurtosis approximately corresponds to the peak in the rms, and that the peak roughly coincides with the trough in the skewness, can be a useful check on the measurements of these quantities, and these features contain information on the characteristic size of ionized regions, the beam size, and the timing of reionization. As expected, the smaller box size misses some of the large-scale structure and rare, bright sources found in the larger box size, which creates differences between the two volumes. Since we are mainly focussing on observables and large-scale structures, we mainly presented results for the larger volume. By definition, higher resolution simulations capture smaller structure better, as can be readily seen in the simulation slices in Fig.~\ref{fig:images_res}. Although the overall shape of the observables remains robust to RT resolution, we found that the models of suppression were sensitive to the resolution, leading to small-scale differences that are mainly present during the intermediate stages of reionization. During early times, there are fewer (or dimmer, depending on the source model) sources in the high-resolution cases, because a source can more easily ionize its own smaller cell and introduce suppression of ionizing radiation. Towards the end of reionization, all models are dominated by large haloes that are not affected by suppression, so the resolution matters less at these late times. This study does not cover every possible influence that a source model might have on a signature of the EoR.
We are only considering the possible signatures of the nature of LMACH suppression, with all other parameters (e.g., source efficiencies and thresholds for suppression) being equal. In previous studies, we considered varying the ionizing efficiency of sources \citep{Mell06a,Ilie07a,Ilie08a}, the typical mass and the very nature of the active sources \citep{Ilie12a,Ahn12a,Grif13a}, and how large a simulation volume is sufficient to derive the various quantities \citep{Ilie14a}. Ongoing observations that constrain galaxy formation and small-scale, high-resolution hydrodynamical simulations of galaxy evolution will help to further limit the available freedom of these other quantities. These simulations are also necessarily a simplified version of the early Universe, since all relevant scales cannot be simulated simultaneously. In particular, we do not consider the small-scale, unresolved IGM structure, which includes gas clumping, and only include a simplified model for the Lyman-limit systems. These structures should have the greatest impact at the latest times, particularly in suppressing the large-scale fluctuations \citep[][and Mao et al., 2015, in prep.]{Soba14a,Shuk15a}. In addition to self-shielding regions in the IGM that may remain neutral, galaxies may also hold dense regions that act as photon sinks. These effects reduce the overall contrast between ionized and neutral regions, suppressing fluctuations generally and preventing the late-time rise in the skewness \citep{Watk15a}. Throughout this work we also assume that $T_{\rm S} \gg T_{\rm CMB}$ and ignore any early heating from X-ray sources. These effects should be important early on in reionization \citep[e.g.][]{Venk01a,Furl06a,Seme07a,Prit07a,Sant10a,Mesi13a}, and we plan to investigate them in a future paper.
Due to the resolution of our $N$-body simulations, we do not consider sources below $10^8~M_\odot$, which should be important during the early stages of reionization \citep{Ahn12a} and may contribute a significant fraction of the total ionizing photons \citep{Wise14a,Paar15a}. Although the current generation of radio interferometer experiments may not be able to detect the differences resulting from the various suppression methods presented here \citep[see e.g.][]{Pati14a}, we identify which 21-cm signal features are robust to this physical uncertainty. Any comparison to observations must take into account the details of the instrument, which is beyond the scope of this paper. Future experiments may indeed have the sensitivity to distinguish between the models presented here. None the less, this suite of simulations will aid in the interpretation of any upcoming 21-cm measurements. \section{Acknowledgements} We thank Hannes Jensen for sharing \textsc{\small c2raytools} and Yi Mao for providing us with his power spectrum code. We acknowledge PRACE for awarding us computational time under PRACE4LOFAR grants 2012061089 and 2014102339 and access to resource Curie based in France at CEA and to resource SuperMUC at LRZ. This work was supported by the Science and Technology Facilities Council [grant numbers ST/F002858/1 and ST/I000976/1] and the Southeast Physics Network (SEPNet). GM was supported in part by Swedish Research Council grant 60336701. KA was supported by NRF-2012K1A3A7A03049606 and NRF-2014R1A1A2059811. P.R.S. was supported in part by U.S. NSF grant AST-1009799, NASA grant NNX11AE09G, NASA/JPL grant RSA Nos. 1492788 and 1515294, and supercomputer resources from NSF XSEDE grant TG-AST090005 and the Texas Advanced Computing Center (TACC) at the University of Texas at Austin. Some of the numerical computations were done on the Apollo cluster at The University of Sussex. \bibliographystyle{mnras}
\section{Misfit strain accommodation} The lattice mismatch for the semiconductor layer with lattice constant $a$ is measured by the misfit parameter $f_m$ \begin{displaymath} f_m = \frac{a-a_s}{a_s} \end{displaymath} where $a_s$ is the lattice constant of the substrate. The misfit parameter for the $\mathrm{Ge_xSi_{1-x}/Si}$ heterostructure can be written, using Vegard's rule $a(x) = x a_{\mathrm{Ge}} + (1-x) a_{\mathrm{Si}}$, as $f_m = 0.0418 x$. If the thickness $h$ of the layer is small, the misfit between the two semiconductors is accommodated by a strain of the layer that is known as the `misfit strain'. The in-plane ($x-y$) components of the strain tensor are \begin{displaymath} \varepsilon_{xx} = \varepsilon_{yy} = \varepsilon_\parallel = f_m \end{displaymath} while the normal one is \begin{displaymath} \varepsilon_{zz} = \varepsilon_\perp = -2 \frac{C_{13}}{C_{33}} \varepsilon_\parallel = - \frac{2 \nu}{1 - \nu} \varepsilon_\parallel \end{displaymath} where $C_{ij}$ are the components of the elastic stiffness tensor in Voigt notation \cite{nye} and $\nu$ is the Poisson ratio, which can differ significantly from its bulk value for very thin films \cite{morozov}. The subscript $\parallel$ will be omitted below. The general definition of the elastic energy is \begin{displaymath} E^{el} = \displaystyle \frac{1}{2} \mathcal{E} \cdot \mathcal{C} \cdot \mathcal{E} . \end{displaymath} The elastic energy $E^{el}$ per unit area stored in the layer due to the homogeneous strain is \cite{jain91} \begin{displaymath} E^{el} = B h f_m^2 \end{displaymath} where the constant $B$ is defined as \begin{displaymath} B = 2 \mu \frac{1+ \nu}{1 - \nu} \end{displaymath} and $\mu$ is the elastic shear modulus. $B$ is not the bulk modulus: it allows for the vertical relaxation of the layer which accompanies the constraint in the plane, and incorporates the factor of 1/2 in relating the elastic energy to the square of the strain. The strain energy $E^{el}$ is proportional to $h$.
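A minimal numerical sketch of these relations follows; the lattice constants are standard bulk values, while the shear modulus and Poisson ratio below are rough illustrative numbers, not fitted to any particular film:

```python
A_SI, A_GE = 5.431, 5.658   # lattice constants of Si and Ge [Angstrom]

def misfit(x):
    """Misfit parameter f_m for a Ge_x Si_{1-x} layer on a Si substrate,
    with the layer lattice constant a(x) from Vegard's rule."""
    a = x * A_GE + (1.0 - x) * A_SI
    return (a - A_SI) / A_SI

def strain_energy_per_area(h, x, mu=64e9, nu=0.28):
    """E^el = B h f_m^2 with B = 2 mu (1 + nu) / (1 - nu);
    h in metres and mu in Pa give the energy per unit area in J/m^2."""
    B = 2.0 * mu * (1.0 + nu) / (1.0 - nu)
    return B * h * misfit(x) ** 2
```

With these numbers misfit(1.0) is about 0.042, i.e. a mismatch of roughly 4.2 per cent for pure Ge on Si, and the stored energy grows linearly with $h$, which is why a critical thickness eventually appears.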
As $h$ increases and exceeds a certain critical thickness $h_c$, pseudomorphic growth of the uniform layer with a flat free surface is no longer possible and several phenomena are observed: \begin{itemize} \item{introduction of misfit dislocations} \item{modulation of the free surface profile} \item{composition modulations \cite{decomp,spin1,spin2}} \item{microtwin formation \cite{twin1,twin2}} \item{interdiffusion between the layer and the substrate \cite{interd,interdif}} \end{itemize} The last mechanism usually occurs at temperatures higher than typical growth ones. The composition nonuniformities and microtwin formation are of primary interest for III-V ternary compounds when one of the constituents is In. Ref. \cite{tao} lists reasons to ignore concentration fluctuations in an analysis of SiGe morphology, the first one being the well-known fact that Si and Ge are miscible over the entire composition range. Thus models of the last three mechanisms are not discussed below. Strain relaxation through {\md}s and through surface modification are certainly the major routes for strain accommodation in SiGe alloys. As a rule of thumb, the first dominates for low lattice mismatch, while the second dominates for high. However, these mechanisms, depending on the materials system, growth temperature and the value of the misfit strain, could be cooperative as well as competitive \cite{hull,desj,hull2}. On the one hand, there is a direct correlation between the surface cross-hatched morphology and the arrangement of the interfacial misfit dislocations \cite{yastr}; surface undulations, in turn, could serve as nucleation sites. On the other hand, strain relief by one of these mechanisms reduces the driving force for the other.
The strain, surface and interface energies of the SiGe/Si heterostructure with and without misfit dislocations have been recently computed for all three growth modes (Frank-van der Merwe, Stranski-Krastanov and Volmer-Weber) as a function of the layer composition and thickness \cite{nak}. The introduction of {\md}s can be explained by an analysis of the energy of the system. For $h > h_c$ the introduction of dislocations becomes energetically favourable, providing a partial strain accommodation. They are introduced at the interface in the case of a constant layer composition and throughout the strained graded layer (uniformly when linear grading is used and non-uniformly for the square-root or parabolic profile \cite{bosa}). Misfit dislocations can be produced by the motion of threading dislocations from the substrate or generated in the strained layer via nucleation and/or multiplication. Three stages (regimes) of strain relaxation through the introduction of misfit dislocations can be distinguished \cite{gonz,rodr}. The first is characterised by a relatively slow strain relief provided mainly by the glide of the pre-existing threading dislocations. The relaxation rate in the second is higher and depends on the multiplication processes and the activation of new nucleation mechanisms. In the last stage a saturation of relaxation is observed, caused by strain hardening ('work-hardening') \cite{gonz2}. On the other hand, the elastic energy of a body with a flat surface {\em always diminishes} if the surface becomes wavy and thus counteracts the effect of increasing surface energy. Thus the strained flat surface could be unstable, and the development of surface undulations could relax strain. The wavelength $\lambda$ of the surface ripples decreases with the layer strain \cite{tham}: $\lambda \propto \varepsilon^{-2}$. The strain is reduced locally at the peaks of the structure and is increased in the troughs.
An extreme stage of surface roughening is the formation of epitaxial islands, which are promising objects for electronic devices \cite{freund}. This problem has gained a lot of attention recently (see, for example, reviews \cite{island,island1,island2}). The reverse phenomenon - strain relaxation by pit formation in compositionally graded SiGe thick films - has also been observed \cite{gaspare,gasp2}. Island coalescence could lead to the formation of a crystallographic tilt due to the asymmetric generation of $60^\circ$ dislocations and asymmetric strain relief \cite{riesz}. It is believed that, in contrast to InGaAs strained layers, which are characterized by an instability against the simultaneous perturbation of the surface profile and the composition, the onset of the surface roughening of strained SiGe layers is primarily determined by the nucleation of islands \cite{leon}. Surface roughening is certainly an evil if the aim is to grow a planar layer. \section{Dislocation system in equilibrium} Two theories have been developed to calculate the equilibrium critical thickness $h_c$ of the uniform epitaxial layer. The first theory originated in the work of Frank and Van der Merwe \cite{f_merve} and has been developed further by Van der Merwe and collaborators \cite{merve}. It is based on the principle of energy minimization. The second one, by Matthews and Blakeslee \cite{matt_b,matt}, is known as the force balance theory. Being correctly formulated, these two theories are equivalent and give identical critical thicknesses, as by definition of thermodynamic equilibrium they must. It has been shown that the expression for the critical thickness can also be used for graded layers if the misfit parameter is based on the average Ge concentration \cite{jain91a}.
The subsequent development of critical thickness models has been aimed at the accurate modelling of the dislocation core energy \cite{beltz}, accounting for surface effects \cite{cam} and anisotropy \cite{anis}. If the thickness of the layer is continuously increased, energy minimization predicts that the number of {\md}s and the strain relaxation will also increase. The strain is never fully relaxed for any finite value of the thickness but approaches $f_m$ (which corresponds to the complete relaxation of strain) as $h$ tends to $\infty$. To calculate the number of dislocations as a function of $h$, a minimum of the total energy of the layer should be determined. The possible orientations of the {\md}s are limited by the crystallography of the system. For the f.c.c. structures of SiGe alloys with the interface normal coinciding with a cube edge, dislocations form in two parallel arrays with the members of one array being perpendicular to the members of the other. Let the spacing between two neighbouring dislocations in the arrays be $p$ and \begin{displaymath} b_1 = - b \sin (\alpha) \sin (\beta), \end{displaymath} where $b$ is the Burgers vector, $\alpha$ is the angle between the glide plane and the normal to the interface and $\beta$ is the angle between the dislocation line and the Burgers vector \cite{jain92,jain93}. For $60^\circ$ dislocations \begin{displaymath} \alpha = \arctan \frac{1}{\sqrt {2}}, \qquad \beta = \frac{\pi}{3} \end{displaymath} while for $90^\circ$ dislocations \begin{displaymath} \alpha = \frac{\pi}{2}, \qquad \beta = \frac{\pi}{2} \end{displaymath} The in-plane component of the homogeneous strain in the layer in the presence of dislocations becomes \begin{displaymath} \varepsilon = f_m + \frac{b_1}{p} \end{displaymath} and the energy \begin{displaymath} E = B h \left( f_m + \frac{b_1}{p} \right)^2.
\end{displaymath} $f_m$ and $b_1/p$ always have opposite signs and the homogeneous energy is reduced by the {\md}s (the misfit-energy-increasing dislocations studied in ref. \cite{increase} are nonequilibrium ones). The energy of dislocations $E^{\mathrm{d}}$ is determined using linear elasticity (for example, \cite{lan,teo}). It also contributes to the total energy, which for a uniform distribution of dislocations is written as \begin{displaymath} E^{\mathrm{tot}} = B h \left( f_m + \frac{b_1}{p} \right)^2 + \frac{2}{p} E^\mathrm{d}. \end{displaymath} In the early works the following expression for the dislocation energy has been used \begin{equation} \label{ed} E^\mathrm{d}_\infty = A b^2 \left( (1 - \nu \cos^2 \beta) \left( \ln\frac{\varrho h}{b} + 1\right) \right) \end{equation} where $A = \mu /(4 \pi (1 - \nu))$ and the parameter $\varrho$ accounts for the non-elastic energy of the dislocation core. Note that the energy only weakly depends on the concrete value of $\varrho$ for $h \gg b$. A number of both explicit and implicit assumptions have been made in the derivation of this relation and of the equation for the critical thickness that follows from it \begin{equation} \label{hc} h_c=\displaystyle\frac{b(1-\nu\cos^2\beta)\left( \ln\displaystyle\frac{\varrho h}{b} + 1\right) +\displaystyle\frac{8\pi(1-\nu^2)s\gamma}{Bb(1-\nu)}} {8\pi(1+\nu)\sin\beta\sin\alpha \left( f_m-\displaystyle\frac{2}{B}\displaystyle\frac{\tilde{\sigma^{\mathrm{fault}}}} {b\cos(2\alpha)\sin\beta} \right)} \end{equation} Often the step energy and the stacking fault energy in eq. (\ref{hc}) are omitted. One of the simplifying assumptions in the derivation of eqs. (\ref{ed},\ref{hc}) is the neglect of the interaction between dislocations.
Accounting for this effect leads to \cite{jain} \begin{displaymath} E^\mathrm{d} = A\left[ a_0+a_1\ln \left(p \frac{1-\exp (-g)}{2 \pi q} \right) + a_2 \frac{g \exp(-g)}{1 - \exp(-g)} - a_3 \frac{g^2 \exp(-g)}{(1-\exp(-g))^2} -a_2 \right] \end{displaymath} where \begin{displaymath} a_0=(b_1^2+b_2^2) \left(\sin^2{\alpha}- \frac{1-2\nu}{4\pi(1-\nu)} \right), \qquad a_1=(b_1^2+b_2^2)+(1-\nu)b_3^2, \end{displaymath} \begin{displaymath} a_2=b_1^2-b_2^2, \qquad a_3=\frac{1}{2}(b_1^2+b_2^2), \qquad g=4\pi\frac{h}{p}, \qquad b_2=b\cos\alpha\sin\beta, \qquad b_3=-b\cos\beta. \end{displaymath} Recently it has been claimed, however, that this expression overestimates the effect of dislocation interactions \cite{wies2}. The relations displayed so far ignore the presence of the free surface. This drawback has been eliminated in ref. \cite{fisher2} using the image method. However, in this work it has in turn been assumed implicitly that the substrate has infinite thickness. This restriction seems too severe nowadays owing to the importance that so-called {\em compliant\/} substrates have gained. The problems related to the use of such substrates ("strain partitioning", critical thickness reduction) have been analyzed recently in ref. \cite{kast}. The most rigorous analysis of the critical thickness is probably presented in ref. \cite{lee}, where the finite thickness of both the substrate and the epitaxial layer as well as the difference in mechanical properties are taken into account. In capped layers relaxation occurs by the introduction of dislocation dipoles (the expression for the dipole energy can be found in ref. \cite{jain}). When the cap layer thickness is less than a certain value, a mixture of single and paired misfit dislocations has been observed \cite{jin}.
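Equation (\ref{hc}) is implicit in $h_c$, since the layer thickness also enters the logarithm, and is normally solved iteratively. A minimal sketch, with the step and stacking-fault energies omitted (as noted above, this is often done) and purely illustrative parameter values ($b$ in nm, rough elastic constants):

```python
import math

def critical_thickness(f_m, b=0.384, nu=0.28, rho=4.0,
                       alpha=math.atan(1.0 / math.sqrt(2.0)),
                       beta=math.pi / 3.0, tol=1e-10, max_iter=200):
    """Fixed-point solution of
        h_c = b (1 - nu cos^2 beta) (ln(rho h_c / b) + 1)
              / (8 pi (1 + nu) sin beta sin alpha f_m),
    i.e. the critical-thickness relation with the step and stacking-fault
    terms dropped. Defaults describe a 60-degree dislocation; h_c in nm."""
    denom = 8.0 * math.pi * (1.0 + nu) * math.sin(beta) * math.sin(alpha) * f_m
    h = 10.0 * b   # starting guess; the logarithm keeps the map well behaved
    for _ in range(max_iter):
        h_new = (b * (1.0 - nu * math.cos(beta) ** 2)
                 * (math.log(rho * h / b) + 1.0) / denom)
        if abs(h_new - h) < tol:
            return h_new
        h = h_new
    return h
```

The iteration converges quickly because the right-hand side varies only logarithmically with $h$, and a larger misfit gives a smaller $h_c$, as expected.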
The regular periodic distribution having the lowest energy rarely occurs in real systems: the dislocations frequently nucleate at regenerative heterogeneous sources (defects, impurities, ledges etc.), and hence form bunches. Presumably, these bunches are distributed in a random manner in the layer. For example, the statistically significant measurements of ref. \cite{stat} reveal that the distribution of spacings, being a broad unimodal one at the beginning of the strain relaxation, could tend to a bimodal distribution as the misfit relief proceeds (the mean spacing decreases), while in \cite{wies} only a significant narrowing of the unimodal distribution has been registered. The energy of non-periodic dislocation arrays has been considered in \cite{jain93a}. The total energy of a layer containing non-periodic arrays can be calculated by adding the homogeneous misfit strain energy and the interaction energy between the homogeneous misfit strain and the average strain caused by the dislocation arrays. In equilibrium the number of {\md}s in the layer is smaller if the distribution is non-periodic. The primary use of the expression for the total energy of the layer containing dislocations, as was indicated, is to determine the critical thickness at which dislocations should appear. However, one can also obtain the concentration of dislocations $1/p_e$ that causes strain relaxation $|b_1/p_e|$ and the thickness $h_e$ for the equilibrium 'supercritical' layers ($h > h_c$) \cite{jain92}. The values $|b_1/p_e|$ that provide the energy minimum increase with $h$, first rapidly and then slowly. For any given thickness, the concentration of dislocations $1/p_e$ is smaller if the interactions of dislocations are not properly taken into account. The observed concentrations are always much smaller than the values predicted for a periodic distribution. The discrepancy is partly due to the non-periodic distribution and partly due to the difficulty in nucleating the dislocations.
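The equilibrium spacing $p_e$ follows from minimising the total energy $E^{\mathrm{tot}}$ over $p$ at fixed $h$; a crude grid-search sketch using the non-interacting single-dislocation energy (parameter values illustrative, lengths in nm):

```python
import math

def equilibrium_spacing(h, f_m, b=0.384, nu=0.28, mu=64e9, rho=4.0,
                        alpha=math.atan(1.0 / math.sqrt(2.0)),
                        beta=math.pi / 3.0):
    """Spacing p_e that minimises E_tot = B h (f_m + b1/p)^2 + (2/p) E_d,
    with E_d taken as the non-interacting dislocation energy. Lengths in nm;
    a logarithmic grid search stands in for a proper minimiser."""
    B = 2.0 * mu * (1.0 + nu) / (1.0 - nu)
    A = mu / (4.0 * math.pi * (1.0 - nu))
    b1 = -b * math.sin(alpha) * math.sin(beta)
    e_d = (A * b**2 * (1.0 - nu * math.cos(beta) ** 2)
           * (math.log(rho * h / b) + 1.0))

    def e_tot(p):
        return B * h * (f_m + b1 / p) ** 2 + 2.0 * e_d / p

    # logarithmic grid of spacings from b up to 10^4 b
    grid = [b * 10.0 ** (i / 200.0) for i in range(801)]
    return min(grid, key=e_tot)
```

Below the critical thickness the minimum sits at the largest spacing on the grid (introducing dislocations does not pay off); well above it, a finite $p_e$ appears and shrinks with $h$, mirroring the rapid-then-slow increase of $|b_1/p_e|$ described above.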
The effect of the finite size of the substrate or mesa on the dislocation density reduction \cite{xiong} has been considered analytically in ref. \cite{fisher}, where the distribution of the misfit stress versus the distance from the edge has been obtained, and by the finite element method in ref. \cite{anan}. The extension of the equilibrium theory of the critical thickness for epitaxial layers suggested in ref. \cite{huang} is based on a proper account of the multiple reflection of the image dislocations. \section{Evolution of dislocation system} In thick ($h > h_c$) semiconductor layers grown at low temperatures the concentration of {\md}s is much smaller than that predicted by the thermodynamic equilibrium condition. Therefore the layers are metastable. When the metastable layers are heated to higher temperatures, or during the continuous growth of the layers, dislocations are introduced and the strain relaxes. Generation of dislocations involves nucleation and/or multiplication and the glide motion of the dislocations. The creation of the {\md}s by multiplication also involves the glide of the dislocations, and the nature of the dislocations depends on the growth mechanism of the layer. At high temperatures the growth proceeds by three-dimensional island growth, because the atoms can more easily migrate to the islands. The 3D growth mode also occurs in high-lattice-mismatch growth. In the SiGe system with Ge content over 0.8 three growth stages have been observed \cite{koch}: 1) pseudomorphic growth of a thick (3-5 ML) wetting layer; 2) nucleation and growth of 3D islands; 3) coalescence of islands and continuous film growth. Misfit dislocations are readily nucleated at the boundaries between the islands \cite{payne}. 
To develop a model of strain relaxation through the system of dislocations, it is necessary to describe the dislocation motion and the evolution of the dislocation density due to their primary generation ({\md}s forming due to the motion of existing {\td}s, homogeneous and/or heterogeneous nucleation), multiplication, and interactions between dislocations (blocking, mutual fusion and annihilation) as well as with the native and artificially induced (such as cavities produced by He or H implantation and annealing \cite{foll,herzog,trink}) defects. Development of the dislocation system in the substrate, generally speaking, should also be taken into account, since the dislocation half-loops in the substrate could produce a great number of intersections in the glide planes \cite{stein}. Evidently, a detailed description of the strain relaxation in heterostructures is extremely complex and probably excessive for the practical aim of optimizing the growth process. \subsection{Propagation of dislocations} The propagation of dislocations at low temperatures is dominated by glide; a climb component, which implies mass transport by diffusion in the bulk, is significant at high temperatures only \cite{tsao}. Velocities of dislocations of different types can vary greatly. The thermally activated dislocation velocity is given by the equation \begin{equation} \label{vel} v_d = v_0 (\sigma_{\mathrm{exc}})^m \exp(- E_v/kT) \end{equation} where $v_0$ is a constant, $\sigma_{\mathrm{exc}}$ is the excess stress, $E_v$ is the activation energy for the glide motion of the dislocation, and $m$ is usually taken as 1 or 2. The excess stress can be written as \begin{equation} \label{exc} \sigma_{\mathrm{exc}} = 2 S \mu \frac{1 + \nu}{1 - \nu} \varepsilon - \frac {\mu b \cos (\alpha) (1 - \nu \cos^2 \beta)} {4 \pi h (1 - \nu)} \ln \frac {\varrho h}{b} \end{equation} where $S$ is the Schmid factor. The first term in Eq. 
(\ref{exc}) is the stress acting on the dislocation line due to the misfit strain and the second term is the self-stress of the dislocation line \cite{f_hull}. The stress $\sigma_{\mathrm{exc}}$ in capped layers (strained buried layers) is as follows \cite{tsao,nix} \begin{equation} \sigma_{\mathrm{exc}} = 2 S \mu \frac{1 + \nu}{1 - \nu} \varepsilon - \frac {\mu b \cos (\alpha) (1 - \nu \cos^2 \beta)} {4 \pi h (1 - \nu)} \ln \frac {\varrho h}{b}- \frac {\mu b \cos (\alpha) (1 - \nu \cos^2 \beta)} {4 \pi h_{\mathrm{eff}} (1 - \nu)} \ln \frac {\varrho h}{b} \nonumber \end{equation} where \begin{displaymath} h_{\mathrm{eff}}=\displaystyle\frac{h h_{\mathrm{cap}}}{h+h_{\mathrm{cap}}} \end{displaymath} Evidently, both $\sigma_{\mathrm{exc}}$ and the dislocation velocity $v_d$ are smaller in capped layers. If the strained layer has an `infinitely thick' capping layer ($h_{\mathrm{cap}}=\infty$), the self-energy of the dislocation line (dipole) increases by a factor of 2 \cite{jane92b} and "2" appears instead of "4" in the denominator of the second term of Eq.~(\ref{exc}). The velocity of dislocations in different regions of a sample has been observed to differ by a factor of up to 3 \cite{hull91} due to local variations of the stress. The dislocation motion is usually described by the double (single) kink model. If the dislocation line is sufficiently long, several kinks may be formed at the same time. Single kinks are formed in thin uncapped layers. As the layer thickness increases, the rate of nucleation of double kinks also increases. The transition thickness beyond which double kinks dominate has been estimated as 1 $\mu \mathrm m$ and 20 $\mathrm{nm}$ for the strains $\varepsilon = 0.2 \%$ and $\varepsilon = 1 \%$, respectively \cite{jain}. 
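Eqs. (\ref{vel}) and (\ref{exc}) translate directly into code; the sketch below is for the uncapped layer only, and all parameter values used in it are hypothetical (in particular the core cutoff parameter $\varrho$ is passed in as \texttt{rho\_core}):

```python
import math

def excess_stress(eps, h, b, alpha, beta, mu, nu, S, rho_core):
    """Excess stress on a threading dislocation in an uncapped layer:
    misfit-strain driving term minus the line self-stress term, Eq. (exc)."""
    driving = 2.0 * S * mu * (1.0 + nu) / (1.0 - nu) * eps
    self_stress = (mu * b * math.cos(alpha) * (1.0 - nu * math.cos(beta)**2)
                   / (4.0 * math.pi * h * (1.0 - nu))) * math.log(rho_core * h / b)
    return driving - self_stress

def glide_velocity(sigma_exc, v0, m, E_v, kT):
    """Thermally activated glide velocity, Eq. (vel)."""
    return v0 * sigma_exc**m * math.exp(-E_v / kT)
```

Note that the self-stress term decays roughly as $\ln h / h$, so for thick layers $\sigma_{\mathrm{exc}}$ approaches the pure misfit-strain term.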
The activation energy is $E_v = E_m + F_k$ for the single kink and $E_v = E_m + 2 F_k$ for the double kink model, where $E_m$ is the activation energy for the kink jump along the dislocation line direction and $F_k$ is the energy required to nucleate an isolated single kink. The model of ref. \cite{hull91} predicts a linear dependence of the velocity on the excess stress $\sigma_{\mathrm{exc}}$ and on the dislocation length when the latter does not exceed a critical value. The stability of the dislocation glide has been studied in ref. \cite{lunin}, where the kink motion in a field of random forces has been considered. It has been found that in the case of low stress (compared to the Peierls stress) the attachment of point defects to the dislocation core may cause both dislocation immobilization and instability of the dislocation glide. On the other hand, experimental data \cite{point} show that the presence of point defects can either increase or decrease the dislocation velocity, depending on the defect nature, energy, concentration and the layer strain. The authors of the cited paper have also studied the effect of the free surface on the dislocation propagation. While no systematic difference between measurements during growth and after growth has been registered, the dislocation velocity has been found to increase several times after the formation of a native oxide on the surface in post-growth processing. The most plausible explanation suggested in the paper is that the local stress at the oxide-layer interface can enhance kink nucleation rates at the surface. The substrate thickness is finite; thus there is a strain of the opposite sign and much smaller in magnitude than in the epitaxial layer, and threading dislocations in the substrate move in the opposite direction. As a result a "hairpin" configuration is formed. 
It consists of two long arms parallel to the surface (one at the interface, the other deep in the substrate), connected by a small threading segment \cite{pichaud}. \subsection{Nucleation of dislocations} If the substrate is characterized by a sufficiently high density of pre-existing {\td}s, the {\md}s necessary for strain relaxation are produced by the propagation of the threading segments. When high-quality substrates with a low dislocation density are used, the strain relaxation can be limited by the {\md} generation. Possible sources of {\md}s are: \begin{itemize} \item{homogeneous nucleation of half-loops (whole or partial) at the free surface of the epitaxial layer} \item{homogeneous nucleation of half-loops at the substrate/epilayer interface} \item{heterogeneous nucleation of complete loops at nucleation sites in the bulk of the epilayer} \item{heterogeneous nucleation of half-loops at nucleation sites (point defects at the interface, edges of the islands)} \item{multiplication of dislocations} \end{itemize} The homogeneous nucleation of dislocation half-loops at the surface of semiconductor strained layers can be analysed through the behaviour of the total energy of the loop \cite{jain} \begin{displaymath} E^{\mathrm{tot}} = E^{\mathrm{loop}} - E^{\mathrm{strain}} \pm E^{\mathrm{step}} + E^{\mathrm{fault}} \end{displaymath} where $E^{\mathrm{loop}}$ is the self-energy of the semicircular loop of radius $R$, $E^{\mathrm{strain}}$ is the reduction of the homogeneous strain energy due to the interaction between the loop and the misfit strain, and $E^{\mathrm{step}} = 2 R s \gamma$ is the energy of the surface step which is necessarily created ($s = 1$) or destroyed ($s = - 1$) if the Burgers vector of the dislocation has a vertical (normal to the surface) component. The last term $E^{\mathrm{fault}} = \tilde {\sigma}^{\mathrm{fault}} (h / \cos \alpha)$ is included in the case of a partial dislocation only \cite{island}. 
It represents the energy of the stacking fault or the antiphase boundary created by the dislocation. The total energy of the loop increases from 0 at $R = 0$ to a maximum value $E^{\mathrm{act}}$, the activation energy for nucleation (which decreases with increasing misfit $f_m$). The maximum occurs when $d E^{\mathrm{tot}}/d R = 0$ at $R = R_c$, the critical radius of the loop. As the radius increases beyond $R_c$, the loop energy decreases and the loop grows at a rate determined by the velocity of the dislocations until it reaches the interface. After this, its threading segments move apart, extending the misfit dislocation at the interface. The critical loop radius and the corresponding activation energy are determined by \cite{island} \begin{displaymath} R_c = \displaystyle \frac{B (1 - \nu) (1 - \displaystyle \frac{\nu}{2}) b^2 \left(1 + \ln \left( \displaystyle \frac{\varrho R_c}{b} \right) \right) + 16 (1 - \nu^2) s \gamma} {8 \pi (1 - \nu^2) (\sigma b \sin\beta \sin\alpha \cos\alpha -E^{\mathrm{fault}})} \end{displaymath} \begin{displaymath} E^{\mathrm{act}} = R_c s \gamma + \frac{B R_c (1 - \nu) (1 - \displaystyle \frac{\nu}{2}) b^2 \left(\ln \left( \displaystyle \frac{\varrho R_c}{b} \right) -1 \right)} {16 (1 - \nu^2)} \end{displaymath} If the step and stacking fault energies as well as the logarithmic factors are neglected, it can be seen that $E^{\mathrm{act}} \propto b^3$, i.e. it is easier to nucleate partial dislocations, which have smaller Burgers vectors. The rate of nucleation is generally assumed to be proportional to $\exp(-E^{\mathrm{act}}/kT)$. Estimates show that the values of $E^{\mathrm{act}}$ are extremely high and thus homogeneous nucleation is very unlikely to occur for reasonable values of the misfit parameter \cite{jain}. In most cases the observed values of the activation energies are much lower and have been attributed to heterogeneous nucleation. 
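Since $R_c$ appears inside its own logarithm, the first relation is implicit and must be solved iteratively; because the logarithm varies slowly, a simple fixed-point iteration converges quickly. A sketch (the prefactor $B$, stress $\sigma$, step-energy $\gamma$ and angles below are hypothetical placeholders):

```python
import math

def critical_loop_radius(B, b, nu, rho_core, s, gamma, sigma, alpha, beta,
                         E_fault=0.0, tol=1e-10, max_iter=200):
    """Solve the implicit relation for the critical half-loop radius R_c
    by fixed-point iteration, starting from a guess of a few Burgers vectors."""
    denom = 8.0 * math.pi * (1.0 - nu**2) * (
        sigma * b * math.sin(beta) * math.sin(alpha) * math.cos(alpha) - E_fault)
    Rc = 10.0 * b  # initial guess
    for _ in range(max_iter):
        num = (B * (1.0 - nu) * (1.0 - nu / 2.0) * b**2
               * (1.0 + math.log(rho_core * Rc / b))
               + 16.0 * (1.0 - nu**2) * s * gamma)
        Rc_new = num / denom
        if abs(Rc_new - Rc) < tol * Rc:
            return Rc_new
        Rc = Rc_new
    return Rc
```

On return, substituting $R_c$ back into the right-hand side reproduces $R_c$ itself, i.e. the implicit equation is satisfied.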
It has been known for many years that point defects and their clusters lower the activation energy dramatically \cite{matt} and act as efficient sources for the nucleation of dislocations. Deliberate control of the number of these sources (by growing an intermediate layer with a high density of defects or by introducing defects into the substrate) is widely used in practice \cite{bolh} and presents an alternative to the classical layer grading \cite{abraham}. The authors of ref. \cite{h_bean} argued that $E^{\mathrm{act}}$ in GeSi alloys should be lowered for the following reasons: \begin{itemize} \item{preferential accumulation of Ge near the core in compressed layers can substantially reduce the nonelastic core energy} \item{random fluctuations of the Ge concentration result in a reduction of the activation energy in regions where the local Ge concentration is high} \end{itemize} In \cite{eaglesham} a regenerative source called the 'diamond defect', with a low activation energy, is described, which is probably a result of interstitial precipitation. Nucleation at atomic ledges trapped at the interface between the substrate and the epitaxial layer has been suggested in ref. \cite{perovic}. Unfortunately, little theoretical work on heterogeneous nucleation in semiconductor strained layers has been done, presumably because many unknowns are involved and the process is very complex. The modulation of the free surface (surface roughening) can provide regions (ripple troughs) of large stress where the activation barrier for dislocation nucleation is extremely low \cite{cullis}. It has been shown that this barrier is proportional to $\varepsilon^{-4}$, while for dislocation nucleation by other mechanisms it usually varies as $\varepsilon^{-1}$ \cite{mooney}. Thus nucleation of dislocations via surface roughening dominates for large values of $f_m$. 
Note that the half-loop can nucleate at the surface only if $h \geq h_d$, where $h_d = R_c \cos \varphi$ and $\varphi$ is the angle between the surface and the normal to the slip plane. \subsection{Multiplication of dislocations} The most popular mechanism invoked for the multiplication of dislocations is the well-known Frank-Read source or its modifications. A characteristic feature of this mechanism is that large dislocation loops extending into the Si substrate, as well as dislocation pile-ups several microns deep, are often observed \cite{mooney}. Another multiplication process is the so-called Hagen-Strunk mechanism \cite{hagen}, which operates when two dislocations meet each other at a right angle. Multiplication of dislocations by this mechanism has been observed in both GeSi \cite{eaglesham} and InGaAs \cite{fitz89} strained layers. It operates efficiently, however, only if neither the layer thickness nor the misfit is large. There are also doubts about the correctness of interpreting some observations as the Hagen-Strunk mechanism \cite{vdovin}. Both the Frank-Read and Hagen-Strunk mechanisms lead to bunching of dislocations with identical Burgers vectors. Filling the area between these bunches could be promoted by the cross-slip process \cite{hohn,pichaud}. The rate of multiplication is commonly written as \begin{equation} \left( \frac{d N}{d t} \right)_{\mathrm{mult}} = K N_m v_d \end{equation} where $K$ is a breeding factor and $N_m$ is the number of mobile threading dislocations. The breeding factor is usually considered to be either constant \cite{jain}, $K = K_0$, or proportional to the excess stress \cite{dodson}, $K \propto \sigma_{\mathrm{exc}}$. 
A more elaborate expression has been suggested for the Hagen-Strunk multiplication mechanism \cite{jain} \begin{equation} \label{hagen} \left( \frac{d N}{d t} \right)_{\mathrm{mult}} = - \frac {(f_m + \varepsilon) N_m v_d}{2 b_{\mathrm{eff}}} P_{\mathrm{mult}} \end{equation} where $P_{\mathrm{mult}}$ is the probability that an interaction of the appropriate type leads to a multiplication event; it depends on the lattice mismatch and the thickness of the layer. \subsection{Interaction of dislocations} Dislocation interactions not only influence the rate at which dislocations propagate, but can also halt the threading dislocation motion entirely. Additionally, blocked dislocations can alter the surface morphology as well as limit the overall relaxation of technologically important low-dislocation-density graded buffer structures. On the other hand, annihilation of the threading segments could lead to a significant reduction of the dislocation density in the strained layer. \subsubsection{Blocking (pinning) of dislocations} Two blocking mechanisms are known. The first one, {\em long-range blocking\/}, was described over a decade ago \cite{freund}. Recently another blocking mechanism, named {\em reactive blocking\/}, has been detected both experimentally (by real-time transmission electron microscopy observations) and numerically (using discrete dislocation dynamics computations of the strained layer relaxation) \cite{stach}. \paragraph{Long-range blocking} The {\md} can impede the motion of the {\td} if the two dislocations have the right kind of Burgers vectors. There are four pairs of strain-relieving Burgers vectors, only one of which causes significant blocking of the moving dislocation. The probability that the dislocation interaction can impede the motion of the threading dislocation is therefore 1/4. 
As a propagating threading dislocation approaches an interfacial misfit dislocation segment, the strain fields associated with each dislocation begin to overlap, resulting in an interaction force that has the general form \begin{displaymath} \sigma_{\mathrm{int}} \propto \frac {\mathbf{b_1} \cdot \mathbf{b_2}}{r} \end{displaymath} where ${\mathbf{b_1}}$ and ${\mathbf{b_2}}$ are the Burgers vectors of the two dislocations and $r$ is the distance between them. Thus, this force can act to either increase or decrease the magnitude of the net stress that drives the threading dislocation forward, depending on the signs of the Burgers vectors of the dislocations involved. If this interaction stress cancels the other stresses over a significant portion of the threading segment, the motion of the entire dislocation will be halted. If the interaction stress is not sufficiently large, the threading dislocation will propagate past the misfit interfacial segment. To bypass the blocking {\md}, the {\td} should alter its path by moving in the same glide plane but closer to the free surface, i.e. in a channel of width $h^\star$ smaller than the layer thickness. Three forces act on the {\td} in this configuration: \begin{enumerate} \item{the driving force due to the residual homogeneous strain} \item{the retaining force due to the line tension of the {\td} (including the interaction with the surface via the image forces)} \item{the interaction force with the {\md}} \end{enumerate} The condition for blocking the {\td} motion can be written as an equation for the critical value of $h^\star$ \cite{russel} \begin{displaymath} \varepsilon - \varepsilon_r = \frac{3 b}{16 \pi h^\star (1 + \nu)} \left(\frac{4 - \nu}{3} \ln \left(\frac{8 h^\star}{b} \right) - \frac{1}{2} \cos(\alpha) - \frac{1 - 2 \nu}{4 (1 - \nu)} \right ), \end{displaymath} where $\varepsilon_r$ is the reduced strain due to the presence of the {\md}. 
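The blocking condition defines $h^\star$ implicitly; since the right-hand side decreases with $h^\star$ over the physically relevant range, it can be inverted by bisection. A sketch with purely illustrative strain and material values:

```python
import math

def blocking_rhs(h_star, b, nu, alpha):
    """Right-hand side of the blocking condition for channel width h_star."""
    return (3.0 * b / (16.0 * math.pi * h_star * (1.0 + nu))) * (
        (4.0 - nu) / 3.0 * math.log(8.0 * h_star / b)
        - 0.5 * math.cos(alpha)
        - (1.0 - 2.0 * nu) / (4.0 * (1.0 - nu)))

def critical_channel_width(strain_excess, b, nu, alpha, lo=None, hi=1.0e6):
    """Bisection for h_star such that blocking_rhs equals the strain excess
    eps - eps_r, assuming the rhs is monotonically decreasing on [lo, hi]."""
    if lo is None:
        lo = 2.0 * b
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if blocking_rhs(mid, b, nu, alpha) > strain_excess:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

As expected physically, a larger residual strain excess can only be screened within a narrower channel, i.e. $h^\star$ decreases as $\varepsilon - \varepsilon_r$ grows.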
The described blocking mechanism is enhanced by the presence of surface ripples resulting from the stress field of the orthogonal misfit dislocation \cite{sf97,fitz99} and can lead to the {\td} pile-ups discussed later in this section. \paragraph{Reactive blocking} A new strong blocking effect observed using real-time TEM has been studied by numerical simulations \cite{stach}. The propagation of a threading segment towards a {\md} on an intersecting glide plane has been analysed. With the available Burgers vectors and directions of approach, sixteen distinct interactions of this kind are possible, leading to a variety of outcomes such as repulsion of the misfit dislocation into the substrate, reconnection of the two dislocations, and junction or jog creation. It was found that four of the interactions involve parallel Burgers vectors and can result in a reconnection reaction. The authors claim that this blocking mechanism is much stronger than the conventional (long-range) misfit-blocking interaction. Unfortunately, no analytical model/approximation is proposed in the paper. \paragraph{Overall blocking effect} The number of dislocations blocked per unit time is defined as \cite{jain} \begin{equation} \label{block} \left( \frac{d N}{d t} \right)_{\mathrm{block}} = \frac{d N_i (t)}{d t} P(t) \end{equation} where $N_i(t)$ is the total number of interactions and $P(t)$ is the blocking probability. $P(t)$ is 1/4 if blocking occurs (the local force is greater than the critical value) and zero otherwise. Eq. (\ref{block}) can be re-written as \begin{equation} \left( \frac{d N}{d t} \right)_{\mathrm{block}} = - \frac {(f_m + \varepsilon) N_m v_d}{2 b_{\mathrm{eff}}} P(t) \\ \nonumber \end{equation} If the propagating {\td} interacts with a closely bunched cluster of $N$ {\md}s with identical Burgers vectors, the interaction force is multiplied by a factor $N$ \cite{freund90}. 
The pile-up formation mentioned above could be responsible for the strain (work) hardening \cite{hull_bacon} and the observed slowing of strain relaxation in the layer at the late stages. An indirect confirmation of the importance of pile-up formation in SiGe is the effect of chemical-mechanical polishing at intermediate growth stages on the final {\td} density reduction \cite{fitz_apl}. It is believed that planarization of the surface frees the {\td}s pinned in the pile-ups. The introduction of hardening is a way to account integrally for the numerous interactions between the dislocations. While on the atomic level plastic flow is always very inhomogeneous, its macroscopic phenomenological description via the flow stress \begin{displaymath} \tau= \alpha_\tau \mu b \sqrt {\rho} \end{displaymath} where the coefficient $\alpha_\tau$ depends on the strain rate and the temperature, is commonly used and has proved to be reasonable for most cases of strain hardening in a wide range of materials \cite{kocks}. \subsubsection{Fusion and annihilation of dislocations} The outcome of the binary reaction of two threading dislocations depends on the relative arrangement of their glide planes and the orientation of the Burgers vectors. Possible cases, including fusion and annihilation of dislocations, have been considered in ref. \cite{pichaud}. Assuming that the first threading dislocation propagates in the plane (111) and its Burgers vector $\mathbf b_1$ has a positive projection on the vertical direction, the basic binary reactions are summarized in Table 1. 
\begin{table} \caption{Basic binary reactions between threading dislocations} \begin{tabular}{llll} \hline $2^{nd}$ plane & $(b_2)_z$ & Threading & Misfit\\ \hline 111 & positive & fusion & single line\\ 111 & negative & annihilation & single line\\ $1\bar11$ & positive & fusion & two-arm\\ $1\bar11$ & negative & annihilation & two-arm\\ \hline \end{tabular} \end{table} The critical parameter for these reactions is the interaction radius. A continuum-based approach using linear elasticity has been employed to compute this quantity for dislocations in the heteroepitaxial system in ref. \cite{anni}. \section{Strain relaxation models} \subsection{Discrete models} A number of both micro- and mesoscale numerical models \cite{phillips} have been applied to the study of the relaxation mechanisms of strained epitaxial layers. \subsubsection{Atomistic models} First-principles total energy computations have been used to resolve the disagreement of the experimentally determined relation between lattice relaxation in the in-plane and out-of-plane directions with the predictions of classical elasticity \cite{oht}. It was found that segregation at the interface significantly influences strain relaxation in the heterostructure. The deformation state of the heteroepitaxial strained system has been studied using atomistic simulations in \cite{mar1,mar3}. A three-step relaxation procedure has been developed: \begin{enumerate} \item{structural relaxation with the composition being fixed} \item{compositional relaxation} \item{further local structural relaxation} \end{enumerate} The conjugate gradients method has been used for the energy minimization at the first and third stages, while the Metropolis implementation of the Monte Carlo method has been applied to the compositional relaxation. Molecular dynamics simulations have been used in ref. \cite{increase} to capture the growth process at the atomic level and to study the mechanisms of dislocation formation. 
The embedded atom method has been employed, which in addition to the binary interactions efficiently accounts for the many-body effects. The influence of kinetic constraints on the atomic assembly process has been studied. In ref. \cite{ham} the two-dimensional Frenkel-Kontorova model has been applied to the computation of the dislocation nucleation rate in a growing heteroepitaxial island. As in the preceding paper, the embedded atom method has been used to compute the total energy. A one-dimensional Monte Carlo method has been used to simulate the surface height evolution during and after the strain relaxation in ref. \cite{andrews}. The aim of this study was to gain insight into the cross-hatch morphology development and to assess different existing models of the process (such as enhanced growth over strain-relaxed regions due to the lateral transport by surface diffusion, and surface undulations caused by the dislocation generation and glide). The authors conclude that surface step flow is a necessary condition for the development of the mesoscale cross-hatch morphology, while the plastic relaxation itself could not produce undulations of significant amplitude. \subsubsection{Mesoscopic models} Detailed simulations of the interaction of two threading segments encountering each other in a thin strained SiGe layer have been performed in ref. \cite{schw7} using the full three-dimensional Peach-Koehler formalism \cite{hirth}. The force acting on a dislocation segment $d\mathbf l$ in the glide plane is \begin{displaymath} b_i\sigma_{ij}n_j (\mathbf n \times d\mathbf l) \end{displaymath} where $\mathbf n$ is the normal to the glide plane. The stress tensor includes stresses due to the applied strain and stresses generated by the presence of dislocations. The authors have avoided the difficulties with the stress correction caused by the presence of surfaces by considering a symmetrically capped layer. 
In a few recent papers \cite{stach,don,nicola,schw} the application of the discrete dislocation dynamics method to the strain relaxation has been reported. Large-scale 2D simulations are used to study the misfit strain relaxation in heteroepitaxial islands in ref. \cite{don}; peculiarities of the hardening in single-crystal thin films have been investigated in ref. \cite{nicola}. Monitoring the evolution of a few hundred dislocations in the strained layer \cite{stach,schw} led to the discovery of a new blocking mechanism discussed briefly in the preceding section. \subsection{Continuum models} The classification of continuum (macroscopic) models adopted below is by no means unique or generally accepted. Still, it is worth attempting to sort out the different approaches to the simulation of strain relaxation. The main problem is, of course, the considerable overlap of ideas and methods. \subsubsection{Equilibrium models} \paragraph{Uniform layer} The equilibrium density of the {\md}s in a strained layer with uniform composition is obtained, similarly to the analysis of the critical thickness, by energy minimization \cite{tsao}. A coarse estimate of the {\td} density as 1-2 times that of the {\md}s follows from the scheme of {\md} generation due to threading segment motion ($\rho_{\mathrm{td}} = \rho_{\mathrm{md}}$) or by half-loop nucleation ($\rho_{\mathrm{td}} = 2 \rho_{\mathrm{md}}$). These estimates, of course, do not account for the {\td} density reduction due to fusion/annihilation. An estimate of the {\td} density via the average {\md} length has been suggested in ref. \cite{fs77}, assuming that two threading dislocations are connected by a misfit segment of length $\langle l \rangle$. 
A similar relation has been suggested later \cite{ferrari}: \begin{displaymath} \rho_{td} \approx 4 \rho_{md} \left(\frac{1}{\langle l \rangle} - \frac{1}{L} \right) \end{displaymath} where $L$ is the sample size and $\rho_{md}$ is determined via the lattice mismatch. A model for the equilibrium {\td} density in a thick layer (compared to the critical thickness) has been analysed in \cite{ayers}, neglecting the misfit strain. The author assumes that the population of the {\td}s is governed by the coalescence of close dislocations and introduces the 'minimum stable separation' (i.e. the fusion/annihilation radius) estimated as \begin{displaymath} \displaystyle\frac{1}{r_{\mathrm{min}}} = \displaystyle\frac{\cos \phi}{4h} \left(\cos^2\beta + \displaystyle\frac{\sin^2\beta}{4(1-\nu)} \ln \left( \displaystyle\frac {\sin\alpha\sin\beta}{4f_m}\right)\right) \end{displaymath} Then $\rho_{\mathrm{td}} = 2 /(R_{\mathrm{av}} r_{\mathrm{av}})$, where $R_{\mathrm{av}}$ is the average spacing between the glide planes defined by the misfit and $r_{\mathrm{av}} = 2 r_{\mathrm{min}}$ is the average distance between the {\td}s within the plane. The model reasonably predicts both the mismatch dependence and the order of magnitude of the {\td} density for some materials in the range $f_m = 0.002-0.1$. Still, the author, being aware of the simplifications made, lists the major ones: 1) the large layer thickness $h \gg h_c$; 2) approximations in the line tension calculations; 3) the assumed large spacing of the {\md}s; 4) no kinetic barriers to the glide of the {\td}s; 5) the {\td} density is considered as a function of the film thickness only, while experiments show that it varies across the layer. 
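For illustration, the estimate can be evaluated numerically. Note that the glide-plane spacing $R_{\mathrm{av}}$ is taken below as $b_{\mathrm{eff}}/f_m$; that choice is an assumption of this sketch (a plausible misfit-set spacing), not a formula quoted from ref. \cite{ayers}, and all parameter values are hypothetical:

```python
import math

def td_density_ayers(h, f_m, alpha, beta, phi, nu, b_eff):
    """Thick-layer threading-dislocation density estimate:
    rho_td = 2 / (R_av * r_av), with r_av = 2 * r_min from the
    minimum-stable-separation expression."""
    inv_r_min = (math.cos(phi) / (4.0 * h)) * (
        math.cos(beta)**2
        + math.sin(beta)**2 / (4.0 * (1.0 - nu))
          * math.log(math.sin(alpha) * math.sin(beta) / (4.0 * f_m)))
    r_av = 2.0 / inv_r_min
    R_av = b_eff / f_m  # assumed misfit-set glide-plane spacing (sketch only)
    return 2.0 / (R_av * r_av)
```

Since $r_{\mathrm{min}} \propto h$, the predicted {\td} density falls off with layer thickness, consistent with the thick-layer regime the model addresses.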
The author also notes that the difference between the strain accommodation by $60^o$ {\md}s and by pure edge ones (a factor of 2 in the {\td} density) explains the twofold reduction of the dislocation density in some materials during post-growth annealing by transformation of the dislocations of the first type into the second. \paragraph{Graded layer} Strain relaxation in linearly graded epitaxial layers has been considered in ref. \cite{kim}. The term "equilibrium dynamics" used by the authors is somewhat misleading; in fact a quasi-stationary approach is exploited. A set of algebraic equations that define the current values of the lattice constant, strain, biaxial modulus and shear modulus as a function of the film thickness is formulated. Expressions for the local relaxation thickness $h_c^l$, the plastic strain and the equilibrium dislocation density are obtained. The value of $h_c^l$ is attributed to the size of the dislocation-free region on the top of the growing layer. Its predicted weak dependence on the film thickness, as well as the strong effect of the grading rate on both the local relaxation thickness and the equilibrium dislocation spacing, are confirmed by the experimental data. The influence of the grading law on the residual strain and the {\td} and {\md} densities has been studied theoretically and experimentally in ref. \cite{romanato}. The authors assume that the grading does not change the basic phenomena, such as nucleation/multiplication, studied in detail for the uniform layers. The standard relation between the strain and the {\md} density is generalized to \begin{displaymath} \varepsilon(h)=-f_m(h) + b_{\parallel} \int_0^{h_f} \rho_{\mathrm{md}} dh \end{displaymath} where $\varepsilon(h)$ and $f_m(h)$ are the depth profiles of the residual strain and of the lattice misfit, respectively, and $h_f$ is the total film thickness. 
Assuming full relaxation, the authors get \begin{displaymath} \rho_{\mathrm{md}}= \displaystyle\frac{1}{b_{\parallel}} \displaystyle\frac{d}{dh}f_m(h) \end{displaymath} Accounting for the existence of a top dislocation-free layer of thickness $h_c$, the residual strain distribution is written as \begin{equation} \label{eps} \varepsilon(h)= \left \{ \begin{array}{rl} 0, \quad \mathrm{if} \quad 0\leq h \leq h_f-h_c \\ -(f_m(h)-f_m(h_f)), \quad \mathrm{if} \quad h_f-h_c < h \leq h_f \end{array} \right. \end{equation} Several layers with different grading laws (linear, parabolic, square-root, step + linear) have been grown. The analysis of the observed work hardening forced the following modification of Eq. (\ref{eps}): \begin{displaymath} \varepsilon(h)= \left \{ \begin{array}{rl} \varepsilon^{\mathrm{wh}}(h), \quad \mathrm{if} \quad 0\leq h \leq h^{\mathrm{wh}} \\ -(f_m(h)-f_m(h^{\mathrm{wh}})-\varepsilon^{\mathrm{wh}}(h)), \quad \mathrm{if} \quad h ^{\mathrm{wh}}< h \leq h_f \end{array} \right. \end{displaymath} where the superscript 'wh' refers to work hardening and, as experiments show, $h^{\mathrm{wh}} > h_f - h_c$. \subsubsection{Reaction kinetics models} An evolutionary approach based on {\em reaction\/} and {\em reaction-diffusion\/} models has been applied to a number of misfit strain relaxation problems. A general form of the kinetic equations used to study the {\td} reduction in strained layers is as follows \cite{rom1}-\cite{rom4} \begin{equation} \label{react} \frac{d \rho_i}{d h} = - \sum_j K_{ij} \rho_i \rho_j + \sum_l \sum_m K_{lm} \rho_l \rho_m \end{equation} where $\rho_i$ is the density of the specific $i^{\mathrm{th}}$ dislocation family and the kinetic coefficients $K_{ij}$ are the rates of the reactions between the dislocations from the families $i$ and $j$. 
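As a concrete illustration of the piecewise profile of Eq. (\ref{eps}) and the fully relaxed {\md} density, consider a linear grading law with a hypothetical grading rate (the numbers below are placeholders, not data from ref. \cite{romanato}):

```python
def residual_strain(h, h_f, h_c, f_m):
    """Residual-strain depth profile: fully relaxed (zero) below the top
    dislocation-free region of thickness h_c, -(f_m(h) - f_m(h_f)) inside it.
    f_m is a callable giving the misfit depth profile."""
    if h <= h_f - h_c:
        return 0.0
    return -(f_m(h) - f_m(h_f))

def md_density(df_dh, b_par):
    """Misfit-dislocation density under full relaxation: (1/b_par) df_m/dh."""
    return df_dh / b_par
```

For linear grading $f_m(h) = c\,h$ the {\md} density is simply the constant $c/b_{\parallel}$, and the residual strain grows linearly from zero at $h_f - h_c$ toward the surface.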
These equations describe the evolution of the dislocation densities \begin{itemize} \item{with the layer thickness during growth, $\rho_i = \rho_i (h)$, or} \item{in time for a film of fixed thickness, $\rho_i = \rho_i (t)$} \end{itemize} Both first- and second-order reactions can be considered (the order of a reaction corresponds to the number of participants). For example, blocking of the {\td} propagation due to the interaction with a {\md} is a first-order reaction, while both fusion ($\mathbf{b}_3 = \mathbf{b}_1 + \mathbf{b}_2$) and annihilation ($0 = \mathbf{b}_1 + \mathbf{b}_2$) are second-order reactions. The complete treatment of the strain relaxation using the reaction equations requires accounting for the crystallographic details and a subdivision of the dislocation system into separate populations corresponding to the specific Burgers vectors and line directions. For f.c.c. semiconductors, for example, there are 24 dislocation sets arising from the combination of four possible (111)-type slip planes and six Burgers vectors; 20 families of dislocations in GaN have been considered in ref. \cite{rom4}. The coupled system of nonlinear ODEs being rather complex, reduced models are frequently used. An obvious bonus of the model reduction is the possibility to obtain an analytical solution for some limiting cases \cite{rom1,rom3}. For example, in the model of ref. \cite{hull89} {\md}s are created exclusively by lateral bending of the threading segments and {\td}s by half-loop nucleation at the surface at rate $j$; {\td}s are blocked by the misfit ones with probability $\eta$ and multiplication is neglected. Thus the following system is used \begin{eqnarray} \frac{\partial \rho_{md}}{\partial t} & = & v \rho_{td} \nonumber \\ \frac{\partial \rho_{td}}{\partial t} & = & j - \eta v \rho_{td} \rho_{md} \nonumber \end{eqnarray} A review of some early reaction-type models has been given in \cite{ayers}. 
The first one (the ``annihilation-coalescence'' model) is just an equation for the total dislocation density that accounts for the fusion and annihilation of dislocations \begin{displaymath} \frac{d \rho}{d h}= -A \rho - B \rho^2 \end{displaymath} leading to the relation \begin{displaymath} \rho(h)=\displaystyle\frac{1}{\left(\displaystyle\frac{1}{\rho_0}+ \displaystyle\frac{B}{A} \right)\exp(Ah)-\displaystyle\frac{B}{A}} \end{displaymath} The ``half-loop'' model assumes that the fusion of threading dislocations results in the formation of half-loops and that half-loops smaller than a certain critical size are removed from the layer by gliding to the interface; it leads to the following equation for the total dislocation density \begin{displaymath} \rho=\displaystyle\frac{f_m \sqrt{2} (1-\nu)(1-2\nu)\ln \left( \displaystyle\frac{2\pi f_m}{1-\nu}\right)} {b h(1-\nu)^3 (1-\ln(2b \sqrt{\rho}))} \end{displaymath} A set containing three dislocation families (mobile and immobile {\td}s; {\md}s) has been exploited in ref. \cite{rom3} and extended in ref. \cite{rom5} to a four-unknown system by splitting the population of the {\md}s into `active' and `passive' parts. Analytic solutions have been obtained for a number of special cases (no blocking of the {\td} propagation by the {\md}s or no annihilation reaction). Eq.~(\ref{react}) is used to study the dislocation evolution either in a layer of fixed thickness or during growth, i.e. for a single independent variable. Dislocation densities of the gliding and climbing {\td}s as well as of the {\md}s have been considered in ref. \cite{rom6}. The authors stress the importance of including the climb process in the model since it permits the description of the effect of point defects on the dislocation propagation. 
Another feature of this model is the attempt to account for the nonlocal character of the {\td} interaction with the {\md}s by introducing a diffusion term into the conservation equation for the gliding dislocation density. The principal character of this extension is the transition from ODEs to PDEs: \begin{eqnarray} \frac{\partial \rho_g}{\partial t} & = & A \sigma(z) - B (\rho_g) \rho_g+ D \frac{\partial^2 \rho_g}{\partial z^2} \nonumber \\ \frac{\partial \rho_c}{\partial t} & = & B (\rho_g) \rho_g - K \rho_c \nonumber \\ \frac{\partial \rho_m}{\partial t} & = & K \rho_c \nonumber \end{eqnarray} where $\rho_i$, $i = g, c, m$, are the densities of the gliding, climbing and misfit dislocations, respectively, and $A$, $B$, $K$ are the corresponding reaction rates \cite{rom6,rom7}. The model just described has been applied to the problem of the misfit dislocation patterning \cite{rom7,rom8}. The complete system has been used for the linear stability analysis only, while the dislocation evolution has been considered for the two limiting cases: the uniform time-dependent solution $\rho_i = \rho_i (t)$ and the steady-state non-uniform one $\rho_i = \rho_i (z)$. \subsubsection{Plastic flow models} Phenomenological plastic flow models of the dislocation evolution are based on the well-known Alexander-Haasen (AH) model \cite{ah} developed for elemental semiconductors loaded in a single slip orientation. It uses the dislocation density as a state variable and relates the plastic deformation in the crystal to the movement and multiplication of dislocations. The AH model (or Alexander-Haasen-Sumino model) is usually understood as a tuple of three components \cite{dupret}: \begin{enumerate} \item{the Orowan equation \cite{orowan} that relates the plastic shear strain rate to the motion of mobile dislocations \begin{displaymath} \dot{\varepsilon}^{\mathrm{pl}}= N b v_d \end{displaymath} } \item{the normalized expression for the dislocation velocity, eq. 
(\ref{vel}), with $\sigma_{\mathrm{eff}}$ substituted for $\sigma_{\mathrm{exc}}$ \begin{displaymath} v_d = v_0 \left(\frac{\sigma_{\mathrm{eff}}}{\sigma_{\mathrm{0}}}\right)^m \exp \left(- \frac{E_v}{kT} \right) \end{displaymath} } \item{the equation for the dislocation density evolution \begin{displaymath} \dot{N}= \delta N v_d \end{displaymath} } \end{enumerate} The main contribution to the model by Alexander and Haasen themselves is the adaptation of the relation for the dislocation velocity to the case of covalent crystals with a high Peierls barrier. They explained the Arrhenius-type temperature dependence of the dislocation velocity observed in experiments, in particular in Ge \cite{ge}. To determine the backstress $\hat{\sigma}$ in \begin{displaymath} \sigma_{\mathrm{eff}} = \sigma_{\mathrm{exc}} - \hat{\sigma} \end{displaymath} the authors consider a statistical arrangement of $N$ parallel dislocations giving the backstress \begin{displaymath} \hat{\sigma}= \frac{\mu b}{2\pi(1-\nu)} N^{1/2} \end{displaymath} which is consistent with the square-root dependence suggested by G.I. Taylor. The multiplication law is due to Johnson \& Gilman \cite{gil}. Alexander and Haasen postulated that the breeding coefficient is \begin{displaymath} \delta = K \sigma_{\mathrm{eff}} \end{displaymath} where $K$ is an empirical constant. Refs. \cite{dodson,nix,houghton} are probably the first examples of the application of AH-type models to the strain relaxation in thin films. 
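As an illustration only (our construction, not taken from any of the cited papers), the three AH components can be combined into a toy time integration at constant applied stress; all parameter values below are arbitrary dimensionless placeholders, with the Taylor back-stress written as $A\sqrt{N}$:

```python
import numpy as np

def ah_step(N, eps_pl, sigma_app, dt,
            v0=1.0, sigma0=1.0, m=1.0, Ev_kT=2.0, K=1.0, b=1.0, A=0.1):
    """One explicit-Euler step of the AH tuple (toy, dimensionless units)."""
    sigma_eff = max(sigma_app - A * np.sqrt(N), 0.0)   # back-stress correction
    v = v0 * (sigma_eff / sigma0)**m * np.exp(-Ev_kT)  # dislocation velocity
    eps_pl += N * b * v * dt                           # Orowan equation
    N += K * sigma_eff * N * v * dt                    # Johnson-Gilman breeding
    return N, eps_pl

# dislocation density multiplies until the back-stress shuts the source off
N, eps_pl = 1.0e-3, 0.0
for _ in range(20000):
    N, eps_pl = ah_step(N, eps_pl, sigma_app=1.0, dt=0.01)
```

In this sketch the density saturates below $(\sigma_{\mathrm{app}}/A)^2$ as $\sigma_{\mathrm{eff}}\to 0$, mimicking the work-hardening arrest of multiplication.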
The main modification of the plastic flow model in the first of the cited papers is an analysis of both gliding and climbing dislocation motion, resulting in the equation for $\gamma = f_m - \varepsilon$ \begin{displaymath} \frac{d\gamma}{dt} = \frac{\sigma_{\mathrm{eff}}^2}{\mu^2} \left( \Gamma_g \exp \left(-\frac{E_g}{kT}\right) + \Gamma_c \exp \left(-\frac{E_c}{kT}\right) \right) (\gamma + \gamma_0) \end{displaymath} or \begin{displaymath} \frac{d \ln(\gamma+ \gamma_0)}{dt} = \frac{\sigma_{\mathrm{eff}}^2}{\mu^2} \left( \Gamma_g \exp \left(-\frac{E_g}{kT}\right) + \Gamma_c \exp \left(-\frac{E_c}{kT}\right) \right) \end{displaymath} where $\Gamma_g$ and $\Gamma_c$ are the glide and climb prefactors, respectively, $E_g$ and $E_c$ are the activation energies and $\gamma_0$ represents a ``source'' term needed for starting the multiplication process. As noted in \cite{tsao}, in the more accurate formulation the excess in-plane stress should be replaced by the excess stress resolved on the slip plane. To adjust $\Gamma_g$, $\Gamma_c$, $E_g$, $E_c$ and $\gamma_0$, experimental data \cite{sige1,sige2} have been used. The essence of the ``improved'' Dodson-Tsao model proposed in \cite{jain} is accounting for the different processes that change the dislocation density \begin{displaymath} \frac{dN}{dt} = - Q_{\mathrm{block}} + Q_{\mathrm{nucl}} + Q_{\mathrm{mult}} \end{displaymath} In ref. \cite{fitz_dyn} an AH-type model has been applied to the strain relaxation in a graded SiGe layer. Under a number of simplifying assumptions the authors get an expression for the time derivative of the dislocation density that depends linearly on both the growth rate $R_g$ and the grading rate $R_{gr}$ \begin{displaymath} \frac{d \rho}{dt} \propto R_g R_{gr} \end{displaymath} Clearly, this relation does not survive the limiting transition to the case of uniform layer growth. 
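The slow-fast-saturation character of this kinetics is easy to see numerically. A minimal sketch (ours, not from the cited papers; a single thermally activated channel, with the excess stress taken simply proportional to the unrelaxed strain $f_m-\gamma$, all values illustrative):

```python
import numpy as np

x, x0 = 0.0, 1.0e-4        # x = gamma/f_m, x0 = gamma_0/f_m (source term)
dt, nsteps = 1.0e-3, 50000
for _ in range(nsteps):
    # dx/dt = (sigma_eff/mu)^2 (x + x0), with sigma_eff ~ (1 - x) in toy units
    x += max(1.0 - x, 0.0)**2 * (x + x0) * dt
```

The curve starts slowly (the rate is proportional to the small source term $x+x_0$), accelerates, and then saturates as the excess stress, and with it the driving force, vanishes for $\gamma \to f_m$.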
In the investigation of the strain relaxation in the uniform layer \cite{wang}, $\sigma_{\mathrm{eff}}$ has been used in the equations for both the dislocation velocity and the dislocation density evolution. However, the backstress has been determined as \begin{equation} \label{hard} \hat{\sigma} = B H (\varepsilon^{\mathrm{pl}})= B \alpha \left(\frac{\varepsilon^{\mathrm{pl}}}{f_m} \right)^{\beta} \left( 1 - \tanh \frac{\gamma \varepsilon^{\mathrm{pl}}}{f_m} \right) \end{equation} where $\alpha, \beta, \gamma$ are adjustable parameters. This model has been extended to the case of multiple and graded layers in ref. \cite{wang_am}. The source term in the equation for the dislocation density evolution has been separated into nucleation and annihilation parts \begin{displaymath} \dot{\rho} = \dot{\rho}_{\mathrm{nucl}} + \dot{\rho}_{\mathrm{annih}} \end{displaymath} A threshold stress for the {\td} nucleation $\sigma_0$ has also been introduced \begin{displaymath} \dot{\rho}_{\mathrm{nucl}} = \xi_0 \left( \frac {\sigma_{\mathrm{exc}} - \hat{\sigma} - \sigma_0}{\mu} \right) \exp \left(-\frac {E_{\rho}}{kT}\right) \end{displaymath} The authors assume that dislocation multiplication plays a relatively minor role; the threading dislocations nucleate at the surface and are distributed to all the layers according to some weight (a power function of the effective stress has been adopted in the paper). An extension of the Dodson-Tsao model \cite{dodson} suggested in ref. \cite{beres} is mainly a modification of the rate equation for the surface nucleation. In addition to the nucleation equation itself \begin{displaymath} \frac{d \rho}{d t} = \xi_0 \left( \frac{\sigma_{\mathrm{eff}}}{\mu}\right)^{(1 + \alpha)} N_s \end{displaymath} an equation for the time evolution of the nucleation site density $N_s$ is introduced \begin{displaymath} \frac{d N_s}{d t} = G - \frac{N_s}{t_0} \end{displaymath} where $t_0$ is the characteristic time of the source deactivation. 
This modification seems to be specific to III-V heterostructure growth where, in contrast to the SiGe material system, a significant reduction of the strain relaxation rate at growth interruption is observed. The extended model halves the deviation from the experimental data compared to the model of ref. \cite{dodson}. The AH-type model has been used to study the strain relaxation in a structure with a substrate of finite thickness in the multiscale approach of ref. \cite{mar3}. The model equations have been combined \cite{m_apl,m_ss} with the equation of mechanical equilibrium \begin{displaymath} M_f \varepsilon_f h_f + M_s \varepsilon_s h_s = 0 \end{displaymath} and the compatibility equation \begin{displaymath} \varepsilon_f - \varepsilon_s = f_m - s b N_{md} \end{displaymath} where \begin{displaymath} s = \left \{ \begin{array}{rl} 1 \qquad \qquad for \; tensile\; strain \qquad f_m > 0 \\ -1 \qquad for \; compressive \; strain \qquad f_m < 0 \end{array}\right. \end{displaymath} to get an ODE for the single unknown, the strain in the layer $\varepsilon_f$, as a function of the film thickness \cite{mar3,m_ss}. \section{Conclusions} \subsection{Strain relaxation scenario} The overall scenario of the strain relaxation in a heteroepitaxial structure with low/medium lattice mismatch includes the following stages: \begin{enumerate} \item{elastic strain accommodation} \item{slow strain relaxation} \item{fast strain relaxation} \item{relaxation saturation due to strain hardening} \end{enumerate} An additional process to be considered is the relaxation during annealing. This stage is somewhat simpler in the numerical analysis since the film thickness is fixed. \subsection{Assessment of strain relaxation models} \subsubsection{Discrete models} The advantage of the micro- and mesoscale numerical models of the misfit strain relaxation is the detailed description of the processes. There are, however, two drawbacks. 
The first one is evident: the huge computer resources required for real-life problems. The second one is more subtle, but actually more severe: the need for corresponding initial and boundary conditions. Thus, at the present time, such discrete numerical models could be practically useful \begin{itemize} \item{as a component of a multiscale simulation system, either directly or via a homogenization-type procedure} \item{as a measuring stick for the calibration of the continuum models \cite{mar2}} \end{itemize} \subsubsection{Continuum models} It is evident that an estimate of the dislocation density using equilibrium models will always produce an {\em upper\/} bound of this parameter and a {\em lower} bound for the residual strain. Reaction and reaction-diffusion models allow a detailed description of the interaction of the dislocations belonging to the different slip systems. Their weak point is first of all the absence of a mechanism to account for the collective behaviour of the dislocations and its effect on the strain. The known applications of the models of this kind deal with either \begin{itemize} \item{the dislocation evolution versus the film thickness, or} \item{the annealing of a film of constant thickness} \end{itemize} The reaction-diffusion models described above that include the gradient terms are formulated as two-dimensional problems. The spatial coordinate is in-plane, however, and the solutions published are either uniform (zero spatial dimension) non-stationary or 1D stationary. Plastic flow models rely heavily on tuning to the experimental data. Still, it seems that at present they are best capable of accounting (phenomenologically) for the complex processes of the strain relaxation in heterostructures, provided enough experimental information is available for the reliable determination of the adjustable parameters. 
In all known examples plastic flow models used for the relaxation during growth are written in terms of the layer thickness as an independent variable. These models have also been applied to the simulation of annealing. \subsection{Evolutionary model} A comprehensive model of misfit accommodation should describe the strain relaxation and the dislocation evolution both during the growth itself and during post-processing (annealing). As the first step we consider the layer only, without the substrate. The model for the strain relaxation in the heterostructure being implemented now is an adaptation of the model developed for the analysis of dislocation density evolution during the growth of single bulk crystals \cite{oxford}. Its major features are as follows. The strain is divided into elastic and plastic components: \begin{displaymath} \varepsilon_{ik} = \varepsilon^{el}_{ik} + \varepsilon^{\mathrm{pl}}_{ik} \end{displaymath} Strain is related to stress via Hooke's law, which in the general case of an anisotropic crystal can be written as \begin{displaymath} \sigma_{ik} = \sigma^{el}_{ik} + c_{iklm} ( \varepsilon^{\mathrm{pl}}_{lm} - \beta_{lm} \triangle T ) \end{displaymath} where $\sigma_{ik}$ is the stress for the completely elastic case, which should be used in the equilibrium equations, $\sigma^{el}_{ik}$ is the real elastic stress, $\triangle T$ is the relative temperature, $c_{iklm}$ are the elastic constants and $\beta_{ik}$ are the thermal expansion coefficients, which are usually assumed to be isotropic: $ \beta_{ik} = \delta_{ik} \alpha \ $. 
The dependence of the dislocation flux density tensor on the deviatoric stress is taken from \cite{Tsai}: \begin{displaymath} j_{ik} = - \frac{S_{ik}}{\sqrt{J_2^S}} b N v \ \end{displaymath} where $S_{ik}$ and $J_2^S$ are the elastic deviatoric stress and its second invariant, respectively: \begin{displaymath} S_{ik} = \sigma^{el}_{ik} - \frac{1}{3} \delta_{ik} \sigma^{el}_{ll}, \quad J_2^S = \frac{1}{2} S_{ik} S_{ik} \ \end{displaymath} Using the generalization of the Orowan equation \begin{displaymath} \delta \varepsilon^{\mathrm{pl}}_{ik} = - \frac{1}{2} ( j_{ik} + j_{ki} ) \delta t \end{displaymath} one finally obtains \begin{equation} \label{or} \frac{ d\varepsilon^{\mathrm{pl}}_{ik} }{ dt } = \frac{S_{ik}}{\sqrt{J_2^S}} b N v \end{equation} The equations for the total dislocation density and the dislocation velocity are written as \begin{displaymath} \frac{dN}{dt} = K \sigma_{eff_N}^\lambda N v + \dot{N}_{bin} \end{displaymath} \begin{displaymath} v = v_0 \sigma_{\mathrm{eff}}^m \, \mathrm{sign} (\sigma_{\mathrm{eff}}) \exp \left(-\frac{Q_v}{kT}\right) \end{displaymath} where the term $\dot{N}_{bin}$ accounts for the binary dislocation reactions and $K, \lambda, v_0, m, Q_v $ are the material parameters. The effective stress is defined as \begin{displaymath} \sigma_{\mathrm{eff}} = |\sigma - \xi \mu b \sqrt{N}| \end{displaymath} where $\sigma$ is the applied elastic stress and $\xi$ is the strain hardening factor. A hardening function (\ref{hard}) is considered as a probable alternative. A straightforward extension of the model is possible to account for the dislocation evolution along each slip system. However accurate such a model may appear, one should bear in mind that the number of material parameters will blow up. 
For example, in the Orowan equation (\ref{or}) $b N v$ is to be changed to the sum over all slip systems \cite{kalan} \begin{displaymath} \sum_i b_i N_i (v_d)_i \end{displaymath} while the effective stress for the $i^{th}$ slip system should be written as \cite{theod} \begin{displaymath} (\sigma_{\mathrm{eff}})_i = |\sigma_i - \xi_i\mu b_i \sqrt{N_i}| -\mu b_i \sqrt{\sum_j \chi_{ij} N_j} \end{displaymath} Thus it has been decided to start with the model for the evolution of the total dislocation density. As has been mentioned already, all the models of the strain relaxation during growth reviewed above (as well as most examples of the AH model applications to the density evolution in bulk crystals, starting with the classical papers \cite{Brown,Muller}) are written as equations in the layer thickness instead of time as an evolutionary variable. In other words, it is assumed that the dislocation motion is instantly frozen and no relaxation occurs in the part of the layer that has already been grown. While this is probably acceptable for the growth of III-V thin films, it is certainly not true for SiGe epitaxial growth: as experiments show, the growth interruption does not stop the relaxation process. The model outlined briefly in this section is a true transient one, similar to the model used recently for the growth of bulk crystals \cite{zhu}. Moreover, it allows a uniform treatment of both the growth itself and annealing. \newpage \section*{} \addcontentsline{toc}{section}{References}
Quantification of entanglement remains a major challenge in quantum information theory. For a state of a bi-partite system, it is well known that a measure for the entanglement between the two subsystems is given by the von Neumann entropy \cite{pr}. For a two-qubit system, the concurrence \cite{hw} has been introduced as an alternative measure which is related to the von Neumann entropy in a bijective manner. In the language of spin-1/2 particles, concurrence can be described in terms of a time-reversal operation. Using this concept, generalisations of concurrence have been proposed for multi-qubit systems and in particular they have been applied to quantify the entanglement of the ground state of the BCS model in the thermodynamic limit \cite{g,md}, which breaks time-reversal symmetry due to broken gauge symmetry. In this Letter we adopt a different approach to investigate the ground state entanglement of the BCS model. We will use the notion of \emph{local concurrence} which is defined in analogy with the functional relation that exists between concurrence and von Neumann entropy for a two-qubit system (cf. \cite{ckw}). It is a measure of the entanglement between a single qubit and the remainder of the system. We then define an entanglement measure which is the Average Local Concurrence (ALC). The ALC satisfies the properties of an entanglement monotone (EM): it vanishes for a state if and only if that state is a product state, is invariant under local unitary transformations, and does not increase on average under local operations assisted by classical communication. For multi-qubit systems the number of independent EMs is the same as the number of non-local invariants \cite{ging}, growing exponentially with system size ($2^{L+1}-3L-2$ where $L$ is the number of qubits \cite{lps,kem}). It is therefore useful to identify EMs which can be related to physical aspects of the system. 
In the thermodynamic limit we will show that the ALC for the ground state of the BCS model displays a simple relationship with the magnitude of the order parameter. For finite systems the order parameter vanishes, so we may use the ALC in lieu of the order parameter. Our investigation of a relationship between the ground state ALC and the condensation energy exposes a threshold coupling which signifies the onset of entanglement not measured by the local concurrences. To study the entanglement of any pure state we can examine the von Neumann entropy. Consider a general quantum system comprised of $L$ qubits and a partition into two subsystems denoted $A$ and $B$. For any pure state density matrix $\rho$ the entanglement $\mathcal{E}_{AB}(\rho)$ between $A$ and $B$ is given by the von Neumann entropy $$\mathcal{E}_{AB}(\rho)=-{\rm tr}(\rho_A \log \rho_A)=-{\rm tr}(\rho_B \log \rho_B) $$ where the logarithm is taken base 2 and $\rho_A$ is the reduced density matrix obtained from $\rho$ by taking the partial trace over the state space of subsystem $B$. The reduced density matrix $\rho_B$ is defined analogously. Now we make precise the definition of local concurrence in a general context. Hereafter we will only deal with the particular case when the subsystem $A$ denotes a single qubit (say the $j$th qubit) and $B$ denotes the remainder of the system. For this case we will write $\rho_j$ for $\rho_A$ and $\mathcal{E}_{j}(\rho)$ for $\mathcal{E}_{AB}(\rho)$. In such an instance we have $\mathcal{E}_j(\rho)=-\lambda_j^{+} \log (\lambda_j^{+}) -\lambda_j^{-} \log (\lambda_j^{-})$ where $\lambda_j^{\pm}$ denote the two eigenvalues of $\rho_j$. Since ${\rm tr} (\rho_j)=1$ and the eigenvalues of $\rho_j$ lie in the interval $[0,1]$, we can always parameterise them as $\lambda_j^{\pm}= \left(1\pm \sqrt{1-C^2_j}\right)/2$ with $C_j\in [0,\,1]$. 
For a two-qubit system $C=C_1=C_2$ is precisely the concurrence \cite{hw}, so it is natural to call the $C_j$ for the $L$-qubit system the local concurrences (cf. \cite{ckw}). In terms of Pauli matrices it can be determined that $\rho_j=\left(I + \sum_{\alpha=x,y,z} \left<\sigma_j^\alpha\right> \sigma^\alpha_j\right)/2$ giving the local concurrence as \begin{eqnarray} C_j=\sqrt{1-\sum_{\alpha=x,y,z} \left<\sigma_j^\alpha\right>^2}.\label{lc} \end{eqnarray} For the ground state of the BCS model we will establish a correspondence between each local concurrence and a certain correlation function $\tilde{C}_j$ (see (\ref{cf}) below) describing the fluctuation in the Cooper pair occupation numbers. An advantage of this approach is that it applies equally to the thermodynamic limit, where gauge symmetry is broken, and to finite systems, where there is no broken symmetry. We obtain analytic results for the ALC in two extreme cases: (i) the thermodynamic limit and (ii) the case of a single Cooper pair in a system with two single particle energy levels. Between these extremes we investigate the ALC in terms of the exact solution of the model provided by the Bethe ansatz \cite{r}, which facilitates the calculation of correlation functions \cite{zlmg,lzmg} and in turn the ALC. {\it The reduced BCS model.} The reduced BCS Hamiltonian has received much attention as a result of the effort to understand pairing correlations in nanoscale metallic systems \cite{ds,vd}. The Hamiltonian reads \begin{eqnarray} H_{\rm{BCS}}&=&\sum_{j=1}^{L}\epsilon_jn_j -d\lambda \sum_{j\neq k}^{L} c_{j+}^{\dagger}c_{j-}^{\dagger}c_{k-} c_{k+}. \label{bcs} \end{eqnarray} Above, $j=1,\dots,{L}$ labels a shell of doubly degenerate single particle energy levels with energies $\epsilon_j$, $d$ is the mean level spacing and $\lambda$ is the dimensionless coupling. 
The operators $c_{j\pm},\,c^{\dagger}_{j\pm}$ are the annihilation and creation operators for electrons at level $j$ where the labels $\pm$ refer to pairs of time-reversed states and $n_j=c^\dagger_{j+}c_{j+} + \ c^\dagger_{j-}c_{j-} $ is the electron number operator for level $j$. Throughout we will only consider systems at half-filling. The physical properties predicted by the model are quite different in the superconducting (SC) regime ($d\ll\tilde{\Delta}$, where $\tilde{\Delta}$ is the bulk gap, defined below) and the fluctuation dominated (FD) regime ($d\gg \tilde{\Delta}$), the latter being the case for nanoscale systems. For the SC regime the variational BCS ansatz using mean-field theory can be used to determine the ground state properties. However, the mean-field approximation is not justified in the FD regime, where quantum fluctuations are significant. The condensation energy $E^c$ is defined as the energy loss relative to the energy of the uncorrelated state (i.e. the ground state energy at $\lambda=0$, or equivalently the energy expectation value of the Fermi sea). In the language of \cite{ddb}, it is equivalent to the entanglement gap for this model. Intuitively, the entanglement gap gives an indication of the ground state entanglement of the system. By definition, it is zero if and only if the ground state is not entangled. It is thus desirable to determine how the entanglement gap (or equivalently the condensation energy) relates to EMs. It is known that the condensation energy is extensive in the SC regime, intensive in the FD regime, but with entirely smooth behaviour in the crossover \cite{ds,vd}. Below we will use a relation between the condensation energy and the ALC to establish that a threshold coupling exists which marks qualitative differences in the ground state in terms of entanglement. An important role in our subsequent analysis will be played by the dimensionless condensation energy per electron, defined by $\tilde{E}=E^c/\omega_D L$. 
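For orientation, extreme case (ii) mentioned above can be worked out explicitly. The sketch below is our own construction (not taken from the exact solution): it places the two levels at $\mp d/2$ and writes the Hamiltonian (\ref{bcs}) at half-filling in the two-dimensional basis of the pair occupying level 1 or level 2, where it reduces to a $2\times 2$ matrix; the closed forms in the assertions follow from elementary diagonalisation.

```python
import numpy as np

d, lam = 1.0, 0.5                      # level spacing and coupling (illustrative)
H = np.array([[-d,      -d * lam],     # H_BCS projected onto the one-pair sector
              [-d * lam,  d     ]])
E, V = np.linalg.eigh(H)
a, b = V[:, 0]                         # ground state a|pair in 1> + b|pair in 2>

Ec  = -d - E[0]                        # condensation energy E(lambda=0) - E_0
alc = 2 * abs(a * b)                   # local concurrence C_1 = C_2 = ALC

assert np.isclose(Ec,  d * (np.sqrt(1 + lam**2) - 1))
assert np.isclose(alc, lam / np.sqrt(1 + lam**2))
```

In this two-level toy problem both quantities vanish as $\lambda\to 0$, the ALC linearly and the condensation energy quadratically in the coupling.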
Next we discuss the decomposition of the Hilbert space into subsystems. At each energy level $\epsilon_j$ there are four independent states; $\left|0\right>,\, c^\dagger_{+}\left|0\right>, \,c^\dagger_{-}\left|0\right>,\, c^\dagger_{+} c^\dagger_{-}\left|0\right>$. These states serve as a \emph{ququadrit}, which can be further decomposed into two qubits through the identification $\left|0\right> \equiv\left|00\right>,\, c^\dagger_{+}\left|0\right>\equiv\left|10\right>, \,c^\dagger_{-}\left|0\right>\equiv\left|01\right>,\, c^\dagger_{+} c^\dagger_{-}\left|0\right>\equiv\left|11\right>$ \cite{z,zw}. In the ground state all electrons are paired, so there is zero probability of observing a single electron at any level $\epsilon_j$. Thus for the ground state each level $\epsilon_j$ gives rise to a two-state system with basis $\left|0\right>,\,c^\dagger_{+} c^\dagger_{-}\left|0\right>$, and each ququadrit serves as an effective qubit. {\it The grand canonical ensemble.} The conventional BCS theory \cite{bcs} employs a grand canonical ensemble, where the electron number is not fixed, using the well-known variational ground state ansatz \begin{eqnarray} \left|\rm{BCS}\right>=\prod_{j=1}^L(u_jI+e^{i\theta} v_j c_{j+}^{\dagger}c_{j-}^{\dagger}) \left|0\right> \label{var} \end{eqnarray} with $u_j,\,v_j$ real and satisfying $u_j^2+v_j^2=1$. Including only those levels within the cut-off given by the Debye frequency $\omega_D$ (i.e. $|\epsilon_j|\leq \omega_D$ where the Fermi level is $\epsilon_F=0$), minimisation of the expectation value of the energy for (\ref{var}) gives \begin{eqnarray} 4u_j^2v_j^2&=&{\tilde{\Delta}^2}/{(\epsilon_j^2+\tilde{\Delta}^2)} \label{dist} \end{eqnarray} where $\tilde{\Delta}=\omega_D/\sinh(1/\lambda)$ is the bulk gap. Unless stated otherwise we assume that the levels $\epsilon_j$ are uniformly distributed. It can then be deduced that $\tilde{E}=(\coth(1/\lambda)-1)/2$. 
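A short numerical sketch of the distribution (\ref{dist}) (illustrative $\lambda$, energies in units of $\omega_D$): the product $2|u_jv_j| = \tilde{\Delta}/\sqrt{\epsilon_j^2+\tilde{\Delta}^2}$ is maximal at the Fermi level and falls to $1/\sqrt{2}$ at $|\epsilon_j|=\tilde{\Delta}$, so the pair-amplitude mixing is confined to a window of width $\sim\tilde{\Delta}$.

```python
import numpy as np

omega_D, lam = 1.0, 0.3
Delta = omega_D / np.sinh(1.0 / lam)          # bulk gap
eps = np.linspace(-omega_D, omega_D, 2001)    # uniformly spaced levels
uv2 = Delta / np.sqrt(eps**2 + Delta**2)      # 2|u v| from the distribution above

assert np.isclose(uv2[np.argmin(np.abs(eps))], 1.0)           # maximal at eps = 0
assert abs(np.interp(Delta, eps, uv2) - 1/np.sqrt(2)) < 1e-3  # half-height at |eps| = Delta
```

For $\lambda=0.3$ one finds $\tilde{\Delta}\approx 0.07\,\omega_D$, so only a small fraction of the band carries appreciable pair mixing.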
With respect to the decomposition of (\ref{var}) into ququadrits discussed above, it is a product state and thus not entangled. There is however entanglement within each ququadrit subsystem. Expressing the state of a ququadrit as $u\left|00 \right> +v\,e^{i\theta} \left|11\right>$ it is easily determined that the concurrence is $C=2|uv|$. Thus for the state (\ref{var}) we may define $L$ local concurrences $C_j$ associated with each level $j$, {\it which quantify the entire entanglement content of the state}. Remarkably, most of the $2^{L{+}1}{-}3L{-}2$ independent EMs are zero. Note that the definition of local concurrence here is not the same as the definition of partial concurrence in \cite{md} based on the notion of broken time-reversal symmetry of (\ref{var}) (see also \cite{g}). A simple choice for an EM which reflects the overall ground state entanglement is the ALC, $ \oC= {L^{-1}} \sum_{j=1}^L C_j$. We remark that $\oC$ only quantifies bi-partite entanglement, and not multi-partite entanglement \cite{ckw,lps,kem}. However (\ref{var}) has no multi-partite entanglement. In the thermodynamic limit $d\rightarrow 0$ with $Ld=2\omega_D$ finite we can compute the ALC as \begin{eqnarray} \oC&=& \frac{\int_{-\omega_D}^{\omega_D} f(\epsilon) \mu(\epsilon) d\epsilon }{\int_{-\omega_D}^{\omega_D} \mu(\epsilon) d\epsilon} \label{alc} \end{eqnarray} with $f(\epsilon)=\tilde{\Delta}/(\sqrt{\epsilon^2+\tilde{\Delta}^2})$ and $\mu(\epsilon)\geq 0$ a density function for the distribution of the single particle energy levels. For the case $\mu(\epsilon)=1$ it is straightforward to determine that the ALC is $\oC={1}/({\lambda\sinh(1/\lambda)})$. Recalling \cite{vd} that the order parameter is given by $\Delta=\lambda d \sum_{j=1}^L \left<c_{j-}c_{j+}\right>$ we have $|\Delta|=\lambda \omega_D\oC$. 
This last relation reflects the fact that superconducting order arises from the instability of the Fermi sea due to Cooper pairing, which also results in the emergence of entanglement in the ground state. Additionally we find $2{d \tilde{E}}/{d \lambda} = \oC^2$. For the canonical case (i.e. a finite system with fixed electron number) the order parameter vanishes, but one can alternatively study the extent to which `superconducting order' survives through the study of certain correlation functions \cite{vd}, which we will show can be used to compute the ALC. \begin{figure} \includegraphics[scale=.27]{figa} \caption{\label{fig1} The ALC as a function of the dimensionless coupling $\lambda$ in the thermodynamic limit for various distributions $\mu(\epsilon)$ of the single particle levels. The results shown are for, in increasing order of local density about the Fermi level, $\mu(\epsilon)=\epsilon^2$ (dash), $\mu(\epsilon)= |\epsilon|$ (solid), $\mu(\epsilon)=1$ (bold), $\mu(\epsilon)=\omega^2_D-\epsilon^2$ (dash-dot) and $\mu(\epsilon)=\omega_D-|\epsilon|$ (dot). For each of these cases the expression (\ref{alc}) can be evaluated analytically. } \end{figure} We introduce the correlators describing the fluctuation in Cooper pair occupation: \begin{eqnarray} \tilde{C}_j&=&\sqrt{\left< n_j^2\right>- \left<n_j\right>^2} =\sqrt{\left<2-n_j\right>\left<n_j\right>}.\label{cf} \end{eqnarray} These correlators can be directly evaluated for the variational wavefunction (\ref{var}) giving $\tilde{C}_j=2\left|u_j v_j\right|=C_j$, so the local concurrence is precisely the fluctuation of the Cooper pair occupation. These fluctuations are localised within the range $\tilde{\Delta}$ of the Fermi level as given by (\ref{dist}). The behaviour of the ALC is strongly influenced by the density of levels about the Fermi level, as depicted in Fig.~1. Hereafter we will only consider the case of uniform distribution of the levels to simplify the analysis. 
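The equality $\tilde{C}_j = 2|u_jv_j|$ for the variational state can be confirmed with a few lines of numerics on a single ququadrit $u\left|00\right> + v e^{i\theta}\left|11\right>$ (a self-contained sketch; the basis ordering $\left|00\right>,\left|01\right>,\left|10\right>,\left|11\right>$ is our convention and the amplitudes are arbitrary):

```python
import numpy as np

u, theta = 0.6, 0.7
v = np.sqrt(1 - u**2)
psi = np.array([u, 0.0, 0.0, v * np.exp(1j * theta)])  # u|00> + v e^{i theta}|11>

n = np.diag([0.0, 1.0, 1.0, 2.0])            # electron number operator on the ququadrit
n_avg  = np.real(psi.conj() @ n @ psi)
n2_avg = np.real(psi.conj() @ n @ n @ psi)
C_tilde = np.sqrt(n2_avg - n_avg**2)         # pair-occupation fluctuation, eq. (cf)

assert np.isclose(C_tilde, 2 * abs(u * v))               # equals the concurrence
assert np.isclose(C_tilde, np.sqrt((2 - n_avg) * n_avg)) # second form in eq. (cf)
```

Since $n$ takes only the values 0 and 2 on paired states, $\left<n^2\right> = 2\left<n\right>$, which is the origin of the second equality in (\ref{cf}).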
In the thermodynamic limit (\ref{var}) becomes the exact ground state and the finite size corrections are of order $1/L$ \cite{a}. For systems with large but finite electron number in the SC regime, the entanglement of the state (\ref{var}) and the entanglement of the ground state will be the same up to corrections of order $1/L$. However, the ground state cannot be accurately approximated by the product state (\ref{var}) in the FD regime. In this case the correlations spread out in energy space over the entire width $2\omega_D$ about the Fermi level \cite{vd}. {\it The canonical ensemble.} Our next step is to establish that the correlators (\ref{cf}) are still equivalent to the local concurrences for a canonical system. Recall that since there are no unpaired electrons in the ground state, we can treat each ququadrit as an effective qubit. We express the action of the canonical Fermi algebra on the subspace of the Hilbert space with no unpaired electrons in terms of the Pauli matrices through the identification $n=I+\sigma^z,~ c_{+}^{\dagger}c_{-}^{\dagger}=\sigma^+,~ c_{-}c_{+}=\sigma^-$. The uniqueness of the ground state and the $u(1)$ invariance of the Hamiltonian (\ref{bcs}) due to conservation of total electron number mean that (\ref{lc}) reduces to $C_j=\sqrt{1-\left<\sigma^z\right>^2}$. Next we express the correlators (\ref{cf}) in terms of the Pauli matrices, with the result $\tilde{C}_j= C_j$. A clarifying point is needed here. The local concurrence $C_j$ is a measure of the entanglement between the effective qubit associated with the $j$th level and the remainder of the system. Treating the $j$th level as a ququadrit, the reduced density matrix becomes \begin{eqnarray} \rho_j=\frac{1}{2}(2-\left<n_j\right>)\left|00\right>\left<00\right| +\frac{1}{2} \left<n_j\right> \left|11\right>\left<11\right|.
\label{red} \end{eqnarray} Taking the partial trace over either qubit yields a reduced density matrix which has the same non-zero eigenvalues as (\ref{red}), so the local concurrence for either qubit within a ququadrit is the {\it same} as the local concurrence of the ququadrit viewed as an effective qubit. We see that the definition for the ALC in terms of the correlators (\ref{cf}) is the same for the canonical and grand canonical cases. Note that the derivation of the ALC in terms of (\ref{cf}) for canonical systems relied on $u(1)$ invariance. In the thermodynamic limit the $u(1)$ invariance of the ground state density matrix is broken, but the same expression for the ALC in terms of (\ref{cf}) is valid. \begin{figure} \includegraphics[scale=.27]{figb} \caption{\label{fig2} Ground state ALC for systems of $L=24$ (dot-dash), $40$ (dash) and $68$ (solid) levels. The bold and dotted lines show the analytic results for the thermodynamical limit and $L=2$ respectively. The inset, showing the result for small $\lambda$, highlights that the ALC is $L$-dependent and not universal.} \end{figure} \begin{figure} \includegraphics[scale=.27]{figc} \caption{\label{fig4} The ratio $\oC^2/\tilde{E}$ versus $\ln (\lambda)$. The results shown are for $L=24$ (dot-dash), $40$ (dash) and $68$ (solid) levels, while the bold and dotted curves are the analytic results obtained for the thermodynamic limit and $L=2$. The maximum for each case is a threshold coupling, below which other EMs become significant. For $L=2$ and the thermodynamic limit the maximum occurs at $\lambda=0$, as there is only bi-partite entanglement in these cases. } \end{figure} To analyse the ground state ALC in the canonical case we employ results from the exact solution \cite{r}. 
An eigenstate of (\ref{bcs}) with $M$ Cooper pairs is characterised by a set of complex parameters $\{v_1,\dots,v_M\}$ which provide a solution to the set of Bethe Ansatz Equations (BAE) \begin{eqnarray} \frac{2}{d\lambda}+ \sum_{k=1}^{L}\frac{1}{v_i-\epsilon_k} &=&\sum_{j\neq i}^M \frac{2}{v_i-v_j}, \label{bcsbae} \end{eqnarray} and the energy is given by $E=2\sum_{j=1}^M v_j+\lambda d M$. For the simple case of $L=2,\,M=1$, (\ref{bcsbae}) can be solved analytically, yielding the ground state energy $E= \epsilon_1+\epsilon_2-\sqrt{d^2\lambda^2+(\epsilon_1-\epsilon_2)^2}$. Using the Hellmann-Feynman theorem the correlators (\ref{cf}) can be computed, which gives the ALC as $\oC=\lambda/{\sqrt{1+\lambda^2}}$. For the case of general finite $L$ the ALC can be computed through determinant representations of the $\tilde{C}_j$, which are given as functions of the set $\{v_1,\dots,v_M\}$. The explicit formulae can be found in \cite{zlmg,lzmg}. In the limits of strong \cite{yba} and weak \cite{f} coupling for large but finite $L$ we have the asymptotic results \begin{eqnarray} \tilde{E}&\sim& \lambda/2(1+O(1/L)),\nonumber \\ \oC&\sim& 1-1/(6\lambda^2)(1+O(1/ L)), ~~~~~~~~~~~~~~~~~\, \lambda^{-1}\ll 1, \nonumber \\ \tilde{E}&\sim& \lambda^2\ln(2)/L(1+O(1/\ln L)),\,\,\nonumber \\ \oC&\sim& \lambda \sqrt{{2}/{L}} \,\ln (3+\sqrt{8})(1+O(1/\ln L)), \,\ \ \, ~~~~~\lambda \ll 1. \nonumber \end{eqnarray} It is found that for $\lambda^{-1}\ll1$, $\oC^2/\tilde{E}\sim 2/\lambda$, and \begin{eqnarray} \oC^2/\tilde{E} &\sim& (2\ln^2(3+\sqrt{8}))/\ln(2) \approx 8.97 \nonumber \end{eqnarray} for $\lambda\ll 1$. Therefore in the FD and SC regimes the quantity $\oC^2/\tilde{E}$ displays scaling behaviour, i.e. the leading term is independent of $L$. In contrast to the FD and SC regimes, which are characterised by the scales $1/\ln (L)$ and $1/L$ respectively, the scale $1/\ln^2 (L)$ occurs for the crossover regime~\cite{f}.
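For the $L=2,\,M=1$ example the single BAE, $2/(d\lambda)+1/(v-\epsilon_1)+1/(v-\epsilon_2)=0$ (the pair-pair sum on the right-hand side is empty for $M=1$), is a quadratic in $v$, and the resulting $E=2v+\lambda d$ can be checked against the closed form quoted above. A small sketch:

```python
import math

def bae_ground_energy(eps1, eps2, d, lam):
    """Solve the L = 2, M = 1 Bethe Ansatz Equation
        2/(d*lam) + 1/(v - eps1) + 1/(v - eps2) = 0
    as a quadratic in v and return E = 2v + lam*d for the ground-state root."""
    g = d * lam / 2.0
    # multiplying through by (d lam / 2)(v - eps1)(v - eps2) gives
    # v^2 + (2g - eps1 - eps2) v + eps1*eps2 - g*(eps1 + eps2) = 0
    b = 2.0 * g - (eps1 + eps2)
    c = eps1 * eps2 - g * (eps1 + eps2)
    v = (-b - math.sqrt(b * b - 4.0 * c)) / 2.0   # lower root -> ground state
    return 2.0 * v + lam * d

# agrees with E = eps1 + eps2 - sqrt(d^2 lam^2 + (eps1 - eps2)^2)
for (e1, e2, d, lam) in [(0.0, 1.0, 1.0, 0.7), (-0.3, 0.5, 0.8, 1.3)]:
    exact = e1 + e2 - math.sqrt(d**2 * lam**2 + (e1 - e2) ** 2)
    assert abs(bae_ground_energy(e1, e2, d, lam) - exact) < 1e-12
```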
In Fig.~2 we show the ALC versus $\lambda$ for $L=2,\,24,\,40,\,68$ and the thermodynamic limit. Like the condensation energy, the ALC is clearly a smooth monotonic function of $\lambda$ as it crosses from the FD to the SC regime. In Fig.~3 we plot the ratio $\oC^2/\tilde{E}$ versus $\ln (\lambda)$ for the finite cases $L=24,40,68$. For sufficiently small $\lambda$ this ratio is approximately the constant value 8.97 (up to a small correction $O(1/\ln (L))$), in agreement with the asymptotic result, but is nonetheless monotonically increasing. At sufficiently large $\lambda$ the ratio is monotonically decreasing and is well approximated by the analytic curve for the thermodynamic limit. For $L=2$ and the thermodynamic limit the curves, also shown, are monotonic, while in each of the other three cases there is clearly a maximum at a finite value of $\ln(\lambda)$. The coupling at which the maximum of $\oC^2/\tilde{E}$ occurs is a threshold: below this coupling (\ref{var}) no longer approximates the ground state and we must expect that other (generally multi-partite) EMs become significant. For larger values of $L$, we appeal to a heuristic argument based on the observation made in~\cite{f}: for the crossover regime, the condensation energy is roughly reproduced by simply summing up the contributions from the perturbative result in the FD regime and the BCS mean field theory in the SC regime \cite{gg}. This also applies to the ALC. Therefore, the same picture we have drawn from the exact Bethe ansatz solutions for small $L$ works for very large $L$, thus filling in the gap between the tractable but relatively small values of $L$ and the thermodynamic limit. As $L$ becomes very large, the threshold coupling tends to the value $2/\ln (L)$ (the coupling at which mean-field theory breaks down~\cite{f}). This gives $\oC^2/\tilde{E}\sim \ln^2 (L)$ at the threshold, showing that the peak in Fig.~3 is not bounded as $L$ increases.
The competition of different scales in the crossover regime leads to the breakdown of the scaling behaviour observed in the FD and SC regimes. We gratefully acknowledge financial support from the Australian Research Council.
\section{\small{\bf {Introduction}}} \label{in} The quantum oscillator is an important model in various branches of physics, such as quantum mechanics, quantum field theory, string theory and gravity, owing to its exact solvability and overcomplete symmetry. The symmetry is manifested through the angular momentum degeneracy of the energy spectrum. It is also possible to separate the differential equation in several coordinate systems. The overcomplete symmetry lets the harmonic oscillator remain exactly solvable even after certain deformations of the potential are made. The symmetry is thus the prime feature which gives the harmonic oscillator such a status in different fields of study. In the quantum mechanical oscillator on the K\"{a}hler conifold \cite{kahler}, proposed in Ref. \cite{bellucci}, this symmetry is, on the other hand, generally broken. It is nevertheless an important model, because it is solvable and is defined on a K\"{a}hler conifold, which is a curved space; K\"{a}hler spaces \cite{zumino, bowick} are of great importance in string theory and gravity. The model is a four-dimensional quantum oscillator on the $(\nu,\epsilon)$-parametric family of K\"{a}hler conifolds, related to the complex projective space $CP^2$ for $\nu =1$ and $\epsilon =1$ and to the four-dimensional Lobachevsky space $\mathcal L_2$ for $\nu =1$ and $\epsilon =-1$. The question is now whether it is possible to retain the angular momentum degeneracy in the energy spectrum of the oscillator defined in Ref. \cite{bellucci}. The answer is yes, and in the present work we address this issue. We construct a one-parameter family of self-adjoint extensions \cite{reed} of the initial domain of the radial Hamiltonian of the harmonic oscillator \cite{bellucci} by von Neumann's method \cite{reed}. This allows us to construct a generalized boundary condition.
We will show that for a particular value of the extension parameter we can in fact recover the angular momentum degeneracy in the energy spectrum. Moreover, the previously obtained result of Ref. \cite{bellucci} is recovered for another value of the parameter, and further spectra are obtained as well. The importance of the self-adjointness of an operator is, however, more fundamental. The evolution of a quantum system is dictated by a unitary group, and the generator of this group is the Hamiltonian itself. According to Stone's theorem \cite{reed}, the generator of a unitary group (in this case the Hamiltonian) must be self-adjoint. For an operator that is not self-adjoint we should therefore search for a self-adjoint extension, if one exists. If the system has many self-adjoint extensions, then different extensions should unveil different physics of the system. The paper is organized as follows: In Sec.~\ref{os}, we discuss the quantum mechanical oscillator on the K\"{a}hler conifold. In Sec.~\ref{ra}, we perform the self-adjoint extension of the radial Hamiltonian and make some observations for particular values of the extension parameter $\omega_0$; here we show that it is indeed possible to retain the degeneracy in the energy spectrum (the symmetry of the system). We conclude in Sec.~\ref{con}. \section{\small{\bf {Quantum mechanical oscillator on K\"{a}hler conifold }}} \label{os} The Hamiltonian for the system is given by \begin{eqnarray} {\widehat{\cal H}}= -\hbar^2 g^{a \bar b}\partial_a\partial_{\bar b} +V_{osc}, \label{ham} \end{eqnarray} where the metric is of the form \begin{eqnarray} g_{a\bar b}= \frac{\nu r_0^2 (z\bar z)^{\nu -1}}{2(1+\epsilon (z\bar z)^\nu )} \left( \delta_{a b}-\frac{1-\nu+ \epsilon (z\bar z)^\nu}{z\bar z\; (1+\epsilon (z\bar z)^\nu)}\bar z^a z^b\right), \label{metric} \end{eqnarray} and the oscillator potential is given by \begin{eqnarray} V_{osc}= \omega^2 g^{\bar a b}\partial_{\bar a}K \partial_b K= \frac{\omega^2 r_0^2}{2} (z\bar z)^\nu.
\label{pot} \end{eqnarray} Here $K$, the K\"{a}hler potential, is given by \begin{eqnarray} K=\frac{r^2_0}{2\epsilon} \log (1+\epsilon(z\bar z)^\nu),\quad \nu >0; \quad \epsilon=\pm 1. \label{kpot} \end{eqnarray} In order to investigate all possible solutions of the problem we consider the eigenvalue problem \begin{eqnarray} {\widehat{\cal H}}\Psi= E\Psi. \label{eigen} \end{eqnarray} Equation (\ref{eigen}) separates in the spherical coordinates \begin{eqnarray} z^1& =& r^{\frac{1}{\nu}}\cos\frac{\beta}{2}\exp\left[\frac{i}{2}(\alpha+\gamma)\right],\nonumber\\ z^2& =& - ir^{\frac{1}{\nu}}\sin\frac{\beta}{2}\exp\left[-\frac{i}{2}(\alpha -\gamma)\right], \label{spherical} \end{eqnarray} if we take a trial wavefunction of the form \begin{eqnarray} \Psi=\psi(r) D^j_{m,s}(\alpha,\beta,\gamma). \label{wavef} \end{eqnarray} Here $\alpha\in [0,2\pi)$, $\beta\in [0,\pi]$ and $\gamma\in [0,4\pi)$, and $r$ is a dimensionless radial coordinate taking values in the interval $ [0,\infty)$ for $\epsilon=+1$, and in $[0, 1]$ for $\epsilon=-1$. In the Wigner function $ D^j_{m,s}(\alpha,\beta,\gamma)$, $j$ and $m$ denote the orbital and azimuthal quantum numbers, with corresponding operators $\widehat J^2$ and $\widehat J_3$ respectively, while $s$ is the eigenvalue of the operator $\widehat J_0$: \begin{eqnarray} &&\widehat{J}_0\Psi=s\Psi,\label{J0}\\ &&\widehat{\bf J}^2\Psi =j(j+1)\Psi,\quad \widehat{J}_3\Psi=m\Psi ,\\ && m,s=-j,-j+1,\ldots , j-1, j\;\; \mbox{where}~~ j=0,1/2,1,\ldots \label{mj} \end{eqnarray} The volume element reads \begin{eqnarray} dV_{(4)}=\frac{\nu^2r_0^4}{32}\frac{r^3}{(1+ \epsilon r^2)^3}\sin\beta drd\alpha d\beta d\gamma.
\label{volume} \end{eqnarray} Separating the differential equation we get the radial eigenvalue equation \begin{eqnarray} H(r)\psi(r)= E\psi(r), \label{radialeigen} \end{eqnarray} where \begin{eqnarray}\textstyle{ H(r)=-\frac{\hbar^2(1+\epsilon r^2)^2}{2r_0^2}\left[\frac{d^2}{dr^2} + \frac{3+\epsilon r^2}{1+\epsilon r^2}\frac{1}{r} \frac{d}{dr} + \frac{\epsilon\omega^2 r_0^4}{\hbar^2(1+ \epsilon r^2)^2} -\frac{\epsilon \delta^2}{1+\epsilon r^2} -\frac{4\nu j(j+1)+4(1-\nu)s^2}{\nu^2 r^2(1+\epsilon r^2)}\right]} \label{radialham} \end{eqnarray} We now move to the next section to discuss the self-adjointness of the radial Hamiltonian $H(r)$ of Eq.~(\ref{radialham}). \section{\small{\bf {Self-adjointness of the radial Hamiltonian }}} \label{ra} The effective radial Hamiltonian $H(r)$ of Eq.~(\ref{radialham}) is formally self-adjoint, but formal self-adjointness does not guarantee that it is self-adjoint on a given domain \cite{dunford}. The operator $H(r)$ is an unbounded differential operator defined on a Hilbert space. As mentioned in the introduction, we perform the self-adjoint extension of the operator $H(r)$ by von Neumann's method \cite{reed}; for the sake of completeness we briefly review this method here. Consider an unbounded differential operator $T$ defined over a Hilbert space $\mathcal H$, with a domain $D(T)\subset \mathcal H$ on which $T$ is symmetric. The operator $T$ is called symmetric or Hermitian if $(T\phi,\chi)= (\phi, T\chi)~~ \forall \phi,\chi \in D(T)$, where $(\cdot\,,\cdot)$ is the inner product defined over the Hilbert space $\mathcal H$. Let $D(T^\dagger)$ be the domain of the corresponding adjoint operator $T^\dagger$. The operator $T$ is self-adjoint iff $T= T^\dagger$ and $D(T)= D(T^\dagger)$. We now state the criteria for self-adjointness of a symmetric operator $T$ according to the von Neumann method.
We first need to find the deficiency subspaces (which are null spaces) $D^\pm \equiv \mbox{Ker}(i\mp T^\dagger)$ and the deficiency indices $n^\pm(T) \equiv \dim(D^\pm)$. Depending upon $n^\pm$, $T$ is classified as \cite{reed}: \begin{list}{\arabic{enumi})}{\usecounter{enumi}} \item $T$ is essentially self-adjoint if $n^+= n^- = 0$. \item $T$ has an $n$-parameter family of self-adjoint extensions if $n^+ = n^-= n \ne 0$. \item $T$ has no self-adjoint extension if $n^+\ne n^-$. In this case $T$ is called maximally symmetric. \end{list} We now return to the discussion of our effective radial differential operator $H(r)$. This operator is symmetric in the domain \begin{eqnarray} D(H(r)) = \{\phi(r): \parbox[t]{9cm}{\mbox{$\phi(0) = \phi'(0) = 0 $}, absolutely continuous, square integrable over the full range with measure \mbox{$d\mu$} \}\,.} \label{domain1} \end{eqnarray} where $d\mu = \frac{r^3}{(1+ \epsilon r^2)^3}dr$ and $\phi'(r)$ is the derivative of $\phi(r)$ with respect to $r$. The domain of the adjoint operator $H^\dagger(r)$, whose differential expression is the same as that of $H(r)$ due to formal self-adjointness, is given by \begin{eqnarray} D(H^\dagger(r)) = \{\phi(r): \parbox[t]{9cm}{ absolutely continuous, square integrable over the full range with measure \mbox{$d\mu$} \}\,,} \label{domain2} \end{eqnarray} $H(r)$ is obviously not self-adjoint \cite{reed}, because \begin{eqnarray} D(H(r))\ne D(H^\dagger(r)). \label{nonselfad} \end{eqnarray} We may therefore ask whether a self-adjoint extension \cite{reed} exists for this problem. To answer this question we need to investigate whether there are square-integrable solutions of the differential equations \begin{eqnarray} H^\dagger(r) \phi^\pm = \pm i\phi^\pm. \label{imaginarysol} \end{eqnarray} Eq.
(\ref{imaginarysol}) can be transformed into the hypergeometric differential equation \cite{abr} under the transformation \begin{eqnarray} r= \cases{ \tan\theta,& for $\epsilon=1$; \cr \tanh\theta, & for $\epsilon= - 1$;} \label{trans} \end{eqnarray} and a trial solution of the form \begin{eqnarray} \phi^\pm= \cases{ \sin^{j_1-1}\theta\cos^{\delta}\theta\; \psi^\pm, & for $\epsilon=1$; \cr \sinh^{j_1-1}\theta\cosh^{-\delta - 2a^\pm}\theta\; \psi^\pm, & for $\epsilon=-1$.} \label{cons1} \end{eqnarray} The transformed differential equation is \begin{eqnarray} t(1-t)\frac{d^2\psi^\pm}{dt^2} + \left[c-(a^\pm +b^\pm +1)t \right]\frac{d\psi^\pm}{dt} - a^\pm b^\pm\psi^\pm = 0, \label{hyper} \end{eqnarray} where \begin{eqnarray} a^\pm = \frac{1}{2}\left(1+j_1+ \epsilon\delta - \sqrt{ \frac{\pm 2 r_0^2 i}{\epsilon\hbar^2} +4+ \frac{\omega^2 r_0^4}{\epsilon^2\hbar^2}}\right), b^\pm = \cases{- a^\pm +\delta+j_1+1, & for $\epsilon=1$; \cr a^\pm +\delta, & for $\epsilon= -1$;}\nonumber \end{eqnarray} \begin{eqnarray} c= j_1+1, \quad j_1^2 = \frac{4j(j+1)}{\nu} +1-\frac{4(\nu-1)s^2}{\nu^2}, \quad \delta^2= \frac{4s^2}{\nu^2} + \frac{\omega^2 r_0^4}{\hbar^2},\nonumber \end{eqnarray} \begin{eqnarray} t= \cases{\sin^2\theta, & for $\epsilon=1$; \cr \tanh^2\theta & for $\epsilon= -1$.} \label{} \end{eqnarray} The square-integrable solutions spanning the deficiency subspaces are, apart from normalization, given by \begin{eqnarray} \phi^\pm=\cases{ D t^{\frac{c -2}{2}}(1-t)^{\frac{b^\pm + a^\pm -c}{2}}\; _2F_1(a^\pm, b^\pm; c; t),\; & for $\epsilon=1$; \cr D (\frac{t}{1-t})^{\frac{c -2}{2}}(\frac{1}{1-t})^{-\delta -2a^\pm}\; _2F_1(a^\pm, b^\pm; c; t) & for $\epsilon= - 1$,} \label{constant}\end{eqnarray} where $_2F_1$ is the hypergeometric function \cite{abr}. The existence of these complex eigenvalues of $H^\dagger(r)$ signifies that $H(r)$ is not self-adjoint. The solutions $\phi^\pm$ belong to the null spaces $D^\pm$ of $H^\dagger(r) \mp i$, where $D^\pm \subset D(H^\dagger(r))$.
The dimensions of $D^\pm$ are known as the deficiency indices $n^\pm$, defined by \begin{eqnarray} n^\pm = \dim(D^\pm). \label{deficiencyindices} \end{eqnarray} Since in our case the deficiency indices are $n^+ = n^- = 1$, we have a 1-parameter family of self-adjoint extensions of $H(r)$. The self-adjoint extension of $H(r)$ is given by $H(r)^{\omega_0}$ with domain $D(H(r)^{\omega_0})$, where \begin{eqnarray} D(H(r)^{\omega_0})= \{ \psi(r)= \phi(r)+ \phi^+(r) + e^{i\omega_0}\phi^-(r) : \phi(r)\in D(H(r)), \omega_0\in \mathbb{R} (\bmod 2\pi)\}. \label{selfdomain} \end{eqnarray} The bound state solution of $H(r)^{\omega_0}$ is given by \begin{eqnarray} \psi(r) =\cases{ C t^{\frac{c -2}{2}}(1-t)^{\frac{b + a -c}{2}} \; _2F_1(a, b; c; t), & for $\epsilon=1$; \cr C (\frac{t}{1-t})^{\frac{c -2}{2}}(\frac{1}{1-t})^{-\delta - 2a} \; _2F_1(a, b; c; t), & for $\epsilon= - 1$;} \label{beigenvalue}\end{eqnarray} where \begin{eqnarray} a = \frac{1}{2}\left(1+j_1+ \epsilon\delta - \sqrt{ \frac{2 r_0^2 E}{\epsilon\hbar^2} +4+ \frac{\omega^2 r_0^4}{\epsilon^2\hbar^2}}\right)\,, b = \cases{- a +\delta+j_1+1, & for $\epsilon=1$; \cr a+\delta & for $\epsilon= -1$;}\nonumber \end{eqnarray} \begin{eqnarray} c= j_1+1, t= \cases{\sin^2\theta, & for $\epsilon=1$; \cr \tanh^2\theta, & for $\epsilon= -1$;} \end{eqnarray} and $C$ is the normalization constant. To determine the eigenvalues we have to match the function $\psi(r)$ with the domain (\ref{selfdomain}) as $r\rightarrow 0$.
In the limit $r\rightarrow 0$, \begin{eqnarray} \psi(r) \to \cases{C t^{\frac{c -2}{2}}(1-t)^{\frac{b + a -c}{2}} \left[\Gamma_1 + (1-t)^{c-a-b}\Gamma_2 \right], & for $\epsilon=1$; \cr C t^{\frac{c -2}{2}} \left[\Gamma_1 + (1-t)^{1+\frac{c}{2}}\Gamma_2 \right], & for $\epsilon= -1$;} \label{matching1} \end{eqnarray} where \begin{eqnarray} \Gamma_1 &=& \frac{\Gamma(c) \Gamma(c-a-b) \Gamma(a+b-c+1) \Gamma(1-c)} {\Gamma(c-a) \Gamma(c-b)\Gamma(b-c+1)\Gamma(a-c+1)} \\ \Gamma_2 &=& \frac{\Gamma(c)\Gamma(a+b-c)\Gamma(c-a-b+1)\Gamma(1-c)} {\Gamma(a)\Gamma(b)\Gamma(1-b)\Gamma(1-a)} \end{eqnarray} and \begin{eqnarray} \phi^+(r) + e^{i\omega_0}\phi^-(r) \to \cases{D t^{\frac{c -2}{2}}(1-t)^{\frac{b + a -c}{2}} \left[\bar\Gamma_1 + (1-t)^{c-a-b}\bar\Gamma_2 \right], & for $\epsilon=1$; \cr D t^{\frac{c -2}{2}} \left[\bar\Gamma_1 + (1-t)^{1+\frac{c}{2}}\bar\Gamma_2 \right], & for $\epsilon= -1$;} \label{matching2} \end{eqnarray} where \begin{eqnarray} \textstyle{\bar\Gamma_1 = \frac{\Gamma(c)\Gamma(c-a^+-b^+)\Gamma(a^++b^+-c+1)\Gamma(1-c)}{\Gamma(c-a^+)\Gamma(c-b^+)\Gamma(b^+-c+1)\Gamma(a^+-c+1)} + e^{i\omega_0}\frac{\Gamma(c)\Gamma(c-a^- -b^-)\Gamma(a^-+b^- -c+1)\Gamma(1-c)}{\Gamma(c-a^-)\Gamma(c-b^-)\Gamma(b^- -c+1)\Gamma(a^- -c+1)}}, \\ \textstyle{\bar\Gamma_2 = \frac{\Gamma(c)\Gamma(a^+ +b^+ -c)\Gamma(c-a^+ -b^+ +1)\Gamma(1-c)}{\Gamma(a^+)\Gamma(b^+)\Gamma(1-b^+)\Gamma(1-a^+ )} + e^{i\omega_0}\frac{\Gamma(c)\Gamma(a^- +b^- -c)\Gamma(c-a^- -b^- +1)\Gamma(1-c)}{\Gamma(a^-)\Gamma(b^-)\Gamma(1-b^-)\Gamma(1-a^-)}} \label{} \end{eqnarray} Now comparing the respective coefficients in Eq. (\ref{matching1}) and Eq. 
(\ref{matching2}) we get the eigenvalue equation \begin{eqnarray} f(E)= \frac{\Gamma(a)\Gamma(b)\Gamma(1-b)\Gamma(1-a)}{\Gamma(c-a)\Gamma(c-b)\Gamma(b-c+1)\Gamma(a-c+1)}= \frac{\sin(c-b)\pi\sin(c-a)\pi}{\sin a\pi \sin b\pi} = \mathcal M\frac{\cos(\beta +\omega_0/2)}{\cos(\alpha +\omega_0/2) }, \label{compare} \end{eqnarray} where \begin{eqnarray} \Gamma(a^\pm) = \chi_1 e^{\pm i\alpha_1},~\Gamma(b^\pm) = \chi_2 e^{\pm i\alpha_2},~ \Gamma(1-a^\pm) = \chi_3 e^{\pm i\alpha_3},~ \Gamma(1-b^\pm) = \chi_4 e^{\pm i\alpha_4}, \label{} \end{eqnarray} \begin{eqnarray} \Gamma(c -a^\pm) = \lambda_1 e^{\pm i\beta_1},~\Gamma(c -b^\pm) = \lambda_2 e^{\pm i\beta_2},~ \Gamma(b^\pm -c +1) = \lambda_3 e^{\pm i\beta_3},~ \Gamma(a^\pm -c +1) = \lambda_4 e^{\pm i\beta_4}, \label{} \end{eqnarray} \begin{eqnarray} \mathcal M = \frac{\lambda_1\lambda_2\lambda_3\lambda_4}{\chi_1\chi_2\chi_3\chi_4},~ \beta =\beta_1+\beta_2+\beta_3+\beta_4,~ \alpha = \alpha_1+\alpha_2+\alpha_3+\alpha_4\,. \label{} \end{eqnarray} The eigenvalues for a general value of $\omega_0$ can be calculated numerically, but for some values of the extension parameter $\omega_0$ in the boundary condition they can be obtained analytically. To illustrate the generalized boundary condition we now investigate some special cases. \subsection{Case 1} When the right hand side of Eq. (\ref{compare}) is infinite, we get $a = \pm n$ or $b = \pm n$. The choice $a= -n$ leads to the eigenvalue already calculated in Ref. \cite{bellucci}, \begin{eqnarray} E_{n,\,j,\,s} = \cases {\frac{\hbar^2}{2r^2_0} \left[\left(2n+ j_1+ \delta +1 \right)^2 - 4 -\frac{\omega^2 r_0^4}{\hbar^2}\right],& for $\epsilon=1$.
\cr -\frac{\hbar^2}{2r^2_0} \left[\left(2n+ j_1 -\delta +1 \right)^2 - 4 -\frac{\omega^2 r_0^4}{\hbar^2}\right]\,, & for $\epsilon= -1$.} \label{spectrum} \end{eqnarray} The radial quantum number is given by \begin{eqnarray} n= \cases{0,1,\dots ,\infty, & for $\epsilon=1$.\cr 0,1,\dots ,n^{\rm max}=[\delta/2-j-1], & for $\epsilon=-1$.} \label{quantumno} \end{eqnarray} For $a = +n$ the energy spectrum is given by the same expression (\ref{spectrum}), with $n$ replaced by $-n$. For $b= +n$, the energy spectrum is \begin{eqnarray} E_{n,\,j,\,s} = \cases{\frac{\hbar^2}{2r^2_0} \left[\left(2n - j_1 -1 -\delta \right)^2 - 4 -\frac{\omega^2 r_0^4}{\hbar^2}\right], & for $\epsilon=1$.\cr -\frac{\hbar^2}{2r^2_0} \left[\left(-2n + j_1 +1 + \delta \right)^2 - 4 -\frac{\omega^2 r_0^4}{\hbar^2}\right], & for $\epsilon= -1$.} \label{spectrum3} \end{eqnarray} For $b=-n$, $n$ in (\ref{spectrum3}) is replaced by $-n$, and the radial quantum number $n$ is given in (\ref{quantumno}). \subsection{Case 2} We can also make the right hand side of Eq. (\ref{compare}) zero, which gives $c-b= \pm n$ or $c-a =\pm n$. For $c-b= +n$ the energy spectrum becomes \begin{eqnarray} E_{n,\,j,\,s} = \cases{\frac{\hbar^2}{2r^2_0} \left[\left(-2n + j_1 + 1 -\delta \right)^2 - 4 -\frac{\omega^2 r_0^4}{\hbar^2}\right], & for $\epsilon=1$.\cr -\frac{\hbar^2}{2r^2_0} \left[\left(2n - j_1 - 1 + \delta \right)^2 - 4 -\frac{\omega^2 r_0^4}{\hbar^2}\right], & for $\epsilon= -1$.} \label{spectrum4} \end{eqnarray} For $c-b=-n$, $n$ in (\ref{spectrum4}) is replaced by $-n$, and the radial quantum number $n$ is given in (\ref{quantumno}). For $c-a =n$, \begin{eqnarray} E_{n,\,j,\,s} = \cases {\frac{\hbar^2}{2r^2_0} \left[\left(2n- j_1 -1 + \delta \right)^2 - 4 -\frac{\omega^2 r_0^4}{\hbar^2}\right],& for $\epsilon=1$.
\cr -\frac{\hbar^2}{2r^2_0} \left[\left(2n- j_1 -1 -\delta \right)^2 - 4 -\frac{\omega^2 r_0^4}{\hbar^2}\right]\,, & for $\epsilon= -1$.} \label{spectrum5} \end{eqnarray} For $c-a = -n$, $n$ in (\ref{spectrum5}) is replaced by $-n$, and the radial quantum number $n$ is given in (\ref{quantumno}). \subsection{Case 3} On the other hand, if we make the right hand side $\pm 1$, we obtain eigenvalues degenerate with respect to the orbital quantum number $j_1$. For $c-b= +n+b$ and $c-a= +n+a$ we get \begin{eqnarray} E_{n,\,s} = \frac{\hbar^2}{2r^2_0} \left[\left(n+ \delta \right)^2 - 4 -\frac{\omega^2 r_0^4}{\hbar^2}\right], \mbox{for} ~~\epsilon=1. \label{spectrum6} \end{eqnarray} For $c-b=+n +b$ and $c-a= -n+a$ we get \begin{eqnarray} E_{n,\,s} = -\frac{\hbar^2}{2r^2_0} \left[\left(n + \delta \right)^2 - 4 -\frac{\omega^2 r_0^4}{\hbar^2}\right], \mbox{for} ~~\epsilon= -1. \label{spectrum7} \end{eqnarray} \subsection{Case 4} We can even obtain a fully degenerate eigenvalue when $c-b = c-a \pm n$, with the spectrum \begin{eqnarray} E_{n } =\frac{\hbar^2}{2r^2_0}\left[ n^2 - 4- \frac{\omega^2 r_0^4}{\hbar^2}\right], \mbox{for}~~~\epsilon = +1\,. \label{spectrum2} \end{eqnarray} For $a+b+c=\pm n$ we get \begin{eqnarray} E_{n } =-\frac{\hbar^2}{2r^2_0}\left[ n^2 - 4- \frac{\omega^2 r_0^4}{\hbar^2}\right], \mbox{for}~~~\epsilon = -1\,. \label{spectrum2m} \end{eqnarray} We have so far discussed the oscillator for which the dimension of the complex coordinates is $N=2$, but the analysis can be generalized to arbitrary dimension $N > 1$. The arbitrary-dimensional conic oscillator Hamiltonian can be constructed from the conic oscillator of Ref. \cite{stefan} by setting the magnetic field to zero. Once the oscillator Hamiltonian is given for general dimensions, the rest of the self-adjoint extension procedure is exactly the same as above.
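The contrast between Case 1 and Case 4 can be made concrete numerically: for generic $\nu$ the Case 1 spectrum (\ref{spectrum}) depends on $j$ and $s$ through $j_1$ and $\delta$, while the Case 4 spectrum depends on the single quantum number $n$. The sketch below evaluates both for $\epsilon=+1$; the parameter values are illustrative choices, not fixed by the model:

```python
import math

HBAR, R0, OMEGA = 1.0, 1.0, 1.0          # illustrative units

def case1_energy(n, j, s, nu):
    """Case 1 spectrum (a = -n, eps = +1), with j1 and delta as defined
    below the hypergeometric equation."""
    j1 = math.sqrt(4 * j * (j + 1) / nu + 1 - 4 * (nu - 1) * s**2 / nu**2)
    delta = math.sqrt(4 * s**2 / nu**2 + (OMEGA * R0**2 / HBAR) ** 2)
    w2 = (OMEGA * R0**2 / HBAR) ** 2
    return (HBAR**2 / (2 * R0**2)) * ((2 * n + j1 + delta + 1) ** 2 - 4 - w2)

def case4_energy(n):
    """Case 4 spectrum (eps = +1): fully degenerate, depends on n alone."""
    w2 = (OMEGA * R0**2 / HBAR) ** 2
    return (HBAR**2 / (2 * R0**2)) * (n**2 - 4 - w2)

# For nu = 1, s = 0 one has j1 = 2j + 1, so the Case 1 levels depend only
# on the combination n + j (the oscillator-like degeneracy) ...
assert abs(case1_energy(1, 0, 0, 1.0) - case1_energy(0, 1, 0, 1.0)) < 1e-12
# ... while for nu = 2 the same pair of states splits (degeneracy broken).
assert abs(case1_energy(1, 0, 0, 2.0) - case1_energy(0, 1, 0, 2.0)) > 1e-6
```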
\section{\small{\bf {Discussion}}} \label{con} In conclusion, we have derived a generalized boundary condition for the harmonic oscillator of Ref. \cite{bellucci} and have shown that this generalized boundary condition can restore the angular momentum degeneracy in the energy spectrum for a fixed value of the extension parameter. We have also recovered the result of Ref. \cite{bellucci}. Moreover, we have shown that further solutions are allowed for different values of the extension parameter. \subsubsection*{Acknowledgements} We thank Palash B. Pal for comments on the manuscript and for helpful discussions.
\section{Introduction} The derivation of process models from partial observations has received significant attention in recent years, as it enables eliciting evidence-based formal representations of the real processes running in a system~\cite{AalstBook}. This discipline, known as \emph{process discovery}, rests on premises similar to those of \emph{regression analysis}: only when moderate assumptions are made on the input data can one derive faithful models that represent the underlying system. Formally, a technique for process discovery receives as input an \emph{event log}, containing the footprints of a process' executions, and produces a model (\eg, a Petri net) describing the real process. Many process discovery algorithms in the literature make strong implicit assumptions. A widely used one is \emph{log completeness}, requiring every possible trace of the underlying system to be contained in the event log. This is hard to satisfy for systems with cyclic or infinite behavior, and also for systems that evolve continuously over time. Another implicit assumption is the lack of \emph{noise} in the log, \ie, traces denoting exceptional behavior that should not be contained in the derived process model. Finally, every discovery technique has a \emph{representational bias}. For instance, the $\alpha$-algorithm~\cite{Aalst11} can only discover Petri nets of a specific class (\emph{structured workflow nets}). Few attempts have been made to remove the aforementioned assumptions. One promising direction is to ease the discovery problem by assuming that more knowledge about the underlying system is available as input. Along this line, the works in~\cite{Ferreira2006,Lamma2008,Goedertier2009} are among the few that use domain knowledge in terms of \emph{negative information}, expressed by traces which do not represent process behavior.
In this paper we follow this direction, but additionally incorporate a crucial piece of information for the task of process discovery: whether pairs of activities are {\em independent} of each other. One example could be the different tests that a patient should undergo in order to obtain a diagnosis: a blood test, an allergy test, and a radiology test, which are independent of each other. We believe that obtaining this coarse-grained independence information from a domain expert is an easy and natural step; however, if it is not available, one can estimate it by analysing the log with some of the techniques in the literature, e.g., the relations computed by the $\alpha$-algorithm~\cite{AalstBook}. \begin{figure}[t] \begin{centering} \includegraphics[scale=0.6]{Figures/flow.pdf} \caption{\label{fig:flow} Unfolding-based process discovery.} \end{centering} \end{figure} The approach of this paper is summarized in \autoref{fig:flow}. Starting from an event log and an independence relation on its set of activities, we conceptually construct a collection of \emph{labeled partial orders} whose linearizations include both the sequences in the log as well as those in the same Mazurkiewicz trace~\cite{Mazurkiewicz86}, \ie, those obtained via successive permutations of independent activities. We then merge (the common prefixes of) this collection into an \emph{event structure} which we next transform into an occurrence net representing the same behavior. Finally, we perform a controlled generalization by selectively folding the occurrence net into a Petri net. This step yields a net that (a) can execute all traces contained in the event log, and (b) generalizes the behavior of the log in a controlled manner, introducing no execution given in the collection of negative traces. The folding process is driven by a \emph{folding equivalence relation}, which we synthesize using SMT. Different folding equivalences guarantee different properties of the final net.
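The Mazurkiewicz closure used in this step is easy to prototype: the equivalence class of a log trace is obtained by exhaustively swapping adjacent independent activities. A small sketch (the trace and independence relation below are illustrative, echoing the blood/allergy/radiology example):

```python
def maz_class(trace, indep):
    """All linearizations in the Mazurkiewicz trace of `trace`, where `indep`
    is a set of frozensets {a, b} of pairwise-independent activities.
    Exhaustive closure under single adjacent swaps of independent letters."""
    seen = {tuple(trace)}
    stack = [tuple(trace)]
    while stack:
        w = stack.pop()
        for i in range(len(w) - 1):
            if frozenset((w[i], w[i + 1])) in indep:
                u = w[:i] + (w[i + 1], w[i]) + w[i + 2:]
                if u not in seen:
                    seen.add(u)
                    stack.append(u)
    return seen

# blood (b), allergy (a) and radiology (r) tests, mutually independent:
indep = {frozenset(p) for p in (("b", "a"), ("b", "r"), ("a", "r"))}
cls = maz_class(("b", "a", "r"), indep)
assert len(cls) == 6          # all 3! interleavings belong to the same trace
```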
The paper proposes three different classes of equivalences and studies their properties. In particular we define a class of \emph{independence-preserving} folding equivalences, guaranteeing that the natural independence relation in the final net will equal the one given by the expert. In summary, the main contributions of the paper are: \begin{itemize} \item A general and efficient translation from prime event structures to occurrence nets (\autoref{sec:discovery}). \item Three classes of folding equivalences of interest not only in process discovery but also in formal verification of concurrent systems (\autoref{sec:generalization}). \item A method to synthesize folding equivalences using SMT (\autoref{sec:computing}). \item An implementation of our approach and experimental results witnessing its capacity to rediscover even the original model (\autoref{sec:experiments}). \end{itemize} Remarkably, the discovery technique of this paper realizes for the first time one of the operations foreseen in~\cite{DumasG15}, which advocates for the unified use of event structures to support process mining operations. \section{Preliminaries} \label{sec:prelim} \paragraph{Events:} given an alphabet of actions $A$, several occurrences of a given action can happen in a run or execution. In this paper we consider a set~$E$ of events representing the occurrences of actions in executions. Each event $e \in E$ has the form $e\eqdef\tup{a,H}$, where $a \in A$ and~$H \subseteq E$ is a subset of events causing~$e$ (its history). The label of an event is given by a function $\lambda \colon E \to A$ defined as $\lambda(\langle a,H \rangle) \eqdef a$. \paragraph{Labeled partial orders (lpos):} we represent a labeled partial order by a pair $(E, \le)$, where ${\le} \subseteq E \times E$ is a reflexive, antisymmetric and transitive relation on the set~$E$ of events.
Two distinct events $e,e' \in E$ can be either ordered ($e \le e'$ or $e' \le e$) or concurrent ($e \not\le e'$ and $e' \not\le e$). Observe that all events are implicitly labeled by $\lambda$. \paragraph{Petri nets:} a net consists of two disjoint sets $P$ and $T$, representing respectively places and transitions, together with a set $F$ of flow arcs. The notion of state of the system is captured by the markings of the net. A marking is a multiset $M$ of places, \ie, a map $M \colon P \to \nat$. We focus on so-called safe nets, where markings are sets, \ie, $M(p) \in \{0,1\}$ for all $p \in P$. A Petri net (PN) is a net together with an initial marking and a total function that labels its transitions over an alphabet $A$ of observable actions. Formally, a PN is a tuple $\pnet \eqdef (P,T,F,\lambda,M_0)$ where \emph{(i)} $P \not = \emptyset$ is a set of places; \emph{(ii)} $T \not = \emptyset$ is a set of transitions such that $P \cap T = \emptyset$; \emph{(iii)} $F \subseteq (P \times T) \cup (T \times P)$ is a set of flow arcs; \emph{(iv)} $\lambda \colon T \to A$ is a labeling mapping; and \emph{(v)} $M_0 \subseteq P$ is an initial marking. Elements of $P \cup T$ are called the nodes of $\mathcal{N}$. For a transition $t \in T$, we call $\preset t \eqdef \{ p \mid (p,t) \in F \}$ the preset of $t$, and $\postset t \eqdef \{p \mid (t,p) \in F \}$ the postset of $t$. In figures, we represent, as usual, places by empty circles, transitions by squares, $F$ by arrows, and the marking of a place $p$ by black tokens in $p$. A transition $t$ is enabled in marking $M$, written $M \arrow t$, iff $\preset t \subseteq M$. An enabled transition can fire, resulting in a new marking $M' \eqdef (M \backslash \preset t) \cup \postset t$. This firing relation is denoted by $M \arrow t M'$. A marking $M$ is reachable from $M_0$ if there exists a firing sequence, \ie, a sequence of transitions $t_1, \dots, t_n$ such that $M_0 \arrow{t_1} \dots \arrow{t_n} M$.
The set of reachable markings from $M_0$ is denoted by $\reach \mathcal{N}$. The set of co-enabled transitions is ${\coe \mathcal{N}} \eqdef \{ (t,t') \mid \exists M \in \reach \mathcal{N} \colon \preset t \subseteq M \land \preset{t'} \subseteq M \}$. The set of observations of a net is the image over $\lambda$ of its fireable sequences, \ie, $\sigma \in \obs \mathcal{N}$ iff $M_0 \arrow{t_1} \dots \arrow{t_n} M$ and $\lambda(t_1) \dots \lambda(t_n) = \sigma$. \paragraph{Occurrence nets:} occurrence nets can be seen as (possibly infinite) Petri nets with a special acyclic structure that highlights conflict between transitions competing for resources. Places and transitions of an occurrence net are usually called conditions and events. Formally, let $\net \eqdef (P,T,F)$ be a net, $<$ the transitive closure of $F$, and $\leq$ the reflexive closure of $<$. We say that transitions $t_1$ and $t_2$ are in structural conflict, written $t_1 \cfl[s] t_2$, if and only if $t_1 \not = t_2$ and $\preset{t_1} \cap \preset{t_2} \not = \emptyset$. Conflict is inherited along $<$, that is, the conflict relation $\cfl$ is given by $a \cfl b \Leftrightarrow \exists t_a,t_b \in T \colon t_a \cfl[s] t_b \land t_a \leq a \land t_b \leq b$. Finally, the concurrency relation $\bf co$ holds between nodes $a,b \in P \cup T$ that are neither ordered nor in conflict, \ie, $a\textbf{ co } b \Leftrightarrow \neg (a \leq b) \land \neg (a \cfl b) \land \neg (b \leq a)$. A net $\beta \eqdef (B,E,F)$ is an occurrence net iff \emph{(i)} $\leq$ is a partial order; \emph{(ii)} for all $b \in B$, $\lvert \preset b \rvert \in \{0,1\}$; \emph{(iii)} for all $x\in B \cup E$, the set $[x] := \{y \in E \mid y \leq x\}$ is finite; \emph{(iv)} there is no self-conflict, \ie, there is no $x \in B \cup E$ such that $x\cfl x$. The initial marking $M_0$ of an occurrence net is the set of conditions with an empty preset, \ie, $\forall b \in B\colon b \in M_0 \Leftrightarrow \preset b = \emptyset$.
Every $\leq$-closed and conflict-free set of events $C$ is called a configuration and generates a reachable marking defined as $\marking C \eqdef (M_0 \cup \postset C) \setminus \preset C$. We also assume a labeling function $\lambda \colon E \to A$ from events in $\beta$ to the alphabet $A$. Conditions are of the form $\langle e , X \rangle$ where $e \in E$ is the event generating the condition and $X \subseteq E$ are the events consuming it. Occurrence nets are the mathematical form of the partial order unfolding semantics of a Petri net~\cite{EsparzaRV02}; we use the terms occurrence net and unfolding interchangeably. \iftoggle{long}{ \begin{lemma} \label{lem:cocond} Let $\beta$ be an occurrence net such that $(e,e') \in {\coe \beta}$, then for every $b \in \preset e$ and $b' \in \preset{e'}$ we have $b=b'$ or $b \textbf{ co } b'$. \end{lemma} \begin{proof} Routine. \end{proof} } Conditions in an occurrence net can be removed by keeping the causal dependencies and introducing a conflict relation; the object obtained is an event structure~\cite{NielsenPW81}. \paragraph{Event structures:} an event structure is a tuple $\les \eqdef (E,\leq,\#)$ where $E$ is a set of events; $\leq\ \subseteq E \times E$ is a partial order (called causality) satisfying the property of finite causes, \ie, $\forall e \in E : \lvert [e] \rvert < \infty$ where $[e] := \{ e' \in E \mid e' \leq e \}$; and ${\cfl} \subseteq E \times E$ is an irreflexive symmetric relation (called conflict) satisfying the property of conflict heredity, \ie, $\forall e,e',e'' \in E : e \cfl e' \land e' \leq e'' \Rightarrow e \cfl e''$. Note that in most cases one only needs to consider reduced versions of the relations $\leq$ and $\cfl$, which we denote $\lessdot$ and $\dcfl$, respectively. Formally, $\lessdot$ (which we call direct causality) is the transitive reduction of $\leq$, and $\dcfl$ (direct conflict) is the smallest relation inducing $\cfl$ through the property of conflict heredity.
A configuration is a computation state represented by a set of events that have occurred; if an event is present in a configuration, then so must be all the events on which it causally depends. Moreover, a configuration does not contain conflicting events. Formally, a configuration of $(E,{\leq},{\cfl})$ is a set $C \subseteq E$ such that $e \in C \Rightarrow (\forall e' \leq e : e' \in C)$, and $(e \in C \land e \cfl e') \Rightarrow e'\not\in C$. The set of configurations of~$\mathcal{E}$ is denoted by~$\Omega(\mathcal{E})$. \paragraph{Mazurkiewicz traces:} let $A$ be a finite alphabet of letters and $\meddiamond \subseteq A \times A$ a symmetric and irreflexive relation called independence. The relation $\meddiamond$ induces an equivalence relation $\equiv_\meddiamond$ over $A^*$. Two words $\sigma$ and $\sigma'$ are equivalent ($\sigma \equiv_\meddiamond \sigma'$) if there exists a sequence $\sigma_1 \dots \sigma_k$ of words such that $\sigma=\sigma_1, \sigma'=\sigma_k$ and for all $1\leq i < k$ there exist words $\sigma_i', \sigma_i''$ and letters $a_i,b_i$ satisfying $$\sigma_i=\sigma_i' a_i b_i \sigma_i'', \hspace{5mm} \sigma_{i+1}=\sigma_i' b_i a_i \sigma_i'', \hspace{4mm} \text{and } (a_i,b_i) \in \meddiamond$$ Thus, two words are equivalent by $\equiv_\meddiamond$ if one can be obtained from the other by successive commutations of neighboring independent letters. For a word $\sigma \in A^*$, the equivalence class of $\sigma$ under $\equiv_\meddiamond$ is called a Mazurkiewicz trace~\cite{Mazurkiewicz86}. \bigskip We now describe the problem tackled in this paper, one of the main challenges in the {\em process mining} field~\cite{AalstBook}. \paragraph{Process Discovery:} a log ${\mathcal{L}}$ is a finite set of traces over an alphabet $A$ representing the footprints of the real process executions of a system $\mathcal{S}$ that is only (partially) visible through these runs.
Process discovery techniques aim at extracting from a log ${\mathcal{L}}$ a process model ${\mathcal{M}}$ (\eg, a Petri net) with the goal of eliciting the process underlying ${\mathcal{S}}$. By relating the behaviors of ${\mathcal{L}}$, $\obs {\mathcal{M}}$ and ${\mathcal{S}}$, particular concepts can be defined~\cite{BuijsDA14}. A log is \emph{incomplete} if ${\mathcal{S}} \backslash {\mathcal{L}} \ne \emptyset$. A model ${\mathcal{M}}$ \emph{fits} log ${\mathcal{L}}$ if ${\mathcal{L}} \subseteq \obs {\mathcal{M}}$. A model is \emph{precise} in describing a log ${\mathcal{L}}$ if $\obs {\mathcal{M}} \backslash {\mathcal{L}}$ is small. A model ${\mathcal{M}}$ represents a \emph{generalization} of log ${\mathcal{L}}$ with respect to system ${\mathcal{S}}$ if some behavior in ${\mathcal{S}} \backslash {\mathcal{L}}$ exists in $\obs {\mathcal{M}}$. Finally, a model ${\mathcal{M}}$ is \emph{simple} when it has the minimal complexity in representing $\obs {\mathcal{M}}$, following the well-known \emph{Occam's razor} principle. It is widely acknowledged that the size of a process model is the most important simplicity indicator. Let ${\cal U}^{\cal N}$ be the universe of nets; we define a function $\hat c: {\cal U}^{\cal N} \to \mathbb{N}$ that measures the simplicity of a net by counting the number of some of its elements, \eg, its transitions and/or places. \section{Independence-Preserving Discovery} \label{sec:discovery} Let $\mathcal{S}$ be a system whose set of actions is~$A$. Given two actions $a,b \in A$ and one state~$s$ of~$\mathcal{S}$, we say that~$a$ and~$b$ \emph{commute} at~$s$ when \begin{itemize} \item if $a$ can fire at $s$ and its execution reaches state~$s'$, then $b$ is possible at~$s$ iff it is possible at~$s'$; and \item if both $a$ and $b$ can fire at~$s$, then firing $ab$ and $ba$ reaches the same state.
\end{itemize} Commutativity of actions at states induces an equivalence relation on the set of executions of the system $\mathcal{S}$; it is a \emph{ternary} relation, relating two actions and one state. Since asking the expert to provide the commutativity relation of~$\mathcal{S}$ would be difficult, we restrict ourselves to unconditional independence, \ie, a conservative approximation of the commutativity relation that is a property of the actions alone, as opposed to actions and states. An \emph{unconditional independence} relation of~$\mathcal{S}$ is any \emph{binary}, symmetric, and irreflexive relation~$\meddiamond \subseteq A \times A$ satisfying that if~$a \meddiamond b$ then~$a$ and~$b$ commute at \emph{every reachable state} of~$\mathcal{S}$. If $a, b$ are not independent according to $\meddiamond$, then they are dependent, denoted by $a \diamondtimes b$. In this section, given a log~$\mathcal{L} \subseteq A^*$ representing some behaviors of~$\mathcal{S}$ and an arbitrary unconditional independence~$\meddiamond$ of~$\mathcal{S}$ provided by the expert, we construct an occurrence net whose executions contain $\mathcal{L}$ together with all sequences in $A^*$ which are $\equiv_{\meddiamond}$-equivalent to some sequence in~$\mathcal{L}$. If commuting actions are not declared independent by the expert (\ie, $\meddiamond$~is smaller than it could be), then the discovered model~$\mathcal{M}$ will be more sequential than $\mathcal{S}$; if some actions that did not commute are marked as independent, then $\mathcal{M}$ will not be a truthful representation of $\mathcal{S}$. The use of expert knowledge in terms of an independence relation is a novel feature not considered before in the context of process discovery.
We believe this is a powerful and practical way to fight the problem of log incompleteness, since observing in the log a single trace representative of a class of $\equiv_\meddiamond$ suffices to include the whole set of traces of that class in the process model's executions. Our final goal is to generate a Petri net that represents the behavior of the underlying system. We start by translating~$\mathcal{L}$ into a collection of partial orders whose shape depends on the specific definition of~$\meddiamond$. \begin{definition} \label{def:log2lpo} Given a sequence $\sigma \in A^*$ and an independence relation $\meddiamond \subseteq A \times A$, we associate to~$\sigma$ a labeled partial order $\lpo \sigma$ inductively defined by: \begin{enumerate} \item If $\sigma = \varepsilon$, then let $\bot \eqdef \tup{\tau, \emptyset}$ and set $\lpo \sigma \eqdef (\set \bot, \emptyset)$. \item If $\sigma = \sigma' a$, then let $\lpo{\sigma'} \eqdef (E',\leq')$ and let $e \eqdef \tup{a, H}$ be the single event such that~$H$ is the unique $\subseteq$-minimal, causally-closed set of events in~$E'$ satisfying that for any event $e' \in E'$, if $\lambda(e') \diamondtimes a$, then $e' \in H$. Then set $\lpo \sigma \eqdef (E,\leq)$ with $E \eqdef E' \cup \{ e \}$ and ${\leq} \eqdef {\leq'} \cup (H \times \{ e \})$. \end{enumerate} \end{definition} Since a system rarely generates a single observation, we need a compact way to model all the possible observations of the system. We represent all the partially ordered executions of a system with an event structure.
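The inductive construction above can be read operationally. The following Python fragment is a simplified, hypothetical sketch (the auxiliary event $\bot$ of case 1 is omitted, and each event is represented, as in the paper, by the pair of its label and its causal history, so events with equal label and history coincide by construction):

```python
def lpo_of_sequence(sigma, independent):
    """Sketch of the lpo construction of Definition log2lpo.

    sigma: a word over the action alphabet.
    independent(a, b): the expert's relation (symmetric, irreflexive).
    Each event is the pair (label, frozenset of its strict causes); the
    history H is the causally closed set of earlier dependent events.
    Returns (events, past) with past[e] the strict causal history of e.
    """
    events, past = [], {}
    for a in sigma:
        H = set()
        for e in events:
            if not independent(a, e[0]):
                H.add(e)
                H |= past[e]          # causal closure of the history
        e = (a, frozenset(H))
        past[e] = H
        events.append(e)
    return events, past
```

For instance, on the word $abc$ with only $a \meddiamond b$, the events of $a$ and $b$ end up concurrent, while the event of $c$ causally follows both.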
\begin{definition} \label{def:lpo2es} Given a set of partial orders $ S \eqdef \{ (E_i,\leq_i) \mid 1 \leq i \leq n \}$, we define $\es S \eqdef (E,\leq, \cfl)$ where: \begin{enumerate} \item $E \eqdef \bigcup\limits_{1 \leq i \leq n} E_i$, \item ${\leq} \eqdef (\bigcup\limits_{1 \leq i \leq n} \leq_i)^*$, and \item for $e \eqdef \langle a,H \rangle$ and $e' \eqdef \langle b,H' \rangle$, we have that $e \dcfl e'$ (read: $e$ and $e'$ are in direct conflict) iff $e' \not \in H$, $e \not \in H'$ and $a \diamondtimes b$. The conflict relation $\cfl$ is the smallest relation that includes $\dcfl$ and is inherited \wrt $\leq$, \ie, for $e \cfl e'$ and $e \leq f$, $e' \leq f'$, one has $f \cfl f'$. \end{enumerate} \end{definition} \iftoggle{long}{ \begin{lemma} \label{le:lpo2es} $\es S\eqdef (E,\leq, \cfl)$ from \autoref{def:lpo2es} is an event structure. \end{lemma} \begin{proof} Clearly, $\leq$ is reflexive, antisymmetric and transitive by the Kleene closure: violations of these properties of $\leq$ would contradict the corresponding properties in some lpo in $S$, since every event is characterized by the set of its causal events. Now, the definition of $e \dcfl e'$ is clearly symmetric in the roles of $e$ and $e'$ since $\diamondtimes$ is a symmetric relation; symmetry is also preserved under inheritance \wrt $\leq$. \end{proof} \begin{lemma} $\es S\eqdef (E,\leq, \cfl)$ from \autoref{def:lpo2es} is unique. \end{lemma} \begin{proof} The set $E$ and the relation $\leq$ are clearly unique since they are defined via the union and Kleene closure operators, respectively, which yield unique results. Now, $\dcfl$ is unique since its definition in \autoref{def:lpo2es} is based on removing all causality from $\diamondtimes$, and hence only one possible relation is obtained for $\dcfl$. By taking the smallest relation including $\dcfl$ that is inherited with respect to $\leq$, again only one possible relation is obtained for $\cfl$.
\end{proof} } Given a set of finite partial orders $S$, we now show that $S$ is included in the configurations of the event structure obtained by \autoref{def:lpo2es}. This means that our event structure is a fitting representation of~$\mathcal{L}$. \begin{proposition} \label{lemma:fit} If $S$ is finite, then $S \subseteq \Omega(\es S)$. \end{proposition} \iftoggle{long}{ \begin{proof} By \autoref{def:lpo2es}.1, all the events of the partial orders are part of the event structure. Let $(E_i,\leq_i) \in S$; clearly $E_i$ is causally closed in $\es S$ (\autoref{def:lpo2es}.2). By \autoref{def:lpo2es}.3, any event in conflict with some event $e \in E_i$ is not in its past; we can conclude that $E_i$ is conflict free and therefore a configuration. \qed \end{proof} } Since we want to produce a Petri net, we now need to ``\emph{attach conditions}'' to the result of \autoref{def:lpo2es}. Event structures and occurrence nets are conceptually very similar objects, so this might seem straightforward to the acquainted reader. However, this definition is crucial for the success of the subsequent folding step (\autoref{sec:generalization}), as we will be constrained to merge conditions in the preset and postset of an event when we merge the event. As a result, the conditions that we produce now should constrain the future folding step as little as possible. \begin{definition} \label{def:es2on} Given an event structure $\les \eqdef (E,\leq,\#)$ we construct the occurrence net $\beta \eqdef (B,E \backslash \{ \bot \},F)$ in two steps \begin{enumerate} \item Let $G \eqdef (V,A)$ be a graph where $V \eqdef E$ and $(e_1,e_2) \in A$ iff $e_1 \dcfl e_2$. For each clique (maximal complete subgraph) $K \eqdef \{e_1, \dots, e_n \}$ of $G$, let $C_K \eqdef [e_1] \cap \dots \cap [e_n]$ and $e_K \in \max (C_K)$. We add a condition $b$ to $B$ and set $b \in \postset{e_K}$ and $b \in \preset{e_i}$ for $i = 1 \dots n$.
\item For each $e \in E$, let $G_e \eqdef (V_e,A_e)$ be a graph where $V_e \eqdef \{ e' \in E \mid e \lessdot e' \}$ and $(e_1,e_2) \in A_e$ iff $\lambda(e_1) \diamondtimes \lambda(e_2)$. For each clique $K_e := \{e_1, \dots, e_n \}$ of $G_e$, we add a condition $b$ to $B$ and set $b \in \postset{e}$ and $b \in \preset{e_i}$ for $i = 1 \dots n$. \end{enumerate} \end{definition} \autoref{def:es2on}.1 adds a condition for every set of pairwise direct conflicting events; the condition is generated by some event $e_K$ which is in the past of every conflicting event and is consumed by all of them; by the latter, the conflict of the event structure is preserved in the occurrence net. For each event and its immediate successors, \autoref{def:es2on}.2 adds conditions between them to preserve causality. To minimize the number of conditions, a single condition is generated for the successor events having dependent labels. This step does not introduce new conflicts in the occurrence net: if such successor events have dependent labels and neither is in the past of the other, then by \autoref{def:lpo2es} they are already in conflict in the event structure. We note that Winskel already explained, in categorical terms, how to relate an event structure with an occurrence net~\cite{Winskel84a}. However, his definition is of interest only in that context, while ours focuses on a practical and efficient translation. Given a log $\mathcal{L}$ and an independence relation $\meddiamond$, the net obtained by applying Definitions \ref{def:log2lpo}, \ref{def:lpo2es} and \ref{def:es2on}, in this order, is denoted by $\on_{\logs,\ind}$. Since every trace in $\mathcal{L}$ is a linearization of some of the partial orders in the set $S$ obtained by \autoref{def:log2lpo}, and these partial orders are included by \autoref{lemma:fit} in the configurations of $\es S$ (which are the same as the configurations of $\on_{\logs,\ind}$), the obtained net is fitting.
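The merge step of \autoref{def:lpo2es} also admits a direct operational reading. The following Python fragment is a simplified sketch under the same pair encoding of events (label plus history), so equal occurrences across lpos coincide automatically; the heredity closure of the conflict relation is omitted for brevity:

```python
def merge_lpos(lpos, independent):
    """Sketch of the merge of Definition lpo2es.

    Each lpo is (events, past), where events are (label, frozenset-history)
    pairs and past[e] is the set of strict causes of e.
    Returns (E, causes, dconf): merged events, merged causality, and the
    direct conflicts of item 3 (heredity closure not computed here).
    """
    E, causes = set(), {}
    for events, past in lpos:
        E.update(events)
        for e in events:
            causes.setdefault(e, set()).update(past[e])
    # item 3: dependent labels, and neither event is in the other's history
    dconf = {(e, f) for e in E for f in E
             if e != f and not independent(e[0], f[0])
             and e not in causes[f] and f not in causes[e]}
    return E, causes, dconf
```

On the log $\{ab, ba\}$ with $\meddiamond = \emptyset$, for example, the two initial events end up in direct conflict, while causally ordered events never do.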
\begin{proposition} \label{prop:fit} Let $\mathcal{L}$ be a log and $\meddiamond$ an independence relation, for every $\sigma \in \mathcal{L}$ we have $\sigma \in \obs \on_{\logs,\ind}$. \end{proposition} \begin{proof} Since every trace is a linearization of some partial order obtained by \autoref{def:log2lpo}, by \autoref{lemma:fit} every trace is a linearization of some maximal configuration of the event structure; since causality and conflict are preserved by \autoref{def:es2on}, the configurations of both objects coincide, so the trace corresponds to a sequential execution of the occurrence net and the result holds. \end{proof} It is worth noticing that the obtained net generalizes the behavior of the log, but in a controlled manner imposed by the independence relation. For instance, if $\mathcal{L} \eqdef \set{ab}$ and $a \meddiamond b$, then $ba \in \obs \on_{\logs,\ind}$, even if this behavior was not present in the log. If the expert rightly declared~$a$ and~$b$ independent (\ie, if they commute at all states of~$\mathcal{S}$), then necessarily~$ba$ is a possible observation of~$\mathcal{S}$, even if it is not in~$\mathcal{L}$. The extra information provided by the expert allows us to generalize the discovered model in a provably sound manner, thus coping with the log incompleteness problem. \iftoggle{long}{ \begin{proposition} \label{ass:cond_dep} Let $\on_{\logs,\ind} \eqdef (B,E,F)$ be the unfolding obtained from the log $\mathcal{L}$ with $\meddiamond$ as the independence relation. For all pairs of events $e,e' \in E$ such that $(e,e') \in {\coe \on_{\logs,\ind}}$ we have $\preset e \cap \preset{e'} \ne \emptyset \Leftrightarrow \lambda(e) \diamondtimes \lambda(e')$.
\end{proposition} \begin{proof} \item{$\Rightarrow$)} Let $b \in \preset e \cap \preset{e'}$; if $b$ was added by \autoref{def:es2on}.2, the result trivially holds since the condition was added to the presets of the events of a clique, which relates only events with dependent labels; if $b$ was added by \autoref{def:es2on}.1, then it was added to the presets of the events of a clique relating only direct conflicting events, and then $e \dcfl e'$ in the event structure. This means that neither can be in the past of the other, because otherwise some event would be in self-conflict, which is ruled out in event structures; now, by \autoref{def:lpo2es} we have $\lambda(e) \diamondtimes \lambda(e')$. \item{$\Leftarrow$)} Let $\lambda(e) \diamondtimes \lambda(e')$; events $e$ and $e'$ could not have been generated from the same trace, since otherwise they would be causally related (see \autoref{def:log2lpo}), contradicting $(e,e') \in {\coe \on_{\logs,\ind}}$; since they were generated from different traces and have dependent labels, we have from \autoref{def:lpo2es} that $e \dcfl e'$; since they are in conflict, \autoref{def:es2on}.1 adds a condition to their presets and finally $\preset e \cap \preset{e'} \ne \emptyset$. \qed \end{proof} } The independence relation between labels gives rise to an arbitrary relation between transitions of a net (not necessarily an independence relation): \begin{definition} \label{def:indt} Let $\meddiamond \subseteq A \times A$ be an independence relation and $\pnet \eqdef (P,T,F,\lambda,M_0)$ a net with labeling $\lambda \colon T \to A$. We define the relation ${\indt N} \subseteq T \times T$ between transitions of~$N$ as $$ t \indt N t' \Leftrightarrow \lambda(t) \meddiamond \lambda(t'). $$ \end{definition} In the next section we will define an approach to fold $\on_{\logs,\ind}$ into a Petri net whose natural independence relation equals $\meddiamond$. To formalize our approach we first need to define such a natural independence.
\begin{definition} \label{def:indu} Let $\net \eqdef (P,T,F)$ be a net. We define the \emph{natural independence} relation ${\indu N} \subseteq T \times T$ on~$N$ as $$t \indu N t' \Leftrightarrow \preset t \cap \preset{t'} = \emptyset \land \postset t \cap \preset{t'} = \emptyset \land \preset t \cap \postset{t'} = \emptyset. $$ \end{definition} In fact, one can prove that when~$N$ is safe, $\indu N$ is the notion of independence underlying the unfolding semantics of~$N$. In other words, the equivalence classes of $\equiv_{\indu N}$ are in bijective correspondence with the configurations in the unfolding of~$N$. The following result shows that the natural independence of the discovered occurrence net corresponds to the relation provided by the expert, when both are restricted to the set of co-enabled transitions. \begin{theorem} \label{prop:ind_on} Let $\on_{\logs,\ind}$ be the occurrence net obtained from the log $\mathcal{L}$ with $\meddiamond$ as the independence relation, then $$ {\indt \on_{\logs,\ind}} \cap {\coe \on_{\logs,\ind}} = {\indu \on_{\logs,\ind}} \cap {\coe \on_{\logs,\ind}} $$ \end{theorem} \iftoggle{long}{ \begin{proof} \item[$\subseteq)$] Let $(e,e') \in {\indt \on_{\logs,\ind}} \cap {\coe \on_{\logs,\ind}}$; then from \autoref{def:indt} it follows that $\lambda(e) \meddiamond \lambda(e')$ and by \autoref{ass:cond_dep} we have $\preset{e} \cap \preset{e'} = \emptyset$. Suppose $\postset{e} \cap \preset{e'} \not = \emptyset$; then $\exists b_1 \in \preset{e}$ such that $\forall b_2 \in \preset{e'}$ it holds that $b_1< b_2$, and by \autoref{lem:cocond} $(e,e') \not \in {\coe \on_{\logs,\ind}}$, which leads to a contradiction. Using the same reasoning it can be proven that $\preset{e} \cap \postset{e'} = \emptyset$. By \autoref{def:indu} we can conclude that $(e,e') \in {\indu \on_{\logs,\ind}} \cap {\coe \on_{\logs,\ind}}$.
\item[$\supseteq)$] Let $(e,e') \in {\indu \on_{\logs,\ind}} \cap {\coe \on_{\logs,\ind}}$; by \autoref{def:indu} we get $\preset{e} \cap \preset{e'} = \emptyset$ and, since they are co-enabled, by \autoref{ass:cond_dep} it follows that $\lambda(e) \meddiamond \lambda(e')$; finally, by \autoref{def:indt} we have $(e,e') \in {\indt \on_{\logs,\ind}}$ and, since the events were co-enabled by assumption, $(e,e') \in {\indt \on_{\logs,\ind}} \cap {\coe \on_{\logs,\ind}}$. \qed \end{proof} } \section{Introducing Generalization} \label{sec:generalization} The construction described in the previous section guarantees that the unfolding obtained is fitting (see \autoref{lemma:fit}). However, the difference between~$\mathcal{S}$ and~$\mathcal{L}$ may be significant (\eg,~$\mathcal{S}$ can contain cyclic behavior that can be instantiated an arbitrary number of times, whereas only finite traces exist in~$\mathcal{L}$) and the unfolding may be poor in generalization. The goal of this section is to generalize $\on_{\logs,\ind}$ in such a way that the right patterns from~$\mathcal{S}$, partially observed in~$\mathcal{L}$ (\eg, loops), are incorporated in the generalized model. To generalize, we fold the discovered occurrence net. This folding is driven by an equivalence relation~$\sim$ on~$E \cup B$ that dictates which events merge into the same transition, and analogously for conditions; events cannot be merged with conditions. We write $[x]_\sim \eqdef \{ x' \mid x \sim x' \}$ for the equivalence class of node $x$. For a set $X$, $[X]_\sim \eqdef \{ [x]_\sim \mid x \in X \}$ is a set of equivalence classes. \begin{definition}[Folded net \cite{FahlandA13}] \label{def:foldednet} Let $\beta \eqdef (B,E,F)$ be an occurrence net and $\sim$ an equivalence relation on the nodes of $\beta$.
The folded Petri net (\wrt~$\sim$) is defined as $\beta^\sim \eqdef (P_\sim,T_\sim,F_\sim,{M_0}_\sim)$ where \begin{align*} P_\sim & \eqdef \{ [b]_\sim \mid b \in B \}, & F_\sim & \eqdef \{ ([x]_\sim,[y]_\sim) \mid (x,y) \in F \}, \\ T_\sim & \eqdef \{ [e]_\sim \mid e \in E \}, & {M_0}_\sim([b]_\sim) & \eqdef \lvert \{ b' \in [b]_\sim \mid \preset{b'} = \emptyset \} \rvert. \end{align*} \end{definition} Notice that the initial marking of the folded net is not necessarily safe. Safeness of the net depends on the chosen equivalence relation (see \autoref{prop:safe}). \subsection{Language-Preserving Generalization} \label{sec:lpgeneralization} Different folding equivalences guarantee different properties of the folded net. From now on we focus our attention on three interesting classes of folding equivalences. The first preserves the sequential executions of $\on_{\logs,\ind}$. \begin{definition}[Sequence-preserving folding equivalence] \label{def:fold1} Let $\beta$ be an occurrence net; an equivalence relation $\sim$ is called a sequence-preserving (SP) folding equivalence iff $e_1 \sim e_2$ implies $\lambda(e_1) = \lambda(e_2)$ and $[\preset{e_1}]_\sim = [\preset{e_2}]_\sim$ for all events $e_1,e_2 \in E$. \end{definition} From the definition above it follows that $e_1 \sim e_2$ implies $\forall b \in \preset{e_1}: \exists b' \in \preset{e_2}$ with $b \sim b'$. Since a SP folding equivalence merges only equally labeled events, we define $\lambda([e]_\sim) \eqdef \lambda(e)$.
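The quotient construction of \autoref{def:foldednet} and the SP condition of \autoref{def:fold1} can be sketched as follows (a hypothetical, simplified encoding where nodes are hashable identifiers and `rep` maps each node to a canonical representative of its $\sim$-class):

```python
from collections import Counter

def fold(B, E, F, M0, rep):
    """Sketch of the folded net of Definition foldednet.

    B, E: conditions and events; F: set of flow arcs (pairs);
    M0: the initially marked conditions; rep(x): representative of [x].
    Returns (P, T, Farcs, marking); the marking is a multiset (Counter),
    since the folded net is not necessarily safe.
    """
    P = {rep(b) for b in B}
    T = {rep(e) for e in E}
    Farcs = {(rep(x), rep(y)) for (x, y) in F}
    marking = Counter(rep(b) for b in M0)
    return P, T, Farcs, marking

def is_sequence_preserving(E, F, label, rep):
    """SP condition: merged events have equal labels and equivalent presets."""
    pre = {e: {x for (x, y) in F if y == e} for e in E}
    for e1 in E:
        for e2 in E:
            if rep(e1) == rep(e2):
                if label[e1] != label[e2]:
                    return False
                if {rep(b) for b in pre[e1]} != {rep(b) for b in pre[e2]}:
                    return False
    return True
```

Note that merging two initially marked conditions yields a class marked with two tokens, which illustrates why the folded marking may be unsafe.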
\begin{figure}[t] \centering \subfigure{\scalebox{.65}{\input{Figures/net_po1}}} \hspace{.25cm} \subfigure{\scalebox{.65}{\input{Figures/net_po2}}} \hspace{.25cm} \subfigure{\scalebox{.65}{\input{Figures/net_po3}}} \hspace{.25cm} \subfigure{\scalebox{.65}{\input{Figures/net_po4}}} \caption{Folding equivalences and folded nets.} \label{fig:fold} \end{figure} \begin{example} \label{ex:fold1} Consider the log $\mathcal{L} = \{ abc, bd \}$ and the independence relation $\meddiamond = \emptyset$. \autoref{fig:fold} shows the obtained unfolding $\on_{\logs,\ind}$ (left) and three of its folded nets. The equivalence relation $\sim_1$ merges the events labeled by $b$, but it does not merge their presets, \ie, it is not a SP folding equivalence. It can be observed that $bd$ is not fireable in $\on_{\logs,\ind}^{\sim_1}$. Whenever two events are merged, their preconditions need to be merged to preserve sequential executions. The equivalence relation $\sim_2$ not only merges the events labeled by $b$, but also sets $p_1 \sim_2 p_2$, and is a SP folding equivalence. The folded net $\on_{\logs,\ind}^{\sim_2}$ can replay every trace in the log $\mathcal{L}$, but it also adds new traces of the form $a^*, a^*b, a^*bc, a^*bd, a^*bcd$ and $a^*bdc$. \end{example} Given an unfolding, every SP folding equivalence generates a net that preserves its sequential executions. \begin{restatable}{theorem}{thetwo} \label{the:fire_seq} Let $\beta$ be an occurrence net and $\sim$ a SP folding equivalence, then every fireable sequence $M_0 \arrow{e_1} \dots \arrow{e_n} M_n$ from $\beta$ generates a fireable sequence $[M_0]_\sim \arrow{[e_1]_\sim} \dots \arrow{[e_n]_\sim} [M_n]_\sim$ from $\beta^\sim$. \end{restatable} \iftoggle{long}{ \begin{proof} We reason inductively on the length of the fireable sequence.
\begin{description} \item[Base case:] if $n=0$, the result holds since an empty sequence of events from $\beta$ generates an empty sequence of transitions that is trivially a fireable sequence from $\beta^\sim$. \item[Inductive case:] we assume every fireable sequence $M_0 \arrow{e_1} \dots \arrow{e_n} M_n$ from $\beta$ generates a fireable sequence $[M_0]_\sim \arrow{[e_1]_\sim} \dots \arrow{[e_n]_\sim} [M_n]_\sim$ from $\beta^\sim$; we need to prove that the sequence $M_0 \arrow{e_1} \dots \arrow{e_{n+1}} M_{n+1}$ generates a fireable sequence $[M_0]_\sim \arrow{[e_1]_\sim} \dots \arrow{[e_{n+1}]_\sim} [M_{n+1}]_\sim$. Consider a fireable sequence $e_1 \dots e_{n+1}$ from $\beta$; then by the inductive hypothesis, we know that the first $n$ events generate a firing sequence $[e_1]_\sim \dots [e_n]_\sim$ from $\beta^\sim$ leading to the marking $[M_n]_\sim$; we need to prove that $[M_n]_\sim \arrow{[e_{n+1}]_\sim}$. Suppose this is not true; then there exists $[b]_\sim \in \preset{[e_{n+1}]_\sim}$ with $[b]_\sim \not \in [M_n]_\sim$; the latter implies $b \not \sim b_n$ for every $b_n \in M_n$. \autoref{def:foldednet} does not add a flow arc from a place $[b]_\sim$ to a transition $[e]_\sim$ unless a flow arc exists between a condition $b' \in [b]_\sim$ and an event $e' \in [e]_\sim$ in the occurrence net. Therefore $[b]_\sim \in \preset{[e_{n+1}]_\sim}$ implies there exist $b_1 \sim b$ and $e'_{n+1} \sim e_{n+1}$ such that $b_1 \in \preset{e'_{n+1}}$ (there exists a flow arc between $b_1$ and $e'_{n+1}$ in $\beta$), and from the transitivity of $\sim$ it follows that \begin{equation} \label{proof} \tag{*} b_1 \not \sim b_n \text{ for every } b_n \in M_n. \end{equation} Since $\beta$ allows a firing sequence of length $n+1$, we know $M_n \arrow{e_{n+1}}$ and then $\forall b_2 \in \preset{e_{n+1}}: b_2 \in M_n$. As every $b_2$ in $\preset{e_{n+1}}$ is also in $M_n$, by (\ref{proof}) we have $b_1 \not \sim b_2$ for all $b_2 \in \preset{e_{n+1}}$.
From $b_1 \in \preset{e'_{n+1}}$, $e_{n+1} \sim e'_{n+1}$ and the fact that $\sim$ is an SP folding equivalence (\autoref{def:fold1}), it follows that there exists $b_2 \in \preset{e_{n+1}}$ such that $b_1 \sim b_2$, but we showed this is not possible; therefore our assumption was false and for all $[b]_\sim \in \preset{[e_{n+1}]_\sim}$ we have $[b]_\sim \in [M_n]_\sim$. Finally $[M_n]_\sim \arrow{[e_{n+1}]_\sim}$ and $[M_0]_\sim \arrow{[e_1]_\sim} \dots \arrow{[e_{n+1}]_\sim} [M_{n+1}]_\sim$ is a fireable sequence from $\beta^\sim$. \qed \end{description} \end{proof} } As a corollary of the result above and \autoref{prop:fit}, the folded net obtained from $\on_{\logs,\ind}$ with an SP folding equivalence is fitting. \begin{corollary} \label{cor:fitting} Let $\mathcal{L}$ be a log, $\meddiamond$ an independence relation and $\sim$ an SP folding equivalence; then for every $\sigma \in \mathcal{L}$ we have $\sigma \in \obs \on_{\logs,\ind}^\sim$. \end{corollary} \iftoggle{long}{ \begin{proof} Since by \autoref{lemma:fit} every $\sigma \in \mathcal{L}$ corresponds to a fireable sequence in $\on_{\logs,\ind}$, the result follows immediately from \autoref{the:fire_seq}. \end{proof} } \begin{example} We saw in \autoref{ex:fold1} that every trace from $\mathcal{L}$ can be replayed in $\on_{\logs,\ind}^{\sim_2}$, but (as expected) the net accepts more traces. However, this net also adds some independence between actions of the system: after firing $b$ the net puts tokens at $[p_3]_{\sim_2}$ and $[p_4]_{\sim_2}$, and the reached marking concurrently enables actions $c$ and $d$, which contradicts $c \diamondtimes d$ (the independence relation $\meddiamond = \emptyset$ implies $c \diamondtimes d$). In order to avoid this extra independence, we now consider the following class of equivalences.
\end{example} \begin{definition}[Independence-preserving folding equivalence] \label{def:fold2} Let $\beta$ be an occurrence net and $\meddiamond$ an independence relation; an equivalence relation $\sim$ is called an independence-preserving (IP) folding equivalence iff \begin{enumerate} \item $\sim$ is an SP folding equivalence, \item $\lambda(e_1) \meddiamond \lambda(e_2) \Leftrightarrow [\preset e_1]_\sim \cap [\preset{e_2}]_\sim = \emptyset \land [\preset e_1]_\sim \cap [\postset{e_2}]_\sim = \emptyset \land [\postset e_1]_\sim \cap [\preset{e_2}]_\sim = \emptyset$ for all events $e_1,e_2 \in E$, \item $b_1 \textbf{ co } b_2$ implies $b_1 \not \sim b_2$ for all conditions $b_1,b_2 \in B$. \end{enumerate} \end{definition} IP folding equivalences not only preserve the sequential behavior of $\beta$, but also ensure that $\beta^\sim$ and $\beta$ exhibit the same natural independence relation. The definition above differs from the folding equivalence definition given in \cite{FahlandA13}: there, the occurrence nets come from an unfolding procedure which takes a net as input. This procedure generates a mapping between conditions and events of the generated occurrence net and places and transitions in the original net, and such a mapping is necessary to define their folding equivalence. In our setting, the occurrence net does not come from a given net and therefore the mapping is not available. \begin{example} The equivalence $\sim_2$ from \autoref{fig:fold} is not an IP folding equivalence since the equivalence classes generated by the presets of $c$ and $d$ do not intersect ($[\preset c]_{\sim_2} = \{[p_4]_{\sim_2}\}$, $[\preset d]_{\sim_2} = \{[p_3]_{\sim_2}\}$ and $\{[p_4]_{\sim_2}\} \cap \{[p_3]_{\sim_2}\} = \emptyset$), but $c$ and $d$ are not independent. Consider the equivalence relation $\sim_3$ which merges events labeled by $b$ and sets $p_1 \sim_3 p_2$ and $p_3 \sim_3 p_4$; this relation is an IP folding equivalence.
It can be observed in the net $\on_{\logs,\ind}^{\sim_3}$ of \autoref{fig:fold} that all the traces from the log can be replayed, but new independence relations are not introduced. \end{example} The occurrence net $\on_{\logs,\ind}$ is clearly safe. We show that $\on_{\logs,\ind}^\sim$ is also safe when~$\sim$ is an IP folding equivalence. In this work, we constrain IP equivalences to generate safe nets because their natural independence relation is well understood (\autoref{def:indu}), thus allowing us to assign a solid meaning to the class IP. It is unclear what the natural unconditional independence of an unsafe net would be, and extending our definitions to such nets is left for future work. \begin{proposition} \label{prop:safe} Let $\on_{\logs,\ind}$ be the unfolding obtained from the log $\mathcal{L}$ with $\meddiamond$ as the independence relation and $\sim$ an IP folding equivalence. Then $\on_{\logs,\ind}^\sim$ is safe. \end{proposition} \begin{proof} The unfolding $\on_{\logs,\ind}$ is trivially safe, since its initial marking puts one token in its minimal conditions and each condition contains only one event in its preset, which cannot put more than one token in the condition. Suppose $\on_{\logs,\ind}^\sim$ is not safe; by the above, this is possible iff there exists $C \in \reach \on_{\logs,\ind}$ and $b_1,b_2 \in C$ such that $b_1 \sim b_2$. If $b_1$ and $b_2$ belong to a reachable marking, then they must be concurrent, and since $\sim$ is an IP folding equivalence they cannot be merged, which leads to a contradiction. Hence $\on_{\logs,\ind}^\sim$ must be safe. \qed \end{proof} \autoref{prop:ind_on} shows that the structural relation between events of the unfolding and the relation generated by the independence given by the expert coincide (when we restrict to co-enabled events); the result also holds for the folded net when an IP folding equivalence is used.
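As a concrete companion to \autoref{def:fold2}, the following Python sketch checks whether a candidate equivalence, given as a class-assignment map, satisfies the SP and IP conditions on a small hand-built occurrence net. The data layout, the names, and the toy net itself are illustrative assumptions of ours, not part of the formal development.

```python
def classes(conds, cls):
    """Map a set of conditions to the set of its equivalence classes."""
    return {cls[c] for c in conds}

def is_sp(events, label, pre, cls):
    """SP check: merged events carry equal labels and presets that
    generate the same set of equivalence classes."""
    return all(label[e1] == label[e2] and
               classes(pre[e1], cls) == classes(pre[e2], cls)
               for e1 in events for e2 in events if cls[e1] == cls[e2])

def is_ip(events, label, pre, post, cls, indep, co):
    """IP check: SP, plus label independence iff the pre/post classes
    do not intersect, plus no merging of the given concurrent pairs."""
    if not is_sp(events, label, pre, cls):
        return False
    for e1 in events:
        for e2 in events:
            disjoint = (not (classes(pre[e1], cls) & classes(pre[e2], cls)) and
                        not (classes(pre[e1], cls) & classes(post[e2], cls)) and
                        not (classes(post[e1], cls) & classes(pre[e2], cls)))
            if ((label[e1], label[e2]) in indep) != disjoint:
                return False
    return all(cls[b1] != cls[b2] for (b1, b2) in co)

# A tiny hand-built occurrence net: e1 and e2 are conflicting a-labeled
# events consuming c0; f1 consumes the condition c1 produced by e1.
label = {'e1': 'a', 'e2': 'a', 'f1': 'b'}
pre   = {'e1': {'c0'}, 'e2': {'c0'}, 'f1': {'c1'}}
post  = {'e1': {'c1'}, 'e2': {'c2'}, 'f1': {'c3'}}
# Candidate equivalence: merge e1 with e2 and c1 with c2.
cls   = {'c0': 0, 'c1': 1, 'c2': 1, 'c3': 2, 'e1': 10, 'e2': 10, 'f1': 11}
```

With an empty independence relation the candidate passes; declaring $a$ and $b$ independent makes condition 2 fail, since the merged postset of $e_1$ meets the preset of $f_1$.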
\begin{restatable}{theorem}{thethree} \label{the:ip} Let $\on_{\logs,\ind}$ be the unfolding obtained from the log $\mathcal{L}$ with $\meddiamond$ as the independence relation and $\sim$ an IP folding equivalence; then ${\indt \on_{\logs,\ind}^\sim} = {\indu \on_{\logs,\ind}^\sim}$. \end{restatable} \iftoggle{long}{ \begin{proof} Let $(t,t') \in {\indt \on_{\logs,\ind}^\sim}$; by \autoref{def:indt} this is true iff $\lambda(t) \meddiamond \lambda(t')$, which is true iff for all $e \in t, e' \in t'$ we have $\lambda(e) \meddiamond \lambda(e')$ (since the folding equivalence preserves labeling). As $\sim$ is an IP folding equivalence, independence between labels holds iff for all $e\in t, e'\in t'$ we have $[\preset e]_\sim \cap [\preset{e'}]_\sim = \emptyset$ (see \autoref{def:fold2}.2). Using \autoref{def:foldednet}, the presets of $t$ and $t'$ are generated by some of the conditions in the preset of each $e$ and $e'$ respectively (the folding procedure does not introduce flow arrows) and we showed above that those conditions generate places that do not intersect those places generated by conditions in the preset of every $e'$; thus $[\preset e]_\sim \cap [\preset{e'}]_\sim = \emptyset$ iff $\preset{[e]_\sim} \cap \preset{[e']_\sim} = \emptyset$ iff $\preset t \cap \preset{t'} = \emptyset$ from $t = [e]_\sim$ and $t' = [e']_\sim$. Using the same reasoning it can be shown that independence between labels holds iff $\preset t \cap \postset{t'} = \emptyset$ and $\postset t \cap \preset{t'} = \emptyset$. Finally from \autoref{def:indu} we get $\preset t \cap \preset{t'} = \emptyset \land \preset t \cap \postset{t'} = \emptyset \land \postset t \cap \preset{t'} = \emptyset$ iff $(t,t') \in {\indu \on_{\logs,\ind}^\sim}$. \qed \end{proof} } \subsection{Controlling Generalization via Negative Information} We have shown that IP folding equivalences preserve independence. However, they could still introduce new unintended behaviour not present in~$\mathcal{S}$.
In this section we limit this phenomenon by considering \emph{negative information}, given by traces that should not be allowed by the model. Concretely, we consider negative information which is also given in the form of sequences $\sigma \in \mathcal{L}^- \subseteq A^*$. Negative information is often provided by an expert, but it can also be obtained automatically by recent methods~\cite{BrouckeWVB14}. Very few techniques in the literature use negative information in process discovery~\cite{Goedertier2009}. In this work, we assume a minimality criterion on the negative traces used: \begin{assumption} Let $\mathcal{L} \eqdef \mathcal{L}^+ \uplus \mathcal{L}^-$ be a pair of positive and negative logs and $\meddiamond$ the independence relation given by the expert. Any negative trace $\sigma \in \mathcal{L}^-$ corresponds to the local configuration of some event $\esig$ in $\on_{\logs,\ind}$. \end{assumption} This assumption implies that each negative trace is of the form $\sigma' a$ where $\sigma'$ only contains the actions that are necessary to fire $a$. If $a$ can happen without them, they should not be considered part of $\sigma$. By removing all events $\esig$ from $\on_{\logs,\ind}$ (one for each negative trace $\sigma \in \mathcal{L}^-$), we obtain a new occurrence net denoted by $\on_{\logs,\ind,*}$. The goal of this section is to fold this occurrence net without re-introducing the negative traces in the generalization step. If the expert is unable to provide negative traces satisfying this assumption, the discovery tool can always let him/her choose $\esig$ from a visual representation of the unfolding.
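The construction of $\on_{\logs,\ind,*}$ can be sketched in Python as follows. Each negative trace is assumed to be already resolved to its event $\esig$, per the assumption above; as an additional assumption of ours beyond the text (which only mentions removing $\esig$), the sketch also drops the causal future of each removed event so that the result is again a well-formed occurrence net.

```python
def causal_future(e0, events, pre, post):
    """e0 together with every event that causally depends on it,
    following condition flow (pre/post map events to condition sets)."""
    dep = {e0}
    grew = True
    while grew:
        grew = False
        for e in events:
            # e depends on some already-collected event if a condition
            # produced by the latter is consumed by e
            if e not in dep and any(pre[e] & post[d] for d in dep):
                dep.add(e)
                grew = True
    return dep

def remove_negative(events, pre, post, neg_events):
    """Drop each event flagged by a negative trace, together with its
    causal future (assumption: this keeps the net an occurrence net)."""
    doomed = set()
    for e0 in neg_events:
        doomed |= causal_future(e0, events, pre, post)
    return [e for e in events if e not in doomed]
```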
\begin{definition}[Removal-aware folding equivalence] \label{def:fold3} Let $\beta \eqdef (B,E,F)$ be an occurrence net and $\mathcal{L}^-$ a negative log; an equivalence relation $\sim$ is called a removal-aware (RA) folding equivalence iff \begin{enumerate} \item $\sim$ is an SP folding equivalence, and \item for every $\sigma \in \mathcal{L}^-$ and $e' \in E$ we have $\lambda(e') = \lambda(\esig)$ implies $[\preset{e'}]_\sim \not \subseteq [\preset{\esig}]_\sim$. \end{enumerate} \end{definition} The folded net obtained from $\on_{\logs,\ind,*}$ with an RA folding equivalence does not contain any of the negative traces. \begin{restatable}{theorem}{thefour} \label{the:ra} Let $\on_{\logs,\ind,*}$ be the unfolding obtained from the log $\mathcal{L} \eqdef \mathcal{L}^+ \uplus \mathcal{L}^-$ with $\meddiamond$ as the independence relation after removing the corresponding event of each negative trace, and let $\sim$ be an RA folding equivalence;\footnote{Since \autoref{def:fold3} refers to the events that generate the local configurations of the negative traces, the folding equivalence must be defined over the nodes of $\on_{\logs,\ind}$ and not those of $\on_{\logs,\ind,*}$.} then $$\obs \onls^\sim \cap \mathcal{L}^- = \emptyset$$ \end{restatable} \iftoggle{long}{ \begin{proof} Let $\sigma \eqdef \sigma' a$ and suppose $\sigma \in \obs \onls^\sim \cap \mathcal{L}^-$. Since $\sigma \in \mathcal{L}^-$, by \autoref{lemma:fit} we have $\sigma \in \obs \on_{\logs,\ind}$; it follows by construction (see \autoref{def:lpo2es}) that $\sigma$ generates a unique local configuration which is removed in $\on_{\logs,\ind,*}$ (by removing $\esig$). Thus $\sigma \not \in \obs \on_{\logs,\ind,*}$, but $\sigma' \in \obs \on_{\logs,\ind,*}$ since only the maximal event $\esig$ of the local configuration is removed.
Let $M$ be the marking reached in $\on_{\logs,\ind,*}$ after $\sigma'$; we know (using \autoref{the:fire_seq}) that $\sigma'$ generates a firing sequence in $\onls^\sim$ which leads to the reachable marking $[M]_\sim$. Since we assumed $\sigma \in \obs \onls^\sim$, there exists a transition $[e_a]_\sim$ such that $[M]_\sim \arrow{[e_a]_\sim}$ with $\lambda([e_a]_\sim) = a$, but this implies (from \autoref{def:foldednet}) that the preset of $\esig$ was merged with the preset of $e_a$, which contradicts the assumption that $\sim$ is an RA folding equivalence. Hence the assumption was false and $\obs \onls^\sim \cap \mathcal{L}^- = \emptyset$. \qed \end{proof} } \section{Computing Folding Equivalences} \label{sec:computing} \autoref{sec:discovery} presents a discovery algorithm that generates fitting occurrence nets and \autoref{sec:generalization} defines three classes of folding criteria, SP, IP, and RA, that ensure various properties. This section proposes an approach to synthesize~SP,~IP and~RA folding equivalences using~SMT. \subsection{SMT Encoding} \label{sec:sat} We use an SMT encoding to find folding equivalences generating a net $\beta^\sim$ satisfying specific metric properties. Specifically, given a measure $\hat c$ (cf.\ \autoref{sec:prelim}), computable in polynomial time, and a number~$k \in \nat$, we generate an SMT formula which is satisfiable iff there exists a folding equivalence $\sim$ such that $\hat c(\beta^\sim) = k$. We consider the number of transitions in the folded net as the measure $\hat c$; however, in principle any other measure computable in polynomial time could be used. As explained in \autoref{sec:prelim}, simple functions like counting the number of nodes/arcs provide reasonable results in practice. Given an occurrence net~$\beta \eqdef (B,E,F)$, for every event $e \in E$ and condition $b \in B$ we introduce integer variables $v_e$ and $v_b$.
The key intuition is that two events (conditions) whose variables are assigned the same value are equivalent and will be merged into the same transition (place). The following formulas state, respectively, that every element of a set $X$ is related with at least one element of a set $Y$, and that every element of $X$ is not related with any element of $Y$: $$\ssc X Y \eqdef \bigwedge\limits_{x \in X}\bigvee\limits_{y \in Y} (v_x = v_y) \hspace{10mm} \disj X Y \eqdef \hspace{-2mm}\bigwedge\limits_{x \in X, y \in Y} \hspace{-2mm} (v_x \not = v_y)$$ We force any satisfying assignment to represent an~SP folding equivalence (\autoref{def:fold1}) with the following two constraints: $$ \phi_\beta^{SP} \eqdef \phi_\beta^{lab} \land \phi_\beta^{pre}. $$ Formulas $\phi_\beta^{lab}$ and $\phi_\beta^{pre}$ impose that only equally labeled events should be equivalent and that if two events are equivalent, then their presets should generate the same equivalence class: $$\phi_\beta^{lab} \eqdef \bigwedge\limits_{\substack{e,e' \in E \\ \lambda(e) \not = \lambda(e')}} \hspace{-2mm} (v_e \not = v_{e'}) \hspace{10mm} \phi_\beta^{pre} \eqdef \bigwedge\limits_{e,e' \in E} (v_e = v_{e'} \Rightarrow (\ssc {\preset e} {\preset{e'}} \land \ssc {\preset{e'}} {\preset e}))$$ In addition to the properties encoded above, an IP folding equivalence (\autoref{def:fold2}) should satisfy additional restrictions: $$\phi_\beta^{IP} \eqdef \phi_\beta^{SP} \land \phi_\beta^{ind} \land \phi_\beta^{co}$$ where $\phi_\beta^{ind}$ imposes that the presets and postsets of events with independent labels should generate equivalence classes that do not intersect and $\phi_\beta^{co}$ forbids concurrent conditions to be merged: $$\phi_\beta^{ind} \eqdef \bigwedge\limits_{e,e' \in E} \hspace{-2mm} (\lambda(e) \meddiamond \lambda(e') \Leftrightarrow (\disj{\preset e}{\preset{e'}} \land \disj{\preset e}{\postset{e'}} \land \disj{\postset e}{\preset{e'}})) \hspace{8mm}\phi_\beta^{co} \eqdef \bigwedge\limits_{\substack{b,b'
\in B \\ b \textbf{ co } b'}} \hspace{-2mm} (v_b \not = v_{b'})$$ Given a negative log $\mathcal{L}^-$, to encode an RA folding equivalence (\autoref{def:fold3}) we define: $$\phi_{\beta, \mathcal{L}^-}^{RA} \eqdef \phi_\beta^{SP} \land (\hspace{-2mm}\bigwedge\limits_{\substack{\sigma \in \mathcal{L}^-, e' \in E\\ \lambda(e') = \lambda(\esig)}} \hspace{-3mm} \neg \ssc {\preset{e'}} {\preset \esig})$$ where the right part of the conjunction imposes that for every $\esig$ generated by a negative trace and any other event with the same label, their presets cannot generate the same equivalence class. We now encode the optimality (\wrt the number of transitions) of the mined net. Given an occurrence net~$\beta \eqdef (B,E,F)$, each event $e \in E$ generates a transition in the folded net $\beta^\sim$, identified by the value of $v_e$. To impose that the number of transitions in $\beta^\sim$ should be at most $k \in \nat$, we define: $$\phi_{\beta,k}^{MET} \eqdef \bigwedge\limits_{e \in E} (1 \leq v_e \leq k)$$ To find an IP and RA folding equivalence that generates a net with at most $k$ transitions we propose the following encoding: $$\phi_{\beta,\mathcal{L}^-,k}^{OPT} \eqdef \phi_{\beta}^{IP} \land \phi_{\beta,\mathcal{L}^-}^{RA} \land \phi_{\beta,k}^{MET}$$ \begin{restatable}{theorem}{thefive} Let $\mathcal{L} \eqdef \mathcal{L}^+ \uplus \mathcal{L}^-$ be a pair of positive and negative logs, $\meddiamond \subseteq A \times A$ an independence relation and $k \in \nat$. The formula $\phi_{\beta,\mathcal{L}^-,k}^{OPT}$ is satisfiable iff there exists an IP and RA folding equivalence $\sim$ such that $\beta_{\mathcal{L}, \meddiamond, *}^\sim$ contains at most $k$ transitions. \end{restatable} \iftoggle{long}{ \begin{proof} Let $\psi$ be a solution of $\phi_{\beta,\mathcal{L}^-,k}^{OPT}$ and let $\sim_\psi$ be the relation such that $x \sim_\psi x'$ iff $\psi \models (v_x = v_{x'})$, i.e. $\psi$ assigns the same value to $v_x$ and $v_{x'}$.
From the reflexivity, symmetry and transitivity of equality on the integers, it follows that $\sim_\psi$ is an equivalence relation. The assignment $\psi$ is a solution of the formula iff all of the following are true: \begin{enumerate} \item $\phi_{\on_{\logs,\ind}}^{IP}$ holds; this is true iff \emph{(i)} for every two events $e,e'$ with different labels $v_e \not = v_{e'}$, \emph{(ii)} if $v_e = v_{e'}$ then for all $b \in \preset e$ there exists $b' \in \preset{e'}$ such that $v_b = v_{b'}$ and vice versa, \emph{(iii)} for every pair $e,e'$ of events with independent labels \emph{(iii.a)} for all conditions $b \in \preset e,b' \in \preset{e'}$ we have $v_b \not = v_{b'}$, \emph{(iii.b)} for all conditions $b \in \preset e,b' \in \postset{e'}$ we have $v_b \not = v_{b'}$, \emph{(iii.c)} for all conditions $b \in \postset e,b' \in \preset{e'}$ we have $v_b \not = v_{b'}$, \emph{(iv)} for every pair $b,b'$ of concurrent conditions we have $v_b \not = v_{b'}$; by the definition of $\sim_\psi$ we have \emph{(i)} for every two events $e,e'$ with different labels $e \not \sim_\psi e'$, \emph{(ii)} if $e \sim_\psi e'$ then $[\preset e]_{\sim_\psi} = [\preset{e'}]_{\sim_\psi}$, \emph{(iii)} for every pair $e,e'$ of events with independent labels $[\preset e]_{\sim_\psi} \cap [\preset{e'}]_{\sim_\psi} = \emptyset, [\preset e]_{\sim_\psi} \cap [\postset{e'}]_{\sim_\psi} = \emptyset$ and $[\postset e]_{\sim_\psi} \cap [\preset{e'}]_{\sim_\psi} = \emptyset$, \emph{(iv)} $b \textbf{ co } b'$ implies $b \not \sim_\psi b'$; by \autoref{def:fold2} this is true iff the relation $\sim_\psi$ is an IP folding equivalence.
\item $\phi_{\on_{\logs,\ind}, \mathcal{L}^-}^{RA}$ holds; this is true iff \emph{(i)} for every two events $e,e'$ with different labels $v_e \not = v_{e'}$, \emph{(ii)} if $v_e = v_{e'}$ then for all $b \in \preset e$ there exists $b' \in \preset{e'}$ such that $v_b = v_{b'}$ and vice versa, \emph{(iii)} for any trace $\sigma \in \mathcal{L}^-$ and any event $e' \in E$ with the same label as $\esig$ there exists a condition $b \in \preset{e'}$ such that for any condition $b' \in \preset \esig$ we have $v_b \not = v_{b'}$; by the definition of $\sim_\psi$ we have \emph{(i)} for every two events $e,e'$ with different labels $e \not \sim_\psi e'$, \emph{(ii)} if $e \sim_\psi e'$ then $[\preset e]_{\sim_\psi} = [\preset{e'}]_{\sim_\psi}$ and \emph{(iii)} $[\preset{e'}]_{\sim_\psi} \not \subseteq [\preset \esig]_{\sim_\psi}$ for any negative trace $\sigma$ and event $e'$ with the same label as $\esig$; by \autoref{def:fold3} this is true iff $\sim_\psi$ is an RA folding equivalence. \item $\phi_{\on_{\logs,\ind},k}^{MET}$ holds; this is true iff $1 \leq v_e \leq k$ for every event $e \in E$; the encoding associates a number to each equivalence class (according to $\sim_\psi$) of events and bounds the number of equivalence classes by $k$; since the number of transitions in $\on_{\logs,\ind,*}^{\sim_\psi}$ corresponds to the number of equivalence classes of events (see \autoref{def:foldednet}), this is true iff the number of transitions of $\on_{\logs,\ind,*}^{\sim_\psi}$ is bounded by $k$. \qed \end{enumerate} \end{proof} } \subsection{Finding an Optimal Folding Equivalence} \label{sec:opt_fold} \autoref{sec:sat} explains how to compute a folding equivalence that generates a folded net with a bounded number of transitions; this section explains how to obtain the optimal folded net, i.e. the one with the minimal number of transitions satisfying the properties of \autoref{the:ip} and \autoref{the:ra}.
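One simple way to locate this minimal number of transitions is a binary search over $k$, assuming a black-box `satisfiable(k)` oracle (in practice, a call to an SMT solver on $\phi_{\beta,\mathcal{L}^-,k}^{OPT}$; the oracle name and this harness are illustrative assumptions of ours). The search is sound because satisfiability is monotone in $k$: any assignment bounded by $k$ is also bounded by $k+1$.

```python
def minimal_k(satisfiable, lo, hi):
    """Smallest k in [lo, hi] for which satisfiable(k) holds, or None.

    Assumes monotonicity: if satisfiable(k) holds then satisfiable(k + 1)
    holds as well, which is the case for the MET constraint above.
    """
    if not satisfiable(hi):
        return None  # even the loosest bound admits no folding equivalence
    while lo < hi:
        mid = (lo + hi) // 2
        if satisfiable(mid):
            hi = mid  # a folding with at most mid transitions exists
        else:
            lo = mid + 1
    return lo
```

The oracle is invoked $O(\log(hi - lo))$ times, one solver call per probe.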
Iterative calls to the SMT solver can be used to perform a binary search with $k$ between $min_k$ and $max_k$; since only equally labeled events can be merged by the folding equivalence, the minimal number of transitions in the folded net is $min_k \eqdef \lvert A \rvert$; in the worst case, when events cannot be merged, $max_k \eqdef \lvert E \rvert$. As a side remark, we have noted that the optimal folding equivalence can be encoded as a MaxSMT problem~\cite{NieuwenhuisO06}, where some clauses, called hard, must be true in a solution (in our case $\phi_{\beta}^{IP}$ and $\phi_{\beta,\mathcal{L}^-}^{RA}$) and some soft clauses may not ($\phi_{\beta,k}^{MET}$ for $\lvert A \rvert \leq k \leq \lvert E \rvert$); a MaxSMT solver maximizes the number of soft clauses that are satisfied and thus obtains the minimal $k$, thus generating the optimal folded net. \section{Experiments} \label{sec:experiments} \newcommand\ratiosm{r_{{\mathcal{S}} \subseteq {\mathcal{M}}}} \newcommand\ratioms{r_{{\mathcal{M}} \subseteq {\mathcal{S}}}} As a proof of concept, we implemented our approach into a new tool called \podtool (Partial Order Discovery).\footnote{Tool and benchmarks: \url{http://lipn.univ-paris13.fr/~rodriguez/exp/atva15/}.} It supports synthesis of SP and IP folding equivalences using a restricted form of our SMT encoding. In particular, \podtool merges all events with equal label, in contrast to the encoding in \autoref{sec:computing}, which may in general yield more than one transition per log action. While this ensures a minimum (optimal as per \autoref{sec:opt_fold}) number of folded transitions, the tool could sometimes not find a suitable equivalence (unsatisfiable SMT encoding). Since the number of transitions in the folded net is fixed, it turns out that the quality of the mined model increases as we increase the number of folded places, as we show below.
Using \podtool we evaluate the ability of our approach to rediscover the original process model, given its independence relation and a set of logs. For this we have used standard benchmarks from the verification and process mining literature~\cite{MCC,WDHS08}. \input{table} In our experiments (\autoref{tab:exp}) we consider a set of original processes faithfully modelled as safe Petri nets. For every model $\mathcal{S}$ we consider a log~$\mathcal{L}$, i.e. a subset of its traces. We extract from~$\mathcal{S}$ the (best) independence relation~$\indu \mathcal{S}$ that an expert could provide. We then provide~$\mathcal{L}$ and~$\indu \mathcal{S}$ to \podtool and find an SP folding equivalence with the largest number of places (\cols ``max.\ places'') and with~60\% of the places of~$\mathcal{S}$ (last group of \cols), giving rise to two different mined models. All three models, the original plus the mined ones, have perfect fitness but varying levels of precision, i.e. they may accept traces not present in the log. For the mined models, we report (\cols ``\%Prec.'') the ratio between their precision and the precision of the original model~$\mathcal{S}$. All precisions were estimated using the technique from~\cite{AMCDA15}. All \podtool running times were below 10s. Additionally, we measure how much independence of the original model is preserved in the mined ones. For that, we define the ratios~$\ratiosm \eqdef \nicefrac{|{\indu \mathcal{S}} \cap {\indu \mathcal{M}}|}{|\indu \mathcal{S}|}$ and~$\ratioms \eqdef \nicefrac{|{\indu \mathcal{S}} \cap {\indu \mathcal{M}}|}{|\indu \mathcal{M}|}$. The closer $\ratiosm$ is to~1, the larger is the number of pairs in $\indu \mathcal{S}$ also contained in $\indu \mathcal{M}$ (\ie, the more independence was preserved), and conversely for $\ratioms$ (the less independence was ``\emph{invented}''). Note that ${\indu \mathcal{S}} = {\indu \mathcal{M}}$ iff $\ratiosm = \ratioms = 1$.
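The two ratios are plain set computations; a minimal Python sketch (the function name and the encoding of independence relations as sets of ordered pairs are our assumptions):

```python
def preservation_ratios(ind_s, ind_m):
    """Return (r_{S<=M}, r_{M<=S}) for independence relations given as
    sets of ordered pairs; both ratios equal 1 exactly when the
    relations coincide."""
    common = len(ind_s & ind_m)
    return common / len(ind_s), common / len(ind_m)
```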
In 7 out of the 11 benchmarks in \autoref{tab:exp} our proof-of-concept tool rediscovers the original model or finds one with only minor differences. This is even more encouraging when considering that we only asked \podtool to find SP equivalences which, unlike IP, do not guarantee preservation of independence. In 9 out of 11 cases both ratios $\ratiosm$ and $\ratioms$ are above 98\%, witnessing that independence is almost entirely preserved. Concerning the precision, we observe that it is mostly preserved for these 9 models. We observe a clear correlation between the number of discovered places and the precision of the resulting model. The running times of \podtool on all benchmarks in \autoref{tab:exp} were under a few seconds. In \bench[2]{Peters} and \bench[1]{Angio} our tool could not increase the number of places in the folded net, resulting in a significant loss of independence and precision. We tracked the reason down to (a) the additional restrictions on the SMT encoding imposed by our implementation and (b) the algorithm for transforming event structures into unfoldings (\ie, introducing conditions). We plan to address this in future work. This also prevented us from employing IP equivalences instead of SP for these experiments: \podtool could find IP equivalences for only~5 out of~11 cases. Nonetheless, as said before, in 9 out of 11 cases the SP equivalences found preserved at least 98\% of the independence. Finally, we instructed \podtool to synthesize~SP equivalences folding into an arbitrarily chosen low number of places (60\% of the original). Here we observe a large reduction of precision and a significant loss of independence (surprisingly, only~$\ratiosm$ drops, but not~$\ratioms$). This witnesses a strong dependence between the number of discovered places and the ability of our technique to preserve independence.
\section{Related Work} To the best of our knowledge, there is no technique in the literature that solves the particular problem we consider in this paper: given a set of positive and negative traces and an independence relation on events, derive a Petri net that both preserves the independence relation and satisfies the quality dimensions enumerated in \autoref{sec:prelim}. However, there is related work that partially intersects with the techniques of this paper; we now report on it. Perhaps the closest work is~\cite{FahlandA13}, where the simplification of an initial process model is done by first unfolding the model (to derive an overfitting model) and then folding it back in a controlled manner, thus generalizing part of the behavior. The approach can only be applied to fitting models, which hampers its applicability unless alignment techniques~\cite{AryaThesis} are used. The folding equivalences presented in this paper do not consider a model and are therefore less restrictive than the ones presented in~\cite{FahlandA13}. {\em Synthesis} is a problem different from discovery: in synthesis, the underlying system is given and therefore one can assume $\mathcal{S} = \mathcal{L}$. Considering a synthesis scenario, Bergenthum {\em et al.} have investigated the synthesis of a p/t net from partial orders~\cite{BergenthumDLM08}. The class of nets considered in this paper (safe Petri nets) is less expressive than p/t nets, which in practice poses no problems in the context of business processes. The algorithms in~\cite{BergenthumDLM08} are grounded in the {\em theory of regions} and split the problem into two steps: \emph{(i)} the p/t net $\mathcal{M}$ is generated which, by construction, satisfies $\mathcal{L} \subseteq \obs {\mathcal{M}}$, and \emph{(ii)} it is checked whether $\mathcal{L} = \obs {\mathcal{M}}$.
Actually, by avoiding \emph{(ii)}, a discovery scenario is obtained where the generalization feature is not controlled, in contrast to the technique of this paper. With the same goal, but relying on ad-hoc operators tailored to compose lpos (choice, sequentialization, parallel composition and repetition), a discovery technique is presented in~\cite{BergenthumDML09}. Since the operators may in practice introduce wrong generalizations, a domain expert is consulted about the legality of every extra run. \section{Conclusions} A fresh look at process discovery is presented in this paper, which establishes a theoretical basis for coping with some of the challenges in the field. By automating the folding of an unfolding that covers not only the traces in the log but also combinations thereof derived from the input independence relation, problems like log incompleteness and noise may be alleviated. The approach has been implemented, and the initial results show the potential of the technique in rediscovering a model, even for the simplest of the folding equivalences described in this paper. Next steps will focus on implementing the remaining folding equivalences and, in general, on improving the SMT constraints for computing folding equivalences. We will also consider incorporating the notion of trace frequency in the approach, to guide the technique to focus on principal behavior. This will also allow testing the tool in the presence of incomplete or noisy logs. \bibliographystyle{plain}
\section{Introduction} Many theoretical calculations, including lattice QCD, predicted existence of glueballs (particles dominantly made of gluons) with masses $M >$ 1.5 GeV. No one of them was up to now unambiguously identified. The nature of scalar mesons below 2 GeV is also not well understood. The lowest mass meson considered as a glueball candidate is a scalar $f_0(1500)$ \cite{AC96} observed by the Crystall Barrel Collaboration in $p\bar p$ annihilation \cite{Crystall_Barrel_f0_1500}. It was next confirmed by the WA102 Collaboration in central production in $pp$ collisions in two-pion \cite{WA102_f0_1500_2pi} and four-pion \cite{WA102_f0_1500_4pi} decay channels. \section{Mechanisms of exclusive scalar $f_0(1500)$ meson production} We concentrate on exclusive production of scalar $f_0(1500)$ in the following reactions: $p p \to p p f_0(1500)$, $p \bar p \to p \bar p f_0(1500)$, $p \bar p \to n \bar n f_0(1500)$. While the first process can be measured at the J-PARC complex, the latter two reactions could be measured by the PANDA Collaboration. We have proposed a different mechanisms (shown in Fig.\ref{fig:mechanisms}) for exclusive scalar $f_0(1500)$ meson production (more details can be found in Ref. \cite{SL09}). \begin{figure}[!htb] \begin{center} (a)\includegraphics[width=0.25\textwidth]{qcd_f0.eps} (b)\includegraphics[width=0.25\textwidth]{triangle_regge_f0.eps} (c)\includegraphics[width=0.21\textwidth]{pion_pion_f0.eps} \end{center} \caption{\label{fig:mechanisms} \small The sketch of the bare mechanisms of exclusive scalar $f_0(1500)$ meson production: (a) the QCD mechanism, (b) double-diffractive mechanism with intermediate pionic triangle and (c) pion-pion fusion. } \end{figure} If $f_0(1500)$ is a glueball (or has a strong glueball component \cite{CZ05}) then the diffractive mechanism (see Fig.\ref{fig:mechanisms}a) may be important. 
This mechanism is often considered as the dominant mechanism of exclusive Higgs boson \cite{KMR} and $\chi_c(0^+)$ meson \cite{PST07} production at high energies. At lower energies ($\sqrt{s} <$ 20 GeV) other processes may become important as well. Since the two-pion channel is one of the dominant decay channels of $f_0(1500)$ (34.9 $\pm$ 2.3 \%) \cite{PDG} one may expect the two-pion fusion (see Fig.\ref{fig:mechanisms}c) to be one of the dominant mechanisms at the FAIR energies. The two-pion fusion can be relative reliably calculated in the framework of meson-exchange theory. The pion coupling to the nucleon is well known \cite{Ericson-Weise}. \section{Results and Conclusions} \begin{figure}[!htb] \begin{center} \includegraphics[width=0.29\textwidth]{sig_w_f0_1500_ela.eps} \space \space \space \includegraphics[width=0.29\textwidth]{sig_w_f0_1500_cex.eps} \caption{\label{fig:sigma_W} \small The integrated cross section as a function of the center-of-mass energy for $p \bar p \to p \bar p f_0(1500)$ (left panel) and $p \bar p \to n \bar n f_0(1500)$ (right panel) reactions. The thick solid lines: $\pi\pi$ contribution, the dashed line: QCD diffractive contribution (KL UGDF), the dotted line: the KMR approach, the thin solid lines: "mixed" UGDF (KL $\otimes$ Gauss) and the long-dashed line: the mechanism with intermediate pionic triangle. The WA102 experimental data point at $W = 29.1$ GeV is from \cite{kirk}. } \end{center} \end{figure} \begin{figure}[!htb] (a)\includegraphics[width=0.29\textwidth]{dsig_dy_pipi_f0_1500.eps} (b)\includegraphics[width=0.29\textwidth]{dsig_dpt_pipi_f0_1500.eps} (c)\includegraphics[width=0.29\textwidth]{dsig_dphi_pipi_f0_1500.eps} \caption{\label{fig:dsig_pipi} \small Differential cross sections $\frac{d\sigma}{dy}$ (a), $\frac{d\sigma}{dp_{t}}$ (b) and $\frac{d\sigma}{d\phi}$ (c) for the reaction $p \bar p \to n \bar n f_0(1500)$ ($\pi^+ \pi^-$ fusion only) at $W$ = 3.5, 4.0, 4.5, 5.0, 5.5 GeV (from bottom to top). 
} \end{figure} We have estimated the integrated cross section for exclusive $f_0(1500)$ meson production (see Fig.~\ref{fig:sigma_W}). We have included both the gluon-induced diffractive and the triangle double-diffractive mechanisms, as well as the $\pi\pi$-exchange contributions. We predict the dominance of the $\pi\pi$ contribution close to threshold and the dominance of the diffractive components at higher energies. In Fig.~\ref{fig:dsig_pipi} we present differential cross sections for the $\pi\pi$-exchange mechanism at the energies of future experiments at the HESR at the FAIR facility at GSI. Experimental studies of exclusive $f_0(1500)$ production are challenging, since a large continuum is expected in the $\pi \pi$ decay channel. We have performed an involved calculation of the four-body $p \bar p \pi^+ \pi^-$ background. Our calculation \cite{SL09} shows that imposing suitable cuts should allow one to extract the signal of the glueball candidate $f_0(1500)$ at the highest PANDA energy.
\section{Introduction} \label{secIntro} In many cell types, cell membranes are composed \cite{phillips12} of a diverse array of lipids, organized as a lipid bilayer, and membrane proteins, which play a central role in most cellular processes. Membrane proteins are rigid compared to the surrounding lipid bilayer \cite{mouritsen93,engelman05,jensen04,andersen07}. Thus, the lipid bilayer typically deforms to accommodate membrane proteins and, in particular, the bilayer hydrophobic thickness is compressed or expanded compared to the preferred bilayer thickness in the absence of membrane proteins \cite{jensen04,lundbaek06,andersen07,phillips09,mcintosh06,brown12,engelman05,mouritsen93}. Distinct conformations of a membrane protein generally yield distinct energy costs of protein-induced lipid bilayer deformations. As a result, the lipid bilayer can serve as a ``splint'' stabilizing certain protein conformations \cite{andersen07} and thereby regulate protein function~\cite{mouritsen93,jensen04,engelman05,andersen07,mcintosh06,phillips09,brown12,lundbaek06}. In agreement with this general picture, experiments have revealed \cite{lundbaek10,greisen11,brohawn12,schmidt12,brohawn14,anishkin13,anishkin14,milescu09} that, across the kingdoms of life, central biological functions of integral membrane proteins such as ion exchange and signaling are regulated by the mechanical properties of the surrounding lipid bilayer, with the hydrophobic regions of membrane proteins coupling to the hydrophobic regions of lipid bilayers \cite{mitra04,sonntag11,krepkiy09}. In particular, elastic bilayer thickness deformations have been found \cite{mouritsen93,jensen04,engelman05,andersen07,mcintosh06,phillips09,brown12,lundbaek06} to regulate the functions of a diverse range of integral membrane~proteins. 
Cell membranes are crowded with membrane proteins \cite{engelman05,takamori06,dupuy08,linden12}, with a typical mean center-to-center distance $d\approx10$~nm between neighboring proteins \cite{phillips09}. As a result, the elastic decay length of protein-induced lipid bilayer thickness deformations \cite{wiggins04,wiggins05,ursell08} is comparable to the typical edge-to-edge spacing of proteins in cell membranes \cite{phillips09}, yielding thickness-mediated interactions between membrane proteins \cite{harroun99,grage11,goforth03,botelho06,phillips09}. For the small protein separations relevant for cell membranes, thickness-mediated interactions between integral membrane proteins can be $> 10$~$k_B T$ in magnitude \cite{ursell07,phillips09} and, depending on the hydrophobic thickness of neighboring membrane proteins, be energetically favorable or unfavorable. The lipid bilayer elasticity theory \cite{seifert97,boal02,safran03} underlying the description of protein-induced bilayer deformations and bilayer-mediated protein interactions has a rich and distinguished history, dating back to the classic work of W. Helfrich \cite{helfrich73}, P.~B. Canham \cite{canham70}, E.~A. Evans \cite{evans74}, and H.~W. Huang \cite{huang86}. According to this classic theory, membrane proteins may, in addition to thickness-mediated interactions \cite{dan93,dan94,aranda96,dan98,harroun99b,partenskii04,brannigan07,ursell07,CAH2013a,OWC1}, also interact \cite{fournier99,phillips09} via bilayer curvature deformations \cite{goulian93,weikl98,kim98,kim00,muller05,muller05b,kim08,auth09,muller10,frese08,reynwar11,bahrami14,dommersnes99,evans03,weitz13,yolcu14} and bilayer fluctuations \cite{goulian93,dommersnes99,evans03,weitz13,yolcu14,golestanian96,golestanian1996b,weikl01,lin11}. 
While the competition between thickness-,\linebreak curvature-, and fluctuation-mediated protein interactions depends on the properties of the specific lipids and membrane proteins under consideration, one generally expects \cite{ursell07,phillips09} that thickness-mediated protein interactions are strong and short-ranged, and that curvature- and fluctuation-mediated protein interactions are weak and long-ranged. The classic elasticity theory of protein-induced lipid bilayer deformations can be extended in various ways \cite{gil98,brannigan06,brannigan07,west09,may04,may99,bohinc03,may07,watson11,watson13,jablin14,rangamani14,bitbol12,partenskii02,partenskii03,partenskii04,kim12,yoo13,yoo13b,lee13} to account for detailed molecular properties of lipids such as lipid tilt and lipid intrinsic curvature \cite{dan93,dan94,aranda96,dan98,fournier99}, yielding additional modes of bilayer-mediated protein interactions. Theoretical studies of bilayer-mediated protein interactions have largely focused on idealized (often cylindrical or conical) protein shapes which do not correspond to any particular membrane protein, and proteins at large $d$. In contrast, bilayer-mediated protein interactions at small $d$ are most relevant for the crowded environment provided by cell membranes \cite{engelman05,takamori06,dupuy08,linden12}, while modern structural biology suggests a rich picture of membrane protein shape with experimental surveys of the protein content in various cell membranes \cite{takamori06,yun11,linden12} indicating great diversity in the oligomeric states and symmetries of membrane proteins. The central goal of this article is to provide a detailed discussion of a combined analytic and numerical framework \cite{CAH2013a,OWC1,CAH2013b,OPWC1} which allows prediction of lipid bilayer-mediated elastic interactions between integral membrane proteins at arbitrary $d$ for the protein shapes suggested by structural studies. 
We focus here on protein-induced lipid bilayer thickness deformations, which have been found \cite{mouritsen93,jensen04,engelman05,andersen07,mcintosh06,phillips09,brown12,lundbaek06,harroun99,goforth03,botelho06,grage11} to play central roles in regulation of protein function and bilayer-mediated protein interactions in a wide range of experimental systems \cite{dan93,dan94,aranda96,dan98,harroun99b,partenskii04,brannigan07,ursell07,huang86,helfrich90,nielsen98,nielsen00,harroun99,partenskii02,partenskii03,kim12,lundbaek10,greisen11,wiggins04,wiggins05,ursell08,grage11,CAH2013a,CAH2013b,OWC1,OPWC1,mondal11,mondal12,mondal13,mondal14,CAH2014a}. Using this mathematical framework we have shown previously that the shape of integral membrane proteins, and resulting structure of lipid bilayer thickness deformations, can play a crucial role in the regulation of protein function by lipid bilayers \cite{OWC1,CAH2013b}, and that bilayer thickness-mediated interactions between integral membrane proteins can be strongly directional and dependent on protein shape \cite{CAH2013a,OWC1,OPWC1,CAH2014a}. Thus, in addition to the magnitude of the bilayer-protein hydrophobic mismatch \cite{engelman05,mouritsen93,jensen04,mcintosh06,lundbaek06,andersen07,phillips09,brown12,harroun99,goforth03,botelho06,grage11}, protein shape may be a crucial determinant of membrane protein regulation by lipid bilayers and bilayer-mediated protein interactions. We develop, illustrate, and test our analytic and numerical framework for calculating bilayer-mediated protein interactions using the protein shapes shown in Fig.~\ref{figIllust}, which embody key mechanisms by which protein crowding and protein shape may affect protein-induced lipid bilayer deformations. The most straightforward model of protein-induced lipid bilayer thickness deformations assumes a circular protein cross section with constant boundary conditions along the bilayer-protein interface [see Fig.~\ref{figIllust}(a)]. 
The resulting ``cylinder model'' of membrane proteins allows investigation of thickness-mediated protein interactions in crowded membranes without the further complications introduced by a complicated protein shape. The cylinder model has been used before in a number of different settings \cite{huang86,wiggins05,ursell08,helfrich90,andersen07,phillips09,nielsen98,nielsen00} to describe protein-induced bilayer thickness deformations. \begin{figure}[t!] \includegraphics[width=0.92\columnwidth]{fig1} \caption{(Color online) Overlapping lipid bilayer thickness deformation fields can yield bilayer thickness-mediated interactions between membrane proteins. Bilayer thickness deformations $u$ due to interacting membrane proteins obtained using (a) the cylinder model, (b) the crown model, and (c) the clover-leaf model of integral membrane proteins (see Sec.~\ref{secElasticModel} for further details). The thickness deformations in panels (a) and (b) were obtained through exact analytic minimization of the thickness deformation energy (see Sec.~\ref{secAnalyticSol}), and the thickness deformations in panel (c) were calculated using finite elements (see Sec.~\ref{secFE}). The clover-leaf shape in panel (c) provides a simple coarse-grained model \cite{CAH2013a,CAH2013b,OWC1,OPWC1} of the observed closed-state structure of the pentameric mechanosensitive channel of large conductance \cite{chang98} (protein structural data shown as ribbon diagrams; Protein Data Bank accession number 2OAR). The calculated lipid bilayer thickness deformations depend on lipid and protein properties, the protein center-to-center distance $d$, and, for the crown and clover-leaf models, the protein orientations $\omega_{1,2}$. The color scale ranges from $u_{\text{min}}=-0.6$~nm to $u_{\text{max}}=0.4$~nm. 
} \label{figIllust} \end{figure} Aside from interactions with neighboring membrane proteins, angular variations in the bilayer-protein boundary conditions \cite{mondal11,mondal12,mondal13,mondal14,CAH2013a} or a non-circular protein cross section \cite{CAH2013a,CAH2013b,OWC1,OPWC1,CAH2014a} may also break rotational symmetry of bilayer thickness deformations about the protein center. Simple representations of these two features of protein shape are provided \cite{CAH2013a,CAH2013b,OWC1,OPWC1} by the ``crown model'' [see Fig.~\ref{figIllust}(b)], in which we allow for angular variations of the protein hydrophobic thickness while assuming a circular protein cross section, and by the ``clover-leaf model'' [see Fig.~\ref{figIllust}(c)], in which we assume constant boundary conditions along the bilayer-protein interface but allow for a non-circular shape of the protein cross section. In general, membrane proteins will have both a non-circular cross section and variable hydrophobic thickness. The crown and clover-leaf models allow us to isolate these two possible origins of angular anisotropy of protein-induced lipid bilayer deformations. An important difference between the cylinder model, and the crown and clover-leaf models, is that for the latter models bilayer-mediated protein interactions are inherently directional and depend not only on the protein separation $d$ but also on the protein orientations $\omega_i$, where the index $i=1,2,\dots$ denotes different membrane proteins [Figs.~\ref{figIllust}(b,c)]. The organization of this article is as follows. Section~\ref{secElasticModel} provides a detailed discussion of the elastic energy of lipid bilayer thickness deformations, and the cylinder, crown, and clover-leaf models of integral membrane proteins. 
In Sec.~\ref{secAnalyticSol} we obtain, based on Refs.~\cite{huang86,dan93,dan94,aranda96,dan98,goulian93,weikl98}, analytic solutions of the thickness deformation fields and thickness deformation energies due to cylinder, crown, and clover-leaf shapes at arbitrary $d$ and protein orientations. These analytic solutions are exact for cylinder and crown shapes, and perturbative for clover-leaf shapes. As described in Sec.~\ref{secNumericalSol}, we complement our analytic solutions, and assess their validity, by developing numerical solution schemes based on finite element (FE) and finite difference (FD) solution procedures. In particular, the FE approach described here offers a straightforward way of representing the complicated protein shapes suggested by membrane structural biology, and is efficient and accurate enough to enable prediction of the directional thickness-mediated interactions of hundreds of integral membrane proteins at arbitrary $d$ and protein orientations \cite{OWC1,OPWC1} with reasonably coarse computational grids. In Sec.~\ref{secCylinder2} we provide a detailed comparison between analytic and numerical results for the thickness deformation fields and thickness deformation energies implied by the cylinder model in the non-interacting and interacting regimes of $d$. Sections~\ref{secCrownResults} and~\ref{secCloverResults} provide similar comparisons between analytic and numerical results for the crown and clover-leaf models. A summary of our results and conclusions can be found in Sec.~\ref{secSummary}. \section{Elastic model of protein-induced bilayer thickness deformations} \label{secElasticModel} \subsection{Elastic thickness deformation energy} In the standard elasticity theory of bilayer-protein interactions \cite{jensen04,lundbaek06,andersen07,phillips09}, integral membrane proteins are assumed to be rigid membrane inclusions which deform the surrounding lipid bilayer \cite{boal02,safran03,seifert97}. 
In the simplest formulation, bilayer deformations can then be captured by two coupled scalar fields $h_+$ and $h_-$ which define the positions of the hydrophilic-hydrophobic interface in the outer and inner lipid bilayer leaflets, respectively. It is mathematically convenient to express $h_+$ and $h_-$ in terms of the midplane deformation field \begin{equation} h=\frac{1}{2} \left(h_+ + h_- \right) \end{equation} and the thickness deformation field \begin{equation} u=\frac{1}{2} \left(h_+ - h_- -2 a\right)\,, \end{equation} in which $a$ is one-half the unperturbed hydrophobic thickness of the lipid bilayer. To leading order, the elastic energies governing $h$ and $u$ decouple from each other \cite{huang86,fournier99}. In the most straightforward model of bilayer-protein interactions \cite{huang86,goulian93,dan93,weikl98,jensen04,lundbaek06,andersen07,phillips09,boal02,safran03,seifert97}, the energy cost of midplane deformations is then captured by the Helfrich-Canham-Evans energy \cite{canham70,helfrich73,evans74} and, within the Monge representation, the energy cost of thickness deformations is of the form~\cite{huang86,andersen07,ursell08} \begin{equation} \label{energy} {\textstyle G=\frac{1}{2}}\int dx dy {\textstyle\left\{K_b (\nabla^2 u)^2+K_t \left(\frac{u}{a}\right)^2+\tau \left[2 \frac{u}{a}+(\nabla u)^2 \right] \right\}}\,, \end{equation} where $K_b$ is the bending rigidity of the lipid bilayer, $K_t$ is the stiffness associated with thickness deformations, and $\tau$ is the membrane tension. The effective parameters $K_b$, $K_t$, and $a$ in Eq.~(\ref{energy}) encapsulate bilayer material properties relevant for protein-induced bilayer thickness deformations and depend on the bilayer composition \cite{rawicz00,rawicz08,nagle13}. Typical values measured in experiments are $K_b = 20$ $k_B T$, $K_t = 60$ $k_B T/$nm$^2$, and $a=1.6$~nm \cite{andersen07,phillips09}, which we use for all the numerical calculations described here. 
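To make the magnitudes in Eq.~(\ref{energy}) concrete, the thickness deformation energy can be evaluated numerically for a simple trial profile. The sketch below is an illustration only, not the energy-minimizing solution: the exponential profile and its decay length $\lambda = 1$~nm are assumptions, while the material parameters are the values quoted above.

```python
import math

# Numerical evaluation of the thickness deformation energy, Eq. (energy),
# for an ASSUMED axisymmetric trial profile u(r) = U*exp(-(r - R)/lam)
# outside a cylindrical inclusion of radius R (tensionless case, tau = 0).
K_b, K_t, a = 20.0, 60.0, 1.6    # k_B T, k_B T/nm^2, nm (values from the text)
U, R, lam = 0.3, 2.3, 1.0        # boundary mismatch, protein radius, decay length (nm)

n, r_max = 20000, 30.0           # midpoint-rule quadrature grid
dr = (r_max - R) / n

G = 0.0
for k in range(n):
    r = R + (k + 0.5) * dr
    u = U * math.exp(-(r - R) / lam)
    up = -u / lam                # u'(r)
    upp = u / lam**2             # u''(r)
    lap = upp + up / r           # axisymmetric Laplacian of u
    dens = 0.5 * (K_b * lap**2 + K_t * (u / a) ** 2)
    G += dens * 2.0 * math.pi * r * dr   # area element 2*pi*r*dr
print(f"G ~ {G:.1f} k_B T for this trial profile")
```

For this trial profile the energy comes out of order $10$~$k_B T$, comparable in magnitude to the thickness-mediated interaction energies quoted in Sec.~\ref{secIntro}.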
The classic model of protein-induced lipid bilayer thickness deformations in Eq.~(\ref{energy}) employs the Monge representation of surfaces, $u=u(x,y)$, with Cartesian coordinates $(x,y)$, and only considers leading-order terms in $u$ and its derivatives. The former assumption can be justified by noting that thickness deformations generally decay rapidly compared to midplane deformations, with typical thickness and midplane decay lengths $\approx 1$~nm and $\approx 5$--500~nm \cite{phillips09,ursell08}, respectively. The validity of the latter assumption depends on the specific properties of the lipid bilayer and protein under consideration, but for experimental model systems \cite{nielsen98,nielsen00,wiggins04,wiggins05,ursell07,ursell08,grage11,OPWC1} one typically finds $u/a<0.3$ and $\|\nabla u \| < 0.2$. Hence, bilayer overhangs and higher-order corrections to Eq.~(\ref{energy}) can often be neglected when describing protein-induced lipid bilayer thickness deformations, with the bilayer midplane being approximately parallel to the reference plane invoked in the Monge representation of surfaces. The terms $K_b \left(\nabla^2 u \right)^2$ and $K_t \left(u/a\right)^2$ in Eq.~(\ref{energy}) provide lowest-order descriptions of the energy cost of bilayer bending, and compression and expansion, of the bilayer hydrophobic core, respectively. For generality we allow for the two tension terms $2 \tau u/a$ and $\tau \left(\nabla u\right)^2$ in Eq.~(\ref{energy}), which account \cite{boal02,safran03,seifert97} for stretching deformations tangential to the leaflet surfaces and changes in the projection of the bilayer area onto the reference plane, respectively. 
The minimal model in Eq.~(\ref{energy}) can be extended in a variety of ways to account for more detailed properties of lipid bilayers including lipid tilt \cite{fournier99,may07,watson11,watson13,jablin14,rangamani14}, lipid intrinsic curvature \cite{dan93,dan94,aranda96,dan98,brannigan06,brannigan07,west09}, inhomogeneous deformation of lipid volume and effects of Gaussian curvature on protein-induced bilayer deformations \cite{brannigan06,brannigan07,west09}, asymmetric bilayer thickness deformations \cite{brannigan06,brannigan07,west09,bitbol12}, and protein-induced local modulation of bilayer elastic properties \cite{partenskii02,partenskii03,partenskii04,kim12,yoo13,yoo13b,lee13}. The lipid bilayer thickness deformation energy in Eq.~(\ref{energy}) provides a simple model of thickness-mediated protein interactions, as well as the coupling between protein function and bilayer thickness deformations, and has the appealing property that all the material parameters entering Eq.~(\ref{energy}) can be measured directly in experiments. For given bilayer-protein boundary conditions (see Sec.~\ref{secShape}), minimization of Eq.~(\ref{energy}) completely specifies the lowest-energy bilayer thickness configuration and its associated energy cost. Models based on Eq.~(\ref{energy}) have been found to capture the basic experimental phenomenology of bilayer-protein interactions for gramicidin channels \cite{huang86,helfrich90,nielsen98,nielsen00,harroun99,harroun99b,partenskii02,partenskii03,partenskii04,kim12,lundbaek10,greisen11}, the mechanosensitive channel of large conductance (MscL) \cite{wiggins04,wiggins05,ursell07,ursell08,grage11,CAH2013a,CAH2013b,OWC1,OPWC1}, G-protein coupled receptors \cite{mondal11,mondal12,mondal13,mondal14}, the bacterial leucine transporter \cite{mondal14}, and chemoreceptor lattices \cite{CAH2014a}, as well as a variety of other integral membrane proteins \cite{andersen07,phillips09,jensen04,lundbaek06,mcintosh06,brown12}. 
While we focus here on elastic thickness deformations, midplane deformations may generally also contribute \cite{goulian93,weikl98,wiggins05,phillips09} to bilayer-mediated protein interactions, and the regulation of protein function by bilayer mechanical properties. Minimization of Eq.~(\ref{energy}) can be performed by solving the appropriate Euler-Lagrange equation, which is given by \begin{equation} \label{genBihpreu} K_b \nabla^4 u - \tau \nabla^2 u+\frac{K_t}{a^2} u+\frac{\tau}{a}=0\,. \end{equation} The analytic solution of Eq.~(\ref{genBihpreu}) is facilitated \cite{CAH2013b} by introducing the function \begin{equation} \label{transfu} \bar u(x,y)= u(x,y)+\frac{\tau a}{K_t}\,, \end{equation} in terms of which Eq.~(\ref{genBihpreu}) can be expressed as \begin{equation} \label{genBih} \left(\nabla^2 - \nu_+\right) \left(\nabla^2 - \nu_-\right) \bar u=0\,, \end{equation} where \begin{equation} \label{defnu} \nu_\pm = \frac{1}{2 K_b} \left[\tau \pm \left(\tau^2-\frac{4 K_b K_t}{a^2} \right)^{1/2} \right]\,. \end{equation} To analytically calculate the thickness deformation energy associated with the solutions of Eq.~(\ref{genBihpreu}) we use Eqs.~(\ref{genBihpreu}) and~(\ref{transfu}) to rearrange the thickness deformation energy in Eq.~(\ref{energy}) as \begin{eqnarray} \nonumber G&=&G_1+\\&&{\textstyle\frac{1}{2}}\int dx dy\, {\textstyle \nabla \cdot \left[K_b (\nabla \bar u) \nabla^2 \bar u-K_b \bar u \nabla^3 \bar u+\tau \bar u \nabla \bar u \right]}\,, \nonumber\\&&\label{energy2} \end{eqnarray} where the term \begin{equation}\label{energyG1} G_1=-\frac{1}{2} \int dx dy \frac{\tau^2}{K_t} \end{equation} is independent of $\bar u$ and arises, for $\tau > 0$, due to relaxation of the ``loading device'' producing membrane tension \cite{ursell08} via uniform compression of the bilayer hydrophobic core. Since $G_1$ does not contribute to the energy cost of protein-induced bilayer deformations we subtract $G_1$ from $G$, $G \to G-G_1$.
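Equation~(\ref{defnu}) fixes the characteristic decay of protein-induced thickness deformations. A minimal numerical sketch, using the parameter values quoted above and assuming, for illustration, the tensionless case $\tau = 0$:

```python
import cmath

# Decay constants nu_pm of Eq. (defnu) for the quoted bilayer parameters,
# assuming for illustration the tensionless case tau = 0.
K_b, K_t, a, tau = 20.0, 60.0, 1.6, 0.0   # k_B T, k_B T/nm^2, nm, k_B T/nm^2

disc = cmath.sqrt(tau**2 - 4.0 * K_b * K_t / a**2)
nu_p = (tau + disc) / (2.0 * K_b)         # nu_+, in nm^-2 (complex for small tau)
nu_m = (tau - disc) / (2.0 * K_b)         # nu_-

# The solutions decay as K_0(sqrt(nu) r) ~ exp(-Re[sqrt(nu)] r) at large r,
# so the elastic decay length is 1/Re[sqrt(nu_+)]
decay_length = 1.0 / cmath.sqrt(nu_p).real
print(f"nu_+ = {nu_p:.4f} nm^-2, decay length = {decay_length:.2f} nm")
```

For $\tau^2 < 4 K_b K_t/a^2$ the $\nu_\pm$ are complex conjugates, so the deformations decay with superimposed oscillations; the resulting decay length of about $1.4$~nm is consistent with the nanometer-scale thickness decay length quoted in the text.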
Using Gauss's theorem, we then find \begin{equation} \label{EvalEnergyLine} G=\frac{1}{2}\int dl \, {\textstyle \mathbf{\hat n} \cdot \left[K_b (\nabla \bar u) \nabla^2 \bar u-K_b \bar u \nabla^3 \bar u+\tau \bar u \nabla \bar u \right]}\,, \end{equation} where the line integrals $\int dl$ are to be taken along all bilayer-protein interfaces, with the bilayer unit normal vectors $\mathbf{\hat n}$ perpendicular to the bilayer-protein interfaces and pointing towards the proteins, and we assume \cite{huang86,nielsen98,nielsen00} that $\bar u$, as well as its derivatives, go to zero far from the proteins. \subsection{Modeling protein shape} \label{secShape} Following previous work on lipid bilayer-protein interactions \cite{huang86,goulian93,weikl98,dan93,jensen04,lundbaek06,andersen07,phillips09} we model integral membrane proteins as rigid membrane inclusions of fixed shape and hydrophobic thickness. The specific properties of a given membrane protein enter our description of bilayer-protein interactions through the shape of the protein cross section, and through the boundary conditions on $u$ along the bilayer-protein interface. As described in Sec.~\ref{secIntro}, we consider here three distinct models of protein shape: the cylinder model, the crown model, and the clover-leaf model. These minimal models do not provide detailed descriptions of protein shape but, rather, aim to encapsulate the features of a given protein hydrophobic surface most crucial for protein-induced lipid bilayer thickness deformations. As noted above, midplane deformations decouple to leading order from thickness deformations. Hence, the models of protein-induced bilayer thickness deformations considered here can be easily complemented by corresponding models of protein-induced bilayer midplane deformations \cite{goulian93,weikl98,wiggins05,phillips09}, which capture separate aspects of bilayer-protein interactions. 
\subsubsection{Cylinder model} \label{secCylinder} A straightforward description of the effect of protein shape on protein-induced lipid bilayer thickness deformations is provided by the cylinder model of integral membrane proteins [Fig.~\ref{figIllust}(a)] \cite{huang86,wiggins05,ursell08,helfrich90,andersen07,phillips09,nielsen98,nielsen00}. Introducing the polar coordinates $(r_i,\theta_i)$ with the center of membrane protein $i$ as the origin, the boundary conditions for protein $i$ can be written as \begin{eqnarray} \label{bc1f} u(r_i,\theta_i)\big|_{r_i=C_i(\theta_i)}&=&U_i\,,\\ \label{bc2f} \mathbf{\hat n} \cdot \nabla u(r_i,\theta_i)\big|_{r_i=C_i(\theta_i)}&=&U_i^\prime\,, \end{eqnarray} where the boundary curve $C_i(\theta_i)=R_i$ for a membrane protein with a circular cross section of radius $R_i$, and the constants \cite{wiggins05} \begin{eqnarray} \label{bc1} U_i &=& \frac{1}{2} \left(W_i - 2a\right)\,,\\ \label{bc2} U_i^\prime &=& \frac{1}{2} \left(H_+^\prime-H_-^\prime\right)\,, \end{eqnarray} where $W_i$ is the hydrophobic thickness of protein $i$ and the $H_\pm^\prime$ correspond to the normal derivatives of $h_\pm$ evaluated along the bilayer-protein boundary. Equation~(\ref{bc1}) assumes perfect hydrophobic matching between the membrane protein and the lipid bilayer \cite{harroun99,harroun99b,andersen07,phillips09}. This assumption is expected to break down for a large enough hydrophobic mismatch between the protein and the (undeformed) lipid bilayer \cite{nielsen98,nielsen00,mondal11,mondal12,mondal13,mondal14}, in which case $W_i$ corresponds to the effective hydrophobic thickness of the membrane protein. Unless indicated otherwise, we use $W_i=3.8$~nm for the numerical calculations described here, which approximately corresponds to \cite{ursell08,elmore03} the hydrophobic thickness of the observed structure of closed pentameric MscL \cite{chang98}. 
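For the parameter values quoted above, Eq.~(\ref{bc1}) gives a small positive hydrophobic mismatch at the bilayer-protein boundary; a one-line check:

```python
# Boundary mismatch of Eq. (bc1): U_i = (W_i - 2a)/2, using the hydrophobic
# thickness quoted for closed pentameric MscL.
a = 1.6    # nm, one-half the unperturbed bilayer hydrophobic thickness
W = 3.8    # nm, protein hydrophobic thickness

U = 0.5 * (W - 2.0 * a)
print(f"U_i = {U:.2f} nm")   # positive: the bilayer is locally thickened
```

That is, with $W_i=3.8$~nm and $2a=3.2$~nm the thickness deformation field takes the boundary value $U_i = 0.3$~nm, corresponding to a local expansion of the bilayer hydrophobic thickness by $2U_i = 0.6$~nm at the protein boundary.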
Following previous work on MscL-induced lipid bilayer thickness deformations employing the cylinder model of integral membrane proteins \cite{wiggins04,wiggins05,ursell07,ursell08,phillips09}, we use $R_i=2.3$~nm, which yields an area of the transmembrane protein cross section consistent with the observed structure of closed pentameric MscL \cite{chang98}. A number of different choices for the boundary condition in Eq.~(\ref{bc2f}) have been investigated \cite{huang86,helfrich90,nielsen98,nielsen00,harroun99,harroun99b,partenskii02,partenskii03,partenskii04,bitbol12,kim12,lee13,brannigan06,brannigan07,west09}. In particular, $U_i^\prime$ may be chosen based on experimental observations or molecular dynamics simulations, or may be regarded as a free parameter to be fixed as part of the energy minimization procedure. We follow here previous theoretical work on the experimental phenomenology of gramicidin channels \cite{huang86,harroun99,harroun99b} and MscL \cite{wiggins04,wiggins05,ursell07,ursell08,phillips09} which suggests that, to a first approximation, $U_i^\prime=0$. \subsubsection{Crown model} \label{secCrown} The hydrophobic thickness of integral membrane proteins is generally expected to vary along the bilayer-protein interface \cite{sonntag11,krepkiy09,mondal11,mondal12,mondal13,mondal14}, yielding anisotropic bilayer thickness deformations and, in the case of two or more membrane proteins in sufficiently close proximity, directional interactions \cite{CAH2013a}. To study the generic effects of a variable protein hydrophobic thickness on bilayer-mediated protein interactions we replace the constant $U_i$ in Eq.~(\ref{bc1f}) by \cite{CAH2013a} \begin{equation} \label{VarU} U_{i}(\theta_i)=U_i^0+ \delta_i \cos s \left(\theta_i-\omega_i\right)\,, \end{equation} where $U_i^0$ is the average hydrophobic mismatch, $\delta_i$ is the magnitude of mismatch modulations, $s$ is the protein symmetry, and $\omega_i$ parametrizes the orientation of protein $i$. 
For each bilayer leaflet, Eq.~(\ref{VarU}) yields a periodic modulation of the protein hydrophobic surface which resembles the shape of a crown [Fig.~\ref{figIllust}(b)], and we therefore refer to Eq.~(\ref{VarU}) as the crown model of integral membrane proteins. For our numerical calculations we use the values $U_i^0=-0.1$~nm, $\delta_i=0.5$~nm, and $s=5$ in Eq.~(\ref{VarU}), and vary $\omega_i$ to explore bilayer thickness-mediated interactions for a range of relative protein orientations. We choose all other parameter values as described for the cylinder model of membrane proteins. Even for non-interacting membrane proteins, this parametrization of the crown model in Eq.~(\ref{VarU}) yields a maximum magnitude of the gradient of bilayer thickness deformations $\approx 1$, and therefore produces thickness deformations which lie at the limit of applicability of the leading-order energy in Eq.~(\ref{energy}). Furthermore, for interacting membrane proteins we generally find with this parametrization of the crown model that the maximum magnitude of the gradient of bilayer thickness deformations $>1$. As a result, the numerical estimates of the thickness deformation energy obtained for this parametrization of Eq.~(\ref{VarU}) are of limited physical significance. We allow here for such large magnitudes of the gradient of bilayer thickness deformations in order to explore the mathematical limits of applicability of our analytic and numerical solution procedures (see Sec.~\ref{secCrownResults}). \subsubsection{Clover-leaf model} \label{secClover} Membrane structural biology has produced a rich and diverse picture of membrane protein shape, which suggests \cite{spencer02,takamori06,yun11,linden12} that integral membrane proteins can occur in a variety of different oligomeric states and transmembrane shapes. 
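The crown-model parametrization above can be checked directly against the color scale of Fig.~\ref{figIllust}; a short sketch evaluating Eq.~(\ref{VarU}) over one period (the orientation $\omega_i = 0$ is an arbitrary choice):

```python
import math

# Range of the crown-model boundary profile, Eq. (VarU), for the
# parametrization quoted above: U0 = -0.1 nm, delta = 0.5 nm, s = 5.
U0, delta, s, omega = -0.1, 0.5, 5, 0.0

U = [U0 + delta * math.cos(s * (2.0 * math.pi * k / 3600 - omega))
     for k in range(3600)]
print(f"U(theta) ranges over [{min(U):.2f}, {max(U):.2f}] nm")
```

The boundary mismatch sweeps the range $[U_i^0-\delta_i,\,U_i^0+\delta_i]=[-0.6,\,0.4]$~nm, which coincides with the color-scale limits $u_{\text{min}}=-0.6$~nm and $u_{\text{max}}=0.4$~nm used in Fig.~\ref{figIllust}.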
Distinct oligomeric states of membrane proteins generally yield distinct symmetries of the protein cross section which, in turn, induce distinct symmetries of lipid bilayer thickness deformations \cite{mondal11,mondal12,mondal13,mondal14,CAH2013a,CAH2013b,OWC1,OPWC1}. The resulting non-trivial structure of bilayer thickness deformations can yield substantial deviations from the energy cost of protein-induced bilayer thickness deformations implied by the cylinder model of integral membrane proteins. In particular, for MscL it has been found \cite{CAH2013a,CAH2013b,OWC1,OPWC1} that the elastic energy of protein-induced bilayer thickness deformations provides a signature of the protein oligomeric state, with distinct MscL oligomeric states yielding distinct MscL gating characteristics and directional bilayer-mediated protein interactions. A simple coarse-grained model of the cross sections of a diverse range of membrane proteins is provided by the clover-leaf model \cite{CAH2013a,CAH2013b,OWC1} \begin{equation} \label{boundCclover} C_{i}(\theta_i)=R_i \left[1+\epsilon_i \cos s \left(\theta_i-\omega_i\right) \right]\,, \end{equation} where $\epsilon_i$ parametrizes the magnitude of the deviation of the protein cross section from a circle [Fig.~\ref{figIllust}(c)]. In particular, the structure of pentameric MscL observed in \textit{Mycobacterium tuberculosis} \cite{chang98} suggests \cite{CAH2013a,CAH2013b,OWC1} $s=5$, $R_i=2.27$~nm, and $\epsilon_i=0.22$, which we use for all the numerical calculations involving clover-leaf shapes described here. In general, the hydrophobic thickness of integral membrane proteins is expected to vary along the boundaries of clover-leaf shapes.
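The clover-leaf parametrization can be compared quantitatively with the cylinder model: for a boundary curve of the form in Eq.~(\ref{boundCclover}) the enclosed area is $\pi R_i^2 (1+\epsilon_i^2/2)$, so the quoted values $R_i=2.27$~nm and $\epsilon_i=0.22$ give a cross-sectional area close to that of the cylinder model with $R_i=2.3$~nm. A short numerical check:

```python
import math

# Cross-sectional area enclosed by the clover-leaf boundary curve,
# Eq. (boundCclover), for the pentameric MscL parametrization quoted above.
R, eps, s = 2.27, 0.22, 5
n = 100000
dtheta = 2.0 * math.pi / n

# A = (1/2) * integral of C(theta)^2 dtheta; analytically pi*R^2*(1 + eps^2/2)
area_num = 0.5 * sum((R * (1.0 + eps * math.cos(s * k * dtheta))) ** 2
                     for k in range(n)) * dtheta
area_exact = math.pi * R**2 * (1.0 + eps**2 / 2.0)

# Cylinder-model cross section of radius 2.3 nm, for comparison
area_cyl = math.pi * 2.3**2
print(f"clover-leaf: {area_num:.2f} nm^2, cylinder: {area_cyl:.2f} nm^2")
```

Both areas are approximately $16.6$~nm$^2$, so the two models describe transmembrane cross sections of essentially equal size and differ only in shape.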
However, in order to isolate the effect of anisotropy in protein shape on bilayer thickness-mediated protein interactions we focus here on the simpler scenario of a constant hydrophobic thickness, and use for the clover-leaf model of integral membrane proteins the same boundary conditions on $u$ along the bilayer-protein interface as for the cylinder model [see Eqs.~(\ref{bc1f}) and~(\ref{bc2f})]. \section{Analytic solution} \label{secAnalyticSol} Building on earlier work on the lipid bilayer thickness deformations induced by cylindrical membrane proteins \cite{huang86,dan93,dan94,aranda96,dan98} and bilayer curvature-mediated interactions between conical membrane proteins in the far-field limit \cite{goulian93,weikl98}, we develop in this section analytic solutions \cite{CAH2013a,CAH2013b} of the bilayer thickness-mediated interactions between integral membrane proteins at arbitrary protein separations and relative orientations. To solve for the thickness-mediated interactions implied by Eq.~(\ref{energy}) for two membrane proteins, we employ a two-center bipolar coordinate system (see Fig.~\ref{figBiPol}). For the sake of simplicity, we assume in Fig.~\ref{figBiPol} that the two proteins have circular cross sections with radii $R_{1,2}$. We relax this assumption below to capture interactions between clover-leaf shapes. To mathematically relate the polar coordinates $(r_{1,2},\theta_{1,2})$ centered about proteins 1 and 2, we note from Fig.~\ref{figBiPol} the bipolar coordinate transformations \begin{equation} r_2=\left(d^2+r_1^2+2 d r_1 \cos \theta_1 \right)^{1/2}\,, \end{equation} $\cos \theta_2 = \left(d+r_1 \cos \theta_1\right)/r_2$, and $\sin \theta_2 = \left(r_1 \sin \theta_1\right)/r_2$. The corresponding transformations for $r_1$ and $\sin \theta_1$ are symmetric in the protein indices, but $\cos \theta_1 = - \left(d-r_2 \cos \theta_2\right)/r_1$. \begin{figure}[t!] 
\includegraphics[width=\columnwidth]{fig2} \caption{(Color online) Two-center bipolar coordinate system for two membrane proteins with circular cross sections of radii $R_{1,2}$ separated by a center-to-center distance $d$ along the $x$-axis. The $x$-$y$ plane corresponds to the reference plane used in the Monge representation of surfaces. Expressions for $(r_1,\theta_1)$ in terms of $(r_2,\theta_2)$ can be obtained by considering the coordinates of the red (left) point, and expressions for $(r_2,\theta_2)$ in terms of $(r_1,\theta_1)$ can be obtained by considering the coordinates of the blue (right)~point.} \label{figBiPol} \end{figure} We solve the Euler-Lagrange equation~(\ref{genBih}) by making the ansatz \cite{weikl98,CAH2013a} \begin{equation} \label{genSol} \bar u=\bar u_1(r_1,\theta_1)+\bar u_2(r_2,\theta_2)\,, \end{equation} where the $\bar u_i(r_i,\theta_i)$ are the solutions of Eq.~(\ref{genBih}) for a single protein $i=1,2$, which are of the form \cite{huang86,zauderer83} \begin{equation} \label{constructGenSol} \bar u_i(r_i,\theta_i)=f_i^+(r_i,\theta_i) + f_i^-(r_i,\theta_i)\,, \end{equation} where the $f_i^\pm$ are solutions of the Helmholtz equations \begin{equation} \nabla^2 f_i^\pm = \nu_{\pm} f_i^\pm\,. \end{equation} For the exterior of a circle of radius $R_i$, the above Helmholtz equations are readily solved by separation of variables \cite{boas83,zauderer83}. 
Thus, the general single-protein solution of Eq.~(\ref{genBih}) can be constructed from \begin{eqnarray} \nonumber f_i^\pm(r_i,\theta_i) &=& A_{i,0}^\pm K_0(\sqrt{\nu_\pm} r_i)+ C_{i,0}^\pm I_0(\sqrt{\nu_\pm} r_i) \\ && \label{genSolSingle} +\sum_{n=1}^\infty \left\{\mathcal{A}_{i,n}^\pm + \mathcal{B}_{i,n}^\pm+ \mathcal{C}_{i,n}^\pm+ \mathcal{D}_{i,n}^\pm\right\} \,, \quad \end{eqnarray} where $A_{i,0}^\pm$ and $C_{i,0}^\pm$ are constants, the Fourier-Bessel terms \begin{eqnarray} \mathcal{A}_{i,n}^\pm&=& A_{i,n}^\pm K_n(\sqrt{\nu_\pm} r_i) \cos n \theta_i\,,\\ \mathcal{B}_{i,n}^\pm&=& B_{i,n}^\pm K_n(\sqrt{\nu_\pm} r_i) \sin n \theta_i\,,\\ \mathcal{C}_{i,n}^\pm&=& C_{i,n}^\pm I_n(\sqrt{\nu_\pm} r_i) \cos n \theta_i\,,\\ \mathcal{D}_{i,n}^\pm&=& D_{i,n}^\pm I_n(\sqrt{\nu_\pm} r_i) \sin n \theta_i\,, \end{eqnarray} the $I_j$ and $K_j$ with $j \geq 0$ are the modified Bessel functions of the first and second kind, and $A_{i,n}^\pm$, $B_{i,n}^\pm$, $C_{i,n}^\pm$, $D_{i,n}^\pm$ with $n \geq 1$ are constants. Assuming that $\bar u \to 0$ as $r_i \to \infty$ \cite{huang86,nielsen98,nielsen00}, we have $C_{i,0}^\pm=0$ and $C_{i,n}^\pm = D_{i,n}^\pm =0$ for $n \geq 1$, and Eq.~(\ref{genSolSingle}) reduces to \cite{CAH2013b} \begin{equation} \label{genSolSingle2} f_i^\pm(r_i,\theta_i) = A_{i,0}^\pm K_0(\sqrt{\nu_\pm} r_i) +\sum_{n=1}^N \left\{\mathcal{A}_{i,n}^\pm + \mathcal{B}_{i,n}^\pm\right\} \,, \end{equation} where $N \to \infty$ corresponds to the full single-protein solution. If the boundary conditions on $u$ along the bilayer-protein interfaces are symmetric under $y \to - y$ in Fig.~\ref{figBiPol}, we have $B_{i,n}^\pm=0$ for $n \geq 1$. Substitution of Eq.~(\ref{constructGenSol}) with Eq.~(\ref{genSolSingle2}) into Eq.~(\ref{genSol}) yields the solution of the thickness deformations induced by two membrane proteins.
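Each Fourier-Bessel term above can be verified numerically to solve its Helmholtz equation. The following Python sketch is purely illustrative and not part of the analysis: it assumes a real value of $\nu$ (in general the $\nu_\pm$ form a complex-conjugate pair) and checks that $K_n(\sqrt{\nu}\, r)\cos n\theta$ satisfies $\nabla^2 f = \nu f$ in polar coordinates via central differences.

```python
import numpy as np
from scipy.special import kv  # modified Bessel function of the second kind

nu, n = 1.7, 3                # assumed illustrative real nu; order n
k = np.sqrt(nu)

def f(r, th):
    """Single Fourier-Bessel term K_n(sqrt(nu) r) cos(n theta)."""
    return kv(n, k * r) * np.cos(n * th)

# polar Laplacian f_rr + f_r/r + f_thth/r^2 via central differences
r0, th0, h = 2.0, 0.4, 1e-4
f_rr = (f(r0 + h, th0) - 2 * f(r0, th0) + f(r0 - h, th0)) / h**2
f_r = (f(r0 + h, th0) - f(r0 - h, th0)) / (2 * h)
f_tt = (f(r0, th0 + h) - 2 * f(r0, th0) + f(r0, th0 - h)) / h**2
lap = f_rr + f_r / r0 + f_tt / r0**2

assert np.isclose(lap, nu * f(r0, th0), rtol=1e-5)  # Helmholtz: lap f = nu f
```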
To use Eq.~(\ref{EvalEnergyLine}) to evaluate the elastic energy associated with these thickness deformations, and to impose suitable boundary conditions along the bilayer-protein interfaces, we recast---along the bilayer-protein boundary associated with protein 2---$\bar u_1(r_1,\theta_1)$ in terms of $r_2$, $\sin \theta_2$, and $\cos \theta_2$, and vice versa. For protein 2, this is achieved \cite{mathematica11} by first expanding the bipolar coordinate transformations for $r_1$, $\sin \theta_1$, and $\cos \theta_1$, and then the expression for $\bar u_1(r_1,\theta_1)$ in Eq.~(\ref{constructGenSol}), in terms of $r_2^\prime = r_2/d$ up to some order $M$ in $r_2^\prime$. Steric constraints mandate $d>R_1+R_2$ and, hence, $r_2^\prime<1$ along the bilayer-protein boundary associated with protein 2. Following a similar procedure for protein 1, Eq.~(\ref{genSol}) yields explicit expressions for $\bar u$ in terms of $(r_2,\theta_2)$ in the vicinity of protein 2 and in terms of $(r_1,\theta_1)$ in the vicinity of protein 1. We note that expansion of $\bar u_1(r_1,\theta_1)$ around protein 2 up to order $M$ in $r_2^\prime$ produces angular variations in $\theta_2$ up to $\sin M \theta_2$ and $\cos M \theta_2$. We set $M=N$ to ensure that these ``secondary'' angular variations, which are introduced into the general solution in Eq.~(\ref{genSol}) via expansion of the bipolar coordinate transformations, are of the same maximum order as the angular variations captured directly by the Fourier-Bessel series in Eq.~(\ref{genSolSingle2}) \cite{footnote}. The expansions described above yield explicit expressions for $\bar u$ in terms of $(r_1,\theta_1)$ and $(r_2,\theta_2)$ in the vicinity of proteins 1 and 2, respectively.
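The bipolar coordinate transformations underlying these expansions are easy to verify numerically. The following Python sketch (an illustration only, with an arbitrary test point) places protein 1 at the origin and protein 2 at $(-d,0)$, one placement consistent with the signs of the transformations quoted above, and checks them against a direct Cartesian evaluation.

```python
import numpy as np

d = 5.0                        # center-to-center distance (arbitrary units)
p1 = np.array([0.0, 0.0])      # protein 1 center
p2 = np.array([-d, 0.0])       # protein 2 center (one placement consistent
                               # with the signs of the quoted transformations)

r1, th1 = 2.3, 0.7             # arbitrary test point, polar about protein 1
point = p1 + r1 * np.array([np.cos(th1), np.sin(th1)])

# bipolar transformations quoted in the text
r2 = np.sqrt(d**2 + r1**2 + 2 * d * r1 * np.cos(th1))
cos_th2 = (d + r1 * np.cos(th1)) / r2
sin_th2 = r1 * np.sin(th1) / r2

# direct Cartesian evaluation about protein 2
vec = point - p2
assert np.isclose(r2, np.linalg.norm(vec))
assert np.isclose(cos_th2, vec[0] / r2)
assert np.isclose(sin_th2, vec[1] / r2)
# inverse relation: cos(th1) = -(d - r2 cos(th2)) / r1
assert np.isclose(np.cos(th1), -(d - r2 * cos_th2) / r1)
```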
Thus, the expression for the thickness deformation energy in Eq.~(\ref{EvalEnergyLine}) can be written as \begin{eqnarray} G&=& -\frac{1}{2} R_1 \int_0^{2 \pi} d \theta_1 g_1(\theta_1) -\frac{1}{2} R_2 \int_0^{2 \pi} d \theta_2 g_2(\theta_2)\,,\nonumber \\&& \label{intGder} \end{eqnarray} where the overall minus signs arise because the bilayer normal vectors point towards decreasing $r_i$ along the bilayer-protein interfaces, and the boundary energy densities \begin{eqnarray} \label{Eanaytic1} g_1(\theta_1)&=& \left[K_b \frac{\partial \bar u}{\partial r_1} \nabla_1^2 \bar u-K_b \bar u \frac{\partial}{\partial r_1}\nabla_1^2 \bar u+\tau \bar u \frac{\partial \bar u}{\partial r_1}\right]_{r_1=R_1}\,,\nonumber\\&&\\ \label{Eanaytic2} g_2(\theta_2)&=& \left[K_b \frac{\partial \bar u}{\partial r_2} \nabla_2^2 \bar u-K_b \bar u \frac{\partial}{\partial r_2}\nabla_2^2 \bar u+\tau \bar u \frac{\partial \bar u}{\partial r_2} \right]_{r_2=R_2}\nonumber\\&& \end{eqnarray} are evaluated using $\bar u(r_1,\theta_1)$ and $\bar u(r_2,\theta_2)$, respectively, in which we have noted that $\left|\mathbf{\hat n} \cdot \nabla\right|=\frac{\partial}{\partial r}$ along the circumference of a circle, and the Laplace operators \begin{equation} \nabla_i^2 = \frac{\partial^2}{\partial r_i^2}+\frac{1}{r_i} \frac{\partial}{\partial r_i}+\frac{1}{r_i^2} \frac{\partial^2}{\partial \theta_i^2} \end{equation} in polar coordinates, where $i=1,2$. As a result of our expansions of the bipolar coordinate transformations, $\bar u$ around proteins 1 and 2 only depends on $\theta_{1,2}$ through linear sums over $\sin j \theta_{1,2}$ and $\cos j \theta_{1,2}$ with $j \geq 0$, and it is therefore straightforward \cite{mathematica11} to analytically evaluate the angular integrals in Eq.~(\ref{intGder}), resulting in an algebraic expression for $G$. 
In the case of two interacting membrane proteins, the general solution in Eq.~(\ref{genSol}) contains the $4 (2N+1)$ coefficients $A_{i,0}^\pm$, $A_{i,n}^\pm$, and $B_{i,n}^\pm$ with $i=1,2$ and $n=1,\dots,N$. These coefficients are determined by the boundary conditions through the linear system of equations \begin{equation} \label{matrixEq} \mathbf{M} \mathbf{c} = \mathbf{b}\,, \end{equation} where the vector $\mathbf{c}$ is of length $4 (2N+1)$ and contains each coefficient appearing in Eq.~(\ref{genSol}) as a separate element. The vector $\mathbf{b}$ contains the boundary conditions on $\bar u$ and its radial derivatives, at proteins 1 and 2, at each order in $\sin j \theta$ and $\cos j \theta$ with $j \geq 0$ as separate elements. At $j=0$, we have two boundary conditions for each protein, yielding four elements in $\mathbf{b}$. Similarly, for $j \geq 1$ we have four boundary conditions at each order in $j$ for each protein yielding, in total, the $4 (2N+1)$ independent boundary conditions required to fix the $4 (2N+1)$ independent coefficients appearing in Eq.~(\ref{genSol}). Finally, the rows of the matrix $\mathbf{M}$ are constructed from the coefficients of $\sin j \theta$ and $\cos j \theta$ with $j \geq 0$ in Eq.~(\ref{genSol}) and their radial derivatives, at proteins 1 and 2, with each column corresponding to a particular coefficient. For two proteins one therefore obtains four rows at $j=0$ and eight rows at each $j>0$ yielding, as required, a square matrix of order $4 (2N+1)$. Solution of Eq.~(\ref{matrixEq}) for the coefficients $\mathbf{c}$ \cite{mathematica11} yields \cite{CAH2013a}, as $N \to \infty$, the exact thickness deformations $u$ in Eqs.~(\ref{transfu}) and~(\ref{genSol}) induced by arbitrary protein configurations and, via Eq.~(\ref{intGder}), the associated thickness deformation energy $G$.
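The structure of Eq.~(\ref{matrixEq}) is most transparent in the simplest case of a single cylindrical protein with $N=0$, for which only the two coefficients $A^\pm_{0}$ survive and $\mathbf{M}$ is a $2\times 2$ matrix built from $K_0$ and its radial derivative. The Python sketch below is illustrative only: it assumes real decay constants $\nu_\pm$ (as arise at sufficiently large tension; in general the $\nu_\pm$ are complex conjugates) and purely illustrative parameter values.

```python
import numpy as np
from scipy.special import kv

nu_p, nu_m = 2.0, 0.5          # assumed real decay constants (illustrative)
kp, km = np.sqrt(nu_p), np.sqrt(nu_m)
R, U = 2.5, -0.3               # protein radius and boundary value (assumed)

def dK0(k, r):
    """d/dr K_0(k r) = -k K_1(k r)."""
    return -k * kv(1, k * r)

# rows: u(R) = U and u'(R) = 0; columns: coefficients A+ and A-
M = np.array([[kv(0, kp * R), kv(0, km * R)],
              [dK0(kp, R),    dK0(km, R)]])
b = np.array([U, 0.0])
Ap, Am = np.linalg.solve(M, b)

u = lambda r: Ap * kv(0, kp * r) + Am * kv(0, km * r)
assert np.isclose(u(R), U)                       # mismatch boundary condition
eps = 1e-6                                       # numerical slope at r = R
assert abs((u(R + eps) - u(R - eps)) / (2 * eps)) < 1e-6
```

The same bookkeeping, with Fourier-Bessel rows at each order $n \leq N$ and columns for all four coefficient families, yields the full $4(2N+1)$-dimensional system of the two-protein problem.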
However, to make the above solution procedure analytically tractable it is, in practice, necessary to truncate the respective Fourier-Bessel series, and expansions of the bipolar coordinate transformations, at some finite value of $N$. Such a truncation relies on the assumption that, beyond $N$, angular variations in Eq.~(\ref{genSol}) can be neglected. The validity of this assumption for a given value of $N$ can be confirmed \cite{CAH2013a} by systematically including higher-order terms. For large $N$, it can be convenient to substitute numerical values for all model parameters, and to numerically solve for $\mathbf{c}$ in Eq.~(\ref{matrixEq}). The above analytic solution procedure can be implemented directly for the boundary conditions associated with the cylinder and crown models of membrane proteins discussed in Secs.~\ref{secCylinder} and~\ref{secCrown}. For the clover-leaf model discussed in Sec.~\ref{secClover}, suitable (approximate) boundary conditions can be obtained perturbatively from Eq.~(\ref{boundCclover}) \cite{CAH2013a,CAH2013b} by expanding the left-hand sides of Eqs.~(\ref{bc1f}) and~(\ref{bc2f}) in terms of the small parameter $\epsilon_i$ \cite{kim00}, and noting that \begin{equation} \label{trigIdentTrimer} \cos s \left(\theta_i-\omega_i \right)=\cos s \theta_i \cos s \omega_i+ \sin s \theta_i \sin s \omega_i\,.
\end{equation} To leading order in $\epsilon_i$, Eqs.~(\ref{bc1f}) and~(\ref{bc2f}) are then given~by \begin{align} \label{genBC1trimerFin} &\bar u(R_i,\theta_i)+F_i(R_i) \cos s \theta_i+G_i(R_i) \sin s \theta_i=U_i+\frac{\tau a}{K_t}\,,\\ &\frac{\partial \bar u(r_i,\theta_i)}{\partial r_i}\bigg|_{r_i=R_i}+F_i^\prime(R_i) \cos s \theta_i+G_i^\prime(R_i) \sin s \theta_i=U_i^\prime\,, \label{genBC2trimerFin} \end{align} where \begin{eqnarray} \label{genBC1trimerFin2} F_i(r_i)&=&R_i \epsilon_i \frac{\partial \bar u(r_i,\theta_i)}{\partial r_i} \cos s \omega_i\,, \\ \label{genBC2trimerFin2} G_i(r_i)&=&R_i \epsilon_i \frac{\partial \bar u(r_i,\theta_i)}{\partial r_i} \sin s \omega_i\,. \end{eqnarray} Note that, for protein configurations which satisfy $\sin s \omega_i=0$, only cosine modes must be considered in the above expressions because, for such configurations, the arrangement in Fig.~\ref{figBiPol} is symmetric under $y \to -y$. For the sake of simplicity, we approximate $\bar u$ in Eqs.~(\ref{genBC1trimerFin2}) and~(\ref{genBC2trimerFin2}) by only including the rotationally symmetric ``background'' fields about proteins 1 and 2 in Eq.~(\ref{genSol}). Bilayer thickness-mediated interactions in the clover-leaf model of integral membrane proteins can then be analyzed following the same steps as for the cylinder and crown models, but with the additional approximations inherent in Eqs.~(\ref{genBC1trimerFin}) and~(\ref{genBC2trimerFin}). \section{Numerical solution} \label{secNumericalSol} The FE framework provides a versatile numerical approach for handling protein-induced lipid bilayer deformations in crowded membranes. We have developed a general FE scheme \cite{OWC1,OPWC1} for the numerical study of bilayer-protein interactions which allows reliable and efficient minimization of Eq.~(\ref{energy}) for hundreds of interacting integral membrane proteins with complicated shapes and boundary conditions. 
To complement our FE approach, we have also developed an FD scheme for the minimization of Eq.~(\ref{energy}). In this section we provide a detailed description of our FE and FD solution procedures, which permit minimization of Eq.~(\ref{energy}) for effectively arbitrary protein shapes, separations, and orientations. While the analytic solution described in Sec.~\ref{secAnalyticSol} allows for infinitely large system sizes, numerical solutions are necessarily restricted to finite solution domains. With the exception of special cases such as provided by periodic systems \cite{OPWC1}, numerical minimization of Eq.~(\ref{energy}) therefore relies on the assumption that the energy cost of thickness deformations decays sufficiently rapidly at large distances from the proteins so that finite size effects can be neglected. This assumption is violated for finite membrane tensions in Eq.~(\ref{energy}), in which case the magnitude of $G_1$ in Eq.~(\ref{energyG1}) increases with bilayer area and, indeed, becomes infinite in the limit of infinitely large lipid bilayers. However, $G_1$ does not contribute to the energy cost of protein-induced lipid bilayer thickness deformations. Thus, direct comparisons between analytic and numerical solutions of the energy cost of protein-induced lipid bilayer thickness deformations can be made in the non-interacting as well as interacting regimes, even for $\tau>0$: one subtracts from the numerical solution the (finite) value of $G_1$ associated with a specific choice for the size of the solution domain, and (formally) subtracts from the analytic solution the corresponding (infinite) value of $G_1$, as in Eq.~(\ref{EvalEnergyLine}).
Furthermore, as we discuss below, comparisons between analytic and numerical results for the thickness deformation field, rather than the thickness deformation energy, over identical (finite) bilayer areas for the analytic and numerical solutions provide an alternative approach for testing our numerical solution procedures. This approach does not rely on subtracting $G_1$ in Eq.~(\ref{energyG1}). \subsection{Finite elements} \label{secFE} The FE framework \cite{shames1985energy,bathe2006} was developed to permit reliable and computationally efficient numerical solutions of boundary value problems involving large computational domains with complicated boundary shapes and boundary conditions. This makes the FE approach well suited for the calculation of lipid bilayer-mediated protein interactions in crowded membranes with many interacting proteins. Using MscL as a model system, we have shown previously \cite{OWC1,OPWC1} that the FE approach makes it feasible to predict directional bilayer-mediated protein interactions in systems composed of hundreds of integral membrane proteins. While standard FE methods based on Lagrange interpolation functions \cite{shames1985energy} are sufficient to compute the thickness stretch and tension terms in Eq.~(\ref{energy}), the curvature term, being second order in derivatives, requires $C^1$ continuity \cite{bathe2006} and therefore cannot be handled through Lagrange interpolation functions. The discrete Kirchhoff triangle (DKT) method offers an elegant and efficient way to circumvent this limitation \cite{Batoz1980,Bathe1981}. In particular, to bypass $C^1$ continuity, the DKT approach employs a plate theory allowing for transverse shear deformations, in which case $C^0$ continuity is sufficient. The Kirchhoff hypothesis of zero transverse shear is then enforced discretely along the edges of the triangular elements, thus ensuring the conformity of curvatures at element interfaces.
In our FE framework for calculating bilayer thickness-mediated interactions between integral membrane proteins \cite{OWC1,OPWC1} we adopt a hybrid FE approach, in which we combine the DKT formulation for the bending terms \cite{Batoz1980} with standard Lagrange interpolation for the thickness stretch and gradient terms \cite{shames1985energy}. To derive the stiffness matrix associated with this hybrid FE approach, we rewrite Eq.~(\ref{energy}) in Cartesian coordinates, \begin{eqnarray} G &= \frac{1}{2} \int dxdy \left\{ K_b\left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}\right)^2 +\frac{K_t}{a^2} u^2 \nonumber \right. \\*&\left. + \tau \left[\left(\frac{\partial u}{\partial x}\right)^2+\left(\frac{\partial u}{\partial y}\right)^2\right] + \frac{2\tau}{a} u \right\} \, . \label{FEenergy} \end{eqnarray} The variation of Eq.~(\ref{FEenergy}) is given by \begin{equation} \delta G = \int dxdy (\delta \bold{\epsilon} )^T \mathbf{D} \bold{\epsilon} +\int dxdy \frac{\tau}{a} \delta u \, , \end{equation} with the generalized strain vector \begin{equation} \bold{\epsilon}^T = \left[u \quad \frac{\partial u}{\partial x} \qquad \frac{\partial u}{\partial y} \qquad \frac{\partial^2 u}{\partial x^2} \qquad \frac{\partial^2 u}{\partial y^2} \right]\,, \end{equation} and the constitutive matrix \begin{equation} \mathbf{D} = \left[ \begin{array}{ccccc} \frac{K_t}{a^2} & 0 & 0 & 0 & 0 \\ 0 & \tau & 0 & 0 & 0 \\ 0 & 0 & \tau & 0 & 0 \\ 0 & 0 & 0 & K_b & K_b \\ 0 & 0 & 0 & K_b & K_b \\ \end{array} \right] \, . 
\end{equation} While the displacements $u_1$, $u_2$, and $u_3$ of the corner nodes of each FE triangle are sufficient to define Lagrange interpolation functions, the DKT approach requires nine degrees of freedom per triangle, \begin{equation} \vec{U}^T = [u_1 \; \; \theta_{x1} \; \; \theta_{y1} \; \; u_2 \; \; \theta_{x2} \; \; \theta_{y2} \; \; u_3 \; \; \theta_{x3} \; \; \theta_{y3}]\,, \end{equation} where the partial derivatives $\theta_{xi} = u_{i,y}$ and $\theta_{yi}= -u_{i,x}$ with $i=1,2,$ or 3 correspond to rotations at the corner nodes of each FE triangle. We use the strain-displacement transformation matrix \begin{equation} \label{eqdefB} \mathbf{B} = \left[ \begin{array}{c} \mathbf{G}^T \\ \vspace*{0.1cm} \mathbf{G}^T_{,x} \\ \vspace*{0.1cm} \mathbf{G}^T_{,y} \\ \vspace*{0.1cm} \mathbf{H}^T_{x,x} \\ \vspace*{0.1cm} \mathbf{H}^T_{y,y} \end{array} \right] \end{equation} to construct the strain vector $\bold{\epsilon} = \mathbf{B}\mathbf{U}$, where the linear triangular shape functions $\mathbf{G}$ are given in Ref.~\cite{shames1985energy} and the DKT shape functions $\mathbf{H}$ are given in Ref.~\cite{Batoz1980}. Finally, the FE thickness deformation energy is obtained by summing over all finite elements, \begin{equation} G_\text{FE} = \sum_{e\in \text{elements}} \left( \mathbf{U}^e)^T (\mathbf{K}^e \mathbf{U}^e +\mathbf{f}^e \right) \, , \label{eq:enFE} \end{equation} in which the element stiffness matrix $\mathbf{K}^e$ and ``internal tension'' $\mathbf{f}^e$ are given by \begin{eqnarray} \mathbf{K}^e &=& 2A^e \int d\xi d\eta \mathbf{B}^T \mathbf{D} \mathbf{B}\, , \nonumber \\ \label{eq:stifftension} \mathbf{f}^e &=& 2A^e \int d\xi d\eta \frac{\tau}{a}\mathbf{G}\, . \end{eqnarray} The above integrals are performed over the local coordinates $(\xi,\eta)$ using second-order (three points per element) Gaussian quadrature, and scaled by the area $A^e$ of the element. \begin{figure}[t!] 
\includegraphics[width=\columnwidth]{fig3} \caption{(Color online) Thickness deformations $u$ minimizing Eq.~(\ref{energy}) obtained using our FE approach for two clover-leaf pentamers. The triangular mesh (black overlay) indicates the mesh used in the FE calculation, and is generated using the \textsc{frontal} algorithm of the \textsc{gmsh} package \cite{TriangulateRef}. All model parameters were chosen as described in Sec.~\ref{secElasticModel}. } \label{FE1} \end{figure} To enforce the general boundary conditions along the bilayer-protein interfaces in Eqs.~(\ref{bc1f}) and~(\ref{bc2f}) we fix $(u, \theta_x, \theta_y)=(U, (U_t)_{,y}, -(U_t)_{,x})$ for the nodes defining the protein boundaries, where $(U_t)_{,x}$ and $(U_t)_{,y}$ are $x$ and $y$ projections of the thickness gradient $U_t$ along the contour of the bilayer-protein interfaces. For the cylinder and clover-leaf models of protein shape the hydrophobic thickness is constant along the bilayer-protein interface, and we therefore impose $U_t=0$. For the crown model we have $U_t(\theta)=-\delta s \sin(\theta-\omega)$ from the hydrophobic thickness variation in Eq.~(\ref{VarU}). For the nodes defining the outer boundary of the simulation domain we do not constrain $u$ and its derivatives. We assemble the thickness deformation energy in Eq.~(\ref{eq:enFE}) in C++ using the variational mechanics library \textsc{voom} and minimize Eq.~(\ref{eq:enFE}) by employing the \textsc{l-bfgs-b} solver \cite{Zhu1997}. Figure~\ref{FE1} shows a representative thickness deformation profile obtained for two clover-leaf pentamers together with the corresponding triangular mesh used in the FE calculation, which we generate using the \textsc{frontal} algorithm of the \textsc{gmsh} package \cite{TriangulateRef}. 
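The minimization step can be illustrated on a one-dimensional analog of Eq.~(\ref{eq:enFE}): a discretized energy quadratic in nodal values, with clamped boundary nodes enforced through equal lower and upper bounds in \textsc{l-bfgs-b}, as in our FE scheme. The Python sketch below is not the \textsc{voom}-based implementation; the parameter magnitudes are assumed typical values, chosen purely for illustration.

```python
import numpy as np
from scipy.optimize import minimize

Kb, Kt, a, U = 20.0, 60.0, 1.6, -0.3   # assumed typical magnitudes
L, n = 25.0, 251
h = L / (n - 1)

def energy(u):
    """Discretized 1D thickness deformation energy (tau = 0)."""
    upp = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2
    return h * (0.5 * Kb * np.sum(upp**2) + 0.5 * (Kt / a**2) * np.sum(u**2))

def grad(u):
    """Analytic gradient (adjoint of the second-difference stencil)."""
    upp = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2
    g = h * (Kt / a**2) * u.copy()
    g[2:] += h * Kb * upp / h**2
    g[1:-1] -= 2 * h * Kb * upp / h**2
    g[:-2] += h * Kb * upp / h**2
    return g

# clamp u(0) = U and u(h) = U (zero boundary slope) via equal bounds
bounds = [(U, U), (U, U)] + [(None, None)] * (n - 2)
res = minimize(energy, np.zeros(n), jac=grad, method='L-BFGS-B', bounds=bounds)
u = res.x

assert np.isclose(u[0], U)              # clamped boundary value
assert abs(u[n // 2]) < 0.05 * abs(U)   # deformation decays away from protein
```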
We check the accuracy of our FE procedure by comparing the total thickness deformation energies predicted by analytic and FE approaches for the cases in which exact analytic results on protein-induced lipid bilayer thickness deformations are available \cite{huang86,CAH2013a,CAH2013b} (see Sec.~\ref{secAnalyticSol}). As discussed above, such comparisons necessitate subtracting $G_1$ in Eq.~(\ref{energyG1}). To complement these tests, we also compare our analytic and FE approaches using the analytic and FE solutions for the thickness deformation field. This approach for quantifying the level of agreement between analytic and FE approaches does not rely on subtracting $G_1$ in Eq.~(\ref{energyG1}). In particular, following Ref.~\cite{Zienkiewicz1987} we monitor the percentage error in the thickness deformations obtained from the FE approximation, $u_h$, relative to the analytic solution, $u$, \begin{equation} \label{percentageError} \eta_u = 100 \times \frac{||u-u_h||_{L^2}}{||u||_{L^2}}\,, \end{equation} and the corresponding percentage error in curvature deformations, \begin{equation} \eta_{\nabla^2 u} = 100 \times \frac{|u-u_h|_{W^{2,2}}}{|u|_{W^{2,2}}} \,, \label{percentageError2} \end{equation} where the $L^2$ norm \begin{equation} \label{eq:L2} ||u||_{L^2} = \left(\int u^2\; dxdy\right)^\frac{1}{2} \end{equation} and the Sobolev semi-norm \begin{equation} \label{eq:H2} |u|_{W^{2,2}} = \left(\int (\nabla^2u)^2 \;dxdy\right)^\frac{1}{2} \end{equation} are evaluated by numerical quadrature over identical (finite) bilayer areas for the analytic and numerical solutions, using an approximately circular integration domain with radius $\approx 22$~nm. 
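The quadrature underlying Eqs.~(\ref{percentageError})--(\ref{eq:H2}) reduces, on a grid, to weighted sums over the circular integration domain. A small Python sketch with synthetic fields (assumed purely for illustration: a stand-in ``numerical'' field that deviates from the ``analytic'' one by a uniform 1\%, so that $\eta_u=1$ by construction):

```python
import numpy as np

R_dom, m = 22.0, 400                      # integration radius (nm) and grid
x = np.linspace(-R_dom, R_dom, m)
X, Y = np.meshgrid(x, x)
dA = (x[1] - x[0])**2                     # quadrature weight per grid cell
inside = X**2 + Y**2 <= R_dom**2          # circular integration domain

r = np.sqrt(X**2 + Y**2)
u = np.exp(-r)                            # stand-in "analytic" field
u_h = 1.01 * u                            # stand-in "numerical" field

l2 = lambda f: np.sqrt(np.sum(f[inside]**2) * dA)   # Eq. (eq:L2) on the grid
eta_u = 100 * l2(u - u_h) / l2(u)                   # Eq. (percentageError)
assert np.isclose(eta_u, 1.0)
```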
\subsection{Finite differences} \label{secFD} FD methods have been employed previously to study the bilayer thickness deformations induced by gramicidin channels \cite{helfrich90,harroun99b, partenskii03,partenskii04,miloshevsky06}, MscL \cite{ursell07}, G-protein coupled receptors \cite{mondal11,mondal12,mondal14}, and the bacterial leucine transporter \cite{mondal14}. In our FD scheme, we discretize the lipid bilayer domain of interest using a hexagonal grid with $H\times H$ nodes and lattice spacing $h$ [see Fig.~\ref{FD1}(a)]. We denote the nodal values of the thickness deformations by $u_{i,j}$. Taylor series expansion then yields the discretized Laplace operator \begin{eqnarray} \textstyle \nabla^2 u_{i,j} &= &\frac{1}{h^2}\bigg[\frac{2}{3} ( u_{i-1,j+1}+u_{i,j+1}+u_{i-1,j}+u_{i+1,j} \nonumber\\ \label{eqFDLap} \quad \quad && +u_{i, j-1}+u_{i+1,j-1} -6 u_{i,j}) \bigg]\,. \end{eqnarray} Minimization of the thickness deformation energy in Eq.~(\ref{energy}) via solution of the Euler-Lagrange equation~(\ref{genBihpreu}) using FD requires an expression for the discretized biharmonic term $\nabla^4 u_{i,j}$. We obtain $\nabla^4 u_{i,j}$ by applying the discretized Laplace operator in Eq.~(\ref{eqFDLap}) two times, resulting in a 19-point stencil with the coefficients shown in Fig.~\ref{FD1}(a). \begin{figure}[t!] \includegraphics[width=0.97\columnwidth]{fig4} \caption{(Color online) Illustration of the hexagonal grid used for our FD calculations. (a) Nodal values of the discretized thickness deformation field $u_{i,j}$ are indexed by $i$ along the horizontal direction and by $j$ along the oblique direction. The 19-point stencil discretizing the biharmonic operator at position $(i,j)=(0,0)$ is found by multiplying $u_{i,j}$ and nearby nodal values of the thickness deformation field by the indicated coefficients, adding up these contributions, and dividing by $h^4$. 
(b) We choose the protein boundary points (blue circles) to correspond to the nodes closest to the exact protein boundary curve (grey clover-leaf shape; see Sec.~\ref{secClover}), and impose the general boundary condition in Eq.~(\ref{bc1f}) at these nodes. We use a layer of interior points (red squares), together with corresponding exterior points (see main text), to impose the general boundary condition in Eq.~(\ref{bc2f}).} \label{FD1} \end{figure} For our FD calculations we focus on the case of zero membrane tension, for which Eq.~(\ref{energy}) yields the FD thickness deformation energy \begin{equation} \label{energyFD} G_\text{FD} = \frac{\sqrt{3} h^2}{2}\sum_{i,j}\left[ \frac{K_b}{2} \left(\nabla^2 u_{i,j}\right)^2 +\frac{K_t}{2a^2} u_{i,j}^2 \right]\,, \end{equation} in which nodal contributions are scaled with the unit area associated with each node. Collecting nodal values of the thickness mismatch $u_{i,j}$ into a vector $\vec{u}$ of length $H^2$, we recast the energy in Eq.~(\ref{energyFD}) into the matrix form \begin{eqnarray} G_\text{FD} &=& \frac{\sqrt{3} h^2}{2}\left[ \frac{K_b}{2}\left(\vec{L} \vec{u}\right)^T (\vec{L}\vec{u}) +\frac{K_t}{2a^2} \vec{u}^T \vec{u} \right] \nonumber \\ &=& \frac{\sqrt{3} h^2}{2}\left[ \frac{K_b}{2}\left(\vec{u}^T \vec{N} \vec{u}\right) +\frac{K_t}{2a^2} \vec{u}^T \vec{u} \right] \nonumber\\ &\equiv& \vec{u}^T \vec{Q} \vec{u}\,, \label{CalculateGFD} \end{eqnarray} where \begin{equation} \vec{Q}=\frac{\sqrt{3} h^2}{2}\left( \frac{K_b}{2} \vec{N} +\frac{K_t}{2a^2} \vec{I} \right)\,, \end{equation} and $\vec{L}$, $\vec{N}= \vec{L}^T \vec{L}$, and $\vec{I}$ are $H^2\times H^2$ Laplacian, biharmonic, and identity matrices, respectively. In each row, the matrices $\vec{L}$ and $\vec{N}$ have the coefficients of the discrete Laplace and biharmonic operators associated with a single node as their elements, respectively. 
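As a consistency check of the stencil entering $\vec{L}$, the hexagonal Laplacian of Eq.~(\ref{eqFDLap}) is exact for quadratic fields, since the six nearest neighbors on a triangular lattice sit at distance $h$ and sum symmetrically. A short Python sketch (illustrative check only):

```python
import numpy as np

h = 0.3                                       # lattice spacing (arbitrary)
angles = np.pi * np.arange(6) / 3.0           # six nearest-neighbor directions
nbrs = h * np.stack([np.cos(angles), np.sin(angles)], axis=1)

u = lambda p: p[..., 0]**2 + p[..., 1]**2     # u = x^2 + y^2, so Lap u = 4
center = np.array([0.7, -0.2])

# Eq. (eqFDLap): (2/3) [sum over six neighbors - 6 u_center] / h^2
lap = (2.0 / (3.0 * h**2)) * (np.sum(u(center + nbrs)) - 6.0 * u(center))
assert np.isclose(lap, 4.0)
```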
The matrices $\vec{L}$ and $\vec{N}$ are therefore highly sparse, with non-zero elements organized according to the node ordering of the vector $\vec{u}$. To enforce the general bilayer-protein boundary conditions in Eqs.~(\ref{bc1f}) and~(\ref{bc2f}) in our FD scheme we find all grid points at a distance less than $h/2$ from the protein boundary, as illustrated in Fig.~\ref{FD1}(b) for clover-leaf shapes. We take these nodes to define the protein boundary in our FD scheme, and impose the boundary condition in Eq.~(\ref{bc1f}) by setting $u_{i,j}=U$ along the discretized protein boundary. To enforce the boundary condition on $\mathbf{\hat n} \cdot \nabla u$ in Eq.~(\ref{bc2f}) we find, for each protein, the nodes in the interior of the protein boundary curve within a distance $b$ from the protein boundary so that $h/2<b<3h/2$. For each of these interior points we then find a mirror-symmetric point exterior to the protein boundary curve such that interior and exterior points are connected by a line of length $2b$ which is normal to the exact protein boundary curve. In order to satisfy the boundary condition $U_i^\prime=0$ used here, we impose the constraint that the values of $u_{i,j}$ at the interior and exterior points are equal to each other (other choices for the value of $U_i^\prime$ could be implemented following analogous steps). A complication arises here in that the exterior points are typically not grid points. We address this issue by interpolating, for each exterior point, the values of the thickness deformation field at the three nearest grid points. Due to the substantial computational cost associated with our FD scheme, we employ the FD approach here only to study mirror-symmetric protein configurations. For two proteins located at $(\pm d/2,0)$ the size of the solution domain can then be reduced by half by imposing mirror-symmetric boundary conditions along the boundary line $x=0$ (see Fig.~\ref{figBiPol}).
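For a circular boundary the mirror construction described above is explicit: an interior point at distance $b$ inside the boundary is paired with an exterior point at distance $b$ outside, along the boundary normal. A minimal Python sketch of this pairing (circular boundary assumed for simplicity; for clover-leaf shapes the normal is computed from the exact boundary curve):

```python
import numpy as np

R, h = 2.27, 0.1               # protein radius and lattice spacing (assumed)
b = 0.8 * h                    # interior-point distance, h/2 < b < 3h/2
th = 0.6                       # angular position on the boundary

normal = np.array([np.cos(th), np.sin(th)])   # outward boundary normal
interior = (R - b) * normal                   # interior point
exterior = (R + b) * normal                   # mirror-symmetric exterior point

midpoint = 0.5 * (interior + exterior)
assert np.isclose(np.linalg.norm(midpoint), R)                # on the boundary
assert np.isclose(np.linalg.norm(exterior - interior), 2 * b) # separation 2b
# enforcing u(interior) = u(exterior) makes the centered difference across
# the boundary vanish, i.e. the normal derivative U' = 0
```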
For the remaining boundaries of the solution domain we set $u=0$. For all our FD calculations we use a rectangular solution domain of side lengths ($25$, $25\sqrt{3}/2$)~nm with the protein center placed on the longer midline of the rectangle. We check that our results are robust with respect to increases in the size of the solution domain. To minimize the thickness deformation energy we construct the discretized version of the Euler-Lagrange equation~(\ref{genBihpreu}) for $\tau=0$, \begin{equation} \label{eqFDEuler} \vec{Q} \vec{u}=\vec{v}\,, \end{equation} defined for the nodal values $\vec{u}$ of all FD grid points. To enforce the boundary conditions in Eqs.~(\ref{bc1f}) and~(\ref{bc2f}) we adjust the rows of the matrix $\vec{Q}$ in Eq.~(\ref{eqFDEuler}) corresponding to the FD boundary nodes so as to fix the value of $u_{i,j}$ at these nodes, as described above. Accordingly, the vector $\vec{v}$ contains non-zero elements at rows corresponding to the boundary nodes. We solve the linear system in Eq.~(\ref{eqFDEuler}) using the sparse matrix structures and solvers provided by the numerical computing environment \textsc{matlab} \cite{matlab13}. We calculate the corresponding thickness deformation energy using Eq.~(\ref{CalculateGFD}) for the nodal values of all FD grid points lying outside the protein domains. \section{Cylinder model} \label{secCylinder2} In this section we focus on the most straightforward scenario of lipid bilayer thickness deformations induced by cylindrical membrane proteins [Fig.~\ref{figIllust}(a)], and compare numerical results obtained using our FE and FD schemes to the corresponding exact analytic solutions. As discussed above, we subtract $G_1$ in Eq.~(\ref{energyG1}) to compare thickness deformation energies obtained using numerical and analytic solution procedures. 
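The assembly-and-solve sequence can be illustrated on a one-dimensional analog of Eq.~(\ref{eqFDEuler}): build a sparse biharmonic operator, overwrite the boundary rows to fix the boundary values, and solve the resulting linear system. The Python sketch below is illustrative only (assumed parameter magnitudes; the actual calculation uses the hexagonal 2D stencil and \textsc{matlab}'s sparse solvers):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

Kb, Kt, a, U = 20.0, 60.0, 1.6, -0.3    # assumed typical magnitudes
L, n = 25.0, 501
h = L / (n - 1)

# pentadiagonal 1D biharmonic stencil [1, -4, 6, -4, 1] / h^4
D4 = sp.diags([1.0, -4.0, 6.0, -4.0, 1.0], [-2, -1, 0, 1, 2],
              shape=(n, n)) / h**4
Q = (Kb * D4 + (Kt / a**2) * sp.identity(n)).tolil()

v = np.zeros(n)
# overwrite boundary rows: u_0 = u_1 = U (clamped, zero slope) and
# u = 0 at the two far-field nodes
for i, val in [(0, U), (1, U), (n - 2, 0.0), (n - 1, 0.0)]:
    Q[i, :] = 0.0
    Q[i, i] = 1.0
    v[i] = val

u = spsolve(Q.tocsr(), v)
assert np.isclose(u[0], U)                  # boundary value enforced
assert abs(u[n // 2]) < 1e-3 * abs(U)       # deformation decays away
```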
We first consider the bilayer thickness deformations induced by cylindrical membrane proteins in the non-interacting regime of large protein separations, and then discuss the bilayer thickness deformations induced by two interacting cylindrical membrane proteins. \subsection{Non-interacting cylindrical membrane proteins} \label{secSingleCylind} \begin{figure}[t!] \includegraphics[width=\columnwidth]{fig5a}\\ \vspace{0.1cm} \includegraphics[width=\columnwidth]{fig5b} \caption{(Color online) Lipid bilayer thickness deformations due to a single cylindrical membrane protein. (a) Bilayer thickness deformation profile $u$ versus radial coordinate $r$ obtained from the exact analytic solution in Eq.~(\ref{constructGenSol}) for $d \to \infty$ \cite{huang86}, the FE approach, and the FD approach. The grey vertical line indicates the protein boundary. (b) Bilayer thickness deformation energy versus thickness mismatch $U$ obtained using analytic, FE, and FD approaches. The insets show (a) the difference in thickness deformation profile, $\Delta u$, and (b) the percentage difference in thickness deformation energy, $\eta_G$ in Eq.~(\ref{eq:DefetaG}), between the analytic solution and the corresponding results of FE (blue solid curves) and FD (green dashed curves) calculations, respectively. For the FE solution we used an average edge size of the FE mesh $\langle l_\textrm{edge} \rangle= 0.3$~nm, and for the FD solution we used a lattice spacing $h=0.05$~nm. We set $\tau=0$ for both panels. All model parameters were chosen as described in Sec.~\ref{secElasticModel}, and analytic and numerical solutions were obtained as discussed in Secs.~\ref{secAnalyticSol} and~\ref{secNumericalSol}. 
} \label{fig:single_cyl} \end{figure} For a single cylindrical membrane protein, the protein-induced lipid bilayer thickness deformations are rotationally symmetric about the protein, with the exact analytic solution \cite{huang86} corresponding to the zeroth-order terms in the general solution in Eq.~(\ref{constructGenSol}). The radial profile of this exact analytic solution about the membrane protein is governed by zeroth-order modified Bessel functions of the second kind, yielding an approximately exponential decay of thickness deformations with a periodic modulation \cite{huang86,dan93,dan94} and a characteristic length scale of thickness deformations $\lambda=(a^2 K_b/K_t)^{1/4} \approx 1$~nm \cite{phillips09,ursell08} [see Fig.~\ref{fig:single_cyl}(a)]. The thickness deformation profiles obtained using our FE and FD solution procedures are in excellent quantitative agreement with the corresponding analytic solution [Fig.~\ref{fig:single_cyl}(a)]. Computing the percentage difference between numerical and analytic results for the thickness deformation energy, \begin{equation} \label{eq:DefetaG} \eta_G = 100\times \left|\frac{G_\text{numerical}-G_\text{analytic}}{G_\text{analytic}}\right|\,, \end{equation} we find that the thickness deformation energies obtained using analytic, FE, and FD approaches are also in excellent quantitative agreement [Fig.~\ref{fig:single_cyl}(b)]. As expected from scaling arguments \cite{wiggins04,wiggins05,phillips09}, all three solution procedures yield an approximately quadratic dependence of the thickness deformation energy on hydrophobic mismatch. \begin{figure}[t!] \includegraphics[width=\columnwidth]{fig6a}\\ \vspace{0.1cm} \includegraphics[width=\columnwidth]{fig6b} \caption{(Color online) Comparison of analytic and FE solutions for cylindrical membrane proteins. 
(a) Percentage error in thickness deformations in Eq.~(\ref{percentageError}) ($\eta_u$; orange dashed curve) and percentage error in curvature deformations in Eq.~(\ref{percentageError2}) ($\eta_{\nabla^2 u}$; magenta solid curve) versus $\langle l_\textrm{edge} \rangle$ for a single cylindrical membrane protein. Errors in the thickness and curvature deformations decay approximately as $\langle l_\textrm{edge} \rangle^2$ and $\langle l_\textrm{edge} \rangle$, respectively. (b) Percentage difference between analytic and FE results for the thickness deformation energy per cylindrical membrane protein, $\eta_G$ in Eq.~(\ref{eq:DefetaG}), versus $\langle l_\textrm{edge} \rangle$ for a system consisting of two identical cylindrical membrane proteins in the non-interacting regime ($d=14$~nm; red solid curve) and in the strongly interacting regime ($d=5$~nm; blue dashed curve; $N=11$ for the analytic solution). We set $\tau=0$ for both panels. All model parameters were chosen as described in Sec.~\ref{secElasticModel}, and analytic and numerical solutions were obtained as discussed in Secs.~\ref{secAnalyticSol} and~\ref{secNumericalSol}. } \label{fig:cyl_convergence} \end{figure} The convergence of the FE and FD solutions towards the exact analytic solution can be quantified by systematically increasing the spatial resolution of the numerical solution schemes. We find that, for the FE solution, the percentage errors in thickness and curvature deformations in Eqs.~(\ref{percentageError}) and~(\ref{percentageError2}), which are summed over the entire solution domain, monotonically decrease with decreasing average edge size of the FE mesh, $\langle l_\textrm{edge} \rangle$ [see Fig.~\ref{fig:cyl_convergence}(a)]. In particular, the thickness deformation error decreases approximately quadratically with decreasing $\langle l_\textrm{edge} \rangle$, while the curvature deformation error decreases approximately linearly with decreasing $\langle l_\textrm{edge} \rangle$.
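Convergence orders of the kind quoted above can be estimated from error-versus-resolution data by a least-squares fit in log-log space. The following minimal Python sketch is purely illustrative (it is not part of our solution procedures, and the data values are synthetic):

```python
import numpy as np

def convergence_order(resolutions, errors):
    """Estimate p in err ~ C * h^p from a least-squares fit in log-log space."""
    slope, _intercept = np.polyfit(np.log(resolutions), np.log(errors), 1)
    return slope

# Synthetic data mimicking a quadratic decay of the error with the
# average mesh edge size (illustrative values only).
h = np.array([0.6, 0.45, 0.3, 0.15])
err = 0.8 * h**2
p = convergence_order(h, err)  # -> 2.0 (quadratic convergence)
```

The same fit applied to the curvature deformation error would yield a slope close to one, reflecting its approximately linear decay with $\langle l_\textrm{edge} \rangle$.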
As shown in Fig.~\ref{fig:cyl_convergence}(b) (red solid curve), the error in the thickness deformation energy decreases quadratically with decreasing $\langle l_\textrm{edge} \rangle$. While the results in Fig.~\ref{fig:cyl_convergence} were obtained at zero membrane tension, we find that membranes at finite tension yield similar scaling of the errors in the FE thickness deformations, the FE curvature deformations, and the FE thickness deformation energy with $\langle l_\textrm{edge} \rangle$. For the FD solution (red solid curve in Fig.~\ref{fig:cyl_convergence2}) the error in the thickness deformation energy decreases approximately linearly with decreasing lattice spacing $h$. The central-difference Laplacian FD stencil we used here is second-order accurate. The linear convergence is therefore most likely an indication that the FD error is dominated by the enforcement of the slope boundary conditions. Furthermore, we find that, for the parameter values used for Fig.~\ref{fig:cyl_convergence2}, the percentage error in the thickness deformation energy is smaller than $0.5 \%$ for resolutions $h\leq0.2$~nm. Thus, we find that the FE and FD solutions both yield good agreement with the exact analytic solution for high enough spatial resolutions of the numerical solution schemes. However, Figs.~\ref{fig:cyl_convergence}(b) and~\ref{fig:cyl_convergence2} also show that, compared to the FD solution procedure, the FE solution procedure yields accurate results even at relatively low average spatial resolutions, and converges more rapidly towards the exact analytic result for the thickness deformation energy with increasing average spatial resolution. This suggests that the FE solution procedure is more efficient and, for a given average spatial resolution, more accurate than the FD scheme used here. \begin{figure}[t!] 
\includegraphics[width=\columnwidth]{fig7} \caption{(Color online) Percentage difference between analytic and FD results for the thickness deformation energy per cylindrical membrane protein, $\eta_G$ in Eq.~(\ref{eq:DefetaG}), versus lattice spacing, $h$, for a system consisting of two identical cylindrical membrane proteins in the non-interacting regime ($d=14$~nm for the FD solution; red solid curve) and in the strongly interacting regime ($d=5$~nm; blue dashed curve; $N=11$ for the analytic solution). We set $\tau=0$. All model parameters were chosen as described in Sec.~\ref{secElasticModel}, and analytic and numerical solutions were obtained as discussed in Secs.~\ref{secAnalyticSol} and~\ref{secNumericalSol}. } \label{fig:cyl_convergence2} \end{figure} \subsection{Interacting cylindrical membrane proteins} For cylindrical membrane proteins of the same hydrophobic thickness, our analytic and numerical solution schemes imply that, for the bilayer and protein parameter values used here, bilayer thickness-mediated interactions are strongly favorable for protein center-to-center distances smaller than $d\approx 8$~nm (see Fig.~\ref{fig:cylinders}). For intermediate protein separations $d\approx8$--$12$~nm, thickness-mediated interactions are weakly unfavorable. We find that thickness-mediated interactions are practically negligible for protein separations greater than $d \approx 12$~nm or minimum protein edge-to-edge separations greater than $\approx 7 \lambda$. \begin{figure}[t!] \includegraphics[width=\columnwidth]{fig8} \caption{(Color online) Thickness deformation energy per protein, $G$, for two cylindrical membrane proteins versus center-to-center protein distance, $d$, calculated analytically at $N=3$, $N=5$, $N=7$, and $N=11$ in Eq.~(\ref{genSol}), and numerically using our FE and FD solution procedures. 
The inset shows the percentage difference in thickness deformation energy, $\eta_G$ in Eq.~(\ref{eq:DefetaG}), between the analytic solution at $N=11$ and the corresponding results of FE (blue dashed curve) and FD (orange solid curve) calculations. For the FE solution we used $\langle l_\textrm{edge} \rangle \approx 0.27$~nm and for the FD solution we used $h=0.05$~nm. We set $\tau=0$. All model parameters were chosen as described in Sec.~\ref{secElasticModel}, and analytic and numerical solutions were obtained as discussed in Secs.~\ref{secAnalyticSol} and~\ref{secNumericalSol}. } \label{fig:cylinders} \end{figure} The non-monotonic dependence of bilayer thickness-mediated interactions on protein separation (Fig.~\ref{fig:cylinders}) can be understood \cite{CAH2014a} by considering how the bilayer thickness deformations due to single isolated membrane proteins would interfere. As noted in Sec.~\ref{secSingleCylind}, the thickness deformation profile induced by a single cylindrical membrane protein relaxes rapidly away from the protein, but overshoots with respect to the unperturbed lipid bilayer thickness \cite{huang86,dan93,dan94} [Fig.~\ref{fig:single_cyl}(a)]. Indeed, the zeroth-order modified Bessel functions of the second kind appearing in the general solution in Eq.~(\ref{genSol}) imply successive expansion and compression zones of the lipid bilayer thickness around each protein. When two identical cylindrical membrane proteins are in close proximity to each other, bilayer thickness deformation zones of the same sign overlap and the overall bilayer deformation footprint of the interacting proteins is strongly reduced compared to the large-$d$ limit, resulting in strongly favorable thickness-mediated protein interactions. In contrast, for intermediate protein separations, the protein-induced bilayer thickness deformation profiles are out of phase, yielding substantial overlap of compressed and expanded bilayer regions. 
This results in frustration of bilayer thickness deformations and produces weakly unfavorable protein interactions. As discussed in Sec.~\ref{secAnalyticSol}, the analytic solution of bilayer thickness-mediated protein interactions in Eq.~(\ref{genSol}) is, in practice, only obtained up to some finite order $n=N$. While modes with $n\geq1$ are irrelevant at large protein separations, modes with non-zero $n$ are essential to correctly account for the thickness deformations induced by interacting proteins. To confirm the validity of the truncated expansion in Eq.~(\ref{genSol}), we compare the analytic solution of thickness-mediated protein interactions to high-resolution numerical solutions obtained by FE and FD methods (Fig.~\ref{fig:cylinders}). At low orders of the analytic solution, $N=3$ and $N=5$, our analytic estimates of the thickness-mediated interaction energy reproduce the qualitative features of the numerical interaction potentials, but exceed the numerical results by several $k_B T$. Increasing the order of the analytic solution to $N=7$ and $N=11$, we obtain convergence of the analytic solution even for small values of $d$. We find that these high-order analytic solutions are in good quantitative agreement with the corresponding FE solution. The agreement between FE and high-order analytic solutions is approximately independent of protein separation [Fig.~\ref{fig:cylinders}(inset)]. In particular, FE and analytic approaches agree similarly well in the non-interacting regime, for which only zeroth-order modes of the analytic solution must be considered, and in the strongly interacting regime, for which, in principle, an infinite number of higher-order modes should be considered in the analytic solution. This suggests that the FE and high-order analytic solutions correctly account for thickness-mediated protein interactions even at very small $d$.
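The role of the truncation order $N$ can be illustrated with an ordinary Fourier-cosine expansion whose exact coefficients are known, $e^{k\cos\theta}=I_0(k)+2\sum_{n\geq1}I_n(k)\cos n\theta$. The Python sketch below is only a generic analogy (it does not evaluate Eq.~(\ref{genSol})), but it shows how the maximum truncation error falls rapidly over the same orders $N$ considered above:

```python
import numpy as np
from scipy.special import iv  # modified Bessel function of the first kind

def truncated_series(theta, k, N):
    """Truncated cosine expansion of exp(k*cos(theta)):
    exp(k cos t) = I_0(k) + 2 * sum_{n>=1} I_n(k) cos(n t)."""
    s = iv(0, k) * np.ones_like(theta)
    for n in range(1, N + 1):
        s += 2.0 * iv(n, k) * np.cos(n * theta)
    return s

theta = np.linspace(0.0, 2.0 * np.pi, 1000)
k = 3.0  # sharper angular variation (larger k) requires more modes
exact = np.exp(k * np.cos(theta))
errs = [np.max(np.abs(truncated_series(theta, k, N) - exact)) for N in (3, 5, 7, 11)]
```

In the same spirit, the sharper the angular variation of the thickness deformation field near closely apposed proteins, the more modes are needed in the truncated expansion.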
Furthermore, we find that, in the strongly interacting regime, the discrepancy between FE and high-order analytic results for the thickness deformation energy decreases approximately quadratically with decreasing $\langle l_\textrm{edge} \rangle$ [blue dashed curve in Fig.~\ref{fig:cyl_convergence}(b)], in agreement with the corresponding result obtained for non-interacting membrane proteins [red solid curve in Fig.~\ref{fig:cyl_convergence}(b)]. While the results in Figs.~\ref{fig:cyl_convergence}(b) and~\ref{fig:cylinders} were obtained with $\tau=0$, we find similar agreement between FE and high-order analytic solutions for finite membrane tensions. In contrast to the FE solution procedure, the discrepancy between FD and high-order analytic solutions tends to increase with decreasing $d$ [Fig.~\ref{fig:cylinders}(inset)]. This can be understood by noting that, in the FD scheme, very small lattice spacings are required to resolve protein-induced lipid bilayer deformations in the strongly interacting regime. As in the case of non-interacting proteins (red solid curve in Fig.~\ref{fig:cyl_convergence2}), the discrepancy between FD and high-order analytic results for the thickness deformation energy decreases approximately linearly with decreasing lattice spacing (blue dashed curve in Fig.~\ref{fig:cyl_convergence2}). As in Sec.~\ref{secSingleCylind}, this points to approximate enforcement of slope boundary conditions as the likely dominant source of error in the FD solutions. Moreover, Fig.~\ref{fig:cyl_convergence2} shows that, consistent with Fig.~\ref{fig:cylinders}, the FD scheme produces larger discrepancies with FE and high-order analytic solutions at smaller $d$ than at larger $d$, independent of the lattice spacing considered. For instance, for small $d$ an overly coarse lattice spacing of $h=0.2$~nm can yield errors in the thickness deformation energy $>2$~$k_B T$, while the same lattice spacing only produces errors $<0.4$~$k_B T$ at large protein separations.
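Consistent with attributing the dominant FD error to the boundary treatment rather than the interior discretization, a standard 5-point central-difference Laplacian of the kind used in our FD scheme is second-order accurate on smooth fields. The following self-contained Python sketch verifies this on a generic test function unrelated to the membrane problem:

```python
import numpy as np

def laplacian_5pt(f, h):
    """Central-difference 5-point Laplacian on the interior of a square grid."""
    return (f[:-2, 1:-1] + f[2:, 1:-1] + f[1:-1, :-2] + f[1:-1, 2:]
            - 4.0 * f[1:-1, 1:-1]) / h**2

def stencil_error(n):
    """Max interior error for u = sin(x)cos(y), whose exact Laplacian is -2u."""
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    X, Y = np.meshgrid(x, x, indexing="ij")
    u = np.sin(X) * np.cos(Y)
    exact = -2.0 * u
    return np.max(np.abs(laplacian_5pt(u, h) - exact[1:-1, 1:-1]))

# Halving the lattice spacing reduces the interior error roughly fourfold.
e1, e2 = stencil_error(41), stencil_error(81)
```

The observed linear, rather than quadratic, convergence of the FD thickness deformation energy therefore indicates that the boundary conditions, not the interior stencil, limit the accuracy.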
In contrast, the convergence of FE and high-order analytic solutions with decreasing $\langle l_\textrm{edge} \rangle$ is not diminished in the interacting regime compared to the non-interacting regime [Fig.~\ref{fig:cyl_convergence}(b)]. \section{Crown model} \label{secCrownResults} The cylinder model of integral membrane proteins \cite{huang86,wiggins05,ursell08,helfrich90,andersen07,phillips09,nielsen98,nielsen00} provides a beautiful ``zeroth-order'' description of thickness-mediated protein interactions, but is not able to capture the discrete symmetries and distinct hydrophobic shapes of membrane proteins suggested by membrane structural biology. As discussed in Sec. \ref{secCrown}, a straightforward way to account for rotational asymmetry of the hydrophobic surface of membrane proteins is to allow for angular variations in protein hydrophobic thickness while maintaining a circular protein cross section \cite{CAH2013a}, resulting in the crown model of membrane proteins [Fig.~\ref{figIllust}(b)]. Angular variations in protein hydrophobic thickness yield rotationally asymmetric distributions of compression and expansion zones about the protein. For two or more such proteins in close enough proximity, the anisotropy of overlapping deformation fields produces directionality of thickness-mediated protein interactions \cite{CAH2013a}, which is expected to affect protein organization and function. \begin{figure}[t!] 
\includegraphics[width=\columnwidth]{fig9} \caption{(Color online) Thickness deformation energy per protein for two crown shapes, $G$, versus center-to-center protein distance, $d$, calculated analytically at $N=11$, and numerically using FE and FD schemes, for the minus-minus configuration [$\omega_1=0^\circ$ and $\omega_2=36^\circ$ in Eq.~(\ref{VarU})], the plus-minus configuration [$\omega_1=0^\circ$ and $\omega_2=0^\circ$ in Eq.~(\ref{VarU})], and a configuration intermediate between the minus-minus and plus-minus configurations [$\omega_1=0^\circ$ and $\omega_2=24^\circ$ in Eq.~(\ref{VarU})]. We only consider here FD solutions for the mirror-symmetric minus-minus protein configuration. The inset shows the percentage difference in thickness deformation energy, $\eta_G$ in Eq.~(\ref{eq:DefetaG}), between the analytic result at $N=11$ and the corresponding results of the FE [blue curves; minus-minus (solid), intermediate (short dashed), and plus-minus (long dashed) configurations] and FD (orange triangles with $\eta_G\approx1.2\%$ at $d=5$~nm) calculations. We used $\langle l_\textrm{edge} \rangle \approx 0.27$~nm for the FE and $h=0.05$~nm for the FD solutions, and set $\tau=0$. All model parameters were chosen as described in Sec.~\ref{secElasticModel}, and analytic and numerical solutions were obtained as discussed in Secs.~\ref{secAnalyticSol} and~\ref{secNumericalSol}. 
} \label{fig:crowns} \end{figure} We used our analytic and numerical solution procedures to determine the thickness deformation energy associated with two crown shapes in the ``minus-minus configuration'' [$\omega_1=0^\circ$ and $\omega_2=36^\circ$ in Eq.~(\ref{VarU})], in which protein boundary regions with minimal hydrophobic thickness face each other at the point of closest protein edge-to-edge separation, in the ``plus-minus configuration'' [$\omega_1=0^\circ$ and $\omega_2=0^\circ$ in Eq.~(\ref{VarU}); see Fig.~\ref{figIllust}(b)], in which protein boundary regions with maximal and minimal hydrophobic thickness face each other at the point of closest protein edge-to-edge separation, and in a protein configuration corresponding to $\omega_1=0^\circ$ and $\omega_2=24^\circ$ in Eq.~(\ref{VarU}), which is intermediate between minus-minus and plus-minus configurations (see Fig.~\ref{fig:crowns}). We find that, in the non-interacting regime, FE and exact analytic solutions are in good quantitative agreement, with the discrepancy in thickness deformation energy $<0.3\%$ for the parameter values used in Fig.~\ref{fig:crowns} [see Fig.~\ref{fig:crowns}(inset)]. In contrast, our FD solution yields more substantial discrepancies with the exact analytic solution. We attribute this to the difficulty of accurately and unambiguously imposing complicated boundary conditions in the FD scheme. In the interacting regime, we find that analytic, FE, and FD solutions predict the same basic qualitative properties of bilayer thickness-mediated interactions between crown shapes (Fig.~\ref{fig:crowns}). Depending on relative protein orientation, thickness-mediated interactions can switch from being strongly favorable to being strongly unfavorable at small protein separations. 
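The labeling of these configurations can be made concrete with a small numerical check. Assuming, for illustration, that the angular variation of the hydrophobic thickness in Eq.~(\ref{VarU}) is a pure fivefold harmonic of the form $U_0+\delta\cos[5(\theta-\omega)]$ (the functional form and parameter values below are assumptions made for this sketch, not the paper's exact expression), rotating one protein by $\omega=36^\circ$ exactly inverts its thickness modulation:

```python
import numpy as np

def crown_mismatch(theta, U0=-0.1, delta=0.05, omega_deg=0.0):
    """Hypothetical fivefold crown boundary condition:
    U(theta) = U0 + delta * cos(5 * (theta - omega))."""
    return U0 + delta * np.cos(5.0 * (theta - np.radians(omega_deg)))

theta = np.linspace(0.0, 2.0 * np.pi, 720, endpoint=False)
u_0 = crown_mismatch(theta, omega_deg=0.0)
u_36 = crown_mismatch(theta, omega_deg=36.0)
# cos(5(theta - 36 deg)) = cos(5*theta - pi) = -cos(5*theta):
# a 36-degree rotation flips the sign of the fivefold modulation.
```

Thus, for pentameric crown shapes, a $36^\circ$ relative rotation interchanges which boundary regions, of maximal or minimal hydrophobic thickness, face the neighboring protein.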
In particular, when the periodic undulations of the thickness deformations induced by the two proteins are in phase in the membrane region separating the two proteins, as in the case of the minus-minus configuration in Fig.~\ref{fig:crowns}, similar patterns of protein-induced compression and expansion zones of the lipid bilayer overlap for small $d$, yielding strongly favorable interactions. In contrast, when the thickness undulations induced by the two proteins are out of phase, as in the case of the plus-minus configuration in Fig.~\ref{fig:crowns}, there is substantial overlap of out-of-phase compression and expansion zones for small $d$, resulting in frustration of bilayer thickness deformations and strongly unfavorable bilayer thickness-mediated interactions. As the relative protein orientation is changed continuously from in-phase to out-of-phase configurations, the interaction potentials change smoothly \cite{CAH2013a} from being favorable to being unfavorable at small $d$, which, as in the case of the intermediate configuration in Fig.~\ref{fig:crowns}, can produce a minimum in the thickness-mediated interaction energy at a characteristic protein separation. On a quantitative level, we find that our FE results on bilayer thickness-mediated interactions between crown shapes are generally in good agreement with our analytic solution (Fig.~\ref{fig:crowns}). In contrast, the FD solution procedure yields more substantial discrepancies with our analytic results. In particular, analytic and FE solutions are in good quantitative agreement for all configurations in Fig.~\ref{fig:crowns} with $d>5.3$~nm, with a discrepancy in thickness deformation energy $<0.5\%$. We find a similar level of quantitative agreement between analytic and FE solutions even for the smallest protein separations considered for the minus-minus configuration in Fig.~\ref{fig:crowns}. \begin{figure}[t!]
\includegraphics[width=\columnwidth]{fig10a}\\ \includegraphics[width=\columnwidth]{fig10b} \caption{(Color online) Comparison of analytic and FE solutions for crown shapes. (a) Percentage error in thickness deformations in Eq.~(\ref{percentageError}) ($\eta_u$; orange dashed curve) and percentage error in curvature deformations in Eq.~(\ref{percentageError2}) ($\eta_{\nabla^2 u}$; magenta solid curve) versus $\langle l_\textrm{edge} \rangle$ for two crown shapes in the minus-minus configuration in the interacting regime at $d=7$~nm. The error is evaluated using the corresponding analytic result at $N=11$. Errors in the thickness and curvature deformations decay approximately as $\langle l_\textrm{edge} \rangle^2$ and $\langle l_\textrm{edge} \rangle$, respectively. (b) Percentage difference between analytic and FE results for the thickness deformation energy per protein, $\eta_G$ in Eq.~(\ref{eq:DefetaG}), versus $\langle l_\textrm{edge} \rangle$ for two crown shapes in the minus-minus configuration in the non-interacting regime ($d=14$~nm for the FE solution; red solid curve) and in the strongly interacting regime ($d=5$~nm; blue dashed curve; $N=11$ for the analytic solution). We set $\tau=1$~$k_B T/\textrm{nm}^2$. All model parameters were chosen as described in Sec.~\ref{secElasticModel}, and analytic and numerical solutions were obtained as discussed in Secs.~\ref{secAnalyticSol} and~\ref{secNumericalSol}. \label{fig:crown_convergence}} \end{figure} The discrepancy between analytic and FE results is most pronounced at very small $d$ in the strongly unfavorable regime of bilayer thickness-mediated interactions, $d<5.3$~nm, for the plus-minus and intermediate configurations in Fig.~\ref{fig:crowns}, where the interaction energy diverges.
However, even in this regime the percentage difference between analytic and FE results is $<18\%$ for the plus-minus and intermediate configurations in Fig.~\ref{fig:crowns}, for which the thickness deformation energy can exceed $G\approx 1000$~$k_B T$ for the smallest value $d \approx 5$~nm we allow here. We find that, in the strongly unfavorable regime $d<5.3$~nm, the magnitude of the gradient of $u$ can exceed $\| \nabla u \|=3$, substantially greater than the maximum gradient magnitude $\approx1$ obtained in the non-interacting regime for the crown shapes considered here. We deliberately chose crown shapes that induce such large gradients of the thickness deformation field so as to test the mathematical limits of applicability of our analytic and numerical solution procedures (see Sec.~\ref{secCrown}). The more pronounced discrepancy between FE and analytic results in the strongly unfavorable regime of bilayer thickness-mediated interactions between crown shapes arises because, in this regime, bilayer thickness deformations show a strong variation over the small membrane region separating the two proteins. High accuracy in the strongly unfavorable regime of thickness-mediated interactions therefore requires highly refined meshes in the FE scheme and, to capture pronounced angular variations in the thickness deformation field, a large value of $N$ in the analytic approach. \begin{figure}[t!] \includegraphics[width=\columnwidth]{fig11} \caption{(Color online) Percentage difference between analytic and FD results for the thickness deformation energy per protein, $\eta_G$ in Eq.~(\ref{eq:DefetaG}), versus lattice spacing, $h$, for two crown shapes in the plus-plus configuration in the non-interacting regime ($d=14$~nm for the FD solution; red solid curve) and in the strongly interacting regime ($d=5$~nm; blue dashed curve; $N=11$ for the analytic solution). We set $\tau=0$.
All model parameters were chosen as described in Sec.~\ref{secElasticModel}, and analytic and numerical solutions were obtained as discussed in Secs.~\ref{secAnalyticSol}~and~\ref{secNumericalSol}. } \label{fig:crown_convergence2} \end{figure} Following Sec.~\ref{secCylinder2} we quantify the discrepancy between numerical and analytic solutions by systematically increasing the spatial resolution of the numerical solutions. As in the case of cylindrical membrane proteins we find that, for the FE solution procedure, the errors in thickness deformations and thickness deformation energy decrease approximately quadratically with decreasing average edge size of the FE mesh, while the error in curvature deformations decreases approximately linearly with $\langle l_\textrm{edge} \rangle$ (see Fig.~\ref{fig:crown_convergence}). We find similar scaling of the discrepancy between FE and analytic solutions with $\langle l_\textrm{edge} \rangle$ in the interacting and non-interacting regimes, as well as for zero and finite membrane tensions. As for cylindrical membrane proteins, the error in the thickness deformation energy obtained from the FD solution procedure decreases approximately linearly with decreasing lattice spacing in the interacting and non-interacting regimes (see Fig.~\ref{fig:crown_convergence2}), which again points to approximate enforcement of slope boundary conditions as the likely dominant source of error in the FD solutions. Furthermore, Fig.~\ref{fig:crown_convergence2} suggests that, similar to the case of cylindrical membrane proteins, the FD scheme generally produces larger discrepancies with high-order analytic solutions at smaller $d$ than larger $d$. In contrast, the convergence of FE and high-order analytic solutions is not diminished in the interacting regime compared to the non-interacting regime [Fig.~\ref{fig:crown_convergence}(b)]. 
Finally, comparison of Figs.~\ref{fig:cyl_convergence} and~\ref{fig:crown_convergence}, and Figs.~\ref{fig:cyl_convergence2} and~\ref{fig:crown_convergence2}, shows that, for a given spatial resolution, the discrepancy between FE and FD solutions and the analytic solution is more pronounced for crown shapes than for cylindrical membrane proteins. \section{Clover-leaf model} \label{secCloverResults} The clover-leaf model of integral membrane proteins discussed in Sec.~\ref{secClover} can be used to capture non-circular bilayer-protein boundary curves [Fig.~\ref{figIllust}(c)], and provides a generalization of the cylinder model of membrane proteins complementary to the crown model. In particular, to model the discrete symmetries of integral membrane proteins suggested by membrane structural biology, the clover-leaf model allows for periodic modulations in the shape of the hydrophobic cross section of membrane proteins. As a result, the bilayer thickness deformations induced by clover-leaf proteins show a characteristic pattern of compression and expansion zones about the protein \cite{CAH2013b}, which provides a simple description of the effect of protein shape and oligomeric state on lipid bilayer thickness deformations. Furthermore, for two clover-leaf proteins in close enough proximity, bilayer thickness-mediated interactions are directional \cite{CAH2013a,OWC1,CAH2014a,OPWC1} and bear characteristic signatures of protein shape, symmetry, and orientation. \begin{figure}[t!] \includegraphics[width=\columnwidth]{fig12a}\\ \includegraphics[width=\columnwidth]{fig12b} \caption{(Color online) Thickness deformation energy of clover-leaf shapes. 
(a) Thickness deformation energy per protein for two clover-leaf shapes, $G$, versus center-to-center protein distance, $d$, calculated analytically at $N=11$, and numerically using FE and FD schemes, for the face-on configuration [$\omega_1=0^\circ$ and $\omega_2=36^\circ$ in Eq.~(\ref{boundCclover})] and the tip-on configuration [$\omega_1=36^\circ$ and $\omega_2=0^\circ$ in Eq.~(\ref{boundCclover})]. (b) Thickness-mediated interaction energy $G_\textrm{int}$ versus $d$ obtained by subtracting the protein-induced thickness deformation energies in the non-interacting regime from the respective perturbative analytic, FE, and FD solutions in panel (a). The inset shows the difference between the thickness-mediated interaction energies obtained from the FE solution procedure and those obtained from the analytic and FD solution procedures. We use the same labeling conventions for panel (b) as for panel (a). For our numerical calculations we used $\langle l_\textrm{edge} \rangle \approx 0.26$~nm for the FE and $h=0.05$~nm for the FD solutions. We set $\tau=0$ for both panels. All model parameters were chosen as described in Sec.~\ref{secElasticModel}, and analytic and numerical solutions were obtained as discussed in Secs.~\ref{secAnalyticSol} and~\ref{secNumericalSol}. } \label{fig:clover5} \end{figure} To compare our analytic and numerical solutions in the case of clover-leaf shapes, we consider the bilayer thickness deformation energies associated with two clover-leaf proteins in the ``face-on configuration'' [$\omega_1=0^\circ$ and $\omega_2=36^\circ$ in Eq.~(\ref{boundCclover})] and the ``tip-on configuration'' [$\omega_1=36^\circ$ and $\omega_2=0^\circ$ in Eq.~(\ref{boundCclover})] using analytic, FE, and FD solution procedures [see Fig.~\ref{fig:clover5}(a)]. We find that the thickness deformation energies obtained using our FE and FD solution procedures are in agreement to within the accuracy of the numerical solution schemes.
In contrast, the thickness deformation energies obtained via the perturbative analytic solution procedure differ substantially from the FE and FD solutions, in the non-interacting as well as interacting regimes in Fig.~\ref{fig:clover5}(a). Some discrepancy between analytic and numerical results is to be expected, given that somewhat different boundary value problems are solved in the perturbative analytic and numerical approaches, with the analytic solution only being first order in the perturbation parameter $\epsilon_i$ in Eq.~(\ref{boundCclover}). This produces a systematic error in the thickness deformation energy obtained through the perturbative analytic approach, which yields disagreement between perturbative analytic and numerical solution procedures even in the non-interacting regime. As far as bilayer thickness-mediated interactions between clover-leaf shapes are concerned [see Fig.~\ref{fig:clover5}(b)], the discrepancy between analytic and numerical results is most pronounced at small $d$, where the interaction potentials obtained from perturbative analytic and numerical approaches can differ by $>1$~$k_B T$. This can be understood intuitively by noting that the analytic calculation of thickness-mediated interactions between clover-leaf shapes relies on a perturbative mapping of clover-leaf boundary curves with constant hydrophobic thickness onto circular boundary curves with varying hydrophobic thickness [see Eqs.~(\ref{genBC1trimerFin})--(\ref{genBC2trimerFin2})]. We use here a first-order perturbative mapping, which is expected to become increasingly inaccurate at small protein separations since, as $d$ is being decreased, the structure of thickness deformations in close proximity to the clover-leaf proteins comes to dominate the interaction energy. We therefore attribute the disagreement between analytic and numerical approaches in Fig.~\ref{fig:clover5}(b) to shortcomings of the analytic approach. 
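The first-order character of this perturbative mapping, and hence the expected scaling of its systematic error, can be illustrated generically: truncating a Taylor expansion about the circle $r=r_0$ at first order in $\epsilon$ leaves a residual of order $\epsilon^2$. In the Python sketch below, a smooth radial test field (an arbitrary stand-in for the actual deformation field, with illustrative parameter values) is evaluated on a clover-leaf boundary $r(\theta)=r_0[1+\epsilon\cos(5\theta)]$ both exactly and via the first-order expansion:

```python
import numpy as np

def mapping_error(eps, r0=2.5, s=5):
    """Max residual of a first-order boundary-perturbation mapping for the
    test field f(r) = exp(-r) on r(theta) = r0 * (1 + eps * cos(s*theta))."""
    theta = np.linspace(0.0, 2.0 * np.pi, 720)
    r = r0 * (1.0 + eps * np.cos(s * theta))
    exact = np.exp(-r)                             # field on the true boundary
    first_order = np.exp(-r0) * (1.0 - (r - r0))   # Taylor expansion about r0
    return np.max(np.abs(exact - first_order))

# Halving eps reduces the residual roughly fourfold: the error is O(eps^2).
e1, e2 = mapping_error(0.10), mapping_error(0.05)
```

This quadratic residual in $\epsilon$ is fixed by the protein shape, which is why the perturbative error does not vanish even in the non-interacting regime, and it is compounded at small $d$, where the near-boundary structure of the deformation field dominates the interaction energy.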
Note, however, that the first-order perturbative analytic solution and the corresponding numerical solutions yield the same basic scenario for the competition between the different orientations of pentameric clover-leaf shapes considered in Fig.~\ref{fig:clover5}. Indeed, similar agreement between first-order perturbative and numerical approaches is obtained for other clover-leaf shapes and orientations \cite{CAH2013a,OWC1}, which suggests that the first-order perturbative approach can accurately capture the directionality of thickness-mediated interactions between integral membrane proteins with clover-leaf shapes. \section{Summary and conclusions} \label{secSummary} A wide range of experiments indicate \cite{mouritsen93,jensen04,engelman05,andersen07,mcintosh06,phillips09,brown12,lundbaek06,harroun99,goforth03,botelho06,grage11} that protein-induced lipid bilayer thickness deformations can play a crucial role in the regulation of protein function through bilayer material properties and bilayer-mediated protein interactions. Cell membranes are generally crowded with membrane proteins \cite{engelman05,takamori06,dupuy08,phillips09,linden12}, suggesting that the protein center-to-center distance $d$ is typically small \textit{in vivo}, while modern structural biology suggests a rich picture of membrane protein shape with great diversity in the oligomeric states and symmetries of membrane proteins. Motivated by these experimental observations, we have developed a combined analytic and numerical framework \cite{CAH2013a,CAH2013b,OWC1,OPWC1,CAH2014a} which allows prediction of the protein-induced lipid bilayer thickness deformations implied by the classic model in Eq.~(\ref{energy}) for arbitrary $d$ and the protein shapes suggested by structural studies. 
Our analytic solution procedure, which is based on Refs.~\cite{huang86,dan93,dan94,aranda96,dan98,goulian93,weikl98}, allows exact solutions of the protein-induced lipid bilayer thickness deformation field and elastic thickness deformation energy for proteins with constant or varying hydrophobic thickness in the (strongly) interacting as well as non-interacting regimes, provided that the proteins have circular transmembrane cross sections. Through a perturbative approach, our analytic solution scheme can also account for non-circular protein cross sections. Following similar steps as in Refs.~\cite{goulian93,weikl98,CAH2013a}, our analytic solution procedure is readily applied \cite{CAH2014a} to calculate curvature-mediated protein interactions at arbitrary protein separations. The exact analytic solutions described here are in excellent quantitative agreement with our numerical solutions for arbitrary protein orientations and arbitrary $d$ with the exception of the strongly unfavorable regime of bilayer thickness-mediated interactions, for which the interaction energy diverges as the edge-to-edge protein separation approaches zero. We regard this regime as being of limited physical significance because, for proteins of distinct hydrophobic thickness, the leading-order model in Eq.~(\ref{energy}) is expected to break down at small $d$ due to the large gradients of protein-induced lipid bilayer deformations in the bilayer region separating the two proteins. In principle, higher-order analytic solutions than those considered here could be developed to access thickness-mediated interactions in this regime. 
For proteins of non-circular cross section, our comparisons between analytic and numerical solution procedures show that the first-order perturbative solution can accurately capture the dependence of the thickness deformation energy on protein oligomeric state \cite{CAH2013b,OWC1}, as well as the directionality of bilayer thickness-mediated protein interactions \cite{CAH2013a,OWC1,OPWC1}. However, the first-order perturbative approach does not yield the exact value of the thickness deformation energy. The discrepancy between our perturbative analytic and numerical solutions of bilayer thickness-mediated protein interactions is most pronounced at small $d$, where the perturbative analytic approach is expected to break down. We have developed both FD and FE solution schemes to numerically calculate the lipid bilayer thickness deformations induced by membrane proteins. The FD scheme has the advantage of being conceptually simple and relatively straightforward to implement. We find that our FD scheme accurately accounts for the lipid bilayer thickness deformations induced by cylindrical membrane proteins in the non-interacting regime and can also capture, albeit with less accuracy, bilayer thickness-mediated interactions between cylindrical membrane proteins. However, the convergence of the numerical solutions to the exact analytic solutions is slower for the FD scheme than for the FE scheme, most likely due to errors in the FD slope boundary conditions at the protein surface. Furthermore, we find that the FD solution procedure can introduce substantial numerical errors for non-cylindrical membrane proteins. These errors are particularly pronounced in the interacting regime. In contrast, we find that the FE scheme described here yields rapid numerical convergence for all available exact analytic solutions of the minima of Eq.~(\ref{energy}) \cite{huang86,CAH2013a,CAH2013b}. 
The convergence properties of the FE scheme do not seem to be diminished substantially in the interacting regime compared to the non-interacting regime of bilayer thickness-mediated protein interactions, or by complicated boundary shapes and boundary conditions. The combined presence of both first and second derivatives in the energy in Eq.~(\ref{energy}) places special demands on the FE formulation. In particular, while standard Lagrange interpolation functions \cite{shames1985energy} are adequate to compute the thickness stretch and gradient terms in Eq.~(\ref{energy}), they fail to produce conforming curvatures at element interfaces \cite{bathe2006}. Our FE solution procedure overcomes this challenge by combining \cite{OWC1,OPWC1} Lagrange shape functions for the thickness stretch and gradient terms with the DKT method \cite{Batoz1980,Bathe1981} for curvature deformations. The resulting FE approach is computationally efficient and allows accurate solutions of the complicated boundary value problems posed by many interacting membrane proteins. Indeed, we have shown previously \cite{OWC1,OPWC1} that our FE approach permits calculation of directional thickness-mediated protein interactions in systems composed of hundreds of integral membrane proteins for arbitrary protein separations and orientations using protein shapes suggested by membrane structural biology. The combined analytic and numerical framework described here shows that the shape of integral membrane proteins, and resulting structure of lipid bilayer thickness deformations, can play a crucial role in the regulation of protein function by lipid bilayers \cite{OWC1,CAH2013b}, and that bilayer thickness-mediated interactions between integral membrane proteins are strongly directional and dependent on protein shape \cite{CAH2013a,OWC1,OPWC1,CAH2014a}. 
Taken together, our results suggest that, in addition to bilayer-protein hydrophobic mismatch \cite{engelman05,mouritsen93,jensen04,mcintosh06,lundbaek06,andersen07,phillips09,brown12,harroun99,goforth03,botelho06,grage11}, protein shape may be a crucial determinant of membrane protein regulation by lipid bilayers and bilayer-mediated protein interactions. The classic model of protein-induced lipid bilayer thickness deformations in Eq.~(\ref{energy}) and modifications thereof have been found to capture the basic experimental phenomenology of bilayer-protein interactions in a wide range of experimental systems \cite{dan93,dan94,aranda96,dan98,harroun99b,partenskii04,brannigan07,ursell07,huang86,helfrich90,nielsen98,nielsen00,harroun99,partenskii02,partenskii03,kim12,lundbaek10,greisen11,wiggins04,wiggins05,ursell08,grage11,CAH2013a,CAH2013b,OWC1,OPWC1,mondal11,mondal12,mondal13,mondal14,CAH2014a,andersen07,phillips09,jensen04,lundbaek06,mcintosh06,brown12}, only involve parameters which can be measured directly in experiments, and are simple enough to allow analytic solutions. Analogous models have been formulated \cite{fournier99,phillips09} to describe protein-induced curvature deformations \cite{goulian93,weikl98,kim98,kim00,muller05,muller05b,kim08,auth09,muller10,frese08,reynwar11,bahrami14,dommersnes99,evans03,weitz13,yolcu14} and fluctuation-mediated interactions \cite{goulian93,dommersnes99,evans03,weitz13,yolcu14,golestanian96,golestanian1996b,weikl01,lin11}. In general, thickness-, curvature-, and fluctuation-mediated interactions all contribute to bilayer-mediated interactions between integral membrane proteins, but the relative strengths of these interactions depend on the specific experimental system under consideration. In addition to bilayer-mediated interactions, membrane proteins may, in principle, also interact via electrostatic forces. 
However, electrostatic interactions in aqueous environments are generally screened \cite{bental96,walther96} and charged protein residues are typically excluded from the transmembrane regions of membrane proteins \cite{ulmschneider01}. The model of protein-induced lipid bilayer thickness deformations in Eq.~(\ref{energy}) and the corresponding ``zeroth-order'' models \cite{goulian93,weikl98,kim98,kim00,muller05,muller05b,kim08,auth09,muller10,frese08,reynwar11,bahrami14,dommersnes99,evans03,weitz13,yolcu14,golestanian96,golestanian1996b,weikl01,lin11} capturing curvature- and fluctuation-mediated protein interactions absorb the molecular details of lipids and membrane proteins into effective material parameters. To provide a more detailed description of bilayer-protein interactions, a number of extensions and refinements of these models have been developed \cite{gil98,brannigan06,brannigan07,west09,may04,may99,bohinc03,may07,watson11,watson13,jablin14,rangamani14,bitbol12,partenskii02,partenskii03,partenskii04,kim12,yoo13,yoo13b,lee13}. For instance, the effect of bilayer-protein interactions on the elastic properties of lipid bilayers can be captured \cite{partenskii02,partenskii03,partenskii04,lee13} by allowing for spatial variations in the values of the elastic bilayer parameters. Furthermore, the microscopic roughness of lipid bilayers due to area fluctuations can affect \cite{brannigan06,west09} protein-induced lipid bilayer deformations. These model refinements yield bilayer thickness deformation profiles which are quantitatively but not qualitatively different from those implied by Eq.~(\ref{energy}) [see, for instance, Fig.~\ref{fig:single_cyl}(a)]. However, additional structural properties of the lipid bilayer, such as lipid tilt \cite{may04,may99,bohinc03,may07,watson11,watson13, jablin14}, may have a more substantial effect on bilayer-mediated protein interactions. 
Moreover, integral membrane proteins may tilt to reduce hydrophobic mismatch, as suggested by Monte Carlo and molecular dynamics simulations \cite{kim10,neder12}. Tilting of membrane proteins is expected to be most pronounced for small membrane proteins with only a single transmembrane $\alpha$-helix. While protein tilting generally competes with protein-induced lipid bilayer thickness deformations as a mechanism for alleviating bilayer-protein hydrophobic mismatch, experiments have suggested \cite{dePlanque03,holt10} that protein tilting is in general too weak to fully offset a hydrophobic mismatch between membrane proteins and the surrounding lipid bilayer. In this article we have focused on bilayer thickness-mediated interactions between two integral membrane proteins. In general, more than two membrane proteins are expected to interact in the crowded membrane environments provided by living cells. We have shown previously \cite{OPWC1} that, in contrast to curvature- and fluctuation-mediated protein interactions \cite{kim98,kim00,dommersnes99,kim08,weitz13,yolcu14}, bilayer thickness-mediated protein interactions are approximately pairwise additive, at least for large enough protein separations. For small protein separations, non-pairwise contributions to bilayer thickness-mediated interactions between integral membrane proteins can modify the interaction strength by $>1k_B T$ \cite{OPWC1}. However, except in special cases \cite{OPWC1}, non-pairwise contributions to bilayer thickness-mediated protein interactions do not alter how bilayer thickness-mediated interactions vary with the shape and arrangement of proteins. The approximate pairwise additivity of bilayer thickness-mediated protein interactions presents a considerable simplification \cite{OPWC1} for the mathematical analysis of systems composed of many interacting integral membrane proteins. 
Recent breakthroughs in superresolution light microscopy and electron cryo-tomography have revealed that integral membrane proteins can form large clusters with intricate translational and orientational protein ordering, which provides \cite{mouritsen93,jensen04,engelman05,andersen07,mcintosh06,phillips09,brown12,lundbaek06,bray04,lang10,anishkin13,anishkin14} a general mechanism for cells to modulate protein function through cooperative interactions and local modification of bilayer mechanical properties. But, to date, the physical mechanisms giving rise to the observed lattice architectures and collective functions of membrane protein clusters remain largely unknown. The directionality of bilayer thickness-mediated protein interactions implied by the observed protein structures presents one possible physical mechanism for membrane protein organization and collective function. Such directional interactions can yield \cite{OWC1,OPWC1,CAH2013a,CAH2014a} ordering of integral membrane proteins, which is also consistent with molecular dynamics simulations \cite{periole07,parton11,mondal13,yoo13}. The combined analytic and numerical framework we have discussed here allows calculation of the lipid bilayer-mediated protein interactions implied by bilayer elasticity theory \cite{seifert97,boal02,safran03,canham70,helfrich73,evans74,huang86,dan93,dan94,aranda96,dan98,harroun99b} for the protein shapes suggested by structural studies at arbitrary protein separations and orientations. Our framework thus presents a step towards a general physical theory of how directional bilayer-mediated protein interactions affect the molecular structure, organization, and biological function of proteins in the crowded membrane environments provided by living cells. \acknowledgments{This work was supported at USC by NSF Award No. DMR-1206332, an Alfred P. Sloan Research Fellowship in Physics (C.A.H.), the James H. 
Zumberge Faculty Research and Innovation Fund at the University of Southern California, and by the USC Center for High-Performance Computing, and at UCLA by NSF Award No. CMMI-0748034 and No. DMR-1309423. We thank M. Lind\'en, R. Phillips, and N.~S. Wingreen for helpful comments.}
\section{Introduction} Fluid flow in small droplets or vesicles is essential for many biological and chemical processes, both in artificially fabricated microdroplets and in biological cells. Droplet microfluidics~\cite{Teh2008, Chou2015} has a rapidly growing range of applications including molecular detection, imaging, drug delivery and diagnostics. It has also been used to synthesize artificial cells, using droplet-stabilized vesicles which can be filled with filaments and motors in a controlled way~\cite{Spatz2017}. Furthermore, viscous liquid droplets, which are chemically driven out of equilibrium, have been shown to grow and divide, reminiscent of living cells~\cite{Zwicker2}. Both artificial and biological cells contain active matter, which can generate intracellular flow; this flow in turn enables a variety of functions, ranging from transport of nutrients~\cite{Goldstein2008,Goldstein2015} to control of asymmetric cell division~\cite{Mittasch2018} and cell locomotion~\cite{Kree_2017}. The internal flow in droplets and cells is experimentally accessible with the help of tracer particles which are advected by the flow. Several techniques have been used, such as video microscopy, laser-based particle tracking, and fluorescence correlation spectroscopy (FCS), to mention just a few, and a variety of tracers are available~\cite{Theriot2009,Sackmann2008}. Recently it has become possible to perturb cytoplasmic flow locally, in vivo and probe-free, by an interactive microscopic technique~\cite{Mittasch2018}, which makes it possible to simultaneously observe the consequences of such perturbations. Among the many possible functions of internal flow is the advection of particles, which gives rise to stirring and mixing as important prerequisites for biochemical reactions. In microdroplets, laminar creeping flow often persists, and mixing by diffusion is inefficient on reaction time scales. 
In droplet-based microfluidics, several techniques have been developed to rapidly mix the particle content by an externally generated flow. Stirring by chaotic advection due to simple Eulerian flow fields has been extensively studied in the past (for a recent review see \cite{Aref_2017}). The first example of an incompressible flow field inside a sphere that leads to chaotic Lagrangian trajectories was given in Ref.~\cite{Moffatt1990}. In Ref.~\cite{Stone1991} it was shown that simple linear flow fields in the ambient fluid may generate stirring by chaotic advection in passive droplets. In eukaryotic cells, one observes intracellular flow, known as cytoplasmic streaming, which is generated, for example, by molecular motors carrying cargo and entraining the adjacent fluid. Advection of chemicals by the flow contributes significantly to molecular transport and mixing \cite{Goldstein2008,Goldstein2015}. The complex architecture of a biological cell is of course not adequately captured by a liquid droplet, and many other mechanisms are known to contribute to transport in a cell. Nevertheless, such a simple model as a self-propelled droplet can be a first approximation to study the chaotic flow generated by active matter in biological cells. It may furthermore prove useful for efforts to synthesize cells from a small number of components. \section{Chaotic advection from internal self-propelling flow} In the present work, we consider autonomous self-propelled droplets, driven by an internal flow. We show that the trajectories of particles advected by this flow display the full richness of dynamical systems, ranging from stagnation points and closed orbits to quasi-periodic motion and chaos~\cite{Moffatt1990,Aref_2017}. Self-generated flow provides an efficient mechanism for stirring and mixing inside the droplet, while the motion of the droplet as a whole remains simple and regular, e.g. rectilinear or along a helix. 
We study spherical droplets of radius $a=1$, built from two immiscible fluids and driven either by active body force distributions $\mathbf{f}^{act}$ inside the droplet or by active surface tractions $\mathbf{t}^{act}$ on the interface. The forcing mechanism generates flow ${\bf v}({\bf r})$ in the interior of the droplet, which in turn generates flow in the surrounding fluid and gives rise to self-propulsion. We have computed such flow fields for a spherical droplet and general forcing in the Stokes approximation~\cite{Kree_2017}, i.e., the solution of \begin{equation} \eta\nabla^2 \mathbf{v}-\nabla p=-\mathbf{f}^{act}, \quad \nabla\cdot \mathbf{v}=0. \label{eq:stokes} \end{equation} If the forcing is achieved by tractions, then $\mathbf{f}^{act}=0$ and force balance at the interface has to include active tractions $\mathbf{t}^{act}$. Self-propulsion requires that the total force and torque exerted by the fluid on the droplet vanish. The internal flow implies a constant nonzero linear and angular momentum of the droplet, which determine its linear and angular velocity, $ {\bf U}=\frac{3}{4\pi}\int_V\mathbf{v}(\mathbf{r}) d^3 r$ and $ \boldsymbol{\omega} =\frac{15}{8\pi}\int_V d^3 r \,\mathbf{r} \times \mathbf{v}, $ of self-propulsion. The simplest such flow field due to active tractions, which propels the droplet in the $\hat{\mathbf{n}}$-direction, takes the form \begin{align} {\mathbf v}_{t}({\bf r})= (1-2r^2)\hat{\mathbf{n}}+ (\hat{\mathbf{n}}\cdot\mathbf{r})\mathbf{r} \label{eq:vminus} \end{align} in the co-moving frame. A force- and torque-free rotational flow can be written as \begin{equation}\label{eq:rotationalflow} {\bf v}_{r}({\bf r})= \,h(r){\mathbf r}\times {\hat{\boldsymbol{\omega}}}, \end{equation} with the unit vector $ \hat{\boldsymbol{\omega}}= \boldsymbol{\omega}/|\boldsymbol{\omega}|$ pointing in the direction of angular momentum. Such a flow cannot be generated by tractions but needs chiral body forces. 
For details see Ref.~\cite{Kree_2017}, where we also derived admissible functional forms of $h(r)$. All these rotational flows must vanish on the interface. For the present work, we have chosen $h(r)=-1+3r^2-2r^3$ as a simple illustrative example. Advected particles subjected to a linear combination of ${\bf v}_{t}$ and ${\bf v}_{r}$ follow the equation of motion $ \frac{d{{\bf r}}}{dt}=a_{t}{\bf v}_{t}({\bf r}) + a_{r}{\bf v}_{r}({\bf r}), $ which constitutes a three-dimensional, non-linear dynamical system in the interior of a sphere. In the following we first introduce axially symmetric regular flow fields characterised by $\hat{\mathbf{n}}=\hat{\boldsymbol{\omega}}$. This idealised situation is perturbed either by a time-independent biaxiality ($\hat{\mathbf{n}}\neq \hat{\boldsymbol{\omega}}$) or by a time-periodic direction $\hat{\mathbf{n}}(t)$. We show that both perturbations, taken separately or in combination, give rise to chaotic trajectories and to mixing. For $\hat{\mathbf{n}}=\hat{\boldsymbol{\omega}}=\hat{\mathbf{z}}$ the equations of motion in cylindrical coordinates $x=\rho\cos\phi,\, y=\rho\sin\phi,\, \rho=\sqrt{x^{2}+y^{2}}$ take the form \begin{align} \frac{d\rho}{dt}=& a_t z\rho\nonumber\\ \frac{d z}{dt}=&a_t(1-2\rho^2-z^2)\nonumber\\ \frac{d\phi}{dt}=&-a_{r} h(\rho, z), \end{align} which implies the conservation of angular momentum $L_z$ due to axial symmetry. As for any two-dimensional incompressible flow, the first two equations can be mapped onto Hamiltonian dynamics. For our case the Hamiltonian becomes $H=a_t(p^2+pq^2-p)$ with $q=z,\, p=\rho^2$. For $a_r=0$ (and thus $L_{z}=0$) the trajectories in the x-z plane are shown in the left part of Fig.~\ref{fig1}. Also shown are the hyperbolic stagnation points at the poles, and the two elliptic stagnation points at $z=0,\, \rho=\pm1/\sqrt{2}$, which span a whole circle of stagnation points in the x-y plane due to the axial symmetry. 
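The conservation of $H$ along streamlines, and the confinement of trajectories to the interior of the droplet, are easy to check numerically. The following minimal Python sketch (our own illustrative choice of integrator, step size and initial condition, not part of the original analysis) integrates the first two equations of motion with a fourth-order Runge-Kutta scheme and monitors $H$:

```python
import numpy as np

def rhs(rho, z, a_t=1.0):
    # d(rho)/dt = a_t z rho,  dz/dt = a_t (1 - 2 rho^2 - z^2)
    return a_t * z * rho, a_t * (1.0 - 2.0 * rho**2 - z**2)

def hamiltonian(rho, z, a_t=1.0):
    # H = a_t (p^2 + p q^2 - p) with q = z and p = rho^2
    p, q = rho**2, z
    return a_t * (p**2 + p * q**2 - p)

def integrate(rho0, z0, t_end=20.0, dt=1e-3):
    """Classical fourth-order Runge-Kutta integration of (rho, z)."""
    rho, z = rho0, z0
    n = int(t_end / dt)
    traj = np.empty((n + 1, 2))
    traj[0] = rho, z
    for i in range(n):
        k1 = rhs(rho, z)
        k2 = rhs(rho + 0.5 * dt * k1[0], z + 0.5 * dt * k1[1])
        k3 = rhs(rho + 0.5 * dt * k2[0], z + 0.5 * dt * k2[1])
        k4 = rhs(rho + dt * k3[0], z + dt * k3[1])
        rho += dt * (k1[0] + 2.0 * k2[0] + 2.0 * k3[0] + k4[0]) / 6.0
        z += dt * (k1[1] + 2.0 * k2[1] + 2.0 * k3[1] + k4[1]) / 6.0
        traj[i + 1] = rho, z
    return traj

traj = integrate(0.3, 0.0)                      # one of the closed orbits of Fig. 1(a)
H = hamiltonian(traj[:, 0], traj[:, 1])
drift = np.max(np.abs(H - H[0]))                # stays at the integration-error level
r2_max = np.max(traj[:, 0]**2 + traj[:, 1]**2)  # orbit never leaves the unit sphere
```

For the orbit starting at $\rho=0.3$, $z=0$, the drift of $H$ remains at the level of the integration error and $\rho^2+z^2$ stays below one, consistent with the closed streamlines of Fig.~\ref{fig1}(a).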
For any other choice of $\hat{\mathbf{n}}=\hat{\boldsymbol{\omega}}$, the flow pattern is a rotated version of Fig.\ref{fig1}a. If the strength of the forcing and thus $a_t(t)$ is time-dependent, this can be absorbed into a rescaled time $\tau$ with $d\tau/dt=a_t(t)$ without changing the streamlines and the shape of the droplet's trajectory. In the following, we put $a_t=1$. For $a_r\neq 0$ the angular momentum $L_z$ is still conserved, but no longer vanishes. Every trajectory results from a superposition of two periodic motions, one in the x-z plane due to $\mathbf{v}_{t}$ and one in the x-y plane due to $\mathbf{v}_{r}$, which leads to periodic or quasi-periodic motion on tori. It is plausible and can be shown that these tori constitute a complete foliation of the sphere. \begin{figure} \stackon{(a)}{ \includegraphics[width=0.41\columnwidth]{fig1a.png}} \stackon{(b)}{ \includegraphics[height=0.42\columnwidth]{fig1b.png}} \caption{\label{fig1} (a): Trajectories of a tracer particle inside a droplet, which moves with constant linear velocity and without rotation. Initial conditions are $\mathbf{r}(0)=\pm k\cdot 10^{-1}\,\hat{\mathbf{x}}$ with $k=1,\dots,6$. (b): a 3d graph of the cycles for $\hat{\boldsymbol{\omega}}$ tilted with respect to $\hat{\mathbf{z}}$ by $\theta=0.3\pi$ (see also Fig.~\ref{fig2}).} \end{figure} Now consider trajectories of advected particles, which are generated by time-independent forcing for which the translational and angular velocities are {\it no longer parallel}. As a consequence, the system is not axially symmetric and the trajectories are determined by a fully coupled, autonomous system of three differential equations. There are two control parameters of this flow: $a_r$ and $\hat{\mathbf{n}}\cdot\hat{\boldsymbol{\omega}}=\cos\theta$. To illustrate the transition to chaos, we choose $\hat{\mathbf{n}}=\hat{\mathbf{z}}$ and $\hat{{\boldsymbol{\omega}}}$ tilted by an angle $\theta$ with respect to the z-axis in the x-z plane. 
In Fig.\ref{fig2}, we show Poincar\'{e} sections of this flow for $\theta=0.3\pi$ and different $a_{r}$. For small values of $a_{r}$, we find regular motion for all values of $\theta$. A finite tilt of the $\hat{\boldsymbol{\omega}}$ axis causes a new cycle to appear, corresponding to 2-periodic points in the Poincar\'{e} section, see Fig.\ref{fig2}. Now there are two types of tori (A and B), winding around the two cycles shown in Fig.\ref{fig1}b. Type A tori wind around the cycle, which continuously emerges from the line of fixed points at $a_r=0$, and type B tori enclose the new cycle. The topology of the flow is controlled by these coherent structures and the vanishing of the rotational velocity and the radial part of the translational velocity on the interface. This latter feature is a general property of \emph{all} force- and torque-free flows inside the sphere. Thus advected particles on the surface always follow regular trajectories connecting the two poles. In terms of dynamical systems theory, the droplet's surface is the unstable manifold of the south pole and the stable manifold of the north pole, which are hyperbolic fixed points. The rotational flow also vanishes on the axis of rotation, which therefore is clearly visible in the Poincar\'{e} sections. With increasing $a_{r}$, tori of both types decay and chaotic trajectories as well as islands of regular motion appear. We stress that the motion of the droplet as a whole is still simple. If we assume that the time-independent forcing is co-translating and co-rotating with the interior fluid, then the droplet moves on a helix -- even in the presence of chaotic streamlines inside. (For a detailed discussion of the droplet's trajectories in dependence on the internal flow, see Ref.~\cite{Kree_2017}.) 
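A Poincar\'{e} section of this kind is straightforward to compute. The sketch below (an illustrative Python reconstruction; the parameter values, the integrator, and the choice of recording upward crossings of the $y=0$ plane are our own) integrates $a_t\mathbf{v}_t+a_r\mathbf{v}_r$ for $\hat{\mathbf{n}}=\hat{\mathbf{z}}$ and tilted $\hat{\boldsymbol{\omega}}$:

```python
import numpy as np

THETA = 0.3 * np.pi                            # tilt of omega-hat, as in Fig. 2
N_HAT = np.array([0.0, 0.0, 1.0])
W_HAT = np.array([np.sin(THETA), 0.0, np.cos(THETA)])

def velocity(r, a_t=1.0, a_r=2.0):
    """Total flow a_t v_t + a_r v_r in the co-moving frame of the unit droplet."""
    r2 = r @ r
    v_t = (1.0 - 2.0 * r2) * N_HAT + (N_HAT @ r) * r
    h = -1.0 + 3.0 * r2 - 2.0 * r2 * np.sqrt(r2)   # h(r) = -1 + 3 r^2 - 2 r^3
    v_r = h * np.cross(r, W_HAT)
    return a_t * v_t + a_r * v_r

def poincare_section(r0, t_end=100.0, dt=2e-3):
    """Collect upward crossings of the y = 0 plane along one RK4 trajectory."""
    r = np.array(r0, dtype=float)
    crossings = []
    for _ in range(int(t_end / dt)):
        k1 = velocity(r)
        k2 = velocity(r + 0.5 * dt * k1)
        k3 = velocity(r + 0.5 * dt * k2)
        k4 = velocity(r + dt * k3)
        r_new = r + dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
        if r[1] < 0.0 <= r_new[1]:             # y changed sign from below
            s = -r[1] / (r_new[1] - r[1])      # linear interpolation to y = 0
            crossings.append(r + s * (r_new - r))
        r = r_new
    return np.array(crossings)

pts = poincare_section([0.3, 0.1, 0.0])        # (x, y, z) section points with y ~ 0
```

Plotting the $x$ and $z$ components of the section points for several initial conditions reproduces sections of the type shown in Fig.~\ref{fig2}: tori appear as closed curves, chaotic trajectories as scattered clouds of points.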
\begin{figure}[!tb] \stackon{(a)}{ \includegraphics[width=0.45\columnwidth]{fig2a.png}} \stackon{(b)}{ \includegraphics[width=0.45\columnwidth]{fig2b.png}}\\ \stackon{(c)}{ \includegraphics[width=0.45\columnwidth]{fig2c.png}} \stackon{(d)}{ \includegraphics[width=0.45\columnwidth]{fig2d.png}} \caption{\label{fig2} Poincar\'{e} section (x-z plane) of time-independent flow with $\hat{\mathbf{n}}=\hat{\mathbf{z}}$ and $\hat{\boldsymbol{\omega}}$ tilted as indicated in the figure for different strengths of the rotational flow, $a_{r}=$ $1.1$ (a), $2$ (b), $3$ (c), $9$ (d); initial conditions as used in Fig.\ref{fig1}a, with 2 additional trajectories starting on the z-axis at $0.1\cdot \hat{\mathbf{z}}$ and $0.3\cdot \hat{\mathbf{z}}$.} \end{figure} In biological systems, the active elements, such as motors in a vesicle or cell, are in general time-dependent. Of particular interest is a time-periodic forcing. Here, we consider a droplet without rotational flow but with periodic changes in the droplet's swimming direction, $\hat{\mathbf{n}}(t) =\hat{\mathbf{r}}(\theta(t), \varphi(t))$, parametrised by polar and azimuthal angles $\theta(t)$ and $\varphi(t)$. If we choose $\varphi(t)=0$, the $\hat{\mathbf{n}}$-axis always stays in the x-z plane. It is easy to see by direct inspection of the equations of motion that trajectories starting with $y=0$ remain in the x-z plane, which thus constitutes a 2d invariant manifold of the 3d flow. Consequently, the 3d flow originating from initial conditions with $y > 0$ ($y<0$) is restricted to the corresponding half-sphere. In the following we consider a simple harmonic time dependence $\theta(t)=\Delta\theta \cos(t)$ and the same initial conditions ($y=0$) as those shown in Fig.\ref{fig1}a. Stroboscopic views (at $t_k=2\pi k$) of trajectories are shown in Fig.\ref{fig3} for increasing amplitude $\Delta\theta$. 
\begin{figure}[!tb] \stackon{(a)}{ \includegraphics[width=0.45\columnwidth]{fig3a.png}} \stackon{(b)}{\includegraphics[width=0.45\columnwidth]{fig3b.png}}\\ \stackon{(c)}{\includegraphics[width=0.45\columnwidth]{fig3c.png}} \stackon{(d)}{ \includegraphics[width=0.45\columnwidth]{fig3d.png} } \caption{\label{fig3} Stroboscopic views for a flow due to time-dependent $\hat{\mathbf{n}}$ (see text); tilt angles $\Delta \theta =k\pi/180$ with $k=4$ (a), $10$ (b), $20$ (c) and $30$ (d) for the same initial conditions as in Fig.\ref{fig2}a.} \end{figure} For small $\Delta\theta$ we observe coexistence of regular motion and chaotic trajectories, but regular orbits gradually disappear for larger tilt. The flow within the 2d invariant manifold represents an example of the classical scenario of emerging chaotic behavior in a periodically driven Hamiltonian system in a 2d phase space~\cite{Arnold1989}. The onset of chaos does not affect the motion of the droplet as a whole, which is a simple translational motion following the oscillation of $\hat{\mathbf{n}}(t)$. \section{Mixing properties} The chaotic trajectories provide an efficient internal mechanism for mixing, which may accelerate intracellular processes like signal transduction, as we now discuss. There are mathematical definitions of perfect mixing in ergodic theory~\cite{Arnold1989}, but there is no unique measure of the efficiency of mixing. Here we focus on a simple generic scenario as it may arise in signal transduction: the temporal development of a chemical signal of $N$ initially neighbouring particles. An example is shown in Fig.\ref{fig4} for the 2-dimensional flow described above. Mixing efficiency is characterised by the dispersal of the initial signal for times up to a finite $t_{max}$, which is set by reaction times, lifetimes of active states or other relevant scales. 
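The stroboscopic sampling just described can be sketched in a few lines. The following Python fragment (our own minimal illustration; the integrator and step sizes are arbitrary choices) restricts $\mathbf{v}_t$ with $\hat{\mathbf{n}}(t)=(\sin\theta(t),0,\cos\theta(t))$ to the invariant $y=0$ plane and records the position at $t_k=2\pi k$:

```python
import numpy as np

def rhs(t, x, z, dtheta):
    # In-plane flow: v_t with n(t) = (sin theta, 0, cos theta), theta = dtheta cos(t)
    th = dtheta * np.cos(t)
    nx, nz = np.sin(th), np.cos(th)
    r2 = x * x + z * z
    ndotr = nx * x + nz * z
    return (1.0 - 2.0 * r2) * nx + ndotr * x, (1.0 - 2.0 * r2) * nz + ndotr * z

def stroboscopic(x0, z0, dtheta, n_periods=30, steps_per_period=2000):
    """Sample one trajectory at t_k = 2 pi k, using RK4 steps in between."""
    dt = 2.0 * np.pi / steps_per_period
    x, z, t = x0, z0, 0.0
    samples = [(x, z)]
    for _ in range(n_periods):
        for _ in range(steps_per_period):
            k1 = rhs(t, x, z, dtheta)
            k2 = rhs(t + 0.5 * dt, x + 0.5 * dt * k1[0], z + 0.5 * dt * k1[1], dtheta)
            k3 = rhs(t + 0.5 * dt, x + 0.5 * dt * k2[0], z + 0.5 * dt * k2[1], dtheta)
            k4 = rhs(t + dt, x + dt * k3[0], z + dt * k3[1], dtheta)
            x += dt * (k1[0] + 2.0 * k2[0] + 2.0 * k3[0] + k4[0]) / 6.0
            z += dt * (k1[1] + 2.0 * k2[1] + 2.0 * k3[1] + k4[1]) / 6.0
            t += dt
        samples.append((x, z))
    return np.array(samples)

pts = stroboscopic(0.3, 0.0, dtheta=20.0 * np.pi / 180.0)  # Delta theta of Fig. 3(c)
```

Since the radial velocity vanishes on the interface for any $\hat{\mathbf{n}}(t)$, all stroboscopic points remain inside the unit disk; scattering them for several initial conditions reproduces views of the type shown in Fig.~\ref{fig3}.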
Stretching, folding and stirring are capable of transporting a chemical signal over the droplet's diameter within a few periods of oscillation, while at the same time they lead to a distribution of large concentration gradients, as seen in Fig.\ref{fig4}a. Once several such stretching and folding processes have taken place (see Fig.\ref{fig4}b), diffusive transport is expected to take over and generate a homogeneous distribution in a cell or vesicle of size in the $\mu m$ range (see below). Note, however, that not all trajectories are chaotic in Fig.\ref{fig3}. We also observe regular islands, and if the initial distribution is chosen within such an island it does not mix, as illustrated in Fig.\ref{fig4}d. Regions of chaotic advection are thus separated from regular motion by transport barriers, which can only be surmounted by diffusion. In this way the droplet's interior is subdivided dynamically into mixing and non-mixing regions, which can be controlled via parameters of the flow. An active droplet can thus easily switch from a mixing to a partially mixing or non-mixing state by changing its global activities, e.g., $\Delta\theta$ or the velocity scales $a_{r}$ and $a_{t}$ of chiral and non-chiral flow. \begin{figure}[!tb] \stackon{(a)}{\includegraphics[width=0.4\columnwidth]{fig4a.png}} \stackon{(b)}{\includegraphics[width=0.4\columnwidth]{fig4b.png}}\\ \stackon{(c)}{ \includegraphics[width=0.4\columnwidth]{fig4c.png}} \stackon{(d)}{ \includegraphics[width=0.4\columnwidth]{fig4d.png}} \caption{\label{fig4} Temporal development of an initial distribution of $5\cdot 10^3$ particles distributed at random within the solid square centered at $\mathbf{r}=(0.6, 0, 0.6)$ for 3 (a), 4 (b) and 5 (c) periods of $\theta(t)$, with $\Delta\theta=0.2\pi$. (d): centered at $\mathbf{r}= (-0.6, 0, 0.5)$ for $\Delta\theta=0.12$ after 30 periods of $\theta(t)$.} \end{figure} So far we have discussed mixing in the x-z plane, which constitutes an invariant manifold of the flow. 
Mixing in 3d can be achieved by either choosing the initial conditions outside the invariant manifold or by adding a rotational flow. In the former case, the invariant manifold constitutes a barrier between the $y>0$- and the $y<0$-half-sphere. Fig.~\ref{fig5}a is the analogue of Fig.~\ref{fig4}c for an initial distribution in a cube centered at $\mathbf{r}_{0}=(0.6, 0.1, 0.5)$. The resulting particle distribution fills the $y > 0$ half-sphere; however, it is not uniform but accumulates in the neighborhood of the invariant manifold. Mixing within the entire sphere can be achieved with an additional rotational flow as shown in Fig.~\ref{fig5}b. Here the rotational flow of Eq.~(\ref{eq:rotationalflow}) has been added with $\hat{\boldsymbol{\omega}}=\hat{\mathbf{z}}$ and $a_{r}=1$. This rotational flow transports the 2d mixing in the invariant manifold into the sphere, and thus leads to effective mixing in 3d. \begin{figure}[!tb] \stackon{(a)}{\includegraphics[width=0.45\columnwidth]{fig5a.png}} \stackon{(b)}{ \includegraphics[width=0.45\columnwidth]{fig5b.png} } \caption{\label{fig5} Positions of $5\times 10^{3}$ particles initially distributed randomly within a small solid cube centered at $\mathbf{r}_{0}=(0.6, 0.1, 0.5)$ after 6 periods of oscillations of $\hat{\mathbf{n}}(t)$ with $\Delta\theta=0.2\pi$ as in Fig.~\ref{fig4}a-c. (a) without rotational flow and (b) with rotational flow of strength $a_{r}=1$.} \end{figure} From the particle distribution we can derive further quantitative statistical measures of mixing by coarse graining. We introduce a partitioning of 2d or 3d space into $M$ cells with areas or volumes $\{F_n\}_{n=1}^{M}$. In each cell, we count the number of advected particles $N_n$ and measure the density $\rho_n=N_n/(F_n N)$, which estimates the probability density of finding a particle in the cell. 
In order to compare this distribution to a uniform one, we use the Kullback-Leibler (KL) entropy, defined as \begin{equation} S=\sum_n \rho_n \log\frac{\rho_n}{\rho} \end{equation} with the uniform density $\rho=1/F$, where $F=\sum_n F_n$ is the total area or volume. Small values of $S$ indicate good homogeneous mixing, whereas large values result from strongly inhomogeneous distributions as, e.g., in Fig.\ref{fig4}d. The KL entropy depends upon bin sizes and contains statistical errors due to finite $N$. We have studied these dependencies and found that $N \geq 5\cdot 10^{3}$ particles and $\geq 25$ bins in both angular and radial coordinates are sufficient to detect global features of mixing. The entropy is plotted versus $\Delta\theta$ in Fig.\ref{fig6} after 12 cycles of the axis $\hat{\mathbf{n}}$, which we found to be sufficient to reach a stationary value. The initial conditions have been chosen as in Fig.\ref{fig4}a. For small values $\Delta\theta \lesssim 0.2$ one observes a decrease of entropy. The decrease is, however, non-monotonic due to Lagrangian coherent structures (like islands, for example), which appear and vanish as $\Delta\theta$ is varied. This is shown for 3 examples in the insets (a)-(c) of Fig.\ref{fig6}. Thus homogenization of an initially localized distribution of particles depends sensitively on $\Delta\theta$, in accordance with our previous results on single-particle trajectories. The coexistence of chaotic regions and regular islands is reflected in correspondingly complex homogenization and mixing properties. 
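A minimal implementation of this coarse-grained entropy for a 2d particle cloud might look as follows (a Python sketch with our own illustrative choices: an equal-area polar grid on the unit disk and two synthetic test clouds, not the actual data of Fig.~\ref{fig6}):

```python
import numpy as np

def kl_entropy(xy, n_rad=25, n_ang=25):
    """S = sum_n rho_n log(rho_n / rho) on an equal-area polar grid of the unit
    disk (bins uniform in r^2 and in the polar angle; uniform density rho = 1/pi)."""
    x, z = xy[:, 0], xy[:, 1]
    r2 = x**2 + z**2
    phi = np.mod(np.arctan2(z, x), 2.0 * np.pi)
    N, M = len(x), n_rad * n_ang
    cell_area = np.pi / M                        # equal-area cells, total area pi
    counts, _, _ = np.histogram2d(r2, phi, bins=[n_rad, n_ang],
                                  range=[[0.0, 1.0], [0.0, 2.0 * np.pi]])
    rho_n = counts.ravel() / (cell_area * N)     # density estimate per cell
    rho = 1.0 / np.pi                            # uniform reference density
    nz = rho_n > 0.0                             # empty cells contribute zero
    return float(np.sum(rho_n[nz] * np.log(rho_n[nz] / rho)))

rng = np.random.default_rng(0)
# A uniform cloud on the disk (r = sqrt(u) gives a uniform area density) ...
r, phi = np.sqrt(rng.random(5000)), 2.0 * np.pi * rng.random(5000)
uniform_cloud = np.column_stack([r * np.cos(phi), r * np.sin(phi)])
# ... and a localized cloud in a small square, loosely as in Fig. 4
local_cloud = np.column_stack([0.575 + 0.05 * rng.random(5000),
                               0.575 + 0.05 * rng.random(5000)])
S_uniform = kl_entropy(uniform_cloud)
S_local = kl_entropy(local_cloud)                # much larger than S_uniform
```

As expected, the localized cloud yields a far larger entropy than the statistically uniform one, whose residual $S$ reflects only finite-$N$ fluctuations.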
\begin{figure}[!tb] \savestack{\firststack}{\stackinset{l}{45pt}{t}{23pt}{\stackon{(a)}{\includegraphics[width=0.15\columnwidth]{fig6a.png}}}{\includegraphics[width=\columnwidth]{fig6.png}}} \savestack{\secondstack}{\stackinset{l}{95pt}{t}{23pt}{\stackon{(b)}{\includegraphics[width=0.15\columnwidth]{fig6b.png}}}{\firststack}} \stackinset{l}{145pt}{t}{23pt}{\stackon{(c)}{\includegraphics[width=0.15\columnwidth]{fig6c.png}}}{\secondstack} \caption{\label{fig6} Coarse-grained Kullback-Leibler entropy at $t=24\pi$ versus $\Delta\theta$ for the initial distribution shown in Fig.\ref{fig4}a; insets: temporal development as in Fig.\ref{fig4} for three selected values (a), (b), (c) of $\Delta \theta$ and 12 periods of $\theta(t)$.} \end{figure} \section{Conclusions and Discussion} To conclude, the flow inside a self-propelling droplet driven by internal active forcing exhibits chaotic advection, which should be observable in experiment. We stress that the chaotic behavior is predicted for autonomous swimmers whose trajectories are simple and regular. Although we have illustrated the emergence of chaotic trajectories and their effects on mixing only for a special self-propelling flow, it is known that there are generic routes to chaos in 3d incompressible flows~\cite{Mezic1994}, and thus we expect most of our qualitative conclusions to hold in a much broader context.
We have investigated the mixing properties of the internal flow by following the time development of an initially localized set of particles, as well as with the help of the Kullback-Leibler entropy. Mixing in microfluidic devices has been studied extensively~\cite{Lee2011}. Given the reversibility of the flow, mixing is usually achieved with the help of carefully designed geometries or time-dependent external forces. Here, in contrast, we consider spherical shapes only and show that internal forcing mechanisms result in mixing. How relevant are these results on the scale of biological cells, where molecules or vesicles need to be transported? Can chaotic advection beat transport or mixing by diffusion? As a hint, we estimate two dimensionless numbers for an internal fluid with viscosity $\eta$. For directed transport, the P\'{e}clet number $Pe=UL/D$ gives the ratio of convective to diffusive transport. Here, $U$ denotes a typical velocity, $L$ the size of the cell, and for an object of radius $R$ we estimate the diffusion constant $D=k_{B}T/6\pi\eta R$ with the help of the Einstein relation. Assuming $\eta$ to be 4 times the viscosity of the ambient water, we find $Pe/(ULR)\approx 6\pi\, s/\mu m^{3}$. The extension of a diffusing protein is estimated as $R\sim10\, nm$. For a bacterium of typical cell size $L\sim 1\, \mu m$, one needs a velocity of $U\sim 5\, \mu m/s$ to achieve $Pe = 1$. On the other hand, a eukaryotic cell can be as large as $L \sim 100\, \mu m$, implying $Pe\sim 100$ for the same velocity. A typical scale that compares mixing by stretching and folding to diffusion is the Batchelor length $l_{B}=\sqrt{D/\lambda}$. For distances smaller than $l_{B}$, mixing is dominated by diffusion~\cite{Aref_2017}. Here $\lambda$ denotes a positive Lyapunov exponent, which characterizes the spreading of two nearby initial conditions: $\delta x \sim e^{\lambda t}$.
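These order-of-magnitude estimates can be reproduced numerically; the sketch below assumes room temperature, a water viscosity of $10^{-3}$ Pa\,s, and a representative Lyapunov exponent $\lambda \approx 1/s$ (all variable names are ours):

```python
import math

K_B = 1.380649e-23   # J/K, Boltzmann constant
T = 300.0            # K, assumed room temperature
ETA = 4.0e-3         # Pa*s, four times the viscosity of water
R = 10e-9            # m, radius of a diffusing protein
U = 5e-6             # m/s, typical internal flow velocity

# Einstein relation for the diffusion constant
D = K_B * T / (6.0 * math.pi * ETA * R)

def peclet(L):
    """Peclet number Pe = U L / D for a cell of size L."""
    return U * L / D

pe_bacterium = peclet(1e-6)    # L ~ 1 micron: Pe of order 1
pe_eukaryote = peclet(100e-6)  # L ~ 100 micron: Pe of order 100

# Batchelor length for an assumed Lyapunov exponent lambda ~ 1/s
lam = 1.0                 # 1/s
l_B = math.sqrt(D / lam)  # of order a micron
```

With these numbers, $D\approx 5\times10^{-12}\,$m$^2$/s, $Pe\approx 0.9$ for the bacterium, $Pe\approx 90$ for the eukaryotic cell, and $l_B$ is of order a micron, consistent with the estimates in the text.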
We found that $\lambda \sim 1/s$ for the studied chaotic flows with typical velocities of $1\, \mu m/s$, so that $l_{B} \sim 1\, \mu m$. Thus chaotic advective transport in an autonomous microswimmer provides an efficient mechanism for mixing when diffusion is slow due to the large size of either the advected object or the cell. Even for small objects or cells, the rapid buildup of large gradients due to stretching and folding can accelerate diffusive transport substantially. \section*{Appendix} In this appendix, we briefly recall the computation of the flow fields inside the droplet, and in particular of the rotational flow of Eq.\ref{eq:rotationalflow}, which cannot be generated by surface tractions but requires the action of body forces. For details of the calculations we refer to \cite{Kree_2017}. Consider a neutrally buoyant spherical droplet of radius $R=1$, filled with an incompressible Newtonian fluid of viscosity $\eta$, which is swimming in another Newtonian fluid of viscosity $\eta_{a}$ and driven by internal active volume force densities $\mathbf{f}(r, \theta, \varphi)$ and/or active surface force densities $\mathbf{t}( \theta,\varphi)$ on the interface at $r=R$. The forces generate a flow field inside the droplet, which is coupled to the ambient fluid via viscous forces at the interface and may lead to self-propulsion. To calculate the flow field $\mathbf{v}$ in the lab frame, we choose a spherical coordinate system $(r, \theta, \varphi)$ with its origin at the center of the droplet at a fixed time $t$. The unit vectors corresponding to the coordinate lines are denoted by $\mathbf{e}_{r}, \mathbf{e}_{\theta}, \mathbf{e}_{\varphi}$. For small Reynolds number, the flow obeys the Stokes equations \begin{equation} \eta(r) \nabla^{2} \mathbf{v} =-\nabla p +\mathbf{f} ; \quad \nabla\cdot \mathbf{v}=0 \end{equation} with $\eta(r<1)=\eta$ and $\eta(r>1)=\eta_{a}$. To distinguish the internal from the ambient flow, we introduce $\mathbf{v}(r>1)=\mathbf{v}_{a}$.
For immiscible fluids and no interfacial velocity slip, $\mathbf{v}$ is continuous across the interface, \begin{equation} \mathbf{v}(r=1, \theta, \varphi)= \mathbf{v}_{a}(r=1, \theta, \varphi). \end{equation} Furthermore, the force balance on every (massless) surface element of the interface implies \begin{equation} \quad (\boldsymbol{\sigma}(\mathbf{v}_{a})-\boldsymbol{\sigma}(\mathbf{v}) - \mathbf{t})\cdot\mathbf{e}_{r}=2 \gamma\mathbf{e}_{r}. \label{eq:boundaryforcefree} \end{equation} The viscous stress tensor $\boldsymbol{\sigma}$ is characterized by its Cartesian components $\sigma_{ij}(\mathbf{v})=-p\delta_{ij}+\eta(\partial_{i}v_{j}+\partial_{j}v_{i})$ with pressure field $p(r,\theta,\varphi)$. The term on the r.h.s. of Eq. \ref{eq:boundaryforcefree} results from a homogeneous surface tension $\gamma$. To take advantage of the spherical geometry, we expand the pressure field into spherical harmonics $Y_{{\ell m}}$ and the vector fields $\mathbf{v}, \mathbf{f}$ and $\mathbf{t}$ into vector spherical harmonics. Our choice for the latter is $\mathbf{Y}^{(0)}_{\ell m}= Y_{\ell m} \mathbf{e}_{r}$, $\mathbf{Y}^{(1)}_{\ell m}= r\nabla Y_{\ell m}$ and $\mathbf{Y}^{(2)}_{\ell m}= \mathbf{r}\times\nabla Y_{\ell m}$. Inserting the expansion \begin{equation}\label{eq1} \mathbf{v}(r,\theta,\varphi)=\sum_{s=0}^{2}\sum_{\ell=0}^{\infty}\sum_{m=-\ell}^{\ell} v^{s}_{\ell m}(r)\mathbf{Y}^{(s)}_{\ell m}(\theta, \varphi) \end{equation} and the corresponding expansions for $p, \mathbf{f}$ and $\mathbf{t}$ into the Stokes equations results in a system of ordinary differential equations for $v^{s}_{\ell m}(r)$, which is completely decoupled in $\ell$ and $m$. For each $\ell, m$ the only remaining coupling is between the $s=0$ and $s=1$ modes.
The simple translational flow of Eq.\ref{eq:vminus} represents the $m=0$ component of the nonchiral part of Eq.\ref{eq1} $(s=0,1)$ \begin{align} \mathbf{v}_t(\mathbf{r})&= \mathbf{v}(\mathbf{r})-\mathbf{U}\nonumber\\ &\propto (r^2-1)\mathbf{Y}^{(0)}_{1 0}(\theta, \varphi) +(2r^2-1)\mathbf{Y}^{(1)}_{1 0}(\theta, \varphi)\nonumber\\ &\propto(r^2-1)\cos{\theta}\,\mathbf{e}_{r}+(1-2r^2)\sin{\theta}\,\mathbf{e}_{\theta}\nonumber \end{align} in agreement with Eq.\ref{eq:vminus}. The rotational flow represents the chiral part of Eq.\ref{eq1} $(s=2)$. Traction forces give rise to rigid-body motion, which is not compatible with the vanishing of the total torque required for autonomous swimming. Body forces, on the other hand, can give rise to rotational flow inside the droplet. The simple example discussed in the main text corresponds to the force density $\mathbf{f}(\mathbf{r})=\sum_m\gamma_m(r)\mathbf{Y}^{(2)}_{1 m}(\theta, \varphi)$. Let us first consider the axially symmetric case with only the $m=0$ component of the force density. The total torque vanishes if $\int_0^1 dr\, r^3\gamma_0(r)=0$. This can be achieved by two power laws for $\gamma_0(r)$, for example $\gamma_0(r)=c_1r+c_2r^2$ with $c_1/5=-c_2/6$. The resulting rotational flow is given by \begin{align} \mathbf{v}_r(\mathbf{r})&\sim \left((r-r^3)/2+(r-r^4)/3\right) \mathbf{r}\times\nabla Y_{1 0}(\theta,\varphi)\\ &\sim h(r)\sin{\theta}\,\mathbf{e}_{\varphi}= -h(r)\,\mathbf{r}\times\mathbf{e}_{z}. \nonumber \end{align} The more general case, with all $\gamma_m(r)$ of the same functional form, can then be shown~\cite{Kree_2017} to result in $ \mathbf{v}_r(\mathbf{r}) \sim -h(r)\,\mathbf{r}\times{\hat{\boldsymbol{\omega}}}$.
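A quick numerical check of the torque-free condition and of the radial profile $h(r)$ quoted above (a sketch; the normalization $c_1=5$, $c_2=-6$ is one convenient choice satisfying $c_1/5=-c_2/6$):

```python
import numpy as np

def torque_integral(c1, c2, n=200001):
    """Total-torque integral int_0^1 r^3 gamma_0(r) dr for
    gamma_0(r) = c1*r + c2*r^2; it must vanish for a torque-free swimmer."""
    r = np.linspace(0.0, 1.0, n)
    f = r ** 3 * (c1 * r + c2 * r ** 2)
    return float(np.sum(f[:-1] + f[1:]) * (r[1] - r[0]) / 2.0)  # trapezoid rule

def h(r):
    """Radial profile of the rotational flow, h(r) = (r-r^3)/2 + (r-r^4)/3."""
    return (r - r ** 3) / 2.0 + (r - r ** 4) / 3.0
```

For $c_1=5$, $c_2=-6$ the integral vanishes, while, e.g., $c_1=c_2=1$ gives a finite torque; note also that $h(1)=0$, i.e., the rotational flow vanishes at the interface.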
\section{Introduction} Metal-insulator transition (MIT) materials undergo an electronic phase change from a metallic to an insulating state as a function of applied external conditions, e.g., temperature, pressure, or doping. The transition is typically discerned through optical and/or transport measurements.\cite{Imada/Fujimori/Tokura:1998} Predicting whether or not a material is prone to undergo an MIT is an ongoing research area\cite{PhysRevLett.121.245701, Vargas-Hernandez2018, Dong2019} of high technological importance: MIT materials may deliver new ``steep slope'' transistors that operate at very low voltage for beyond-Boltzmann computation\cite{Shukla2015,Brahlek2017} or function as ``smart'' components in thermochromic windows.\cite{Cui2018} To that end, there is strong interest in discovering new materials exhibiting MITs with improved properties. Rational design of MIT materials, however, has proven to be difficult. One reason is that the electronic phase transition may arise from a variety of (possibly competing) mechanisms (see \autoref{fig:mechanisms} for a non-exhaustive list). The transition mechanism is often the subject of intense debate, with the discussion typically limited to select chemistries and crystal structures\cite{Hiroi2015}. The transition is usually characterized by microscopic physical observables that differentiate the metallic from the insulating state, which are then treated as variable order parameters. Even when the order parameters are known, there is often ambiguity over whether the transition is driven by the electronic or the lattice order parameters, and over the role of other structural and electronic modes in tuning the relative energies that characterize the transition\cite{wang2020featureless,Georgescu14434,Dominguez2020}; this ambiguity hinders subsequent control and optimization of the transition characteristics.
\rev{While significant progress has been made recently in disentangling the electronic and lattice degrees of freedom in the low-temperature insulating state\cite{landscapes}, understanding the temperature dependence of the electronic properties of a material, and in particular reliably predicting whether a material with an insulating state at $T=0$\,K will transition to a metallic state (or vice versa) as the temperature is raised, is beyond the reach of the high-throughput methods currently available in the field.} Progress in the synthesis of high-quality materials, novel characterization methodologies, and advances in the quantum-mechanical modeling and theory of electron correlations have led to the recognition that subtle details in the crystal and local structure are essential to describing MITs.\cite{PhysRevMaterials.4.104401,PhysRevMaterials.3.095003,GEORGESCU2021107991,landscapes} Displacive distortions to the size, shape, and connectivity of the basic metal-oxygen polyhedra, or shortening of metal-metal distances, are common in a number of these transition metal compounds, which adopt different crystal structures (\autoref{fig:mechanisms}). This is best exemplified by the thermal MIT in VO$_{2}$, which appears to be described by a Mott-assisted Peierls transition rather than exclusively Mott-Hubbard or Peierls-type physics\cite{Hiroi2015, Jager2017, Lee2018}. More recently, the perovskite oxide rare-earth nickelate family\cite{landscapes,Dominguez2020,Nick,Georgescu14434,Ghosez,Oleg,PhysRevLett.112.106404, Berciu,PhysRevMaterials.1.024410,Liao9515,PhysRevMaterials.1.024410} and the Ruddlesden-Popper ruthenate Ca$_2$RuO$_4$ \cite{landscapes,CRO, CRO2, CRO3, CRO4, CRO5} have been the subject of intense study, with both theoretical and experimental work focused on identifying and understanding the MIT mechanism to enable phase control.
Improvements in sample quality have allowed new metal-insulator transitions to be discovered even in previously known materials \cite{CMO}. Although most MIT materials are oxides, there is a growing body of work focused on the discovery of materials with anions other than oxygen\cite{mixed,mixed2}. \begin{figure*}[t] \centering \includegraphics[width=0.75\textwidth]{fig1_distortion.pdf} \caption{Relationship between atomic distortions and the physical interactions driving MITs in transition metal compounds comprised of diverse chemistries and structure types.} \label{fig:mechanisms} \end{figure*} \begin{figure} \centering \includegraphics[width=0.65\textwidth]{fig2_data_summary.pdf} \caption{Visualization of a sample of the database, comprising (upper) metals, (center) insulators, and (lower) thermal metal-insulator transition materials, in a 2D space of features commonly used in each electronic class. Not all compounds are visualized, since the required data cannot always be obtained (e.g., missing band gap values from the Materials Project database). The metals exhibit a broad range of resistivity values at 300\,K, and no clustering of the materials occurs, although the vast majority exhibit $\rho<10^{-2}$\,$\Omega\cdot\mathrm{cm}$. The DFT-calculated band gap energies, obtained from the Materials Project, and the calculated refractive indices, following the Shannon method,\cite{shannon_empirical_2016} are inversely proportional to each other.\cite{PhysRevMaterials.3.044602} (Band gap values of 0\,eV for some insulators are due to DFT underestimation.) Most MIT compounds exhibit transition temperatures $T_\mathrm{MIT}$ below 600\,K, independent of $d$ electron count. (Note that the non-integer $d$ electron count for some MIT compounds is a result of the averaging of oxidation states over several atomic sites.)
} \label{fig:summary} \end{figure} Despite this progress, there are still fewer than one hundred inorganic materials known to exhibit thermally-driven MITs (\autoref{fig:summary}). Further, despite their scarcity, before this work there was no standard library of MIT materials available to the general scientific audience, which has slowed the study of known MIT materials and most likely reduced the rate at which new materials are discovered. Experimental databases of synthesized and predicted inorganic materials, e.g., the ICSD\cite{Hellenbrandt2004} or SpringerMaterials\cite{SpringerMaterials}, lack the necessary information to assign electrical-conductivity class labels: always metallic, always insulating, or exhibiting a thermal MIT. Although high-throughput first-principles databases exist, e.g., Materials Project\cite{Jain2013}, OQMD\cite{Saal2013MaterialsOQMD}, and AFLOW\cite{Curtarolo2012}, the methods used to compute the data often omit essential microscopic interactions and corrections to standard density functional theory (DFT) exchange-correlation functionals that could capture the MIT physics; DFT is, moreover, a $T=0$\,K theory. While these databases have been used successfully to build machine-learning classifiers for separating metals from insulators as differentiated at the band-theory level (see \supt2 of the Supporting Information, SI), they often do not include the physics relevant to MIT material families. In particular, they do not model in sufficient detail the electron-electron interactions that are crucial to understanding the opening of the band gap in correlated materials. High-throughput DFT without the use of appropriate microscopic models\cite{Varignon2019} or corrections specific to correlated materials can lead to the incorrect classification of MIT materials and some insulators as metals.
For example, the insulator LaTiO$_3$ and the MIT material NdNiO$_3$ are both listed as having 0\,eV band gaps in the Materials Project. Indeed, \autoref{fig:summary} shows that a large number of materials, including insulators, would be classified as metals based on the simulated 0\,eV band gap from the Materials Project, \rev{providing a poor starting point for any further classification}. The combination of materials scarcity and inaccurate descriptions of the ground state is among the main difficulties in building machine-learning models for discovering and understanding correlated materials. Furthermore, limited efforts have focused on identifying whether any of these materials possess \emph{thermally-driven} MITs at finite temperatures, as the first-principles calculations often report only 0\,K data. \rev{Current theoretical methods are often insufficient to understand the complex temperature-dependent electron-lattice interplay leading to an MIT\cite{landscapes,Varignon2019}.} Here, we resolve these limitations and build a database of experimentally confirmed temperature-driven MIT compounds through a combination of domain knowledge and natural language processing (NLP). We then augment this database with structurally and compositionally related materials that are exclusively metallic or insulating. We featurize the complete dataset using atomic, electronic, and structural descriptors along with MIT-specific features, including an unscreened Coulomb interaction through an estimated Hubbard $U$ energy, $U_\mathrm{est}$, an estimated charge-transfer energy, $\Delta_0$, and the global-instability index (GII). After training multiple supervised learning models for the three classification tasks, i.e., metal \emph{vs} non-metal (M), insulator \emph{vs} non-insulator (I), and MIT \emph{vs} non-MIT (T), we identify new features whose interplay separates MIT materials from non-MIT materials.
Analysis of the SHAP scores of the T-model led us to identify two previously unappreciated descriptors that offer significant class separation without requiring sophisticated computational techniques: ($i$) the average deviation of the covalent radius (ADCR), a feature that describes the relative size difference among the elements comprising a compound, and ($ii$) a compositional feature, the range of the Mendeleev number. The GII and the Ewald energy are also identified as important features. We then examine the role these features play in the MITs exhibited by binary vanadates, titanates, and complex rare-earth nickelates. Finally, we describe an online tool comprising the three binary classifiers, which enables a user to upload a crystal structure file and obtain three probabilities of the material being identified as a metal, insulator, or MIT compound. \section{Methods} Feature-based supervised learning involves data acquisition, feature engineering, and model building. The main result of our data acquisition is a database containing 343 materials, each labeled as a metal, insulator, or metal-insulator transition compound, based on the available experimental literature. At the time of this publication, there are 96 metals, 179 insulators, and 68 MIT materials in the dataset. \rev{Next, we obtained a crystal structure for each material via one of the following methods, in order of descending preference: retrieval from an experimental library (ICSD or Springer), retrieval from the Materials Project, or in-house generation, as described in the SI.\cite{code-link}} The crystal structure of each material was then automatically featurized using common descriptors from \texttt{Magpie}\cite{Ward2016AMaterials} and those obtained from domain knowledge.
We labeled materials as MIT compounds if the experimental literature shows that they exhibit an insulating ${\partial \rho}/{\partial T}<0$ temperature-dependent resistivity on one side of a critical transition temperature $T_\mathrm{MIT}$ and a metallic ${\partial \rho}/{\partial T}>0$ temperature-dependent resistivity on the other side of $T_\mathrm{MIT}$. When there was ambiguity regarding the change in sign of the experimental ${\partial \rho}(T)/{\partial T}$ data, we used additional experimental data to determine the class label. For example, if optical data show a finite charge gap at temperatures below $T_\mathrm{MIT}$ but none above it (as is the case with the MIT in Sr$_3$Fe$_2$O$_7$ \citep{SFO}), we assigned an MIT label. Such subtleties are common among transition metal compounds, which are also prone to non-stoichiometry that can influence class assignment, especially among less studied materials. Readers can view the class-label assignments for each material at Ref.\ \citen{code-link}. This electronic database hosts the latest experimental information on thermal MIT compounds and related metals and insulators. The data we present herein represent the state of knowledge on the class labels for these materials at the time of publication. \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{fig3_workflow.pdf} \caption{Iterative workflow, comprising natural language processing (NLP) and human searches, utilized to construct the database of materials for training the electronic classifiers.} \label{fig:database_workflow} \end{figure} \subsection{Data Acquisition and Database Construction} \autoref{fig:database_workflow} illustrates the process by which we built the database for training the electronic classification models. We used a combination of domain knowledge and NLP to generate both an initial materials database and a keyword list for the NLP pipeline.
The materials database initially included all MIT compounds known to the authors along with related materials, including the binary vanadates V$_n$O$_m$, the $R$NiO$_3$ nickelates ($R$ a rare-earth metal), LaCoO$_3$, some Ruddlesden-Popper oxides (e.g., Ca$_2$RuO$_4$), and the lacunar spinels. Compounds with similar chemistries and stoichiometries that are exclusively metallic or insulating were then also added to the database. To extend the database beyond the materials known to the authors, we also used natural language processing. The text corpus used for the NLP methods included 70,123 papers, down-selected from an entire corpus of over 4 million scientific papers. Down-selection to this highly specialized MIT text corpus was performed using keywords (\supt1) describing MITs and correlated electron systems, drawn from the authors' domain knowledge, which matched words in the titles, abstracts, and introductory paragraphs of papers. New MIT compounds identified from the two types of NLP searches (\emph{vide infra}) were then verified manually by assessing experimental transport (and/or optical) data. The newly identified compounds were each assigned a metal, insulator, or MIT class label and added to the database. Based on these identifications, new keywords were added to guide the NLP search or additional human searches to find more compounds relevant to the classification tasks. This process was repeated until the database acquired 343 unique entries. Two NLP methods were used in our workflow. First, using the specialized MIT text corpus of approximately 70,000 papers and a state-of-the-art NLP pipeline,\cite{NLP1} we extracted chemical formulas of possible MIT compounds from each paper. This entity-recognition process included tokenization of the relevant MIT text corpus, normalization to lemmatize each term, part-of-speech tagging using dependency parsers, and finally token classification.
Using this NLP method, we identified the MIT compound PrRu$_4$P$_{12}$, which belongs to the skutterudite family frequently studied in thermoelectrics research. Its transition is attributed to the physics of the Pr 4f electrons, rather than the Ru 4d electrons.\cite{PhysRevLett.79.3218} Although the authors were unaware of this compound prior to the NLP search, we were able to combine it with our MIT domain knowledge to perform a manual search of the literature, identifying SmRu$_4$P$_{12}$ as another skutterudite MIT material, as well as a wide range of other skutterudites that exhibit solely metallic or insulating behavior. Second, we employed a FastText model trained directly on the specialized MIT text corpus. The resulting word embeddings were then compared in terms of cosine similarity to previously identified MIT materials. We used a different cosine-similarity approach than that in Ref.\ \citen{NLP2} to identify compounds of interest. For each compound in the original dataset of identified materials, the 100 words with the highest cosine similarity in the trained FastText model were identified. Then, using the metal, insulator, and MIT class labels present within the dataset, we grouped the closest 100 words for each compound into the closest words for a given label. Because there were approximately 50 temperature-driven MITs in the original dataset, this grouping method resulted in $\approx$ 5,000 compounds (with repeats) that were closest in cosine similarity of word embeddings to known MIT compounds. Then, within the group of compounds for a given class label, the 20 most commonly occurring words were identified, resulting in 20 words for each classification label (60 words in total). Of the 60 words, the vast majority were exact chemical formulas.\footnote{Nine words were not exact chemical formulas of compounds and thus were considered to be noise.
Although we assigned these words as noise, some were still relevant; for example, the word ``La$A$O$_3$'' appeared in the 20 most common words associated with insulators, where $A$ represented an unidentified element of the periodic table.} We further simplified the search by keeping only those words which were exact chemical formulas. After these compounds were identified, abstracts and titles were searched for the specific compounds. Among this subset of the literature, we then searched for the classifications of the compounds identified by the FastText model and added new materials to the database. The two NLP-assisted searches led to the addition of 116 compounds to the database (a 51\,\% increase in size over the initial dataset), which was then augmented further by additional human searches using the new domain knowledge. \rev{Our database was expanded from about 190 unique compounds to the current 343 with the addition of the mixed human + NLP search. This expansion led to the identification, for example, of the skutterudite family, in which the MIT mechanism is driven by the rare-earth cations rather than the transition-metal physics characteristic of the other materials in our database.} \begin{figure}[t] \centering \includegraphics[width=\textwidth]{fig4_class_distribution_heatmap.pdf} \caption{\rev{The distribution plot (left) shows the number of compounds by conductivity class. The heatmap of the periodic table (right) shows the elements present across the compounds in the database; the value below each element symbol is the percentage of compounds that contain the corresponding element.}} \label{fig:class_distribution} \end{figure} As a result of this search, we have built a database that is as representative as possible. \autoref{fig:class_distribution} shows the distribution of the complete dataset obtained from this workflow at the date of submission of this paper.
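The cosine-similarity ranking used in the second NLP search can be sketched with plain word-vector arithmetic; the function below is a minimal stand-in operating on a dictionary of embeddings (e.g., exported from a trained FastText model), not the pipeline's actual code:

```python
import numpy as np

def most_similar(query, embeddings, topn=100):
    """Return the `topn` vocabulary words whose embeddings have the
    highest cosine similarity to `query`'s embedding.
    `embeddings`: dict mapping word -> vector."""
    q = np.asarray(embeddings[query], dtype=float)
    q = q / np.linalg.norm(q)
    scores = {}
    for word, vec in embeddings.items():
        if word == query:
            continue
        v = np.asarray(vec, dtype=float)
        scores[word] = float(v @ q / np.linalg.norm(v))
    return sorted(scores, key=scores.get, reverse=True)[:topn]
```

Ranking every vocabulary word against each known MIT compound and pooling the top hits per class label reproduces the grouping step described above.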
As expected for a database of materials dominated by transition metal oxides, the dataset is imbalanced, with the number of insulators significantly greater than that of metals or MIT materials. All known MITs fall into a relatively restricted class of compounds and most frequently appear in complex ternary materials, often transition metal oxides and sulfides. To keep our database focused on thermal MITs, we have excluded compositionally-controlled MIT compounds. Our model thus focuses on complex inorganic materials with d-shell or f-shell electrons as the valence states. Although arbitrarily increasing the database size beyond this limited scope might raise our classifier scores, such an increase would not be meaningful, as we want the metals and insulators to have chemical compositions and stoichiometries similar to those of the MIT compounds. \rev{The heatmap in \autoref{fig:class_distribution} can be used as a quick guide to identify whether a new compound to be tested contains elements that are already in the training data. This better informs the decision as to how much trust one can place in the classification provided by our ML classifiers. Analogous heatmaps for the metals, insulators, or MIT compounds can be found in the SI.} As most MITs are accompanied by a structural change coupled to the electronic transition, an important question in building our model was which structure to select for featurization: the structure corresponding to the low-temperature, often insulating, phase; the high-temperature, often metallic, phase; or both? Across the electronic transition, a symmetry-lowering distortion typically occurs, with the low-temperature phase exhibiting small atomic displacements absent in the high-temperature structure (see the local distortions illustrated in \autoref{fig:mechanisms}).
For example, the rare-earth nickelates at low temperature exhibit a breathing distortion, i.e., small and large NiO$_6$ octahedra alternate in a 3D checkerboard pattern.\cite{PhysRevB.88.054101,Wagner2018} Such symmetry breaking, and the microscopic model associated with it, is usually known only after a compound has been studied sufficiently to be labeled as an MIT material. Furthermore, most crystal structures are initially reported at room temperature, which can be far from $T_\mathrm{MIT}$. To build a model that is mechanism-agnostic and can predict whether a material will have an MIT based on a simple theoretical or experimental structure, before in-depth analysis has been performed, we opted to include only the high-temperature structures in our dataset, when available. This allows our model to learn the susceptibility of a compound to undergo an MIT using the more readily available high-temperature structures. \subsection{Feature Engineering} Constructing the machine learning model requires tabulating appropriate features to describe a material's properties for accurate class predictions. Our features include the \texttt{Magpie}\cite{Ward2016AMaterials} composition feature set, oxidation states, the Ewald energy, local and crystal structure parameters (i.e., variations in bond lengths and atomic volumes), and the global instability index (GII)\cite{Salinas-Sanchez1992a}, all of which are accessible from the \texttt{Matminer}\cite{Ward2018} package. Certain descriptors known from the materials physics community to be important in describing MIT compounds, however, are unavailable in these standard libraries designed for machine learning.
To that end, we constructed additional structural features, intended to capture the displacive distortions shown in \autoref{fig:mechanisms}, and built featurizers that provide estimates of the electronic energy scales used in the Zaanen-Sawatzky-Allen framework\cite{PhysRevLett.55.418} to separate metals from insulators.\cite{Torrance1991} We now describe some of the features determined to be important, along with their implementation. We begin by highlighting the range (or minimum) of the Mendeleev number and the average deviation of the covalent radius (see \autoref{fig:periodictable} for the values used in this work). These two features appear consistently as features with high importance across several training iterations and have physically interpretable meanings. The Mendeleev number provides an alternative label beyond the atomic number to distinguish elements with shared characteristics\cite{Villars2004}. The Mendeleev number generally (but not always) increases down the columns of the periodic table, then increases from left to right. This ordering is intended to group elements with similar chemical properties, as reflected in their expected oxidation states in most materials. To understand how this can be useful, consider the $AB$O$_3$ perovskite oxides with $A$ a rare-earth element and $B$ a transition metal. The minimum of the Mendeleev number characterizes the $A$ cation and the maximum characterizes the O anion. Thus, the range of the Mendeleev number is the difference between the Mendeleev numbers of the O anion and the $A$ cation. Since most compounds in our dataset are oxides, with just a few sulfides, the maximum of the Mendeleev number is effectively fixed for a substantial portion of our dataset, with only the minimum varying between different compounds.
This aspect leads to high correlation between the minimum and range Mendeleev number features: the range and minimum of the Mendeleev number have a linear correlation of $-0.995$ and can be considered equivalent features for most of our dataset. We chose to use the range of the Mendeleev number simply because it takes the minimum of the Mendeleev number into account, which may be important for other types of compounds (such as sulfides); the effect of choosing one over the other on the performance of our models is negligible. \begin{figure}[t] \centering \includegraphics[width=0.8\textwidth]{fig5_periodictable.png} \caption{Periodic table of Mendeleev numbers (M\#) and covalent radii (CR, in picometers) used for featurization.} \label{fig:periodictable} \end{figure} The average deviation of the covalent radius (ADCR) describes how different the covalent radii are for the elements in a compound: \begin{equation} \label{eq:ADCR} \mathrm{ADCR} = \sum_{i=1}^{N} x_i\,|R_i-\bar{R}|, \end{equation} where $R_i$ is the covalent radius of element $i$, $x_i$ is its atomic fraction in the compound, $\bar{R}=\sum_{i} x_i R_i$ is the stoichiometry-weighted average covalent radius, and $N$ is the total number of different elements. For example, in an $A_n$X$_m$ compound, \begin{equation} \mathrm{ADCR}\,({A}_n{X}_m)=\frac{n|R_A-\bar{R}|+m|R_X-\bar{R}|}{n+m} \end{equation} with the weighted average $\bar{R}=({nR_A+mR_X})/({n+m})$. We use covalent rather than ionic radii as features because, while ionic radii are known to underlie structural stability in ionic crystals according to Pauling's rules,\cite{George2020} they rely upon knowledge of oxidation states and coordination environments. Further, some of our compounds exhibit non-integer oxidation states, or have oxidation states that are the subject of ongoing research (including many of the skutterudites), so including them would have introduced additional ambiguity into our model. 
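As a concrete illustration, the stoichiometry-weighted ADCR of Eq.~\eqref{eq:ADCR} can be evaluated in a few lines of Python; the covalent radii below are representative values in pm and should be taken as illustrative inputs rather than the exact tabulated values used in our featurizer.

```python
# Stoichiometry-weighted average deviation of the covalent radius (ADCR),
# following Eq. (eq:ADCR). Radii are illustrative values in picometers.
def adcr(composition, radii):
    """composition: {element: count}; radii: {element: covalent radius in pm}."""
    total = sum(composition.values())
    mean_r = sum(n * radii[el] for el, n in composition.items()) / total
    return sum(n * abs(radii[el] - mean_r) for el, n in composition.items()) / total

# Example: NdNiO3 with approximate (assumed) covalent radii
radii = {"Nd": 201, "Ni": 124, "O": 66}
print(round(adcr({"Nd": 1, "Ni": 1, "O": 3}, radii), 1))  # → 46.3
```

With these illustrative radii, NdNiO$_3$ yields an ADCR of about 46\,pm.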
Now, we turn to some of the features for which we have built our own featurizers. We begin with the global instability index (GII), a structural descriptor defined as \begin{equation} \label{eq:GII} \mathrm{GII} = \left({\frac{1}{N}\sum_{i=1}^{N}d_i^2}\right)^{1/2}, \end{equation} where $d_{i} = \mathrm{BVS}(i) - V_{i}$ is the difference between the bond valence sum (BVS) for the $i^{\mathrm{th}}$ ion and its formal valence, and $N$ is the number of ions in the unit cell.\cite{Brown1992,GII1992} The GII is the root-mean-square deviation of the bond valence sums from the formal valences averaged over all atoms in the cell. This can be understood as approximating an average structural stress inherent in a material, as it captures the average deviation in bond lengths from what would be experimentally expected. The stresses arise from a combination of over- and underbonded cation-ligand interactions and thus describe bond strains compatible with a given structure and crystallographic symmetry.\cite{Brown/Poeppelmeier:2014} Next, we constructed additional structural features, which in turn allowed us to build the electronic features from the Zaanen-Sawatzky-Allen (ZSA) framework. The structural features include: the minimum, maximum, and mean distances between transition metal $M$ cations; the minimum, maximum, and mean distances between the transition metal and the ligand $L$; and the Madelung site potentials for the cations and anions. These features were then used to calculate approximations\cite{Torrance1991} for the relevant ZSA energy scales: the difference between the highest occupied and lowest unoccupied metal orbitals (an estimated Hubbard $U$, hereafter $U_0^\prime$) and a charge-transfer energy gap ($\Delta_0$). 
Both electronic parameters originate from an ionic model involving local charge excitations, with \begin{equation} \label{eq:Uprime} U^\prime_0 = I_{v+1}(M)-I_{v}(M)-e^2/d_{M-M}\,, \end{equation} where $I_{v}$, the $v$th ionization potential of $M$, equals the electron affinity $A$ of $M^{v+}$; $I_{v+1}$ is the next ionization potential; and $e^2/d_{M-M}$ is the Coulomb attraction between the excited electron on one transition metal cation and the hole left behind on its nearest-neighbor cation. The estimated charge-transfer energy is \begin{equation} \label{eq:Delta} \Delta_0 = e\Delta{}V_M+I(L^{n-})-I_{v}(M)-e^2/d_{M-L}\,, \end{equation} where $e\Delta{}V_M$ is the product of the electron charge and the difference in electrostatic Madelung site potentials between the cation and ligand sites, and $I(L^{n-})$ is the ionization potential of the ligand in the $-n$ oxidation state, e.g., $I(\mathrm{O}^{2-}) = -7.7$\,eV for oxygen ligands; $I_v$ is as before, and $e^2/d_{M-L}$ is the Coulomb attraction between the excited electron on the metal and the hole left behind on its ligand. Code for calculating $U^\prime_0$ and $\Delta_0$ was adapted from Ref.\ \citen{Hong2016} and is available in Ref.\ \citen{code-link}. The ionization potentials and electron affinity values were web-scraped from the NIST Atomic Spectra Database\cite{nist-atomic-database}. \subsection{Supervised Learning Scheme} \subsubsection{Machine Learning Algorithm} To determine which machine learning model is best suited for our classification task, six different models were considered in the model selection process: dummy classifiers with random guessing, linear logistic regression models with L2 regularization, generic decision tree models, random forest classifiers, gradient-boosting classifiers, and extreme gradient-boosting classifiers as implemented in XGBoost.\cite{Chen2016} Model hyperparameters were optimized using grid search on the training split in stratified 5-fold cross-validation and are available at Ref.\ \citen{code-link}. 
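A minimal sketch of this model-selection loop with \texttt{scikit-learn}, using stand-in data and placeholder hyperparameter grids (the grids actually used are available at Ref.\ \citen{code-link}); scikit-learn's gradient-boosting classifier substitutes here for XGBoost to keep the sketch dependency-free:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, StratifiedKFold

# Illustrative stand-in for the featurized dataset (343 compounds x 10 features).
X, y = make_classification(n_samples=343, n_features=10, random_state=0)

# Placeholder hyperparameter grids; the actual grids live in the repository.
candidates = {
    "logreg": (LogisticRegression(max_iter=1000), {"C": [0.1, 1.0, 10.0]}),
    "rf": (RandomForestClassifier(random_state=0), {"n_estimators": [50, 200]}),
    "gb": (GradientBoostingClassifier(random_state=0), {"learning_rate": [0.05, 0.1]}),
}

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = {}
for name, (model, grid) in candidates.items():
    # Grid search with stratified 5-fold CV, scored by weighted F1.
    search = GridSearchCV(model, grid, cv=cv, scoring="f1_weighted").fit(X, y)
    scores[name] = search.best_score_

best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```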
Tree-based ensemble methods have been demonstrated empirically to be very efficient machine-learning models that also provide interpretability,\cite{Olson2018,Ward2018} making it easier to expand our domain knowledge. Indeed, XGBoost is an optimized distributed gradient-boosting library designed to be highly efficient, flexible, and portable. We found the XGBoost models were consistently among the best-performing models, and they were relatively fast to train compared to random forest and gradient-boosting models, as described in the SI. Our code allows users to test their own structures either in a browser via the Binder service, which we discuss below, or on their personal computer. For these reasons, all classifiers presented here are based on XGBoost models, which are trained on two different feature sets: a \emph{full feature set} and a \emph{reduced feature set}, as described next. \subsubsection{Feature Selection} In order to obtain an easily interpretable model, and to avoid possible overfitting due to the large number of features, we performed a downselection to certain key features. This selection follows an iterative approach. In the first iteration, our raw feature set included 164 features (163 numeric features and 1 one-hot-encoded categorical feature with 2 levels), which is large compared to the number of compounds. To reduce the feature-space complexity, we first removed any numeric features with zero variance or with an absolute linear correlation greater than 0.95 with another feature. This resulted in 106 features and is referred to as the \emph{full feature set}. Principal component analysis (PCA), t-distributed stochastic neighbor embedding (t-SNE), and uniform manifold approximation and projection (UMAP)\cite{McInnes2020} are often used to reduce the number of features. 
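The pruning step just described (dropping zero-variance features, then one member of every pair with absolute correlation above 0.95) can be sketched with \texttt{pandas}; the toy data and column names are illustrative:

```python
import numpy as np
import pandas as pd

def prune_features(df, corr_threshold=0.95):
    """Drop zero-variance columns, then one column from each pair whose
    absolute Pearson correlation exceeds the threshold."""
    df = df.loc[:, df.var() > 0]  # remove constant (zero-variance) features
    corr = df.corr().abs()
    # Keep only the upper triangle so each pair is examined once.
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    to_drop = [c for c in upper.columns if (upper[c] > corr_threshold).any()]
    return df.drop(columns=to_drop)

# Toy frame: 'b' duplicates 'a' (r = 1), 'c' is constant, 'd' is independent noise.
rng = np.random.default_rng(0)
a = rng.normal(size=100)
df = pd.DataFrame({"a": a, "b": 2 * a, "c": np.ones(100), "d": rng.normal(size=100)})
print(list(prune_features(df).columns))  # → ['a', 'd']
```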
The linear and/or nonlinear combination of the original features used in these approaches to create new features, however, makes it difficult to interpret the physical meaning of each new descriptor. Since we desired to preserve the physical meaning of our features, we used a combination of Shapley additive explanations (SHAP)\cite{lundberg_unified_2017} and domain knowledge (physical intuition) to downselect the features in the second iteration. For each of the three binary classifiers, SHAP analysis on the full feature set was used to find the 10 most important features, i.e., the top 10 features with the highest average absolute SHAP values. From this SHAP analysis, 6 features were selected and then combined with 4 features chosen using domain knowledge, which resulted in a total of 10 features. This second feature set is referred to as the \emph{reduced feature set}. Both the full and reduced feature sets are available at Ref.\ \citen{code-link}. We also note that the SHAP scores of each feature are highly dependent on the training dataset: minor updates to the dataset result in significant changes in SHAP importance scores, leading to the conclusion that these measures may not be reliable if taken individually. This is likely a result of our small dataset: in the full feature set, we have 343 compounds and 106 features to describe them, leading to potential overfitting, which will be further addressed in future versions of the code. As a result, we handpicked a combination of features that either appear consistently as important in our SHAP analysis or we believed to be relevant from physical intuition. \rev{For instance, as we increased the number of compounds and trained models on the different iterations of the dataset, we found several features that consistently exhibited high SHAP values, such as the global instability index and the average deviation of the covalent radius. 
These features were then combined with those deemed important from materials domain knowledge, such as the average metal-metal distance, the Hubbard $U$ strength, and the charge-transfer energy, to form the reduced feature set. The physical interpretation of these features is discussed in more detail below.} \subsubsection{Model Metrics} We computed model classification performance metrics such as receiver operating characteristic (ROC) curves and precision-recall curves with stratified cross-validation splits. Because the splits depend on the random seed used to generate them, and performance can vary across the different train-validation splits from different seeds, we performed cross-validation using 10 random seeds with integers from 0 to 9. For each of the 10 seeds, we performed a stratified 5-fold cross-validation from which we calculated a median value. All metric values we report hereafter are the median values along with the interquartile range of those 10 median values. We carried out all of our cross-validations with the \texttt{scikit-learn} \cite{scikit-learn} Python package. Weighted F$_1$ scores that take class imbalance into account were also used for model assessment. \rev{For a dataset this small, a test set usually should not be} used to evaluate model performance, as there are only 343 training examples; it was infeasible to set aside a hold-out set since, to the best of our knowledge, our current temperature-driven MIT materials database is exhaustive. However, we will address this issue in future versions of our code, particularly as our database expands. Through cross-validation on 10 random seeds, each with stratified 5-fold splits, we instead use the cross-validation performance as a proxy for a held-out test set. \rev{Nonetheless, for completeness we did include a 90\%-10\% train-test split evaluation using the same 10 random seeds, which resulted in similar performance, as reported in the SI. 
The key difference here is that for the original cross-validation approach, the hyperparameter tuning process uses the entire dataset, while for the train-test split it uses only 90\% of the data, and thus the performance is more indicative of the models' extrapolative power.} \section{Results} \subsection{Classifier Performance} We first present performance results for our three binary classifier models, as they perform significantly better than a single ternary classifier (see \supf1 of SI). Classifier M distinguishes between metals and non-metals, where the non-metals include compounds labeled as insulators (I) or MIT compounds (T). Classifier I is the analogous classifier for insulators, with the non-insulator class comprising M and T compounds. Classifier T distinguishes MIT compounds from non-MIT compounds, i.e., metals and insulators. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{fig6_roc_auc_reduced.pdf} \caption{Receiver operating characteristic curves for the binary metal (M), insulator (I), and MIT compound (T) classifiers. Each colored line represents the median ROC curve from one of the 10 random seeds. For each seed, a 5-fold stratified cross-validation is carried out. Dashed lines represent the performance ($\text{AUC}=0.5$) when randomly guessing. The median area under the curve (AUC) is provided in the lower right corner.} \label{fig:AUC} \end{figure} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{fig7_pr_curve_reduced.pdf} \caption{Precision-recall curves from 10 seeds, with each seed having a 5-fold cross-validation, for the binary metal (M), insulator (I), and MIT compound (T) classifiers. The naive precision, represented by the black dashed lines, indicates what the precision would be if a classifier always predicted the positive class. Here ``AUC" is the area under the precision-recall curve, not to be confused with the area under the receiver operating characteristic curve in \autoref{fig:AUC}. 
} \label{fig:prec-recall} \end{figure} All model performance metrics presented are from models trained on the reduced feature set. The corresponding metrics from models trained on the full feature set are available at Ref.\ \citen{code-link}. The M and T classifiers retain the same level of performance when trained on the reduced feature set as when trained on the full feature set; only the I classifier performs slightly worse on the reduced set (see \supf2). To visualize model performance, ROC curves for our machine learning models are constructed from stratified 5-fold cross-validation runs under 10 random seeds (\autoref{fig:AUC}). The ROC curve plots the true-positive rate, i.e., the ratio of correctly identified positive examples (true positives) to all positive examples (true positives plus false negatives), against the false-positive rate, i.e., the ratio of false positives to the sum of false positives and true negatives. An ROC curve confined primarily to the upper left corner indicates a model that correctly identifies instances of each class without many false positives. An area under the ROC curve (AUC) of 1 represents perfect separation, whereas an AUC of 0.5 is equivalent to random guessing. Tighter bunching of the lines indicates less variance in model performance with varying random seeds. We obtain a median ROC area under the curve (ROC-AUC) of 0.90 and 0.89 with interquartile ranges of 0.02 and 0.01 for the M and I classifiers, respectively. Remarkably, our novel T classifier exhibits a median ROC-AUC of 0.90 with an interquartile range of 0.03, indicating its overall accuracy is high. 
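The aggregation used for these numbers (a per-seed median over the 5 folds, then the median and interquartile range over the 10 per-seed medians) can be sketched as follows; the classifier and data are stand-ins for the actual XGBoost models and featurized dataset:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Stand-in for the featurized dataset (343 compounds x 10 features).
X, y = make_classification(n_samples=343, n_features=10, random_state=0)

per_seed_medians = []
for seed in range(10):  # seeds 0..9, as in the text
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)
    clf = GradientBoostingClassifier(n_estimators=50, random_state=seed)  # XGBoost stand-in
    fold_aucs = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
    per_seed_medians.append(np.median(fold_aucs))  # median over the 5 folds

# Report the median and interquartile range over the 10 per-seed medians.
median_auc = np.median(per_seed_medians)
q1, q3 = np.percentile(per_seed_medians, [25, 75])
print(f"ROC-AUC: {median_auc:.2f} (IQR {q3 - q1:.2f})")
```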
\autoref{fig:prec-recall} presents the precision (proportion of true positives to the sum of true positives and false positives) and recall (proportion of true positives to the sum of true positives and false negatives) curves of each binary classifier, to better understand performance in light of the imbalance among the classes. The median and interquartile range of the cross-validation weighted F$_1$ scores (harmonic mean of precision and recall that takes class imbalance into account) are 0.86\,(0.03), 0.82\,(0.02), and 0.88\,(0.01) for the M, I, and T classifiers, respectively. On one hand, we find that the area under the precision-recall curve for Classifier T is rather low. Indeed, MITs are the least represented class in the dataset owing to the small number of known thermally-driven MIT materials. The poor precision-recall performance could perhaps be overcome with additional data, as seen in other works (\supt2). As more positive examples (MIT compounds) are added to the dataset, we expect the precision of Classifier T to improve, because the under-representation of MITs in the training set may lead to under-prediction of MITs, which results in a smaller number of true positives and thus a lower precision. On the other hand, the performance of Classifiers M and I is comparatively better than that of Classifier T, since the dataset contains more metals and insulators. As a result, the models were able to better separate metals from non-metals and insulators from non-insulators. In other words, these two classifiers exhibit better performance because, for them, the ratio of positive class to negative class is more balanced than it is for Classifier T. Compared with electronic state classifiers formulated in earlier works using various databases and different descriptors (\supt2), the performance metrics of our M and I models are comparable. 
However, we note that the previous models were not intended to learn correlated electron materials, and their metrics, if trained on our sparse dataset, would likely differ. Therefore, the comparison is not strictly appropriate. We also created a survey that asked domain experts (e.g., materials scientists) to classify the conductivity class of 18 compounds (6 metals, 7 insulators, and 5 MIT compounds). The goal was to establish a human performance baseline for the 3 aforementioned classifiers and to evaluate whether identifying MIT compounds is a trivial task for human scientists. Unsurprisingly, the XGBoost classifiers outperform the average human scientist in every classification task (see \supf3 of SI). \subsection{Feature Importance and Physical Interpretation} We now use a combination of domain knowledge, SHAP values, and Accumulated Local Effect (ALE) analysis to examine the role of different features in the T Classifier trained on the reduced feature set. We want to know how the model learns to differentiate an MIT compound from those that are exclusively metallic or insulating. SHAP values indicate which features are important and the effects of the features on the classification (i.e., how changing the value of a feature changes the classification). ALE plots play a similar role in elucidating the importance of each feature in classifying a material, as well as the role of the \emph{interactions} between the features. Some of the most illustrative ALE plots can be found in the SI. \autoref{fig:MIT_importances} shows the rank-order SHAP importance of each feature as well as each feature's relative role in the MIT compound classification (e.g., whether a material having a small value of GII makes it more likely to be classified as an MIT compound or not). 
\begin{figure}[t] \centering \includegraphics[width=0.75\linewidth]{fig8_mit_classifier_reduced_shap_dot.pdf} \caption{SHAP feature importances for the MIT compound (T) Classifier from the reduced feature set. For each feature row, each dot corresponds to one compound in the database (i.e., every row has 343 dots). The vertical location of a feature name reflects its importance as ranked by mean absolute SHAP value (the higher up, the more important). The color of each dot indicates whether the corresponding feature value was high (red) or low (blue) for that compound relative to the maximum and minimum values of the feature across all materials in the database. The horizontal location of each dot is the SHAP value, which indicates whether that feature value raises or lowers the predicted probability, in log-odds, of the material being an MIT compound. Dots are displaced vertically to reflect the density of compounds at a given SHAP value.} \label{fig:MIT_importances} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.70\textwidth]{fig9_mmU.pdf}\vspace{-48pt} \caption{Interplay of the distance between identical transition metal ions and the unscreened Hubbard $U$ interaction. The ALE plot (left) shows the contribution to the classification probability from these two features, with a higher value (red) corresponding to a higher probability of a positive MIT classification. The scatter plot (right) shows the distribution of compounds in our dataset as a function of these two features, with select families labeled. We note that most of the MITs belonging to the $AB$O$_3$ perovskite family, and many of the vanadate MIT compounds, lie close to the ideal region for these two features. 
MIT materials such as the skutterudites SmRu$_4$P$_{12}$ and PrRu$_4$P$_{12}$ are not considered by the model to have MITs driven by the Hubbard $U$ or the metal-metal distance.} \label{fig:mmU} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.45\textwidth]{fig10_giiadcr.pdf} \caption{Interplay of GII (abscissa) and ADCR (ordinate). Most MIT compounds exhibit intermediate ADCR and GII values, with the low-ADCR exceptions FeS, NiSeS, and CuIr$_2$S$_4$. An interactive version of the data in this feature plot is available at {\small\url{https://mtd.mccormick.northwestern.edu/mit-classification-dataset}.} } \label{fig:CovRadius-GII} \end{figure} The most important feature in our classification is the average transition metal-transition metal distance, which we refer to as the metal-metal distance for brevity, as shown in \autoref{fig:MIT_importances}. The ALE decomposition shows that the importance of this feature largely arises from its interaction with other features. This finding agrees well with our physical intuition. In transition metal compounds, the metal-metal distance may determine the electronic bandwidth $W$, which competes with other energy scales that drive MITs, such as the Hubbard $U$, as described for example in the ZSA classification scheme. Indeed, in \autoref{fig:mmU} we show a 2D scatter plot of the distribution of materials as a function of the metal-metal distance and the Hubbard $U$, together with the ALE plot for these two features. We find that most of the best-studied MIT materials (mostly perovskite compounds) fall within a narrow region identified as high in probability based on these two features. Interestingly, for some materials whose mechanism is largely unknown (SmRu$_4$P$_{12}$ and PrRu$_4$P$_{12}$) but widely assumed to differ from that of most other MIT materials, these two features do not contribute to the MIT classification in any significant amount. 
This agrees with previous theoretical work, which has suggested that the relevant MIT physics for these materials is, in fact, not driven by the transition metal electrons at all, but rather by the Pr and Sm $f$-electrons.\citep{skutheory} The SHAP feature importances further indicate that the relevant features for the classification are the GII, the charge-transfer energy, and the transition metal-ligand distances (see \supf10 with SHAP plots for these two compounds in the SI), providing a possible hint as to the relevant physics in these materials. The GII and ADCR are two features that we find to be consistently important, and novel, in our model. The GII has previously been related to MIT temperatures in certain materials families, for example the $Rn$Cu$_3$Fe$_4$O$_{12}$ family.\citep{giicufe} \autoref{fig:CovRadius-GII} shows that most MIT compounds exhibit ADCR values between $30\,\mathrm{pm}$ and $50\,\mathrm{pm}$ and GII values between 0.1 and 0.5. The moderate to high GII values for most MIT materials are consistent with our understanding that these thermally-driven MITs are assisted by a minor structural instability, which can alleviate bond stresses. Materials with very high GII values, in contrast, may be too chemically unstable to support this type of mechanism. We find that the MIT compound with the lowest GII is V$_2$O$_3$. Overall, a low GII tends to favor an insulating state, and a higher GII favors a metallic or MIT state. For example, among binary oxides with the rutile structure and composition $M$O$_2$, we find $\mathrm{GII\,(TiO}_2)=0.11$, $\mathrm{GII\,(VO}_2)=0.13$, and $\mathrm{GII\,(MoO}_2)=0.32$. As most of our compounds are oxides, and most of those are insulators with a low GII, we deduce that materials with a low GII are highly stable from a bond-stress assessment and are unlikely to be MIT compounds, which is consistent with the GII SHAP data in \autoref{fig:MIT_importances}. 
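Given per-site bond valence sums and formal valences, Eq.~\eqref{eq:GII} reduces to a root-mean-square; a minimal sketch with illustrative (not tabulated) inputs:

```python
import math

def gii(bvs, formal_valence):
    """Global instability index (Eq. eq:GII): RMS deviation of the bond
    valence sums from the formal valences over all N sites in the cell."""
    assert len(bvs) == len(formal_valence)
    d2 = [(b - v) ** 2 for b, v in zip(bvs, formal_valence)]
    return math.sqrt(sum(d2) / len(d2))

# Illustrative rutile MO2 cell (sites M, O, O) with slightly strained bond valences;
# the BVS inputs here are made-up numbers, not tabulated values.
print(round(gii(bvs=[4.20, -2.12, -1.95], formal_valence=[4, -2, -2]), 3))  # → 0.138
```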
\begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{fig_11_nickacdr.pdf} \caption{Phase diagram of $T_{\mathrm{MIT}}$ versus the average deviation of the covalent radius for the perovskite nickelates $R$NiO$_3$. LaNiO$_3$ is always metallic.} \label{fig:nickelate} \end{figure} We glean some understanding of the relevance of the ADCR by focusing on the perovskite $R$NiO$_3$ nickelates with $R$ a rare-earth cation.\footnote{A similar analysis can also be done on other perovskite families such as $R$CoO$_3$ or $Rn$Cu$_3$Fe$_4$O$_{12}$, with $R$ defined as before and $Rn$ further including Ho, Tb, and Tm.} In cubic $AB$O$_3$ perovskites, including the $R$NiO$_3$ family, the ADCR is linearly correlated with the Goldschmidt tolerance factor ${t}=({r_R+r_\mathrm{O}})/(\sqrt{2}(r_B+r_\mathrm{O}))$. This tolerance factor is known to be associated with whether $B$O$_6$ octahedral rotations are likely to occur and distort the ideal cubic perovskite structure ($t=1$)\cite{Bartel2019}. For $t<1$, the transition metal-oxygen octahedra rotate, making it more difficult for electrons to hop and favoring an MIT. Thus, a lower tolerance factor usually leads to a higher MIT temperature, while a higher $t$ can suppress the MIT altogether (e.g., LaNiO$_3$ is metallic). The ADCR plays a more important role in supporting the MIT classification for LuNiO$_3$ (lower ADCR) than it does for NdNiO$_3$ (higher ADCR), capturing the physical trend in the phase diagram in \autoref{fig:nickelate}: the ADCR places NdNiO$_3$ close to metallic LaNiO$_3$, while it places LuNiO$_3$ significantly further away, towards high $T_{\mathrm{MIT}}$. This physics is captured in the SHAP values in \autoref{fig:SHAP-example}: NdNiO$_3$ is one of the nickelates with the highest ADCR and the lowest $T_{\mathrm{MIT}}$, placing it close to the metallic class as identified by the model with a log-odds ratio of 5.08. In contrast, LuNiO$_3$, with an ADCR lower than that of NdNiO$_3$, has a log-odds ratio of 7.22. 
In other words, the classifier is more certain that LuNiO$_3$ is an MIT compound than it is for NdNiO$_3$. The ADCR thus acts as a generalized tolerance factor, irrespective of the materials family studied. We note that the ADCR is also strongly correlated with the average deviation of the electronegativity, with a linear correlation of 0.919 (see \autoref{fig:elec-covrad}), which we understand as a consequence of the electron affinity of an element being partially determined by its atomic radius. \begin{figure*}[t] \centering \includegraphics[width=0.98\textwidth]{fig_12_lnonno.pdf} \caption{SHAP force plot of predictions from Classifier T for the MIT compounds LuNiO$_3$ and NdNiO$_{3}$. Color indicates whether the feature taking the particular value provides evidence for (red) or against (blue) a prediction of having an MIT. Bar size represents the SHAP value. The SHAP values from all features sum to the log-odds of a positive MIT prediction. The base value is the log-odds expected based on the average proportion of MITs in the dataset. The classifier predicts with higher confidence that LuNiO$_3$ is an MIT material than NdNiO$_3$, with the ADCR contributing Shapley values of 1.05 and 0.48 to the respective classifications.} \label{fig:SHAP-example} \end{figure*} \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{fig_13_U_CT.pdf} \caption{Our database of materials plotted in the two-dimensional space of $\Delta_0$ (abscissa) and $U_0^\prime$ (ordinate) after Torrance \emph{et al.}\cite{Torrance1991} reveals poor class separation. The black lines at $\Delta_0 = 10$\,eV and $U^\prime_0 = 11$\,eV correspond to the boundaries between insulators (located in the upper right quadrant) and metals elsewhere, as determined in Ref.\ \citen{Torrance1991}. 
MITs should lie close to the separation boundaries.} \label{fig:torrance} \end{figure} Although we find the estimated Hubbard $U$ values $U_0^\prime$ and charge-transfer energies $\Delta_0$, which are important to the ZSA classification of correlated metals and insulators, are among the top 8 features, the MIT materials do not strongly cluster when they are plotted in a 2D space consisting of these features (\autoref{fig:torrance}). The GII, ADCR, and the range of the Mendeleev number (discussed later) lead to much stronger class clustering (or separation) than the ZSA-classification energies $U_0^\prime$ and $\Delta_0$, which have been used extensively over the last 30 years. The Hubbard $U$ is a strong counter-indicator of an MIT when large and somewhat less strongly predictive of an MIT when low, which gives some support to the findings in Ref. \citen{Torrance1991}. The presence of materials with extremely high $U_0^\prime$ values arising from high ionization energies, e.g., titanates, distorts the SHAP color scale for the Estimated Hubbard $U$ row in \autoref{fig:MIT_importances}. High values of $\Delta_0$ may also indicate that no MIT occurs, although sometimes the SHAP value of such a material is close to 0. The color scale for the charge transfer energy is also skewed by the presence of negative $\Delta_0$ values. These occur when the difference in metal and anion Madelung site potentials is small and/or the ionization energy of the metal is large. One reason that the ZSA classification energies may be difficult to interpret is that the energy estimates we use correspond to unscreened values. Dielectric screening and metal-ligand hybridization effects in solids can lead to significant renormalization of these values, but require electronic-structure based calculations such as cRPA \citep{cRPA} to ascertain. Thus, although these numbers provide some information to our machine learning model, they are difficult to understand in isolation. 
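For concreteness, the unscreened estimates of Eqs.~\eqref{eq:Uprime} and \eqref{eq:Delta} reduce to simple arithmetic once the ionization potentials and Madelung site potentials are tabulated; the inputs below are illustrative placeholders, except $I(\mathrm{O}^{2-}) = -7.7$\,eV, which is quoted in the text:

```python
# Unscreened ZSA energy estimates (Eqs. eq:Uprime and eq:Delta).
# All energies in eV; the Coulomb terms e^2/d are passed in directly.
def hubbard_u0(I_vplus1, I_v, e2_over_dMM):
    """U'_0 = I_{v+1}(M) - I_v(M) - e^2/d_{M-M}."""
    return I_vplus1 - I_v - e2_over_dMM

def charge_transfer_delta0(e_dV_M, I_ligand, I_v, e2_over_dML):
    """Delta_0 = e*dV_M + I(L^{n-}) - I_v(M) - e^2/d_{M-L}."""
    return e_dV_M + I_ligand - I_v - e2_over_dML

# Illustrative inputs for a hypothetical 3d oxide; only I(O^2-) = -7.7 eV
# comes from the text, the remaining numbers are placeholders.
U0 = hubbard_u0(I_vplus1=45.0, I_v=30.0, e2_over_dMM=5.0)
D0 = charge_transfer_delta0(e_dV_M=45.0, I_ligand=-7.7, I_v=30.0, e2_over_dML=4.0)
print(U0, round(D0, 1))  # → 10.0 3.3
```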
Two additional features of high importance are the average metal-metal distance and the average metal-nonmetal (metal-ligand) distance. We understand their role in relation to the energies $U_0^\prime$ and $\Delta_0$ within the ZSA framework, which are typically compared against the electron-hopping scale of correlated electron materials, i.e., the $d$-orbital bandwidth in transition metal compounds. As the $d$-orbital bandwidth of the low-energy electronic structure is not directly available from the structure alone, a convenient proxy is the metal-metal and/or metal-anion distance, since the bandwidth decreases with increasing distance between the atomic pairs contributing states that hybridize by symmetry. This explains the strong role of interatomic distances in our model. The Ewald energy reflects how strongly the ionic charge distribution in a crystal is stabilized by the electrostatic potential of the oppositely charged ions in the atomic structure. The calculations as currently performed in the latest version of \texttt{Matminer} correspond to an electrostatic energy between ions modeled as point charges, with the charge approximated by the nominal ionic oxidation state. This feature, as implemented in \texttt{Matminer}, has recently been updated to be normalized per atom. The Ewald energy per atom then, to first order, separates highly ionic materials from less ionic materials. Phosphides such as CoP$_3$ (which has an Ewald energy of $-124\,\mathrm{eV/atom}$) have strong ionic character in this picture, with P carrying a $3-$ charge and Co a $9+$ charge, while sulfides tend to have lower absolute values of the Ewald energy (FeS$_2$ has an Ewald energy of $-10\,\mathrm{eV/atom}$); \autoref{fig:CovRadius-Ewald} shows that oxides exhibit intermediate values. \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{fig_14_adcr_rangem.pdf} \caption{Interplay of ADCR and the range of the Mendeleev number. 
Many MIT compounds exhibit an ADCR between 30 and 50 pm and contain group IIIB elements or lanthanides. % $R$ and $Rn$ and yttrium are rare-earth metals, with the same valence, as defined in the main text.
RnTm$_4$X$_{12}$ skutterudites contain an Rn element as defined before (here also including Ba and Ce), a transition metal ion Tm from among Fe, Ru, or Os, and an anion X from among Sb, As, or P. The RnTm$_2$X$_2$ compounds are defined in the same way as the RnTm$_4$X$_{12}$ skutterudites.} \label{fig:CovRadius-RangeMend} \end{figure} This separation based on the anion Ewald energy is similar to that based on the maximum Mendeleev number, which may explain why the maximum of the Mendeleev number (which describes the anion in our compounds) plays no role in our classification according to our SHAP scores, whereas the range of the Mendeleev number is much more important. The range of the Mendeleev number essentially separates compounds that contain elements from the first three columns of the periodic table or from the lanthanide or actinide series (such as LaNiO$_3$ or EuO) from binary transition metal compounds (such as FeS or NiO). The importance of this feature is easier to discern through its interaction with other features, whereas the Ewald energy alone leads to a clear separation based on composition, particularly on the anion type. From the combination of ADCR and the range of the Mendeleev number, we find strong clustering of MIT materials (\autoref{fig:CovRadius-RangeMend}). In particular, a range of the Mendeleev number over 40 and an ADCR below 50 pm are likely indicators of an MIT material. We also find that most known MIT compounds tend to be oxides containing yttrium or a lanthanide in their composition (as highlighted by the green dots enclosed in horizontal ellipses in \autoref{fig:CovRadius-RangeMend}). 
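Three of the features central to this clustering analysis (the range of the Mendeleev number, the ADCR, and the per-atom point-charge energy) can be sketched in a few lines. The element tables below are small hypothetical excerpts, with Mendeleev numbers following the Pettifor-style ordering only approximately, and the finite-cluster Coulomb sum is only a toy stand-in for the periodic Ewald summation performed by \texttt{Matminer}:

```python
import itertools

# HYPOTHETICAL excerpts of elemental lookup tables; a production
# featurizer would draw on complete elemental data sets.
MENDELEEV_NUMBER = {"La": 33, "Ni": 67, "O": 87, "Fe": 61, "S": 89}
COVALENT_RADIUS_PM = {"La": 207, "Ni": 124, "O": 66}
K_E = 14.3996  # Coulomb constant in eV*Angstrom/e^2

def mendeleev_range(elements):
    """Range of the Mendeleev number over a compound's elements."""
    values = [MENDELEEV_NUMBER[el] for el in elements]
    return max(values) - min(values)

def average_deviation(table, formula):
    """Composition-weighted mean absolute deviation of an elemental
    property (e.g., the ADCR); formula maps element -> count."""
    total = sum(formula.values())
    mean = sum(table[el] * n for el, n in formula.items()) / total
    return sum(abs(table[el] - mean) * n
               for el, n in formula.items()) / total

def point_charge_energy_per_atom(sites):
    """Toy direct Coulomb sum over a finite cluster of
    (charge_in_e, (x, y, z) in Angstrom) sites, normalized per atom.
    A stand-in for the periodic Ewald sum, not a replacement."""
    energy = 0.0
    for (q1, r1), (q2, r2) in itertools.combinations(sites, 2):
        d = sum((a - b) ** 2 for a, b in zip(r1, r2)) ** 0.5
        energy += K_E * q1 * q2 / d
    return energy / len(sites)

# A LaNiO3-like composition mixes early and late elements, giving a
# Mendeleev-number range above 40 and an ADCR below 50 pm, whereas a
# binary transition metal compound such as FeS has a small range.
print(mendeleev_range(["La", "Ni", "O"]))  # 54
print(mendeleev_range(["Fe", "S"]))        # 28
print(round(average_deviation(COVALENT_RADIUS_PM,
                              {"La": 1, "Ni": 1, "O": 3}), 1))  # 47.8
pair = [(+2, (0.0, 0.0, 0.0)), (-2, (2.0, 0.0, 0.0))]
print(round(point_charge_energy_per_atom(pair), 1))  # -14.4
```

The more ionic the nominal charges, the more negative the per-atom electrostatic energy, mirroring the phosphate-versus-sulfide trend described above.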
\begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{fig_15_adcr_ewald.pdf} \caption{Interplay of Ewald energy per atom and ADCR. Note the linear correlation within the same family (e.g., the binary Ti and V oxides) and the clustering of the compounds based on their anion. % $R$ and $Rn$ and yttrium are rare-earth metals, with the same valence, as defined in the main text.
} \label{fig:CovRadius-Ewald} \end{figure} Although the Ewald-energy SHAP scores in \autoref{fig:MIT_importances} are difficult to interpret for the database as a whole, they are easier to understand if we focus on a particular family. Consider the V$_{n}$O$_{m}$ binary vanadium oxide family. We find that V$_2$O$_5$, with an Ewald energy of $-57\,\mathrm{eV/atom}$, is insulating, while VO, with an Ewald energy of $-24\,\mathrm{eV/atom}$, is metallic. The V$_6$O$_{13}$, VO$_2$, V$_8$O$_{15}$, V$_6$O$_{11}$, V$_5$O$_9$, V$_4$O$_7$, V$_3$O$_5$, and V$_2$O$_3$ vanadates, which exhibit intermediate Ewald energies, are all MIT compounds. This observation strongly suggests that the Ewald energy of the MIT compounds within a particular materials family lies between those of the metallic and insulating members of that family. A similar analysis can be performed for the Ti$_n$O$_m$ family (\autoref{fig:CovRadius-Ewald}). \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{fig_16_electronegativity.pdf} \caption{2D scatterplot of the average deviation of the covalent radius and the average deviation of the electronegativity features. Although the two features are strongly linearly correlated across our materials database, the MIT materials nonetheless show strong bunching. 
} \label{fig:elec-covrad} \end{figure} The Ewald energy is less useful, however, for differentiating materials with the same stoichiometry and structure type but composed of different chemistries: LaNiO$_3$ has an Ewald energy of $-33\,\mathrm{eV/atom}$ and LuNiO$_3$ has an Ewald energy of $-34\,\mathrm{eV/atom}$, despite the very large differences between the two illustrated in \autoref{fig:nickelate}. As the materials classes span varying ranges of Ewald energy, it is difficult to understand its role from the SHAP plot alone without focusing on a particular family. Finally, we also included the average deviation of the electronegativity as a feature for our classifier. Although it is strongly linearly correlated with the ADCR, we find, surprisingly, that the MIT compounds cluster strongly within this 2D feature space (\autoref{fig:elec-covrad}). \begin{figure}[t] \centering \includegraphics[width=0.65\textwidth]{fig_17_binder_demo.pdf} \caption{Screenshot of the web-based Binder notebook, which permits a user to submit a crystal information file (CIF) for a compound and obtain three electronic classification probabilities: metal \emph{vs} non-metal (M), insulator \emph{vs} non-insulator (I), and MIT \emph{vs} non-MIT (T), as shown for GaMo$_4$Se$_8$. } \label{fig:binder-demo} \end{figure} \subsection{Online Classifiers for Predicting Conductivity Classes} Our pre-trained electronic classifiers are deployed and served to the larger materials science community through the cloud service Binder\cite{binder}. Several Jupyter notebooks\cite{Kluyver2016jupyter} hosted on the Binder server may be used to easily reproduce the results presented herein. The Binder website offers interactive execution of these notebooks directly in a web browser, without installing any dependencies on a local machine. All notebooks are also available at Ref.\ \citen{code-link}. 
Here we present a brief demonstration of the Binder notebook at \url{https://tinyurl.com/mit-classifiers}, which enables a user to upload a structure in CIF format and immediately obtain a classification from our models, without the need to install any software. After uploading the lacunar spinel structure GaMo$_4$Se$_8$, which was identified computationally as a potential MIT material\cite{wang2020featureless}, the notebook automatically featurizes the new structure and makes a prediction. After running the three binary classifiers trained on the reduced feature set, GaMo$_4$Se$_8$ is classified as an MIT material (\autoref{fig:binder-demo}). Note that the assigned classifications may not be mutually exclusive, because three independent binary classifications are made rather than a single ternary one. The classifiers predict the following probabilities: 0.4761 for being a metal, 0.0479 for being an insulator, and 0.5441 for being an MIT compound. The default threshold for a positive classification is 0.5; that is, a positive classification is made only if the predicted probability exceeds 0.5. In this case, GaMo$_4$Se$_8$ is predicted only to be an MIT compound. It is worth noting, however, that although GaMo$_4$Se$_8$ does not receive a positive metal classification, its probability of being a metal (0.4761) is close to 0.5. \section{Conclusions} In this work, we highlighted and attempted to resolve two important issues in the field of metal-insulator transition compounds that are in fact quite general to the study of quantum materials more broadly. The first is the lack of a widely accessible database of materials based on a particularly relevant, but rare, property. The second is the need for a methodology that provides insight into this class of materials complementary to that of standard electronic structure and model calculation methods. 
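The thresholding step described above amounts to only a few lines of code; the minimal sketch below uses the probabilities quoted for GaMo$_4$Se$_8$ (the function name is ours, not part of the released notebooks):

```python
# Convert the three binary-classifier probabilities into
# (possibly non-exclusive) class labels with the default 0.5
# threshold; the probability values are those quoted in the text.
def assign_labels(probs, threshold=0.5):
    """probs: dict mapping class name -> probability.
    Returns the classes whose probability exceeds the threshold."""
    return [cls for cls, p in probs.items() if p > threshold]

probs = {"metal": 0.4761, "insulator": 0.0479, "MIT": 0.5441}
print(assign_labels(probs))  # ['MIT']
```

Because each conductivity class has its own binary classifier, more than one label, or none at all, can be returned for a given structure.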
Our electronic materials database, comprising MIT compounds as well as related metals and insulators, will help broaden the domain knowledge of other scientists in the field. On this database, we trained three easy-to-interpret machine learning models. \rev{We recognized that the training data size is limited, and took measures to avoid over-fitting whenever possible. We offered a brief analysis of the robustness and extrapolation power of the MIT classifier in the SI.} Based on the MIT classifier model, we identified new features that determine whether or not a material exhibits a temperature-driven MIT, advancing our domain knowledge for this type of classification problem. In particular, we found the Global Instability Index, the Average Deviation of the Covalent Radius, the Ewald Energy, and the Range of the Mendeleev Number, as well as combinations of pairs of these features, to be important to the performance of the MIT classification model. \rev{The high importance of the transition metal-transition metal distance, as well as its interaction with other features such as the Hubbard $U$, highlights the ability of machine learning approaches to yield physical insight and confirms previous theories about the nature of the MIT in an unbiased way.} MIT materials exhibited strong clustering when plotted in a 2D space spanned by two novel features, the average deviation of the covalent radius (ADCR) and the range of the Mendeleev number, making it possible to quickly assess whether novel materials discovered in the laboratory or predicted computationally may exhibit MITs. We also provided a periodic table with the Mendeleev number and covalent radius of each atom to enable other scientists to compute these two features quickly. We conjecture that these features may be relevant in creating simple models analogous to the Goldhammer-Herzfeld criterion,\citep{GF} which allows for the differentiation between elemental metals and nonmetals. 
Finally, we offered a simple-to-use online platform that allows users to upload a crystal structure file and obtain a probabilistic prediction on the electronic class of their material. \begin{suppinfo} The Supporting Information is available free of charge on the ACS Publications website at DOI: the performance comparison of XGBoost classifiers (1) against other types of machine learning models (e.g., random forest) trained on the full feature set, (2) trained on the full feature set against those trained on the reduced feature set, and (3) trained in this work against other models trained in previous works to classify only metals and insulators; short primer on SHAP values; NLP search keywords; survey analysis comparing classification accuracy of domain experts against XGBoost classifiers; ALE analysis; \rev{element heatmaps for metals, insulators and MIT compounds; model evaluation with holdout test sets; and a brief discussion on the robustness and extrapolation power of the MIT classifier.} The materials database and calculated features are available online at Ref.\ \citen{code-link}. \end{suppinfo} \acknowledgement The authors thank Professors R.\ Seshadri and S.\ Wilson at the University of California, Santa Barbara, for helpful discussions about this project. This work was supported in part by the National Science Foundation (NSF) under award number DMR-1729303. The information, data, or work presented herein was also funded in part by the Advanced Research Projects Agency-Energy (ARPA-E), U.S.\ Department of Energy, under Award Number DE-AR0001209. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof. A.B.G.\ and P.R.\ contributed equally to this work, which was initiated by N.W. 
A.B.G.\ identified the handpicked features, performed the physical analysis of the results, the human identification and classification of the materials in the final database, and helped coordinate the project. P.R.\ built the final version of the classifier models, the online pipeline, and the featurizer used throughout the project. A.R.T.\ built the NLP pipeline used and identified relevant compounds from the pipeline to add to the materials database. S.Z.\ performed the ALE analysis. K.M.\ built the webpage for the materials database. D.A.\ supervised S.T. E.A.O.\ supervised A.R.T. J.M.R.\ conceived and administered the project. All authors contributed to writing and revising the paper. \providecommand{\latin}[1]{#1} \makeatletter \providecommand{\doi} {\begingroup\let\do\@makeother\dospecials \catcode`\{=1 \catcode`\}=2 \doi@aux} \providecommand{\doi@aux}[1]{\endgroup\texttt{#1}} \makeatother \providecommand*\mcitethebibliography{\thebibliography} \csname @ifundefined\endcsname{endmcitethebibliography} {\let\endmcitethebibliography\endthebibliography}{} \begin{mcitethebibliography}{72} \providecommand*\natexlab[1]{#1} \providecommand*\mciteSetBstSublistMode[1]{} \providecommand*\mciteSetBstMaxWidthForm[2]{} \providecommand*\mciteBstWouldAddEndPuncttrue {\def\unskip.}{\unskip.}} \providecommand*\mciteBstWouldAddEndPunctfalse {\let\unskip.}\relax} \providecommand*\mciteSetBstMidEndSepPunct[3]{} \providecommand*\mciteSetBstSublistLabelBeginEnd[3]{} \providecommand*\unskip.}{} \mciteSetBstSublistMode{f} \mciteSetBstMaxWidthForm{subitem}{(\alph{mcitesubitemcount})} \mciteSetBstSublistLabelBeginEnd {\mcitemaxwidthsubitemform\space} {\relax} {\relax} \bibitem[Imada \latin{et~al.}(1998)Imada, Fujimori, and Tokura]{Imada/Fujimori/Tokura:1998} Imada,~M.; Fujimori,~A.; Tokura,~Y. {Metal-insulator transitions}. 
\emph{Reviews of Modern Physics} \textbf{1998}, \emph{70}, 1039--1263, DOI: \doi{10.1103/RevModPhys.70.1039}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Hsu \latin{et~al.}(2018)Hsu, Li, Deng, and {Das Sarma}]{PhysRevLett.121.245701} Hsu,~Y.-T.; Li,~X.; Deng,~D.-L.; {Das Sarma},~S. {Machine Learning Many-Body Localization: Search for the Elusive Nonergodic Metal}. \emph{Phys. Rev. Lett.} \textbf{2018}, \emph{121}, 245701, DOI: \doi{10.1103/PhysRevLett.121.245701}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Vargas-Hern{\'{a}}ndez \latin{et~al.}(2018)Vargas-Hern{\'{a}}ndez, Sous, Berciu, and Krems]{Vargas-Hernandez2018} Vargas-Hern{\'{a}}ndez,~R.~A.; Sous,~J.; Berciu,~M.; Krems,~R.~V. {Extrapolating Quantum Observables with Machine Learning: Inferring Multiple Phase Transitions from Properties of a Single Phase}. \emph{Physical Review Letters} \textbf{2018}, \emph{121}, 255702, DOI: \doi{10.1103/PhysRevLett.121.255702}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Dong \latin{et~al.}(2019)Dong, Pollmann, and Zhang]{Dong2019} Dong,~X.-Y.; Pollmann,~F.; Zhang,~X.-F. {Machine learning of quantum phase transitions}. \emph{Physical Review B} \textbf{2019}, \emph{99}, 121104, DOI: \doi{10.1103/PhysRevB.99.121104}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Shukla \latin{et~al.}(2015)Shukla, Thathachary, Agrawal, Paik, Aziz, Schlom, Gupta, Engel-Herbert, and Datta]{Shukla2015} Shukla,~N.; Thathachary,~A.~V.; Agrawal,~A.; Paik,~H.; Aziz,~A.; Schlom,~D.~G.; Gupta,~S.~K.; Engel-Herbert,~R.; Datta,~S. 
A steep-slope transistor based on abrupt electronic phase transition. \emph{Nature Communications} \textbf{2015}, \emph{6}, DOI: \doi{10.1038/ncomms8812}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Brahlek \latin{et~al.}(2017)Brahlek, Zhang, Lapano, Zhang, Engel-Herbert, Shukla, Datta, Paik, and Schlom]{Brahlek2017} Brahlek,~M.; Zhang,~L.; Lapano,~J.; Zhang,~H.-T.; Engel-Herbert,~R.; Shukla,~N.; Datta,~S.; Paik,~H.; Schlom,~D.~G. Opportunities in vanadium-based strongly correlated electron systems. \emph{{MRS} Communications} \textbf{2017}, \emph{7}, 27--52, DOI: \doi{10.1557/mrc.2017.2}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Cui \latin{et~al.}(2018)Cui, Ke, Liu, Chen, Wang, Zhang, Zhou, Wang, Gao, and Long]{Cui2018} Cui,~Y.; Ke,~Y.; Liu,~C.; Chen,~Z.; Wang,~N.; Zhang,~L.; Zhou,~Y.; Wang,~S.; Gao,~Y.; Long,~Y. {Thermochromic {VO}$_2$ for Energy-Efficient Smart Windows}. \emph{Joule} \textbf{2018}, \emph{2}, 1707--1746, DOI: \doi{10.1016/j.joule.2018.06.018}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Hiroi(2015)]{Hiroi2015} Hiroi,~Z. {Structural instability of the rutile compounds and its relevance to the metal--insulator transition of VO$_2$}. \emph{Progress in Solid State Chemistry} \textbf{2015}, \emph{43}, 47--69, DOI: \doi{10.1016/j.progsolidstchem.2015.02.001}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Wang \latin{et~al.}(2020)Wang, Iyer, Chen, and Rondinelli]{wang2020featureless} Wang,~Y.; Iyer,~A.; Chen,~W.; Rondinelli,~J.~M. 
Featureless adaptive optimization accelerates functional electronic materials design. \emph{Applied Physics Reviews} \textbf{2020}, \emph{7}, 041403, DOI: \doi{10.1063/5.0018811}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Georgescu \latin{et~al.}(2019)Georgescu, Peil, Disa, Georges, and Millis]{Georgescu14434} Georgescu,~A.~B.; Peil,~O.~E.; Disa,~A.~S.; Georges,~A.; Millis,~A.~J. Disentangling lattice and electronic contributions to the metal{\textendash}insulator transition from bulk vs. layer confined RNiO3. \emph{Proceedings of the National Academy of Sciences} \textbf{2019}, \emph{116}, 14434--14439, DOI: \doi{10.1073/pnas.1818728116}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Dom{\'{\i}}nguez \latin{et~al.}(2020)Dom{\'{\i}}nguez, Georgescu, Mundet, Zhang, Fowlie, Mercy, Waelchli, Catalano, Alexander, Ghosez, Georges, Millis, Gibert, and Triscone]{Dominguez2020} Dom{\'{\i}}nguez,~C.; Georgescu,~A.~B.; Mundet,~B.; Zhang,~Y.; Fowlie,~J.; Mercy,~A.; Waelchli,~A.; Catalano,~S.; Alexander,~D. T.~L.; Ghosez,~P.; Georges,~A.; Millis,~A.~J.; Gibert,~M.; Triscone,~J.-M. Length scales of interfacial coupling between metal and insulator phases in oxides. \emph{Nature Materials} \textbf{2020}, \emph{19}, 1182--1187, DOI: \doi{10.1038/s41563-020-0757-x}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Georgescu and Millis(2021)Georgescu, and Millis]{landscapes} Georgescu,~A.~B.; Millis,~A.~J. Energy Landscape analysis of metal-insulator transitions: theory and application to Ca$_2$RuO$_4$, RNiO$_3$ and their heterostructures. \textbf{2021}, arXiv.org. \url{https://arxiv.org/abs/2105.02271}. (accessed 2021-06-01). 
\relax \mciteBstWouldAddEndPunctfalse \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Schueller \latin{et~al.}(2020)Schueller, Miller, Zhang, Zuo, Rondinelli, Wilson, and Seshadri]{PhysRevMaterials.4.104401} Schueller,~E.~C.; Miller,~K.~D.; Zhang,~W.; Zuo,~J.~L.; Rondinelli,~J.~M.; Wilson,~S.~D.; Seshadri,~R. Structural signatures of the insulator-to-metal transition in $\mathrm{Ba}{\mathrm{Co}}_{1\ensuremath{-}x}{\mathrm{Ni}}_{x}{\mathrm{S}}_{2}$. \emph{Phys. Rev. Materials} \textbf{2020}, \emph{4}, 104401, DOI: \doi{10.1103/PhysRevMaterials.4.104401}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Laurita \latin{et~al.}(2019)Laurita, Puggioni, Hickox-Young, Rondinelli, Gaultois, Page, Lamontagne, and Seshadri]{PhysRevMaterials.3.095003} Laurita,~G.; Puggioni,~D.; Hickox-Young,~D.; Rondinelli,~J.~M.; Gaultois,~M.~W.; Page,~K.; Lamontagne,~L.~K.; Seshadri,~R. Uncorrelated Bi off-centering and the insulator-to-metal transition in ruthenium ${A}_{2}{\mathrm{Ru}}_{2}{\mathrm{O}}_{7}$ pyrochlores. \emph{Phys. Rev. Materials} \textbf{2019}, \emph{3}, 095003, DOI: \doi{10.1103/PhysRevMaterials.3.095003}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Georgescu \latin{et~al.}(2021)Georgescu, Kim, and Ismail-Beigi]{GEORGESCU2021107991} Georgescu,~A.~B.; Kim,~M.; Ismail-Beigi,~S. Boson Subsidiary Solver (BoSS) v1.1. 
\emph{Computer Physics Communications} \textbf{2021}, \emph{265}, 107991, DOI: \doi{https://doi.org/10.1016/j.cpc.2021.107991}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Jager \latin{et~al.}(2017)Jager, Ott, Kraus, Kaplan, Pouse, Marvel, Haglund, Neumark, and Leone]{Jager2017} Jager,~M.~F.; Ott,~C.; Kraus,~P.~M.; Kaplan,~C.~J.; Pouse,~W.; Marvel,~R.~E.; Haglund,~R.~F.; Neumark,~D.~M.; Leone,~S.~R. {Tracking the insulator-to-metal phase transition in VO$_2$ with few-femtosecond extreme UV transient absorption spectroscopy}. \emph{Proceedings of the National Academy of Sciences} \textbf{2017}, \emph{114}, 9558--9563, DOI: \doi{10.1073/pnas.1707602114}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Lee \latin{et~al.}(2018)Lee, Chung, Shi, Kim, Campbell, Xue, Song, Choi, Podkaminer, Kim, Ryan, Kim, Paudel, Kang, Spinuzzi, Tenne, Tsymbal, Rzchowski, Chen, Lee, and Eom]{Lee2018} Lee,~D. \latin{et~al.} {Isostructural metal-insulator transition in VO$_2$}. \emph{Science} \textbf{2018}, \emph{362}, 1037--1040, DOI: \doi{10.1126/science.aam9189}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Wagner \latin{et~al.}(2018)Wagner, Puggioni, and Rondinelli]{Nick} Wagner,~N.; Puggioni,~D.; Rondinelli,~J.~M. Learning from Correlations Based on Local Structure: Rare-Earth Nickelates Revisited. 
\emph{Journal of Chemical Information and Modeling} \textbf{2018}, \emph{58}, 2491--2501, DOI: \doi{10.1021/acs.jcim.8b00411}, PMID: 30111111\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Mercy \latin{et~al.}(2017)Mercy, Bieder, Íñiguez, and Ghosez]{Ghosez} Mercy,~A.; Bieder,~J.; Íñiguez,~J.; Ghosez,~P. Structurally triggered metal-insulator transition in rare-earth nickelates. \emph{Nature Communications} \textbf{2017}, \emph{8}, 1677, DOI: \doi{10.1038/s41467-017-01811-x}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Peil \latin{et~al.}(2019)Peil, Hampel, Ederer, and Georges]{Oleg} Peil,~O.~E.; Hampel,~A.; Ederer,~C.; Georges,~A. Mechanism and control parameters of the coupled structural and metal-insulator transition in nickelates. \emph{Phys. Rev. B} \textbf{2019}, \emph{99}, 245127, DOI: \doi{10.1103/PhysRevB.99.245127}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Johnston \latin{et~al.}(2014)Johnston, Mukherjee, Elfimov, Berciu, and Sawatzky]{PhysRevLett.112.106404} Johnston,~S.; Mukherjee,~A.; Elfimov,~I.; Berciu,~M.; Sawatzky,~G.~A. Charge Disproportionation without Charge Transfer in the Rare-Earth-Element Nickelates as a Possible Mechanism for the Metal-Insulator Transition. \emph{Phys. Rev. Lett.} \textbf{2014}, \emph{112}, 106404, DOI: \doi{10.1103/PhysRevLett.112.106404}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Fomichev \latin{et~al.}(2020)Fomichev, Khaliullin, and Berciu]{Berciu} Fomichev,~S.; Khaliullin,~G.; Berciu,~M. 
Effect of electron-lattice coupling on charge and magnetic order in rare-earth nickelates. \emph{Phys. Rev. B} \textbf{2020}, \emph{101}, 024402, DOI: \doi{10.1103/PhysRevB.101.024402}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Disa \latin{et~al.}(2017)Disa, Georgescu, Hart, Kumah, Shafer, Arenholz, Arena, Ismail-Beigi, Taheri, Walker, and Ahn]{PhysRevMaterials.1.024410} Disa,~A.~S.; Georgescu,~A.~B.; Hart,~J.~L.; Kumah,~D.~P.; Shafer,~P.; Arenholz,~E.; Arena,~D.~A.; Ismail-Beigi,~S.; Taheri,~M.~L.; Walker,~F.~J.; Ahn,~C.~H. Control of hidden ground-state order in $\mathrm{NdNi}{\mathrm{O}}_{3}$ superlattices. \emph{Phys. Rev. Materials} \textbf{2017}, \emph{1}, 024410, DOI: \doi{10.1103/PhysRevMaterials.1.024410}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Liao \latin{et~al.}(2018)Liao, Gauquelin, Green, M{\"u}ller-Caspary, Lobato, Li, Van~Aert, Verbeeck, Huijben, Grisolia, Rouco, El~Hage, Villegas, Mercy, Bibes, Ghosez, Sawatzky, Rijnders, and Koster]{Liao9515} Liao,~Z. \latin{et~al.} Metal{\textendash}insulator-transition engineering by modulation tilt-control in perovskite nickelates for room temperature optical switching. \emph{Proceedings of the National Academy of Sciences} \textbf{2018}, \emph{115}, 9515--9520, DOI: \doi{10.1073/pnas.1807457115}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Zhang \latin{et~al.}(2019)Zhang, McLeod, Han, Chen, Bechtel, Yao, Gilbert~Corder, Ciavatti, Tao, Aronson, Carr, Martin, Sow, Yonezawa, Nakamura, Terasaki, Basov, Millis, Maeno, and Liu]{CRO} Zhang,~J. \latin{et~al.} Nano-Resolved Current-Induced Insulator-Metal Transition in the Mott Insulator ${\mathrm{Ca}}_{2}{\mathrm{RuO}}_{4}$. 
\emph{Phys. Rev. X} \textbf{2019}, \emph{9}, 011032, DOI: \doi{10.1103/PhysRevX.9.011032}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Okazaki \latin{et~al.}(2020)Okazaki, Kobayashi, Kumai, Nakao, Murakami, Nakamura, Taniguchi, and Terasaki]{CRO2} Okazaki,~R.; Kobayashi,~K.; Kumai,~R.; Nakao,~H.; Murakami,~Y.; Nakamura,~F.; Taniguchi,~H.; Terasaki,~I. Current-induced Giant Lattice Deformation in the Mott Insulator Ca$_2$RuO$_4$. \emph{Journal of the Physical Society of Japan} \textbf{2020}, \emph{89}, 044710, DOI: \doi{10.7566/JPSJ.89.044710}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Tsurumaki-Fukuchi \latin{et~al.}(2020)Tsurumaki-Fukuchi, Tsubaki, Katase, Kamiya, Arita, and Takahashi]{CRO3} Tsurumaki-Fukuchi,~A.; Tsubaki,~K.; Katase,~T.; Kamiya,~T.; Arita,~M.; Takahashi,~Y. Stable and Tunable Current-Induced Phase Transition in Epitaxial Thin Films of Ca$_2$RuO$_4$. \emph{ACS Applied Materials \& Interfaces} \textbf{2020}, \emph{12}, 28368--28374, DOI: \doi{10.1021/acsami.0c05181}, PMID: 32460482\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Bertinshaw \latin{et~al.}(2019)Bertinshaw, Gurung, Jorba, Liu, Schmid, Mantadakis, Daghofer, Krautloher, Jain, Ryu, Fabelo, Hansmann, Khaliullin, Pfleiderer, Keimer, and Kim]{CRO4} Bertinshaw,~J. \latin{et~al.} Unique Crystal Structure of ${\mathrm{Ca}}_{2}{\mathrm{RuO}}_{4}$ in the Current Stabilized Semimetallic State. \emph{Phys. Rev. 
Lett.} \textbf{2019}, \emph{123}, 137204, DOI: \doi{10.1103/PhysRevLett.123.137204}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Han and Millis(2018)Han, and Millis]{CRO5} Han,~Q.; Millis,~A. Lattice Energetics and Correlation-Driven Metal-Insulator Transitions: The Case of ${\mathrm{Ca}}_{2}{\mathrm{RuO}}_{4}$. \emph{Phys. Rev. Lett.} \textbf{2018}, \emph{121}, 067601, DOI: \doi{10.1103/PhysRevLett.121.067601}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Periyasamy \latin{et~al.}(2020)Periyasamy, Fjellvåg, Fjellvåg, and Sjåstad]{CMO} Periyasamy,~M.; Fjellvåg,~Ø.~S.; Fjellvåg,~H.; Sjåstad,~A.~O. Coupling of magnetoresistance switching and glassy magnetic state at the metal–insulator transition in Ruddlesden-Popper manganite Ca$_4$Mn$_3$O$_{10}$. \emph{Journal of Magnetism and Magnetic Materials} \textbf{2020}, \emph{511}, 166949, DOI: \doi{10.1016/j.jmmm.2020.166949}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Szymanski \latin{et~al.}(2019)Szymanski, Walters, Puggioni, and Rondinelli]{mixed} Szymanski,~N.~J.; Walters,~L.~N.; Puggioni,~D.; Rondinelli,~J.~M. Design of Heteroanionic MoON Exhibiting a Peierls Metal-Insulator Transition. \emph{Phys. Rev. 
Lett.} \textbf{2019}, \emph{123}, 236402, DOI: \doi{10.1103/PhysRevLett.123.236402}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Bansal \latin{et~al.}(2020)Bansal, Niedziela, Calder, Lanigan-Atkins, Rawl, Said, Abernathy, Kolesnikov, Zhou, and Delaire]{mixed2} Bansal,~D.; Niedziela,~J.~L.; Calder,~S.; Lanigan-Atkins,~T.; Rawl,~R.; Said,~A.~H.; Abernathy,~D.~L.; Kolesnikov,~A.~I.; Zhou,~H.; Delaire,~O. Magnetically driven phonon instability enables the metal{\textendash}insulator transition in h-{FeS}. \emph{Nature Physics} \textbf{2020}, \emph{16}, 669--675, DOI: \doi{10.1038/s41567-020-0857-1}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Shannon and Fischer(2016)Shannon, and Fischer]{shannon_empirical_2016} Shannon,~R.~D.; Fischer,~R.~X. Empirical electronic polarizabilities of ions for the prediction and interpretation of refractive indices: Oxides and oxysalts. \emph{American Mineralogist} \textbf{2016}, \emph{101}, 2288--2300, DOI: \doi{10.2138/am-2016-5730}, Publisher: {GeoScienceWorld}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Naccarato \latin{et~al.}(2019)Naccarato, Ricci, Suntivich, Hautier, Wirtz, and Rignanese]{PhysRevMaterials.3.044602} Naccarato,~F.; Ricci,~F.; Suntivich,~J.; Hautier,~G.; Wirtz,~L.; Rignanese,~G.-M. Searching for materials with high refractive index and wide band gap: A first-principles high-throughput study. \emph{Phys. Rev. 
Materials} \textbf{2019}, \emph{3}, 044602, DOI: \doi{10.1103/PhysRevMaterials.3.044602}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Hellenbrandt(2004)]{Hellenbrandt2004} Hellenbrandt,~M. {The Inorganic Crystal Structure Database (ICSD)--Present and Future}. \emph{Crystallogr. Rev.} \textbf{2004}, \emph{10}, 17--22, DOI: \doi{10.1080/08893110410001664882}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Spr(2019)]{SpringerMaterials} {SpringerMaterials}. \textbf{2019}, \url{https://materials.springer.com/} (accessed 2021-06-01) \relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Jain \latin{et~al.}(2013)Jain, Ong, Hautier, Chen, Richards, Dacek, Cholia, Gunter, Skinner, Ceder, and Persson]{Jain2013} Jain,~A.; Ong,~S.~P.; Hautier,~G.; Chen,~W.; Richards,~W.~D.; Dacek,~S.; Cholia,~S.; Gunter,~D.; Skinner,~D.; Ceder,~G.; Persson,~K.~A. {Commentary: The Materials Project: A materials genome approach to accelerating materials innovation}. \emph{APL Materials} \textbf{2013}, \emph{1}, 011002, DOI: \doi{10.1063/1.4812323}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Saal \latin{et~al.}(2013)Saal, Kirklin, Aykol, Meredig, and Wolverton]{Saal2013MaterialsOQMD} Saal,~J.~E.; Kirklin,~S.; Aykol,~M.; Meredig,~B.; Wolverton,~C. {Materials Design and Discovery with High-Throughput Density Functional Theory: The Open Quantum Materials Database (OQMD)}. 
\emph{JOM} \textbf{2013}, \emph{65}, 1501--1509, DOI: \doi{10.1007/s11837-013-0755-4}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Curtarolo \latin{et~al.}(2012)Curtarolo, Setyawan, Wang, Xue, Yang, Taylor, Nelson, Hart, Sanvito, Buongiorno-Nardelli, Mingo, and Levy]{Curtarolo2012} Curtarolo,~S.; Setyawan,~W.; Wang,~S.; Xue,~J.; Yang,~K.; Taylor,~R.~H.; Nelson,~L.~J.; Hart,~G.~L.; Sanvito,~S.; Buongiorno-Nardelli,~M.; Mingo,~N.; Levy,~O. {AFLOWLIB.ORG: A distributed materials properties repository from high-throughput ab initio calculations}. \emph{Computational Materials Science} \textbf{2012}, \emph{58}, 227--235, DOI: \doi{10.1016/j.commatsci.2012.02.002}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Varignon \latin{et~al.}(2019)Varignon, Bibes, and Zunger]{Varignon2019} Varignon,~J.; Bibes,~M.; Zunger,~A. Origin of band gaps in 3d perovskite oxides. \emph{Nature Communications} \textbf{2019}, \emph{10}, DOI: \doi{10.1038/s41467-019-09698-6}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Ren \latin{et~al.}(2021)Ren, Wagner, Georgescu, and Rondinelli]{code-link} Ren,~P.; Wagner,~N.; Georgescu,~A.; Rondinelli,~J.~M. Electronic Materials Binary Classifiers. \textbf{2021}, \url{https://doi.org/10.5281/zenodo.4765321}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Ward \latin{et~al.}(2016)Ward, Agrawal, Choudhary, and Wolverton]{Ward2016AMaterials} Ward,~L.; Agrawal,~A.; Choudhary,~A.; Wolverton,~C. {A general-purpose machine learning framework for predicting properties of inorganic materials}. \emph{npj Comput. 
Mater.} \textbf{2016}, \emph{2}, DOI: \doi{10.1038/npjcompumats.2016.28}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Peets \latin{et~al.}(2013)Peets, Kim, Dosanjh, Reehuis, Maljuk, Aliouane, Ulrich, and Keimer]{SFO} Peets,~D.~C.; Kim,~J.-H.; Dosanjh,~P.; Reehuis,~M.; Maljuk,~A.; Aliouane,~N.; Ulrich,~C.; Keimer,~B. Magnetic phase diagram of Sr${}_{3}$Fe${}_{2}$O${}_{7\ensuremath{-}\ensuremath{\delta}}$. \emph{Phys. Rev. B} \textbf{2013}, \emph{87}, 214410, DOI: \doi{10.1103/PhysRevB.87.214410}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Jensen \latin{et~al.}(2019)Jensen, Kim, Kwon, Gani, Román-Leshkov, Moliner, Corma, and Olivetti]{NLP1} Jensen,~Z.; Kim,~E.; Kwon,~S.; Gani,~T. Z.~H.; Román-Leshkov,~Y.; Moliner,~M.; Corma,~A.; Olivetti,~E. A Machine Learning Approach to Zeolite Synthesis Enabled by Automatic Literature Data Extraction. \emph{ACS Central Science} \textbf{2019}, \emph{5}, 892--899, DOI: \doi{10.1021/acscentsci.9b00193}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Sekine \latin{et~al.}(1997)Sekine, Uchiumi, Shirotani, and Yagi]{PhysRevLett.79.3218} Sekine,~C.; Uchiumi,~T.; Shirotani,~I.; Yagi,~T. Metal-Insulator Transition in ${\mathrm{PrRu}}_{4}\mathrm{P}_{12}$ with Skutterudite Structure. \emph{Phys. Rev. 
Lett.} \textbf{1997}, \emph{79}, 3218--3221, DOI: \doi{10.1103/PhysRevLett.79.3218}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Tshitoyan \latin{et~al.}(2019)Tshitoyan, Dagdelen, Weston, Dunn, Rong, Kononova, Persson, Ceder, and Jain]{NLP2} Tshitoyan,~V.; Dagdelen,~J.; Weston,~L.; Dunn,~A.; Rong,~Z.; Kononova,~O.; Persson,~K.~A.; Ceder,~G.; Jain,~A. Unsupervised word embeddings capture latent knowledge from materials science literature. \emph{Nature} \textbf{2019}, \emph{571}, 95--98, DOI: \doi{10.1038/s41586-019-1335-8}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Balachandran and Rondinelli(2013)Balachandran, and Rondinelli]{PhysRevB.88.054101} Balachandran,~P.~V.; Rondinelli,~J.~M. Interplay of octahedral rotations and breathing distortions in charge-ordering perovskite oxides. \emph{Phys. Rev. B} \textbf{2013}, \emph{88}, 054101, DOI: \doi{10.1103/PhysRevB.88.054101}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Wagner \latin{et~al.}(2018)Wagner, Puggioni, and Rondinelli]{Wagner2018} Wagner,~N.; Puggioni,~D.; Rondinelli,~J.~M. Learning from Correlations Based on Local Structure: Rare-Earth Nickelates Revisited. \emph{Journal of Chemical Information and Modeling} \textbf{2018}, \emph{58}, 2491--2501, DOI: \doi{10.1021/acs.jcim.8b00411}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Salinas-Sanchez \latin{et~al.}(1992)Salinas-Sanchez, Garcia-Mu{\~{n}}oz, Rodriguez-Carvajal, Saez-Puche, and Martinez]{Salinas-Sanchez1992a} Salinas-Sanchez,~A.; Garcia-Mu{\~{n}}oz,~J.; Rodriguez-Carvajal,~J.; Saez-Puche,~R.; Martinez,~J. 
{Structural characterization of R$_{2}$BaCuO$_{5}$ (R = Y, Lu, Yb, Tm, Er, Ho, Dy, Gd, Eu and Sm) oxides by X-ray and neutron diffraction}. \emph{Journal of Solid State Chemistry} \textbf{1992}, \emph{100}, 201--211, DOI: \doi{10.1016/0022-4596(92)90094-C}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Ward \latin{et~al.}(2018)Ward, Dunn, Faghaninia, Zimmermann, Bajaj, Wang, Montoya, Chen, Bystrom, Dylla, Chard, Asta, Persson, Snyder, Foster, and Jain]{Ward2018} Ward,~L. \latin{et~al.} {Matminer: An open source toolkit for materials data mining}. \emph{Computational Materials Science} \textbf{2018}, \emph{152}, 60--69, DOI: \doi{10.1016/j.commatsci.2018.05.018}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Zaanen \latin{et~al.}(1985)Zaanen, Sawatzky, and Allen]{PhysRevLett.55.418} Zaanen,~J.; Sawatzky,~G.~A.; Allen,~J.~W. Band gaps and electronic structure of transition-metal compounds. \emph{Phys. Rev. Lett.} \textbf{1985}, \emph{55}, 418--421, DOI: \doi{10.1103/PhysRevLett.55.418}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Torrance \latin{et~al.}(1991)Torrance, Lacorre, Asavaroengchai, and Metzger]{Torrance1991} Torrance,~J.~B.; Lacorre,~P.; Asavaroengchai,~C.; Metzger,~R.~M. 
{Why are some oxides metallic, while most are insulating?} \emph{Physica C: Superconductivity} \textbf{1991}, \emph{182}, 351--364, DOI: \doi{10.1016/0921-4534(91)90534-6}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Villars \latin{et~al.}(2004)Villars, Cenzual, Daams, Chen, and Iwata]{Villars2004} Villars,~P.; Cenzual,~K.; Daams,~J.; Chen,~Y.; Iwata,~S. {Data-driven atomic environment prediction for binaries using the Mendeleev number}. \emph{Journal of Alloys and Compounds} \textbf{2004}, \emph{367}, 167--175, DOI: \doi{10.1016/j.jallcom.2003.08.060}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[George \latin{et~al.}(2020)George, Waroquiers, Stefano, Petretto, Rignanese, and Hautier]{George2020} George,~J.; Waroquiers,~D.; Stefano,~D.~D.; Petretto,~G.; Rignanese,~G.-M.; Hautier,~G. The Limited Predictive Power of the Pauling Rules. \emph{Angewandte Chemie International Edition} \textbf{2020}, \emph{59}, 7569--7575, DOI: \doi{10.1002/anie.202000829}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Brown(1992)]{Brown1992} Brown,~I.~D. Chemical and steric constraints in inorganic solids. \emph{Acta Crystallographica Section B Structural Science} \textbf{1992}, \emph{48}, 553--572, DOI: \doi{10.1107/s0108768192002453}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Salinas-Sanchez \latin{et~al.}(1992)Salinas-Sanchez, Garcia-Mu{\~{n}}oz, Rodriguez-Carvajal, Saez-Puche, and Martinez]{GII1992} Salinas-Sanchez,~A.; Garcia-Mu{\~{n}}oz,~J.~L.; Rodriguez-Carvajal,~J.; Saez-Puche,~R.; Martinez,~J.~L. 
{Structural Characterization of R$_2$BaCuO$_5$ (R = Y, Lu, Yb, Tm, Er, Ho, Dy, Gd, Eu and Sm) Oxides by X-Ray and Neutron Diffraction}. \emph{J. Solid State Chem.} \textbf{1992}, \emph{100}, 201--211, DOI: \doi{10.1016/0022-4596(92)90094-C}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Brown and Poeppelmeier(2014)Brown, and Poeppelmeier]{Brown/Poeppelmeier:2014} Brown,~I.~D., Poeppelmeier,~K.~R., Eds. \emph{Bond Valences}; Springer Berlin Heidelberg, 2014; pp 91--128, DOI: \doi{10.1007/978-3-642-54968-7}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Hong \latin{et~al.}(2016)Hong, Welsch, and Shao-Horn]{Hong2016} Hong,~W.~T.; Welsch,~R.~E.; Shao-Horn,~Y. {Descriptors of Oxygen-Evolution Activity for Oxides: A Statistical Evaluation}. \emph{The Journal of Physical Chemistry C} \textbf{2016}, \emph{120}, 78--86, DOI: \doi{10.1021/acs.jpcc.5b10071}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Kramida \latin{et~al.}(2019)Kramida, Ralchenko, Reader, and Team]{nist-atomic-database} Kramida,~A.; Ralchenko,~Y.; Reader,~J.; Team,~N.~A. {NIST Atomic Spectra Database} (version 5.7.1). \textbf{2019}, \url{https://physics.nist.gov/asd}. (accessed 2021-06-01)\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Chen and Guestrin(2016)Chen, and Guestrin]{Chen2016} Chen,~T.; Guestrin,~C. {XGBoost}. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining - KDD '16. 
New York, New York, USA, 2016; pp 785--794, DOI: \doi{10.1145/2939672.2939785}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Olson \latin{et~al.}(2018)Olson, La~Cava, Mustahsan, Varik, and Moore]{Olson2018} Olson,~R.~S.; La~Cava,~W.; Mustahsan,~Z.; Varik,~A.; Moore,~J.~H. Data-driven {Advice} for {Applying} {Machine} {Learning} to {Bioinformatics} {Problems}. \textbf{2018}, arXiv.org. \url{https://arxiv.org/abs/1708.05070}. (accessed 2021-06-01). \relax \mciteBstWouldAddEndPunctfalse \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[McInnes \latin{et~al.}(2020)McInnes, Healy, and Melville]{McInnes2020} McInnes,~L.; Healy,~J.; Melville,~J. {UMAP}: {Uniform} {Manifold} {Approximation} and {Projection} for {Dimension} {Reduction}. \textbf{2018}, arXiv.org. \url{https://arxiv.org/abs/1802.03426}. (accessed 2021-06-01). \relax \mciteBstWouldAddEndPunctfalse \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Lundberg and Lee(2017)Lundberg, and Lee]{lundberg_unified_2017} Lundberg,~S.~M.; Lee,~S.-I. In \emph{Advances in {Neural} {Information} {Processing} {Systems} 30}; Guyon,~I., Luxburg,~U.~V., Bengio,~S., Wallach,~H., Fergus,~R., Vishwanathan,~S., Garnett,~R., Eds.; Curran Associates, Inc., 2017; pp 4765--4774\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Pedregosa \latin{et~al.}(2011)Pedregosa, Varoquaux, Gramfort, Michel, Thirion, Grisel, Blondel, Prettenhofer, Weiss, Dubourg, Vanderplas, Passos, Cournapeau, Brucher, Perrot, and Duchesnay]{scikit-learn} Pedregosa,~F. \latin{et~al.} Scikit-learn: Machine Learning in {P}ython. 
\emph{Journal of Machine Learning Research} \textbf{2011}, \emph{12}, 2825--2830\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Curnoe \latin{et~al.}(2002)Curnoe, Harima, Takegahara, and Ueda]{skutheory} Curnoe,~S.; Harima,~H.; Takegahara,~K.; Ueda,~K. {Structural phase transition and anti-quadrupolar ordering in PrFe$_4$P$_{12}$ and PrRu$_4$P$_{12}$}. \emph{Physica B: Condensed Matter} \textbf{2002}, \emph{312-313}, 837--839, DOI: \doi{https://doi.org/10.1016/S0921-4526(01)01261-3}, The International Conference on Strongly Correlated Electron Systems\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[YAMADA(2014)]{giicufe} YAMADA,~I. High-pressure synthesis, electronic states, and structure-property relationships of perovskite oxides, {ACu}$_3$Fe$_4$O$_{12}$ (A: divalent alkaline earth or trivalent rare-earth ion). \emph{Journal of the Ceramic Society of Japan} \textbf{2014}, \emph{122}, 846--851, DOI: \doi{10.2109/jcersj2.122.846}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Bartel \latin{et~al.}(2019)Bartel, Sutton, Goldsmith, Ouyang, Musgrave, Ghiringhelli, and Scheffler]{Bartel2019} Bartel,~C.~J.; Sutton,~C.; Goldsmith,~B.~R.; Ouyang,~R.; Musgrave,~C.~B.; Ghiringhelli,~L.~M.; Scheffler,~M. New tolerance factor to predict the stability of perovskite oxides and halides. 
\emph{Science Advances} \textbf{2019}, \emph{5}, eaav0693, DOI: \doi{10.1126/sciadv.aav0693}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Aryasetiawan \latin{et~al.}(2004)Aryasetiawan, Imada, Georges, Kotliar, Biermann, and Lichtenstein]{cRPA} Aryasetiawan,~F.; Imada,~M.; Georges,~A.; Kotliar,~G.; Biermann,~S.; Lichtenstein,~A.~I. {Frequency-dependent local interactions and low-energy effective models from electronic structure calculations}. \emph{Physical Review B - Condensed Matter and Materials Physics} \textbf{2004}, \emph{70}, 195104 (2004), DOI: \doi{10.1103/PhysRevB.70.195104}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[{P}roject {J}upyter \latin{et~al.}(2018){P}roject {J}upyter, {M}atthias {B}ussonnier, {J}essica {F}orde, {J}eremy {F}reeman, {B}rian {G}ranger, {T}im {H}ead, {C}hris {H}oldgraf, {K}yle {K}elley, {G}ladys {N}alvarte, {A}ndrew {O}sheroff, {P}acer, {Y}uvi {P}anda, {F}ernando {P}erez, {B}enjamin~{R}agan {K}elley, and {C}arol {W}illing]{binder} {P}roject {J}upyter,; {M}atthias {B}ussonnier,; {J}essica {F}orde,; {J}eremy {F}reeman,; {B}rian {G}ranger,; {T}im {H}ead,; {C}hris {H}oldgraf,; {K}yle {K}elley,; {G}ladys {N}alvarte,; {A}ndrew {O}sheroff,; {P}acer,~M.; {Y}uvi {P}anda,; {F}ernando {P}erez,; {B}enjamin~{R}agan {K}elley,; {C}arol {W}illing, {B}inder 2.0 - {R}eproducible, interactive, sharable environments for science at scale. {P}roceedings of the 17th {P}ython in {S}cience {C}onference. 
2018; pp 113 -- 120, DOI: \doi{10.25080/Majora-4af1f417-011}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Kluyver \latin{et~al.}(2016)Kluyver, Ragan-Kelley, P{\'e}rez, Granger, Bussonnier, Frederic, Kelley, Hamrick, Grout, Corlay, Ivanov, Avila, Abdalla, and Willing]{Kluyver2016jupyter} Kluyver,~T.; Ragan-Kelley,~B.; P{\'e}rez,~F.; Granger,~B.; Bussonnier,~M.; Frederic,~J.; Kelley,~K.; Hamrick,~J.; Grout,~J.; Corlay,~S.; Ivanov,~P.; Avila,~D.; Abdalla,~S.; Willing,~C. In \emph{Positioning and Power in Academic Publishing: Players, Agents and Agendas}; Loizides,~F., Schmidt,~B., Eds.; 2016; pp 87 -- 90, DOI: \doi{10.3233/978-1-61499-649-1-87}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Yao \latin{et~al.}(2020)Yao, Kuznetsov, Xiao, Slocombe, Rao, Hensel, and Edwards]{GF} Yao,~B.; Kuznetsov,~V.~L.; Xiao,~T.; Slocombe,~D.~R.; Rao,~C.; Hensel,~F.; Edwards,~P.~P. Metals and non-metals in the periodic table. \emph{Philosophical Transactions of the Royal Society A} \textbf{2020}, \emph{378}, DOI: \doi{10.1098/rsta.2020.0213}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \end{mcitethebibliography} \newpage \begin{figure*} \center \includegraphics[height=1.74in]{TOC} \end{figure*} \centering Table of Contents Graphic \end{document}
\section{Introduction} Spin Glasses (SG) are disordered magnetic alloys that are generally regarded as particularly convenient model systems for the study of glassy behaviour~\cite{mydosh:93,fisher:93}. Indeed, ideas originating in the SG context have been fruitful in the study of structural glasses, optimisation in computer science, quantum information, econophysics, etc. A distinctive feature of SG is that, below their glass temperature, they remain out of equilibrium even if they are left to relax under constant experimental conditions for days or weeks. In spite of this, the {\em equilibrium} properties of their low-temperature phase are believed to control their non-equilibrium behaviour. Indeed, both theory~\cite{ballesteros:00,palassini:99} and experiment~\cite{gunnarsson:91} agree that the sluggish dynamics is due to a {\em thermodynamic} phase transition at a critical temperature $T_\mathrm{c}$, which separates the paramagnetic phase from a low-temperature one where the spins freeze according to extremely complex, essentially unpredictable, ordering patterns. Furthermore, it has now been established that an accurate knowledge of the thermodynamic equilibrium properties would allow us to predict in detail many relevant features of their non-equilibrium relaxation~\cite{franz:98,franz:99}. There is a 30-year-old theoretical controversy regarding the defining properties of the SG phase. On the one hand, there is the Replica Symmetry Breaking (RSB) theory, which stems from Parisi's solution of the SG in the mean-field approximation~\cite{mezard:87,marinari:00}. A system well described by RSB theory is in a critical state for all $T<T_\mathrm{c}$, where the surfaces of the magnetic domains are space filling. On the other hand, the droplet theory~\cite{mcmillan:84,bray:87,fisher:86,fisher:88} views the SG phase as a disguised ferromagnet. It corresponds to the solution of SG models as computed in the Migdal-Kadanoff approximation~\cite{gardner:84}.
We refer the reader to \sref{SECT:MODEL} for the detailed predictions of the RSB and droplet theories for the different physical observables in the SG phase. The predictions of the somewhat intermediate TNT theory~\cite{krzakala:00,palassini:00} are also discussed in \sref{SECT:MODEL}. Numerical simulations are the main tool that theoretical physicists have to make progress in the understanding of the SG phase in $D\!=\!3$ systems. With essentially no exceptions, numerical work in $D\!=\!3$ is best described by RSB theory (see~\cite{marinari:00} for a review, refs.~\cite{contucci:06,contucci:07b,contucci:09} for recent work and refs.~\cite{krzakala:00,palassini:00,jorg:08} for somewhat dissenting views). Yet, numerical investigations have also received severe criticism. It has been claimed that basically all simulations doable to date are contaminated by critical effects~\cite{moore:98}: one would need to simulate still larger systems at still lower temperatures in order to observe the asymptotic behaviour corresponding to large enough systems. Here we present the results of a large-scale simulation performed on Janus~\cite{janus:06,janus:08}, a special-purpose computer designed for the simulation of SG. For this particular task, Janus outperforms standard computers by several orders of magnitude. We have devoted (the equivalent of) 200 days of the full Janus computer to an equilibrium, parallel-tempering simulation of the Ising SG in $D\!=\!3$. We have been able to thermalise lattices of size $L\!=\!32$ down to temperatures $T\!\approx \! 0.64 T_\mathrm{c}$. This is not only a world record, but also provides an unprecedented glimpse of the low-temperature SG phase. Our main objectives here have been (see \sref{SECT:MODEL} for definitions): \begin{itemize} \item To perform a precision comparison of equilibrium and non-equilibrium spatial correlation functions.
It turns out that a time-length dictionary exists, which relates with remarkable accuracy our previous results at finite times~\cite{janus:08b,janus:09b} (on non-equilibrium infinite systems) with equilibrium results on {\em finite} lattice {\em sizes}. The unavoidable conclusion is that experimental SG are in the dynamical non-equilibrium regimes that correspond to equilibrium results on lattices $L\sim 110$. There is no doubt that at these length scales the appropriate effective theory is RSB, irrespective of which of the competing theories is correct for much larger $L$. \item To perform a study of the probability density function (pdf) of the spin overlap, and to extrapolate important quantities to the thermodynamic limit. In so doing, we will gather important information about the correlation length in the spin-glass phase. \item To provide a detailed study of the link overlap. \item Last, but not least, to obtain a large set of configurations, thermalised beyond any reasonable doubt, which will serve as a starting point for more sophisticated studies (such as the investigation of ultrametricity or temperature chaos). In particular, a detailed study of the spatial correlation functions will appear elsewhere~\cite{janus:10b}. \end{itemize} The layout of the rest of this paper is as follows. In \sref{SECT:MODEL} we briefly recall the definition of the Edwards-Anderson model. In particular, in \sref{SECT:OBSERVABLES} we describe the observables considered and discuss the scaling behaviour predicted for them by the different theoretical scenarios. In \sref{SECT:PT-THERM} we describe our simulations and address the crucial problem of ensuring thermal equilibrium. We have found it most useful to study the random walk in temperature space performed in our parallel-tempering simulations (\sref{SECT:THERMALIZATION-CRITERIA}). In particular, our thermalisation checks significantly expand the methodology introduced in~\cite{fernandez:09b}.
At this point, we are ready to study in \sref{SECT:P-DE-Q} the pdf of the spin overlap. In particular, in \sref{sect:picos} we determine a correlation length in the spin-glass phase through finite-size effects. We focus on the spatial correlation functions in \sref{SECT:COND}, finding (\sref{SECT:EQUILIBRIUM-DYNAMICS}) crystal-clear indications of the relevance of our {\em equilibrium} investigations to the {\em non-equilibrium} experimental work. The properties of the link overlap are addressed in \sref{SECT:LINK-OV}. Our conclusions are presented in \sref{SECT:CONCLUSIONS}. Technical details are provided in two appendices. \section{Model, Observables, Theoretical expectations}\label{SECT:MODEL} We divide this section into five parts. In \sref{SECT:MODELDEF} we describe our model. The spin overlap and related quantities are defined in \sref{SECT:OBSERVABLES}. We discuss spatial correlation functions in \sref{SECT:DEF-CORR}. Their non-equilibrium counterparts are recalled in \sref{SECT:DEF-CORR-DINAMICA}. We address the link overlap in \sref{SECT:DEF-QLINK}. Even though most of this section consists of results and definitions well known in the spin-glass community, we include it as a quick reference. We also introduce some physical quantities specific to this paper (some of them new or seldom used). \subsection{The model}\label{SECT:MODELDEF} We consider the $D=3$ Edwards-Anderson model~\cite{edwards:75,edwards:76}. Our dynamical variables are Ising spins $s_{\vn{x}}\!=\!\pm1$, which are placed on the nodes, $\vn{x}$, of a cubic lattice of linear size $L$, containing $V=L^3$ sites, and with periodic boundary conditions.
Their interaction is restricted to lattice nearest neighbours and is given by the Hamiltonian: \begin{equation} {\cal H}=-\sum_{\langle \vn{x}\vn{y}\rangle }\ J_{\vn{x},\vn{y}}\, s_{\vn{x}}\, s_{\vn{y}}\,.\label{EA-H} \end{equation} Note that the couplings $J_{\vn{x},\vn{y}}$ in the Hamiltonian are themselves stochastic variables: they take the values $\pm 1$ with $50\%$ probability. The coupling constants attached to different lattice links are statistically independent. The physical motivation for working with a random Hamiltonian is to model the effects of impurities in a magnetic alloy. We shall consider the {\em quenched} approximation: on the time scale relevant to the spin dynamics, the impurities can be regarded as static. Hence, we will not allow for any back-reaction of the spins on the coupling constants. A given realisation of the $\{ J_{\vn{x},\vn{y}}\}$ (a sample, from now on) will be fixed from the start and considered non-dynamical~\cite{mydosh:93}. A random Hamiltonian implies a double averaging procedure. For any observable $O$ (an arbitrary function of the spins and the coupling constants), we shall {\em first} compute the thermal average $\langle O\rangle$ using the Boltzmann weight at temperature $T$ for the Hamiltonian (\ref{EA-H}). The average over the distribution of coupling constants, $\overline{\langle O\rangle}\,,$ is only taken afterwards. We will sometimes refer to the second averaging, $\overline{(\cdot\cdot\cdot)}$, as the disorder average. The reader will notice that the disorder average induces a non-dynamical gauge symmetry~\cite{toulousse:77}. Let us choose a random sign per site $\epsilon_{\vn{x}}=\pm 1\,$.
Hence, the energy (\ref{EA-H}) is invariant under the transformation \begin{equation} \begin{array}{rcl} s_{\vn{x}}&\longrightarrow &\epsilon_{\vn{x}}s_{\vn{x}}\,,\\ J_{{\vn{x}},{\vn{y}}}&\longrightarrow &\epsilon_{\vn{x}}\epsilon_{\vn{y}} J_{{\vn{x}},{\vn{y}}}\,.\label{GAUGE-TRANSF} \end{array} \end{equation} Since the gauge-transformed couplings $\epsilon_{\vn{x}}\epsilon_{\vn{y}} J_{{\vn{x}},{\vn{y}}}$ are just as probable as the original ones, the quenched mean value $\overline{\langle O(\{s_{\vn x}\})\rangle}$ is identical to that of its gauge average $\sum_{\{\epsilon_{\vn x}=\pm 1\}} \overline{\langle O(\{\epsilon _{\vn x} s_{\vn x}\})\rangle}/2^{L^D}\,,$ which typically is an uninteresting constant value. We show in \sref{SECT:OBSERVABLES} how to overcome this problem. We also remark that the Hamiltonian (\ref{EA-H}) has a global $\mathbf{Z}_2$ symmetry (if all spins are simultaneously reversed, $s_{\vn{x}}\to -s_{\vn{x}}$, the energy is unchanged), corresponding to time-reversal symmetry. This symmetry is spontaneously broken in three dimensions upon lowering the temperature at the SG transition at $T_\mathrm{c}=1.109(10)$~\cite{hasenbusch:08,hasenbusch:08b}. \subsection{The spin overlap}\label{SECT:OBSERVABLES} We need observables that remain invariant under the transformation (\ref{GAUGE-TRANSF}). The Hamiltonian (\ref{EA-H}) provides, of course, a first example.
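The gauge invariance (\ref{GAUGE-TRANSF}) is easy to verify numerically. The following NumPy sketch is a minimal illustration only (it is not part of the simulation code; the helper name \texttt{ea\_energy} and the $L=4$ lattice size are our own choices): it draws one disorder sample with $\pm 1$ couplings on a periodic cubic lattice and checks that the energy (\ref{EA-H}) is unchanged by a random gauge transformation, and also by a global spin flip.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 4

def ea_energy(s, J):
    """Edwards-Anderson energy H = -sum_<xy> J_xy s_x s_y on a periodic cubic lattice.
    J[d] holds the coupling on the link from site x to x + e_d (unit vector along axis d)."""
    E = 0.0
    for d in range(3):
        # np.roll aligns each spin with its forward neighbour along axis d (periodic b.c.)
        E -= np.sum(J[d] * s * np.roll(s, -1, axis=d))
    return E

# One sample: quenched random +-1 couplings and a random spin configuration.
J = rng.choice([-1, 1], size=(3, L, L, L))
s = rng.choice([-1, 1], size=(L, L, L))

# Gauge transformation: s_x -> eps_x s_x, J_xy -> eps_x eps_y J_xy.
eps = rng.choice([-1, 1], size=(L, L, L))
s_g = eps * s
J_g = np.array([J[d] * eps * np.roll(eps, -1, axis=d) for d in range(3)])

assert ea_energy(s, J) == ea_energy(s_g, J_g)  # gauge invariance
assert ea_energy(-s, J) == ea_energy(s, J)     # global Z_2 (time-reversal) symmetry
```

Because $\epsilon_{\vn{x}}^2=1$, each link term $J_{\vn{x},\vn{y}}s_{\vn{x}}s_{\vn{y}}$ is reproduced exactly, so the two energies agree to machine precision.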
To make further progress we consider {\em real} replicas $\{s_{\vn{x}}^{(1)}\},\{s_{\vn{x}}^{(2)}\}$, copies of the system that evolve under the same set of couplings $\{ J_{{\vn{x}},{\vn{y}}}\}$ but are otherwise statistically uncorrelated.\footnote{For the thermal average of any observable depending on a {\em single} spin configuration, $O(\{s_{\vn{x}}^{(1)}\})$, we have $\bigl\langle O(\{s_{\vn{x}}^{(1)}\})\bigr\rangle^2=\bigl\langle O(\{s_{\vn{x}}^{(1)}\})\, O(\{s_{\vn{x}}^{(2)}\})\bigr\rangle$.} Using them we form the {\em overlap field}: \begin{equation} q_{\vn{x}}= s_{\vn{x}}^{(1)} s_{\vn{x}}^{(2)}\,,\label{Q-FIELD-DEF} \end{equation} which is obviously invariant under (\ref{GAUGE-TRANSF}). The Edwards-Anderson order parameter, the {\em spin overlap}, is the spatial average of the overlap field: \begin{equation} q=\frac{1}{V}\sum_{\vn{x}} q_{\vn{x}}\,.\label{DEF:Q} \end{equation} In particular, it yields the (non-connected) spin-glass susceptibility \begin{equation} \chi_\mathrm{NC}(T)= V \overline{\langle q^2\rangle}\,,\label{DEF:CHI} \end{equation} which diverges at $T_\mathrm{c}$ with the critical exponent $\gamma$. For all $T<T_\mathrm{c}$, one expects $\chi_\mathrm{NC}={\cal O}(V)\,$. We shall also consider the Binder ratio \begin{equation}\label{eq:Binder} B(T)=\frac{\overline{\langle q^4\rangle}}{\overline{\langle q^2\rangle}^2}\,.\label{DEF:B} \end{equation} In particular, for all $T>T_\mathrm{c}$, the fluctuations of $q$ are expected to be Gaussian in the large-$L$ limit, hence $\lim_{L\to\infty} B=3$, $(T>T_\mathrm{c})$. Its behaviour in the low-temperature phase is controversial. In a {\em disguised ferromagnet} picture one expects $B$ to approach $1$ in the limit of large lattices. On the other hand, for an RSB system one expects $1<B<3$ in the SG phase ($T<T_\mathrm{c}$).
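The two limiting values of the Binder ratio quoted above can be illustrated with synthetic overlap samples (this sketch uses made-up data, not simulation measurements, and the function name \texttt{binder} is our own): Gaussian-distributed overlaps give $B\approx 3$, while a symmetric two-delta distribution at $\pm q_\mathrm{EA}$, as in the disguised-ferromagnet picture, gives $B=1$ exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
V = 16**3  # number of sites of a hypothetical L = 16 lattice

def binder(q_samples):
    """Binder ratio B = <q^4> / <q^2>^2 estimated from overlap measurements."""
    q2 = np.mean(q_samples**2)
    q4 = np.mean(q_samples**4)
    return q4 / q2**2

# Paramagnetic phase: q is Gaussian with variance chi_NC / V, so B -> 3.
q_para = rng.normal(0.0, 1.0 / np.sqrt(V), size=200_000)
assert abs(binder(q_para) - 3.0) < 0.1

# Two-delta pdf at +- q_EA (here q_EA = 0.7) with equal weight: B = 1 exactly.
q_drop = 0.7 * rng.choice([-1, 1], size=200_000)
assert abs(binder(q_drop) - 1.0) < 1e-9
```

For the two-delta distribution $|q|$ is constant, so $\overline{\langle q^4\rangle}=\overline{\langle q^2\rangle}^2$ identically, independent of $q_\mathrm{EA}$.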
We also recall that one may consider the overlap computed in small boxes, in order to avoid the effect of the interphases (physical results are equivalent to those obtained with the standard overlap~\cite{marinari:98c}). A great deal of attention will be devoted to the probability density function (pdf) of the overlap \begin{equation} \tilde P(q)=\overline{\biggl\langle\delta\Bigl(q - \frac{1}{V}\sum_{\vn{x}} q_{\vn{x}}\Bigr)\biggr\rangle}\,.\label{DEF:PQ-PEINE} \end{equation} Note that, in a finite system, the pdf is not smooth, but composed of $V+1$ Dirac deltas at $q=-1,-\frac{V-2}{V},\ldots,\frac{V-2}{V},1$. We solve this problem by convolving the comb-like pdf (\ref{DEF:PQ-PEINE}) with a Gaussian of width $1/\sqrt{V}$, ${\cal G}_V(x)=\sqrt{\frac{V}{2\pi}}\, \exp[-V \frac{x^2}{2}]\,$: \begin{equation} P(q=c) =\int_{-\infty}^{\infty} \mathrm{d}q'\ \tilde P(q')\, {\cal G}_V(c-q')=\overline{\Bigl\langle\, {\cal G}_V\bigl (c-\frac{1}{V}\sum_{\vn{x}} q_{\vn{x}}\bigr)\,\Bigr\rangle}\label{DEF:PQ-SMOOTH}\,. \end{equation} In this way, we effectively add the contribution of ${\cal O}(\sqrt{V})$ microscopic values of $q$, belonging to an interval of width $\sim 1/\sqrt{V}$~\cite{fernandez:09}. Note, however, that eqs.~(\ref{DEF:CHI},\ref{DEF:B}) are computed from moments of $\tilde P(q)$, rather than of $P(q)$. The Edwards-Anderson order parameter $q_\mathrm{EA}$ vanishes for all $T\geq T_\mathrm{c}$. Below $T_\mathrm{c}$, in a droplet system, $P(q)$ collapses in the large-$L$ limit onto a pair of Dirac delta functions of equal weight, centred at $q=\pm q_\mathrm{EA}$. In an RSB system, $P(q)$ also contains a pair of delta functions at $\pm q_\mathrm{EA}$, but it has a continuous piece as well, non-vanishing for every $q$ such that $-q_\mathrm{EA}<q<q_\mathrm{EA}$. This is the origin of the differences in the predictions that the two theories make for $B$ in the low-temperature phase.
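The smoothing procedure of eq.~(\ref{DEF:PQ-SMOOTH}) amounts to averaging the Gaussian kernel ${\cal G}_V(c-q_i)$ over the measured overlaps $q_i$. A minimal sketch (with synthetic overlaps drawn on the allowed comb of values; the name \texttt{smoothed\_pq} is our own) checks that the resulting smoothed pdf is properly normalised:

```python
import numpy as np

rng = np.random.default_rng(2)
V = 8**3  # number of sites of a hypothetical L = 8 lattice

def smoothed_pq(c, q_samples, V):
    """P(q = c): the comb of deltas convolved with a Gaussian of width 1/sqrt(V),
    i.e. the sample average of G_V(c - q_i)."""
    w = np.sqrt(V / (2.0 * np.pi))
    return np.mean(w * np.exp(-0.5 * V * (c - q_samples)**2))

# Synthetic overlaps on the allowed comb q = -1, -1 + 2/V, ..., 1
# (each of the V overlap-field values is an independent +-1 here).
q_samples = (2 * rng.binomial(V, 0.5, size=50_000) - V) / V

# The smoothed pdf should integrate to one (simple Riemann sum on a fine grid).
grid = np.linspace(-1.5, 1.5, 2001)
pdf = np.array([smoothed_pq(c, q_samples, V) for c in grid])
assert abs(pdf.sum() * (grid[1] - grid[0]) - 1.0) < 1e-2
```

Each kernel integrates to one, so the normalisation of $P(q)$ is inherited directly from that of the comb $\tilde P(q)$.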
We will also find it useful to consider {\em conditional} expectation values at fixed $q$. Let $O$ be an arbitrary function of the spins. We define its conditional expectation \begin{equation} \mathrm{E}(O|q\!=\!c)=\overline{\Biggl\langle\, O\ {\cal G}_V\biggl(c-\frac{1}{V}\sum_{\vn{x}} q_{\vn{x}}\biggr)\,\Biggr\rangle}\Biggr/ \overline{\Biggl\langle\, {\cal G}_V\biggl(c-\frac{1}{V}\sum_{\vn{x}} q_{\vn{x}}\biggr)\,\Biggr\rangle}\,.\label{DEF:q-PROMEDIO} \end{equation} Of course, one may easily recover standard expectation values from $\mathrm{E}(O|q)$: \begin{equation} \overline{\langle O\rangle}=\int_{-\infty}^{\infty}\mathrm{d}q\ P(q)\,\mathrm{E}(O|q)\,.\label{EC:RECONSTRUCCION} \end{equation} Strictly speaking, the integration limits should be $\pm\infty$. However, if the integral is truncated to $-1<q<1$, the error is exponentially small in $L^{D/2}$ (yet, for $L\!=\!8$ and $12$ we had to extend the limits beyond $\pm 1$). We can also define the conditional variance \begin{equation}\label{eq:var-q} \mathrm{Var}(O|q=c) = \mathrm{E}(O^2 | q=c) - \mathrm{E}(O|q=c)^2, \end{equation} for which we have the identity \begin{equation}\label{eq:var-q-anchura} \overline{\langle O^2\rangle} - \overline{\langle O\rangle}^2 = \int_{-\infty}^{\infty} \rmd q \ P(q) \left[ \mathrm{Var}(O|q) + \bigl( \mathrm{E}(O|q) - \overline{\langle O\rangle} \bigr)^2\right]. \end{equation} \subsection{Spatial correlation functions}\label{SECT:DEF-CORR} The overlap correlation function is \begin{equation}\label{eq:C4} C_4(\vn{r})= \frac{1}{V}\sum_{\vn{x}}\ \overline{\langle q_{\vn{x}}\, q_{\vn{x}+\vn{r}}\rangle}\,. \end{equation} $C_4(\vn{r})$ decays to zero at large $\vn{r}$ only for $T>T_\mathrm{c}$. Thus, we have also considered conditional correlation functions, recall eq.~(\ref{DEF:q-PROMEDIO}): \begin{equation}\label{eq:C4-q} C_4(\vn{r}|q)=\mathrm{E}\left(\left. \frac{1}{V}\sum_{\vn{x}}\,q_{\vn{x}}q_{\vn{x}+\vn{r}}\right|q\right)\,.
\end{equation} Eq.~(\ref{EC:RECONSTRUCCION}) allows us to recover $C_4(\vn{r})$ from $C_4(\vn{r}|q)$. The two main theoretical pictures for the SG phase, the droplet and RSB pictures, differ dramatically in their predictions for $C_4(\vn{r}|q)$. Let us discuss them in detail: \begin{itemize} \item In the RSB picture, the {\em connected} correlation functions tend to zero at large $\vn{r}$. For all $q\in[-q_\mathrm{EA},q_\mathrm{EA}]$ we expect the asymptotic behaviour \begin{equation} C_4(\vn{r}|q)\sim q^2 + \frac{A_q}{r^{\theta(q)}}+\ldots\,,\label{EQ:SCALINGC4Q-LARGE-L} \end{equation} where the dots stand for scaling corrections, subleading in the limit of large $r$. The exponent $\theta(q)$ in eq.~(\ref{EQ:SCALINGC4Q-LARGE-L}) has been computed for $D$ larger than the upper critical dimension $D_\mathrm{u}=6$:~\cite{dedominicis:98,dedominicis:99} \begin{eqnarray} \theta(q=0)&=&D-4\,,\\ \theta(0<|q|<q_\mathrm{EA})&=&D-3\,,\\ \theta(|q|=q_\mathrm{EA})&=&D-2\,. \end{eqnarray} These mean-field results for $\theta(q)$ become inconsistent for $D<4$ [the correlations should {\em decrease} for large $r$, implying $\theta(q)>0$, recall eq.~(\ref{EQ:SCALINGC4Q-LARGE-L})]. An expansion in $\epsilon=6-D$ suggests that $\theta(q)$ will renormalise~\cite{dedominicis:06}. Note as well that, at least for large $D$, $\theta(q)$ is discontinuous at $q=0$. However, we remark that there are no compelling theoretical arguments supporting the discontinuity of $\theta(q)$ in $D=3$. Indeed, recent numerical studies found no evidence for it~\cite{contucci:09,janus:10b}. We finally recall a non-equilibrium computation~\cite{janus:09b} yielding in $D\!=\!3$:\footnote{We may mention as well three conjectures: $\theta(0)=(D-2+\eta)/2$~\cite{dedominicis:06} (which, combined with the results of~\cite{hasenbusch:08b}, yields $\theta(0)=0.313(5)$), $\theta(0)=1/\hat\nu$ ($\hat\nu$ is the exponent that rules finite-size effects at $q_\mathrm{EA}$) and $\theta(0)+1/\hat\nu=\theta(q_\mathrm{EA})$. 
There is also an exact scaling relation $\theta(q_\mathrm{EA}) = 2/\hat\nu$~\cite{janus:10b}.} \begin{equation} \theta(q=0)=0.38(2)\,. \end{equation} \item Quite the opposite to the RSB case, in a system well described by a droplet model and for $|q|< q_\mathrm{EA}$, $C_4(\vn{r}|q)$ does not tend to $q^2$ for large $r$ (we are referring, of course, to the regime $1\ll r\ll L$). In fact, spin configurations with $|q|< q_\mathrm{EA}$ are spatially heterogeneous mixtures of the two pure phases. One should find {\em bubbles} or slabs of linear size $\sim L$ of one of the two phases, say $q=+q_\mathrm{EA}$, surrounded by a matrix of the complementary state (see e.g.~\cite{martin-mayor:07,macdowell:06}). It follows that \begin{equation}\label{eq:C4-droplets} C_4(\vn{r}|q)= q_\mathrm{EA}^2f_{\vn{r}/r}(r/L)\,,\quad\mathrm{if}\quad |q|<q_\mathrm{EA}\ \ \mathrm{and}\ \ 1\ll r\ll L\,, \end{equation} ($f_{\vn{r}/r}(x)$ is a direction-dependent scaling function with $f_{\vn{r}/r}(0)=1$). Indeed, the probability that two spins at fixed distance $r$ belong to domains of opposite orientation is proportional to $r/L$ in the large-$L$ limit. On the other hand, precisely at $|q|=q_\mathrm{EA}$, but only there, droplet theory predicts that the connected correlation function vanishes for asymptotically large $r$. A behaviour of the same form as eq.~(\ref{EQ:SCALINGC4Q-LARGE-L}) was predicted~\cite{bray:87}. The exponent $\theta(q_\mathrm{EA})$ is identical to the scaling exponent of the coupling strength, denoted as $\theta$ or $y$ in the literature, with a value $\theta(q_\mathrm{EA})\sim0.2$~\cite{bray:87}. \end{itemize} \subsection{Non-equilibrium correlation functions}\label{SECT:DEF-CORR-DINAMICA} Let us recall that non-equilibrium counterparts of $q$ and $C_4(\vn{r}|q)$ exist. We shall not be computing them here, but we {\em will} compare previous computations with our equilibrium results. Hence, we briefly recall the definitions~\cite{janus:09b}. 
One considers pairs of times $t_\mathrm{w}$ and $t+t_\mathrm{w}$, with $t,t_\mathrm{w}>0$, after a sudden quench from a fully disordered state to the working temperature $T$. The analogue of the spin overlap is \begin{equation} C(t,t_\mathrm{w})=\frac{1}{V}\sum_{\vn{x}}\ \overline{\langle s_{\vn{x}}(t_\mathrm{w}) s_{\vn{x}}(t+t_\mathrm{w})\rangle}\,. \end{equation} The non-equilibrium spatial correlation function is \begin{equation} C_{2+2}(\vn{r};t,t_\mathrm{w})= \frac{1}{V}\sum_{\vn{x}}\ \overline{\langle s_{\vn{x}}(t_\mathrm{w}) s_{\vn{x}}(t+t_\mathrm{w}) s_{\vn{x}+\vn{r}}(t_\mathrm{w}) s_{\vn{x}+\vn{r}}(t+t_\mathrm{w})\rangle}\,. \end{equation} At fixed $t_\mathrm{w}$, $C(t,t_\mathrm{w})$ monotonically decreases from $C=1$ at $t=0$ to $C=0$ at $t\to\infty$. Hence, one may consider $C$, rather than $t$, as an independent variable. We will compare the non-equilibrium $C_{2+2}(\vn{r};t,t_\mathrm{w})$, computed in very large lattices~\cite{janus:08b,janus:09b}, with our equilibrium results for $C_4\left(\vn{r}|q=C(t,t_\mathrm{w})\right)$. To do so, we shall need to relate the finite {\em time} $t_\mathrm{w}$ (on very large lattices) to the finite {\em size} $L$. As we shall see in \sref{SECT:EQUILIBRIUM-DYNAMICS}, the correspondence between the non-equilibrium and the equilibrium correlation functions is amazingly accurate. \subsection{The link overlap}\label{SECT:DEF-QLINK} The link overlap is defined as\footnote{Clearly, $\overline{\langle Q_\mathrm{link} \rangle}=C_4(1,0,0)$.} \begin{equation} Q_\mathrm{link}=\frac{1}{DV}\sum_{\Vert\vn{x}-\vn{y}\Vert=1} q_\vn{x}q_\vn{y}\,.\label{DEF:QLINK} \end{equation} It is more sensitive than the spin overlap to the differences between a system described by droplet theory and an RSB system~\cite{marinari:99}. 
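For concreteness, both overlaps are cheap to evaluate from two replica configurations. A Python sketch (illustrative, with hypothetical $\pm 1$ spin arrays; periodic boundaries are implemented via \texttt{np.roll}):

```python
import numpy as np

def overlaps(s1, s2):
    """Spin overlap q and link overlap Q_link for two replicas on a
    D=3 periodic lattice of side L (spins are +/-1 integers)."""
    qx = s1 * s2                    # overlap field q_x
    q = qx.mean()                   # q = (1/V) sum_x q_x
    # average over the D=3 forward links (each link counted once)
    qlink = np.mean([(qx * np.roll(qx, -1, axis=d)).mean() for d in range(3)])
    return q, qlink

rng = np.random.default_rng(1)
L = 8
s1 = rng.choice([-1, 1], size=(L, L, L))
s2 = s1.copy()
q, qlink = overlaps(s1, s2)   # identical replicas: q = 1 and Q_link = 1
```

Flipping a compact domain in one replica changes \texttt{q} by twice the domain's volume fraction, but changes \texttt{qlink} only through the links crossing the domain's surface, which is the point made below.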
Since it is invariant under time-reversal symmetry (the global reversal of every spin in either of our two real replicas $s_{\vn{x}}^{(i)}\longrightarrow -s_{\vn{x}}^{(i)}$), its expectation value is non-vanishing, even in a finite system at high temperatures. Its pdf can be defined as we did with the spin overlap, recall eqs.~(\ref{DEF:PQ-PEINE},\ref{DEF:PQ-SMOOTH}). In fact, it has been proposed that the link overlap (rather than the spin overlap) should be considered as the fundamental quantity to describe the spin-glass phase below the upper critical dimension~\cite{contucci:05b,contucci:06}. There are both physical and mathematical reasons for this: \begin{itemize} \item On the physical side, $Q_\mathrm{link}$ provides an estimate of the volume of the domains' surfaces. Indeed, consider two configurations of the overlap field (\ref{Q-FIELD-DEF}) differing only in that a {\em domain} of size $\sim L$ has flipped. This will result in a large change of the spin overlap, $q$. Yet, the only changing contribution to $Q_\mathrm{link}$ is that of the lattice links crossed by the domain's surface. In a droplet theory, where the surface-to-volume ratio of the domains vanishes in the large-$L$ limit, one does not expect any $q$ dependence of the conditional expectation $\mathrm{E}(Q_\mathrm{link}|q)$, not even in the $|q|<q_\mathrm{EA}$ region. Hence, the pdf for $Q_\mathrm{link}$ is expected to collapse to a single delta function in the large-$L$ limit. The intermediate TNT picture coincides with the droplet theory in this respect. For an RSB system, the domains' surfaces are space-filling. Hence, when $q$ suffers a variation of order 1, the variation of $Q_\mathrm{link}$ will be of order 1, too. Accordingly, a non-trivial pdf is expected for $Q_\mathrm{link}$, in the limit of large systems. 
\item On the mathematical side, theorems have been proven for the link overlap~\cite{contucci:03,contucci:05,contucci:07}, valid for three-dimensional systems, which are the exact correlate of mean-field results for the spin overlap.\footnote{The mathematical proof known so far is valid only for Gaussian-distributed couplings in eq.(\ref{EA-H}). However, physical intuition strongly suggests that the theorems are valid in more general cases such as our bimodal couplings.} Specifically, the replica equivalence property holds for the link overlap in three dimensional systems. Replica equivalence~\cite{parisi:98,parisi:00} is a property of the Parisi matrix which yields an infinite hierarchy of identities relating linear combinations of moments of $Q_\mathrm{link}$ in the large-$L$ limit. A specific example that we shall be using here is \begin{equation} \lim_{L\to\infty} \overline{\langle Q_\mathrm{link}\rangle^2}=\lim_{L\to\infty}\left[\,\frac{2}{3} \,\overline{\langle Q_\mathrm{link}\rangle}^2 \ +\ \frac{1}{3}\, \overline{\langle Q_\mathrm{link}^2\rangle} \,\right]\,,\label{EQ:REPLICA-EQUIVALENCE} \end{equation} (at finite $L$, the equality is not expected to hold). This is just a particular case of the family of identities valid for all $k,s=0,1,2,...$ \begin{equation} \lim_{L\to\infty} \overline{\langle Q_\mathrm{link}^k\rangle \langle Q_\mathrm{link}^s\rangle}=\lim_{L\to\infty}\left[\,\frac{2}{3}\, \overline{\langle Q_\mathrm{link}^k\rangle}\; \overline{\langle Q_\mathrm{link}^s\rangle} \ +\ \frac{1}{3}\, \overline{\langle Q_\mathrm{link}^{k+s}\rangle} \,\right]\,,\label{EQ:REPLICA-EQUIVALENCE-MAS-GENERAL} \end{equation} (replica equivalence implies infinitely many relations such as this). It is amusing that the mathematical proof for the three-dimensional theorem does {\em not} use Parisi matrices, relying instead on stochastic stability. Let us stress that ultrametricity implies replica equivalence, but the converse statement (i.e. 
replica equivalence implies ultrametricity) does not hold, in general.\footnote{For the sake of completeness, let us recall that replica and overlap equivalence, combined, imply ultrametricity~\cite{parisi:00}. In addition, replica equivalence and the Ansatz of generic ultrametricity imply ultrametricity just as in the SK model~\cite{iniguez:96}. Finally, we point out that replica equivalence is tantamount to stochastic stability and a self-averageness property.} \end{itemize} The distinction between {\em spin} overlap and {\em link} overlap seems somewhat artificial from the point of view of the mean-field approximation. In fact, in the Sherrington-Kirkpatrick model one easily shows that $Q_\mathrm{link}=q^2$. For finite-connectivity mean-field models, non-equilibrium numerical computations yield $Q_\mathrm{link}=a q^2+b$~\cite{fernandez:09f} ($a$ and $b$ are numerical constants). In $D\!=\!3$ there are also clear indications that fixing the spin overlap also fixes the link overlap: the conditional variance $\mathrm{Var}(Q_\mathrm{link} | q)$, eq.~(\ref{eq:var-q}), tends to zero for large lattices, see~\cite{contucci:06} and \fref{fig:var-qlink}, below. Furthermore, in a TNT or droplet system, the derivative $\mathrm{d}\,\mathrm{E}(Q_\mathrm{link}|q)/\mathrm{d} q^2$ should vanish in the large-$L$ limit for all $|q|<q_\mathrm{EA}$ (since there is a single valid value for $Q_\mathrm{link}$, there can be no $q^2$ dependence left). Numerical simulations, both in equilibrium~\cite{contucci:06,contucci:07b} and out of equilibrium~\cite{janus:08b,jimenez:03}, have so far found a non-vanishing derivative, which nevertheless decreases for larger $L$. The extrapolation to $L=\infty$ is still an open issue, see \sref{SECT:OVERLAP-EQUIVALENCE}. We wish to emphasise that $Q_\mathrm{link}$ reveals that the spin-glass phase is a critical state in which minimal perturbations can produce enormous changes. 
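The conditional averages and variances used above, eqs.~(\ref{DEF:q-PROMEDIO}) and (\ref{eq:var-q}), reduce to Gaussian-weighted means. A sketch (illustrative; \texttt{q\_data} and \texttt{O\_data} are hypothetical arrays of simultaneous measurements of $q$ and of an observable such as $Q_\mathrm{link}$):

```python
import numpy as np

def conditional_stats(q_data, O_data, V, c):
    """Kernel estimates of E(O|q=c) and Var(O|q=c): weight each
    measurement by G_V(c - q); the kernel normalisation cancels."""
    q = np.asarray(q_data)
    O = np.asarray(O_data)
    w = np.exp(-V * (c - q) ** 2 / 2)   # Gaussian weights of width 1/sqrt(V)
    E1 = np.sum(w * O) / np.sum(w)      # E(O|q=c)
    E2 = np.sum(w * O * O) / np.sum(w)  # E(O^2|q=c)
    return E1, E2 - E1 ** 2             # conditional mean and variance

# toy check with O = q^2 plus small noise (cf. the SK relation Q_link = q^2)
rng = np.random.default_rng(2)
q = rng.uniform(-1, 1, 20000)
O = q ** 2 + rng.normal(0, 0.01, 20000)
E, Var = conditional_stats(q, O, V=512, c=0.5)   # E close to c^2 = 0.25
```

Only the ${\cal O}(\sqrt{V})$ measurements within $\sim 1/\sqrt{V}$ of $c$ contribute appreciably, which is exactly the smoothing scale of the pdf itself.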
In fact, let us couple two otherwise independent copies of the system through $Q_\mathrm{link}$, \begin{equation} {\cal H}=-\sum_{\langle \vn{x}\vn{y}\rangle }\ J_{\vn{x},\vn{y}}\, (s^{(1)}_{\vn{x}}\, s^{(1)}_{\vn{y}}+s^{(2)}_{\vn{x}}\, s^{(2)}_{\vn{y}})\ -\ T \epsilon V Q_\mathrm{link}\,.\label{EQ:QLINK-COUPLING} \end{equation} In a system described by droplet theory, one expects the link susceptibility \begin{equation} \chi_\mathrm{link}\equiv\left.\frac{\partial \overline{\langle Q_\mathrm{link} \rangle}}{\partial\epsilon}\right|_{\epsilon=0}= V\left[\overline{\langle Q_\mathrm{link}^2 \rangle\ -\ \langle Q_\mathrm{link} \rangle^2}\right]\,,\label{DEF:CHI-LINK} \end{equation} to remain finite in the large-$L$ limit, for all $T<T_\mathrm{c}$ (precisely at $T_\mathrm{c}$, a critical divergence might arise). Hence $\overline{\langle Q_\mathrm{link} \rangle}_\epsilon= \overline{\langle Q_\mathrm{link} \rangle}_{\epsilon=0}+ \epsilon \chi_\mathrm{link}+\ldots\,$ in a droplet or TNT system. On the other hand, in the mean-field approximation, one finds for RSB systems a discontinuity with $\epsilon$~\cite{franz:92}: \begin{eqnarray} \overline{\langle Q_\mathrm{link}\rangle}_{\epsilon>0}&=& \mathrm{E}(Q_\mathrm{link}|q=q_\mathrm{EA})\ +\ a_+ \sqrt{\epsilon}+\ldots\,,\\ \overline{\langle Q_\mathrm{link}\rangle}_{\epsilon<0}&=&\ \mathrm{E}(Q_\mathrm{link}|q=0)- \ a_- \sqrt{-\epsilon}+\ldots\,. \end{eqnarray} Actually, the mean-field computation was carried out for the {\em spin} overlap, yet, in mean-field models, $Q_\mathrm{link}$ is essentially $q^2$, hence we can borrow their result. We should emphasise that the situation is even more critical than for standard first-order phase transitions: $\chi_\mathrm{link}(\epsilon)$ diverges when $\epsilon\to 0\,$ (just as if the specific heat of liquid water approaching its boiling temperature showed a divergence!). 
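In practice, $\chi_\mathrm{link}$ of eq.~(\ref{DEF:CHI-LINK}) is estimated from per-sample thermal moments. A minimal sketch (the arrays are hypothetical, one entry per disorder sample):

```python
import numpy as np

def chi_link(Q_mean, Q2_mean, V):
    """chi_link = V * disorder-average of (<Q_link^2> - <Q_link>^2).
    Q_mean[i] and Q2_mean[i] are the thermal averages <Q_link> and
    <Q_link^2> measured in disorder sample i."""
    Q_mean = np.asarray(Q_mean)
    Q2_mean = np.asarray(Q2_mean)
    return V * np.mean(Q2_mean - Q_mean ** 2)

# toy data: thermal fluctuations of Q_link of order 1/V (droplet-like),
# so chi_link stays of order 1 instead of growing like V
rng = np.random.default_rng(3)
V = 16 ** 3
Qm = rng.normal(0.5, 0.1, size=1000)
Q2m = Qm ** 2 + 1.0 / V
chi = chi_link(Qm, Q2m, V)   # ~ 1.0 by construction
```

An RSB scenario would instead keep $\langle Q_\mathrm{link}^2\rangle - \langle Q_\mathrm{link}\rangle^2$ of order 1, making \texttt{chi\_link} grow like $V=L^D$.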
Below the upper critical dimension, there has been very little investigation of $\chi_\mathrm{link}$ (see, however, ref.~\cite{marinari:99}). In fact, eq.~(\ref{EQ:REPLICA-EQUIVALENCE}) has interesting implications in this respect. Let us rewrite it in the equivalent form \begin{equation} \lim_{L\to\infty} \left[\,\overline{\langle Q_\mathrm{link}^2\rangle}\ -\ \overline{\langle Q_\mathrm{link}\rangle^2}\,\right]=\frac{2}{3} \, \lim_{L\to\infty}\left[\, \overline{\langle Q_\mathrm{link}^2\rangle} \ -\ \overline{\langle Q_\mathrm{link}\rangle}^2\, \right]\,.\label{REP-EQUIVALENCE-SENCILLA} \end{equation} In an RSB system, the right-hand side of eq.~(\ref{REP-EQUIVALENCE-SENCILLA}) is positive (since $Q_\mathrm{link}$ may take values on a finite interval). Yet, by eq.~(\ref{DEF:CHI-LINK}), the lhs of (\ref{REP-EQUIVALENCE-SENCILLA}) is nothing but the large-$L$ limit of $\chi_\mathrm{link}/L^D$. Hence, RSB implies $\chi_\mathrm{link}\sim L^D$, as expected for first-order phase transitions (see e.g.~\cite{amit:05}). We note that for droplet or TNT systems, eq.~(\ref{REP-EQUIVALENCE-SENCILLA}) is merely an empty $0\!=\!\frac{2}{3}\times 0$ statement, just as for RSB systems in their paramagnetic phase. Hence we have found it of interest to study the dimensionless ratio \begin{equation}\label{eq:R-link} R_{\mathrm{link}}=\frac{\overline{\langle Q_\mathrm{link}^2 \rangle\ -\ \langle Q_\mathrm{link} \rangle^2}}{\overline{\langle Q_\mathrm{link}^2 \rangle}\ -\ \overline{\langle Q_\mathrm{link} \rangle}^2}\,. \end{equation} Eq.~(\ref{REP-EQUIVALENCE-SENCILLA}) implies that, for an RSB system in its large-$L$ limit, $R_{\mathrm{link}}=\frac{2}{3}$ for all $T<T_\mathrm{c}$. For a droplet or TNT system any value $0\leq R_{\mathrm{link}}\leq 1$ is acceptable. In fact, the high-temperature expansion for the $D\!=\!3$ EA model tells us that, in the large-$L$ limit, $R_{\mathrm{link}}=1-{\cal O}(T^{-2})$. We finally recall that the Chayes et al. 
bound~\cite{chayes:86,maiorano:07} may seem to imply that $\chi_\mathrm{link}$ can diverge at most as $L^{D/2}$, rather than as $L^D$ as required by RSB. The way out of the paradox is a little technical.\footnote{Imagine generalising model (\ref{EA-H}) in the following sense: the coupling is $J_{\vn{x}\vn{y}}=+1$ with probability $p$ (and $J_{\vn{x}\vn{y}}=-1$ with probability $1-p$), so that our model is just the particular instance $p=0.5$. One may follow ref.~\cite{chayes:86} to show that $\partial\overline{\langle Q_\mathrm{link}\rangle}/\partial p$ diverges at most as $L^{D/2}$. However, the critical value of $\epsilon$ would still be $\epsilon=0$ for $p$ in a finite range around $p=0.5$ (this is the crucial point: in the standard argument~\cite{chayes:86,maiorano:07} one would require that the critical value of $\epsilon$ vary when $p$ moves away from $p=0.5$). Hence, the rate of divergence of $p$-derivatives does not convey information on the rate of divergence of $\epsilon$-derivatives.} \section{Numerical methods}\label{SECT:PT-THERM} We describe here our numerical simulations. The organisation of the simulations on Janus is presented in \sref{SECT:JANUS}. We explain our choice of parameters for the parallel tempering simulation in \sref{SECT:PT-PARAMETERS}. An absolutely crucial issue is that of thermalisation criteria, \sref{SECT:THERMALIZATION-CRITERIA}. We largely extend here the methods first introduced in ref.~\cite{fernandez:09b}, which allow us to distribute the computational resources on a rational basis, according to the difficulty in thermalising each particular sample. At variance with ref.~\cite{fernandez:09b}, which was restricted to the critical region, we are here probing the deep spin-glass phase, hence more demanding criteria need to be met. The statistical data analysis is described in \sref{SECT:MONTECARLO-EVALUATION}. Finally, in \sref{SECT:THERMALIZATION-TESTS} we describe some more traditional thermalisation tests. 
\subsection{The Janus computer}\label{SECT:JANUS} Our Monte Carlo simulations have been carried out on the Janus special-purpose machine. Information about Janus' hardware as well as some details of low-level programming can be found in \cite{janus:06,janus:08,janus:09}. Janus is built out of 256 computing cores (Virtex-4 LX200 FPGAs) arranged on 16 boards. With the code used for this paper, each core updates $3\times 10^{10}$ spins per second with a heat-bath algorithm. The 16 FPGAs on a board communicate with a host PC via a 17th on-board control FPGA. The controlling PC generates the couplings $\{J_{\vn{x}\vn{y}}\}$, initialises the Janus random number generators, and also provides the starting spin configurations. All the required data is transmitted to the FPGAs (one FPGA per {\em real} replica), which carry out both the Heat Bath (HB) updating and the Parallel Tempering (PT) temperature exchange. Due to the special architecture of Janus, the PT step is not costless, as we first need to compute the total energy at each temperature. We thus balance the computational cost of the two updates by performing several HB sweeps before a PT temperature swap is attempted. Fortunately, selecting a modest number of HB sweeps per PT update hardly affects the efficiency. After a suitable number of PT cycles, spin configurations of all replicas are copied to PC memory to take measurements. The measurement process on the PC is easily overlapped with the next simulation block on Janus, so that the PC is always ready for the next reading. During the simulation, we store on disk information about the PT dynamics (temperature random walk and acceptance rates), configuration energies, and measurements related to the overlap and link overlap fields. 
We also store full spin configurations every several measurement steps (usually a hundred) to be later used for {\it offline\/} measurements (see \sref{SECT:MONTECARLO-EVALUATION}) or as a checkpoint for continuing the simulation if needed. In a few specific cases (namely one $L=24$ sample and four $L=32$ samples) the time required to fulfil our thermalisation criteria was exceedingly long, more than six months. For these samples we have accelerated the simulation by increasing the level of parallelism. We have used a special low-level code that transfers the PT procedure to the control FPGA. This has allowed us to distribute the set of temperatures along several FPGAs on a board, speeding up the simulation accordingly. For the smaller lattices ($L\le 12$) we replace the communication with Janus by a call to a simulation routine on the PC. Although these simulations are much less demanding, we go down to very low temperatures. As a consequence, the total cost is not negligible and we have used a PC cluster to complete the simulations. \subsection{Choosing parameters for Parallel Tempering}\label{SECT:PT-PARAMETERS} The key point in a parallel-tempering \cite{hukushima:96,Marinari:98b} simulation consists in ensuring that each configuration spends enough time at high temperatures so that its memory can be erased. Since we intend to study the physics of the Edwards-Anderson spin glass at very low temperatures, our simulations are necessarily very long. Because of this, we do not need to reach temperatures as high as those used in critical point studies. We can perform a quantitative analysis using the known behaviour of the heat-bath dynamics above the critical point. Following \cite{ogielski:85}, the equilibrium autocorrelation time in the thermodynamic limit follows the critical power-law divergence \begin{equation} \tau_\mathrm{HB}(T)\sim(T-T_\mathrm{c})^{-z\nu}\,. 
\label{TAU_DE_T} \end{equation} For instance, for the maximum temperature used in our largest lattice ($L=32$), Ogielski found $\tau_\mathrm{HB}(T)\sim 10^5$ \cite{ogielski:85}. This is several orders of magnitude shorter than our shortest simulations (see \tref{tab:parameters}). The minimum temperature was chosen so that the whole simulation campaign took about 200 days of the whole Janus machine and so that $T_\mathrm{c} - T_\mathrm{min}\sim L^{-1/\nu}$. With 4000 samples for $L=16, 24$ and 1000 for $L=32$, this resulted in $T_\mathrm{min}=0.479, 0.625$ and 0.703, respectively. Smaller lattices, $L=8,12$, were simulated on conventional computers. In all cases, we simulated four independent real replicas per sample. As to the other parallel-tempering parameters, namely the number and distribution of intermediate temperatures and the frequency of the parallel tempering updates, the choice is more arbitrary. We dedicated several weeks of the machine to test several combinations trying, if not to optimise our decision, at least to avoid clearly bad choices. Specifically, we varied the number $N_T$ of temperatures keeping the acceptance of the parallel-tempering update between 7\% and 36\%. This corresponds to an increase of roughly a factor of two in $N_T$. Noticing that the computational effort is proportional to $N_T$, we found that the efficiency hardly changed, even for such a wide acceptance range. Eventually, we chose a compromise value of about 20\% in the acceptance, resulting in the parameters quoted on \tref{tab:parameters}. This both avoided unconventionally low acceptances and saved disk space. In contrast to conventional computers, Janus needs about as much time to do a parallel-tempering update as a heat-bath one. Therefore, while it is customary to perform both updates with the same frequency, after testing frequencies in the range 1--100 we have chosen to do a parallel-tempering update every 10 heat-bath ones. 
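For reference, the temperature-exchange step itself is the standard Metropolis swap between neighbouring temperatures. A minimal Python sketch (illustrative only, not the Janus FPGA code; \texttt{beta}, \texttt{E} and \texttt{perm} are hypothetical inverse temperatures, energies and a temperature-to-configuration assignment):

```python
import numpy as np

def pt_swap(beta, E, perm, rng):
    """One sweep of swap attempts. perm[k] is the configuration holding
    temperature k (beta[k] decreasing, i.e. cold to hot); E[i] is the
    energy of configuration i. Acceptance probability:
    min(1, exp[(beta_k - beta_{k+1}) (E_i - E_j)])."""
    for k in range(len(beta) - 1):
        i, j = perm[k], perm[k + 1]
        delta = (beta[k] - beta[k + 1]) * (E[i] - E[j])
        if delta >= 0 or rng.random() < np.exp(delta):
            perm[k], perm[k + 1] = j, i   # exchange the two temperatures
    return perm

# the colder copy has the higher energy here, so the swap is surely accepted
perm = pt_swap(beta=[2.0, 1.0], E=[5.0, 1.0], perm=[0, 1],
               rng=np.random.default_rng(0))
```

Only the (cheap) arithmetic above is exchanged; the spin configurations themselves never move, which is what makes the update inexpensive on conventional hardware.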
In fact, even if the time to do a parallel-tempering step were negligible, we have checked that doing a single heat-bath sweep between parallel-tempering updates would produce a practically immeasurable gain. We note, finally, that this issue was investigated as well in ref.~\cite{bittner:08} (in that work clear conclusions were not reached, as far as the $D\!=\!3$ Edwards-Anderson model at low temperatures and large $L$ is concerned). \begin{table} \centering \caption{Parameters of our parallel-tempering simulations. In all cases we have simulated four independent real replicas per sample. The $N_T$ temperatures are uniformly distributed between $T_\mathrm{min}$ and $T_\mathrm{max}$ (except for the runs of the first row, which have all the temperatures of the second one plus $T=0.150$ and $T=0.340$). In this table $N_\mathrm{mes}$ is the number of Monte Carlo Steps between measurements (one MCS consists of 10 heat-bath updates and 1 parallel-tempering update). The simulation length was adapted to the thermalisation time of each sample (see section~\ref{SECT:THERMALIZATION-CRITERIA}). The table shows the minimum, maximum and median simulation times ($N_\mathrm{HB}$) for each lattice, in heat-bath steps. Lattice sizes $L=8,12$ were simulated on conventional PCs, while sizes $L=16,24,32$ were simulated on Janus. Whenever we have two runs with different $T_\mathrm{min}$ for the same $L$ the sets of simulated samples are the same for both. The total spin updates for all lattice sizes sum to $1.1\times 10^{20}$.} \label{tab:parameters} \begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}}ccccrlllcc} \br $L$ & $T_{\mathrm{min}}$ & $T_{\mathrm{max}}$ & $N_T$ & $N_\mathrm{mes}$ & $N_\mathrm{HB}^\mathrm{min}$ & \multicolumn{1}{c}{ $N_\mathrm{HB}^\mathrm{max}$}& \multicolumn{1}{c}{ $N_\mathrm{HB}^\mathrm{med}$} & $N_\mathrm{s}$ & System\\ \mr 8 & 0.150 & 1.575 & 10 & $10^3$ & $5.0\!\times\! 
10^6$ & $8.30\!\times\!10^8$ & $7.82\!\times\!10^6$ & 4000 & PC \\ 8 & 0.245 & 1.575 & 8 & $10^3$ & $1.0\!\times\! 10^6$ & $6.48\!\times\!10^8$ & $2.30\!\times\!10^6$ & 4000 & PC \\ 12 & 0.414 & 1.575 & 12 & $5\!\times\! 10^3$ & $1.0\!\times\! 10^7$ & $1.53\!\times\!10^{10}$ & $3.13\!\times\!10^7$ & 4000 & PC \\ 16 & 0.479 & 1.575 & 16 & $10^5$ & $4.0\!\times\! 10^8$ & $2.79\!\times\!10^{11}$ & $9.71\!\times\!10^8$ & 4000 & Janus \\ 24 & 0.625 & 1.600 & 28 & $10^5$ & $1.0\!\times\! 10^9$ & $1.81\!\times\!10^{12}$ & $4.02\!\times\!10^9$ & 4000 & Janus \\ 32 & 0.703 & 1.549 & 34 & $2\!\times\!10^5$ & $4.0\!\times\! 10^9$ & $7.68\!\times\!10^{11}$ & $1.90\!\times\!10^{10}$ & 1000 & Janus \\ 32 & 0.985 & 1.574 & 24 & $2\!\times\!10^5$ & $1.0\!\times\! 10^8$ & $4.40\!\times\!10^9$ & $1.16\!\times\!10^8$ & 1000 & Janus \\ \br \end{tabular*} \end{table} \subsection{Thermalisation criteria}\label{SECT:THERMALIZATION-CRITERIA} In order to optimise the amount of information one can obtain given a computational budget, the length of the simulations must be carefully selected. It is well known that sample-to-sample fluctuations are the main source of statistical error. Thus, we want to simulate each sample for the shortest time that ensures thermalisation. The most common robust thermalisation check consists in the determination of the autocorrelation times for physical observables~\cite{sokal:97}. However, in order for this determination to be precise one needs a much longer simulation than needed to thermalise the system (e.g., while ten exponential autocorrelation times can be enough to thermalise the system, we need a simulation at least ten times longer to determine this autocorrelation time). Notice that this is not an issue in ordered systems, where one employs very long simulations in order to reduce statistical errors. 
The typical practical recipe to assess thermalisation for disordered systems consists in studying the time evolution of the disorder-averaged physical observables. In particular, the so-called $\log_2$-binning procedure uses the evolution of the time averages along the intervals $I_n = (2^{-(n+1)}N_\mathrm{HB}, 2^{-n} N_\mathrm{HB}]$. The system is considered to be thermalised if the first few intervals are compatible. This procedure is not optimal, because the thermalisation time is wildly dependent on the sample. Thus, a simulation time long enough to thermalise the slowest samples will be excessive for most of the rest. Perhaps even more frightening, the average over samples may well hide that a few samples, the very worst ones, are still quite far from equilibrium. Fortunately, the use of parallel tempering presents us with the possibility of using the dynamics of the temperature random walk to learn about the thermalisation scale for each sample. In fact, in order to ensure thermalisation each of the participating configurations must cover the whole temperature range. Here, expanding on a method first used in~\cite{fernandez:09b}, we have promoted this idea to a fully quantitative and physically meaningful level. Let us consider the ordered set of $N_{T}$ temperatures $\{T_1,\ldots,T_{N_T}\}$ and let us suppose that $T_{i_\mathrm{c}-1}<T_\mathrm{c}\leq T_{i_\mathrm{c}}$. In \fref{fig:historiabetas}---left we show an instance of the random walk of the temperature index, $i(t)\in\{ 1,2,\ldots, N_T\}$, performed by one of the $N_T$ copies of the system considered in the parallel tempering. The random walk is clearly not Markovian, as the system remembers for a long time that it belongs to the high (low) temperature phase. This effect is also demonstrated in \fref{fig:historiabetas}---right, where we plot the time spent above $T_\mathrm{c}$ as a function of the simulation time (mind the long plateaux). 
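The diagnostic of the right panel, the time spent above $T_\mathrm{c}$, follows directly from the stored series of temperature indices. A sketch (illustrative; \texttt{i\_t} is a hypothetical index series):

```python
import numpy as np

def fraction_above_tc(i_t, i_c):
    """Cumulative fraction of the simulation a configuration spent at
    temperature indices >= i_c (i.e. at T >= T_c)."""
    above = (np.asarray(i_t) >= i_c).astype(float)
    t = np.arange(1, len(above) + 1)
    return np.cumsum(above) / t

f = fraction_above_tc([1, 5, 20, 20, 3], i_c=17)
# f[-1] is the overall fraction: 2 of the 5 entries are above i_c, so 0.4
```

Long plateaux in this curve signal that the copy stayed trapped in one phase, exactly the non-Markovian behaviour discussed above.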
\begin{figure}[t] \centering \includegraphics[height=\linewidth,angle=270]{historia} \caption{We plot (left panel) the temperature index of a fixed configuration of an $L=32$ sample as a function of the number of HB sweeps. We plot one point every $5$ million HB sweeps. The critical temperature corresponds to $i_\mathrm{c}=17$. This specific sample has $\tau_\mathrm{exp}=1.75\times 10^{10}$ HB sweeps. In the right panel, we show the time that each configuration of the same sample spends in the paramagnetic phase.} \label{fig:historiabetas} \end{figure} To make these arguments quantitative, we shall use the standard tools of correlated time series~\cite{amit:05,sokal:97}. We need a mapping defined on the $1,\ldots, N_T$ range of temperature indices so that \begin{eqnarray} f(i) \geq 0, \qquad \forall i \geq i_\mathrm{c}, \label{eq:f1}\\ f(i) < 0, \qquad \forall i < i_\mathrm{c}, \label{eq:f2}\\ \sum_{i=1}^{N_T} f(i) = 0. \label{eq:suma-f} \end{eqnarray} It is also convenient that $f$ be monotonic. Because we have chosen the same number of temperatures above and below $T_\mathrm{c}$, a simple linear $f$ is suitable, but the method works with any function fulfilling the above conditions. For each of the participating configurations we consider the time evolution $i_t$ of the temperature index. We define the equilibrium autocorrelation function as \begin{equation} C(t) = \frac{1}{N_\mathrm{HB}-t_0-t}\sum_{t'=t_0}^{N_\mathrm{HB}-t} f(i_{t'}) f(i_{t'+t}) , \end{equation} where $t_0$ is long enough to ensure that the temperature random walk has reached a steady regime. Thanks to condition~(\ref{eq:suma-f}), we do not need to subtract the squared mean value of $f$ in this definition. From the normalised $\hat C (t) = C(t) /C(0)$, see e.g. 
\fref{fig:corr}, we can define the integrated correlation times: \begin{equation}\label{eq:tau-int} \tau_\mathrm{int} = \frac12 + \sum_{t=1}^{W} \hat C(t), \end{equation} where $W$ is a self-consistent window that avoids the divergence in the variance of $\tau_\mathrm{int}$. The great advantage of these functions over the physical observables is that we can average over the $N_T$ configurations in the parallel tempering.\footnote{Even if these are not completely statistically independent, the averaged autocorrelation has a much smaller variance. In addition, the need to simulate several replicas provides independent determinations of $C(t)$, which permits a further error reduction and an estimate of the statistical errors.} This procedure works surprisingly well, not only giving reliable estimates of the integrated time but even providing the more physical, but notoriously difficult to measure, exponential autocorrelation time. Indeed, the correlation function admits an expansion in exponentially decaying modes \begin{equation}\label{eq:corr} \hat C(t) = \sum_i A_i\ \rme^{-t/\tau_{\mathrm{exp},i}},\quad \sum_i A_i=1. \end{equation} \begin{figure} \centering \includegraphics[height=\linewidth,angle=270]{corr} \caption{Autocorrelation functions for samples with $\tau_\mathrm{exp}$ of different orders of magnitude. We have plotted the range $[0,6\tau_\mathrm{exp}]$. We include the automatic double-exponential fit, see~\ref{sec:protocol}. In the last panel the fit fails due to a strong downward fluctuation and our programme has chosen a restricted interval for a fit to a single exponential. In order to avoid cluttering the graphs, we have plotted only a few of the measured times (the actual correlation functions have many more points). 
The horizontal axis is in units of $10^6$ heat-bath updates.} \label{fig:corr} \end{figure} \begin{figure} \centering \includegraphics[height=0.7\linewidth,angle=270]{histograma_taus} \caption{Histogram of exponential autocorrelation times for our simulations of the $L=32$ lattice (1000 samples).} \label{fig:histograma-taus} \end{figure} \begin{figure} \centering \includegraphics[height=0.7\linewidth,angle=270]{histograma_taus_L24_log} \caption{Logarithm of the histogram of exponential autocorrelation times for our simulations of the $L=24$ lattice (4000 samples). Mind the behaviour of the long-times tail.} \label{fig:histograma-taus_L24_log} \end{figure} In this representation, the exponential time $\tau_\mathrm{exp}$ is the largest of the $\tau_{\mathrm{exp},i}$.\footnote{The number of modes equals the dimension of the dynamical matrix of the Monte Carlo Markov process, which in our case is $(N_T!)\times 2^{N_T V}$.} Barring symmetry considerations, this exponential time should be the same for all random variables in the simulation, including the physical observables. The relative sizes of the $A_i$, and hence $\tau_\mathrm{int}$, depend to a certain extent on the particular choice of $f$. Notice, however, that criteria~(\ref{eq:f1}--\ref{eq:f2}) select a family of functions that hopefully reduce the amplitude of the irrelevant fast modes. In any case, $\tau_\mathrm{exp}$ has a physical meaning independently of these somewhat arbitrary considerations. In practice, the simulations are too long (up to $N_\mathrm{HB}\sim 10^{12}$) to consider all the $f(i_t)$ individually and we have to introduce some data binning, averaging over a large number of consecutive measurements. As it turns out, this is not a very limiting issue for two reasons. On the one hand, as long as these bins are much shorter than $\tau$, there is no real information loss. 
On the other hand, one can reconstruct any polynomial $f$ up to degree $k$ ---in particular our linear $f$--- by saving the sums of the first $k$ powers of the $i_t$. Even after this binning, we have worked with time series with a length of up to several million, so in order to compute the autocorrelation we have used a Fast Fourier Transform algorithm~\cite{frigo:05}. The details of the chosen thermalisation protocol can be found in \ref{sec:protocol}. We summarise by saying that our main thermalisation criterion is ensuring that $N_\mathrm{HB}> 12 \tau_\mathrm{exp}$ ($2\tau_\mathrm{exp}$ are discarded and the remaining $10\tau_\mathrm{exp}$ are used to measure and study $\hat C(t)$). In \fref{fig:corr} we plot several autocorrelation functions showing how the data quality allows for an exponential fit. We have chosen randomly 4 samples with very different exponential autocorrelation times: $6.5\times 10^6$, $8.8\times10^7$, $1.5\times 10^9$ and $1.8\times 10^{10}$. To summarise the distribution of the exponential autocorrelation times we have computed a histogram. Due to the large dispersion of these quantities we have chosen $\log_2 \tau_\mathrm{exp}$ as a variable. In \fref{fig:histograma-taus} we show the results for the two runs performed in $L=32$ (see table~\ref{tab:parameters}). Notice the dramatic increase of the $\tau_\mathrm{exp}$ when decreasing the minimum temperature of the simulation. The smooth shape of the curves defined by the histogram is a further test of our procedure for determining autocorrelation times. In \fref{fig:histograma-taus_L24_log} we plot the logarithm of the histogram in the $L=24$ case to show the exponential behaviour of the long-times tail. This result gives confidence that rare events, with very large (logarithms of) autocorrelation times, are at least exponentially suppressed. 
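The analysis described above, the FFT-based computation of $\hat C(t)$ and the integrated time of eq.~(\ref{eq:tau-int}) with a self-consistent window, can be sketched as follows (an illustrative implementation on a toy time series, not the analysis code actually used):

```python
# Sketch: FFT autocorrelation of the zero-mean series f(i_t) and the
# integrated time with a self-consistent summation window.
import numpy as np

def autocorrelation(f_series):
    """Normalised autocorrelation C(t)/C(0) computed with an FFT."""
    n = len(f_series)
    fhat = np.fft.rfft(f_series, 2 * n)        # zero-pad: linear, not circular
    acf = np.fft.irfft(fhat * np.conj(fhat), 2 * n)[:n]
    acf /= np.arange(n, 0, -1)                 # 1/(n - t) normalisation
    return acf / acf[0]

def integrated_time(c_hat, factor=6.0):
    """tau_int = 1/2 + sum_t C(t), summed up to a window W >= factor*tau."""
    tau = 0.5
    for w in range(1, len(c_hat)):
        tau += c_hat[w]
        if w >= factor * tau:                  # self-consistent window reached
            break
    return tau

# Toy check on an AR(1) walk, for which tau_int = (1 + rho) / (2 (1 - rho)).
rng = np.random.default_rng(0)
rho, n = 0.9, 200_000
noise = rng.normal(size=n)
x = np.empty(n)
x[0] = noise[0]
for t in range(1, n):
    x[t] = rho * x[t - 1] + noise[t]
tau = integrated_time(autocorrelation(x - x.mean()))   # expect about 9.5
```

The window rule (stop summing once $W \geq c\,\tau_\mathrm{int}$, here $c=6$) is the standard way of truncating the sum before the noise in $\hat C(t)$ dominates its variance.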
We have not made efforts to measure with precision the small autocorrelation times as they are immaterial regarding thermalisation, which is ensured by the minimum number of iterations performed for all samples. \subsection{Monte Carlo evaluation of observables}\label{SECT:MONTECARLO-EVALUATION} We now present some technical details about our evaluation of mean values, functions of mean values and error estimation. Some of the observables considered in this work were obtained by means of an online analysis: the internal energy, the link overlap, powers of the spin overlap ($q,q^2,q^4$), and Fourier transforms of the correlation function $C_4(\vn{r})$ for selected momenta. These quantities were computed as Monte Carlo time averages along the simulation. Note that the length of the simulation is sample dependent, something that would be a nuisance in a multispin coding simulation, but not in Janus, where each sample is simulated independently. The disorder averaging followed the Monte Carlo one. Statistical errors were computed using a jackknife method over the samples, see for instance~\cite{amit:05}. However, when designing the simulation, one cannot anticipate all quantities that would be interesting, or these can be too expensive to compute at runtime. In particular, we did not compute the conditional correlation functions $C_4(\vn{r}|q)$. Fortunately, an offline analysis of the stored configurations has allowed us to estimate them. We had to overcome a difficulty, though, namely the scarcity of stored configurations. In fact, for the samples that were simulated only for the minimum simulation time, we had only $N_\mathrm{conf} \sim 100$ configurations stored on disk (ranging from $N_\mathrm{conf}=10$ for $L=12$ to $N_\mathrm{conf} = 200$ in the case $L=32$). We regard the second half (in a Monte Carlo time sense) of these configurations as safely thermalised.
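The jackknife-over-samples error estimate mentioned above can be sketched as follows (a generic illustration on invented data, not our analysis code); for a nonlinear estimator, the same leave-one-out construction also removes the leading $O(1/N)$ bias:

```python
import numpy as np

def jackknife(samples, estimator):
    """Leave-one-out jackknife over disorder samples: returns the
    bias-corrected estimate and the jackknife error bar."""
    samples = np.asarray(samples, dtype=float)
    n = len(samples)
    loo = np.array([estimator(np.delete(samples, i)) for i in range(n)])
    err = np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))
    est = n * estimator(samples) - (n - 1) * loo.mean()   # bias corrected
    return est, err

rng = np.random.default_rng(1)
data = rng.normal(loc=2.0, scale=0.5, size=1000)    # one number per sample
mean, err = jackknife(data, np.mean)
# For a linear estimator the jackknife error equals the standard error.
```

In practice one often groups the samples into a moderate number of jackknife blocks instead of leaving out one sample at a time, which gives the same error estimate at a fraction of the cost.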
Yet, when forming the overlap field, eq.~(\ref{Q-FIELD-DEF}), one needs only that the two spin configurations, $\{s_{\vn{x}}^{(1)}\}$ and $\{s_{\vn{x}}^{(2)}\}$, be thermalised and independent. Clearly enough, as long as the two configurations belong to different real replicas and to the second half of the Monte Carlo history, they are suitable. The two configurations need not have been obtained at the same Monte Carlo time (as is done for the online analyses). Furthermore, the four real replicas offer us 6 pair combinations. Hence, we had at least $6\times (N_\mathrm{conf}/2)^2 \sim 10000$ (60000 for $L=32$) measurements to estimate the overlaps and the correlation functions. We used the Fast Fourier Transform to speed up the computation of the spatial correlations. For those samples that had more configurations (because their total simulation time exceeded $N_\mathrm{min}^\mathrm{HB}$), we nevertheless considered $N_\mathrm{conf}/2$ configurations evenly spaced along the full second half of the simulation. When some quantity, for instance $P(q)$, could be computed either way, online or offline, we have compared the results. The two ways turn out to be not only compatible, but also equivalent from the point of view of the statistical errors. As an example of this, let us compute the following quantity: \begin{equation} \sigma_\mathrm{link}^2 = \overline{\langle Q_\mathrm{link}^2\rangle} - \overline{\langle Q_\mathrm{link}\rangle}^2. \end{equation} For $L=32$, $T = 0.703$, the value of $\sigma_\mathrm{link}^2$ computed from online measurements of $Q_\mathrm{link}$ and $Q_\mathrm{link}^2$ is \begin{equation} V \sigma_\mathrm{link, online}^2 = 50.88(90). \end{equation} We could now recompute this value from offline measurements of $Q_\mathrm{link}$ and $Q_\mathrm{link}^2$.
Instead, we are going to use eq.~(\ref{eq:var-q-anchura}), which involves the intermediate step of computing conditional expectation values and variances at fixed $q$ and then integrating with $P(q)$. This will serve as a test both of the offline measurements' precision and of our Gaussian convolution method for the definition of clustering quantities. The result is \begin{equation}\label{eq:sigma-link} V \sigma_\mathrm{link, conf}^2 = 50.81(90). \end{equation} The precision of $\sigma_\mathrm{link, online}^2$ and $\sigma_\mathrm{link, conf}^2$ is the same and the difference is less than $10\%$ of the error bar, even though we only analysed $100$ configurations per sample for the second one. Of course, both determinations are very highly correlated, so the uncertainty in their difference is actually much smaller than their individual errors. Computing the difference for each jackknife block we see that \begin{equation} V [\sigma_\mathrm{link,conf}^2 - \sigma_\mathrm{link, online}^2] = -0.065(79), \end{equation} which is indeed compatible with zero. A subtle point regards non-linear functions of thermal mean values that are later on averaged over the disorder. In this work, the only instance is $\chi_\mathrm{link}$, see eq.~(\ref{DEF:CHI-LINK}). Care is needed to estimate such non-linear functions because a naive evaluation would be biased, and the bias might be sizeable compared to the statistical errors~\cite{ballesteros:97}. This problem does not arise in non-linear functions such as eq.~(\ref{DEF:q-PROMEDIO}), which are computed on observables only {\em after} the double averaging process over the thermal noise and over the samples. The problem and several solutions are discussed in~\ref{AP:NON-BIAS} (see also~\cite{hasenbusch:08b}). A final issue is the comparison of data computed in different system sizes at the {\em same} temperatures. Unfortunately, the grids of temperatures that we used for the different $L$ differ.
Hence we have interpolated our data by means of a cubic spline. \subsection{Thermalisation tests}\label{SECT:THERMALIZATION-TESTS} We will consider in this subsection thermalisation tests directly based on physically interesting quantities. \begin{figure}[b] \centering \includegraphics[height=0.7\linewidth,angle=270]{Binder} \caption{Evolution of the Binder parameter for $L=32$, $T=0.703$ using $\log_2$ binning (0 = second half, 1 = second quarter, ...). The blue curve (circles) is the result of stopping at step 1 of our thermalisation protocol (i.e., all samples simulated for a fixed time of $4\!\times\! 10^9$ heat-bath updates). The red curve (squares) is the result of completing all the steps, which implies an increase of roughly 150\% in simulation time.} \label{fig:log2} \end{figure} We start with the traditional $\log_2$-binning procedure. We choose the Binder parameter for the overlap, see eq.~(\ref{DEF:B}), which is especially sensitive to rare events. In \fref{fig:log2} we show the results for $B(T_\mathrm{min})$ for $L=32$, considering only the first $4\times 10^9$ Heat Bath steps of each of our 1000 samples, as if all the simulations were $N_\mathrm{min}^{\mathrm{HB}}$ heat-bath steps long (blue line). Not even the last two bins could be claimed to be stable within errors. Things change dramatically if we consider Monte Carlo histories of a length proportional to the exponential autocorrelation time. Note that, thanks to our choice of $N^\mathrm{HB}_\mathrm{min}$ in \tref{tab:parameters}, the simulation time for most samples has not increased. If we first rescale the data according to the total simulation length (itself proportional to the autocorrelation time) and average at equal {\it rescaled} time, the $\log_2$-binning procedure gives 4 steps of stability within errors. That is to say: we obtain the Binder parameter without thermalisation bias just by discarding 1/16 of the history (and taking up to 1/8).
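The $\log_2$-binning test itself is elementary; a sketch on toy data (bin 0 is the second half of the history, bin 1 the second quarter, and so on, as in \fref{fig:log2}) shows how a drifting observable exposes insufficient thermalisation:

```python
import numpy as np

def log2_bins(series, n_bins=6):
    """Averages over the windows [N/2, N), [N/4, N/2), ...; bin 0 is the
    second half of the Monte Carlo history, bin 1 the second quarter, etc."""
    hi = len(series)
    bins = []
    for _ in range(n_bins):
        lo = hi // 2
        bins.append(np.mean(series[lo:hi]))
        hi = lo
    return np.array(bins)

# Toy observable relaxing towards its equilibrium value 1 with "time" 200.
t = np.arange(4096)
series = 1.0 - np.exp(-t / 200.0)
bins = log2_bins(series)
# Late bins (low indices) sit on the plateau; early bins expose the drift.
```

A thermalised series yields bins that agree within errors; in the toy example above the bins decrease monotonically as one moves towards the beginning of the history.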
Regarding the Binder parameter, our requirement of $12\tau_\mathrm{exp}$ is excessive. In retrospect (see \fref{fig:log2}), shorter simulations would have produced indistinguishable physical results for most observables. We do not regret our choices, however, as we plan to use these thermalised configurations in the future~\cite{janus:xx} for very delicate analyses (such as temperature chaos), which are much more sensitive to thermalisation effects. \begin{figure} \centering \includegraphics[height=0.7\linewidth,angle=270]{Binder-T} \caption{Binder ratio as a function of the temperature for $L=32$. The good overlap between two different simulations (one of them in the much easier critical region) is a further thermalisation check. We use the same set of 1000 samples.} \label{fig:Binder-T} \end{figure} A different test can be performed by comparing the difficult low-temperature simulations of our largest lattice with simulations in the critical region of the {\em same samples}. A faulty thermalisation (for instance, a configuration remaining trapped at low temperatures) could show up as inconsistencies in the values of quantities at the temperatures common to both simulations. In \fref{fig:Binder-T} we show the Binder parameter as a function of temperature for the two simulations with $L=32$ (see \tref{tab:parameters}). The agreement between both simulations is excellent. \begin{figure} \centering \begin{minipage}{.49\linewidth} \includegraphics[height=\linewidth,angle=270]{scatterplot_4metodos.eps} \end{minipage} \begin{minipage}{.49\linewidth} \includegraphics[height=\linewidth,angle=270]{diferencia_4metodos_qlink.eps} \end{minipage} \caption{Bias correction in the computation of $\chi_\mathrm{link}$, eq.~(\ref{DEF:CHI-LINK}). On the left panel we plot the two-replica estimators $\chi_\mathrm{link, 2R}$ as a function of the unbiased four-replica estimator $\chi_{\mathrm{link, 4R}}$, eq.~(\ref{CHI-4R}), for all our temperatures in the $L = 32$ lattice.
The two-replica estimators $\chi_\mathrm{link, 2R}$ are computed with no bias correction, eq.~(\ref{CHI-2R-NO}), with linear corrections, eq.~(\ref{CHI-2R-LINEAR}), and with quadratic corrections, eq.~(\ref{CHI-2R-QUADRATIC}). The right panel displays, for the three two-replica estimators, their difference with the four-replica estimator {\em in units of the statistical error for that difference}, as a function of temperature. We show our data for $L=32$ and $L=24$. Note that the statistical error in the {\em difference} between two estimators is largely reduced (as compared to individual errors) due to dramatic data correlation.} \label{fig:diff-chi-link} \end{figure} A very different test on the statistical quality of our data is the comparison of the values of $\chi_\mathrm{link}$ obtained using the different possible estimators for $\langle Q_\mathrm{link}\rangle^2$. We have an unbiased estimator if we use $Q_{\mathrm{link,4R}}^{(2)}$, see eq.~(\ref{BIAS-CORRECTION2}), the linearly bias-corrected estimator $Q_\mathrm{link,linear}^{(2)}$ in eq.~(\ref{BIAS-CORRECTION-LINEAR}), and the quadratically bias-corrected estimator $Q_\mathrm{link,quadratic}^{(2)}$ in eq.~(\ref{BIAS-CORRECTION-QUADRATIC}). The different determinations are equal only if the total simulation time (in each sample) is much longer than the integrated autocorrelation time for $Q_\mathrm{link}$. As we see in \fref{fig:diff-chi-link}--left, only computing $\chi_\mathrm{link}$ from the biased estimator $[Q_\mathrm{link}]_{2/2}^2$ results in a measurable bias. Once bias correction is taken into account, differences are only a fraction of the statistical error for each estimator. Nevertheless, the different statistical estimators are dramatically correlated. Hence, their difference might be significant. In \fref{fig:diff-chi-link}--right we plot these differences for $L=24$ and $L=32$ as a function of temperature, in units of the statistical error for that difference. 
As we see, at the lowest temperatures for $L=32$, the bias for the estimate of $\chi_\mathrm{link}$ obtained from $Q_\mathrm{link,linear}^{(2)}$ is still measurable. Only the estimate from $Q_\mathrm{link,quadratic}^{(2)}$ is statistically compatible with the unbiased estimator. Since our data fully comply with these expectations, we consider the above analysis a confirmation that $N\gg\tau_\mathrm{int,Q_\mathrm{link}},\tau_\mathrm{exp}\,.$ \begin{figure} \centering \includegraphics[height=\linewidth,angle=270]{tau-vs-prob} \caption{Scatter plot of the exponential autocorrelation time ($L=32$) versus the probability of the overlap being less than a small quantity (left) and the energy (right). We do not observe correlation between the thermalisation times and these physically relevant quantities.} \label{fig:tau-vs-prob} \end{figure} We carefully avoided making decisions during thermalisation based on the values of physical quantities. However, one could worry about the possibility of important statistical correlations between the temperature random walk and interesting quantities. Such correlations could introduce small biases that would be difficult to eliminate. Fortunately, we have not found any correlation of this type. In \fref{fig:tau-vs-prob} we show the correlation between $\tau_\mathrm{exp}$ and two important quantities: the probability of the overlap being small and the energy. \section{The overlap probability density}\label{SECT:P-DE-Q} In this section we study the pdf of the spin overlap. This is a particularly interesting quantity because, as we saw in \sref{SECT:MODEL}, it has a qualitatively different behaviour in the droplet and RSB pictures of the spin-glass phase. We have plotted $P(q)$ for $T=0.703$ (the lowest for $L=32$) and $T=0.625$ (the lowest for $L=24$) in \fref{fig:Pq}.
Notice that the convolution of the comb-like $\tilde P(q)$, eq.~(\ref{DEF:PQ-PEINE}), with the Gaussian function, eq.~(\ref{DEF:PQ-SMOOTH}), has yielded a very smooth $P(q)$. Initially, one would expect the peaks of this pdf to grow narrower and closer together as $L$ increases, eventually becoming two Dirac deltas at $\pm q_\mathrm{EA}$. The shift in position is clearly visible in the figures, but a more careful analysis is needed to confirm that the peaks are indeed getting sharper (\sref{sect:picos}). In addition, the probability in the $q=0$ sector should either go to zero (droplet) or reach a stable non-zero value (RSB). Even if a visual inspection of \fref{fig:Pq} seems to favour the second scenario, we shall need a more quantitative analysis to draw conclusions. \begin{figure}[b] \centering \begin{minipage}{.48\linewidth} \includegraphics[height=\linewidth,angle=270]{P-T-070256} \end{minipage} \begin{minipage}{.48\linewidth} \includegraphics[height=\linewidth,angle=270]{P-T-062500} \end{minipage} \caption{Overlap probability density function $P(q)$, eq.~(\ref{DEF:PQ-SMOOTH}), at $T=0.625$ and $T=0.703$. Notice that for the central sector of $q\sim0$ the curves for the different system sizes quickly reach a plateau with $P(q) > 0$.} \label{fig:Pq} \end{figure} In the remainder of this section we undertake such a quantitative characterisation of $P(q)$ and, in particular, its thermodynamical limit. To this end, we will study the evolution of $P(q=0)$ with $T$ and $L$ (\sref{sect:P0}); the extrapolation to infinite volume of the Binder cumulant (\sref{sect:Binder}) and finally the evolution of the shape and position of the peaks with the system's size (\sref{sect:picos}). 
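The Gaussian convolution of eq.~(\ref{DEF:PQ-SMOOTH}) amounts to replacing each measured overlap by a narrow normalised Gaussian. A sketch with synthetic overlap data (the smoothing width $\sigma$ and the toy peak positions below are illustrative, not the parameters used in our analysis):

```python
import numpy as np

def smooth_pq(overlaps, q_grid, sigma):
    """Comb-like P(q) convolved with a Gaussian of width sigma: every
    measured overlap contributes a normalised Gaussian on the q grid."""
    q = np.asarray(overlaps)[:, None]                  # shape (N_meas, 1)
    g = np.exp(-((q_grid[None, :] - q) ** 2) / (2.0 * sigma**2))
    g /= sigma * np.sqrt(2.0 * np.pi)
    return g.mean(axis=0)

rng = np.random.default_rng(2)
# Synthetic comb mimicking two symmetric peaks near +-0.7.
q_meas = np.concatenate([rng.normal(0.7, 0.05, 2000),
                         rng.normal(-0.7, 0.05, 2000)])
grid = np.linspace(-1.0, 1.0, 401)
pq = smooth_pq(q_meas, grid, sigma=0.02)
# The result is a smooth, normalised density peaked near +-0.7.
```

Choosing $\sigma$ much smaller than the physical peak width smooths the comb without biasing the positions or heights of the peaks.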
\subsection{The $q=0$ sector}\label{sect:P0} \begin{figure} \centering \begin{minipage}{.48\linewidth} \includegraphics[height=\linewidth,angle=270]{P-cero} \end{minipage} \begin{minipage}{.48\linewidth} \includegraphics[height=\linewidth,angle=270]{P-cero2} \end{minipage} \caption{Overlap density distribution function at zero overlap as a function of temperature. We observe an enveloping curve with a linear behaviour, as expected in an RSB setting.} \label{fig:zero-overlap} \end{figure} We have plotted in \fref{fig:zero-overlap}---left the probability density at $q=0$ as a function of $T$ for all our lattices. There clearly is an enveloping curve in the region $T<T_\mathrm{c}$ with a decreasing, but positive, value of $P(0)$. In a mean-field setting~\cite{mezard:87} we expect this probability density to go to zero linearly in $T$. In order to check this, we have plotted $P(0)/T$ against $T$ in \fref{fig:zero-overlap}---right. As we can see, this expectation is fulfilled. For a similar study see~\cite{katzgraber:01}. We remark that the seemingly anomalous value of $P(0)$ for our lowest temperature in $L=8$ is an artifact of the binary nature of the couplings (a finite system always has a finite energy gap). Indeed, in~\cite{palassini:01}, the finite-size behaviour of $P(0)$ for the Edwards-Anderson model with binary couplings was studied as a function of temperature. Finite-size effects on $P(0)$ turned out to be stronger close to $T=0$ than at finite temperature. From a droplet model point of view, Moore et al.~\cite{moore:98} have argued that the apparent lack of a vanishing limit for $P(0)$ in numerical work in the 1990s was an artifact of critical fluctuations. In fact, at $T_\mathrm{c}$, $P(0)$ diverges as $L^{\beta/\nu}$ while droplet theory predicts that, for very large lattices, it vanishes as $L^{-\zeta}$, with $\zeta\sim0.2$, for all $T<T_\mathrm{c}$.
These authors rationalise the numerical findings as a crossover between these two limiting behaviours. However, a numerical study at very low temperatures (so the critical regime is avoided) found for moderate system sizes a non-vanishing $P(0)$~\cite{katzgraber:01}. Furthermore, we compute in \sref{sect:picos} a characteristic length for finite-size effects in the spin-glass phase, which turns out to be small at $T=0.703$. \subsection{The Binder cumulant}\label{sect:Binder} We have plotted the Binder cumulant~(\ref{eq:Binder}) for $T=0.625,0.703$ as a function of the system size in \fref{fig:Binder-L}. As discussed in \sref{SECT:OBSERVABLES}, the evolution (and thermodynamical limit) of this observable is different in the droplet and RSB pictures: \begin{eqnarray} \mathrm{Droplet:}\qquad B(T;L) &=& 1 + a L^{-\zeta},\label{eq:Binder-droplet} \\ \ \ \, \quad\mathrm{RSB:}\qquad B(T;L) &=& c + d L^{-1/\hat\nu},\label{eq:Binder-RSB} \end{eqnarray} where $1/\hat\nu=0.39(5)$~\cite{janus:10b}. Since it is compatible with our best estimate for the replicon exponent, $\theta(0)=0.38(2)$, we prefer to use the second, more accurate value (there is some analytical ground for this identification~\cite{janus:10b}). We will attempt to distinguish between these two behaviours by fitting our data to (\ref{eq:Binder-droplet}) and (\ref{eq:Binder-RSB}). These two-parameter fits are plotted in \fref{fig:Binder-L} and the resulting parameters are gathered in \tref{tab:Binder}. In the case of the RSB fit, Eq~(\ref{eq:Binder-RSB}), we have included two error bars: the number enclosed in parentheses $(\,\cdot\,)$ comes from the statistical error in a fit fixing $1/\hat\nu$ to $\theta(0)$ and the one inside square brackets $[\,\cdot\,]$ is the systematic error due to the uncertainty in $\theta(0)$. 
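Both two-parameter fits can be reproduced with ordinary least squares. A sketch on synthetic data generated from the RSB form, with illustrative parameter values taken from \tref{tab:Binder} (for the droplet form, nonlinear in $\zeta$, a simple grid search suffices):

```python
import numpy as np

def fit_rsb(L, B, expo=0.38):
    """Linear least squares for B = c + d * L**(-expo), the RSB form with
    1/nu-hat fixed to the replicon exponent."""
    X = np.column_stack([np.ones_like(L), L ** (-expo)])
    (c, d), *_ = np.linalg.lstsq(X, B, rcond=None)
    return c, d

def fit_droplet(L, B):
    """Grid search over zeta for the droplet form B = 1 + a * L**(-zeta)."""
    best = (np.inf, None, None)
    for zeta in np.linspace(0.01, 1.0, 991):
        x = L ** (-zeta)
        a = np.dot(x, B - 1.0) / np.dot(x, x)    # one-parameter LSQ for a
        chi2 = np.sum((B - 1.0 - a * x) ** 2)
        if chi2 < best[0]:
            best = (chi2, a, zeta)
    return best[1], best[2]

L = np.array([8.0, 12.0, 16.0, 24.0, 32.0])
B = 1.165 + 0.186 * L ** (-0.38)      # synthetic data, exact RSB form
c, d = fit_rsb(L, B)                  # recovers c = 1.165, d = 0.186
a, zeta = fit_droplet(L, B)           # forces a small zeta, as in the text
```

Note that the droplet fit to RSB-like data is only able to approach the $B\to1$ limit by choosing a very small $\zeta$, which mirrors the behaviour of the actual fits in \tref{tab:Binder}.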
\begin{figure}[t] \centering \begin{minipage}{.48\linewidth} \includegraphics[height=\linewidth,angle=270]{B-T-070256} \end{minipage} \begin{minipage}{.48\linewidth} \includegraphics[height=\linewidth,angle=270]{B-T-062500} \end{minipage} \caption{Infinite volume extrapolation of the Binder parameter at $T=0.703$ and $T=0.625$ and fits to the behaviour expected in the RSB, eq.~(\ref{eq:Binder-RSB}), and droplet, eq.~(\ref{eq:Binder-droplet}), pictures. See \tref{tab:Binder}. For the experimentally relevant scale of $L=110$ (dotted vertical line, see \sref{SECT:EQUILIBRIUM-DYNAMICS}) both fits are well above the $B=1$ value of a coarsening system.} \label{fig:Binder-L} \end{figure} \begin{table}[b] \caption{Scaling of the Binder parameter and fit to the behaviour expected in the droplet, eq.~(\ref{eq:Binder-droplet}), and RSB pictures, eq.~(\ref{eq:Binder-RSB}).}\label{tab:Binder} \begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}}cccccccc} \cline{2-8} & \multicolumn{3}{c}{\bfseries Droplet fit} & & \multicolumn{3}{c}{\bfseries RSB fit} \\ \hline \multicolumn{1}{c}{ $T$} & \multicolumn{1}{c}{ $\chi^2/\mathrm{d.o.f.}$} & \multicolumn{1}{c}{ $a$} & \multicolumn{1}{c}{ $\zeta$} & & \multicolumn{1}{c}{ $\chi^2/\mathrm{d.o.f.}$} & \multicolumn{1}{c}{ $c$} & \multicolumn{1}{c}{ $d$} \\ \hline 0.703 & 3.78/3 & 0.312(17) & 0.110(17) & & 3.44/3 & 1.165(12)[34] & 0.186(34)[03]\\ 0.625 & 2.00/2 & 0.289(16) & 0.134(21) & & 2.73/2 & 1.128(11)[33] & 0.193(28)[03]\\ \hline \end{tabular*} \end{table} As it turns out, both fits have acceptable values of $\chi^2$ per degree of freedom (d.o.f.). However, the evolution of $B$ with $L$ is very slow, so in order to accommodate the limit value of $B(L\to\infty)=1$ consistent with the droplet picture, we have needed a very small exponent ($\zeta \sim 0.12$, smaller than the droplet prediction of $\zeta\approx0.2$~\cite{bray:87}). 
On the other hand, according to droplet theory~\cite{bray:87}, the connected spatial correlation function at $q\!=\!q_\mathrm{EA}$ decays as $1/r^{\zeta}$. A direct study~\cite{janus:10b}, however, indicates that these correlations decay as $1/r^{0.6}$. The reader may find it disputable, from an RSB point of view, that a single power law should govern finite size effects. It would be rather more natural that corrections were of order \begin{equation} \frac{1}{L^{\theta_\mathrm{eff}(L)}}= \int_0^1 \mathrm{d}q\, \frac{P(q)}{L^{\theta(q)}}\,. \end{equation} It turns out, however, that $\theta(q)$ hardly depends on $q$ (except on the neighbourhood of $q_\mathrm{EA}$), see~\cite{janus:10b} and Sect.~\ref{SECT:COND}. The neighbourhood of $q_\mathrm{EA}$ would produce a subleading correction of order $1/L^{0.6}$. In any case, see \sref{SECT:EQUILIBRIUM-DYNAMICS}, we remark that the relevant regime for comparison with experimental work is $L\approx110$, where both the RSB and the droplet fits predict that $B(T,L)$ is well above $1$ (see \fref{fig:Binder-L}). \subsection{The peaks of $P(q)$, $q_\mathrm{EA}$, and finite size effects}\label{sect:picos} One of the features of the $P(q)$ about which droplet and RSB agree is the fate of its two symmetric peaks as we approach the thermodynamical limit. These should grow increasingly narrow and shift their position until they eventually become two Dirac deltas at $q = \pm q_\mathrm{EA}$. The actual value of $q_\mathrm{EA}$ is notoriously difficult to compute~\cite{janus:09b,perez-gaviro:06,iniguez:97}, see, however, \cite{janus:10b}. Characterising the evolution of these peaks as we increase the system size is the goal of this section. We start by defining $q_\mathrm{EA}(L)$ as the position of the maximum of $P(q;L)$ (since the pdf is symmetric, we shall consider all overlaps to be positive in the remainder of this section). 
Thanks to the Gaussian smoothing procedure described in eq.~(\ref{DEF:PQ-SMOOTH}), this maximum is very well defined. We compute its position by fitting the peak to a third-order polynomial (notice that the peaks are very asymmetric). In order to further describe the peaks, we will also employ the half-widths $\sigma^{(\pm)}$ at half height $\bigl[P(q^{(\pm)}) = P(q_\mathrm{EA}(L))/2\bigr]$: \begin{equation}\label{eq:sigma} \sigma^{(\pm)} = \bigl| q^{(\pm)} - q_\mathrm{EA}(L)\bigr|\, \end{equation} where $q^{(-)} < q_\mathrm{EA}(L)< q^{(+)}$. We have plotted these parameters as a function of temperature in \fref{fig:qEA-T}. On \tref{tab:sigma} we can see that the width of the peaks does decrease with a power law in $L$, although very slowly. The product $\sigma P(q_\mathrm{EA}(L))$ has a small dependence on $L$. We can now extrapolate $q_\mathrm{EA}(L)$ to find the order parameter in the thermodynamical limit. A finite-size scaling study~\cite{janus:10b} shows that \begin{equation}\label{eq:qEA-inf} q_\mathrm{EA} (L,T) = q_\mathrm{EA}^\infty(T) \biggl[1+\frac{A(T)}{L^{1/\hat\nu}}\biggr],\quad A(T)=[L_\mathrm{c}(T)]^{1/\hat\nu}\,, \end{equation} where $1/\hat\nu=0.39(5)$. Yet, as discussed after eq.~\eref{eq:Binder-RSB}, we prefer to identify $1/\hat\nu$ with the replicon exponent, $\theta(0)=0.38(2)$. A disagreeing reader merely needs to double the error estimate in the extrapolation of $q_\mathrm{EA}$. Note that one should not attempt a three-parameter fit to eq.~(\ref{eq:qEA-inf}), as there are too few degrees of freedom. An independent estimate of $1/\hat\nu$ is required. Similar extrapolations were attempted in~\cite{iniguez:96}, with smaller system sizes ($L\leq16$) and a lesser control over $1/\hat\nu$. 
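As an illustration, the extrapolation of eq.~(\ref{eq:qEA-inf}) can be redone with an unweighted least-squares fit, using the $T=0.703$ values of $q_\mathrm{EA}(L)$ gathered in \tref{tab:qEA} (the quoted results use the statistical weights, so the numbers differ slightly); with $1/\hat\nu=0.38$ held fixed, the fit is linear in its two parameters:

```python
import numpy as np

# q_EA(L) at T = 0.703 from the table, fit range L >= 16.
L = np.array([16.0, 24.0, 32.0])
q = np.array([0.77300, 0.74027, 0.71740])

expo = 0.38                                  # 1/nu-hat = replicon exponent
X = np.column_stack([np.ones_like(L), L ** (-expo)])
(p0, p1), *_ = np.linalg.lstsq(X, q, rcond=None)

q_inf = p0                                   # thermodynamic-limit estimate
L_c = (p1 / p0) ** (1.0 / expo)              # crossover length L_c(T)
# Unweighted fit: q_inf near 0.53 and L_c near 2, close to the table values.
```

The error-weighted fit quoted in the table gives $q_\mathrm{EA}=0.538$ and $L_\mathrm{c}=1.78$; the unweighted sketch lands close by.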
\begin{figure}[t] \centering \begin{minipage}{.48\linewidth} \includegraphics[height=\linewidth,angle=270]{qEA-inf} \end{minipage} \begin{minipage}{.48\linewidth} \includegraphics[height=\linewidth,angle=270]{sigma-L} \end{minipage} \caption{\emph{Left:} $q_\mathrm{EA}(L)$ as a function of the temperature. We include two different infinite-volume extrapolations: using the replicon exponent, eq.~(\ref{eq:qEA-inf}) and \tref{tab:qEA}, and the one obtained from finite-size scaling arguments in the critical region,~eqs.~(\ref{eq:qEA-FSS}) and~(\ref{eq:qEA-FSS2}). \emph{Right:} Width of the peaks of $P(q)$, eq.~(\ref{eq:sigma}), as a function of $T$ for all our lattice sizes.} \label{fig:qEA-T} \end{figure} \begin{table}[t] \caption{Width $\sigma=\bigl(\sigma^{(+)} +\sigma^{(-)}\bigr)/2$ of the peaks in $P(q)$ and fit to a power law $\sigma(L)=AL^B$ in the range $[L_\mathrm{min}, 32]$. We also include the product $\sigma P(q_\mathrm{EA}(L))$.} \label{tab:sigma} \begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}}cccccc} \cline{2-6} & \multicolumn{2}{c}{$T = 0.703$}& & \multicolumn{2}{c}{$T = 0.805$}\\ \hline \multicolumn{1}{c}{$L$ } & $\sigma$ & $\sigma P(q_\mathrm{EA}(L))$& & $\sigma$ & $\sigma P(q_\mathrm{EA}(L))$\\ \hline 8 & 0.1177(20) &0.1784(10) & & 0.1391(25)& 0.1833(10) \\ 12 & 0.0963(21) &0.1740(12) & & 0.1165(25)& 0.1809(12) \\ 16 & 0.0817(16) &0.1696(11) & & 0.1001(22)& 0.1756(11) \\ 24 & 0.0735(16) &0.1690(12) & & 0.0860(19)& 0.1728(12) \\ 32 & 0.0668(29) &0.1631(23) & & 0.0798(34)& 0.1669(22) \\ \hline $L_\mathrm{min}$ & 16 & & & 16 \\ $\chi^2/\mathrm{d.o.f.}$ & 0.43/1 & & &1.13/1 \\ $B$ & $-0.278(28)$ & & & $-0.346(30)$ \\ \hline \end{tabular*} \end{table} \begin{table}[h] \caption{Extrapolation to infinite volume of $q_\mathrm{EA}(L,T)$ using the replicon exponent, eq.~(\ref{eq:qEA-inf}). 
We also include the confidence interval previously obtained in a non-equilibrium study~\cite{janus:09b}.\label{tab:qEA}} \lineup \begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}}ccc} \hline \multicolumn{1}{c}{\bfseries $L$ } & \multicolumn{1}{c}{\bfseries $T = 0.703$} & \multicolumn{1}{c}{\bfseries $T = 0.805$}\\ \hline 8 & 0.82461(83) & 0.7818(11)\0 \\ 12 & 0.79333(85) & 0.7412(11)\0 \\ 16 & 0.77300(75) & 0.71681(95) \\ 24 & 0.74027(71) & 0.67905(83) \\ 32 & 0.7174(14)\0 & 0.6535(16)\0 \\ \hline $L_\mathrm{min}$ & 16 & 16 \\ $\chi^2/\mathrm{d.o.f.}$ & 1.83/1 & 0.98/1 \\ $q_\mathrm{EA}$ & 0.538[11](6) & 0.447[12](6) \\ Bounds from \cite{janus:09b} & $0.474 \leq q_\mathrm{EA}\leq 0.637$ & $0.368\leq q_\mathrm{EA} \leq 0.556$ \\ \hline \end{tabular*} \end{table} \begin{table}[h] \caption{Determination of $L_\mathrm{c}$ in eq.~\eref{eq:qEA-inf} for several temperatures below $T_\mathrm{c}$. Errors are given as in \tref{tab:qEA}. The characteristic length $L_\mathrm{c}(T)$ scales as a correlation length when $T$ approaches $T_\mathrm{c}$ ($\nu\approx2.45$ from~\cite{hasenbusch:08b}). We warn the reader that the $\chi^2/\mathrm{d.o.f.}$ for the fits at $T=0.85$ and $0.90$ are, respectively, $2.6/1$ and $2.7/1$.}\label{tab:Lc} \lineup \begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}}lclc} \hline \multicolumn{1}{c}{$T$ } & \multicolumn{1}{c}{$L_\mathrm{c}^{1/\hat\nu}$} & \multicolumn{1}{c}{$L_\mathrm{c}$} & \multicolumn{1}{c}{$L_\mathrm{c}(T_\mathrm{c}-T)^\nu$}\\ \hline 0.703 & 1.253[10](32) & \01.78[4](11) & 0.197[4](13)\\ 0.75 & 1.448[12](34) & \02.58[6](16) & 0.210[4](13)\\ 0.805 & 1.731[14](44) & \04.08[9](27) & 0.221[5](15)\\ 0.85 & 2.023[16](54) & \06.09[13](42) & 0.222[5](15)\\ 0.90 & 2.514[21](66) & 10.63[22](71) & 0.230[5](15)\\ \hline \end{tabular*} \end{table} We present the values of $q_\mathrm{EA}(L)$ and the result of a fit to eq.~(\ref{eq:qEA-inf}) on \tref{tab:qEA}. 
As we can see, the errors due to the uncertainty in the exponent, denoted by $[\,\cdot\,]$, are greater than those caused by the statistical error in the individual points, $(\,\cdot\,)$. In fact, our data admit good fits for a very wide range of values in $1/\hat\nu$. For instance, if we try to input the value of the exponent obtained in the droplet-like extrapolation of the Binder parameter, $\zeta\sim0.12$ (see eq.~(\ref{eq:Binder-droplet}) and \tref{tab:Binder}), we still obtain a good fit, even though the extrapolated value for $q_\mathrm{EA}$ is almost zero at $T=0.703$ and negative at $T=0.805$. Therefore, using the droplet exponent $\zeta$ the spin-glass phase would be non-existent. Also included in \tref{tab:qEA} is the confidence interval for this observable computed from non-equilibrium considerations in~\cite{janus:09b}. Notice that the equilibrium values are much more precise, but consistent. The extrapolations included in this table (and analogous ones for other values of $T$) are plotted on \fref{fig:qEA-T}. We remark that the estimate of $q_\mathrm{EA}$ from eq.~(\ref{eq:qEA-inf}) is fully compatible with the results of a Finite-Size Scaling analysis of the conditional correlation functions~\cite{janus:10b}. Interestingly enough the estimate of $q_\mathrm{EA}$ provides a determination of the correlation-length in the spin glass phase. The reader might be surprised that a correlation length can be defined in a phase where correlations decay algebraically. Actually, finite size effects are ruled by a crossover length $L_\mathrm{c}(T)$~\cite{josephson:66}, that scales as a correlation length (i.e. $L_\mathrm{c}(T)\propto (T_\mathrm{c}-T)^{-\nu}$). In fact, one would expect $q_\mathrm{EA}(T,L)/q_\mathrm{EA}(T)=1+ h[L/L_\mathrm{c}(T)]$. The only thing we know about the crossover function is that it behaves for large $x$ as $h(x)\sim x^{-1/\hat\nu}$. 
Making the simplest ansatz $h(x)= x^{-1/\hat\nu}$, the amplitude for the finite-size corrections in eq.~\eref{eq:qEA-inf} can be interpreted as a power of the crossover length $L_\mathrm{c}(T)$ (see \tref{tab:Lc}). We note that our determination of $L_\mathrm{c}(T)$ really scales as a bulk correlation length, with $T_\mathrm{c}$ and $\nu$ taken from~\cite{hasenbusch:08b}. It turns out to be remarkably small at $T=0.703$. The above argument tells us that good determinations of $q_\mathrm{EA}(T)$ are possible, provided that $L\gg L_\mathrm{c}(T)$. Yet, finite-size scaling can be used as well to extrapolate $q_\mathrm{EA}(T,L)$ to the large-volume limit, even closer to $T_\mathrm{c}$, where $L$ becomes {\em smaller} than $L_\mathrm{c}$. This somewhat unconventional use of finite-size scaling was started in Refs.~\cite{luescher:91,kim:93,caracciolo:95,caracciolo:95b}, and has also been used in the spin-glass context~\cite{palassini:99,jorg:06}. Most of the time, these ideas are applied in the paramagnetic phase, but we show below how to implement them in the low-temperature phase. Close to $T_\mathrm{c}$, we know that \begin{equation}\label{eq:qEA-inf-FSS} q_\mathrm{EA}^\infty(T) = \lambda (T_\mathrm{c}-T)^\beta [1+ \mu (T_\mathrm{c}-T)^{\omega\nu}+\ldots]\,. \end{equation} We have excellent determinations of $T_\mathrm{c}$ and $\beta$ from the work in~\cite{hasenbusch:08b}, so we only need to estimate the amplitude $\lambda$. In fact, Wegner's confluent corrections $(T_\mathrm{c}-T)^{\omega\nu}$ are small close to $T_\mathrm{c}$. To proceed, we note that finite-size scaling tells us that \begin{equation}\label{eq:qEA-FSS} q_\mathrm{EA}(L,T) = L^{-\beta/\nu} F(x)[1+ L^{-\omega} G(x)+\ldots],\qquad x= L^{1/\nu} (T_\mathrm{c}-T), \end{equation} where the critical exponents are (from~\cite{hasenbusch:08b}), \begin{equation} \nu = 2.45(15),\qquad \beta = 0.77(5),\qquad \omega = 1.0(1).
\end{equation} In order to connect eq.~\eref{eq:qEA-FSS} with the infinite-volume limit in eq.~\eref{eq:qEA-inf-FSS} the asymptotic behaviour of the scaling functions $F(x)$ and $G(x)$ must be for large $x$ \begin{equation} F(x) \sim x^\beta,\qquad G(x)\sim x^{\omega\nu}. \end{equation} The resulting scaling plot is represented on \fref{fig:qEA-FSS}. Varying the values of $T_\mathrm{c}$ and the critical exponents inside their error margins does not make significant changes in the plot. Notice how the curves collapse for small values of the scaling variable $x$ and large $L$, but how for our lowest temperatures scaling corrections become important. In fact, eq.~\eref{eq:qEA-FSS} implies that when the temperature is lowered away from $T_\mathrm{c}$ the amplitude for scaling corrections grows as $x^{\omega\nu} \approx x^{2.45}$. \begin{figure}[t] \centering \includegraphics[height=0.7\linewidth,angle=270]{qEA-FSS} \caption{Scaling plot of $y=q_\mathrm{EA}(L,T)L^{\beta/\nu}$ in the critical region below $T_\mathrm{c}$, following eq.~(\ref{eq:qEA-FSS}) and using the values given in~\cite{hasenbusch:08b} for the critical exponents and $T_\mathrm{c}$. \emph{Inset:} Close-up of the region near $T_\mathrm{c}$ in the representation of eq.~(\ref{eq:qEA-FSS2}), showing a linear behaviour for large $L$.} \label{fig:qEA-FSS} \end{figure} In order to estimate the amplitude $\lambda$ we shall concentrate on the small-$x$ region where finite-size scaling corrections are smallest. Disregarding scaling corrections in eq.~\eref{eq:qEA-FSS}, \begin{equation}\label{eq:qEA-FSS2} \bigl(q_\mathrm{EA}(L,T) L^{\beta/\nu}\bigr)^{1/\beta}=F(x)^{1/\beta} \ \underset{x\to\infty}\longrightarrow\ x. \end{equation} The inset of \fref{fig:qEA-FSS} shows that we reach this asymptotic behaviour for $L\geq24$. 
Then, using the simplest parameterisation, $F(x) = (\lambda^{1/\beta} x+B)^\beta$, \begin{equation}\label{eq:qEA-FSS3} q_\mathrm{EA}(L,T) = \lambda (T_\mathrm{c}-T)^{\beta} \left[ 1+ \frac{\beta B}{\lambda^{1/\beta} (T_\mathrm{c}-T) L^{1/\nu}}+\ldots\right]\ . \end{equation} We can fit our $L=32$ data for $x<0.4$ (where the curves for $L=24$ and $L=32$ are compatible) and use the resulting value of $\lambda$ to extrapolate in eq.~(\ref{eq:qEA-FSS3}) to infinite volume. This extrapolation is represented as a function of $T$ on \fref{fig:qEA-T}. It is clear that this critical extrapolation differs from the extrapolation based on \eref{eq:qEA-inf} by at most two standard deviations. The difference, if any, could be explained as Wegner's confluent corrections. However, to make any strong claim on confluent corrections, one would need to estimate the error in the critical extrapolation. Unfortunately, we have found that this error estimate is quite sensitive to the statistical correlation between $T_\mathrm{c}$, $\nu$, and $\beta$ (as far as we know, the corresponding covariance matrix has not been published). One could be tempted to compare eq.~\eref{eq:qEA-FSS3} with eq.~\eref{eq:qEA-inf} and conclude $\hat\nu=\nu$. We observe that, at the numerical level, $\nu=2.45(15)$~\cite{hasenbusch:08b} and $\hat\nu = 2.6(3)$~\cite{janus:10b}. However, we do not regard this as airtight. Indeed, it is a consequence of our somewhat arbitrary parameterisation $F(x) = (\lambda^{1/\beta} x+B)^\beta$. To investigate this issue further, the small-$x$ region is not enough. One is interested in the asymptotic behaviour of $F(x)$ for large $x$, where unfortunately corrections to scaling are crucial. A careful study of the crossover region can be done only by considering corrections to scaling both at the critical temperature (at $q=0$) and below the critical temperature (at $q=q_\mathrm{EA}$).
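For completeness, eq.~\eref{eq:qEA-FSS3} is nothing but the large-$x$ binomial expansion of this parameterisation (recall $x=L^{1/\nu}(T_\mathrm{c}-T)$, so that $L^{-\beta/\nu}x^\beta=(T_\mathrm{c}-T)^\beta$):
\[
q_\mathrm{EA}(L,T)=L^{-\beta/\nu}\bigl(\lambda^{1/\beta}x+B\bigr)^\beta
=\lambda(T_\mathrm{c}-T)^\beta\left(1+\frac{B}{\lambda^{1/\beta}x}\right)^{\beta}
=\lambda(T_\mathrm{c}-T)^\beta\left[1+\frac{\beta B}{\lambda^{1/\beta}(T_\mathrm{c}-T)L^{1/\nu}}+\mathcal{O}(x^{-2})\right].
\]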
Finally, the reader could worry about the applicability of \eref{eq:qEA-inf-FSS} well below $T_\mathrm{c}$. The issue has been considered recently within the framework of droplet theory~\cite{moore:10}. It was found that \eref{eq:qEA-inf-FSS} is adequate for all $T<T_\mathrm{c}$ (actually, no Wegner's scaling corrections were discussed in~\cite{moore:10}). Thus, the fact that our data are describable as scaling behaviour with the leading Wegner correction does not imply that they are not representative of the low-temperature phase. \section{Conditional correlation functions}\label{SECT:COND} \begin{figure}[t] \centering \includegraphics[height=\linewidth,angle=270]{C4} \caption{Spatial correlation function $C_4\bigl(\, (r,0,0)|q=0\bigr)$ at $T=0.703$. We show on the right panel a rescaled version using the replicon exponent $\theta=0.38$ and the scaling variable $r/L$.} \label{fig:C4} \includegraphics[height=0.7\linewidth,angle=270]{C4sub} \caption{Subtracted correlation function, eq.~\eref{eq:C4-substraida}, in units of $1/L^{\theta(0)}$ as a function of $q^2$. We took the non-equilibrium determination of the replicon exponent, $\theta(0)=0.38(2)$~\cite{janus:09b}.}\label{fig:C4-substraida} \end{figure} Let us consider the conditional spatial correlation function $C_4(r|q)$, eq.~(\ref{eq:C4-q}). A thorough study in Fourier space is performed in~\cite{janus:10b}. Here, we provide some complementary information, concentrating on real space and considering as well the statistical fluctuations of the correlators. We first concentrate on $q=0$, the region where the droplet and RSB theories most differ. In \fref{fig:C4}--left we show $C_4(r|q=0)$ for $T=0.703$, which is seen to tend to zero for large $r$. Furthermore, if we use the droplet scaling of eq.~(\ref{eq:C4-droplets}), we see that we need to rescale the correlation function by a factor $L^{\theta(0)}$, with $\theta(0)=0.38(2)$ the replicon exponent, in order to collapse the curves.
As for other values of $q$, we may consider the differences \begin{equation}\label{eq:C4-substraida} C_4(r=L/4|q)-C_4(r=L/2|q)\sim\frac{1}{L^{\theta(q)}}\,, \end{equation} where the subtraction takes care of the large-$r$ background in $C_4(r|q)$. As we show in \fref{fig:C4-substraida}, the subtracted correlation function scales in the range $q^2<0.2$ as $L^{-\theta(0)}$. This implies that the connected correlation functions $C_4(r|q)-q^2$ decay algebraically for large $r$ (a similar conclusion was reached in~\cite{contucci:09}). On the other hand, for $q^2= q_\mathrm{EA}^2\approx 0.3$, the exponent $\theta(q)$ is definitively larger than $\theta(0)$ (a detailed analysis indicates $\theta(q_\mathrm{EA})\sim 0.6$~\cite{janus:10b}). The crossover from the scaling $C_4(r=L/4|q)-C_4(r=L/2|q)\sim 1/L^{\theta(0)}$ to $C_4(r=L/4|q)-C_4(r=L/2|q)\sim 1/L^{\theta(q_\mathrm{EA})}$ can be described by means of Finite Size Scaling~\cite{janus:10b}. Recalling that $\overline{\langle Q_\mathrm{link}\rangle} = C_4(r\!=\!1)$, we can consider the spatial correlation as a sort of generalisation of the link overlap. In this sense it is worth recalling that in a mean-field setting fixing $q^2$ also fixes $Q_\mathrm{link}$. In a three-dimensional RSB system one would, therefore, expect the conditional variance $\mathrm{Var}(Q_\mathrm{link} | q)$, eq.~(\ref{eq:var-q}), to tend to zero for large lattices~\cite{contucci:06}. The first panel of \fref{fig:var-qlink} demonstrates that this is the case in our simulations, where we find that $\mathrm{Var}(Q_\mathrm{link}|q) \sim L^{-D/2}$. We can extend this result to $r>1$ by considering the conditional variances of $C_4$. Notice that, unlike $Q_\mathrm{link}$, $C_4$ is already defined as an averaged quantity in eq.~(\ref{eq:C4}) and not as a stochastic variable, so speaking of its variance is either trivial or an abuse of language. 
However, to avoid clutter, we have maintained the notation $\mathrm{Var}\bigl(C_4(r) |q\bigr)$, as its intended meaning is clear. These are plotted in \fref{fig:var-qlink}, where we see that they decrease even faster than $\mathrm{Var}(Q_\mathrm{link}|q)$, with a power of $L$ that does not seem to depend on $r$. \begin{figure} \begin{minipage}{.5\linewidth} \includegraphics[height=\linewidth,angle=270]{Var-qlink-070256} \end{minipage} \begin{minipage}{.5\linewidth} \includegraphics[height=\linewidth,angle=270]{Var-C4-1-070256} \end{minipage} \begin{minipage}{.5\linewidth} \includegraphics[height=\linewidth,angle=270]{Var-C4-2-070256} \end{minipage} \begin{minipage}{.5\linewidth} \includegraphics[height=\linewidth,angle=270]{Var-C4-4-070256} \end{minipage} \caption{Plots of the conditional variance at fixed $q$ of $Q_\mathrm{link}$ and $C_4(r)$ at $T=0.703$, rescaled by appropriate powers of $L$ (we chose exponents that provided a good scaling at $q=0$). The abscissas correspond to $q$ in units of $q_\mathrm{EA}(L,T=0.703)$.}\label{fig:var-qlink} \end{figure} \section{Non-equilibrium vs. equilibrium}\label{SECT:EQUILIBRIUM-DYNAMICS} In reference~\cite{janus:08b}, we suggested the existence of a time-length dictionary, relating results in the thermodynamic limit at finite time $t_\mathrm{w}$ with equilibrium results for finite size $L$. The matching for $T=0.7$ was $L \approx 3.7 \xi(t_\mathrm{w})$, where $\xi(t_\mathrm{w})$ is the coherence length at time $t_\mathrm{w}$. The comparison there was restricted to $L\leq 20$. The expectation value $\mathrm{E}(Q_\mathrm{link} | q)$ was compared with the correlation function $C_{2+2}(r=1)$ (recall the definitions in \sref{SECT:DEF-CORR-DINAMICA}). We also predicted that the equilibrium data for $L=33$ would match our non-equilibrium results for $t_\mathrm{w}=2^{32}$.
Using the same time-length dictionary our $L=32$ simulations would correspond to $t_\mathrm{w} \approx 2^{31}$ and those for $L=24$ would correspond to $t_\mathrm{w} \approx 2^{26}$. Now, recalling that $\overline{\langle Q_\mathrm{link}\rangle}$ is merely $C_4(r\!=\!1)$, it is natural to extend this correspondence between $C_4(r)$ and $C_{2+2}(r)$ to $r>1$. Of course, care must be exercised because $C_4$ in a finite lattice cannot be computed beyond $r=L/2$, while $C_{2+2}$ is defined for arbitrary $r$. However, the matching is very accurate, even for $r$ dangerously close to $L/2$, see \fref{fig:Estatica-vs-Dinamica}. It is interesting to point out that the off-equilibrium results of~\cite{janus:08b} and our equilibrium simulations have similar precision, even though the latter required about twenty times more computation time on Janus, not to mention a much more complicated simulation protocol. In this sense we arrive at the conclusion that simulating the dynamics may be the best way to obtain certain equilibrium quantities. On the other hand, only the equilibrium simulations give access to the crucial $C(t,t_\mathrm{w})=0$ physics. We may now wonder about the experimentally relevant scale of one hour ($t_\mathrm{w} \sim 3.6 \times 10^{15}$, taking one MC step as one picosecond~\cite{mydosh:93}). Assuming a power-law behaviour, $\xi(t_\mathrm{w}) = A t_\mathrm{w}^{1/z(T)}$, with $z(0.64T_\mathrm{c})=11.64(15)$~\cite{janus:09b}, we conclude that the correspondence is 1 hour $\longleftrightarrow L \approx 110$. Note, see for instance \fref{fig:Binder-L}, that $L=110$ is close enough to $L=32$ to allow a safe extrapolation. Let us finally stress that the modified droplet scaling for $\xi(t_\mathrm{w})$~\cite{bouchaud:01} would predict that one hour of physical time would correspond to equilibrium data on $L$ even smaller than 110. 
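The arithmetic behind the 1 hour $\longleftrightarrow L\approx 110$ estimate can be sketched as follows. The amplitude of the power law is pinned here through the quoted matching $L=33\leftrightarrow t_\mathrm{w}=2^{32}$ and the $T=0.7$ dictionary $L\approx 3.7\,\xi(t_\mathrm{w})$; treating these round numbers as exact is an assumption of this sketch:

```python
# Sketch of the time-length arithmetic: power law xi(t_w) = A * t_w**(1/z),
# with z(0.64 Tc) = 11.64 and the amplitude fixed by L = 33 <-> t_w = 2**32
# via the T = 0.7 dictionary L ~ 3.7 * xi(t_w) (round numbers assumed exact).
z = 11.64
xi_ref = 33.0 / 3.7            # xi at the reference time t_w = 2**32
t_ref = 2.0**32
t_hour = 3.6e15                # one hour, with one MC step ~ 1 ps

xi_hour = xi_ref * (t_hour / t_ref) ** (1.0 / z)
L_hour = 3.7 * xi_hour
print(L_hour)                  # close to the L ~ 110 quoted in the text
```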
Indeed, according to these authors the time needed to reach some coherence length $\xi(t_\mathrm{w})$ grows as \begin{equation} t_\mathrm{w} \sim \tau_0 \xi^{z_\mathrm{c}} \exp\left(\frac{ Y(T) \xi^\psi}{T}\right)\,, \label{MS} \end{equation} where $\tau_0$ is the microscopic time associated with the dynamics; $z_\mathrm{c}$ is the dynamical critical exponent computed at the critical point; $\psi$ is the exponent that takes the free-energy barriers into account (from the dynamical point of view); and $Y(T)=Y_0 (1-T/T_\mathrm{c})^{\psi \nu}$, where $\nu$ is the static critical exponent associated with the coherence length. Near the critical point $Y(T) \to 0$ and power-law critical dynamics is recovered. On the other hand, if we stay below $T_\mathrm{c}$, eq.~(\ref{MS}) predicts an algebraic growth of $t_\mathrm{w}$ with $\xi(t_\mathrm{w})$ only for very small coherence lengths. As the coherence length grows, however, the time needed to reach it diverges exponentially in $\xi(t_\mathrm{w})$. \begin{figure} \centering \begin{minipage}{.48\linewidth} \includegraphics[height=\linewidth,angle=270]{Dinamica-vs-Estatica-L24} \end{minipage} \begin{minipage}{.48\linewidth} \includegraphics[height=\linewidth,angle=270]{Dinamica-vs-Estatica-L32} \end{minipage} \caption{Equilibrium $C_4(r|q)$ as a function of $q^2$ (lines) for $L=24$ (left) and $L=32$ (right) lattices at $T=0.703$. We compare with non-equilibrium data from~\cite{janus:09b} (points) of $C_{2+2}(r,t,t_\mathrm{w})$ as a function of $C^2(t,t_\mathrm{w})$ for $t_\mathrm{w}=2^{26}$ (left) and $t_\mathrm{w}=2^{31}$ (right) (see \sref{SECT:DEF-CORR-DINAMICA} for definitions).
The errors in both sets of data are comparable, and smaller than the point size.} \label{fig:Estatica-vs-Dinamica} \end{figure} \section{The link overlap}\label{SECT:LINK-OV} We shall address here three separate problems: overlap equivalence (\sref{SECT:OVERLAP-EQUIVALENCE}), replica equivalence (\sref{SECT:REPLICA-EQUIVALENCE}), and the scaling of the link susceptibility (\sref{SECT:LINK-SUSCEPTIBILITY}). \subsection{Overlap equivalence}\label{SECT:OVERLAP-EQUIVALENCE} As we have discussed previously, it has been proposed~\cite{contucci:05b,contucci:06} that attention should be shifted from the spin-overlap (the primary object for mean-field systems) to the link overlap (the would-be primary object below the upper critical dimension). Two requirements should be met for this change of variable to be feasible: \begin{enumerate} \item The conditional variance $\mathrm{Var}(Q_\mathrm{link}|q)$ must vanish in the large $L$ limit. \item The conditional expectation $\mathrm{E}(Q_\mathrm{link}|q)$ should be a strictly increasing function of $q^2$. \end{enumerate} The scaling with $L$ of $\mathrm{Var}(Q_\mathrm{link}|q)$, \sref{SECT:COND}, does suggest that the first requirement holds. We shall investigate here the second requirement. We remark that the RSB theory expects it to hold, while the droplet theory does not. Furthermore, this point is actually the only disagreement between the RSB and TNT pictures. In fact, RSB expects the derivative $\mathrm{d}\mathrm{E}(Q_\mathrm{link}|q)/\mathrm{d}q^2$ never to vanish. On the other hand, TNT supporters expect this derivative to scale as $L^{D_s-D}$, where $D_s$ represents the (would-be) fractal dimension of the surface of the spin-glass domains. In $D\!=\!3$, $D-D_s\approx 0.44$~\cite{palassini:00}. \begin{table}[t] \caption{Coefficients $c_2^{(2m)}$ in the fit to eq.~(\ref{eq:fit-derivative}), for various orders of the fitting polynomial, at $T=0.703$ and $0.625$.
This coefficient is interpreted as $\bigl[\mathrm{d}\mathrm{E}(Q_\mathrm{link}|q)/\mathrm{d}q^2\bigr]_{q^2=0}$. We report as well the results for fits of the form $c_2^{(4)}= A/L+c$ (centre) and $c_2^{(4)}= B/L^{0.44}+d$ (bottom). For both fits, we also provide the extrapolation to $L\!=\!110$ which, according to the time-length dictionary, corresponds to the experimentally relevant length scale.} \label{tab:c2} \lineup \begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}}cccccccc} \hline && {\bfseries $T = 0.703$} & & &&{\bfseries $T = 0.625$} &\\ \hline {\bfseries $L$ } & $c_2^{(2)}$ & $c_2^{(4)}$ & $c_2^{(6)}$ && $c_2^{(2)}$ & $c_2^{(4)}$ & $c_2^{(6)}$\\ \hline 8 & 0.403(5) & 0.405(16) & 0.43(3)& & 0.414(7) & 0.423(19) & 0.45(4)\\ 12 & 0.317(5) & 0.321(14) & 0.35(3)& & 0.331(6) & 0.335(18) & 0.36(3)\\ 16 & 0.271(4) & 0.262(11) & 0.26(2)& & 0.282(6) & 0.275(16) & 0.28(3)\\ 24 & 0.224(5) & 0.222(15) & 0.22(3)& & 0.231(5) & 0.220(14) & 0.22(3)\\ 32 & 0.199(6) & 0.201(18) & 0.20(4)& & --- & --- & --- \\ \hline $\chi^2/\mathrm{d.o.f.}$ && 0.57/3 &&&& 0.46/2 & \\ $A$ && 2.23(21) &&&& 2.46(27) & \\ $c$ && $\ \ \, 0.129(16)$ &&&& $\ \ \,0.121(21)$ & \\ $L=110$ && $\ \ \, 0.149(14)$ &&&& $\ \ \,0.143(19)$ &\\ \hline $\chi^2/\mathrm{d.o.f.}$ && 2.39/3 &&&& 0.18/2 & \\ $B$ && 1.45(11) &&&& 1.32(15) & \\ $d$ && $-0.06(3)$ &&&& $-0.11(5)$ & \\ $L=110$ && $\ \ \, 0.082(20)$&&&& $\ \ \, 0.058(28)$ &\\ \hline \end{tabular*} \end{table} \begin{table} \caption{$C(r=1 | q)$ for $q=0$ and $q=q_\mathrm{EA}$ for all our system sizes at $T=0.703$. For each $L$, we include the correlation coefficient between both values of $q$. 
Specifically, for two quantities $A$ and $B$, $\mathcal{R}_{AB} = \overline{( \langle A\rangle - \overline{\langle A\rangle}) ( \langle B\rangle - \overline{\langle B\rangle})} / \sqrt{\overline{(\langle A\rangle - \overline{\langle A\rangle})^2} \ \overline{(\langle B\rangle - \overline{\langle B\rangle})^2}} $ }\label{tab:C4_r_1} \lineup \begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}}cccr} \hline \multicolumn{1}{c}{$L$ } & \multicolumn{1}{c}{ $C(1 |0)$} & \multicolumn{1}{c}{ $C(1 |q_\mathrm{EA})$} & \multicolumn{1}{c}{$\mathcal R$}\\ \hline 8 & 0.46138(82) & 0.57253(33) & 0.134\\ 12 & 0.51649(71) & 0.60390(28) & 0.051\\ 16 & 0.54552(60) & 0.62089(22) & 0.060\\ 24 & 0.57573(77) & 0.63742(17) & $-0.119$ \\ 32 & 0.59131(94) & 0.64579(24) & 0.063\\ \hline \end{tabular*} \end{table} To estimate the derivative $\mathrm{d}\mathrm{E}(Q_\mathrm{link}|q)/\mathrm{d}q^2$, we observe that $E(Q_\mathrm{link}|q)$ is an extremely smooth function of $q^2$ (see the $r\!=\!1$ curves in \fref{fig:Estatica-vs-Dinamica}). Hence we can attempt a polynomial fit: \begin{equation}\label{eq:fit-derivative} \mathrm{E}(Q_\mathrm{link}|q)-\mathrm{E}(Q_\mathrm{link}|q=0)=\sum_{k=1}^m\, c_{2k}^{(2m)} q^{2k}\,. \end{equation} In particular, the coefficient $c_2^{(2m)}$ provides an estimate of $\mathrm{d}E(Q_\mathrm{link}|q)/\mathrm{d}q^2$ at $q^2=0$. Playing with the order $2m$ of the polynomials, one can control systematic errors. Mind that it is very important to fit the {\em difference} $\mathrm{E}(Q_\mathrm{link}|q)-\mathrm{E}(Q_\mathrm{link}|q=0)$, which, due to statistical correlations, has much reduced statistical errors. On the other hand, data for different $q$ are so strongly correlated that standard fitting techniques are inappropriate. We thus used the approach explained in ref.~\cite{janus:09b}. The results, see \tref{tab:c2}, indicate that $c_2^{(4)}$ offers a reasonable compromise between systematic and statistical errors. 
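The $c+A/L$ extrapolation reported in the middle part of \tref{tab:c2} can be reproduced with a simple weighted linear fit of the $c_2^{(4)}$ column at $T=0.703$; the sketch below uses a diagonal $\chi^2$, i.e. it neglects the correlations between different sizes:

```python
import numpy as np

# Sketch: weighted linear fit c2(L) = c + A/L of the c2^(4) column of tab:c2
# at T = 0.703, with diagonal weights 1/sigma^2 (size-size correlations neglected).
L = np.array([8.0, 12.0, 16.0, 24.0, 32.0])
c2 = np.array([0.405, 0.321, 0.262, 0.222, 0.201])
err = np.array([0.016, 0.014, 0.011, 0.015, 0.018])

x = 1.0 / L
w = 1.0 / err**2
xm, ym = np.average(x, weights=w), np.average(c2, weights=w)
A = np.sum(w * (x - xm) * (c2 - ym)) / np.sum(w * (x - xm) ** 2)
c = ym - A * xm
print(A, c, c + A / 110.0)   # compare A = 2.23(21), c = 0.129(16), 0.149(14) in tab:c2
```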
Once we have the derivatives in our hands, we may try to extrapolate them to large $L$ by means of an RSB fit ($A/L+c$, middle part of \tref{tab:c2}) or using a TNT fit ($B/L^{0.44}+d$, bottom part of \tref{tab:c2}). Both functional forms produce reasonable fits. As expected, the $1/L$ extrapolation to $L\!=\!\infty$ yields a non-vanishing derivative, while the $1/L^{0.44}$ extrapolation suggests that, for large $L$, $\mathrm{E}(Q_\mathrm{link}|q)$ is constant as $q^2$ varies. We remark as well that the very same conclusion was reached in the analysis of the non-equilibrium temporal correlation functions~\cite{janus:09b}. However, we have far more accurate data at our disposal than the derivative $\mathrm{d}E(Q_\mathrm{link}|q)/\mathrm{d}q^2$ at $q^2=0$, namely the correlation functions themselves. In table~\ref{tab:C4_r_1} we give our estimates for $C(r=1 | q=0)$ and $C(r=1 | q=0.523\approx q_\mathrm{EA})$. According to a TNT picture of the SG phase, the two correlation functions should be equal. As the reader can check, an infinite-volume extrapolation with $L^{-0.44}$ corrections is untenable for both correlation functions (even if we discard the two smallest sizes). The same conclusions hold replacing $L$ with \begin{equation} \ell=\pi/\sin(\pi/L)\,,\label{eq:def-ell} \end{equation} which is more natural for lattice systems. Yet, it could be argued that our data are preasymptotic. Hence, we may try a TNT extrapolation including scaling corrections: \begin{equation}\label{eq:TNT-corrections} C(r=1 |q) = C_\infty + A_q L^{-0.44} ( 1+ B_q L^{-y})\,. \end{equation} We have performed a joint fit of the data on table~\ref{tab:C4_r_1} to eq.~\eref{eq:TNT-corrections}. The fitting parameters were the four amplitudes $A_0,B_0,A_{0.523}$ and $B_{0.523}$, the common scaling-corrections exponent $y$ and the common large-$L$ extrapolation $C_\infty$.
We take into account the (almost negligible) correlation in data for the same $L$ by computing $\chi^2$ with the covariance matrix, which can be reconstructed from the data on \tref{tab:C4_r_1}. The result is (notice the highly asymmetric errors) \begin{equation} C_\infty = 0.677^{+0.012}_{-0.005}, \qquad y = 0.57^{+0.26}_{-0.08}, \qquad \chi^2 /\mathrm{d.o.f.} = 9.1/4. \end{equation} Were the functional form in eq.~\eref{eq:TNT-corrections} correct, the probability of $\chi^2$ being even larger than we found would be only $6\%$. On the other hand, in an RSB setting, one would expect $C(r\!=\!1|q)$ to scale as $1/L$, with a $q$-dependent infinite-volume value $C_\infty(q)$. Indeed, if we fit the data on \tref{tab:C4_r_1} to $C(1|q) = C_\infty(q) + A /\ell$ we obtain \begin{eqnarray} C_\infty(q=0) &=& 0.6349(8), \qquad \chi^2/\mathrm{d.o.f.} = 3.63/3,\\ C_\infty(q=q_\mathrm{EA}) &=& 0.6711(2),\qquad \chi^2/\mathrm{d.o.f.} = 2.86/3. \end{eqnarray} We note as well that $[C_\infty(q=q_\mathrm{EA})-C_\infty(q=0)]/q_\mathrm{EA}^2\approx 0.132$, in fair agreement with the $1/L$ extrapolation for the derivative in \tref{tab:c2}. However, more important than the extrapolation to $L\!=\!\infty$ is the extrapolation to $L\!=\!110$, the length scale that, for $T\!=\!0.7$, matches the experimental time scales. For $T\!=\! 0.625$, $L\!=\!110$ is surely larger than the relevant length scale but, unfortunately, the time-length dictionary at such a low temperature still needs to be tuned. As can be seen in the middle and bottom parts of \tref{tab:c2}, the two extrapolations yield a non-vanishing derivative. Thus, whichever standpoint is adopted, the RSB and TNT theories lead to the same conclusion: at the experimentally relevant length scales, overlap equivalence can be assumed.
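The $q=0$ fit just quoted can also be reproduced with a diagonal weighted least-squares; the sketch below neglects the small correlations mentioned above:

```python
import numpy as np

# Sketch: RSB-type fit C(1|q=0) = C_inf + A/ell, ell = pi/sin(pi/L), using the
# T = 0.703 data of table tab:C4_r_1 with diagonal weights (correlations neglected).
L = np.array([8.0, 12.0, 16.0, 24.0, 32.0])
C = np.array([0.46138, 0.51649, 0.54552, 0.57573, 0.59131])
err = np.array([0.00082, 0.00071, 0.00060, 0.00077, 0.00094])

x = 1.0 / (np.pi / np.sin(np.pi / L))   # 1/ell
w = 1.0 / err**2
xm, Cm = np.average(x, weights=w), np.average(C, weights=w)
A = np.sum(w * (x - xm) * (C - Cm)) / np.sum(w * (x - xm) ** 2)
C_inf = Cm - A * xm
print(C_inf)   # compatible with the quoted C_inf(q=0) = 0.6349(8)
```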
\subsection{Replica equivalence}\label{SECT:REPLICA-EQUIVALENCE} \begin{figure}[b] \centering \begin{minipage}{.48\linewidth} \includegraphics[height=\linewidth,angle=270]{estabilidad_estocastica} \end{minipage} \begin{minipage}{.48\linewidth} \includegraphics[height=\linewidth,angle=270]{estabilidad_estocastica_q2} \end{minipage} \caption{The ratios $R_\mathrm{link}$, eq.~(\ref{eq:R-link}), (left panel) and $R_{q^2}$, eq.~(\ref{eq:R-q2}), (right panel) versus $T$ for the different system sizes. The replica equivalence property implies that, in an RSB system below $T_\mathrm{c}$, $R_\mathrm{link} = 2/3$ in the large-$L$ limit. Recall that $T_\mathrm{c} \approx 1.1$.} \label{fig:estabilidad} \end{figure} We consider now the ratio \begin{equation}\label{eq:R-link2} R_{\mathrm{link}}=\frac{\overline{\langle Q_\mathrm{link}^2 \rangle\ -\ \langle Q_\mathrm{link} \rangle^2}}{\overline{\langle Q_\mathrm{link}^2 \rangle}\ -\ \overline{\langle Q_\mathrm{link} \rangle}^2}\,, \end{equation} defined in \sref{SECT:DEF-QLINK}. As was explained there, the RSB theory expects it to reach a constant value $2/3$ below $T_\mathrm{c}$, whereas the droplet and TNT theories lack a definite prediction. Our numerical data fit the RSB expectation very well (see \fref{fig:estabilidad}--left). Besides, we can also study a similar ratio, in which the mean-field substitution $ Q_\mathrm{link}\rightarrow q^2$ is performed: \begin{equation}\label{eq:R-q2} R_{q^2}=\frac{\overline{\langle q^4 \rangle\ -\ \langle q^2 \rangle^2}}{\overline{\langle q^4 \rangle}\ -\ \overline{\langle q^2 \rangle}^2}\,. \end{equation} Overlap equivalence suggests that $R_{q^2}$ approaches $2/3$ in the large $L$ limit (again, neither the droplet nor the TNT theories have a definite prediction). Our data at low temperatures seem compatible with the $2/3$ expectation, see \fref{fig:estabilidad}--right. On the other hand, the convergence to the thermodynamic limit seems considerably slower close to $T_\mathrm{c}$.
We recall that a previous computation concluded as well that violations of $R_{q^2}\!=\!2/3$ are due to critical fluctuations~\cite{marinari:00}. \subsection{Link susceptibility}\label{SECT:LINK-SUSCEPTIBILITY} \begin{figure} \centering \begin{minipage}{.48\linewidth} \includegraphics[height=\linewidth,angle=270]{var_qlink} \end{minipage} \begin{minipage}{.48\linewidth} \includegraphics[height=\linewidth,angle=270]{var_qlink_con_L} \end{minipage} \caption{(Left) Susceptibility $\chi_\mathrm{link}$ vs. temperature for the different system sizes. (Right) Behaviour of $\chi_\mathrm{link}$ with $L$ for different temperatures. Lines are power-law fits. The effective exponents found in each of these fits are reported in the legends.} \label{fig:chi_link} \end{figure} We show in \fref{fig:chi_link}--left the link susceptibility $\chi_\mathrm{link}$, eq.~(\ref{DEF:CHI-LINK}), as a function of temperature for different lattice sizes. It is clear enough that this susceptibility is divergent in the spin-glass phase and that the lower the temperature, the more violent the divergence. Hence, it is clear that this particular effect is not due to critical fluctuations. We perform a more quantitative study in \fref{fig:chi_link}--right. As discussed in \sref{SECT:DEF-QLINK}, according to RSB theory, one would expect $\chi_\mathrm{link}\sim L^D$ in the SG phase. We find evidence of a divergence. At and above $T_\mathrm{c}$, our data grow very softly with $L$ (at $T=1.3\approx 1.17 T_\mathrm{c}$, data seem to reach a limiting value). However, below $T_\mathrm{c}$, we observe an effective exponent that grows when we lower the temperature. We observe that the effective exponent, for our lattice sizes and temperatures, has already grown beyond the Chayes bound of $D/2$ but still has not reached the RSB expectation of $D$. Note that no existing theory of the spin-glass phase can accommodate a temperature-dependent exponent.
Therefore, the most economical scenario is that our lattice sizes are not large enough, so we are still in a preasymptotic regime for this quantity. \begin{table}[t] \caption{ Ratio $S_\mathrm{link}^{(2m)}$, eq.~\eref{eq:Slink}, for all our lattice sizes at $T=0.625,0.703$, using the coefficients from \tref{tab:c2}.} \label{tab:Slink} \lineup \begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}}cllcll} \cline{2-6} &\multicolumn{2}{c}{$T = 0.703$} & &\multicolumn{2}{c}{$T = 0.625$} \\ \hline {\bfseries $L$ } &\multicolumn{1}{c}{$S_\mathrm{link}^{(2)}$} &\multicolumn{1}{c}{$S_\mathrm{link}^{(4)}$} && \multicolumn{1}{c}{$S_\mathrm{link}^{(2)}$} & \multicolumn{1}{c}{$S_\mathrm{link}^{(4)}$} \\ \hline 8 & 0.838(21) & 0.846(67) & & 0.859(29) & 0.897(81) \\ 12 & 0.777(25) & 0.797(70) & & 0.801(29) & 0.821(88) \\ 16 & 0.755(22) & 0.706(59) & & 0.766(33) & 0.729(85) \\ 24 & 0.776(35) & 0.76(10) & & 0.745(32) & 0.675(86) \\ 32 & 0.816(49) & 0.83(15) & & \multicolumn{1}{c}{ ---} & \multicolumn{1}{c}{---} \\ \hline \end{tabular*} \end{table} Let us take a slightly different point of view. Rigorous theorems discussed in \sref{SECT:DEF-QLINK} tell us, eq.~(\ref{REP-EQUIVALENCE-SENCILLA}), that if \begin{equation} \lim_{L\to\infty} \chi_\mathrm{link}/ L^D >0\,, \end{equation} then the width $\sigma_{Q_\mathrm{link}}$ of the probability density function for $Q_\mathrm{link}$, \begin{equation} \sigma^2_{Q_\mathrm{link}}=\overline{\langle Q_\mathrm{link}^2\rangle} \ -\ \overline{\langle Q_\mathrm{link}\rangle}^2\,, \end{equation} will also be non-vanishing in the thermodynamic limit (see also \sref{SECT:REPLICA-EQUIVALENCE}). It is very important that the converse statement also holds.
Now, using the identity (\ref{eq:var-q-anchura}), we can split this variance into two contributions: \begin{eqnarray} \sigma^2_{Q_\mathrm{link}}&=&\int_{-\infty}^{\infty}\mathrm{d}q\ P(q)\left(\mathrm{Var}( Q_\mathrm{link}|q)\ +\ \left[\mathrm{E}\left(Q_\mathrm{link}|q\right)-\overline{\left\langle Q_\mathrm{link}\right\rangle}\right]^2\right)\,. \end{eqnarray} Since $\mathrm{Var}(Q_\mathrm{link}|q)$ scales as $L^{-D/2}$ (see~\cite{contucci:06} and \fref{fig:var-qlink}), only the second term may survive the large-$L$ limit. This suggests the definition of a modified link susceptibility: \begin{equation}\label{DEF:HAT-CHI-LINK} \hat\chi_\mathrm{link}=\frac{L^D\int_{-\infty}^{\infty}\mathrm{d}q\ P(q)\left[\mathrm{E}\left(Q_\mathrm{link}|q\right)-\overline{\left\langle Q_\mathrm{link}\right\rangle}\right]^2}{\overline{\left\langle q^4\right\rangle-\overline{\left\langle q^2\right\rangle}^2}}. \end{equation} According to RSB theory, $\hat\chi_\mathrm{link}$ should scale as $L^D$, whereas it would not diverge as violently in a droplet or TNT scenario. The rationale for dividing out the factor $\overline{\langle q^4\rangle-\overline{\langle q^2\rangle}^2}$ can be found in eq.~(\ref{eq:fit-derivative}). Assuming that the lowest-order polynomial is adequate, one finds that (of course, the particular value of the index $m$ should be immaterial) \begin{equation}\label{eq:estimacion} \hat\chi_\mathrm{link}\approx L^D \bigl[c_2^{(2m)}\bigr]^2\,. \end{equation} Hence, the TNT theory would expect $\chi_\mathrm{link}/L^D$ to tend to zero, just because it predicts that in the large-$L$ limit $c_2^{(2m)}=0$. Note that the droplet theory would predict a vanishing $\chi_\mathrm{link}/L^D$ for a different reason, namely because it expects that $\overline{\langle q^4\rangle-\overline{\langle q^2\rangle}^2}$ vanishes. Let us check to what extent the estimate \eref{eq:estimacion} is accurate.
We show on \tref{tab:Slink} the ratios \begin{equation}\label{eq:Slink} S_\mathrm{link}^{(2)}=\frac{L^D[c_2^{(2)}]^2}{\hat\chi_\mathrm{link}}\,,\qquad S_\mathrm{link}^{(4)}=\frac{L^D[c_2^{(4)}]^2}{\hat\chi_\mathrm{link}}\,. \end{equation} Referring again to eq.~\eref{eq:fit-derivative}, it is clear that the contribution linear in $q^2$ explains a large fraction of $\hat\chi_\mathrm{link}$, and that this fraction is not likely to vanish in the large-$L$ limit. Hence the question of whether $\chi_\mathrm{link}$ diverges as $L^D$ or not turns out to be strictly equivalent to that of overlap equivalence, which we discussed at length in \sref{SECT:OVERLAP-EQUIVALENCE}. Our interpretation is that the effective scaling in \fref{fig:chi_link}--right is mostly due to strong finite-size effects in $c_2^{(2m)}$. In this light, the effective exponents reported in \fref{fig:chi_link}--right are preasymptotic. In fact, the ratio $A/c$ is large ($c^{(2m)}_2(L) = c + A/L$, see \tref{tab:c2}), which tells us that for $\chi_\mathrm{link}$ and related quantities finite-volume corrections are particularly large and naive power-law fits may give wrong results. Let us conclude this section by checking how these quantities behave in a 2$D$ Ising ferromagnet (i.e. with no disorder built in). Although this model is clearly too simple, it is also true that, to our knowledge, the quantities investigated here have not been looked at before. Hence, it is interesting to see what happens even in this simple case. We use two replicas to compute $\chi_\mathrm{link}$. Results for $\chi_\mathrm{link}$ are presented on \tref{tab:2Dising} for two different temperatures below the critical temperature $T_\mathrm{c}$. There we can see that $\chi_\mathrm{link}$ approaches a limiting $\mathcal O(L^0)$ value when $L$ grows. Furthermore, the limiting value decreases when lowering the temperature away from $T_\mathrm{c}$.
Hence, a divergent link susceptibility below $T_\mathrm{c}$ is something that should {\em not} be taken for granted. \begin{table}[h] \caption{$\chi_\mathrm{link}$ in the 2$D$-Ising model ($T_\mathrm{c}=2/\log(1+\sqrt{2})\approx 2.26918531\ldots$).}\label{tab:2Dising} \lineup \begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}}ccc} \hline \multicolumn{1}{c}{\bfseries $L$ } & \multicolumn{1}{c}{\bfseries $T = 0.992T_\mathrm{c}$} & \multicolumn{1}{c}{\bfseries $T = 0.986T_\mathrm{c}$}\\ \hline 8 & 7.16(1)\0 & 7.091(8)\0 \\ 12 & 8.776(6) & 8.614(15)\\ 16 & 10.247(12) & 9.868(9)\0\\ 24 & 11.47(2)\0\0 & 10.51(2)\0\0\0\\ 32 & 12.058(15) & 10.639(12)\0\\ \hline \end{tabular*} \end{table} \section{Conclusions}\label{SECT:CONCLUSIONS} We have obtained equilibrium configurations of the Ising spin glass ($D\!=\!3$, $\pm 1$ Edwards-Anderson model) on large lattices at low temperatures ($T=0.64 T_\mathrm{c}$ for $L=32$, $T=0.56 T_\mathrm{c}$ for $L=24$, and even lower temperatures for smaller systems, see \tref{tab:parameters}). This unprecedented computation has been made possible by the Janus computer. However, the parallel tempering algorithm had never before been put under such stress, and we have devoted a large effort to convince ourselves that thermalisation was achieved. New thermalisation tests were devised. Furthermore, a new simulation strategy had to be employed: the simulation time needs to be tailored sample by sample (since one cannot afford to adopt worst-case parameters). The main conclusion we draw is that the correspondence between equilibrium results and non-equilibrium dynamics (which is much easier to compare with experimental work) is deeper than anticipated. In fact, one can construct a time-length dictionary, such that equilibrium correlation functions on finite systems match non-equilibrium correlators at finite time (but infinite system size).
The evidence for this correspondence consists of: (i) quantitative comparison of the spatial correlation functions and (ii) the analysis of overlap equivalence in equilibrium (this work) and non-equilibrium settings~\cite{janus:09b}. In addition, there is a remarkable coincidence between the replicon exponent obtained from equilibrium methods~\cite{janus:10b}, and from non-equilibrium dynamics~\cite{janus:08b,janus:09b}. The unavoidable consequence of this time-length correspondence is that the system size that is relevant for the experimental work (time scales of one hour, say) at $T\!=\!0.64 T_\mathrm{c}$ is not infinite, but $L\!=\!110$. Note that this correspondence was obtained assuming a power-law growth with time of the spin-glass coherence length in experimental samples. Should the modified droplet scaling for $\xi(t_\mathrm{w})$ hold~\cite{bouchaud:01}, the relevant equilibrium system size would be even smaller. It is obvious that extrapolating numerical data from $L\!=\!32$ to $L\!=\!110$ is far less demanding than extrapolating them to infinite size. All such extrapolations in this work (even those assuming droplet scaling) were conclusive. The only effective theory that is relevant at experimental time scales is Replica Symmetry Breaking. However, the question of whether RSB is only an effective theory in $D\!=\!3$ or a fundamental one does not lack theoretical interest. We have attempted several extrapolations to infinite system size in this work, finding that droplet theory is ruled out, unless a change of regime arises for system sizes much larger than the $L\!=\!32$ we have reached. We remark that in Sect.~\ref{sect:picos} we have numerically determined a crossover length that governs finite-size effects. As expected for a large enough system, it scales with temperature as a {\em bulk} correlation length.
However, on the basis of numerical data alone, one can never rule out that new behaviour might appear for much larger system sizes, irrelevant for current experimental work. We found three contradictions with droplet theory. First, in order to have a trivial Binder cumulant, finite-size corrections had to be of order $\sim L^{-0.11}$. Such finite-size corrections would imply a vanishing, or even negative, spin-glass order parameter $q_\mathrm{EA}$. Second, according to droplet theory (see~\cite{bray:87}, page 139), finite-size corrections $\sim L^{-0.11}$ imply that the connected spatial correlation function at $q\!=\!q_\mathrm{EA}$ decays as $1/r^{0.11}$. A direct estimate indicates that, at $q\!=\!q_\mathrm{EA}$, correlations decay as $1/r^{0.6}$~\cite{janus:10b}. Third, the probability density function $P(q\!=\!0)$ does not decrease with increasing system size (a similar conclusion was reached in~\cite{katzgraber:01,katzgraber:03}). Our analysis of overlap equivalence is compatible with the RSB picture, without invoking sophisticated finite-size effects. On the other hand, the statistical likelihood for TNT theory, as formulated in~\cite{palassini:00}, has been quantified to be $6\%$. In any case, TNT scaling predicts that for $L\!=\!110$ the surface-to-volume ratio of the magnetic domains is still of order one (in agreement with RSB). In addition, we find that replica equivalence is consistent with the RSB picture (while TNT lacks a definite prediction). Furthermore, the link susceptibility, $\chi_\mathrm{link}$, is definitively divergent in the spin-glass phase (since the divergence is stronger the lower the temperature, its origin is obviously non-critical). We are aware of no argument in TNT theory implying the divergence of the link susceptibility. On the other hand, RSB theory does require a divergent $\chi_\mathrm{link}$. However, RSB demands a scaling $\chi_\mathrm{link}\sim L^D$.
Such a growth regime has not yet been reached for our system sizes, although we have identified the origin of this preasymptotic behaviour. A final lesson from the present numerical study is that careful non-equilibrium simulations~\cite{janus:09b} are almost as rewarding as the equilibrium work. Indeed, our previous non-equilibrium study~\cite{janus:08b,janus:09b} reached a time scale that corresponds to the present equilibrium $L\!=\!32$ simulation. Yet, the numerical effort to obtain the data in Fig.~\ref{fig:Estatica-vs-Dinamica} has been larger by, roughly, a factor of 20 in the case of the equilibrium work. It is true that the equilibrium approach allows one to investigate directly the crucial $q=0$ region, where in the non-equilibrium case one would need to rely on difficult extrapolations to infinite time. However, we do not think that there is much road ahead for equilibrium studies, due to the failure of the parallel tempering algorithm. Indeed, see \tref{tab:parameters}, it takes about 3.5 times more numerical work to equilibrate 1000 samples of $L=32$ at $T=0.64 T_\mathrm{c}$ than 4000 samples of $L=24$ down to $T=0.56 T_\mathrm{c}$. Clearly enough, the temperature window accessible with the parallel tempering algorithm shrinks very fast as the system size grows. We believe this failure to be due to a genuine temperature-chaos effect. However, in order to analyse the effect quantitatively one needs to correlate the (sample-dependent) temperature bottlenecks, see Fig.~\ref{fig:historiabetas}--left, with the spin overlap at different temperatures. This analysis is left for future work~\cite{janus:xx}. \section*{Acknowledgments} We acknowledge support from MICINN, Spain, through research contracts No. TEC2007-64188, FIS2006-08533-C03, FIS2007-60977, FIS2009-12648-C03 and from UCM-Banco de Santander. B.S. and D.Y. are FPU fellows (Spain) and R.A.B. and J.M.-G. are DGA fellows. S.P.-G. was supported by FECYT (Spain).
The authors would like to thank the Ar\'enaire team, especially J.~Detrey and F.~de~Dinechin for the VHDL code of the logarithm function~\cite{detrey:07}. M. Moore posed interesting questions that helped us sharpen the discussion in \sref{sect:picos}.
\section{Introduction} \vspace{-3pt} \begin{quote} \it \small Abduction $\cdots$ consists in studying facts and devising a theory to explain them. \\ \mbox{}\hfill -- Charles Sanders Peirce (1839 -- 1914) \vspace{-3pt} \end{quote} The term \textit{abductive reasoning}~\cite{peirce1931collected} was coined by Charles Sanders Peirce, the founder of American pragmatism, around 1865. It is inference to the most likely explanation or conclusion for an incomplete set of observations. Abductive reasoning is invariably employed in our everyday life; the generated hypothesis (${\color{myred}{H}}$) is expected to best explain what happens before, after, or during the observation (${\color{myblue}{O}}$). Fig.~\ref{fig:motivation} gives some examples. If you~see ${\color{myblue}{O}}$: ``{\color{myblue}{the road is wet}}'', abduction will lead you to the best explanation ${\color{myred}{H}}$: ``{\color{myred}{it rained earlier}}'' (\ie, ${\color{myred}{H}}_{\!}\!\rightarrow_{\!}\!{\color{myblue}{O}}$). One morning you find ${\color{myblue}{O}}$: ``{\color{myblue}{sister leaves home hurriedly}}'', then you conclude ${\color{myred}{H}}$: ``{\color{myred}{she will be late for class}}'' (\ie, ${\color{myblue}{O}}\!\rightarrow\!{\color{myred}{H}}$). You see ${\color{myblue}{O_1}}$: ``{\color{myblue}{a boy throws a frisbee out and his dog is running after it}}''. One minute later you find ${\color{myblue}{O_2}}$: ``{\color{myblue}{frisbee is in the boy's hand}}''. You~can imagine ${\color{myred}{H}}$:$_{\!}$ ``{\color{myred}{the$_{\!}$ dog$_{\!}$ just$_{\!}$ caught$_{\!}$ the$_{\!}$ frisbee$_{\!}$ back}}''$_{\!}$ (\ie, ${\color{myblue}{O_{1\!}}}\!\rightarrow_{\!}\!{\color{myred}{H}}_{\!}\!\rightarrow\!{\color{myblue}{O_2}}$). Although abductive reasoning has long been considered a core ability of everyday human cognition~\cite{lombrozoexplanation, shelley1995visual, shanahan2005perception}, it remains an untouched domain in the computer vision literature.
\begin{figure}[t] \vspace{-1.5ex} \begin{center} \includegraphics[width=0.95\linewidth]{n_figs/fig1.pdf} \end{center} \vspace{-18pt} \captionsetup{font=small} \caption{\small{Abductive reasoning is inference to the most likely {\color{myred}{explanation}} for an incomplete set of {\color{myblue}{observations}}.}} \vspace{-17pt} \label{fig:motivation} \end{figure} In this article, we propose \textit{Visual Abductive Reasoning} ({VAR}), a novel task and dataset for investigating the abductive reasoning ability of AI systems in daily visual situations. In particular, inspired by recent advances in causal reasoning in the NLP community (\ie, abductive text generation~\cite{bhagavatula2020abductive} and counterfactual story revision \cite{qin2019counterfactual}), we explore the use of natural language as the expression form to fully capture the complexity of real situations. This also better reflects the nature of the human mind, which relies on linguistic thinking$_{\!}$~\cite{logan2018topology,logan2018thinking}. {VAR} requires AI systems to describe the incomplete observation (\ie, the visual premise) and write down the hypothesis that can best explain the premise. This allows a thorough evaluation of the entire abduction procedure, as accurate understanding of the premise is the basis of abductive reasoning. Moreover, this can hasten the development of this new field, by comparing and embracing ideas from a relevant, well-established, yet different task -- dense video captioning (DVC)$_{\!}$~\cite{krishna2017dense}. In contrast to DVC, which focuses only on \textit{describing the observation}, {VAR}{\ms}yields{\ms}a{\ms}new{\ms}visual-linguistic{\ms}reasoning{\ms}paradigm -- \textit{inference beyond observation}. Three characteristics make {VAR} uniquely challenging: \textbf{i)} {VAR}{\ms}needs{\ms}imagination{\ms}to{\ms}find{\ms}hypotheses{\ms}\textit{outside} the observation. \textbf{ii)} {VAR} seeks to discover the plausible causal structure among the observed events.
\textbf{iii)} {VAR} is more related to the kind of human reasoning in daily situations where the information at hand is {often incomplete~\cite{keil2003folkscience}} and absolutely certain conclusions cannot be reached$_{\!}$~{\cite{bhagavatula2020abductive,keil2006explanation}.} Our dataset is collected to address the characteristics of the {VAR} task (\textit{cf}.$_{\!}$~\S\ref{sec:td}). It contains \cnum{9}K examples from \cnum{3718} videos. Each example consists of several chronologically-ordered events, most of which are logically related. For each event, an abduction-oriented description is written by human annotators, and its role as \textit{premise} or \textit{explanation} is also annotated. When presenting each example to the AI system, the {explanation} event is masked and the premise events are visible. The AI system is required to understand the partial, noisy observations (\ie, premise events) and construct the most plausible explanatory hypothesis -- accurately describing both the observable premise events and the masked explanation event. The examples are on average \cnum{37.8}s long, with \cnum{4.17} events, and harvested from diverse, event-rich sources, \ie, YouTube lifestyle videos, movies and TV shows. To lay a solid foundation for future efforts, a new model, named {{\textsc{Reasoner}}} (causal-and-cascaded reasoning Transformer), is proposed (\textit{cf}.$_{\!}$~\S\ref{sec:md}). Specifically, {{\textsc{Reasoner}}} is built upon a Transformer encoder-decoder architecture. In the encoder of {{\textsc{Reasoner}}}, a \textit{contextualized} \textit{directional$_{\!}$ position$_{\!}$ embedding}$_{\!}$ strategy$_{\!}$ is adopted to$_{\!}$ capture$_{\!}$ causal dependencies among the premise events. Hence, the context of the premise events can be gathered in a causality-aware manner, enabling {{\textsc{Reasoner}}} to learn discriminative representations for the premise and the explanatory hypothesis.
Then {{\textsc{Reasoner}}} cascades a set of decoders for premise/hypothesis sentence production and refinement. For each generated sentence, the associated prediction score is viewed as its confidence and embedded into the next decoder as a signal that encourages information to flow from high-confidence sentences to low-confidence ones. This leads to a \textit{confidence-guided multi-step reasoning} strategy, ultimately boosting the reasoning power of {{\textsc{Reasoner}}}. Extensive experimental results are provided in \S\ref{sec:exp}. First, to comprehensively probe deep neural models on this challenging task, we establish a group of baselines based on current top-leading DVC models. The benchmarking results on the {VAR} dataset show that {{\textsc{Reasoner}}} outperforms the best competitor by a large margin, \eg, {\cnum{33.44}} \textit{vs} {\cnum{28.71}} in terms of BERT-S, but is still far behind human performance ({\cnum{42.96}}). This shows that VAR is especially challenging for current video-language models. Then a set of user studies and ablative experiments are conducted for a thorough evaluation. For completeness, we further test {{\textsc{Reasoner}}} on the DVC task and confirm again its superiority. Concurrently with us, \cite{hesselhwang2022abduction} studies \textit{image}-based abductive reasoning: AI systems are required to identify, ground, or compare \textit{given} inferences. Overall, we feel vision-based abductive reasoning is an intriguing problem worth exploring.
\vspace{-3pt} \section{Related Work} \vspace{-1pt} \paragraph{Dense Video Captioning (DVC).} Different from the classic video description task$_{\!}$~\cite{venugopalan2015translating,venugopalan2015sequence,yao2015describing,pan2016hierarchical,kojima2002natural,zhang2020object} that aims to describe a short video clip using a single sentence, DVC aims to comprehensively describe all the events in an untrimmed video$_{\!}$ through$_{\!}$ a$_{\!}$ multi-sentence$_{\!}$ paragraph$_{\!}$~\cite{krishna2017dense}.$_{\!}$ Typical$_{\!}$ DVC models$_{\!}$~\cite{krishna2017dense,zhou2018end,melas2018training,xiong2018move,zhou2019grounded,mun2019streamlined,pan2020spatial,song2021towards} follow a \textit{two-stage}, \textit{bottom-up} paradigm: first parse a video into several temporal events and then decode a description from each detected event. As the problem of event detection is ill-defined$_{\!}$~\cite{deng2021sketch}, some alternative solutions either adopt a \textit{single-stage} strategy to simultaneously predict events and descriptions \cite{li2018jointly,wang2021end}, or turn to a \textit{top-down} regime: first generate paragraphs, and then ground each description to$_{\!}$ a$_{\!}$ video$_{\!}$ segment$_{\!}$~\cite{deng2021sketch,liu2021video}.$_{\!}$ A$_{\!}$ few other methods$_{\!}$~\cite{park2019adversarial,lei2020mart,ji2021hierarchical} focus purely on generating better paragraph captions from a provided list of events. Both {VAR} and DVC are concerned with video-based text generation; a part of our dataset is sourced from ActivityNet Captions~\cite{krishna2017dense}, a famous DVC dataset. However, DVC is concerned with plain, fact-based narration, while {VAR} addresses abductive reasoning over cause-effect chains. Beyond accurately understanding what is observed, {VAR} further requires {invoking} what might have happened or what will happen.
In our experiments, we involve several recent DVC models as baselines for our {VAR} task and also report the performance of our {{\textsc{Reasoner}}} on the DVC task. \paragraph{Context-Aware Text Generation.} Our work is also related to some context-aware text generation tasks in the NLP literature. For instance, \textit{text infilling}$_{\!}$~\cite{zhu2019text}, also known as the \textit{cloze task}$_{\!}$~\cite{taylor1953cloze}, is to generate a span of missing tokens in a text chunk, while \textit{sentence/story} \textit{infilling}$_{\!}$~\cite{ippolito2019unsupervised,huang2020inset} aims to generate missing sentences in long-form text. The generated tokens/sentences are expected to smoothly blend into and fit the context syntactically$_{\!}$~\cite{zhu2019text}, semantically$_{\!}$~\cite{ippolito2019unsupervised,huang2020inset}, and$_{\!}$ logically$_{\!}$~\cite{kang2019linguistic}.$_{\!}$ \textit{Counterfactual$_{\!}$ story$_{\!}$ revision}$_{\!}$~\cite{qin2019counterfactual} requires generating$_{\!}$ a$_{\!}$ new$_{\!}$ ending,$_{\!}$ given$_{\!}$ a$_{\!}$ story$_{\!}$ context$_{\!}$ altered$_{\!}$ by a counterfactual condition. Our work draws inspiration from$_{\!}$ \textit{abductive text generation}$_{\!}$~\cite{bhagavatula2020abductive}, which investigates abductive reasoning via a natural language inference task: write an appropriate reason that could explain observations described by narrative text. Unlike these language tasks, which address only inter-sentential relationship understanding, our {VAR} task requires abduction and narration over a sequence of partially observable visual events.
Moreover, our {VAR} task setting is more general; it is not limited to the strict form of abductive reasoning in \cite{bhagavatula2020abductive}, \ie, generate a hypothesis ($H$) of what happened between the observed past ($O_1$) and future ($O_2$) contexts: $O_1\!\rightarrow\!H\!\rightarrow\!O_2$, but involves $O\!\rightarrow\!H$ and $H\!\rightarrow\!O$ abductive reasoning cases. \paragraph{Visual Future/State Prediction.} Our work is, to some degree, relevant to future prediction -- a popular research area in computer vision. In this area, a huge spectrum of topics/tasks are put forward, including forecasting future frames \cite{mathieu2016deep,vondrick2016generating}, future features \cite{vondrick2016anticipating,suris2021learning}, future actions$_{\!}$~\cite{ryoo2011human,kitani2012activity,lan2014hierarchical,abu2018will,sun2019relational}, future human motions$_{\!}$~\cite{fragkiadaki2015recurrent,jain2016structural,martinez2017human}, future human trajectories$_{\!}$~\cite{alahi2016social}, future goals \cite{epstein2021learning}, \etc. Rather than studying the future generation at the semantic-category or color-pixel level, event-level prediction was recently addressed in$_{\!}$~\cite{lei2020vlep} and$_{\!}$~\cite{park2020visualcomet}. However, \cite{lei2020vlep} only requires choosing from two candidates for future event prediction, making the task less challenging. \cite{park2020visualcomet} aims to describe past, present, and future events for a single image, while our {VAR} task requires making full use of the information from a set of premise events.
There are also some efforts made towards understanding the dynamics/transformations between states$_{\!}$~\cite{park2019robust,hong2021transformation,yang2021multiple} or goal-conditioned procedure planning$_{\!}$~\cite{chang2020procedure}, while either relying on a pre-defined, limited action prediction space~\cite{hong2021transformation,chang2020procedure}, or using simulated environments~\cite{chang2020procedure,hong2021transformation}. Our {VAR} task is essentially to discover and describe the causal relations in real visual environments. It is not constrained to a narrow view of predicting either future events or between-state changes, but tries to infer the missing parts in the cause-effect chains, even with some unrelated premise events. All of these together make {VAR} a unique and challenging visual reasoning task. \paragraph{Position Encoding in Transformers.} Due to the permutation invariant nature of the attention operation, \cite{shaw2018self} learns and encodes position embeddings into Transformer tokens. Subsequent language-Transformers hence explore further variations, like incorporating sinusoid prior with more parameters~\cite{dai2019transformer}, simplifying position embeddings as learnable scalars\tcite{raffel2020exploring}, disentangling special tokens$_{\!}$ ($_{\!}$\texttt{[CLS]}$_{\!}$)\tcite{ke2020rethinking}, \etc. Some recent vision-Transformers~\cite{wu2021rethinking,chu2021twins} consider directional relative distance between 2D positions, and/or the interactions between visual tokens and position embeddings. {{\textsc{Reasoner}}} encodes the relations of premise events in a directional and contextualized manner for causal relation modeling, and leverages the prediction scores of descriptions for confidence-guided multi-step reasoning. 
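As a minimal sketch of the directional idea (a generic illustration; the \texttt{fwd}/\texttt{bwd} scalars and the identity value projection below are hypothetical, not the actual parameterization of {{\textsc{Reasoner}}}), the attention logits between tokens $i$ and $j$ can be augmented with a learned scalar bias that depends on the signed distance $i-j$, so that neighbours in the past and in the future are treated differently even at equal distance:

```python
import numpy as np

def directional_bias(n_tokens, fwd, bwd):
    # fwd[k] / bwd[k]: hypothetical learned scalars for attending to a
    # token k steps in the past / in the future (k = 0 .. n_tokens-1).
    B = np.empty((n_tokens, n_tokens))
    for i in range(n_tokens):
        for j in range(n_tokens):
            d = i - j
            B[i, j] = fwd[d] if d >= 0 else bwd[-d]
    return B  # added to the attention logits before the softmax

def attend(X, B):
    # Plain single-head self-attention (identity projections for brevity)
    # with the directional bias B added to the scaled dot-product logits.
    A = X @ X.T / np.sqrt(X.shape[1]) + B
    P = np.exp(A - A.max(axis=-1, keepdims=True))  # stable softmax
    P /= P.sum(axis=-1, keepdims=True)
    return P @ X
```

Since $B_{ij}\neq B_{ji}$ in general, such a bias breaks the permutation invariance of the attention operation in a direction-aware way.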
\vspace{-3pt} \section{Our {VAR} Task and Dataset}\label{sec:td} \vspace{-2pt} \subsection{{VAR} Task}\label{sec:task_definition} \vspace{-2pt} Our visual abductive reasoning ({VAR}) task is designed to test the abductive reasoning ability of machine intelligence in everyday visual environments $\mathcal{E}$. Formally, given a video example $\mathcal{V}\!\subset\!\mathcal{E}$ that contains a set of $N$ events, \ie, $\mathcal{V}\!=\!\{O_1, \cdots_{\!}, O_{n-1}, H, O_n, \cdots_{\!}, O_{N-1}\}$, which are logically related and chronologically organized, we denote $\{O_n\}_{n=1}^{N-1\!}$ as \textit{premise} events -- a partial observation of $\mathcal{E}$ -- and $H$ as the \textit{explanation} event that can best explain the premise events. Conditioning on the past \textit{and/or} future visual context $\{O_n\}_{n=1}^{N-1\!}$ \textit{only}, the AI system is asked to describe these premise events, and, more importantly, infer and write down the most plausible explanatory hypothesis for the premise. Naturally, such a hypothesis is expected to be consistent with the content of the {explanation} event $H$. The abduction ability can thus be thoroughly examined by assessing the quality of both the premise-based descriptions and the explanatory hypothesis sentences -- {adequate} understanding of the premise is a necessary prerequisite for abductive reasoning. \begin{figure}[t] \begin{center} \includegraphics[width=1.\linewidth]{n_figs/example.pdf} \end{center} \vspace{-18pt} \captionsetup{font=small} \caption{\small An illustrative example of our {VAR} dataset (\S\ref{sec:dataset}).} \vspace{-14pt} \label{fig:dataset:example} \end{figure} \subsection{{VAR} Dataset}\label{sec:dataset} Guided by the above task setup, we build a large-scale dataset for {VAR}. Fig.~\ref{fig:dataset:example} depicts an illustrative example.
\vspace{-10pt} \subsubsection{Dataset Collection} \vspace{-4pt} \paragraph{Data Source.} The {VAR} dataset is collected from three sources: \begin{itemize}[leftmargin=*] \setlength{\itemsep}{0pt} \setlength{\parsep}{-2pt} \setlength{\parskip}{-0pt} \setlength{\leftmargin}{-10pt} \vspace{-6pt} \item \cnum{23457} \textit{Youtube} lifestyle Vlog videos from the ActivityNet Captions~\cite{krishna2017dense} and VLEP~\cite{lei2020vlep} datasets. These videos cover rich social scenarios and human activities. \item { \cnum{13799} \textit{TV show and movie} videos from the TVC dataset~\cite{lei2020tvr} and a famous Youtube channel, Fandango MovieClips\footnote{\url{https://youtube.com/user/movieclips}}. These videos are key scenes in popular TV shows and films covering wide-ranging genres. } \vspace{-6pt} \end{itemize} YouTube videos include diverse daily events, but span relatively short durations and intervals (on the order of minutes). TV show and movie videos, in contrast, usually have limited scenarios, yet they contain rich, artificially crafted cause-effect chains in their story-lines and span relatively long durations and intervals (even years). Thus gathering these videos together makes our dataset a good testbed for {VAR}. \paragraph{Data$_{\!}$ Cleaning.$_{\!\!}$} The$_{\!\!}$ collected$_{\!\!}$ videos$_{\!\!}$ are$_{\!\!}$ accompanied$_{\!\!}$~by event labels, and{\ms}videos{\ms}containing{\ms}only{\ms}one{\ms}single event are first dropped. Then, for each of the remaining videos, three human experts are invited to examine whether cause-effect relations exist between the video events. We only preserve the qualified ones with more than two votes in the affirmative, finally resulting in \cnum{3718} videos in total for further annotation.
\vspace{-10pt} \subsubsection{Dataset Annotation} \vbox{ % For each video in {VAR}, the annotation contains three steps: \noindent\textbf{Step 1: Event Type Annotation.} For an event $E$ of video } \noindent $\mathcal{V}$, if $E$ can well explain some other events in $\mathcal{V}$; or in other words, if we can imagine that $E$ could happen by considering only the other events $\mathcal{V}/E$, event ${E}$ will be labeled as \textit{explanation} and the other events $\mathcal{V}/E$ will be labeled as \textit{premise}. Fig.~\ref{fig:dataset:example} gives an example. For the video containing three events: $E_1$ ``a man falls off a running horse'', $E_2$ ``the man lies on the ground and gets hurt'', $E_3$ ``the man is taken to the hospital'', we can derive two legal examples for our {VAR} dataset: \{\textit{premise} ($E_1$, $E_2$), \textit{explanation} ($E_3$)\}, and \{\textit{premise} ($E_1$, $E_3$), \textit{explanation} ($E_2$)\}. % \paragraph{Step 2: Abductive Reasoning Aware Description Annotation.$_{\!}$} Although$_{\!}$ some$_{\!}$ videos$_{\!}$ are$_{\!}$ collected$_{\!}$ with event-level descriptions/plot summaries, we re-annotate all the events with abductive-reasoning-oriented descriptions.$_{\!}$ Specifically, instead of capturing all the visual details as in video captioning, like \{``a boy~in~a black jacket plays frisbee with his white dog in a park'', ``the dog catches the blue frisbee and runs back'', ``the boy smiles, takes the frisbee and pats the dog''\}, our descriptions capture only the visual content relevant to abductive reasoning, like \{``a boy throws a frisbee out and his dog is running after it'', ``the dog catches the frisbee back'', ``the boy gets the frisbee''\}.
% \paragraph{Step 3: Validation.} Finally, each annotated example is examined by three verifiers: the verifiers are shown only the \textit{premise} events and the language-based explanation (\ie, the description of the \textit{explanation} event), and vote for: ``Is the explanation sound?''. If an example wins majority approval, it will be accepted; otherwise, it will be relabeled or dropped. \vspace{-12pt} \subsubsection{Dataset Features and Statistics}\label{sec:dfs} \vspace{-4pt} To offer deeper insights into our {VAR} dataset, we next discuss its distinctive properties and detailed statistics. \paragraph{Abductive Reasoning Oriented.} {VAR} is the first dataset that underpins the study of machine abductive reasoning in daily visual scenarios. It is designed to reason beyond the visual \textit{premise} for a plausible \textit{explanation}, distinguishing it from existing video-language datasets/tasks. \paragraph{Diversity.} To capture diverse cause-effect relations and abduction cases, our {VAR} dataset covers \textbf{i)} various daily events/activities, \eg, work, leisure, household; \textbf{ii)} rich scenarios, \eg, lifestyle recording, scripted drama; \textbf{iii)} different durations and intervals, ranging from minutes to years.
\begin{figure}[t] \vspace{-4pt} \begin{minipage}{\textwidth} \hspace{-1ex} \begin{minipage}[t]{0.2\textwidth} \vspace{-11ex} \begin{threeparttable}[t] \small \resizebox{\linewidth}{!}{ \setlength\tabcolsep{2pt} \renewcommand\arraystretch{1.1} \begin{tabular}{c|ccc} \hline\thickhline \rowcolor{mygray} {Split} & {\#Example} & {\#Event} & {\#Video} \\ \hline \hline \texttt{Train} & \cnum{7053} & \cnum{12582} & \cnum{3000} \\ \texttt{Val} & \cnum{460} & \cnum{860} & \cnum{205}\\ \texttt{Test} & \cnum{1093} & \cnum{2044} & \cnum{513}\\ \hline \texttt{Total} & \cnum{8606} & \cnum{15486} & \cnum{3718} \\ \hline \end{tabular} } \end{threeparttable} \end{minipage} \begin{minipage}[t]{0.3\textwidth} \includegraphics[width=0.9\linewidth]{n_figs/distrib.pdf} \end{minipage} \end{minipage} \vspace{-6pt} \captionlistentry[figure]{A placeholder to increase table counter.}\label{fig:statistics} \makeatletter\def\@captype{table}\makeatother\captionsetup{font=small} \captionsetup{labelformat=andfigure} \caption{Summative statistics of {VAR} dataset (\S\ref{sec:dfs}).} \label{tab:statistics} \vspace{-17pt} \end{figure} \paragraph{Large-Scale.} As shown in Table~\ref{tab:statistics}, {VAR} consists of {\cnum{8606}} data examples, collected from {\cnum{3718}} unique videos that span over \cnum{153} hours in total. On average, each video in {VAR} contains \cnum{4.17} events that last \cnum{37.8} seconds, resulting in a total of 15K corresponding descriptive sentences of \cnum{13.5} words. \paragraph{Dataset Split.} We separate the {VAR} dataset into \texttt{train}/ \texttt{val}/\texttt{test} sets and arrive at a unique split of {\cnum{7053}}/{\cnum{460}}/ {\cnum{1093}} examples with no overlapping video between \texttt{val}/ \texttt{test} and \texttt{train} sets. We provide more detailed statistics in both Table~\ref{tab:statistics}\&Fig.~\ref{fig:statistics} and the supplement. 
\section{Methodology}\label{sec:md} \paragraph{Problem$_{\!}$ Statement.$_{\!\!}$} Given$_{\!}$ a$_{\!}$ video$_{\!}$ $\mathcal{V}$$_{\!}$ with$_{\!}$ $N$ temporally ordered events, \ie, $\mathcal{V}\!=\!\{O_1, \cdots_{\!}, O_{n-1}, H, O_n, \cdots_{\!}, O_{N-1}\}$, the \textit{premise} events, \ie, $\{O_n\}_{n=1}^{N-1}$, and the \textit{explanation} event, \ie, $H$, are logically related. The AI system is only presented with a partially observable version of $\mathcal{V}$, \ie, $\tilde{\mathcal{V}}\!=\!\{O_1, \cdots_{\!}, O_{n-1}, \tilde{H}, O_n, \cdots_{\!}, O_{N-1}\}$, where $\tilde{H}$ is obtained by setting all the pixel values of $H$ to zero. The AI system is required to not only describe the premise, but also reason about the most likely explanation for the premise, \ie, generate $N$ sentences $\mathcal{S}_{\!}\!=_{\!}\!\{S^{O}_n\}^{N-1}_{n=1}\!\cup\!S^{H\!}$ that describe the content of the $N$ events in $\mathcal{V}$, while conditioning on $\tilde{\mathcal{V}}$ only: \vspace{-4pt} \begin{equation}\small \begin{aligned}\label{eq:probability} P(\mathcal{S}|\tilde{\mathcal{V}})&=P(S^{H}|\tilde{\mathcal{V}})\prod\nolimits_{n}\!P(S^{O\!}_n|\tilde{\mathcal{V}})\\ &=\prod\nolimits_{l}\!P(w^{H}_l|w^{H}_{\textless l}, \tilde{\mathcal{V}})\prod\nolimits_{n}\!\prod\nolimits_{l}\!P(w^{On}_l|w^{On}_{\textless l}, \tilde{\mathcal{V}}); \end{aligned} \vspace{2pt} \end{equation} where $w_l$ is the $l$-th word in a generated sentence $S\!\in\!\mathcal{S}$. It is worth mentioning that, when ${H}\!=\!\varnothing$, our VAR~task degenerates into a classic DVC task$_{\!}$~\cite{krishna2017dense} which focuses only on describing the content of observed events $\{O_n\}_{n=1}^{N-1}$.
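Because the joint probability above factorizes over sentences and, within each sentence, over words conditioned on the preceding words, the joint log-probability is a plain double sum of per-word conditional log-probabilities; a minimal sketch (the per-word values are hypothetical placeholders for the decoder's softmax outputs):

```python
import math

def joint_log_prob(sentence_word_logprobs):
    # sentence_word_logprobs[n][l] = log P(w_l | w_<l, V~) for the l-th
    # word of the n-th generated sentence (hypothetical values).
    return sum(sum(lp) for lp in sentence_word_logprobs)
```

For example, two sentences with per-word log-probabilities $[-0.1, -0.2]$ and $[-0.3]$ yield a joint probability of $\exp(-0.6)$.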
\begin{figure*}[t] \vspace{-6pt} \begin{center} \includegraphics[width=1.\linewidth]{n_figs/pipe.pdf} \put(-359, 99.5){\scriptsize (\S\ref{sec:ec})} \put(-123, 99.5){\scriptsize (\S\ref{sec:dec})} \put(-442.5, 71.5){\scriptsize$\Func{Rel}$} \put(-401.5, 71.5){\scriptsize$\Func{Rel}$} \put(-343.5, 49.5){\scriptsize$\tilde{\Vbm}_1$} \put(-338, 39.5){\scriptsize$\tilde{\Vbm}_h$} \put(-330.5, 31){\scriptsize$\tilde{\Vbm}_N$} \put(-290, 40){\scriptsize$\Dcal^0$} \put(-152, 40){\scriptsize$\Dcal^1$} \put(-82, 40){\scriptsize$\Dcal^K$} \put(-259.5, 35){\tiny$c^0_1$} \put(-255, 27){\tiny$c^0_h$} \put(-249, 21){\tiny$c^0_N$} \put(-194, 71.5){\scriptsize$\Func{Rel}^c$} \put(-124, 71.5){\scriptsize$\Func{Rel}^c$} \end{center} \vspace{-18pt} \captionsetup{font=small} \caption{\small \textbf{Network architecture} of {\textsc{Reasoner}}. See \S\ref{sec:md} for more details. } \label{fig:model} \vspace{-12pt} \end{figure*} \begin{figure}[t] \vspace{-4pt} \begin{center} \includegraphics[width=1.\linewidth]{n_figs/temporal_.png} \end{center} \vspace{-16pt} \captionsetup{font=small} \caption{\small Illustration of our \textbf{contextualized directional position embedding} $\Ubm$ (\S\ref{sec:ec}). Darker color indicates larger attention.} \label{fig:temporal} \vspace{-14pt} \end{figure} \paragraph{Core Idea.} Building upon a Transformer encoder-decoder architecture (Fig.~\ref{fig:model}), our {{\textsc{Reasoner}}} is designed around two core challenges posed by the {VAR} task: \textbf{i)} inferring cause-effect relations, and \textbf{ii)} reasoning beyond the partial$_{\!}$ observation.$_{\!}$ To$_{\!}$ address$_{\!}$ \textbf{i)},$_{\!}$ a$_{\!}$ \textit{contextualized} \textit{directional$_{\!}$ position$_{\!}$ embedding}$_{\!}$ strategy$_{\!}$ is adopted to$_{\!}$ capture$_{\!}$ causal relations residing in the input video $\tilde{\mathcal{V}}$, leading to a \textit{Causality-aware encoder} (\S\ref{sec:ec}).
To accommodate \textbf{ii)}, a \textit{confidence-guided multi-step} \textit{reasoning} strategy is developed, \ie, utilizing the prediction scores of sentences to guide cross-sentence information flow, yielding a \textit{cascaded-reasoning decoder} (\S\ref{sec:dec}). \subsection{Causality-Aware Encoder}\label{sec:ec} For notational simplicity, we redefine the partially observable$_{\!}$ video$_{\!}$ $\tilde{\mathcal{V}}_{\!}\!=_{\!}\!\{O_1, \cdots_{\!}, O_{n-1}, \tilde{H}, O_n, \cdots_{\!}, O_{N-1}\}$$_{\!}$ as$_{\!}$ $\tilde{\mathcal{V}}\!=$ $\{E_n\}_{n=1}^{N}$, where $E_h$ refers to the masked explanation event $\tilde{H}$, and $\{E_n\}_{\neq h\!}$ indicates the visible, premise events $\{O_n\}_{n=1}^{N-1\!}$.{\ms}Let$_{\!}$ us$_{\!}$ denote$_{\!}$ the$_{\!}$ initial$_{\!}$ features$_{\!}$ of$_{\!}$ the$_{\!}$ $N${\ms}events$_{\!}$~as$_{\!}$ $\{\Ebm_{n\!}\!\in$ $\mathbb{R}^{d}\}_{n=1}^{N}$. For each premise event $E_{n\neq h}$, the corresponding feature $\Ebm_{n\neq h}$ is obtained by aggregating the visual features of its frames. For the masked explanation event $E_h$, we set $\Ebm_h\!=\!\bm{0}^d$. The Causality-aware encoder leverages the context from the past and/or future observable events $\{E_n\}_{\neq h\!}$ to reinforce their representations, as well as to posit a meaningful representation for the most likely explanatory hypothesis, \ie, the masked explanation event $E_h$. The attention operation is the core of the Transformer: \vspace{-2pt} \begin{equation}\small\label{eq:attn} \!\!\bm{A}\! \sim \bm{X}\bm{W}^{q}(\bm{X}\bm{W}^{k})^{\top\!}, ~~\bm{Y} = \texttt{softmax}(\bm{A})\bm{X}\bm{W}^{v},
\vspace{-2pt} \end{equation} where the output $\bm{Y}_{\!}\!\in\!\mathbb{R}^{N\times d\!}$ has the same length $N$ and embedding dimension $d$ as the input $\bm{X}_{\!}\!\in\!\mathbb{R}^{N\times_{\!}d\!}$, and $\bm{W}^{q,k,v\!}\!\in_{\!}\!\mathbb{R}^{d_{\!}\times_{\!}d}$ project$_{\!}$ the$_{\!}$ input into \textit{query}, \textit{key}, and \textit{value} matrices, respectively. As the attention computation is invariant with respect to reordering of the inputs, explicit position encoding is widely adopted, in two typical ways: \textbf{i)} \textit{Absolute position encoding}$_{\!}$~\cite{vaswani2017attention}: each position $n$ is assigned an embedding, \ie, $\Ubm_{n\!}\!=_{\!}\!\Func{Abs}(n)\!\in\!\RR^{1\x d}$, and the position embeddings are directly added to the input, \ie, $\Xbm\!\leftarrow\!\Xbm\!+\!\Ubm$. $\Func{Abs}(\cdot)$ can be a$_{\!}$ linear$_{\!}$ projection$_{\!}$~\cite{dosovitskiy2020image}, a$_{\!}$ sinusoidal$_{\!}$ function \cite{vaswani2017attention}, \etc. \textbf{ii)} \textit{Relative position encoding} \cite{shaw2018self}: the position embeddings are constructed considering the pairwise relationships between positions, \ie, $\Ubm_{nm}\!=\!\Func{Rel}(n,m)\!\in\!\RR$. \paragraph{Contextualized Directional Position Embedding.} Since the {VAR} task essentially concerns plausible chains of cause and effect, the relative ordering of the input events matters. We continue in the vein of relative position encoding \cite{shaw2018self,wu2021rethinking} and adopt a \textit{contextualized} \textit{directional$_{\!}$ position$_{\!}$ embedding}$_{\!}$ strategy, \ie, $\Ubm_{nm}\!=\!\Func{Rel}(n,m,\Xbm_n)\!\in\!\RR$: \vspace{-3pt} \begin{equation}\small\label{eq:temp_func} \begin{aligned} \Func{Rel}(n,m,\Xbm_n)\!&=\!
\Xbm_n\Rbm^\top_{\ell(n,m)}, \\ \ell(n,m)\!&=\!n-m+N, \end{aligned} \vspace{-3pt} \end{equation} where$_{\!}$ $\Rbm_{\!}\!\in_{\!}\!\RR^{(2N-1)\x d\!}$ is$_{\!}$ a learnable$_{\!}$ matrix,$_{\!}$ and$_{\!}$ $\ell(\cdot,\cdot)$ is a directional indexing function, \ie, $\ell(n,m)\neq \ell(m,n)$. The directional projection $\Func{Rel}$ is conditioned on the visual context, \ie, $\Xbm_n$, since the causal dependency between events is typically related to specific content, \eg, when we see people laughing, we tend to look back only a short time into the past to figure out the reason; when we see a man fall off his horse, we worry about whether he gets hurt and how it will affect his future life. More visual examples regarding our contextualized directional position embedding strategy can be found in Fig.~\ref{fig:temporal}. Then, $\Ubm_{\!}\!\in_{\!}\!\RR^{N_{\!}\x_{\!} N\!}$ is injected by operating on the attention matrix$_{\!}$ $\Abm_{\!}\!\in_{\!}\!\RR^{N_{\!}\x_{\!} N_{\!}}$: \vspace{-3pt} \begin{equation}\small\label{eq:relative} \begin{aligned} \Abm_{nm} \sim \bm{X}_{n}\bm{W}^{q}(\bm{X}_{m}\bm{W}^{k})^{\top\!} + \Ubm_{nm}. \end{aligned} \vspace{-2pt} \end{equation} We further set $\Abm_{nh}\!=\!0$ to encourage leveraging the context from the observable events $\{E_n\}_{\neq h\!}$ to infer the masked explanation event $E_h$, rather than vice versa. The Causality-aware encoder in {{\textsc{Reasoner}}} is therefore achieved by stacking several Transformer encoder blocks~\cite{vaswani2017attention} with our contextualized directional position embedding strategy. We denote the output event representations as $\{\tilde{\Vbm}_n\!\in\!\RR^d\}_{n=1}^{N}$.
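As an illustrative sketch (not the paper's implementation), a single attention head with the contextualized directional bias and the masking of the explanation column can be written in NumPy; the $1/\sqrt{d}$ scaling is the standard Transformer choice, left implicit by the "$\sim$" notation above, and all variable names are hypothetical:

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def causality_aware_attention(X, Wq, Wk, Wv, R, h):
    """Single-head attention with the contextualized directional position
    bias (0-indexed).  X: (N, d) event features; R: (2N-1, d) learnable
    table; h: index of the masked explanation event E_h."""
    N, d = X.shape
    A = (X @ Wq) @ (X @ Wk).T / np.sqrt(d)  # raw attention logits
    for n in range(N):
        for m in range(N):
            # U_{nm} = Rel(n, m, X_n) = X_n . R_{l(n,m)}, l(n,m) = n - m + N
            A[n, m] += X[n] @ R[n - m + (N - 1)]
    A[:, h] = 0.0  # A_{nh} = 0: discourage attending *to* the masked event
    return softmax(A) @ (X @ Wv)
```

Note that the bias table is indexed by the signed offset $n-m$, so attending forward and backward in time uses different entries, which is what makes the encoding directional.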
\subsection{Cascaded-Reasoning Decoder}\label{sec:dec} With the discriminative representations $\{\tilde{\Vbm}_n\}_{n=1\!}^{N}$ of the observable premise events $\{O_n\}_{n=1}^{N-1\!}$ as well as the explanatory hypothesis $\tilde{H}$, the cascaded-reasoning decoder first generates a descriptive sentence for each event/hypothesis individually, and then refines all the sentences in a comprehensive, confidence-guided, and step-by-step manner. \paragraph{Initial$_{\!}$ Description$_{\!}$ Generation.$_{\!}$} For$_{\!}$ each$_{\!}$ event$_{\!}$ representation $\tilde{\Vbm}_n\!\in\!\mathbb{R}^{d}$, a multi-modal, \textit{masked} Transformer decoder is first adopted for initial description generation: \vspace{-4pt} \begin{equation}\small\label{eq:dec1} \begin{aligned} [\tilde{\Vbm}^{0}_n, \Hbm^{0}_n] & = \mathcal{D}^{0}([\tilde{\Vbm}_n, \Hbm_n]), \end{aligned} \vspace{-4pt} \end{equation} where $\Hbm_{n\!}\!\in_{\!}\!\mathbb{R}^{L_{n\!}\x d\!}$ is a set of $L_{n\!}$ word embeddings. During training, it is computed over the groundtruth description, \ie, $\hat{S}^{En}$, and masked attention~\cite{vaswani2017attention} is adopted to prevent the leakage of future words. During inference, it is recurrently generated. Learnable modal-type embeddings$_{\!}$~\cite{devlin2018bert,lei2020mart} are also added into the input yet omitted for brevity. By fusing visual and linguistic representations as the input, $\mathcal{D}^{0\!}$ conducts cross-modal reasoning, and hence generates an improved event representation, \ie, $\tilde{\Vbm}^{0}_n\!\in\!\mathbb{R}^{d}$, and an updated visual-linguistic state, \ie, $\Hbm^{0}_n\!\in\!\mathbb{R}^{L_n\x d}$, for each event $E_n$. Then a captioning head is adopted to map $\Hbm^{0}_n$ into a word distribution. The probability of the $l$-th word is given as: \vspace{-4pt} \begin{equation}\small \begin{aligned} \!\!\!\!\!\!\!\!P(w_l^{En}|w_{<l}^{En\!},\tilde{\mathcal{V}}) &=\!
P(w_l^{En}|w_{<l}^{En\!},\Hbm^{0}_n) \\ &=\! \texttt{softmax}(\Hbm^{0}_n(l)\bm{\Omega}^{\!\top}\!),\!\! \end{aligned} \vspace{-4pt} \end{equation} where $\bm{\Omega}\!\in\!\RR^{|\Omega|\x d\!}$ is the embedding matrix of the word vocabulary $\Omega$, and $\Hbm^{0}_n(l)_{\!}\!\in_{\!}\!\mathbb{R}^{d\!}$ denotes the $l$-th vector of $\Hbm^{0}_n$. As standard, the description $S^{0, En\!}\!=_{\!}\!\{w_l^{En}\}_{l=1\!}^{L_n}$ for event $E_{n\!}$ is generated by greedy prediction, and we set the averaged prediction score as the confidence: $c^{0\!}_{n\!}\!=_{\!}\!\frac{1}{L_n\!}\!\sum_l\!P(w_l^{En})$. \paragraph{Iterative Description Refinement.} To better respond to the fundamental challenge of the {VAR} task in reasoning beyond observation, we further cascade several Transformer decoder blocks over $\mathcal{D}^{0\!}$ for iterative description refinement. This allows {{\textsc{Reasoner}}} to make full use of both visual and linguistic context from the past and/or future observable events, and improves the explanatory hypothesis in a step-by-step manner, eventually boosting the reasoning ability. \begin{figure}[t] \vspace{-4pt} \begin{center} \includegraphics[width=.98\linewidth]{n_figs/cascade.pdf} \put(-26, 50){\scriptsize$\Dcal^K$} \put(-102, 50){\scriptsize$\Dcal^0$} \put(-218, 49){\small$E_1$} \put(-218, 21){\small$E_h$} \end{center} \vspace{-18pt} \captionsetup{font=small} \caption{\small{\m}Sentences from the \textbf{cascaded-reasoning decoder} (\S\ref{sec:dec}).} \label{fig:crd} \vspace{-14pt} \end{figure} Specifically, our whole refinement procedure can be defined in a recursive, confidence-guided form: \vspace{-3pt} \begin{equation}\small\label{eq:dec2} \begin{aligned} \!\!\!\!\!\![\tilde{\Vbm}^{k}_n, \Hbm^{k}_n]\!=\!\mathcal{D}^{k}([\tilde{\Vbm}^{k-1}_n, &\Hbm_n, \{{\hbm}^{k-1}_n\}^N_{n=1}]), \!~k\!=\!\{1,2,\cdots\!,K\}\!\!\\ \!\!\!\!\!\!\!\!\!\!\!\!P(w_l^{En}|w_{<l}^{En\!},\tilde{\mathcal{V}}) &\!=\! P(w_l^{En}|w_{<l}^{En\!},\Hbm^{k}_n) \\ &\!=\!
\texttt{softmax}(\Hbm^{k}_n(l)\bm{\Omega}^{\!\top}\!),\!\! \end{aligned} \vspace{-4pt} \end{equation} where $\mathcal{D}^{k\!}$ refers to $k$-th refinement module and all the refinement modules are weight-sharing Transformer decoders; ${\hbm}^{k-1\!}_n\!\in\!\mathbb{R}^{d}$ indicates a condensed representation of $\Hbm^{k-1\!}_n\!\in\!\mathbb{R}^{L_n\x d}$: ${\hbm}^{k-1\!}_n\!=\!\texttt{maxpool}(\Hbm^{k-1}_n)$. In this way, each $\mathcal{D}^{k\!}$ can leverage inter-sentential relationship in previously generated descriptions $\{{\hbm}^{k-1\!}_n\}^N_{n=1\!}$ for refinement and better reason about the explanatory hypothesis. Moreover, we introduce the event confidence, \ie, $\{c^{k}_n\}^N_{n=1}$, as a kind of bias into the refinement procedure: leverage the information from those more confident descriptions to help improve the predictions with relatively lower confidence. Without causing ambiguity, we denote $\Xbm$ as the input of the decoder $\mathcal{D}^{k}$, \ie, $\Xbm\!=\![\tilde{\Vbm}^{k-1}_n, \Hbm_n, \{\hbm^{k-1}_n\}^N_{n=1}]$ and omit the superscript $k$. For each input ``token'' $\Xbm_i$, its confidence score $c_{n_i}$ is the one of its sourced event $E_{n_i}$, and we normalize $\{c_n\}^N_{n=1\!}$ over all the $N$ events. Analogous to Eq.$_{\!}$~\ref{eq:relative}, the attention computation in $\mathcal{D}^{k\!}$ is modified as: \vspace{-3pt} \begin{equation}\small\label{eq:relative2} \begin{aligned} &\Abm_{ij}\sim \bm{X}_{i}\bm{W}^{q}(\bm{X}_{j}\bm{W}^{k})^{\top\!} + \Func{Rel}^c(c_{n_i},c_{n_j}),\\ &\Func{Rel}^c(c_{n_i},c_{n_j})= \rbm^c_{\iota(c_{n_i},c_{n_j})}, \end{aligned} \vspace{-4pt} \end{equation} where the learnable vector $\rbm^{c\!}\!\in_{\!}\!\RR^{2B-1\!}$ can be viewed as a bucket to store the relative confidence weight; and the directional indexing function $\iota(\cdot,\cdot)$ is given as $\iota(c_{n_i\!},c_{n_j\!})\!=_{\!}\!\lceil{c_{n_i\!}\!\cdot_{\!}\!B}\rceil\!-\!\lceil{c_{n_j\!}\!\cdot_{\!}\!B}\rceil\!+\!B$. 
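For illustration (a hedged sketch, not the authors' code), the bucketed confidence bias $\Func{Rel}^c$ and the directional index $\iota$ can be written in plain Python; function and variable names are hypothetical:

```python
import math

def iota(ci, cj, B=10):
    """Directional bucket index:
    iota(ci, cj) = ceil(ci*B) - ceil(cj*B) + B, in {1, ..., 2B-1}
    for confidences in (0, 1].  Note iota(ci, cj) != iota(cj, ci)
    unless the two confidences fall in the same bucket."""
    return math.ceil(ci * B) - math.ceil(cj * B) + B

def confidence_bias(conf, r_c, B=10):
    """Additive attention bias Rel^c(c_i, c_j) = r^c[iota(c_i, c_j)].
    conf holds each token's (normalized) source-event confidence;
    r_c is the learnable bucket vector of length 2B-1."""
    T = len(conf)
    return [[r_c[iota(conf[i], conf[j], B) - 1]  # shift to 0-indexing
             for j in range(T)] for i in range(T)]
```

Because the index depends on the sign of the bucket difference, tokens from high-confidence sentences can be biased to contribute more to low-confidence ones than the reverse.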
With such a confidence-guided decoding scheme, descriptions are refined by intelligently gathering context from more reliable sentences, while ignoring noisy cues from less confident ones. By stacking several such decoders $\{\mathcal{D}^{k}\}_k$, outputs will be progressively improved (Fig.\!~\ref{fig:crd}). Related experiments can be found in \S\ref{sec:exp:ablation}. \subsection{Training Objective} \label{sec:train} \vspace{-1pt} Given the groundtruth sentences $\{\hat{S}^{En}\}_{n=1\!}^{N\!}$ corresponding to the $N$ events $\{E_n\}_{n=1\!}^{N}$ of video $\tilde{\mathcal{V}}$, {{\textsc{Reasoner}}} is trained by minimizing the negative log-likelihood over the outputs of the cascaded-reasoning decoder $\{\mathcal{D}^{k\!}\}_{k=0}^K$: \vspace{-2pt} \begin{equation}\small \Lcal_{\textrm{Main}} = -\textstyle\sum_{k=0}^{K}\textstyle\sum_{n=1}^{N}\textstyle\sum_{l=1}^{L_n} \log P(\hat{w}_l^{En}|\hat{w}_{<l}^{En\!},\Hbm^{k}_n), \vspace{-0pt} \end{equation} where$_{\!}$ $\hat{S}^{En\!}\!=_{\!}\!\{\hat{w}_l^{En}\}_{l=1}^{L_n}$.$_{\!}$ As$_{\!}$ the$_{\!}$ teacher$_{\!}$ forcing$_{\!}$ scheme$_{\!}$~\cite{williams1989learning}$_{\!}$ is used for training, $\Hbm_n$ in Eq.~\ref{eq:dec1} and \ref{eq:dec2} is embedded over one-hot encoded groundtruth words $\{\hat{w}_l^{En}\}_{l}$. We further adopt a \textit{hypothesis reconstruction} based optimization criterion, to provide the encoder with more explicit supervision signals for explanatory hypothesis reasoning: \vspace{-2.5pt} \begin{equation}\small \label{eq:aux} \Lcal_{\textrm{Aux}} = \norms{\Func{Proj}(\tilde{\bm{V}}_h) - \Func{Proj}(\hat{\bm{V}}_h)}_2, \vspace{-3pt} \end{equation} where $\tilde{\bm{V}}_{h\!}$ and $\hat{\bm{V}}_{h\!}$ are embeddings for the explanatory hypothesis obtained from the masked and original videos, \ie, $\tilde{\mathcal{V}}$ and ${\mathcal{V}}$, respectively, and $\Func{Proj\!}$ is a projection head, based on a small multi-layer perceptron.
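For illustration only, the combined objective can be sketched in plain Python; the names are hypothetical, and the weighting coefficient of \cnum{0.2} follows the implementation details:

```python
def var_training_loss(logprobs, V_tilde_h, V_hat_h, proj, lam=0.2):
    """Hedged sketch of the overall objective L_Main + lam * L_Aux.
    logprobs[k][n][l] = log P(w_l | w_<l, H_n^k) for the ground-truth
    words, over K+1 decoders and N events; V_tilde_h / V_hat_h are the
    hypothesis embeddings from the masked / original video; proj is the
    projection head (an MLP in the paper, identity works for a sketch)."""
    # L_Main: negative log-likelihood summed over decoders, events, words
    L_main = -sum(lp for dec in logprobs for event in dec for lp in event)
    # L_Aux: L2 distance between projected hypothesis embeddings
    pt, ph = proj(V_tilde_h), proj(V_hat_h)
    L_aux = sum((a - b) ** 2 for a, b in zip(pt, ph)) ** 0.5
    return L_main + lam * L_aux
```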
This auxiliary training objective forces {{\textsc{Reasoner}}} to ``imagine'' an effective representation $\tilde{\bm{V}}_{h}$ that better aligns with the original content of $E_{h}$. $\hat{\bm{V}}_{h\!}$ comes from the momentum version of the encoder. \subsection{Implementation Details} \vspace{-1pt} Details on implementing the algorithm are as follows: \begin{fullitemize} \item \textit{Detailed network architecture}: The encoder (\S\ref{sec:ec}) of {{\textsc{Reasoner}}} is implemented as two Transformer encoder blocks, and each decoder module (\S\ref{sec:dec}), \ie, $\mathcal{D}^{k}$, is implemented as two Transformer masked decoder blocks. They have a hidden size of $d\!=\!768$ and \cnum{12} attention heads. We use a bucket size $B\!=\!10$ to quantize confidence scores (Eq.\!~\ref{eq:relative2}). We stack a total of $K\!=\!3$ decoders for cascaded reasoning. \item \textit{Data preprocessing}: For each video event, action/appearance features are pre-extracted using ActivityNet\tcite{caba2015activitynet} pre-trained ResNet200\tcite{he2016deep}/BN-Inception\tcite{ioffe2015batch}, as in\tcite{zhou2018end,lei2020mart,wang2021end}. We uniformly sample \cnum{50} frames per event and concatenate their features as the corresponding event representation, which is denoted in a vector form in \S\ref{sec:ec}-\ref{sec:train} for ease of notation. Sentences are padded or truncated to \cnum{20} words. \item \textit{Training/Inference}: For the first decoder $\mathcal{D}^{0}$, we adopt scheduled sampling~\cite{bengio2015scheduled} so that the later decoders are fully trained. The coefficient between the main and auxiliary training objectives is set as \cnum{0.2}. During inference, the final descriptive sentences are generated from the last decoder $\mathcal{D}^{K}$, using deterministic decoding, \ie, greedy search. All the experiments are conducted on \cnum{2} NVIDIA GeForce RTX 2080 Ti GPUs with \cnum{11}GB of memory per card.
\end{fullitemize} \begin{table*}[t] \centering\small \resizebox{1.\textwidth}{!}{ \setlength\tabcolsep{2pt} \renewcommand\arraystretch{1.03} \begin{tabular}{x{74}||cc|ccccc|ccccc} \hline\thickhline \rowcolor{mygray} & & & \multicolumn{5}{c|}{\textbf{Premise Event}} & \multicolumn{5}{c}{\textbf{Explanation Event}} \\ \rowcolor{mygray} \multirow{-2}{*}{Method} &\multirow{-2}{*}{Encoder} & \multirow{-2}{*}{Decoder} & BLEU@4 & METEOR & ROUGE & CIDEr & BERT-S & BLEU@4 & METEOR & ROUGE & CIDEr & BERT-S \\ \hline \hline Human & - & - & {\void{20}{10}{13.26}}& {\void{20}{10}{21.27}}& {\void{20}{10}{39.47}}& {\void{20}{10}{155.72}} & {\void{20}{10}{45.33}} & {\void{20}{10}{11.35}} & {\void{20}{10}{19.36}}& {\void{20}{10}{36.92}}& {\void{20}{10}{147.79}} & {\void{20}{10}{40.59}}\\ \hline \subt{48}{22}{VTrans \cite{zhou2018end}}{\sub{CVPR18}} & Trans. & Trans. & {\void{20}{10}{4.20}}& {\void{20}{10}{9.94}} & {\void{20}{10}{21.13}} & {\void{20}{10}{31.09}} & {\void{20}{10}{29.05}} & {\void{20}{10}{0.71}}& {\void{20}{10}{6.92}}& {\void{20}{10}{19.12}} & {\void{20}{10}{7.11}} & {\void{20}{10}{22.13}} \\ \subt{48}{22}{MFT \cite{xiong2018move}}{\sub{ECCV18}} & RNN & RNN & {\void{20}{10}{3.93}}& {\void{20}{10}{9.69}} & {\void{20}{10}{20.81}} & {\void{20}{10}{30.96}} & {\void{20}{10}{27.41}} & {\void{20}{10}{1.81}}& {\void{20}{10}{7.16}}& {\void{20}{10}{19.16}} & {\void{20}{10}{17.67}} & {\void{20}{10}{25.90}} \\ \subt{48}{22}{Trans-XL\cite{dai2019transformer}}{\sub{ACL19}} & Trans. & Trans. & {\void{20}{10}{3.98}}& {\void{20}{10}{9.53}} & {\void{20}{10}{21.02}} & {\void{20}{10}{30.87}} & {\void{20}{10}{29.12}} & {\void{20}{10}{2.96}}& {\void{20}{10}{7.51}}& {\void{20}{10}{20.94}} & {\void{20}{10}{24.54}} & {\void{20}{10}{27.23}} \\ \subt{48}{22}{MART \cite{lei2020mart}}{\sub{ACL20}} & Trans. & Trans. 
& {\void{20}{10}{3.74}}& {\void{20}{10}{9.48}} & {\void{20}{10}{21.17}} & {\void{20}{10}{29.22}} & {\void{20}{10}{29.03}} & {\void{20}{10}{2.86}}& {\void{20}{10}{7.47}}& {\void{20}{10}{20.87}} & {\void{20}{10}{24.05}} & {\void{20}{10}{27.77}} \\ \subt{48}{22}{PDVC \cite{wang2021end}}{\sub{ICCV21}} & Trans. & RNN & {\void{20}{10}{4.28}}& {\void{20}{10}{9.95}} & {\void{20}{10}{21.19}} & {\void{20}{10}{33.59}} & {\void{20}{10}{29.37}} & {\void{20}{10}{3.00}}& {\void{20}{10}{8.54}}& {\void{20}{10}{20.71}} & {\void{20}{10}{25.14}} & {\void{20}{10}{27.80}} \\\hline \textbf{{{{\textsc{Reasoner}}}}} & Trans. & Trans. & {\bbetter{20}{10}{\textbf{5.03}}{0.72}}& {\bbetter{20}{10}{\textbf{10.75}}{0.80}} & {\bbetter{20}{10}{\textbf{24.81}}{3.62}} & {\bbetter{20}{10}{\textbf{38.27}}{4.68}} & {\bbetter{20}{10}{\textbf{34.88}}{5.51}} & {\bbetter{20}{10}{\textbf{3.44}}{0.44}}& {\bbetter{20}{10}{\textbf{9.05}}{0.49}}& {\bbetter{20}{10}{\textbf{22.89}}{1.95}} & {\bbetter{20}{10}{\textbf{30.75}}{5.61}} & {\bbetter{20}{10}{\textbf{30.64}}{2.84}} \\ \hline \end{tabular} } \captionsetup{font=small} \caption{\small \textbf{Quantitative results} on the \texttt{test} set of our {VAR} dataset. `Trans.' indicates Transformer-based architecture. 
See \S\ref{sec:exp:main} for details.} \label{tab:vad} \vspace{-8pt} \end{table*} \begin{figure*} \begin{minipage}{\textwidth} \begin{minipage}[t]{0.65\textwidth} \begin{center} \includegraphics[width=1.\linewidth]{n_figs/qualitative_res_2.pdf} \put(-323, 27){\scriptsize \textbf{{\textsc{Reasoner}}}} \put(-302.5, 44){\scriptsize \cite{wang2021end}} \put(-300.5, 60.5){\scriptsize \cite{lei2020mart}} \put(-280.5, 77.5){\scriptsize \cite{zhou2018end}} \end{center} \vspace{-18pt} \figurecaption{Qualitative comparison} {(\S\ref{sec:exp:main}) of {{\textsc{Reasoner}}} and \cite{wang2021end,lei2020mart,zhou2018end} on {VAR} \texttt{test}.} {qualitative} \end{minipage} \hspace{0.5ex} \begin{minipage}[t]{0.33\textwidth} \vspace{-1.53in} \centering\small \resizebox{1.\linewidth}{!}{ \setlength\tabcolsep{8pt} \renewcommand\arraystretch{1.} \begin{tabular}{r|c|l} \hline\thickhline \rowcolor{mygray} \multicolumn{3}{c}{\textbf{Premise Event}} \\ \hline \multicolumn{1}{c|}{Prefer A} & Neutral & \multicolumn{1}{c}{Prefer B} \\ \textbf{{{\textsc{Reasoner}}}}~~\textbf{34.2} & 41.4 & 15.9~~PDVC~\cite{wang2021end} \\ \textbf{{{\textsc{Reasoner}}}}~~16.0 & 35.3 & \textbf{39.5}~~Human \\ \hline\thickhline \rowcolor{mygray} \multicolumn{3}{c}{\textbf{Explanation Event}} \\ \hline \multicolumn{1}{c|}{Prefer A} & Neutral & \multicolumn{1}{c}{Prefer B} \\ \textbf{{{\textsc{Reasoner}}}}~~\textbf{29.9} & 13.7 & 10.4~~PDVC~\cite{wang2021end} \\ \textbf{{{\textsc{Reasoner}}}}~~~~8.9 & 22.1 & \textbf{64.8}~~Human \\ \hline \end{tabular} } \vspace{-6pt} \captionsetup{font=small} \makeatletter\def\@captype{table}\makeatother\captionsetup{font=small} \caption{\small \textbf{User study} of pairwise model preference (\%). ``Neutral'' means A and B models are ``equally good''. Percentage of ``equally bad'' are omitted. 
See \S\ref{sec:exp:main} for details.} \label{tab:User_study} \end{minipage} \end{minipage} \vspace*{-12pt} \end{figure*} \vspace{-4pt} \section{Experiments}\label{sec:exp} \vspace{-1pt} $_{\!\!}$We$_{\!}$ first$_{\!}$ provide$_{\!}$ benchmarking$_{\!}$ results$_{\!}$ on$_{\!}$ our$_{\!}$ {VAR}$_{\!}$ dataset$_{\!}$ (\S\ref{sec:exp:main}). Then, to verify the efficacy of our core model designs, we conduct a set of diagnostic studies (\S\ref{sec:exp:ablation}). Finally, for comprehensive evaluation, we test our {{\textsc{Reasoner}}} on the classic, dense video captioning (DVC) task~\cite{krishna2017dense} (\S\ref{sec:exp:dvc}). \vspace{-1pt} \subsection{Performance on {VAR} Task}\label{sec:exp:main} \vspace{-1pt} \paragraph{Competitors.}\label{sec:baseline} We benchmark five{\ms}top-leading{\ms}DVC{\ms}models{\ms}on{\ms}{VAR}{\ms}to{\ms}reveal{\ms}the{\ms}abductive{\ms}reasoning{\ms}ability{\ms}of{\ms}existing{\ms}techniques.{\ms} They{\ms}include three Transformer-based~\cite{zhou2018end,dai2019transformer,lei2020mart} and two RNN-based~\cite{xiong2018move,wang2021end} models, which are trained on the \texttt{train} set of our VAR dataset with pre-provided event segments using their original training protocols. \paragraph{Evaluation Metric.} Five well-known automated metrics, \ie, BLEU@4~\cite{papineni2002bleu}, CIDEr~\cite{vedantam2015cider}, METEOR~\cite{banerjee2005meteor}, ROUGE-L~\cite{lin2002manual}, and BERTScore~\cite{zhang2019bertscore}, are used for evaluation. \def\ADDINTEXT{ \vspace{-2.5ex} \subsection{Diagnostic Experiment}\label{sec:exp:ablation} \vspace{-0.5ex} A set of ablative studies is conducted on {VAR} \texttt{test} for in-depth analysis of each component in$_{\!}$ our$_{\!}$ {{\textsc{Reasoner}}},$_{\!}$ using$_{\!}$ BLEU@4,$_{\!}$ CIDEr$_{\!}$~and BERT-S metrics, averaged over all the events. \paragraph{Key Component Analysis.} We first study the efficacy of core model components.
The first row in Table$_{\!}$~\ref{tab:AS1} gives the performance of a basic Transformer model, which simply uses ab- } \begin{figure*}[t!] \begin{minipage}{\textwidth} \vspace{-.75ex} { \begin{minipage}[t]{0.375\textwidth} \begin{minipage}[t][0.1255\textheight][t]{0.5\textwidth} ~ \end{minipage}\\ {\normalsize \ADDINTEXT } \end{minipage} } \hspace{.05in} \begin{minipage}[t]{0.62\textwidth} \vspace{-2ex} \begin{threeparttable}[t] {\hspace{-0.645\textwidth} \subfloat[\scalebox{.95}{Key components} \label{tab:AS1}]{ \grouptablestyle{3pt}{1} \begin{tabular}{c|cc||ccc} \hline\thickhline \rowcolor{mygray} & Causality-Aware & Cascaded-Reasoning & & & \\ \rowcolor{mygray} \multirow{-2}{*}{\#} & {Encoder (\S\ref{sec:ec})} & {Decoder (\S\ref{sec:dec})} & \multirow{-2}{*}{BLEU@4} & \multirow{-2}{*}{CIDEr} & \multirow{-2}{*}{BERT-S} \\ \hline \hline 1 & & & 3.39 & 30.04 & 26.35 \\ 2 & \ding{51} & & 3.91 & 32.32 & 29.85 \\ 3 & & \ding{51} & 4.05 & 33.71 & 29.94 \\ 4 & \ding{51} & \ding{51} & 4.66 & 36.13 & 33.44 \\ \hline \end{tabular} } } \hspace{.6ex} \parbox{.15\textwidth}{ \vspace{0.03in} \subfloat[\scalebox{.95}{Position embedding strategy}\label{tab:AS2}]{% \grouptablestyle{3pt}{1.} \begin{tabular}{cc||ccc} \hline\thickhline \rowcolor{mygray} $\Ubm_{n}$/$\Ubm_{nm\!}$ & & & & \\ \rowcolor{mygray} (\S\ref{sec:ec}) & \multirow{-2}{*}{Formulation} & \multirow{-2}{*}{BLEU@4} & \multirow{-2}{*}{CIDEr} & \multirow{-2}{*}{BERT-S} \\ \hline \hline Absolute &$\Ubm_{n\!}\!=_{\!}\!\Func{Abs}(n)$ & 4.20 & 33.27 & 29.95 \\ Directional & $\Ubm_{nm\!}\!=_{\!}\!\Func{Rel}(n,m)$ & 4.35 & 34.25 & 31.79 \\ \tabincell{c}{Contextualized\\Directional}& $\Ubm_{nm\!}\!=_{\!}\!\Func{Rel}(n,m,\bm{X}_{\!n\!})$ & 4.66 & 36.13 & 33.44 \\ \hline \end{tabular} } } \vfill\vspace{-.11in} \subfloat[\scalebox{.95}{Cascaded reasoning} \label{tab:AS3}]{ \grouptablestyle{4pt}{1.11}% \begin{tabular}{x{35}||ccc} \hline\thickhline \rowcolor{mygray} $\mathcal{D}^{k\!}$ (\S\ref{sec:dec}) & {B@4} & {CIDEr} & 
{BERT-S} \\ \hline \hline $K = 0$ & 3.91 & 32.72 & 29.50 \\ $K = 1$ & 4.34 & 34.89 & 31.60 \\ $K = 2$ & 4.61 & 35.53 & 32.57 \\ $K = 3$ & 4.66 & 36.13 & 33.44 \\ $K = 4$ & 4.66 & 36.05 & 33.51 \\ $K = 5$ & 4.60 & 35.90 & 33.32 \\ \hline \end{tabular} } \hspace{.6ex} \parbox{.2\textwidth}{ \vspace{0.03in} \subfloat[\scalebox{.95}{Confidence embedding} \label{tab:AS4}]{ {\grouptablestyle{3.5pt}{1.}% \begin{tabular}{x{40}||ccc} \hline\thickhline \rowcolor{mygray} $\Func{Rel}^{c\!}$ (Eq.$_{\!}$~\ref{eq:relative2}) & {BLEU@4} & {CIDEr} & {BERT-S} \\ \hline \hline & 4.45 & 35.22 & 33.17 \\ \ding{51} & 4.66 & 36.13 & 33.44 \\ \hline \end{tabular} } }\\\vspace{-.16in} \subfloat[\scalebox{.95}{Training objective} \label{tab:AS5}]{ {\grouptablestyle{3.5pt}{1.}% \begin{tabular}{x{40}||ccc} \hline\thickhline \rowcolor{mygray} Loss (\S\ref{sec:train}) & {BLEU@4} & {CIDEr} & {BERT-S} \\ \hline \hline $\mathcal{L}_{\text{Main}}$ & 4.40 & 35.51 & 32.83 \\ $\mathcal{L}_{\text{Main}}\!\!+\!\!\mathcal{L}_{\text{Aux}}$ & 4.66 & 36.13 & 33.44 \\ \hline \end{tabular} } } } \vfill \vspace{-8pt} \makeatletter\def\@captype{table}\makeatother\captionsetup{font=small} \caption{\small A set of \textbf{ablation studies} (\S\ref{sec:exp:ablation}) on the \texttt{test} set of our VAR dataset.} \label{tab:ablations} \end{threeparttable} \end{minipage} \end{minipage} \vspace*{-20pt} \end{figure*} \paragraph{Quantitative$_{\!}$ Result.$_{\!\!}$} Table$_{\!}$~\ref{tab:vad}$_{\!}$ summarizes$_{\!}$ the$_{\!}$ benchmarking results on the \texttt{test} set of our {VAR} dataset. For detailed analysis, we report the performance over the observable premise events and invisible explanation events separately. Moreover, to probe the upper bound of model performance, we evaluate human performance by asking ten volunteers to perform {VAR}. Specifically, we randomly sample \cnum{500} examples from unique videos in {VAR} \texttt{test}. 
The volunteers are only provided with partially observable videos and requested to write down the corresponding descriptions and hypotheses. The human-written descriptions and hypotheses are evaluated by the automatic metrics, and the evaluation scores are shown in the first row of Table~\ref{tab:vad}. Several essential conclusions can be drawn from Table~\ref{tab:vad}: \textbf{i)} Humans are good at {VAR}; although human-written hypotheses for explanation scored lower than the descriptions for the visual premise, they are still very plausible in absolute terms. \textbf{ii)} All traditional DVC models~\cite{xiong2018move,wang2021end,zhou2018end,dai2019transformer,lei2020mart} struggle with the {VAR} task at which humans excel. Their generated hypotheses are usually untrustworthy, and far worse than their premise narratives. This suggests that existing video-based language generation models are not good at reasoning beyond observation. \textbf{iii)} Our {{\textsc{Reasoner}}} outperforms the other AI models~\cite{xiong2018move,wang2021end,zhou2018end,dai2019transformer,lei2020mart}, in both explanatory hypothesis reasoning and premise description, demonstrating the effectiveness of our whole model design. Compared to the other AI models, {{\textsc{Reasoner}}} also yields a relatively smaller performance drop from premise description to hypothesis reasoning. This suggests that {{\textsc{Reasoner}}} can make better use of the context of observed events to infer the explanatory hypothesis. \textbf{iv)} Although our {{\textsc{Reasoner}}} shows more promising results, there still remains a significant gap to human performance, which awaits more sophisticated abductive reasoning models to conquer. \paragraph{User Study.} For comprehensive performance assessment, we further carry out a subjective evaluation, based on pairwise model comparison. Specifically, we randomly sample \cnum{500} examples from unique videos in {VAR} \texttt{test}.
Three volunteers are presented with the outputs of a pair of systems (\ie, {{\textsc{Reasoner}}} \textit{vs} PDVC~\cite{wang2021end} or human) on the sampled examples, and asked to judge which one is better, or whether they are ``equally good'' or ``equally bad''. The human preference results are collected in Table$_{\!}$~\ref{tab:User_study}, and again the statistics for premise events and explanation events are presented separately. The human subjective judgments are generally accordant with the trends reflected by Table~\ref{tab:vad}. Specifically, the human pairwise comparison results confirm the superiority of {{\textsc{Reasoner}}} over PDVC, the second-best model in Table~\ref{tab:vad}: {{\textsc{Reasoner}}} receives \cnum{34.2} and \cnum{29.9} percent preference votes on the premise description and explanatory hypothesis, respectively. However, human-written hypotheses and descriptions are much more favorable than our results, showing again that {VAR} is a very challenging task. \paragraph{Qualitative Analysis.} A test video example from the {VAR} dataset is shown in Fig.~\ref{fig:qualitative}. It contains the explanatory hypotheses and premise descriptions from our {{\textsc{Reasoner}}} and {other competitors~\cite{wang2021end,lei2020mart,zhou2018end}} as well as the groundtruth sentences. We find that our {{\textsc{Reasoner}}} is able to discover and correctly describe the cause-effect chain, and hence generate a plausible hypothesis, \ie, \textit{making a small splash}, that well explains the observed events, \ie, \textit{standing on the podium}. In contrast, the other competitors typically produce unsatisfactory results, especially for the explanatory hypothesis. \noindent solute position embedding in the encoder and adopts only a single decoder, \ie, $\mathcal{D}^{0}$. The results in the first two rows reveal that contextualized directional position embedding (\S\ref{sec:ec}) consistently improves the performance on all three metrics.
Moreover, from the first and third rows we can observe that confidence-guided multi-step reasoning (\S\ref{sec:dec}) indeed boosts the performance. By further considering the scores in the last row, we can safely conclude that combining the two model designs leads to the best results. \paragraph{Contextualized Directional Position Embedding.} Next, to thoroughly study the impact of our contextualized directional position embedding strategy (\S\ref{sec:ec}), we report the performance of two alternatives in Table$_{\!}$~\ref{tab:AS2}. Specifically, ``absolute'' refers to the widely used, learnable absolute position embedding, while ``directional'' indicates learning a relative position embedding without considering any input context. As seen, our contextualized directional position embedding is significantly better than the two alternatives. \paragraph{Cascaded Reasoning.$_{\!}$} Table~\ref{tab:AS3} reports the performance with different steps of our cascaded reasoning (\S\ref{sec:dec}), \ie, $K\!=\!\{0, 1, \cdots\!, 5\}$. When $K\!=\!0$, only one decoder $\mathcal{D}^{0}$ is adopted and the CIDEr score is just \cnum{32.72}. However, adding refinement decoders greatly improves the score, up to \cnum{36.13} with $K\!=\!3$. The increasing trend gradually saturates beyond $K\!=\!3$. We therefore use $K\!=\!3$ as our default setting to balance performance and inference efficiency. \paragraph{Confidence Embedding.$_{\!}$} We inject sentence scores into~the cascaded reasoning for guiding information flow (Eq.$_{\!}$~\ref{eq:relative2}).~As shown in$_{\!}$ Table$_{\!}$~\ref{tab:AS4}, removing the confidence embedding degrades the performance, \eg, \cnum{36.13}$\rightarrow$\cnum{35.22} in terms of CIDEr. \paragraph{Training Objective.} Finally we examine our training objective design (\S\ref{sec:train}). Table$_{\!}$~\ref{tab:AS5} demonstrates the beneficial impact of the hypothesis reconstruction loss $\mathcal{L}_{\text{Aux}}$ (Eq.\!~\ref{eq:aux}).
\vspace{-2pt} \subsection{Performance on DVC Task}\label{sec:exp:dvc} \vspace{-2pt} For completeness, we report performance on the DVC task. \paragraph{Dataset.} As a gold-standard dataset for DVC, ActivityNet{\ms}Captions\tcite{krishna2017dense}{\ms}contains{\ms}a{\ms}total{\ms}of{\ms}20k{\ms}untrimmed{\ms}videos (\cnum{10009}/\cnum{4917}/\cnum{5044} for \texttt{train}/\texttt{val}/\texttt{test}). Each video lasts 120s and is annotated with 3.65 temporally-localized sentences on average. Following~\cite{zhou2018end,lei2020mart,song2021towards}, the \texttt{val} set is further split into two non-overlapping subsets: \texttt{ae-val} with \cnum{2460} videos and \texttt{ae-test} with \cnum{2457} videos. \paragraph{Evaluation Metric.} As in\tcite{song2021towards,lei2020mart,zhou2018end}, the BLEU@4~\cite{papineni2002bleu}, METEOR~\cite{banerjee2005meteor}, and CIDEr~\cite{vedantam2015cider} metrics are used for evaluation. \paragraph{Quantitative Result.} {{\textsc{Reasoner}}} is trained on the \texttt{train} set and evaluated on the \texttt{ae-val} set at the paragraph level. Since we focus only on descriptive quality, the sentences are generated from a provided list of events, as in~\cite{park2019adversarial,lei2020mart,ji2021hierarchical}. As shown in Table~\ref{tab:densecap}, {{\textsc{Reasoner}}} outperforms state-of-the-art DVC models over all the metrics, \eg, a $+2.81$ gain in CIDEr. This demonstrates the strong reasoning ability of {{\textsc{Reasoner}}} and underscores the value of our VAR task in promoting innovations in powerful video-language models.
\begin{table}[t] \centering\small \resizebox{0.44\textwidth}{!}{ \hspace{-2ex} \setlength\tabcolsep{2pt} \renewcommand\arraystretch{1.} \begin{tabular}{c||ccc} \hline\thickhline \rowcolor{mygray} Method & BLEU@4 & METEOR & CIDEr \\ \hline \hline \subt{50}{21}{HSE~\cite{zhang2018cross}}{\sub{ECCV18}} & {\void{20}{15}{9.84}} & {\void{20}{15}{13.78}} & {\void{20}{15}{18.78}} \\ \subt{50}{21}{Trans-XL~\cite{dai2019transformer}}{\sub{ACL19}} & {\void{20}{15}{10.39}} & {\void{20}{15}{15.09}} & {\void{20}{15}{21.67}} \\ \subt{50}{21}{VTrans~\cite{zhou2018end}}{\sub{CVPR18}} & {\void{20}{15}{9.75}} & {\void{20}{15}{15.64}} & {\void{20}{15}{22.16}} \\ \subt{50}{21}{MART~\cite{lei2020mart}}{\sub{ACL20}} & {\void{20}{15}{10.33}} & {\void{20}{15}{15.68}} & {\void{20}{15}{23.42}} \\ \subt{50}{21}{PDVC~\cite{wang2021end}}{\sub{ICCV21}} & {\void{20}{15}{11.80}} & {\void{20}{15}{15.93}} & {\void{20}{15}{27.27}} \\ \hline \textbf{{\textsc{Reasoner}}} & {\bbetter{20}{15}{\textbf{12.45}}{0.65}} & {\bbetter{20}{15}{\textbf{16.43}}{0.50}} & {\bbetter{20}{15}{\textbf{30.08}}{2.81}} \\ \hline \end{tabular} } \captionsetup{font=small} \caption{\small $_{\!\!}$\textbf{Quantitative$_{\!}$ results$_{\!}$} (\S\ref{sec:exp:dvc})$_{\!}$ on$_{\!}$ the$_{\!}$ \texttt{ae-val}$_{\!}$ set$_{\!}$ of$_{\!}$ ActivityNet Captions~\cite{krishna2017dense}. The scores are mainly borrowed from~\cite{wang2021end}. } \label{tab:densecap} \vspace{-12pt} \end{table} \vspace{-4pt} \section{Conclusion} \vspace{-4pt} We introduce VAR~({Visual Abductive Reasoning}) -- a novel task that investigates the abductive reasoning ability of machine intelligence in the visual world. We establish {{\textsc{Reasoner}}}, a new Transformer-based visual-language model, which captures the context of the visual premise in a causality-aware manner, and generates premise descriptions and hypothesis sentences in a confidence-guided, step-by-step fashion.
{{\textsc{Reasoner}}} shows promising results on both VAR~and dense video captioning tasks. We also observe that a large headroom remains for AI systems on VAR, which we expect to encourage exciting research avenues in the future.
\section{Introduction} \label{sec:introduction} At present, there is widespread interest in interfaces and heterostructures between SrTiO$_3$ (STO) and polar perovskite materials such as LaAlO$_3$ (LAO). Transition metal oxides are characterized by strong local interactions that often lead to novel magnetic, superconducting, or orbital-ordered phases that may be tailored by interface engineering.\cite{Zubko:2011ho} The specific interest in STO was sparked by the observation of a two-dimensional electron gas (2DEG) at a LaTiO$_3$/STO interface,\cite{Ohtomo:2004hm} and by subsequent observations\cite{Reyren:2007gv,Brinkman:2007fk,Li:2011jx,Dikin:2011gl,Bert:2011,Kalisky:2012wf} that these 2DEGs exhibit ferromagnetism and superconductivity. The ability to tune LAO/STO interfaces through metal-insulator and superconductor-insulator transitions by application of a gate voltage\cite{Thiel:2006eo,Caviglia:2008uh,Bell:2009eo,Liao:2011bk} has raised questions about the role of quantum criticality\cite{Schneider:2009gt} and the origins of superconductivity at low electron density.\cite{Edge:2015fj,Gorkov:2016dd,Ruhman:2016} The 2DEGs reside primarily in the STO and extend very little into the cap layer,\cite{Popovic:2008ft,Son:2009wb,Delugas:2011ih} and consequently the basic elements of the electronic structure are similar for a variety of cap layer materials,\cite{Pentcheva:2007fn,Banerjee:2015wr,Chang:2013iq,Cancellieri:2013wa} and even for bare STO surfaces.\cite{SantanderSyro:2011hf,Meevasana:2011bh,Walker:2015} Band structure calculations for LAO/STO interfaces\cite{Popovic:2008ft,Son:2009wb,Delugas:2011ih} predict that the majority of the conducting electrons reside in the TiO$_2$ planes adjacent to the interface, and occupy bands with $d_{xy}$ symmetry, while occupied bands with $d_{xz}$ and $d_{yz}$ symmetry extend farther into STO.
Because of the differences in their spatial extent, the $d_{xy}$ bands at the interface should be much more strongly affected by interfacial roughening than the $d_{xz}/d_{yz}$ bands,\cite{Popovic:2008ft} and indeed Hall measurements have been interpreted in terms of a two-component system with two distinct mobilities.\cite{Kim:2010fl,Lerer:2011bp,Jost:2014uz,Joshua:2012bl,Guduru:2013iz} A key feature of STO interfaces is that STO has an extremely high dielectric permittivity ($\epsilon \sim 10^4 \epsilon_0$, with $\epsilon_0$ the permittivity of free space) at low temperatures and weak electric fields, which strongly influences the profile of the charge density near the interface. Importantly, $\epsilon$ is a strong function of temperature and electric field,\cite{Hemberger:1995dd,Dec:2005cr} so that the charge density profile can change dramatically with both temperature $T$ and gate voltage. To understand this, several calculations\cite{Copie:2009ev,Stengel:2011hy,Khalsa:2012fu,Park:2013gf,Gariglio:2015jx,Reich:2015ut,Peelaers:2015fh} have been made based on tight-binding or continuum models that build in relevant properties of the dielectric function. These phenomenological approaches have tended to focus on the nonlinear response of $\epsilon$ to the electric field as a way to understand the doping-dependence of the charge profile near the interface, and most ignore the nonlocal dielectric response that is inferred from the strong phonon dispersion at small wavevectors.\cite{Cowley:tr} One notable exception is Ref.~\onlinecite{Khalsa:2012fu}, which treats the lattice polarization ${\bf P}({\bf r})$ within a Landau-Devonshire approximation that inherently includes nonlocal and nonlinear effects. Ref.~\onlinecite{Khalsa:2012fu} focused on the doping dependence of the band structure at fixed temperature. Here, we extend their model to perform a systematic study of the temperature-dependent band structure of a generic STO interface. 
As with previous work,\cite{Copie:2009ev,Stengel:2011hy,Khalsa:2012fu,Park:2013gf,Gariglio:2015jx} we find that the 2DEG at the STO interface can be divided into a quantum two-dimensional (2D) region that extends approximately 10 STO layers in from the interface, and a quasi-three-dimensional (3D) region that extends deep into the STO. The 2D quantum region is dominated by a band with $d_{xy}$ character that is weakly temperature-dependent at typical doping levels. In contrast, the lowest energy $d_{xz/yz}$ bands extend much farther into the STO film, and are strongly temperature-dependent. As a consequence, there is a substantial shift of charge away from the interface as temperature is lowered from 300~K to 10~K. We show that this leads to large differences in the photoemission spectra at low and high temperatures. The model employs a tight-binding approximation for the electrons, in which interactions are treated within a self-consistent field approximation. The electrons couple to the polarization charge density $-\nabla\cdot {\bf P}$, where the polarization ${\bf P}$ is calculated from a Landau-Devonshire energy that depends explicitly on temperature and electric field. The model is agnostic about the doping mechanism, and simply assumes a confining potential at the interface due to a uniform external 2D charge density as one would expect from an electronic reconstruction,\cite{Nakagawa:2006gt} from oxygen vacancies at the LAO surface,\cite{Bristowe:2014fc} or from application of a gate voltage. Alternative doping mechanisms such as O vacancies that accumulate at the interface\cite{Zhong:2010if,Bristowe:2014fc} are beyond the scope of this work. Despite its complexity, this model neglects certain complicating aspects of the STO band structure that are not expected to change the broad trends identified in our calculations.
Notably, we ignore spin-orbit coupling, which mixes the different $t_{2g}$ orbitals and breaks the 3-fold $t_{2g}$ band degeneracy at the $\Gamma$ point.\cite{Caviglia:2010jv,BenShalom:2010kv,Khalsa:2012fu,Zhong:2013fv} By so doing, we are able to study systems of up to 200 layers with a 2D grid of $200\times 200$ ${\bf k}$-points; however, this means that some details of the band structure, especially at low charge densities, will be inaccurate.\cite{Khalsa:2012fu} We have also ignored the renormalization of the band masses by electron-phonon\cite{Cancellieri:2016fw} and electron-electron interactions,\cite{Tolsma:2016dy} and the effects of antiferrodistortive rotations of the unit cell below temperatures of 105 K.\cite{Mattheiss:1972dt,Tao:2016vv} While these will affect our results quantitatively, the qualitative aspects of the calculations should be robust. We describe the model in Sec.~\ref{sec:method}, and results of the calculations are given in Sec.~\ref{sec:results}. First, the temperature-dependence of the charge distribution is described in Sec.~\ref{sec:temperature} for low, intermediate, and high electron densities (relative to typical experimental densities). These results are then discussed in the context of the temperature- and doping-dependent band structure in Sec.~\ref{sec:bands}. One direct experimental measure of the band structure is angle-resolved photoemission (ARPES), and in Sec.~\ref{sec:arpes} we focus on the implications of our calculations for ARPES. We finish in Sec.~\ref{sec:local_v_nonlocal} with a brief examination of a particular aspect of our model, namely the role of nonlocal response in the dielectric function, which is shown to qualitatively affect the charge distribution near the interface at low temperatures. Finally, in Sec.~\ref{sec:discussion} we propose that 3D tail states, which are ubiquitous in our calculations, form the high-mobility component of the electron gas that is widely observed in magnetotransport experiments.
\section{Method} \label{sec:method} \begin{figure} \centering \includegraphics[width=\columnwidth,natwidth=610,natheight=642]{Figs/sketch3} \caption{(Color online) Sketch of a model STO/LAO interface. $N$ unit cells of STO are stacked below an insulating LAO film in alternating TiO$_2$ and SrO layers in the [001] direction. Electronic reconstruction, gating, and surface O vacancies transfer charge from the top AlO$_2$ layer to the interface, leaving a residual 2D charge density $\sigma^s$ on the AlO$_2$ surface that attracts STO conduction electrons to the interface. The model is discretized along the $z$ direction, and assumes that the conducting TiO$_2$ layers are separated by blocks of dielectric; the polarization ${P_{i_z}}$ and electric field ${E_{i_z}}$ are therefore defined in the regions between the TiO$_2$ layers. The conduction electrons in layer $i_z$ have 2D charge density $\sigma^f_{i_z}$, while the bound charge density due to the polarization is $\sigma^b_{i_z} = P_{i_z} - P_{i_z+1}$. We assume translational invariance in the planes, so the polarization, field, and electron density depend only on the layer index $i_z$. An extra fictitious dielectric layer ($i_z = N+1$) is added to facilitate handling the boundary condition $P_{N+1} = 0$ at the bottom of the STO substrate. } \label{sketch} \end{figure} Our interface model has two distinct pieces: a self-consistent tight-binding description of the electronic bands and a Landau-Devonshire description of the polarization. We begin with an overview of the model before discussing the two pieces in detail. Figure \ref{sketch} shows the model's structure. We consider a thick film of $N$ STO layers stacked in the [001] direction beneath an insulating cap layer. In the figure, the cap is taken to be LAO, but our model only requires that it has a sufficiently wide band gap that it can be ignored. 
We assume that some combination of O-vacancy formation on the surface AlO$_2$ layer, electronic reconstruction, and application of a gate voltage transfers electrons to the STO interface, leaving a residual positive charge $\sigma^s$ (indicated by ``+'' signs) on the AlO$_2$ surface. This residual charge creates an electric field that confines the STO conduction electrons to the interface. The model is discretized along the $z$ direction (perpendicular to the interface). We treat the STO as a set of conducting TiO$_2$ planes separated by layers of dielectric. As shown in Fig.~\ref{sketch}, the polarization and electric field are defined in the dielectric layers, while the charge density is confined to the 2D TiO$_2$ planes. While the discretization process clearly limits the usefulness of the model at sub-unit cell length scales, it does nonetheless capture longer wavelength physics. We assume that we have translational invariance in the planar directions, so that the polarization, electric field, and charge density depend only on the layer index $i_z$. Then, by symmetry, the polarization and electric field vectors ${\bf P}$ and ${\bf E}$ must point in the $z$ direction. The 2D charge density in the $i_z$th TiO$_2$ plane has two contributions: a free charge density $\sigma^f_{i_z} $ due to the conduction electrons and a bound charge density $\sigma^b_{i_z} = P_{i_z} - P_{i_z+1}$ due to the polarization gradients. We require boundary conditions for both the electric field and the polarization. In the layered geometry, and for a fixed $\sigma^s$, the electric field in the STO is independent of the dielectric permittivity of the cap layer. For simplicity, then, we take the polarization to be zero above the interface (ie. $P_0 = 0$) and the electric field above the first STO layer is therefore (by Gauss' law) $E_0 = \sigma^s/\epsilon_0$. 
At large $z$, we expect the electric field and the polarization to be screened by the free charge density: to handle this, we set the electric field in the $N$th STO layer to zero (ie. $E_N=0$), and add a fictitious $(N+1)$th layer in which $\sigma^f_{N+1} = P_{N+1} = 0$. \subsection{Electronic Hamiltonian} \label{sec:ham} STO has a 3.3 eV band gap between filled O~$2p$ orbitals and empty Ti t$_{2g}$ orbitals. For an electron-doped interface we therefore include only the t$_{2g}$ orbitals in our model. We adopt a tight-binding Hamiltonian with three orbitals per unit cell, having $d_{xy}$, $d_{xz}$, and $d_{yz}$ character. We consider only hopping between nearest-neighbor Ti atoms, and neglect matrix elements between orbitals of different symmetry: these vanish in a cubic lattice, and are assumed small provided the lattice distortions are small. Spin-orbit coupling also mixes different orbital types near band degeneracies, but as mentioned above, we gain a strong computational advantage by ignoring this effect. We assume we have translational invariance with periodic boundary conditions in the $x$ and $y$ directions, and apply open (hard-wall) boundary conditions in the $z$ direction. With these assumptions, we write the effective Hamiltonian for the STO conduction electrons as \begin{equation} \label{H} \hat H^{\mathrm{eff}}=\hat H_0 +\hat V^{\mathrm{ext}}+\hat V^{\mathrm{SC}}[\sigma^f,\sigma^b], \end{equation} where $\hat H_0$ is the tight-binding Hamiltonian for the inter-orbital hopping, $\hat V^{\mathrm{ext}}$ is the external potential energy due to the charge at the LAO surface, and $\hat V^{\mathrm{SC}}$ represents the self-consistent electrostatic potential energy due to both the free charge density $\sigma^f_{i_z}$ and the bound charge density $\sigma^b_{i_z}$ at the TiO$_2$ planes.
The tight-binding term is \begin{equation} \hat H_0=\sum_{i_z,j_z}\sum_{\bf {k}}\sum_{\alpha\beta \sigma} c^\dagger_{i_z{\bf k}\alpha\sigma}{t}_{i_z\alpha,j_z\beta}({\bf k}) c_{j_z{\bf k}\beta\sigma}, \end{equation} where $c_{i_z{\bf k}\beta\sigma}$ is the annihilation operator for an electron with spin $\sigma$ in layer $i_z$ and orbital type $\beta$, ${\bf k}= (k_x,k_y)$ is a 2D wavevector, and ${t}_{i_z\alpha,j_z\beta}({\bf k})$ is an element of the tight-binding matrix, \begin{equation} {\bf t}({\bf k}) =\left [ \begin{array}{cccccc} {\bf E}({\bf k}) & {\bf T} & \ldots \\ {\bf T} & {\bf E}({\bf k})\\ & &\ddots\\ & & & {\bf E}({\bf k}) & {\bf T}\\ & & & {\bf T} & {\bf E}({\bf k}) \end{array}\right ] , \end{equation} where ${\bf E}({\bf k})$ and ${\bf T}$ are matrices in the orbital basis, \begin{eqnarray} {\bf E}({\bf k}) &=&\left [ \begin{array}{cccccc} \xi_{xy}({\bf k}) & 0 & 0 \\ 0& \xi_{xz}({\bf k})&0\\ 0&0 & \xi_{yz}({\bf k}) \end{array}\right ] \\ {\bf T} &=&\left [\begin{array}{cccccc} -t^\perp & 0 & 0 \\ 0& -t^\parallel&0\\ 0&0 &-t^\parallel \end{array}\right], \end{eqnarray} and \begin{eqnarray} \nonumber &&\xi_{xy}({\bf k})=\epsilon_{t_{2g}}-2t^\parallel (\cos k_x a+\cos k_y a ),\\ &&\xi_{xz}({\bf k})=\epsilon_{t_{2g}}-2t^\parallel \cos k_x a-2t^\perp \cos k_y a, \label{eq:ea}\\ &&\xi_{yz}({\bf k})=\epsilon_{t_{2g}}-2t^\perp \cos k_x a-2t^\parallel \cos k_y a, \nonumber\end{eqnarray} are the planar dispersions. Here, $\epsilon_{t_{2g}}$ is the on-site orbital energy (which can be set to 0), and $a$ is the STO lattice constant. For a given symmetry of $t_{2g}$ orbital there are two distinct hopping processes between nearest-neighbor Ti atoms: the hopping amplitude is $t^\parallel$ between Ti atoms in the same plane as the orbital (eg.\ the $x$-$y$ plane for $d_{xy}$ orbitals), while it is $t^\perp$ perpendicular to the plane of the orbitals (eg.\ along the $z$ direction for $d_{xy}$ orbitals).
Since nearest-neighbor $d_{xy}$ orbital wavefunctions overlap more in the $x$-$y$ plane than along the $z$-axis, $t^\parallel \gg t^\perp $. Values for $t^\|$, $t^\perp$, and other model parameters are given in Table~\ref{cons}. \begin{table} \begin{tabular}{l | r} \hline \multicolumn{2}{c}{Model parameters} \\ \hline $t^\parallel $& 0.236 eV\\ $t^\perp$ & 0.035 eV\\ $a$ & 3.9 \AA \\ M & 24 amu\\ Q & $8.33e$ \\ $\omega_0$ & $2.5 \times 10^{13}$ s$^{-1}$ \\ $\omega_1$ & $1.7 \times 10^{13}$ s$^{-1}$ \\ $\alpha_1$ & $1.15a$ \\ $\alpha_2$ & $5a$ \\ $\epsilon_\infty$ & $5.5\epsilon_0$\\ $T_0$ & $1.46\times 10^4$ K\\ ${T_s}$ & 15 K\\ ${\xi}$ & 1.45\\ $\gamma$ & 63 eV$\cdot$\AA$^{-4}$ \\ \hline \end{tabular} \caption{Model parameters used in our calculations. Values are taken from Ref.~\onlinecite{Khalsa:2012fu} except for $T_0$, ${\xi}$, ${T_s}$, and ${\gamma}$, which are obtained by fitting to the temperature- and field-dependence of the experimental dielectric susceptibility (Appendix \ref{app:A}).} \label{cons} \end{table} Assuming that the LAO surface charge is uniformly distributed, we obtain a simple description for the potential energy of an electron in the confining field, \begin{equation} \hat V^{\mathrm{ext}}= \frac{\sigma^s e }{2\epsilon_{\infty}}\sum_{\bf {k}}\sum_{i_z\alpha\sigma} z c^\dag_{i_z\bf {k}\alpha\sigma}c_{i_z\bf {k}\alpha\sigma}, \end{equation} where $\epsilon_\infty$ is the optical dielectric constant due to electronic screening, and $z= i_z a$ is the distance from layer $i_z$ to the interface. 
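As a concrete check of the dispersions in Eq.~(\ref{eq:ea}), the following minimal Python sketch (purely illustrative; function and variable names are our own) evaluates $\xi_\alpha({\bf k})$ with the Table~\ref{cons} hopping parameters:

```python
import numpy as np

# Illustrative sketch of the planar t2g dispersions, Eq. (eq:ea),
# with hoppings from Table I: t_par = 0.236 eV, t_perp = 0.035 eV,
# a = 3.9 Angstrom; the on-site energy eps_t2g is set to zero.
t_par, t_perp, a = 0.236, 0.035, 3.9

def xi(orbital, kx, ky, eps_t2g=0.0):
    """Planar dispersion xi_alpha(k) in eV for alpha in {xy, xz, yz}."""
    if orbital == "xy":
        return eps_t2g - 2 * t_par * (np.cos(kx * a) + np.cos(ky * a))
    if orbital == "xz":
        return eps_t2g - 2 * t_par * np.cos(kx * a) - 2 * t_perp * np.cos(ky * a)
    if orbital == "yz":
        return eps_t2g - 2 * t_perp * np.cos(kx * a) - 2 * t_par * np.cos(ky * a)
    raise ValueError(orbital)

# At the zone center the xy minimum (-4 t_par, about -0.94 eV) lies well
# below the xz/yz minima (-2 t_par - 2 t_perp, about -0.54 eV),
# reflecting t_par >> t_perp.
band_bottom_xy = xi("xy", 0.0, 0.0)
band_bottom_xz = xi("xz", 0.0, 0.0)
```

Evaluating $\xi_{xz}$ along $k_x$ and then along $k_y$ makes the heavy/light in-plane mass anisotropy of the $xz/yz$ bands directly visible.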
Under a similar assumption, the self-consistent potential energy due to both the 2DEG and the 2D bound charge density is \begin{eqnarray} \nonumber \hat V^{\mathrm{SC}}[\sigma^f,\sigma^b]&=& \frac{e}{2 \epsilon_\infty } \sum_{\bf {k}}\sum_{i_z\alpha\sigma}\sum_{j_z} [\sigma^f_{j_z}+\sigma^b_{j_z}]\\ &&\times (\vert z^\prime -z \vert- z)c^\dag_{i_z\bf {k}\alpha\sigma}c_{i_z\bf {k}\alpha\sigma}, \label{eq:Vsc} \end{eqnarray} where $z^\prime =j_z a$, $\sigma^f_{j_z}= -e \sum_\beta n_{j_z\beta}/a^2$ is the 2D charge density in layer $j_z$ and $n_{j_z\beta}$ is the electron occupation number for orbitals of type $\beta$ in layer $j_z$. The charge density is calculated self-consistently from \begin{equation}\label{n} n_{j_z\beta}=\frac{2}{N_{\bf k}} \sum_{{\bf k}} \sum_{n} | \psi_{j_z \beta,n}({\bf k})|^2 f(\epsilon_{n{\bf k}}), \end{equation} where the factor of 2 is for spin, $\epsilon_{n{\bf k}}$ and $\psi_{j_z \beta,n}({\bf k}) $ are the energy eigenvalues and eigenstates of $\hat H^\mathrm{eff}$ respectively, and $ f(\epsilon_{n{\bf k}})$ is the Fermi-Dirac distribution function. The bound charge density is $\sigma^b_{j_z} = P_{j_z} - P_{j_z+1}$, where the polarization $P_{j_z}$ is obtained from the Landau-Devonshire model discussed in the next section. Because we have neglected contributions to the Hamiltonian that mix different orbital symmetries, each band has a well-defined orbital character. As a consequence, the band index $n$ can be written in the form $\tilde n\alpha$ where $\alpha$ is one of $xy$, $xz$, or $yz$ and $\tilde n$ is an integer labeling bands of type $\alpha$ (the $1xy$ band is the lowest-energy $xy$ orbital character band, etc.). 
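The self-consistent occupations of Eq.~(\ref{n}) amount to filling the ${\bf k}$-resolved subbands with Fermi-Dirac weights and projecting onto layers with the ${\bf k}$-independent eigenvector weights. A schematic implementation (array shapes and names are our own, and the chemical potential is taken as given rather than solved for):

```python
import numpy as np

kB = 8.617e-5  # Boltzmann constant in eV/K

def fermi(eps, mu, T):
    """Fermi-Dirac distribution, in a numerically stable tanh form."""
    return 0.5 * (1.0 - np.tanh((eps - mu) / (2.0 * kB * T)))

def layer_density(eps_nk, psi_layer, mu, T):
    """Layer occupations in the spirit of Eq. (9).

    eps_nk    : (n_bands, N_k) subband energies over the 2D k-grid
    psi_layer : (N_layers, n_bands) layer weights |psi_{iz,n}|^2,
                which are k-independent in this model
    Returns electrons per 2D unit cell in each layer (factor 2 = spin).
    """
    occ = fermi(eps_nk, mu, T)          # (n_bands, N_k)
    n_band = 2.0 * occ.mean(axis=1)     # (2 / N_k) * sum over k
    return psi_layer @ n_band           # distribute over layers
```

In the full calculation this step is iterated with the electrostatics until the layer densities stop changing.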
Furthermore, the lack of orbital mixing leads to a particularly simple form of the Hamiltonian such that the eigenvectors $\psi_{j_z \beta,n}({\bf k})$ are independent of ${\bf k}$, and the eigenvalues take the simple form \begin{equation} \epsilon_{\tilde n\alpha {\bf k}} = \epsilon_{\tilde n\alpha {\bf k}=0}+ \xi_\alpha({\bf k}) - \xi_\alpha(0), \end{equation} where $\xi_\alpha({\bf k})$ is given by Eq.~(\ref{eq:ea}). It is thus possible to determine the spectrum at finite ${\bf k}$ from the energy eigenvalues at ${\bf k}=0$; the Hamiltonian therefore needs to be diagonalized only once per self-consistency iteration, rather than at every ${\bf k}$-point. The resulting speed-up allows us to study large system sizes of up to 200 layers with $200 \times 200$ $k$-points. \subsection{Polarization Model} \label{sec:polarization} The high polarizability of STO is due to the presence of a soft transverse optical phonon mode that is associated with an incipient ferroelectric transition. The transition is suppressed by quantum fluctuations, so that the dielectric susceptibility saturates at a characteristic temperature $T_s \sim 15$ K. Here, the induced polarization $P_i$ is defined for unit cell $i = (i_x,i_y,i_z)$ in terms of the normal-mode coordinate $u_i$ and effective charge $Q$ associated with the soft mode via \begin{equation} P_i=\frac{Qu_i}{a^3}. \label{eq:Pu} \end{equation} The normal-mode coordinate represents the amplitude of the lattice distortion, projected onto the soft optical phonon eigenvector,\cite{Lines:2001bn} and $Q$ is a fitting parameter that relates the collective coordinate to the polarization (see Table~\ref{cons}). As discussed above, translational symmetry in the $x$-$y$ plane ensures that $u_i$ and $P_i$ are polarized along the $z$ direction, and that they are independent of $i_x$ and $i_y$. The polarization is obtained by minimizing a simple free energy that includes temperature, electric field, and nonlocal effects.
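The diagonalize-once shortcut can be made explicit: for a given orbital type, the layer problem is a tridiagonal matrix diagonalized at ${\bf k}=0$, and finite-${\bf k}$ energies follow by the rigid shift above. A toy illustration (the confining potential, slab size, and names here are placeholders, not the self-consistent solution):

```python
import numpy as np

# Toy demonstration of the rigid-shift trick for the xy subbands.
N = 20                      # number of STO layers (toy value)
t_hop = 0.035               # hopping along z for xy orbitals (t_perp, eV)
V = 0.05 * np.arange(N)     # placeholder confining potential per layer (eV)

# Layer Hamiltonian at k = 0: potential on the diagonal, -t_hop hopping.
H0 = np.diag(V) + np.diag(-t_hop * np.ones(N - 1), 1) \
               + np.diag(-t_hop * np.ones(N - 1), -1)
eps0 = np.linalg.eigvalsh(H0)          # subband energies at k = 0

def xi_xy(kx, ky, t_par=0.236, a=3.9):
    """Planar xy dispersion (on-site energy set to zero)."""
    return -2 * t_par * (np.cos(kx * a) + np.cos(ky * a))

# Spectrum at finite k is obtained without re-diagonalizing:
k = (0.1, 0.0)
eps_k = eps0 + xi_xy(*k) - xi_xy(0.0, 0.0)
```

Because the eigenvectors carry no ${\bf k}$-dependence, the ${\bf k}$-sum in the density only reweights the same set of subbands.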
Model parameters have been set by fitting to the temperature- and field-dependent dielectric susceptibility measurements of Ref.~\onlinecite{Dec:1998}, while the nonlocal correlations are inferred from neutron scattering measurements of the phonon dispersion.\cite{Cowley:tr} The fitting process is discussed in Appendix~\ref{app:A}, and the model reproduces the measured differential susceptibility with a maximum relative error of $16\% $ for temperatures $0 \leq T \leq 70$ K and $0 \leq E \leq 500$ V/mm; the relative error is $6\% $ at room temperature. The simplest quartic free energy has the form\cite{Khalsa:2012fu} \begin{equation}\label{pot} \frac{U}{N_{2D}}=\frac{1}{2}\sum_{i_z,j_z} u_{i_z}{D}_{i_zj_z}{u}_{j_z}-Q\sum_{{i_z}}{E}_{i_z}{u}_{i_z}+\frac{\gamma}{4}\sum_{i_z}{u}_{i_z}^4, \end{equation} where $N_{2D}$ is the number of 2D unit cells in the $x$-$y$ plane, $D_{i_zj_z}$ is a matrix that contains the force constants between layers $i_z$ and $j_z$, $E_{i_z}$ is the electric field, and ${\gamma}$ is a constant of proportionality for the non-linear response. This latter term is important only at high electron densities, where the electric field is very strong. The potential energy can then be minimized by taking the derivative with respect to $u_{l_z}$ and setting it equal to zero, from which we obtain the constitutive equation \begin{equation} QE_{l_z}=\sum_{j_z}D_{l_z j_z }u_{j_z} +\gamma u_{l_z}^3 \label{u} \end{equation} for $u_{l_z}$. Here, the electric field $E_{l_z}$ is equal to minus the gradient of the total electric potential, which contains contributions from the external surface charge, the bound polarization charge, and the free electron charge.
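If a single layer is considered and the nonlocal coupling is dropped, Eq.~(\ref{u}) reduces to the scalar cubic $QE = Du + \gamma u^3$, which can be solved by Newton iteration; in strong fields the solution approaches $u \simeq (QE/\gamma)^{1/3}$. A sketch with illustrative (not fitted) numbers:

```python
# Hypothetical single-layer (local) solve of Q E = D u + gamma u^3
# by Newton iteration.  D, gamma, and QE below are illustrative
# numbers, not the fitted parameters of Table I.
def solve_u(QE, D, gamma, u=0.0, tol=1e-12, max_iter=100):
    for _ in range(max_iter):
        f = D * u + gamma * u**3 - QE      # residual of the cubic
        df = D + 3.0 * gamma * u**2        # derivative is always > 0
        step = f / df
        u -= step
        if abs(step) < tol:
            return u
    raise RuntimeError("Newton iteration did not converge")

# Weak field: linear response, u ~ QE / D.
u_lin = solve_u(QE=1e-4, D=1.0, gamma=100.0)
# Strong field: cubic term dominates, u ~ (QE / gamma)^(1/3).
u_nl = solve_u(QE=1e4, D=1.0, gamma=100.0)
```

The $(QE/\gamma)^{1/3}$ limit is the origin of the $\gamma^{-1/3}$ scaling of the lattice polarization in the nonlinear regime discussed in Sec.~\ref{sec:results}.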
To obtain $D_{l_zj_z}$, we Fourier transform the phenomenological expression proposed in Ref.~\onlinecite{Khalsa:2012fu}, \begin{equation} D_{q_z}=M \left [\omega^2_0-\omega_1^2e^{-(\alpha_1 q_z)^2/2}-\omega_2^2(T)e^{-(\alpha_2 q_z)^2/2}\right ], \label{eq:Dkz} \end{equation} to model the dispersion of the ferroelectric phonon mode, given by $\omega_{q_z}(T) = \left [ D_{q_z} /M\right]^{1/2}$. The parameter values for $\omega_0$, $\omega_1$, and $\alpha_1$ (Table~\ref{cons}) are taken from Ref.~\onlinecite{Khalsa:2012fu}, but $\omega_2(T)$ and $\alpha_2$ are modified to fit the low temperature dielectric susceptibility. For $\omega_2(T)$, we take the phenomenological form (Appendix~\ref{app:A}) \begin{equation} \omega_2^2(T) =\omega_0^2-\omega_1^2-\frac{Q^2 T_Q^\xi}{M \epsilon_0 a^3 T_0^\xi}, \label{eq:w2T} \end{equation} where $T_Q = T_s \coth(T_s/T)$ is an effective temperature that incorporates quantum fluctuations of the ferroelectric phonon mode.\cite{Kleemann:1998ut} The power $\xi = 1.45$ is chosen to improve the quantitative fit to experiments and the constant $T_0$ is obtained from the zero-field susceptibility $\chi(T) = (T_0/T_Q)^\xi$. While the effective temperature reduces to $T_Q = T$ at high temperatures, giving a Curie-like susceptibility, it saturates at $T_Q = T_s$ at low temperatures; consequently, the divergence at $T=0$ is avoided and the susceptibility saturates at $\chi(T\rightarrow0) = (T_0/T_s)^\xi$. In summary, the self-consistency cycle for $\sigma^f_{i_z}$ and $\sigma^b_{i_z}$ involves solving Eqs.~(\ref{n}) and (\ref{u}) for a given electric field to obtain the electron density and lattice polarization, and then updating the electric field from the resulting potential. 
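The quantum saturation built into Eq.~(\ref{eq:w2T}) can be illustrated through the zero-field susceptibility $\chi(T) = (T_0/T_Q)^\xi$ with $T_Q = T_s\coth(T_s/T)$. A short numerical sketch (names are ours; the parameter values are those of Table~\ref{cons}):

```python
import numpy as np

# Zero-field susceptibility chi(T) = (T0 / T_Q)^xi with the quantum
# effective temperature T_Q = T_s coth(T_s / T); Table I parameters.
T0, Ts, xi_exp = 1.46e4, 15.0, 1.45

def chi(T):
    TQ = Ts / np.tanh(Ts / T)   # T_Q -> T at high T, -> T_s as T -> 0
    return (T0 / TQ) ** xi_exp

# Curie-like decrease at high temperature, saturation below ~T_s:
# chi is a few hundred at 300 K but of order 1e4 as T -> 0, i.e. the
# roughly two-orders-of-magnitude change in epsilon quoted in the text.
chi_room, chi_low = chi(300.0), chi(1.0)
```

The saturation value $\chi(T\rightarrow 0) = (T_0/T_s)^\xi$ follows directly because $\coth(T_s/T)\rightarrow 1$ at low temperature.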
As has been pointed out elsewhere, the self-consistency cycle is numerically unstable\cite{Khalsa:2012fu}, and to address this we have implemented Anderson mixing of the electric potential.\cite{Eyert:1996gv} In addition, we have found that convergence is most readily obtained if the initial guess for the simulations takes the electron density to be $-\sigma^s$ in the 1st STO layer and zero elsewhere. \section{Results} \label{sec:results} In this section, we present the results of our calculations for temperature and doping-dependent electronic structure of the LAO/STO interfaces. Early DFT calculations established\cite{Gariglio:2015jx} that the interface breaks the cubic symmetry of the ideal STO lattice, so that a qualitative difference emerges between $d_{xy}$ orbitals (which are oriented parallel to the interface) and $d_{xz/yz}$ orbitals. The hopping amplitude along the $z$ axis is $t^\perp$ for $d_{xy}$ orbitals and $t^\|$ for $d_{xz}$ and $d_{yz}$ orbitals. Since $t^\| \sim 10 t^\perp$, this corresponds to an effective mass along the $z$ direction that is 10 times larger for $xy$ bands than for $xz$ or $yz$ bands. This difference sets the energy ordering of the bands, such that the lowest-energy band has $xy$ symmetry and is tightly confined to within a few unit cells of the interface; the lowest $d_{xz/yz}$ bands are higher in energy and extend farther from the interface. In an ideal polar catastrophe model, a charge transfer of $0.5$ electrons per unit cell is needed to suppress the potential divergence in the polar cap material. The ideal value of 0.5$e/a^2$ has been measured for GdTiO$_3$/SrTiO$_3$ interfaces,\cite{Moetakef:2011ko} and only sporadically in LAO/STO interfaces;\cite{Guduru:2013iz,Jost:2014uz} in most conducting interfaces typical experimental values of the electron density measured by the Hall effect \cite{Dubroka:2009,Cancellieri:2013wa} range from $10^{13}$ to $10^{14}$ $e$/cm$^{2}$. 
The charge density can be further modulated by a gate voltage, and we therefore perform calculations for three different doping levels that cover common experimental and theoretical values of the 2D charge density: $\sigma^s = 0.5e/a^2$ ($3.3\times 10^{14}$ $e$/cm$^{2}$), as predicted by the polar catastrophe model; $\sigma^s = 0.1e/a^2$ ($6.5 \times 10^{13}$ $e$/cm$^{2}$), which is a typical doping found in LAO/STO interfaces; and $\sigma^s =0.05e/a^2$ ($3.3 \times 10^{13}$ $e$/cm$^{2}$), which is approaching the metal-insulator transition that is observed at $\sim 10^{13}$~$e$/cm$^{2}$. Several calculations have explored the doping dependence of the electronic structure at low $T$,\cite{Copie:2009ev,Stengel:2011hy,Khalsa:2012fu,Park:2013gf,Gariglio:2015jx} and we observe similar trends with doping in our low-$T$ calculations. The main new results of this paper concern how the $T$-dependence of the electronic structure evolves with doping. \subsection{Effect of temperature on the charge distribution} \label{sec:temperature} In this section, we examine the temperature-dependence of the charge distribution for the three representative cases listed above. To minimize finite-size effects, all calculations are for an STO slab of thickness $L=200$ layers (see Appendix~\ref{sec:fs}). We show that there is a pronounced shift of charge density from 2D quantum states that are confined to within $\sim 4$ nm of the interface into 3D tail states that extend hundreds of unit cells into the STO; the degree of this shift depends strongly on doping. \begin{figure}[tb] \centering \includegraphics[width=\columnwidth,natwidth=610,natheight=642]{Figs/n_T_doplog.pdf} \caption{(Color online) Electron density $n(z)$ per unit cell inside an STO slab at different temperatures and dopings. Results are for (a) $\sigma^s= 0.05e/a^2$, (b) $0.1e/a^2$, and (c) $0.5 e/a^2$ at $T=10$~K and $T=300$~K.
The vertical dashed lines define regions A ($z \leq 10a$) and B ($z > 10a$), which roughly correspond to the interface and tail regions. (d)-(f) The total 2D electron density in regions A and B as a function of temperature. The figure shows the first 60 layers of an $L=200$ layer STO slab. } \label{n01} \end{figure} Figure~\ref{n01}(a)-(c) shows the electron density, $n(z)=\sum_{\beta} n_{i_z\beta}$ (where $z=i_za$), for 10~K and 300~K and for low ($0.05e/a^2$), intermediate ($0.1e/a^2$), and high ($0.5e/a^2$) electron densities. As we discuss below, the charge distribution is a mix of surface states with strongly 2D character and tails with 3D character. This is particularly evident in the low-$T$ results in Fig.~\ref{n01}, which show a clear distinction between surface and tail regions. At high $T$, the distinction blurs, and $n(z)$ drops off rapidly in the tail region. The crossover between surface and tail occurs at $z \approx 10a$ ($z\approx 4$ nm), and for discussion purposes we divide the profile into region A ($z \leq 10a$) and region B ($z >10a$). The charge densities $n_A$ and $n_B$ for each region are plotted as a function of $T$ in Fig.~\ref{n01}(d)-(f). There are two key points made by Fig.~\ref{n01}. The first is that the fraction of the total electron density in region A depends on $\sigma^s$. At 300~K, about 90\% of the charge lies in region A for high $\sigma^s$, whereas only about half of the total charge lies in region A at low $\sigma^s$. The second point is that, except at the highest doping levels, $n(z)$ depends strongly on $T$: the charge density near the interface decreases as the temperature is lowered while it increases in the tails. The contrast between low and high charge densities is striking: $n_A$ doubles between 300~K and 10~K for low charge density ($\sigma^s = 0.05e/a^2$), but changes by only 10\% for high charge density ($\sigma^s = 0.5e/a^2$). 
Focusing on the middle ``typical'' value of $\sigma^s = 0.1e/a^2$, we note that about 70\% of the total electron density lies in region A at 300 K, in agreement with Ref.~\onlinecite{Copie:2009ev}, and slightly under half remains at 10~K. One of the most striking features of Fig.~\ref{n01} is that the profile of $n(z)$ near the interface is almost independent of $T$ at the highest charge density, but is strongly $T$-dependent at the lowest charge density. This trend is connected to the nonlinearity of the dielectric response in strong electric fields. When $\sigma^s$ is large, the electric fields near the interface are large, and the nonlinear term ($\gamma u_{l_z}^3$) in Eq.~(\ref{u}) dominates the linear term ($\sum_{j_z}D_{l_z j_z }u_{j_z} $). Because we have taken $\gamma$ to be $T$-independent, $n(z)$ is also $T$-independent in this region. The electric field decreases both as one moves away from the interface, and as one decreases $\sigma^s$; in both regimes, $n(z)$ becomes temperature-dependent because the nonlinear contribution to the dielectric response is small. It should be noted that in the nonlinear regime, the lattice polarization due to an electric field is proportional to $\gamma^{-1/3}$ [from Eq.~(\ref{u})], so that $\gamma$ must change by a relatively large amount to have a significant effect on the charge distribution. Indeed, $\gamma$ has been measured experimentally\cite{Dec:2005cr} below 60~K and was found to be roughly constant down to 30~K, and then to increase by about 50\% as the system was further cooled. This corresponds to a change of only 15\% in the nonlinear dielectric screening. Unless $\gamma$ changes significantly at higher $T$, the assumption of constant $\gamma$ is reasonable. \begin{figure}[tb] \includegraphics[width=\columnwidth]{Figs/V_E_u_L200S01.pdf} \caption{(Color online) Details of the self-consistent solution at low and high temperature for $\sigma^s = 0.1e/a^2$. 
(a) The self-consistent potential energy, (b) the electric field, and (c) the lattice normal mode displacement are shown at 300 K and 10 K.} \label{V01} \end{figure} To better understand the charge deconfinement that occurs at low temperatures, we plot the electronic potential energy (i.e.\ the electron charge times the potential), the electric field, and the normal mode displacement at high and low temperatures in Fig.~\ref{V01} for the intermediate value of $\sigma^s$. Figure~\ref{V01}(a) shows that, in region A, there is a triangular quantum well that confines electrons in 2D quantum states near the interface at all temperatures. In contrast, the potential in region B is strongly temperature-dependent, with a crossover from a deep well at high temperature to a nearly flat potential at $10$~K. This strong $T$-dependence is connected to the linear dielectric function, which changes by two orders of magnitude between 300~K ($\epsilon \approx 300 \epsilon_0$) and 10~K ($\epsilon \sim 10^4\epsilon_0$). Because of the large value of $\epsilon$, the electric field is strongly screened in region B at low temperature [Fig.~\ref{V01}(b)]. According to Gauss' law, \begin{equation} \epsilon_\infty \frac{\partial E(z)}{\partial z} =-en(z)-\frac{\partial P(z)}{\partial z}, \end{equation} where $P(z)$ is the lattice polarization, $E(z)$ is the electric field, and $\epsilon_\infty= 5.5\epsilon_0$ is the optical dielectric constant. Because the electric field is small in region B, we have \begin{equation} en(z)\approx -\frac{\partial P}{\partial z} \end{equation} at $T=10$~K. This means that the electric field generated by the conduction electrons in region B is nearly compensated by the lattice polarization. The normal coordinate $u(z)$ for the soft phonon mode, which is related to the polarization by Eq.~(\ref{eq:Pu}), is shown in Fig.~\ref{V01}(c).
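The near-cancellation $en(z)\approx -\partial P/\partial z$ can be illustrated with a discretized form of Gauss' law; a minimal one-dimensional sketch in arbitrary toy units, with an assumed polarization profile:

```python
import numpy as np

# Toy illustration of the compensation en(z) ~ -dP/dz in region B:
# when the polarization gradient balances the electron charge, integrating
# Gauss' law  eps_inf dE/dz = -e n(z) - dP/dz  leaves E near zero, whereas
# the same n(z) without lattice polarization generates a large field.
e, eps_inf = 1.0, 5.5
z = np.linspace(0.0, 100.0, 2001)
dz = z[1] - z[0]

P = np.exp(-z / 30.0)          # assumed toy polarization profile
dPdz = np.gradient(P, dz)
n = -dPdz / e                  # electron density that compensates dP/dz

E_screened = np.cumsum(-e * n - dPdz) * dz / eps_inf  # full Gauss law, E(0)=0
E_bare = np.cumsum(-e * n) * dz / eps_inf             # same charge, no polarization

print(np.abs(E_screened).max(), np.abs(E_bare).max())
```

With the compensating polarization the field remains at machine-precision zero, while the unscreened field grows to order $P(0)/\epsilon_\infty$.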
Here, we see that $u(z)$ decays with $z$ more slowly at low $T$ than it does at high $T$, consistent with enhanced dielectric screening at low $T$. For completeness, we plot the charge density for intermediate doping as a function of orbital type in Fig.~\ref{nxyz}. This figure shows that, while the interfacial $d_{xy}$ electron density $n_{xy}(z)$ is weakly temperature-dependent, $n_{xz}(z)$ and $n_{yz}(z)$ evolve strongly with $T$ near the interface. In particular, the $d_{xz}$ and $d_{yz}$ bands combined account for 80\% of the charge transfer out of the first 10 layers as the temperature decreases. The different sensitivities of $n_{xy}(z)$ and $n_{xz/yz}(z)$ to temperature follow from the different mass anisotropies of the three bands: both the $xz$ and $yz$ bands are light along the $z$ direction while the $xy$ bands are heavy; the $xz$ and $yz$ wavefunctions are therefore more extended along $z$ than the $xy$ wavefunctions. It is unsurprising that the $xz$ and $yz$ bands are most affected as the confining potential weakens when $T$ is reduced. \begin{figure}[tb] \centering \includegraphics[width=\columnwidth,natwidth=610,natheight=642]{Figs/n_xy_xz_L200S01.pdf} \caption{(Color online) Electron density per unit cell $n_\alpha(z)$ for orbital types (a) $\alpha = xy$, and (b) $\alpha=xz$. Results are shown for temperatures ranging from 300~K to 10~K. Note that $n_{yz}(z) = n_{xz}(z)$. Insets show the electron density on a logarithmic scale. Results are for $\sigma^s = 0.1 e/a^2$.} \label{nxyz} \end{figure} In summary, we arrive at the following scenario: at room temperature, a majority of electrons is confined to quantum states within $\sim 4$~nm of the interface by strong electric fields associated with the surface charge $\sigma^s$; however, as $T$ is reduced, this electric field is increasingly screened by the dielectric response of the STO, causing a partial deconfinement of the electron gas.
This deconfinement is most pronounced at the lowest $\sigma^s$, where approximately half of the interfacial electron density moves into the tail region. Despite the large fraction of electrons in the tails, the associated electric fields are vanishingly small because of the strong dielectric screening. We note in passing that the structure of the tails, and in particular the connection to ferroelectric quantum criticality in the STO substrate, is discussed in detail in Ref.~\onlinecite{Atkinson:2016}. \subsection{Effect of temperature on the band structure} \label{sec:bands} The temperature-dependent band dispersions $\epsilon_{n{\bf k}}$ are shown in Fig.~\ref{band01} for intermediate charge density. The t$_{2g}$ orbital degeneracy is broken by the interface, resulting in multiple orbitally polarized sub-bands.\cite{Popovic:2008ft} The sub-bands consist of light bands (black lines) with $d_{xy}$ orbital character, and two sets of anisotropic bands (blue and red lines) with $d_{xz}$ and $d_{yz}$ orbital character. At all temperatures, the two lowest-energy sub-bands at ${\bf k} = 0$ have $d_{xy}$ orbital character, while $d_{xz}$ and $d_{yz}$ sub-bands appear at higher energies. This structure is consistent with previous DFT calculations \cite{Stengel:2011hy,Zhong:2013fv} and with photoemission experiments.\cite{Walker:2015} \begin{figure}[tb] \center \includegraphics[width=\columnwidth,natwidth=610,natheight=642]{Figs/new_bnd_strc_01.pdf} \caption{(Color online) Self-consistent band structure along ${\bf k} = (k_x,0)$. Results are for (a) 300~K, (b) 100~K, (c) 50~K, and (d) 10~K, and $\sigma^s = 0.1e/a^2$. The Fermi-Dirac distribution function, $f(\epsilon)$, is shown in each panel (green line).} \label{band01} \end{figure} Figure~\ref{band01}(a) shows the $1xy$, $2xy$, $1xz$, and $1yz$ sub-bands at 300 K.
We note that while the electrochemical potential $\mu$ lies below all but the $1xy$ band at 300~K, the thermal energy is sufficient that all bands shown in Fig.~\ref{band01}(a) have significant electron occupation. The $1xy$ band has the highest occupancy, containing about $20\%$ of the total electron density, while the first four bands combined contain approximately half of the total charge. Two significant changes occur as the temperature is lowered: first, there is a significant shift of the electrochemical potential $\mu$ between 300~K and 100~K; second, while the gap between the $1xy$ and $2xy$ bands evolves very little with $T$, the spacing between the remaining bands shrinks significantly. Coincident with this change in the spectrum, there is a shift of the occupied eigenstates towards three-dimensionality. At 300 K, the bands shown in Fig.~\ref{band01}(a) have strong 2D character, and the eigenstates are localized within the first 10 STO layers. This is illustrated in Fig.~\ref{bandwt}, which shows the projected weight $|\psi_{j_z \alpha, n}|^2$ of the first few sub-bands. Figure~\ref{bandwt} shows that the $1xy$ band is localized within 5 layers of the interface at all temperatures, but that the $2xy$ and $1xz/yz$ bands extend twice as far into the STO at 10~K as at 300~K. Higher bands are affected even more by temperature, and the $10xy$ band extends four times as far into the STO at 10~K as it does at 300~K. \begin{figure}[tb] \center \includegraphics[width=\columnwidth,natwidth=610,natheight=642]{Figs/bnd_wt_L200T300-10.pdf} \caption{(Color online) Projected band weights at 10~K (main panel) and 300~K (inset). The figure shows the band weights of the lowest five bands at 10~K and the lowest four bands at 300~K; these bands contain slightly more than half of the total charge. For illustration, the band weight of a high-energy $10xy$ band is also shown at each temperature. 
The projected weight of band $n$ in layer $j_z = z/a$ for orbital type $\alpha$ is $|\psi_{j_z\alpha,n}|^2$, where $\psi_{j_z\alpha, n}$ is the electronic wavefunction. Note that the $xz$ and $yz$ band weights are the same. Results are for $\sigma^s = 0.1e/a^2$.} \label{bandwt} \end{figure} The distribution of charge amongst the bands is also $T$-dependent. At 300~K, 57\% of the charge is contained in the first four bands ($1xy$, $2xy$, $1xz/yz$); at 10~K, this charge is shared amongst the lowest five bands (including $3xy$). Thus, charge spreads away from the interface as $T$ is lowered for two reasons: first, occupied bands become less confined; and second, the density of bands increases, such that higher bands with larger spatial extent become occupied. In particular, the band structure in Fig.~\ref{band01}(d) shows evidence for coexisting 2D and 3D components to the electron gas: states that are confined to the interface region are characterized by bands that are clearly separated from each other at ${\bf k} = 0$, while 3D states are characterized by a dense continuum of bands. Indeed, we have found that the first half-dozen bands do not change much with the STO slab thickness $L$, indicative of quantum interface states; however, the sub-band structure at energies $\gtrsim \mu$ becomes denser as $L$ increases, indicating that these states extend to the back wall of the STO slab, even for $L=200$. Figure~\ref{band01} thus reinforces the narrative that there is a transfer of electrons from 2D quantum states localized within $\sim 10$ unit cells of the interface to extended 3D tails as $T$ is lowered. \begin{figure}[tb] \centering \includegraphics[width=\columnwidth,natwidth=610,natheight=642]{Figs/bnd_strc_dopT300-10A.pdf} \caption{(Color online) Doping- and temperature-dependent band structure of an STO interface. Results are for (a)-(c) 300 K, and (d)-(f) 10 K.
Doping levels are (a), (d) $\sigma^s= 0.05e/a^2$; (b), (e) $\sigma^s= 0.1e/a^2$; (c), (f) $\sigma^s= 0.5e/a^2$.} \label{banddop} \end{figure} Figure~\ref{banddop} compares the calculated band structures at low and high temperature for low, intermediate, and high doping. At all electron densities, the visible portions of the spectra comprise a set of distinct bands with 2D character at 300~K. At 10~K, the spectra consist of a small number of low-energy 2D bands that are clearly separated from a 3D continuum with $\epsilon_{n{\bf k}} \gtrsim \mu$. The low-energy bands are the source of the interfacial component of the charge density in Fig.~\ref{n01}. Consistent with Fig.~\ref{n01}, the 2D bands at high doping [Fig.~\ref{banddop}(c) and (f)] are nearly independent of $T$. In summary, we find that there is a discrete spectrum of quantum 2D states that are confined to within 10 unit cells of the interface, and a higher energy continuum of 3D states that extend hundreds of unit cells into the STO. The principal result of this section is that the 3D states lead to a partial deconfinement of the electrons from the interface at low $T$, and that this deconfinement becomes more pronounced as the total 2D electron density is reduced. \subsection{Spectral function} \label{sec:arpes} The temperature-dependent band structure can be observed by ARPES, and indeed recent ARPES experiments at low temperature have found features consistent with the predicted band structure.\cite{Cancellieri:2013wa,Cancellieri:2016fw} ARPES is a surface-sensitive technique that measures the projection of the spectral function onto the top STO layer; furthermore, photon polarization can be used to selectively probe different orbital symmetries. For direct comparison we therefore calculate $A_{i_z,\alpha}(\omega, {\bf k})$, the projected spectral function in layer $i_z$ for orbital type $\alpha$. 
This is given by \begin{equation} A_{i_z,\alpha}(\omega, {\bf k})=\sum_{n}| \psi_{i_z\alpha, n}({\bf k})|^2 \delta(\omega-\epsilon_{n\bf k}), \label{eq:Akw} \end{equation} where $ | \psi_{i_z\alpha, n}({\bf k})|^2$ is the weight of the $n$th band in layer $i_z$ for orbital type $\alpha$, and $\epsilon_{n\bf k}$ is the dispersion of the $n$th band. The delta-function has a Lorentzian broadening of 0.01 eV, which is comparable to the energy resolution of high-resolution ARPES experiments. \begin{figure}[tb] \centering \includegraphics[width=\columnwidth]{Figs/Spec_fun_01.pdf} \caption{(Color online) Projected spectral function at the interface for quasiparticle energy $\mu$. The left panels show $A_{1,xy}(\mu,{\bf k})$ for $xy$ bands at (a) 300~K, (c) 100~K, (e) 50~K, and (g) 10~K. The right panels present the corresponding spectral function $A_{1,xz}(\mu,{\bf k})$. Results are for $\sigma^s=0.1e/a^2$.} \label{Spec01} \end{figure} We are principally concerned with two main points about the spectral function: the intensity of the various features of the band structure, which is nominally related to the weight of the different bands at the surface; and the size of the apparent Fermi surfaces, which is nominally related to the filling of each band. Because both the band weight and band dispersion change with temperature, as shown in Figs.~\ref{band01} and \ref{bandwt}, we expect that the projected spectral function must also change with temperature. We begin with the case of intermediate electron density. Figure~\ref{Spec01} shows the temperature-dependent spectral function $A_{1,\alpha}(\mu, {\bf k})$ at the interface ($i_z=1$) for quasiparticles at the electrochemical potential $\mu$. The left panels present the evolution of the projected spectral function for the $xy$ bands; the right panels show the corresponding spectral function for the $xz$ bands. (The spectral functions for the $yz$ bands can be obtained by rotating the $xz$ image by $\pi/2$.) 
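Before turning to the figures, Eq.~(\ref{eq:Akw}) with the Lorentzian-broadened delta function can be sketched for a toy two-band model; the dispersions and surface weights below are assumed illustrative values, not the self-consistent ones:

```python
import numpy as np

# Toy evaluation of A_{1,alpha}(w, k) = sum_n |psi_n|^2 * delta(w - eps_n(k)),
# with the delta replaced by a Lorentzian of width eta = 0.01 eV as in the text.
eta = 0.01  # broadening (eV)

def lorentzian(w, e0):
    return (eta / np.pi) / ((w - e0) ** 2 + eta ** 2)

def A_layer(w, k, bands):
    """Projected spectral function: sum over (dispersion, surface weight) pairs."""
    return sum(w2 * lorentzian(w, eps(k)) for eps, w2 in bands)

# assumed toy bands: (dispersion in eV, surface weight |psi_{1,n}|^2)
bands = [
    (lambda k: 0.5 * k ** 2 - 0.10, 0.8),   # "1xy"-like: crosses mu, large surface weight
    (lambda k: 0.5 * k ** 2 + 0.03, 0.05),  # "1xz"-like: above mu, small surface weight
]

mu = 0.0
kF = np.sqrt(2 * 0.10)  # Fermi wavevector of the first toy band
print(A_layer(mu, kF, bands), A_layer(mu, 0.0, bands))
```

On the shell of the surface-localized band the spectral function is set by its surface weight divided by $\pi\eta$, which is why strongly interface-confined bands dominate the projected intensity.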
At 300~K, we observe an intense ring with $xy$ symmetry, corresponding to the $1xy$ band [Fig.~\ref{Spec01}(a)], and a very weak cigar-shaped feature associated with the $1xz$ band [Fig.~\ref{Spec01}(b)]. The disparity between the $xy$ and $xz/yz$ intensities is consistent with the fact that only the $1xy$ band crosses $\mu$ at this high temperature. Indeed, the bottom of the $1xz$ band is $\sim 0.035$~eV above $\mu$, and is only observable in Fig.~\ref{Spec01}(b) because of the finite energy resolution in Eq.~(\ref{eq:Akw}). At 100~K, the intensity of the $1xy$ band decreases slightly, and an intense disk centered at ${\bf k}=0$ appears [Fig.~\ref{Spec01}(c)]. This change in the spectral function reflects both changes in the band structure and a shift of the chemical potential to higher energies [cf.\ Fig.~\ref{band01}(b) and (c)]. At this temperature, multiple $xy$ bands pass within 0.01~eV of the chemical potential; while the $1xy$ band appears as a distinct ring, these remaining $xy$ bands blur together to form a disk. The $1xz$ band [Fig.~\ref{Spec01}(d)] continues to be an order of magnitude less intense than the $xy$ bands, despite the fact that the $1xz$ band dispersion crosses $\mu$ at 100~K. This is because of the small weight of the $1xz$ band at the interface [Fig.~\ref{bandwt}]. Below 50~K, the intensity of the $1xy$ ring does not change [Fig.~\ref{Spec01}(e) and (g)], but the disk intensity increases slightly because higher-energy $xy$ bands shift downwards as $T$ decreases, as shown in Figs.~\ref{band01}(c) and (d). At the lowest temperatures, this disk represents the projection of the 3D tail states onto the surface. The intensity of the $xz$ bands remains an order of magnitude smaller than that of the $xy$ bands [Fig.~\ref{Spec01}(f)]. There is very little change to the apparent spectrum below 50~K.
Focusing on bands of $xy$ symmetry, we note that the apparent filling as determined from the area of the $1xy$ ring is temperature-dependent, and changes by $\sim 20\%$ between 300~K and 100~K. This change does not, however, reflect a 20\% change in the filling of the $1xy$ band, because of the rather large change in $\mu$, which shifts upwards by almost 0.02~eV as $T$ is lowered. Below 100~K, the area of the ring does not change significantly with temperature. Next, the doping-dependence of the spectral function is shown in Fig.~\ref{Spec}. As expected, the surface area of the bands increases with $\sigma^s$, in agreement with Ref.~\onlinecite{Cancellieri:2013wa}; however, it is the temperature-dependence of the intensity that is most striking. The spectral function is almost independent of $T$ at $\sigma^s = 0.5 e/a^2$, which is a direct result of the strongly nonlinear dielectric response in the interface region at high doping. In contrast, at low doping, the intensity of the spectral function at $\mu$ is strongly $T$-dependent, primarily because of the strong $T$-dependence of the chemical potential.
Several groups have performed ARPES experiments on STO interfaces at low temperatures, and the shapes and areas of our calculated Fermi surfaces are in good agreement with those measured at approximately the same doping.\cite{Berner:2013kp,Cancellieri:2013wa,Cancellieri:2016fw} Notably, the $xz$ (and $yz$) bands are more than an order of magnitude weaker than the $xy$ bands in our calculations; and while the relative intensities of the bands observed in ARPES depend on matrix elements, the $d_{xz/yz}$ bands are indeed considerably weaker than the $d_{xy}$ bands.\cite{Berner:2013kp} In summary, our calculations agree with ARPES experiments at low temperatures, and we make two predictions regarding the spectral function $A_{1,\alpha}(\mu,{\bf k})$ at high temperatures: first, that the area of the $1xy$ ring should shrink as $T$ is raised above 100~K; and second, that the intensity of the $d_{xz/yz}$ bands should drop dramatically above 100~K. \begin{figure}[tb] \centering \includegraphics[width=\columnwidth]{Figs/Spec_fun_dop.pdf} \caption{(Color online) Projected spectral function at low, intermediate, and high electron densities, and at $T=10$~K and $T=300$~K. Results are shown for $xy$ bands (rows 1 and 3) and $xz$ bands (rows 2 and 4). } \label{Spec} \end{figure} \subsection{Local and nonlocal dielectric functions} \label{sec:local_v_nonlocal} We finish Sec.~\ref{sec:results} with a brief discussion of the dielectric model used in this work. The dielectric response obtained from Eq.~(\ref{u}) contains both nonlocal and nonlinear contributions to the polarization. The nonlinearity has been discussed previously,\cite{Khalsa:2012fu,Reich:2015ut,Peelaers:2015fh} and was generally found to be important only near the interface for $\sigma^s \gtrsim 10^{14}$ $e$/cm$^2$, consistent with our findings here. In this section, we investigate the effects of the nonlocal dielectric response on $n(z)$.
We compare the charge density profile obtained from the nonlocal matrix of force constants $D_{i_zj_z}$, defined previously, with the one obtained from a local matrix ${\tilde { D}}_{i_zj_z}={\tilde { D}}_{i_z} \delta_{i_z,j_z}$. For purposes of comparison, we choose $\tilde D_{i_z}$ such that it gives the same linear response for a uniform electric field as $D_{i_zj_z}$. If the electric field $E_{l_z}$ and normal coordinate $u_{j_z}$ are independent of position in Eq.~(\ref{u}), we obtain in the weak-field limit \begin{eqnarray} QE &=& \sum_{j_z} D_{i_z j_z} u \nonumber \\ &=& D_{k_z=0} u. \end{eqnarray} To obtain the same result for $\tilde D_{i_z j_z}$, we define ${\tilde { D}}_{i_z}=D_{k_z=0}$. Figure~\ref{nloc/loc} shows the charge density profile at different temperatures for local and nonlocal force constants. At 300~K, the two give the same charge density profile [Fig.~\ref{nloc/loc}(a)]. However, as the temperature is lowered, charge moves away from the interface more rapidly for the local case than for the nonlocal case [Fig.~\ref{nloc/loc}(b)-(d)]. Far from the interface, both cases yield identical results, as found in Ref.~\onlinecite{Reich:2015ut}; this is because we defined $\tilde D_{i_z}$ such that it gives the same homogeneous response as $D_{i_zj_z}$. The behavior shown in Fig.~\ref{nloc/loc} can be understood simply. The dielectric response is connected to a soft optical phonon mode with dispersion $\omega_{\bf k}$ satisfying $D_{\bf k} = M\omega_{\bf k}^2$, where $M$ is the effective mass of the mode. At high temperatures, $\omega_{\bf k}$ has a relatively smooth dispersion; however, the dispersion, and consequently $D_{\bf k}$, develops a sharp feature at low $T$ as the mode softens near ${\bf k}=0$.\cite{Cowley:tr} From the properties of Fourier transforms, it follows that the range of $D_{i_zj_z}$ is greater at low $T$ than at high $T$, or equivalently that the response is more local at high $T$.
This accounts for the similarity between the two models at 300~K. The different charge profiles that emerge at low $T$ indicate that the local dielectric function is more effective at screening the electric field in regions where there are strong field gradients. \begin{figure}[tb] \centering \includegraphics[width=\columnwidth,natwidth=610,natheight=642]{Figs/n_loc_nloc.pdf} \caption{(Color online) Comparison of local and nonlocal models for the dielectric response. The charge density profile for the two models is shown at temperatures (a) $T=300$~K, (b) 100~K, (c) 50~K, and (d) 10~K with $\sigma^s=0.1e/a^2$. The first 60 layers of an $L=200$ layer thick STO slab are shown.} \label{nloc/loc} \end{figure} \section{Discussion and Conclusions} \label{sec:discussion} The calculations in this work are based on a combination of two established models: the dielectric properties of the STO are modeled by a Landau-Devonshire free energy similar to those used to describe the insulating parent compound,\cite{Dec:2005cr,Palova:2009js} while the electronic properties are described by a tight binding model, similar to what is done elsewhere.\cite{Stengel:2011hy,Zhong:2013cr} Unlike conventional semiconductors, the STO dielectric function is strongly temperature- and electric field-dependent. This leads to counterintuitive behavior at STO interfaces; namely, that the electron gas is more strongly confined at high temperatures and electron densities than at low temperatures and electron densities. Consequently, our calculations make predictions that differ from commonly held views regarding the electron distribution in STO interfaces. 
The conventional view is that the electronic properties are dominated by quantum 2D states, and indeed experiments find that the majority of the charge is bound to within $\sim 10$~nm of the interface.\cite{Reyren:2009va,Copie:2009ev,Basletic:2008ja,Dubroka:2010bi} Measurements of the nonlinear Hall coefficient have been modeled by two occupied sub-bands: a low-mobility band containing most of the conduction electrons, and a high-mobility band containing a minority of carriers. The mobilities of the two components vary from sample to sample, and may differ by orders of magnitude.\cite{Joshua:2012bl,Lerer:2011bp,Jost:2014uz,Kim:2010fl,Guduru:2013iz} While the two-band interpretation is conceptually useful, it has been noted that inconsistencies within the two-band analysis suggest a more complicated band structure.\cite{Joshua:2012bl} At low electron densities, the picture is clearer: experiments have found a Lifshitz transition near electron densities of $1.5\times 10^{13}$ cm$^{-2}$,\cite{Joshua:2012bl} which is slightly above the metal-insulator transition at $\approx 10^{13}$ cm$^{-2}$. Below the Lifshitz transition, the magnetic-field dependence of the Hall resistivity is linear, indicating that only a single band is occupied. In contrast, the results reported in this work find a large number of occupied bands at all doping levels, similar to previous calculations.\cite{Copie:2009ev,Stengel:2011hy,Khalsa:2012fu,Park:2013gf} A significant fraction of the occupied bands corresponds to the quasi-3D tail states that extend hundreds of unit cells into the STO substrate. While the fraction of charge contained in the tails is small at high electron densities, it is over 50\% at low electron densities (Fig.~\ref{n01}). Perhaps more interestingly, we have found a strong temperature dependence to the charge distribution at intermediate electron densities, with a pronounced shift of charge into the tails as $T$ is lowered.
The general trend that the charge spreads out as $T$ decreases was observed experimentally;\cite{Copie:2009ev} however, experimental confirmation of quasi-3D tails remains lacking. Indeed, direct observation of the tails may be difficult because, except at the lowest doping levels, the electron density $n(z)$ in the tails is at least an order of magnitude smaller than in the 2D component of the electron gas (Fig.~\ref{n01}). The tails may be most relevant to transport experiments, since interfacial disorder (e.g.\ cation intermixing) is thought to severely reduce the mobility of 2D states near the interface. A proper comparison between theory and experiment requires a detailed disorder model, which is beyond the scope of this work. Nonetheless, we can make a few simple observations based on a crude model for the mobility $\mu_{n}$ of the first few bands ($n= 1xy$, $1xz/yz$, $2xy$). This model assumes that interfacial disorder is the dominant scattering mechanism and that interband scattering can be neglected. These assumptions break down at low doping, first because the interband spacing becomes less than the scattering rate, and second because low-lying bands become part of the 3D continuum and are therefore subject to scattering by defects in the STO substrate. The model is also limited because it provides no information about the mobility of the 3D tails. For qualitative purposes, however, we can assume that the tails behave similarly to bulk STO. \begin{figure} \includegraphics[width=\columnwidth]{Figs/sct_T_dop_L2.pdf} \caption{(Color online) Transport properties of 2D interface states as a function of (a), (c), (e) 2D charge density at fixed temperature, and (b), (d), (f) temperature at fixed 2D charge density. (a), (b) Scattering rate $\hbar/\tau_n$; (c), (d) mobility $\mu_n$; and (e), (f) fraction of the total charge in band $n$ for $n=1xy$, $2xy$, and $1xz$.
The calculations assume that elastic scattering comes predominantly from interfacial disorder (e.g.\ cation intermixing), and that interband scattering processes can be neglected. Contributions from the 3D tails are not included in this figure. } \label{fig:cmpr_to_expt} \end{figure} The simplest ansatz is to take a quenched disorder model in which the Ti site potentials in the first $\lambda$ STO layers adjacent to the interface are chosen from a random box-distribution of width $W$. Experimentally, cation intermixing is found to extend over a few unit cells,\cite{Nakagawa:2006gt} and for concreteness, we arbitrarily take $W=1$~eV and $\lambda=2$; however, the qualitative results do not depend strongly on this choice. Within the Born approximation, the electron lifetime $\tau_{n}$ in band $n$ is \begin{equation} \frac{\hbar}{\tau_{n}} = \frac {\sqrt{m_{x,n}m_{y,n}} W^2 a^2}{24\hbar^2 } \sum_{i_z=1}^\lambda |\Psi_{i_z \alpha,n}|^2, \label{eq:taun} \end{equation} where $m_{x,n}$ and $m_{y,n}$ are the effective mass components for band $n$. The mobility for transport in the $x$-direction is $\mu_n = e \tau_n/m_{x,n}$. The absolute values of the mobility, which depend on our arbitrary choice of $W$, are not especially meaningful; however, the trends with doping and temperature shown in Fig.~\ref{fig:cmpr_to_expt} are. Equation~(\ref{eq:taun}) shows that individual bands' scattering rates depend on the projected band weight $ |\Psi_{i_z \alpha,n}|^2$ onto layers adjacent to the interface. Two clear trends in Fig.~\ref{fig:cmpr_to_expt}, namely that $\mu_n$ increases when either $\sigma^s$ or $T$ is reduced, can be traced back to shifts of the band weight away from the interface (recall, for example, Fig.~\ref{bandwt}). Similarly, Fig.~\ref{fig:cmpr_to_expt} shows that at fixed $T$ and $\sigma^s$ the mobilities of different 2D bands may differ by an order of magnitude or more because they have different band weights at the interface.
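As an order-of-magnitude illustration of Eq.~(\ref{eq:taun}), the following sketch evaluates $\hbar/\tau_n$ and $\mu_n$ for a single band; the effective mass ($0.7\,m_e$) and interfacial weight ($0.5$) are assumed toy values, $W=1$~eV follows the text, and $a=3.905$~\AA\ is an assumed STO lattice constant:

```python
import math

# Born-approximation lifetime hbar/tau = sqrt(mx*my) W^2 a^2/(24 hbar^2) * weight,
# and mobility mu = e*tau/mx.  Mass and interfacial weight are assumed toy values.
hbar = 1.0546e-34      # J s
e = 1.602e-19          # C
m_e = 9.109e-31        # kg
a = 3.905e-10          # m (assumed STO lattice constant)
W = 1.0 * e            # disorder strength, 1 eV in J (from the text)
mx = my = 0.7 * m_e    # assumed light-band effective mass
weight = 0.5           # assumed sum of |Psi|^2 over the first lambda layers

hbar_over_tau = math.sqrt(mx * my) * W**2 * a**2 / (24 * hbar**2) * weight  # J
tau = hbar / hbar_over_tau                                                  # s
mu = e * tau / mx                                                           # m^2/Vs

hbar_over_tau_eV = hbar_over_tau / e
mu_cm2 = mu * 1e4
print(f"hbar/tau = {hbar_over_tau_eV:.3f} eV, mobility = {mu_cm2:.0f} cm^2/Vs")
```

With these inputs the scattering rate comes out at a few tens of meV and the mobility at a few tens of cm$^2$/Vs; both scale as $W^{-2}$ and inversely with the interfacial weight, consistent with the trends in Fig.~\ref{fig:cmpr_to_expt}.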
While significant, the differences in mobilities between bands that are shown in Fig.~\ref{fig:cmpr_to_expt} are much less than the three orders of magnitude difference between high- and low-mobility electrons reported in Refs.~\onlinecite{Guduru:2013iz,Jost:2014uz}. Those experiments instead suggest that the two electronic components live in different environments. With this in mind, we speculate that the low-density high-mobility component of the electron gas observed over a wide range of electron dopings,\cite{Joshua:2012bl,Lerer:2011bp,Jost:2014uz,Kim:2010fl,Guduru:2013iz} may in fact correspond to the 3D tails in our calculations. These tails have very little overlap with the interface, and the scattering of conduction electrons will be determined by the defect density in the STO substrate. The remaining high-density low-mobility component of the electron gas then must correspond to the 2D interface states, whose mobility is limited by interfacial disorder. We point to three experimental observations that are broadly consistent with this proposed scenario: \begin{itemize} \item First, our calculated charge densities in the interface and tail regions roughly correspond to the observed fractions of low and high mobility charges. Ref.~\onlinecite{Guduru:2013iz} reports that for high electron densities, the high-mobility component of their electron gas comprises less than 10\% of the total electron density, while Ref.~\onlinecite{Lerer:2011bp} found that at intermediate densities the high-mobility component contains a third of the total electron density. Similarly, Fig.~\ref{n01} shows that the fraction of the total charge in the tail region at 10~K rises from less than 10\% at high electron density to roughly 50\% at intermediate density. \item Second, the predicted temperature dependence of the mobility is qualitatively consistent with available experiments. 
At intermediate electron densities, Ref.~\onlinecite{Lerer:2011bp} found that the conductivity of the high-density component is nearly independent of $T$ (up to 30~K), while the conductivity of the low-density component drops by an order of magnitude. Similarly, Fig.~\ref{fig:cmpr_to_expt} shows that the mobilities of the interface states are almost constant between 10~K and 30~K, owing to modest changes in the confinement of their wavefunctions to the interface. Conversely, we expect the tail states to exhibit a strong temperature-dependence, assuming that they follow the behavior of bulk STO.\cite{Spinelli:2010dm,Faridi:2016ue} \item Third, at low electron densities, Ref.~\onlinecite{Joshua:2012bl} argued that the electrochemical potential is pinned to the bottom of a heavy band that acts as a charge reservoir. They speculated that this reservoir consists of interfacial $d_{xz/yz}$ bands; however, our calculations find that at 10~K the electrochemical potential is pinned to the bottom of the quasi-3D tail bands (Fig.~\ref{banddop}). Because the density of states in the tails is extremely high compared to the 2D interface states, we argue that the tails provide a more natural explanation for the observed charge reservoir. \end{itemize} We note that there are open questions that are not addressed by the simple arguments presented here. Our model does not predict the Lifshitz transition observed by Ref.~\onlinecite{Joshua:2012bl} at low electron density, for example. Instead, the $1xy$ band in our calculations continuously merges with the 3D continuum as the electron density is lowered. We do not know the reason for this discrepancy, although our neglect of spin-orbit coupling, which is known to be important at low doping, is an obvious candidate. 
It is also not yet clear whether the multiple occupied bands predicted by our calculations are consistent with the two-band interpretation of transport coefficients; in particular, a proper calculation of magnetoresistance with a qualitatively accurate disorder model is required to understand the extent to which our model is compatible with experiments. Finally, we remark that our calculations have implications for the superconducting state that has been observed at STO interfaces. This state has been shown to be 2D, with a characteristic thickness of $\sim 10$~nm inferred from measurements of the critical magnetic field anisotropy.\cite{Reyren:2009va} While this naively seems to contradict the prediction of quasi-3D tails that extend hundreds of unit cells into the bulk, we note that bulk STO is superconducting for 3D electron densities between $6\times 10^{-4}$ and $2\times 10^{-2}$ electrons per unit cell,\cite{Edge:2015fj} such that the lowest-density regions of the tail are not expected to be superconducting. For our ``typical'' case of $\sigma^s = 0.1 e/a^2$, Fig.~\ref{n01}(b) suggests that superconductivity extends roughly 30 unit cells into the STO substrate, in agreement with experiments. In summary, we have explored the temperature- and doping-dependent band structure of model STO interfaces. The calculations presented in this work suggest a significant role for quasi-3D tail states, contrary to a widely held perception that the interfaces are dominated by 2D states. These tail states extend hundreds of unit cells into the STO substrate, and are extremely sensitive to both electron doping and temperature. We have shown that photoemission experiments can be used to probe the temperature-dependent band structure; however, the tail states exist far from the interface and are therefore invisible to ARPES. 
We speculate, however, that the tail states are key to understanding transport experiments, and have provided some qualitative evidence to support this idea. \section*{Acknowledgments} We acknowledge support by the Natural Sciences and Engineering Research Council (NSERC) of Canada.
\section{Introduction}\label{s:intro} The representation dimension of a finite group $G$, denoted by $\rdim (G)$, is the minimal dimension of a faithful complex linear representation of $G$. In this paper we determine the maximal representation dimension of a group of order $p^{n}$. We are motivated by a recent result of N. Karpenko and A. Merkurjev \cite[Theorem 4.1]{km}, which states that if $G$ is a finite $p$-group then the essential dimension of $G$ is equal to $\rdim (G)$. For a detailed discussion of the notion of essential dimension for finite groups (which will not be used in this paper), see \cite{br} or \cite[\S 8]{jly}. We also note that a related invariant, the minimal dimension of a faithful complex {\em projective} representation of $G$, has been extensively studied for finite simple groups $G$; for an overview, see~\cite[\S 3]{tz}. Let $G$ be a $p$-group of order $p^n$ and $r$ be the rank of the centre $Z(G)$. A representation of $G$ is faithful if and only if its restriction to $Z(G)$ is faithful. Using this fact it is easy to see that a faithful representation $\rho$ of $G$ of minimal dimension decomposes as a direct sum \begin{equation} \label{e.decomp} \rho = \rho_1 \oplus \dots \oplus \rho_r \end{equation} of exactly $r$ irreducibles; cf. ~\cite[Theorem 1.2]{mr}. Since the dimension of any irreducible representation of $G$ is $\le \sqrt{[G:Z(G)]}$ (see, e.g.,~\cite[Corollary 3.11]{W03}) and $|Z(G)| \ge p^r$, we conclude that \begin{equation} \label{e.inequality} \rdim(G) \leq rp^{\left\lfloor(n-r)/2\right\rfloor} . \end{equation} Let \[ f_p(n) := \max_{r\in\mathbb{N}}(rp^{\left\lfloor(n-r)/2\right\rfloor}). 
\] It is easy to check that $f_p(n)$ is given by the following table: \begin{center} \vspace{0.1cm} \begin{tabular}{|c| c| c|} \hline $n$&$p$ & $f_p(n)$\\ \hline even & arbitrary & $2p^{(n-2)/2}$\\ \hline odd & odd & $p^{(n-1)/2}$\\ \hline odd, $\ge 3$ & $2$ & $3p^{(n-3)/2}$\\ \hline $1$ & $2$ & $1$ \\ \hline \end{tabular} \end{center} We are now ready to state the main result of this paper. \begin{thm} \label{main} Let $p$ be a prime and $n$ be a positive integer. For almost all pairs $(p,n)$, the maximal value of $\rdim(G)$, as $G$ ranges over all groups of order $p^n$, equals $f_p(n)$. The exceptional cases are \[ \text{$(p,n) = (2,5)$, $(2,7)$ and $(p, 4)$, where $p$ is odd.} \] In these cases the maximal representation dimension is $5$, $10$, and $p + 1$, respectively. \end{thm} The proof will show that the maximal value of $\rdim(G)$, as $G$ ranges over all groups of order $p^n$, is always attained for a group $G$ of nilpotency class $\le 2$. Moreover, if $(p, n)$ is non-exceptional, $n \ge 3$ and $(p, n) \neq (2, 3), (2, 4)$, the maximum is attained on a special class of $p$-groups of nilpotency class $2$. We call these groups {\em generalized Heisenberg groups} since their representation theory looks very similar to that of the usual Heisenberg group (the group of unipotent upper triangular $3\times 3$ matrices); see Section~\ref{sect.rep}. The rest of this paper is structured as follows. In \S \ref{s:heisenberg} we introduce generalized Heisenberg groups and study their irreducible representations. In \S \ref{s:proof}, we prove Theorem~\ref{main}. \subsection*{Acknowledgement} We would like to thank Hannah Cairns, Robert Guralnick, Chris Parker, Burt Totaro, and Robert Wilson for helpful discussions. We are also grateful to the referee for constructive comments. \section{Generalized Heisenberg groups}\label{s:heisenberg} \subsection{Spaces of alternating forms} Let $V$ be a finite-dimensional vector space over an arbitrary field $F$. 
Let $\mathcal{A}(V)$ denote the space of bilinear alternating forms on $V$; that is, linear maps $b:V\otimes V\ra F$ satisfying $b(v,v)=0$. Let $K$ be a subspace of $\mathcal{A}(V)$. Then $K$ defines a map $\omega_K: V\times V\ra K^*$ as follows. Let $j: \mathcal{A}(V)^*\ra K^*$ denote the dual of the natural injection $K\inj \mathcal{A}(V)$. Then $\omega_K$ is defined to be the composition \begin{equation}\label{eq:omega} \xymatrix{ V \times V \ar[r] \ar@/ _1.7pc/[rrr]_{\omega_K} & \Lambda^{2}(V)\ar[r] & \mathcal{A}(V)^*\ar[r]^j &K^{*},\\\\ } \end{equation} where the first map is the natural projection and the second one is the canonical identification of the two spaces. \subsection{Symplectic subspaces} \begin{df} A subspace $K\subseteq \mathcal{A}(V)$ is {\em symplectic} if every nonzero element of $K$ is non-degenerate, as a bilinear form on $V$. \end{df} \begin{rmk} \label{r:symplectic} Equivalently, $K\subset \mathcal{A}(V)$ is symplectic if and only if for every nonzero linear map $K^*\ra F$ the composition $V\times V\rar{\omega_K} K^* \ra F$ is non-degenerate. \end{rmk} Clearly nontrivial symplectic subspaces of $\mathcal{A}(V)$ can exist only if $\dim(V)$ is even. \begin{lem} \label{l:Existence} Suppose $V$ is an $F$-vector space of dimension $2m$. If $F$ admits a field extension of degree $m$ then there exists an $m$-dimensional symplectic subspace $K\subset \mathcal{A}(V)$. \end{lem} \begin{proof} Choosing a basis of $V$, we can identify $\mathcal{A}(V)$ with the space of alternating $2m \times 2m$-matrices. Let $f \colon \Mat_m(F) \to \mathcal{A}(V)$ be the linear map \[ A \mapsto \begin{bmatrix} 0 & A\\ -A^{T} & 0\\ \end{bmatrix} \, . \] If $W$ is a linear subspace of $\Mat_m (F) = \End_F(F^m)$ such that $W \backslash \{0\} \subset {\rm GL}_m(F)$ then $K = f(W)$ is a symplectic subspace. It thus remains to construct an $m$-dimensional linear subspace $W$ of $\Mat_m(F)$ such that $W \backslash \{0\} \subset {\rm GL}_m(F)$. 
Let $E$ be a degree $m$ field extension of $F$. Then $E$ acts on itself by left multiplication. This gives an $F$-vector space embedding $\Psi \colon E \hookrightarrow \End_F(E)$ such that $\Psi(e)$ is invertible for all $e \neq 0$. Identifying $E$ with $F^m$ as $F$-vector spaces, we may thus take $W = \Psi(E)$. \end{proof} \subsection{Groups associated to spaces of alternating forms} \label{ss:HeisenbergGrps} Let $V$ be a finite-dimensional vector space over a field $F$. Let $K$ be a subspace of $\mathcal{A}(V)$ and let $\omega_K$ denote the induced map $V\times V\ra K^*$; see (\ref{eq:omega}). Choose a bilinear map $\beta:V\times V\ra K^*$ such that \begin{equation}\label{eq:decomposition} \omega_K(v,w)=\beta(v,w)-\beta(w,v). \end{equation} To see that this can always be done, note that if $\{e_i\}$ is a basis of $V$, we can define $\beta$ by \[ \beta(e_i, e_j) = \begin{cases} \text{$\omega_K(e_i,e_j)$, if $i>j$ and} \\ \text{$0$, otherwise.} \end{cases} \] We also remark that $\beta$ is uniquely determined by $\omega_K$, up to adding a symmetric bilinear form $V \times V \to K^*$. \begin{df} \label{def.group} Let $H=H(V,K,\beta)$ denote the group whose underlying set is $V\times K^*$ and whose multiplication is given by \begin{equation} (v,t) \cdot(v',t')=(v+v',t+t'+\beta(v,v')). \label{eq:HeisGroup} \end{equation} If $K$ is a symplectic subspace, we will refer to $H$ as a {\em generalized Heisenberg group}. \end{df} \begin{ex} \label{ex:Heisenberg} Suppose $\omega$ is a nondegenerate alternating bilinear form on $V=F\oplus F$, where $F$ is a field of characteristic not equal to $2$. Let $K$ be the span of $\omega$ in $\mathcal{A}(V)$. Then $H(V,K, \frac{1}{2}\omega)$ is isomorphic to the group of unipotent upper triangular $3\times 3$ matrices over $F$. This group is known as the Heisenberg group. 
\end{ex} \begin{rmk} \label{rem.centre}It is easy to see that~\eqref{eq:HeisGroup} is indeed a group law with the inverse given by $(v, t)^{-1} = (-v, - t + \beta(v, v))$ and the commutator given by \begin{equation} \label{e.commutator} [(v_1, t_1), (v_2, t_2)] = (0, \omega_K(v_1, v_2)) \, . \end{equation} As $\omega_K$ is surjective, we see that $[H, H] = K^*$. Moreover,~\eqref{e.commutator} also shows that $K^* \subset Z(H)$, and that equality holds unless the intersection $\displaystyle \cap_{k \in K} \ker(k)$ is nontrivial. In particular, $Z(H) = K^*$ if $K$ contains a symplectic form. \end{rmk} \begin{rmk} \label{rem.special} A non-abelian finite $p$-group $S$ is called {\em special} if $Z(S)=[S,S]$ and $S/[S,S]$ is elementary abelian; see~\cite[\S 2.3]{hall}. Suppose $K$ is a subspace of $\mathcal{A}(V)$ such that $\displaystyle \cap_{k \in K} \ker(k)$ is trivial. Then over the finite field $\F_p$, the groups $H(V,K,\beta)$ are examples of non-abelian special $p$-groups. We are grateful to the referee for pointing this out. \end{rmk} \begin{rmk}\label{r:uniqueness} If $\beta$ and $\beta'$ both satisfy~\eqref{eq:decomposition} then $H(V, K, \beta)$ may not be isomorphic to $H(V, K, \beta')$. For example, let $V$ be a 2-dimensional vector space over $F=\F_2$, $K$ be the one-dimensional (symplectic) subspace generated by $\begin{bmatrix} 0 & 1\\ 1 & 0\\ \end{bmatrix}$, and $\beta$, $\beta'$ be bilinear forms on $V$ defined by $\begin{bmatrix} 1 & 1\\ 0 & 1\\ \end{bmatrix}$ and $\begin{bmatrix} 0 & 1\\ 0 & 0\\ \end{bmatrix}$, respectively. Then $\beta$ and $\beta'$ both satisfy \eqref{eq:decomposition}, but $H(V,K,\beta)$ is isomorphic to the quaternion group while $H(V,K,\beta')$ is isomorphic to the dihedral group of order $8$. On the other hand, it is easy to see that $H(V,K,\beta)$ and $H(V,K,\beta')$ are always isoclinic. 
(Two groups $S$ and $T$ are isoclinic if there are isomorphisms $f: S/Z(S)\rightarrow T/Z(T)$ and $g:[S,S]\rightarrow [T,T]$ such that if $a, b \in S$ and $a', b' \in T$ with $f(aZ(S))=a'Z(T)$ and $f(bZ(S))=b'Z(T)$, then we have $g([a,b])=[a', b']$; see \cite{Ha40}.) \end{rmk} \subsection{Representations} \label{sect.rep} Let $p$ be an arbitrary prime and let $F=\F_p$ be the finite field with $p$ elements. Fix, once and for all, a homomorphism $\tau : (\F_p, +) \inj \C^{*}$. Let $W$ be a vector space over $F$. Using $\tau$, we identify the algebraic dual $W^*=\Hom(W,F)$ with the Pontryagin dual $\Hom(W,\C^{*})$. It is clear that a bilinear alternating map $W\times W\ra \F_p$ is non-degenerate if and only if the composition $W\times W\ra \F_p\rar{\tau} \C^\times$ is non-degenerate. Now let $V$ be a vector space over $F$, $K$ a subspace of $\mathcal{A}(V)$, and $\omega=\omega_K$ the associated map. Choose $\beta$ satisfying (\ref{eq:decomposition}) and let $G = H(V, K, \beta) = V \times K^*$. Recall that $K^*$ is in the centre of $G$ (Remark \ref{rem.centre}); in particular, it acts via a character on every irreducible representation of $G$. \begin{lem} \label{l:uniqueird} Let $\rho$ be an irreducible representation of $G$ such that $K^{*}$ acts by $\psi$. Assume $\psi \circ \omega: V\times V \ra \C^{\times}$ is non-degenerate. \begin{enumerate} \item[(a)]If $g \in G$, $g \notin K^{*}$, then $\Tr(\rho(g))=0$. \item[(b)]$\dim(\rho)=\sqrt{|V|}$. \item[(c)]$\rho$ is uniquely determined (up to isomorphism) by $\psi$. \end{enumerate} \end{lem} \begin{proof} (a) Let $g \in G\backslash K^*$. Since $\psi \circ \omega$ is non-degenerate, there exists $h \in G$ such that $\psi \circ \omega(gK^{*},hK^{*}) \neq 1$. Observe that $\rho([g,h]) = \psi([g,h])\Id$, and that $\rho(h^{-1}gh) = \rho(g)\rho([g,h])$. Taking the trace of both sides, we have $\Tr(\rho(g)) = \psi([g,h])\Tr(\rho(g))$. Since $\psi([g,h]) \neq 1$ we must have $\Tr(\rho(g))=0$. 
\smallskip (b) Since $\rho$ is irreducible, and the trace of $\rho$ vanishes outside of $K^{*}$, we have: \begin{eqnarray*} 1 &=& \frac{1}{|G|}\sum_{g \in G} \Tr(\rho(g)) \overline{\Tr(\rho(g))} \\ &=&\frac{1}{|G|}\sum_{g \in K^{*}}\Tr(\rho(g))\overline{\Tr(\rho(g))} \\ &=& \frac{1}{|G|}\dim(\rho)^{2}\sum_{g \in K^{*}}\Tr(\psi(g)) \overline{\Tr(\psi(g))}\\ &=&\dim(\rho)^{2}\frac{|K^{*}|}{|G|}. \end{eqnarray*} Thus $\dim \rho=\sqrt{|G|/|K^{*}|} = \sqrt{|V|}$. \smallskip (c) We have completely described the character of $\rho$, and it follows that $\rho$ is uniquely determined by $\psi$. Indeed, \[ \Tr(\rho(g)) = \begin{cases} \sqrt{|V|} \cdot \psi(g), & \text{if $g \in K^{*}$ and} \\ 0 & \text{otherwise.}\\ \end{cases} \] \end{proof} In view of Remark \ref{r:symplectic}, the following proposition is a direct consequence of the above lemma. \begin{prop} \label{p:repHeisenberg} The irreducible representations of a generalized Heisenberg group $H=H(V,K,\beta)$ are exhausted by the following list: \begin{enumerate} \item[(i)] $|V|$ one-dimensional representations, one for every character of $V$. \item[(ii)] $|K|-1$ representations of dimension $\sqrt{|V|}$, one for every nontrivial character $\psi:K^*\ra \C^\times $. \end{enumerate} \end{prop} The next corollary is also immediate upon observing that the centre of a generalized Heisenberg group $H=H(V,K,\beta)$ equals $K^{*}$; see Remark~\ref{rem.centre}. \begin{cor} \label{c:RepDimGenHeis} The representation dimension of a generalized Heisenberg group $H=H(V,K,\beta)$ equals $\dim(K)\sqrt{|V|}$. \end{cor} If $G$ is a finite Heisenberg group in the usual sense (as in Example~\ref{ex:Heisenberg}) then for each nontrivial character $\chi$ of $Z(G)$ there is a unique irreducible representation $\psi$ of $G$ whose central character is $\chi$; cf.~\cite[\S1.1]{GH07}. This is a finite group variant of the celebrated Stone-von Neumann Theorem. 
For a detailed discussion of the history and the various forms of the Stone-von Neumann theorem we refer the reader to~\cite{rosenberg}. We conclude this section with another immediate corollary of Proposition~\ref{p:repHeisenberg} which tells us that over the field $\mathbb{F}_p$ every generalized Heisenberg group has the Stone-von Neumann property. This corollary will not be needed in the sequel. \begin{cor} \label{cor.stone} Two irreducible representations of a generalized Heisenberg group with the same nontrivial central character are isomorphic. \end{cor} Corollary~\ref{cor.stone} is the reason we chose to use the term ``generalized Heisenberg group'' in reference to the groups $H(V, K, \beta)$, where $K$ is a symplectic subspace. Special $p$-groups (Remark~\ref{rem.special}) which are not generalized Heisenberg groups may not have the Stone-von Neumann property; see Remark \ref{r:SpecialNotHeisenberg}. \section{Proof of Theorem \ref{main}} \label{s:proof} The case where $n \le 2$ is trivial; clearly $\rdim(G) = \rank(G)$ if $G$ is abelian. We will thus assume that $n \ge 3$. In the non-exceptional cases of the theorem, in view of the inequality~\eqref{e.inequality}, it suffices to construct a group $G$ of order $p^n$ with $\rdim(G) = f_p(n)$. Here $f_p(n)$ is the function defined just before the statement of Theorem~\ref{main}. If $(p, n) = (2, 3)$ or $(2, 4)$, we take $G$ to be the elementary abelian group $(\bZ/{2\bZ})^3$ or $(\bZ/{2\bZ})^4$, yielding the desired representation dimensions of $3$ and $4$, respectively. For all other non-exceptional pairs $(p, n)$, we take $G$ to be a generalized Heisenberg group as described in the table below. Here $H(V, K)$ stands for $H(V, K, \beta)$, for some $\beta$ as in~\eqref{eq:decomposition}. In each instance, the existence of a symplectic subspace $K$ of suitable dimension is guaranteed by Lemma \ref{l:Existence} and the value of $\rdim(H(V, K))$ is given by Corollary~\ref{c:RepDimGenHeis}. 
\begin{center} \vspace{0.1cm} \begin{tabular}{|c| c| c| c| c|} \hline $n$ & $p$ & $\dim(V)$ & $\dim(K)$ & $\rdim(H(V,K))$\\ \hline even, $\ge 6$ & arbitrary & $n-2$ & 2 & $2p^{(n-2)/2}$\\ \hline odd, $\ge 3$ & odd & $n-1$ & 1 & $p^{(n-1)/2}$\\ \hline odd, $\ge 9$ & $2$ & $n-3$ & 3 & $3p^{(n-3)/2}$\\ \hline \end{tabular} \end{center} \smallskip This settles the generic case of Theorem~\ref{main}. We now turn our attention to the exceptional cases. We will need the following upper bound on $\rdim(G)$, strengthening~\eqref{e.inequality}. Let $\Omega_{1}(Z(G))$ be the subgroup of elements $g \in Z(G)$ such that $g^p = 1$. \begin{lem} \label{lem.inequality2} Let $G$ be a $p$-group and $r = \rank(Z(G)) = \rank(\Omega_{1}(Z(G)))$. \smallskip (a) Let $\rho_1$ be an irreducible representation of $G$ such that $\Ker(\rho_1)$ does not contain $\Omega_{1}(Z(G))$. Then there are irreducible representations $\rho_2, \dots, \rho_r$ of $G$ such that $\rho_1 \oplus \dots \oplus \rho_r$ is faithful. In particular, \[ \rdim(G) \le \dim(\rho_1) + (r-1) \sqrt{[G:Z(G)]} \, . \] (b) If $\Omega_{1}(Z(G))$ is not contained in $[G, G]$, then \[ \rdim(G) \le 1 + (r-1) \sqrt{[G:Z(G)]} \, . \] \end{lem} The lemma can be deduced from~\cite[Remark 4.7]{km} or~\cite[Theorem 1.2]{mr}; for the sake of completeness we give a self-contained proof. \begin{proof} (a) Let $\chi_1$ be the restriction to $\Omega_{1}(Z(G))$ of the central character of $\rho_1$. By our assumption $\chi_1$ is nontrivial. Complete $\chi_1$ to a basis $\chi_1, \chi_2, \dots, \chi_r$ of the $r$-dimensional $\F_p$-vector space $\Omega_{1}(Z(G))^*$ and choose an irreducible representation $\rho_i$ such that $\Omega_1(Z(G))$ acts by $\chi_i$. (The representation $\rho_i$ can be taken to be any irreducible component of the induced representation $\Ind_{\Omega_{1}(Z(G))}^G(\chi_i)$.) The restriction of $\rho := \rho_1 \oplus \dots \oplus \rho_r$ to $\Omega_{1}(Z(G))$ is faithful. 
Hence, $\rho$ is a faithful representation of $G$. As we mentioned in the introduction, $\dim(\rho_i) \le \sqrt{[G: Z(G)]}$ for every $i \ge 2$, and part (a) follows. \smallskip (b) By our assumption there exists a one-dimensional representation $\rho_1$ of $G$ whose restriction to $\Omega_{1}(Z(G))$ is nontrivial. Now apply part (a). \end{proof} We are now ready to prove Theorem~\ref{main} in the three exceptional cases. \subsection{Exceptional case 1: $p$ is odd and $n=4$} \begin{lem} \label{lem.p^4} Let $p$ be an odd prime and $G$ be a group of order $p^4$. \smallskip (a) Then $\rdim(G)\leq p+1$. \smallskip (b) Suppose $Z(G) \simeq (\bZ/{p\bZ})^2$ and $G/Z(G) \simeq (\bZ/{p\bZ})^2$. Then $\rdim(G) = p + 1$. \end{lem} \begin{proof} (a) We argue by contradiction. Assume there exists a group $G$ of order $p^4$ such that $\rdim(G) \ge p+2$. If $|Z(G)| \ge p^3$ or $G/Z(G)$ is cyclic then $G$ is abelian and $\rdim(G) = \rank(G) \le 4 \le p+1$, a contradiction. If $Z(G)$ is cyclic then $\rdim(G) \le p$ by~\eqref{e.inequality}, again a contradiction. Thus $Z(G) \simeq G/Z(G) \simeq (\bZ/{p\bZ})^2$. This reduces part (a) to part (b). \smallskip (b) Here $\Omega_{1}(Z(G)) = Z(G)$ has rank $2$. Hence, a faithful representation $\rho$ of $G$ of minimal dimension is the sum of two irreducibles $\rho_1 \oplus \rho_2$, as in~\eqref{e.decomp}, each of dimension $1$ or $p$. Clearly $\dim(\rho_1) = \dim(\rho_2) = 1$ is not possible, since in this case $G$ would be abelian, contradicting $[G:Z(G)] = p^2$. It thus remains to show that $\rdim(G) \le p + 1$. Since $G/Z(G)$ is abelian, $[G,G] \subset Z(G)$. Hence, by Lemma~\ref{lem.inequality2}(b) we only need to establish that $[G, G] \subsetneq Z(G)$. To show that $[G, G] \subsetneq Z(G)$, note that the commutator map \begin{eqnarray*} \Psi: G/Z(G) \times G/Z(G) &\rightarrow& [G,G]\\ (gZ(G), g'Z(G)) &\mapsto& [g,g'] \end{eqnarray*} can be thought of as an alternating bilinear map $\F_p^{2} \times \F_p^{2} \rightarrow \F_p^{2}$. 
Viewed in this way, $\Psi$ can be written as $\Psi(v,v')=(w_1(v,v'), w_2(v,v'))$ for alternating maps $w_1$ and $w_2$ from $(\F_p)^{2}$ to $\F_p$. Since the space of alternating maps is a one-dimensional vector space over $\F_p$, $w_1$ and $w_2$ are scalar multiples of each other. Hence, the image of $\Psi$ is a cyclic group of order $p$, and $[G,G] \subsetneq Z(G)$, as claimed. \end{proof} To finish the proof of Theorem~\ref{main} in this case, note that $G = \bZ/{p\bZ} \times G_0$, where $G_0$ is a non-abelian group of order $p^3$, satisfies the conditions of Lemma~\ref{lem.p^4}(b). Thus the maximal representation dimension of a group of order $p^{4}$ is $p+1$, for any odd prime $p$. \subsection{Exceptional case 2: $p=2$ and $n=5$} \begin{lem} Let $G$ be a group of order $32$. Then $\rdim(G)\leq 5$. \end{lem} \begin{proof} We argue by contradiction. Assume there exists a group of order $32$ and representation dimension $\ge 6$. Let $r = \rank(Z(G))$. Then $1 \le r \le 5$ and \eqref{e.inequality} shows that $\rdim(G) \le 5$ for every $r \neq 3$. Thus we may assume $r = 3$. If $|Z(G)| \ge 16$ or $G/Z(G)$ is cyclic then $G$ is abelian, and $\rdim(G) = \rank(G) \le 5$. We conclude that $Z(G) \simeq (\bZ/{2\bZ})^3$ and $G/Z(G) \simeq (\bZ/{2\bZ})^2$. Applying the same argument as in the proof of Lemma~\ref{lem.p^4}(b), we see that $[G,G] \subsetneq Z(G)$, and hence $\rdim (G) \leq 5$ by Lemma~\ref{lem.inequality2}(b), a contradiction. \end{proof} To finish the proof of Theorem~\ref{main} in this case, note that the elementary abelian group of order $2^{5}$ has representation dimension $5$. Thus the maximal representation dimension of a group of order $2^{5}$ is $5$. \subsection{Exceptional case 3: $p=2$ and $n=7$} \begin{lem} \label{l:rdimleqten}If $|G| = 128$ then $\rdim(G) \le 10$. \end{lem} \begin{proof} Again, we argue by contradiction. Assume there exists a group $G$ of order $128$ and representation dimension $\ge 11$. Let $r$ be the rank of $Z(G)$. 
By~\eqref{e.inequality}, $r = 3$; otherwise we would have $\rdim(G) \le 10$. As we explained in the introduction, this implies that a faithful representation $\rho$ of $G$ of minimal dimension is the direct sum of three irreducibles $\rho_1$, $\rho_2$ and $\rho_3$, each of dimension $\le \sqrt{2^7/|Z(G)|}$. If $|Z(G)| > 8$, then $\dim(\rho_i) \le 2$ and $\rdim(G) = \dim(\rho_1) + \dim(\rho_2) + \dim(\rho_3) \le 6$, a contradiction. Therefore, $Z(G) \cong (\bZ/{2\bZ})^{3}$ and $\dim(\rho_1) = \dim(\rho_2) = \dim(\rho_3) = 4$. By Lemma~\ref{lem.inequality2}(a) this implies that the kernel of every irreducible representation of $G$ of dimension $1$ or $2$ must contain $Z(G)$. In other words, any such representation factors through the group $G/Z(G)$ of order $16$. Consequently, if $m_i$ is the number of irreducible representations of $G$ of dimension $i$ then $m_1 + 4 m_2 = 16$. We can now appeal to~\cite[Tables I and II]{JNO90}, to show that no group of order $2^{7}$ has these properties. From Table I we can determine which groups $G$ (up to isoclinism, cf.~Remark~\ref{r:uniqueness}) have $|Z(G)|=8$ and using Table II we can determine $m_1$ and $m_2$ for these groups. There is no group $G$ with $|Z(G)|=8$ and $m_1 + 4m_2 = 16$. \end{proof} We will now construct an example of a group $G$ of order $2^{7}$ with $\rdim(G)=10$. Let $V = (\F_2)^4$ and let $K$ be the 3-dimensional subspace of $A(V)$ generated by the following three elements: \[ \begin{bmatrix} 0 & 0 & 0 & 1\\ 0 & 0 & 1 & 0\\ 0 & 1 & 0 & 0\\ 1 & 0 & 0 & 0\\ \end{bmatrix} , \quad \begin{bmatrix} 0 & 0 & 1 & 0\\ 0 & 0 & 1 & 1\\ 1 & 1 & 0 & 0\\ 0 & 1 & 0 & 0\\ \end{bmatrix} , \quad \begin{bmatrix} 0 & 0 & 1 & 1\\ 0 & 0 & 0 & 1\\ 1 & 0 & 0 & 1\\ 1 & 1 & 1 & 0\\ \end{bmatrix} \, . \] Let $G:=H(V,K, \beta)=V \times K^{*}$ for some $\beta$ as in~\eqref{eq:decomposition}. Note that $K$ contains only one non-zero degenerate element (the sum of the three generators). 
In other words, there is only one non-trivial character $\chi$ of $K^{*}$ such that $\chi \circ \omega: V \times V \rightarrow \C^{\times}$ is degenerate. By Remark~\ref{rem.centre}, \begin{equation}\label{eq:HSpecial} [G,G] = Z(G) = K^{*}. \end{equation} Let $\rho$ be a faithful representation of $G$ of minimal dimension. As we explained in the Introduction, $\rho$ is the sum of $\rank(Z(G)) = 3$ irreducibles. Denote them by $\rho_1$, $\rho_2$, and $\rho_3$, and their central characters by $\chi_1$, $\chi_2$ and $\chi_3$, respectively. Since $\rho$ is faithful, $\chi_1$, $\chi_2$ and $\chi_3$ form an $\mathbb F_2$-basis of $\Omega_{1}(Z(G))^{*} \simeq (\bZ/{2\bZ})^3$. By Lemma \ref{l:uniqueird}, for each nontrivial character $\chi$ of $K^{*}$ except one, there is a unique irreducible representation $\psi$ of $G$ such that $\chi$ is the central character of $\psi$, and $\dim \psi =4$. Thus at least $2$ of the irreducible components of $\rho$, say $\rho_1$ and $\rho_2$, must have dimension $4$. By Lemma \ref{l:rdimleqten}, $\dim(\rho) \le 10$, i.e., $\dim(\rho_3) \le 2$. But every one-dimensional representation of $G$ has trivial central character. We conclude that $\dim(\rho_3) = 2$ and consequently $\rdim(G)= \dim(\rho) = 4+4+2=10$. Thus the maximal representation dimension of a group of order $2^{7}$ is $10$. \begin{rmk} \label{r:SpecialNotHeisenberg} The group $G$ constructed above has $16$ one-dimensional representations with trivial central character, $4$ two-dimensional representations with non-trivial degenerate central character, and $6$ four-dimensional representations with pairwise distinct non-degenerate central characters. In view of (\ref{eq:HSpecial}), $G$ is a non-abelian special 2-group which does not enjoy the Stone-von Neumann property (Corollary \ref{cor.stone}). \end{rmk}
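The function $f_p(n)$ from the introduction, and the gap between $f_p(n)$ and the true maxima in the three exceptional cases of Theorem~\ref{main}, can be checked numerically. The following Python sketch is an illustration of ours (not part of the paper): it evaluates the maximum defining $f_p(n)$ by brute force and compares it against the closed-form table.

```python
# Brute-force evaluation of f_p(n) = max_r r * p^floor((n-r)/2),
# checked against the closed-form table from the introduction.

def f(p, n):
    """Maximum of r * p^floor((n-r)/2) over r = 1, ..., n."""
    return max(r * p ** ((n - r) // 2) for r in range(1, n + 1))

def f_table(p, n):
    """Closed form from the table in the introduction."""
    if n % 2 == 0:
        return 2 * p ** ((n - 2) // 2)   # n even, any p
    if p != 2:
        return p ** ((n - 1) // 2)       # n odd, p odd
    if n >= 3:
        return 3 * 2 ** ((n - 3) // 2)   # n odd >= 3, p = 2
    return 1                             # n = 1, p = 2

for p in (2, 3, 5, 7):
    for n in range(1, 13):
        assert f(p, n) == f_table(p, n), (p, n)

# In the exceptional cases of Theorem 1, f_p(n) strictly exceeds the
# true maximal representation dimension (5, 10, and p + 1, respectively):
assert f(2, 5) == 6    # actual maximum is 5
assert f(2, 7) == 12   # actual maximum is 10
assert f(3, 4) == 6    # actual maximum is p + 1 = 4
```

The loop confirms that the piecewise table agrees with the defining maximum for small $p$ and $n$; the final assertions record that the bound~\eqref{e.inequality} is not attained in the exceptional cases.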
\section{The \bibifi Case Study}\label{subsec:bififi} As a real-world web application of \lweb, we present \emph{Build it, Break it, Fix it} (\bibifi), a security-oriented programming contest~\cite{Ruef:2016:BBF:2976749.2978382} hosted at \url{https://builditbreakit.org}. The contest consists of three rounds. At the outset, the organizers publish the specification for some software that has particular security goals. During the first round, teams implement software to this specification, aiming for it to be both fast and secure. In the second round, teams find as many breaks as possible in the implementations submitted by other teams. During the final round, teams attempt to fix the identified problems in their submissions. \subsection{\bibifi Labels} \begin{wrapfigure}{r}{0.38\textwidth} \vspace*{-.4in} \begin{mcode} data Principal = PSys | PAdmin | PUser UserId | PTeam TeamId | PJudge JudgeId type BBFLabel = DCLabel Principal \end{mcode} \caption{\bibifi labels.} \label{code:BBFLabel} \end{wrapfigure} \bibifi labels include all entities that operate in the system. The @Principal@ data type, defined in~\cref{code:BBFLabel}, encodes all such entities, including the system itself, the administrator, users, teams, and judges. Each of these entities is treated as a security level. For instance, a policy can specify that data written by the user with id @5@ is protected at the security level of that specific user, so that only he or she can read it. A more flexible policy allows the system administrator to read data written by any user. To encode such policies, we use disjunction category labels (@DCLabel@)~\cite{stefan:dclabels} to create a security lattice out of our @Principal@s. In~\cref{code:BBFLabel} we define @BBFLabel@ as the @DCLabel Principal@ data type that tracks the security level of values as they flow throughout the web application and database. 
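To illustrate the lattice ordering that @DCLabel Principal@ induces, here is a small executable model. This Python sketch is ours, not the paper's Haskell code, and it deliberately simplifies DCLabels to conjunction-only components (real DCLabels also support disjunctions of principals, which is what lets an administrator read each user's data); the principal strings mirror the @Principal@ constructors above.

```python
# Simplified, conjunction-only model of a DC-style label lattice.
# Each component is a set of principals read as a conjunction; full
# DCLabels generalize this to CNF formulas over principals.
from typing import NamedTuple

class Label(NamedTuple):
    secrecy: frozenset    # principals who may read the data
    integrity: frozenset  # principals who vouch for the data

def can_flow_to(l1: Label, l2: Label) -> bool:
    # A flow may only add confidentiality restrictions and drop trust.
    return l1.secrecy <= l2.secrecy and l2.integrity <= l1.integrity

def join(l1: Label, l2: Label) -> Label:
    # Least upper bound: union of reading restrictions, common trust.
    return Label(l1.secrecy | l2.secrecy, l1.integrity & l2.integrity)

user5_secret = Label(frozenset({"PUser 5"}), frozenset())  # readable only by user 5
public = Label(frozenset(), frozenset())                   # readable by anyone

assert not can_flow_to(user5_secret, public)  # secrets cannot reach public sinks
assert can_flow_to(public, user5_secret)      # public data may flow anywhere
assert join(user5_secret, public) == user5_secret
```

The two `can_flow_to` checks capture the essence of the policies described above: user 5's data may not flow to a public channel, while public data may flow to any context.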
\begin{wrapfigure}{r}{0.45\textwidth} \vspace*{-.35in} \begin{mcode} *User* *account Text* *email Text* ^<Const Admin meet Id, Id>^ *admin Bool* ^<bot, Const Admin>^ \end{mcode} \vspace*{-.05in} \caption{Basic \bibifi \texttt{User} table.} \label{code:usertable} \vspace*{-.1in} \end{wrapfigure} \begin{figure}[t] \centering \begin{minipage}{.75\textwidth} \begin{mcode} *UserInfo* *user UserId* *school Text* ^<Const Admin meet Field user, Field user>^ *age Int* ^<Const Admin meet Field user, Field user>^ *experience Int* ^<Const Admin meet Field user, Field user>^ \end{mcode} \end{minipage} \caption{Table \texttt{UserInfo} contains additional \bibifi user information.} \label{code:userinfotable} \end{figure} \subsection{Users and Authentication}\label{subsubsec:users} Users' personal information is stored in the \bibifi database. \Cref{code:usertable} shows the @User@ table with the basics: a user's account id, email address, and whether they have administrator privileges. The label for the @email@ field refers to @Id@, which is shorthand for the key of the present table. The label says that a user can read and write their email, while the administrator can read every user's email. The label for the @admin@ field declares that it may be written by the administrator and read by anyone. Additional private information is stored in the @UserInfo@ table, shown in \Cref{code:userinfotable}, including a user's school, age, and professional experience. The @user@ field of this table is a foreign key to the @User@ table, as indicated by its type @UserId@ (see \cref{sec:yesod}). Each of the remaining fields is protected by this key: users can read and write their own information, while administrators can read any user's information. The current label is set by the code trusted to perform authentication. 
If a user is not logged in, the current label is set to @<bot,top>@: the confidentiality label is the upper bound on data read so far (i.e., none, so @bot@), and the integrity label is the level of least trust (i.e., @top@) for writing data. After authenticating, most users will have the label @<bot, PUser userId>@, thus lowering the integrity part (and increasing the level of trust) to the user itself. Users who are also administrators will have their current label lowered further to @<bot, PUser userId meet PAdmin>@. This is shown in the following code snippet. It determines the logged-in user via @requireAuth@, and then adds administrator privileges if the user has them (per @userAdmin@). \begin{mcode} (Entity userId user) <- requireAuth let userLabel = dcIntegritySingleton (PrincipalUser userId) lowerLabelTCB $\$$ if userAdmin user then userLabel meet dcIntegritySingleton PrincipalAdmin else userLabel \end{mcode} The clearance is also set using trusted functions during authentication. For example, for an administrator it would be @<PUser userId join PAdmin,top>@. \subsection{Opening the Contest} \begin{figure}[t] \begin{tabular}{lclcl} \begin{mcode} *Announcement* ^<bot, Const Admin>^ *title Text* ^<bot, Const Admin>^ *content Text* ^<bot, Const Admin>^ \end{mcode} &\quad \quad\quad& \begin{mcode} *Team* *name Text* *contest ContestId* \end{mcode} &\quad\quad\quad& \begin{mcode} *TeamMember* *team TeamId* *user UserId* \end{mcode} \end{tabular} \caption{Definition of \texttt{Announcement}, \texttt{Team}, and \texttt{TeamMember} tables and their policies.} \label{code:teamtable} \label{code:announcementtable} \end{figure} To start a contest, administrators write announcements that include information like instructions and problem specifications. It is important that only administrators can post these announcements. Announcements are stored in the database, and their (simplified) table definition is shown in \cref{code:announcementtable}. 
The @Announcement@ table has two @Text@ fields corresponding to an announcement's title and content. Only administrators can author announcements. An earlier version of \bibifi relied on manual access control checks rather than monadic \lmonad enforcement of security. The old version had a security bug: it failed to check that the current user was an administrator when posting a new announcement. Here is a snippet of the old code. \begin{code} postAddAnnouncementR :: Handler Html postAddAnnouncementR = do ((res, widget), enctype) <- runFormPost postForm case res of ... FormSuccess (FormData title markdown) -> do runDB (insert (Announcement title markdown)) redirect AdminAnnouncementsR \end{code} This function parses POST data and inserts a new announcement. The user is never authenticated, so anyone can post new announcements and potentially deface the website. In the IFC version of the website, the database insertion fails for unauthorized or unauthenticated users, since the integrity part of the current label is not sufficiently trusted (the label does not flow into @PAdmin@). \subsection{Teams and Declassification}\label{subsec:bibifi:declassification} To participate in a contest, a user must join a team. The teams and their members are stored in the eponymous tables of~\cref{code:teamtable}. Teams serve as another principal in the \bibifi system, and \bibifi defines a TCB function that appropriately authenticates team members similarly to users (\cref{subsubsec:users}), authorizing a team member to read and write data labeled with their team. \bibifi uses declassification (as discussed in~\cref{subsec:impl:limitations}) to allow team members to send email messages to their team. The policy on the @email@ field of the @User@ table states that only the user or an administrator can read the email address, so \bibifi cannot give a user's email address to a teammate. Instead, the function @sendEmailToTeam@ below sends the email on the teammate's behalf using declassification.
\begin{code} sendEmailToTeam :: TeamId -> Email -> LHandler () sendEmailToTeam tId email = do protectedEmails <- runDB [lsql| pselect User.email from User inner jjoin TeamMember on TeamMember.user == User.id where TeamMember.team == #{tId} |] mapM_ (\protectedEmail -> do address <- declassifyTCB protectedEmail sendEmail address email ) protectedEmails \end{code} The function @sendEmailToTeam@'s parameters are the team identifier and an email return address. It queries the database for the (labeled) email addresses of the team's members, using @lsql@ (see~\cref{subsec:lweb} and~\cref{subsec:impl:ext}). The @sendEmailToTeam@ function maps over each address, declassifying it via @declassifyTCB@, so that the message can be sent to the address. The @declassifyTCB@ function takes a labeled value and extracts its raw value, \emph{ignoring label restrictions}. This is an unsafe operation that breaks noninterference, so the programmer must be careful with its use. Here, for example, the function is careful not to reveal the email address to the sender but only to use it to send the email. \subsection{Breaks and Advanced Queries} During the second round of the \bibifi contest, teams submit breaks, \ie test cases that attack another team's submission. After a break is pushed to a registered git repository, \bibifi's backend infrastructure uploads it to a virtual machine and tests whether the attack succeeds. Results are stored in the @BreakSubmission@ table of~\cref{code:breaktable}, which has fields for the attacking team, the target team, and the (boolean) result of the attack. The integrity label for the result field is @PSys@ since only the backend system can grade an attack. The confidentiality label is @PAdmin meet PTeam attackerId meet PTeam targetId@ since administrators, the attacker team, and the target team can see the result of an attack.
\begin{figure} \begin{mcode} *BreakSubmission* *attacker TeamId* ^<bot, Const Sys>^ *target TeamId* ^<bot, Const Sys>^ *result Bool* ^<Const Admin meet Field attacker meet Field target, Const Sys>^ \end{mcode} \caption{Definition of \texttt{BreakSubmission} table and its policy.} \label{code:breaktable} \end{figure} \bibifi has an administration page that lists every break submission alongside the team that was attacked. This page's contents are retrieved via the following inner join. \begin{mcode} runDB $\$$ [lsql| select BreakSubmission.$\star$, Team.name from BreakSubmission inner jjoin Team on BreakSubmission.target == Team.id where Team.contest == #{contestId} order by BreakSubmission.id desc |] \end{mcode} This query performs a join over the @BreakSubmission@ and @Team@ tables, matching rows where the break's target team equals the team's identifier. In addition, it filters rows to the specified contest identifier and orders results by break submission identifier, descending. \section{Case Study}\label{sec:case-studies} \input{bibifi} \section{Conclusion}\label{sec:conclusion} We presented \lweb, an information-flow security enforcement mechanism for Haskell web applications. \lweb combines \yesod with \lmonad, a generalization of the \lio library. \lweb performs label-based policy checks and protects database values with dynamic labels, which can depend on the values stored in the database. We formalized \lweb (as \lwebcalc) and used Liquid Haskell to prove termination-insensitive noninterference. Our proof uncovered two noninterference violations in the implementation. We used \lweb to build the web site of the \emph{Build it, Break it, Fix it} security-oriented programming contest, and found it could support rich policies and queries.
Compared to manually checking security policies, \lweb imposes a modest runtime overhead between \overheadnumbermin and \overheadnumber but reduces the trusted code base to \tcbnumberbibifi of the application code, and \tcbnumber overall (when counting \lweb too). \section{Experimental Evaluation}\label{sec:experiments} To evaluate \lweb we compare the \bibifi implementation that uses \lmonad with our initial \bibifi implementation that manually checked security policies via access control. We call this initial version the \textit{vanilla implementation}. Transitioning from the vanilla to the \lweb implementation reduced the trusted computing base (TCB) but imposed a modest runtime overhead. \subsection{Trusted Computing Base of \bibifi} The implementation of the \bibifi application is currently 11,529 lines of Haskell code. 80 of these lines invoke trusted functions (for authentication or declassification, see~\cref{subsec:impl:limitations}). \lweb's library is 3,009 lines of trusted code. The vanilla implementation is several years old, with 7,367 LOC; there is no IFC mechanism, so the whole codebase is trusted. Switching from the vanilla to the \lweb implementation only added 151 LOC. The size of the TCB is now \tcbnumber of the codebase; considering only the code of the \bibifi web application (and not \lweb too), \tcbnumberbibifi of the code is trusted. \subsection{Running Time Overhead} \begin{table} \caption{ Latency comparison between the \textbf{Vanilla} and \textbf{\lweb} implementations of the \bibifi application. \EDIT{3}{The mean, standard deviation, and tail latency in milliseconds over 1,000 trials are presented.
In addition, the response size in kilobytes and the overhead of LWeb are shown.} } \begin{center} \resizebox{\linewidth}{!}{ \begin{tabular}{| l | c | r | r | r | r | r | r | c | c |} \hline \textbf{Handler} & \textbf{Verb} & \multicolumn{3}{|c|}{\textbf{Vanilla Latency}} & \multicolumn{3}{|c|}{\textbf{LWeb Latency}} & \textbf{Size (kB)} & \textbf{Overhead} \\ \hline & & \textbf{Mean (ms)} & \textbf{SD (ms)} & \textbf{Tail (ms)} & \textbf{Mean (ms)} & \textbf{SD (ms)} & \textbf{Tail (ms)} & & \\ \cline{3-8} /announcements & GET & 4.646 & 1.215 & 16 & 5.529 & 1.367 & 20 & 18.639 & 19.01\% \\ /announcement/update & POST & 9.810 & 2.600 & 54 & 11.395 & 3.054 & 52 & 0.706 & 16.16\% \\ /profile & GET & 2.116 & 0.512 & 6 & 2.167 & 0.550 & 6 & 7.595 & 2.41\% \\ /buildsubmissions & GET & 6.364 & 1.251 & 17 & 7.441 & 1.706 & 22 & 14.434 & 16.92\% \\ /buildsubmission & GET & 28.633 & 2.772 & 52 & 30.570 & 3.477 & 75 & 9.231 & 6.76\% \\ /breaksubmissions & GET & 41.758 & 7.826 & 81 & 49.218 & 11.679 & 90 & 60.044 & 17.86\% \\ /breaksubmission & GET & 4.070 & 0.538 & 9 & 4.923 & 0.509 & 9 & 6.116 & 20.96\% \\ \hline \end{tabular} } \end{center} \label{table:time:bibifi} \end{table} We measured the query latency, \ie the response time (in milliseconds) of HTTP requests, for both the \lweb and the vanilla implementation. \EDIT{3}{ Measurements were performed over} @localhost@ \EDITCOLOR{and we ran 100 requests to warm up. We present the mean, standard deviation, and tail latency over 1,000 trials, as well as the response size (in kilobytes) and the overhead of \lweb over the vanilla implementation. \Cref{table:time:bibifi} summarizes this comparison. The server used for benchmarking runs Ubuntu 16.04 with two Intel(R) Xeon(R) E5-2630 2.60GHz CPUs and 64GB of RAM. PostgreSQL 9.5.13 is run locally as the database backend. We used ApacheBench to perform the measurements with a concurrency level of one. 
Here is a sample invocation of} @ab@: \begin{mcode} ab -g profile_lweb.gp -n 1000 -T "application/x-www-form-urlencoded; charset=UTF-8" -c 1 -C _SESSION=... http://127.0.0.1:4000/profile \end{mcode} Most of the requests are GET requests that display contest announcements, retrieve a user's profile with personal information, get the list of a team's submissions, and view the results of a specific submission. One POST request is measured that updates the contents of an announcement. Cookies and CSRF tokens were explicitly defined so that a user was logged into the site, and the user had sufficient permissions for all of the pages. \begin{figure} \centering \begin{subfigure}{0.475\textwidth} \includegraphics[width=\textwidth]{throughput_16.png} \caption{Concurrency level of 16.} \end{subfigure} \hfill \begin{subfigure}{0.475\textwidth} \includegraphics[width=\textwidth]{throughput_32.png} \caption{Concurrency level of 32.} \end{subfigure} \caption{\EDIT{3}{Throughput (req/s) of the \textbf{Vanilla} and \textbf{\lweb} versions of the \bibifi application.}} \label{fig:throughput:bibifi} \end{figure} \EDIT{3}{To evaluate \lweb's impact on the throughput of web applications, we conduct similar measurements except we rerun} @ab@ \EDITCOLOR{with concurrency levels of 16 and 32. The rest of the experimental setup matches that of the latency benchmark, including number of requests, hardware, and handlers. \Cref{fig:throughput:bibifi} shows the number of requests per second for each version of the \bibifi web application across the various handlers. } \EDIT{3}{Most of the handlers show modest overhead between the vanilla and \lweb versions of the website. We measure \lweb's overhead to range from \overheadnumbermin to \overheadnumber, which comes from the IFC checks that \lweb makes for every database query and the state monad transformer that tracks the current label and clearance label. In practice, this overhead results in a few milliseconds of delay in response times.
In most situations, this is a reasonable price to pay in order to reduce the size of the TCB and increase confidence that the web application properly enforces the user-defined security policies. } \section{Label-based Security for Database Operations}\label{sec:formal-db} In this section we extend \liocalc with support for databases with label-based policies. We call the extended calculus \lwebcalc. In~\S~\ref{subsec:database:definitions}, we define a database that stores rows with three values: a key, a first field with a static label, and a second field whose label is a function of the first field. This simplification of the full generality of \lweb's implementation (which permits any field to be a label) captures the key idea that fields can serve as labels for other fields in the same row, and fields that act as labels must be labeled as well. In \S~\ref{subsec:database:pure} we define operations to insert, select, delete, and update the database. For each of these operations, in~\S~\ref{subsec:db:monadic} we define a monadic term that respects the database policies. Finally, in \S~\ref{subsec:db:noninterference} we define erasure of the database and prove noninterference. \subsection{Database Definition}\label{subsec:database:definitions} \begin{figure}[t] \begin{tabular}{c} \begin{mcode} type DB l = [(Name, Table l)] type Name = String data Table l = Table {tpolicy :: TPolicy l, tRows :: [Row l]} data Row l = Row {rKey :: Term l, rVal1 :: DBTerm l, rVal2 :: DBTerm l } type DBTerm l = {t:Term l | isDBValue t } data TPolicy l = TPolicy { tpTableLabel :: l , tpFresh :: Int , tpLabelField1 :: {l1:l | l1 canFlowTo tpTableLabel } , tpLabelField2 :: Term l -> l } \end{mcode} \end{tabular} \caption{Definition of the \lwebcalc database.} \label{fig:database-def} \end{figure} \Cref{fig:database-def} contains Haskell definitions used to express the semantics of database operations in \lwebcalc.
Rather than having concrete syntax (\eg as in Figure~\ref{fig:friendstable}) for database definitions, in our formalization we assume that databases are defined directly in the semantic model. A database @DB l@ maps names (@Name@) to tables (@Table l@). A table consists of a policy (@TPolicy l@) and a list of rows (@[Row l]@). Each row contains three terms: the key and two values. We limit values that can be stored in the database to basic terms such as unit, integers, label values, etc. This restriction is expressed by predicate @isDBValue@. Labeled terms are not permitted---labels of stored data are specified using the table policy. In \Cref{subsec:db:noninterference} we define erasure of the database to replace values with holes, thus @isDBValue@ should be true for holes too, but is false for any other term. \begin{tabular}{lcl} \begin{mcode} isDBValue :: Term l -> Bool isDBValue THole = True isDBValue (TInt _) = True \end{mcode} &\quad\quad & \begin{mcode} isDBValue TUnit = True isDBValue (TLabel _) = True isDBValue _ = False \end{mcode} \end{tabular} We define the refinement type alias @DBTerm@ to be terms refined to satisfy the @isDBValue@ predicate and define rows to contain values of type @DBTerm@. \paragraph{Table policy} The table policy @TPolicy l@ defines the security policy for a table. The field @tpTableLabel@ is the label required to access the length of the table. The field @tpLabelField1@ is the label required to access the first value stored in each row of the table. This label is the same for each row and it is refined to flow into the @tpTableLabel@. The field @tpLabelField2@ defines the label of the second value stored in a row as a function of the first. Finally, the field @tpFresh@ is used to provide a unique term key for each row. The term key is an integer term that is increased at each row insertion. \paragraph{Helper functions} For each field of @TPolicy@, we define a function that given a table accesses its respective policy field. 
\begin{tabular}{lcl} \begin{mcode} labelT t = tpTableLabel (tpolicy t) labelF1 t = tpLabelField1 (tpolicy t) \end{mcode} & \quad & \begin{mcode} labelF2 t v = tpLabelField2 (tpolicy t) v freshKey t = tpFresh (tpolicy t) \end{mcode} \end{tabular} We use the indexing function @db!!n@ to look up the table named @n@ in the database. \begin{mcode} (!!) :: DB l -> Name -> Maybe (Table l) \end{mcode} \subsection{Querying the Database}\label{subsec:database:pure} \paragraph{Predicates} We use predicates to query database rows. In the \lweb implementation, predicates are written in a domain-specific query language, called @lsql@, which the \lweb compiler can analyze. Rather than formalizing that query language in \lwebcalc, we model predicates abstractly using the following datatype: \begin{mcode} data Pred = Pred { pVal :: Bool , pArity :: { i:Int | 0 <= i <= 2 } } \end{mcode} Here, @pVal@ represents the outcome of evaluating the predicate on an arbitrary row, and @pArity@ represents which of the row's fields were examined during evaluation. That is, a @pArity@ value of @0@, @1@, or @2@ denotes whether the predicate depends on (\ie computes over) none, the first, or both fields of a row, respectively.
Then, we define a logical \textit{uninterpreted} function @evalPredicate@ that evaluates the predicate for some argument of type @a@: \begin{mcode} measure evalPredicate :: Pred -> a -> Bool \end{mcode} We define a Haskell (executable) function @evalPredicate@ and use an axiom to connect it with the synonymous logical uninterpreted function~\cite{refinement-reflection}: \begin{mcode} assume evalPredicate :: p:Pred -> x:a -> {v:Bool | v == evalPredicate p x } evalPredicate p x = pVal p \end{mcode} This way, even though the Haskell function @evalPredicate p x@ returns a constant boolean ignoring its argument @x@, the Liquid Haskell model assumes that it behaves as an uninterpreted function that does depend on the @x@ argument (with dependencies assumed by the @pArity@ definition). \paragraph{Primitive queries} It is straightforward to define primitive operators that manipulate the database but do not perform IFC checks. We define operators to insert, delete, select, and update databases. \begin{mcode} (+=) :: db:DB l -> n:Name -> r:Row l -> DB l -- insert (?=) :: db:DB l -> n:Name -> p:Pred -> Term l -- select (-=) :: db:DB l -> n:Name -> p:Pred -> DB l -- delete (:=) :: db:DB l -> n:Name -> p:Pred -> v1:DBTerm l -> v2:DBTerm l -> DB l -- update \end{mcode} \begin{itemize}[leftmargin=14.0mm] \item[\textit{Insert:}] @db += n r@ inserts the row @r@ in the @n@ table in the database and increases @n@'s unique field. \item[\textit{Select:}] @db ?= n p@ selects all the rows of the @n@ table that satisfy the predicate @p@ as a list of labeled terms. \item[\textit{Delete:}] @db -= n p@ deletes all the rows of the @n@ table that satisfy the predicate @p@. \item[\textit{Update:}] @db := n p v1 v2@ updates each row with key @k@ of the @n@ table that satisfies the predicate @p@ with @Row k v1 v2@. \end{itemize} Next we extend the monadic programs of~\S~\ref{sec:formal} with database operations to define monadic query operators that enforce the table and field policies. 
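The four primitive operators admit a small executable model once labels and policy checks are stripped away. The sketch below is illustrative only: it uses named functions in place of the operator syntax (Haskell reserves names beginning with @:@ for constructors, so @:=@ cannot be a plain function) and a label-free row type with integer fields.

```haskell
-- A label-free executable model of the primitive query operators.
-- insertRow, selectRows, deleteRows, and updateRows stand in for
-- (+=), (?=), (-=), and (:=); all names here are illustrative.
type Name = String

data Row = Row { rKey :: Int, rVal1 :: Int, rVal2 :: Int }
  deriving (Eq, Show)

-- tFresh plays the role of tpFresh: the next unused key.
data Table = Table { tFresh :: Int, tRows :: [Row] }
  deriving (Eq, Show)

type DB = [(Name, Table)]

-- Predicates are modeled as plain functions on rows.
type Pred = Row -> Bool

-- insert: add a row under a fresh key and bump the key counter
insertRow :: DB -> Name -> (Int, Int) -> DB
insertRow db n (v1, v2) = [ (m, if m == n then ins t else t) | (m, t) <- db ]
  where ins (Table k rs) = Table (k + 1) (rs ++ [Row k v1 v2])

-- select: all rows of table n that satisfy p
selectRows :: DB -> Name -> Pred -> [Row]
selectRows db n p = maybe [] (filter p . tRows) (lookup n db)

-- delete: drop the rows of table n that satisfy p
deleteRows :: DB -> Name -> Pred -> DB
deleteRows db n p = [ (m, if m == n then del t else t) | (m, t) <- db ]
  where del (Table k rs) = Table k (filter (not . p) rs)

-- update: overwrite both value fields of the rows that satisfy p
updateRows :: DB -> Name -> Pred -> Int -> Int -> DB
updateRows db n p v1 v2 = [ (m, if m == n then upd t else t) | (m, t) <- db ]
  where upd (Table k rs) =
          Table k [ if p r then r { rVal1 = v1, rVal2 = v2 } else r | r <- rs ]
```

Keys are allocated from @tFresh@ just as @tpFresh@ allocates them in the formalization: insertion returns a database in which the counter has been bumped, so repeated inserts never reuse a key.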
\subsection{Monadic Database Queries}\label{subsec:db:monadic} \begin{figure}[t] \begin{tabular}{c} \begin{mcode} data Program l = Pg { pLabel :: l, pDB :: DB l, pTerm :: Term l } | PgHole { pDB :: DB l } data Term l = ... | TInsert Name (Term l) (Term l) | TSelect Name Pred | TDelete Name Pred | TUpdate Name Pred (Term l) (Term l) \end{mcode} \end{tabular} \caption{Extension of programs and terms with a database.} \label{fig:database:def} \end{figure} \subsubsection{Syntax} \Cref{fig:database:def} defines \lwebcalc's syntax as an extension of \liocalc. Programs are extended to carry the state of the database. Erasure of a program at an observation level @l@ leads to a @PgHole@ that now carries a database erased at level @l@. Erasure is defined in~\S~\ref{subsec:db:noninterference}; here we note that preserving the database at program erasure is required since even though the result of the program is erased, its effects on the database persist. For instance, when evaluating @TBind t1 t2@ the effects of @t1@ on the database affect computing @t2@. Terms are extended with monadic database queries. @TInsert n (TLabeled l1 v1) (TLabeled l2 v2)@ inserts into the table @n@ database values @v1@ and @v2@ labeled with @l1@ and @l2@, respectively. @TSelect n p@ selects the rows of the table @n@ that satisfy the predicate @p@. @TDelete n p@ deletes the rows of the table @n@ that satisfy the predicate @p@. Finally, @TUpdate n p (TLabeled l1 v1) (TLabeled l2 v2)@ updates the fields for each row of table @n@ that satisfies the predicate @p@ to be @v1@ and @v2@, where the database values @v1@ and @v2@ are labeled with @l1@ and @l2@, respectively. \subsubsection{Semantics}\label{subsubsec:semantics} Figure~\ref{fig:database-eval} defines the operational semantics for the monadic database queries in \lwebcalc. 
Before we explain the evaluation rules, note that both insert and update attempt to insert a labeled value @TLabeled li vi@ in the database, thus @vi@ should be a value, and unlabeled, \ie satisfy the @isDBValue@ predicate.\footnote{We could allow inserting unlabeled terms, the label for which is just the current label. Explicit labeling is strictly more general.} In the \lweb implementation we use Haskell's type system to enforce this requirement. In \lwebcalc, we capture this property in a predicate @ς@ that constrains labeled values in insert and update to be database values: \begin{mcode} ς :: Program l -> Bool ς (Pg _ _ t) = ςTerm t ςTerm :: Term l -> Bool ςTerm (TInsert _ (TLabeled _ v1) (TLabeled _ v2)) = isDBValue v1 && isDBValue v2 ςTerm (TUpdate _ _ (TLabeled _ v1) (TLabeled _ v2)) = isDBValue v1 && isDBValue v2 ... \end{mcode} We specify that @eval@ is only called on \safe programs, \ie those that satisfy @ς@. For terms other than insert and update, \safety is homomorphically defined. Restricting \safety to permit only database values, as opposed to terms that eventually evaluate to database values, was done to reduce the number of cases for the proof, but does not remove any conceptual realism. 
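To make the shape of this check concrete, the following self-contained sketch evaluates a @ς@-style \safety predicate on a pared-down term language; only the insert case is shown, and the datatypes are simplified stand-ins for the mechanized definitions.

```haskell
-- Executable sketch of the safety predicate on a small term language
-- with two-point labels (illustrative, not the mechanized definitions).
data Lab = Low | High deriving (Eq, Show)

data Term
  = TUnit
  | TInt Int
  | TLabel Lab
  | TLabeled Lab Term
  | TInsert String Term Term
  deriving (Eq, Show)

-- Only basic terms may be stored in the database.
isDBValue :: Term -> Bool
isDBValue TUnit      = True
isDBValue (TInt _)   = True
isDBValue (TLabel _) = True
isDBValue _          = False

-- An insert is safe when both labeled payloads are database values;
-- all other term forms (in this fragment) are trivially safe.
safeTerm :: Term -> Bool
safeTerm (TInsert _ (TLabeled _ v1) (TLabeled _ v2)) =
  isDBValue v1 && isDBValue v2
safeTerm _ = True
```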
\begin{figure}[t] \begin{tabular}{c} \begin{mcode} eval :: Label l => i:{ Program l | ς i && terminates i} -> {o:Program l | ς o } \end{mcode} \end{tabular} \begin{tabular}{ll} \begin{mcode} eval (Pg l db (TInsert n t1 t2)) | TLabeled l1 v1 <- t1 , TLabeled l2 v2 <- t2 , Just t <- db!!n, l1 canFlowTo labelF1 t , l2 canFlowTo labelF2 t v1, l canFlowTo labelT t = let k = freshKey t r = Row k v1 v2 in Pg (l join l1) (db += n r) (TReturn k) eval (Pg l db (TInsert n t1 t2)) | TLabeled l1 v1 <- t1 , TLabeled l2 v2 <- t2 = Pg (l join l1) db (TReturn TException) eval (Pg l db (TDelete n p)) | Just t <- db!!n , l join labelPred p t canFlowTo labelT t = let l' = l join labelRead p t in Pg l' (db -= n p) (TReturn TUnit) eval (Pg l db (TDelete n p)) | Just t <- db!!n = let l' = l join labelRead p t in Pg l' db (TReturn TException) | otherwise = Pg l db (TReturn TException) \end{mcode} & \begin{mcode} eval (Pg l db (TSelect n p)) | Just t <- db!!n = let l' = l join labelT t join labelPred p t in Pg l' db (TReturn (db ?= n p)) eval (Pg l db (TSelect n p)) = Pg l db (TReturn TException) eval (Pg l db (TUpdate n p t1 t2)) | TLabeled l1 v1 <- t1 , TLabeled l2 v2 <- t2 , Just t <- db!!n , l join l1 join labelPred p t canFlowTo labelF1 t , l join l2 join labelPred p t canFlowTo labelF2 t v1 = let l' = l join l1 join labelRead p t join labelT t in Pg l' (db := n p v1 v2) (TReturn TUnit) eval (Pg l db (TUpdate n p t1 t2)) | TLabeled l1 v1 <- t1 , TLabeled l2 v2 <- t2 , Just t <- db!!n = let l' = l join l1 join labelRead p t join labelT t in Pg l' db (TReturn TException) | otherwise = Pg l db (TReturn TException) \end{mcode} \end{tabular} \caption{Evaluation of monadic database terms.} \label{fig:database-eval} \end{figure} \paragraph{Insert} Insert attempts to insert a row with values @v1@ and @v2@, labeled with @l1@ and @l2@ respectively, in the table @n@.
To perform the insertion we check that \begin{enumerate}[leftmargin=*] \item the table named @n@ exists in the database, as table @t@. \item @l1@ can flow into the label of the first field of @t@, since the value @v1@ labeled with @l1@ will write to the first field of the table. \item @l2@ can flow into the label of the second field of @t@, as potentially determined by the first field @v1@ (\ie per @labelF2 t v1@). \item the current label @l@ can flow to the label of the table, since insert changes the length of the table. \end{enumerate} If all these checks succeed, we compute a fresh key @k = freshKey t@, insert the row @Row k v1 v2@ into the table @n@, and return the key. If any of the checks fail we return an exception and leave the database unchanged. Either way, we raise the current label @l@ by joining it with @l1@. This is because checking @l2 $\sqsubseteq$ labelF2 t v1@ requires examining @v1@, which has label @l1@. Whether this check succeeds can be discerned from whether the key is returned; if the check fails an exception is thrown, potentially leaking information about @v1@. This subtle point was revealed by the formalization: our original implementation failed to raise the current label properly. \paragraph{Select} Select only checks that the table @n@ exists in the database, returning an exception if it does not. If the table @n@ is found as the table @t@, then we return the term @db ?= n p@ that contains a list of all rows of @t@ that satisfy the predicate @p@, leaving the database unchanged. The current label is raised to include the label of the table @labelT t@ since, for a trivially true predicate, the whole table is returned, so the size of the table can leak. We raise the current label with the label of the predicate @p@ on the table @t@, which intuitively permits reading all the values of @t@ that the predicate @p@ depends on. We define the function @labelPred p t@ that computes the label of the predicate @p@ on the table @t@.
\begin{mcode} labelPred :: (Label l) => Pred -> Table l -> l labelPred p (Table tp rs) | pArity p == 2 = foldl (join) (tpLabelField1 tp) [tpLabelField2 tp v1 | Row _ v1 _ <- rs] | pArity p == 1 = tpLabelField1 tp | otherwise = bot \end{mcode} If the predicate @p@ depends on both fields, then its label is the join of the label of the first field and the labels of all second fields. If @p@ only depends on the first field, then the label of the predicate @p@ is the label of the first field. Otherwise, @p@ depends on no fields and its label is $\bot$. Note that the primitive selection operator @db ?= n p@ returns labeled terms protected by the labels returned by the @labelF1@ and @labelF2@ functions. Since terms are labeled, select does not need to raise the current label to protect values that the predicate @p@ does not read. \paragraph{Delete} Deletion checks that the table named @n@ exists in the database as @t@ and that the current label joined with the label of the predicate @p@ on the table @t@ can flow into the label of the table @t@, since delete changes the size of the table. If both checks succeed, then database rows are properly deleted. The current label is raised with the ``read label'' of the predicate @p@ on the table @t@, which intuitively gives permission to read the label of the predicate @p@ on the same table. The function @labelRead p t@ computes the read label of the predicate @p@ on the table @t@ to be the label required to read @labelPredRow p t@, \ie equal to the label of the first field if the predicate depends on the second field, and bottom otherwise. \begin{mcode} labelRead :: (Label l) => Pred -> Table l -> l labelRead p t = if pArity p == 2 then labelF1 t else bot \end{mcode} Note that @labelRead p t@ always flows into @labelPred p t@, thus the current label is implicitly raised to this read label. When the runtime checks of @delete@ fail we return an exception and the database is not changed.
If the table @n@ was found in the database, the current label is raised, even in the case of failure, since the label of the predicate was read. \paragraph{Update} Updating a table @n@ with values @v1@ and @v2@ on a predicate @p@ can be seen as a select-delete-insert operation. But since the length of the table is not changing, the check that the current label can flow to the label of the table is omitted. Concretely, update checks that \begin{enumerate}[leftmargin=*] \item the table named @n@ exists in the database, as table @t@, \item @l join l1 join labelPred p t@ can flow into the label of the first field of @t@, since the value @v1@ labeled with @l1@ will be written to the first field of the table, and whether this write happens depends on the predicate @p@, whose label is @labelPred p t@, \item @l join l2 join labelPred p t@ can flow into the label of the second field of @t@ when the first field is @v1@. \end{enumerate} If these checks succeed, then unit is returned, the database is updated, and the current label is raised with the labels of all values read during the checks, \ie @l1 join labelF1 t@. If the checks fail then we return an exception and the database is not updated. In both cases, the current label is raised by joining with the table label, \ie @l' = ... $\sqcup$ labelT t@. This is because the last check depends on whether the table is empty or not, and its success can be discerned: if it succeeds, then unit is returned. Interestingly, our original implementation failed to update the current label in this manner. Doing so seemed intuitively unnecessary because an update does not change the table length. \subsection{Noninterference}\label{subsec:db:noninterference} As in~\S~\ref{sec:formal}, to prove noninterference we prove the simulation between @eval@ and @ε l . eval@ for \lwebcalc programs.
\begin{figure}[t] \begin{tabular}{c} \begin{mcode} ε :: (Label l) => l -> Program l -> Program l εDB :: (Label l) => l -> DB l -> DB l εTable :: (Label l) => l -> Table l -> Table l εRow :: (Label l) => l -> TPolicy l -> Row l -> Row l \end{mcode} \end{tabular} \begin{tabular}{lcl} \begin{mcode} ε l (PgHole db) = PgHole (εDB l db) ε l (Pg lc db t) | not (lc canFlowTo l) = PgHole (εDB l db) | otherwise = Pg lc (εDB l db) (εTerm l t) εDB l [] = [] εDB l ((n,t):db) = (n, εTable l t):εDB l db \end{mcode} &\quad& \begin{mcode} εTable l (Table tp rs) | not (tpTableLabel tp canFlowTo l) = Table tp [] εTable l (Table tp rs) = Table tp (map (εRow l tp) rs) εRow l tp (Row k v1 v2) | not (tpLabelField1 tp canFlowTo l) = Row k THole THole | not (tpLabelField2 tp v1 canFlowTo l) = Row k (εTerm l v1) THole | otherwise = Row k (εTerm l v1) (εTerm l v2) \end{mcode} \end{tabular} \caption{Erasure of programs and databases.} \label{fig:database:erasure} \end{figure} Figure~\ref{fig:database:erasure} extends erasure to programs and databases. Erasure of programs is similar to~\S~\ref{sec:formal}, but now we also erase the database. Erasure of a database recursively erases all tables. Erasure of a table removes all of its rows if the label of the table cannot flow into the erasing label, thus hiding the size of the table. Otherwise, it recursively erases each row. Erasure of a row respects the dynamic labels stored in the containing table's policy. Erasure of a row replaces \emph{both} fields with holes if the label of the first field cannot flow into the erasing label, since the label of the second field is not visible. If the label of the second field cannot flow into the erasing label, it replaces only the second field with a hole. Otherwise, it erases both fields. With this definition of erasure, we prove the simulation between @eval@ and @ε l . eval@, and with this, noninterference.
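The most interesting case, the row erasure, can be exercised directly. The sketch below re-implements its case split over a two-point lattice, with holes modeled as an explicit value constructor; all types here are simplified stand-ins for the mechanized ones.

```haskell
-- Executable sketch of row erasure over a two-point lattice
-- (illustrative, not the mechanized definitions).
data Lab = Low | High deriving (Eq, Show)

canFlowTo :: Lab -> Lab -> Bool
canFlowTo High Low = False   -- the only forbidden flow
canFlowTo _    _   = True

data Val = VInt Int | Hole deriving (Eq, Show)

data Row = Row { rKey :: Int, rVal1 :: Val, rVal2 :: Val }
  deriving (Eq, Show)

-- l is the erasing (observation) label, lf1 the static label of the
-- first field, and lf2 computes the second field's label from the
-- first value, mirroring tpLabelField1/tpLabelField2.
eraseRow :: Lab -> Lab -> (Val -> Lab) -> Row -> Row
eraseRow l lf1 lf2 (Row k v1 v2)
  | not (lf1 `canFlowTo` l)    = Row k Hole Hole
  | not (lf2 v1 `canFlowTo` l) = Row k v1 Hole
  | otherwise                  = Row k v1 v2
```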
The refinement properties in the database definition of \cref{fig:database-def} are critical in the proof, as explained below. \paragraph{\Safe programs} The simulation proof assumes that the input program is \safe, \ie satisfies the predicate @ς@ as defined in~\cref{subsubsec:semantics}, or equivalently evaluation only inserts values that satisfy the @isDBValue@ property. To relax this assumption, an alternative approach could be to check this property at runtime, just before insertion of the values. But, this would break simulation: @TInsert n (TLabeled l1 v1) t@ will fail if @v1@ is not a database value, but its erased version can succeed if @v1@ is erased to a hole (when @l1@ cannot flow into the erase label). Thus, the @isDBValue@ property cannot be checked before insertion and should be assumed by evaluation. In the implementation this safety check is enforced by Haskell's type system. \paragraph{Database values} Simulation of the delete operation requires that values stored in the database must have identity erasure, \eg cannot be labeled terms. Thus, we prove that all terms that satisfy @isDBValue@ also have erasure identity. We do this by stating the property as a refinement on term erasure itself. \begin{mcode} εTerm :: Label l => l -> i:Term l -> {o:Term l | isDBValue i => isDBValue o } \end{mcode} In the delete proof, each time a database term is erased, the proof identity @εTerm l v == v@ is immediately available. \paragraph{Note on refinements} The type @DBTerm l@ is a type alias for @Term l@ with the attached refinement that the term is a database value. A @DBTerm l@ \textit{does not carry} an actual proof that it is a database value. Instead, the refinement type that the term satisfies the @isDBValue@ property is statically verified during type checking. As a consequence, comparison of two @DBTerm@s does not require proof comparison. At the same time, verification can use the @isDBValue@ property. 
For instance, when opening a row @Row k v1 v2@, we know that @isDBValue v1@ and by the type of term erasure, we know that for each label @l@, @εTerm l v1 == v1@. \section{Mechanizing Noninterference of LIO in Liquid Haskell}\label{sec:formal} A contribution of this work is a formalization of \lweb's extension to \lio to support database security policies, along with a proof that this extension satisfies (termination insensitive) noninterference. We mechanize our formalization in Liquid Haskell~\cite{Vazou14}, an SMT-based refinement type checker for Haskell programs. Liquid Haskell permits refinement type specifications on Haskell source code. It converts the code into SMT queries to validate that the code satisfies the specifications. Our mechanized formalization and proof of noninterference constitutes the first significant metatheoretical mechanization carried out in Liquid Haskell. We present our mechanized \lweb formalism in two parts. In this section, we present \liocalc, a formalization and proof of noninterference for \lio. The next section presents \lwebcalc, an extension of \liocalc that supports database operations. Our Liquid Haskell mechanization defines \liocalc's syntax and operational semantics as Haskell definitions, as a definitional interpreter. We present them the same way in this paper, rather than reformatting them as mathematical inference rules. Metatheoretic properties are expressed as refinement types, following~\citet{refinement-reflection,a-tale}, and proofs are Haskell functions with these types (checked by the SMT solver). We assess our experience using Liquid Haskell for metatheory in comparison to related approaches in \Cref{sec:liquidhaskell-discussion}. 
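To convey the flavor of this proof style with a generic example (independent of our development), a property is written as a refinement type on an ordinary Haskell function; the @{-@ ... @-}@ annotations are comments to GHC, so the file still compiles and runs as plain Haskell. Whether the trivial proof body below is accepted automatically depends on the Liquid Haskell configuration (\eg proof-by-logical-evaluation):

```haskell
-- A generic taste of the Liquid Haskell style (not from our development).
-- To plain GHC the {-@ ... @-} annotations are ordinary comments.
{-@ reflect double @-}
double :: Int -> Int
double x = x + x

-- The property "double x == 2 * x" is a refinement type; its proof is a
-- plain Haskell function of that type, checked by the SMT solver.
{-@ doubleEven :: x:Int -> { double x == 2 * x } @-}
doubleEven :: Int -> ()
doubleEven _ = ()
```

Our metatheoretic statements (simulation, noninterference) follow exactly this pattern, with larger proof terms.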
\subsection{Security Lattice as a Type Class}\label{subsec:label:class} \begin{figure*} \begin{mcode} class Label l where (canFlowTo) :: l -> l -> Bool (meet) :: l -> l -> l (join) :: l -> l -> l bot :: l lawBot :: l:l -> { bot canFlowTo l } lawFlowReflexivity :: l:l -> { l canFlowTo l } lawFlowAntisymmetry :: l1:l -> l2:l -> { (l1 canFlowTo l2 $\land$ l2 canFlowTo l1) => l1 == l2 } lawFlowTransitivity :: l1:l -> l2:l -> l3:l -> { (l1 canFlowTo l2 $\land$ l2 canFlowTo l3) => l1 canFlowTo l3 } lawMeet :: z:l -> l1:l -> l2:l -> l:l -> { z == l1 meet l2 => z canFlowTo l1 $\land$ z canFlowTo l2 $\land$ (l canFlowTo l1 $\land$ l canFlowTo l2 => l canFlowTo z) } lawJoin :: z:l -> l1:l -> l2:l -> l:l -> { z == l1 join l2 => l1 canFlowTo z $\land$ l2 canFlowTo z $\land$ (l1 canFlowTo l $\land$ l2 canFlowTo l => z canFlowTo l) } \end{mcode} \caption{\texttt{Label} type class extended with \texttt{law*} methods to define the lattice laws as refinement types.} \label{fig:formalism:label} \end{figure*} Figure~\ref{fig:formalism:label} duplicates the @Label@ class definition of Figure~\ref{fig:label} but extends it with several methods that use refinement types to express properties of lattices that labels are expected to have. \paragraph{Partial order} The method @(canFlowTo)@ defines a partial order for each @Label@ element. That is, @(canFlowTo)@ is reflexive, antisymmetric, and transitive, as respectively encoded by the refinement types of the methods @lawFlowReflexivity@, @lawFlowAntisymmetry@, and @lawFlowTransitivity@. For instance, @lawFlowReflexivity@ is a method that takes a label @l@ to a Haskell unit (\ie @l -> ()@). This type is refined to encode the reflexivity property @l:l -> {v:() | l canFlowTo l }@ and further simplifies to ignore the irrelevant @v:()@ part as @l:l -> { l canFlowTo l }@. With that refinement, application of @lawFlowReflexivity@ to a concrete label @l@ gives back a proof that @l@ can flow to itself (\ie @l canFlowTo l@). 
At an instance definition of the class @Label@, the reflexivity proof needs to be explicitly provided. \paragraph{Lattice} Similarly, we refine the @lawMeet@ method to define the properties of the @(meet)@ lattice operator. Namely, for all labels @l1@ and @l2@, we define @z == l1 meet l2@ so that (i) @z@ can flow to @l1@ and @l2@ (@z canFlowTo l1 $\land$ z canFlowTo l2@) and (ii) all labels that can flow to @l1@ and @l2@ can also flow to @z@ @(forall l. l canFlowTo l1 $\land$ l canFlowTo l2 => l canFlowTo z)@. Dually, we refine the @lawJoin@ method to describe @l1 join l2@ as the least label that is above both @l1@ and @l2@. \paragraph{Using the lattice laws} The lattice laws are class methods, which can be used for any @l@ that satisfies the @Label@ class constraints. For example, we prove that for all labels @l1@, @l2@, and @l3@, @l1 join l2@ can flow into @l3@ \textit{iff} both @l1@ and @l2@ can flow into @l3@. \begin{mcode} $\texttt{join}$Iff :: Label l => l1:l -> l2:l -> l3:l -> {l1 canFlowTo l3 $\land$ l2 canFlowTo l3 <=> (l1 join l2) canFlowTo l3} $\texttt{join}$Iff l1 l2 l3 = lawJoin (l1 join l2) l1 l2 l3 ? lawFlowTransitivity l1 l2 l3 \end{mcode} The theorem is expressed as a Haskell function that is given three labels and returns a unit value refined with the desired property. The proof proceeds by calling the laws of join and transitivity, combined with the proof combinator @(?)@ that ignores its second argument (\ie defined as @x ? _ = x@) while passing the refinements of both arguments to the SMT solver. The contrapositive step is automatically enforced by refinement type checking, using the SMT solver. \subsection{\liocalc: Syntax and Semantics}\label{subsec:label:calculus} Now we present the syntax and operational semantics of \liocalc.
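Before doing so, we give a minimal executable sketch (plain Haskell, not part of the mechanization, with the @law*@ proof methods omitted) of a two-point instance of these lattice operations, checking the @joinIff@ property above by brute force over all eight label triples:

```haskell
-- A two-point lattice instance of the executable Label operations.
-- The law* methods (the refinement-typed proofs) are omitted here.
data L = Low | High deriving (Eq, Show, Enum, Bounded)

canFlowTo :: L -> L -> Bool
canFlowTo High Low = False
canFlowTo _    _   = True

join :: L -> L -> L
join Low Low = Low
join _   _   = High

meet :: L -> L -> L
meet High High = High
meet _    _    = Low

-- joinIff from the text: l1 join l2 flows into l3 iff both l1 and l2 do.
joinIff :: L -> L -> L -> Bool
joinIff l1 l2 l3 =
  (canFlowTo l1 l3 && canFlowTo l2 l3) == canFlowTo (join l1 l2) l3

joinIffHolds :: Bool
joinIffHolds =
  and [ joinIff a b c | a <- labels, b <- labels, c <- labels ]
  where labels = [minBound .. maxBound]
```

For this finite lattice the property can be checked exhaustively; the mechanized proof instead derives it for \emph{any} instance from the @law*@ methods.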
\begin{figure}[t] \begin{tabular}{l} \begin{mcode} data Program l = Pg { pLabel :: l, pTerm :: Term l } | PgHole data Term l -- pure terms = TUnit | TInt Int | TLabel l | TLabeled l (Term l) | TLabelOf (Term l) | TVar Var | TLam Var (Term l) | TApp (Term l) (Term l) | THole | ... -- monadic terms | TBind (Term l) (Term l) | TReturn (Term l) | TGetLabel | TLIO (Term l) | TTLabel (Term l) (Term l) | TUnlabel (Term l) | TException | TToLabeled (Term l) (Term l) \end{mcode} \end{tabular} \caption{Syntax of \liocalc.} \label{fig:label:syntax} \end{figure} \subsubsection{Syntax} \Cref{fig:label:syntax} defines a program as either an actual program (@Pg@) with a current label @pLabel@ under which the program's term @pTerm@ is evaluated, or as a hole (@PgHole@). The hole is not a proper program; it is used to define adversary observability when proving noninterference (\cref{subsec:formal:noninterference}). We omit the clearance label in the formalism as a simplification since its rules are straightforward (when the current label changes, check that it flows into the clearance label). Terms are divided into \textit{pure} terms, whose evaluation is independent of the current label, and \textit{monadic} terms, whose evaluation manipulates or depends on the current label. \paragraph{Pure terms} Pure terms include unit @TUnit@, integers @TInt i@ for some Haskell integer @i@, and the label value @TLabel l@, where @l@ is an instance of the @Label@ class of \Cref{fig:formalism:label}. The labeled value @TLabeled l t@ wraps the term @t@ with the label @l@. The term @TLabelOf t@ returns the label of the term @t@, if @t@ is a labeled term. Pure terms include the standard lambda calculus terms for variables (@TVar@), application (@TApp@), and abstraction (@TLam@). Finally, similar to programs, a hole term (@THole@) is required for the meta-theory. It is straightforward to extend pure terms to more interesting calculi.
In our mechanization we extended pure terms with lattice label operations, branches, lists, and inductive fixpoints; we omit them here for space reasons. \paragraph{Monadic terms} Monadic terms are evaluated under a state that captures the current label. Bind (@TBind@) and return (@TReturn@) are the standard monadic operations that, respectively, propagate and return the current state. The current label is accessed with the @TGetLabel@ term, and the monadic term @TLIO@ wraps monadic values, \ie computations that cannot be further evaluated. The term @TTLabel lt t@ labels the term @t@ with the label term @lt@; dually, the term @TUnlabel t@ unlabels the labeled term @t@. An exception (@TException@) is thrown if a policy is violated. Finally, the term @TToLabeled tl t@ locally raises the current label to @tl@ to evaluate the monadic term @t@, restoring the original label when the computation completes. \subsubsection{Semantics} \begin{figure}[t] \begin{tabular}{lcl} \begin{mcode} eval :: Label l => Program l -> Program l \end{mcode}\hspace{-2em} &&\\ \begin{mcode} eval (Pg lc (TBind t1 t2)) | Pg lc' (TLIO t1') <- eval$*$ (Pg lc t1) = Pg lc' (TApp t2 t1') eval (Pg lc (TReturn t)) = Pg lc (TLIO t) eval (Pg lc TGetLabel) = Pg lc (TReturn (TLabel lc)) eval (Pg lc (TTLabel (TLabel l) t)) | lc canFlowTo l = Pg lc (TReturn (TLabeled l t)) | otherwise = Pg lc TException \end{mcode} && \begin{mcode} eval (Pg lc (TUnlabel (TLabeled l t))) = Pg (l join lc) (TReturn t) eval (Pg lc (TToLabeled (TLabel l) t)) | Pg lc' (TLIO t') <- eval$*$ (Pg lc t) , lc canFlowTo l, lc' canFlowTo l = Pg lc (TReturn (TLabeled l t')) | otherwise = Pg lc (TReturn (TLabeled l TException)) eval (Pg lc t) = Pg lc (evalTerm t) eval PgHole = PgHole \end{mcode}\\ &&\\ \begin{mcode} evalTerm :: Label l => Term l -> Term l evalTerm (TLabelOf (TLabeled l _)) = TLabel l evalTerm (TLabelOf t) = TLabelOf (evalTerm t) evalTerm (TApp (TLam x t) tx) = subst (x,tx) t evalTerm (TApp t tx) = TApp (evalTerm t) tx evalTerm v = v
\end{mcode} &\quad& \begin{mcode} eval$*$ :: Label l => Program l -> Program l eval$*$ PgHole = PgHole eval$*$ (Pg lc (TLIO t)) = Pg lc (TLIO t) eval$*$ p = eval$*$ (eval p) subst :: Eq l => (Int, Term l) -> Term l -> Term l subst = ... \end{mcode} \end{tabular} \caption{Operational semantics of \liocalc.} \label{fig:label:calculus} \end{figure} \Cref{fig:label:calculus} summarizes the operational semantics of \liocalc as three main functions: (i) @eval@ evaluates monadic terms taking into account the current label of the program, (ii) @evalTerm@ evaluates pure terms, and (iii) @eval$*$@ is the transitive closure of @eval@. \paragraph{Program evaluation} The bind of two terms @t1@ and @t2@ fully evaluates @t1@ into a monadic value, using evaluation's transitive closure @eval$*$@. The result is passed to @t2@. The returned program uses the label of the evaluation of @t1@, which is safe since evaluation only increases the current label. In the definition of evaluation, we use Haskell's guard syntax @Pg lc' (TLIO t1') <- eval$*$ (Pg lc t1)@ to denote that evaluation of bind only occurs when @eval$*$ (Pg lc t1)@ returns a program whose term is a monadic value @TLIO@. Using refinement types, we prove that, assuming programs cannot diverge and are well-typed (\ie @t1@ is a monadic term), @eval$*$ (Pg lc t1)@ always returns a program with a monadic value, so evaluation of bind always succeeds. Evaluation of the @TReturn@ term simply returns a monadic value and evaluation of @TGetLabel@ returns the current label. Evaluation of @TTLabel (TLabel l) t@ returns the term @t@ labeled with @l@ when the current label can flow to @l@; otherwise it returns an exception. Dually, unlabeling @TLabeled l t@ returns the term @t@ with the current label joined with @l@. The term @TToLabeled (TLabel l) t@ under current label @lc@ fully evaluates the term @t@ into a monadic value @t'@ with returned label @lc'@.
If both the current and returned labels can flow into @l@, then evaluation returns the resulting term @t'@ labeled with @l@, while the current label remains the same. That is, evaluation of @t@ can arbitrarily raise the current label, since its result is labeled under @l@. Otherwise, an exception is thrown. The rest of the terms are pure, and their evaluation rules are given below. Finally, evaluation of a hole is the identity. \paragraph{Term evaluation} Evaluation of the term @TLabelOf t@ returns the label of @t@, if @t@ is a labeled term; otherwise it propagates evaluation until @t@ is evaluated to a labeled term. Evaluation of application uses the standard call-by-name semantics. The definition of substitution is standard and omitted. The rest of the pure terms are either values or a variable, whose evaluation is defined to be the identity. We define @eval$*$@ to be the transitive closure of @eval@. That is, @eval$*$@ repeats evaluation until a monadic value is reached. \subsection{Noninterference}\label{subsec:formal:noninterference} Now we prove noninterference for \liocalc. Noninterference holds when the \emph{low view} of a program is preserved by its evaluation. This low view is characterized by an \emph{erasure} function, which removes program elements whose security label is higher than the adversary's label, replacing them with a ``hole.'' Two versions of the program, given possibly different secrets, will start with the same low view, and if the program is noninterfering, they will end with the same low view. We prove noninterference of \liocalc by employing a simulation lemma, in the style of~\citet{Li2010, RussoCH08, lio}. We use refinement types to express this lemma and the property of noninterference, and rely on Liquid Haskell to certify our proof.
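As a concrete, deliberately tiny illustration of this setup, the following self-contained Haskell sketch implements erasure and a single evaluation rule (unlabel) over a two-point lattice, and checks the simulation equation by brute force on a few programs. It is an illustration only, not the mechanized development; the ASCII names @erase@ and @eraseTerm@ stand in for @ε@ and @εTerm@:

```haskell
-- Toy fragment: two-point lattice, unlabel rule only, plus a brute-force
-- check of the simulation equation  erase l (eval (erase l p)) == erase l (eval p).
data L = Low | High deriving (Eq, Show)

canFlowTo :: L -> L -> Bool
canFlowTo High Low = False
canFlowTo _    _   = True

join :: L -> L -> L
join Low Low = Low
join _   _   = High

data Term = TUnit | THole | TLabeled L Term | TUnlabel Term | TReturn Term
  deriving (Eq, Show)

data Program = Pg L Term | PgHole deriving (Eq, Show)

-- One step of evaluation: unlabeling joins the value's label into the
-- current label; everything else is left unchanged in this fragment.
eval :: Program -> Program
eval (Pg lc (TUnlabel (TLabeled l t))) = Pg (l `join` lc) (TReturn t)
eval p                                 = p

eraseTerm :: L -> Term -> Term
eraseTerm l (TLabeled l1 t)
  | l1 `canFlowTo` l = TLabeled l1 (eraseTerm l t)
  | otherwise        = TLabeled l1 THole
eraseTerm l (TUnlabel t) = TUnlabel (eraseTerm l t)
eraseTerm l (TReturn t)  = TReturn (eraseTerm l t)
eraseTerm _ t            = t

erase :: L -> Program -> Program
erase l (Pg lc t)
  | lc `canFlowTo` l = Pg lc (eraseTerm l t)
  | otherwise        = PgHole
erase _ PgHole = PgHole

-- The simulation equation, specialized to this fragment.
simOK :: L -> Program -> Bool
simOK l p = erase l (eval (erase l p)) == erase l (eval p)
```

Exhaustively checking @simOK@ over all label choices for programs of the shape @Pg lc (TUnlabel (TLabeled lv TUnit))@ succeeds; the mechanized lemma establishes the same equation for all terminating programs of the full calculus.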
\subsubsection{Erasure} The functions @ε@ and @εTerm@ erase the sensitive data of programs and terms, respectively. \begin{tabular}{lcl} \begin{mcode} ε :: Label l => l -> Program l -> Program l εTerm :: Label l => l -> Term l -> Term l εTerm l (TLabeled l1 t) | l1 canFlowTo l = TLabeled l1 (εTerm l t) | otherwise = TLabeled l1 THole εTerm l (TTLabel (TLabel l1) t) | l1 canFlowTo l = TTLabel (TLabel l1) (εTerm l t) | otherwise = TTLabel (TLabel l1) THole ... \end{mcode} && \begin{mcode} ε l (Pg lc t) | lc canFlowTo l = Pg lc (εTerm l t) | otherwise = PgHole ε _ PgHole = PgHole $\quad$ $\quad$ $\quad$ \end{mcode} \end{tabular} The term erasure function @εTerm l@ replaces the contents of a term labeled with @l1@ by a hole if @l1@ cannot flow into the erasure label @l@. Similarly, term erasure preemptively replaces the term @t@ in @TTLabel (TLabel l1) t@ with a hole when @l1@ cannot flow into the erasure label @l@, since evaluation will lead to a labeled term. For the remaining terms, erasure is a homomorphism. Program erasure with label @l@ of a program with current label @lc@ erases the term of the program, if @lc@ can flow into @l@; otherwise it returns a program hole, hiding the entire program configuration (\ie both the term and the current label) from the attacker. Erasure of a program hole is the identity. \subsubsection{Simulation} \label{subsub:simulation} \begin{figure} \begin{tabular}{cc} \begin{minipage}{.45\textwidth} \begin{tikzpicture} \matrix (m) [matrix of math nodes,row sep=3em,column sep=7.5em,minimum width=2em] { \texttt{p} & \texttt{p'} \\ \epsilon \texttt{ l p} & \epsilon \texttt{ l p'} \\}; \path[-stealth] (m-1-1) edge node [left] {$\epsilon \texttt{ l}$} (m-2-1) edge node [below] {$\texttt{eval}$} (m-1-2) (m-2-1.east|-m-2-2) edge node [below] {$\epsilon \texttt{ l .
eval}$} (m-2-2) (m-1-2) edge node [right] {$\epsilon \texttt{ l}$} (m-2-2); \end{tikzpicture} \end{minipage} & \begin{mcode} measure terminates :: Program l -> Bool simulation :: Label l => l:l -> p:{Program l | terminates p } -> { ε l (eval (ε l p)) = ε l (eval p) } \end{mcode} \end{tabular} \caption{Simulation between $\texttt{eval}$ and $\epsilon\texttt{ l . eval}$.} \label{fig:simulation} \end{figure} In \Cref{fig:simulation} we state that for every label @l@, @eval@ and @ε l . eval@ form a simulation. That is, evaluation of a program @p@ and evaluation of its erased version @ε l p@ cannot be distinguished after erasure. We prove this property by induction on the input program term. \paragraph{Termination} Simulation (and later, noninterference) is termination-insensitive: it is defined only for executions that terminate, as indicated by the @terminates@ predicate. (\liocalc includes untyped lambda calculus, so \liocalc programs are not strongly normalizing.) This is necessary because, for soundness, Liquid Haskell disallows non-terminating functions, like @eval@, from being lifted into refinement types. To lift @eval@ into the logic, we constrain it to be called only on terminating programs. To do so, we define two logical, uninterpreted functions. \begin{mcode} measure terminates :: Program l -> Bool measure evalSteps :: Program l -> Int \end{mcode} We use a refinement-type precondition to prescribe that @eval@ is only called on programs @p@ that satisfy the @terminates@ predicate, and prove termination of @eval@ by checking that the steps of evaluation (@evalSteps p@) are decreasing at each recursive call. \begin{mcode} eval :: Label l => p:{Program l | terminates p} -> Program l / [evalSteps p] \end{mcode} While the functions @terminates@ and @evalSteps@ cannot be defined as Haskell functions, we can instead \emph{axiomatize} properties that are true under the assumption of termination.
In particular, \begin{itemize} \item if a program terminates, so do its subprograms, and \item if a program terminates, the evaluation steps of its subprograms are strictly smaller than its own. \end{itemize} To express these properties, we define axioms involving these functions in refinements for each source program construct. For instance, the following assumption (encoded as a Haskell function) handles bind terms: \begin{mcode} assume evalStepsBindAxiom :: lc:l -> t1:Term l -> t2:{Term l | terminates (Pg lc (TBind t1 t2)) } -> { (evalSteps (Pg lc t1) < evalSteps (Pg lc (TBind t1 t2))) && (0 <= evalSteps (Pg lc t1)) && (terminates (Pg lc t1)) } evalStepsBindAxiom _ _ _ = () \end{mcode} Here, @evalStepsBindAxiom@ encodes that if the program @Pg lc (TBind t1 t2)@ terminates, then so does @Pg lc t1@, with fewer evaluation steps. This assumption is required to prove simulation in the inductive case of @TBind@, since we need to \begin{itemize} \item apply the simulation lemma to the program @Pg lc t1@, so we need to know that it terminates; and \item prove that the induction is well founded, which we do by showing that the evaluation step count of each subprogram is a decreasing natural number. \end{itemize} \subsubsection{Noninterference} The noninterference theorem states that if two terminating \liocalc programs @p1@ and @p2@ are equal after erasure with label @l@, then their evaluation is also equal after erasure with label @l@. As with simulation, noninterference is termination-insensitive---potentially diverging programs could violate noninterference. We express the noninterference theorem as a refinement type.
\begin{mcode} nonInterference :: Label l => l:l -> p1:{Program l | terminates p1 } -> p2:{Program l | terminates p2 } -> { ε l p1 == ε l p2 } -> { ε l (eval p1) == ε l (eval p2) } \end{mcode} The proof proceeds by simple rewriting using the simulation property at each input program and the low equivalence precondition. \begin{mcode} nonInterference l p1 p2 lowEquivalent = ε l (eval p1) ? simulation l p1 ==. ε l (eval (ε l p1)) ? lowEquivalent ==. ε l (eval (ε l p2)) ? simulation l p2 ==. ε l (eval p2) $***$ QED \end{mcode} The body of @nonInterference@ starts from the left-hand side of the equality and, using equational reasoning and invocations of the @lowEquivalent@ hypothesis and the @simulation@ theorem on the input programs @p1@ and @p2@, reaches the right-hand side of the equality. As explained in~\cref{subsec:label:class}, the proof combinator @x ? p@ returns its first argument and extends the SMT environment with the knowledge of the theorem @p@. The proof combinator @x ==. y = y@ equates its two arguments and returns the second argument to continue the equational steps. Finally, @x $***$ QED = ()@ casts its first argument into @unit@, so that the equational proof returns a @unit@ type. \section{Implementation}\label{sec:impl} \lweb has been available online since 2016 and consists of 2,664 lines of Haskell code.\footnote{\url{https://github.com/jprider63/lmonad-yesod}} It depends on our base \lmonad package that implements the @LMonadT@ monad transformer and consists of 345 lines of code.\footnote{\url{https://github.com/jprider63/lmonad}} \lweb also imports \yesod, a well-established external Haskell library for type-safe web applications. This section explains how the implementation extends the formalization, and then discusses the trusted computing base. \subsection{Extensions}\label{subsec:impl:ext} The \lweb implementation generalizes the formalization of Sections~\ref{sec:formal} and~\ref{sec:formal-db} in several ways.
\paragraph*{Clearance label} The implementation supports a \emph{clearance} label, described in \cref{sec:lio-intro}. Intuitively, the clearance label limits how high the current label can be raised. If the current label ever exceeds the clearance label, an exception is thrown. This label is not needed to enforce noninterference, but serves as an optimization, cutting off transactions whose current label rises to the point that they are doomed to fail. Adding checks to handle the clearance was straightforward. \paragraph*{Full tables and expressive queries} As first illustrated in \Cref{subsec:lweb}, tables may have more than two columns, and a column's label can be determined by various other fields in the same row. The labels of such \emph{dependency fields} must be constant, \ie not determined by another field, and flow into the table label (which also must be constant). A consequence of this rule is that a field's label cannot depend on itself. Finally, values stored in tables instantiate \yesod's @PersistField@ type class. The implementation uses only the predefined instances, including @Text@, @Bool@, and @Int@, but critically does not define a @PersistField@ for labeled values. \lweb enforces these invariants at compile time via Haskell type checking and when preprocessing table definitions. \lweb rewrites queries to add labels to queried results. We have implemented database operations beyond those given in \cref{sec:formal-db}, to be more in line with typical database support. Some of these operations are simple variations of the ones presented. For example, \lweb allows for variations of @update@ that only update specific fields (not whole rows). \lweb implements these basic queries by wrapping Persistent~\cite{yesod}, \yesod's database library, with the derived IFC checks. To support more advanced queries, \lweb defines an SQL-like domain-specific language called @lsql@.
@lsql@ allows users to write expressive SQL queries that include inner joins, outer joins, @where@ clauses, orderings, limits, and offsets. Haskell expressions can be included in queries using anti-quotation. At compile-time, \lweb parses @lsql@ queries using quasi-quotation and Template Haskell \cite{Sheard:2002:TMH:636517.636528}. It rewrites the queries to be run using Esqueleto~\cite{esqueleto}, a Haskell library that supports advanced database queries. As part of this rewriting, \lweb inserts IFC checks for queries based on the user-defined database policies. We show several examples of @lsql@ queries in \cref{subsec:bififi}. \paragraph*{Optimizations} Sometimes a label against which to perform a check is derived from data stored in every row. Retrieving every row is especially costly when the query itself would retrieve only a fraction of them. Therefore, when possible we compute an upper bound for such a label. In particular, if a field is fully constrained by a query's predicate, we use the field's constrained value to compute any dependent labels. When a field is not fully constrained, we conservatively set dependent labels to @top@. Suppose we wish to query the @Friends@ table from \cref{fig:friendstable}, retrieving all rows such that @user1 == 'Alice'@ and @date < '2000-01-01'@. The confidentiality portion of @user1@'s label is @bot@, but that portion of @date@'s is computed from @user1 meet user2@. Since @user1@ is always @'Alice'@ we know the computed label is $\bigsqcup_l$ @Alice@ $\sqcap ~ l$ for all values @user2 = @$l$ in the database. In this case, we can bound $l$ as @top@, and thus use label @Alice@, since it is equivalent to @Alice meet top@. While this bound is technically conservative, in practice we find it makes policy sense. In this example, if the @user2@ field can truly vary arbitrarily then $\bigsqcup_l ~ l$ will approach @top@. 
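The bound computation sketched above can be made concrete. The following hypothetical Haskell fragment uses a simplified three-point lattice (not \lweb's actual DC labels, and not \lweb's API) to show how an unconstrained field contributes @top@, collapsing the dependent label @user1 meet user2@ to @user1@:

```haskell
-- Hypothetical sketch (simplified lattice, not LWeb's DC labels) of the
-- dependent-label bound: an unconstrained field contributes Top, so
-- user1 `meet` Top collapses to user1.
data Lbl = Bot | P String | Top deriving (Eq, Show)

meet :: Lbl -> Lbl -> Lbl
meet x y | x == y = x
meet Top y        = y
meet x   Top      = x
meet _   _        = Bot

-- The date field's label depends on both user fields; a field fully
-- constrained by the query predicate contributes its value, while an
-- unconstrained one is conservatively bounded by Top.
dateLabelBound :: Lbl -> Maybe Lbl -> Lbl
dateLabelBound user1 constrainedUser2 =
  user1 `meet` maybe Top id constrainedUser2
```

With @user1@ fixed to @Alice@ and @user2@ unconstrained, the computed bound is the label @Alice@, matching the reasoning in the text.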
\paragraph*{Declassification} \lweb supports forms of \emph{declassification} \cite{Sabelfeld:2009:DDP:1662658.1662659} for cases when the IFC lattice ordering needs to be selectively relaxed. These should be used sparingly (and are, in our \bibifi case study), as they form part of the trusted computing base, discussed below. \paragraph*{Row ordering} As a final point, we note that our formalization models a database as a list of rows; insertion (via @+=@) simply appends to the list, regardless of the contents of a row. As such, \emph{row ordering} does not depend on the database's contents and thus reveals nothing about them (it is governed only by the table label). In the implementation, advanced operations may specify an ordering. \lweb prevents leaks in this situation by raising the current label with the label of fields used for sorting. If a query does not specify an ordering, \lweb takes no specific steps. However, ordering on rows is undefined in SQL, so a backend database could choose to order them by their contents, and thus potentially leak information in a query's results. In our experience with PostgreSQL, default row ordering depends on when values are written and is independent of the data in the table. \subsection{Trusted Computing Base}\label{subsec:impl:limitations} A key advantage of \lweb is that by largely shifting security checks from the application into the \lweb IFC framework, we can shrink an application's trusted computing base (TCB). In particular, for an application that uses \lweb, the locus of trust is on \lweb itself, which is made (more) trustworthy by our mechanized noninterference proof. A few parts of the application must be trusted, nevertheless. First, all of the policy specifications are trusted. The policy includes the labels on the various tables and the labels on data read/written from I/O channels. Specifying the latter requires writing some trusted code to interpret data going in or out. 
For example, in a multi-user application like \bibifi, code performing authentication on a particular channel must be trusted (\cref{subsubsec:users}). Second, any uses of declassification are trusted, as they constitute local modifications to policy. One kind of declassification can occur selectively on in-application data~\cite{sabelfeld:survey}. We give an example in \cref{subsec:bibifi:declassification}. Another kind of declassification is to relax some security checks during database updates. The update query imposes strong runtime checks, \eg that the label of the predicate should flow into the updated fields, as formalized in~\cref{sec:formal-db}. \lweb provides an unsound update alternative (called @updateDeclassifyTCB@) that ignores this specific check. \section{Introduction}\label{sec:intro} Modern web applications must protect the confidentiality and integrity of their data. Employing access control and/or manual, ad hoc enforcement mechanisms may fail to block illicit information flows between components, \eg from database to server to client. Information flow control (IFC)~\cite{sabelfeld:survey} policies can govern such flows, but enforcing them poses practical problems. Static enforcement (\eg by typing~\cite{jif,flowcaml,Chong:2007:SEC:1362903.1362904,Schoepe:2014:STI:2628136.2628151,Chong:2007:SWA:1294261.1294265} or static analysis~\cite{Hammer:2009:FCO:1667545.1667547,JohnsonWMC2015,Arzt:2014:FPC:2666356.2594299}) can produce too many false alarms, which hamper adoption~\cite{king08implicit}. Dynamic enforcement~\cite{Chudnov:2015:IIF:2810103.2813684,Roy:2009:LPF:1542476.1542484,Tromer:2016:DII:2897845.2897888,YangHASFC16,Austin:2012:MFD:2103656.2103677} is more precise but can impose high overheads. A promising solution to these problems is embodied in the LIO system~\cite{lio} for Haskell. LIO is a drop-in replacement for the Haskell IO monad, extending IO with an internal \emph{current label} and \emph{clearance label}.
Such labels are lattice ordered (as is typical~\cite{denning}), with the degenerate case being a secret (high) label and a public (low) one. LIO's current label constitutes the least upper bound of the security labels of all values read during the current computation. Effectful operations such as reading/writing from stable storage, or communicating with other processes, are checked against the current label. If the operation's security label (\eg that on a channel being written to) is lower than the current label, then the operation is rejected as potentially insecure. The clearance serves as an upper bound that the current label may never cross, even prior to performing any I/O, so as to reduce the chance of side channels. Haskell's clear, type-enforced separation of pure computation from effects makes LIO easy to implement soundly and efficiently, compared to other dynamic enforcement mechanisms. This paper presents \lweb, an extension to \lio that aims to bring its benefits to Haskell-based web applications. We make three main contributions. First, we present an extension to a core LIO formalism with support for database transactions. Each table has a label that protects its length. In our implementation we use DC labels~\cite{stefan:dclabels}, which have both confidentiality and integrity components. The confidentiality component of the table label controls who can query it (as the result may reveal something about the table's length), and the integrity component controls who can add or delete rows (since both may change the length). In addition, each row may have a more refined policy to protect its contents. The label for a field in a row may be specified as a function of other fields in the same row (those fields are protected by a specific, global label). This allows, for example, having a row specifying a user and some sensitive user data; the former can act as a label to protect the latter.
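The user-as-label idea just described can be sketched in a few lines of Haskell (hypothetical types, not \lweb's actual API): the user field, itself protected by a constant label, determines the label of the sensitive field in the same row.

```haskell
-- Hypothetical types (not LWeb's actual API): a row whose user field,
-- protected by a constant label, determines the label of the sensitive
-- data field in the same row.
newtype Principal = Principal String deriving (Eq, Show)

data Lbl = Bot | Who Principal | Top deriving (Eq, Show)

data Row = Row { rowUser :: Principal, rowData :: String }

-- The user field is protected by a constant (public) label ...
userFieldLabel :: Lbl
userFieldLabel = Bot

-- ... while the data field's label is a function of the user field.
dataFieldLabel :: Row -> Lbl
dataFieldLabel r = Who (rowUser r)
```

Because @userFieldLabel@ is constant, anyone may learn which user a row belongs to, but only a computation whose label can absorb @Who user@ may read that user's data, which is the policy pattern used throughout \bibifi.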
We mechanized our formalism in Liquid Haskell~\cite{Vazou14} and proved that it enjoys noninterference. Our development proceeds in two steps: a core \lio formalism called \liocalc (\cref{sec:formal}), and an extension to it, called \lwebcalc, that adds database operations (\cref{sec:formal-db}). The mechanization process was fruitful: it revealed two bugs in our original rules that constituted real leaks. Moreover, this mechanization constitutes the largest-ever development in Liquid Haskell and is the first Liquid Haskell application to prove a language metatheory (\cref{sec:liquidhaskell-discussion}). As our next contribution, we describe a full implementation of \lweb in Haskell as an extension to the \yesod web programming framework (\cref{sec:overview} and~\cref{sec:impl}). Our implementation was carried out in two steps. First, we extracted the core label-tracking functionality of \lio into a monad transformer called \lmonad so that it can be layered on monads other than @IO@\@. For \lweb, we layered it on top of the @Handler@ monad provided by \yesod. This monad encapsulates mechanisms for client/server HTTP communications and database transactions, so layering \lmonad on top of @Handler@ provides the basic functionality to enforce security. Then we extended \yesod's database API to permit defining label-based information flow policies, generalizing the approach from our formalism whereby each row may have many fields, each of which may be protected by other fields in the same row. We support simple key/value lookups and more general SQL queries, extending the Esqueleto framework~\cite{esqueleto}. We use Template Haskell~\cite{Sheard:2002:TMH:636517.636528} to insert checks that properly enforce policies in our extension.
Finally, we describe our experience using \lweb to build a substantial web site hosting the Build it, Break it, Fix it (\bibifi) security-oriented programming contest~\cite{Ruef:2016:BBF:2976749.2978382} hosted at \url{https://builditbreakit.org} (\cref{subsec:bififi}). This site has been used over the last few years to host more than a dozen contests involving hundreds of teams. It consists of 11500+ lines of Haskell and manages data stored in 40 database tables. The site has a variety of roles (participants, teams, judges, admins) and policies that govern their various privileges. When we first deployed this contest, it lacked \lweb support, and we found it had authorization bugs. Retrofitting it with \lweb was straightforward and eliminated those problems, reducing the trusted computing base from the entire application to just 80 lines of its code (\tcbnumberbibifi) plus the \lweb codebase (for a total of \tcbnumber). \lweb imposes modest overhead on \bibifi query latencies---experiments show between \overheadnumbermin and \overheadnumber (\cref{sec:experiments}). 
\begin{comment} \begin{wrapfigure}{r}{0.3\textwidth} \vspace{-0.4cm} \begin{tikzpicture}[ mynode/.style={ draw, text width=3cm, minimum height=0.7cm, align=center }, ] \node[mynode,fill=usercolor] (user) {\textcolor{white}{\textbf{Programmer}}}; \node[below=1cm of user] (lwebname) {$\qquad\qquad\qquad\qquad$ \textbf{\lweb}}; \node[mynode,fill=liocolor,below=0.2cm of lwebname] (lio) {\textcolor{white}{\textbf{\lmonad}}}; \node[mynode,fill=yesodcolor,below=1cm of lio] (yesod) {\textcolor{white}{\textbf{\yesod}}}; \node [draw=black,minimum width=4cm, yshift=0.5cm, fit={ (lio) (yesod) (lwebname)}, below=1.5cm of user] (lweb){}; \node[mynode,fill=dbcolor,below=1cm of lweb] (db) {\textcolor{white}{\textbf{DB}}}; \draw[<->,very thick] (user) -- node[left] {DB Query} (lweb); \draw[<->,very thick] (lio) -- node[left] {Label Check} (yesod); \draw[<->,very thick] (db) -- node[left] {DB Access} (lweb); \end{tikzpicture} \caption{Structure of \lweb.} \label{fig:structure} \vspace{-1.6cm} \end{wrapfigure} \end{comment} \lweb is not the first framework to use IFC to enforce database security in web applications. Examples of prior efforts include SIF/Swift~\cite{Chong:2007:SEC:1362903.1362904, Chong:2007:SWA:1294261.1294265}, Jacqueline~\cite{YangHASFC16}, Hails~\cite{Giffin:2012:HPD:2387880.2387886,stefan17hails}, SELinks~\cite{corcoran09selinks}, SeLINQ~\cite{Schoepe:2014:STI:2628136.2628151}, UrFlow~\cite{urflow}, and IFDB~\cite{Schultz:2013:IDI:2465351.2465357}. \lweb distinguishes itself by providing end-to-end IFC security (between/across server and database), backed by a formal proof (mechanized in Liquid Haskell), for a mature, full-featured web framework (\yesod) while supporting expressive policies (\eg where one field can serve as the label of another) and efficient queries (a large subset of SQL). 
The IFC checks needed during query processing were tricky to get right---our formalization effort uncovered bugs in our original implementation by which information could leak owing to the checks themselves. \Cref{sec:related} discusses related work in detail. The code for \lweb and its mechanized proof are freely available. \section{Labeled Values}\label{sec:labeled} \section{Manipulation of Labeled Values}\label{sec:labeling} In~\S~\ref{sec:labels} we provided the primitive @TLabeled@ to generate labeled values but, to enforce noninterference, offered no way to unlabel such values. Now that we have monadic operations that propagate the current label, we extend the terms of our programs to allow for explicit manipulation of labeled values. \paragraph{Terms} The term @TTLabel tl t@ labels the term @t@ with the label term @tl@. The term @TUnlabel t@ returns the value of the labeled term @t@, while the term @TLabelOf t@ returns @t@'s label. \begin{mcode} data Term l = ... | TTLabel (Term l) (Term l) | TUnlabel (Term l) | TLabelOf (Term l) \end{mcode} \paragraph{Values} All of the newly introduced terms can be evaluated and thus are not values. \begin{mcode} isValue (TTLabel _ _) = False isValue (TUnlabel _) = False isValue (TLabelOf _) = False \end{mcode} \paragraph{Term Evaluation} We extend term evaluation to evaluate the @TLabelOf t@ term. \begin{mcode} evalTerm (TLabelOf (TLabeled l _)) = TLabel l evalTerm (TLabelOf t) = TLabelOf (evalTerm t) \end{mcode} If the argument @t@ evaluates to a @TLabeled l _@ value, then evaluation returns the label @l@; otherwise the argument @t@ is evaluated further. Note that if @t@ evaluates to any other value, evaluation gets stuck, repeatedly re-evaluating a non-labeled value. To catch such errors, we could use a type system that separates labeled expressions from the rest. This error does not affect the noninterference proof, since our proof assumes terminating programs. 
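Dropping the refinements, the label-projection rule can be exercised directly in plain Haskell. The fragment below transcribes just enough of the term language to run it; a two-point lattice stands in for an arbitrary @Label@ instance:

```haskell
-- Plain-Haskell transcription of the label-projection rule
-- (refinements dropped), over a two-point label lattice.
data Label = Low | High deriving (Eq, Show)

data Term
  = TLabel Label          -- a label literal
  | TLabeled Label Term   -- a value protected by a label
  | TLabelOf Term         -- project the label of a labeled value
  deriving (Eq, Show)

-- One small step: TLabelOf of a labeled value returns its label;
-- otherwise we step the argument.
evalTerm :: Term -> Term
evalTerm (TLabelOf (TLabeled l _)) = TLabel l
evalTerm (TLabelOf t)              = TLabelOf (evalTerm t)
evalTerm v                         = v
```

Note that @TLabelOf (TLabel High)@ steps to itself, which makes concrete the remark above that projection of a non-labeled value gets stuck.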
\paragraph{Program Evaluation} Labeling and unlabeling expressions are monadic terms, since they depend on the label state. The term @TTLabel tl t@ labels the term @t@ with the label @tl@. First the term @tl@ is evaluated into a label @l@. If the current program label can flow into @l@, then the monadic labeled value @TLabeled l t@ is returned. Otherwise, a runtime exception is thrown. \begin{mcode} eval (Pg lc (TTLabel (TLabel l) t)) | lc canFlowTo l = Pg lc (TReturn (TLabeled l t)) | otherwise = Pg lc TException eval (Pg lc (TTLabel tl t)) = Pg lc (TTLabel (evalTerm tl) t) \end{mcode} The term @TUnlabel t@ unlabels the term @t@. First the term @t@ is evaluated into a labeled value @TLabeled l t@. Then, the term @t@ is returned while the current label is raised to the join of the old label and @l@. \begin{mcode} eval (Pg lc (TUnlabel (TLabeled l t))) = Pg (l join lc) (TReturn t) eval (Pg lc (TUnlabel t)) = Pg lc (TUnlabel (evalTerm t)) \end{mcode} \paragraph{Erasure} Program erasure remains the same. Term erasure is updated to aggressively erase terms @t@ that are about to be labeled with a label @l1@ that does not flow to @l@. For the rest of the terms, erasure is a homomorphism. \begin{mcode} εTerm l (TTLabel (TLabel l1) t) | l1 canFlowTo l = TTLabel (TLabel l1) (εTerm l t) | otherwise = TTLabel (TLabel l1) THole εTerm l (TTLabel tl t) = TTLabel (εTerm l tl) (εTerm l t) εTerm l (TUnlabel t) = TUnlabel (εTerm l t) εTerm l (TLabelOf t) = TLabelOf (εTerm l t) \end{mcode} \paragraph{Noninterference} To prove noninterference under the extended expressions, we update the simulations proofs for programs and expressions. Our extensions break the simulations proof of~\S~\ref{subsection:lio:noninterference} for the case that the current program label cannot flow into the erasure label. This is not surprising, since unlabeling is the first term whose evaluation modifies the current label. 
Thus, after this extension, the evaluation of the program @Pg lc t@ can return with a different label, say @lc'@. In the modification of the proof (below), we use monotonicity of labels during evaluation and the transitivity of @(canFlowTo)@ to show that if the erasure of the program @Pg lc t@ reduces to a hole then so does the evaluation @eval (Pg lc t)@. \begin{mcode} simulations l (Pg lc t) | not (lc canFlowTo l) = ε l (eval (ε l (Pg lc t))) ==. ε l (eval PgHole) ==. ε l PgHole ==. PgHole ==. ε l (Pg lc' t') ? monotonicEval (Pg lc t) &&& lawFlowTransitivity lc lc' l ==. ε l (eval (Pg lc t)) *** QED where (Pg lc' t') = eval (Pg lc t) \end{mcode} For each new monadic expression, we need to add a case to the simulations proof. Most of the cases are proven by following the case splitting of evaluation and erasure. Interestingly, since erasure can happen before or after evaluation, @TTLabel tl t@ requires distinguishing three cases for the term @tl@: 1) @tl@ is a label; 2) @tl@ is not a label but evaluates in one step to a label (thus erasure happens after evaluation); 3) none of the above, so erasure does not happen. 
\section{Labels in Terms}\label{sec:labels} \subsection{Security Lattice} \begin{figure*} \begin{mcode} class Label l where (canFlowTo) :: l -> l -> Bool (meet) :: l -> l -> l (join) :: l -> l -> l lawFlowReflexivity :: l:l -> {l canFlowTo l} lawFlowAntisymmetry :: l1:l -> l2:l -> {(l1 canFlowTo l2 && l2 canFlowTo l1) => l1 == l2} lawFlowTransitivity :: l1:l -> l2:l -> l3:l -> { (l1 canFlowTo l2 && l2 canFlowTo l3) => l1 canFlowTo l3} lawMeet :: z:l -> l1:l -> l2:l -> l:l -> {z == l1 meet l2 <=> z canFlowTo l1 && z canFlowTo l2 && (l canFlowTo l1 && l canFlowTo l2 => l canFlowTo z)} lawJoin :: z:l -> l1:l -> l2:l -> l:l -> {z == l1 join l2 <=> l1 canFlowTo z && l2 canFlowTo z && (l1 canFlowTo l && l2 canFlowTo l => z canFlowTo l)} \end{mcode} \caption{Security Lattice as the Label type class} \label{fig:label} \end{figure*} In Figure~\ref{fig:label} we extend the @Label@ type class to define the security lattice of labels. The class contains three lattice methods: the ordering @(canFlowTo)@, pronounced ``can flow to'', the meet @(meet)@, and the join @(join)@. The final five methods use refinement types to encode the class laws that must be satisfied. \paragraph{Partial Order} If @l@ is a @Label@, then @(canFlowTo)@ defines a partial order on @l@. Thus, @(canFlowTo)@ should be reflexive, antisymmetric, and transitive, as respectively encoded by the refinements of the methods @lawFlowReflexivity@, @lawFlowAntisymmetry@, and @lawFlowTransitivity@. For instance, the call @lawFlowReflexivity l@ returns a proof (\ie a unit Haskell value) that for all labels @l@, @l canFlowTo l@. Moreover, to define an instance of the class @Label@, one needs to explicitly provide a reflexivity proof, as a class method. \paragraph{Lattice} We refine the @lawMeet@ method to define the properties of the @(meet)@ lattice operator. 
Namely, for all labels @l1@ and @l2@, we define @z == l1 meet l2@ so that \begin{itemize} \item @z@ can flow to @l1@ and @l2@ (@z canFlowTo l1 && z canFlowTo l2@) \item all labels that can flow to @l1@ and @l2@ can also flow to @z@ @(forall l . l canFlowTo l1 && l canFlowTo l2 => l canFlowTo z)@. \end{itemize} Dually, we refine the @lawJoin@ method to describe @l1 join l2@ as the least label that is greater than both @l1@ and @l2@. \paragraph{Using the label laws} The label laws are class methods, which can be used for any @l@ that satisfies the @Label@ class constraints. For example, below we prove that for all labels @l1@, @l2@, and @l3@, if @l1@ cannot flow into @l3@, then @l1 join l2@ cannot flow into @l3@. \begin{mcode} canNotFlowToJoin :: Label l => l1:l -> l2:l -> l3:l -> { not (l1 canFlowTo l3) => not ((l1 join l2) canFlowTo l3) } canNotFlowToJoin l1 l2 l3 = lawJoin (join l1 l2) l1 l2 l3 &&& lawFlowTransitivity l1 (join l1 l2) l3 \end{mcode} The proof proceeds by calling the laws of join and transitivity, while the contrapositive step is performed automatically, since Liquid Haskell understands boolean logic. \subsection{Labels as Terms} Our goal is to extend programs (of~\S~\ref{sec:non-interference}) to contain labels. Towards this goal, we define terms that manipulate labels. Terms consist of labels (@TLabel@), labeled values (@TLabeled@), and label operations (@TOp@). The label operations (@LabelOp@) are the binary operations of join, meet, and checking whether a label flows to another. We also include the boolean terms @TTrue@ and @TFalse@ that will be the result of a flow check. Finally, we include a hole term (@THole@) that denotes erased terms (as explained in~\S~\ref{subsec:label:erasure}). 
\begin{mcode} data Term l = TLabel l | TOp LabelOp (Term l) (Term l) | TLabeled l (Term l) | TTrue | TFalse | THole data LabelOp = LMeet | LJoin | LCanFlowTo \end{mcode} Note that the terms provide the labeled construct to label terms, but there is no construct to unlabel them. Once a term is labeled there is no way for it to be accessed, which is crucial for proving noninterference. \paragraph{Values} We distinguish values from terms that can get evaluated using the @isValue@ function. So far, only the label operations need evaluation. \begin{mcode} isValue :: Term l -> Bool isValue TTrue = True isValue TFalse = True isValue (TLabel _) = True isValue (TLabeled _ _) = True isValue THole = True isValue _ = False \end{mcode} \paragraph{Evaluation} We define term evaluation to evaluate the label operations and simply return values. To evaluate a label operation, first we fully evaluate the label arguments and then invoke the relevant @Label@ methods. \begin{mcode} evalTerm :: Label l => Term l -> Term l evalTerm (TOp LMeet (TLabel l1) (TLabel l2)) = TLabel (l1 meet l2) evalTerm (TOp LJoin (TLabel l1) (TLabel l2)) = TLabel (l1 join l2) evalTerm (TOp LCanFlowTo (TLabel l1) (TLabel l2)) = boolTerm (l1 canFlowTo l2) evalTerm (TOp o (TLabel l1) t2) = TOp o (TLabel l1) (evalTerm t2) evalTerm (TOp o t1 t2) = TOp o (evalTerm t1) t2 evalTerm v | isValue v = v \end{mcode} When evaluating a meet or a join, the result is a label value that gets converted into a term by the @TLabel@ constructor. When evaluating a @LCanFlowTo@, the result is a Haskell boolean that gets converted into a term using the @boolTerm@ function: \begin{mcode} boolTerm :: Bool -> Term l boolTerm True = TTrue boolTerm False = TFalse \end{mcode} \paragraph{Erasure} We define the term erasure function to replace any term labeled with a label higher than the observation label with a hole. 
\begin{mcode} εTerm :: Label l => l -> Term l -> Term l εTerm l (TLabeled l1 t) | l1 canFlowTo l = TLabeled l1 (εTerm l t) | otherwise = TLabeled l1 THole εTerm l TTrue = TTrue εTerm l TFalse = TFalse εTerm l (TLabel i) = TLabel i εTerm l (TOp o x y) = TOp o (εTerm l x) (εTerm l y) εTerm l THole = THole \end{mcode} When the erasure level is @l@, all values labeled with labels that cannot flow to @l@ are replaced with holes. Otherwise, the erasure function is a homomorphism over terms. \subsection{Programs with Terms} We extend programs to contain terms: \begin{mcode} data Program l = PgHole | Pg {pTerm :: Term l} \end{mcode} Program evaluation now reduces to term evaluation. \begin{mcode} eval :: Label l => Program l -> Program l eval PgHole = PgHole eval (Pg t) = Pg (evalTerm t) \end{mcode} Similarly, program erasure reduces to term erasure: \begin{mcode} ε :: Label l => l -> Program l -> Program l ε _ PgHole = PgHole ε l (Pg t) = Pg (εTerm l t) \end{mcode} The noninterference proof of~\S~\ref{sec:non-interference} still goes through, since it relies only on the simulations property of programs. But the previous simulations proof breaks, since erasure is no longer the identity. We conclude this section by proving the simulations property for our updated program definition. \subsection{Simulations} We extend the simulations proof of~\S~\ref{subsec:non-interference:simulations} to accommodate programs with terms. \begin{mcode} simulations l (Pg t) = ε l (eval (ε l (Pg t))) ==. ε l (eval (Pg (εTerm l t))) ==. ε l (Pg (evalTerm (εTerm l t))) ==. Pg (εTerm l (evalTerm (εTerm l t))) ? simulationsTerm l t ==. Pg (εTerm l (evalTerm t)) ==. ε l (Pg (evalTerm t)) ==. ε l (eval (Pg t)) *** QED \end{mcode} The proof proceeds by rewriting and uses the simulations property on terms. \paragraph{Term Simulations} Term simulation encodes the simulation between term evaluation @evalTerm@ and term evaluation after erasure @evalTerm . εTerm l@. 
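Before tackling the mechanized proof, the simulations equation can also be sanity-checked executably. The following plain-Haskell transcription of this fragment (refinements dropped, and a hypothetical two-point lattice in place of the @Label@ class) checks @εTerm l (evalTerm (εTerm l t)) == εTerm l (evalTerm t)@ by enumeration over small terms:

```haskell
-- Executable sanity check of the term-level simulations equation
-- on a plain-Haskell transcription of the label-operation fragment.
data Label = Low | High deriving (Eq, Show)

canFlowTo :: Label -> Label -> Bool
canFlowTo High Low = False
canFlowTo _    _   = True

joinL, meetL :: Label -> Label -> Label
joinL l1 l2 = if l1 == High || l2 == High then High else Low
meetL l1 l2 = if l1 == Low  || l2 == Low  then Low  else High

data LabelOp = LMeet | LJoin | LCanFlowTo deriving (Eq, Show)

data Term
  = TLabel Label | TOp LabelOp Term Term | TLabeled Label Term
  | TTrue | TFalse | THole
  deriving (Eq, Show)

-- One step of evaluation for label operations; values are returned.
evalTerm :: Term -> Term
evalTerm (TOp LMeet      (TLabel l1) (TLabel l2)) = TLabel (meetL l1 l2)
evalTerm (TOp LJoin      (TLabel l1) (TLabel l2)) = TLabel (joinL l1 l2)
evalTerm (TOp LCanFlowTo (TLabel l1) (TLabel l2)) =
  if canFlowTo l1 l2 then TTrue else TFalse
evalTerm (TOp o (TLabel l1) t2) = TOp o (TLabel l1) (evalTerm t2)
evalTerm (TOp o t1 t2)          = TOp o (evalTerm t1) t2
evalTerm v                      = v

-- Erasure: values labeled too high become holes.
eraseTerm :: Label -> Term -> Term
eraseTerm l (TLabeled l1 t)
  | canFlowTo l1 l = TLabeled l1 (eraseTerm l t)
  | otherwise      = TLabeled l1 THole
eraseTerm l (TOp o x y) = TOp o (eraseTerm l x) (eraseTerm l y)
eraseTerm _ v           = v

-- Erasing before or after one evaluation step agrees.
simulates :: Label -> Term -> Bool
simulates l t =
  eraseTerm l (evalTerm (eraseTerm l t)) == eraseTerm l (evalTerm t)

smallTerms :: [Term]
smallTerms = leaves
  ++ [TOp o a b | o <- [LMeet, LJoin, LCanFlowTo], a <- leaves, b <- leaves]
  where
    leaves = [TLabel Low, TLabel High, TTrue, TFalse, THole]
             ++ [TLabeled l (TLabel Low) | l <- [Low, High]]
```

Such an enumeration is no substitute for the proof, but it catches gross mistakes in the evaluation or erasure rules quickly.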
It is a property similar to the one expressed in Figure~\ref{fig:simulation}, but for terms instead of programs. \begin{mcode} simulationsTerm :: Label l => l:l -> t:Term l -> { εTerm l (evalTerm (εTerm l t)) = εTerm l (evalTerm t) } \end{mcode} We prove the property by induction on the term input. The interesting case is where the term is labeled (@TLabeled l1 t@), since we need to case split on whether or not the term label @l1@ can flow into the erasing label @l@: \begin{mcode} simulationsTerm l (TLabeled l1 t) | l1 canFlowTo l = εTerm l (evalTerm (εTerm l (TLabeled l1 t))) ==. εTerm l (evalTerm (TLabeled l1 (εTerm l t))) ==. εTerm l (TLabeled l1 (εTerm l t)) ==. TLabeled l1 (εTerm l (εTerm l t)) ? εTermIdempotent l t ==. TLabeled l1 (εTerm l t) ==. εTerm l (TLabeled l1 t) ==. εTerm l (evalTerm (TLabeled l1 t)) *** QED | otherwise = εTerm l (evalTerm (εTerm l (TLabeled l1 t))) ==. εTerm l (evalTerm (TLabeled l1 THole)) ==. εTerm l (TLabeled l1 THole) ==. TLabeled l1 THole ==. εTerm l (TLabeled l1 t) ==. εTerm l (evalTerm (TLabeled l1 t)) *** QED \end{mcode} If @l1 canFlowTo l@, then erasure of the term simply erases the term @t@, and the proof proceeds by rewriting and idempotence of erasure, that is, erasing a term once is equivalent to erasing it twice. Otherwise, erasing a labeled term produces a hole, in which case the proof proceeds by rewriting. In the case of the label operation @TOp o t1 t2@, we split into three cases, on whether or not the terms @t1@ and @t2@ are labels. \begin{mcode} simulationsTerm l (TOp o (TLabel l1) (TLabel l2)) = () simulationsTerm l (TOp o (TLabel l1) t2) = simulationsTerm l t2 simulationsTerm l (TOp o t1 t2) = simulationsTerm l t1 \end{mcode} When both terms are labels, the proof proceeds by rewriting, which Liquid Haskell performs automatically. When a term (say @t1@) is not a label, evaluation performs one step of evaluation on this term, so the proof requires an inductive call (say @simulationsTerm l t1@). 
For the rest of the cases, both evaluation and erasure are the identity, so the proof is trivial. \begin{mcode} simulationsTerm l t = εTerm l (evalTerm (εTerm l t)) ==. εTerm l (evalTerm t) *** QED \end{mcode} \paragraph{Idempotence} For completeness, we state and prove idempotence of term erasure. \begin{mcode} εTermIdempotent :: Label l => l:l -> t:Term l -> { εTerm l (εTerm l t) = εTerm l t } εTermIdempotent l TTrue = () εTermIdempotent l TFalse = () εTermIdempotent l (TLabel _) = () εTermIdempotent l (TOp o t1 t2) = εTermIdempotent l t1 &&& εTermIdempotent l t2 εTermIdempotent l (TLabeled l1 t) | l1 canFlowTo l = εTermIdempotent l t | otherwise = εTermIdempotent l t εTermIdempotent l THole = () \end{mcode} The proof is performed by induction and case splitting following the definition of erasure. In the label operations case, we use the infix proof combinator, @&&&@, to combine the two inductive proof terms. \section{Lambda Calculus Terms}\label{sec:lambda} Next, we extend our terms to contain the standard lambda calculus primitives and discuss how the proof of simulations, and thus noninterference, is preserved. \paragraph{Lambda Calculus Terms} We extend the term definition to encode variables, application, lambda abstractions, and the fix combinator. \begin{mcode} type Var = Integer data Term l = ... | TVar Var | TApp (Term l) (Term l) | TLam Var (Term l) | TFix (Term l) \end{mcode} In our proof we also encode commonly used terms like if, unit, and booleans, but we omit them here for space. \paragraph{Values} We extend the @isValue@ predicate to mark which of the new terms are values. \begin{mcode} isValue :: Term l -> Bool isValue (TVar _) = True isValue (TLam _ _) = True isValue _ = False \end{mcode} \paragraph{Evaluation} The function @evalTerm@ is extended with standard call-by-name small-step evaluation. 
\begin{mcode} evalTerm :: Term l -> Term l evalTerm (TApp (TLam x t1) t2) = subst x t2 t1 evalTerm (TApp t1 t2) = TApp (evalTerm t1) t2 evalTerm (TFix (TLam x t)) = subst x (TFix (TLam x t)) t evalTerm (TFix t) = TFix (evalTerm t) subst :: Var -> Term l -> Term l -> Term l ... \end{mcode} @subst x tx t@ is naturally defined to recursively substitute @x@ with @tx@ in the term @t@. \paragraph{Erasure} Finally, we homomorphically extend term erasure over the new lambda calculus terms. \begin{mcode} εTerm :: Label l => l -> Term l -> Term l εTerm _ (TVar x) = TVar x εTerm l (TApp f t) = TApp (εTerm l f) (εTerm l t) εTerm l (TFix t) = TFix (εTerm l t) εTerm l (TLam x t) = TLam x (εTerm l t) \end{mcode} \paragraph{Preservation of Noninterference} The proof of noninterference is preserved since, intuitively, erasure on the new terms is a homomorphism. The only extension is to the @simulationsTerm@ theorem where, in the cases of $\beta$-reduction (\ie when a $\lambda$-term is applied by the application or fix operator), we need a theorem stating that erasure is a homomorphism with respect to substitution: \begin{mcode} homomorphicSubst :: Label l => l:l -> x:Var -> tx:Term l -> t:Term l -> { εTerm l (subst x tx t) == subst x (εTerm l tx) (εTerm l t)} \end{mcode} \section{Monadic Terms for the Label State}\label{sec:lio} Next, we extend programs with effectful computations that depend on a label. \begin{mcode} data Program l = PgHole | Pg {pLabel :: l, pTerm :: Term l} \end{mcode} For the program @Pg lc t@, the label @lc@ indicates the current observation label under which the term @t@ is executed. This current label tracks the label of all values in scope, similar to a program counter label. \paragraph{Terms} We extend the terms with standard monadic expressions and monadic expressions that manipulate the current label. The term @TGetLabel@ returns the current label. @TBind@ and @TReturn@ represent the standard monadic methods. @TLIO@ is the result of a monadic computation. 
Finally, we add the exception term @TException@ to represent the runtime error when a bind is performed on non-monadic terms. \begin{mcode} data Term l = ... | TGetLabel | TReturn (Term l) | TBind (Term l) (Term l) | TLIO (Term l) | TException \end{mcode} \paragraph{Values} We extend the values with the expressions that cannot be evaluated further, that is, the monadic value @TLIO@ and exceptions. \begin{mcode} isValue :: Term l -> Bool isValue (TLIO _) = True isValue TException = True isValue _ = False \end{mcode} \paragraph{Evaluation} Next, we extend the evaluation function to account for monadic expressions. \begin{mcode} eval :: Label l => Program l -> Program l eval (Pg lc (TReturn t)) = Pg lc (TLIO t) eval (Pg lc TGetLabel) = Pg lc (TReturn (TLabel lc)) eval (Pg lc (TBind t1 t2)) | Pg lc' (TLIO t1') <- #evalStar (Pg lc t1)# = Pg lc' (TApp t2 t1') | otherwise = Pg lc TException \end{mcode} The return expression returns the monadic value @TLIO@. The get-label expression returns the current label. To bind the expressions @t1@ and @t2@, first the expression @t1@ is fully evaluated into a monadic value; then the resulting value is applied to @t2@. If @t1@ does not evaluate to a monadic value, then evaluation returns an exception. The evaluation of the monadic bind uses the function @evalStar@, the transitive closure of the evaluation function, for full evaluation. If @evalStar@ reaches a value term, then it returns. Otherwise, it recursively calls itself after one step of evaluation. \begin{mcode} evalStar :: Label l => Program l -> Program l evalStar PgHole = PgHole evalStar (Pg lc t) | isValue t = (Pg lc t) evalStar p = #evalStar (eval p)# \end{mcode} \paragraph{Termination of the evaluation function.} Since programs might diverge, the evaluation function might diverge too. In fact, for the above mutually recursive definitions of @eval@ and @evalStar@, Liquid Haskell will raise a termination error at the underlined recursive calls. 
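These monadic rules can also be exercised in plain Haskell. The sketch below is illustrative rather than the paper's mechanized definition: refinements are dropped, a hypothetical two-point lattice replaces the @Label@ class, and a small substitution-based lambda fragment supplies the bind continuation.

```haskell
-- Plain-Haskell sketch (no refinements) of the monadic rules over a
-- two-point lattice: TBind fully evaluates its first argument with
-- evalStar, and TUnlabel raises the current label.
type Var = Int

data Label = Low | High deriving (Eq, Show)

joinL :: Label -> Label -> Label
joinL l1 l2 = if l1 == High || l2 == High then High else Low

data Term
  = TLabel Label | TLabeled Label Term
  | TVar Var | TLam Var Term | TApp Term Term
  | TGetLabel | TReturn Term | TBind Term Term | TLIO Term
  | TUnlabel Term | TException
  deriving (Eq, Show)

data Program = Pg Label Term deriving (Eq, Show)

isValue :: Term -> Bool
isValue (TLIO _)       = True
isValue TException     = True
isValue (TLabel _)     = True
isValue (TLabeled _ _) = True
isValue (TLam _ _)     = True
isValue _              = False

-- Substitution; sufficient for the closed example terms used here.
subst :: Var -> Term -> Term -> Term
subst x tx = go
  where
    go (TVar y)       = if y == x then tx else TVar y
    go (TLam y b)     = if y == x then TLam y b else TLam y (go b)
    go (TApp f a)     = TApp (go f) (go a)
    go (TLabeled l b) = TLabeled l (go b)
    go (TReturn b)    = TReturn (go b)
    go (TBind a b)    = TBind (go a) (go b)
    go (TLIO b)       = TLIO (go b)
    go (TUnlabel b)   = TUnlabel (go b)
    go t              = t

eval :: Program -> Program
eval (Pg lc (TReturn t))               = Pg lc (TLIO t)
eval (Pg lc TGetLabel)                 = Pg lc (TReturn (TLabel lc))
eval (Pg lc (TUnlabel (TLabeled l t))) = Pg (joinL l lc) (TReturn t)
eval (Pg lc (TUnlabel t))              = case eval (Pg lc t) of
                                           Pg lc' t' -> Pg lc' (TUnlabel t')
eval (Pg lc (TApp (TLam x b) a))       = Pg lc (subst x a b)
eval (Pg lc (TApp f a))                = case eval (Pg lc f) of
                                           Pg lc' f' -> Pg lc' (TApp f' a)
eval (Pg lc (TBind t1 t2)) =
  case evalStar (Pg lc t1) of
    Pg lc' (TLIO t1') -> Pg lc' (TApp t2 t1')
    _                 -> Pg lc TException
eval p = p

-- Transitive closure of eval: run until a value is reached.
evalStar :: Program -> Program
evalStar (Pg lc t) | isValue t = Pg lc t
evalStar p                     = evalStar (eval p)
```

Binding an unlabel of a @High@ value raises the current label of the final program to @High@, while binding a non-monadic term yields @TException@.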
We restrict the definition of the evaluation relation to only accept terminating programs, where the termination predicate is axiomatized to encode properties of termination. We define a logical function that returns the evaluation steps of a terminating program: \begin{mcode} evalSteps :: Program l -> Nat \end{mcode} We then declare that the evaluation function on terminating programs terminates, since the @evalSteps@ of the input program is decreasing. \begin{mcode} eval :: Label l => p:{Program l | terminates p} -> Program l / [evalSteps p, 1] evalStar :: Label l => p:{Program l | terminates p} -> Program l / [evalSteps p, 2] \end{mcode} Concretely, we use the termination metric annotations above to express that among the mutually recursive calls of @eval@ and @evalStar@, the termination metrics @[evalSteps p, 1]@ and @[evalSteps p, 2]@ should respectively decrease. That is, calls where @evalSteps@ decreases are always allowed; moreover, @evalStar@ can call @eval@ on inputs with unchanged @evalSteps@ (since @evalStar p@ calls @eval p@). Next, we axiomatize the relation between terminating programs and their evaluation steps. We assume that when a program terminates, its evaluation requires strictly fewer steps and also terminates. \begin{mcode} assume evalStepsAxiom :: Label l => p:{Program l | terminates p } -> { terminates (eval p) && evalSteps (eval p) < evalSteps p } evalStepsAxiom _ = () \end{mcode} We also assume that if a @TBind t1 t2@ terminates, then @t1@ terminates in fewer evaluation steps. \begin{mcode} assume bindAxiom :: Label l => lc:l -> t1:Term l -> t2:{Term l | terminates (Pg lc (TBind t1 t2))} -> {terminates (Pg lc t1) && evalSteps (Pg lc t1) < evalSteps (Pg lc (TBind t1 t2))} bindAxiom _ _ _ = () \end{mcode} We use these assumptions to remove the termination error in the evaluation definition. The standard library @const@ function is used to call these axioms while ignoring their result. 
The two mutually recursive calls are repaired as presented below: \begin{mcode} | Pg lc' (TLIO t') <- evalStar (Pg l t1) `const` bindAxiom l t1 t2 ... evalStar p = evalStar (eval p) `const` evalStepsAxiom p \end{mcode} \paragraph{Erasure} We adjust program erasure to account for the current label. \begin{mcode} ε :: Label l => l -> Program l -> Program l ε _ PgHole = PgHole ε l (Pg lc t) | lc canFlowTo l = Pg lc (εTerm l t) | otherwise = PgHole \end{mcode} To erase a program using label @l@, if the current label of the program can flow into @l@ then we just erase the term of the program; otherwise, we return a program hole. We homomorphically extend the term erasure definition to monadic terms. \subsection{Noninterference}\label{subsection:lio:noninterference} To prove noninterference, we need to update the program simulations proof for 1) programs whose label cannot flow into the erasing label and 2) programs with monadic terms. The case when the program label does not flow into the erasing label is easy, since program erasure leads to a program hole, which evaluates to itself. \begin{mcode} simulations l (Pg lc t) | not (lc canFlowTo l) = ε l (eval (ε l (Pg lc t))) ==. ε l (eval PgHole) ==. ε l PgHole ==. PgHole ==. ε l (Pg lc (evalTerm t)) ==. ε l (eval (Pg lc t)) *** QED \end{mcode} We extend the simulations proof for the case of the monadic operations. The get-label and return cases are trivial and proceed by rewriting and idempotence. The interesting case is to prove that simulations are preserved in the case of the bind: \begin{mcode} ε l (eval (ε l (Pg lc (TBind t1 t2)))) ==. ... ==. ε l (eval (Pg lc (TBind t1 t2))) \end{mcode} The evaluation of @TBind@ splits cases on whether or not @t1@ evaluates into a monadic value and then returns the label of the evaluated @t1@. Thus, our proof splits cases on the result of the evaluation of @t1@ and on whether the resulting label can flow into @l@. 
The interesting case appears when both @t1@ and its erasure evaluate into a monadic value with labels that can flow into @l@. Then, the proof proceeds by rewriting until we reach a point where the label and term resulting from the evaluation of @t1@ and of its erasure need to be equated: \begin{mcode} simulations l (Pg lc (TBind t1 t2)) | Pg εlc' (TLIO εt') <- evalStar (Pg lc (εTerm l t1)) , Pg lc' (TLIO t' ) <- evalStar (Pg lc t1) , εlc' canFlowTo l, lc' canFlowTo l = let εt1 = εTerm l t1 in let εt2 = εTerm l t2 in ε l (eval (ε l (Pg lc (TBind t1 t2)))) ==. ε l (eval (Pg lc (εTerm l (TBind t1 t2)))) ==. ε l (eval (Pg lc (TBind εt1 εt2))) ==. ε l (Pg εlc' (TApp (εTerm l t2) εt')) ==. Pg εlc' (εTerm l (TApp (εTerm l t2) εt')) ==. Pg εlc' (TApp (εTerm l εt2) (εTerm l εt')) ==. Pg εlc' (TApp εt2 (εTerm l εt')) ? εTermIdempotent l t2 ==. Pg lc' (TApp εt2 (εTerm l t')) \end{mcode} To prove the last equation, we make a subproof that equates the label and term resulting from the evaluation of @t1@ with those resulting from its erasure, using the theory of data types: \begin{mcode} ? ( Pg lc' (TLIO (εTerm l t')) ==. Pg lc' (εTerm l (TLIO t')) ==. ε l (Pg lc' (TLIO t')) ==. ε l (evalStar (Pg lc t1)) ? bindAxiom lc t1 t2 ==. ε l (evalStar (ε l (Pg lc t1))) ? simulationsStar l (Pg lc t1) ==. ε l (evalStar (Pg lc (εTerm l t1))) ==. ε l (Pg εlc' (TLIO εt')) ==. Pg εlc' (εTerm l (TLIO εt')) ==. Pg εlc' (TLIO (εTerm l εt')) *** QED ) \end{mcode} For example, since @Pg lc' _ == Pg εlc' _@, the solver concludes that @lc' == εlc'@. This subproof uses the simulation property of @evalStar@, as discussed later, as well as the @bindAxiom@ to prove that the mutually recursive proof (between @simulations@ and @simulationsStar@) is terminating and thus well defined. After the intermediate labels and expressions are equated, the proof simply concludes by rewriting. \begin{mcode} ==. Pg lc' (TApp εt2 (εTerm l t')) ==. Pg lc' (εTerm l (TApp t2 t')) ==. ε l (Pg lc'(TApp t2 t')) ==. 
ε l (eval (Pg lc (TBind t1 t2))) *** QED \end{mcode} The simulation theorem on @evalStar@ states the simulation property between @evalStar@ and @evalStar . ε l@. \begin{mcode} simulationsStar :: Label l => l:l -> p:{Program l | terminates p } -> { ε l (evalStar (ε l p)) = ε l (evalStar p) } \end{mcode} Much like the definitions of @eval@ and @evalStar@, the proofs @simulations@ and @simulationsStar@ are mutually recursive and rely on @evalSteps@ to prove termination. Moreover, the proof relies on the monotonicity of labels during evaluation: \begin{mcode} monotonicEvalStar :: Label l => p:{Program l | isPg p } -> { pLabel p canFlowTo pLabel (evalStar p) } \end{mcode} The other cases of the bind proof are easier. If neither evaluation produces a monadic value (@TLIO@), then the evaluation of bind throws an exception that is propagated via erasure. Since erasure preserves the structure of the terms and evaluation is structurally defined, we prove that it is impossible for exactly one of them to be a monadic value. If neither of the evaluated labels flows to @l@, then both sides return a hole, while due to simulations on @evalStar@ the returned labels must be the same. \section{Liquid Haskell for Metatheory}\label{sec:liquidhaskell-discussion} Liquid Haskell was originally developed to support lightweight program verification (\eg out-of-bounds indexing). The formalization of \lweb in Liquid Haskell, presented in~\Cref{sec:formal} and~\Cref{sec:formal-db}, was made possible by recent extensions to support general theorem proving~\cite{refinement-reflection}. Our proof of noninterference was a challenging test of this new support, and constitutes the first advanced metatheoretical result mechanized in Liquid Haskell.\footnote{\url{https://github.com/plum-umd/lmonad-meta}} The trusted computing base (TCB) of any Liquid Haskell proof relies on the correct implementation of several parts. 
In particular, we trust that \begin{enumerate} \item the GHC compiler correctly desugars the Haskell code to the core language of Liquid Haskell, \item Liquid Haskell correctly generates the verification conditions for the core language, and \item the SMT solver correctly discharges the verification conditions. \end{enumerate} We worked on the noninterference proof, on and off, for 10 months. The proof consists of 5,447 lines of code and requires about 5 hours to be checked. For this proof in particular, we (naturally) trust all of our semantic definitions, and also two explicit assumptions, notably the axiomatization of termination and modeling of predicates. These were discussed respectively in \cref{subsub:simulation} and \cref{subsec:database:pure}. Carrying out the proof had a clear benefit: as mentioned in \cref{subsec:db:monadic}, we uncovered two bugs in our implementation. In both cases, \lweb was examining sensitive data when carrying out a security check, but failed to raise the current label with the label of that data. Failure of the mechanized proof to go through exposed these bugs. The rest of this section summarizes what we view as the (current) advantages and disadvantages of using Liquid Haskell as a theorem prover compared to other alternatives (\eg Coq and F-star~\citep{fstar}), expanding on a prior assessment~\cite{a-tale}. \subsection{Advantages} As a theorem proving environment, Liquid Haskell offers several advantages. \paragraph{General purpose programming language.} The Liquid Haskell-based formal development is, in essence, a Haskell program. All formal definitions (presented in \Cref{sec:formal} and~\Cref{sec:formal-db}) and proof terms (\eg illustrated in \cref{subsec:formal:noninterference}) are Haskell code. Refinement types define lemmas and theorems, referring to these definitions. In fact, some formal definitions (\eg the @Label@ class definition) were taken directly from the implementation. 
The first author of the paper and main developer of the proof is a Haskell programmer, and thus did not need to learn a new programming language (\eg Coq) to develop the formal proof. During development we used Haskell's existing development tools, including the build system, test frameworks, and deployment support (\eg Travis integration). \paragraph{SMT automation} Liquid Haskell, like Dafny~\citep{Leino:2010} and F-star~\citep{fstar}, uses an SMT solver to automate parts of the proof, especially the ones that make use of boolean reasoning, reducing the need for manual case splitting. For example, proving simulation for row updates normally proceeds by case splitting on the relative can-flow-to relation between four labels. The SMT solver automates this case splitting. \paragraph{Semantic termination checking} To prove termination of a recursive function in Liquid Haskell, it suffices to declare a non-negative integer value that decreases at each recursive call. The \lweb proof was greatly simplified by the semantic termination checker. In a previous Coq LIO proof~\cite{stefan:2017:flexible}, the evaluation relation apparently requires an explicit \emph{fuel} argument to count the number of evaluation steps, since the evaluation function (the equivalent of that in \cref{fig:label:calculus}) does not necessarily terminate. In our proof, termination of evaluation was axiomatized (per~\Cref{subsub:simulation}), which in practice meant that the evaluation steps were counted only in the logic and not in the definition of the evaluation function. \paragraph{Intrinsic \emph{and} extrinsic verification} The Liquid Haskell proving style allows us to conveniently switch between (manual) extrinsic and (SMT-automated) intrinsic verification. Most of the \lweb proof is extrinsic, \ie functions are defined to state and prove theorems about the model. In a few cases, intrinsic specifications are used to ease the proof.
For instance, the refinement type specification of @εTerm@, as described in~\cref{subsec:db:noninterference}, intrinsically specifies that erasure of @isDBValue@ terms returns terms that also satisfy the @isDBValue@ predicate. This property is automatically proven by the SMT solver without cluttering the executable portion of the definition with proof terms. \subsection{Disadvantages} On the other hand, Liquid Haskell has room to improve as a theorem proving environment, especially compared to advanced theorem provers like Coq. \paragraph{Unpredictable verification time} The first and main disadvantage is the unpredictability of verification times, which stems from the invocation of an SMT solver. One issue we ran across during the development of our proof is that internal transformations performed by @ghc@ can cause massive blowups. This is because Liquid Haskell analyzes Haskell's intermediate code (@CoreSyn@), not the original source. As an example of the problem, using @|x,y@ instead of the logical @| x && y@ in function guards leads to much slower verification times. While the two alternatives have exactly the same semantics, the first leads to exponential expansion of the intermediate code. \paragraph{Lack of tactics} Liquid Haskell currently provides no support for tactics, which could otherwise simplify proof scripts. For example, we often had to systematically invoke label laws (\cref{fig:formalism:label}) in our proofs, whereas a proof tactic to do so automatically could greatly simplify these cases. \paragraph{General purpose programming language} Liquid Haskell, developed for lightweight verification of Haskell programs, lacks various features found in verification-specific systems such as Coq. For example, Liquid Haskell provides only experimental support for curried, higher-order functions, which means that one has to inline higher-order functions like @map@, @fold@, and @lookup@. There is also no interactive proof environment, and no (substantial) proof libraries.
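To make the guard pitfall from the verification-time discussion above concrete, the following sketch (function names are ours, purely illustrative, and not taken from the \lweb proof) shows the two guard styles. Both definitions are semantically identical, but GHC desugars the comma-separated pattern guard into nested case expressions in the @CoreSyn@ code that Liquid Haskell analyzes, which can slow verification considerably:

```haskell
-- Hypothetical example, not from the LWeb proof. Both functions have
-- identical semantics; the difference is only in how GHC desugars them.

-- Comma-separated pattern guards: desugared into nested case
-- expressions in Core, which can blow up verification time.
inBounds1 :: Int -> Int -> Bool
inBounds1 i n
  | 0 <= i, i < n = True
  | otherwise     = False

-- A single boolean guard: desugars to one case on the conjunction.
inBounds2 :: Int -> Int -> Bool
inBounds2 i n
  | 0 <= i && i < n = True
  | otherwise       = False
```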
\bigskip In sum, our \lweb proof shows that Liquid Haskell can be used for sophisticated theorem proving. We are optimistic that its current disadvantages can be addressed in future work. \section{Noninterference}\label{sec:non-interference} In this section we use a simplistic program definition to prove (termination-insensitive) noninterference. Later we extend programs to be more realistic and show that the noninterference proof is preserved. \subsection{Programs \& Labels} We initially define the Haskell datatype @Program@ to be simply a hole; it will later be extended to contain labels of type @l@. \begin{mcode} data Program l = PgHole deriving Eq \end{mcode} Evaluation of a program just returns the same program. \begin{code} eval :: Program l -> Program l eval PgHole = PgHole \end{code} Labels are abstractly represented by the Haskell type class @Label@. For now we just define the ordering method @(canFlowTo)@; in \S~\ref{sec:labels} we extend the class to represent a lattice. \begin{mcode} class Label l where (canFlowTo) :: l -> l -> Bool \end{mcode} \subsubsection{Erasure \& Noninterference} We define the erasure function @ε@ that erases all the sensitive data of the program (\ie the data classified above @l@). \begin{mcode} ε :: Label l => l -> Program l -> Program l ε l PgHole = PgHole \end{mcode} Noninterference states that for each observation label @l@ and two terminating programs @p1@ and @p2@, if the two programs are equal at observation level @l@, then the results of their evaluation are also equal as seen by an attacker at observation level @l@. We express noninterference as the following refinement type. \begin{mcode} nonInterference :: Label l => l:l -> p1:{Program l | terminates p1} -> p2:{Program l | terminates p2} -> {ε l p1 == ε l p2} -> p1':{Program l | eval p1 == p1' } -> p2':{Program l | eval p2 == p2' } -> {ε l p1' == ε l p2'} \end{mcode} For some refinement @p@ we write @{p}@ to express the unit Haskell type refined with @p@, \ie @{v:() | p}@. That is, @nonInterference@ is a Haskell function that always returns a unit value, yet its type is refined to express the noninterference theorem. Later we define the body of @nonInterference@ to constitute a proof that the theorem holds. The predicate @terminates@ is a logical predicate (\ie not a Haskell function) that axiomatizes program termination and is later (\S~\ref{sec:lio}) used to justify well-formedness of our proof. We use Liquid Haskell's @measure@ keyword to define the termination logical predicate. \begin{mcode} measure terminates :: Program l -> Bool \end{mcode} \subsection{Simulations}\label{subsec:non-interference:simulations} To prove noninterference we use a simulation property between program evaluation and the composition of erasure and program evaluation, as depicted in Figure~\ref{fig:simulation}. Intuitively, first erasing the sensitive data and then applying @ε l . eval@ yields the same result as applying @eval@ and then erasing the sensitive data. \begin{figure} \begin{tikzpicture} \matrix (m) [matrix of math nodes,row sep=3em,column sep=10em,minimum width=2em] { \texttt{p} & \texttt{p'} \\ \epsilon \texttt{ l p} & \epsilon \texttt{ l p'} \\}; \path[-stealth] (m-1-1) edge node [left] {$\epsilon \texttt{ l}$} (m-2-1) edge node [below] {$\texttt{eval}$} (m-1-2) (m-2-1.east|-m-2-2) edge node [below] {$\epsilon \texttt{ l . eval}$} (m-2-2) (m-1-2) edge node [right] {$\epsilon \texttt{ l}$} (m-2-2); \end{tikzpicture} \caption{Simulation between $\texttt{eval}$ and $\epsilon\texttt{ l . eval}$.} \label{fig:simulation} \end{figure} We express the simulation property using a refinement type stating that for each label @l@ and terminating program @p@ the simulation equality holds.
\begin{mcode} simulations :: Label l => l:l -> p:{Program l | terminates p} -> {ε l (eval (ε l p)) = ε l (eval p)} \end{mcode} Since erasure and evaluation are defined to be identities, the proof of simulations is straightforward. The Haskell definition of @simulations@ uses Liquid Haskell's equational reasoning to simply equate the left- and right-hand sides of the equality. \begin{mcode} simulations l PgHole = ε l (eval (ε l PgHole)) ==. ε l (eval PgHole) *** QED \end{mcode} The infix operator @(==.)@ returns its second argument, while the ``postfix'' @*** QED@ casts its argument into a unit. \begin{mcode} (==.) :: a -> a -> a _ ==. x = x data QED = QED (***) :: a -> QED -> () _ *** QED = () \end{mcode} Using these operators, the body of @simulations@ is defined to resemble a mathematical proof that equates @ε l (eval (ε l p))@ with @ε l (eval p)@ simply by unfolding the definition of @ε@ on the inputs @l@ and @p@. Since Liquid Haskell by default checks that all functions are terminating and defined for all inputs, @simulations@ is a total function and can thus soundly be seen as a mathematical proof. \subsection{Proof of Noninterference} We prove noninterference by defining the body of the Haskell function @nonInterference@ using equational reasoning and appropriately invoking the @simulations@ theorem. \begin{mcode} nonInterference l p1 p2 equivProof p1' p2' = ε l p1' ==. ε l (eval p1) ==. ε l (eval (ε l p1)) ? simulations l p1 ==. ε l (eval (ε l p2)) ? equivProof ==. ε l (eval p2) ? simulations l p2 ==. ε l p2' *** QED \end{mcode} The explanation operator @(?)@ is used to justify the equational steps by invoking existing theorems. \begin{mcode} (?) :: a -> Proof -> a x ?
_ = x \end{mcode} In the rest of this development, we extend the program definition with basic lambda terms (\S~\ref{sec:lambda}), label operations (\S~\ref{sec:labels}), basic monadic terms (\S~\ref{sec:lio}), monadic terms that allow labeling values (\S~\ref{sec:labeling}), restriction of data access (\S~\ref{sec:clearance}), and database operators (\S~\ref{sec:db}). For each such extension we accordingly extend the definitions of program evaluation and erasure and prove that the simulation property, and thus noninterference, is preserved. \section{Overview}\label{sec:overview} \begin{wrapfigure}{r}{0.3\textwidth} \vspace{-1.8cm} \begin{tikzpicture}[ mynode/.style={ draw, text width=3cm, minimum height=0.7cm, align=center }, ] \node[mynode,fill=usercolor] (user) {\textcolor{white}{\textbf{Programmer}}}; \node[below=1cm of user] (lwebname) {$\qquad\qquad\qquad\qquad$ \textbf{\lweb}}; \node[mynode,fill=liocolor,below=0.2cm of lwebname] (lio) {\textcolor{white}{\textbf{\lmonad}}}; \node[mynode,fill=yesodcolor,below=1cm of lio] (yesod) {\textcolor{white}{\textbf{\yesod}}}; \node [draw=black,minimum width=4cm, yshift=0.5cm, fit={ (lio) (yesod) (lwebname)}, below=1.5cm of user] (lweb){}; \node[mynode,fill=dbcolor,below=1cm of lweb] (db) {\textcolor{white}{\textbf{DB}}}; \draw[<->,very thick] (user) -- node[left] {DB Query} (lweb); \draw[<->,very thick] (lio) -- node[left] {Label Check} (yesod); \draw[<->,very thick] (db) -- node[left] {DB Access} (lweb); \end{tikzpicture} \caption{Structure of \lweb.} \label{fig:structure} \vspace{1.0cm} \end{wrapfigure} The architecture of \lweb is shown in~\cref{fig:structure}. Database queries/updates triggered by user interactions are processed by the \lmonad component, which constitutes the core of \lio and checks that label-based security policies are not violated. Then, the queries/updates are handled via \yesod, where the results continue to be subject to policy enforcement by \lmonad.
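To preview the label machinery introduced next, the two-point @Public@/@Secret@ lattice used as a running example can be sketched as follows. This is a self-contained sketch of our own: the class mirrors the @Label@ interface of \cref{fig:label}, but with prefix method names chosen for illustration.

```haskell
-- A sketch of the two-point security lattice. The class mirrors the
-- paper's Label interface; method spellings here are illustrative.
data SecLabel = Public | Secret deriving (Eq, Show)

class Eq a => Label a where
  bot       :: a              -- least protected label
  join      :: a -> a -> a    -- least upper bound
  meet      :: a -> a -> a    -- greatest lower bound
  canFlowTo :: a -> a -> Bool -- partial order on labels

instance Label SecLabel where
  bot = Public
  join a b = if a == Secret || b == Secret then Secret else Public
  meet a b = if a == Public || b == Public then Public else Secret
  -- Public flows anywhere; Secret flows only to Secret.
  canFlowTo a b = a == b || (a == Public && b == Secret)
```

With this instance, @canFlowTo Public Secret@ holds while @canFlowTo Secret Public@ does not, capturing that secret data may not be released to public observers.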
\begin{wrapfigure}{r}{.3\textwidth} \vspace{-1.6cm} \centering \begin{mcode} class Eq a => Label a where bot :: a (join) :: a -> a -> a (meet) :: a -> a -> a (canFlowTo) :: a -> a -> Bool \end{mcode} \caption{The \texttt{Label} class} \label{fig:label} \end{wrapfigure} \subsection{Label-Based Information Flow Control with \lio} \label{sec:lio-intro} We start by presenting \lio~\cite{lio} and how it is used to enforce noninterference for label-based information flow policies. \paragraph{Labels and noninterference} As a trivial security label, consider a datatype with constructors @Secret@ and @Public@. Protected data is assigned a label, and an IFC system ensures that @Secret@-labeled data can only be learned by those with @Secret@-label privilege or greater. The label system can be generalized to any lattice~\cite{denning} where IFC is checked using the lattice's partial order relation @canFlowTo@. Such a system enjoys \emph{noninterference}~\cite{goguen} if an adversary with privileges at label @l1@ can learn nothing about data labeled with @l2@ where @l2@ $\not\sqsubseteq$ @l1@. In~\cref{fig:label} we define the label interface as the type class @Label@, which provides the bottom (least protected) label, the least upper bound (join, $\sqcup$) of two labels, the greatest lower bound (meet, $\sqcap$), and a relation checking whether one label can flow to ($\sqsubseteq$) another; this relation defines a partial order. Instantiating this type class for @Public@ and @Secret@ would set @Public@ as the bottom label and @Public@ $\sqsubset$ @Secret@ (with join and meet operations to match). \paragraph{The LIO monad} \lio enforces IFC on labeled data using dynamic checks. The type @LIO l a@ denotes a monadic computation that returns a value of type @a@ at label @l@. \lio provides two methods to label and unlabel data.
\begin{code} label :: (Label l) => l -> a -> LIO l (Labeled l a) unlabel :: (Label l) => Labeled l a -> LIO l a \end{code} The method @label l v@ takes as input a label and some data and returns a @Labeled@ value, \ie the data @v@ marked with the label @l@. The method @unlabel v@ takes as input a labeled value and returns just its data. The \lio monad maintains an ambient label---the \emph{current label} @lc@---that represents the label of the current computation. As such, labeling and unlabeling a value affects @lc@. In particular, @unlabel v@ updates @lc@ by joining it to @v@'s label, while @label l v@ is only permitted if @lc@ $\sqsubseteq$ @l@, \ie the current label can flow to @l@. If this check fails, \lio raises an exception. As an example, on the left, a computation with current label @Public@ labels data @"a secret"@ as @Secret@, preserving the same current label, and then unlabels the data, thus raising the current label to @Secret@. On the right, a computation with current label @Secret@ attempts to label data as @Public@, which fails, since the computation is already tainted with (\ie dependent on) secret data. \begin{flushleft} \begin{tabular}{lcl} \begin{mcode} -- lc := Public v <- label Secret "a secret" -- ok: Public canFlowTo Secret and lc := Public x <- unlabel v -- lc := Secret \end{mcode} &\quad\quad\quad& \begin{mcode} -- lc := Secret v <- label Public "public" -- exception: Secret cannotFlowTo Public $\quad$ \end{mcode} \end{tabular} \end{flushleft} \lio also supports labeled mutable references, and a scoping mechanism for temporarily (but safely) raising the current label until a computation completes, and then restoring it. In addition, \lio has a \emph{clearance} label that serves as an upper bound for the current label, and thus can help identify potentially unsafe computations sooner. A normal Haskell program can run an \lio computation via @runLIO@, whose type is as follows.
\begin{mcode} runLIO :: (Label l) => LIO l a -> IO a \end{mcode} Evaluating @runLIO m@ initializes the current label to $\bot$ and computes @m@. The returned result is an @IO@ computation, since \lio allows @IO@ interactions, \eg with a file system. If any security checks fail, @runLIO@ throws an exception. \subsection{\yesod} \label{sec:yesod} \yesod~\cite{yesod} is a mature framework for developing type-safe and high-performance web applications in Haskell. In a nutshell, \lweb adds \lio-style support to \yesod-based web applications, with a focus on supporting database security policies. \begin{figure} \centering \begin{minipage}{.7\textwidth} \begin{mcode} *Friends* ^<bot,Const Admin>^ *user1 Text* ^<bot,Const Admin>^ *user2 Text* ^<bot,Const Admin>^ *date Text* ^<Field User1 meet Field User2,Const Admin>^ \end{mcode} \end{minipage} \caption{Example \lweb database table definition. The \textcolor{yesodcolor}{green} is \texttt{Yesod} syntax and the \textcolor{liocolor}{blue} is the \textit{LWeb} policy.} \label{fig:friendstable} \end{figure} The \textcolor{yesodcolor}{green} part of \cref{fig:friendstable} uses \yesod's domain-specific language (DSL) to define the table @Friends@. The table has three @Text@\footnote{\texttt{Text} is an efficient Haskell string type.} fields corresponding to two users (@user1@ and @user2@) and the date of their friendship. A primary key field with type @FriendsId@ is also automatically added. In~\cref{subsec:lweb} we explain how the \textcolor{liocolor}{blue} part of the definition is used for policy enforcement. \yesod uses Template Haskell~\cite{Sheard:2002:TMH:636517.636528} to generate, at compile time, a database schema from such table definitions. These are the Haskell types that \yesod generates for the @Friends@ table.
\begin{code} data FriendsId = FriendsId Int data Friends = Friends { friendsUser1 :: Text, friendsUser2 :: Text , friendsDate :: Text } \end{code} Note that though each row has a key of type @FriendsId@, it is elided from the @Friends@ data record. Each generated key type is a member of the @Key@ type family; in this case @Key Friends@ is a type alias for @FriendsId@. \yesod provides an API to define and run queries. Here is a simplified version of this API. \begin{mcode} runDB :: YesodDB a -> Handler a get :: Key v -> YesodDB (Maybe v) insert :: v -> YesodDB (Key v) delete :: Key v -> YesodDB () update :: Key v -> [Update v] -> YesodDB () \end{mcode} The type alias @YesodDB a@ denotes the monadic type of a computation that queries (or updates) the database. The function @runDB@ runs the query argument on the database. @Handler@ is \yesod's underlying monad used to respond to HTTP requests. The functions @get@, @insert@, @delete@, and @update@ generate query computations. For example, we can query the database for the date of a specific friendship using @get@. \begin{code} getFriendshipDate :: FriendsId -> Handler (Maybe Text) getFriendshipDate friendId = do r <- runDB (get friendId) return (friendsDate <$> r) \end{code} \yesod also supports more sophisticated SQL-style queries via an interface called Esqueleto~\cite{esqueleto}. Such queries may include inner and outer joins, conditionals, and filtering. \subsection{\lweb: \yesod with \lio}\label{subsec:lweb} \lweb extends \yesod with \lio-style IFC enforcement. The implementation has two parts. As a first step, we generalize \lio to support an arbitrary underlying monad by making it a \emph{monad transformer}, applying it to \yesod's core monad. Then we extend \yesod operations to incorporate label-based policies that work with this extended monad. \paragraph{\lmonad: LIO as a monad transformer} \lmonad generalizes the underlying @IO@ monad of \lio to \textit{any} monad @m@. 
In particular, \lmonad is a monad transformer @LMonadT l m@ that adds the IFC operations to the underlying monad @m@, rather than making it specific to the @IO@ monad. \begin{mcode} label :: (Label l, Monad m) => l -> a -> LMonadT l m (Labeled l a) unlabel :: (Label l, Monad m) => Labeled l a -> LMonadT l m a runLMonad :: (Label l, Monad m) => LMonadT l m a -> m a \end{mcode} @LMonadT@ is implemented as a state monad transformer that tracks the current label. Computations that run in the underlying @m@ monad cannot be executed directly due to Haskell's type system. Instead, safe variants that enforce IFC must be written so that they can be executed in @LMonadT l m@. Thus, the \lio monad is an instantiation of the monad variable @m@ with @IO@: @LIO l = LMonadT l IO@. For \lweb we instantiate @LMonadT@ with \yesod's @Handler@ monad. \begin{mcode} type LHandler l a = LMonadT l Handler a \end{mcode} Doing this adds information flow checking to \yesod applications, but it still remains to define policies to be checked. Thus we extend \yesod to permit defining label-based policies on database schemas, and to enforce those policies during query processing. \paragraph{Label-annotated database schemas} \lweb labels are based on DC labels~\cite{stefan:dclabels}, which have the form @<l,r>@, where the left protects the \emph{confidentiality} and the right protects the \emph{integrity} of the labeled value. Integrity lattices are dual to confidentiality lattices. They track who can influence the construction of a value. Database policies are written as label annotations @p@ on table definitions, following this grammar: \begin{mcode} p := <l, l> l := Const c | Field f | Id | top | bot | l meet l | l join l \end{mcode} Here, @c@ is the name of a data constructor and @f@ is a field name. A database policy consists of a single \emph{table label} and one label for each field in the database. We explain these by example. 
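To make the grammar concrete, it can be transcribed as a Haskell datatype. This is a sketch of our own; the constructor names are illustrative and need not match \lweb's internal representation.

```haskell
-- Sketch of the policy-annotation grammar p := <l, l>. Constructor
-- names are illustrative, not LWeb's actual representation.
data LabelExp
  = Const String            -- Const c: a fixed principal
  | Field String            -- Field f: principal read from field f of the row
  | Id                      -- Id: the row's key
  | Top
  | Bot
  | Meet LabelExp LabelExp  -- l meet l
  | Join LabelExp LabelExp  -- l join l
  deriving (Eq, Show)

-- A policy <l, r> pairs a confidentiality label l with an integrity label r.
data Policy = Policy { confid :: LabelExp, integ :: LabelExp }
  deriving (Eq, Show)

-- Example: the date field's annotation from the Friends table,
-- <Field user1 meet Field user2, Const Admin>.
datePolicy :: Policy
datePolicy = Policy (Meet (Field "user1") (Field "user2")) (Const "Admin")
```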
The security labels of the @Friends@ table are given by the \textcolor{liocolor}{blue} part of \cref{fig:friendstable}. The first line's label @*Friends* ^<bot,Const Admin>^@ defines the table label, which protects the \emph{length} of the table. This example states that anyone can learn the length of the table (\eg by querying it), but only the administrator can change the length (\ie by adding or removing entries). \lweb requires the table label to be constant, \ie it may not depend on run-time entries of the table. Allowing it to do so would significantly complicate enforcing noninterference. The last line @*date Text* ^<Field User1 meet Field User2,Const Admin>^@ specifies that either of the users listed in the first two fields can read the @date@ field, but only the administrator can write it. This label is \emph{dynamic}, since the values of the @user1@ and @user2@ fields may differ from row to row. We call fields that are referenced in another field's label annotation, like @user1@ and @user2@, \emph{dependency fields}. When a field's label is not given explicitly, the label @<bot,top>@ is assumed. To simplify security enforcement, \lweb requires the label of a dependency field to be constant and flow into (be bounded by) the table label. For @user1@ and @user2@ this holds since their labels match the table's label. The invariants about the table label and the dependency field labels are enforced by a compile-time check when processing the table's policy annotations. Note that @Labeled@ values may not be stored directly in the database, as there is no way to express such a type directly in a source program.
Per \cref{fig:friendstable}, field types like @Text@, @Bool@, and @Int@ are allowed, and their effective label is indicated by annotation, rather than directly expressed in the type.\footnote{The formalism encodes all of these invariants with refinement types in the database definition.} \paragraph{Policy enforcement} \lweb enforces the table-declared policies by providing wrappers around each \yesod database API function. \begin{mcode} runDB :: Label l => LWebDB l a -> LHandler l a get :: Label l => Key v -> LWebDB l (Maybe v) insert :: Label l => v -> LWebDB l (Key v) delete :: Label l => Key v -> LWebDB l () update :: Label l => Key v -> [Update v] -> LWebDB l () \end{mcode} Now the queries are modified to return @LWebDB@ computations that are evaluated (using @runDB@) inside the @LHandler@ monad. For each query operation, \lweb wraps the underlying database query with information flow control checks that enforce the defined policies. For instance, if @x@ has type @FriendsId@, then @r <- runDB $\$$ get x@ joins the current label with the label of the selected row, here @user1 meet user2@. \lweb also extends IFC checking to advanced SQL queries expressed in Esqueleto~\cite{esqueleto}. As explained in~\cref{sec:impl}, \lweb uses a DSL, embedded as an @lsql@ quasiquotation, to wrap these queries with IFC checks. For example, the following query joins the @Friends@ table with a @User@ table: \begin{mcode} rs <- runDB [lsql|select $\star$ from Friends inner join User on Friends.user1 == User.id|] \end{mcode} \section{Related Work}\label{sec:related} \lweb provides end-to-end information flow control (IFC) security for webapps. Its design aims to provide highly expressive policies and queries in a way that does not compromise security, and adds little overhead to transaction processing, in both space and time. This section compares \lweb to prior work, arguing that it occupies a unique, and favorable, spot in the design space.
\paragraph*{Information flow control} \lweb is part of a long line of work on using lattice-ordered, label-based IFC to enforce security policies in software~\cite{lapadula1973,denning,sabelfeld:survey}. Enforcement can occur either \emph{statically} at compile-time, \eg as part of type checking~\cite{jif,flowcaml,Lourenco:2015:DIF:2676726.2676994,Lourenco:2013:IFA:3092395.3092410} or a static analysis~\cite{Hammer:2009:FCO:1667545.1667547,JohnsonWMC2015,Arzt:2014:FPC:2666356.2594299}, or \emph{dynamically} at run-time, \eg via source-to-source rewriting~\cite{Chudnov:2015:IIF:2810103.2813684,hedin15jsflow} or library/run-time support~\cite{Roy:2009:LPF:1542476.1542484,Tromer:2016:DII:2897845.2897888,lio}. Dynamic approaches often work by rewriting a program to insert the needed checks and/or by relying on support from the hardware, operating system, or run-time. Closely related to IFC, \emph{taint tracking} controls \emph{data flows} through the program, rather than overall influence (which includes effects on \emph{control flow}, \ie \emph{implicit} flows). Taint tracking can avoid the false positives of IFC, which often overapproximates control channels, but will also miss security violations~\cite{king08implicit}. \lweb builds on the \lio framework~\cite{lio}, which is a dynamic approach to enforcing IFC that takes advantage of Haskell's static types to help localize checks to I/O boundaries. \lio's \emph{current label} and \emph{clearance label} draw inspiration from work on Mandatory Access Control (MAC) operating systems~\cite{lapadula1973}, including Asbestos~\cite{Efstathopoulos:2005:LEP:1095810.1095813}, HiStar~\cite{Zeldovich:2006:MIF:1267308.1267327}, and Flume~\cite{Krohn:2007:IFC:1294261.1294293}. The baseline \lio approach has been extended in several interesting ways~\cite{Russo:2015:FPT:2784731.2784756,Buiras:2015:HMS:2784731.2784758, Waye:2017:CSI:3133956.3134036, Buiras13}, including to other languages~\cite{Heule:2015:IIR:2976888.2976892}. 
The proof of security in the original \lio (without use of a database) has been partially mechanized in Coq~\cite{stefan:2017:flexible}, while the derivative MAC library~\cite{Russo:2015:FPT:2784731.2784756} has been mechanized in Agda~\cite{Vassena:2016:FIC:2993600.2993608}. The MAC mechanization considers concurrency, which ours does not. Ours is the first mechanization to use an SMT-based verifier (Liquid Haskell). \paragraph*{IFC for database-using web applications} Several prior works apply IFC to web applications. FlowWatcher~\cite{Muthukumaran:2015:FDA:2810103.2813639} enforces information flow policies within a web proxy, which provides the benefit that applications need not be retrofitted, but limits the granularity of policies it can enforce. SeLINQ~\cite{Schoepe:2014:STI:2628136.2628151} is a static IFC system for F\# programs that access a database via language-integrated queries (LINQ). SIF~\cite{Chong:2007:SEC:1362903.1362904} uses Jif~\cite{jif} to enforce static IFC-based protection for web servlets, while Swift~\cite{Chong:2007:SWA:1294261.1294265} also allows client-side (Javascript) code. Unlike \lweb, these systems permit only statically determined database policies, not ones with dynamic labels (\eg stored in the database). The latter two lack language support for database manipulation, though a back-end database can be made accessible by wrapping it with a Jif signature (which we imagine would require an SeLINQ-style static policy). UrFlow~\cite{urflow} performs static analysis to prove that information flow policies are properly enforced. These policies are expressed as SQL queries over protected data and known information. Static analysis-based proofs about queries and flows impose no run-time overhead. But static analysis can be overapproximate, rejecting correct programs. Dynamic enforcement schemes do not have this issue, and \lweb's \lio-based approach imposes little run-time overhead. 
SELinks~\cite{corcoran09selinks} enforces security policies for web applications, including ones resembling the field-dependent policies we have in \lweb. To improve performance, security policy checks were offloaded to the database as stored procedures; \lweb could benefit from a similar optimization. SELinks was originally based on a formalism called Fable~\cite{swamy08fable} in which one could encode IFC policies, but this encoding was too onerous for practical use, and not present in SELinks, which was limited to access control policies. Qapla~\cite{qapla} also supports rich policies, but like SELinks these focus on access control, and so may fail to plug leaks of protected data via other server state. Jacqueline~\cite{YangHASFC16} uses faceted information flow control~\cite{Austin:2012:MFD:2103656.2103677} to implement policy-agnostic security~\cite{YangYS12,AustinYFS13} in web applications. Like \lweb, Jacqueline comes with a formalized and proved noninterference property (but not a mechanized one). Unlike \lweb, which enforces IFC using the underlying \lio monad, Jacqueline explicitly keeps track, at run time, of the secret and public views of sensitive values. While expressive, this approach can be expensive in both space and time: results of computations on sensitive values have up to $1.75\times$ slower running times, and require more memory. Latencies for Django and Jacqueline are around 160ms for typical requests to their benchmark application. The system most closely related to \lweb is Hails~\cite{Giffin:2012:HPD:2387880.2387886,stefan17hails}, which aims to enforce information flow-oriented policies in web applications. Hails is also based on \lio, and is particularly interested in confining third-party extensions (written in Safe Haskell~\cite{Terei:2012:SH:2430532.2364524}). In Hails, individual record fields can have policies that depend on other data in the database, as computed by a general Haskell function provided by the programmer.
Thus, Hails policies can encode \lweb policies, and more; \eg data in one table can be used to determine labels for data in another table. Evaluating the policy function during query processing is potentially expensive. That said, according to their benchmarks, the throughput of database writes in Hails is $2\times$ faster than Ruby Sinatra, comparable to Apache PHP, and $6\times$ slower than Java Jetty. They did not measure Hails' overhead, \eg by measuring the performance difference with and without policy checks. There are several important differences between \lweb and Hails. First, \lweb builds on top of a mature, popular web framework (\yesod). Extracting \lio into \lmonad makes it easy for \lweb to evolve as \yesod evolves. As such, \lweb can benefit from \yesod's optimized code, bugfixes, etc. Second, \lweb's @lsql@ query language is highly expressive, whereas (as far as we can tell) Hails uses a simpler query language targeting MongoDB, where predicates can only depend on the document key. Third, there is no formal argument (and little informal argument) that Hails' policy checks ensure a high-level security property. The ability to run arbitrary code to determine policies seems potentially risky (\eg if there are mutually interacting policy functions), and there seems to be nothing like our database invariants that are needed for noninterference. Our mechanized formalization proved important: value-oriented policies (where one field's label depends on another field) were tricky to get right (per \cref{sec:liquidhaskell-discussion}). Finally, IFDB~\cite{Schultz:2013:IDI:2465351.2465357} defines an approach to integrating information flow tracking in an application and a database. As in Hails and \lweb, the application tracks a current ``contamination level,'' like \lio's current label, that reflects the data it has read. In IFDB, one can specify per-row policies using secrecy and integrity labels, but not policies per field.
Labels are stored as separate, per-row metadata, implemented by changing the back-end DBMS. Declassification is permitted within trusted code blocks. Performance overhead for HTTP request latencies was similar to \lweb, at about 24\%. Compared to IFDB, \lweb does not require any PSQL/database modifications; can support per-field, updatable labels; and can treat existing fields as labels, rather than requiring the establishment of a separate (often redundant) field just for the label. IFDB also lacks a clear argument for security, and has no formalization. Once again, we found such a formalization particularly useful for revealing bugs. \begin{comment} We believe the key challenge is not just one thing, but rather is finding a balance of many things. In particular, we want a system that provides end-to-end information flow control (IFC) security for webapps while supporting expressive and efficient policies and queries. No prior system manages to do this. 1) An end-to-end information security property that includes server and DB. LWeb supports end-to-end IFC security. Hails, Jacqueline, and SIF do as well. SELinks can do end-to-end security, but only for access control policies, due to limitations involving encodings. Qapla also supports only access control. Other systems enforce IFC only in the server or DB, but not across the two. 2) Expressive, easy-to-express IFC policies and queries. LWeb supports expressive, dynamic policies: Individual data items in a row can have security labels determined by dynamic data in the same row, and that data is permitted to change (thus dynamically updating the label). Jacqueline policies are also expressive; Hails policies seem similar to LWeb's. IFDB policies are expressive, but do not permit changing a row's label. SELINQ and UrWeb both are limited to static policies — they do not depend on dynamic data. Simpler policies are easier to enforce (properly) at lower cost, but can restrict realistic applications. 
LWeb supports expressive queries as a subset of SQL, built on top of a full featured web framework (Yesod). These were important for BIBIFI. Jacqueline is different in that it focuses more on per-value declassified views. Since it inserts additional rows and columns, Jacqueline cannot safely really on aggregate queries like counting or summing. Hails is limited in that it utilizes a key-value store for its backend database (MongoDB). As a result, queries in Hails are not as expressive as SQL queries and predicates can only depend on keys. The IFC checks LWeb performs are more complex to support the richer queries. ``In addition to the secrecy labels described above, IFDB supports integrity labels, which make it possible to track whether data came from trusted sources.'' ``Labels are sets of tags. Each data object has a label that summarizes the sensitivity of all the data it contains. Labels of data objects are immutable; they are specified when the object is created and cannot be changed later. Each process also has a label, which expands over time to reflect the sensitivity of all the data that has affected the process. Conceptually, a process becomes “contaminated” by the labels of all the data it reads.'' IFDB labels tuples, not fields. Supported via declassification. ``The security of the application depends on the code that runs with authority.'' ``We implemented PHP-IF and Python-IF by extending PHP and Python to support DIFC.'' ``The remaining changes to support IFC are implemented in PHP and Python; each respective implementation is about 1100 lines of code'' ``The system tracks sensitive information as it flows through the DBMS, and also between the application and the DBMS.'' No formal proof. ``Figure 5 reports the HTTP request latency on an idle system, with a single client issuing requests serially. 
The weighted mean increase in response time with IFDB and PHP-IF was 24\%'' (on web portal application) ``IFDB must store a label for every tuple, and compare the labels to the process label on every read and update.'' -- For Hails: - It has a notion of privileges that act on behalf of principals. We don't? What benefit does it provide? Declassification? ``Hails provides unforgeable objects called privileges with which code can assert the authority of principals'' ``labels used by Hails are called DC labels'' ``Hails introduces a novel approach to specifying document and field policies by assigning labels to documents and fields as a function of the document contents itself.'' The end of 2.3.1 says that MPs can perform arbitrarily complex operations when labeling collections, documents, and fields. The claim is that when an MP runs this action, the Hails runtime ensures that whatever it does is confined. I'm not sure I buy this---the label of the label matters, \eg it must be bound by the table label. No formal proof here. Yikes! Hails overhead compared to vanila Haskell not determined. Unclear the cost of the language vs. the cost of the system. One source of overhead is the invocation of the policy function on each query; since this could be arbitrary code, could be very expensive. --- For UrWeb: Claim from Hails paper is ``Policies are expressed in the form of SQL queries and while statically enforced, can depend on dynamic data from the database.'' Hmm. ``Jeeves policies to be specified on data stored in a database'' but Hails does not yet support this -- it requires invoking ``MP code'' to mediate policy. Fig. 3 is their policy. ``a tool UrFlow for static analysis of database-backed Web applications.'' 3) Efficient implementation, in terms of space and time. LWeb imposes low overheads. The LIO style — employing a current label — is inexpensive in terms of run-time checks, and the database stores no special metadata, as labels are computed. 
That said, label checks can occur on a (rewritten) query in a way that slows processing. SELinks avoided this problem by using user-defined functions in Postgres (LWeb could do the same). Jacqueline’s faceted labels require an expensive representation in both space (the extra facets, leading to extra rows and columns) and time for DB processing and normal processing — we see that transactions are significantly slower than LWeb. IFDB stores an extra column for each row’s label. 4) A formal guarantee for high assurance. We have a machine checked proof that LWeb enjoys noninterference. Doing this was important as it turned out the enforcing IFC for our expressive brand of policies and queries was tricky, and there were two bugs in our implementation owing to leaks from dynamic checks needed to enforce these policies. Jacqueline, SELinks, and baseline LIO have formal proofs, but as fair as we know Hails does not (which may be less of an issue, given its simpler queries). \end{comment} \section{Scoped Contexts} \label{sec:tolabeled} With the monadic expressions introduced in~\S~\ref{sec:labeling}, programs can interact with labeled values. A consequence of these expressions is that the current label in the environment is monotonically increasing. \paragraph{Terms} To limit the increase in current label, the term @TToLabeled tl t@ is introduced. Term @t@ is a monadic computation whose result is labeled by the label @tl@. Once the monadic computation is complete, the current label is reset to its initial value. \begin{mcode} data Term l = ... | TToLabeled (Term l) (Term l) \end{mcode} \paragraph{Values} @TToLabeled tl t@ is not a value. \begin{mcode} isValue (TToLabeled _ _) = False \end{mcode} \paragraph{Evaluation} Term @t@ is fully evaluated to its monadic value. As long as its resulting current label does not exceed @l@ and the initial current label can flow to @l@, the value is labeled with @l@ and returned. 
In addition, the current label reverts to its initial value from before evaluating @t@. Otherwise, an exception is thrown. \begin{mcode} eval (Pg lc (TToLabeled (TLabel l) t)) | Pg lc' (TLIO t') <- evalStar (Pg lc t) , lc canFlowTo l, lc' canFlowTo l = Pg lc (TReturn (TLabeled l t')) | otherwise = Pg lc TException eval (Pg lc (TToLabeled tl t)) = Pg lc (TToLabeled (evalTerm tl) t) \end{mcode} \paragraph{Erasure} \begin{mcode} εTerm l (TToLabeled tl t) = TToLabeled (εTerm l tl) (εTerm l t) \end{mcode} \paragraph{Noninterference} The proof proceeds by case splitting and mutually recursively calling the simulations star theorem, similar to the bind proof of~\S~\ref{sec:lio}.
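The scoped-label discipline above — evaluate the computation, check that both the initial and resulting current labels flow to @l@, and then restore the current label — can be illustrated with a small executable sketch. The following Python toy mirrors the @eval@ rule for @TToLabeled@; it is our illustration, not the Haskell formalism: the class and method names are ours, and it models only a two-point lattice rather than the full label algebra.

```python
# Illustrative sketch (not the formal calculus): a two-point label lattice
# and a toLabeled-style scoped computation that restores the current label.

LOW, HIGH = "low", "high"

def can_flow_to(l1, l2):
    """Flow relation for the two-point lattice: LOW flows to HIGH."""
    return l1 == l2 or (l1 == LOW and l2 == HIGH)

class LIOState:
    def __init__(self):
        self.current_label = LOW

    def taint(self, l):
        """Reading data at label l raises the current label (join)."""
        if can_flow_to(self.current_label, l):
            self.current_label = l

    def to_labeled(self, l, computation):
        """Run `computation`, label its result with l, and restore the
        current label afterwards; fail if either label check fails."""
        saved = self.current_label
        if not can_flow_to(saved, l):
            raise RuntimeError("exception: initial label cannot flow to l")
        result = computation(self)
        if not can_flow_to(self.current_label, l):
            raise RuntimeError("exception: result label exceeds l")
        self.current_label = saved          # reset, as in TToLabeled
        return (l, result)                  # a labeled value
```

Raising an exception when a label check fails corresponds to the @otherwise@ branch stepping to @TException@.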
\section{Introduction} \label{Introduction} In recent years, image classification has been a classical problem in computer vision, and many successful algorithms~\cite{yu2012adaptive,shi2018hypergraph,wright2009robust,yu2014high,song2018euler,yu2013pairwise,wang2018iterative,yu2012image,yang2009linear,yang2017discriminative,liu2014class,liu2017class,jiang2013label,hao2017class,chan2015pcanet,nakazawa2018wafer,ji2014spectral,xu2019sparse,yuan2016non} have been proposed to solve it. Among these algorithms, one category that contributes substantially to image classification is the family of sparse representation based methods. \begin{figure*} \begin{center} \includegraphics[width=1.0\linewidth]{Comparision.png} \end{center} \caption{ The scheme of LEDL is shown on the right and that of LC-KSVD on the left. The difference between the two methods lies in the sparse regularization term: LEDL uses an $\ell_1$-norm regularization term while LC-KSVD uses an $\ell_0$-norm regularization term. Compared with the $\ell_0$-norm, the sparsity constraint factor of the $\ell_1$-norm is not fixed, so the basis vectors can be selected freely for linear fitting. Thus, our proposed LEDL method can achieve smaller errors than LC-KSVD. } \label{fig:Comparision} \end{figure*} Sparse representation expresses input sample features as a linear combination of atoms in an overcomplete basis set. \cite{wright2009robust} proposed the sparse representation based classification (SRC) algorithm, which uses an $\ell_1$-norm regularization term and achieves impressive performance; it is the most representative of the sparse representation based methods. However, traditional sparse representation based methods exploit the training sample features directly, without considering the discriminative information that is crucial in real applications. That is to say, sparse representation based methods can achieve better performance if the discriminative information is properly harnessed. 
To handle this problem, dictionary learning (DL) methods are introduced to preprocess the training sample features before classification. DL is a generative model for sparse representation whose concept was first proposed by~\cite{mallat1993matching}. A few years later, \cite{olshausen1996emergence,olshausen1997sparse} applied DL to natural images, and it has since been widely used in many fields such as image denoising~\cite{chang2000adaptive,li2012efficient,li2018joint}, image superresolution~\cite{yang2010image,wang2012semi,gao2018self}, and image classification~\cite{liu2017class,jiang2013label,chang2016learning}. A well learned dictionary can yield a significant boost in classification accuracy. Therefore, DL based methods for classification have become increasingly popular in recent years. Specifically, two strategies have been proposed to successfully utilize the discriminative information: i) class specific dictionary learning and ii) class shared dictionary learning. The first strategy learns a specific dictionary for each class, as in~\cite{wang2012supervised,yang2014sparse,liu2016face}. The second strategy learns a shared dictionary for all classes. For example,~\cite{zhang2010discriminative} proposed the discriminative K-SVD (D-KSVD) algorithm, which directly adds the discriminative information into the objective function. Furthermore,~\cite{jiang2013label} proposed the label consistent K-SVD (LC-KSVD) method, which adds a label consistency term into the objective function of D-KSVD. The motivation for adding this term is to encourage training samples from the same class to have similar sparse codes and those from different classes to have dissimilar sparse codes. Thus, the discriminative ability of the learned dictionary is effectively improved. However, the sparse regularization term in LC-KSVD is the $\ell_0$-norm, which leads to an NP-hard~\cite{natarajan1995sparse} problem. 
Although greedy methods such as orthogonal matching pursuit (OMP)~\cite{tropp2007signal} can alleviate this problem to some extent, they usually find a suboptimal sparse solution rather than the optimal one. More specifically, greedy methods tackle the global problem by selecting basis vectors in order of reconstruction error, from small to large, until $T$ (the sparsity constraint factor) atoms have been chosen, so the initialized values are crucial. Consequently, an $\ell_0$-norm based sparse constraint is not conducive to finding the global minimum and hence the optimal sparse solution. In this paper, we propose a novel dictionary learning algorithm named label embedded dictionary learning (LEDL). This method replaces the $\ell_0$-norm regularization of LC-KSVD with an $\ell_1$-norm regularization term, so that the basis vectors for linear fitting can be selected freely to obtain the optimal sparse solution. In addition, $\ell_1$-norm sparse representation is widely used in many fields, so our proposed LEDL method can be extended and applied easily. The difference between our proposed LEDL and LC-KSVD is illustrated in Figure~\ref{fig:Comparision}. We adopt the alternating direction method of multipliers (ADMM)~\cite{boyd2011distributed} framework and the blockwise coordinate descent (BCD)~\cite{liu2014blockwise} algorithm to optimize LEDL. Our main contributions are threefold. \begin{itemize} \item We propose a novel dictionary learning algorithm named label embedded dictionary learning, which introduces an $\ell_1$-norm regularization term as the sparse constraint. The $\ell_1$-norm sparse constraint helps to find the optimal sparse solution easily. \item We propose to utilize the alternating direction method of multipliers (ADMM)~\cite{boyd2011distributed} framework and the blockwise coordinate descent (BCD)~\cite{liu2014blockwise} algorithm to optimize the dictionary learning task. 
\item We verify the superior performance of our method on six benchmark datasets. \end{itemize} The rest of the paper is organized as follows. Section~\ref{Related work} reviews two conventional methods, SRC and LC-KSVD. Section~\ref{Methodology-A} presents the LEDL method for image classification. The optimization approach and its convergence are elaborated in Section~\ref{Methodology-B}. Section~\ref{Experimental results} reports experimental results on six well-known datasets. Finally, we conclude the paper in Section~\ref{Conclusion}. \section{Related Work}\label{Related work} In this section, we review two related algorithms: sparse representation based classification (SRC) and label consistent K-SVD (LC-KSVD). \subsection{Sparse representation based classification (SRC)} SRC was proposed by~\cite{wright2009robust}. Assume that we have $C$ classes of training samples, denoted by $ {{\mathbf{X}}_{c}},c=1,2,\cdots ,C$, where ${{\mathbf{X}}_{c}}$ is the training sample matrix of class $c$; each column of ${{\mathbf{X}}_{c}}$ is a training sample feature from the $c_{th}$ class. The whole training sample matrix is denoted as $\mathbf{X}=\left[ {{\mathbf{X}}_{1}},{{\mathbf{X}}_{2}},\cdots {{\mathbf{X}}_{C}} \right]\in {{\mathbb{R}}^{D\times N}}$, where $D$ is the dimension of the sample features and $N$ is the number of training samples. Supposing that $\mathbf{y}\in {{\mathbb{R}}^{D\times 1}}$ is a testing sample vector, the sparse representation algorithm solves the following objective function: \begin{equation} \begin{split} {\bf{\hat s}} = \arg {\min _{\bf{s}}}{\mkern 1mu} \left\{ {\left\| {{\bf{y}} - {\bf{Xs}}} \right\|_2^2 + 2\alpha {{\left\| {\bf{s}} \right\|}_1}} \right\} \end{split}\label{SRC} \end{equation} where $\alpha$ is the regularization parameter controlling the tradeoff between fitting goodness and sparseness. 
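As a concrete illustration, the $\ell_1$-minimization in Equation (\ref{SRC}) can be solved by iterative soft-thresholding (ISTA). The following NumPy sketch is our illustration (function names are ours), not part of the original SRC implementation:

```python
import numpy as np

def soft_threshold(v, tau):
    """Entrywise soft-thresholding: the proximal operator of tau*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista_sparse_code(X, y, alpha, n_iter=500):
    """Solve min_s ||y - X s||_2^2 + 2*alpha*||s||_1 by ISTA.

    X: (D, N) dictionary with (ideally) unit-norm columns; y: (D,) sample.
    """
    L = 2.0 * np.linalg.norm(X, 2) ** 2   # Lipschitz constant of the gradient
    s = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * X.T @ (X @ s - y)    # gradient of the smooth term
        s = soft_threshold(s - grad / L, 2.0 * alpha / L)
    return s
```

Starting from $\mathbf{s}=\mathbf{0}$, each iteration takes a gradient step on the smooth residual term and then applies the soft-thresholding proximal step for the $\ell_1$ penalty.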
Classification is then performed by finding the class with the minimum residual error. \begin{equation} \begin{split} id\left( {\bf{y}} \right) = \arg {\min _c}{\mkern 1mu} \left\| {{\bf{y}} - {{\bf{x}}_c}{{{\bf{\hat s}}}_c}} \right\|_2^2 \end{split}\label{residual} \end{equation} where $id\left( \bf{y} \right)$ denotes the predicted label of $\bf{y}$ and ${{{\bf{\hat s}}}_c}$ is the sparse code of the $c_{th}$ class. The procedure of SRC is shown in Algorithm~\ref{Algorithm1}. Obviously, the residual $e_c$ is associated with only a few images in class $c$. \begin{algorithm}[!t] \scriptsize \caption{Sparse representation based classification}\label{Algorithm1} \hspace*{0.02in} {\bf Input:} ${\bf{X}}\in\mathbb{R}^{D\times N}$, ${\bf{y}}\in\mathbb{R}^{D\times 1}$, $\alpha$\\ \hspace*{0.02in} {\bf Output:} $id({\bf{y}})$ \begin{algorithmic}[1] \STATE Code ${\bf{y}}$ with the dictionary ${\bf{X}}$ via $\ell_1$-minimization. \STATE ${\bf{\hat s}} = \arg {\min _{\bf{s}}}{\mkern 1mu} \left\{ {\left\| {{\bf{y}} - {\bf{Xs}}} \right\|_2^2 + 2\alpha {{\left\| {\bf{s}} \right\|}_1}} \right\}$ \FOR{$c=1$;$c\le C$;$c\!+\!+$} \STATE Compute the residual ${e_c}({\bf{y}}) = {\left\| {{\bf{y}} - {{\bf{x}}_c}{{\bf{\hat s}}_c}} \right\|_2^2}$ \ENDFOR \STATE $id\left( {\bf{y}} \right) = \arg {\min _c}\left\{ {{e_c}} \right\}$ \RETURN $id({\bf{y}})$ \end{algorithmic} \end{algorithm} \subsection{Label Consistent K-SVD (LC-KSVD)} \begin{algorithm}[!t] \scriptsize \caption{Label Consistent K-SVD}\label{Algorithm2} \hspace*{0.02in} {\bf Input:} ${\bf{X}}\in\mathbb{R}^{D\times N}$, ${\bf{H}}\in\mathbb{R}^{C\times N}$, ${\bf{Q}}\in\mathbb{R}^{K\times N}$, $\lambda$, $\omega$, $T$, $K$\\ \hspace*{0.02in} {\bf Output:} ${\bf{B}}\in\mathbb{R}^{D\times K}$, ${\bf{W}}\in\mathbb{R}^{C\times K}$, ${\bf{A}}\in\mathbb{R}^{K\times K}$, ${\bf{S}}\in\mathbb{R}^{K\times N}$ \begin{algorithmic}[1] \STATE Compute ${{\bf{B}}_0}$ by combining class-specific dictionary items for each 
class using K-SVD~\cite{aharon2006k}; \STATE Compute ${{\bf{S}}_0}$ for ${\bf{X}}$ and ${{\bf{B}}_0}$ using sparse coding; \STATE Compute ${{\bf{A}}_0}$ using ${\bf{A}} = {\bf{Q}}{{\bf{S}}^T}{\left( {{\bf{S}}{{\bf{S}}^T} + {\bf{I}}} \right)^{ - 1}}$; \STATE Compute ${{\bf{W}}_0}$ using ${\bf{W}} = {\bf{H}}{{\bf{S}}^T}{\left( {{\bf{S}}{{\bf{S}}^T} + {\bf{I}}} \right)^{ - 1}}$; \STATE Solve Eq.~(\ref{LC-KSVD}); use ${\left[ {\begin{array}{*{20}{c}} {{\bf{B}}_0}\\ {\sqrt \omega {{\bf{A}}_0}}\\ {\sqrt \lambda {{\bf{W}}_0}} \end{array}} \right]}$ to initialize the dictionary. \STATE Normalize ${\bf{B}}$, ${\bf{A}}$, ${\bf{W}}$:\\ ${\bf{B}} \leftarrow \left\{ {\frac{{{{\bf{b}}_1}}}{{{{\left\| {{{\bf{b}}_1}} \right\|}_2}}},\frac{{{{\bf{b}}_2}}}{{{{\left\| {{{\bf{b}}_2}} \right\|}_2}}}, \cdots ,\frac{{{{\bf{b}}_K}}}{{{{\left\| {{{\bf{b}}_K}} \right\|}_2}}}} \right\}$\\ ${\bf{A}} \leftarrow \left\{ {\frac{{{{\bf{a}}_1}}}{{{{\left\| {{{\bf{b}}_1}} \right\|}_2}}},\frac{{{{\bf{a}}_2}}}{{{{\left\| {{{\bf{b}}_2}} \right\|}_2}}}, \cdots ,\frac{{{{\bf{a}}_K}}}{{{{\left\| {{{\bf{b}}_K}} \right\|}_2}}}} \right\}$\\ ${\bf{W}} \leftarrow \left\{ {\frac{{{{\bf{w}}_1}}}{{{{\left\| {{{\bf{b}}_1}} \right\|}_2}}},\frac{{{{\bf{w}}_2}}}{{{{\left\| {{{\bf{b}}_2}} \right\|}_2}}}, \cdots ,\frac{{{{\bf{w}}_K}}}{{{{\left\| {{{\bf{b}}_K}} \right\|}_2}}}} \right\}$\\ \RETURN ${\bf{B}}$, ${\bf{W}}$, ${\bf{A}}$, ${\bf{S}}$ \end{algorithmic} \end{algorithm} \cite{jiang2013label} proposed LC-KSVD to encourage the similarity among representations of samples belonging to the same class in D-KSVD. 
The authors proposed to combine the discriminative sparse codes error with the reconstruction error and the classification error to form a unified objective function, given the discriminative sparse codes matrix ${\bf{Q}} = \left[ {{{\bf{q}}_1},{{\bf{q}}_2}, \cdots ,{{\bf{q}}_N}} \right] \in {{\mathbb{R}}^{K\times N}}$, the label matrix ${\bf{H}} = \left[ {{{\bf{h}}_1},{{\bf{h}}_2}, \cdots ,{{\bf{h}}_N}} \right] \in {{\mathbb{R}}^{C\times N}}$, and the training sample matrix ${\bf{X}}$. The objective function is defined as follows: \begin{equation} \begin{split} < {\bf{B}},{\bf{W}},{\bf{A}},{\bf{S}} > &= \mathop {\arg \min }\limits_{{\bf{B}},{\bf{W}},{\bf{A}},{\bf{S}}} \left\| {{\bf{X}} - {\bf{BS}}} \right\|_F^2 + \lambda \left\| {{\bf{H}} - {\bf{WS}}} \right\|_F^2 \\&+ \omega \left\| {{\bf{Q}} - {\bf{AS}}} \right\|_F^2\\ {\kern 12pt} & s.t.{\kern 3pt} {\left\| {{{\bf{s}}_i}} \right\|_0} < T {\kern 6pt} \left( {i = 1,2 \cdots ,N} \right)\\ &{\rm{ = }}\mathop {{\rm{argmin}}}\limits_{{\bf{B}},{\bf{W}},{\bf{A}},{\bf{S}}} \left\| {\left[ {\begin{array}{*{20}{c}} {\bf{X}}\\ {\sqrt \omega {\bf{Q}}}\\ {\sqrt \lambda {\bf{H}}} \end{array}} \right] - \left[ {\begin{array}{*{20}{c}} {\bf{B}}\\ {\sqrt \omega {\bf{A}}}\\ {\sqrt \lambda {\bf{W}}} \end{array}} \right]{\bf{S}}} \right\|_F^2\\ &s.t.{\kern 3pt}{\left\| {{{\bf{s}}_i}} \right\|_0} < T {\kern 6pt} \left( {i = 1,2 \cdots ,N} \right) \end{split}\label{LC-KSVD} \end{equation} where $T$ is the sparsity constraint factor, ensuring that ${{{\bf{s}}_i}}$ has no more than $T$ nonzero entries. The dictionary ${\bf{B}} = \left[ {{{\bf{b}}_1},{{\bf{b}}_2}, \cdots ,{{\bf{b}}_K}} \right] \in {{\mathbb{R}}^{D\times K}}$, where $K>D$ is the number of atoms in the dictionary, and ${\bf{S}} = \left[ {{{\bf{s}}_1},{{\bf{s}}_2}, \cdots ,{{\bf{s}}_N}} \right] \in {{\mathbb{R}}^{K\times N}}$ is the sparse code matrix of the training sample matrix ${\bf{X}}$. 
${\bf{W}} = \left[ {{{\bf{w}}_1},{{\bf{w}}_2}, \cdots ,{{\bf{w}}_K}} \right] \\ \in {{\mathbb{R}}^{C\times K}}$ is a classifier learned from the given label matrix ${\bf{H}}$; we expect ${\bf{W}}$ to return the most probable class a sample belongs to. ${\bf{A}} = \left[ {{{\bf{a}}_1},{{\bf{a}}_2}, \cdots ,{{\bf{a}}_K}} \right] \in {{\mathbb{R}}^{K\times K}}$ is a linear transformation matrix that maps the sparse codes to the discriminative sparse codes ${\bf{Q}}$. $\lambda$ and $\omega$ are the regularization parameters balancing the contributions of the classification error and the discriminative sparse codes error to the overall objective, respectively. The algorithm is shown in Algorithm~\ref{Algorithm2}. Here, we denote $m\left( {m = 0,1,2, \cdots } \right)$ as the iteration number, and ${\left( \bullet \right)_m}$ denotes the value of matrix $\left( \bullet \right)$ after the ${m_{th}}$ iteration. Although the LC-KSVD algorithm exploits the $\ell_0$-norm regularization term to control sparseness, it is difficult to find the optimal sparse solution for general image recognition. The reason is that LC-KSVD uses the OMP method to optimize the objective function, which usually yields a suboptimal sparse solution unless the initialized values happen to be perfect. \section{Methodology} \label{Methodology} In this section, we first present our proposed label embedded dictionary learning algorithm, and then elaborate the optimization of the objective function. \subsection{Proposed Label Embedded Dictionary Learning (LEDL)} \label{Methodology-A} Motivated by the difficulty of finding the optimal sparse solution under an $\ell_0$-norm regularization term, we propose a novel dictionary learning method named label embedded dictionary learning (LEDL) for image classification. This method replaces the $\ell_0$-norm regularization of LC-KSVD with an $\ell_1$-norm regularization term, so that the basis vectors for linear fitting can be selected freely to obtain the optimal sparse solution. 
The objective function is as follows: \begin{equation} \begin{split} < {\bf{B}},{\bf{W}},{\bf{A}},{\bf{S}} > = &\mathop {\arg \min }\limits_{{\bf{B}},{\bf{W}},{\bf{A}},{\bf{S}}} \left\| {{\bf{X}} - {\bf{BS}}} \right\|_F^2 + \lambda \left\| {{\bf{H}} - {\bf{WS}}} \right\|_F^2 \\&+ \omega \left\| {{\bf{Q}} - {\bf{AS}}} \right\|_F^2 + 2\varepsilon {\left\| {\bf{S}} \right\|_{\ell_1}}\\ {\rm{s}}.t.{\kern 4pt}\left\| {{{\bf{B}}_{ \bullet k}}} \right\|_2^2 \le 1, {\kern 1pt} {\kern 1pt} &\left\| {{{\bf{W}}_{ \bullet k}}} \right\|_2^2 \le 1,{\kern 1pt} \left\| {{{\bf{A}}_{ \bullet k}}} \right\|_2^2 \le 1{\kern 3pt}\left( {k = 1,2, \cdots K} \right) \end{split}\label{LEDL} \end{equation} where ${\left( \bullet \right)_{ \bullet k}}$ denotes the $k_{th}$ column vector of matrix $\left( \bullet \right)$. The $\ell_1$-norm regularization term is utilized to enforce sparsity, and $\varepsilon$ is the regularization parameter, which plays the same role as $\alpha$ in Equation (\ref{SRC}). \subsection{Optimization of Objective Function} \label{Methodology-B} The optimization problem (\ref{LEDL}) is not jointly convex in ${\bf{S}}$, ${\bf{B}}$, ${\bf{W}}$ and ${\bf{A}}$, but it is convex in each of ${\bf{S}}$ (with ${\bf{B}}$, ${\bf{W}}$, ${\bf{A}}$ fixed), ${\bf{B}}$ (with ${\bf{S}}$, ${\bf{W}}$, ${\bf{A}}$ fixed), ${\bf{W}}$ (with ${\bf{S}}$, ${\bf{B}}$, ${\bf{A}}$ fixed) and ${\bf{A}}$ (with ${\bf{S}}$, ${\bf{B}}$, ${\bf{W}}$ fixed). The optimization can therefore be decomposed into subproblems of two kinds: finding the sparse codes (${\bf{S}}$) and learning the bases (${\bf{B}}$, ${\bf{W}}$, ${\bf{A}}$). Here, we employ the alternating direction method of multipliers (ADMM)~\cite{boyd2011distributed} framework to solve the first subproblem and the blockwise coordinate descent (BCD)~\cite{liu2014blockwise} algorithm for the remaining ones. The complete process of LEDL is shown in Figure~\ref{fig:Procedure4LEDL}. 
\begin{figure*} \begin{center} \includegraphics[width = 1.0\linewidth]{Procedure4LEDL.png} \end{center} \caption{The complete process of the LEDL algorithm} \label{fig:Procedure4LEDL} \end{figure*} \subsubsection{ADMM for finding sparse codes} With ${\bf{B}}$, ${\bf{W}}$ and ${\bf{A}}$ fixed, we introduce an auxiliary variable ${\bf{Z}}$ and reformulate the LEDL problem as a linear equality-constrained problem in which each update has a closed-form solution. The objective function is as follows:\\ \begin{equation} \begin{split} < {\bf{B}},{\bf{W}},{\bf{A}},{\bf{C}},{\bf{Z}} > = &\mathop {\arg \min }\limits_{{\bf{B}},{\bf{W}},{\bf{A}},{\bf{C}},{\bf{Z}}} \left\| {{\bf{X}} - {\bf{BC}}} \right\|_F^2 + 2\varepsilon {\left\| {\bf{Z}} \right\|_{\ell_1}} \\&+ \lambda \left\| {{\bf{H}} - {\bf{WC}}} \right\|_F^2 + \omega \left\| {{\bf{Q}} - {\bf{AC}}} \right\|_F^2 \\s.t.{\kern 2pt} {\kern 2pt} {\bf{C}} = {\bf{Z}},{\kern 1pt} {\kern 1pt} &\left\| {{{\bf{B}}_{ \bullet k}}} \right\|_2^2 \le 1,{\kern 1pt} {\kern 1pt} {\kern 1pt} \left\| {{{\bf{W}}_{ \bullet k}}} \right\|_2^2 \le 1,{\kern 1pt} {\kern 1pt} {\kern 1pt}\\& \left\| {{{\bf{A}}_{ \bullet k}}} \right\|_2^2 \le 1(k = 1,2 \cdots K) \end{split}\label{LEDL_ADMM1} \end{equation} Applying the ADMM framework with ${\bf{B}}$, ${\bf{W}}$ and ${\bf{A}}$ fixed, the augmented Lagrangian of problem (\ref{LEDL_ADMM1}) is written as: \begin{equation} \begin{split} <{\bf{C}},{\bf{Z}},{\bf{L}} > = &\mathop {\arg \min }\limits_{{\bf{C}},{\bf{Z}},{\bf{L}} } \left\| {{\bf{X}} - {\bf{BC}}} \right\|_F^2 + \lambda \left\| {{\bf{H}} - {\bf{WC}}} \right\|_F^2 + \omega \left\| {{\bf{Q}} - {\bf{AC}}} \right\|_F^2 \\&+ 2\varepsilon {\left\| {\bf{Z}} \right\|_{\ell_1}} + 2{{\bf{L}}^T}({\bf{C}} - {\bf{Z}}) + \rho \left\| {{\bf{C}} - {\bf{Z}}} \right\|_F^2\\ \end{split}\label{LEDL_ADMM2} \end{equation} where ${\bf{L}} = \left[ {{{\bf{l}}_1},{{\bf{l}}_2}, \cdots ,{{\bf{l}}_N}} \right] \in {{\mathbb{R}}^{K\times N}}$ is the 
augmented Lagrangian multiplier and $ \rho>0$ is the penalty parameter. After fixing ${\bf{B}}$, ${\bf{W}}$ and ${\bf{A}}$, we initialize ${{\bf{C}}_0}$, ${{\bf{Z}}_0}$ and ${{\bf{L}}_0}$ to be zero matrices. Equation (\ref{LEDL_ADMM2}) can be solved as follows:\\\\ $\left( 1 \right)$ { Updating ${{\bf{C}}}$ while fixing ${{\bf{Z}}}$, ${{\bf{L}}}$, ${{\bf{B}}}$, ${{\bf{W}}}$ and ${{\bf{A}}}$}: \begin{equation} \begin{split} {{\bf{C}}_{m + 1}} = < {{\bf{B}}_m},{{\bf{W}}_m},{{\bf{A}}_m},{{\bf{C}}_m},{{\bf{Z}}_m},{{\bf{L}}_m} > \end{split}\label{Update_C1} \end{equation} The closed-form solution of ${\bf{C}}$ is \begin{equation} \begin{split} {{\bf{C}}_{m + 1}}& = {\left( {{{\bf{B}}_m}^T{{\bf{B}}_m} + \lambda {{\bf{W}}_m}^T{{\bf{W}}_m} + \omega {{\bf{A}}_m}^T{{\bf{A}}_m} + \rho {\bf{I}}} \right)^{ - 1}} \\& \times \left( {{{\bf{B}}_m}^T{{\bf{X}}} + \lambda {{\bf{W}}_m}^T{{\bf{H}}} + \omega {{\bf{A}}_m}^T{{\bf{Q}}} + \rho {{\bf{Z}}_m} - {\bf{L}}_m} \right) \end{split}\label{Update_C2} \end{equation} $\left( 2 \right)$ { Updating ${{\bf{Z}}}$ while fixing ${{\bf{C}}}$, ${{\bf{L}}}$, ${{\bf{B}}}$, ${{\bf{W}}}$ and ${{\bf{A}}}$}\\ \begin{equation} \begin{split} {{\bf{Z}}_{m + 1}} = < {{\bf{B}}_m},{{\bf{W}}_m},{{\bf{A}}_m},{{\bf{C}}_{m+1}},{{\bf{Z}}_m},{{\bf{L}}_m} > \end{split}\label{Update_Z1} \end{equation} The closed-form solution of ${\bf{Z}}$ is \begin{equation} \begin{split} {{\bf{Z}}_{m + 1}} = \max \left\{ {{{\bf{C}}_{m + 1}} + \frac{{{{\bf{L}}_m}}}{\rho } - \frac{\varepsilon }{\rho }{\bf{I}},{\bf{0}}} \right\}\\ + \min \left\{ {{{\bf{C}}_{m + 1}} + \frac{{{{\bf{L}}_m}}}{\rho } + \frac{\varepsilon }{\rho }{\bf{I}},{\bf{0}}} \right\} \end{split}\label{Update_Z2} \end{equation} where $\bf{I}$ is the identity matrix and $\bf{0}$ is the zero matrix. 
\\$\left( 3 \right)$ { Updating the Lagrangian multiplier} ${{\bf{L}}}$\\ \begin{equation} \begin{split} {{\bf{L}}_{m + 1}} = {{\bf{L}}_m} + \rho \left( {{{\bf{C}}_{m + 1}} - {{\bf{Z}}_{m + 1}}} \right) \end{split}\label{Update_L1} \end{equation} where the $\rho$ in Equation (\ref{Update_L1}) plays the role of the step size of a gradient descent (GD) update, and it need not coincide with the penalty parameter $\rho$ in Equation (\ref{LEDL_ADMM2}). To avoid confusion, we rewrite the $\rho$ in Equation (\ref{Update_L1}) as $\theta$. \begin{equation} \begin{split} {{\bf{L}}_{m + 1}} = {{\bf{L}}_m} + \theta \left( {{{\bf{C}}_{m + 1}} - {{\bf{Z}}_{m + 1}}} \right) \end{split}\label{Update_L2} \end{equation} \subsubsection{BCD for learning bases} Ignoring the sparseness regularization term in Equation (\ref{LEDL_ADMM1}), the constrained minimization problem of (\ref{LEDL}) with respect to a single column has a closed-form solution, which can be obtained by the BCD method. The objective function can be rewritten as follows:\\ \begin{equation} \begin{split} <{\bf{B}},{\bf{W}},{\bf{A}} > &= \mathop {\arg \min }\limits_{{\bf{B}},{\bf{W}},{\bf{A}} } \left\| {{\bf{X}} - {\bf{BC}}} \right\|_F^2 + \lambda \left\| {{\bf{H}} - {\bf{WC}}} \right\|_F^2 + \omega \left\| {{\bf{Q}} - {\bf{AC}}} \right\|_F^2 \\&+ 2\varepsilon {\left\| {\bf{Z}} \right\|_{\ell_1}} + 2{{\bf{L}}^T}({\bf{C}} - {\bf{Z}}) + \rho \left\| {{\bf{C}} - {\bf{Z}}} \right\|_F^2\\ s.t.{\kern 2pt} {\kern 2pt} {\kern 1pt} {\kern 1pt} &\left\| {{{\bf{B}}_{ \bullet k}}} \right\|_2^2 \le 1,{\kern 1pt} {\kern 1pt} {\kern 1pt} \left\| {{{\bf{W}}_{ \bullet k}}} \right\|_2^2 \le 1,{\kern 1pt} {\kern 1pt} {\kern 1pt} \left\| {{{\bf{A}}_{ \bullet k}}} \right\|_2^2 \le 1(k = 1,2 \cdots K) \end{split}\label{LEDL_BCD} \end{equation} We initialize ${{\bf{B}}_0}$, ${{\bf{W}}_0}$ and ${{\bf{A}}_0}$ to be random matrices and normalize them, respectively. 
After that, we use the BCD method to update ${{\bf{B}}}$, ${{\bf{W}}}$ and ${{\bf{A}}}$.\\\\ $\left( 1 \right)$ { Updating ${{\bf{B}}}$ while fixing ${{\bf{C}}}$, ${{\bf{L}}}$, ${{\bf{Z}}}$, ${{\bf{W}}}$ and ${{\bf{A}}}$}\\ \begin{equation} \begin{split} {{\bf{B}}_{m + 1}} = < {{\bf{B}}_m},{{\bf{W}}_m},{{\bf{A}}_m},{{\bf{C}}_{m+1}},{{\bf{Z}}_{m+1}},{{\bf{L}}_{m+1}} > \end{split}\label{Update_B1} \end{equation} The closed-form solution for a single column of ${{\bf{B}}}$ is \begin{equation} \begin{split} &\left( {{{\bf{B}}_{ \bullet k}}} \right){{\kern 1pt} _{m + 1}} = \frac{{{\bf{X}}{{\left[ {{{\left( {{{\bf{C}}_{k \bullet }}} \right)}_{m + 1}}} \right]}^T} - {{\left( {{{{\bf{\tilde B}}}^k}} \right)}_m}{{\bf{C}}_{m + 1}}{{\left[ {{{\left( {{{\bf{C}}_{k \bullet }}} \right)}_{m + 1}}} \right]}^T}}}{{{{\left\| {{\bf{X}}{{\left[ {{{\left( {{{\bf{C}}_{k \bullet }}} \right)}_{m + 1}}} \right]}^T} - {{\left( {{{{\bf{\tilde B}}}^k}} \right)}_m}{{\bf{C}}_{m + 1}}{{\left[ {{{\left( {{{\bf{C}}_{k \bullet }}} \right)}_{m + 1}}} \right]}^T}} \right\|}_2}}}{\kern 1pt} \end{split}\label{Update_B2} \end{equation} where ${\bf{\tilde B}}^k = \left\{ {\begin{array}{*{20}{c}} {{{\bf{B}}_{ \bullet p}},p \ne k}\\ {{\bf{0}},{\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} p = k} \end{array}} \right.$, and ${\left( \bullet \right)_{k \bullet }}$ denotes the $k$-th row vector of matrix $\left( \bullet \right)$.\\\\ $\left( 2 \right)$ { Updating ${{\bf{W}}}$ while fixing ${{\bf{C}}}$, ${{\bf{L}}}$, ${{\bf{Z}}}$, ${{\bf{B}}}$ and ${{\bf{A}}}$}\\ \begin{equation} \begin{split} {{\bf{W}}_{m + 1}} = < {{\bf{B}}_{m+1}},{{\bf{W}}_m},{{\bf{A}}_m},{{\bf{C}}_{m+1}},{{\bf{Z}}_{m+1}},{{\bf{L}}_{m+1}} > \end{split}\label{Update_W1} \end{equation} The closed-form solution for a single column of ${{\bf{W}}}$ is \begin{equation} \begin{split} &\left( {{{\bf{W}}_{ \bullet k}}} \right){{\kern 1pt} _{m + 1}} = \frac{{{\bf{H}}{{\left[ {{{\left( {{{\bf{C}}_{k \bullet }}} \right)}_{m + 1}}} \right]}^T} - {{\left( {{{{\bf{\tilde W}}}^k}} \right)}_m}{{\bf{C}}_{m + 1}}{{\left[ {{{\left( {{{\bf{C}}_{k \bullet }}} \right)}_{m + 1}}} \right]}^T}}}{{{{\left\| {{\bf{H}}{{\left[ {{{\left( {{{\bf{C}}_{k \bullet }}} \right)}_{m + 1}}} \right]}^T} - {{\left( {{{{\bf{\tilde W}}}^k}} \right)}_m}{{\bf{C}}_{m + 1}}{{\left[ {{{\left( {{{\bf{C}}_{k \bullet }}} \right)}_{m + 1}}} \right]}^T}} \right\|}_2}}}{\kern 1pt} \end{split}\label{Update_W2} \end{equation} where ${\bf{\tilde W}}^k = \left\{ {\begin{array}{*{20}{c}} {{{\bf{W}}_{ \bullet p}},p \ne k}\\ {{\bf{0}},{\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} p = k} \end{array}} \right.$.\\\\ $\left( 3 \right)$ { Updating ${{\bf{A}}}$ while fixing ${{\bf{C}}}$, ${{\bf{L}}}$, ${{\bf{Z}}}$, ${{\bf{B}}}$ and ${{\bf{W}}}$}\\ \begin{equation} \begin{split} {{\bf{A}}_{m + 1}} = < {{\bf{B}}_{m+1}},{{\bf{W}}_{m+1}},{{\bf{A}}_m},{{\bf{C}}_{m+1}},{{\bf{Z}}_{m+1}},{{\bf{L}}_{m+1}} > \end{split}\label{Update_A1} \end{equation} The closed-form solution for a single column of ${{\bf{A}}}$ is \begin{equation} \begin{split} &\left( {{{\bf{A}}_{ \bullet k}}} \right){{\kern 1pt} _{m + 1}} = \frac{{{\bf{Q}}{{\left[ {{{\left( {{{\bf{C}}_{k \bullet }}} \right)}_{m + 1}}} \right]}^T} - {{\left( {{{{\bf{\tilde A}}}^k}} \right)}_m}{{\bf{C}}_{m + 1}}{{\left[ {{{\left( {{{\bf{C}}_{k \bullet }}} \right)}_{m + 1}}} \right]}^T}}}{{{{\left\| {{\bf{Q}}{{\left[ {{{\left( {{{\bf{C}}_{k \bullet }}} \right)}_{m + 1}}} \right]}^T} - {{\left( {{{{\bf{\tilde A}}}^k}} \right)}_m}{{\bf{C}}_{m + 1}}{{\left[ {{{\left( {{{\bf{C}}_{k \bullet }}} \right)}_{m + 1}}} \right]}^T}} \right\|}_2}}}{\kern 1pt} \end{split}\label{Update_A2} \end{equation} where ${\bf{\tilde A}}^k = \left\{ {\begin{array}{*{20}{c}} {{{\bf{A}}_{ \bullet p}},p \ne k}\\ {{\bf{0}},{\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern
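The three column updates (\ref{Update_B2}), (\ref{Update_W2}) and (\ref{Update_A2}) share the same structure, so one helper covers all of them. The NumPy sketch below (our illustration; the names are ours) updates the $k$-th column of a generic basis ${\bf{P}}$ fitting ${\bf{M}} \approx {\bf{PC}}$ and projects it onto the unit sphere:

```python
import numpy as np

def bcd_update_column(M, P, C, k):
    """One BCD step for the k-th column of a basis P fitting M ≈ P C.
    Computes r = M C_k^T - P~^k C C_k^T (P~^k zeroes the k-th atom),
    then normalizes r, as in Eqs. (Update_B2)/(Update_W2)/(Update_A2)."""
    Ck = C[k, :]                    # k-th row of the sparse codes
    P_tilde = P.copy()
    P_tilde[:, k] = 0.0             # P~^k: drop the atom being updated
    r = M @ Ck - P_tilde @ (C @ Ck)
    P[:, k] = r / np.linalg.norm(r)
    return P
```

Calling it with $({\bf{X}}, {\bf{B}})$, $({\bf{H}}, {\bf{W}})$ or $({\bf{Q}}, {\bf{A}})$ reproduces the three updates in turn.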
1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} p = k} \end{array}} \right.$.\\ \begin{figure*} \begin{center} \includegraphics[width=0.8\linewidth]{Convergence.png} \end{center} \caption{ Convergence curve of LEDL Algorithm on four datasets. } \label{fig:Convergence} \end{figure*} \subsubsection{Convergence Analysis} Denote the value of the objective function after the $m$-th iteration by $f\left({{{\bf{C}}_m},{{\bf{Z}}_m},{{\bf{L}}_m},{{\bf{B}}_m},{{\bf{W}}_m},{{\bf{A}}_m}} \right)$. Since each subproblem is solved to a minimum by the ADMM and BCD methods, every update monotonically decreases the corresponding objective function. Considering that the objective function is bounded below and satisfies Equation (\ref{converge}), it converges. Figure~\ref{fig:Convergence} shows the convergence curves of the proposed LEDL algorithm on four well-known datasets; in practice, convergence is reached after about 100 iterations. The results demonstrate that our proposed LEDL algorithm converges quickly and has low complexity. \begin{equation} \begin{split} &f\left({{{\bf{C}}_m},{{\bf{Z}}_m},{{\bf{L}}_m},{{\bf{B}}_m},{{\bf{W}}_m},{{\bf{A}}_m}} \right) \\\ge &f\left( {{{\bf{C}}_{m + 1}},{{\bf{Z}}_{m + 1}},{{\bf{L}}_{m + 1}},{{\bf{B}}_m},{{\bf{W}}_m},{{\bf{A}}_m}} \right) \\\ge &f\left( {{{\bf{C}}_{m + 1}},{{\bf{Z}}_{m + 1}},{{\bf{L}}_{m + 1}},{{\bf{B}}_{m + 1}},{{\bf{W}}_{m + 1}},{{\bf{A}}_{m + 1}}} \right) \end{split}\label{converge} \end{equation} \subsubsection{Overall Algorithm} The overall updating procedure of the proposed LEDL algorithm is summarized in Algorithm~\ref{Algorithm3}. Here, $maxiter$ is the maximum number of iterations, ${\bf{1}}\in\mathbb{R}^{K\times K}$ is a square matrix with all elements equal to 1, and $ \odot $ denotes the element-wise (Hadamard) product. By updating ${\bf{C}}$, ${\bf{Z}}$, ${\bf{L}}$, ${\bf{B}}$, ${\bf{W}}$ and ${\bf{A}}$ alternately, the sparse codes are obtained and the corresponding bases are learned.
\begin{algorithm}[!ht] \scriptsize \caption{Label Embedded Dictionary Learning}\label{Algorithm3} \hspace*{0.02in} {\bf Input:} ${\bf{X}}\in\mathbb{R}^{D\times N}$, ${\bf{H}}\in\mathbb{R}^{C\times N}$, ${\bf{Q}}\in\mathbb{R}^{K\times N}$, $\lambda$, $\omega$, $\varepsilon$, $\rho$, $\theta$, $K$\\ \hspace*{0.02in} {\bf Output:} ${\bf{B}}\in\mathbb{R}^{D\times K}$, ${\bf{W}}\in\mathbb{R}^{C\times K}$, ${\bf{A}}\in\mathbb{R}^{K\times K}$, ${\bf{C}}\in\mathbb{R}^{K\times N}$ \begin{algorithmic}[1] \STATE ${{\bf{C}}_0} \leftarrow zeros\left( {K,N} \right)$, ${{\bf{Z}}_0} \leftarrow zeros\left( {K,N} \right)$, ${{\bf{L}}_0} \leftarrow zeros\left( {K,N} \right)$ \STATE ${{\bf{B}}_0} \leftarrow rand\left( {D,K} \right)$, ${{\bf{W}}_0} \leftarrow rand\left( {C,K} \right)$, ${{\bf{A}}_0} \leftarrow rand\left( {K,K} \right)$ \STATE ${{\bf{B}}_{ \bullet k}} = \frac{{{{\bf{B}}_{ \bullet k}}}}{{{{\left\| {{{\bf{B}}_{ \bullet k}}} \right\|}_2}}}$, ${{\bf{W}}_{ \bullet k}} = \frac{{{{\bf{W}}_{ \bullet k}}}}{{{{\left\| {{{\bf{W}}_{ \bullet k}}} \right\|}_2}}}$, ${{\bf{A}}_{ \bullet k}} = \frac{{{{\bf{A}}_{ \bullet k}}}}{{{{\left\| {{{\bf{A}}_{ \bullet k}}} \right\|}_2}}}$, $(k = 1,2 \cdots K)$ \STATE $m = 0$ \WHILE {$m \le \max iter$} \STATE $m \leftarrow m + 1$ \STATE \textbf{Update ${\bf{C}}$:}\\ \STATE${{\bf{C}}_{m + 1}} = {\left( {{{\bf{B}}_m}^T{{\bf{B}}_m} + \lambda {{\bf{W}}_m}^T{{\bf{W}}_m} + \omega {{\bf{A}}_m}^T{{\bf{A}}_m} + \rho {\bf{I}}} \right)^{ - 1}}$ \STATE $ {\kern 22pt} \times \left( {{{\bf{B}}_m}^T{{\bf{X}}} + \lambda {{\bf{W}}_m}^T{{\bf{H}}} + \omega {{\bf{A}}_m}^T{{\bf{Q}}} + \rho {{\bf{Z}}_m} - {{\bf{L}}_m}} \right)$ \STATE \textbf{Update ${\bf{Z}}$:}\\ \STATE ${{\bf{Z}}_{m + 1}} = \max \left\{ {{{\bf{C}}_{m + 1}} + \frac{{{{\bf{L}}_m}}}{\rho } - \frac{\varepsilon }{\rho }{\bf{I}},{\bf{0}}} \right\}$ \STATE ${\kern 22pt}+ \min \left\{ {{{\bf{C}}_{m + 1}} + \frac{{{{\bf{L}}_m}}}{\rho } + \frac{\varepsilon }{\rho }{\bf{I}},{\bf{0}}} \right\}$ \STATE 
\textbf{Update ${\bf{L}}$:}\\ \STATE ${{\bf{L}}_{m + 1}} = {{\bf{L}}_m} + \theta \left( {{{\bf{C}}_{m + 1}} - {{\bf{Z}}_{m + 1}}} \right)$ \STATE \textbf{Update ${\bf{B}}$, ${\bf{W}}$, ${\bf{A}}$:}\\ \STATE Compute ${{\bf{D}}_{m + 1}} = \left( {{{\bf{C}}_{m + 1}}{{\bf{C}}_{m + 1}}^T} \right) \odot \left( {{\bf{1}} - {\bf{I}}} \right)$ \FOR{$\scriptsize k=1$;$\scriptsize k\le \scriptsize K$;$\scriptsize k\!+\!+$} \STATE $\left( {{{\bf{B}}_{ \bullet k}}} \right){{\kern 1pt} _{m + 1}} = \frac{{{\bf{X}}{{\left[ {{{\left( {{{\bf{C}}_{k \bullet }}} \right)}_{m + 1}}} \right]}^T} - {{\bf{B}}_m}{{\left( {{{\bf{D}}_{ \bullet k}}} \right)}_{m + 1}}}}{{{{\left\| {{\bf{X}}{{\left[ {{{\left( {{{\bf{C}}_{k \bullet }}} \right)}_{m + 1}}} \right]}^T} - {{\bf{B}}_m}{{\left( {{{\bf{D}}_{ \bullet k}}} \right)}_{m + 1}}} \right\|}_2}}}{\kern 1pt}$ \STATE $\left( {{{\bf{W}}_{ \bullet k}}} \right){{\kern 1pt} _{m + 1}} = \frac{{{\bf{H}}{{\left[ {{{\left( {{{\bf{C}}_{k \bullet }}} \right)}_{m + 1}}} \right]}^T} - {{\bf{W}}_m}{{\left( {{{\bf{D}}_{ \bullet k}}} \right)}_{m + 1}}}}{{{{\left\| {{\bf{H}}{{\left[ {{{\left( {{{\bf{C}}_{k \bullet }}} \right)}_{m + 1}}} \right]}^T} - {{\bf{W}}_m}{{\left( {{{\bf{D}}_{ \bullet k}}} \right)}_{m + 1}}} \right\|}_2}}}{\kern 1pt} $ \STATE $\left( {{{\bf{A}}_{ \bullet k}}} \right){{\kern 1pt} _{m + 1}} = \frac{{{\bf{Q}}{{\left[ {{{\left( {{{\bf{C}}_{k \bullet }}} \right)}_{m + 1}}} \right]}^T} - {{\bf{A}}_m}{{\left( {{{\bf{D}}_{ \bullet k}}} \right)}_{m + 1}}}}{{{{\left\| {{\bf{Q}}{{\left[ {{{\left( {{{\bf{C}}_{k \bullet }}} \right)}_{m + 1}}} \right]}^T} - {{\bf{A}}_m}{{\left( {{{\bf{D}}_{ \bullet k}}} \right)}_{m + 1}}} \right\|}_2}}}{\kern 1pt} $ \ENDFOR \STATE \textbf{Update the objective function:}\\ \STATE $f = \left\| {{\bf{X}} - {\bf{BC}}} \right\|_F^2 + \lambda \left\| {{\bf{H}} - {\bf{WC}}} \right\|_F^2 + \omega \left\| {{\bf{Q}} - {\bf{AC}}} \right\|_F^2 + 2\varepsilon {\left\| {\bf{Z}} \right\|_{\ell_1}} + 2{{\bf{L}}^T}({\bf{C}} - {\bf{Z}}) + \rho \left\| {{\bf{C}} - {\bf{Z}}} \right\|_F^2$ \ENDWHILE \RETURN ${\bf{B}}$, ${\bf{W}}$, ${\bf{A}}$, ${\bf{C}}$ \end{algorithmic} \end{algorithm} In the testing stage, the sparse codes are obtained under the same $\ell_1$-norm sparsity constraint. Specifically, we exploit the learned dictionary ${\bf{B}}$ to fit the testing sample $\bf{y}$ and obtain the sparse codes ${\bf{s}}$. Then, we use the trained classifier ${\bf{W}}$ to predict the label of ${\bf{y}}$ by calculating $\max \left\{ {{\bf{Ws}}} \right\}$. \section{Experimental results} \label{Experimental results} In this section, we utilize six datasets (Extended YaleB~\cite{georghiades2001few}, CMU PIE~\cite{sim2002cmu}, UC Merced Land Use~\cite{yang2010bag}, AID~\cite{xia2017aid}, Caltech101~\cite{fei2007learning} and USPS~\cite{hull1994database}) to evaluate the performance of our algorithm and compare it with state-of-the-art methods such as SRC~\cite{wright2009robust}, LC-KSVD~\cite{jiang2013label}, CRC~\cite{zhang2011sparse} and CSDL-SRC~\cite{liu2016face}. In the following subsections, we first give the experimental settings; then the experiments on these six datasets are analyzed; finally, some discussions are presented.
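The testing stage described above can be sketched in a few lines. The paper does not prescribe a particular $\ell_1$ solver for the test codes, so the snippet below (ours, with our own names) uses ISTA, a standard choice, and then predicts via $\max \left\{ {{\bf{Ws}}} \right\}$:

```python
import numpy as np

def predict(y, B, W, eps=0.1, iters=200):
    """Sparse-code y on dictionary B with ISTA, then classify with W.
    ISTA is one common l1 solver; the choice of solver is ours."""
    s = np.zeros(B.shape[1])
    step = 1.0 / np.linalg.norm(B.T @ B, 2)      # 1 / Lipschitz constant
    for _ in range(iters):
        v = s - step * (B.T @ (B @ s - y))       # gradient step on the fit term
        s = np.sign(v) * np.maximum(np.abs(v) - step * eps, 0.0)  # shrinkage
    return int(np.argmax(W @ s)), s
```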
\begin{table} \scriptsize \caption{Classification rates ($\%$) on different datasets} \label{table1} \begin{center} \begin{tabular}{cccccc} \multicolumn{1}{c}{\bf Datasets$\backslash$Methods} &\multicolumn{1}{c}{\bf SRC } &\multicolumn{1}{c}{\bf CRC} &\multicolumn{1}{c}{\bf CSDL-SRC} &\multicolumn{1}{c}{\bf LC-KSVD} &\multicolumn{1}{c}{\bf LEDL} \\ \hline Extended YaleB &$79.1$ &$79.2$ &$80.2$ &$73.5$ &$\bf81.3$\\ CMU PIE &$73.7$ &$73.3$ &$77.4$ &$67.1$ &$\bf77.7$\\ UC-Merced &$80.4$ &$80.7$ &$80.5$ &$79.4$ &$\bf80.7$\\ AID &$71.6$ &$72.6$ &$71.6$ &$70.2$ &$\bf72.9$\\ Caltech101 &$89.4$ &$89.4$ &$89.4$ &$88.3$ &$\bf90.1$\\ USPS &$78.4$ &$77.9$ &$78.8$ &$71.1$ &$\bf81.1$\\ \hline\\ \end{tabular} \end{center} \end{table} \subsection{Experimental settings} For all the datasets, in order to eliminate randomness, we carry out every experiment 8 times and report the mean classification rate. We randomly select 5 samples per class for training in all the experiments. For the Extended YaleB and CMU PIE datasets, each image is cropped to $32 \times 32$ pixels, reshaped into a column vector, and $\ell_2$ normalized to form the raw $\ell_2$ normalized features. For the UC Merced Land Use and AID datasets, we use the ResNet model~\cite{he2016deep} to extract features. Specifically, the layer $pool5$ is utilized to extract 2048-dimensional vectors. For the Caltech101 dataset, we use the layer $pool5$ of the ResNet model together with two-layer spatial pyramid matching (SPM) (the second layer consists of five parts: upper left, upper right, lower left, lower right, and center) to extract 12288-dimensional vectors. Finally, each image in the USPS dataset is resized to $16 \times 16$ and vectorized. For convenience, the dictionary size ($K$) is fixed to twice the number of training samples. In addition, we set $\rho = 1$ and the initial $\theta = 0.5$, and then decrease $\theta$ at each iteration.
Moreover, there are three other parameters ($\lambda$, $\omega$ and $\varepsilon$) that need to be adjusted to achieve the highest classification rates. The details are shown in the following subsections. \subsection{Extended YaleB Dataset} The Extended YaleB dataset contains $2{,}432$ face images from 38 individuals, each having 64 frontal images taken under varying illumination conditions. Figure~\ref{fig:YaleB} shows some images of the dataset. \begin{figure}[h!] \begin{center} \includegraphics[width = 0.8\linewidth]{YaleB.png} \end{center} \caption{Examples of the Extended YaleB dataset} \label{fig:YaleB} \end{figure} \begin{figure*} \begin{center} \includegraphics[width=1.0\linewidth]{ConfusionMatrix4ExtendedYaleB.png} \end{center} \caption{Confusion matrices on Extended YaleB dataset} \label{fig:ConfusionMatrix4ExtendedYaleB} \end{figure*} In addition, we set $\lambda = {2^{ - 3}}$, $\omega = {2^{ - 11}}$, $\varepsilon = {2^{ - 8}}$ in our experiment. The experimental results are summarized in Table~\ref{table1}. We can see that our proposed LEDL algorithm achieves superior performance to other classical classification methods, with an improvement of at least $1.1$$\%$. Compared with the $\ell_0$-norm sparsity constraint based dictionary learning algorithm LC-KSVD, our proposed $\ell_1$-norm sparsity constraint based LEDL algorithm exceeds it by $7.8$$\%$. The reason for this large improvement is that the $\ell_0$-norm sparsity constraint leads to an NP-hard problem, which is not conducive to finding the optimal sparse solution for the dictionary. In order to further illustrate the performance of our method, we choose the samples of the first 20 classes as a sub-dataset and show the confusion matrices in Figure \ref{fig:ConfusionMatrix4ExtendedYaleB}. As can be seen, our method achieves higher classification rates than LC-KSVD in all of the chosen $20$ classes.
Especially in class 1, class 2, class 3, class 10 and class 16, LEDL achieves a performance gain of at least $10.0$$\%$ over LC-KSVD. \subsection{CMU PIE Dataset} The CMU PIE dataset consists of $41{,}368$ images of 68 individuals under 43 different illumination conditions. Each person is captured under 13 different poses and with 4 different expressions. In Figure \ref{fig:CMU_PIE}, we list several samples from this dataset. \begin{figure}[h!] \begin{center} \includegraphics[width = 0.8\linewidth]{CMU_PIE.png} \end{center} \caption{Examples of the CMU PIE dataset} \label{fig:CMU_PIE} \end{figure} The comparison results are shown in Table \ref{table1}; we can see that our proposed LEDL algorithm outperforms other well-known methods by an improvement of at least $0.5$$\%$. Notably, LEDL exceeds LC-KSVD by $10.6$$\%$ on this dataset. The optimal parameters are $\lambda = {2^{ - 3}}$, $\omega = {2^{ - 11}}$, $\varepsilon = {2^{ - 8}}$. \subsection{UC Merced Land Use Dataset} The UC Merced Land Use dataset is widely used for aerial image classification. It consists of $2{,}100$ land-use images of $21$ classes in total. Some samples are shown in Figure \ref{fig:UCMerced}. \begin{figure}[h!] \begin{center} \includegraphics[width = 0.8\linewidth]{UCMerced.png} \end{center} \caption{Examples of the UC Merced dataset}\label{fig:UCMerced} \end{figure} In Table \ref{table1}, we can see that our proposed LEDL algorithm only ties with CRC, while still outperforming the other methods. Compared with LC-KSVD, LEDL achieves higher accuracy by an improvement of $1.3$$\%$. Here, we set $\lambda = {2^{ 0}}$, $\omega = {2^{ - 9}}$, $\varepsilon = {2^{ - 6}}$ to get the optimal result. The confusion matrices of the UC Merced Land Use dataset for all classes are shown in Figure \ref{fig:ConfusionMatrix4UCMerced}. We can see that LEDL achieves better results than LC-KSVD in almost all classes except the tennis class.
In several classes such as building, freeway, river, and sparse, our method achieves superior performance to LC-KSVD by an improvement of at least $0.5$$\%$. \begin{figure*} \begin{center} \includegraphics[width=1.0\linewidth]{ConfusionMatrix4UCMerced.png} \end{center} \caption{Confusion matrices on UCMerced dataset} \label{fig:ConfusionMatrix4UCMerced} \end{figure*} \subsection{AID Dataset} The AID dataset is a new large-scale aerial image dataset whose images are collected from Google Earth imagery. It contains $10{,}000$ images from 30 aerial scene types. In Figure \ref{fig:AID}, we show several images of this dataset. \begin{figure}[h!] \begin{center} \includegraphics[width = 0.8\linewidth]{AID.png} \end{center} \caption{Examples of the AID dataset}\label{fig:AID} \end{figure} Table \ref{table1} illustrates the effectiveness of LEDL for classifying images. We adjust $\lambda = {2^{ -6}}$, $\omega = {2^{ - 14}}$, $\varepsilon = {2^{ - 12}}$ to achieve the highest accuracy among the five algorithms, with an improvement of at least $0.3$$\%$. Compared with LC-KSVD, LEDL achieves an improvement of $2.7$$\%$. \subsection{Caltech101 Dataset} The Caltech101 dataset includes $9{,}144$ images of $102$ classes in total, consisting of cars, faces, flowers and so on. Each category has about 40 to 800 images, and most categories have about 50 images. In Figure~\ref{fig:Caltech101}, we show several images of this dataset. \begin{figure}[h!] \begin{center} \includegraphics[width = 0.8\linewidth]{Caltech101.png} \end{center} \caption{Examples of the Caltech101 dataset}\label{fig:Caltech101} \end{figure} As can be seen in Table \ref{table1}, our proposed LEDL algorithm outperforms all the competing approaches by setting $\lambda = {2^{ - 4}}$, $\omega = {2^{ - 13}}$, $\varepsilon = {2^{ - 14}}$, achieving improvements of $1.8$$\%$ over LC-KSVD and $0.7$$\%$ over the other methods. Here, we also choose the first 20 classes to build the confusion matrices.
They are shown in Figure~\ref{fig:ConfusionMatrix4Caltech101}. \begin{figure}[h!] \begin{center} \includegraphics[width=1.0\linewidth]{ConfusionMatrix4Caltech101.png} \end{center} \caption{Confusion matrices on Caltech101 dataset} \label{fig:ConfusionMatrix4Caltech101} \end{figure} \subsection{USPS Dataset} The USPS dataset contains $9{,}298$ handwritten digit images from 0 to 9, which come from the U.S. Postal System. We list several samples from this dataset in Figure~\ref{fig:USPS}. \begin{figure}[h!] \begin{center} \includegraphics[width = 0.8\linewidth]{USPS.png} \end{center} \caption{Examples of the USPS dataset}\label{fig:USPS} \end{figure} Table \ref{table1} shows the comparison results of the five algorithms, and it is easy to see that our proposed LEDL algorithm outperforms other well-known methods by an improvement of at least $2.3$$\%$. Moreover, our proposed method achieves an improvement of $10.0$$\%$ over the LC-KSVD method. The optimal parameters are $\lambda = {2^{ -4}}$, $\omega = {2^{ - 8}}$, $\varepsilon = {2^{ - 5}}$. \subsection{Discussion} From the experimental results on the six datasets, we can draw the following conclusions. (1) All the above experimental results illustrate that our proposed LEDL algorithm is an effective and general classifier which achieves superior performance to state-of-the-art methods on various datasets, especially on the Extended YaleB, CMU PIE and USPS datasets. (2) Our proposed LEDL method introduces the $\ell_1$-norm regularization term to replace the $\ell_0$-norm regularization of LC-KSVD. Compared with the LC-KSVD algorithm, LEDL is consistently better on all six datasets; moreover, on the two face datasets and the USPS dataset, our method exceeds LC-KSVD by nearly $10.0$$\%$. (3) Confusion matrices of LEDL and LC-KSVD on three datasets are shown in Figures~\ref{fig:ConfusionMatrix4ExtendedYaleB}, \ref{fig:ConfusionMatrix4UCMerced} and~\ref{fig:ConfusionMatrix4Caltech101}.
They clearly illustrate the superiority of our method. Specifically, for the Extended YaleB dataset, our method achieves outstanding performance in five classes (class 1, class 2, class 3, class 10 and class 16). For the UC Merced dataset, LEDL achieves better classification rates than LC-KSVD in almost all classes except the tennis class. For the Caltech101 dataset, our proposed LEDL method performs much better than the LC-KSVD method in some classes such as beaver, binocular, brontosaurus, cannon and ceiling fan. \section{Conclusion} \label{Conclusion} In this paper, we propose a Label Embedded Dictionary Learning (LEDL) algorithm. Specifically, we introduce the $\ell_1$-norm regularization term to replace the $\ell_0$-norm regularization term of LC-KSVD, which helps to avoid the NP-hard problem and makes it easier to find the optimal solution. Furthermore, we adopt the ADMM algorithm to solve the $\ell_1$-norm optimization problem and the BCD algorithm to update the dictionary. Extensive experiments on six well-known benchmark datasets have demonstrated the superiority of our proposed LEDL algorithm. \section{Acknowledgment} This research was funded by the National Natural Science Foundation of China (Grant No. 61402535, No. 61671480), the Natural Science Foundation for Youths of Shandong Province, China (Grant No. ZR2014FQ001), the Natural Science Foundation of Shandong Province, China (Grant No. ZR2018MF017), the Qingdao Science and Technology Project (No. 17-1-1-8-jch), the Fundamental Research Funds for the Central Universities, China University of Petroleum (East China) (Grant No. 16CX02060A, 17CX02027A), and the Innovation Project for Graduate Students of China University of Petroleum (East China) (No. YCX2018063). \bibliographystyle{elsarticle-harv}
\section{\uppercase{Introduction}} \label{sec:introduction} \noindent Many tasks in pattern recognition and machine learning rely on the ability to quantify local similarities in data, and to infer meaningful global structure from such local characteristics~\cite{coifman:lafon:lee}. In the classification framework, the desired global structure is a descriptive partition of the data into categories or classes. Many studies have been devoted to binary classification problems. The multiple-class case, where the data is partitioned into more than two clusters, is more challenging. One approach is to treat the problem as a series of binary classification problems~\cite{allwein:schapire:singer}. In this paper, we develop an alternative method, involving a multiple-class extension of the diffuse interface model introduced in~\cite{bertozzi:flenner}. The diffuse interface model by Bertozzi and Flenner combines methods for diffusion on graphs with efficient partial differential equation techniques to solve binary segmentation problems. As with other methods inspired by physical phenomena~\cite{bertozzi:esedoglu:gillette,jung:kang:shen,li:kim}, it requires the minimization of an energy expression, specifically the Ginzburg-Landau (GL) energy functional. The formulation generalizes the GL functional to the case of functions defined on graphs, and its minimization is related to the minimization of weighted graph cuts~\cite{bertozzi:flenner}. In this sense, it parallels other techniques based on inference on graphs via diffusion operators or function estimation~\cite{coifman:lafon:lee,chung,zhou:scholkopf,szlam:maggioni:coifman,wang:jebara:chang,buhler:hein,szlam:bresson,hein:setzer}.
Multiclass segmentation methods that cast the problem as a series of binary classification problems use a number of different strategies: (i) deal directly with some binary coding or indicator for the labels~\cite{dietterich:bakiri,wang:jebara:chang}, (ii) build a hierarchy or combination of classifiers based on the one-vs-all approach or on class rankings~\cite{hastie:tibshirani,har-peled:roth:zimak} or (iii) apply a recursive partitioning scheme consisting of successively subdividing clusters, until the desired number of classes is reached~\cite{szlam:bresson,hein:setzer}. While there are advantages to these approaches, such as possible robustness to mislabeled data, there can be a considerable number of classifiers to compute, and performance is affected by the number of classes to partition. In contrast, we propose an extension of the diffuse interface model that obtains a simultaneous segmentation into multiple classes. The multiclass extension is built by modifying the GL energy functional to remove the prejudicial effect that the order of the labelings, given by integer values, has in the smoothing term of the original binary diffuse interface model. A new term that promotes homogenization in a multiclass setup is introduced. The expression penalizes data points that are located close to each other in the graph but are not assigned to the same class. This penalty is applied {\em independently\/} of how different the integer values representing the class labels are. In this way, the characteristics of the multiclass classification task are incorporated directly into the energy functional, with a measure of smoothness independent of label order, allowing us to obtain high-quality results. Alternative multiclass methods minimize a Kullback-Leibler divergence function~\cite{subramanya:bilmes} or expressions involving the discrete Laplace operator on graphs~\cite{zhou:bousquet:lal,wang:jebara:chang}. This paper is organized as follows.
Section~\ref{sec:model} reviews the diffuse interface model for binary classification, and describes its application to semi-supervised learning. Section~\ref{sec:multiclass} discusses our proposed multiclass extension and the corresponding computational algorithm. Section~\ref{sec:results} presents results obtained with our method. Finally, section~\ref{sec:conclusion} draws conclusions and delineates future work. \section{\uppercase{Data Segmentation with the Ginzburg-Landau Model}} \label{sec:model} \noindent The diffuse interface model~\cite{bertozzi:flenner} is based on a continuous approach, using the Ginzburg-Landau (GL) energy functional to measure the quality of data segmentation. A good segmentation is characterized by a state with small energy. Let $u(\boldsymbol{x})$ be a scalar field defined over a space of arbitrary dimensionality, and representing the state of the system. The GL energy is written as the functional \begin{equation} E_{GL}(u) = \frac{\epsilon}{2} \int \! | \nabla u |^2 \; d\boldsymbol{x} + \frac{1}{\epsilon} \int \! F(u) \; d\boldsymbol{x}, \label{eq:GLf} \end{equation} \noindent with $\nabla$ denoting the spatial gradient operator, $\epsilon > 0$ a real constant value, and $F$ a double well potential with minima at $\pm 1$: \begin{equation} F(u) = \frac{1}{4} \left ( u^2 - 1 \right )^2. \label{eq:2pot} \end{equation} Segmentation requires minimizing the GL functional. The norm of the gradient is a smoothing term that penalizes variations in the field $u$. The potential term, on the other hand, compels $u$ to adopt the discrete labels of $+1$ or $-1$, clustering the state of the system around two classes. Jointly minimizing these two terms pushes the system domain towards homogeneous regions with values close to the minima of the double well potential, making the model appropriate for binary segmentation. 
The smoothing term and potential term are in conflict at the interface between the two regions, with the first term favoring a gradual transition, and the second term penalizing deviations from the discrete labels. A compromise between these conflicting goals is established via the constant $\epsilon$. A small value of $\epsilon$ gives a short transition length and a sharper interface, while a large $\epsilon$ weights the gradient norm more, leading to a slower transition. The result is a diffuse interface between regions, with sharpness regulated by $\epsilon$. It can be shown that in the limit $\epsilon \to 0$ this functional approximates the total variation (TV) formulation in the sense of functional ($\Gamma$) convergence~\cite{kohn:sternberg}, producing piecewise constant solutions but with greater computational efficiency than conventional TV minimization methods. Thus, the diffuse interface model provides a framework to compute piecewise constant functions with diffuse transitions, approaching the ideal of the TV formulation, but with the advantage that the smooth energy functional is more tractable numerically and can be minimized by simple numerical methods such as gradient descent. The GL energy has been used to approximate the TV norm for image segmentation~\cite{bertozzi:flenner} and image inpainting~\cite{bertozzi:esedoglu:gillette,dobrosotskaya:bertozzi_inpainting}. Furthermore, a calculus on graphs equivalent to TV has been introduced in~\cite{gilboa:osher,szlam:bresson}. \subsection*{Application of Diffuse Interface Models to Graphs} An undirected, weighted neighborhood graph is used to represent the local relationships in the data set. This is a common technique to segment classes that are not linearly separable. In the $N$-neighborhood graph model, each vertex $z_i\in Z$ of the graph corresponds to a data point with feature vector $\boldsymbol{x}_i$, while the weight $w_{ij}$ is a measure of similarity between $z_i$ and $z_j$.
Moreover, it satisfies the symmetry property $w_{ij} = w_{ji}$. The neighborhood is defined as the set of $N$ closest points in the feature space. Accordingly, edges exist between each vertex and the vertices of its $N$-nearest neighbors. Following the approach of~\cite{bertozzi:flenner}, we calculate weights using the local scaling of Zelnik-Manor and Perona~\cite{zelnik-manor:perona}, \begin{equation} w_{ij} = \exp \left ( - \frac{|| \boldsymbol{x}_i - \boldsymbol{x}_j ||^2}{\tau(\boldsymbol{x}_i) \; \tau(\boldsymbol{x}_j)} \right ). \label{eq:local_graph} \end{equation} Here, $\tau(\boldsymbol{x}_i) = ||\boldsymbol{x}_i - \boldsymbol{x}^M_i||$ defines a local value for each $\boldsymbol{x}_i$, where $\boldsymbol{x}^M_i$ is the position of the $M$th closest data point to $\boldsymbol{x}_i$, and $M$ is a global parameter. It is convenient to express calculations on graphs via the graph Laplacian matrix, denoted by $\boldsymbol{L}$. The procedure we use to build the graph Laplacian is as follows. \begin{enumerate} \item Compute the similarity matrix $\boldsymbol{W}$ with components $w_{ij}$ defined in (\ref{eq:local_graph}). As the neighborhood relationship is not symmetric, the resulting matrix $\boldsymbol{W}$ is also not symmetric. Make it a symmetric matrix by connecting vertices $z_i$ and $z_j$ if $z_i$ is among the $N$-nearest neighbors of $z_j$ or if $z_j$ is among the $N$-nearest neighbors of $z_i$~\cite{luxburg}. \item Define $\boldsymbol{D}$ as a diagonal matrix whose $i$th diagonal element represents the degree of the vertex $z_i$, evaluated as \begin{equation} d_i = \sum_{j} w_{ij}. \end{equation} \item Calculate the graph Laplacian: $\boldsymbol{L} = \boldsymbol{D} - \boldsymbol{W}$. \end{enumerate} Generally, the graph Laplacian is normalized to guarantee spectral convergence in the limit of large sample size~\cite{luxburg}. 
The symmetric normalized graph Laplacian $\boldsymbol{L_s}$ is defined as \begin{equation} \boldsymbol{L_s} = \boldsymbol{D}^{-1/2} \; \boldsymbol{L} \; \boldsymbol{D}^{-1/2} = \boldsymbol{I} - \boldsymbol{D}^{-1/2} \; \boldsymbol{W} \; \boldsymbol{D}^{-1/2}. \label{eq:Ls} \end{equation} Data segmentation can now be carried out through a graph-based formulation of the GL energy. To implement this task, a fidelity term is added to the functional as initially suggested in~\cite{dobrosotskaya:bertozzi}. This enables the specification of a priori information in the system, for example the known labels of certain points in the data set. This kind of setup is called semi-supervised learning (SSL). The discrete GL energy for SSL on graphs can be written as~\cite{bertozzi:flenner}: \begin{eqnarray} \label{eqn:graphLaplacian} E_{GL_{\mathrm{SSL}}}(\boldsymbol{u}) & = & \frac{\epsilon}{2} \langle \boldsymbol{u}, \boldsymbol{L_s} \boldsymbol{u} \rangle + \frac{1}{\epsilon} \sum_{z_i \in Z} F(u(z_i)) \nonumber \\ & & + \sum_{z_i \in Z} \frac{\lambda(z_i)}{2} \; \left ( u(z_i) - u_0(z_i) \right )^2 \end{eqnarray} \noindent In the discrete formulation, $\boldsymbol{u}$ is a vector whose component $u(z_i)$ represents the state of the vertex $z_i$, $\epsilon > 0$ is a real constant characterizing the smoothness of the transition between classes, and $\lambda(z_i)$ is a fidelity weight taking value $\lambda > 0$ if the label $u_0(z_i)$ (i.e. class) of the data point associated with vertex $z_i$ is known beforehand, or $\lambda(z_i)=0$ if it is not known (semi-supervised). Equation (\ref{eqn:graphLaplacian}) may be understood as an example of the more general form of an energy functional for data classification, \begin{eqnarray} E(\boldsymbol{u}) = || \boldsymbol{u} ||_{a} + \frac{\lambda}{2} || \boldsymbol{u} - \boldsymbol{f} ||_{b}^{p}, \label{eqn:basic} \end{eqnarray} where the norm $||u||_{a}$ is a regularization term and $||u - f||_{b}$ is a fidelity term. 
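The graph construction described above can be summarized in a dense NumPy sketch (our illustration only, with $O(n^2)$ memory, so suitable only for small data sets; all names are ours):

```python
import numpy as np

def normalized_laplacian(X, N=10, M=7):
    """Symmetric normalized graph Laplacian of Eq. (Ls), with weights from
    the Zelnik-Manor/Perona local scaling of Eq. (local_graph).
    X: (n_points, n_features); N: neighborhood size; M: local-scale index."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared distances
    order = np.argsort(d2, axis=1)                        # order[:, 0] is the point itself
    tau = np.sqrt(d2[np.arange(n), order[:, M]])          # distance to M-th closest point
    W = np.exp(-d2 / (tau[:, None] * tau[None, :]))
    # keep the N nearest neighbors of each vertex, then symmetrize by union
    mask = np.zeros_like(W, dtype=bool)
    mask[np.repeat(np.arange(n), N), order[:, 1:N + 1].ravel()] = True
    W = np.where(mask | mask.T, W, 0.0)
    np.fill_diagonal(W, 0.0)
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1))             # D^{-1/2}
    return np.eye(n) - (d_inv_sqrt[:, None] * W) * d_inv_sqrt[None, :]
```

The eigenvalues of the resulting $\boldsymbol{L_s}$ lie in $[0, 2]$, as expected for a symmetric normalized Laplacian.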
The choice of the regularization norm $||\cdot||_{a}$ has non-trivial consequences for the final classification accuracy. Attractive qualities of the norm $|| \cdot ||_{a}$ include allowing classes to be close in a metric space, and obtaining segmentations for nonlinearly separable data. Both of these goals are addressed by the GL energy functional for SSL. Minimizing the functional simulates a diffusion process on the graph. The information from the few known labels is propagated through the discrete structure by means of the smoothing term, while the potential term clusters the vertices around the states $\pm 1$ and the fidelity term enforces the known labels. The energy minimization process itself attempts to reduce the interface regions. Note that in the absence of the fidelity term, the process could lead to a trivial steady-state solution of the diffusion equation, with all data points assigned the same label. The final state $u(z_i)$ of each vertex is obtained by thresholding, and the resulting homogeneous regions with labels of $+1$ and $-1$ constitute the two-class data segmentation. \section{\uppercase{Multiclass Extension}} \label{sec:multiclass} \noindent The double-well potential in the diffuse interface model for SSL drives the state of the system towards two definite labels. Multiple-class segmentation requires a more general potential function $F(u)$ that allows clusters around more than two labels. For this purpose, we use the periodic-well potential suggested by Li and Kim~\cite{li:kim}, \begin{equation} F( u ) = \frac{1}{2} \, \{ u \}^2 \, (\{ u \} - 1)^2, \label{eq:well_ext} \end{equation} where $\{ u \}$ denotes the fractional part of $u$, \begin{equation} \{ u \} = u - \lfloor u \rfloor, \label{eq:multiphase} \end{equation} \noindent and $ \lfloor u \rfloor$ is the largest integer not greater than $u$. 
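The shape of this potential is easy to verify numerically; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def periodic_well(u):
    """F(u) = 1/2 {u}^2 ({u} - 1)^2, with {u} the fractional part of u."""
    frac = u - np.floor(u)
    return 0.5 * frac**2 * (frac - 1.0)**2
```

The potential vanishes at every integer, is periodic with period 1, and has barriers of height $1/32$ at the half-integers, which act as the unstable equilibria separating neighboring classes.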
This periodic potential well promotes a multiclass solution, but in the multiclass setting the graph Laplacian term in Equation (\ref{eqn:graphLaplacian}) also requires modification, due to the fixed ordering of the class labels. The graph Laplacian term penalizes large changes in the spatial distribution of the system state more than smaller gradual changes. In a multiclass framework, this implies that the penalty for two spatially contiguous classes with different labels may vary according to the (arbitrary) ordering of the labels. This phenomenon is shown in Figure~\ref{fig:why_multiclass}. Suppose that the goal is to segment the image into three classes: class 0 composed of the black region, class 1 composed of the gray region and class 2 composed of the white region. It is clear that the horizontal interfaces comprise a jump of size 1 (analogous to a two-class segmentation) while the vertical interface implies a jump of size 2. Accordingly, the smoothing term will assign a higher cost to the vertical interface, even though, from the point of view of the classification, there is no specific reason for this. In this example, the problem cannot be solved with a different label assignment: there will always be an interface with a higher cost than the others, independent of the integer values used. Thus, the integer labeling breaks the symmetry among classes, influencing the diffuse interface evolution in an undesirable manner. Eliminating this inconvenience requires restoring the symmetry, so that the difference between two classes is always the same, regardless of their labels. This objective is achieved by introducing a new class difference measure. \begin{figure}[htb] \begin{center} \framebox{\scalebox{0.1}{\includegraphics{dummy_interface}}} \end{center} \caption{Three class segmentation. Black: class 0. Gray: class 1. 
White: class 2.} \label{fig:why_multiclass} \end{figure} \subsection{Generalized Difference Function} The final class labels are determined by thresholding each vertex $u(z_i)$, with the label $y_i$ set to the nearest integer: \begin{equation} y_i = \left \lfloor u(z_i) + \frac{1}{2} \right \rfloor. \end{equation} The boundaries between classes then occur at half-integer values corresponding to the unstable equilibrium states of the potential well. Define the function $\hat{r}(x)$ to represent the distance to the nearest half-integer: \begin{equation} \hat{r}(x) = \left | \frac{1}{2} - \{ x \} \right |. \label{eq:r_hat} \end{equation} A schematic of $\hat{r}(x)$ is depicted in Figure~\ref{fig:r_hat}. The $\hat{r}(x)$ function is used to define a generalized difference function between classes that restores symmetry in the energy functional. Define the generalized difference function $\rho$ as: \begin{equation} \rho(u(z_i),u(z_j)) = \left \{ \begin{array}{lll} \hat{r}(u(z_i)) + \hat{r}(u(z_j)) & \ & y_i \neq y_j \\ & & \\ \left|\hat{r}(u(z_i)) - \hat{r}(u(z_j))\right| & & y_i = y_j \end{array} \right . \end{equation} Thus, if the vertices are in different classes, the difference $\hat{r}(x)$ between each state's value and the nearest half-integer is added, whereas if they are in the same class, these differences are subtracted. The function $\rho(x,y)$ corresponds to the tree distance (see Fig.~\ref{fig:r_hat}). Strictly speaking, $\rho$ is not a metric since it does not satisfy $\rho(x,y) = 0 \Rightarrow x = y$. Nevertheless, the cost of interfaces between classes becomes the same regardless of class labeling when this generalized distance function is implemented. 
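A direct transcription of these definitions makes the restored symmetry easy to check (plain Python; the function names are illustrative):

```python
import math

def r_hat(x):
    """Distance from x to the nearest half-integer."""
    frac = x - math.floor(x)
    return abs(0.5 - frac)

def label(x):
    """Threshold a state to the nearest integer class label."""
    return math.floor(x + 0.5)

def rho(ui, uj):
    """Generalized difference: add the distances for different classes,
    subtract them for the same class."""
    if label(ui) != label(uj):
        return r_hat(ui) + r_hat(uj)
    return abs(r_hat(ui) - r_hat(uj))
```

For instance, `rho(0.9, 2.1)` and `rho(0.9, 5.1)` both evaluate to 0.8 (up to floating-point rounding): the cost of an interface no longer depends on which integers label the two classes.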
\begin{figure} \begin{center} \begin{picture}(150,70)(0,0) \put(60,40){\circle*{5}} \put(60,40){\line(-1,-1){40}} \put(60,40){\line(0,-1){40}} \put(60,40){\line(1,-1){40}} \put(101,-1){\circle{2}} \put(60,-1){\circle{2}} \put(19,-1){\circle{2}} \put(68,38){\mbox{Half-integer}} \put(110,-5){\mbox{Integer}} \put(55,45){\line(-1,-1){25}} \put(50,50){\line(1,-1){10}} \put(25,25){\line(1,-1){10}} \put(25,38){\mbox{$\hat{r}(x)$}} \end{picture} \end{center} \caption{Schematic interpretation of generalized difference: $\hat{r}(x)$ measures distance to nearest half-integer, and $\rho$ then corresponds to distance on tree.} \label{fig:r_hat} \end{figure} The GL energy functional for SSL, using the new generalized difference function $\rho$, is expressed as \begin{eqnarray} E_{MGL_{\mathrm{SSL}}}(\boldsymbol{u}) & = & \frac{\epsilon}{2} \sum_{z_i \in Z} \sum_{z_j \in Z} \frac{w_{ij}}{\sqrt{d_i d_j}} \, \left[\rho(u(z_i),u(z_j))\,\right]^2 \nonumber \\ & & + \frac{1}{2 \epsilon}\sum_{z_i \in Z} \{ u(z_i) \}^2 \, ( \{ u(z_i) \} - 1 )^2 \nonumber \\ & & + \sum_{z_i \in Z} \frac{\lambda(z_i)}{2} \; \left ( u(z_i) - u_0(z_i) \right )^2. \label{eq:multiclass_model} \end{eqnarray} Note that $\rho$ could also be used in the fidelity term, but for simplicity this modification is not included. In practice, this has little effect on the results. 
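Putting the pieces together, the functional (\ref{eq:multiclass_model}) can be evaluated directly. The sketch below (NumPy; names are illustrative, and $\hat r$ and $\rho$ are restated so the snippet is self-contained) also illustrates the key property of the construction: with the fidelity term switched off, the energy is invariant under relabeling of the classes.

```python
import numpy as np

def r_hat(u):
    return np.abs(0.5 - (u - np.floor(u)))

def multiclass_gl_energy(u, W, eps, lam, u0):
    """Smoothing + periodic-well potential + fidelity, as in Eq. (12)."""
    d = W.sum(axis=1)
    Wn = W / np.sqrt(np.outer(d, d))          # w_ij / sqrt(d_i d_j)
    r = r_hat(u)
    labels = np.floor(u + 0.5)
    same = labels[:, None] == labels[None, :]
    rho = np.where(same, np.abs(r[:, None] - r[None, :]),
                   r[:, None] + r[None, :])
    smoothing = 0.5 * eps * np.sum(Wn * rho**2)
    frac = u - np.floor(u)
    potential = np.sum(frac**2 * (frac - 1.0)**2) / (2.0 * eps)
    fidelity = 0.5 * np.sum(lam * (u - u0)**2)
    return smoothing + potential + fidelity
```

With $\lambda = 0$, moving one cluster of states from the neighborhood of integer 1 to the neighborhood of integer 2 leaves the energy unchanged, as intended.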
\subsection{Computational Algorithm} The GL energy functional given by (\ref{eq:multiclass_model}) may be minimized iteratively, using gradient descent: \begin{equation} u_i^{m+1} = u_i^{m} - dt \, \left [\frac{\delta E_{MGL_{\mathrm{SSL}}}}{\delta u_i} \right ], \end{equation} where $u_i$ is a shorthand for $u(z_i)$, $dt$ represents the time step and the gradient direction is given by: \begin{equation} \frac{\delta E_{MGL_{\mathrm{SSL}}}}{\delta u_i} = \epsilon G(u_i^m) + \frac{1}{\epsilon} F'(u_i^m) + \lambda_i \left ( u_i^m - {u_i}_0 \right ) \end{equation} \begin{equation} G(u_i^m) = \sum_j \frac{w_{ij}}{\sqrt{d_i d_j}} \left [ \hat{r}(u_i^m) \pm \hat{r}(u_j^m) \right ] \hat{r}'(u_i^m) \label{eq:G} \end{equation} \begin{equation} F'(u_i^m) = 2 \; \{ u_i^m \} ^3 - 3 \; \{ u_i^m \} ^2 + \{ u_i^m \} \end{equation} \begin{algorithm*}[ht] \caption{Calculate $\boldsymbol{u}$} \label{algo:iter} \begin{algorithmic} \REQUIRE $\epsilon > 0, dt > 0, m_{\mathrm{max}} > 0, K \mathrm{~given}$ \ENSURE $\mathrm{out} = \boldsymbol{u}^{m_{\mathrm{max}}}$ \STATE $\boldsymbol{u}^0 \leftarrow rand((0,K))-\frac{1}{2}, \ m \leftarrow 0$ \FOR{$m < m_{\mathrm{max}}$} \STATE $i \leftarrow 0$ \FOR{$i < n$} \STATE $u_i^{m+1} \leftarrow u_i^m - dt \left ( \epsilon \: G(u_i^m) + \frac{1}{\epsilon} \: F'(u_i^m) + \lambda_i \left ( u_i^m - {u_i}_0 \right ) \right )$ \IF{$\mathrm{Label}(u_i^{m+1}) \neq \mathrm{Label}(u_i^{m})$} \STATE $(v_i)_k \leftarrow k + \{ u_i ^{m+1}\}$ \STATE $u_i^{m+1} \leftarrow (v_i)_k \mathrm{~where~} k=\arg\min_{\; 0 \leq k < K} \; \sum_{j } \frac{w_{ij}}{\sqrt{d_i d_j}} \, \left[\rho((v_i)_k,u_j)\,\right]^2$ \ENDIF \STATE $ i \leftarrow i + 1$ \ENDFOR \STATE $m \leftarrow m + 1$ \ENDFOR \end{algorithmic} \end{algorithm*} The gradient of the generalized difference function $\rho$ is not defined at half integer values. 
Hence, we modify the method using a greedy strategy: after detecting that a vertex changes class, the new class that minimizes the smoothing term is selected, and the fractional part of the state computed by the gradient descent update is preserved. Consequently, the new state of vertex $i$ is the result of gradient descent, but if this causes a change in class, then a new state is determined. Specifically, let $k$ represent an integer in the range of the problem, i.e. $ k \in [0, K-1]$, where $K$ is the number of classes in the problem. Given the fractional part $\{u\}$ resulting from the gradient descent update, define $(v_i)_k = k + \{u_i\}$. Find the integer $k$ that minimizes $\sum_{j} \frac{w_{ij}}{\sqrt{d_i d_j}} \, \left[\rho((v_i)_k,u_j)\,\right]^2$, the smoothing term in the energy functional, and use $(v_i)_k$ as the new vertex state. A summary of the procedure is shown in Algorithm~\ref{algo:iter} with $m_{\mathrm{max}}$ denoting the maximum number of iterations. \section{\uppercase{Results}} \label{sec:results} \noindent The performance of the multiclass diffuse interface model is evaluated using a number of data sets from the literature, with differing characteristics. Data and image segmentation problems are considered on synthetic and real data sets. \subsection{Synthetic Data} A synthetic three-class segmentation problem is constructed following an analogous procedure used in~\cite{buhler:hein} for ``two moon'' binary classification, using three half circles (``three moons''). The half circles are generated in $\mathbb{R}^2$. The two top circles have radius $1$ and are centered at $(0, 0)$ and $(3, 0)$. The bottom half circle has radius $1.5$ and is centered at $(1.5, 0.4)$. We sample 1500 data points (500 from each of these half circles) and embed them in $\mathbb{R}^{100}$. The embedding is completed by adding Gaussian noise with $\sigma^2= 0.02$ to {\em each\/} of the 100 components for each data point. 
The dimensionality of the data set, together with the noise, makes this a nontrivial problem. \begin{figure*}[tb] \centerline{ \subfigure{\scalebox{0.3}{\includegraphics{kclasses_spectral_3eig_chunk3r18}}} \hfil \subfigure{\scalebox{0.3}{\includegraphics{kclasses_multiGL_adaptEps_fid3_res3}}} } \caption{Three-class segmentation. Left: spectral clustering. Right: multiclass GL (adaptive $\epsilon$).} \label{fig:3moon} \end{figure*} The difficulty of the problem is illustrated in Figure~\ref{fig:3moon}, where we use both spectral clustering decomposition and the multiclass GL method. The same graph structure is used for both methods. The symmetric graph Laplacian is computed based on edge weights given by (\ref{eq:local_graph}), using $N = 10$ nearest neighbors and local scaling based on the $M = 10$th closest point. The spectral clustering results are obtained by applying a $k$-means algorithm to the first $3$ eigenvectors of the symmetric graph Laplacian. The average error obtained, over 100 executions of spectral clustering, is 20\% ($\pm 0.6\%$). The figure displays the best result obtained, corresponding to an error of $18.67\%$. The multiclass GL method was implemented with the following parameters: interface scale $\epsilon = 1$, step size $dt = 0.01$ and number of iterations $m_{\mathrm{max}} = 800$. The fidelity term is determined by labeling 25 points randomly selected from each class (5\% of all points), and setting the fidelity weight to $\lambda = 30$ for those points. Several runs of the procedure are performed to isolate effects from the random initialization and the arbitrary selection of fidelity points. The average error obtained, over 100 runs with four different fidelity sets, is 5.2\% ($\pm 1.01\%$). In general terms, the system evolves from an initially inhomogeneous state, rapidly developing small islands around fidelity points that become seeds for homogeneous regions, and progressing to a configuration of classes forming nearly uniform clusters. 
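The ``three moons'' construction described above can be sketched as follows. The orientation of the arcs (upper halves for the two top circles, lower half for the bottom one) and the seeding convention are our assumptions for illustration; the radii, centers, sample counts, and noise level are those stated in the text.

```python
import numpy as np

def three_moons(n_per_class=500, dim=100, sigma2=0.02, seed=0):
    rng = np.random.default_rng(seed)
    # two top half circles of radius 1, centered at (0, 0) and (3, 0)
    t = rng.uniform(0.0, np.pi, n_per_class)
    top1 = np.c_[np.cos(t), np.sin(t)]
    t = rng.uniform(0.0, np.pi, n_per_class)
    top2 = np.c_[np.cos(t) + 3.0, np.sin(t)]
    # bottom half circle of radius 1.5, centered at (1.5, 0.4)
    t = rng.uniform(np.pi, 2.0 * np.pi, n_per_class)
    bottom = np.c_[1.5 * np.cos(t) + 1.5, 1.5 * np.sin(t) + 0.4]
    X2 = np.vstack([top1, top2, bottom])
    # embed in R^dim and add Gaussian noise to every component
    X = np.zeros((X2.shape[0], dim))
    X[:, :2] = X2
    X += rng.normal(0.0, np.sqrt(sigma2), X.shape)
    y = np.repeat([0, 1, 2], n_per_class)
    return X, y
```

The noise in all 100 components, not just the two informative ones, is what makes the problem genuinely high-dimensional.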
The multiclass results were further improved by incrementally decreasing $\epsilon$ to allow sharper transitions between states as in~\cite{bertozzi:flenner}. With this approach, the average error obtained over 100 runs is reduced to 2.6\% ($\pm 0.3\%$). The best result obtained in these runs is displayed in Figure~\ref{fig:3moon} and corresponds to an average error of 2.13\%. In these runs, $\epsilon$ is reduced from $\epsilon_0 = 2$ to $\epsilon_f = 0.1$ in decrements of 10\%, with $40$ iterations performed per step. The average computing time per run in this adaptive technique is 1.53s on an Intel Quad-Core @ 2.4 GHz, without any parallel processing. For comparison, we note the results from the literature for the simpler two moon problem ($\mathbb{R}^{100}$, $\sigma^2= 0.02$ noise). The best errors reported include: 6\% for p-Laplacian~\cite{buhler:hein}, 4.6\% for ratio-minimization relaxed Cheeger cut~\cite{szlam:bresson}, and 2.3\% for binary GL~\cite{bertozzi:flenner}. While these are not SSL methods, the last of these does involve other prior information in the form of a mass balance constraint. It can be seen that both of our procedures, fixed and adaptive $\epsilon$, produce high-quality results even for the more complex three-class segmentation problem. Calculation times are also competitive with those reported for the binary case (0.5s - 50s). \subsection{Image Segmentation} As another test setup, we use a grayscale image of size $191 \times 196$, taken from~\cite{jung:kang:shen,li:kim} and composed of 5 classes: black, dark gray, medium gray, light gray and white. This image contains structure, such as an internal hole and junctions where multiple classes meet. The image information is represented through feature vectors defined as $(x_i, y_i, \mathrm{pix}_i)$, with $x_i$ and $y_i$ corresponding to $(x, y)$ coordinates of the pixel and $\mathrm{pix}_i$ equal to the intensity of the pixel. 
All of these are normalized so as to obtain values in the range $[0,1]$. The graph is constructed using $N = 30$ nearest neighbors and local scaling based on the $M = 30$th closest point. We use parameters $\epsilon = 1$, $dt = 0.01$ and $m_{\mathrm{max}} = 800$. We then choose 1500 random points (4\% of the total) for the fidelity term, with $\lambda=30$. Figure~\ref{fig:image_segmentation} displays the original image with the randomly selected fidelity points (top left), and the five-class segmentation. Each class image shows in white the pixels identified as belonging to the class, and in black the pixels of the other classes. In this case, all the classes are segmented perfectly, with an average run time of 59.7s. The method of Li and Kim~\cite{li:kim} also segments this image perfectly, with a reported run time of 0.625s. However, their approach uses additional information, including a pre-assignment of specific grayscale levels to classes, and the overall densities of each class. Our approach does not require these. \begin{figure*}[tb] \centerline{ \subfigure{\scalebox{0.4}{\includegraphics{sample3_test3} \label{fig:img_fid}}} \hfil \subfigure{\scalebox{0.4}{\includegraphics{inClass0_M50fid3_ONEPIX} \label{fig:class0}}} \hfil \subfigure{\scalebox{0.4}{\includegraphics{inClass1_M50fid3_ONEPIX} \label{fig:class1}}} } \vspace{0.2cm} \centerline{ \subfigure{\scalebox{0.4}{\includegraphics{inClass2_M50fid3_ONEPIX} \label{fig:class2}}} \hfil \subfigure{\scalebox{0.4}{\includegraphics{inClass3_M50fid3_ONEPIX} \label{fig:class3}}} \hfil \subfigure{\scalebox{0.4}{\includegraphics{inClass4_M50fid3_ONEPIX} \label{fig:class4}}} } \caption{Image Segmentation Results. Top left: Original five-class image, with randomly chosen fidelity points displayed. 
Other panels: the five segmented classes, shown in white.} \label{fig:image_segmentation} \end{figure*} \subsection{MNIST Data} The MNIST data set available at \textit{http://yann.lecun.com/exdb/mnist/} is composed of 70,000 images of size $28 \times 28$, corresponding to a broad sample of handwritten digits 0 through 9. We use the multiclass diffuse interface model to segment the data set automatically into 10 classes, one per handwritten digit. Before constructing the graph, we preprocess the data by normalizing and projecting into 50 principal components, following the approach in~\cite{szlam:bresson}. No further steps, such as smoothing convolutions, are required. The graph is computed with $N = 10$ nearest neighbors and local scaling based on the $M = 10$ closest points. An adaptive $\epsilon$ variant of the algorithm is implemented, with parameters $\epsilon_0 = 2$, $\epsilon_f = 0.01$, $\epsilon$ decrement 10\%, $dt = 0.01$, and 40 iterations per step. For the fidelity term, 7,000 images (10\% of total) are chosen, with weight $\lambda=30$. The average error obtained, over 20 runs with four different fidelity sets, is 7\% ($\pm 0.072\%$). The confusion matrix for the best result obtained, corresponding to a 6.86\% error, is given in Table~\ref{tab:mnist}: each row represents the segmentation obtained, while the columns represent the true digit labels. For reference, the average computing time per run in this adaptive technique is 132s. Note that, in the segmentations, the largest mistakes made are in trying to distinguish digits 4 from 9 and 7 from 9. For comparison, errors reported using unsupervised clustering algorithms in the literature are: 12.9\% for p-Laplacian~\cite{buhler:hein}, 11.8\% for ratio-minimization relaxed Cheeger cut~\cite{szlam:bresson}, and 12.36\% for the multicut version of the normalized 1-cut~\cite{hein:setzer}. 
A more sophisticated graph-based diffusion method applied in a semi-supervised setup (transductive classification), with function-adapted eigenfunctions, a graph constructed with 13 neighbors, and self-tuning with the 9th neighbor reported in~\cite{szlam:maggioni:coifman} obtains an error of 7.4\%. Results with similar errors are reported in~\cite{liu:he:chang}. Thus, the performance of the multiclass GL on this data set improves upon other published results, while requiring less preprocessing and a simpler regularization of the functions on the graph. \begin{table*}[tb] \caption{Confusion Matrix for the MNIST Data Segmentation.} \label{tab:mnist} \begin{center} \begin{tabular}{|c|r|r|r|r|r|r|r|r|r|r|} \hline \multicolumn{1}{|c|}{\bf Obtained / True} & \multicolumn{1}{|c|}{\bf 0} & \multicolumn{1}{|c|}{\bf 1} & \multicolumn{1}{|c|}{\bf 2} & \multicolumn{1}{|c|}{\bf 3} & \multicolumn{1}{|c|}{\bf 4} & \multicolumn{1}{|c|}{\bf 5} & \multicolumn{1}{|c|}{\bf 6} & \multicolumn{1}{|c|}{\bf 7} & \multicolumn{1}{|c|}{\bf 8} & \multicolumn{1}{|c|}{\bf 9} \\ \hline {\bf 0} & 6712 & 3 & 39 & 10 & 6 & 36 & 57 & 10 & 61 & 28 \\ \hline {\bf 1} & 1 & 7738 & 7 & 15 & 9 & 1 & 9 & 23 & 36 & 12 \\ \hline {\bf 2} & 24 & 50 & 6632 & 95 & 65 & 17 & 16 & 63 & 65 & 30 \\ \hline {\bf 3} & 13 &16 & 84 & 6585 & 8 & 218 & 5 & 42 & 153 & 84 \\ \hline {\bf 4} & 5 & 6 & 27 & 8 & 6279 & 32 & 13 & 59 & 43 & 305 \\ \hline {\bf 5} & 21 & 6 & 13 & 128 & 27 & 5736 & 57 & 3 & 262 & 34 \\ \hline {\bf 6} & 91 & 26 & 50 & 11 & 35 & 91 & 6693 & 0 & 45 & 1 \\ \hline {\bf 7} & 6 & 6 & 31 & 97 & 26 & 15 & 0 & 6689 & 24 & 331 \\ \hline {\bf 8} & 27 & 15 & 86 & 156 & 21 & 110 & 25 & 16 & 6065 & 66 \\ \hline {\bf 9} & 3 & 11 & 21 & 36 & 348 & 57 & 1 & 388 & 71 & 6067 \\ \hline \end{tabular} \end{center} \end{table*} \section{\uppercase{Conclusions}} \label{sec:conclusion} \noindent We have proposed a new multiclass segmentation procedure, based on the diffuse interface model. 
The method obtains segmentations of several classes simultaneously, without using the one-vs-all strategy or the sequences of binary segmentations required by other multiclass methods. The local scaling method of Zelnik-Manor and Perona, used to construct the graph, constitutes a useful representation of the characteristics of the data set and is well suited to high-dimensional data. Our modified diffusion method, represented by the non-linear smoothing term introduced in the Ginzburg-Landau functional, exploits the structure of the multiclass model and is not affected by the ordering of class labels. It efficiently propagates class information that is known beforehand, as evidenced by the small proportion of fidelity points (4\% - 10\% of the data set) needed to perform accurate segmentations. Moreover, the method is robust to initial conditions. As long as the initialization represents all classes uniformly, different initial random configurations produce very similar results. The main limitation of the method appears to be that the fidelity points must be representative of the class distribution. As long as this holds, as in the examples discussed, the long-time behavior of the solution depends less on choosing the ``right'' initial conditions than it does in other graph-based learning techniques. State-of-the-art results with small classification errors were obtained for all classification tasks. Furthermore, the results do not depend on the particular class label assignments. Future work includes investigating the diffuse interface parameter $\epsilon$. We conjecture that the proposed functional converges (in the $\Gamma$-convergence sense) to a total variation type functional on graphs as $\epsilon$ approaches zero, but the exact nature of the limiting functional is unknown. \section*{\uppercase{Acknowledgements}} This research has been supported by the Air Force Office of Scientific Research MURI grant FA9550-10-1-0569 and by ONR grant N0001411AF00002. 
\bibliographystyle{apalike} {\small
\section{Introduction} Polars are magnetic cataclysmic binaries consisting of a late-type main-sequence star and a strongly magnetic white dwarf locked in synchronous rotation. EF\,Eri\ was one of the 11 polars known in the pre-ROSAT era and was the second brightest at optical and at X-ray wavelengths after the prototypical system AM Herculis. It was studied with all major X-ray observatories (EINSTEIN, EXOSAT, GINGA, ROSAT) in the past and was always found in a high accretion state. EINSTEIN observations revealed the presence of uncorrelated soft and hard X-ray emission and were used to observationally establish the standard picture of magnetic accretion onto white dwarfs in the high $\dot{m}$-regime, dominated by a shock-heated accretion column and cooling by free-free radiation (Beuermann, Stella \& Patterson~1987, henceforth BSP87). The absence of a pronounced soft X-ray excess led BSP87 to call EF\,Eri\ the textbook example of AM Herculis-type systems. The shape of the X-ray light curves, in particular the presence of a soft X-ray absorption dip, was used to uncover the accretion geometry. EF\,Eri\ had a main accretion pole which was continuously in view, and the observer has a moderate inclination with respect to the orbital plane, so that the line of sight crosses the accretion stream on its way through the magnetosphere. This special geometry allowed detailed stream-density diagnostics with GINGA and EXOSAT (Watson et al.~1989). The accretion geometry was intensively studied using photo- and spectro-polarimetric data (e.g.~Bailey et al.~1982, Cropper 1985, Piirola et al.~1987, Meggitt \& Wickramasinghe 1989, Beuermann et al.~2007). The latter three papers agree that the white dwarf's magnetic field is probably more complex than that of a centered dipole. The zero point of Bailey's ephemeris (Bailey et al.~1982), centered on the IR (X-ray) absorption dip, is widely used in the literature. 
Piirola et al.~(1987) determined an updated orbital period based on a linear regression of the arrival times of linear polarisation pulses. Beuermann et al.~(2007) derived a slightly revised ephemeris by including the ROSAT PSPC X-ray dip timings from July 1990. Phases in this paper refer to Bailey's phase zero and Piirola's period. EF\,Eri\ turned into a deep low state at $V\simeq 18$ in 1997 (Wheatley \& Ramsay 1998) and has remained there since. A re-brightening was reported in VSNET on March 5, 2006 (ERI EF 20060305.724 at 14.2 unfiltered CCD based on the Henden-Sumner sequence), but the system returned to the low state shortly thereafter. While in the high state the stellar photospheres are outshone by accretion radiation, the low state offers the opportunity to investigate the stars, at least in principle. Since EF\,Eri\ is the polar with the shortest orbital period, $P_{\rm orb} =81$\,min, just a few minutes above the CV minimum period, low state observations are of utmost importance to test current scenarios of CV evolution and to search for the cool secondary. Indeed, following the more indirect conclusion by Beuermann et al.~(2000) of a substellar secondary in EF\,Eri\ from the non-detection of any spectral signature of the companion in optical spectra, Howell \& Ciardi (2001) claimed the detection of the secondary in near-infrared spectra. A more likely explanation of the observed infrared humps was given in terms of cyclotron radiation (Beuermann et al.~2000, Harrison et al.~2004). Beuermann et al.~(2000) also estimated the photospheric temperature of the white dwarf from their low-resolution optical spectra, $T_{\rm WD} =9500\pm500$\,K, one of the coldest WDs among all CVs. This allowed some conclusions to be drawn on the likely evolutionary state of the object. 
Recently, Szkody et al.~(2006) reported on phase-resolved {\it GALEX} observations with the puzzling result of a distinct source of ultraviolet flux much brighter than the underlying 9500\,K white dwarf. Here we report on archival XMM-Newton observations of EF\,Eri\ obtained in a low accretion state. We search for remaining X-ray emission in the low state, originating either from the white dwarf or the secondary, and analyse the data from the optical monitor taken through two different filters. \section{Low-state observations with XMM-Newton} The XMM-Newton Science Archive (XSA) contains three observations of the X-ray sky in the direction of EF\,Eri. They are listed with their nominal exposure times in Tab.~\ref{t:log}. \begin{table}[t] \caption{XMM-Newton observations of EF\,Eri. The first column lists the unique observation ID and the revolution number of the spacecraft, the last column lists the nominal and effective exposure times of the individual observations, the latter quantity after screening for high background and other instrumental defects.} \label{t:log} \begin{tabular}{lccc} \hline\hline OBSID/rev & Date & Instr. & Exp Nom/Eff\\ & & & (s)\\ \hline 0111320201/496 & 2002-08-24 & EPIC-PN & 6132/1859\\ & & OM V & 6000 \\ 0111320401/571 & 2003-01-20 & EPIC-MOS & 5160/5088\\ 0111320501/583 & 2003-02-14 & EPIC-PN & 5047/4559\\ & & EPIC-MOS & 6660/6575 \\ & & OM V & 3400 \\ & & OM UVW1 & 2600\\ \hline \end{tabular} \end{table} We refer to individual observations by an 'E' followed by the last three digits of the OBSID, i.e.~E201 for the observation of August 24, 2002. All X-ray observations were obtained in full frame mode. The RGS did not yield useful data due to the low count rate and will not be considered further. Data processing was performed with the latest version of the XMM-Newton SAS (version 7.0), and a spectral analysis of the X-ray data was performed with XSPEC. 
Despite the relatively short exposure times, almost full phase coverage of the $P_{\rm orb} = 81$\,min binary was achieved on two occasions (E401 and E501). The accumulated phase uncertainty of the period derived by Piirola et al.~(1987) at the epoch of the last XMM-Newton X-ray observation, i.e. after $\sim$155000 binary cycles, is only 0.014 phase units and thus negligible. EPIC-MOS and EPIC-PN show a faint apparent companion to EF\,Eri\ at $\alpha(2000) = 03^h 14^m 14\fs0$ and $\delta(2000) = -22\degr 36' 04''$. The source has no counterpart on DSS2 images. It contributes at a level of $F_X \simeq 1.5 \times 10^{-14}$\,erg\,cm$^{-2}$\,s$^{-1}$\ in the 0.1 -- 10 keV band. This source could not be resolved in any of the previous X-ray observations with other satellites. Its faint flux was just a small contamination of all previous X-ray observations and thus irrelevant. It represents, however, a $\sim$20\% contamination of the flux of EF\,Eri\ during the XMM-Newton observations. We thus chose source and background regions for the extraction of light curves and spectra avoiding the region around this source. \subsection{X-ray spectra and light curves} The net exposure time of observation E201 was just 1859\,s. The source was detected with EPIC-PN at a mean count rate of 0.022\,s$^{-1}$. The spectrum contains no photons above 5 keV; it could be successfully fitted (reduced $\chi^2 = 0.93$ for 9 degrees of freedom) with a cooling plasma model (MEKAL in XSPEC terms) with a temperature of $kT = 2.8\pm1.7$\,keV, only very little affected by interstellar absorption. In general, all X-ray spectral fits based on XMM-Newton observations are compatible with zero interstellar absorption, in accord with the low column density inferred from ROSAT and EXOSAT, $N_{\rm H} = 10^{19}$\,cm$^{-2}$ (Beuermann et al.~1991; Watson, King \& Williams 1987), and from EINSTEIN, $N_{\rm H} < 1 \times 10^{20}$\,cm$^{-2}$ (BSP87). 
The integrated flux in this component was $F_X = 7 \times 10^{-14}$\,erg\,cm$^{-2}$\,s$^{-1}$\ (0.1 -- 10 keV). Observation E401 was performed with EPIC-MOS only and resulted in the detection of 26/20 photons in 5099/5088\,s with MOS1/2, respectively. The spectrum was found to be very soft again; an unconstrained fit yielded $kT = 0.5 \pm 1$\,keV, but the spectral parameters remained highly uncertain due to the small number of photons. A fit using the same parameters as for E501, see below, yielded a flux of $F_X \simeq 4 \times 10^{-14}$\,erg\,cm$^{-2}$\,s$^{-1}$\ (0.1 -- 10 keV), slightly indicative of a lower X-ray flux at that epoch. Observation E501 was performed with all three X-ray cameras onboard and revealed 140, 43, and 40 source photons with EPIC-PN, MOS1, and MOS2, respectively. The mean spectrum, which is also a good approximation to the orbital mean spectrum, is shown in Fig.~\ref{f:spec501}. Again, it is a soft spectrum which could be fitted with just one emission component (reduced $\chi^2 = 0.81$ for 33 d.o.f.). The best-fit plasma temperature of the MEKAL model is $kT = 1.7 \pm 0.2$\,keV, and the integrated flux $F_X = 6 \times 10^{-14}$\,erg\,cm$^{-2}$\,s$^{-1}$\ (0.1 -- 10 keV). The $O-C$ residuals of such a fit (see Fig.~\ref{f:spec501}) give the slight impression of a systematic trend, with an excess of photons between 0.5--1.0\,keV. However, the parameters of any additional spectral component cannot be constrained significantly, and we thus stick to a one-component X-ray spectrum. \begin{figure}[t] \resizebox{\hsize}{!}{\includegraphics[angle=-90,clip=]{EFEri_501_PN_MOS_spec_wabs_mekal_1_1.ps}} \caption{Mean X-ray spectrum of EF\,Eri\ (observation E501) and best-fit thermal plasma model. } \label{f:spec501} \end{figure} We found no significant variability of the total X-ray flux between the three XMM-Newton observations. X-ray variability within E501, the longest of the three observations, was almost insignificant. 
A binned light curve with bin size 243 s (20 phase bins per orbital cycle) shows one bin with no source photon. It occurs at phase 0.7, i.e.~it cannot be associated with the high-state absorption dip (if the accuracy of Piirola's period is as high as the formal uncertainties given in their paper suggest). Given the small number of X-ray photons a secure claim on the existence of a dip cannot be made. \begin{figure}[ht] \resizebox{\hsize}{!}{\includegraphics[angle=-90,clip=]{EFEri_201_OM_V_lc_p.ps}} \caption{OM light curve through $V$-filter from observation E201. The bin size is 243\,s corresponding to 0.05 phase units.} \label{f:omlc} \end{figure} \subsection{Optical/UV observations with the OM} The optical monitor OM was used in observations E201 and E501 with the $V$ and $UVW1$ filters, respectively (see Tab.~\ref{t:log} for details). In E201 full phase coverage was achieved; in E501 only parts of the orbital cycle were covered with the two filters. During E201, the mean count rate in the $V$-filter (phase 0.16 -- 1.61) was 0.88(8)\,s$^{-1}$\ corresponding to a mean flux of $F_V = 2.6(2) \times 10^{-16}$\,erg\,cm$^{-2}$\,s$^{-1}$\,\AA$^{-1}$. During E501 the mean count rate through the $V$-filter (phase interval 0.90 -- 1.55) was 0.87(5)\,s$^{-1}$\ corresponding to $F_V = 2.2(1) \times 10^{-16}$\,erg\,cm$^{-2}$\,s$^{-1}$\,\AA$^{-1}$, and through the $UVW1$-filter (phase interval 0.67 -- 1.17) it was 1.32(4)\,s$^{-1}$\ corresponding to $F_{UVW1} = 6.5(2) \times 10^{-16}$\,erg\,cm$^{-2}$\,s$^{-1}$\,\AA$^{-1}$. Some modulation of the optical flux at a level of about 50\% was discovered in E201 (Fig.~\ref{f:omlc}) with a minimum around phase zero. The low-state light curves by Szkody et al.~(2006) show a minimum at phase 0.4. The phase difference between the two epochs obtained by using either Bailey's period, used by Szkody et al.~(2006), or Piirola's period, used here, is negligible, 0.01 phase units. 
Hence, this phase shift, if a real feature of the light curve, cannot be explained by different phase conventions. However, both data sets were obtained with small telescopes and are rather noisy. Given the rather large error bars on individual light curve bins, we do not discuss possible differences between the light curves any further. We note, however, that the accumulated phase difference from Bailey's zero point in 1979 to the epochs of the GALEX (2004) or XMM-Newton (2003) observations is 0.18 phase units with either Bailey's or Piirola's period and thus not negligible. Formally, the period derived by Piirola et al.~should be preferred due to the claimed higher accuracy, but an independent re-determination of the linear polarization ephemeris is highly desirable, should EF\,Eri\ ever return to a high accretion state. The $UVW1$-filter is centered at 2910\,\AA, between the GALEX-NUV passband (Szkody et al.~2006) and the optical broad-band filters. Mean flux values as measured with the OM are shown together with other low-state photometric and spectroscopic data (Harrison et al.~2004, Szkody et al.~2006) from the infrared to the ultraviolet spectral range in Fig.~\ref{f:seduvopt}. It shows that the different low-state observations are compatible with each other. Based on an analysis of their low-state {\it GALEX} ultraviolet and optical photometry, Szkody et al.~(2006) infer the existence of a light source reminiscent of a 20000\,K hot spot. Their spot model, however, could neither explain the large-amplitude FUV variations nor the spectral energy distribution, and they arrive at the conclusion that {\it no} spot model can explain their observations. The analysis by Szkody et al.~(2006) was based on an assumed effective temperature $T_{\rm eff} = 9500$\,K (Beuermann et al.~2000) which they approximated as a Planckian function. We note that this approximation indeed gives rise to a large ultraviolet excess.
We re-address the question of the white dwarf and spot temperature making use of state-of-the-art white dwarf model atmospheres (Koester et al.~2005 and references therein). The optical spectrum alone is best described by a model with $T_{\rm eff} = 10500\pm1000$\,K. This value is in accord with the more recent analysis by the G\"ottingen group (Beuermann et al.~2007), who used $T_{\rm eff} = 11000 \pm 1500$\,K. The pure white dwarf model falls short of matching the observed ultraviolet flux. We therefore fitted the SED at orbital minimum and maximum and the ultraviolet/optical light curves with a two-temperature model, a cooler one representing the white dwarf and a hotter one representing a spot. We folded our white-dwarf model spectra with the effective area curves of the two {\it GALEX} passbands\footnote{http://galexgi.gsfc.nasa.gov/tools/Resolution\_Response/index.html}, thus converting Eddington flux to count rate. Size, temperature and location of the spot and the temperature of the white dwarf were varied until a satisfactory fit (by eye) to the optical and ultraviolet light curves and the SED was reached. We arrived at a consistent solution for $T_{\rm wd} = 9750$\,K and $T_{\rm spot} = 18500$\,K, with an estimated uncertainty of 1000\,K in the white dwarf temperature. The spot temperature is subject to much larger systematic uncertainties, since e.g.~our assumption of a uniform spot temperature is a very crude approximation. The model spectra shown in Fig.~\ref{f:seduvopt} at orbital minimum and maximum were computed for a binary inclination $i=60\degr$, a spot extent of $24\degr$ (half opening angle), and a spot colatitude of just $12.5\degr$. A rather high inclination and a high `northern' spot latitude (so that the spot undergoes only a partial self-eclipse) are required by the fact that the FUV-band is almost completely dominated by the spot.
Even at orbital minimum the white dwarf contributes only $\sim$10\% to the total flux in that band (see the lowest model curve in Fig.~\ref{f:seduvopt}). Our model is simple and far from being unique, but it fits the data well and contradicts the conclusion by Szkody et al.~(2006) that no spot model can explain both the SED and the variability. Combining parallaxes, proper motions and absolute magnitude constraints, Thorstensen (2003) derived distance estimates for 14 CVs with a Bayesian method, among them EF\,Eri. Depending on the proper-motion, magnitude and velocity priors, he derived a short, $d = 113^{+19}_{-16}$\,pc, and a long, $d = 163^{+66}_{-50}$\,pc, distance to EF\,Eri. At 113\,pc the observed flux of our 9750\,K white dwarf model results in a radius of $6.5 \times 10^{8}$\,cm and implies a relatively massive white dwarf of 0.87\,M$_\odot$. At 163\,pc the implied mass is $\sim$0.55\,M$_\odot$. These numbers differ slightly from those derived in Beuermann et al.~(2000), since the spot contribution was taken into account in our analysis. \begin{figure}[t] \resizebox{\hsize}{!}{\includegraphics[clip=]{sed_uv_opt.ps}} \caption{Ultraviolet to infrared spectral energy distribution of EF\,Eri\ in the low state. Shown are an optical low-state spectrum and infrared JHK photometry adapted from Harrison et al.~(2004), GALEX-UV and optical BVRI photometry from Szkody et al.~(2006, blue dots), OM $V$-band and $UVW1$ photometry (red dots), and the result of our spectral synthesis with a two-temperature model (see text for details). Whenever possible, orbital minimum and maximum brightness are indicated and connected by lines.} \label{f:seduvopt} \end{figure} \section{Results and discussion} We have analysed archival XMM-Newton observations of EF\,Eri\ obtained in 2002 and 2003. On all three occasions the polar was detected as an X-ray source, although at a very low flux level. The spectra were compatible with emission from a low-temperature corona-like plasma.
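The two mass estimates above can be reproduced approximately from the distance scaling of the radius alone, if one adopts a white-dwarf mass--radius relation. The Nauenberg-type relation used below is an assumption of this sketch; the text does not state which relation underlies the quoted masses, so the agreement is only indicative:

```python
# Radius scales linearly with distance for a fixed observed flux and model:
R_113 = 6.5e8                        # cm, at d = 113 pc (from the text)
R_163 = R_113 * 163 / 113            # ~9.4e8 cm at d = 163 pc

M_CH = 1.44                          # Chandrasekhar mass, solar masses

def nauenberg_radius(m):
    """Nauenberg-type mass-radius relation, radius in cm (assumed, not from the text)."""
    x = (M_CH / m) ** (2 / 3) - (m / M_CH) ** (2 / 3)
    return 7.8e8 * x ** 0.5

def mass_from_radius(r, lo=0.2, hi=1.4):
    """Invert the relation by bisection; R(M) is monotonically decreasing here."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if nauenberg_radius(mid) > r:   # radius too large -> mass too small
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(mass_from_radius(R_113), 2))  # ~0.86, close to the 0.87 Msun quoted at 113 pc
print(round(mass_from_radius(R_163), 2))  # ~0.5, close to the ~0.55 Msun quoted at 163 pc
```

The small offsets from the quoted values are expected, since the paper derives masses from Koester model atmospheres rather than from this approximate relation.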
The longest X-ray observation had almost full phase coverage; the plasma temperature was as low as 2\,keV or less. The mean orbital integrated flux in this component is about $F_X = 6 \times 10^{-14}$\,erg\,cm$^{-2}$\,s$^{-1}$. Assuming isotropic radiation and a distance of 163\,pc (Thorstensen 2003), a luminosity of $L_X \sim 2 \times 10^{29}$\,erg\,s$^{-1}$\ is derived. The question arises whether this faint X-ray flux originates from the corona of the secondary or from the accretion region on the white dwarf. Neither the X-ray variability nor the X-ray spectrum gives a clear answer: both a coronal plasma and the cooling plasma from low-level accretion have temperatures as low as measured here. Evidence for X-ray emission from an accretion plasma can be given indirectly. Firstly, although not very much is known about X-ray emission from degenerate stars at the bottom of the main sequence, their X-ray luminosities seem to fall short by one dex with respect to the X-ray luminosity of EF\,Eri\ (Stelzer et al.~2006). Secondly, EF\,Eri\ shows clear signs of residual accretion via the detection of infrared cyclotron harmonics (Harrison et al.~2004). It therefore appears reasonable to assign the observed X-ray emission to some remaining weak accretion. This will be our working hypothesis in the following. It remains unclear whether residual accretion happens via an accretion stream or via a stellar wind, the latter being inferred in order to explain the faint X-ray emission from the small group of pre-CVs (also termed LARPs; Schwope et al.~2002, Schmidt et al.~2005, Vogel et al.~2006). Since the emission region in EF\,Eri\ is not self-eclipsing (Beuermann et al.~1987, 1991), we lack a distinct photometric feature to discern between the two possibilities.
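The luminosity quoted above follows from the standard isotropic relation $L_X = 4\pi d^2 F_X$; a minimal numerical cross-check, with flux and distance taken from the text and the usual parsec-to-centimetre conversion assumed:

```python
import math

PC_CM = 3.086e18          # 1 parsec in cm (standard conversion)
F_X = 6e-14               # erg/cm^2/s, orbital mean flux of E501 (from the text)
d = 163 * PC_CM           # adopted distance (Thorstensen 2003)

L_X = 4 * math.pi * d**2 * F_X   # isotropic luminosity in erg/s
print(f"L_X = {L_X:.1e} erg/s")  # ~2e29 erg/s, as quoted in the text
```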
The pronounced soft X-ray absorption dip, a sign of stream accretion seen in high accretion states, was not securely detected here, but due to the small number of photons its absence does not give a clear-cut answer to the question of the accretion mode. \begin{figure}[t] \resizebox{\hsize}{!}{\includegraphics[clip=]{sed_ir_x_mod.ps}} \caption{Infrared to X-ray spectral energy distribution of EF\,Eri\ in the low state. Shown are radiation components which are associated with the accretion process, i.e.~corrected for stellar photospheric radiation, at orbital maximum. The red arrow indicates the cyclotron fundamental for $B=14$\,MG.} \label{f:sedirx} \end{figure} We discuss the energy balance of the accretion process in the low state on the assumption that the observed X-ray emission is due to accretion onto the white dwarf primary. We make the further assumption that the excess emission in the infrared over the extrapolated white dwarf spectrum is solely due to cyclotron emission from the accretion plasma. The relevant radiation components are shown in Fig.~\ref{f:sedirx}: the coronal plasma in the X-ray regime and the cyclotron component in the infrared, the latter corrected for the contribution from the underlying white dwarf. Included in the figure is the spot model at orbital maximum, represented by a white dwarf model spectrum with $T_{\rm eff} = 18500$\,K. The integrated flux in the thermal plasma X-ray component is $F_X = 6\times 10^{-14}$\,erg\,cm$^{-2}$\,s$^{-1}$. At an assumed field strength of 13\,MG (Beuermann et al.~2007) the cyclotron fundamental is at about $8\,\mu$m. If we assume a flux rising in the Rayleigh-Jeans limit up to the $H$-band, where the cyclotron component peaks (Harrison et al.~2004 and Figs.~\ref{f:seduvopt} and \ref{f:sedirx}), the integrated cyclotron flux is about $F_{\rm cyc} \simeq 4\times 10^{-12}$\,erg\,cm$^{-2}$\,s$^{-1}$.
This value has an uncertainty of at least 50\%, since the cyclotron spectrum is only partly covered by observations. There is no evidence for any component of re-processed radiation in the extreme ultraviolet/soft X-ray regime. We assume that the spot component in the ultraviolet carries the complete information on reprocessed radiation. The integrated flux in this component is of order $F_{\rm rep} \simeq 2 \times 10^{-11}$\,erg\,cm$^{-2}$\,s$^{-1}$. This is clearly larger than the primary cyclotron radiation and a factor $\sim$300 larger than the thermal X-ray component. The large amount of radiation in the ultraviolet in excess of the primary radiation components is suggestive of a reservoir of heat from the previous high state and does not support a picture of instantaneous reprocessing. Clearly, more UV observations are necessary to verify this picture by determining the cooling curve of the accretion spot. We thus derive a flux balance $F_{\rm UV} \simeq 5 \times F_{\rm cyc}$ and $F_{\rm cyc} \sim 60 F_X$, the latter ratio being suggestive of accretion in the bombardment regime at low mass flow rates and/or high magnetic field. The field of EF\,Eri\ is one of the lowest among all polars, which made the binary a rather hard X-ray source with almost balanced flux contributions in the high state (see the detailed discussion in BSP87). Since Beuermann et al.~found EF\,Eri\ to behave as described in the `standard' accretion scenario (Lamb \& Masters 1979), they termed it `the textbook example of AM Herculis stars'. This picture changed fundamentally in the low state. Beuermann (2004) presented a sequence of model spectra for $B=14$\,MG, $\Theta = 60\degr$ and variable mass flow rate $\dot{m}$ (in g\,cm$^{-2}$\,s$^{-1}$). The values of $B$ and $\Theta$ are quite similar to those of EF\,Eri. We include his model for $\dot{m} =10^{-2}$\,g\,cm$^{-2}$\,s$^{-1}$ in Fig.~\ref{f:sedirx}.
This model predicts the right flux ratio (to within an order of magnitude) between cyclotron and X-ray flux. At even lower mass flow rates (the next smaller value computed by Beuermann 2004 is $\dot{m} =10^{-3}$\,g\,cm$^{-2}$\,s$^{-1}$), the flux ratio $F_{\rm cyc}/F_X$ becomes much smaller than observed. Also, the predicted size of the cyclotron emitting area would be larger than the white dwarf (for an assumed distance of 120\,pc). We thus regard $\dot{m} = 10^{-2}$\,g\,cm$^{-2}$\,s$^{-1}$ as the likely value for EF\,Eri\ in its low accretion state. Fig.~\ref{f:sedirx} shows that the blackbody approximation is not appropriate for the reprocessed component. This was noted already by Beuermann (2004); the low-state observations of EF\,Eri\ prove it observationally. At $\dot{m} = 10^{-2}$\,g\,cm$^{-2}$\,s$^{-1}$ and $B=13$\,MG the maximum predicted electron temperature is about 7\,keV (Fischer \& Beuermann 2001). Our measured temperature $kT \simeq 1.7$\,keV (E501) indicates that the bulk of the X-ray emission originates from denser layers at lower temperatures, as expected. A comparison between high- and low-state fluxes of the main radiation components is instructive. For the mean fluxes in the high state, BSP87 derive $F_{\rm cyc} = 4.8 \times 10^{-11}$\,erg\,cm$^{-2}$\,s$^{-1}$, $F_{\rm brems} = 1.5 \times 10^{-10}$\,erg\,cm$^{-2}$\,s$^{-1}$, and $F_{\rm bb} = 5.5 \times 10^{-10}$\,erg\,cm$^{-2}$\,s$^{-1}$, respectively. Hence, when switching from the high to the low state, the cyclotron flux is reduced by a factor $\sim$10, and the flux in the thermal plasma component by a factor 2500. These numbers illustrate the variable occupation of the different channels of energy release when switching from a high-accretion-rate, shock-dominated flow to the low-rate, cyclotron-dominated bombarded atmosphere.
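The reduction factors quoted in this comparison follow directly from the numbers above; a minimal sketch, with all fluxes taken from the text and from BSP87 as quoted there:

```python
# High-state fluxes (BSP87) and low-state fluxes (this work), in erg/cm^2/s
F_cyc_high, F_cyc_low = 4.8e-11, 4e-12       # cyclotron component
F_therm_high, F_therm_low = 1.5e-10, 6e-14   # bremsstrahlung vs. low-state thermal plasma

print(f"cyclotron flux reduced by a factor {F_cyc_high / F_cyc_low:.0f}")    # 12, i.e. ~10
print(f"thermal plasma flux reduced by a factor {F_therm_high / F_therm_low:.0f}")  # 2500
```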
A direct comparison between the high- and low-state fluxes in the re-processed component seems not to be possible, since a counterpart to the high-state blackbody is missing in the low state. The low-state ultraviolet flux seems still to be fed by high-state accretion heating. The bolometric flux in the low state is $F_{\rm acc} \simeq F_{\rm cyc} \simeq 4 \times 10^{-12}$\,erg\,cm$^{-2}$\,s$^{-1}$; the luminosity is $L_{\rm acc} \simeq 2 \pi d^2 F_{\rm acc} \simeq 2.4 \times 10^{30} d_{100}^2$\,erg\,s$^{-1}$, a factor 100 -- 300 lower than the high-state accretion luminosity derived by BSP87. We may also compare the cyclotron emitting areas in the high and low states. BSP87 derive $A_{\rm cyc} = (3.2-12.2) \times 10^{15} d_{100}^2$\,cm$^{2}$, while scaling of the model shown in Fig.~\ref{f:sedirx} implies $A_{\rm cyc} \simeq 30 \times 10^{15} d_{100}^2$\,cm$^{2}$. Again, care has to be taken not to take the latter number too literally, since we have just scaled a pre-existing model, but it implies that the cyclotron emitting area has not shrunk by orders of magnitude. A final comment may be made on EF\,Eri\ as a polar and its relation to the class of (likely misleadingly termed) LARPs (Low-Accretion Rate Polars; Schwope et al.~2002). The latter are close white dwarf/M dwarf pairs with pronounced cyclotron harmonics which led to their discovery in the HQS and the SDSS spectroscopic surveys. They are likely detached binaries accreting from the stellar wind of the secondary (Schmidt et al.~2005, Vogel et al.~2007). EF\,Eri\ has now been in a deep low state for about 10 years. It is faint, its optical spectrum shows just the white dwarf and no secondary, variability in the optical is small, $\Delta V < 0.3^m$, and its X-ray flux is faint and below the flux limits of all surveys with sufficiently large survey area. Hence, none of the classical discovery channels of CVs would have led to the identification of EF\,Eri\ as a rather nearby cataclysmic variable in its extended low state.
It would have been classified just as an isolated magnetic white dwarf in a spectroscopic survey. But it shows an infrared excess, which could be (erroneously) interpreted as originating from a secondary and therefore would hint at the binary nature of this source. Actually it still shows a high degree of variability, although most pronounced in the ultraviolet. And finally, it still emits X-rays, but at the given flux the X-ray sky is dominated by AGNs. Hence, it still shows many hallmarks of the polars, and in that respect we may term EF\,Eri\ the textbook example of the low-accretion rate polars. We are not going to speculate about a population of missing CVs in similar extended low states. But the case of EF\,Eri\ as a secure low-accretion rate polar underlines the importance of a multi-wavelength approach to find more of these intriguing sources. \begin{acknowledgements} We thank our referee, Klaus Beuermann, for helpful comments which improved the manuscript. We thank V. Hambaryan and G. Lamer for help with the data reduction. This work was supported in part by the Deutsches Zentrum f\"ur Luft- und Raumfahrt (DLR) GmbH under contract No. FKZ 50 OR 0404 and by the Deutsche Forschungsgemeinschaft DFG under contract No.~Schw536/20-1. \end{acknowledgements}
\section{Introduction} \label{introducton} Discrete elliptic operators can be seen as the discrete counterpart of elliptic partial differential operators. In particular, positive semi-definite Schr\"{o}dinger operators defined on a finite network are examples of such self-adjoint operators. Any elliptic operator defines an automorphism on the orthogonal subspace to the eigenfunctions associated with the lowest eigenvalue, whose inverse is the orthogonal Green operator. In \cite{CEM14}, some of the authors analyzed the effect of a perturbation of the network by computing the effective resistance of the perturbed networks through Sherman--Morrison--Woodbury-like formulas, instead of using the Sherman--Morrison formula recursively. In fact, since adding edges to a network does not modify the space of functions on the vertex set of the network, this class of perturbations was placed into the general framework of perturbations of discrete elliptic operators. Specifically, we showed that this problem corresponds to the superposition of rank one perturbations that are orthogonal to the eigenfunction associated with the lowest eigenvalue of the elliptic operator. The scenario changes when the perturbation consists of adding new vertices to the network. Only a few works have tackled the problem of adding a new vertex; see for instance \cite{B85}. In this work, we consider perturbations that consist of adding a new vertex to a network. After some well-known operations on the Schr\"odinger operator of the perturbed network, which involve the inverse of the Schur complement of the block corresponding to the added vertices, we show that this Schur complement can be seen as a perturbation of the Schr\"odinger operator of the original network, understood as a discrete elliptic operator, that is a superposition of rank one perturbations that, this time, are not orthogonal to the eigenfunction associated with the lowest eigenvalue of the elliptic operator.
Therefore, we can apply the general theory developed in \cite{CEM14} for this kind of perturbation. We start the study by revisiting the perturbation of an elliptic operator by a sum of projections that may or may not be orthogonal to the eigenfunction associated with the smallest eigenvalue. Thus, we express the Green operator of the new operator in terms of the Green operator of the previous one. The next section is devoted to the application of the mentioned results to the addition of a new vertex to the network $\Gamma$ in order to get a network $\Gamma'$. Moreover, we obtain the relation between the Schr\"{o}dinger operators of the two networks $\Gamma$ and $\Gamma'$ and, in addition, we give the explicit expression of the matrix associated with the Green operator. \section{Specific notation and preliminary results} \label{} Given a finite set $V$ of $n$ elements, we denote by $\mathcal{C}(V)$ the space of real valued functions on $V$. For any function $u\in \mathcal{C}(V)$, the associated vector in $\mathbb{R}^n$ will be denoted by ${\sf u}$. For any vertex $x \in V$, the Dirac function at $x$ is denoted by $\varepsilon_x\in \mathcal{C}(V)$; the scalar product on $\mathcal{C}(V)$ is $\langle u,v \rangle = \sum_{x\in V} u_xv_x$ for each $u, v\in \mathcal{C}(V)$. A unitary and positive function $\omega$ is called a weight, and $\Omega(V)$ denotes the set of weights. If $\mathcal{K}$ is an endomorphism of $\mathcal{C}(V)$, it is self-adjoint when $\langle \mathcal{K}(u),v\rangle=\langle u,\mathcal{K}(v)\rangle$ for any $u,v\in \mathcal{C}(V)$. Moreover, $\mathcal{K}$ is positive semi-definite when $\langle \mathcal{K}(u),u\rangle \geq 0$ for any $u\in \mathcal{C}(V)$. A self-adjoint operator $\mathcal{K}$ is elliptic if it is positive semi-definite and its lowest eigenvalue $\lambda$ is simple.
Moreover, there exists a unique unitary function $\omega \in \mathcal{C}(V)$, up to sign, satisfying $\mathcal{K}(\omega)=\lambda \omega$, so $\mathcal{K}$ is called a $(\lambda,\omega)$-elliptic operator. It is straightforward that a $(\lambda,\omega)$-elliptic operator is singular iff $\lambda=0$. We set $\lambda^\dag=\lambda^{-1}$ if $\lambda\neq 0$, and $\lambda^\dag=0$ otherwise. Any function $K:V\times V\rightarrow \mathbb{R}$ is called a kernel on $V$, and it determines an endomorphism of $\mathcal{C}(V)$ by assigning to any $u\in \mathcal{C}(V)$ the function $\mathcal{K}(u)=\sum_{y\in V}K(\cdot,y)u(y)$. Conversely, each endomorphism of $\mathcal{C}(V)$ is determined by the kernel $K(x,y)=\langle \mathcal{K}(\varepsilon_y),\varepsilon_x\rangle$ for any $x$, $y\in V$. Therefore, an endomorphism $\mathcal{K}$ is self-adjoint iff its kernel $K$ is a symmetric function. Given $\sigma$, $\tau\in \mathcal{C}(V)$, we denote by $\P_{\sigma,\tau}$ the endomorphism of $\mathcal{C}(V)$ that assigns to each $u\in \mathcal{C}(V)$ the function $\P_{\sigma,\tau}(u)=\langle \tau, u\rangle \sigma$, and it is called a projector, as it assigns to any $u\in \mathcal{C}(V)$ its projection on $\sigma$ along $\tau$. Observe that the corresponding kernel is $P_{\sigma,\tau}(x,y)=(\sigma \otimes \tau)(x,y)=\sigma(x)\tau(y)$. In particular, when $\omega\neq 0$ the endomorphism $\P_{\omega,\omega}$ is denoted simply by $\P_{\omega}$. Given $\lambda\geq0$, $\omega\in\Omega(V)$ and a $(\lambda,\omega)$-elliptic operator $\mathcal{F}$, we will be concerned with the so-called Poisson equation for $\mathcal{F}$ on $V$: for a given $f\in \mathcal{C}(V)$, find $u\in \mathcal{C}(V)$ such that $\mathcal{F}(u)=f$. As $\mathcal{F}$ defines an automorphism on $\omega^\perp$, the inverse of a $(\lambda,\omega)$-elliptic operator $\mathcal{F}$ on $\omega^{\perp}$ is called the orthogonal Green operator and is denoted by $\mathcal{G}$.
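For the singular case $\lambda=0$, the orthogonal Green operator coincides with the Moore--Penrose inverse of $\mathcal{F}$, and the defining identities $\mathcal{F}\circ\mathcal{G}=I-\P_{\omega}$ and $\mathcal{G}(\omega)=0$ can be checked numerically. The matrix below (the Laplacian of a path on four vertices, with constant eigenfunction $\omega$) is an illustrative choice for this sketch, not taken from the text:

```python
import numpy as np

# A singular positive semi-definite matrix: the Laplacian of a path on 4 vertices.
F = np.array([[ 1., -1.,  0.,  0.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [ 0.,  0., -1.,  1.]])

# Lowest eigenvalue is 0 and simple, with unitary eigenfunction omega = constant.
omega = np.ones(4) / 2.0                 # ||omega|| = 1
G = np.linalg.pinv(F)                    # orthogonal Green operator for lambda = 0

P_omega = np.outer(omega, omega)         # kernel of the projection P_omega
assert np.allclose(F @ G, np.eye(4) - P_omega)   # F G = I - P_omega
assert np.allclose(G @ omega, 0)                 # G vanishes on span(omega)
print("Green operator identities verified")
```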
This operator on $\omega^{\perp}$ can be extended to $\mathcal{C}(V)$ by assigning to any $f\in \mathcal{C}(V)$ the unique solution of the Poisson equation $\mathcal{F}(u)=f-\P_{\omega}(f)$. Now consider a non-null function $\sigma\in \mathcal{C}(V)$, the associated self-adjoint projection $\P_{\sigma}$ and the operator $\H_{\sigma}=\mathcal{F}+\P_{\sigma}$, called the perturbation of $\mathcal{F}$ by $\sigma$. The relation between the Green operators of $\H_\sigma$ and $\mathcal{F}$ can be found in \cite[Corollary 3.6]{CEM14}. \begin{corollary} \label{corol1} Consider $\sigma\in \mathcal{C}(V)$. If $\lambda=\langle \sigma, \omega \rangle=0$, then $$\mathcal{G}^{\H_\sigma}= \mathcal{G} - \frac{1}{1+\langle \mathcal{G}(\sigma),\sigma \rangle}\P_{\mathcal{G}(\sigma)},$$ whereas when either $\lambda>0$ or $\langle \sigma, \omega \rangle\neq 0$, then $\H_\sigma$ is invertible and \begin{eqnarray*} \H^{-1}_\sigma&=&\mathcal{G}- \frac{1}{\beta} \Big[\lambda \P_{\mathcal{G}(\sigma)} - \langle \sigma,\omega \rangle \left(\P_{\mathcal{G}(\sigma),\omega} - \P_{\omega,\mathcal{G}(\sigma)}\right) -\big(1+\langle \mathcal{G}(\sigma),\sigma\rangle\big)\P_{\omega} \Big], \end{eqnarray*} where $\beta=\lambda(1+\langle \mathcal{G}(\sigma),\sigma \rangle)+\langle \sigma,\omega \rangle^2$. \end{corollary} Moreover, if we consider $\sigma_i\in\mathcal{C}(V)$, $i=1,\ldots,m+\ell$, such that $\sigma_i\notin \omega^\bot$ for $i=1,\dots,m$ and $\sigma_{m+i}\in \omega^\bot$ for $i=1,\ldots,\ell$, then the operator $$\H=\mathcal{F}+\sum_{i=1}^{m+\ell}\P_{\sigma_i}$$ is a perturbed operator. Here, $m$ or $\ell$ can be equal to $0$. The relation between the corresponding inverse operators is given in the following theorem.
\begin{theorem}\cite[Theorem 3.5]{CEM14} \label{theorem35} The operator $\H$ is positive semi-definite, and positive definite when $m\ge 1$; moreover $${\mathcal H}^{\dag}= \displaystyle \mathcal{G}+ h \P_{\omega}+\sum_{i=1}^{m+\ell} h_i [\P_{\mathcal{G}(\sigma_i),\omega}- \P_{\omega,\mathcal{G}(\sigma_i)}]-\sum_{i,j=1}^{m+\ell} h_{ij}\P_{\mathcal{G}(\sigma_i),\mathcal{G}(\sigma_j)},$$ where $(b_{ij})=(I+\langle\mathcal{G}(\sigma_j),\sigma_i\rangle)^{-1}$, and $$\begin{array}{rl} h=& \hspace{-.25cm}\displaystyle \left(\lambda+\sum_{r,s=1}^{m} b_{rs}\langle\sigma_r,\omega\rangle\langle\sigma_s,\omega\rangle\right)^{\dag}, \\[2ex] h_i =& \hspace{-.25cm}h \displaystyle\sum_{r=1}^{m} b_{ir}\langle\sigma_r,\omega\rangle, \hspace{.25cm}i=1,\ldots,m+\ell,\\[2ex] h_{ij}=&\hspace{-.25cm}\displaystyle b_{ij}-h\left(\sum_{r=1}^{m}b_{ir}\langle\sigma_r,\omega\rangle \right)\left(\sum_{r=1}^{m} b_{jr}\langle\sigma_r,\omega\rangle \right), \hspace{.25cm}i,j=1,\ldots,m+\ell. \end{array}$$ \end{theorem} On the other hand, the {\it Schur complement} provides us with a fundamental tool for the results in the following sections. \begin{lemma} \label{lemma1} If $\AA\in \mathcal{M}_{n\times n}$, $\sf{B}\in \mathcal{M}_{n\times m}$, $\sf{D}\in \mathcal{M}_{m\times m}$ is invertible, and $\sf{S}=\AA-\sf{B}\sf{D}^{-1}\sf{B}^{\intercal}$, then $$ \left(\begin{array}{cc} \AA & \sf{B}\\ \sf{B}^{\intercal} & \sf{D} \end{array} \right)^{\dag} = \left( \begin{array}{cc} \sf{S}^{\dag} & -\sf{S}^{\dag}\sf{B}\sf{D}^{-1}\\ -\sf{D}^{-1}\sf{B}^{\intercal}\sf{S}^{\dag}& \sf{D}^{-1}+\sf{D}^{-1}\sf{B}^{\intercal}\sf{S}^{\dag}\sf{B}\sf{D}^{-1} \end{array} \right),$$ where $C^{\dag}$ stands for the Moore--Penrose inverse of the matrix $C$.
\end{lemma} \section{Adding a new vertex} \label{} In the following, the triple $\Gamma=(V,E,c)$ denotes a finite network, i.e., a connected graph without loops or multiple edges, with vertex set $V$ of cardinality $n$ and edge set $E$, in which each edge $e_{xy}\in E$ has an assigned value $c(x,y)>0$ called its conductance. The conductance $c$ is a symmetric function $c: V\times V \rightarrow [0,\infty)$ such that $c(x,x)=0$ for any $x\in V$, and vertex $x$ is adjacent to vertex $y$ iff $c(x,y)>0$. Given a weight $\omega\in \Omega(V)$ on $V$, for any pair of vertices $(x,y)\in V\times V$ the $\omega$-dipole between $x$ and $y$ is the function $\tau_{xy}=\frac{\varepsilon_x}{\omega(x)}-\frac{\varepsilon_y}{\omega(y)}$. The Laplacian of the network $\Gamma$ is the endomorphism of $\mathcal{C}(V)$ that assigns to each $u\in \mathcal{C}(V)$ the function $$\L(u)(x)=\sum_{y\in V}c(x,y)[u(x)-u(y)], \quad x\in V.$$ The Laplacian is a singular elliptic operator on $\mathcal{C}(V)$ and moreover $\L(u)=0$ iff $u$ is a constant function. Given $q\in \mathcal{C}(V)$, the Schr\"odinger operator on $\Gamma$ with potential $q$ is the endomorphism of $\mathcal{C}(V)$ that assigns to each $u\in \mathcal{C}(V)$ the function $\L_q(u)=\L(u)+qu$. Given a weight $\omega\in \Omega(V)$, the potential determined by $\omega$ is the function $q_{\omega}=-\frac{1}{\omega}\L(\omega)$. It is well known that the Schr\"odinger operator $\L_q$ is $(\lambda,\omega)$-elliptic iff $q=q_{\omega}+\lambda$, see \cite{BCE12}. Moreover, it is singular iff $\lambda=0$, and then $\L_{q_\omega}(v)=0$ iff $v=a\omega$, $a\in \mathbb{R}$. We denote by $\mathcal{G}_{\lambda,\omega}$ the orthogonal Green operator associated with $\L_q$ and by $\sf{G}_{\lambda,\omega}$ its corresponding kernel. From now on, we fix the value $\lambda\geq 0$, the weight $\omega \in \Omega(V)$ and the Schr\"odinger operator $\mathcal{L}_{q}$ with $q=q_{\omega}+\lambda$.
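The construction of a $(\lambda,\omega)$-elliptic Schr\"odinger operator from a network Laplacian and the potential $q=q_\omega+\lambda$ can be illustrated numerically; the conductances, weight, and value of $\lambda$ below are arbitrary illustrative choices, not taken from the text:

```python
import numpy as np

# Conductance matrix of a small network (symmetric, zero diagonal; arbitrary example)
C = np.array([[0., 2., 1.],
              [2., 0., 3.],
              [1., 3., 0.]])
L = np.diag(C.sum(axis=1)) - C           # network Laplacian

omega = np.array([1., 2., 2.])
omega = omega / np.linalg.norm(omega)    # weight: positive and unitary
lam = 0.5                                # lambda >= 0, arbitrary

q = -(L @ omega) / omega + lam           # potential q = q_omega + lambda
L_q = L + np.diag(q)                     # Schrodinger operator L_q(u) = L(u) + q u

# L_q is (lambda, omega)-elliptic: omega is an eigenfunction with eigenvalue lambda,
# and lambda is the lowest eigenvalue.
assert np.allclose(L_q @ omega, lam * omega)
evals = np.linalg.eigvalsh(L_q)          # ascending order
assert np.isclose(evals[0], lam)
print("L_q is (lambda, omega)-elliptic for this example")
```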
In this study we are concerned with perturbations of $\L_q$ obtained by adding a new vertex. Namely, let $x'$ be a new vertex and assume we connect it to $m$ vertices $x_1,\dots, x_m\in V$, where $1\leq m \leq n$. The new network $\Gamma'=(V',E',c')$ has vertex set $V'=V\cup \{x'\}$, edge set $E'=E\cup \{e_{x_1x'},\dots,e_{x_mx'}\}$ and conductance $$c'(x,y)= \left\{\begin{array}{ll} a_i>0 & \textrm{if} \; \;x=x_i\in V,\; y=x', \quad 1\leq i \leq m,\\ c(x,y) & \textrm{if} \; \;x,y\in V,\\ 0& \textrm{otherwise}. \end{array} \right. $$ If $\omega(x')$ is a positive value assigned to $x'$, we define a weight on $\Gamma'$ by $\omega'(x)=\omega(x)/\sqrt{1+\omega(x')^2}$, for any $x\in V'$. Observe that in this case it holds that $\omega(x)/\omega(y)=\omega'(x)/\omega'(y)$, for any $x$, $y\in V'$. The following notation is useful in what follows. For any $i=1,\ldots,m$, we denote $\rho_i=\sqrt{a_i\omega(x_i)\omega(x')}$, $\sigma_i=\dfrac{\rho_i}{\omega(x_i)}\varepsilon_{x_i}$, $\sigma=\sum_{i=1}^m a_{i}\varepsilon_{x_i}$ and $\alpha=\lambda +\dfrac{1}{\omega^2(x')}\sum_{i=1}^m \rho_i^2$. \begin{prop} If $\L'$ is the Laplacian of $\Gamma'$ and $p=q_{\omega'}+\lambda,$ where $q_{\omega'}=-\dfrac{\L'(\omega')}{\omega'},$ then $$\begin{array}{rll} \L_p'=&\hspace{-.25cm}\L_q+\sum\limits_{i=1}^m\P_{\sigma_i}-\P_{\sigma,\varepsilon_{x'}} &\mbox{ on } V,\\[1ex] \L_p'=&\hspace{-.25cm}-\P_{\varepsilon_{x'},\sigma}+\alpha\P_{\varepsilon_{x'}} &\mbox{ on } \{x'\}.
\end{array}$$ \end{prop} \proof For any $u\in\mathcal{C}(V')$ we have $$\begin{array}{rl} \L'u(x_i)=&\hspace{-.25cm}\L u(x_i)+a_i(u(x_i)-u(x')), \hspace{.25cm} i=1,\ldots,m,\\[1ex] \L'u(x')=&\hspace{-.25cm}\sum\limits_{i=1}^ma_i(u(x')-u(x_i)),\\[1ex] \L'u(x)=&\hspace{-.25cm}\L u(x), \hspace{.25cm} \hbox{ otherwise,} \end{array}$$ and in particular, $$\begin{array}{rl} q_{\omega'}(x_i)=&\hspace{-.25cm}q_\omega(x_i)-a_i+a_i\dfrac{\omega(x')}{\omega(x_i)}, \hspace{.25cm} i=1,\ldots,m,\\[1ex] q_{\omega'}(x')=&\hspace{-.25cm}-\sum\limits_{i=1}^ma_i +\dfrac{1}{\omega(x')}\sum\limits_{i=1}^ma_i\omega(x_i),\\[1ex] q_{\omega'}(x)=&\hspace{-.25cm}q_\omega(x), \hspace{.25cm} \hbox{ otherwise}. \end{array}$$ Therefore, $$\begin{array}{rl} \L'_pu(x_i)=&\hspace{-.25cm}\L_q u(x_i)+\dfrac{\rho_i^2}{\omega^2(x_i)}u(x_i)-a_iu(x'), \hspace{.25cm} i=1,\ldots,m,\\[3ex] \L'_pu(x')=&\hspace{-.25cm}\displaystyle \lambda u(x')+\dfrac{u(x')}{\omega^2(x')}\sum\limits_{i=1}^m\rho_i^2-\sum\limits_{i=1}^ma_iu(x_i),\\[3ex] \L'_pu(x)=&\hspace{-.25cm}\L_q u(x), \hspace{.25cm} \hbox{ otherwise}, \end{array}$$ and the result follows.\hfill {\hbox{\footnotesize{$\Box$}}} The relation between the matrices associated with the Schr\"odinger operators of $\Gamma$ and $\Gamma'$ is given by $$ {\sf L}'_{p}= \left(\begin{array}{cc} {\sf H} & -{\sf s}\\ -{\sf s}^{\intercal} & \alpha \end{array} \right),$$ where ${\sf H}$ is the matrix associated with the operator $$\mathcal{H}=\mathcal{L}_{q}+\sum_{i=1}^m\P_{\sigma_i},$$ and ${\sf s}$ is the column vector ${\sf s}=\sum_{i=1}^m a_{i}{\sf e}_{x_i}$, where ${\sf e}_{x}$, for $x\in V$, are the vectors of the canonical basis. In order to compute the Moore--Penrose inverse of ${\sf L}'_{p}$ we will use Lemma \ref{lemma1}, and it will be useful to introduce the following perturbations.
We define, for $k=1,\dots,m,$ $$ \pi_k=\sqrt{\dfrac{\lambda}{\alpha}}\sigma_k,$$ and for $i=1,\ldots,m-1$, $j=i+1,\ldots,m,$ let $k=\dfrac{(2m-1-i)i}{2}+j$ and $$ \pi_k=\dfrac{1}{\sqrt{\alpha}}\dfrac{\rho_i\rho_j}{\omega(x')}\left(\dfrac{\varepsilon_{x_i}}{\omega(x_i)}-\dfrac{\varepsilon_{x_j}}{\omega(x_j)}\right).$$ \begin{theorem} \label{maintheorem} The Moore--Penrose inverse of ${\sf L}'_{p}$ is given by $$ ({\sf L}'_{p})^{\dag} = \left( \begin{array}{cc} {\sf M} & -\frac{1}{\alpha}{\sf M}{\sf s}\\[1ex] -\frac{1}{\alpha}{\sf s}^{\intercal}{\sf M}& \frac{1}{\alpha}+\frac{1}{\alpha^2}{\sf s}^{\intercal}{\sf M}{\sf s} \end{array} \right), $$ where ${\sf M}$ is the matrix associated with the operator $$\displaystyle \mathcal{G}+ h \P_{\omega}+\sum_{i=1}^{\frac{m(m+1)}{2}} h_i [\P_{\mathcal{G}(\pi_i),\omega}- \P_{\omega,\mathcal{G}(\pi_i)}]-\sum_{i,j=1}^{\frac{m(m+1)}{2}} h_{ij}\P_{\mathcal{G}(\pi_i),\mathcal{G}(\pi_j)},$$ and where if $(b_{ij})=(I+\langle\mathcal{G}(\pi_j),\pi_i\rangle)^{-1}$, $$\begin{array}{rl} h=& \hspace{-.25cm}\displaystyle \lambda^\dag\alpha \left(\alpha+\sum_{r,s=1}^{m} b_{rs}\rho_r\rho_s\right)^{-1}, \\[2ex] h_i =& \hspace{-.25cm}h \sqrt{\dfrac{\lambda}{\alpha}}\displaystyle\sum_{r=1}^{m} b_{ir}\rho_r, \hspace{.25cm}i=1,\ldots,\frac{m(m+1)}{2},\\[2ex] h_{ij}=&\hspace{-.25cm}\displaystyle b_{ij}-\dfrac{h\lambda}{\alpha}\left(\sum_{r=1}^{m}b_{ir}\rho_r \right)\left(\sum_{s=1}^{m} b_{js}\rho_s\right), \hspace{.25cm}i,j=1,\ldots,\frac{m(m+1)}{2}. 
\end{array}$$ \end{theorem} \proof From Lemma \ref{lemma1}, we have that $$ ({\sf L}'_{p})^{\dag}=\left( \begin{array}{cc} \sf{S}^{\dag} & -\dfrac{1}{\alpha}\sf{S}^{\dag}{\sf s}\\ -\dfrac{1}{\alpha}{\sf s}^\intercal\sf{S}^{\dag}& \frac{1}{\alpha}+\frac{1}{\alpha^2}{\sf s}^{\intercal}{\sf S}^{\dag}{\sf s} \end{array} \right), $$ where $\sf{S}={\sf H}-\dfrac{1}{\alpha}{\sf s}\otimes{\sf s}$ is the matrix associated with the operator $${\mathcal S}=\mathcal{L}_{q}+\sum_{i=1}^m\P_{\sigma_i}-\dfrac{1}{\alpha}\P_\sigma.$$ Now, let us prove that $$\P_{\sigma}=(\alpha-\lambda)\sum_{i=1}^m\P_{\sigma_i}-\sum_{1\le i<j\le m}\P_{\sigma_{ij}},$$ where $\sigma_{ij}=\sqrt{\alpha}\pi_k,$ $k=\dfrac{(2m-1-i)i}{2}+j$ for $i=1,\ldots,m-1$, $j=i+1,\ldots,m.$ If $P_\sigma$ denotes the kernel of $\P_\sigma$ and $P$ denotes the kernel of the operator on the right-hand side of the equality, the claim is equivalent to proving that $P=P_\sigma$. Since $P_\sigma=\sigma\otimes \sigma$, we have $$P_\sigma=\sum\limits_{i,j=1}^ma_ia_j(\varepsilon_{x_i}\otimes \varepsilon_{x_j}).$$ On the other hand, for any $i=1,\ldots,m$, we have $$P_{\sigma_i}=\dfrac{\rho_i^2}{\omega(x_i)^2}\,(\varepsilon_{x_i}\otimes \varepsilon_{x_i})=\dfrac{a_i\omega(x')}{\omega(x_i)}\,(\varepsilon_{x_i}\otimes \varepsilon_{x_i}),$$ and hence, $$\sum\limits_{i=1}^mP_{\sigma_i}=\omega(x')\sum\limits_{i=1}^m\dfrac{a_i}{\omega(x_i)}\,(\varepsilon_{x_i}\otimes \varepsilon_{x_i}).$$ Moreover, for $1\le i<j\le m$, $$\begin{array}{rl} P_{\sigma_{ij}}=&\hspace{-.25cm}\displaystyle \dfrac{\rho_i^2\rho_j^2}{\omega(x')^2\omega(x_i)^2}\,(\varepsilon_{x_i}\otimes \varepsilon_{x_i})+\dfrac{\rho_i^2\rho_j^2}{\omega(x')^2\omega(x_j)^2}\,(\varepsilon_{x_j}\otimes \varepsilon_{x_j})\\[3ex] -&\hspace{-.25cm}\displaystyle\dfrac{\rho_i^2\rho_j^2}{\omega(x')^2\omega(x_i)\omega(x_j)}\big(\varepsilon_{x_i}\otimes \varepsilon_{x_j}+\varepsilon_{x_j}\otimes \varepsilon_{x_i}\big)\\[3ex] =&\hspace{-.25cm}\displaystyle
\dfrac{a_ia_j\omega(x_j)}{\omega(x_i)}\,(\varepsilon_{x_i}\otimes \varepsilon_{x_i})+\dfrac{a_ia_j\omega(x_i)}{\omega(x_j)}\,(\varepsilon_{x_j}\otimes \varepsilon_{x_j})\\[3ex] -&\hspace{-.25cm}\displaystyle a_ia_j \big(\varepsilon_{x_i}\otimes \varepsilon_{x_j}+\varepsilon_{x_j}\otimes \varepsilon_{x_i}\big), \end{array}$$ and hence, $$\begin{array}{rl} \displaystyle \sum\limits_{1\le i<j\le m}P_{\sigma_{ij}} =&\hspace{-.25cm}\displaystyle \dfrac{1}{2} \sum\limits_{i=1}^m\sum\limits_{j=1\atop j\not=i}^m\dfrac{a_ia_j\omega(x_j)}{\omega(x_i)}\,(\varepsilon_{x_i}\otimes \varepsilon_{x_i})+\dfrac{1}{2} \sum\limits_{j=1}^m\sum\limits_{i=1\atop i\not=j}^m\dfrac{a_ia_j\omega(x_i)}{\omega(x_j)}\,(\varepsilon_{x_j}\otimes \varepsilon_{x_j})\\[3ex] -&\hspace{-.25cm}\displaystyle\displaystyle \sum\limits_{1\le i<j\le m} a_ia_j \big(\varepsilon_{x_i}\otimes \varepsilon_{x_j}+\varepsilon_{x_j}\otimes \varepsilon_{x_i}\big)\\[3ex] =&\hspace{-.25cm}\displaystyle \sum\limits_{i=1}^m\dfrac{a_i}{\omega(x_i)}\Big(\sum\limits_{j=1\atop j\not=i}^ma_j\omega(x_j)\Big)(\varepsilon_{x_i}\otimes \varepsilon_{x_i})- \sum\limits_{i,j=1\atop i\not=j}^ma_ia_j\big(\varepsilon_{x_i}\otimes \varepsilon_{x_j}\big).\end{array}$$ Taking into account that $$\sum\limits_{j=1\atop j\not=i}^ma_j\omega(x_j)=\sum\limits_{j=1}^ma_j\omega(x_j)-a_i\omega(x_i)=(\alpha-\lambda)\omega(x')-a_i\omega(x_i),$$ we obtain that $$\sum\limits_{1\le i<j\le m}P_{\sigma_{ij}} =(\alpha-\lambda)\sum\limits_{i=1}^mP_{\sigma_i}-\sum\limits_{i,j=1}^ma_ia_j\big(\varepsilon_{x_i}\otimes \varepsilon_{x_j}\big). $$ Therefore, $$ {\mathcal S}=\mathcal{L}_{q}+\sum\limits_{k=1}^{\frac{m(m+1)}{2}}\P_{\pi_k}.$$ Finally, from Theorem \ref{theorem35}, we get that ${\sf{S}}^\dag={\sf M}$ and the result follows.\hfill {\hbox{\footnotesize{$\Box$}}} Next we describe the coefficients of matrix $(\langle \mathcal{G}(\pi_\ell),\pi_k \rangle)$. 
\begin{lemma} \label{matrixperturbations} The elements of the matrix $A=(\langle \mathcal{G}(\pi_\ell),\pi_k \rangle)$ are given by $$\begin{array}{rl} (A)_{k, \ell}=&\hspace{-.25cm}\displaystyle \frac{\lambda\rho_k\rho_\ell}{\alpha}{\dfrac{{\sf G}_{\lambda,\omega}(x_k,x_\ell)}{\omega(x_k)\omega(x_\ell)}}, \hspace{.25cm} k,\ell=1,\ldots,m,\\[3ex] (A)_{k,\ell}=&\hspace{-.25cm}\displaystyle \frac{\sqrt{\lambda}\rho_k\rho_i\rho_j}{\alpha\omega(x')}\left[\frac{{\sf G_{\lambda,\omega}}(x_k,x_i)}{\omega(x_k)\omega(x_i)}- \frac{{\sf G_{\lambda,\omega}}(x_k,x_j)}{\omega(x_k)\omega(x_j)}\right],\\[3ex] & k=1,\dots,m,\; \ell=\dfrac{(2m-1-i)i}{2}+j,\\[3ex] (A)_{k,\ell}=&\hspace{-.25cm}\displaystyle \frac{\rho_i\rho_j\rho_r\rho_s}{\alpha\omega(x')^2} \left[\frac{{\sf G_{\lambda,\omega}}(x_i,x_r)}{\omega(x_i)\omega(x_r)}-\frac{{\sf G_{\lambda,\omega}}(x_i,x_s)}{\omega(x_i)\omega(x_s)}-\frac{{\sf G_{\lambda,\omega}}(x_j,x_r)}{\omega(x_j)\omega(x_r)}+\frac{{\sf G_{\lambda,\omega}}(x_j,x_s)}{\omega(x_j)\omega(x_s)}\right],\\[3ex] & \hspace{.25cm} \mbox{for}\; r=1,\ldots,m-1,\;s=r+1,\ldots,m,\; k=\dfrac{(2m-1-r)r}{2}+s, \\[3ex] & \hspace{.25cm} \mbox{and for}\; i=1,\ldots,m-1,\; j=i+1,\ldots,m,\; \ell=\dfrac{(2m-1-i)i}{2}+j. \end{array}$$ \end{lemma} Observe that we can deduce two special cases of Theorem \ref{maintheorem}: if $m=1$, the perturbation corresponds to the addition of a pendant vertex to the network (see~\cite{CEGM14}), and if $m=n$ it represents the join of the new vertex with the network (see~\cite{BCE12}).
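As a sanity check, the rank-one identity $\P_{\sigma}=(\alpha-\lambda)\sum_{i=1}^m\P_{\sigma_i}-\sum_{1\le i<j\le m}\P_{\sigma_{ij}}$ established in the proof of Theorem \ref{maintheorem} can be verified numerically. The following sketch is illustrative only (the conductances and weights are arbitrarily chosen); the residual of the identity vanishes to rounding accuracy.

```python
import numpy as np

rng = np.random.default_rng(7)
m = 4
a   = rng.uniform(0.5, 2.0, m)   # conductances a_i = c(x_i, x')
w   = rng.uniform(0.5, 2.0, m)   # weight omega at the vertices x_i
wx  = 1.3                        # weight omega(x') at the new vertex
lam = 0.7                        # the parameter lambda >= 0

rho   = np.sqrt(a * w * wx)          # rho_i^2 = a_i omega(x_i) omega(x')
alpha = lam + np.dot(a, w) / wx      # alpha - lambda = (1/omega(x')) sum_i a_i omega(x_i)

e = np.eye(m)
P_sigma = np.outer(a, a)             # sigma = sum_i a_i eps_{x_i}

# sum of the rank-one projections P_{sigma_i}, sigma_i = (rho_i/omega(x_i)) eps_{x_i}
P_diag = sum(np.outer(rho[i] / w[i] * e[i], rho[i] / w[i] * e[i]) for i in range(m))

# sum of the pair terms P_{sigma_ij}
P_pair = np.zeros((m, m))
for i in range(m):
    for j in range(i + 1, m):
        s_ij = rho[i] * rho[j] / wx * (e[i] / w[i] - e[j] / w[j])
        P_pair += np.outer(s_ij, s_ij)

# identity from the proof: P_sigma = (alpha - lambda) sum_i P_{sigma_i} - sum_{i<j} P_{sigma_ij}
residual = P_sigma - ((alpha - lam) * P_diag - P_pair)
print(np.max(np.abs(residual)))   # rounding level
```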
\begin{corollary} \label{corolpendant}If $m=1$ and $a=c(x,x')$, then $$ ({\sf L}'_{p})^{\dag} = \left( \begin{array}{cc} {\sf M} & -\frac{a}{\alpha}{\sf M}{\sf e}_x \\[1ex] -\frac{a}{\alpha}{\sf e}_x ^\intercal{\sf M}& \frac{1}{\alpha}+\frac{a^2}{\alpha^2} {\sf e}_x ^\intercal{\sf M}{\sf e}_x \end{array} \right), $$ where ${\sf M}$ is the matrix associated with the operator $$ \mathcal{G}- \frac{1}{h} \Big[\lambda \P_{\mathcal{G}(\sigma)} -\rho_x \left(\P_{\mathcal{G}(\sigma),\omega} - \P_{\omega,\mathcal{G}(\sigma)}\right) -\left(1+ (\alpha-\lambda){\sf G}_{\lambda,\omega}(x,x)\right)\P_{\omega} \Big], $$ where $h=\lambda[1+(\alpha-\lambda){\sf G}_{\lambda,\omega}(x,x)]+\rho_x^2$. \end{corollary} \section{Acknowledgement} This work has been supported by the Spanish Research Council (Ministerio de Ciencia e Innovaci\'on) under the projects MTM2011-28800-C02-02 and MTM2011-28800-C02-01. \bibliographystyle{model1-num-names}
\section{Introduction} Simulations of Wilson-type fermions at realistic quark masses require an improved action with good chiral properties and scaling behavior. A systematic improvement scheme that removes discretization errors order by order in the lattice spacing $a$ has been proposed by Symanzik~\cite{Symanzik:1983dc} and developed for on-shell quantities in~\cite{Luscher:1984xn,Sheikholeslami:1985ij}. $\mathcal{O}(a)$ improvement of the Wilson fermion action is achieved by complementing it with the so-called clover term~\cite{Sheikholeslami:1985ij}, provided the associated clover coefficient is tuned properly. Wilson-type fermions break all chiral symmetries. This introduces an additive negative mass renormalization term in the action, which gives rise to singularities in the quark propagator at small quark masses and makes the approach to the chiral regime difficult. A chiral improvement of the action is expected to reduce the additive mass renormalization and the spread of negative eigenvalues. Surprisingly, this is not accomplished by the clover action. While the magnitude of the additive mass term decreases with increasing clover term, the problem of negative eigenvalues is more severe for the clover than for the standard Wilson action. It is well known that via a combination of link fattening and tuning of the clover coefficient, it is possible to reduce both the negative mass term and the spread of negative eigenvalues~\cite{DeGrand:1998mn,Boinepalli:2004fz,Capitani:2006ni}. The focus of this investigation is to determine the clover coefficient and the additive mass renormalization for plaquette and Symanzik improved gauge action and stout link clover fermions in one-loop lattice perturbation theory. 
The Symanzik improved gauge action reads~\cite{Symanzik:1983dc} \begin{equation} S_G^{\rm Sym} = \frac{6}{g^2} \,\,\left\{c_0 \sum_{\rm Plaquette} \frac{1}{3}\, {\rm Re\, Tr\,}(1-U_{\rm Plaquette}) \, + c_1 \sum_{\rm Rectangle} \frac{1}{3}\,{\rm Re \, Tr\,}(1- U_{\rm Rectangle})\right\} \label{SG} \end{equation} with $c_0+8c_1=1$ and \begin{equation} c_0=\frac{5}{3}\,, \quad c_1=-\frac{1}{12}\,. \end{equation} This reduces to the standard plaquette action $S_G^{\rm Plaq}$ for $c_1=0$. For each quark flavor, clover fermions have the action~\cite{Sheikholeslami:1985ij} \begin{eqnarray} S_F &=& a^4\, \sum_x \Big\{ - \frac{1}{2a} \, \left[\bar{\psi}(x) \widetilde U_\mu(x)\,(1-\gamma_\mu)\, \psi(x+a\hat{\mu}) \right. \nonumber \\ && \hspace{8mm}\left. + \, \bar{\psi}(x) \widetilde U_\mu^\dagger(x-a\hat{\mu})\,(1+\gamma_\mu)\, \psi(x-a\hat{\mu})\right] \label{SF} \\ && \hspace{8mm} + \, \frac{1}{a}\, (4 + a m_0 +a m)\, \bar{\psi}(x)\psi(x) - c_{SW}\, g\, \frac{a}{4}\, \bar{\psi}(x)\, \sigma_{\mu\nu} F_{\mu\nu}(x)\, \psi(x) \Big\} \,, \nonumber \end{eqnarray} where \begin{equation} am_0=\frac{1}{2\kappa_c} - 4 \,, \label{kc} \end{equation} $\kappa_c$ being the critical hopping parameter, is the additive mass renormalization term, and $F_{\mu\nu}(x)$ is the field strength tensor in clover form with $\sigma_{\mu\nu}=(i/2)\,(\gamma_\mu\gamma_\nu-\gamma_\nu\gamma_\mu)$. We consider a version of clover fermions in which we do not smear links in the clover term, but the link variables $U_\mu$ in the nearest-neighbor terms have been replaced by (uniterated) stout links~\cite{Morningstar:2003gk} \begin{equation} \widetilde{U}_\mu(x) = e^{i\, Q_\mu(x)} \, U_\mu(x) \label{Ustout} \end{equation} with \begin{equation} Q_\mu(x)=\frac{\omega}{2\,i} \left[V_\mu(x) U_\mu^\dagger(x) - U_\mu(x)V_\mu^\dagger(x) -\frac{1}{3} {\rm Tr} \,\left(V_\mu(x) U_\mu^\dagger(x) - U_\mu(x)V_\mu^\dagger(x)\right)\right] \, .
\end{equation} $V_\mu(x)$ denotes the sum over all staples associated with the link, and $\omega$ is a tunable weight factor. Stout smearing is preferred because (\ref{Ustout}) is expandable as a power series in $g^2$, so we can use perturbation theory. Many other forms of smearing do not have this nice property. Because both the unit matrix and the $\gamma_\mu$ terms are smeared, each link is still a projection operator in the Dirac spin index. The reason for not smearing the clover term is that we want to keep the physical extent of the fermion matrix in lattice units small, which is relevant for non-perturbative calculations. In that respect we refer to these fermions as SLiNC fermions, from the phrase {\bf S}tout {\bf Li}nk {\bf N}on-perturbative {\bf C}lover. The improvement coefficient $c_{SW}$ as well as the additive mass renormalization $am_0$ are associated with the chiral limit. So we will carry out the calculations for massless quarks, which simplifies things, though it means that we cannot present values for the mass dependent corrections. For complete $\mathcal{O}(a)$ improvement of the action there are five terms which would have to be added to the $\mathcal{O}(a)$ effective action; they are listed, for example, in \cite{Luscher:1996sc}. Fortunately, in the massless case only two remain, \begin{eqnarray} \mathcal{O}_1 &=& \bar{\psi} \sigma_{\mu\nu} F_{\mu\nu} \psi\,, \\ \mathcal{O}_2 &=& \bar{\psi} \stackrel{\leftrightarrow}{D} \stackrel{\leftrightarrow}{D} \psi \,. \end{eqnarray} The first is the clover term, the second is the Wilson mass term. We have both in our action, so there is no need to add any other terms to the action. In perturbation theory \begin{equation} c_{SW}=1 + g^2 \, c_{SW}^{(1)} + {\mathcal{O}(g^4)}\,. \label{csw} \end{equation} The one-loop coefficient $c_{SW}^{(1)}$ has been computed for the plaquette action using twisted antiperiodic boundary conditions~\cite{Wohlert:1987rf} and Schr\"odinger functional methods~\cite{Luscher:1996vw}.
Moreover, using conventional perturbation theory, Aoki and Kuramashi~\cite{Aoki:2003sj} have computed $c_{SW}^{(1)}$ for certain improved gauge actions. All calculations were performed for non-smeared links and limited to on-shell quantities. We extend previous calculations of $c_{SW}^{(1)}$ to include stout links. This is done by computing the one-loop correction to the off-shell quark-quark-gluon three-point function. The improvement of the action is not sufficient to remove discretization errors from Green functions. To achieve this, one must also improve the quark fields. The most general form consistent with BRST symmetry is~\cite{Martinelli:2001ak}\footnote{In~\cite{Martinelli:2001ak} the authors use $\ensuremath{\stackrel{\rightarrow}{\slashed{D}}}$ and $\ensuremath{\slashed{\partial}}$ instead of $\ensuremath{\stackrel{\rightarrow}{\slashed{D}}}$ and $\ensuremath{\slashed{A}}$ -- both choices are equivalent. Our choice is motivated by the discussion of off-shell improvement in the next section.} \begin{equation} \psi_{\star}(x)=\left(1 + a \,c_D \ensuremath{\stackrel{\rightarrow}{\slashed{D}}} + a \,i\,g\,\,c_{NGI} \ensuremath{\slashed{A}}(x) \right) \,\psi(x)\,. \label{imppsi} \end{equation} {}From now on we denote improved quark fields and improved Green functions by an index~$\star$. These are made free of $\mathcal O (a)$ effects by fixing the relevant improvement coefficients. There is no {\it a priori} reason that the gauge variant contribution $c_{NGI} \ensuremath{\slashed{A}}(x)$ vanishes. The perturbative expansion of $c_{NGI}$ has to start with the one-loop contribution~\cite{Martinelli:2001ak}. As a byproduct of our calculation we determine that coefficient $c_{NGI}^{(1)}$ \begin{equation} c_{NGI}=g^2\,c_{NGI}^{(1)} + {\mathcal{O}(g^4)} \label{cNGI} \end{equation} and find that it is indeed nonvanishing.
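To make the stout construction (\ref{Ustout}) concrete, the following sketch (illustrative only; the staple sum $V_\mu$ is mocked by products of random SU(3) matrices, and all numerical values are arbitrary) checks that $Q_\mu(x)$ is Hermitian and traceless, so that $\widetilde{U}_\mu(x)=e^{i\,Q_\mu(x)}\,U_\mu(x)$ remains an element of SU(3).

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)

def random_su3():
    """Random SU(3) matrix: exponential of a traceless Hermitian generator."""
    A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    H = 0.5 * (A + A.conj().T)
    H -= np.trace(H) / 3 * np.eye(3)
    return expm(1j * H)

omega = 0.1                      # tunable stout weight
U = random_su3()                 # the link U_mu(x)
# mock staple sum V_mu(x): any sum of SU(3) products has the right structure
V = sum(random_su3() @ random_su3().conj().T @ random_su3().conj().T for _ in range(6))

W = V @ U.conj().T               # V_mu U_mu^dagger
M = W - W.conj().T               # anti-Hermitian combination V U^dag - U V^dag
Q = omega / 2j * (M - np.trace(M) / 3 * np.eye(3))

U_stout = expm(1j * Q) @ U       # the stout link
```

Because $Q$ is Hermitian and traceless, $e^{iQ}$ is special unitary, and the smeared link stays in the group; this is the property that makes (\ref{Ustout}) expandable in $g^2$ without leaving SU(3).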
\section{Off-shell improvement} It is known~\cite{Aoki:2003sj} that the one-loop contribution of the Sheikholeslami-Wohlert coefficient in conventional perturbation theory can be determined using the quark-quark-gluon vertex $\Lambda_\mu(p_1,p_2,c_{SW})$ sandwiched between {\sl on-shell} quark states. $p_1$ ($p_2$) denotes the incoming (outgoing) quark momentum. In general that vertex is an {\sl amputated} three-point Green function. Let us look at the ${\mathcal{O}}(a)$ expansion of the tree-level $\Lambda^{(0)}_\mu(p_1,p_2,c_{SW})$, which is derived from action (\ref{SF}) \begin{equation} \Lambda^{(0)}_\mu(p_1,p_2,c_{SW}) = -i\, g \,\gamma_\mu -g\, {\textstyle \frac{1}{2}} \, a\, {\bf 1} (p_1 + p_2)_\mu + c_{SW} \,i\, g\, {\textstyle \frac{1}{2}} \, a \,\sigma_{\mu \alpha} (p_1 -p_2)_\alpha\\ +\mathcal{O}(a^2)\,. \label{treevertex} \end{equation} For simplicity we omit in all three-point Green functions the common overall color matrix $T^{a}$. That tree-level expression between on-shell quark states is free of order $\mathcal O (a)$ if the expansion of $c_{SW}$ starts with one, as indicated in (\ref{csw}) \begin{equation} \bar u(p_2) \, \Lambda^{(0)}_{\star\mu}(p_1,p_2) \, u(p_1) = \bar u (p_2) \, (-i\, g \,\gamma_\mu )\, u(p_1) \,. \label{treeverteximproved} \end{equation} Therefore, at least a one-loop calculation of $\Lambda_\mu(p_1,p_2,c_{SW}^{(1)})$ is needed as a necessary condition to determine $c_{SW}^{(1)}$. The {\sl off-shell} improvement condition states that the {\sl non-amputated} improved quark-quark-gluon Green function $G_{\star \mu}(p_1,p_2,q)$ has to be free of $\mathcal{O}(a)$ terms to one-loop accuracy. In position space that non-amputated improved quark-quark-gluon Green function is defined via expectation values of improved quark fields and gauge fields as \begin{equation} G_{\star\mu}(x,y,z)=\langle \psi_{\star}(x)\, \overline{\psi}_{\star}(y) \, A_\mu(z)\rangle \,.
\end{equation} Since the gluon propagator is $\mathcal{O}(a)$-improved already, we do not need to improve gauge fields. Using relation (\ref{imppsi}) we can express the function $G_{\star\mu}$ in terms of the unimproved quark fields $\psi$ \begin{eqnarray} G_{\star\mu}(x,y,z) &=& G_{\mu}(x,y,z)+ a\,c_{D}\,\left\langle \left(\ensuremath{\slashed{D}} \ensuremath{\slashed{D}}^{-1} +\ensuremath{\slashed{D}}^{-1}\ensuremath{\slashed{D}} \right)A_\mu \right\rangle \nonumber \\ & & \quad\quad +\, i \, a \, g \, c_{NGI}\,\left\langle\left(\ensuremath{\slashed{A}} \ensuremath{\slashed{D}}^{-1} +\ensuremath{\slashed{D}}^{-1}\ensuremath{\slashed{A}} \right)A_\mu \right\rangle \,, \end{eqnarray} where $G_{\mu}(x,y,z)$ is the unimproved Green function. Taking into account \begin{equation} a\,c_{D}\,\left\langle \left(\ensuremath{\slashed{D}} \ensuremath{\slashed{D}}^{-1}+\ensuremath{\slashed{D}}^{-1}\ensuremath{\slashed{D}} \right)A_\mu \right\rangle = 2\,a\,c_{D} \, \delta(x-y)\,\left\langle A_\mu(z) \right\rangle \end{equation} and setting $\langle A_\mu(z) \rangle=0$ (unless there is an unexpected symmetry breaking), we obtain the following relation between the improved and unimproved Green function \begin{equation} G_{\star\mu}(x,y,z)= G_{\mu}(x,y,z)+i\,a \, g \,c_{NGI}\,\left\langle\left(\ensuremath{\slashed{A}} \ensuremath{\slashed{D}}^{-1}+\ensuremath{\slashed{D}}^{-1}\ensuremath{\slashed{A}} \right) A_\mu \right\rangle\,. \label{qqgimp2} \end{equation} {}From (\ref{qqgimp2}) it is obvious that if we tuned only $c_{SW}$ to its optimal value in $G_{\mu}(x,y,z)$, there would be an $\mathcal{O}(a)$ contribution left in the improved Green function. The requirement that $G_{\star\mu}(x,y,z)$ should be free of ${\mathcal O}(a)$ terms leads to an additional condition which determines the constant $c_{NGI}$. It has not been calculated before.
Taking into account the expansion (\ref{cNGI}) of $c_{NGI}$, we get in momentum space ($\mathcal{F}[\cdot]$ denotes the Fourier transform) \begin{equation} i\, a \, g\, c_{NGI}\,\mathcal{F} \Big[\left\langle\left(\ensuremath{\slashed{A}} \ensuremath{\slashed{D}}^{-1} + \ensuremath{\slashed{D}}^{-1}\ensuremath{\slashed{A}} \right)A_\mu \right\rangle^{\rm tree}\Big] = i\, a \, g^3\, c_{NGI}^{(1)} \left(\gamma_\nu\frac{1}{i\, \ensuremath{\slashed{p}}_1} +\frac{1}{i\, \ensuremath{\slashed{p}}_2}\gamma_\nu\right)\, K^{\rm tree}_{\nu\mu}(q)\,, \label{cNGI1} \end{equation} or its amputated version \begin{equation} i\, a \, g\, c_{NGI}\, \mathcal{F} \Big[\left\langle\left(\ensuremath{\slashed{A}} \ensuremath{\slashed{D}}^{-1} + \ensuremath{\slashed{D}}^{-1}\ensuremath{\slashed{A}} \right)A_\mu \right\rangle^{\rm tree}_{\rm amp}\Big] = -a \, g^3\, c_{NGI}^{(1)} \left(\ensuremath{\slashed{p}}_2\, \gamma_\mu+\gamma_\mu\, \ensuremath{\slashed{p}}_1\right)\,. \label{cNGI1amp} \end{equation} The relations between the non-amputated and amputated unimproved and improved three-point Green functions are given by \begin{eqnarray} G_\mu(p_1,p_2,q)&=& S(p_2)\, \Lambda_\nu(p_1,p_2,q,c_{SW}^{(1)})\, S(p_1)\, K_{\nu\mu}(q)\,, \label{nonamp} \\ G_{\star \mu}(p_1,p_2,q)&=& S_\star(p_2)\, \Lambda_{\star\nu}(p_1,p_2,q) \, S_\star(p_1) \, K_{\nu\mu}(q) \,, \label{nonampimp} \end{eqnarray} where $K_{\nu\mu}(q)$ denotes the full gluon propagator, which is $\mathcal{O}(a)$-improved already, and $S(p)$ and $S_\star(p)$ denote the corresponding quark propagators.
With the definition of the quark self energy \begin{equation} \Sigma(p)= \frac{1}{a} \Sigma_0 + i \, \ensuremath{\slashed{p}} \, \Sigma_1(p) + \frac{a \, p^2}{2} \Sigma_2(p) \end{equation} the unimproved and improved inverse quark propagators are given by \begin{eqnarray} S^{-1}(p)&=&i \, \ensuremath{\slashed{p}}\, \Sigma_1(p) +\frac{a \,p^2}{2}\Sigma_2(p)= i \, \ensuremath{\slashed{p}} \,\Sigma_1(p)\left(1-\frac{1}{2}a\, i \, \ensuremath{\slashed{p}}\, \frac{\Sigma_2(p)}{\Sigma_1(p)} \right)\,, \label{S} \\ S_\star^{-1}(p)&=&i \, \ensuremath{\slashed{p}}\, \Sigma_1(p)\,. \label{selfenergy} \end{eqnarray} Using the Fourier transform of (\ref{qqgimp2}) with (\ref{cNGI1amp}) and amputating the Green function~(\ref{nonamp}), taking into account the inverse quark propagators (\ref{S}), we get the off-shell improvement condition in momentum space \begin{eqnarray} \Lambda_{\mu}(p_1,p_2,q,c_{SW}^{(1)})&=&\Lambda_{\star \mu}(p_1,p_2,q)+ a \, g^3 c_{NGI}^{(1)} (\ensuremath{\slashed{p}}_2 \, \gamma_\mu +\gamma_\mu\, \ensuremath{\slashed{p}}_1) \nonumber\\ & & \hspace{-0.7cm} -\, \frac{a}{2}\,i\, \ensuremath{\slashed{p}}_2 \, \frac{\Sigma_2(p_2)}{\Sigma_1(p_2)}\, \Lambda_{\star\mu}(p_1,p_2,q) -\frac{a}{2}\,\Lambda_{\star\mu}(p_1,p_2,q)\, i\, \ensuremath{\slashed{p}}_1 \, \frac{\Sigma_2(p_1)}{\Sigma_1(p_1)} \,. \label{impcond} \end{eqnarray} This expression holds to order $\mathcal{O}(g^3)$ provided both $c_{NGI}^{(1)}$ and $c_{SW}^{(1)}$ are determined correctly. It is clear from (\ref{impcond}) that the improvement term $\propto c_{NGI}^{(1)}$ does not contribute if both quarks are on-shell. \section{The one-loop lattice quark-quark-gluon vertex} The diagrams contributing to the amputated one-loop three-point function are shown in Fig.~\ref{fig2}.
\begin{figure}[!htb] \begin{center} \includegraphics[scale=0.3,width=0.8\textwidth]{feyn3.eps} \end{center} \caption{One-loop diagrams contributing to the amputated quark-quark-gluon vertex.} \label{fig2} \end{figure} The calculation is performed with a mixture of symbolic and numerical techniques. For the symbolic computation we use a {\it Mathematica} package that we developed for one-loop calculations in lattice perturbation theory (for a more detailed description see~\cite{Gockeler:1996hg}). It is based on an algorithm of Kawai et al.~\cite{Kawai:1980ja}. The symbolic treatment has several advantages: one can extract the infrared singularities exactly, and the results are given as functions of lattice integrals which can be determined with high precision. The disadvantage is that very large expressions arise, especially for the problem under consideration. In the symbolic method the divergences are isolated by differentiation with respect to external momenta. Looking at the general analytic form of the gluon propagator for improved gauge actions~\cite{Horsley:2004mx}, one easily recognizes that a huge analytic expression would arise. As discussed in~\cite{Horsley:2004mx} we split the full gluon propagator $D_{\mu\nu}^{{\rm Sym}}(k,\xi)$ as \begin{equation} D_{\mu\nu}^{{\rm Sym}}(k,\xi)=D_{\mu\nu}^{{\rm Plaq}}(k,\xi) + \Delta D_{\mu\nu}(k)\,, \label{Dprop} \end{equation} where $\xi$ is the covariant gauge parameter ($\xi=0$ corresponds to the Feynman gauge). The diagrams with $D_{\mu\nu}^{{\rm Plaq}}(k,\xi)$ only contain the logarithmic parts and are treated with our {\it Mathematica} package. The diagrams with at least one $\Delta D_{\mu\nu}(k)$ are infrared finite and can be determined safely with numerical methods. The decomposition (\ref{Dprop}) means that we always need to calculate the plaquette action result, as part of the calculation for the improved gauge action.
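The numerical treatment of the infrared-finite pieces can be illustrated with a tensor-product Gauss-Legendre rule in four dimensions. The sketch below is not the actual diagram integrand; it uses a hypothetical separable, massive integrand, so that the four-dimensional result can be cross-checked against the fourth power of the one-dimensional integral.

```python
import numpy as np

def gauss_legendre_4d(f, n, lo=-np.pi, hi=np.pi):
    """Tensor-product Gauss-Legendre rule with n nodes per direction."""
    x, w = np.polynomial.legendre.leggauss(n)
    k = 0.5 * (hi - lo) * x + 0.5 * (hi + lo)     # nodes mapped to [lo, hi]
    wk = 0.5 * (hi - lo) * w
    K = np.stack(np.meshgrid(k, k, k, k, indexing="ij"), axis=-1)
    W = (wk[:, None, None, None] * wk[None, :, None, None]
         * wk[None, None, :, None] * wk[None, None, None, :])
    return np.sum(W * f(K))

def integrand(K):
    # separable massive "lattice propagator", hat(k)_mu^2 = 4 sin^2(k_mu/2),
    # with m^2 = 4 in lattice units (a hypothetical infrared-finite example)
    return np.prod(1.0 / (4.0 * np.sin(K / 2.0) ** 2 + 4.0), axis=-1)

val_4d = gauss_legendre_4d(integrand, 20)

# cross-check: the separable integrand factorizes into (1d integral)^4;
# the 1d integral is int dk/(6 - 2 cos k) = 2 pi / sqrt(32) over [-pi, pi]
x, w = np.polynomial.legendre.leggauss(200)
val_1d = np.sum(np.pi * w / (4.0 * np.sin(np.pi * x / 2.0) ** 2 + 4.0))
print(val_4d, val_1d ** 4)
```

In the actual calculation a sequence of node numbers is used and extrapolated to infinitely many nodes; the factorization check above plays the role of the internal consistency tests described in the text.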
Therefore, we will give the results for both plaquette gauge action and Symanzik improved gauge action using the corresponding gluon propagators $D_{\mu\nu}^{{\rm Plaq}}$ and $D_{\mu\nu}^{{\rm Sym}}$, respectively. Because the numerical part determines the accuracy of the total result, we discuss it in more detail. There are several possibilities to combine the various contributions of the one-loop diagrams as given in Fig.~\ref{fig2}. In view of a later analysis we have decided to group all coefficients in front of the independent color factors $C_F$ and $N_c$ and the powers of the stout parameter $\omega$ \begin{eqnarray} \Lambda^{{\rm num.}}_\mu &=& C_F\,\left(C_{0}+C_{1}\,\omega+C_{2}\,\omega^2+C_{3}\,\omega^3\right)+ N_c\,\left(C_{4}+C_{5}\,\omega+C_{6}\,\omega^2+C_{7}\,\omega^3\right)\label{num2}\,, \end{eqnarray} where the $C_i$ have to be computed numerically. In order to obtain $C_i$ we first add all contributions of the diagrams shown in Fig.~\ref{fig2} and integrate afterwards. We have used a Gauss-Legendre integration algorithm in four dimensions (for a description of the method see~\cite{Gockeler:1996hg}) and have chosen a sequence of small external momenta $(p_1,p_2)$ to perform an extrapolation to vanishing momenta. Let us illustrate this by an example: the calculation of the coefficient $C_4$. We know the general structure of the one-loop amputated three-point function as (we set $a=1$) \begin{eqnarray} M_\mu(p_1,p_2) &=& \gamma_\mu\, A(p_1,p_2) + {\rm\bf 1}\, p_{1,\mu}\, B(p_1,p_2) + {\rm\bf 1}\, p_{2,\mu}\, C(p_1,p_2) \nonumber\\ & & + \,\sigma_{\mu\alpha}\,p_{1,\alpha} \,D(p_1,p_2) + \sigma_{\mu\alpha}\,p_{2,\alpha} \,E(p_1,p_2)\,.
\end{eqnarray} {}From this we can extract the coefficients by the following projections \begin{eqnarray} {\rm Tr}\,\gamma_\mu M_\mu &=& 4\,A(p_1,p_2), \quad\quad \mu \quad {\rm fixed}\,, \nonumber\\ {\rm Tr}\,M_\mu &=& 4 \,p_{1,\mu}\, B(p_1,p_2) + 4 \,p_{2,\mu} \,C(p_1,p_2) \,, \label{proj1} \\ \sum_\mu\, {\rm Tr}\,\sigma_{\nu\mu}\,M_\mu &=& 12 \,p_{1,\nu} \,D(p_1,p_2) + 12 \,p_{2,\nu}\, E(p_1,p_2)\,. \nonumber \end{eqnarray} Relations (\ref{proj1}) show that one has to compute the three-point function for all four values of $\mu$. Further, they suggest choosing the external momenta orthogonal to each other: $p_1 \cdot p_2 = 0$. A simple choice is $p_{1,\mu}=(0,0,0,p_{1,4})$ and $p_{2,\mu}=(0,0,p_{2,3},0)$. We discuss the determination of $B(p_1,p_2)$ and $C(p_1,p_2)$ in more detail. For small momenta they can be described by the ansatz \begin{eqnarray} B(p_1,p_2)&=& B_0 + B_1\,p_1^2 + B_2 \,p_2^2\,, \nonumber\\ C(p_1,p_2)&=& C_0 + C_1\,p_1^2 + C_2 \,p_2^2\,. \label{BC} \end{eqnarray} The choice of the momenta is arbitrary except for two points. First, they should be sufficiently small in order to justify ansatz (\ref{BC}). Second, they should not be integer multiples of each other in order to avoid accidentally symmetric results. The symmetry of the problem demands the relation $B_0=C_0$, which must also result from the numerical integration. Performing the integration at fixed $p_1$ and $p_2$ we obtain complex $4\times 4$ matrices for $M_3(p_1,p_2)$ and $M_4(p_1,p_2)$ from which the quantities $B(p_1,p_2)$ and $C(p_1,p_2)$ are extracted via (\ref{proj1}). A nonlinear regression fit with ansatz (\ref{BC}) gives \begin{eqnarray} B_0&=&0.00553791 \quad {\rm with \,\, fit\,\, error}\,\, \delta B_0=7\times 10^{-8}\,, \nonumber\\ C_0&=&0.00553789 \quad {\rm with \,\, fit\,\, error}\,\, \delta C_0=6\times 10^{-8}\,.
\label{fitBC} \end{eqnarray} It shows that the symmetry is fulfilled up to an error of $\mathcal{O}(10^{-7})$, which sets one scale for the overall error of our numerical calculations. In Fig.~\ref{fig3} we \begin{figure}[!htb] \begin{center} \includegraphics[scale=0.01,width=0.8\textwidth]{BCextrapol.eps} \end{center} \caption{$B(p_1,p_2)$ (circles) and $C(p_1,p_2)$ (squares) as a function of $p_1^2$ together with their corresponding linear fits in $p_1^2$.} \label{fig3} \end{figure} show the almost linear dependence of $B(p_1,p_2)$ and $C(p_1,p_2)$ on $p_1^2$. (In the integration we have chosen $p_{1,\mu}=0.87\, p_{2,\mu}$ so that we can restrict the plot to one variable.) Another source of errors is the numerical Gauss-Legendre integration routine itself. We have chosen a sequence of $n^4=14^4$, $18^4$, $22^4$, $26^4$ and $30^4$ nodes in the four-dimensional hypercube and have performed an extrapolation to infinite nodes with a $1/n^4$ fit ansatz. Both procedures, Gauss-Legendre integration and the fit $p \rightarrow 0$, give a combined final error of $10^{-6}$. The third error source is the error of the lattice integrals of our {\it Mathematica} calculation for the terms containing the plaquette propagator $D_{\mu\nu}^{{\rm Plaq}}$ only. These integrals have been calculated up to a precision of $\mathcal{O}(10^{-10})$. Therefore, their errors can be neglected in comparison with the others. Summarizing, we find that the error of our numerical procedure is of $\mathcal{O}(10^{-6})$. Additionally, we have checked our results by an independent code which computes the one-loop contributions for each diagram, including the infrared logarithms, completely numerically. Both methods agree within errors. The Feynman rules for the non-smeared Symanzik gauge action have been summarized in~\cite{Aoki:2003sj}. For the stout smeared gauge links in the clover action the rules, restricted to equal initial and final quark momenta, are given in~\cite{Capitani:2006ni}.
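The trace projections used to extract $A,\dots,E$ can be checked in an explicit representation of the Euclidean $\gamma$-matrices. In the sketch below (with hypothetical coefficient values and the momentum choice $p_1=(0,0,0,p_{1,4})$, $p_2=(0,0,p_{2,3},0)$) the input coefficients are recovered exactly; note that the sign of the third projection depends on the index order of $\sigma_{\mu\nu}$, and the order used here gives $+12$.

```python
import numpy as np

# Euclidean gamma matrices in a chiral representation (an assumption; any
# representation with {gamma_mu, gamma_nu} = 2 delta_{mu nu} works)
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)

gamma = [np.block([[Z2, -1j * s], [1j * s, Z2]]) for s in (s1, s2, s3)]
gamma.append(np.block([[Z2, I2], [I2, Z2]]))        # gamma_4
I4 = np.eye(4, dtype=complex)

def sigma(mu, nu):
    return 0.5j * (gamma[mu] @ gamma[nu] - gamma[nu] @ gamma[mu])

# hypothetical input coefficients; orthogonal momenta as chosen in the text
A, B, C, D, E = 0.7, -0.3, 0.45, 0.2, -0.8
p1 = np.array([0.0, 0.0, 0.0, 0.435])
p2 = np.array([0.0, 0.0, 0.5, 0.0])

M = [A * gamma[mu] + (B * p1[mu] + C * p2[mu]) * I4
     + sum(sigma(mu, al) * (D * p1[al] + E * p2[al]) for al in range(4))
     for mu in range(4)]

# Tr(gamma_mu M_mu) = 4 A   (mu fixed)
A_rec = np.trace(gamma[3] @ M[3]).real / 4
# Tr(M_mu) = 4 p_{1,mu} B + 4 p_{2,mu} C
B_rec = np.trace(M[3]).real / (4 * p1[3])
C_rec = np.trace(M[2]).real / (4 * p2[2])
# sum_mu Tr(sigma_{mu nu} M_mu) = 12 (p_{1,nu} D + p_{2,nu} E)
proj = lambda nu: sum(np.trace(sigma(mu, nu) @ M[mu]) for mu in range(4)).real
D_rec = proj(3) / (12 * p1[3])
E_rec = proj(2) / (12 * p2[2])
print(A_rec, B_rec, C_rec, D_rec, E_rec)
```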
As mentioned in the introduction, we perform a one-level smearing of the Wilson part in the clover action. The corresponding Feynman rules needed for the one-loop quark-quark-gluon vertex are much more complicated than those in~\cite{Capitani:2006ni}. The qqgg-vertex needed in diagrams (c) and (d) of Fig.~\ref{fig2} receives an additional antisymmetric piece. The qqggg-vertex in diagram (e) does not even exist in the forward case. The Feynman rules are given in Appendix A. The diagrams which are needed for the calculation of the quark propagator are shown in Fig.~\ref{fig1}. We have performed our calculation in general covariant gauge. \begin{figure}[!htb] \begin{center} \includegraphics[scale=0.01,width=0.8\textwidth]{selfenergy2.eps} \end{center} \caption{One-loop diagrams contributing to the quark self energy.} \label{fig1} \end{figure} \section{Results for the improvement coefficients and critical hopping parameter} The anticipated general structure for the amputated three-point function at one loop is \begin{eqnarray} \Lambda_\mu(p_1,p_2,q)&=& \Lambda^{{\overline{MS}}}_\mu(p_1,p_2,q) +A_{\rm lat}\,i\,\frac{g^3}{16\pi^2}\,\gamma_\mu \nonumber\\ & & + \, B_{\rm lat}\,\frac{a}{2}\,\frac{g^3}{16\pi^2}\,\left(\ensuremath{\slashed{p}}_2\,\gamma_\mu +\gamma_\mu\,\ensuremath{\slashed{p}}_1\right) + C_{\rm lat}\,i\,\frac{a}{2}\,\frac{g^3}{16\pi^2}\,\sigma_{\mu\alpha}\,q_\alpha \,. \label{Lam} \end{eqnarray} $\Lambda^{{\overline{MS}}}_\mu(p_1,p_2,q)$ is the universal part of the three-point function, independent of the chosen gauge action, computed in the $\overline{MS}$-scheme \begin{eqnarray} \Lambda^{{\overline{MS}}}_\mu(p_1,p_2,q)&=& -i\, g\, \gamma_\mu - g\, \frac{a}{2}\,{\bf 1}\left( p_{1,\mu}+p_{2,\mu}\right)- c_{SW}\,i\, g\,\frac{a}{2}\sigma_{\mu\alpha}\,q_\alpha \nonumber\\ & & + \,i\, \frac{1}{2}\,\frac{g^3}{16\pi^2}\,\Lambda^{{\overline{MS}}}_{1,\mu}(p_1,p_2,q) + \frac{a}{2}\frac{g^3}{16\pi^2} \,\Lambda^{{\overline{MS}}}_{2,\mu}(p_1,p_2,q)\,.
\label{LamMS} \end{eqnarray} We have calculated the complete expressions for $\Lambda^{{\overline{MS}}}_{1,\mu}(p_1,p_2,q)$ and $\Lambda^{{\overline{MS}}}_{2,\mu}(p_1,p_2,q)$. The $\mathcal{O}(a)$ contribution, $\Lambda^{{\overline{MS}}}_{2, \mu}(p_1,p_2,q)$, simplifies if we set $c_{SW}=1+\mathcal{O}(g^2)$ as in (\ref{csw}). After some algebra we find \begin{eqnarray} \Lambda^{{\overline{MS}}}_{2,\mu}(p_1,p_2,q) &=& \frac{1}{2}\left(\ensuremath{\slashed{p}}_2\,\Lambda^{{\overline{MS}}}_{1,\mu}(p_1,p_2,q)+ \Lambda^{{\overline{MS}}}_{1,\mu}(p_1,p_2,q)\,\ensuremath{\slashed{p}}_1\right) \nonumber\\ \label{lambda2} & & -\, C_F\left(\ensuremath{\slashed{p}}_2\, \gamma_\mu \,(1-\xi)(1-\log(p_2^2/\mu^2))\right. \\ & & \quad \quad \, \left. +\gamma_\mu\,\ensuremath{\slashed{p}}_1 \,(1-\xi)(1-\log(p_1^2/\mu^2))\right)\,, \nonumber \end{eqnarray} where $\mu^2$ is the $\overline{MS}$ mass scale (not to be confused with the index $\mu$). Therefore, we only need $\Lambda^{{\overline{MS}}}_{1,\mu}(p_1,p_2,q)$ to present the one-loop result (\ref{LamMS}). $\Lambda^{{\overline{MS}}}_{1,\mu}(p_1,p_2,q)$ is given in Appendix B. 
If we insert (\ref{Lam}) and (\ref{LamMS}) with (\ref{lambda2}) into the off-shell improvement relation (\ref{impcond}) we get the following conditions that all terms of order $\mathcal{O}(ag^3)$ have to vanish \begin{eqnarray} \left(c_{SW}^{(1)} - \frac{C_{\rm lat}}{16\pi^2}\right)\,\sigma_{\mu\alpha}\,q_\alpha &=& 0\,, \label{cSWcond}\\ \left(c_{NGI}^{(1)} - \frac{1}{32\pi^2}\,\left(A_{\rm lat}-B_{\rm lat}-\Sigma_{21}\right)\right) \left(\ensuremath{\slashed{p}}_2 \, \gamma_\mu+ \gamma_\mu\, \ensuremath{\slashed{p}}_1 \right) &=&0\,, \label{cNGIcond} \end{eqnarray} with $\Sigma_{21}$ defined from (\ref{S}) as \begin{eqnarray} \frac{\Sigma_2(p)}{\Sigma_1(p)}&=&1+\frac{g^2\,C_F}{16\pi^2}\left((1-\xi)(1-\log(a^2p^2))+\Sigma_{21,0} \right) \nonumber\\ &\equiv&1+\frac{g^2\,C_F}{16\pi^2}\left((1-\xi)(1-\log(p^2/\mu^2))\right)+\frac{g^2}{16\pi^2}\Sigma_{21} \label{SigmaWP} \end{eqnarray} and \begin{equation} \Sigma_{21}=C_F\,\left( -(1-\xi)\log(a^2\mu^2)+\Sigma_{21,0} \right)\,. \label{SigmaWPr} \end{equation} The constant $\Sigma_{21,0}$ depends on the chosen lattice action. It should be noted that equations (\ref{cSWcond}) and (\ref{cNGIcond}) are obtained by using the general structure (\ref{lambda2}) only -- we do not need to insert the complete calculated result for $\Lambda^{{\overline{MS}}}_{1,\mu}(p_1,p_2,q)$. In order to get momentum independent and gauge invariant improvement coefficients we see from (\ref{cSWcond}) that $C_{\rm lat}$ itself has to be constant and gauge invariant. From (\ref{cNGIcond}) and (\ref{SigmaWPr}) we further conclude that the $\log(a^2\mu^2)$-terms from $A_{\rm lat}$ and $B_{\rm lat}$ have to cancel those from $\Sigma_{21}$. The same is true for the corresponding gauge terms. The terms $\propto(1-\xi)(1- \log(p_i^2/\mu^2))$ ($i=1,2$) coming from (\ref{SigmaWP}) are canceled by the corresponding terms in (\ref{lambda2}). 
Therefore, the relation between $\Lambda^{{\overline{MS}}}_{1,\mu}(p_1,p_2,q)$ and $\Lambda^{{\overline{MS}}}_{2,\mu}(p_1,p_2,q)$ as given in (\ref{lambda2}) is a nontrivial result. Once more, it should be emphasized that this relation only holds if we use $c_{SW}=1$ at leading order in $g^2$. If (\ref{lambda2}) were not true, we would not be able to improve the Green functions by adding the simple $\mathcal{O}(a)$ terms we have considered. For completeness we also give the corresponding one-loop values for the quark field improvement coefficient $c_D$ as defined in (\ref{imppsi}). They can be derived from the $\mathcal{O}(a)$ improvement of the quark propagator. The one-loop improvement coefficient $c_D^{(1)}$ is related to the quark self energy by \begin{equation} c_D = -\frac{1}{4}\,\left(1+\frac{g^2\, C_F}{16 \pi^2} \, \left(2\,\Sigma_1-\Sigma_2\right)\right) +\mathcal{O}(g^4)\equiv -\frac{1}{4}\,\left(1+g^2\,c_D^{(1)}\right)+\mathcal{O}(g^4)\,. \label{cD1} \end{equation} $c_D^{(1)}$ has been calculated for ordinary clover fermions and plaquette gauge action in~\cite{Capitani:2000xi}. Now we present our numerical results for general covariant gauge $\xi$ as a function of the stout parameter $\omega$. 
For the plaquette action with stout smearing the quantities $A_{\rm lat}$, $B_{\rm lat}$ and $C_{\rm lat}$ are obtained as \begin{eqnarray} A_{\rm lat}^{\rm Plaq}&=&C_F\,\Big(9.206269 +3.792010\,\xi - 196.44601\,\omega + 739.683641\,\omega^2 \nonumber \\ && \quad \quad + (1-\xi)\log (a^2\mu^2) \Big) \nonumber\\ & &+ \, N_c\,\left(-4.301720 + 0.693147\,\xi + \,(1-\xi/4)\log (a^2\mu^2) \right)\,, \nonumber\\ B_{\rm lat}^{\rm Plaq}&=&C_F\,\Big(9.357942 + 5.727769\,\xi - 208.583208\,\omega + 711.565256\,\omega^2 \nonumber \\ && \quad \quad + 2\,(1-\xi)\log (a^2\mu^2) \Big) \\ & &+\, N_c\,\left(-4.752081 +0.693147\,\xi +3.683890\,\omega + (1-\xi/4)\log (a^2\mu^2) \right)\,, \nonumber\\ C_{\rm lat}^{\rm Plaq}&=&C_F\,\left(26.471857 + 170.412296\,\omega - 582.177099\,\omega^2\right) \nonumber\\ & &+\, N_c\,\left(2.372649 + 1.518742\,\omega -44.971612\,\omega^2\right)\,. \nonumber \label{ABCplaq} \end{eqnarray} For the stout smeared Symanzik action we get \begin{eqnarray} A_{\rm lat}^{\rm Sym}&=&C_F\,\Big(5.973656 +3.792010\,\xi - 147.890719\,\omega + 541.380348\,\omega^2 \nonumber \\ && + \, (1-\xi)\log (a^2\mu^2) \Big) \nonumber\\ & &+ \, N_c\,\left(-3.08478 + 0.693159\,\xi - 0.384236\,\omega + (1-\xi/4)\log (a^2\mu^2) \right)\,, \nonumber\\ B_{\rm lat}^{\rm Sym}&=&C_F\,\Big(6.007320 + 5.727769\,\xi - 163.833410\,\omega + 542.892478\,\omega^2 \nonumber \\ && + \, 2\,(1-\xi)\log (a^2\mu^2) \Big) \\ & &+\, N_c\,\left(-3.841082 +0.693179\,\xi +3.039641\,\omega + (1-\xi/4)\log (a^2\mu^2) \right)\,, \nonumber \\ C_{\rm lat}^{\rm Sym}&=&C_F\,\left(18.347163 + 130.772885\,\omega - 387.690744\,\omega^2\right) \nonumber\\ & &+\, N_c\,\left(2.175560 + 2.511657\,\omega -50.832203\,\omega^2\right)\,. \nonumber \label{ABC} \end{eqnarray} As shown in (\ref{impcond}) (or equivalently (\ref{cNGIcond})) we need the self energy parts $\Sigma_1(p)$ and $\Sigma_2(p)$ as defined in (\ref{S}) to solve the off-shell improvement condition. 
They have the general form \begin{eqnarray} \Sigma_{1}(p)&=&1-\frac{g^2\, C_F}{16 \pi^2} \, \left[(1-\xi)\log (a^2p^2) +\Sigma_{1,0}\right]\,,\nonumber\\ \Sigma_{2}(p)&=&1-\frac{g^2\, C_F}{16 \pi^2} \, \left[2\,(1-\xi)\log (a^2p^2) +\Sigma_{2,0}\right]\,. \label{SigmapW} \end{eqnarray} For the plaquette and Symanzik actions we obtain \begin{eqnarray} \Sigma_{1,0}^{\rm Plaq} &=& 8.206268 - 196.446005\,\omega + 739.683641\,\omega^2+4.792010\,\xi\,,\nonumber\\ \Sigma_{2,0}^{\rm Plaq} &=& 7.357942 - 208.583208\,\omega + 711.565260\,\omega^2+7.727769\,\xi\,,\nonumber\\ \Sigma_{1,0}^{\rm Sym} &=& 4.973689 - 147.890720\,\omega + 541.380518\,\omega^2+4.792010\,\xi\,,\\ \Sigma_{2,0}^{\rm Sym} &=& 4.007613 - 163.833419\,\omega + 542.892535\,\omega^2+7.727769\,\xi\,.\nonumber \label{sigmas} \end{eqnarray} This results in the following expressions for $\Sigma_{21}$ as defined in (\ref{SigmaWPr}) \begin{eqnarray} \Sigma_{21}^{\rm Plaq} &=& C_F\, \Big(-0.151673 - 1.935759\,\xi + 12.137203\,\omega + 28.118384\,\omega^2 \nonumber\\ & & \quad\quad\quad -\,(1-\xi)\,\log(a^2\mu^2)\Big)\,, \nonumber\\ \Sigma_{21}^{\rm Sym} &=& C_F\, \Big(-0.033924 - 1.935759\,\xi + 15.942699\,\omega-1.512017\,\omega^2 \\ & & \quad\quad\quad -\, (1-\xi)\,\log(a^2\mu^2)\Big)\,. 
\nonumber \label{SigmaWPnum} \end{eqnarray} Inserting the corresponding numbers into (\ref{cSWcond}), (\ref{cNGIcond}) and (\ref{cD1}), we obtain the one-loop contributions to the clover improvement coefficient \begin{eqnarray} c_{SW}^{(1),{\rm Plaq}}&=&C_F\,\left(0.167635 + 1.079148\,\omega - 3.686674\,\omega^2\right) \nonumber\\ & &+\, N_c\,\left(0.015025 + 0.009617\,\omega - 0.284786\,\omega^2\right)\,, \label{cswplaq}\\ c_{SW}^{(1),{\rm Sym}}&=&C_F\,\left(0.116185 + 0.828129\,\omega - 2.455080\,\omega^2\right) \nonumber\\ & &+\, N_c\,\left(0.013777 + 0.015905\,\omega - 0.321899\,\omega^2\right)\,, \label{cswSym} \end{eqnarray} the off-shell quark field improvement coefficient \begin{eqnarray} c_{NGI}^{(1),{\rm Plaq}}&=& N_c\,\left(0.001426 - 0.011664 \,\omega \right)\,, \label{cNGIplaq}\\ c_{NGI}^{(1),{\rm Sym}}&=& N_c\,\left(0.002395 - 0.010841\,\omega \right)\,, \label{cNGISym} \end{eqnarray} and the on-shell quark field improvement coefficient \begin{eqnarray} c_D^{(1),{\rm Plaq}}&=& C_F\,\left( 0.057339 + 0.011755\,\xi - 1.167149\,\omega + 4.862163\,\omega^2\right)\,, \label{cD2}\\ c_D^{(1),{\rm Sym}} &=& C_F\,\left(0.037614 + 0.011755\,\xi - 0.835571\,\omega + 3.418757\,\omega^2 \right)\,, \label{cD3} \end{eqnarray} for the plaquette and Symanzik action, respectively. For $\omega=0$ both the plaquette result (\ref{cswplaq}) and the Symanzik result (\ref{cswSym}) agree, within the accuracy of our calculations, with the numbers quoted in~\cite{Wohlert:1987rf,Luscher:1996vw} and~\cite{Aoki:2003sj}. {}From Ward identity considerations it is known that the coefficient $c_{NGI}$ has to be proportional to $N_c$ only. Additionally, $c_{NGI}$ and $c_{SW}$ should be gauge invariant. Both conditions are fulfilled within the errors which have been discussed in the previous section. It should be noted that (\ref{cNGIplaq}) and (\ref{cNGISym}) are the first one-loop results for the quark field improvement coefficient $c_{NGI}$. 
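The passage from the lattice constants to the improvement coefficients is elementary arithmetic and can be checked mechanically. The following Python sketch (ours, purely illustrative, not part of the calculation) evaluates $c_{SW}^{(1)}=C_{\rm lat}/16\pi^2$ and $c_{NGI}^{(1)}=(A_{\rm lat}-B_{\rm lat}-\Sigma_{21})/32\pi^2$ for the plaquette action, with $N_c=3$, $C_F=4/3$ and the scale choice $\mu=1/a$ so that the logarithms drop out; it also confirms that the gauge parameter $\xi$ cancels in $c_{NGI}^{(1)}$:

```python
import numpy as np

# Plaquette-action constants (C_F and N_c parts as polynomials in the
# stout parameter om; xi is the gauge parameter).  The log(a^2 mu^2)
# terms are dropped, i.e. we choose the scale mu = 1/a.
def A_lat(om, xi, CF, Nc):
    return (CF * (9.206269 + 3.792010 * xi - 196.44601 * om + 739.683641 * om**2)
            + Nc * (-4.301720 + 0.693147 * xi))

def B_lat(om, xi, CF, Nc):
    return (CF * (9.357942 + 5.727769 * xi - 208.583208 * om + 711.565256 * om**2)
            + Nc * (-4.752081 + 0.693147 * xi + 3.683890 * om))

def Sigma21(om, xi, CF):
    return CF * (-0.151673 - 1.935759 * xi + 12.137203 * om + 28.118384 * om**2)

def C_lat(om, CF, Nc):
    return (CF * (26.471857 + 170.412296 * om - 582.177099 * om**2)
            + Nc * (2.372649 + 1.518742 * om - 44.971612 * om**2))

def c_sw1(om, CF=4 / 3, Nc=3.0):
    # improvement condition: c_SW^(1) = C_lat / (16 pi^2)
    return C_lat(om, CF, Nc) / (16 * np.pi**2)

def c_ngi1(om, xi, CF=4 / 3, Nc=3.0):
    # improvement condition: c_NGI^(1) = (A_lat - B_lat - Sigma_21) / (32 pi^2)
    return (A_lat(om, xi, CF, Nc) - B_lat(om, xi, CF, Nc)
            - Sigma21(om, xi, CF)) / (32 * np.pi**2)

for om in (0.0, 0.1):
    # the gauge-parameter dependence cancels between A - B and Sigma_21
    assert abs(c_ngi1(om, 0.0) - c_ngi1(om, 1.0)) < 1e-9
    print(om, c_sw1(om), c_ngi1(om, 0.0))
```

The Symanzik constants can be checked in exactly the same way.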
The gauge dependent improvement coefficient $c_D$ depends only on the color factor $C_F$ because it is determined by $\mathcal{O}(a)$ improvement of the quark propagator. The additive mass renormalization is given by \begin{equation} am_0=\frac{g^2\, C_F}{16 \pi^2} \,\frac{\Sigma_0}{4} \,. \end{equation} This leads to the critical hopping parameter $\kappa_c$, at which chiral symmetry is approximately restored, \begin{equation} \kappa_c=\frac{1}{8}\left( 1- \frac{g^2\, C_F}{16 \pi^2} \,\frac{\Sigma_0}{4}\right)\,. \label{kappac} \end{equation} Using the plaquette or Symanzik gauge actions, we obtain \begin{eqnarray} \Sigma_0^{\rm Plaq} &=& -31.986442 + 566.581765\,\omega -2335.407087\,\omega^2 \,, \label{Sigma0plaq} \\ \Sigma_0^{\rm Sym} &=& -23.832351 + 418.212508\,\omega - 1685.597405\,\omega^2\,. \label{Sigma0sym} \end{eqnarray} This leads to the perturbative expression for $\kappa_c$ \begin{eqnarray} \kappa_c^{\rm Plaq} &=& \frac{1}{8} \left[ 1 + g^2 \, C_F \left(0.050639 - 0.896980 \,\omega + 3.697285 \,\omega^2 \right) \right] \,, \label{kappacplaq} \\ \kappa_c^{\rm Sym} & =& \frac{1}{8} \left[ 1 + g^2 \, C_F \left( 0.037730 - 0.662090\,\omega +2.668543\,\omega^2 \right) \right] \,. \label{kappacSym} \end{eqnarray} For both actions $am_0$ can be tuned to zero for admissible values of $\omega$. Using the smaller of the two roots, we find $\omega=0.089396$ for the plaquette action and $\omega=0.088689$ for the Symanzik gauge action. \section{Mean field improvement} It is well known that one-loop perturbation theory in the bare coupling constant $g^2$ leads to a poor approximation. The coefficient of $g^2$ is large in most quantities, and the series converges poorly. One traditional way to reduce this problem is by mean field improvement, which consists of two ideas. The first is that we calculate each quantity in a simple mean field approximation, and then re-express the perturbative result as the mean field result multiplied by a perturbative correction factor. 
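As an aside, the zeros of $am_0$ quoted above can be reproduced directly from the quadratic polynomials in (\ref{kappacplaq}) and (\ref{kappacSym}); the following short Python check (ours, purely illustrative) solves $\Sigma_0(\omega)=0$ through the smaller root of each quadratic:

```python
import math

# Bracketed polynomials P(omega) from kappa_c = (1/8)[1 + g^2 C_F P(omega)],
# i.e. P = -Sigma_0/(64 pi^2); the additive mass am_0 vanishes where P = 0.
P_plaq = (0.050639, -0.896980, 3.697285)   # c0, c1, c2  (plaquette)
P_sym  = (0.037730, -0.662090, 2.668543)   # c0, c1, c2  (Symanzik)

def omega_star(c0, c1, c2):
    """Smaller root of c0 + c1*omega + c2*omega^2 = 0."""
    disc = math.sqrt(c1 * c1 - 4.0 * c2 * c0)
    return (-c1 - disc) / (2.0 * c2)

print(omega_star(*P_plaq))   # close to the quoted 0.089396
print(omega_star(*P_sym))    # close to the quoted 0.088689
```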
If the mean field approximation is good, the correction factor will be close to 1, and we have resolved the problem of the large one-loop coefficient. As a good internal test of this part, we can simply look to see how large the coefficient in this correction factor is (the ``tadpole improved coefficient''), compared with the initial unimproved coefficient. The second part of the mean field approximation is that we change our expansion parameter from the bare coupling $g^2$ to some ``boosted'' coupling constant, $g^2_{MF}$, which we hope represents physics at a more relevant scale, and leads to a more rapidly convergent series. A well-chosen boosted coupling would reduce the two-loop coefficient. Unfortunately we usually cannot test this part of the improvement procedure, because the two-loop coefficient is unknown. Fortunately, if the mean field approximation is good, the exact choice of boosted coupling constant will not be too crucial, because the lowest order improved coefficient will be a small number. \subsection{Mean field approximation for smeared fermions} In the mean field approximation we typically assume that the gauge fields on each link are independently fluctuating variables, and that we can simply represent the links by an average value $u_0$. Typical choices for $u_0$ would be to choose $u_0^4$ to be the average plaquette value, or to choose $u_0$ to be the average link value in the Landau gauge. A natural question is how we should extend the mean field approximation if we employ smearing. One possibility is to express everything in terms of two quantities, $u_0$, a mean value for the unsmeared link, and $u_S$, a mean value for smeared links\footnote{PR would like to thank Colin Morningstar for conversations on this point.}. We will discuss the relation between these two quantities later; first we want to make a general point about mean field approximations and smearing. 
The reason we smear our gauge links is to suppress very short range fluctuations in the gauge field, which is justified by the argument that these short range fluctuations are very lattice-dependent, rather than physical. However, put another way, suppressing short range fluctuations means that we are correlating nearby gauge links. So there is a certain tension between smearing and the mean field notion that each link is fluctuating independently. We will take the attitude that it does still make sense to use the mean field approximation if smearing is mild -- but we should treat the results with some degree of caution if extreme smearing is used. Applying this double-$u$ mean field approximation to the SLiNC fermion matrix we find the following results for the principal fermion quantities, \begin{eqnarray} && \Sigma_1(p) \approx u_S \,, \quad \Sigma_2(p) \approx u_S \,, \quad Z_\psi \approx u_S \,, \quad \kappa_c \approx \frac{1}{8 u_S}\,, \quad c_{SW} \approx \frac{u_S}{u_0^4} \end{eqnarray} (we define $Z_\psi$ by the relation $S^{\rm ren} = Z_\psi S^{\rm lat}$). For reasonable smearing we expect the smeared link $u_S$ to be closer to 1 than the bare link $u_0$, so most quantities will lie closer to their tree-level values with smearing. However, the clover coefficient $c_{SW}$ is an exception; it will be further from 1 with smearing than without, because we construct our clover term from unsmeared links. As a result, we obtain the mean field expressions for $\kappa_c$ and $c_{SW}$ by performing the following replacements \begin{equation} \kappa_c(g^2) \rightarrow \kappa_c^{MF}(g_{MF}^2,u_S)= \frac{1}{8}\,\frac{u_S^{\rm pert}(g_{MF}^2)}{u_S}\,\kappa_c(g_{MF}^2) \end{equation} and \begin{equation} c_{SW}(g^2) \rightarrow c_{SW}^{MF}(g_{MF}^2,u_S,u_0)= \frac{u_S}{u_0^4}\,\frac{u_0^{\rm pert}(g_{MF}^2)^{\,4}}{u_S^{\rm pert}(g_{MF}^2)}\,c_{SW}(g_{MF}^2)\,. 
\end{equation} Here $u_S$ and $u_0$ are the measured smeared and unsmeared links at the given coupling and $u_S^{\rm pert}$ and $u_0^{\rm pert}$ denote the corresponding expressions in lattice perturbation theory. \subsection{The smeared plaquette in perturbation theory} We will use $u_S^{\rm pert}$ derived from the smeared perturbative plaquette $P_S$ \begin{equation} u_S^{\rm pert} \equiv P_S^{1/4}. \end{equation} To one-loop order we have \begin{equation} u_S^{\rm pert} = 1 - \frac{g^2\, C_F}{16 \pi^2} \, k_S\,, \end{equation} with\footnote{We have written this integral for the case of a plaquette in the 1-2 plane, any orientation gives the same result.} \begin{eqnarray} k_S = 8 \pi^2 a^4 \int \frac{d^4 k}{(2 \pi)^4}\hspace{-3mm} & D_{\alpha \beta}(k) & \hspace{-3mm} \Bigl[\, V_{\alpha 1}(k,\omega) V_{\beta 1}(k,\omega) s_2^2(k) +V_{\alpha 2}(k,\omega) V_{\beta 2}(k,\omega) s_1^2(k) \nonumber \\ & -& \hspace{-7mm} \left( V_{\alpha 1}(k,\omega) V_{\beta 2}(k,\omega) + V_{\beta 1}(k,\omega) V_{\alpha 2}(k,\omega) \right) s_1(k) s_2(k) \Bigr] \end{eqnarray} where $D_{\alpha \beta}(k)$ is the gluon propagator for the action in question. The smearing function $V_{\alpha \mu}(k,\omega)$ is defined in (\ref{Vdef}) in Appendix A, $s_\mu(k)$ and $s^2(k)$ used below are given in~(\ref{eq:A2}). Using symmetry and the definition of $V$, the expression simplifies to \begin{equation} k_S = 16 \pi^2 a^4 \int \frac{d^4 k}{(2 \pi)^4} \left[ D_{1 1}(k) s_2(k) s_2(k) - D_{12}(k) s_1(k) s_2(k) \right] \left( 1 - 4 \, \omega \, s^2(k) \right)^2 \,. \label{SmearedPlaq} \end{equation} We can see from this form that mild smearing has the effect of suppressing the contribution from large $k$. Setting $\omega = 0$ in $k_S$, we recover the unsmeared link in perturbation theory \begin{equation} u_0^{\rm pert}= 1 - \frac{g^2\, C_F}{16 \pi^2} \, k_S(\omega=0)\,. \label{u0} \end{equation} For the plaquette action propagator we can calculate the integral exactly. 
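The integral is also easy to check numerically. In Feynman gauge the plaquette-action gluon propagator is diagonal in lattice units, $D_{\alpha\beta}(k)=\delta_{\alpha\beta}/(4\,s^2(k))$, so the $D_{12}$ cross term drops out of (\ref{SmearedPlaq}); since the smeared plaquette is gauge invariant, nothing is lost by this choice. Expanding $\left(1-4\,\omega\,s^2(k)\right)^2$ then reduces $k_S$ to elementary trigonometric moments, which the following Python sketch (ours, for illustration) evaluates:

```python
import numpy as np

# Gauss-Legendre quadrature on [-pi, pi] for the smooth 1-d moments
x, w = np.polynomial.legendre.leggauss(16)
x, w = np.pi * x, np.pi * w
s2 = np.sin(x / 2) ** 2                       # sin^2(k_mu/2)

m2 = (w * s2).sum() / (2 * np.pi)             # <s_mu^2> = 1/2
m4 = (w * s2**2).sum() / (2 * np.pi)          # <s_mu^4> = 3/8

# Moments appearing after expanding (1 - 4 omega s^2(k))^2:
#   <s_2^2 / s^2(k)> = 1/4 exactly, by hypercubic symmetry
#   <s_2^2>          = m2
#   <s_2^2 s^2(k)>   = m4 + 3 m2^2
def k_S_plaq(omega):
    """k_S of eq. (SmearedPlaq), plaquette propagator in Feynman gauge."""
    return 4 * np.pi**2 * (0.25 - 8 * omega * m2
                           + 16 * omega**2 * (m4 + 3 * m2**2))

for omega in (0.0, 0.05, 0.1):
    print(omega, k_S_plaq(omega) / np.pi**2)
```

The moment $\langle s_2^2/s^2\rangle=1/4$ follows from hypercubic symmetry, since the sum over $\mu$ of $\langle s_\mu^2/s^2\rangle$ is exactly 1.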
The result is \begin{equation} k_S^{\rm Plaq} = \pi^2 \left( 1 - 16 \,\omega + 72 \,\omega^2 \right) \,. \end{equation} Let us see how well this improves the expressions for $\kappa_c$ and $c_{SW}$. Using the result~(\ref{kappacplaq}) we find \begin{equation} \kappa_c^{{\rm Plaq},MF} = \frac{1}{8 u_S} \left[ 1 + g_{MF}^2\, C_F \, \left( -0.011861 + 0.103020 \,\omega -0.802715\,\omega^2 \right) \right] \end{equation} which successfully reduces the perturbative coefficients for every power of $\omega$. Trying the same thing with the clover coefficient (\ref{cswplaq}) gives \begin{eqnarray} c_{SW}^{{\rm Plaq},MF} = \frac{u_S}{u_0^4}\, \Bigl\{ \hspace{-3mm} &1& \hspace{-3mm} + \, g^2_{MF}\, \Bigl[ C_F\,\left(-0.019865 + 0.079148\,\omega + 0.813321\,\omega^2\right) \nonumber\\ & &+\, N_c\,\left(0.015025 + 0.009617\,\omega - 0.284786\,\omega^2\right)\,\Bigr] \Bigr\} \,. \end{eqnarray} Again, mean field improvement works well. For the Symanzik action we calculate the integral in (\ref{SmearedPlaq}) numerically, and get the result \begin{equation} k^{\rm Sym}_S = \pi^2 \left( 0.732525 -11.394696\,\omega + 50.245225\,\omega^2 \right)\,. \end{equation} The corresponding mean field improved expressions for $\kappa_c$ (\ref{kappacSym}) and $c_{SW}$ (\ref{cswSym}) are \begin{eqnarray} \kappa_c^{{\rm Sym}, MF} &=& \frac{1}{8 u_S} \left[1 + g_{MF}^2 \,C_F \, \left( -0.008053 + 0.0500781\,\omega -0.471784\,\omega^2 \right) \right] \,, \\ c_{SW}^{{\rm Sym},MF }& = & \frac{u_S}{u_0^4} \Big\{ 1 + g^2_{MF} \, \Big[ C_F\,\left(-0.0211635 + 0.115961\,\omega + 0.685247\,\omega^2 \right) \nonumber\\ & &+\, N_c\,\left(0.013777 + 0.015905\,\omega - 0.321899\,\omega^2\right)\,\Big] \Big\} \,. \end{eqnarray} \subsection{Choice of $g^2_{MF}$} In this section we discuss the boosted coupling for $SU(3)$; throughout we set $N_c=3$ and $C_F=4/3$. 
From higher order continuum calculations we know that $g^2_{{\overline{MS}}}(\mu)$ is a good expansion parameter if $\mu$ is close to the appropriate physical scale. On the other hand, series in the bare lattice coupling $g^2(a)$ usually converge poorly. To understand this difference let us compare the two couplings. To one-loop order we have \begin{equation} \frac{1}{g^2_{{\overline{MS}}}(\mu)} - \frac{1}{g^2(a)} = 2b_0 \left(\log\frac{\mu}{\Lambda_{{\overline{MS}}}} - \log\frac{1}{a\Lambda_{\rm lat}}\right) = 2b_0 \log(a\mu) + d_g + N_f\, d_f \, , \label{gg} \end{equation} where $b_0=(11-2N_f/3)/(4\pi)^2$, and $N_f$ is the number of flavors. The ratio of $\Lambda$ parameters is thus given by \begin{equation} \frac{\Lambda_{\rm lat}}{\Lambda_{{\overline{MS}}}} = \exp \left(\frac{d_g + N_f\, d_f}{2b_0}\right) \, . \end{equation} The coefficient $d_g$ is known for the plaquette and Symanzik gauge action~\cite{Hasenfratz}: \begin{equation} d_g^{\rm Plaq} = -0.4682\,, \quad d_g^{\rm Sym} = -0.2361 \,. \end{equation} In Appendix C we show that $d_f$ is independent of the stout smearing parameter $\omega$. Therefore, we can use the value for clover fermions computed in~\cite{Booth:2001qp} \begin{equation} d_f=0.0314917 \,. \label{df} \end{equation} For $N_f=3$ this leads to \begin{eqnarray} \frac{\Lambda_{\rm lat}}{\Lambda_{{\overline{MS}}}} &= 0.038 \quad \mbox{Plaquette}\,,\\ \frac{\Lambda_{\rm lat}}{\Lambda_{{\overline{MS}}}} &= 0.289 \quad \mbox{Symanzik}\,. \end{eqnarray} These ratios are far from 1, especially for the plaquette action, which explains the poor convergence of series in $g^2(a)$. Now let us see what happens to the Lambda ratio if we make the popular choice of boosted coupling \begin{equation} g_{MF}^2 = \frac{g^2}{u_0^4} \, . 
\label{gmf} \end{equation} Upon inserting (\ref{u0}) and (\ref{gmf}) in (\ref{gg}), we obtain \begin{equation} \frac{1}{g^2_{{\overline{MS}}}(\mu)} - \frac{1}{g_{MF}^2(a)} = 2b_0 \left(\log\frac{\mu}{\Lambda_{{\overline{MS}}}} - \log\frac{1}{a\Lambda_{\rm lat}^{MF}}\right) = 2b_0 \log(a\mu) + d_g + N_f\, d_f +\frac{k_u}{3\pi^2} \, , \end{equation} which gives \begin{equation} \frac{\Lambda_{\rm lat}^{MF}}{\Lambda_{{\overline{MS}}}} = \exp \left(\frac{d_g + N_f\, d_f + k_u/3\pi^2}{2b_0}\right) \, . \label{ratio} \end{equation} For $N_f=3$ the numerical values of this ratio are \begin{eqnarray} \frac{\Lambda_{\rm lat}^{MF}}{\Lambda_{{\overline{MS}}}} &= 0.702 \quad \mbox{Plaquette}\,,\\ \frac{\Lambda_{\rm lat}^{MF}}{\Lambda_{{\overline{MS}}}} &= 2.459 \quad \mbox{Symanzik}\,. \end{eqnarray} We see that mean field improvement drives $\Lambda_{\rm lat}$ towards $\Lambda_{{\overline{MS}}}$ for both the plaquette and Symanzik gauge action, giving $g_{MF}^2 \approx g_{{\overline{MS}}}^2$, so that $g_{MF}^2$ appears to be a good expansion parameter in both cases. A perfect match is obtained for $\mu=1/0.702 a$ ($\mu=1/2.459 a$) for the plaquette (Symanzik) action. \section{Concluding remarks} In the present paper we have computed the improvement coefficient $c_{SW}$ and the additive mass renormalization/critical hopping parameter in one-loop perturbation theory for general stout parameter $\omega$ performing a single smearing. To separate the effect of improving the gauge action from the effect of tuning the fermion action, we have done the calculation for both the plaquette action and the tree-level Symanzik gauge action. In addition we also present the $\mathcal{O}(g^2)$ corrections to the coefficients $c_{NGI}$ and $c_D$ needed to $\mathcal{O}(a)$ improve the quark fields in the most general case. We give mean field (tadpole) improved results for $\kappa_c$ and $c_{SW}$. 
For both the plaquette and the Symanzik action the boosted coupling $g_{MF}^2$ turns out to be close to $g_{{\overline{MS}}}^2$, which makes $g_{MF}^2$ a good expansion parameter. We thus may expect that the perturbative series converges rapidly. For $N_f=3$ flavors of dynamical quarks it turns out that the one-loop improved Symanzik gauge action~\cite{Luscher:1984xn} largely coincides with its tree-level counterpart, with coefficients $c_0 \approx 5/3$, $c_1 \approx -1/12$ and $c_2 \approx 0$~\cite{Hao:2007iz}. This makes the tree-level Symanzik action (\ref{SG}) stand out against other improved gauge actions, at least from the perturbative point of view. SLiNC fermions represent a family of ultralocal, ultraviolet filtered clover fermions. While they share all prominent features of clover fermions, among them $\mathcal{O}(a)$ improvement and flavor symmetry, they allow one to further optimize the chiral properties of the action by tuning the fattening of the links. In our forthcoming simulations with $N_f=2+1$ and $2+1+1$ flavors of dynamical quarks at realistic pion masses we shall employ this combination of gauge and fermion actions. Knowing the perturbative (asymptotic) value of $c_{SW}$, we can derive a closed expression for $c_{SW}$ that covers the whole range of $g^2$. We will do so in a subsequent paper employing the Schr\"odinger functional method. The one-loop coefficient $c_{SW}^{(1)}$ varies only slightly within the interval $0 \leq \omega \leq 0.2$ for both the plaquette and Symanzik action. For $\omega=0.1$, which is our favorite value, the tadpole improved one-loop coefficient becomes $c_{SW}^{(1)} \approx 0$, indicating that the mean field approximation works well. The final result is $c_{SW}^{MF} \approx u_S/u_0^4$ to a very good approximation for both gauge actions, where $u_S$ is the average smeared link, found by measuring the smeared plaquette, and $u_0$ the average unsmeared link, found by measuring the unsmeared plaquette. 
This is to be compared with $c_{SW}^{MF} \approx 1/u_0^3$ for clover fermions with no smearing. We therefore expect $c_{SW}$ to be a steeper function of $g^2$ in the case of SLiNC fermions than for clover fermions. Stout link fattening reduces the additive mass renormalization considerably, with and without tadpole improvement, as expected. In fact, the critical hopping parameter $\kappa_c$ can be tuned to its continuum value of $1/8$ for an appropriate choice of $\omega$. We also confirm by early simulations with this action~\cite{preparation} that the spread of the negative eigenvalues is reduced by a factor of $\approx 2$ for $\omega=0.1$ and non-perturbative $c_{SW}$, as compared to ordinary clover fermions. SLiNC fermions have many other appealing features as well. The renormalization factors of quark bilinear operators, for example, come out to be very close to unity, which hints at virtually continuum-like behavior. \section*{Acknowledgment} This investigation has been supported by DFG under contract FOR 465 (Forschergruppe Gitter-Hadronen-Ph\"anomenologie). We also acknowledge support by the EU Integrated Infrastructure Initiative Hadron Physics (I3HP) under contract number RII3-CT-2004-506078. \renewcommand{\theequation}{A.\arabic{equation}} \setcounter{equation}{0} \section*{Appendix A: Feynman rules} In this Appendix we give the Feynman rules for quark-gluon vertices derived from action (\ref{SF}) with single stout smeared gauge link variables in the Wilson part and general Wilson parameter $r$. The pieces in the vertices proportional to $c_{SW}$ are denoted with $\widetilde{V}$. They have been rederived using our notations and they agree with the Feynman rules given in~\cite{Aoki:2003sj}. In the vertices we denote the incoming/outgoing quark momenta by $p_1/p_2$. The incoming gluons are described by momenta $k_i$, Lorentz indices $\alpha,\beta,\gamma$ and color indices $a,b,c=1,\dots,N_c^2-1$. 
For the color matrices we have: \begin{eqnarray} &&T^a T^b = \frac{1}{2 N_c} \delta^{ab} I_{N_c} + \frac{1}{2}( d^{abc}+ i \,f^{abc}) T^c \nonumber \\ && C_F = \frac{N_c^2-1}{2 N_c} \,, \quad [T^a,T^b]=T^a T^b - T^b T^a\,, \quad \{T^a,T^b\}=T^a T^b + T^b T^a \\ && T_{ss}^{abc}=\{T^a,\{T^b,T^c\}\} \,, \quad T_{aa}^{abc}=[T^a,[T^b,T^c]] \,, \quad T_{sa}^{abc}=\{T^a,[T^b,T^c]\} \,. \nonumber \end{eqnarray} We use the abbreviations \begin{eqnarray} &&\ms{k}{\mu}=\sin\left(\frac{a}{2}k_\mu\right), \quad \mc{k}{\mu}=\cos\left(\frac{a}{2}k_\mu\right) \,, \quad s^2(k) = \sum_\mu \mss{k}{\mu} \,, \nonumber \\ && s^2(k_1,k_2)= \sum_\mu \ms{k_1+k_2}{\mu}\ms{k_1-k_2}{\mu} \equiv s^2(k_1)-s^2(k_2)\,. \label{eq:A2} \end{eqnarray} For later use we give the bare massless quark propagator \begin{equation} S(k) = \frac{a}{ i \sum_\mu \gamma_\mu \ms{2 k}{\mu} + r \sum_\mu \left( 1 - \mc{2k}{\mu} \right) }\,. \label{quarkprop} \end{equation} The structure of the Wilson quark-gluon vertices is \begin{eqnarray} W_{1\mu}(p_2,p_1) &=& {i} \, \mc{p_2+p_1}{\mu} \,\gamma_\mu + r\,\ms{p_2+p_1}{\mu} \nonumber \\ W_{2\mu}(p_2,p_1) &=& {i}\, \ms{p_2+p_1}{\mu}\,\gamma_\mu - r\,\mc{p_2+p_1}{\mu} \label{eq:A3} \,. 
\end{eqnarray} Let us introduce the following functions to be useful in the definitions of the improved vertices \begin{eqnarray} V_{\alpha\mu}(k,\omega)& =& \delta_{\alpha\mu} + 4\, \omega \, v_{\alpha\mu}(k) \label{Vdef}\\ v_{\alpha\mu}(k)&=&\ms{k}{\alpha}\ms{k}{\mu} -\delta_{\alpha\mu} \, s^2(k) \nonumber \\ g_{\alpha\beta\mu}(k_1,k_2)&=& \delta_{\alpha\beta} \mc{k_1+k_2}{\alpha} \ms{k_1-k_2}{\mu} \nonumber\\ &&-\, \delta_{\alpha\mu} \mc{k_2}{\alpha}\ms{2 k_1+k_2}{\beta}+ \delta_{\beta\mu} \mc{k_1}{\beta}\ms{2 k_2+k_1}{\alpha} \\ w_{\alpha\mu}(k_1,k_2)&=& \ms{k_1+k_2}{\alpha}\ms{k_1-k_2}{\mu}- \delta_{\alpha\mu} \, s^2(k_1,k_2)\,, \\ w_{\alpha\mu}(k,0)&=&v_{\alpha\mu}(k)\nonumber \end{eqnarray} \subsection*{The qqg-vertex: $V_\alpha^a(p_2,p_1,k_1; c_{SW},\omega)$} The qqg-vertex including stout smeared links and clover contribution is given by the expression ($p_1+k_1=p_2$) \begin{eqnarray} V_\alpha^a(p_2,p_1,k_1; c_{SW},\omega) &=& - g\, T^a\, \sum_\mu V_{\alpha\mu}(k_1,\omega)\, W_{1\mu}(p_2,p_1)+ c_{SW}\,\widetilde{V}_\alpha^a(k_1)\,. \end{eqnarray} The stout smeared part shows the separation property mentioned in~\cite{Capitani:2006ni}. The clover part is given by \begin{eqnarray} \widetilde{V}_\alpha^a(k_1)&=& -i\,g\, T^a\, \frac{r}{2}\,\sum_\mu \sigma_{\alpha\mu} \mc{k_1}{\alpha}\ms{2k_1}{\mu}\,. \end{eqnarray} \subsection*{The qqgg-vertex: $V_{\alpha\beta}^{ab}(p_2,p_1,k_1,k_2; c_{SW},\omega)$} We define the qqgg-vertex as follows ($p_1+k_1+k_2=p_2$): \begin{eqnarray} V_{\alpha\beta}^{ab}(p_2,p_1,k_1,k_2; c_{SW},\omega)=V_{\alpha\beta}^{\{a,b\}} + V_{\alpha\beta}^{[a,b]}+ c_{SW}\,\widetilde{V}_{\alpha\beta}^{ab}(k_1,k_2)\,. \end{eqnarray} The stout smeared part is separated into two parts proportional to $\{T^a,T^b\}$ and $[T^a,T^b]$. 
The anticommutator part shows the factorization property mentioned for two and four quark operators \begin{eqnarray} V_{\alpha\beta}^{\{a,b\}} &=& \frac{1}{2} a\,g^2\,\{T^a,T^b\}\, \sum_\mu V_{\alpha\mu}(k_1,\omega)\, V_{\beta\mu}(k_2,\omega)\,W_{2\mu}(p_2,p_1) \,. \end{eqnarray} The commutator part is given by \begin{eqnarray} V_{\alpha\beta}^{[a,b]}&=& \frac{1}{2} a\,g^2\,[T^a,T^b]\, 4 \, \omega \sum_\mu g_{\alpha\beta\mu}(k_1,k_2)\, \,W_{1\mu}(p_2,p_1) \,. \label{eq:a12} \end{eqnarray} Note that this part is proportional to $\omega$. The part $\propto c_{SW}$ has been used in the form \begin{eqnarray} \label{eq:a13} \widetilde{V}_{\alpha\beta}^{ab}(k_1,k_2)&=& i\,\frac{r}{4} a\,g^2\,[T^a,T^b]\, \Big\{2\, \sigma_{\alpha\beta}\big[2 \mc{k_1}{\beta}\mc{k_2}{\alpha}\mc{k_1+k_2} {\alpha}\mc{k_1+k_2}{\beta} \\ && - \, \mc{k_1}{\alpha}\mc{k_2}{\beta}\big]+ \delta_{\alpha\beta}\,\sum_\mu\,\sigma_{\alpha\mu}\ms{k_1+k_2}{\alpha} \left[\ms{2k_2}{\mu}-\ms{2k_1}{\mu}\right]\Big\} \,. \nonumber \end{eqnarray} Both (\ref{eq:a12}) and (\ref{eq:a13}) vanish for tadpole diagrams along quark lines. \subsection*{The qqggg-vertex: $V_{\alpha\beta\gamma}^{abc}(p_2,p_1,k_1,k_2,k_3; c_{SW},\omega)$} We present that vertex contribution in the following form ($p_1+k_1+k_2+k_3=p_2$) \begin{eqnarray} &&\hspace{-20mm}V_{\alpha\beta\gamma}^{abc}(p_2,p_1,k_1,k_2,k_3; c_{SW},\omega)=\frac{1}{6} \, a^2 g^3 \times \nonumber\\ && \sum_\mu \bigg\{ W_{1\mu}(p_2,p_1)\,\Big[F^{abc}_{\alpha\beta\gamma\mu}(k_1,k_2,k_3) + {\rm cyclic \ perm.}\Big] \nonumber \\ && -\, 6 \,\omega \, W_{2\mu}(p_2,p_1) \, \Big[T_{sa}^{abc} \, V_{\alpha\mu}(k_1) \, g_{\beta\gamma\mu}(k_2,k_3) + {\rm cyclic \ perm.} \Big] \bigg\} \nonumber\\ && +\, c_{SW}\, \widetilde{V}_{\alpha\beta\gamma}^{abc}(k_1,k_2,k_3) \,. \label{eq:A11} \end{eqnarray} Cyclic permutations have to be performed in the gluon momenta as well as in the color and Lorentz indices of the three gluons. 
Note that the general stout smeared part is proportional both to $W_{1\mu}$ and $W_{2\mu}$. The coefficient $F^{abc}_{\alpha\beta\gamma\mu}(k_1,k_2,k_3)$ is decomposed into its different color structures: \begin{eqnarray} F^{abc}_{\alpha\beta\gamma\mu}(k_1,k_2,k_3)&=& T_{ss}^{abc} f^{(1)}_{\alpha\beta\gamma\mu}(k_1,k_2,k_3) + T_{aa}^{abc}\, \big( f^{(2)}_{\alpha\beta\gamma\mu}(k_1,k_2,k_3) - f^{(2)}_{\alpha\gamma\beta\mu}(k_1,k_3,k_2)\big) \nonumber \\ && +\, \left(T_{ss}^{abc}- \frac{1}{N_c} d^{abc}\right) f^{(3)}_{\alpha\beta\gamma\mu}(k_1,k_2,k_3) \,, \end{eqnarray} where the $f^{(i)}_{\alpha\beta\gamma\mu}$ are given as \begin{eqnarray} f^{(1)}_{\alpha\beta\gamma\mu}(k_1,k_2,k_3)&=&\frac{1}{2} \, V_{\alpha\mu}(k_1,\omega)\, V_{\beta\mu}(k_2,\omega)\, V_{\gamma\mu}(k_3,\omega) \,, \nonumber \\ f^{(2)}_{\alpha\beta\gamma\mu}(k_1,k_2,k_3)&=& \frac{1}{2} \, V_{\alpha\mu}(k_1,\omega)\, V_{\beta\mu}(k_2,\omega)\, \delta_{\gamma\mu} -\frac{1}{2} \,\delta_{\alpha\mu}\delta_{\beta\mu} \, V_{\gamma\mu}(k_3,\omega) \\ &+& \hspace{-2mm} 6 \, \omega \, \delta_{\alpha\beta} \Big[ \mc{k_1-k_2}{\mu} \mc{2 k_3+k_1+k_2}{\beta} \delta_{\gamma\mu} + \ms{k_3}{\mu} \ms{k_3 + 2 k_1}{\gamma} \, \delta_{\beta\mu} \Big] \,, \nonumber \\ f^{(3)}_{\alpha\beta\gamma\mu}(k_1,k_2,k_3)&=& 2 \, \omega \, \delta_{\beta\gamma} \Big[\big( 3\, w_{\alpha\mu}(k_1,k_2+k_3) + v_{\alpha\mu}(k_1+k_2+k_3)\big) \, \delta_{\alpha\beta} \nonumber \\ &+& \hspace{-2mm} 12 \ms{k_1}{\beta} \ms{k_2}{\alpha} \ms{k_3}{\alpha} \big( \ms{k_1+k_2+k_3}{\beta} \, \delta_{\alpha\mu}- \ms{k_1+k_2+k_3}{\alpha} \delta_{\beta\mu}\big) \Big] \,. 
\nonumber \end{eqnarray} The clover part of the qqggg-vertex is given by \begin{equation} \widetilde{V}_{\alpha\beta\gamma}^{abc}(k_1,k_2,k_3)=\frac{1}{6}\, \bigg\{\widetilde{\widetilde{V}}_{\alpha\beta\gamma}^{abc}(k_1,k_2,k_3) + {\rm total \ perm.}\bigg\} \label{Vtotclover} \end{equation} with \begin{eqnarray} {\widetilde{\widetilde{V}}}_{\alpha\beta\gamma}^{abc}(k_1,k_2,k_3)&=&-3\,i\,g^3\,a^2\,r\times\nonumber\\ && \hspace{-12mm}\bigg[T^aT^bT^c\delta_{\alpha\beta}\delta_{\alpha\gamma} \sum_\mu\,\sigma_{\alpha\mu}\bigg\{-\frac{1}{6}\mc{k_1+k_2+k_3}{\alpha}\ms{2(k_1+k_2+k_3)}{\mu} \nonumber\\ && \hspace{-10mm} + \, \mc{k_1+k_2+k_3}{\alpha}\mc{k_1+k_2+k_3}{\mu}\mc{k_3-k_1}{\mu}\ms{k_2}{\mu}\bigg\} \nonumber\\ && \hspace{-10mm} - \, \frac{1}{2}\bigg[T^aT^bT^c+T^cT^bT^a\bigg]\,\sigma_{\alpha\beta}\times \\ && \hspace{-10mm} \bigg\{2\, \delta_{\beta\gamma}\mc{k_1+k_2+k_3}{\alpha}\mc{k_1+k_2+k_3}{\beta} \mc{k_3+k_2}{\alpha}\ms{k_1}{\beta} \nonumber\\ && \hspace{-8mm} + \, \delta_{\beta\gamma}\ms{k_3+k_2}{\beta}\mc{k_1+2k_2}{\alpha} \nonumber\\ && \hspace{-8mm} + \, \delta_{\alpha\gamma}\ms{k_1+2k_2+k_3}{\alpha}\mc{k_1+k_2+k_3} {\beta}\mc{k_3-k_1}{\beta}\bigg\}\bigg] \nonumber\,. \end{eqnarray} In (\ref{Vtotclover}) the total permutation has to be performed in the gluon momenta, color and Lorentz indices. We only need this vertex for the gluon tadpole diagram of Fig.~\ref{fig2}, which simplifies the expressions. In the tadpole contribution to the vertex (\ref{eq:A11}) we denote the external gluon momentum by $q=p_2-p_1$, the color index of the gluon by $a$ and the internal momenta by $k$ and $-k$. The color indices ($b,c$) of the remaining gluons forming the tadpole are summed up using the color diagonality $\delta^{bc}$ of the gluon propagator, $k$ is the gluon momentum in the tadpole loop. 
So the stout smeared tadpole contribution is defined from the general qqggg-vertex (explicitly symmetrized in the three gluons) as \begin{eqnarray} V_{\alpha\beta\gamma}^a(p_2,p_1,k)&=& \sum_{b=1}^{N_c^2-1}\bigg\{ V_{\alpha\beta\gamma}^{a b b}(p_2,p_1,q,k, -k)+c_{SW}\,\widetilde{V}_{\alpha\beta\gamma}^{abb}(p_2,p_1,q,k,-k)\bigg\} \nonumber\\ &=&\frac{1}{6} a^2\,g^3 \, T^a\, \sum_\mu \, W_{1\mu}(p_2,p_1) V_{\alpha\beta\gamma\mu}(q,k) \\ && +\, c_{SW}\,\sum_{b=1}^{N_c^2-1}\,\widetilde{V}_{\alpha\beta\gamma}^{abb}(p_2,p_1,q,k,-k) \,.\nonumber \end{eqnarray} Using that definition we obtain for the stout smeared part \begin{eqnarray} V_{\alpha\beta\gamma\mu}(q,k)&=& \Bigg\{ \left(6 \, C_F-N_c\right) \, f^{(1)}_{\alpha\beta\gamma\mu}(q,k,-k) + \frac{N_c}{2} \Big[ f^{(2)}_{\beta\gamma\alpha\mu}(k,-k,q)-f^{(2)}_{\beta\alpha\gamma\mu}(k,q,-k) \nonumber\\ && - \, f^{(2)}_{\gamma\alpha\beta\mu}(-k,q,k)+f^{(2)}_{\gamma\beta\alpha\mu}(-k,k,q) \Big]+ 4 \, C_F \, f^{(3)}_{\alpha\beta\gamma\mu}(q,k,-k) \\ && + \, (4 \, C_F-N_c) \Big[ f^{(3)}_{\beta\gamma\alpha\mu}(k,-k,q) +f^{(3)}_{\gamma\alpha\beta\mu}(-k,q,k) \Big] \Bigg\} \,. \nonumber \end{eqnarray} {}From that expression a convenient representation is found in the form \begin{eqnarray} V_{\alpha\beta\gamma\mu}(q,k)& =& \Bigg\{ \left(6 \, C_F-N_c\right) \, V_{\alpha\mu}(q,\omega)\, V_{\beta\mu}(k,\omega)\, V_{\gamma\mu}(k,\omega) \nonumber\\ && \hspace{-3mm} +\, \frac{N_c}{2} \Big[ 2 \,\delta_{\alpha\mu} V_{\beta\mu}(k,\omega)\, V_{\gamma\mu}(k,\omega) - V_{\alpha\mu}(q,\omega)\, \big( \delta_{\beta\mu} \, V_{\gamma\mu}(k,\omega) + \delta_{\gamma\mu} \, V_{\beta\mu}(k,\omega) \, \big) \Big] \nonumber\\ && \hspace{-3mm}+\, 2 \, \omega \, \Big[3 \left( 4 \, C_F-N_c \right) \, C_{\alpha\beta\gamma\mu}(q,k) + N_c \, D_{\alpha\beta\gamma\mu}(q,k) \Big] \Bigg\} \,. 
\end{eqnarray} The structures $C_{\alpha\beta\gamma\mu}$ and $D_{\alpha\beta\gamma\mu}$, additionally contributing to $O(\omega)$, are \begin{eqnarray} C_{\alpha\beta\gamma\mu}(q,k)&=& - 4 \, \big[\delta_{\alpha\mu} \mss{p}{\gamma}-\delta_{\alpha\gamma}\ms{p}{\alpha}\ms{p}{\mu}\big]\, \big[\delta_{\beta \gamma}\mss{k}{\mu} - \delta_{\beta \mu} \ms{k}{\beta}\ms{k}{\gamma}\big] \nonumber \\ && - \, 4 \, \delta_{\gamma\mu} \ms{p}{\beta}\ms{k}{\alpha} \big[ \delta_{\alpha\beta} \ms{p}{\mu}\ms{k}{\mu } -\delta_{\alpha\mu} \ms{p}{\beta}\ms{k}{\beta} - \delta_{\beta\mu} \ms{p}{\alpha}\ms{k}{\alpha} \big] \nonumber \\ && - \, \delta_{\alpha\mu}\delta_{\beta\mu}\delta_{\gamma\mu} \, \big[ 2 s^2(p)+ 2 s^2(k) - s^2(p+k) - s^2(p-k) \big] \,, \nonumber \\ && \\ D_{\alpha\beta\gamma\mu}(q,k)&=& - 3\, \delta_{\alpha\gamma}\delta_{\beta\mu} \mc{p+k}{\beta}\mc{p+k}{\gamma} - 3\, \delta_{\alpha\beta}\delta_{\gamma\mu} \mc{p-k}{\beta}\mc{p-k}{\gamma} \nonumber \\ && + \, 4 \, \delta_{\beta\gamma}(\delta_{\alpha\beta}+\delta_{\beta\mu}) \ms{p}{\alpha}\ms{p}{\mu } + 4 \, \delta_{\alpha\mu} (\delta_{\beta\mu} +\delta_{\gamma\mu}) \ms{k}{\beta} \ms{k}{\gamma } \nonumber \\ && - \, 2\, \delta_{\alpha\mu} \delta_{\beta\mu} \delta_{\gamma\mu} \big[s^2(p)+s^2(k)\big] + 6 \, \delta_{\alpha\mu}\delta_{\beta\gamma} \, \big[2 \mcc{p}{\gamma}\mcc{k}{\alpha} -1 \big] \,. \nonumber \end{eqnarray} \renewcommand{\theequation}{B.\arabic{equation}} \setcounter{equation}{0} \section*{Appendix B: Three-point function - universal part} As discussed above, the universal part of the three-point function has the form (\ref{lambda2}) when $c_{SW}=1 + \mathcal{O}(g^2)$. Therefore, it is sufficient to give only the one-loop result for $\Lambda^{{\overline{MS}}}_{1,\mu}(p_1,p_2,q)$. 
It is cast into the following form ($q=p_2 - p_1$) \begin{eqnarray} \Lambda^{{\overline{MS}}}_{1,\mu}(p_1,p_2,q) &=&F_1(p_1,p_2)\,\gamma_\mu+ F_2(p_1,p_2)\,\ensuremath{\slashed{p}}_2\, \gamma_\mu\ensuremath{\slashed{p}}_1 \nonumber \\& & +\, [ F_3(p_1,p_2) \,p_{1,\mu}+F_4(p_1,p_2) \,p_{2,\mu}]\, \ensuremath{\slashed{p}}_1 \label{Blam1} \\& & +\, [ F_5(p_1,p_2)\,p_{2,\mu}+F_6(p_1,p_2)\,p_{1,\mu}]\, \ensuremath{\slashed{p}}_2 \,. \nonumber \end{eqnarray} Due to the symmetries $F_5(p_1,p_2)=F_3(p_2,p_1)$ and $F_6(p_1,p_2)=F_4(p_2,p_1)$ we have four independent functions $F_i(p_1,p_2)$ only. We represent them as follows: \begin{eqnarray} F_1(p_1,p_2)&=&4\,C_F\,\xi - \frac{N_c}{2} (12+2\xi-\xi^2) + 2\,\Theta \left(\,\mathcal{C}_{1}\,\,\mathcal{S}+ N_c\,p_1.p_2 + C_F \,q^2\right) \nonumber\\ & &+\left(C_F(1-\xi)+\frac{N_c}{4}(4-\xi)\right) \log\left( \frac{p_1^2p_2^2 }{\left(\mu^2\right)^2} \right) \\ & &+\, V_1(p_1,p_2) \log \left( \frac{p_1^2}{q^2}\right)+V_1(p_2,p_1) \log\left( \frac{p_2^2}{q^2}\right) \,, \nonumber \end{eqnarray} \begin{eqnarray} \hspace{-1mm}F_2(p_1,p_2)=\frac{\Theta}{8}\left(2N_c(6-\xi)+ \,\mathcal{C}_2\, \frac{p_1.p_2\, q^2}{\Delta} \right) +\frac{\,\mathcal{C}_2\,}{4 \Delta} \left[ p_1.q \log\left( \frac{p_1^2}{q^2}\right) - p_2.q \log\left( \frac{p_2^2}{q^2}\right) \right] \,, \end{eqnarray} \begin{eqnarray} F_3(p_1,p_2)&=&\,\mathcal{C}_3\,\frac{p_2^2}{2\,\Delta}+\frac{2\,N_c\,\xi}{q^2} +\frac{\Theta}{8\,\Delta} \Big[ 4 N_c\,\xi (p_1.p_2)^2 + \left(2 \,\mathcal{C}_3\, (6 \, \mathcal{S} + p_2^2)-\,\mathcal{C}_4\, p_1.q \right)\, p_2^2 \Big] \nonumber\\ & & +\, \frac{1}{q^2}\left[V_2(p_1,p_2) \log\left( \frac{p_1^2}{q^2}\right)+V_3(p_1,p_2) \log\left( \frac{p_2^2}{q^2}\right) \right] \,, \end{eqnarray} \begin{eqnarray} F_4(p_1,p_2)&=&-\,\mathcal{C}_3\, \frac{p_1.p_2}{2\,\Delta}-\frac{2\,N_c\,\xi}{q^2} + \frac{\Theta}{8\,\Delta} \Big[ 4 (8\,C_F- N_c (4-\xi))\, (p_1.p_2)^2 \nonumber\\ & & -\, (12 \,\mathcal{C}_3\, \mathcal{S} + 4 \,\mathcal{C}_6\, 
\, p_1^2 +(\,\mathcal{C}_5\,+ 8\, C_F(2+\xi) ) \, p_2^2) \, p_1.p_2 +\,\mathcal{C}_{7}\, \, p_1^2\, p_2^2 \Big] \\ & & +\, \frac{1}{q^2}\left[ V_4(p_1,p_2) \log\left( \frac{p_1^2}{q^2}\right)+V_5(p_1,p_2) \log\left( \frac{p_2^2}{q^2}\right) \right] \,. \nonumber \end{eqnarray} The function $V_i$ in front of the logarithms are found as follows \begin{eqnarray} V_1(p_1,p_2) &=& C_F (3+\xi)-\frac{N_c}{4} (4-\xi) +\,\mathcal{C}_{1}\, \frac{ p_2.q \,p_1^2 }{\Delta} \nonumber\,, \nonumber \\ V_2(p_1,p_2) &=& \frac{1}{4 \, \Delta} \Big[ (4 \,\mathcal{C}_3\,-\,\mathcal{C}_4\,- 4 N_c \, \xi)\, p_2^2 \, q^2 \nonumber \\ && + \, (12 \,\mathcal{C}_3\, \mathcal{S}+ 4 N_c \, \xi \, p_1.p_2 + (\,\mathcal{C}_5\, + 8 \,C_F)\, q^2) \,p_2.q \Big] \,, \nonumber\\ V_3(p_1,p_2) &=& \frac{1}{4\,\Delta\,p_1^2} \left[ -4 N_c\,\xi \, p_1.p_2 \, p_2.q \, p_1^2+\left( -12 \,\mathcal{C}_3\,\mathcal{S} \, p_1.q +\,\mathcal{C}_4\, p_1^2 \, q^2 \right) \,p_2^2 \right]\,, \\ V_4(p_1,p_2) &=& V_2(p_2,p_1) + \frac{1}{4\,\Delta} \left[ - 8 \, C_F (1+\xi)\, p_1.q + ( 4 \, C_F (1- 3 \xi) + N_c (5-\xi) \xi)\,p_1^2 \right] \,, \nonumber\\ V_5(p_1,p_2) &=& V_2(p_1,p_2) +\frac{1}{4\,\Delta}\left[ (8\,C_F +N_c (2-\xi)\xi)\, p_2.q + (1+\xi) (4 \, C_F + N_c \, \xi) p_2^2 \right] \,.\nonumber \end{eqnarray} We have introduced the kinematic functions \begin{eqnarray} \Delta &=& (p_1.p_2)^2 - p_1^2\,p_2^2\,, \quad \mathcal{S} = \frac{p_1^2\,p_2^2\,q^2}{4\,\Delta}\,, \nonumber \\ \Theta &=& \frac{4}{\pi^2\,\sqrt{\Delta}}\Bigg({\rm Sp}\left(\frac{p_2.q+\sqrt{\Delta}}{p_2^2} \right)- {\rm Sp}\left(\frac{p_2.q-\sqrt{\Delta}}{p_2^2} \right) \\ && + \, \frac{1}{2}\log\left( \frac{p_1.p_2-\sqrt{\Delta}}{p_1.p_2 +\sqrt{\Delta}} \right) \log\left( \frac{q^2}{p_2^2} \right)\Bigg)\,, \nonumber \end{eqnarray} with ${\rm Sp}(x)$ being the Spence function: $$ {\rm Sp}(x)=-\int_0^x\,dy \frac{\log (1-y)}{y}\nonumber\,. 
$$ The quantities $\cg{i}$ depend on the color factors and gauge parameter and have the values \begin{eqnarray} \,\mathcal{C}_{1}\, &=& C_F\,(3+\xi)-\frac{1}{2} N_c\,(1-\xi)\,, \nonumber \\ \,\mathcal{C}_2\, &=& 8\,C_F + N_c\, (2+(3-\xi)\xi))\,, \nonumber\\ \,\mathcal{C}_3\, &=& 4\,C_F\,(1+\xi)-N_c\,(4+(1-\xi)\xi)\,, \nonumber\\ \,\mathcal{C}_4\, &=& 8\,C_F\,(2+\xi)-N_c\,(12+(4-3\xi)\xi)\,,\\ \,\mathcal{C}_5\, &=& -N_c\,(4-(2+\xi)\xi)\,, \nonumber\\ \,\mathcal{C}_6\, &=& 4\,C_F-N_c\,(1-\xi)\,, \nonumber\\ \,\mathcal{C}_{7}\,&=& 8 \, C_F - N_c\, (16 -\xi^2)\,. \nonumber \end{eqnarray} In order to express the one-loop result (\ref{Blam1}) in terms of Spence functions, logarithms and rational functions of external momenta we have proceeded in two steps. First we have expanded all tensor integrals over the internal momentum into scalar three-point integrals times tensor functions of the external momenta~\cite{Kizilersu:1995iz}. Then we used recursion relations of Davydychev~\cite{Davydychev:1992xr} to reduce these scalar three-point integrals into scalar two-point integrals and $\Theta$. \renewcommand{\theequation}{C.\arabic{equation}} \setcounter{equation}{0} \section*{Appendix C: $\omega$-independence of $d_f$} We find $d_f$, the coefficient which tells us the fermionic shift in $\Lambda_{\rm lat}$, by calculating the massless quark vacuum polarization in a gluon with $a^2 q^2 \ll 1$: \begin{eqnarray} \lefteqn{ \Pi_{\alpha \beta}^{a b}(q; c_{SW}, \omega) =} \nonumber \\ & -& N_f \int \frac{d^4 k}{(2 \pi)^4} {\rm Tr} \left[ V_\alpha^a(q+k,k,q; c_{SW}, \omega) S(k) V_\beta^b (k, q+k, -q; c_{SW}, \omega) S(k+q) \right] \nonumber \\ & -& N_f \int \frac{d^4 k}{(2 \pi)^4} {\rm Tr} \left[ V_{\alpha \beta}^{\{a,b\}}(k,k,q,-q; c_{SW}, \omega) S(k) \right] \,. \end{eqnarray} The quark propagator $S$ and the vertices $V$ are defined in Appendix A, the trace here is over both spin and color. The corresponding one-loop diagrams are shown in Fig. \ref{fig4}. 
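Two ingredients of these expressions lend themselves to a quick numerical cross-check: the Spence function just defined and the coefficients $\mathcal{C}_i$. The sketch below is illustrative only and not part of the calculation; it takes $N_c=3$ (so $C_F=(N_c^2-1)/(2N_c)=4/3$) and sets the gauge parameter to the arbitrary value $\xi=0$, verifying the known value ${\rm Sp}(1)=\pi^2/6$ along the way.

```python
import numpy as np
from fractions import Fraction

def spence(x, n=200_000):
    """Sp(x) = -int_0^x dy log(1-y)/y by midpoint quadrature.

    Midpoint sampling avoids both endpoints, so the integrable
    logarithmic singularity at y = 1 (for x = 1) causes no trouble.
    """
    y = (np.arange(n) + 0.5) * (x / n)          # midpoints of n equal slices
    return -np.sum(np.log1p(-y) / y) * (x / n)  # log1p for accuracy near y = 0

print(spence(1.0), np.pi**2 / 6)                # known value Sp(1) = pi^2/6

# The coefficients C_1..C_7 for N_c = 3, C_F = (N_c^2 - 1)/(2 N_c) = 4/3,
# evaluated in exact rational arithmetic at the illustrative value xi = 0.
Nc = Fraction(3)
CF = (Nc**2 - 1) / (2 * Nc)
xi = Fraction(0)
C = {
    1: CF * (3 + xi) - Fraction(1, 2) * Nc * (1 - xi),
    2: 8 * CF + Nc * (2 + (3 - xi) * xi),
    3: 4 * CF * (1 + xi) - Nc * (4 + (1 - xi) * xi),
    4: 8 * CF * (2 + xi) - Nc * (12 + (4 - 3 * xi) * xi),
    5: -Nc * (4 - (2 + xi) * xi),
    6: 4 * CF - Nc * (1 - xi),
    7: 8 * CF - Nc * (16 - xi**2),
}
for i in sorted(C):
    print(f"C_{i} = {C[i]}")
```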
\begin{figure}[!htb] \begin{center} \includegraphics[scale=0.01,width=0.8\textwidth]{gluonself1.eps} \end{center} \caption{One-loop quark vacuum polarization diagrams.} \label{fig4} \end{figure} In the required limit of small $a^2 q^2$ we can expand in $q^2$ and drop any terms $\mathcal{O}(a^2q^4)$. We then get \begin{eqnarray} \Pi_{\alpha \beta}^{a b}(q; c_{SW}, \omega) &=& \Pi_{\alpha \beta}^{a b}(q; c_{SW}, 0) - 2\, \omega\, N_f\,\delta^{a b} g^2 a^2 \times \nonumber\\ & & \Bigg\{ \sum_\mu\,\left(q_\alpha q_\mu - q^2 \delta_{\alpha \mu}\right) \int \frac{d^4 k}{(2 \pi)^4} {\rm Tr} \left[ W_{1 \mu}(k,k) S(k) W_{1 \beta}(k,k) S(k) \right] \nonumber \\ & & +\, a \left(q_\alpha q_\beta - q^2 \delta_{\alpha \beta}\right) \int \frac{d^4 k}{(2 \pi)^4} {\rm Tr} \left[ W_{2 \beta}(k,k) S(k) \right] \label{Pilong} \\ & &+ \sum_\mu\,\left(q_\beta q_\mu - q^2 \delta_{\beta \mu}\right) \int \frac{d^4 k}{(2 \pi)^4} {\rm Tr} \left[ W_{1 \alpha}(k,k) S(k) W_{1 \mu }(k,k) S(k) \right] \nonumber \\ & & + \, a \left(q_\alpha q_\beta - q^2 \delta_{\alpha \beta}\right) \int \frac{d^4 k}{(2 \pi)^4} {\rm Tr} \left[ W_{2 \alpha}(k,k) S(k) \right]\Bigg\} + \mathcal{O}(a^2 q^4) \nonumber \end{eqnarray} where $\Pi_{\alpha \beta}^{a b}(q; c_{SW}, 0)$ is the vacuum polarization tensor with no smearing, $W_1$ and $W_2$ are the Wilson quark gluon vertices defined in (\ref{eq:A3}), and the trace is now only over the spin index. All $\omega^2$ terms have dropped out because they first appear at $\mathcal{O}(a^2q^4)$. Calculating $\Pi_{\alpha \beta}^{a b}(q; c_{SW}, 0)$ in one loop for $c_{SW}=1$ leads to the value of $d_f$ given in Eq.~(\ref{df}). From power counting we would at first expect the integrals $\propto \,\omega$ in (\ref{Pilong}) to have values proportional to $1/a^2$ or $1/a^3$, and to make a finite contribution to $d_f$. 
However we show now that there is a perfect cancellation between the continuum-like diagram Fig.~\ref{fig4}(a) (the integrals involving $W_1$) and the tadpole contribution Fig.~\ref{fig4}(b) (those with $W_2$). To do this we use the identities \begin{eqnarray} \frac{ \partial}{\partial k_\mu} S(k) &=& - \, S(k) W_{1 \mu}(k,k) S(k)\,, \\ \frac{ \partial}{\partial k_\mu} W_{1 \nu}(k,k) &=& - \, a \,\delta_{\mu \nu} W_{2 \mu}(k,k) \end{eqnarray} which follow immediately from the definitions. Eq.~(\ref{Pilong}) becomes \begin{eqnarray} \Pi_{\alpha \beta}^{a b}(q; c_{SW}, \omega) &=& \Pi_{\alpha \beta}^{a b}(q; c_{SW}, 0) + 2 \,\omega\, N_f\, \delta^{a b} g^2 a^2 \times\nonumber\\ & & \bigg\{ \sum_\mu\,\left(q_\alpha q_\mu - q^2 \delta_{\alpha \mu}\right) \int \frac{d^4 k}{(2 \pi)^4}\, \frac{\partial}{\partial k_\beta} {\rm Tr} \left[ W_{1 \mu}(k,k) S(k) \right] \label{Pishort} \\ &&+\, \sum_\mu\,\left(q_\beta q_\mu - q^2 \delta_{\beta \mu}\right) \int \frac{d^4 k}{(2 \pi)^4}\, \frac{\partial}{\partial k_\alpha} {\rm Tr} \left[ W_{1 \mu}(k,k) S(k) \right]\bigg\} + \mathcal{O}(a^2 q^4) \nonumber \,. \end{eqnarray} The integrals are now zero because $W_1$ and $S$ are periodic, \begin{equation} \int_{-\pi/a}^{\pi/a} d k_\alpha \frac{\partial}{\partial k_\alpha} {\rm Tr} \left[ W_{1 \mu}(k,k) S(k) \right] = {\rm Tr} \left[ W_{1 \mu}(k,k) S(k) \right] \Big|_{k_\alpha=-\pi/a}^{k_\alpha=\pi/a} = 0 \,. \end{equation} Thus we have proved that the vacuum polarization is independent of smearing the one-link part of the fermion action, \begin{equation} \Pi_{\alpha \beta}^{a b}(q; c_{SW}, \omega) = \Pi_{\alpha \beta}^{a b}(q; c_{SW}, 0) + \mathcal{O}(a^2 q^4) \end{equation} which implies that $d_f$ depends on $r$ and $c_{SW}$, but not on $\omega$.
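The key step above, the vanishing of the Brillouin-zone integral of a total derivative of a periodic function, can be illustrated with a toy example. The function below is a smooth periodic stand-in, not the actual $\mathrm{Tr}\left[W_{1\mu}(k,k)S(k)\right]$:

```python
import numpy as np

# Toy illustration of the periodicity argument: for any smooth function that
# is periodic over the Brillouin zone, the integral of its total derivative
# vanishes (the trapezoidal sum of central differences telescopes to the
# boundary difference f(pi) - f(-pi), which is zero by periodicity).
k = np.linspace(-np.pi, np.pi, 4001)
f = np.exp(np.cos(k)) * np.sin(2.0 * k)        # periodic stand-in integrand
df = np.gradient(f, k)                         # numerical df/dk
integral = np.sum((df[1:] + df[:-1]) / 2.0 * np.diff(k))
print(integral)                                # ~ 0 up to rounding error
```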
\section{Introduction} The development of robust methods for the computation of nonequilibrium properties of quantum many-particle systems is a crucial issue in present-day condensed matter physics, with impact on topics ranging from nonequilibrium transport in nanostructures\cite{Goldhaber-Gordon98} to pump-probe spectroscopy of bulk condensed matter systems\cite{Iwai03, Iwai06} and the wide range of new spectroscopies possible in cold atom systems.\cite{Schneider08} An important step forward occurred with the development of continuous-time quantum Monte Carlo (CTQMC) methods for impurity models. These algorithms were first introduced as imaginary-time methods for obtaining equilibrium properties \cite{Rombouts99,Rubtsov05,Werner06,Gull08_ctaux} and soon afterwards were extended to real-time dynamics and nonequilibrium problems.\cite{Muehlbacher08,Schmidt08,Werner09, Schiro09} The continuous-time methods are in essence stochastic samplings of diagrammatic expansions of the time evolution operator. The mean perturbation order required in the calculation increases with the time (or inverse temperature) to be studied, and the calculations are limited by the perturbation order which can be achieved with given computational resources. In the equilibrium case one considers the imaginary-time evolution operator $\exp[-\tau H]$, which is real and positive definite, so the computational task is to estimate a sum of real (decaying) exponentials and the only sign problem which arises is the fermion sign problem occurring in models complicated enough to sustain fermion loops. For these reasons the CTQMC methods have proven to be very powerful in the equilibrium context.\cite{Gull07_comparison} In the real-time case, on the other hand, one must consider the intrinsically complex time evolution operator $\exp[-itH]$, and convergence comes from the cancellation of oscillations. 
The theoretical task is therefore to estimate a sum of terms with oscillating signs or rotating phases, and a severe ``dynamical'' sign problem occurs even in the absence of fermion loops. The average sign decreases exponentially with perturbation order, which limits the accessible range of interaction strengths and simulation times. Because of these limitations, important questions such as the nonequilibrium Kondo effect have so far not been adequately addressed. The equilibrium Kondo effect in quantum dots,\cite{Ng88, Glazman88} which involves the formation of a scattering resonance (density of states peak) at the Fermi level, was experimentally confirmed in the zero bias limit.\cite{Goldhaber-Gordon98} While at very low voltage biases the pinning of the Kondo resonance to the Fermi level leads to an unrenormalized conductance for symmetric dots, it is well known that a high voltage bias destroys the Kondo effect. The crossover from the low voltage universal regime (the ``linear response regime'') to the higher bias Coulomb blockade regime is presently not understood. It has been proposed on the basis of the noncrossing approximation,\cite{Meir93} real-time diagrammatic methods,\cite{Konig96} and perturbative calculations\cite{Fujii03} that the peak in the density of states splits into two in a certain parameter regime. Our previous investigation of the nonequilibrium Anderson model\cite{Werner09} produced no sign of this phenomenon, but the accuracy of the simulations in the low-bias region was not sufficient to settle the issue. Methodological improvements allowing a more accurate numerical study of the small-to-intermediate voltage regime are therefore needed. The existing continuous-time quantum Monte Carlo approaches for nonequilibrium systems are more-or-less direct extensions of the imaginary-time algorithms previously developed. 
It appears worthwhile to attempt to optimize them, even though the dynamical sign problem inherent in these methods unavoidably limits what can be achieved. In this paper we present an efficient implementation of the weak-coupling diagrammatic Monte Carlo method for nonequilibrium systems, describing ways to reorganize the expansion and to improve the measurement formulae in order to increase the accuracy of the numerical data for a given set of parameters. The method introduced previously\cite{Werner09} corresponds to the simulation of a system prepared in the nonequilibrium but non-interacting state, with the interaction turned on at time $t=0$. We refer to this simulation method as an ``interaction quench''. Since the real-time methods compute the time evolution of the system after the quench, an important consideration is the time needed for the system to evolve to the interacting steady state. Optimized preparation of the initial ensemble has the potential to reduce this relaxation time, thereby leading to simulations requiring a smaller total time interval for the measurement of a given property. Motivated by this idea we extend the formalism from two real-time branches to an ``L-shaped'' contour which includes an imaginary time branch. Evolution along the imaginary time branch may be thought of as preparing the system in a correlated equilibrium state, after which the voltage is turned on at time $t=0$. We refer to this simulation method as a ``voltage quench''. One purpose of this paper is to compare interaction and voltage quenches. We will show that at temperature $T=0$ interaction quenches are suitable for voltage biases larger than the Kondo temperature (i.e., for voltage biases large enough to suppress the Kondo resonance in the many-body density of states). The times which can be reached in the Monte Carlo simulation are long enough to observe convergence into a steady state even at large interaction strengths. 
On the other hand, if the voltage bias is small and the temperature is finite, the voltage quench is a suitable alternative, because it allows the important ground state correlations to be built up via the computationally less problematic imaginary time evolution. We show that our optimized implementation allows the computation of accurate currents over a wide voltage range, even for interaction strengths which are clearly outside the reach of low-order perturbation theory. We use the numerical results to test predictions based on fourth order perturbation theory in the self-energy.\cite{Fujii03} We determine the largest interaction strength for which the perturbation theory provides accurate results over the entire voltage range, and for larger interactions, the voltage window where deviations appear. The predicted splitting of the Kondo resonance \cite{Meir93,Konig96,Fujii03} is not evident in the numerical data. The rest of this paper is organized as follows. In section \ref{Model} we introduce the model to be solved and present the methods used to solve it, in particular defining the voltage and interaction quenches. Sections \ref{interactionquench} and \ref{voltagequench} present results for the interaction and voltage quenches respectively. Section \ref{iv} gives results for the current-voltage characteristics of the model and section \ref{conclusions} is a summary and conclusion. \section{Model and methods\label{Model}} \subsection{Model} We consider the one-orbital Anderson impurity model, which describes a single spin-degenerate ($\sigma$) level with a Hubbard interaction $U$ (the ``dot") coupled by hybridization $V$ to two reservoirs (``leads") labeled by $\alpha=L,R$. 
The Hamiltonian $H_{QI}=H^0_\text{dot}+H_U+H_\text{bath}+H_\text{mix}$ of this model contains the terms \begin{eqnarray} H_\text{bath} &=& \sum_{\alpha=L,R} \sum_{p,\sigma} \big(\epsilon^\alpha_{p,\sigma}-\mu_\alpha \big)a^{\alpha \dagger}_{p,\sigma} a^\alpha_{p,\sigma},\label{H_bath}\\ H_\text{mix} &=& \sum_{\alpha=L,R} \sum_{p,\sigma} \big(V_p^\alpha a^{\alpha \dagger}_{p,\sigma}d_\sigma+h.c. \big),\label{H_mix}\\ H^0_\text{dot}&=&\epsilon_d\sum_\sigma n_{d,\sigma},\label{H_d}\\ H_{U} &=& U(n_{d,\uparrow} n_{d,\downarrow}-(n_{d,\uparrow}+n_{d,\downarrow})/2).\label{H_U} \end{eqnarray} In the following we will consider two sources of time dependence in $H_{QI}$. In the interaction quench we take $U=0$ for times $t<0$ with an instantaneous step to a non-zero $U$ at $t=0$; in the voltage quench we take $\mu_L=\mu_R$ for times $t<0$ with an instantaneous step to a nonzero $\mu_L-\mu_R$ at $t=0$. We assume that the lead electrons equilibrate instantly to the new chemical potential, so that the equal-time correlators of lead operators are $\langle a^{\alpha\dagger}_{p,\sigma}a^{\beta}_{p',\sigma'}\rangle=\delta_{\alpha,\beta}\delta_{p,p'}\delta_{\sigma,\sigma'}f_{T_\alpha}(\epsilon^\alpha_{p,\sigma}-\mu_\alpha)$, with $f_T(x)=(e^{x/T}+1)^{-1}$ the Fermi distribution function for temperature $T$ and $\mu_\alpha$ the value of the chemical potential for lead $\alpha$ at the appropriate time. In this paper we will consider only symmetric voltage biases ($\mu_L=-\mu_R=V/2$) and half-filled dots ($\epsilon_d=0$). The consequences of relaxing these assumptions will be briefly mentioned in the conclusions. The energy scales of the model are set by the level broadenings \begin{equation} \Gamma^\alpha(\omega)=\pi\sum_p|V_p^\alpha|^2\delta(\omega-\epsilon_p^\alpha) \label{Gamdef} \end{equation} associated with the leads $\alpha$. The total level broadening \begin{equation} \Gamma=\Gamma^L+\Gamma^R \label{Gammatotal} \end{equation} is used as the energy unit throughout the paper. 
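The leads enter the simulation only through the Fermi factors $f_T$. A minimal, overflow-safe sketch of this distribution (with $k_B=1$, as implicit in the formula above; the implementation details are illustrative, not from the paper):

```python
import numpy as np

def fermi(x, T):
    """Fermi distribution f_T(x) = 1/(exp(x/T) + 1), with k_B = 1.

    Written with np.where and exp of a non-positive argument so that
    large |x|/T cannot overflow.
    """
    x = np.asarray(x, dtype=float)
    e = np.exp(-np.abs(x) / T)           # always <= 1, no overflow
    return np.where(x >= 0, e / (1.0 + e), 1.0 / (1.0 + e))

T = 0.1
print(fermi(0.0, T))                      # 1/2 at the chemical potential
print(fermi(1.0, T) + fermi(-1.0, T))     # particle-hole symmetry: sums to 1
```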
We consider flat bands centered at zero, with a high-energy cutoff $\omega_c$. As we shall see in Section \ref{voltagequench}, a sharp high-frequency cutoff leads to oscillations in the time evolution of the current after a voltage quench. A sufficiently smooth band cutoff damps the oscillations but does not affect the steady state current. We adopt a Fermi-function-like smoothing with ``smoothing parameter'' $\nu$, \begin{equation} \Gamma^{L,R}(\omega)=\frac{\Gamma^{L,R}}{(1+e^{\nu(\omega-\omega_c)})(1+e^{-\nu(\omega+\omega_c)})}. \end{equation} \subsection{Real-time Monte Carlo method: weak-coupling approach} We use the weak-coupling formulation of the real-time diagrammatic Monte Carlo approach as described in Ref.~\onlinecite{Werner09}. This is a real-time implementation of the continuous-time auxiliary field algorithm,\cite{Gull08_ctaux} which is based on the combination of a weak-coupling expansion and an auxiliary field decomposition. Here, we will briefly summarize the main aspects of this method and then discuss some relevant issues concerning its efficient implementation. In order to enable simulations starting from an interacting initial state, we formulate the method on the L-shaped contour which runs from $0$ to $t$ and back to $0$ along the Keldysh real time axis, then to $-i\beta$ along the imaginary time axis. \begin{figure}[t] \begin{center} \includegraphics[angle=0, width=0.8\columnwidth]{Lcontour.eps} \caption{Illustration of the Keldysh contour for the interaction quench (top panel) and voltage quench (bottom panel). In an interaction quench starting from $U=0$, the imaginary time branch of the contour is shifted to $t=-\infty$ and need not be explicitly considered in the Monte Carlo simulation. The red arrows represent auxiliary Ising spin variables. 
The top panel shows a Monte Carlo configuration corresponding to perturbation order $n_+=2$, $n_-=2$, and the bottom panel a configuration corresponding to $n_+=3$, $n_-=2$, $n_\beta=2$.} \label{Lcontour} \end{center} \end{figure} The weak coupling algorithm may be taken to start from the following expression for the partition function $Z=Tr e^{-\beta H}$: \begin{eqnarray} Z &=& e^{K_\beta}Tr \big[ e^{-\beta(H_\text{bath}^\text{eq}+H^0_\text{dot}+H_\text{mix}+H_{\tilde U}-K_\beta/\beta)} \nonumber \\ &&\hspace{12mm}\times e^{it(H_\text{bath}^\text{neq}+H^0_\text{dot}+H_U+H_\text{mix}-K_t/t)} \nonumber\\ &&\hspace{12mm}\times e^{-it(H_\text{bath}^\text{neq}+H^0_\text{dot}+H_U+H_\text{mix}-K_t/t)}\big], \label{Z} \end{eqnarray} with $K_\beta$ and $K_t$ some arbitrary (non-zero) constants. The interaction and the chemical potentials need not be the same on the imaginary time branch as they are on the real-time branches. In the formalism as written the interaction and chemical potentials are taken to be time independent on the real-time branches, but it is straightforward to generalize the method to time-dependent $U$ and $\mu$. The notation $H_\text{bath}^\text{neq}$ indicates that on the real-time portion of the contour the two leads have different chemical potentials $\mu_\alpha=\mu_0\pm \delta \mu$, whereas $H_\text{bath}^\text{eq}$ means that on the imaginary time portion of the contour the two leads have the same chemical potential $\mu_0$. Henceforth we choose energies such that $\mu_0=0$ and consider a symmetrically applied bias voltage $V$ ($\delta \mu=V/2$). The time evolution along the real-time and imaginary-time contours is expanded in powers of $H_U-K_t/t$ and $H_U-K_\beta/\beta$, respectively. 
Each interaction vertex is then decoupled using Ising spin variables according to the formula\cite{Rombouts99} ($x=t$ or $\beta$) \begin{eqnarray} H_U-K_x/x &=& -\frac{K_x}{2x}\sum_{s=-1,1}e^{\gamma_x s (n_{d,\uparrow}-n_{d,\downarrow})},\\ \cosh(\gamma_x)&=&1+(xU)/(2K_x). \label{decouple} \end{eqnarray} The resulting collection of Ising spin variables on the contour represents the Monte Carlo configuration $\{ (t_{1}, s_1),(t_{2}, s_2), \ldots (t_{n}, s_{n}) \}$, with $t_i$ denoting the position of spin $i$ on the L-shaped contour (see illustration in Fig.~\ref{Lcontour}). There are $n_+$ spins on the forward branch, $n_-$ spins on the backward branch and $n_\beta$ spins on the imaginary-time branch of the contour ($n=n_++n_-+n_\beta$). The weight of such a configuration is obtained by tracing over the dot and lead degrees of freedom and can be expressed in terms of two determinants of $n\times n$ matrices $N_\sigma^{-1}$:\cite{Gull08_ctaux} \begin{widetext} \begin{eqnarray} w(\{ (t_{1}, s_1),(t_{2}, s_2), \ldots (t_{n}, s_{n}) \})&=&(-i)^{n_-}(i)^{n_+}(K_t dt/2t)^{n_-+n_+}(K_\beta d\tau/2\beta)^{n_\beta}\prod_\sigma \det N_\sigma^{-1},\label{weight}\\ N_\sigma^{-1} &=& e^{S_\sigma}-(iG_{0,\sigma})(e^{S_\sigma}-I). \end{eqnarray} \end{widetext} Here $(G_{0,\sigma})_{ij}=G_{0,\sigma}(t_i,t_j)$ is the $ij$ element of the $n\times n$ matrix of non-interacting Green functions \begin{equation} G_{0,\sigma}(t,t')=-i\langle \text{T}_\mathcal{C} d_\sigma(t)d^\dagger_\sigma(t')\rangle_0 \label{eqn:G0input} \end{equation} computed using the possibly time-dependent chemical potentials and evaluated at the time arguments defined by the Ising spins. The quantity $e^{S_\sigma}=\text{diag}(e^{\gamma_1 s_1\sigma}, \ldots, e^{\gamma_n s_n \sigma})$ is a diagonal matrix depending on the spin variables (with $\gamma_i=\gamma_t$ for spins located on the real-time branches and $\gamma_i=\gamma_\beta$ for spins on the imaginary time branch). 
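The decoupling formula can be verified directly on the four dot occupation eigenstates, for which both sides of Eq.~(\ref{decouple}) are diagonal. A quick numerical sketch (parameter values are illustrative, not from the paper); note that the identity also holds for negative $K_x$, where $\gamma_x$ is complex; in particular $K_x=-xU/4$ gives $\cosh\gamma_x=-1$, i.e. $\gamma_x=i\pi$:

```python
import cmath

# Numerical check of the auxiliary-field decoupling on the four occupation
# states (n_up, n_dn). Illustrative values: x = t = 1, U = 2, two K choices.
x, U = 1.0, 2.0
for K in (0.5, -x * U / 4.0):               # K = -xU/4 gives gamma = i*pi
    gamma = cmath.acosh(1.0 + x * U / (2.0 * K))
    for n_up in (0, 1):
        for n_dn in (0, 1):
            lhs = U * (n_up * n_dn - (n_up + n_dn) / 2.0) - K / x
            rhs = -(K / (2.0 * x)) * sum(cmath.exp(gamma * s * (n_up - n_dn))
                                         for s in (-1, 1))
            assert abs(lhs - rhs) < 1e-12   # both sides agree on every state
print("decoupling identity verified; gamma at K = -xU/4:", gamma)
```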
${\text T}_\mathcal{C}$ is the contour ordering operator, which exchanges the product $A(t) B(t')$ of two operators if $t$ is earlier on the contour than $t'$ (a minus sign is added if the exchange involves an odd number of Fermi operators). A Monte Carlo sampling of all possible spin configurations is then implemented based on the absolute value of the weights (\ref{weight}). The contribution of a specific configuration $c=\{ (t_{1}, s_1),(t_{2}, s_2), \ldots (t_{n}, s_{n}) \}$ to the current is given by \cite{Werner09} \begin{widetext} \begin{eqnarray} A_\sigma^c(t,t')&=&A_{0,\sigma}(t,t')+i\sum_{i,j=1}^n G_{0,\sigma}(t,t_{i})[(e^{S_\sigma}-I)N_\sigma]_{i,j}A_{0,\sigma}(t_{j}, t'),\label{eqA} \label{tildeA} \end{eqnarray} \end{widetext} with the first term on the right-hand side giving the contribution to the non-interacting current and the second term a correction due to the interactions. In Eq.~(\ref{tildeA}) \begin{equation} A_{0,\sigma}(t,t')=\langle \text{T}_\mathcal{C} \tilde a^{L\dagger}_\sigma(t')d_\sigma(t)\rangle_0 \label{defA0} \end{equation} denotes a dot-lead correlation function of the noninteracting model for the composite left lead operator $\tilde a^{L}_{\sigma}=\sum_p V^L_p a^L_{p,\sigma}$. The current expectation value is \begin{equation} I(t)=-2\text{Im} \sum_\sigma[\langle A^c_\sigma(t,t) \phi_c\rangle/\langle \phi_c\rangle], \end{equation} where $\langle \cdot \rangle$ denotes the Monte Carlo average and $\phi_c$ the phase of the weight of the configuration $c$. In an interaction quench, the imaginary-time evolution is not explicitly considered in the Monte Carlo simulation and temperature appears only as a parameter in the noninteracting Green functions (see Fig.~\ref{Lcontour}). Moreover, the latter depend only on time differences, and thus can be easily expressed in terms of their Fourier transform. 
Assuming a large band cutoff and neglecting the real part of the lead self-energy we find\cite{Jauho94, Werner09} \begin{widetext} \begin{eqnarray} G_0(t,t')&=&2i\sum_{\alpha=L,R}\int \frac{d\omega}{2\pi}e^{-i\omega(t-t')}\frac{\Gamma^\alpha(\omega)(f(\omega-\mu_\alpha)-\Theta_\mathcal{C}(t,t'))}{(\omega-\epsilon_d-U/2)^2+\Gamma^2},\label{G0}\\ A_0(t,t')&=&-2i\int \frac{d\omega}{2\pi}e^{-i\omega(t-t')}\frac{\Gamma_L(\omega) \Gamma_R(\omega) (f(\omega-\mu_L)-f(\omega-\mu_R))}{(\omega - \epsilon_d-U/2)^2+\Gamma(\omega)^2}\nonumber\\ &&+2\int \frac{d\omega}{2\pi}e^{-i\omega(t-t')}\frac{\Gamma_L(\omega)(\omega - \epsilon_d-U/2)(f(\omega-\mu_L)-\Theta_\mathcal{C}(t,t'))}{(\omega - \epsilon_d-U/2)^2+\Gamma^2}. \label{A0} \end{eqnarray} \end{widetext} In the voltage quench, on the other hand, the interaction is non-vanishing on the imaginary time portion of the contour (Fig.~\ref{Lcontour}), while the chemical potential difference jumps instantaneously from zero (on the imaginary branch) to $V$ (on the real branches). Because of the time dependence of the chemical potentials, the noninteracting Green functions are not time translation invariant and we cannot express $G_{0,\sigma}$ and the dot-lead correlator $A_{0,\sigma}$ in the form of a Fourier transform. Instead, those functions must be computed numerically from their equations of motion, as explained in the appendix. \subsection{Optimization of the Monte Carlo sampling} The sign (phase) problem in the weak-coupling CTQMC method grows exponentially with the average perturbation order on the real-time branches, which in turn is proportional to the simulation time, while operators on the imaginary time branch do not add significantly to the sign problem. To reach long times or strong interactions, it is therefore important to reduce the average perturbation order on the real-time branches as much as possible. 
An essential point to note in this context is that in the particle-hole symmetric case, the parameters $K_x$ of the algorithm can be chosen such that only even perturbation orders appear in the expansion. In fact, for \begin{equation} K_x=-xU/4 \end{equation} the spin degree of freedom effectively disappears ($e^{\gamma s \sigma}=-1$) and the algorithm becomes the real-time version of Rubtsov's weak-coupling method\cite{Rubtsov05} for the particle-hole symmetric interaction term $H_U-K_x/x=U(n_{d,\uparrow}-\frac{1}{2})(n_{d,\downarrow}-\frac{1}{2})$. (For a detailed discussion of the equivalence between the Rubtsov and CTAUX methods for the Anderson impurity model, consult Ref.~\onlinecite{Karlis09}). The odd perturbation orders are continuously suppressed as $K_x$ approaches $-xU/4$. For $K_x=-xU/4+\delta$ and sufficiently small $\delta$, the average perturbation order can be reduced by about half compared to the $|K|=0.1$ used in the simulations presented in Ref.~\onlinecite{Werner09}. This in turn allows us to reach times and interaction strengths which are a factor of two larger. We note in passing that the suppression of odd perturbation orders was also essential in the nonequilibrium dynamical mean field calculations of Ref.~\onlinecite{Eckstein09}. We next discuss some tricks to improve the efficiency of the current measurement. First, we rewrite Eq.~(\ref{eqA}) as \begin{widetext} \begin{equation} A_\sigma^c(t,t')=A_{0,\sigma}(t,t')+\int ds_1 \int ds_2 G_{0,\sigma}(t,s_1) \Big\langle i \sum_{i,j=1}^n \delta_\mathcal{C}(s_1,t_{i})[(e^{S_\sigma}-I)N_\sigma]_{i,j}\delta_\mathcal{C}(s_2,t_{j})\Big\rangle A_{0,\sigma}(s_2, t'), \label{X} \end{equation} \end{widetext} where the variables $s_1$ and $s_2$ run over the entire contour and the contour delta function is defined by $\int ds \delta_\mathcal{C}(t,s)f(s)=f(t)$. 
It is therefore sufficient to accumulate the quantity \begin{equation} X_\sigma(s_1, s_2)=\Big\langle i \sum_{i,j=1}^n \delta_\mathcal{C}(s_1,t_{i})[(e^{S_\sigma}-I)N_\sigma]_{i,j}\delta_\mathcal{C}(s_2,t_{j})\Big\rangle. \end{equation} Furthermore, it follows from Eq.~(\ref{weight}) that the weight of a Monte Carlo configuration changes sign if the last spin (corresponding to the largest time argument) is shifted from the forward contour to the backward contour or vice versa. Since the absolute value of the weight does not change, these two configurations will be generated with equal probability. As a result, all the terms in Eq.~(\ref{X}) which do not involve the last operator on the contour will cancel. It is therefore more efficient and accurate to accumulate \begin{align} &X_\sigma(s_1, s_2)=\Big\langle i(1-\delta(\{t_i\}))\sum_{i,j=1}^n x(s_1, i; s_2, j)\nonumber\\ &\hspace{5mm}+i\delta(\{t_i\})\sum_{l \text{ not last}}^n [ x(s_1, \text{last}; s_2, l)+x(s_1, l; s_2, \text{last})] \Big\rangle, \end{align} with $x(s_1, i; s_2, j) \equiv \delta_\mathcal{C}(s_1,t_i)[(e^{S_\sigma}-I)N_\sigma]_{i,j}\delta_\mathcal{C}(s_2,t_j)$ and $\delta(\{t_i\})=1$ if $\max_i \text{Re}(t_i)>0$ and 0 otherwise. Also, by comparing the contributions to the current of the original configuration and the one with the last operator shifted from the upper to the lower contour (or vice versa), one finds that they almost (but not completely) cancel. The error bars on the current can thus be substantially reduced by appropriate symmetrizations of $X(s_1, s_2)$. \section{Results: Interaction quench \label{interactionquench}} \subsection{Convergence to the long time limit: large bias voltage} Calculations based on an interaction quench from $U=0$ are particularly simple, because there are no interaction vertices (or spins) on the imaginary time branch, and only the real-time branches of the contour need to be considered in the simulation. 
Temperature enters only as a parameter in the lead correlators, making it possible to treat arbitrary temperatures, including $T=0$. \begin{figure}[t] \begin{center} \includegraphics[angle=-90, width=0.9\columnwidth]{i_t_v4_bw.eps} \caption{Time evolution of the current for $V/\Gamma=4$ and different interaction strengths ($T=0$). In the initial state, the current is given by the steady state current through the non-interacting dot. At time $t=0$, the interaction is turned on. After a time of a few inverse $\Gamma$, the current saturates at the value corresponding to the steady state current in the interacting dot. } \label{i_t_v4} \end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[angle=-90, width=0.9\columnwidth]{i_t_u6_bw.eps} \caption{Time evolution of the current for different voltage biases and interaction strength $U/\Gamma=6$ ($T=0$). In the initial state, the current is given by the steady state current through the non-interacting dot. At time $t=0$, the interaction is turned on. } \label{i_t_u6} \end{center} \end{figure} At time $t=0$, the system is non-interacting but subject to an applied bias $V$, so a current $I_0(V)$ appropriate to the non-interacting model is flowing through the dot. At $t=0_+$ the interaction is turned on and the system relaxes into the steady-state configuration appropriate to the interacting model. Figure~\ref{i_t_v4} shows the time dependence of the current calculated for the large bias voltage $V/\Gamma=4$ and several interaction strengths. We see that the transient behavior is such that the current initially decreases sharply, overshoots and eventually relaxes more slowly back up into the new steady state. The interaction-dependence of the steady-state current is a consequence of the Coulomb blockade physics, apparent even at the large voltages studied here.
For intermediate and large voltage bias ($V/\Gamma\gtrsim 2$) and not too large interaction ($U/\Gamma \lesssim 8$) the time required for convergence to the steady state is $t\Gamma\approx 2$, essentially independent of interaction strength. Given the scaling of the perturbation order (and hence the sign problem) with $U$ and $t$, interactions up to $U/\Gamma \lesssim 10$ are accessible with the current implementation. A comparison of Fig.~\ref{i_t_v4} to Fig.~13 of Ref.~\onlinecite{Werner09} shows that the technical improvements introduced in this paper have substantially extended the range of applicability of the weak-coupling Monte Carlo method (about a factor 2-3 in $U$ or $t$) and allow us to obtain accurate results in the intermediate-to-strong correlation regime. In Fig.~\ref{i_t_u6} we plot the time evolution of the current for fixed $U/\Gamma=6$ and several voltage biases. For voltages $V/\Gamma \gtrsim 2$, even though the transient behavior is clearly voltage-dependent, the current settles into the new steady state after a time $t\Gamma \approx 2$. However, as the voltage is decreased below $V/\Gamma\approx 2$ the transient time increases. At $V=\Gamma$ the long time limit is attained only for $t\Gamma \gtrsim 3$ and as $V$ is further decreased the approach to the asymptotic behavior becomes even slower. \subsection{Convergence to the long time limit: small bias voltage} To better analyze the approach to steady state at small voltages we present in the upper panel of Fig.~\ref{i_t_smallv} the time dependence of the current for several smaller voltages and two interaction strengths. For better comparison, we plot here the ratio $I/I_0$ of the interacting current $I$ to the noninteracting current $I_0$. One sees that as $V$ is decreased or $U$ is increased the evolution of the current from the post-quench minimum to the long-time steady state value takes an increasingly long time.
Since the longest accessible time is $t\Gamma\approx 6$ for $U/\Gamma=4$ and $t\Gamma\approx 4$ for $U/\Gamma=6$, the accurate measurement of $I$ becomes impossible in the small voltage regime. However, the short-time transient behavior is accessible at all voltages. While the ratio $(I/I_0)(t)$ is clearly voltage dependent at higher biases, as $V$ is reduced the data seem to converge to a non-trivial curve with a pronounced minimum near an only weakly $U$-dependent time $t\Gamma\approx 1$. We believe that the increasingly slow convergence as $V\rightarrow 0$ is a signature of the Kondo effect, which is characterized by an energy scale which becomes exponentially small as $U$ increases. After the interaction quench, the Kondo resonance has to be built up as time progresses, and in the limit $V\rightarrow 0$, $T\rightarrow 0$ this requires an increasingly large number of interaction vertices and hence an increasingly long simulation time. On physical grounds one expects that the time needed to evolve into the steady state is proportional to the inverse of the associated energy scale. Empirically, we find that the slow relaxation becomes an issue in the linear response regime, where the non-interacting and interacting currents are very similar. For $V/\Gamma\gtrsim 0.5$, where the interacting current is substantially smaller than $I_0$, a useful estimate of $I$ seems possible, even though in the voltage window up to $V/\Gamma\approx 2$ a small drift in the current may remain up to the longest accessible times. This drift makes it difficult to define reliable error bars on $I$, but it appears unlikely that the steady state value will differ from $I(t_\text{max})$ by more than the largest deviation in the window $[t_\text{max}/2, t_\text{max}]$, which we use as error estimate.
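The error estimate just described, taking $I(t_\text{max})$ as the steady-state value with an uncertainty given by the largest deviation over the window $[t_\text{max}/2, t_\text{max}]$, is straightforward to apply to a sampled current trace. A minimal sketch (the function name and the synthetic data in the example are illustrative, not simulation output):

```python
def steady_state_estimate(t, I):
    """Return (I(t_max), error bar) from sampled times t (ascending) and
    currents I, with the error bar defined as the maximum deviation of I(t)
    from I(t_max) over the window [t_max/2, t_max]."""
    t_max = t[-1]
    I_final = I[-1]
    window = [Ii for ti, Ii in zip(t, I) if ti >= t_max / 2.0]
    err = max(abs(Ii - I_final) for Ii in window)
    return I_final, err

# synthetic example of a current relaxing towards a steady state
est, err = steady_state_estimate([0, 1, 2, 3, 4], [0.5, 0.3, 0.35, 0.38, 0.39])
print(est, err)  # 0.39 with error bar 0.04
```

For a trace that has genuinely converged, the residual drift in the window gives a conservative but reliable uncertainty; for a still-drifting trace it correctly signals a large error bar.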
For $V/\Gamma=0.25$, an accurate estimate is not possible from the interaction quench procedure, but the current is very close to the linear response value, so the uncertainty is in fact not that important. This is illustrated in the bottom panel of Fig.~\ref{i_t_smallv}, which compares the Monte Carlo data to the noninteracting current and results from fourth order perturbation theory.\cite{Fujii03} The plot also indicates as a hashed region the voltage range $V/\Gamma\lesssim 0.4$ where accurate measurements of the long-time limit become prohibitively difficult at $T=0$. We will see below that this roughly corresponds to voltages smaller than the Kondo temperature $T_K$. \begin{figure}[t] \begin{center} \includegraphics[angle=-90, width=0.9\columnwidth]{smallv_anomaly_norm.eps} \includegraphics[angle=-90, width=0.9\columnwidth]{iv_anomaly.eps} \caption{Interaction quench in the small-voltage regime ($T=0$): The top panel shows the ratio of interacting to noninteracting current for $U/\Gamma=4$ and $U/\Gamma=6$ and indicated voltage biases. For $V/\Gamma \lesssim 0.5$ the time needed to reach the steady state grows much beyond the largest time accessible in the Monte Carlo simulation. Bottom panel: comparison of $U$-quench estimates (symbols, red and blue online) for the steady-state current to the noninteracting current (thick black line) and fourth order perturbation theory\cite{Fujii03} (light lines, red and blue online). For $V/\Gamma \ge 0.5$ the Monte Carlo data show the value of $I(t_\text{max})$ with error bar $\max_{t\in [t_\text{max}/2,t_\text{max}]}|I(t)-I(t_\text{max})|$ ($t_\text{max}\Gamma = 6$ for $U/\Gamma=4$ and $t_\text{max}\Gamma = 4$ for $U/\Gamma=6$). For $V/\Gamma=0.25$, we use $I \approx (I(t_\text{max})+I_0)/2$ with error bar of size $(I_0-I(t_\text{max}))/2$. } \label{i_t_smallv} \end{center} \end{figure} \subsection{Temperature Dependence} It is also of interest to examine the temperature dependence of the current.
The interplay between voltage and temperature as the Kondo regime is approached presents an interesting problem. One expects that as the temperature is increased, the Kondo effect gets washed out and the simulations would therefore more readily converge even at small bias voltages. The temperature dependence of the current calculated from the interaction quench for $U/\Gamma=6$ and several values of the voltage bias is plotted in Fig.~\ref{i_beta}. In the linear response regime ($V/\Gamma=0.125$, $0.25$) the ratio of the interacting current $I(T)$ to the noninteracting current $I_0(T=0)$ exhibits a strong temperature dependence, even at $T/V\ll 1$. The temperature dependence arises because lowering the temperature strengthens the Kondo resonance and leads to an increase in the interacting current. The temperature dependence for small voltage bias ($V/\Gamma=0.125$) approaches the analytical result for the temperature dependent zero-bias conductance in Ref.~\onlinecite{Konik01} and thus allows us to estimate (from the temperature at which $I(V\rightarrow 0)=I_0/2$) the Kondo temperature as $T_K/\Gamma\approx 0.24$, in good agreement with the {\it a priori} estimate from the standard formula\cite{Hewson} \begin{equation} T_K \approx U\Big(\frac{\Gamma}{2U}\Big)^{1/2}e^{-\pi U / 8\Gamma + \pi \Gamma / 2 U}. \end{equation} This formula is valid in the strong correlation regime and for $U/\Gamma=6$ yields $T_K/\Gamma=0.21$. As $V$ is increased the temperature dependence is weakened. At intermediate values of $V$, in the Coulomb blockade regime ($V/\Gamma=2$), the current has little temperature dependence at low $T$. At large voltage bias ($V/\Gamma=4$), correlation effects are already weakened due to the voltage, as is evident from the increase in $I/I_0$, and the almost perfect agreement with fourth order perturbation theory discussed in Section~\ref{sec:iv}. The current in this regime remains insensitive to temperature at low $T$. 
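For concreteness, the quoted strong-correlation formula for $T_K$ is easily evaluated numerically (a direct transcription of the expression above, with the same prefactor convention):

```python
import math

def kondo_temperature(U, Gamma):
    """T_K ~ U * sqrt(Gamma/(2U)) * exp(-pi*U/(8*Gamma) + pi*Gamma/(2*U)),
    the strong-correlation estimate quoted in the text."""
    return U * math.sqrt(Gamma / (2.0 * U)) \
        * math.exp(-math.pi * U / (8.0 * Gamma) + math.pi * Gamma / (2.0 * U))

print(kondo_temperature(6.0, 1.0))  # ~0.213, consistent with T_K/Gamma = 0.21 quoted above
```

The exponential dependence on $U/\Gamma$ is what makes the long-time Kondo dynamics increasingly expensive to resolve as the interaction grows.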
\begin{figure}[t] \begin{center} \includegraphics[angle=-90, width=0.9\columnwidth]{i_beta_normT0.eps} \caption{Temperature dependence of the ratio of interacting current at temperature $T$ to noninteracting current at temperature zero for indicated values of the voltage bias. The interaction strength is $U/\Gamma=6$. The symbols show Monte Carlo results, the black line the analytical curve for $V\rightarrow 0$ extracted from Ref.~\onlinecite{Konik01} and plotted for $T_K/\Gamma=0.24$. } \label{i_beta} \end{center} \end{figure} \section{Results: Voltage quench \label{voltagequench}} \label{vquench} \subsection{Cutoff dependence} An alternative procedure to calculate the steady state current of interacting quantum dots is to start from an interacting state in equilibrium ($V=0$) and turn on the voltage at $t=0_+$. While this approach is computationally more expensive and is restricted to nonzero temperatures, because it involves operators on the imaginary-time branch, it has the advantage that the Kondo resonance in the many-body density of states is present already in the initial state. The Kondo resonance is built up during the evolution along the imaginary-time branch, which does not add significantly to the sign problem. One might expect that this $V$-quench is particularly suitable to study the small voltage regime, because turning on a small voltage will not change the spectral function dramatically. Since the voltage quench has not been discussed in the previous literature, we will now analyze the properties of the current after such a $V$-quench in some detail, and in particular its dependence on the bandwidth ($\omega_c$) and the smoothness of the cutoff ($\nu$). \begin{figure}[t] \begin{center} \includegraphics[angle=-90, width=0.9\columnwidth]{nu.eps} \includegraphics[angle=-90, width=0.9\columnwidth]{cut.eps} \caption{Effect of the smoothing parameter $\nu$ and cutoff $\omega_c$ for $\beta \Gamma=10$.
The top panel shows the interacting ($U/\Gamma=4$) and noninteracting current for a quench to $V/\Gamma=0.25$. The red line and red circles correspond to $\nu\Gamma=10$ and cutoff $\omega_c/\Gamma=10$, the blue lines and blue diamonds to $\nu\Gamma=3$ and cutoff $\omega_c/\Gamma=10$. A sharp band edge (large value of $\nu$) leads to oscillations in the current which make it difficult to estimate the steady state value. The bottom panel plots non-interacting and interacting currents for $V/\Gamma=0.5$, $\nu\Gamma=3$, $\beta \Gamma=10$ and indicated values of the cutoff. While the short-time behavior of the current is cutoff dependent, the steady state value is essentially cutoff-independent, as long as $\omega_c\gtrsim V$. } \label{nu_cut} \end{center} \end{figure} The top panel of Fig.~\ref{nu_cut} shows the time evolution of the current in a model with $U/\Gamma=0$ (lines) and $U/\Gamma=4$ (symbols) if the voltage is suddenly increased to $V/\Gamma=0.25\ll \omega_c/\Gamma=10$. In the initial state, the system is in equilibrium, with no current flowing through the dot. After the voltage bias is turned on, the current increases. In the model with hard band cutoff ($\nu\Gamma=10$, red online) oscillations in the current appear which make it difficult to estimate the steady state value. A smoother band cutoff ($\nu\Gamma=3$, blue online) almost completely eliminates these oscillations. In the rest of this subsection we will thus show results for the ``smoothing parameter'' $\nu\Gamma=3$. The lower panel of Fig.~\ref{nu_cut} illustrates the dependence of the current on the cutoff value $\omega_c$. While the short time behavior of the current depends strongly on the bandwidth, the steady state value shows little cutoff dependence as long as $\omega_c$ is substantially larger than the applied voltage bias.
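The text does not repeat the explicit form of $\Gamma^\alpha(\epsilon)$; a common parametrization of a flat band of half-width $\omega_c$ with edges smoothed on a scale set by $\nu$ is a product of Fermi-function-like factors, which is what the following sketch assumes (the exact functional form used in the simulations is an assumption of this illustration):

```python
import math

def gamma_smooth(eps, Gamma=1.0, omega_c=10.0, nu=3.0):
    """Flat coupling density of half-width omega_c whose edges are smoothed
    by Fermi-function-like factors of sharpness nu; nu -> infinity recovers
    the hard cutoff Gamma * theta(omega_c - |eps|)."""
    return Gamma / ((1.0 + math.exp(nu * (eps - omega_c)))
                    * (1.0 + math.exp(-nu * (eps + omega_c))))

print(gamma_smooth(0.0))   # ~1.0 deep inside the band, where the cutoff is irrelevant
print(gamma_smooth(10.0))  # ~0.5 right at the band edge
```

With $\nu\Gamma=3$ the edges are soft enough to suppress the spurious current oscillations produced by a hard cutoff, while $\Gamma^\alpha(\epsilon)$ remains flat deep inside the band.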
This is consistent with the observation for the $U$-quench in Ref.~\onlinecite{Werner09}, where it was found that $\omega_c/\Gamma=10$ was enough to get accurate results up to $V/\Gamma=10$. Hence, we will choose $\omega_c/\Gamma=10$ for the rest of this paper. \begin{figure}[t] \begin{center} \includegraphics[angle=-90, width=0.9\columnwidth]{vu4.eps} \includegraphics[angle=-90, width=0.9\columnwidth]{vu6.eps} \caption{Voltage dependence of the current for $U/\Gamma=4$ (top panel) and $U/\Gamma=6$ (bottom panel). Solid lines show the non-interacting current and symbols the interacting current. As the voltage reaches $V/\Gamma=1$ ($U/\Gamma=4$) or $V/\Gamma=0.5$ ($U/\Gamma=6$), the interacting current overshoots, resulting in a slow convergence to the steady state. For smaller voltages, however, the steady state current can be computed on the basis of $V$-quenches. All results are for $\beta\Gamma=10$, $\omega_c/\Gamma=10$, and $\nu\Gamma=3$.} \label{vu46} \end{center} \end{figure} \subsection{Voltage dependence} In Fig.~\ref{vu46} we plot the non-interacting and interacting currents for $\beta\Gamma=10$ and several values of the voltage bias. The top panel is for $U/\Gamma=4$ and the bottom panel for $U/\Gamma=6$. In the small voltage regime ($V/\Gamma\lesssim 0.5$) the interacting current increases monotonically with time and eventually settles into a steady state within the accessible time window. The $V$-quench therefore allows us to measure accurate steady state currents for finite temperature in the small voltage regime. However, once the voltage becomes too large (see $V/\Gamma=1$ in the top panel of Fig.~\ref{vu46}), the interacting current overshoots and only slowly settles into the steady state, making it impossible to measure an accurate steady state value using this approach. As shown in the previous section, however, the simulation based on the $U$-quench provides accurate results once $V/\Gamma\gtrsim 0.5$.
The two simulation methods are therefore complementary in the sense that the $V$-quench works best for small voltage bias ($V/\Gamma\lesssim 0.5$) and the $U$-quench at larger voltage bias ($V/\Gamma\gtrsim 0.5$). \subsection{Temperature dependence} \begin{figure}[t] \begin{center} \includegraphics[angle=-90, width=0.9\columnwidth]{beta.eps} \caption{Temperature effect on the current in the small voltage bias regime. Red circles, blue diamonds and black diamonds show the interacting current ($U/\Gamma=4$) for $V/\Gamma=0.25$ and $V/\Gamma=0.5$, while the red, blue and black curves indicate the corresponding non-interacting currents. As the temperature is lowered, the interacting current increases towards the non-interacting value.} \label{beta} \end{center} \end{figure} The temperature dependence of the current after a $V$-quench in the low-voltage regime ($V/\Gamma=0.25$ and 0.5) is shown in Fig.~\ref{beta}, which plots results for $U/\Gamma=0$ (lines) and $U/\Gamma=4$ (symbols) for $\beta\Gamma=5$, $10$ and $20$. A rather strong temperature dependence is evident, in particular in the interacting current. This is consistent with the $U$-quench data shown in Fig.~\ref{i_beta} and a consequence of the destruction of the Kondo resonance by temperature. Remarkably, a strong temperature dependence is observed even for $T\ll V$, which means that the applied voltage does not effectively raise the temperature to a value of order $V$. In this voltage regime the non-zero voltage state is therefore not simply equivalent to a thermal state. The nature of the correlations which give rise to the temperature dependence is an interesting subject for further investigation. \subsection{Comparison to the interaction quench} In the $V$-quench calculations, for $U/\Gamma=6$, we can access temperatures down to $\beta \Gamma \approx 20$.
At even lower temperatures, the perturbation order on the imaginary time branch becomes so large and the individual Monte Carlo updates so expensive that it is increasingly difficult to reach the very high statistical accuracy required for simulations with average signs of the order $10^{-3}$. Since the problem of slow convergence in $U$-quench calculations at small bias is considerably alleviated by finite temperature, it turns out that the accuracy of the $U$-quench approach matches that of $V$-quench calculations even at very small voltage bias (see $U/\Gamma=6$ data in Fig.~\ref{currentsmall}). For finite temperature simulations in the experimentally relevant temperature range ($\beta \Gamma \gtrsim 20$), the $U$-quench approach thus appears to be more powerful and sufficient to treat the entire voltage range. The good agreement between the $U$-quench and $V$-quench results in Fig.~\ref{currentsmall} furthermore shows that the steady state results obtained by the diagrammatic Monte Carlo method do not depend on the initial preparation of the system. \begin{figure}[t] \begin{center} \includegraphics[angle=-90, width=0.9\columnwidth]{ivsmall.eps} \caption{ Current-voltage characteristics of the single-orbital Anderson impurity model in the small voltage regime for $U/\Gamma=4$, 6 at $\beta\Gamma=10$. The blue circles and black diamonds show $V$-quench results. For comparison, we also plot finite temperature $U$-quench data ($U/\Gamma=6$, green crosses). } \label{currentsmall} \end{center} \end{figure} \section{$I$-$V$ characteristics \label{iv}} \label{sec:iv} We now apply the machinery described in the previous section to compute the current-voltage characteristics of the Anderson impurity model at half-filling. \begin{figure}[t] \begin{center} \includegraphics[angle=-90, width=0.9\columnwidth]{iv_u10.eps} \caption{Current-voltage characteristics of the single-orbital Anderson impurity model.
The symbols show Monte Carlo data for $U/\Gamma=4$, 6, 8 and 10, while the lines correspond to the fourth order perturbation calculation of Ref.~\onlinecite{Fujii03}. The Monte Carlo results have been obtained by means of $U$-quenches at $T=0$. Error bars are on the order of the symbol size. } \label{currentlarge} \end{center} \end{figure} The initial rise of the current at finite temperature ($\beta\Gamma=10$) is shown in Fig.~\ref{currentsmall}. The blue circles and black diamonds have been obtained using the $V$-quench. The current-voltage characteristics become linear in the $V\rightarrow 0$ limit, although the slopes of the interacting and non-interacting models are not identical. This is due to the temperature effect on the Kondo resonance (particularly pronounced for large $U$) which was discussed in the previous sections. As the temperature is lowered to zero, the initial slope of the current approaches that of the non-interacting model. Figure~\ref{currentlarge} shows the $T=0$ result obtained using interaction quenches ($\omega_c/\Gamma=\nu\Gamma=10$, essentially the wide band limit). The black curve shows the monotonic increase of the non-interacting current with increasing applied bias voltage. The red, blue and pink lines show the interacting current for $U/\Gamma=4$, $6$ and $8$ predicted by fourth order perturbation theory.\cite{Fujii03} Consistent with analytical arguments,\cite{Ng88, Glazman88} the interacting current initially rises with the same slope as the non-interacting current, and reaches the non-interacting value also in the large-voltage limit. At intermediate values of $V$ the effect of interactions is to suppress the current (Coulomb blockade). In fourth order perturbation theory, a hump appears in the $I$-$V$ curve around $V/\Gamma=2$ for $U/\Gamma=6$ and $8$. At even larger $U$ (clearly outside the range of applicability) fourth order perturbation theory will presumably lead to a negative differential conductance at intermediate $V$.
The appearance of this hump is related to the splitting of the Kondo resonance as discussed in Ref.~\onlinecite{Fujii03}. The Monte Carlo results for $U/\Gamma=4$, $6$, $8$, and 10 are shown by the red stars, blue circles, pink diamonds, and orange triangles, respectively. Since these are $U$-quench results for $T=0$, only $V/\Gamma\gtrsim 0.5$ data are shown. In the large voltage regime ($V/\Gamma \gtrsim 4$) the numerical results agree with the prediction from fourth order perturbation theory. Apparently, the fast decay of the Green functions for large voltage bias simplifies the diagram structure such that fourth order in $\Sigma$ is sufficient at $V/\Gamma \gtrsim 4$. At intermediate voltages, $1 \lesssim V/\Gamma \lesssim 3$, differences between the Monte Carlo data and fourth order perturbation theory appear. The essentially exact numerical data show no prominent hump feature near $V/\Gamma=2$, and hence no negative differential conductance in the intermediate to strong correlation regime. The hump, and the associated splitting of the Kondo resonance, must therefore be an artefact of fourth order perturbation theory. This is consistent with the conclusion reached in Ref.~\onlinecite{Werner09} on the basis of (less accurate) hybridization expansion results. The data in Fig.~\ref{currentlarge} show that fourth order perturbation theory yields correct results over the entire voltage range for $U/\Gamma<4$. For larger interactions, and in particular around $V/\Gamma\approx 2$ more complicated self energy diagrams become important. \section{Conclusions \label{conclusions}} We have discussed the implementation of the weak-coupling continuous-time Monte Carlo method on the L-shaped Keldysh contour and the application of this formalism to the study of transport through a quantum dot. 
Calculations based on interaction quenches from the current carrying state at $U=0$ can be restricted to the real-time contours and provide accurate steady state currents for $V/\Gamma\gtrsim 0.5$ and interaction strengths $U/\Gamma\lesssim 10$, for arbitrary temperature and bandwidth. At finite temperature, convergence into the steady state is considerably faster, which allows access to the small voltage regime. As an alternative method, we have introduced calculations based on voltage quenches, which start from the interacting equilibrium state and which can be used to calculate the steady state current in the small voltage regime ($V/\Gamma\lesssim 0.5$) at finite temperature. Since the sign problem turns out to be essentially independent of the number of operators on the imaginary time branch, temperatures of order $\beta\Gamma=10$ can easily be dealt with. The $V$-quench approach is however not more efficient than finite-temperature $U$-quench calculations. We have used these methods to accurately compute the current-voltage characteristics of the half-filled Anderson impurity model in the intermediate-to-strong coupling regime. Comparison to fourth order perturbation theory showed that the latter fails at voltages around $V/\Gamma\approx 2$ for $U/\Gamma>4$, but becomes accurate for $V/\Gamma\gtrsim 4$. The splitting of the Kondo resonance predicted by low order perturbation theory is an artefact not present in the numerical data. The results presented in this paper show that diagrammatic Monte Carlo is one of the most powerful numerical tools for the study of non-equilibrium systems.
The accuracy of the improved weak-coupling approach and its range of applicability rival or surpass other state-of-the-art numerical approaches such as time-dependent DMRG.\cite{Kirino08, Heinrich-Meissner09} For most practical purposes the numerical problem of calculating the steady-state current through a half-filled Anderson impurity model with symmetrically applied voltage can be considered as solved. An interesting, and presumably straightforward, extension of our work would be the study of asymmetrically applied bias voltages. One of the optimizations of the Monte Carlo algorithm -- the suppression of the odd perturbation orders -- is however specific to the particle-hole symmetric model. Away from particle-hole symmetry, odd perturbation orders contribute to the current and therefore must be considered in the simulation. This leads to an increase in the average perturbation order and to a more severe sign problem, such that the accessible times and interaction strengths will be reduced. The optimal choice of the $K_x$-parameters in the particle-hole asymmetric case is an open problem for future investigations. Another issue which should be considered is the optimal shape of the $U$- or $V$-quench. By slowly ramping up the interaction or voltage bias, it may be possible to avoid overshooting and thus observe a faster relaxation of the current into the steady state. \acknowledgements PW and ME are supported by the Swiss National Science Foundation (Grant PP002-118866), TO by a Grant-in-Aid for Young Scientists (B) from MEXT, and AJM by the US National Science Foundation Division of Materials Research under grant DMR-0705847. This work also benefitted from the academic guest program (Center for Theoretical Studies) of ETH Zurich (TO) and the hospitality of the Aspen Center for Physics (PW). We thank N.~Tsuji, T.~Fujii and K.~Ueda for helpful discussions.
The simulations were performed on the Brutus cluster at ETH Zurich using a code based on ALPS.\cite{ALPS} \begin{appendix} \section{Noninteracting Green function for the voltage quench} \newcommand{\CC}{\mathcal{C}} \newcommand{\expval}[1]{\langle#1\rangle} In this appendix we present the formalism needed for the voltage quench, evaluating noninteracting Green functions on the L-shaped contour with lead chemical potential $\mu_\alpha$ equal to the equilibrium value $\mu(0)$ for times on the imaginary contour and with arbitrary time-dependence $\mu_\alpha(t)$ on the real time portions of the contour. The results of the paper correspond to $\mu_\alpha(t)=\mu(0)\pm V/2$. Because the voltage bias is time dependent, noninteracting Green functions cannot be expressed in the form of a Fourier transform, and instead they are computed numerically by the solution of their equations of motion in real (imaginary) time.
A closed set of equations is obtained if one considers the noninteracting dot Green function [Eq.~(\ref{eqn:G0input})], \begin{equation} \label{g0-def} G_{0,\sigma}(t,t')= -i\expval{\text{T}_{\CC} d_{\sigma} (t) d_\sigma^\dagger(t')}_0, \end{equation} the hybridization of the dot to a single bath level \begin{equation} G_{p,\sigma}^\alpha(t,t')= i\expval{\text{T}_{\CC} d_\sigma(t) a_{p,\sigma}^{\alpha\,\dagger} (t')}_0, \end{equation} and the dot-decoupled Green function of a single bath state, \begin{equation} \label{geps-def} g_{p,\sigma}^\alpha(t,t')= -i\expval{\text{T}_{\CC} \,a_{p,\sigma}^\alpha (t) \,a_{p,\sigma}^{\alpha\,\dagger}(t')}_{0,V_p^\alpha=0}. \end{equation} Here $t,t'$ are arbitrary points on the real or imaginary portions of the contour, the time evolution is performed with $U=0$ but time-dependent voltage bias, and $\expval{\cdot}_0$ is the grand-canonical expectation value in the noninteracting initial state (at $\mu=\mu(0)$). The contour-ordering operator $\text{T}_{\CC}$ exchanges the product $A(t) B(t')$ of two operators if $t$ is earlier on the contour than $t'$ (a minus sign is added if the exchange involves an odd number of Fermi operators). Equations of motion for the Green functions (\ref{g0-def}) to (\ref{geps-def}) are obtained by taking time derivatives and evaluating the resulting commutators, \begin{align} \label{app-eom-g0} &[i\partial_{t'}+\epsilon_d] G_{0,\sigma}(t,t') = \sum_p G_{p,\sigma}^\alpha(t,t') V_p^\alpha - \delta_\mathcal{C}(t,t'), \\ \label{app-eom-F} &[i\partial_{t'} + \epsilon_{p\sigma}^\alpha-\mu_\alpha(t')] \,G_{p\sigma}^\alpha(t,t') = G_{0,\sigma}(t,t')\,(V_p^\alpha)^*, \\ \label{app-eom-geps} &[i\partial_{t'} + \epsilon_{p\sigma}^\alpha - \mu_\alpha(t')] \,g_{p\sigma}^\alpha(t,t') = -\delta_\mathcal{C}(t,t'). \end{align} Note that when $t=-i\tau$ is on the imaginary branch of the contour, the time derivative is given by $\partial_t=i\partial_\tau$.
The contour delta function is given by $\delta_\mathcal{C}(t,t')=\partial_t \Theta_\mathcal{C}(t,t')$, where $\Theta_\mathcal{C}(t,t')=1$ if $t$ is later on $\mathcal{C}$ than $t'$ and zero otherwise. Eqs.~(\ref{app-eom-g0}) to (\ref{app-eom-geps}) have a unique solution, provided that the contour Green functions satisfy an antiperiodic boundary condition on the contour $\mathcal{C}$ in both time arguments. Equation (\ref{app-eom-geps}) can be solved explicitly, \begin{align} \label{gbath} g_{p\sigma}^{\alpha}(t,t') &\equiv g_\alpha(t,t';\epsilon_{p\sigma}^{\alpha}), \\ \label{geps} g_\alpha(t,t';\epsilon) &= -i\big[\Theta_\mathcal{C}(t,t')-f(\epsilon-\mu(0))\big] \nonumber\\ \times \exp\Big( &i\int_0^{t'} \!\!d\bar t \,[\epsilon-\mu_\alpha(\bar t)] -i\int_0^{t} \!\!d\bar t\, [\epsilon-\mu_\alpha(\bar t)] \Big), \end{align} where $\int_0^t d\bar t$ is along the contour. Furthermore, one can show from Eqs.~(\ref{app-eom-F}) and (\ref{app-eom-geps}) that the solution of Eq.~(\ref{app-eom-F}) is given by \begin{equation} \label{F} G_{p\sigma}^{\alpha}(t,t') = \int ds \, G_{0,\sigma}(t,s)\, (V_p^\alpha)^*\, g_{p\sigma}^{\alpha}(s,t'), \end{equation} where the integral runs over the whole contour. This expression is inserted into Eq.~(\ref{app-eom-g0}) in order to derive a single closed equation for $G_0$, \begin{equation} \label{app-eom-g0-2} [i\partial_{t'}+\epsilon_d] G_{0,\sigma}(t,t')- \int \!ds \, G_{0,\sigma}(t,s)\Delta_\sigma(s,t') = -\delta_\mathcal{C}(t,t'). \end{equation} Here the sum over bath states has been condensed into the integral over the hybridization function (\ref{Gamdef}) \begin{align} \label{deltadef} \Delta_{\sigma}(t,t') &=\sum_\alpha \Delta_{\sigma}^{\alpha}(t,t'), \\ \label{deltaalpha} \Delta_{\sigma}^{\alpha}(t,t') &\equiv \sum_p |V_p^\alpha|^2 g_{p\sigma}^\alpha(t,t') \\ \label{intgamma} &= \int \!d\epsilon\, \frac{1}{\pi}\Gamma^{\alpha}(\epsilon)\, g_{\alpha}(t,t';\epsilon). 
\end{align} In practice, we determine $\Delta$ from Eqs.~(\ref{intgamma}), (\ref{geps}) and (\ref{Gamdef}). Equation~(\ref{app-eom-g0-2}) is an integrodifferential equation on the contour $\mathcal{C}$. Its solution is equivalent to a boundary value problem for the imaginary time component of the Green function and initial value problems for the components involving real time arguments. The equation is solved numerically, using Langreth rules for the decoupling of real and imaginary time components [see Ref.~\onlinecite{keldyshintro}]. The correlator (\ref{defA0}) which enters Eq.~(\ref{eqA}) for the current is by definition given by \begin{equation} \label{a1} A_{0,\sigma} (t,t') = i\sum_p G_{p\sigma}^L(t,t') V_{p\sigma}^L. \end{equation} Using Eqs.~(\ref{F}) and (\ref{deltaalpha}), this function can be obtained from the contour integral \begin{equation} \label{afinal} A_{0,\sigma}(t,t') = \int \!ds\, G_{0,\sigma}(t,s)\Delta_{\sigma}^L(s,t'). \end{equation} Note that the equations of motion (\ref{app-eom-F}) and (\ref{app-eom-geps}) also hold in the interacting case, with $G_0$ replaced by $G$. Hence Eqs.~(\ref{a1}) and (\ref{afinal}) remain valid in the interacting case with the same replacement, and the interacting current can be obtained directly from the interacting dot Green function. This procedure is, however, equivalent to the approach used in the present paper, where the Green function is not measured and the current is obtained instead from Eq.~(\ref{eqA}).
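As an illustration of these building blocks, the explicit lead Green function of Eq.~(\ref{geps}) restricted to the real-time branches can be coded directly for a quench in which the lead chemical potential jumps from $\mu(0)$ to $\mu(0)+V/2$ at $t=0$. The function names and the restriction to real time arguments are simplifications of this sketch; the full algorithm evaluates the correlators on the entire L-shaped contour.

```python
import cmath
import math

def fermi(x, beta):
    """Fermi function f(x) = 1/(1 + exp(beta * x))."""
    return 1.0 / (1.0 + math.exp(beta * x))

def phase(t, eps, mu0, V):
    """Accumulated phase int_0^t [eps - mu(tbar)] dtbar for real t > 0,
    with mu(tbar) = mu0 + V/2 after the quench."""
    return (eps - mu0 - V / 2.0) * t

def g_lead(t, tp, eps, beta, mu0=0.0, V=0.5):
    """Eq. (geps) for two real time arguments: contour ordering reduces to
    ordinary time ordering, and the initial occupation is f(eps - mu(0))."""
    theta = 1.0 if t > tp else 0.0
    occ = fermi(eps - mu0, beta)
    return -1j * (theta - occ) \
        * cmath.exp(1j * phase(tp, eps, mu0, V) - 1j * phase(t, eps, mu0, V))
```

The hybridization function $\Delta^\alpha_\sigma(t,t')$ then follows from the energy integral of Eq.~(\ref{intgamma}) over such correlators, and $G_0$ from the numerical solution of Eq.~(\ref{app-eom-g0-2}).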
\section{Introduction} {\em Quantum supermaps} \cite{CDaP1,CDaP2} are the most general admissible transformations of quantum devices. Mathematically, the action of a quantum device is associated with a set of completely positive trace non-increasing maps, called {\em quantum operations} \cite{QTOS76,Kraus71}, which transform the states of an input quantum system into states of an output quantum system. In the dual (Heisenberg) picture, quantum operations are given by normal completely positive maps transforming the observables of the output system into observables of the input system, with the condition that each quantum operation is upper bounded by a unital completely positive map. A quantum supermap is then a higher-order linear map that transforms quantum operations into quantum operations. The theory of quantum supermaps has proven to be a powerful tool for the treatment of many advanced topics in quantum information theory \cite{opttomo,tredeoff,memorydisc,procqcmc,SedlakZiman,Ziman}, including in particular the optimal cloning and the optimal learning of unitary transformations \cite{unitlearn,unitclon} and quantum measurements \cite{measclon,measlearn}. Moreover, quantum supermaps are interesting for the foundations of Quantum Mechanics, as they are the possible dynamics in a toy model of a non-causal theory \cite{puri}. A particular type of quantum supermaps has been considered by Zyczkowski \cite{zycko}, who used them to construct a theory with a state space that has a quartic relation between the number of distinguishable states and the number of parameters needed to specify a state. Quantum supermaps have also attracted interest in the mathematical physics literature, as they suggested the study of a general class of completely positive maps between convex subsets of the state space \cite{jencova}.
Originally, the definition and the main theorems on quantum supermaps were presented by D'Ariano, Perinotti, and one of the authors in the context of full matrix algebras describing finite dimensional quantum systems \cite{CDaP1,CDaP2}. An extension of the theory that includes both classical and quantum systems was outlined informally in \cite{bitcommitment}, still in the finite dimensional setting. However, a rigorous definition and characterization of quantum supermaps in infinite dimension and for arbitrary von Neumann algebras is still lacking. This problem will be the main focus of the present paper. Before presenting our results, we briefly review the definition and characterization of supermaps for full matrix algebras. Quantum supermaps are defined axiomatically as linear completely positive maps transforming quantum operations into quantum operations (see \cite{CDaP1,CDaP2} for the physical motivation of linearity and complete positivity). A quantum supermap is {\em deterministic} if it transforms quantum channels (i.e.~unital completely positive maps, see e.g.~\cite{Holevo01}) into quantum channels. References \cite{CDaP1,CDaP2} proved the following dilation theorem for deterministic supermaps: denoting by $\lh$ and $\lk$ the $C^\ast$-algebras of linear operators on the finite dimensional Hilbert spaces $\hh$ and $\kk$, respectively, and writing $\cp{\lh,\lk}$ for the set of completely positive maps sending $\lh$ into $\lk$, we have that any deterministic supermap $\SS$ transforming quantum operations in $\cp{\elle{\hh_1}, \elle{\kk_1}}$ to quantum operations in $\cp{\elle{\hh_2}, \elle{\kk_2}}$ has the following form: \begin{equation}\label{eq.
intro 1} [\SS (\ee)] (A) = V_1^\ast \left[(\ee\otimes \ii_{\vv_1}) ( V_2^\ast (A\otimes I_{\vv_2}) V_2 ) \right] V_1 \quad \forall A\in \elle{\hh_2} \end{equation} for all $\ee\in\cp{\elle{\hh_1}, \elle{\kk_1}}$, where $\vv_1$ and $\vv_2$ are two ancillary finite dimensional Hilbert spaces, $V_1: \kk_2 \to \kk_1 \otimes \vv_1$ and $V_2 : \hh_1 \otimes \vv_1 \to \hh_2 \otimes \vv_2 $ are isometries, $\ii_{\vv_1}$ is the identity map on $\elle{\vv_1}$ and $I_{\vv_2}$ is the identity operator on $\vv_2$. In the Schr\"odinger (or predual) picture, this result shows that the most general way to transform a quantum operation is achieved by connecting the corresponding device in a quantum circuit consisting of the following sequence of operations: \begin{enumerate} \item apply an invertible transformation (corresponding to the isometry $V_1$), which transforms the system $\kk_2$ into the composite system $\kk_1 \otimes \vv_1$; \item use the input device on system $\kk_1$, thus transforming it into system $\hh_1$; in the Schr\"odinger picture the action of the device will correspond to a set of predual quantum operations $\ee_\ast$ transforming states on $\kk_1$ into states on $\hh_1$; \item apply an invertible transformation (corresponding to the isometry $V_2$), which transforms the composite system $\hh_1 \otimes \vv_1$ into the composite system $\hh_2 \otimes \vv_2$; \item discard system $\vv_2$ (mathematically, take the partial trace over $\vv_2$). \end{enumerate} In this paper we will extend Eq.~\eqref{eq. intro 1} and the other results of \cite{CDaP1,CDaP2,bitcommitment} to the case where the input spaces $\elle{\hh_i}$ of the quantum operations are replaced by arbitrary separable von Neumann algebras and the outputs $\elle{\kk_i}$ are also allowed to be infinite dimensional.
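In the finite dimensional setting, the dilation formula of Eq.~\eqref{eq. intro 1} can be checked numerically. The following Python sketch (purely illustrative; the choice of all dimensions equal to 2, the random isometries, and the random Kraus operators are assumptions made here, not constructions from the text) builds a random unital CP map $\ee$ in Kraus form, applies the supermap formula, and verifies that the resulting map is again unital and completely positive (positive semidefinite Choi matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2  # dims of H1, K1, H2, K2 and of the ancillas V1, V2 (all taken = 2 here)

def rand_isometry(rows, cols):
    """First `cols` columns of a random unitary: an isometry C^cols -> C^rows."""
    m = rng.normal(size=(rows, rows)) + 1j * rng.normal(size=(rows, rows))
    q, _ = np.linalg.qr(m)
    return q[:, :cols]

# Heisenberg-picture channel E: L(H1) -> L(K1), E(A) = sum_i K_i^† A K_i,
# normalized so that sum_i K_i^† K_i = I (i.e. E is unital)
g = [rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)) for _ in range(2)]
s = sum(k.conj().T @ k for k in g)
w, u = np.linalg.eigh(s)
kraus = [k @ u @ np.diag(w ** -0.5) @ u.conj().T for k in g]

V1 = rand_isometry(d * d, d)      # V1 : K2 -> K1 (x) V1
V2 = rand_isometry(d * d, d * d)  # V2 : H1 (x) V1 -> H2 (x) V2 (a unitary here)

def E(A):
    return sum(k.conj().T @ A @ k for k in kraus)

def E_tensor_id(X):
    """Apply E (x) id_{V1} to X in L(H1 (x) V1), acting on the H1 slot."""
    X4 = X.reshape(d, d, d, d)          # indices (h1, v1, h1', v1')
    out = np.zeros((d, d, d, d), complex)
    for a in range(d):
        for b in range(d):
            out[:, a, :, b] = E(X4[:, a, :, b])
    return out.reshape(d * d, d * d)

def S(A):
    """[S(E)](A) = V1^† [ (E (x) id)( V2^† (A (x) I) V2 ) ] V1, cf. Eq. (intro 1)."""
    X = V2.conj().T @ np.kron(A, np.eye(d)) @ V2
    return V1.conj().T @ E_tensor_id(X) @ V1

# Choi matrix of S(E): positive semidefinite iff S(E) is completely positive
J = np.zeros((d * d, d * d), complex)
for i in range(d):
    for j in range(d):
        Eij = np.zeros((d, d), complex)
        Eij[i, j] = 1.0
        J[i * d:(i + 1) * d, j * d:(j + 1) * d] = S(Eij)
```

Since $\SS(\ee)$ is a composition of normal CP maps, its Choi matrix comes out positive semidefinite, and unitality of $\ee$ propagates to $\SS(\ee)$, in agreement with the circuit picture above.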
The usefulness of this extension for applications is twofold: on the one hand, it removes the restriction to finite dimensional quantum systems and provides the natural generalization of quantum supermaps to the infinite dimensional case; on the other hand, replacing the input algebras $\elle{\hh_i}$ with generic separable von Neumann algebras, it allows us to include transformations of quantum measuring devices, which are described in the Schr\"odinger picture by maps from the algebra of bounded operators on the Hilbert space of the measured system to the commutative algebra of functions on the outcome space. The supermaps defined in this paper are thus able to describe tasks like `measuring a measurement' \cite{lor,fiur,soto,eisert}, where one tries to measure properties of a quantum measuring device by inserting it in a suitable circuit. In trying to extend Eq.~\eqref{eq. intro 1} to the infinite dimensional setting, one encounters two key differences with respect to the finite dimensional case. The first difference concerns the domain of definition of quantum supermaps. Clearly, the natural domain for a quantum supermap is the linear space spanned by quantum operations. However, while in finite dimensions quantum operations in $\cp{\elle{\hh_i},\elle{\kk_i}}$ span the whole set of linear maps from $\elle{\hh_i}$ to $\elle{\kk_i}$, in infinite dimension they only span the {\em proper} subset $\cb{\elle{\hh_i},\elle{\kk_i}}$ of weak*-continuous completely bounded linear maps, which is even smaller than the set of bounded linear maps from $\elle{\hh_i}$ to $\elle{\kk_i}$. The second key difference concerns the necessary and sufficient conditions needed for the proof of the dilation theorem. Indeed, not every deterministic quantum supermap admits a dilation of the form of Eq.~\eqref{eq. intro 1} in the infinite dimensional case. 
We will prove that such a dilation exists if and only if the deterministic supermap $\SS$ is \emph{normal}, in a suitable sense that will be defined later. Under the normality hypothesis, a natural algebraic construction leads to our dilation theorem (Theorem \ref{teo. centr.}) for deterministic supermaps, which is the main result of the paper. This result can be compared with analogous results in the theory of operator spaces \cite{BlM,Paul}, which, however, though more general in scope, are much less specialized than ours and imply it only in trivial cases (see Remark \ref{rem:opspa2} in Section \ref{sez. stine} for a brief discussion). Our second result is a Radon-Nikodym theorem for probabilistic supermaps, namely supermaps that are dominated by deterministic supermaps. The class of probabilistic supermaps is particularly interesting for physical applications, as such maps naturally appear in the description of quantum circuits that are designed to test properties of physical devices \cite{combs,CDaP1,CDaP2,bitcommitment}. Higher-order quantum measurements are indeed described by \emph{quantum superinstruments}, which are the generalization of the quantum instruments of Davies and Lewis \cite{DavLew}. The third main result of the paper will then be the proof of a dilation theorem for quantum superinstruments, in analogy with Ozawa's dilation theorem for ordinary instruments \cite{Ozawa}. The paper is organized as follows. In Section \ref{sez. notaz.} we fix the elementary definitions and notations, and state or recall some basic facts needed in the rest of the paper. In particular, in Section \ref{sez. succ. cresc.} we extend the notion of increasing nets from positive operators to normal completely positive maps, while Section \ref{sez. amplificaz.} contains some elementary results about the tensor product of weak*-continuous completely bounded maps. In Section \ref{sez. centr.} we define normal completely positive supermaps and provide some examples.
In Section \ref{sez. stine} we prove the dilation Theorem \ref{teo. centr.} for deterministic supermaps. As an application of Theorem \ref{teo. centr.}, in Section \ref{subsect:meastochan} we show that every deterministic supermap transforming measurements into quantum operations can be realized by connecting devices in a quantum circuit. Section \ref{sez. Radon} extends Theorem \ref{teo. centr.} to probabilistic supermaps, providing a Radon-Nikodym theorem for supermaps. We then define quantum superinstruments in Section \ref{sez. superstr.} and use the Radon-Nikodym theorem to prove a dilation theorem for quantum superinstruments, in analogy with Ozawa's result for ordinary instruments (see in particular Proposition 4.2 in \cite{Ozawa}). The dilation theorem for quantum superinstruments is finally applied in Section \ref{subsect:measmeas} to show how every abstract superinstrument describing a measurement on a quantum measuring device can be realized in a circuit. \section{Preliminaries and notations}\label{sez. notaz.} In this paper, unless the contrary is explicitly stated, we will always mean by \emph{Hilbert space} a complex and separable Hilbert space, with norm $\no{\cdot}$ and scalar product $\langle\cdot,\cdot\rangle$ linear in the second entry. If $\hh$, $\kk$ are Hilbert spaces, we denote by $\elle{\hh,\kk}$ the Banach space of bounded linear operators from $\hh$ to $\kk$ endowed with the uniform norm $\no{\cdot}_\infty$. If $\hh=\kk$, we will use the shortened notation $\lh :=\elle{\hh,\hh}$, and $I_\hh$ will be the identity operator in $\lh$. The linear space $\lh$ is ordered in the usual way by the cone of positive (semidefinite) operators. We denote by $\leq$ the order relation in $\lh$, and by $\elle{\hh}_+$ the cone of positive operators. By {\em von Neumann algebra} we mean a $\ast$-subalgebra $\mm\subset\lh$ such that $\mm=(\mm')'$, where $\mm'$ denotes the commutant of $\mm$ in $\lh$. 
Note that, as we will always assume that the Hilbert space $\hh$ is separable, the von Neumann algebras considered here will be those called \emph{separable} in the literature. When $\mm$ is regarded as an abstract von Neumann algebra (i.e.~without reference to the representing Hilbert space $\hh$), we will write its identity element $I_\mm$ instead of $I_\hh$. As usual, we define $\mm_+ := \mm\cap\elle{\hh}_+$. The identity map on $\mm$ will be denoted by $\ii_\mm$, and, when $\mm\equiv\lh$, the abbreviated notation $\ii_\hh:=\ii_{\elle{\hh}}$ will be used. The algebraic tensor product of linear spaces $U$, $V$ will be written $U\hotimes V$, while the notation $\hh\otimes\kk$ will be reserved to denote the Hilbert space tensor product of the Hilbert spaces $\hh$ and $\kk$. The inclusion $\hh\hotimes\kk \subset \hh\otimes\kk$ holds, and it is actually an equality iff $\hh$ or $\kk$ is finite dimensional. We will sometimes use the notation $\hhn : = \C^n \otimes \hh$. If $A\in\lh$ and $B\in\lk$, their tensor product $A\otimes B$, which is well defined as a linear map on $\hh\hotimes\kk$, uniquely extends to a bounded operator $A\otimes B \in\elle{\hh\otimes\kk}$ in the usual way (see e.g.~p.~183 in \cite{Tak}). Thus, the algebraic tensor product $\lh\hotimes\lk$ can be regarded as a linear subspace of $\elle{\hh\otimes\kk}$. Also in this case, the equality $\lh\hotimes\lk = \elle{\hh\otimes\kk}$ holds iff $\hh$ or $\kk$ is finite dimensional. More generally, let $\mm\subset \lh$ and $\nn\subset \lk$ be two von Neumann algebras. Then, $\mm\hotimes\nn$ is a linear subspace of $\elle{\hh\otimes\kk}$. Its weak*-closure is the von Neumann algebra $\mm\votimes\nn \subset \elle{\hh\otimes\kk}$ (see Definition 1.3 p.~183 in \cite{Tak}). Clearly, $\mm\hotimes\nn = \mm\votimes\nn$ iff $\mm$ or $\nn$ is finite dimensional. It is a standard fact that $\lh\votimes\lk = \elle{\hh\otimes\kk}$ (see Eq.~10, p.~185 in \cite{Tak}). 
We denote by $M_n (\C)$ the linear space of square $n\times n$ complex matrices, which we identify as usual with the space $\elle{\C^n}$. If $\mm\subset\lh$ is a von Neumann algebra, we write $\mmn : = M_n (\C) \votimes \mm$, which is a von Neumann algebra contained in $\elle{\hhn}$. As remarked above, $\mmn$ coincides with the algebraic tensor product $M_n (\C) \hotimes \mm$. If $\ee : M_m (\C) \frecc M_n (\C)$ and $\ff : \mm \frecc \nn$ are linear operators, we then see that their algebraic tensor product can be regarded as a linear map $\ee \otimes \ff : \mm^{(m)} \frecc \nnn$. Since both $\mm^{(m)}$ and $\nnn$ are von Neumann algebras, it makes sense to speak about positivity and boundedness of $\ee \otimes \ff$. This fact is at the heart of the classical definitions of complete positivity and complete boundedness. In both definitions, we use $\ii_n$ to denote the identity map on $M_n (\C)$, i.e.~$\ii_n : = \ii_{M_n (\C)}$. \begin{definition}{Definition}\label{def:CB-CP} Let $\mm$, $\nn$ be two von Neumann algebras. Then a linear map $\ee : \mm \frecc \nn$ is \begin{itemize} \item[-] {\em completely positive (CP)} if the linear map $\ii_n \otimes \ee$ is positive, i.e.~maps $\mmn_+$ into $\nnn_+$, for all $n\in\N$; \item[-] {\em completely bounded (CB)} if there exists $C>0$ such that, for all $n\in\N$, $$ \|(\ii_n \otimes \ee)(\tilde{A})\|_\infty \leq C \|\tilde{A}\|_\infty \quad \forall \tilde{A}\in \mmn , $$ i.e.~if the linear map $\ii_n \otimes \ee$ is bounded from the Banach space $\mmn$ into the Banach space $\nnn$ for all $n\in\N$, and the uniform norms of all the maps $\{\ii_n \otimes \ee\}_{n\in\N}$ are majorized by a constant independent of $n$. \end{itemize} \end{definition} \begin{definition}{Example} The simplest example of CP and CB map is given by a $\ast$-homomorphism $\pi: \mm \frecc \nn$. 
Indeed, for all $n\in\N$ the tensor product $\ii_n \otimes \pi : \mmn \frecc \nnn$ is again a $\ast$-homomorphism, hence it is positive and satisfies $\|(\ii_n \otimes \pi)(\tilde{A})\|_\infty \leq \|\tilde{A}\|_\infty \ \forall \tilde{A}\in\mmn$. \end{definition} We recall that a positive linear map $\ee:\mm \frecc \nn$ is {\em normal} if it preserves the limits of increasing and bounded sequences, i.e.~$\ee(A_n) \uparrow \ee(A)$ in $\nn$ for all increasing sequences $\{A_n\}_{n\in\N}$ and $A$ in $\mm_+$ such that $A_n\uparrow A$ (as usual, the notation $A_n\uparrow A$ means that $A$ is the {\em least upper bound} of the sequence $\{A_n\}_{n\in\N}$ in $\mm$, see e.g.~Lemma 1.7.4 in \cite{Sakai}). It is a standard fact that a positive linear map $\ee:\mm \frecc \nn$ is normal if and only if it is weak*-continuous (Theorem 1.13.2 in \cite{Sakai}). We introduce the following notations: \begin{itemize} \item[-] $\cb{\mm,\nn}$ is the linear space of {\em weak*-continuous} CB maps from $\mm$ to $\nn$; \item[-] $\cp{\mm,\nn}$ is the set of \emph{normal} CP maps from $\mm$ to $\nn$; \item[-] $\cpn{\mm,\nn}$ is the set of \emph{quantum channels} from $\mm$ to $\nn$, i.e.~the subset of elements $\ee\in\cp{\mm,\nn}$ such that $\ee(I_\mm) = I_\nn$. \end{itemize} \begin{definition}{Remark}\label{rem:CB=Lin in dim finita} Suppose $\nn\subset M_n (\C)$. Then the set $\cb{\mm,\nn}$ coincides with the space of all weak*-continuous linear maps from $\mm$ to $\nn$ (see e.g.~Exercise 3.11 in \cite{Paul}). In particular, if also $\mm\subset M_m (\C)$, then $\cb{\mm,\nn}$ is the set of all linear maps from $\mm$ to $\nn$. \end{definition} If $\mm_1$, $\mm_2$ and $\mm_3$ are von Neumann algebras and $\ff\in\cb{\mm_1,\mm_2}$, $\ee\in\cb{\mm_2,\mm_3}$, then $\ee\ff\in\cb{\mm_1,\mm_3}$. The same fact is true if we replace all ${\rm CB}$ spaces with ${\rm CP}$'s or ${\rm CP}_1$'s. 
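In the finite dimensional situation of the preceding Remark, complete positivity can be tested directly: a linear map $\Phi$ on $M_d(\C)$ is CP if and only if its Choi matrix $J(\Phi)=\sum_{ij}\ket{i}\bra{j}\otimes\Phi(\ket{i}\bra{j})$ is positive semidefinite, a standard fact not spelled out in the text. A small Python sketch (the transpose and depolarizing maps below are textbook examples chosen here for illustration):

```python
import numpy as np

def choi(phi, d):
    """Choi matrix J(Phi) = sum_{ij} |i><j| (x) Phi(|i><j|) of a linear map
    phi on M_d(C).  Phi is completely positive iff J(Phi) >= 0."""
    J = np.zeros((d * d, d * d), complex)
    for i in range(d):
        for j in range(d):
            Eij = np.zeros((d, d), complex)
            Eij[i, j] = 1.0
            J[i * d:(i + 1) * d, j * d:(j + 1) * d] = phi(Eij)
    return J

d = 2
transpose = lambda A: A.T                                           # positive, not CP
depolarize = lambda A: 0.5 * A + 0.5 * np.trace(A) * np.eye(d) / d  # CP and unital
```

The Choi matrix of the transpose map on a qubit is the swap operator, which has eigenvalue $-1$, so transposition is positive but not completely positive; the depolarizing map has a positive semidefinite Choi matrix, as `np.linalg.eigvalsh` confirms.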
\begin{definition}{Remark}\label{restr-CP} Let $\mm_0$, $\mm$ be two von Neumann algebras contained in the same operator algebra $\lh$, with $\mm_0\subset \mm$. Since the inclusion map $\ii_{\mm_0,\mm} : \mm_0 \hookrightarrow \mm$ is in $\cpn{\mm_0,\mm}$, it follows by the composition property that the restriction $\ee \mapsto \left.\ee\right|_{\mm_0} = \ee \ii_{\mm_0,\mm}$ maps $\cb{\mm,\nn}$ [resp., $\cp{\mm,\nn}$; $\cpn{\mm,\nn}$] into $\cb{\mm_0,\nn}$ [resp., $\cp{\mm_0,\nn}$; $\cpn{\mm_0,\nn}$]. A similar application of the composition property also shows the inclusions $\cb{\nn,\mm_0} \hookrightarrow \cb{\nn,\mm}$, $\cp{\nn,\mm_0} \hookrightarrow \cp{\nn,\mm}$ and $\cpn{\nn,\mm_0} \hookrightarrow \cpn{\nn,\mm}$. \end{definition} The relation between the two sets $\cb{\mm,\nn}$ and $\cp{\mm,\nn}$ is shown in the following theorem (see also \cite{Haag}). \begin{theorem}{Theorem}\label{CB = span CP} The inclusion $\cp{\mm , \nn} \subset \cb{\mm , \nn}$ holds, and the set $\cp{\mm , \nn}$ is a cone in the linear space $\cb{\mm , \nn}$. For $\nn\equiv\lk$, the linear space spanned by $\cp{\mm , \lk}$ coincides with $\cb{\mm , \lk}$. More precisely, if $\ee\in\cb{\mm , \lk}$, then there exist four maps $\ee_k\in\cp{\mm , \lk}$ ($k=0,1,2,3$) such that $\ee = \sum_{k=0}^3 i^k \ee_k$. \end{theorem} \begin{proof} We have already remarked that, if a positive map $\ee : \mm \frecc \nn$ is normal, then it is weak*-continuous. If $\ee$ is CP, then it is CB by Proposition 3.6 in \cite{Paul}. Thus, the inclusion $\cp{\mm , \nn} \subset \cb{\mm , \nn}$ holds. Clearly, $\cp{\mm , \nn}$ is a cone in $\cb{\mm , \nn}$. Now, suppose $\ee\in\cb{\mm , \lk}$. By Theorem 8.4 in \cite{Paul}, there exists a (not necessarily separable) Hilbert space $\hhat$, a unital $\ast$-homomorphism $\pi : \mm \frecc \elle{\hhat}$ and bounded operators $V_i : \kk\frecc\hhat$ ($i=1,2$) such that $$ \ee(A) = V_1^\ast \pi(A) V_2 \quad \forall A\in\mm \, .
$$ Let $\mm^\ast$ be the Banach dual space of $\mm$, and let $\mm^\ast = \mm_\ast\oplus\mm_\ast^\perp$ be the direct sum decomposition of $\mm^\ast$ into its normal and singular parts, as described in Definition 2.13 p.~127 of \cite{Tak} (the normal part $\mm_\ast$ coincides with the {\em predual} of $\mm$). If $u,v\in\kk$ [resp., $u,v\in\hhat$], denote by $\omega_{u,v}$ the element in the Banach dual $\lk^\ast$ of $\lk$ [resp., $\elle{\hhat}^\ast$ of $\elle{\hhat}$] given by $\omega_{u,v} (A) = \scal{u}{Av}$ for all $A\in\lk$ [resp., $A\in\elle{\hhat}$]. By Theorem 2.14 p.~127 in \cite{Tak}, there exists an orthogonal projection $P\in\elle{\hhat}$ which commutes with $\pi$ and is such that: \begin{itemize} \item[-] the $\ast$-homomorphism $A\mapsto \left.\pi(A)\right|_{P\hhat}$ is a normal representation of $\mm$ on $P\hhat$; \item[-] $^t \pi (\omega_{P^\perp u,P^\perp v}) \in \mm_\ast^\perp$ for all $u,v\in\hhat$, where $P^\perp = I_{\hhat} - P$ and $^t \pi$ is the transpose of $\pi$, defined by $[^t \pi(\omega)] (A) : = \omega (\pi (A)) \ \forall \omega \in \elle{\hhat}^\ast, \ A \in \mm$. \end{itemize} Since $P$ and $\pi$ commute, we have $$ ^t \ee(\omega_{u,v}) = \,^t \pi (\omega_{V_1 u,V_2 v}) = \,^t \pi (\omega_{PV_1 u,PV_2 v}) + \,^t \pi (\omega_{P^\perp V_1 u,P^\perp V_2 v}) \, . $$ Since $^t \ee(\omega_{u,v}) \in \mm_\ast$, $\,^t \pi (\omega_{PV_1 u,PV_2 v}) \in \mm_\ast$ and $\,^t \pi (\omega_{P^\perp V_1 u,P^\perp V_2 v}) \in \mm_\ast^\perp$, it follows that $\,^t \pi (\omega_{P^\perp V_1 u,P^\perp V_2 v}) = 0$, hence $$ ^t \ee(\omega_{u,v}) = \,^t \pi (\omega_{PV_1 u,PV_2 v}) \quad \forall u,v\in\kk $$ or, equivalently, $$ \scal{u}{\ee(A)v} = \scal{PV_1 u}{\pi(A) PV_2 v} \quad \forall A\in\mm \, , \, u,v\in\kk \, . $$ We thus see that $\ee(A) = V_1^\ast P \pi(A) PV_2$ for all $A\in\mm$.
As $\pi$ restricted to the subspace $P\hhat$ is normal, each map $\ee_k$ ($k=0,1,2,3$), given by $$ \ee_k (A) = \frac{1}{4} (i^k V_1 + V_2)^\ast P \pi(A) P (i^k V_1 + V_2) \quad \forall A\in\mm \, , $$ is in $\cp{\mm,\lk}$. Since $\ee = \sum_{k=0}^3 i^k \ee_k$, this shows that $\cb{\mm,\lk}$ is the linear span of $\cp{\mm,\lk}$. \end{proof} The cone $\cp{\mm, \nn}$ induces a linear ordering in the space $\cb{\mm, \nn}$, that we will denote by $\preceq$. Namely, given two maps $\ee,\ff \in \cb{\mm, \nn}$, we will write $\ee \preceq \ff$ whenever $\ff - \ee \in\cp{\mm, \nn}$. An elementary example of maps in $\cb{\mm,\lk}$ is constructed in the following way. Suppose $\mm\subset\lh$. For $E\in\elle{\hh, \kk}$, $F\in\elle{\kk, \hh}$, denote by $E\odot_\mm F$ the linear map $$ E\odot_\mm F : \mm \frecc \lk \, , \qquad (E\odot_\mm F)(A) = E AF $$ [Note that the domain of the map $E\odot_\mm F$ is explicitly indicated by the subscript $\mm $]. The main properties of $E\odot_\mm F$ are collected in the next proposition. \begin{theorem}{Proposition}\label{prop:prop. di odot} Suppose $\mm\subset\lh$ is a von Neumann algebra. Then, for all $E\in\elle{\hh, \kk}$ and $F\in\elle{\kk, \hh}$, \begin{enumerate} \item $E\odot_\mm F\in\cb{\mm,\lk}$; \item $F^\ast\odot_\mm F\in\cp{\mm,\lk}$; \item for all operators $A\in\lh$ in the commutant $\mmp$ of $\mm$, we have $EA\odot_\mm F = E\odot_\mm AF$; \item for all $A\in\mmp$, with $0\leq A\leq I_\hh$, we have $A^{\frac 12}\odot_\mm A^{\frac 12} \preceq \ii_{\mm,\lh}$, where the map $\ii_{\mm,\lh}$ is the inclusion $\mm\hookrightarrow\lh$. \end{enumerate} \end{theorem} \begin{proof} (1) Weak*-continuity of $E\odot_\mm F$ is clear.
For all $n\in \N$, we have the equality $\ii_n \otimes (E\odot_\mm F) = (I_{\C^n}\otimes E)\odot_{\mm^{(n)}} (I_{\C^n}\otimes F)$, and then \begin{align*} \|[\ii_n \otimes (E\odot_\mm F)](\tilde{A})\|_\infty & \leq \no{I_{\C^n}\otimes E}_\infty \|\tilde{A}\|_\infty \no{I_{\C^n}\otimes F}_\infty \\ & = \no{E}_\infty \|\tilde{A}\|_\infty \no{F}_\infty \end{align*} for all $\tilde{A}\in\mmn$, which shows that $E\odot_\mm F$ is CB (with $C=\no{E}_\infty \no{F}_\infty$). (2) For all $n\in \N$, we have $\ii_n \otimes (F^\ast\odot_\mm F) = (I_{\C^n}\otimes F)^\ast \odot_{\mm^{(n)}} (I_{\C^n}\otimes F)$, which is positive from $\mmn$ into $\elle{\kkn}$. (3) Trivial. (4) Since $A^{\frac 12} ,\, (I_\hh - A)^{\frac 12} \in \mmp$, by item (3) we have $A^{\frac 12} \odot_\mm A^{\frac 12} = A \odot_\mm I_\hh$ and $(I_\hh - A)^{\frac 12} \odot_\mm (I_\hh - A)^{\frac 12} = (I_\hh - A) \odot_\mm I_\hh = \ii_{\mm,\lh} - A \odot_\mm I_\hh$. Therefore, $$ A^{\frac 12} \odot_\mm A^{\frac 12} = \ii_{\mm,\lh} - (I_\hh - A)^{\frac 12} \odot_\mm (I_\hh - A)^{\frac 12} \, , $$ and the claim follows as $(I_\hh - A)^{\frac 12} \odot_\mm (I_\hh - A)^{\frac 12} \succeq 0$. \end{proof} The importance of the elementary maps $E\odot_\mm F$ will become clear in the following, as we will shortly see that, by Kraus theorem (see Theorem \ref{Teo. Stines.} below), every map in $\cb{\mm,\lk}$ is the limit (in a suitable sense) of sums of elementary maps $E \odot_\mm F$.
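For full matrix algebras, items (1) and (2) of the proposition can be made concrete numerically: $F^\ast\odot_\mm F$ has a positive semidefinite (in fact rank-one) Choi matrix, while a generic elementary map $E\odot_\mm F$ is completely bounded but, for independently chosen $E$ and $F$, is not even Hermiticity-preserving, hence not CP. A minimal Python sketch (the random operators and the dimension are illustrative choices made here):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 2
E = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
F = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))

def choi_of(phi):
    """Choi matrix of a linear map on M_d(C); it is PSD iff the map is CP."""
    J = np.zeros((d * d, d * d), complex)
    for i in range(d):
        for j in range(d):
            Eij = np.zeros((d, d), complex)
            Eij[i, j] = 1.0
            J[i * d:(i + 1) * d, j * d:(j + 1) * d] = phi(Eij)
    return J

odot = lambda L, R: (lambda A: L @ A @ R)  # the elementary map L (.) R : A -> L A R

J_cp = choi_of(odot(F.conj().T, F))   # F* (.) F, item (2): completely positive
J_gen = choi_of(odot(E, F))           # generic E (.) F: CB but not CP
```

Here `J_cp` is the rank-one positive matrix $(I\otimes F)^\ast\ket{\Omega}\bra{\Omega}(I\otimes F)$ with $\ket{\Omega}=\sum_i\ket{ii}$, while `J_gen` fails to be Hermitian for generic $E$, $F$.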
Two of the main features of CB weak*-continuous maps which we will need in the rest of the paper are the following: \begin{itemize} \item[-] a notion of limit can be defined for a particular class of sequences in $\cb{\mm,\nn}$, which is the analogue of the least upper bound for increasing bounded sequences of operators; \item[-] if $\mm_1$, $\mm_2$, $\nn_1$, $\nn_2$ are von Neumann algebras, the maps in $\cb{\mm_1,\nn_1}$ and $\cb{\mm_2,\nn_2}$ can be tensored in order to obtain elements of the set $\cb{\mm_1\votimes\mm_2,\nn_1\votimes\nn_2}$. \end{itemize} As these concepts are the two main ingredients in our definition of supermaps and in the proof of a dilation theorem for them, we devote the next two sections to their explanation. \subsection{Increasing nets of normal CP maps}\label{sez. succ. cresc.} If $\Lambda$ is a directed set and $\{ A_\lambda \}_{\lambda\in\Lambda}$ is a net of operators in $\mm_+$, we say that the net is {\em increasing} if $A_{\lambda_1} \leq A_{\lambda_2}$ whenever $\lambda_1 \leq \lambda_2$, and {\em bounded} if there exists $B\in\mm_+$ such that $A_{\lambda} \leq B$ for all $\lambda\in\Lambda$. In this case, the net has a {\em least upper bound} $A\in\mm_+$, and we use the notation $A_\lambda\uparrow A$. We now extend the notion of increasing net and least upper bound to nets in $\cp{\mm,\nn}$. We say that the net $\left\{ \ee_\lambda \right\}_{\lambda\in\Lambda}$ of elements in $\cp{\mm,\nn}$ is \begin{itemize} \item {\em CP-increasing} if $\ee_{\lambda_1} \preceq \ee_{\lambda_2}$ whenever $\lambda_1 \leq \lambda_2$, \item {\em CP-bounded} if there exists a map $\ff\in\cp{\mm,\nn}$ such that $\ee_\lambda \preceq \ff$ for all $\lambda\in\Lambda$. \end{itemize} Note that, if the net $\left\{ \ee_\lambda \right\}_{\lambda\in\Lambda}$ is CP-increasing, then, for all $A\in\mm_+$, the net of operators $\left\{ \ee_\lambda (A) \right\}_{\lambda\in\Lambda}$ is increasing in $\nn_+$.
Moreover, if $\left\{ \ee_\lambda \right\}_{\lambda\in\Lambda}$ is CP-bounded by $\ff\in\cp{\mm,\nn}$, then the net $\left\{ \ee_\lambda (A) \right\}_{\lambda\in\Lambda}$ is bounded by $\ff(A)$ in $\nn$. The following result now shows that the least upper bound exists for any CP-increasing and CP-bounded net in $\cp{\mm,\nn}$. \begin{theorem}{Proposition}\label{Teo. Berb. 2} If $\left\{ \ee_\lambda \right\}_{\lambda\in\Lambda}$ is a net in $\cp{\mm,\nn}$ which is CP-increasing and CP-bounded, then there exists a unique $\ee\in\cp{\mm,\nn}$ such that \begin{equation}\label{conv. di En} \wklim_{\lambda\in\Lambda} \ee_\lambda (A) = \ee(A) \quad \forall A\in \mm \, . \end{equation} $\ee$ has the following property: $\ee_\lambda \preceq \ee$ for all $\lambda\in\Lambda$, and, if $\ff\in \cp{\mm,\nn}$ is such that $\ee_\lambda \preceq \ff$ for all $\lambda\in\Lambda$, then $\ee \preceq \ff$. \end{theorem} \begin{proof} We have just seen that, if $A\in\mm_+$, then the net $\left\{ \ee_\lambda (A) \right\}_{\lambda\in\Lambda}$ is bounded and increasing in $\nn$. We thus define $\ee(A)\in\nn_+$ to be its least upper bound. Now, every operator in $\mm$ is a linear combination of four elements in $\mm_+$, therefore we can extend the definition of $\ee$ to all $A\in\mm$ by linearity (it is easy to see that such a definition of $\ee(A)$ does not depend on the chosen decomposition of $A$ into positive operators). If $A=\sum_{k=0}^3 i^k A_k$, with $A_k\in\mm_+$, then $\ee_\lambda (A_k) \uparrow \ee(A_k)$ for all $k$, hence Eq.~\eqref{conv. di En} follows by linearity. In order to show that $\ee$ is normal, pick any positive sequence $\left\{ A_n \right\}_{n\in\N}$ in $\mm$ such that $A_n\uparrow A$ for some $A\in\mm_+$.
Then, for all positive elements $\rho$ in the predual $\nn_\ast$ of $\nn$, \begin{eqnarray*} \rho(\ee(A)) & = & \sup_\lambda \rho(\ee_\lambda (A)) = \sup_\lambda \sup_n \rho(\ee_\lambda (A_n)) = \sup_n \sup_\lambda \rho(\ee_\lambda (A_n)) \\ & = & \sup_n \rho(\ee (A_n)) \end{eqnarray*} Hence $\ee (A_n) \uparrow \ee (A)$, and $\ee$ is normal. Finally, to show that $\ee$ is CP, note that, for all $\tilde{A}\in\mmn_+$, we have $\wklim_{\lambda\in\Lambda} (\ii_n \otimes \ee_\lambda) (\tilde{A}) = (\ii_n \otimes \ee) (\tilde{A})$ by Eq.~\eqref{conv. di En}. Since $(\ii_n \otimes \ee_\lambda) (\tilde{A}) \geq 0$ for all $\lambda$, it follows that $(\ii_n \otimes \ee) (\tilde{A}) \geq 0$. Hence, $\ee$ is CP. The remaining properties of $\ee$ are easy consequences of its definition and of the analogous properties of least upper bounds in $\mm$, $\nn$. \end{proof} If $\{\ee_\lambda \}_{\lambda\in\Lambda}$ and $\ee$ are as in the statement of the above proposition, then we write $\ee_\lambda \Uparrow \ee$. We can now formulate Kraus theorem \cite{Kraus71} for normal CP maps in terms of CP-increasing and CP-bounded nets. To this aim, note that, if $I$ is any set, then the class of its finite subsets $\Lambda_I$ is a directed set under inclusion. \begin{theorem}{Theorem}\label{Teo. Stines.} {\rm (Kraus theorem)} Suppose $\mm\subset\lh$ is a von Neumann algebra. We have the following facts. \begin{enumerate} \item If $I$ is a finite or countable set and $\{E_i\}_{i\in I}$ are elements in $\elle{\kk, \hh}$ such that the net of partial sums $\{\sum_{i\in J} E_i^\ast E_i\}_{J\in\Lambda_I}$ is bounded in $\lk$, then the net of partial sums $\{\sum_{i\in J} E_i^\ast \odot_\mm E_i\}_{J\in\Lambda_I}$ is CP-bounded and CP-increasing in $\cp{\mm,\lk}$, hence it converges in the sense of Proposition \ref{Teo. Berb. 2} to a unique limit $\ee\in\cp{\mm,\lk}$. 
\item If $\ee\in\cp{\mm,\lk}$, then there exists a finite or countable set $I$ and a sequence $\{ E_i \}_{i\in I}$ of elements in $\elle{\kk, \hh}$ such that the net of partial sums $\{\sum_{i\in J} E_i^\ast \odot_\mm E_i\}_{J\in\Lambda_I}$ converges to $\ee$ in $\cp{\mm,\lk}$ in the sense of Proposition \ref{Teo. Berb. 2}. \end{enumerate} In both cases, choosing an arbitrary ordering $i_1,i_2,i_3 \ldots$ of the elements of $I$, we have that the sequence of partial sums $\{\sum_{k=1}^n E_{i_k}^\ast \odot_\mm E_{i_k}\}_{n\in\N}$ is CP-bounded and CP-increasing, and converges to $\ee$ in the sense of Proposition \ref{Teo. Berb. 2}. \end{theorem} \begin{proof} (1) The claim is trivial when $\# I<\infty$, therefore we assume $I=\N$. If $J_1 , J_2 \in \Lambda_\N$ with $J_1\leq J_2$, then $$ \sum_{i\in J_2} E_i^\ast \odot_\mm E_i - \sum_{i\in J_1} E_i^\ast \odot_\mm E_i = \sum_{i\in J_2\setminus J_1} E_i^\ast \odot_\mm E_i \succeq 0 \, , $$ hence the net of partial sums is CP-increasing. To show that it is CP-bounded, we introduce the following bounded operator $$ V : \kk \frecc \hh\otimes\ell^2 \, , \qquad Vv := \sum_{i\in \N} E_i v \otimes \delta_i \, , $$ where $\ell^2$ is the Hilbert space of square-summable sequences and $\{\delta_i\}_{i\in \N}$ is its standard basis. The sum converges in the norm topology of $\hh\otimes\ell^2$, as $\sum_{i\in \N} \no{E_i v}^2 \leq \scal{v}{Bv} < \infty$, where $B\in\lk$ is any positive operator such that $\sum_{i\in J} E_i^\ast E_i \leq B$ for all $J\in\Lambda_\N$. Given $J\in \Lambda_\N$, we let $P_J$ be the orthogonal projection of $\ell^2$ onto the linear span of $\{\delta_i \mid i\in J\}$. Moreover, we define the following normal $\ast$-homomorphism $$ \pi : \mm\frecc \mm\votimes \C I_{\ell^2} \, , \qquad \pi(A) = A\otimes I_{\ell^2} \, , $$ and the map $$ \ff: = (V^\ast \odot_{\mm\votimes \C I_{\ell^2}} V) \, \pi \, . $$ As $\ff$ is the composition of normal CP maps, we have $\ff\in\cp{\mm,\lk}$. 
We claim that $\sum_{i\in J} E_i^\ast \odot_\mm E_i \preceq \ff$ for all $J\in\Lambda_\N$. Indeed, we have $$ \sum_{i\in J} E_i^\ast \odot_\mm E_i = (V^\ast \odot_{\elle{\hh\otimes\ell^2}} V) [(I_\hh \otimes P_J) \odot_{\mm\votimes \C I_{\ell^2}} (I_\hh \otimes P_J)] \, \pi \, , $$ and $(I_\hh \otimes P_J) \odot_{\mm\votimes \C I_{\ell^2}} (I_\hh \otimes P_J) \preceq \ii_{\mm\votimes \C I_{\ell^2},\, \elle{\hh\otimes\ell^2}}$ by item (4) of Proposition \ref{prop:prop. di odot}. Thus, \begin{align*} \sum_{i\in J} E_i^\ast \odot_\mm E_i & = (V^\ast \odot_{\elle{\hh\otimes\ell^2}} V) [(I_\hh \otimes P_J) \odot_{\mm\votimes \C I_{\ell^2}} (I_\hh \otimes P_J)] \, \pi\\ & \preceq (V^\ast \odot_{\mm\votimes \C I_{\ell^2}} V) \, \pi \\ & = \ff , \end{align*} and the claim follows. (2) If $\ee\in\cp{\mm,\lk}$, then by Theorem 4.6 in \cite{EvLew} (or also Theorem 4.3 p.~165 in \cite{AJP}) there exists a finite or countable set $I$ and a sequence $\{ E_i \}_{i\in I}$ of elements in $\elle{\kk, \hh}$ such that \begin{equation}\label{eq:Kraus} \ee(A) = \sum_{i\in I} E_i^\ast A E_i \quad \forall A\in\mm \, , \end{equation} where the series converges in the weak*-topology and is independent of the ordering of $I$. In particular, the net of partial sums $\{\sum_{i\in J} E_i^\ast E_i\}_{J\in\Lambda_I}$ is bounded by $\ee(I_\hh)$ in $\lk$, hence by item (1) the net $\{\sum_{i\in J} E_i^\ast \odot_\mm E_i\}_{J\in\Lambda_I}$ converges in the sense of Proposition \ref{Teo. Berb. 2} to a unique $\ee'\in\cp{\mm,\lk}$. Comparing Eqs.~\eqref{conv. di En} and \eqref{eq:Kraus}, we see that $\ee = \ee'$. The last statement follows by considering the subnet $\{\sum_{i\in J_n} E_i^\ast \odot_\mm E_i\}_{n\in\N}$ of the net $\{\sum_{i\in J} E_i^\ast \odot_\mm E_i\}_{J\in\Lambda_I}$, where $J_n := \{i_1,i_2,\ldots ,i_n\}$, and by uniqueness of the limit.
\end{proof} If $\ee$ and $\{ E_i \}_{i\in I}$ are as in item (2) of the above theorem, then we say that the expression $\sum_{i\in I} E_i^\ast \odot_\mm E_i$ is the {\em Kraus form} of $\ee$. Note that $\ee$ is a quantum channel (unital map) iff $I_\kk$ is the least upper bound of the net $\{\sum_{i\in J} E_i^\ast E_i\}_{J\in\Lambda_I}$ in $\lk$. Kraus theorem and Theorem \ref{CB = span CP} show that every map $\ee\in\cb{\mm,\lk}$ can be decomposed into a (possibly infinite) sum of elementary maps $E_i\odot_\mm F_i$. Indeed, by Theorem \ref{CB = span CP} we can choose four elements $\ee_k\in\cp{\mm,\lk}$ ($k=0,1,2,3$) such that $\ee = \sum_{k=0}^3 i^k \ee_k$, and each $\ee_k$ can be written in the Kraus form $\ee_k = \sum_{i\in I_k} E^{(k)\ast}_i \odot_\mm E^{(k)}_i$. It is clear, however, that such a decomposition is not unique even if $\ee\in\cp{\mm,\lk}$ itself. \begin{definition}{Remark}\label{rem:opspa1} {\rm (The space $\cb{\mm,\lk}$ as a dual operator space)} It is interesting to note that, if $\mm = \lh$, the linear space $\cb{\lh,\lk}$ is a {\em dual operator space} in the sense of operator space theory (see e.g.~1.2.20 in \cite{BlM} for the definition of dual operator spaces, and 1.2.19 in \cite{BlM} or Proposition 14.7 in \cite{Paul} for the operator space structure of $\cb{\mm,\lk}$). Indeed, this is proven in Proposition 2.1 of \cite{BS}. In the same reference, it is also proven that the operator space $\cb{\lh,\lk}$ is completely isometrically isomorphic to the {\em weak*-Haagerup tensor product} $\elle{\hh,\kk}\whotimes\elle{\kk,\hh}$ (see \cite{BS} or 1.6.9 in \cite{BlM} for the definition). Moreover, still in the case $\mm = \lh$, Kraus theorem \ref{Teo. Stines.} above is a restatement of Theorem 2.2 in \cite{BS}, which asserts that each $\ee\in\cb{\lh,\lk}$ is the weak*-limit of a sequence of maps $\{\sum_{k=1}^n E^\ast_k \odot_{\lh} F_k\}_{n\in\N}$ for some sequences of operators $\{E_k\}_{k\in\N}$ and $\{F_k\}_{k\in\N}$ in $\elle{\kk,\hh}$.
However, for simplicity of presentation, in the following we will not phrase our results in the language of dual operator spaces, because most of the proofs are simpler (and more intuitive) in the language of operator algebras. \end{definition} \subsection{Tensor product of weak*-continuous CB maps}\label{sez. amplificaz.} If $\ee : \elle{\hh_1} \frecc \elle{\kk_1}$ and $\ff : \elle{\hh_2} \frecc \elle{\kk_2}$ are linear {\em bounded} maps, their tensor product $\ee\otimes\ff$ is well defined as a linear map $\elle{\hh_1} \hotimes \elle{\hh_2} \frecc \elle{\kk_1} \hotimes \elle{\kk_2}$. However, unless $\hh_1$ and $\kk_1$, or alternatively $\hh_2$ and $\kk_2$, are finite dimensional, in general one cannot extend $\ee\otimes\ff$ to a map $\elle{\hh_1\otimes\hh_2} \frecc \elle{\kk_1\otimes\kk_2}$. Weak*-continuous CB maps constitute an important exception to this obstruction, as is shown by the following proposition (see also Proposition 5.13 p.~228 in \cite{Tak}). \begin{theorem}{Proposition}\label{tensor product of maps} Let $\mm_1$, $\mm_2$, $\nn_1$, $\nn_2$ be von Neumann algebras. Given two maps $\ee \in \cb{\mm_1 , \nn_1}$ and $\ff \in \cb{\mm_2 , \nn_2}$, there is a unique map $\ee \otimes \ff \in \cb{\mm_1 \votimes \mm_2 , \nn_1 \votimes \nn_2}$ such that \begin{equation}\label{def on products} (\ee \otimes \ff) (A \otimes B) = \ee(A) \otimes \ff (B) \quad \forall A \in \mm_1 , \, B \in \mm_2 \, . \end{equation} If $\ee$ and $\ff$ are CP, then $\ee \otimes \ff \in \cp{\mm_1 \votimes \mm_2 , \nn_1 \votimes \nn_2}$. \end{theorem} \begin{proof} Without loss of generality, let us assume $\mm_k\subset\elle{\hh_k}$ and $\nn_k\subset\elle{\kk_k}$ for $k=1,2$. First suppose that the maps $\ee$ and $\ff$ are CP, and have Kraus forms $\ee = \sum_{i\in I} E_i^\ast \odot_{\mm_1} E_i$ and $\ff = \sum_{j \in J} F_j^\ast \odot_{\mm_2} F_j$.
We can then define a map $\gg \in \cp{\mm_1 \votimes \mm_2 , \elle{\kk_1 \otimes \kk_2}}$, with Kraus form $$ \gg := \sum_{(i,j)\in I\times J} (E_i \otimes F_j)^\ast \odot_{\mm_1\votimes\mm_2} (E_i \otimes F_j) \, . $$ It is easy to check that $\gg(A\otimes B) = \ee(A) \otimes\ff(B)$ for all $A \in \mm_1$, $B \in \mm_2$, hence $\gg$ extends the linear map $\ee \otimes \ff : \mm_1 \hotimes \mm_2 \frecc \nn_1 \hotimes \nn_2$ defined in Eq.~\eqref{def on products} to a weak*-continuous CP map from $\mm_1\votimes\mm_2$ into $\elle{\kk_1\otimes\kk_2}$. Such an extension is unique by weak*-density of $\mm_1 \hotimes \mm_2$ in $\mm_1 \votimes \mm_2$. Moreover, since $\gg (\mm_1\hotimes\mm_2) \subset \nn_1\hotimes\nn_2$, we have $\gg (\mm_1\votimes\mm_2) \subset \nn_1\votimes\nn_2$. Hence, $\gg \in \cp{\mm_1 \votimes \mm_2 , \nn_1 \votimes \nn_2}$. The claim of the theorem for generic elements $\ee \in \cb{\mm_1 , \nn_1}$ and $\ff \in \cb{\mm_2 , \nn_2}$ then follows by linearity and Theorem \ref{CB = span CP}. \end{proof} The map $\otimes : \cb{\mm_1 , \nn_1} \times\, \cb{\mm_2 , \nn_2} \frecc \cb{\mm_1\votimes\mm_2 , \nn_1\votimes\nn_2}$ defined in Proposition \ref{tensor product of maps} is clearly bilinear, and yields the inclusion $$ \cb{\mm_1 , \nn_1} \hotimes\, \cb{\mm_2 , \nn_2} \subset \cb{\mm_1\votimes\mm_2 , \nn_1\votimes\nn_2} \, . $$ \begin{definition}{Remark}\label{rem:id dei CBn} When $\mm_1 = M_m (\C)$ and $\nn_1 = M_n (\C)$, the product $\ee\otimes\ff$ defined in Proposition \ref{tensor product of maps} clearly coincides with the algebraic product that we already encountered in the definition of CB and CP maps (Definition \ref{def:CB-CP}). Moreover, the above inclusion actually becomes the equality \begin{equation}\label{eq:eqoftens} \cb{M_m (\C) , M_n (\C)} \hotimes\, \cb{\mm , \nn} = \cb{\mm^{(m)} , \nnn} \, . \end{equation} Indeed, choose two bases $\{f_i\}_{i=1}^{m^2}$ of $M_m (\C)$ and $\{g_j\}_{j=1}^{n^2}$ of $M_n (\C)$.
For a map $\tilde{\ee}\in\cb{\mm^{(m)} , \nnn}$, define $$ \tilde{\ee}_{ji} (A) := (g^\dag_j \otimes\ii_\nn) [\tilde{\ee} (f_i \otimes A)] \quad \forall A\in\mm $$ (where the superscript $^\dag$ labels the dual basis). We then have $\tilde{\ee}_{ji} \in\cb{\mm , \nn}$, as $\tilde{\ee}_{ji}$ is obtained by composing and tensoring weak*-continuous CB maps (recall that the maps $g^\dag_j : M_n (\C) \frecc \C$ and $f_i : \C \frecc M_m (\C)$ are CB by Remark \ref{rem:CB=Lin in dim finita}). Since $$ \tilde{\ee} = \sum_{i=1}^{m^2} \sum_{j=1}^{n^2} (g_j f^\dag_i) \otimes \tilde{\ee}_{ji} \, , $$ the equality of sets \eqref{eq:eqoftens} follows. \end{definition} It is easy to check that the tensor product $\otimes$ defined above preserves \begin{itemize} \item[-] composition of maps: $(\ee_1 \otimes \ff_1) (\ee_2 \otimes \ff_2) = \ee_1\ee_2 \otimes \ff_1 \ff_2$; \item[-] ordering: if $\ee_1 \preceq \ee_2$ and $\ff_1 \preceq \ff_2$, then $\ee_1 \otimes \ff_1 \preceq \ee_2 \otimes \ff_2$; \item[-] least upper bounds: if $\ee_\lambda \Uparrow \ee$ and $\ff_\mu \Uparrow \ff $, then $\ee_\lambda \otimes \ff_\mu \Uparrow \ee\otimes \ff$ (where $(\lambda_1 ,\mu_1) \leq (\lambda_2 ,\mu_2)$ iff $\lambda_1 \leq \lambda_2$ and $\mu_1 \leq \mu_2$); \item[-] quantum channels: if $\ee\in\cpn{\mm_1,\nn_1}$ and $\ff\in\cpn{\mm_2,\nn_2}$, then $\ee\otimes \ff \in \cpn{\mm_1 \votimes\mm_2 ,\nn_1 \votimes\nn_2}$. \end{itemize} Moreover, when tensoring the elementary maps $E_1\odot_{\mm_1} F_1$ and $E_2\odot_{\mm_2} F_2$, we clearly obtain $$ (E_1\odot_{\mm_1} F_1) \otimes (E_2\odot_{\mm_2} F_2) = (E_1\otimes E_2)\odot_{\mm_1\votimes\mm_2} (F_1\otimes F_2) \, . $$ In particular, we see that, if $\vv$ is another Hilbert space, then $(E\odot_\mm F)\otimes \ii_\vv = (E\otimes I_\vv)\odot_{\mm\votimes\elle{\vv}}(F\otimes I_\vv)$. \section{Quantum supermaps}\label{sez. 
centr.} In this section we introduce the central object in our study, i.e.~a particular set of linear maps $\SS : \cb{\mm_1,\nn_1} \frecc \cb{\mm_2,\nn_2}$ which mathematically describe the physically admissible transformations of quantum channels. These maps were introduced and studied in \cite{CDaP1,CDaP2} in the case where $\mm_i = \elle{\hh_i}$ and $\nn_i = \elle{\kk_i}$ are the full algebras of linear operators on finite dimensional Hilbert spaces $\hh_i$ and $\kk_i$. The main difference in the infinite dimensional case is the role of normality, which will be crucial for our dilation theorem (see Theorem \ref{teo. centr.} of the next section). Let us start from some basic terminology: \begin{definition}{Definition} Suppose $\mm_1$, $\mm_2$, $\nn_1$, $\nn_2$ are von Neumann algebras. A linear map $\SS: \cb{\mm_1,\nn_1} \frecc \cb{\mm_2,\nn_2}$ is \begin{itemize} \item[-] {\em positive} if $\SS (\ee) \succeq 0$ for all $\ee\succeq 0$; \item[-] {\em completely positive (CP)} if the map $$\II_n \otimes \SS: \cb{\mm_1^{(n)} , \nn_1^{(n)}} \to \cb{\mm_2^{(n)}, \nn_2^{(n)}}$$ is positive for every $n\in\N$, where $\II_n$ is the identity map on the linear space $\cb{M_n (\C),M_n (\C)}$; \item[-] {\em normal} if $\SS(\ee_n) \Uparrow \SS(\ee)$ for all sequences $\{ \ee_n \}_{n\in\N}$ in $\cp{\mm_1,\nn_1}$ such that $\ee_n \Uparrow \ee$. \end{itemize} \end{definition} Note that in the above definition of complete positivity we used the identification $\cb{\mmn , \nnn} = \cb{M_n (\C),M_n (\C)} \hotimes\, \cb{\mm , \nn}$ established in Remark \ref{rem:id dei CBn}. \begin{definition}{Remark} Not every CP map $\SS: \cb{\mm_1,\nn_1} \frecc \cb{\mm_2,\nn_2}$ is normal, even though, by definition, $\SS$ transforms normal maps into normal maps. A simple example of a non-normal CP map is the following: suppose $\mm_1 = \C$ and $\nn_1 = \lk$, with $\kk$ infinite dimensional.
In this case, we have the natural identifications $\cb{\C,\lk} = \lk$ and $\cp{\C,\lk} = \lk_+$, and elements $\{\ee_n\}_{n\in\N}$ and $\ee$ in $\cp{\C,\lk}$ satisfy $\ee_n\Uparrow\ee$ iff $\ee_n (1)\uparrow\ee(1)$ in $\lk_+$. Consider a singular state $\rho : \lk \frecc \C$, i.e.~a positive functional such that $\rho (K)=0$ for every compact operator $K \in \lk$ and $\rho (I_\kk) =1$. Define the linear map $\SS: \cb{\C,\lk} \frecc \cb{\mm_2,\nn_2}$ given by $\SS (\ee) = \rho (\ee (1)) \ff$, where $\ff\in\cp{\mm_2,\nn_2}$ is fixed. Since $\rho$ is CP (see Proposition 3.8 in \cite{Paul}), it is easy to check that $\SS$ is CP. However, $\SS$ is not normal: consider for example a Hilbert basis $\{e_i\}_{i\in \N}$ for $\kk$ and let $P_n$ be the orthogonal projection onto $\spanno{e_i \mid i \leq n}$. If $\ee_n,\ee\in\cp{\C,\lk}$ are given by $\ee_n (1) = P_n$ and $\ee (1) = I_\kk$, then $\ee_n \Uparrow \ee$, whereas $\SS (\ee_n) =0$ and $\SS (\ee) = \ff$. Hence, $\SS$ is not normal. \end{definition} We are now in a position to define quantum supermaps. \begin{definition}{Definition} A {\em quantum supermap} (or simply, \emph{supermap}) is a linear normal CP map $\SS : \cb{\mm_1,\nn_1} \frecc \cb{\mm_2,\nn_2}$. \end{definition} The convex set of quantum supermaps from $\cb{\mm_1,\nn_1}$ to $\cb{\mm_2,\nn_2}$ will be denoted by $\cpq{\mm_1 , \nn_1 ; \mm_2 , \nn_2}$. A partial order $\ll$ can be introduced on it as follows: given two maps $\SS_1, \SS_2 \in \cpq{\mm_1 , \nn_1 ; \mm_2 , \nn_2}$, we write $\SS_1 \ll \SS_2$ if $\SS_2 - \SS_1 \in \cpq{\mm_1 , \nn_1 ; \mm_2 , \nn_2}$. We now specialize the definition of quantum supermaps to the following two main cases of interest.
\begin{definition}{Definition} A quantum supermap $\SS \in \cpq{\mm_1 , \nn_1 ; \mm_2 , \nn_2}$ is \begin{itemize} \item[-] {\em deterministic} if it preserves the set of quantum channels, i.e.~if $\SS (\ee) \in \cpn{\mm_2, \nn_2}$ for all $\ee\in\cpn{\mm_1,\nn_1}$; \item[-] {\em probabilistic} if a deterministic supermap $\mathsf T \in \cpq{\mm_1, \nn_1; \mm_2,\nn_2 }$ exists such that $\SS \ll \mathsf T$. \end{itemize} \end{definition} Deterministic supermaps are a particular case of probabilistic supermaps. We will label by $\cpqn{\mm_1, \nn_1; \mm_2,\nn_2}$ the subset of deterministic supermaps in $\cpq{\mm_1, \nn_1; \mm_2,\nn_2}$. Obviously, composing two quantum supermaps one still obtains a supermap: if $\SS_1 \in \cpq{\mm_1, \nn_1; \mm_2,\nn_2}$ and $\SS_2 \in \cpq{\mm_2, \nn_2; \mm_3,\nn_3}$, the composition map $\SS_2 \SS_1$ is an element in $\cpq{\mm_1, \nn_1; \mm_3,\nn_3}$. Similarly, the composition of two probabilistic [resp.~deterministic] supermaps is a probabilistic [resp.~deterministic] supermap. We now introduce two examples of supermaps which will play a very important role in the next section. \begin{theorem}{Proposition}\label{compo} {\rm (Concatenation)} Given two maps $\aa \in \cp{\nn_1, \nn_2}$ and $\bb \in \cp{\mm_2, \mm_1}$, define the map $$ \CC_{\aa,\bb} : \cb{\mm_1,\nn_1} \frecc \cb {\mm_2,\nn_2} \, , \qquad \CC_{\aa,\bb} (\ee) = \aa \ee \bb \, . $$ Then $\CC_{\aa,\bb}\in\cpq{\mm_1, \nn_1; \mm_2, \nn_2}$. Moreover, if $\aa$ and $\bb$ are quantum channels, then $\CC_{\aa,\bb}$ is deterministic. \end{theorem} \begin{proof} The map $\CC_{\aa,\bb}$ is normal: if $\ee_n \Uparrow \ee$, then the sequence $\{\aa \ee_n\bb\}_{n \in \mathbb N}$ is CP-increasing and CP-bounded by $\aa \ee \bb$. Using Proposition \ref{Teo. Berb. 2}, we have $\wklim_n \aa \ee_n\bb (A) = \aa \ee \bb (A)$ for all $A\in\mm_2$, hence $\aa \ee_n\bb \Uparrow \aa \ee\bb$, i.e.~$\CC_{\aa,\bb}$ is normal. 
To prove complete positivity, note that for every map $\tilde \ee \in \cb{\mmn,\nnn}$ one has $(\II_n \otimes \CC_{\aa,\bb}) (\tilde \ee) = (\ii_n \otimes \aa) \tilde \ee (\ii_n \otimes \bb)$. Therefore, if $\tilde \ee \succeq 0$, then also $(\II_n \otimes \CC_{\aa,\bb})(\tilde \ee)\succeq 0$, hence $\II_n \otimes \CC_{\aa,\bb}$ is positive and $\CC_{\aa,\bb}$ is CP. Finally, if $\aa$ and $\bb$ are quantum channels, then $\aa\ee\bb\in\cpn{\mm_2,\nn_2}$ for all $\ee\in\cpn{\mm_1,\nn_1}$, i.e.~$\CC_{\aa,\bb}$ is deterministic. \end{proof} \begin{theorem}{Proposition}\label{ampli} {\rm (Amplification)} Suppose $\vv$ is a Hilbert space, and define the amplification supermap $$ \PI_\vv : \cb{\mm,\nn} \frecc \cb{\mm\votimes \lv , \nn\votimes \lv} \, , \qquad \PI_\vv (\ee) = \ee \otimes \ii_\vv \, , $$ where we recall that $\ii_\vv := \ii_{\lv}$ (cf.~Proposition \ref{tensor product of maps} for the definition of tensor product). Then the map $\PI_\vv$ is a deterministic supermap, that is, $\PI_\vv \in \cpqn{\mm,\nn ; \mm\votimes \lv, \nn\votimes \lv}$. \end{theorem} \begin{proof} If $\ee_n\Uparrow \ee$, then the sequence $\{\ee_n \otimes \ii_\vv\}_{n \in \mathbb N}$ is CP-increasing and CP-bounded by $\ee \otimes \ii_\vv$, hence $\ee_n \otimes \ii_\vv \Uparrow \tilde{\aa}$ for some $\tilde{\aa}\in\cp{\mm\votimes\lv,\nn\votimes\lv}$ by Proposition \ref{Teo. Berb. 2}. We have \begin{align*} \tilde{\aa} (A\otimes B) & = \wklim_n (\ee_n \otimes \ii_\vv) (A\otimes B) = \wklim_n \ee_n (A) \otimes B = \ee (A) \otimes B \\ & = (\ee \otimes \ii_\vv) (A\otimes B) \end{align*} for all $A\in\mm$ and $B\in\lv$, which implies $\tilde{\aa} = \ee \otimes \ii_\vv$ by Proposition \ref{tensor product of maps}. Thus, $\ee_n\otimes\ii_\vv \Uparrow \ee\otimes\ii_\vv$, i.e.~$\Pi_\vv$ is normal. Clearly, if $\ee$ is unital, so is $\Pi_\vv (\ee) = \ee \otimes \ii_\vv$. 
To prove complete positivity, note that for every $\tilde \ee\in \cp{\mmn , \nnn}$ we have $ (\II_n \otimes \Pi_\vv) (\tilde \ee) = \tilde \ee \otimes \ii_\vv \succeq 0$, hence $\II_n \otimes \Pi_\vv$ is positive and $\Pi_\vv$ is CP. \end{proof} The main result in the next section is that every deterministic supermap in the set $\cpqn{\mm_1,\elle{\kk_1} ; \mm_2,\elle{\kk_2}}$ is the composition of an amplification followed by a concatenation. \section{Dilation of deterministic supermaps}\label{sez. stine} This section contains the central result of our paper, namely the following dilation theorem for deterministic supermaps. \begin{theorem}{Theorem}\label{teo. centr.} {\rm (Dilation of deterministic supermaps)} Suppose $\mm_1$, $\mm_2$ are von Neumann algebras. A linear map $\SS : \cb{\mm_1 , \elle{\kk_1}} \frecc \cb{\mm_2 , \elle{\kk_2}}$ is a deterministic supermap if and only if there exists a triple $(\vv ,\, V ,\, \ff)$, where \begin{itemize} \item[-] $\vv$ is a separable Hilbert space \item[-] $V:\kk_2\frecc \kk_1 \otimes \vv$ is an isometry \item[-] $\ff$ is a quantum channel in $\cpn{\mm_2,\mm_1\votimes\lv}$ \end{itemize} such that \begin{equation}\label{eq. centr. 2} [\SS (\ee)](A) = V^\ast \left[ (\ee\otimes \ii_{\vv}) \ff(A) \right] V \quad \forall \ee\in\cb{\mm_1 , \elle{\kk_1}} \, , \, \forall A\in\mm_2 \, . \end{equation} The triple $(\vv ,\, V ,\, \ff)$ can always be chosen in a way that \begin{equation}\label{dens. in hhat1} \vv = \spannochiuso{(u^\ast\otimes I_{\vv}) Vv \mid u\in\kk_1 \, , \, v \in\kk_2} \, . \end{equation} \end{theorem} In Eq.~\eqref{dens. in hhat1}, the adjoint $u^\ast$ of $u\in\kk_1$ is the linear functional $u^\ast : w\mapsto \scal{u}{w}$ on $\kk_1$. \begin{definition}{Definition}\label{def:min} If a Hilbert space $\vv$, an isometry $V:\kk_2\frecc \kk_1 \otimes \vv$, and a quantum channel $\ff\in\cpn{\mm_2,\mm_1\votimes\lv}$ are such that Eq.~\eqref{eq. centr. 
2} holds, then we say that the triple $(\vv ,\, V ,\, \ff)$ is a {\em dilation} of the supermap $\SS$. If also Eq.~\eqref{dens. in hhat1} holds, then we say that the dilation $(\vv ,\, V ,\, \ff)$ is \emph{minimal}. \end{definition} The importance of the minimality property is highlighted by the following fact. \begin{theorem}{Proposition}\label{prop: minimality} Let $(\vv ,\, V ,\, \ff)$ and $(\vv' ,\, V' ,\, \ff')$ be two dilations of the deterministic supermap $\SS\in\cpqn{\mm_1, \elle{\kk_1} ; \mm_2, \elle{\kk_2}}$. If $(\vv ,\, V ,\, \ff)$ is minimal, then there exists a unique isometry $W : \vv \frecc \vv^\prime$ such that $V^\prime = (I_{\kk_1} \otimes W) V$. Moreover, the relation $\ff(A) = (I_{\mm_1} \otimes W^\ast) \ff'(A) (I_{\mm_1} \otimes W)$ holds for all $A\in\mm_2$. \end{theorem} The proofs of Theorem \ref{teo. centr.} and Proposition \ref{prop: minimality} will be given at the end of this section. \begin{definition}{Remark} In Proposition \ref{prop: minimality}, if also the dilation $(\vv' ,\, V' ,\, \ff')$ is minimal, then the isometry $W$ is actually unitary. Indeed, let $W' : \vv' \frecc \vv$ be the isometry such that $V = (I_{\kk_1} \otimes W') V'$. We have $V = (I_{\kk_1} \otimes W') (I_{\kk_1} \otimes W) V = (I_{\kk_1} \otimes W'W) V$. Uniqueness then implies $W'W=I_{\vv}$, hence $W$ is unitary. \end{definition} \begin{definition}{Remark} As claimed at the end of the previous section, Theorem \ref{teo. centr.} shows that every deterministic supermap $\SS\in\cpqn{\mm_1,\elle{\kk_1} ; \mm_2,\elle{\kk_2}}$ is the composition of an amplification followed by a concatenation. Indeed, setting $\aa = V^\ast\odot_{\elle{\kk_1\otimes\vv}} V$, we have $\aa\in\cpn{\elle{\kk_1\otimes\vv},\elle{\kk_2}}$, and Eq.~\eqref{eq. centr. 2} gives $\SS =\CC_{\aa,\ff} \, \Pi_\vv$. \end{definition} \begin{definition}{Remark} It is useful to connect Theorem \ref{teo. centr.} with Eq.~\eqref{eq. intro 1} and the previous results of \cite{CDaP1,CDaP2}.
So, let us assume $\mm_1=\elle{\hh_1}$ and $\mm_2\subset\elle{\hh_2}$. We claim that a linear map $\SS:\cb{\elle{\hh_1},\elle{\kk_1}}\frecc\cb{\mm_2,\elle{\kk_2}}$ is a deterministic supermap if and only if there exist two separable Hilbert spaces $\vv,\uu$ and two isometries $V:\kk_2\frecc \kk_1 \otimes \vv$, $U:\hh_1\otimes\vv\frecc \hh_2 \otimes \uu$ such that \begin{equation}\label{eq. centr. old} [\SS (\ee)](A) = V^\ast \left[ (\ee\otimes \ii_\vv) (U^\ast(A\otimes I_\uu)U) \right] V \end{equation} for all $\ee\in\cb{\elle{\hh_1} , \elle{\kk_1}}$ and $A\in\mm_2$. Indeed, by Stinespring theorem (Theorem 4.3 p.~165 in \cite{AJP} and the discussion following it) every quantum channel $\ff\in\cpn{\mm_2,\elle{\hh_1}\votimes\lv}=\cpn{\mm_2,\elle{\hh_1\otimes\vv}}$ can be written as $$ \ff(A)=U^\ast(A\otimes I_\uu)U \quad \forall A\in\mm_2 $$ for some separable Hilbert space $\uu$ and some isometry $U:\hh_1\otimes\vv\frecc \hh_2 \otimes \uu$. Eq.~\eqref{eq. centr. old} then follows by Eq.~\eqref{eq. centr. 2}, thus recovering the main result of \cite{CDaP1,CDaP2}. \end{definition} \begin{definition}{Remark}\label{rem:opspa2} Theorem \ref{teo. centr.} can be compared with an analogous result in the theory of operator spaces, namely the Christensen-Effros-Sinclair-Pisier (CSPS) theorem for maps $\varphi : \lh\hagotimes\lh \frecc \lk$ which are {\em completely bounded (CB)} in the sense of operator spaces; here, $\lh\hagotimes\lh$ is the algebraic tensor product $\lh\hotimes\lh$ endowed with the operator space structure given by the Haagerup tensor norm (see Chapter 17 in \cite{Paul} for a review of these topics). Indeed, one can show that, if a linear map $\SS : \cb{\mm_1,\elle{\kk_1}} \frecc \cb{\mm_2,\elle{\kk_2}}$ is CP {\em and} probabilistic, then it is automatically CB. 
In this case, if moreover $\mm_i = \elle{\hh_i}$, regarding the linear spaces $\cb{\elle{\hh_i},\elle{\kk_i}}$ as dual operator spaces according to Remark \ref{rem:opspa1}, normality of $\SS$ is equivalent to its weak*-continuity. These facts can be proven with some effort as direct consequences of the definitions, or more easily checked {\em a posteriori} by making use of Eq.~\eqref{eq. centr. 2} in Theorem \ref{teo. centr.}. Being an operator space, $\cb{\mm_2,\elle{\kk_2}}$ can be completely isometrically immersed into some $\lk$ by Ruan theorem (Theorem 13.4 in \cite{Paul}). On the other hand, by the completely isometric isomorphism $\cb{\elle{\hh_1},\elle{\kk_1}} \simeq \elle{\hh_1,\kk_1}\whotimes\elle{\kk_1,\hh_1}$ explained in Remark \ref{rem:opspa1}, $\SS$ can be regarded as a CB map from $\elle{\hh_1,\kk_1}\hagotimes\elle{\kk_1,\hh_1}$ into $\lk$. Assuming $\hh_1 = \kk_1 = \hh$, CSPS theorem (in the form of Theorem 17.12 of \cite{Paul}) then applies, implying the existence of a Hilbert space $\uhat$, two operators $S,T : \kk\frecc\uhat$ and two unital $\ast$-homomorphisms $\pi_1,\pi_2: \lh \frecc \elle{\uhat}$ such that \begin{equation}\label{eq:CSPS} \SS(E\otimes F) = S^\ast \pi_1 (E) \pi_2 (F) T \quad \forall E,F\in\lh \, . \end{equation} However, we stress that this expression is very different from the dilation of Theorem \ref{teo. centr.} above for deterministic supermaps. In particular, our central Eq.~\eqref{eq. centr. 2} {\em does not} follow from Eq.~\eqref{eq:CSPS} in any way. The main novelty of Theorem \ref{teo. centr.} with respect to CSPS theorem may be traced back to the requirement that deterministic supermaps preserve quantum channels.
Indeed, this is a very strong request, which cannot be employed in the CSPS dilation of Eq.~\eqref{eq:CSPS} for the reason that Ruan theorem gives no means to characterize the image of the subset of quantum channels $\cpn{\mm_2,\elle{\kk_2}}$ under the immersion $\cb{\mm_2,\elle{\kk_2}} \hookrightarrow \lk$. In other words, it is not possible to translate the requirement that a deterministic supermap $\SS$ preserves the set of quantum channels into Eq.~\eqref{eq:CSPS}. Instead, we will see that, in order to prove Theorem \ref{teo. centr.}, one needs to explicitly construct {\em two} Stinespring-type dilations $(\uhat_1,\pi_1,U_1)$ and $(\uhat_2,\pi_2,U_2)$ associated to $\SS$ (see the proof of Proposition \ref{teo. centr. prel.} below), and make essential use of the quantum channel preserving property in the construction of the dilation $(\uhat_1,\pi_1,U_1)$ (via Lemma \ref{lemma agg.} below). Of course, one can recover our dilation \eqref{eq. centr. 2} from CSPS Eq.~\eqref{eq:CSPS} in the simple case $\mm_2 = \C$, for which the equality $\cb{\mm_2,\elle{\kk_2}} = \elle{\kk_2}$ is trivial and does not require Ruan theorem. We leave the details of the proof to the reader. Note however that even in this case the proof still needs an application of Lemma \ref{lemma agg.} below. \end{definition} \begin{definition}{Remark}\label{rem:teo.centr.pred.} As anticipated in the Introduction, Eq.~\eqref{eq. centr. 2} shows that all deterministic supermaps can be obtained by connecting quantum devices in suitable circuits. Such a physical interpretation is clear in the Schr\"odinger picture: indeed, turning Eq.~\eqref{eq. centr. 2} into its predual, we obtain $$ [\SS (\ee)]_\ast (\rho) = \ff_\ast \left[( \ee \otimes \ii_\vv)_\ast ( V \rho V^\ast) \right] $$ for all elements $\rho$ in the set $\trcl{\kk_2}$ of trace class operators on $\kk_2$ and $\ee \in \cb{\mm_1 , \elle{\kk_1}}$.
The above equation means that the higher-order transformation $\SS$ can be obtained in the following way: \begin{enumerate} \item apply an invertible transformation (corresponding to the isometry $V$), which transforms the system $\kk_2$ into the composite system $\kk_1 \otimes \vv$; \item use the input device (corresponding to the predual quantum operation $\ee_*$) on system $\kk_1$, thus transforming it into system $\hh_1$; \item apply a physical transformation (corresponding to the predual channel $\ff_*$). \end{enumerate} In particular, if $\mm_i=\elle{\hh_i}$, we can take the Stinespring dilation $\ff(A) = U^\ast (A\otimes I_\uu) U$ of $\ff$. The last equation then becomes $$ [\SS (\ee)]_\ast (\rho) = {\rm tr}_\uu \left\{ U \left[( \ee \otimes \ii_\vv)_\ast ( V \rho V^\ast) \right] U^\ast \right\} $$ where ${\rm tr}_\uu$ denotes the partial trace over $\uu$. If $\rho$ is a quantum state (i.e.~$\rho\geq 0$ and $\trt{\rho} =1$), this means that the quantum system with Hilbert space $\kk_2$ first undergoes the invertible evolution $V$, then the quantum channel $(\ee \otimes \ii_\vv)_\ast$, and finally the invertible evolution $U$, after which the ancillary system with Hilbert space $\uu$ is discarded. It is interesting to note that the same kind of sequential composition of invertible evolutions also appears in a very different context: the reconstruction of quantum stochastic processes from correlation kernels \cite{belavkin,lindblad,parthasarathy}. That context is very different from the present framework of higher-order maps, and it is a remarkable feature of Theorem \ref{teo. centr.} that any deterministic supermap on the space of quantum operations can be achieved through a two-step sequence of invertible evolutions. \end{definition} Theorem \ref{teo. centr.} contains as a special case the Stinespring dilation of quantum channels. This fact is illustrated in the following two examples.
\begin{definition}{Example} Suppose that $\mm_1 = \mm_2 = \C$, the trivial von Neumann algebra. In this case we have the identification $\cb{\C , \elle{\kk_i}} = \elle{\kk_i}$. Precisely, the element $\ee\in \cb{\C , \elle{\kk_i}}$ is identified with the operator $A_\ee = \ee (1) \in \elle{\kk_i}$. Using the fact that $\cpn{\mm_2,\mm_1\votimes\lv} = \{ I_\vv \}$ we then obtain that Eq.~\eqref{eq. centr. 2} becomes $$ [\SS(\ee)](1) = V^\ast (A_\ee\otimes I_\vv) V \, , $$ which is just the Stinespring dilation for normal CP maps. A linear map $\SS : \elle{\kk_1} \frecc \elle{\kk_2}$ is thus in $\cpqn{\C,\elle{\kk_1} ; \C,\elle{\kk_2}}$ if and only if it is a unital normal CP map, i.e.~a quantum channel. \end{definition} \begin{definition}{Example} Suppose now that $\kk_1 = \kk_2 =\C$. In this case we have the identification $\cb{\mm_i , \C} = \mm_{i\, \ast}$, the predual space of $\mm_i$ (see e.g.~Proposition 3.8 in \cite{Paul}). Precisely, the CP map $\ee\in \cb{\mm_i , \C}$ is identified with the element $\rho_\ee\in\mm_{i\, \ast}$ given by $\ee(A) = \rho_\ee(A) \ \forall A\in\mm_i$. Moreover, the isometry $V : \C \to \C \otimes \vv = \vv$ is identified with a vector $v \in \vv$ with $\no {v} = 1$, and Eq.~\eqref{eq. centr. 2} becomes \begin{align*} [\SS (\ee)](A) & = \scal{v}{\left[(\ee\otimes \ii_{\vv}) \ff(A)\right] v}\\ & = (\rho_\ee\otimes \omega_v)(\ff(A)) \\ & = [\ff_\ast (\rho_\ee\otimes \omega_v)] (A) \, , \end{align*} where $\omega_v \in \elle{\vv}_\ast$ is the linear form $\omega_v : A\mapsto \scal{v}{Av}$. Thus, $\SS (\ee) = \ff_\ast (\rho_\ee\otimes \omega_v)$, hence $\SS$, viewed as a linear map $\SS: \mm_{1\, \ast} \frecc \mm_{2\, \ast}$, is CP and trace preserving. In other words, $\SS$ is a quantum channel in the Schr\"odinger picture. \end{definition} The rest of this section is devoted to the proof of Theorem \ref{teo. centr.}, which first requires some auxiliary lemmas.
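In finite dimensions, the relationship between a Kraus form and its Stinespring isometry that underlies the first example above can be checked numerically. The following Python sketch (all names are illustrative and not part of the paper's formalism) stacks the Kraus operators $K_i$ of a unital CP map $\ee(A)=\sum_i K_i^\ast A K_i$ into an isometry $V:\C^d\to\C^r\otimes\C^d$ and verifies the identity $\ee(A)=V^\ast (I_r\otimes A) V$.

```python
import numpy as np

# Finite-dimensional sanity check: for a CP map E(A) = sum_i K_i^* A K_i on
# M_d(C), the stacked operator V v = sum_i e_i (x) (K_i v) is an isometry
# exactly when the map is unital, and E(A) = V^* (I_r (x) A) V.
rng = np.random.default_rng(0)
d, r = 3, 2  # system dimension and number of Kraus operators (illustrative)

# Random Kraus operators, then normalize so that sum_i K_i^* K_i = I_d.
Ks = [rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
      for _ in range(r)]
S = sum(K.conj().T @ K for K in Ks)                 # positive definite (generic)
w, U = np.linalg.eigh(S)
S_inv_sqrt = U @ np.diag(w ** -0.5) @ U.conj().T    # S^{-1/2}
Ks = [K @ S_inv_sqrt for K in Ks]                   # now sum K_i^* K_i = I_d

# Stinespring isometry: vertical stacking puts block i equal to K_i,
# i.e. V v = sum_i e_i (x) (K_i v) with the ancilla as the first tensor factor.
V = np.vstack(Ks)                                   # shape (r*d, d)
assert np.allclose(V.conj().T @ V, np.eye(d))       # V is an isometry

def channel(A):
    """Heisenberg-picture action E(A) = sum_i K_i^* A K_i."""
    return sum(K.conj().T @ A @ K for K in Ks)

A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
lhs = channel(A)
rhs = V.conj().T @ np.kron(np.eye(r), A) @ V        # V^* (I_r (x) A) V
assert np.allclose(lhs, rhs)
```

With the chosen stacking order the ancilla factor comes first, which is why the dilated operator is `np.kron(np.eye(r), A)`; swapping the factors would require interleaving the rows of `V` accordingly.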
\begin{theorem}{Lemma}\label{to lemma agg.} Suppose $\mm \subset \lh$, and let $\SS \in \cpqn{\mm,\lh;\nn,\lk}$. If $\ee, \ff \in \cp{\mm,\lh}$ are such that $\ee(I_\hh) = \ff (I_\hh)$, then $[\SS(\ee)] (I_\nn) = [\SS(\ff)] (I_\nn)$. \end{theorem} \begin{proof} By linearity, it is enough to prove the claim for $\ee (I_\hh) = \ff (I_\hh) \leq I_\hh$. Let $A := I_\hh - \ee (I_\hh)$, $\aa:= A^{\frac 12} \odot_\mm A^{\frac 12}$, $\ee' :=\ee + \aa$, and $\ff' := \ff+\aa$. With this definition, $\ee^\prime ,\ff^\prime \in \cpn{\mm,\lh}$. Since $\SS$ is deterministic, one has $$ \begin{array}{lll} I_\kk &= [\SS (\ee')] (I_\nn) &= [\SS(\ee)] (I_\nn) + [\SS(\aa)](I_\nn) \\ I_\kk &= [\SS (\ff')] (I_\nn) &= [\SS (\ff)](I_\nn) + [\SS(\aa)](I_\nn). \end{array} $$ By comparison, this implies that $[\SS(\ee)] (I_\nn) = [\SS (\ff)] (I_\nn)$. \end{proof} \begin{theorem}{Lemma}\label{lemma norm.} Suppose $\mm \subset \lh$, and let $\SS \in \cpqn{\mm,\lh;\nn,\lk}$. Then, for all $\ee\in\cp{\lh,\lh}$, $$ [\SS (\ee\ff)] (I_\nn) = [\SS (\left.\ee\right|_\mm)] (I_\nn) \quad \forall \ff\in\cpn{\mm,\lh} \, . $$ \end{theorem} \begin{proof} Note that the restriction $\left.\ee\right|_\mm$ belongs to $\cp{\mm,\elle{\hh}}$ by Remark \ref{restr-CP}. Therefore, since $\ee\ff (I_\hh) = \left.\ee\right|_\mm (I_\hh)$ for all $\ff\in\cpn{\mm,\lh}$, the claim is an immediate consequence of Lemma \ref{to lemma agg.}. \end{proof} \begin{theorem}{Lemma}\label{lemma agg.} Suppose $\mm \subset \lh$, and let $\SS \in \cpqn{\mm,\lh;\nn,\lk}$. Then $$ [\SS (\ee (I_\hh \odot_\mm A))] (I_\nn) = [\SS (\ee (A\odot_\mm I_\hh))](I_\nn) $$ for all $\ee \in\cb{\lh,\lh}$ and $A \in\lh$. In particular, $$ [\SS (E \odot_\mm A F)] (I_\nn) = [\SS ( EA \odot_\mm F)] (I_\nn) \quad \forall E, F , A \in \lh \, . $$ \end{theorem} \begin{proof} By linearity, it is enough to prove the claim for $A^* = A$ and for $\ee \in \cp{\lh,\lh}$. 
One has $$ A\odot_\mm I_\hh - I_\hh \odot_\mm A = \frac 1{2i} (\ee_+ - \ee_-) \, , $$ where $\ee_+ , \ee_- \in \cp{\mm,\lh}$ are given by $$ \ee_\pm := (A \pm iI_\hh)^\ast \odot_\mm (A \pm iI_\hh) \, . $$ Since $\ee_+(I_\hh) = \ee_-(I_\hh)$, we can apply Lemma \ref{to lemma agg.} to the maps $\ee\ee_+$ and $\ee\ee_-$, and obtain \begin{align*} & [\SS ( \ee (A \odot_\mm I_\hh))] (I_\nn)- [\SS(\ee (I_\hh \odot_\mm A))] (I_\nn) = \\ & \qquad \qquad \qquad \qquad \qquad \qquad = \frac{1}{2i} ([\SS (\ee\ee_+)] (I_\nn) - [\SS (\ee\ee_-)] (I_\nn)) \\ & \qquad \qquad \qquad \qquad \qquad \qquad = 0 \, , \end{align*} hence the claim. The last statement trivially follows by taking $\ee=E\odot_\mm F$. \end{proof} \begin{theorem}{Lemma}\label{prop. sulla forma ass.} Suppose $\mm_1\subset\elle{\hh_1}$, and let $\SS$ be a (not necessarily deterministic) supermap in the set $\cpq{\mm_1,\elle{\kk_1};\mm_2,\elle{\kk_2}}$. Define a sesquilinear form $\scal{\cdot}{\cdot}_\SS$ on the algebraic tensor product $\elle{\kk_1,\hh_1} \hotimes \mm_2 \hotimes \kk_2$ as follows $$ \scal{E_1\otimes A_1 \otimes v_1}{E_2\otimes A_2 \otimes v_2}_\SS := \scal{v_1}{\left[ \SS \left( E_1^\ast \odot_{\mm_1} E_2 \right) \right] \left( A_1^\ast A_2 \right) v_2} \, . $$ Then, the sesquilinear form $\scal{\cdot}{\cdot}_\SS$ is positive semidefinite. If also $\TT\in \cpq{\mm_1,\elle{\kk_1};\mm_2,\elle{\kk_2}}$ and $\TT\ll\SS$, then $$ 0\leq \scal{\phi}{\phi}_\TT \leq \scal{\phi}{\phi}_\SS \quad \forall \phi\in \elle{\kk_1,\hh_1} \hotimes \mm_2 \hotimes \kk_2 \, . $$ \end{theorem} \begin{proof} Let $\phi = \sum_{i=1}^n E_i \otimes A_i \otimes v_i$ be a generic element in the linear space $\elle{\kk_1,\hh_1} \hotimes \mm_2 \hotimes \kk_2$. Let $\{e_i\}_{i=1}^n$ be the standard basis for the Hilbert space $\C^n$, and $\{e_{ij}\}_{i,j=1}^n$ be the standard basis of the matrix space $M_n(\C)$, given by $e_{ij} (e_k) = \delta_{jk} e_i$.
Define \begin{align*} \tilde v &:= \sum_{i=1}^n e_i \otimes v_i \in \kk_2^{(n)} \\ \tilde A &:= \sum_{i=1}^n e_{1i} \otimes A_i \in \mm_2^{(n)} \\ \tilde E &:= \sum_{i=1}^n e_{ii} \otimes E_i \in \elle{\kk_1^{(n)} , \hh_1^{(n)}} \, . \end{align*} With these definitions, we have $\tilde{E}^\ast \odot_{\mm_1^{(n)}} \tilde{E} = \sum_{i,j=1}^n (e_{ii} \odot_{M_n(\C)} e_{jj} ) \otimes (E_i^\ast \odot_{\mm_1} E_j)$ and $\tilde A^*\tilde A =\sum_{i,j=1}^n e_{ij} \otimes A_i^* A_j$. Hence, we obtain \begin{align*} (\II_n \otimes \SS) (\tilde{E}^\ast \odot_{\mm_1^{(n)}} \tilde E) & = \sum_{i,j} (e_{ii} \odot_{M_n(\C)} e_{jj} ) \otimes \SS (E_i^\ast \odot_{\mm_1} E_j) \end{align*} and \begin{align*} [(\II_n \otimes \SS) (\tilde{E}^\ast \odot_{\mm_1^{(n)}} \tilde E) ] (\tilde A^* \tilde A ) &= \sum_{i,j} e_{ij} \otimes [\SS (E_i^\ast \odot_{\mm_1} E_j)] (A^*_i A_j) . \end{align*} Complete positivity of $\SS$ then implies \begin{align*} 0 &\leq \scal{\tilde v}{ [(\II_n \otimes \SS)(\tilde{E}^\ast \odot_{\mm_1^{(n)}} \tilde E)] (\tilde A^*\tilde A) \tilde v } \\ &= \sum_{i,j} \scal{v_i}{[\SS(E_i^\ast \odot_{\mm_1} E_j)] (A_i^* A_j) v_j} \\ & = \scal{\phi}{\phi}_\SS \, , \end{align*} which shows that the sesquilinear form $\scal{\cdot}{\cdot}_\SS$ is positive semidefinite. Since the sesquilinear forms $\scal{\cdot}{\cdot}_\TT$, $\scal{\cdot}{\cdot}_\SS$ and $\scal{\cdot}{\cdot}_{\SS-\TT}$ are all positive semidefinite, the second statement in the lemma follows from $$ \scal{\phi}{\phi}_\TT = \scal{\phi}{\phi}_\SS - \scal{\phi}{\phi}_{\SS-\TT} \leq \scal{\phi}{\phi}_\SS \, . $$ \end{proof} In the next two lemmas, we \emph{do not} assume separability as part of the definition of Hilbert spaces. \begin{theorem}{Lemma}\label{normal pi} Let $\hh$ be separable, $\{e_i\}_{i\in \mathbb N}$ be a Hilbert basis for $\hh$, and $P_n$ be the orthogonal projection onto $\spanno{ e_i \mid i \leq n}$.
A unital $\ast$-homomorphism $\pi: \lh \frecc \elle{\uhat}$ (with $\uhat$ not assumed separable) is normal if and only if $\pi (P_n) \uparrow I_{\uhat}$. \end{theorem} \begin{proof} Since $P_n \uparrow I_\hh$, if $\pi$ is normal one must necessarily have $\pi(P_n) \uparrow \pi (I_\hh) = I_\uhat$. Conversely, assume that $\pi (P_n) \uparrow I_\uhat$. Let us decompose $\pi$ into the orthogonal sum of $\ast$-homomorphisms $\pi = \pi_{\rm nor} \oplus \pi_{\rm sin}$, where $\pi_{\rm nor}$ is normal and $\pi_{\rm sin}$ is singular, that is $\pi_{\rm sin} (K) = 0$ for every compact operator $K\in\lh$ (see e.g.~Proposition 10.4.13, p.~757 of \cite{KadRin}). We then have $\pi (P_n) = \pi_{\rm nor} (P_n) \uparrow \pi_{\rm nor} (I_\hh)$ by normality, hence $\pi_{\rm nor} (I_\hh) = I_\uhat$. On the other hand, $I_\uhat = \pi_{\rm nor} (I_\hh) \oplus \pi_{\rm sin} (I_\hh)$. This implies $\pi_{\rm sin} (I_\hh) = 0$, and, therefore, $\pi_{\rm sin} =0$. \end{proof} \begin{theorem}{Lemma}\label{lemma sep. di K} Let $\hh$ be separable and $\pi : \lh \frecc \elle{\uhat}$ be a normal unital $\ast$-homomorphism (with $\uhat$ not assumed separable). If there exists a separable subset $\ss\subset\uhat$ such that the linear space \begin{equation}\label{denso1} \spanno{\pi (A) v \mid A\in\lh , v\in\ss} \end{equation} is dense in $\uhat$, then $\uhat$ is separable. \end{theorem} \begin{proof} Since the Hilbert space $\hh$ is separable, the Banach subspace $\lzh$ of the compact operators in $\lh$ is separable. Let $P_n$ be defined as in the previous lemma. By normality of $\pi$, we have $\lim_n \no{\pi(P_n) v - v} = 0$ for all $v\in\uhat$ (Lemma 5.1.4 in \cite{KadRinI}). Therefore, $\pi (A) v = \lim_n \pi (AP_n)v$ for all $A\in \lh$ and $v\in\uhat$, where $AP_n\in\lzh$ because $P_n$ has finite rank.
Therefore, the closure of the linear space defined in \eqref{denso1} coincides with the closure of the linear space spanned by the set $\left\{\pi (A) v \mid A\in\lzh , v\in\ss\right\}$, which is separable by separability of $\lzh$ and $\ss$ and by continuity of the mapping $\lzh \times \ss \ni (A,v) \mapsto \pi(A) v \in \uhat$. Separability of $\uhat$ then follows. \end{proof} We are now in a position to prove the existence of the dilation of Theorem \ref{teo. centr.} in the special case where $\elle{\kk_1} = \elle{\hh}$ with $\mm_1 \subset \elle{\hh}$ and $\dim \hh = \infty$. \begin{theorem}{Proposition}\label{teo. centr. prel.} Let $\dim \hh = \infty$, and assume $\mm\subset\lh$. If the linear map $\SS : \cb{\mm,\lh} \frecc \cb{\nn,\lk}$ is a deterministic supermap, then there exists a Hilbert space $\uu$, an isometry $U:\kk\frecc \hh \otimes \uu$ and a quantum channel $\gg\in\cpn{\nn,\mm\votimes\lu}$ such that \begin{equation}\label{eq. centr.} [\SS (\ee)](A) = U^\ast \left[(\ee\otimes \ii_\uu) \gg (A) \right] U \quad \forall \ee\in\cb{\mm,\lh} \, , \, \forall A\in\nn \, . \end{equation} \end{theorem} \begin{proof} Suppose that $\SS : \cb{\mm,\lh} \frecc \cb{\nn,\lk}$ is a deterministic supermap. Let $\scal{\cdot}{\cdot}_1$ be the positive semidefinite sesquilinear form on $\lh \hotimes \kk$ defined by $$ \scal{E_1\otimes v_1}{E_2\otimes v_2}_1: = \scal{E_1 \otimes I_\nn \otimes v_1}{ E_2 \otimes I_\nn \otimes v_2 }_\SS . $$ Let $\rr_1$ be its kernel and $\uhat_1$ be the Hilbert space completion of the quotient space $\lh \hotimes \kk / \rr_1$ (not assumed separable). We denote by $\scal{\cdot}{\cdot}_1$ and $\no{\cdot}_1$ the scalar product and norm in $\uhat_1$. Moreover, let $\rr_2$ be the kernel of the positive semidefinite sesquilinear form $\scal{\cdot}{\cdot}_\SS$, and let $\uhat_2$ be the Hilbert space completion (not assumed separable) of the quotient space $\lh \hotimes \nn \hotimes \kk / \rr_2$ with respect to this form.
We denote by $\scal{\cdot}{\cdot}_2$ and $\no{\cdot}_2$ the resulting scalar product and norm in $\uhat_2$. We define two linear maps $$ \begin{array}{ll} U_1: \kk \frecc \lh \hotimes \kk \qquad &U_1 v = I_\hh \otimes v \\ U_2 : \lh \hotimes \kk \frecc \lh \hotimes \nn \hotimes \kk \qquad & U_2 (E\otimes v) = E\otimes I_\nn \otimes v \, . \end{array} $$ It is easy to verify that $U_1$ and $U_2$ extend to isometries $U_1: \kk \frecc \uhat_1$ and $U_2: \uhat_1 \frecc \uhat_2$, respectively. Indeed, for $U_1$ we have the equality \begin{align*} \no{U_1 v }_1^2 &= \scal{I_\hh \otimes I_\nn \otimes v}{I_\hh \otimes I_\nn \otimes v}_\SS \\ & = \scal{v}{[\SS(I_\hh \odot_\mm I_\hh)](I_\nn) v} \\ &= \no{v}^2 \, , \end{align*} where we used the fact that $\SS$ is deterministic, implying $[\SS (I_\hh \odot_\mm I_\hh)] (I_\nn) = I_\kk$. For $U_2$, taking $\phi = \sum_{i=1}^n E_i \otimes v_i$ we have the equality \begin{align*} \no{U_2 \phi}_2^2 & = \sum_{i,j} \scal{ E_i \otimes I_\nn \otimes v_i} { E_j \otimes I_\nn \otimes v_j }_\SS \\ & = \sum_{i,j} \scal{ E_i \otimes v_i}{E_j \otimes v_j}_1 \\ & = \no{\phi}_1^2 \, . \end{align*} For $B\in\lh$, we introduce the linear operator $\pi_1 (B)$ on $\lh \hotimes \kk$ defined by $$ [\pi_1 (B)] (E\otimes v) := B E\otimes v $$ for all $E\in\lh$, $v\in\kk$. We claim that $\pi_1 (B)$ extends to a bounded linear operator on $\uhat_1$, that $\pi_1$ is a normal unital $\ast$-homomorphism of $\lh$ into $\elle{\uhat_1}$, and that $\uhat_1$ is separable. Indeed, for every $\phi = \sum_{i=1}^n E_i \otimes v_i$ and $\psi = \sum_{j=1}^m F_j \otimes w_j$, we have \begin{align*} \scal{\phi}{\pi_1 (B) \psi }_1 & = \sum_{i,j} \scal{v_i}{\left[ \SS (E_i^\ast \odot_\mm BF_j ) \right] (I_\kk) w_j} \\ & = \sum_{i,j} \scal{v_i}{\left[ \SS (E_i^\ast B \odot_\mm F_j ) \right] (I_\kk) w_j} \\ & = \scal{\pi_1 (B^\ast)\phi}{\psi}_1 , \end{align*} where we used Lemma \ref{lemma agg.}. 
Note that $\pi_1 (I_\hh)$ is the identity on $\lh\hotimes\kk$, and $$ \pi_1 (B_1) \pi_1 (B_2) = \pi_1 (B_1 B_2) \quad \forall B_1 ,B_2 \in \lh \, . $$ It follows that, for all $\phi \in \lh \hotimes \kk$, the map $\omega_\phi : B \mapsto \scal{\phi}{\pi_1 (B) \phi}_1$ is a positive linear functional on $\lh$, hence $$ \no{\pi_1 (B) \phi}_1^2= \omega_\phi (B^*B) \leq \no{B^*B}_\infty \omega_\phi (I_\hh) =\no{B}_\infty^2\no{\phi}_1^2 . $$ Therefore, $\pi_1 (B)$ extends to a bounded operator on $\uhat_1$, and $\pi_1$ is a unital $\ast$-homomorphism of $\lh$ into $\elle{\uhat_1}$. We now prove that $\pi_1$ is normal. Let $\{e_i\}_{i\in \mathbb N} $ be a Hilbert basis for $\hh$, $Q_i$ be the orthogonal projection onto $\C e_i$, and $P_n$ be the orthogonal projection onto $\spanno{e_i \mid i\leq n }$. By Lemma \ref{normal pi}, to prove that $\pi_1$ is normal it is enough to prove that $\pi_1 (P_n)\uparrow I_{\uhat_1}$. For every $\phi = E\otimes v$, $\psi = F \otimes w$ we have \begin{align*} \scal{\phi}{\pi_1 (P_n) \psi}_1 & = \sum_{i=1}^n \scal{\pi_1 (Q_i) \phi}{\pi_1 (Q_i) \psi} _1\\ &= \sum_{i=1}^n \scal{v}{[\SS (E^\ast Q_i \odot_\mm Q_i F)](I_\nn) w} \\ & = \scal{v}{[\SS ((E^\ast \odot_{\lh} F) \ff_n )](I_\nn) w} , \end{align*} where $\ff_n = \sum_{i=1}^n Q_i \odot_\mm Q_i \in \cp{\mm,\lh}$. Let $\ff \in \cpn{\mm,\lh}$ be the quantum channel defined by $\ff_n \Uparrow \ff$. 
Using the polarization identity $E^\ast \odot_{\lh} F = \frac{1}{4} \sum_{k=0}^3 i^k (i^k E + F)^\ast \odot_{\lh} (i^k E + F)$, the normality of $\SS$ and Lemma \ref{lemma norm.}, we then obtain \begin{align*} & \lim_n \scal{\phi}{\pi_1 (P_n) \psi}_1 =\lim_n \scal{v}{[\SS ((E^\ast \odot_{\lh} F) \ff_n )](I_\nn) w} \\ & \qquad \qquad \quad = \frac 1 4 \sum_{k=0}^3 i^k \lim_n \scal{v}{[\SS (((i^k E + F)^\ast \odot_{\lh} (i^k E + F)) \ff_n )](I_\nn) w} \\ & \qquad \qquad \quad = \frac 1 4 \sum_{k=0}^3 i^k \scal{v}{[\SS (((i^k E + F)^\ast \odot_{\lh} (i^k E + F)) \ff )](I_\nn) w}\\ & \qquad \qquad \quad = \frac 1 4 \sum_{k=0}^3 i^k \scal{v}{[\SS ((i^k E + F)^\ast \odot_\mm (i^k E + F))](I_\nn) w}\\ & \qquad \qquad \quad = \scal{v}{[\SS (E^\ast \odot_\mm F)](I_\nn) w} \\ & \qquad \qquad \quad = \scal{\phi}{\psi}_1 \, . \end{align*} This relation extends by linearity to all $\phi ,\psi \in \lh\hotimes \kk$, and, since the sequence $\{ \pi_1 (P_n) \}_{n\in\N}$ is norm bounded, by density to all $\phi ,\psi \in \uhat_1$. Therefore, we obtain $\wklim_n \pi_1 (P_n) = I_{\uhat_1}$, thus concluding the proof of normality of $\pi_1$. Note that the linear space $\spanno{E\otimes v = \pi_1 (E) U_1 v \mid E\in\lh,\, v\in\kk}$ is dense in $\uhat_1$ by definition, hence, using Lemma \ref{lemma sep. di K} with $\uhat \equiv \uhat_1$ and $\ss \equiv U_1\kk$, we obtain that $\uhat_1$ is separable. For $C\in\nn$, we define a linear operator $\pi_2 (C)$ on $\lh \hotimes \nn \hotimes \kk$ given by $$ [\pi_2 (C)] (E\otimes A\otimes v) := E\otimes C A\otimes v $$ for all $E\in\lh$, $A\in\nn$, $v\in\kk$. We claim that $\pi_2 (C)$ extends to a bounded linear operator on $\uhat_2$ and that $\pi_2$ is a unital $\ast$-homomorphism of $\nn$ into $\elle{\uhat_2}$. 
Indeed, for all vectors $\phi,\psi\in\lh\hotimes\nn\hotimes\kk$, with $\phi = \sum_{i=1}^n E_i \otimes A_i \otimes v_i$ and $\psi = \sum_{j=1}^m F_j \otimes B_j \otimes w_j$, we have \begin{eqnarray*} \scal{\phi}{\pi_2 (C) \psi}_2 & = & \sum_{i,j} \scal{v_i}{\left[ \SS (E_i^\ast \odot_\mm F_j ) \right] (A_i^\ast C B_j) w_j} \\ & = & \scal{\pi_2 (C^\ast)\phi}{\psi}_2 . \end{eqnarray*} Clearly, $\pi_2 (I_\nn)$ is the identity on $\lh\hotimes\nn\hotimes\kk$. Moreover, $\pi_2 (C_1) \pi_2 (C_2) = \pi_2 (C_1 C_2)$. The same argument used for $\pi_1$ then shows that $\pi_2 (C)$ extends to a bounded operator on $\uhat_2$, and $\pi_2$ is a unital $\ast$-homomorphism of $\nn$ into $\elle{\uhat_2}$. We now introduce the following linear map $$ \gg : \nn \frecc \elle{\uhat_1} \, , \qquad \gg(A) = U_2^\ast \pi_2 (A) U_2 \, . $$ Clearly, the map $\gg$ is CP and unital. If $\{A_n\}_{n\in\N}$ is an increasing sequence in $\nn$ such that $A_n \uparrow A$, then, for all vectors $\phi,\psi\in\lh\hotimes\kk$, with $\phi = \sum_{i=1}^m E_i \otimes v_i$, $\psi = \sum_{j=1}^k F_j \otimes w_j$, we have \begin{eqnarray*} \lim_n \scal{\phi}{\gg(A_n)\psi}_1 & = & \lim_n \scal{U_2 \phi}{\pi_2 (A_n) U_2 \psi}_2 \\ & = & \lim_n \sum_{i,j} \scal{E_i \otimes I_\nn \otimes v_i}{F_j \otimes A_n \otimes w_j}_\SS \\ & = & \lim_n \sum_{i,j} \scal{v_i}{[\SS(E_i^\ast\odot_\mm F_j)] (A_n) w_j} \\ & = & \sum_{i,j} \scal{v_i}{[\SS(E_i^\ast\odot_\mm F_j)] (A) w_j} \\ & = & \scal{\phi}{\gg(A)\psi}_1 \end{eqnarray*} as a consequence of weak*-continuity of $\SS(E_i^\ast\odot_\mm F_j)$. Hence, $\gg$ is normal, and, therefore, we have $\gg\in\cpn{\nn,\elle{\uhat_1}}$. By Lemma 2.2 p.~139 in \cite{QTOS76} (or Corollary 10.4.14 in \cite{KadRin}), separability of $\uhat_1$ and normality of $\pi_1$ imply that there exists a (separable) Hilbert space $\uu$ such that $\uhat_1 = \hh\otimes\uu$ and $\pi_1 (B) = B \otimes I_\uu$ for all $B\in\lh$.
We now prove that $\gg(A) \in \mm\votimes \elle{\uu}$ for all $A\in\nn$, i.e.~actually $\gg\in\cpn{\nn,\mm\votimes \elle{\uu}}$. By Proposition 1.6 p.~184 in \cite{Tak} and by von Neumann's double commutation theorem (Theorem 3.9 p.~74 in \cite{Tak}), it is enough to show that $\gg(A) (B\otimes I_\uu) = (B\otimes I_\uu) \gg(A)$ for all $A\in\nn$ and $B\in\mm'$. So, for $\phi,\psi\in\lh\hotimes\kk$ with $\phi = \sum_{i=1}^n E_i \otimes v_i$, $\psi = \sum_{j=1}^m F_j \otimes w_j$, we have \begin{eqnarray*} \scal{\phi}{\gg(A) (B\otimes I_\uu) \psi}_1 & = & \scal{U_2\phi}{\pi_2 (A) U_2 \pi_1 (B) \psi}_1 \\ & = & \sum_{i,j} \scal{E_i \otimes I_\nn \otimes v_i}{BF_j \otimes A \otimes w_j}_\SS \\ & = & \sum_{i,j} \scal{v_i}{[\SS(E_i^\ast\odot_\mm BF_j)] (A) w_j} \\ & = & \sum_{i,j} \scal{v_i}{[\SS(E_i^\ast B \odot_\mm F_j)] (A) w_j} \\ & = & \scal{(B^\ast\otimes I_\uu)\phi}{\gg(A)\psi}_1 \, , \end{eqnarray*} where the equality $E_i^\ast\odot_\mm BF_j = E_i^\ast B\odot_\mm F_j$ comes from item (3) of Proposition \ref{prop:prop. di odot}. Hence $\gg(A) \in (\mm'\votimes \C I_\uu)' = \mm\votimes\lu$, as claimed. We conclude with the proof of Eq.~\eqref{eq. centr.}. If $E\in\lh$, $A\in\nn$ and $v,w\in\kk$, then we have, for $\ee = E^\ast \odot_\mm E$, \begin{eqnarray*} \scal{v}{\left[ \SS (\ee) \right] (A) w} & = & \scal{E\otimes I_\nn \otimes v}{E \otimes A \otimes w}_\SS \\ & = & \scal{U_2 \pi_1 (E) U_1 v}{\pi_2 (A) U_2 \pi_1 (E) U_1 w}_2 \\ & = & \scal{\pi_1 (E) U_1 v}{\gg(A) \pi_1 (E) U_1 w}_1 \\ & = & \scal{v}{U_1^\ast (E^\ast \otimes I_\uu) \gg(A) (E \otimes I_\uu) U_1 w} \\ & = & \scal{v}{U_1^\ast [(\ee \otimes \ii_\uu) \gg(A)] U_1 w} \, . \end{eqnarray*} Setting $U :=U_1$, we then obtain Eq.~\eqref{eq. centr.} in the special case $\ee = E^\ast \odot_\mm E$. The equality for generic $\ee\in\cp{\mm,\lh}$ then follows by Kraus Theorem \ref{Teo. Stines.} using normality of $\SS$ and of the amplification supermap $\PI_\uu : \ee \mapsto \ee \otimes \ii_\uu$. 
Finally, linearity and Theorem \ref{CB = span CP} extend the equality to all $\ee \in \cb{\mm,\lh}$. This concludes the proof of Proposition \ref{teo. centr. prel.}. \end{proof} We still need another easy auxiliary lemma before coming to the proof of Theorem \ref{teo. centr.}. \begin{theorem}{Lemma}\label{lemma:span} Let $\kk$, $\vv$ be Hilbert spaces, and let $\ss$ be a linear subspace of $\kk\otimes\vv$. The following facts are equivalent: \begin{itemize} \item[{\rm (i)}] $\vv = \spannochiuso{(u^\ast\otimes I_\vv) v \mid v\in\ss \, , \, u\in\kk}$; \item[{\rm (ii)}] the equality $\hh_0\otimes\vv = \spannochiuso{(A\otimes I_\vv)v \mid v\in\ss \, , \, A\in\elle{\kk,\hh_0}}$ holds for some Hilbert space $\hh_0$; \item[{\rm (iii)}] the equality $\hh\otimes \vv = \spannochiuso{(A\otimes I_\vv) v \mid v\in\ss \, , \, A\in\elle{\kk,\hh}}$ holds for all Hilbert spaces $\hh$. \end{itemize} \end{theorem} \begin{proof} Clearly, condition {\rm (i)} implies {\rm (ii)} by taking $\hh_0 \equiv \C$, and condition {\rm (iii)} implies {\rm (i)} by taking $\hh \equiv \C$. We now suppose that condition {\rm (ii)} holds, and prove {\rm (iii)}. If $\hh$ is a Hilbert space, let $\hhat = \spannochiuso{(A\otimes I_\vv) v \mid v\in\ss \, , \, A\in\elle{\kk,\hh}}$. Denote by $\hat{P}$ the orthogonal projection of $\hh\otimes \vv$ onto $\hhat$. Since $(\lh\otimes I_\vv)\hhat \subset \hhat$, the operator $\hat{P}$ commutes with $\lh\otimes I_\vv$, hence $\hat{P} = I_\hh\otimes P$ for some orthogonal projection $P$ of $\vv$ by Proposition 1.6 p.~184 in \cite{Tak}. Choose a Hilbert basis $\{ e_i \}_{i\in I}$ of $\hh_0$, and fix a vector $e\in\hh$ with $\no{e}=1$. Defining $E_i = ee_i^\ast \in \elle{\hh_0,\hh}$, we have $\sum_{i\in I} E_i^\ast E_i = I_{\hh_0}$, where the sum converges in the strong sense if $\# I = \infty$. 
It follows that, for all $A\in\elle{\kk,\hh_0}$ and $v\in\ss$, \begin{eqnarray*} (I_{\hh_0} \otimes P)(A\otimes I_\vv) v & = & \sum_{i\in I} (E_i^\ast \otimes I_\vv) (I_\hh \otimes P)(E_i A\otimes I_\vv) v \\ & = & \sum_{i\in I} (E_i^\ast \otimes I_\vv) \hat{P} (E_i A\otimes I_\vv) v \\ & = & \sum_{i\in I} (E_i^\ast \otimes I_\vv) (E_i A\otimes I_\vv) v \\ & = & (A\otimes I_\vv) v \, , \end{eqnarray*} where convergence of the sum is in the norm topology of $\hh_0\otimes\vv$. By density, we conclude $I_{\hh_0} \otimes P = I_{\hh_0\otimes\vv}$, hence $P=I_\vv$, i.e.~$\hhat = \hh\otimes\vv$. \end{proof} We are now in a position to prove Theorem \ref{teo. centr.}. \begin{proof}(Proof of Theorem \ref{teo. centr.}) The `if' part of the statement follows since $\SS = \CC_{\aa,\ff} \PI_\vv$, where $\aa\in\cpn{\elle{\kk_1}\votimes\lv,\elle{\kk_2}}$ is the quantum channel $\aa := V^\ast \odot_{\elle{\kk_1}\votimes\lv} V$, and $\CC_{\aa,\ff} : \cb{\mm_1\votimes\lv,\elle{\kk_1}\votimes\lv} \frecc \cb{\mm_2,\elle{\kk_2}}$ and $\PI_\vv : \cb{\mm_1,\elle{\kk_1}} \frecc \cb{\mm_1\votimes\lv,\elle{\kk_1}\votimes\lv}$ are the concatenation and amplification supermaps defined in Propositions \ref{compo} and \ref{ampli}, respectively. Conversely, suppose $\SS\in\cpqn{\mm_1, \elle{\kk_1} ; \mm_2, \elle{\kk_2}}$. We can assume without restriction that $\mm_1\subset\elle{\hh_1}$ for some Hilbert space $\hh_1$. Let $\ell^2$ denote the Hilbert space of square-summable sequences, and define an isometry $T$ as follows $$ T : \kk_1 \to \kk_1 \otimes \ell^2 \, , \qquad Tv = v \otimes e \, , $$ where $e \in \ell^2$ is a fixed vector with $\no{e} = 1$.
Then, define two deterministic supermaps \begin{align*} \TT & : \cb{\mm_1 \votimes \elle{\ell^2} , \elle{\kk_1\otimes\ell^2}} \frecc \cb{\mm_1, \elle{\kk_1}} \\ \tilde{\SS} & : \cb{\mm_1 \votimes \elle{\ell^2}, \elle{\kk_1\otimes\ell^2}} \frecc \cb{\mm_2, \elle{\kk_2}} \end{align*} given by $$ [\TT (\tilde{\ee})] (A) = T^\ast \tilde{\ee} (A \otimes I_{\ell^2}) T \quad \forall \tilde{\ee}\in \cb{\mm_1 \votimes \elle{\ell^2} , \elle{\kk_1\otimes\ell^2}} \, , \, \forall A\in\mm_1 $$ and $$ \tilde{\SS} = \SS \TT \, . $$ Since $\mm_1 \votimes \elle{\ell^2} \subset \elle{\hh_1\otimes \ell^2}$ and the Hilbert spaces $\hh_1\otimes \ell^2$ and $\kk_1\otimes \ell^2$ are isomorphic and infinite dimensional, we can apply Proposition \ref{teo. centr. prel.} to the deterministic supermap $\tilde{\SS}$ and obtain the existence of a Hilbert space $\uu$, an isometry $U:\kk_2\frecc \kk_1\otimes\ell^2\otimes\uu$ and a channel $\gg\in\cpn{\mm_2, \mm_1\votimes\elle{\ell^2}\votimes\lu}$ such that $$ [\tilde{\SS}(\tilde{\ee})] (A) = U^\ast [(\tilde{\ee} \otimes \ii_\uu) \gg(A)] U \quad \forall \tilde{\ee} \in \cb{\mm_1 \votimes \elle{\ell^2}, \elle{\kk_1\otimes\ell^2}} , \forall A\in \mm_2 . $$ On the other hand, we have $\TT (\ee\otimes\ii_{\ell^2}) = \ee$ for all $\ee\in\cb{\mm_1,\elle{\kk_1}}$ by directly inspecting the definition, hence $\tilde{\SS} (\ee\otimes\ii_{\ell^2}) = \SS(\ee)$. It follows that, for all $\ee\in\cb{\mm_1,\elle{\kk_1}}$ and $A\in\mm_2$, \begin{align*} [\SS (\ee)] (A) & = [\tilde{\SS} (\ee\otimes\ii_{\ell^2})] (A) \\ & = U^\ast [(\ee\otimes\ii_{\ell^2} \otimes \ii_\uu) \gg(A)] U \\ & = U^\ast [(\ee\otimes\ii_\ww) \gg(A)] U \, , \end{align*} where we set $\ww : = \ell^2\otimes \uu$. 
Now, let $\hat{\hh}_1$ be the following closed subspace of $\hh_1\otimes\ww$ \begin{equation}\label{eq:dens2} \hat{\hh}_1 = \spannochiuso{(E\otimes I_\ww)Uv \mid v\in\kk_2 \, , \, E\in\elle{\kk_1,\hh_1}} \, , \end{equation} and let $\hat{P}$ be the orthogonal projection of $\hh_1\otimes\ww$ onto $\hat{\hh}_1$. Since $(\elle{\hh_1}\otimes I_\ww) \hat{\hh}_1 \subset \hat{\hh}_1$, there is an orthogonal projection $P$ of $\ww$ such that $\hat{P} = I_{\hh_1}\otimes P$. Let $\vv=P\ww$, and define the operator $V:\kk_2\frecc \kk_1 \otimes \vv$ as $V := (I_{\kk_1} \otimes P) U$. From the fact that $\hat{P} = I_{\hh_1}\otimes P$, it clearly follows $\hat{P} (\hh_1\otimes \ww) = \hh_1\otimes \vv$ and $\hat{P} (E\otimes I_\ww) Uv = (E\otimes I_\vv) Vv$, so Eq.~\eqref{eq:dens2} can be turned into $$ \hh_1\otimes\vv = \spannochiuso{(E\otimes I_\vv)Vv \mid v\in\kk_2 \, , \, E\in\elle{\kk_1,\hh_1}} \, . $$ By Lemma \ref{lemma:span} (with $\ss \equiv V\kk_2$), we then have $$ \vv = \spannochiuso{(u^\ast\otimes I_\vv)Vv \mid u\in \kk_1 \, , \, v\in\kk_2} \, . $$ Define the quantum channel $\ff\in\cpn{\mm_2 , \mm_1\votimes \lv}$ given by $$ \ff(A) := (I_{\hh_1} \otimes P) \gg(A) (I_{\hh_1} \otimes P^\ast) =(\ii_{\mm_1} \otimes \mathcal{P}) \gg (A) \quad \forall A\in\mm_2 \, , $$ with $\mathcal{P} := P\odot_{\elle{\ww}} P^\ast\in\cpn{\elle{\ww},\elle{\vv}}$. Then, for $E\in\elle{\kk_1,\hh_1}$ and $\ee = E^\ast \odot_{\mm_1} E$, \begin{eqnarray*} [\SS(\ee)] (A) & = & U^\ast (E^\ast \otimes I_\ww) \gg(A) (E \otimes I_\ww) U \\ & = & U^\ast (E^\ast \otimes I_\ww) \hat{P}^\ast \hat{P} \gg(A) \hat{P}^\ast \hat{P} (E \otimes I_\ww) U \\ & = & V^\ast (E^\ast \otimes I_\vv) \ff(A) (E \otimes I_\vv) V \\ & = & V^\ast [(\ee \otimes \ii_\vv) \ff(A)] V \end{eqnarray*} for all $A\in\mm_2$. This equation extends to all $\ee\in\cb{\mm_1,\elle{\kk_1}}$ by the usual continuity and linearity argument. 
Finally, in order to show that $V$ is an isometry, pick a quantum channel $\ee\in\cpn{\mm_1,\elle{\kk_1}}$, and, since $\SS$ is deterministic, $$ V^\ast V = V^\ast [(\ee \otimes \ii_\vv) \ff(I_{\mm_2})] V = [\SS (\ee)] (I_{\mm_2}) = I_{\kk_2} \, . $$ This concludes the proof of Theorem \ref{teo. centr.}. \end{proof} We end the section with the proof of Proposition \ref{prop: minimality}. \begin{proof}(Proof of Proposition \ref{prop: minimality}) Define the linear space $$ \vv_0 = \spanno{(u^\ast\otimes I_\vv)Vv \mid u\in\kk_1 \, , \, v\in\kk_2} \, , $$ and let $W:\vv_0 \frecc \vv'$ be the linear operator given by $$ W(u^\ast\otimes I_\vv)Vv := (u^\ast\otimes I_{\vv'})V'v \, . $$ We claim that $W$ is well defined and extends to an isometry $W:\vv \frecc \vv'$. As usual, we can assume with no restriction $\mm_1\subset\elle{\hh_1}$. Pick then a vector $e\in\hh_1$ with $\no{e}=1$. For all $u,w\in\kk_1$, define $$ \ee_{u,w} := (ue^\ast) \odot_{\mm_1} (ew^\ast) \in \cb{\mm_1,\elle{\kk_1}} \, . $$ If $\phi\in\vv_0$, with $\phi = \sum_{i=1}^n (u_i^\ast\otimes I_\vv)Vv_i$, we have \begin{eqnarray*} \no{W\phi}^2 & = & \sum_{i,j} \scal{v_j}{V^{\prime\ast}(u_j u_i^\ast \otimes I_{\vv'}) V' v_i} \\ & = & \sum_{i,j} \scal{v_j}{[\SS(\ee_{u_j,u_i})] (I_{\mm_2}) v_i} \\ & = & \sum_{i,j} \scal{v_j}{V^\ast (u_j u_i^\ast \otimes I_{\vv}) V v_i} \\ & = & \no{\phi}^2 \, . \end{eqnarray*} Thus, $W$ is well defined and isometric, and extends to an isometry $W:\vv \frecc \vv'$ by density of $\vv_0$ in $\vv$. For all $u\in\kk_1$, $v\in\kk_2$ and $w\in\vv'$, we have \begin{align*} \scal{u\otimes w}{(I_{\kk_1} \otimes W)Vv} & = \scal{w}{(u^\ast\otimes I_{\vv'})(I_{\kk_1} \otimes W)Vv} \\ & = \scal{w}{W(u^\ast \otimes I_\vv)Vv} \\ & = \scal{w}{(u^\ast \otimes I_{\vv'})V'v} \\ & = \scal{u\otimes w}{V'v} \, , \end{align*} hence $(I_{\kk_1} \otimes W)V = V'$. 
If $E,F\in\elle{\kk_1,\hh_1}$ and $v,w\in\kk_2$, then, for all $A\in\mm_2$, \begin{align*} & \scal{(E\otimes I_\vv)Vv}{\ff(A) (F\otimes I_\vv)Vw} = \scal{v}{[\SS(E^\ast \odot_{\mm_1} F)] (A) w} \\ & \qquad \qquad \qquad = \scal{(E\otimes I_{\vv'})V'v}{\ff'(A) (F\otimes I_{\vv'})V'w} \\ & \qquad \qquad \qquad = \scal{(E\otimes I_\vv)Vv}{(I_{\hh_1} \otimes W^\ast) \ff'(A) (I_{\hh_1} \otimes W) (F\otimes I_\vv)Vw} \, . \end{align*} By the minimality condition \eqref{dens. in hhat1} and Lemma \ref{lemma:span}, we have $$ \hh_1\otimes \vv = \spannochiuso{(E\otimes I_\vv)Vv \mid v\in\kk_2 \, , \, E\in\elle{\kk_1,\hh_1}} \, , $$ hence the last equation shows that $\ff(A) = (I_{\mm_1} \otimes W^\ast) \ff'(A) (I_{\mm_1} \otimes W)$. We finally come to uniqueness of $W$. Suppose that $U : \vv\frecc \vv'$ is another isometry such that $(I_{\kk_1} \otimes U)V = V'$. Then, for all $u\in\kk_1$, $v\in\kk_2$ and $w\in\vv'$, \begin{eqnarray*} \scal{w}{U(u^\ast\otimes I_\vv) V v} & = & \scal{u\otimes w}{(I_{\kk_1} \otimes U) V v} \\ & = & \scal{u\otimes w}{V' v} \\ & = & \scal{w}{(u^\ast\otimes I_{\vv'}) V' v} \\ & = & \scal{w}{W (u^\ast\otimes I_\vv) V v} \, , \end{eqnarray*} i.e.~$U(u^\ast\otimes I_\vv) Vv = W(u^\ast\otimes I_\vv) Vv$. By the minimality condition \eqref{dens. in hhat1}, $U=W$. \end{proof} \subsection{An application of Theorem \ref{teo. centr.}: transforming a quantum measurement into a quantum channel}\label{subsect:meastochan} For simplicity we consider here quantum measurements with a countable set of outcomes, denoted by $X$. In the algebraic language, a measurement on the quantum system with Hilbert space $\kk_1$ and with outcomes in $X$ is described by a quantum channel $\ee \in \cpn{\mm_1, \elle {\kk_1}}$, where $\mm_1 \equiv \ell^\infty (X)$ is the von Neumann algebra of the bounded complex functions (i.e.~sequences) on $X$ with uniform norm $\no{f}_{\infty} : = \sup_{i \in X} |f_i|$.
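Before describing the general form of such channels, it may help to record the simplest instance (a standard two-outcome qubit example added here for illustration; the basis $\{u_0,u_1\}$ is our own choice, not part of the original text):

```latex
Let $X = \{0,1\}$, $\kk_1 = \C^2$, and let $\{u_0 , u_1\}$ be an orthonormal
basis of $\C^2$. The projective measurement in this basis is the quantum
channel $\ee \in \cpn{\ell^\infty (X) , \elle{\C^2}}$ given by
$$
\ee (f) = f_0 \, u_0 u_0^\ast + f_1 \, u_1 u_1^\ast
\quad \forall f \in \ell^\infty (X) \, ,
$$
which is unital because $u_0 u_0^\ast + u_1 u_1^\ast = I_{\C^2}$; on a state
$\rho$, the outcome probabilities are $p_i = \trt{\rho \, u_i u_i^\ast}$.
```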
The channel $\ee$ maps the function $f \in \ell^\infty (X)$ into the operator \begin{equation}\label{eq:CP1=POVM} \ee (f) = \sum_{i \in X} f_i \, P_i \in \elle{\kk_1} \, , \end{equation} where each $P_i$ is a non-negative operator in $\elle {\kk_1}$ and $\sum_{i \in X} P_i = I_{\kk_1}$. Note that the map $i\mapsto P_i$ is a normalized {\em positive operator valued measure (POVM)} based on the discrete space $X$ and with values in $\elle{\kk_1}$. Actually, Eq.~\eqref{eq:CP1=POVM} allows us to identify the convex set of measurements $\cpn{\ell^\infty(X), \elle {\kk_1}}$ with the set of {\em all} normalized $\elle{\kk_1}$-valued POVMs on $X$.\footnote{Indeed, by commutativity of $\ell^\infty (X)$ the set $\cpn{\ell^\infty (X), \elle{\kk_1}}$ coincides with the set of all normalized weak*-continuous {\em positive} maps from $\ell^\infty (X)$ into $\elle{\kk_1}$ (Theorem 3.11 in \cite{Paul}). The latter set is just the set of all normalized $\elle{\kk_1}$-valued POVMs on $X$, the identification being the one given in Eq.~\eqref{eq:CP1=POVM}.} The probability of obtaining the outcome $i \in X$ when the measurement is performed on a system prepared in the quantum state $\rho\in \trcl{\kk_1}$ ($\rho \ge0$, $\trt{\rho}=1$) is given by the Born rule \begin{align*} p_i = \trt{\rho P_i} \, , \end{align*} and the expectation value of the function $f \in \ell^\infty (X)$ with respect to the probability distribution $p$ is given by \begin{align*} \mathbb E_{p} (f) := \sum_{i\in X} p_i f_i = \trq{\rho \ee (f)} \, . \end{align*} The above equation allows us to interpret the channel $\ee$ as an \emph{operator valued expectation} (see e.g.~\cite{dirk}). Now, consider the deterministic supermaps sending quantum measurements in the set $\cp {\ell^\infty (X), \elle{\kk_1} }$ to quantum operations in $\cp {\mm_2, \elle {\kk_2}}$, where $\mm_2 \equiv \elle {\hh_2} $. Our dilation Theorem \ref{teo. 
centr.} (in the predual form of Remark \ref{rem:teo.centr.pred.}) states that every deterministic supermap $\SS : \cb{\ell^\infty (X) , \elle{\kk_1}} \frecc \cb{\elle{\hh_2} , \elle{\kk_2}}$ is of the form \begin{equation}\label{eq:non so} [\SS (\ee)]_\ast (\rho) = \ff_\ast [(\ee\otimes \ii_{\vv})_\ast (V\rho V^\ast)] \quad \forall \ee\in\cb{\ell^\infty (X) , \elle{\kk_1}} \, , \, \forall \rho\in\trcl{\kk_2} \, , \end{equation} where $\vv$ is a Hilbert space, $V:\kk_2\frecc \kk_1 \otimes \vv$ an isometry and $\ff\in\cpn{\elle{\hh_2},\ell^\infty (X)\votimes\lv}$ a quantum channel. In our case, we have the identification $$ \ell^\infty (X) \votimes \lv \simeq \ell^\infty (X ; \lv) \, , $$ where $\ell^\infty (X ; \lv)$ is the von Neumann algebra of the bounded $\lv$-valued functions on $X$. Its predual space is $$ (\ell^\infty (X) \votimes \lv)_\ast \simeq \ell^1 (X ; \trcl{\vv}) \, , $$ i.e.~the space of norm-summable sequences with index in $X$ and values in the Banach space of the trace class operators on $\vv$ (see Theorem 1.22.13 in \cite{Sakai}). In the Schr\"odinger picture, the channel $\ff_\ast$ can be realized by first reading the classical information carried by the system with algebra $\ell^\infty (X)$ and, conditionally on the value $i \in X$, by performing the quantum channel $\ff_{i\,\ast} : \trcl{\vv} \frecc \trcl{\hh_2}$ given by $$ \ff_{i\,\ast} (\sigma) = \ff_\ast (\delta_i\, \sigma) \quad \forall\sigma \in \trcl{\vv} \, , $$ where $\delta_i\, \sigma\in\ell^1 (X ; \trcl{\vv})$ is the sequence $(\delta_i\, \sigma)_k = \delta_{ik} \, \sigma \ \forall k\in X$, $\delta_{ik}$ being the Kronecker delta. Hence, Eq.~\eqref{eq:non so} can be rewritten as $$ [\SS (\ee)]_\ast (\rho) = \sum_{i\in X} \ff_{i\,\ast} [(\ee\otimes \ii_{\vv})_\ast (V\rho V^\ast)_i] \, . $$ In other words, Theorem \ref{teo.
centr.} states that the most general transformation of a quantum measurement on $\kk_1$ into a quantum channel from states on $\kk_2$ to states on $\hh_2$ can be realized by \begin{enumerate} \item applying an invertible dynamics (the isometry $V$) that transforms the input system $\kk_2$ into the composite system $\kk_1 \otimes \vv$, where $\vv$ is an ancillary system; \item performing the given measurement ($\ee_*$, in the predual picture) on $\kk_1$, thus obtaining the outcome $i\in X$; \item conditionally on the outcome $i \in X$, applying a physical transformation (the channel $\ff_{i\, \ast}$) on the ancillary system $\vv$, thus converting it into the output system $\hh_2$. \end{enumerate} \section{Radon-Nikodym derivatives of supermaps}\label{sez. Radon} The dilation theorem for deterministic supermaps will be generalized here to probabilistic supermaps. In this case, the following theorem provides an analog of the Radon-Nikodym theorem for CP maps (compare with \cite{Arveson,BelStasz}, and see also \cite{Raginsky} for the particular case of quantum operations). \begin{theorem}{Theorem}\label{teo. Radon-Nicodym} {\rm (Radon-Nikodym theorem for supermaps)} Suppose that $\SS$ is a deterministic supermap in $\cpqn{\mm_1,\elle{\kk_1} ; \mm_2,\elle{\kk_2}}$ and let $(\vv , \, V, \, \ff)$ be its minimal dilation. If $\TT \in\cpq{\mm_1,\elle{\kk_1} ; \mm_2,\elle{\kk_2}}$ is such that $\TT \ll \SS$, then there exists a unique element $\gg\in\cp{\mm_2, \mm_1\votimes \lv}$ with $\gg \preceq \ff$ and such that \begin{equation}\label{radnic} \left[\TT (\ee)\right] (A)= V^\ast [(\ee \otimes \ii_\vv) \gg(A)] V \quad \forall \ee \in \cb{\mm_1,\elle{\kk_1}} \, , \, \forall A\in\mm_2 \, . \end{equation} \end{theorem} \begin{proof} Without loss of generality, let us suppose $\mm_1 \subset \elle{\hh_1}$ for some suitable Hilbert space $\hh_1$. Hence, we can regard the quantum channel $\ff$ as an element in $\cpn{\mm_2, \elle{\hh_1 \otimes \vv}}$.
Consider the Stinespring dilation of the channel $\ff $, given by $$ \ff(A) = U^\ast \pi(A) U \quad \forall A\in\mm_2, $$ where $U: \hh_1\otimes\vv \frecc \uu$ is an isometry, $\uu$ is a separable Hilbert space, and $\pi : \mm_2 \frecc \elle{\uu}$ is a normal unital $\ast$-homomorphism (see e.g.~Theorem 2.1 p.~137 of \cite{QTOS76}). In particular, we can take the minimal Stinespring dilation, which satisfies the relation $$ \uu = \spannochiuso{\pi(A) U u \mid A\in\mm_2 \, , \, u\in\hh_1\otimes\vv} \, . $$ Let us define the dense subset $\uu_0 \subset \uu$ as $$ \uu_0 :=\spanno{\pi(A) U u \mid A \in \mm_2 \, , \, u \in \hh_0}, $$ where $\hh_0$ is the following dense subset of $\hh_1\otimes\vv$ \begin{equation}\label{eq:ins.denso} \hh_0 := \spanno{(E\otimes I_\vv) Vv \mid E\in\elle{\kk_1, \hh_1} \, , \, v\in\kk_2} \end{equation} (see Eq.~\eqref{dens. in hhat1} and Lemma \ref{lemma:span} for the proof that $\hh_0$ is dense in $\hh_1\otimes\vv$). We now introduce a positive sesquilinear form $\scal{\cdot}{\cdot}_0$ on $\uu_0$ which, as we will shortly show, is bounded and can thus be extended by continuity to a form on $\uu$. If $\phi = \sum_{i=1}^n \pi(A_i) U (E_i\otimes I_\vv) Vv_i$ and $\psi = \sum_{j=1}^m \pi(B_j) U (F_j\otimes I_\vv) Vw_j$ are two generic elements in $\uu_0$, define \begin{align*} \scal{\phi}{\psi}_0 & := \sum_{i,j} \scal{v_i}{[\TT ( E_i^\ast \odot_{\mm_1} F_j)] (A_i^\ast B_j) w_j} \\ & = \scal{\sum_i E_i\otimes A_i \otimes v_i}{\sum_j F_j\otimes B_j \otimes w_j}_\TT \, . \end{align*} We claim that $\scal{\cdot}{\cdot}_0$ is a well defined positive and bounded sesquilinear form on $\uu_0$. In order to show this, it is enough to prove that $$ 0\leq \scal{\phi}{\phi}_0 \leq \no{\phi}^2 \quad \forall \phi\in\uu_0 \, . $$ Indeed, the first inequality is clear from Lemma \ref{prop. sulla forma ass.}. For the second, again by Lemma \ref{prop.
sulla forma ass.} we have, for $\phi = \sum_{i=1}^n \pi(A_i) U (E_i\otimes I_\vv) Vv_i$, \begin{align*} \scal{\phi}{\phi}_0 & = \scal{\sum_i E_i\otimes A_i \otimes v_i}{\sum_j E_j\otimes A_j \otimes v_j}_\TT \\ & \leq \scal{\sum_i E_i\otimes A_i \otimes v_i}{\sum_j E_j\otimes A_j \otimes v_j}_\SS \\ & = \sum_{i,j} \scal{v_i}{[\SS(E_i^\ast \odot_{\mm_1} E_j)](A_i^\ast A_j) v_j} \\ & = \sum_{i,j} \scal{v_i}{V^\ast (E_i^\ast \otimes I_\vv) \ff (A_i^\ast A_j) (E_j \otimes I_\vv) V v_j} \\ & = \sum_{i,j} \scal{v_i}{V^\ast (E_i^\ast \otimes I_\vv) U^\ast \pi(A_i^\ast A_j) U (E_j \otimes I_\vv) V v_j} \\ & = \scal{\sum_i \pi(A_i)U(E_i \otimes I_\vv) Vv_i}{\sum_j \pi(A_j) U (E_j \otimes I_\vv) V v_j} \\ & = \no{\phi}^2 \, . \end{align*} This concludes the proof of our claim. We continue to denote by $\scal{\cdot}{\cdot}_0$ the previous form extended by continuity to the whole space $\uu$. Then, there exists a bounded operator $C\in\elle{\uu}$, with $0\leq C \leq I_\uu$, such that $$ \scal{\phi}{\psi}_0 = \scal{\phi}{C\psi} \quad \forall \phi ,\psi\in\uu \, . $$ Note that $C$ commutes with the von Neumann algebra $\mm_\pi : = \pi (\mm_2)$.\footnote{The linear space $\pi (\mm_2)$ is a von Neumann algebra in $\elle{\uu}$ by Proposition 3.12 p.~136 in \cite{Tak}.} Indeed, if $\phi= \sum_{i=1}^n \pi(A_i) U (E_i\otimes I_\vv) V v_i $ is a generic element in $\uu_0$, then, for all $A\in\mm_2$, \begin{align*} \scal{\phi}{C\pi(A)\phi} & = \scal{\phi}{\pi(A)\phi}_0 \\ & = \sum_{i,j} \scal{v_i}{[\TT(E_i^\ast \odot_{\mm_1} E_j)] (A_i^\ast A A_j) v_j} \\ & = \scal{\pi(A^\ast) \phi}{\phi}_0 \\ & = \scal{\pi(A)^\ast \phi}{C\phi}\\ & = \scal{ \phi}{\pi(A) C\phi} . \end{align*} By density and the polarization identity we then obtain $C\pi(A) = \pi(A) C $ for all $A \in \mm_2$. We are now ready to define the map $\gg\in\cp{\mm_2,\elle{\hh_1\otimes\vv}}$ as $$ \gg := (U^\ast \odot_{\elle{\uu}} U) (C^{\frac12} \odot_{\mm_\pi} C^{\frac12}) \, \pi \, . 
$$ For all $E,F\in\elle{\kk_1,\hh_1}$, $A\in\mm_2$ and $v,w\in\kk_2$, we have \begin{align*} \scal{v}{[\TT(E^\ast\odot_{\mm_1} F)](A) w} & = \scal{U(E\otimes I_\vv)Vv}{\pi(A)U(F\otimes I_\vv)Vw}_0 \\ & = \scal{U(E\otimes I_\vv)Vv}{C\pi(A)U(F\otimes I_\vv)Vw} \\ & = \scal{v}{V^\ast (E^\ast\otimes I_\vv) U^\ast C^{\frac 12} \pi(A) C^{\frac 12} U(F\otimes I_\vv)Vw} \\ & = \scal{v}{V^\ast (E^\ast\otimes I_\vv) \gg (A) (F\otimes I_\vv)Vw} \, . \end{align*} Since $v,w \in\kk_2$ are arbitrary, we just proved the relation \begin{equation}\label{punto chiave} [\TT(E^\ast\odot_{\mm_1} F)](A) = V^\ast (E^\ast\otimes I_\vv) \gg(A)(F\otimes I_\vv) V \end{equation} for all $E,F\in \elle{\kk_1,\hh_1}$ and $A \in \mm_2$. Eq.~\eqref{punto chiave} allows us to prove that the range of the map $\gg$ is contained in $\mm_1\votimes\lv$, i.e., that for all $A\in\mm_2$ we have $\gg (A) \in \mm_1\votimes\lv$. To prove this, it is enough to show that $\gg(A)$ commutes with $\left( \mm_1\votimes\lv\right)^\prime = \mm_1^\prime \votimes \C I_\vv$. Indeed, for every $B\in\mm_1'$ and for a generic element $u= \sum_{i=1}^n (E_i\otimes I_\vv) V v_i \in \hh_0$ we have \begin{align*} \scal{u} { \gg(A) (B\otimes I_\vv) u } &= \sum_{i,j} \scal{(E_i\otimes I_\vv) V v_i}{\gg(A)(B\otimes I_\vv)(E_j\otimes I_\vv) V v_j}\\ &= \sum_{i,j} \scal{ v_i}{ V^* (E_i^*\otimes I_\vv)\gg(A)(B E_j\otimes I_\vv) V v_j}\\ & = \sum_{i,j} \scal{v_i}{[\TT(E_i^\ast\odot_{\mm_1} BE_j)](A) v_j} \\ &=\sum_{i,j} \scal{v_i}{[\TT(E_i^\ast B\odot_{\mm_1} E_j)](A) v_j} \\ &= \sum_{i,j} \scal{ v_i}{ V^* (E_i^* B\otimes I_\vv)\gg(A)(E_j\otimes I_\vv) V v_j}\\ &= \sum_{i,j}\scal{(E_i\otimes I_\vv) V v_i}{(B\otimes I_\vv)\gg(A)(E_j\otimes I_\vv) V v_j}\\ &= \scal{u}{(B\otimes I_\vv) \gg(A)u} \, , \end{align*} where the equality $E_i^\ast\odot_{\mm_1} BE_j = E_i^\ast B \odot_{\mm_1} E_j$ is item (3) of Proposition \ref{prop:prop. di odot}. 
By the polarization identity and by density of $\hh_0$ in $\hh_1\otimes\vv$ we then obtain $\gg(A) (B\otimes I_\vv) = (B\otimes I_\vv) \gg(A)$. Moreover, from Eq.~\eqref{punto chiave} the desired relation Eq.~\eqref{radnic} easily follows: indeed, Eq.~\eqref{punto chiave} proves Eq.~\eqref{radnic} for all $\ee\in\elle{\hh_1,\kk_1} \odot_{\mm_1} \elle{\kk_1,\hh_1}$; Eq.~\eqref{radnic} for all $\ee\in\cb{\mm_1,\elle{\kk_1}}$ then follows by linearity and normality of $\TT$ and Theorems \ref{CB = span CP} and \ref{Teo. Stines.}. Note that Eq.~\eqref{radnic} determines $\gg$ uniquely: if $\gg' \in \cp{ \mm_2, \mm_1 \votimes \lv }$ is a map satisfying Eq.~\eqref{radnic}, then for a generic element $u \in \hh_0$, written as $u= \sum_{i=1}^n (E_i \otimes I_{\vv}) V v_i $, we must have \begin{align*} \scal{u} { \gg' (A) u} & = \sum_{i,j} \scal{v_i}{ V^\ast (E_i^\ast\otimes I_\vv) \gg'(A)(E_j\otimes I_\vv) V v_j} \\ & = \sum_{i,j} \scal{v_i}{ \left[\TT (E_i^\ast \odot_{\mm_1} E_j)\right] (A) v_j} \\ &= \sum_{i,j} \scal{v_i}{ V^\ast (E_i^\ast\otimes I_\vv) \gg(A)(E_j\otimes I_\vv) V v_j} \\ & = \scal{u} { \gg (A) u}, \end{align*} which, by the polarization identity and by density of $\hh_0$, implies $\gg'(A) = \gg(A)$ for every $A \in \mm_2$, and therefore $\gg' = \gg$. To conclude we prove that the map $\gg$ has the property $\gg\preceq \ff$: \begin{align*} \gg &= (U^\ast \odot_{\elle{\uu}} U) (C^{\frac12} \odot_{\mm_\pi} C^{\frac12}) \, \pi \\ & \preceq (U^\ast \odot_{\elle{\uu}} U) \, \pi \\ & = \ff, \end{align*} the inequality $C^{\frac12} \odot_{\mm_\pi} C^{\frac12} \preceq \ii_{\mm_\pi,\,\elle{\uu}}$ being item (4) of Proposition \ref{prop:prop. di odot}. \end{proof} \begin{definition}{Definition} In Theorem \ref{teo. Radon-Nicodym}, the CP map $\gg\in\cp{\mm_2, \mm_1\votimes \lv}$ defined by Eq.~\eqref{radnic} is the {\em Radon-Nikodym derivative} of the supermap $\TT$ with respect to $\SS$.
\end{definition} \begin{definition}{Remark} Note that the validity of Theorem \ref{teo. Radon-Nicodym} can be trivially extended to quantum supermaps that are bounded by positive multiples of deterministic supermaps, i.e.~to supermaps $\TT$ such that $\TT \ll \lambda \SS$ for some positive $\lambda \in \R$ and some deterministic supermap $\SS$. \end{definition} \section{Superinstruments}\label{sez. superstr.} Here we apply the Radon-Nikodym theorem proven in the previous section to the study of \emph{quantum superinstruments}. Quantum superinstruments describe measurement processes where the measured object is not a quantum system, as in ordinary instruments, but rather a quantum device. While ordinary quantum instruments are defined as probability measures with values in the set of quantum operations (see \cite{DavLew}, and also \cite{QTOS76} for a more complete exposition), quantum superinstruments are defined as probability measures with values in the set of quantum supermaps. \begin{definition}{Definition} Let $\Omega$ be a measurable space with $\sigma$-algebra $\sigma (\Omega)$ and let $\SS$ be a map from $\sigma (\Omega)$ to $\cpq{\mm_1,\elle{\kk_1}; \mm_2, \elle{\kk_2}}$, sending the measurable set $B \in \sigma (\Omega)$ to the supermap $\SS_B \in \cpq{\mm_1,\elle{\kk_1}; \mm_2, \elle{\kk_2}}$. We say that $\SS$ is a {\em quantum superinstrument} if it satisfies the following properties: \begin{enumerate} \item[{\rm (i)}] $\SS_\Omega$ is deterministic; \item[{\rm (ii)}] if $n\in\N\cup\{\infty\}$ and $B = \bigcup_{i=1}^n B_i$ with $B_i \cap B_j = \emptyset$ for $i\neq j$, then $\SS_B = \sum_{i=1}^n \SS_{B_i}$, where if $n = \infty$ convergence of the series is understood in the following sense: $$ [\SS_B (\ee)] (A) = \wklim_k \sum_{i=1}^k [\SS_{B_i} (\ee)] (A) \quad \forall \ee\in\cb{\mm_1,\elle{\kk_1}} , \forall A\in\mm_2 . 
$$ \end{enumerate} \end{definition} We will shortly see that every quantum superinstrument is associated with an ordinary quantum instrument in a unique way. Before giving the precise statement, we recall the notion of quantum instrument, which is central in the statistical description of quantum measurements: \begin{definition}{Definition} A map $\jj:\sigma (\Omega) \frecc \cp{\mm,\nn}$ is a \emph{quantum instrument} if it satisfies the following properties: \begin{enumerate} \item[{\rm (i)}] $\jj_\Omega$ is a quantum channel; \item[{\rm (ii)}] if $n\in\N\cup\{\infty\}$ and $B = \bigcup_{i=1}^n B_i$ with $B_i \cap B_j = \emptyset$ for $i\neq j$, then $\jj_B = \sum_{i=1}^n \jj_{B_i}$, where if $n = \infty$ convergence of the series is understood in the following sense: $$ \jj_B (A) = \wklim_k \sum_{i=1}^k \jj_{B_i} (A) \quad \forall A\in\mm \, . $$ \end{enumerate} \end{definition} We then have the following dilation theorem for quantum superinstruments. \begin{theorem}{Theorem}\label{Osawa} {\rm (Dilation of quantum superinstruments)} Suppose that $\SS: \sigma (\Omega) \frecc \cpq{\mm_1,\elle{\kk_1}; \mm_2, \elle{\kk_2}}$ is a quantum superinstrument and let $(\vv, \, V, \, \ff)$ be the minimal dilation of the deterministic supermap $\SS_\Omega $. Then there exists a unique quantum instrument $\jj: \sigma (\Omega) \frecc \cp{\mm_2,\mm_1 \votimes \lv}$ such that \begin{equation}\label{postselect} [\SS_B (\ee)] (A) = V^\ast [(\ee\otimes \ii_\vv) \jj_B (A)] V \quad \forall \ee\in\cb{\mm_1,\elle{\kk_1}} \, , \, \forall A\in\mm_2 \end{equation} for all $B\in\sigma(\Omega)$. \end{theorem} \begin{proof} Let $B \in \sigma(\Omega)$ be an arbitrary measurable set. By additivity of the measure $\SS$, we have $\SS_{\Omega} = \SS_B + \SS_{\Omega\setminus B}$, that is, $\SS_B \ll \SS_\Omega$. Let $(\vv,V,\ff)$ be the minimal dilation of $\SS_\Omega$. By Theorem \ref{teo.
Radon-Nicodym}, Eq.~\eqref{postselect} holds for some uniquely defined $\jj_B \in \cp{\mm_2,\mm_1\votimes\elle{\vv}}$, with $\jj_B \preceq \ff$. Clearly, for $B = \Omega$ one has $\jj_{\Omega} = \ff$, hence $\jj_{\Omega}$ is a quantum channel. Now, suppose that $n\in\N\cup\{\infty\}$ and $B = \bigcup_{i=1}^{n} B_i$, with $B_i \cap B_j = \emptyset$ for $i\neq j$. If $n\in\N$, the equality $\jj_{\cup_{i=1}^{n} B_i} = \sum_{i=1}^n \jj_{B_i}$ easily follows by additivity of the superinstrument $\SS$ and uniqueness of the Radon-Nikodym derivative. If $n=\infty$, then the sequence of CP maps $\gg_n =\sum_{i=1}^n \jj_{B_i}$ is CP-increasing and CP-bounded, since $\gg_n=\jj_{\cup_{i=1}^{n} B_i}\preceq \ff$. Therefore, we have $\gg_n \Uparrow \gg_\infty$ for some $\gg_\infty \in \cp{\mm_2,\mm_1\votimes\elle{\vv}}$. We prove that $\gg_{\infty} = \jj_{B}$. Indeed, for every $\ee \in \cb{\mm_1,\elle{\kk_1}} $ and $A\in\mm_2$, by Proposition \ref{Teo. Berb. 2} we have \begin{align*} V^\ast [(\ee\otimes \ii_{\vv}) \gg_{\infty} (A)] V & = \wklim_n \sum_{i=1}^n V^\ast [(\ee\otimes \ii_{\vv}) \jj_{B_i} (A)] V \\ & = \wklim_n \sum_{i=1}^n [\SS_{B_i} (\ee)] (A) \\ & = [\SS_B (\ee)] (A) \, . \end{align*} By uniqueness of the Radon-Nikodym derivative we then conclude $\gg_{\infty} = \jj_B$. \end{proof} The physical interpretation of the dilation of quantum superinstruments is clear in the Schr\"odinger picture. Indeed, taking the predual of Eq.~\eqref{postselect}, we have for all $\rho \in \trcl{\kk_2}$ and $\ee \in \cb{\mm_1,\elle{\kk_1}}$ $$ [\SS_B (\ee)]_\ast (\rho) = \jj_{B\, \ast} \left[(\ee \otimes \ii_\vv)_\ast ( V \rho V^\ast) \right] \, . 
$$ This means that the system with Hilbert space $\kk_2$ (initially prepared in the quantum state $\rho$) undergoes an invertible evolution, given by the isometry $V$, that transforms it into the composite system with Hilbert space $ \kk_1 \otimes \vv $; then the system with Hilbert space $\kk_1$ is transformed by means of the quantum channel $\ee_*$, while nothing is done on the ancilla; finally, the quantum measurement described by the instrument $\jj_*$ is performed jointly on the system and ancilla. \subsection{Application of Theorem \ref{Osawa}: Measuring a measurement}\label{subsect:measmeas} Suppose that we want to characterize some property of a quantum measuring device on a system with Hilbert space $\kk_1$: For example, we may have a device performing a projective measurement in an unknown orthonormal basis, and we may want to find out the basis. In this case, the set of possible answers to our question is the set of all orthonormal bases. In a more abstract setting, the possible outcomes will constitute a measure space $\Omega$ with $\sigma$-algebra $\sigma (\Omega)$. This also includes the case of full tomography of the measuring device \cite{lor,fiur,soto,eisert}, in which the outcomes in $\Omega$ label all possible measuring devices. The mathematical object describing our task will be a superinstrument taking the given measurement as input and yielding an outcome in the set $B \in \sigma(\Omega)$ with some probability. In the algebraic framework, we will describe the input measurement as a quantum channel $\ee \in \cp { \mm_1, \elle{\kk_1}}$, where $\mm_1 \equiv \ell^\infty (X)$ is the algebra of the complex bounded functions on $X$ (see the discussion in Section \ref{subsect:meastochan}). \subsubsection{Outcome statistics for a measurement on a measuring device} If we only care about the outcomes in $\Omega$ and their statistical distribution, then the output of the superinstrument will be trivial, that is, $ \mm_2 \equiv \elle {\kk_2} \equiv \C$.
In this case, Theorem \ref{Osawa} states that every superinstrument $\SS: \sigma (\Omega) \frecc \cpq{\ell^\infty(X),\elle{\kk_1}; \mathbb C, \mathbb C}$ will be of the form $$ \SS_B (\ee) = \scal{v}{(\ee\otimes \ii_\vv) (\jj_B) v} \quad \forall \ee\in\cb{\ell^\infty (X) , \elle{\kk_1}} \, , \, B \in \sigma(\Omega) \, , $$ where $\vv$ is an ancillary Hilbert space, $v\in\kk_1\otimes \vv$ is a unit vector, and $\jj: \sigma (\Omega) \frecc \cp{\C,\ell^\infty (X) \votimes \lv} \simeq \ell^\infty (X;\lv)_+$ is just a weak*-countably additive positive measure on $\Omega$ with values in $\ell^\infty (X;\lv)$, satisfying $(\jj_\Omega)_i = I_\vv \ \forall i\in X$. Note that in this case each supermap $\SS_B$ is actually a linear map $\SS_B : \cb{\ell^\infty (X) , \elle{\kk_1}} \frecc \C$, and, if $\ee$ is a quantum channel, the map $B\mapsto \SS_B (\ee)$ is a probability measure on $\Omega$. In the Schr\"odinger picture \begin{equation}\label{eq:non so2} \SS_B (\ee) = [\jj_{B\,\ast} (\ee\otimes \ii_\vv)_\ast] (\omega_v) \, , \end{equation} where $\omega_v$ is the state in $\trcl{\kk_1\otimes\vv}$ given by $\omega_v (A) := \scal{v}{Av} \ \forall A\in\elle{\kk_1\otimes\vv}$. Note that $\jj_{B\,\ast} : \ell^1 (X;\trcl{\vv}) \frecc \C$. Thus, if for all $i\in X$ we define the following normalized $\lv$-valued POVM on $\Omega$: $$ Q_i : \sigma (\Omega) \frecc \lv \, , \qquad Q_{i, B} := (\jj_B)_i \, , $$ then we have $$ \jj_{B\,\ast} (\delta_i \, \sigma) = \trt{\sigma Q_{i,B}} \quad \forall \sigma\in \trcl{\vv} $$ and Eq.~\eqref{eq:non so2} becomes $$ \SS_B (\ee) = \sum_{i\in X} \trq{Q_{i,B} (\ee\otimes \ii_\vv)_\ast (\omega_v)_i} \, , $$ which shows that, conditionally on the outcome $i\in X$, we just perform a measurement with POVM $Q_i$ on the states in $\trcl{\vv}$. 
In other words, Theorem \ref{Osawa} states that the most general way to extract information about a measuring device on system $\kk_1$ consists in \begin{enumerate} \item preparing a pure bipartite state $\omega_v$ in $\kk_1 \otimes \vv$; \item performing the given measurement $\ee$ on $\kk_1$, thus obtaining the outcome $i\in X$; \item conditionally on the outcome $i \in X$, performing a measurement (the POVM $Q_i$) on the ancillary system $\vv$, thus obtaining an outcome in $\Omega$. \end{enumerate} Note that the choice of the POVM $Q_i$ depends in general on the outcome of the first measurement $\ee$. \subsubsection{Transformations of measuring devices induced by a higher-order measurement} In a quantum measurement it is often interesting to consider not only the statistics of the outcomes, but also how the measured object changes due to the measurement process. For example, in the case of ordinary quantum measurements, one is interested in studying the state reduction due to the occurrence of particular measurement outcomes. We can ask the same question in the case of higher-order measurements on quantum devices: for example, we can imagine a measurement process where a measuring device is tested, and, due to the test, is transformed into a new measuring device. This situation is described mathematically by a quantum superinstrument with outcomes in an outcome set $\Omega$, sending measurements in $\cp{ \mm_1, \elle{\kk_1} }$ to measurements in $\cp{ \mm_2, \elle{\kk_2}}$, where $\mm_1 \equiv \ell^\infty (X)$ and $\mm_2 \equiv \ell^\infty (Y)$ for some countable sets $X$ and $Y$.
In this case, it follows from Theorem \ref{Osawa} that every superinstrument $\SS: \sigma (\Omega) \frecc \cpq{\ell^\infty (X) ,\elle{\kk_1}; \ell^\infty (Y) , \elle{\kk_2}}$ is of the form $$ [\SS_B (\ee)] (f) = V^\ast [(\ee\otimes \ii_\vv) \jj_B (f)] V \quad \forall \ee\in\cb{\ell^\infty (X),\elle{\kk_1}} \, , \, \forall f\in\ell^\infty (Y) $$ for all $B\in\sigma(\Omega)$, where $\vv$ is an ancillary Hilbert space, $V\in \elle{\kk_2, \kk_1\otimes \vv} $ is an isometry, and $\jj: \sigma (\Omega) \frecc \cp{\ell^\infty (Y) , \ell^\infty (X;\lv)}$ is an instrument. Note that, by commutativity of $\ell^\infty (Y)$, the set $\cp{\ell^\infty (Y) , \ell^\infty (X;\lv)}$ coincides with the set of weak*-continuous {\em positive} maps from $\ell^\infty (Y)$ into $\ell^\infty (X;\lv)$. If for all $i\in X$ we define the positive map $$ \jj_{i,B} : \ell^\infty (Y) \frecc \lv \, , \qquad \jj_{i,B} (f) := \jj_B (f)_i \, , $$ then each mapping $\jj_i :\sigma(\Omega)\frecc \cp{\ell^\infty (Y) , \lv}$ is an instrument, with predual $$ \jj_{i,B \,\ast} : \trcl{\vv} \frecc \ell^1 (Y) \, , \qquad \jj_{i,B \,\ast} (\sigma) = \jj_{B\,\ast} (\delta_i\,\sigma) $$ for all $B\in\sigma(\Omega)$. 
From the relation $$ [\SS_B (\ee)]_\ast (\rho) = [\jj_{B\,\ast} (\ee\otimes \ii_\vv)_\ast](V\rho V^\ast) = \sum_{i\in X} \jj_{i,B \,\ast} [(\ee\otimes \ii_\vv)_\ast (V\rho V^\ast)_i] \, , $$ holding for all states $\rho\in\trcl{\kk_2}$, we then see that the most general measurement on a quantum measuring device can be implemented by \begin{enumerate} \item applying an invertible dynamics (the isometry $V$) that transforms the input system $\kk_2$ into the composite system $\kk_1 \otimes \vv$, where $\vv$ is an ancillary system; \item performing the given measurement $\ee$ on $\kk_1$, thus obtaining the outcome $i\in X$; \item conditionally on the outcome $i \in X$, performing a quantum measurement (the predual instrument $\jj_{i\,\ast}$), thus obtaining an outcome in $\Omega$ and transforming the ancillary system $\vv$ into the classical system described by the commutative algebra $\ell^\infty(Y)$. \end{enumerate} If we assume that the set $\Omega$ is also countable, then the instrument $\jj: \sigma (\Omega) \frecc \cp{\ell^\infty(Y) , \ell^\infty(X;\lv)}$ is completely specified by its action on singleton sets, that is, by the countable set of quantum operations $\{\jj_{\omega} \in \cp{\ell^\infty(Y) , \ell^\infty(X;\lv)} \mid \omega \in \Omega\} $. In this case, if for all $i\in X$ we define $$ Q^{(i)}_{\omega , j} := \jj_\omega (\delta_j)_i = \jj_{i,\omega} (\delta_j) \quad \forall (\omega , j)\in \Omega\times Y \, , $$ then the map $(\omega , j) \mapsto Q^{(i)}_{\omega , j}$ is a normalized POVM on the product set $\Omega\times Y$ with values in $\lv$. Note that, in terms of the POVM $Q^{(i)}$, we can express each $\jj_{i,\omega}$ as $$ \jj_{i,\omega} (f) = \sum_{j \in Y} f_j \, Q^{(i)}_{\omega,j} \quad \forall f\in\ell^\infty(Y) $$ or, equivalently, $$ (\jj_{i,\omega\,\ast} (\sigma))_j = \trt{\sigma Q^{(i)}_{\omega,j}} \quad \forall \sigma\in\trcl{\vv} \, .
$$ In other words, step (3) of the measurement process can be interpreted as a quantum measurement with outcome $(\omega, j) \in \Omega \times Y$, where only the classical information concerning the index $j \in Y$ is encoded in a physical system available for future experiments, whereas the information concerning the index $\omega \in \Omega$ becomes unavailable after being read out by the experimenter. \section*{Acknowledgements} G.~C.~acknowledges support by the National Basic Research Program of China (973) 2011CBA00300 (2011CBA00301). A.~T.~and V.~U.~gratefully acknowledge the financial support of the Italian Ministry of Education, University and Research (FIRB project RBFR10COAQ).
\section{Introduction} If $A$ is an abelian group, the \textit{$A$-fibered Burnside ring} of a finite group $G$, denoted by $B^A(G)$, is the Grothendieck ring of the category of finite $A$-fibered $G$-sets. This ring was first introduced by Dress in \cite{Dress}. The case of $A=k^{\times}$ for a field $k$ is particularly interesting, since $B^{k^\times}(G)$ is naturally isomorphic to the ring $D^k(G)$ of monomial $k$-representations of $G$. Recalling that $B^A(G) \cong B(G)$, the Burnside ring of $G$, if and only if $A$ has trivial $|G|$-torsion, we will say that two non-isomorphic finite groups $G$ and $H$ provide a \textit{non-trivial counterexample to the isomorphism problem of the $A$-fibered Burnside ring} if $B^A(G)$ and $B^A(H)$ are isomorphic as rings and there is a non-trivial element $a$ in $A$ such that $a^{|G|}=1$. Examples of non-isomorphic finite groups $G$ and $H$ with isomorphic Burnside rings have already been given for instance by Th\'evenaz in \cite{Thev}. The notion of a \textit{species isomorphism} for fibered Burnside rings was introduced by the second author in \cite{Gar1} for a ring isomorphism preserving the standard bases given by conjugacy classes of monomial pairs, which is analogous to an isomorphism of mark tables in the case of Burnside rings. In Section 2 of this note we present a sufficient condition on finite groups $G$ and $H$ for the existence of a species isomorphism between their fibered Burnside rings. In Section 3, we use this result to prove that Thévenaz' counterexamples to the isomorphism problem for Burnside rings (see \cite{Thev}) of order $p^2q$ for primes $p$ and $q$ such that $q|(p-1)$, provide also non-trivial counterexamples for the $A$-fibered Burnside ring when the fiber $A$ has trivial $p$-torsion and elements of order $q$. \subsection*{Notation} Throughout this note, the letters $G$ and $H$ stand for finite groups. 
We write $\mathcal{S}_G$ for the set of subgroups of $G$ and $[\mathcal{S}_G]\subseteq \mathcal{S}_G$ for a set of representatives of the conjugacy classes. For an element $g\in G$, we denote the resulting conjugation map $c_g$ also by $\lexp{g}{-}\colon G\to G,\ x\mapsto gxg^{-1}$. For subgroups $K$ and $L$ of $G$ we write $K=_G L$ and $K\le_G L$ if there exists $g\in G$ with $K=\lexp{g}{L}$ and $K\le \lexp{g}{L}$, respectively. If $G$ acts on a set $X$, we write $G\backslash X$ for the set of its orbits and we write $x=_Gy$ if two elements $x$ and $y$ of $X$ are in the same orbit. \section{A criterion for species isomorphisms} We first recall some basic definitions and results on fibered Burnside rings. We refer the reader to \cite{BY}, \cite{Dress} and \cite{Gar1} for further details. The $A$-fibered Burnside ring can be presented as follows: let $\mathcal{M}^A_G$ be the set consisting of all pairs $(K,\phi)$, where $K\leq G$ and $\phi\colon K\longrightarrow A$ is a group homomorphism, also called \textit{subcharacters} or \textit{monomial pairs}. Note that $G$ acts on $\mathcal{M}^A_G$ by conjugation. We write $[K,\phi]_G$ for the orbit of $(K,\phi)$. Then $B^A(G)$ is the free abelian group over the basis $G\backslash \mathcal{M}^A_G$, with multiplication given by $$[K,\phi]_G\cdot [L,\psi]_G=\sum_{KsL\in K\backslash G/ L}\bigl[K\cap {^sL},\phi|_{K\cap {^sL}}{^s\psi}|_{K\cap {^sL}}\bigr]_G$$ and with identity element $[G,1]_G$. \smallskip The group $G$ acts by conjugation on the ring $\prod_{K\leq G}\mathbb{Z} \mathrm{Hom}(K,A)$, and the \textit{ghost ring} of $B^A(G)$ is the subring $\widetilde{B^A}(G)=\left(\prod_{K\leq G}\mathbb{Z} \mathrm{Hom}(K,A)\right)^G$. Here, we consider $\mathrm{Hom}(K,A)$ as an abelian group via pointwise multiplication and $\mathbb{Z}\mathrm{Hom}(K,A)$ as the associated group ring. 
The map $$\Phi^A_G\colon B^A(G)\longrightarrow \widetilde{B^A}(G)\,,\quad [L,\psi]_G\mapsto \left(\sum_{\phi\in\mathrm{Hom}(K,A)}\gamma^G_{(K,\phi),(L,\psi)}\phi\right)_{K\leq G}\,,$$ is an injective ring homomorphism known as the \textit{mark morphism}, where $$\gamma^G_{(K,\phi),(L,\psi)}=|\{sL\in G/L\;|\; (K,\phi)\leq {^s(L,\psi)} \}|\,,$$ for $(K,\phi),(L,\psi)\in \mathcal{M}^A_G$. Several properties of these numbers are listed in \cite[Section 1]{Bolt} and in \cite[Lemma 2.2]{Gar1}. \smallskip The natural projection map $\pi_{[\mathcal{S}_G]}:\prod_{K\leq G}\mathbb{Z} \mathrm{Hom}(K,A)\longrightarrow \prod_{K\in [\mathcal{S}_G]}\mathbb{Z} \mathrm{Hom}(K,A)$ is a ring homomorphism and injective when restricted to $\widetilde{B^A}(G)$; this is easily seen by considering the $\mathbb{Z}$-basis of $\widetilde{B^A}(G)$ given by the elements $\widetilde{\phi}$ for $[K,\phi]_G\in G\backslash\mathcal{M}^A_G$ defined in \cite[Eq. 6]{Gar1}. For $L\le G$, the $L$-entry of $\widetilde{\phi}$ is equal to the sum of the distinct $N_G(\lexp{g}{K})$-conjugates of $\lexp{g}{\phi}$, if there exists $g\in G$ with $L=\lexp{g}{K}$ and $0$ otherwise. Moreover, one has $$\widetilde{B^A}(G)\cong \pi_{[\mathcal{S}_G]}\left(\widetilde{B^A}(G)\right)=\prod_{K\in [\mathcal{S}_G]}\left(\mathbb{Z} \mathrm{Hom}(K,A)\right)^{N_G(K)}.$$ We set $\overline{B^A}(G)=\prod_{K\in [\mathcal{S}_G]}\left(\mathbb{Z} \mathrm{Hom}(K,A)\right)^{N_G(K)}$, $\overline{\phi}=\pi_{[\mathcal{S}_G]}\left(\widetilde{\phi}\right)$ and $\overline{\Phi}^A_G=\pi_{[\mathcal{S}_G]}\Phi^A_G$ for simplicity. Thus, if $K\in[\mathcal{S}_G]$ and $\phi\in\mathrm{Hom}(K,A)$ then the $K$-entry of $\overline{\phi}\in\overline{B^A}(G)$ is the sum of the distinct $N_G(K)$-conjugates of $\phi$ and all the other entries of $\overline{\phi}$ are equal to $0$.
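The numbers $\gamma^G_{(K,\phi),(L,\psi)}$ can be computed by brute force for small groups. As an illustration (not part of the paper; all function names are ours), the following sketch takes the trivial fiber $A=1$, for which every homomorphism is trivial and $\gamma^G_{(K,1),(L,1)}$ reduces to the ordinary mark $|\{sL\in G/L \mid K\le \lexp{s}{L}\}|$ of the Burnside ring, and computes the table of marks of $G=S_3$:

```python
# Illustrative sketch (not from the paper): for the trivial fiber A = 1,
# gamma^G_{(K,1),(L,1)} reduces to the ordinary mark |{sL in G/L : K <= sL}|.
# We compute these marks for G = S_3, realized as permutations of {0, 1, 2}.
from itertools import combinations, permutations

G = list(permutations(range(3)))

def mul(a, b):
    # composition of permutations: (a * b)(x) = a(b(x))
    return tuple(a[b[x]] for x in range(3))

def inv(a):
    r = [0] * 3
    for x in range(3):
        r[a[x]] = x
    return tuple(r)

def conj(g, K):
    # the conjugate subgroup g K g^{-1}
    return frozenset(mul(mul(g, k), inv(g)) for k in K)

def subgroups(G):
    # a nonempty subset of a finite group closed under multiplication
    # is automatically a subgroup
    subs = []
    for r in range(1, len(G) + 1):
        for comb in combinations(G, r):
            s = frozenset(comb)
            if all(mul(a, b) in s for a in s for b in s):
                subs.append(s)
    return subs

def mark(K, L, G):
    # |{sL in G/L : K <= sLs^{-1}}| = |{s in G : K <= sLs^{-1}}| / |L|
    return sum(1 for s in G if K <= conj(s, L)) // len(L)

# one representative per conjugacy class of subgroups, ordered by size,
# mirroring the role of [S_G] in the text
reps = []
for K in sorted(subgroups(G), key=len):
    if not any(conj(g, K) in reps for g in G):
        reps.append(K)

table = [[mark(K, L, G) for L in reps] for K in reps]
for row in table:
    print(row)
```

Restricting to representatives of the conjugacy classes of subgroups, as in the passage above, is what makes the resulting table finite and square.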
\smallskip If $H$ is another finite group, a ring isomorphism $\Theta:B^A(G)\longrightarrow B^A(H)$ is called a \textit{species isomorphism} if $\Theta([K,\phi]_G)=[R,\rho]_H\in H\backslash \mathcal{M}^A_H$ for every $[K,\phi]_G\in G\backslash \mathcal{M}^A_G$ \cite[Def. 3.1]{Gar1}. We will make use of the following theorem which is part of the statement of Theorem~3.14 in \cite{Gar1}. \begin{theorem}\thlabel{Previous Species Thm} Let $A$ be an abelian group and let $G$ and $H$ be finite groups. There exists a species isomorphism from $B^A(G)$ to $B^A(H)$ if and only if there exist bijections $\theta_{\mathcal{S}}\colon \mathcal{S}_G\longrightarrow \mathcal{S}_H$ and $\theta_K\colon \mathrm{Hom}(K,A)\longrightarrow\mathrm{Hom}(\theta_{\mathcal{S}}(K),A)$, for $K\le G$, satisfying the following two conditions: \smallskip {\rm (a)} $\gamma_{(K,\phi),(L,\psi)}^G = \gamma_{(\theta_{\mathcal{S}}(K),\theta_K(\phi)), (\theta_{\mathcal{S}}(L),\theta_L(\psi))}^H$ for all $(K,\phi),(L,\psi)\in\mathcal{M}^A_G$. \smallskip {\rm (b)} The group homomorphism $\widetilde{\Theta}\colon \widetilde{B^A}(G) \longrightarrow \widetilde{B^A}(H)$ determined by $\widetilde{\phi}\mapsto \widetilde{\theta_K(\phi)}$, for $(K,\phi)\in\mathcal{M}^A_G$, is a ring isomorphism. \end{theorem} \begin{rem}\label{old remark} In order to clarify the statement in Theorem~\ref{Previous Species Thm}, we will show that the map $\widetilde{\Theta}$ in (b) is well-defined, provided that the condition in (a) holds. That is, if $(K,\phi), (L,\psi) \in \mathcal{M}^A_G$ are $G$-conjugate, then $\widetilde{\theta_{L}(\psi)} = \widetilde{\theta_K(\phi)}\in \widetilde{B^A}(H)$. For this it suffices to show that $(\theta_{\mathcal{S}}(K),\theta_K(\phi))$ and $(\theta_{\mathcal{S}}(L), \theta_{L}(\psi))$ are $H$-conjugate. 
By part 2 of \cite[Lemma 2.2]{Gar1}, $(K,\phi)=_G(L,\psi)$ if and only if $0\neq \gamma^G_{(K,\phi),(L,\psi)}$ and $0\neq \gamma^G_{(L,\psi),(K,\phi)}$, so the condition in (a) implies that $(K,\phi)=_G(L,\psi)$ if and only if $(\theta_{\mathcal{S}}(K),\theta_K(\phi))=_H(\theta_{\mathcal{S}}(L), \theta_{L}(\psi))$. Moreover, if $K$ and $L$ are conjugate subgroups of $G$, then $(\theta_{\mathcal{S}}(K),\theta_K(1))=_H(\theta_{\mathcal{S}}(L),\theta_L(1))$, which implies that $\theta_{\mathcal{S}}(K)=_H\theta_{\mathcal{S}}(L)$. Conversely, if $\theta_{\mathcal{S}}(K)=_H\theta_{\mathcal{S}}(L)$, then $(K,\theta_K^{-1}(1))=_G(L,\theta_L^{-1}(1))$, implying $K=_GL$. Thus, $K=_GL$ if and only if $\theta_{\mathcal{S}}(K)=_H\theta_{\mathcal{S}}(L)$. \end{rem} Next we prove a slight modification of the above Theorem. \begin{theorem} \thlabel{SpeciesThm} Let $A$ be an abelian group and let $G$ and $H$ be finite groups. Then there exists a species isomorphism from $B^A(G)$ to $B^A(H)$ if and only if for any given $[\mathcal{S}_G]$ and $[\mathcal{S}_H]$, there are bijections $\theta_{[\mathcal{S}]}\colon [\mathcal{S}_G]\longrightarrow [\mathcal{S}_H]$ and $\theta_K\colon\mathrm{Hom}(K,A)\longrightarrow \mathrm{Hom}(\theta_{[\mathcal{S}]}(K),A)$ for $K\in [\mathcal{S}_G]$, satisfying the following two conditions: \smallskip {\rm (a)} $\gamma^H_{\left(\theta_{[\mathcal{S}]}(K),\theta_{K}(\phi)\right),\left(\theta_{[\mathcal{S}]}(L),\theta_L(\psi)\right)}=\gamma^G_{\left(K,\phi\right),\left(L,\psi\right)}$ for all $K,L\in [\mathcal{S}_G]$, $\phi\in \mathrm{Hom}(K,A)$ and $\psi\in \mathrm{Hom}(L,A)$. \smallskip {\rm (b)} The map $\overline{\Theta}\colon \overline{B^A}(G) \longrightarrow \overline{B^A}(H)$, $\overline{\phi}\mapsto \overline{\theta_K(\phi)}$, for $\phi\in \mathrm{Hom}(K,A)$ and $K\in[\mathcal{S}_G]$, is a ring isomorphism.
\end{theorem} \begin{rem}\label{new remark} With the same arguments as in Remark~\ref{old remark} one can show that in the situation of Theorem~\ref{SpeciesThm} the condition in (a) implies that the map $\overline{\Theta}$ in (b) is well-defined. More precisely, (a) implies that for any $K\in[\mathcal{S}_G]$, $\phi,\psi\in\mathrm{Hom}(K,A)$, after setting $K':=\theta_{[\mathcal{S}]}(K)$, $\phi':=\theta_K(\phi)$, $\psi':=\theta_K(\psi)$, one has $(K,\phi)=_G(K,\psi)$ if and only if $(K',\phi')=_H(K',\psi')$. \end{rem} \medskip\noindent {\it Proof of Theorem~\ref{SpeciesThm}.}\quad First suppose that there exists a species isomorphism from $B^A(G)$ to $B^A(H)$. Then there exist bijections $\theta_{\mathcal{S}}$ and $\theta_K$ for $K\le G$ as in Theorem~\ref{Previous Species Thm}, satisfying the conditions (a) and (b) in Theorem~\ref{Previous Species Thm}. Let $[\mathcal{S}_G]$ and $[\mathcal{S}_H]$ be given. Then for every $K\in[\mathcal{S}_G]$ there exists $h_K\in H$ such that $\lexp{h_K}{\theta_{\mathcal{S}}(K)}\in[\mathcal{S}_H]$. We replace the bijection $\theta_{\mathcal{S}}$ by the map $L\mapsto \lexp{h_K}\theta_{\mathcal{S}}(L)$ whenever $L=_GK$. This map is again a bijection, since $\theta_{\mathcal{S}}$ maps the $G$-conjugacy class of $K$ bijectively onto the $H$-conjugacy class of $\theta_{\mathcal{S}}(K)$, by Remark~\ref{old remark}. Moreover, for any $g\in G$, we replace $\theta_{\lexp{g}{K}}$ by $c_{h_K}\circ\theta_{\lexp{g}{K}} \colon \mathrm{Hom}(\lexp{g}{K},A)\longrightarrow \mathrm{Hom}(\lexp{h_K}{\theta_{\mathcal{S}}(^gK)},A)$. Then these new bijections satisfy again conditions (a) and (b) in Theorem~\ref{Previous Species Thm}. In fact, the map $\widetilde{\Theta}$ has stayed the same. Now we can restrict $\theta_{\mathcal{S}}\colon \mathcal{S}_G\to\mathcal{S}_H$ to the bijection $\theta_{[\mathcal{S}]}\colon [\mathcal{S}_G]\to[\mathcal{S}_H]$, and for every $K\in[\mathcal{S}_G]$ we choose the given bijection $\theta_K$. 
Then condition (a) in Theorem~\ref{SpeciesThm} is immediately satisfied and condition (b) holds, since with $\widetilde{\Theta}$ also $\overline{\Theta}=\pi_{[\mathcal{S}_H]}\widetilde{\Theta}\pi_{[\mathcal{S}_G]}^{-1}$ is a ring isomorphism. Conversely, for each $[K,\phi]_G\in G\backslash\mathcal{M}^A_G$ we can assume $K\in [\mathcal{S}_G]$, and mapping $[K,\phi]_G$ to $[\theta_{[\mathcal{S}]}(K),\theta_K(\phi)]_H$ gives a bijection from $G\backslash \mathcal{M}^A_G$ onto $H\backslash \mathcal{M}^A_H$, thus it extends to an isomorphism of abelian groups $\Theta:B^A(G)\longrightarrow B^A(H)$. Then the diagram $$\xymatrix{ B^A(G)\ar[rr]^-{\Theta}\ar[d]_{\overline{\Phi}^A_G} &&B^A(H)\ar[d]^{\overline{\Phi}^A_H}\\ \overline{B^A}(G) \ar[rr]_{\overline{\Theta}} &&\overline{B^A}(H) }$$ commutes, and since the bottom and the vertical arrows are injective ring homomorphisms, $\Theta$ is a ring isomorphism. \qed \bigskip As remarked in \cite[Cor. 3.12]{Gar1}, some of the $\theta_K$ are necessarily group isomorphisms, but we do not know whether this has to be the case for all of these maps. However, when the $\theta_K$ are isomorphisms, we can drop Condition~(b) in Theorem~\ref{SpeciesThm}. \begin{prop}\thlabel{SpeciesCriterion} Let $A$ be an abelian group and let $G$ and $H$ be finite groups. Assume that there is a bijection $\theta_{[\mathcal{S}]}:[\mathcal{S}_G]\longrightarrow [\mathcal{S}_H]$ and, for each $K\in[\mathcal{S}_G]$, a group isomorphism $\theta_K\colon\mathrm{Hom}(K,A)\longrightarrow \mathrm{Hom}(\theta_{[\mathcal{S}]}(K),A)$ such that $$\gamma^G_{(K,\phi),(L,\psi)}=\gamma^H_{(\theta_{[\mathcal{S}]}(K),\theta_{K}(\phi)),(\theta_{[\mathcal{S}]}(L),\theta_{L}(\psi))}$$ for all $K,L\in [\mathcal{S}_G]$, $\phi\in \mathrm{Hom}(K,A)$, $\psi\in \mathrm{Hom}(L,A)$.
Then the assignment $[K,\phi]_G\mapsto [\theta_{[\mathcal{S}]}(K),\theta_K(\phi)]_H$ for $K\in [\mathcal{S}_G]$ and $\phi\in\mathrm{Hom}(K,A)$ extends to a species isomorphism $\Theta\colon B^A(G) \longrightarrow B^A(H)$. \end{prop} \begin{proof} For $K\in [\mathcal{S}_G]$ and $\phi\in \mathrm{Hom}(K,A)$, the assignment $\overline{\phi}\mapsto \overline{\theta_K(\phi)}$ extends to an isomorphism $\overline{\Theta}:\overline{B^A}(G)\longrightarrow \overline{B^A}(H)$ of abelian groups. Then the diagram $$\xymatrix{ \overline{B^A}(G)\ar@{^{(}->}[d]\ar[rr]^{\overline{\Theta}} &&\overline{B^A}(H)\ar@{^{(}->}[d]\\ \prod_{K\in[\mathcal{S}_G]}\mathbb{Z} \mathrm{Hom}(K,A)\ar[rr]_{(\theta_{K})} &&\prod_{L\in[\mathcal{S}_H]}\mathbb{Z} \mathrm{Hom}(L,A)\\ }$$ where $\theta_K\colon\mathbb{Z} \mathrm{Hom}(K,A)\longrightarrow \mathbb{Z} \mathrm{Hom}(\theta_{\mathcal{S}}(K),A)$ is the $\mathbb{Z}$-linear extension of $\theta_K$ and the vertical arrows are the inclusions, commutes by Remark~\ref{new remark}. Note that the bottom map is a ring isomorphism, since each $\theta_K$ was a group isomorphism. Therefore, as the bottom and the vertical arrows are injective, also $\overline{\Theta}$ is a ring isomorphism. By \thref{SpeciesThm} and its proof, the maps $\theta_{[\mathcal{S}]}$ and $\theta_K$ determine a species isomorphism. \end{proof} \section{Nontrivial counterexamples} We recall the construction of Thévenaz' counterexamples in \cite{Thev}. Let $p$ and $q\geq 3$ be prime numbers such that $q|(p-1)$, and take elements $a\neq b$ of order $q$ in $(\mathbb{Z}/p\mathbb{Z})^{\times}$. Let $P_a=\mathbb{Z}/p\mathbb{Z}=\langle x\rangle$, $P_b= \mathbb{Z}/p\mathbb{Z}=\langle y\rangle$ and $Q=C_q=\langle z\rangle$, let $Q$ act on $P_a\oplus P_b$ by $\lexp{z}{x} = ax$ and $\lexp{z}{y}=by$, and consider the resulting semidirect product $G(a,b)=(P_a\oplus P_b)\rtimes Q$. 
A complete set of representatives of the conjugacy classes of subgroups of $G(a,b)$ is $\{1\}$, $P_a$, $P_b$, $P(j) = \langle x+jy\rangle$ for $j\in [(\mathbb{Z}/p\mathbb{Z})^{\times}/\langle a\rangle]$, $P_a\oplus P_b$, $Q$, $P_a\rtimes Q$, $P_b\rtimes Q$ and $G(a,b)$. Taken in this order, the table of marks of $G(a,b)$ is independent of the choice of $\{a,b\}$. For fixed $p$ and $q$, there are precisely $\frac{q-1}{2}$ isomorphism classes of these groups; in fact, if also $c\neq d$ are elements of order $q$ in $(\mathbb{Z}/p\mathbb{Z})^{\times}$ then $G(a,b)\cong G(c,d)$ if and only if there exists $n\in\{1,\ldots,q-1\}$ such that $\{c,d\}=\{a^n,b^n\}$ (see \cite{Thev}). Note that there are infinitely many choices for such $p$ and $q$: for any prime $q\geq 3$, by Dirichlet's theorem there are infinitely many primes $p$ in the arithmetic progression $1+q,1+2q,1+3q,\ldots$. \begin{theorem} \thlabel{ThevenazGroups} Let $p$ and $q\ge3$ be primes with $q$ dividing $p-1$ and let $a\neq b$ and $c\neq d$ be elements of order $q$ in $(\mathbb{Z}/p\mathbb{Z})^{\times}$. If $A$ has trivial $p$-torsion, then $B^A(G(a,b))$ and $B^A(G(c,d))$ are isomorphic rings. \end{theorem} \begin{proof} Since we already know that these groups have isomorphic Burnside rings, we can assume that $A$ has elements of order $q$. For simplicity, set $G:=G(a,b)$ and $H:=G(c,d)$ and take $[\mathcal{S}_G]$ and $[\mathcal{S}_H]$ as in the first paragraph of this section. We let $\theta_{[\mathcal{S}]}:[\mathcal{S}_G]\longrightarrow [\mathcal{S}_H]$ be the obvious bijection inducing an isomorphism of the tables of marks and set $K':=\theta_{[\mathcal{S}]}(K)$ for $K\in [\mathcal{S}_G]$. 
Next we define group isomorphisms $\theta_K\colon\mathrm{Hom}(K,A)\longrightarrow \mathrm{Hom}(K',A)$ for $K\in[\mathcal{S}_G]$: if $K$ is a $p$-group so is $K'$, and $\mathrm{Hom}(K,A)$ and $\mathrm{Hom}(K',A)$ are trivial, hence there is only one choice for $\theta_K$; if $K$ is not a $p$-subgroup, then a homomorphism $\phi\colon K\longrightarrow A$ is determined by the value $\phi(z)$ which is either an element of order $q$ or $1$, and we define $\phi':=\theta_K(\phi)\colon K'\longrightarrow A$ by requiring $\phi'(z)=\phi(z)$. \smallskip We now compare $\gamma^G_{(K,\phi),(L,\psi)}$ and $\gamma^H_{(K',\phi'),(L',\psi')}$ for $K,L\in [\mathcal{S}_G]$, $\phi\in \mathrm{Hom}(K,A)$ and $\psi\in \mathrm{Hom}(L,A)$. First, since $\theta_{[\mathcal{S}]}$ preserves the marks, we have $$\gamma^G_{(K,1),(L,1)}=|(G/L)^K|=|(H/L')^{K'}|=\gamma^H_{(K',1),(L',1)},$$ for all $K$ and $L$ in $[\mathcal{S}_G]$. Note that the subgroups $P_a$, $P_b$ and $P_a\oplus P_b$ are normal, while $N_G(P(j))=P_a\oplus P_b$, and $P_a\rtimes Q$, $P_b\rtimes Q$ and $Q$ are self-normalizing. Moreover, it is straightforward to verify that if $K\not\leq L$ then $K\not\leq_{G} L$, and since $\theta_{[\mathcal{S}]}$ preserves containments, then $\gamma^G_{(K,\phi),(L,\psi)}=\gamma^H_{(K',\phi'),(L',\psi')}=0$ whenever $K\not\leq L$. \smallskip Therefore, we are left with the case when $K\leq L$, $L$ is not a $p$-subgroup and $\psi\neq 1$, and we distinguish the following cases for $K$: \smallskip (i) If $K\in\{\{1\}, P_a, P_b, P_a\oplus P_b\}$ then $K\unlhd G$ and $\phi=1$. Therefore $K\leq {\lexp{g}{L}}$ and $\lexp{g}{\psi}|_K=1$ for all $g\in G$ and hence $$\gamma^G_{(K,1),(L,\psi)}=[G:L]=[H:L']=\gamma^H_{(K',1),(L',\psi')}\,.$$ \smallskip (ii) If $K=P(j)$ we may assume that $L=G$. In this case $\gamma_{(K,1),(G,\psi)}^G= 1 = \gamma_{(K',1),(H,\psi')}^H$. 
\smallskip (iii) If $K$ is not a $p$-subgroup, then since $L$ is self-normalizing and both $K$ and $L$ contain $Q$, we have that if $g\notin L$ then $z\not\in \lexp{g}{L}$, hence $K\not\leq \lexp{g}{L}$. We can conclude that $$\gamma^{G}_{(K,\phi),(L,\psi)}=\begin{cases} 1 &\text{if}\;\phi(z)=\psi(z),\\ 0 &\text{otherwise}, \end{cases}$$ and by the way we have defined $\theta_K$, we have that $\gamma^{H}_{(K',\phi'),(L',\psi')}=\gamma^{G}_{(K,\phi),(L,\psi)}$. \smallskip We conclude that $\theta_{[\mathcal{S}]}$ and the isomorphisms $\theta_K$ for $K\in [\mathcal{S}_G]$ satisfy the condition of \thref{SpeciesCriterion}, hence they determine a species isomorphism. \end{proof} \begin{rem} The above theorem shows that Thévenaz' counterexamples to the isomorphism problem for the Burnside ring are also non-trivial counterexamples for the $A$-fibered Burnside ring if $A$ is any abelian group with trivial $p$-torsion and having elements of order $q$. In particular, we have a negative answer to the isomorphism problem of the $C_q$-fibered Burnside ring for any prime $q\geq 5$, and for the ring of monomial representations over any field $k$ of characteristic $p$, since in this case $D^k(G(a,b))\cong B^{C_q}(G(a,b))$. However, the existence of non-isomorphic finite groups $G$ and $H$ with $D^{\mathbb{C}}(G)\cong D^{\mathbb{C}}(H)$ remains open. \end{rem}
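As a quick sanity check on the count of isomorphism classes stated above, the classification $G(a,b)\cong G(c,d)$ if and only if $\{c,d\}=\{a^n,b^n\}$ for some $n$ can be enumerated directly for small $p$ and $q$. The following sketch is ours, not part of the original argument, and only uses that criterion:

```python
# Enumerate the isomorphism classes of the groups G(a,b) for small p, q,
# using only the criterion: G(a,b) is isomorphic to G(c,d) iff
# {c,d} = {a^n, b^n} for some n in 1..q-1 (p, q primes with q | p-1).
from itertools import combinations

def order_q_elements(p, q):
    """Elements of order exactly q in (Z/pZ)^x (q prime, q | p-1)."""
    return [a for a in range(2, p) if pow(a, q, p) == 1]

def isomorphic(pair1, pair2, p, q):
    a, b = pair1
    return any({pow(a, n, p), pow(b, n, p)} == set(pair2) for n in range(1, q))

def iso_classes(p, q):
    """Group the unordered pairs {a,b} into isomorphism classes of G(a,b)."""
    classes = []
    for pair in combinations(order_q_elements(p, q), 2):
        for cls in classes:
            if isomorphic(cls[0], pair, p, q):
                cls.append(pair)
                break
        else:
            classes.append([pair])
    return classes

# For p = 11, q = 5 there are (q-1)/2 = 2 classes, e.g. {3,4} ~ {5,9}.
```

For $p=7$, $q=3$ the only order-$3$ elements of $(\mathbb{Z}/7\mathbb{Z})^{\times}$ are $2$ and $4$, giving a single pair and hence a single class, in line with the count $\frac{q-1}{2}$.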
\section{Introduction}\label{sec:introduction} Data analysis has become a useful technique to organize, process, and analyze large amounts of data in order to obtain useful knowledge effectively, such as hidden patterns, implicit correlations, future trends, customer preferences, and valuable business information \cite{DAP}. OLAP (\emph{online analytical processing}) \cite{OLAP}, a key technology that provides rapid access to data (mostly relational data) for analysis via multidimensional structures, enables users (e.g., analysts, managers, and executives) to gain useful knowledge from data in a fast, consistent, and interactive way. There are many popular enterprise database management systems supporting OLAP. For example, Oracle OLAP \cite{OrcaleDW,OrcaleOLAP} is Oracle's current computing engine for online analytical processing. Based on the DB2 database, IBM proposes the IBM DB2 OLAP Server \cite{IBM,DB2}, which can analyze relational databases quickly and directly. Microsoft also provides SQL Server Analysis Services (SSAS) \cite{MDX,MAS}, which supports OLAP over information, tables, and files scattered across multiple databases. The characteristics of big data are not confined to volume and velocity; big data is also characterized by the variety, variability, and complexity of the data \cite{mckinsey2011big,BI-BD}. Due to the volume, variety, and velocity at which the data grows, it is extremely difficult for organisations to process this data for timely and accurate decisions \cite{BigData-AP1}. To meet this challenge, big data analysis \cite{BigData-AP2} has become a tool to solve the problem. The primary goal of big data analysis is to help companies make more informed business decisions by enabling data scientists, predictive modelers, and other analytics professionals to analyze large volumes of transaction data, as well as other forms of data that may be untapped by conventional business intelligence programs \cite{BigData-AP2}. 
Recently, many techniques have been successfully developed to provide big data analysis in various applications. For example, Oracle Big Data \cite{OracleBD} builds on Hadoop \cite{Hadoop}, connecting Hadoop and Oracle databases through the Oracle Direct Connector. SQL Server 2012 \cite{SQL2012} provides OLAP and business intelligence extension services on Hadoop to support big data analysis. IBM SmartCloud provides a Hadoop-based analytical software, InfoSphere BigInsights \cite{IBMBD}, which can connect with IBM DB2. However, these existing techniques for big data analysis are mostly based on OLAP, which is not effective at processing data in various models (e.g., semi-structured data \cite{BigData-AP2}); they do not always yield highly accurate analysis due to the variety and variability of big data in complicated applications--for example, real-time data on the performance of traffic or mobile applications. Besides, how to perform big data analysis efficiently is always an important problem when the scale of big data grows exponentially \cite{Nature}. In this demonstration, we propose a hybrid framework for big data analysis on Apache Spark \cite{MLlib} (a high-performance computing architecture), which builds on the HDFS of Hadoop. The framework features a three-layer data process module and a business process module which controls the former. Within this framework, we can support multi-paradigm data processing (i.e., a technical connectivity between various disparate processes \cite{paradigm}) in order to improve the accuracy of analysis, where various big data analysis techniques (incl. OLAP, machine learning, and graph analysis) are interoperated to process the analysis of various big data applications (incl. data cubes \cite{DataCube}, intelligent prediction, and complex networks) respectively. Moreover, our proposed framework built on Spark can process large-scale data efficiently. 
Finally, we implement hMDAP and demonstrate its strength using real-world traffic scenarios. \section{Architecture}\label{sec:architecture} In Figure \ref{fig:architecture}, we depict the architecture of our framework, consisting of four parts: \emph{the storage management}, \emph{the resource scheduling}, \emph{the query analysis} and \emph{the business process}. In the following sections, we introduce each part in detail. \begin{figure} \centering \begin{minipage}[t]{0.95\linewidth} \scalebox{1.8}{ \includegraphics[width=0.5\textwidth]{architecture.jpg}\\ } \caption{The hMDAP architecture.}\label{fig:architecture} \end{minipage} \vspace*{-10pt} \end{figure} \subsection{Storage management}\label{sec:storage} As shown in Figure \ref{fig:storage}, the storage management consists of two parts, the physical storage and the logical storage. The rapid growth of data drives the physical storage of data from single-source storage to distributed storage. To handle the storage of multi-source data, we adopt an existing distributed file system; in our framework, it is HDFS (the Hadoop Distributed File System \cite{Hadoop}). Besides, the different needs of applications produce many types of data, such as tables, texts, RCFile (the file format of Hive) and sequence data. To use these different types of data, we compose abstract relational views by designing metadata with semantics that convert the various data types into the relational data we can handle. \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{storage.jpg}\\ \caption{Storage management.}\label{fig:storage} \end{figure} \vspace*{-15pt} \subsection{Resource scheduling}\label{sec:resource} Our framework is developed on Spark, and the resource scheduling module is delegated to Spark. Figure \ref{fig:scheduling} depicts the resource scheduling in our framework. We use MySQL \cite{MySQL} to query relational databases. MLlib is Spark's machine learning library. 
We call the functions in this library for computation. GraphX is the graph query module of Spark. We use it to query graphs, and it makes it possible to transform data in different formats into graphs for querying. \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{scheduling.jpg}\\ \vspace*{-10pt} \caption{Resource scheduling.}\label{fig:scheduling} \vspace*{-20pt} \end{figure} \subsection{Query analysis}\label{sec:query} The query analysis module is located at the top of the framework. It is not only the entrance for providing services, but also provides the standard syntax and semantic specification of multi-paradigm data analytical processing. At present, HiveQL is similar to standard SQL: it is oriented to classic OLAP tasks and does not cover query languages based on ML analysis or graph data analysis. Without changing the existing query language syntax standard, we develop a multi-paradigm query language for large-scale data fusion analysis, extended with machine learning (ML) and graph analysis. Our query language for big data analytical processing is based on a multi-paradigm fusion of SQL and HiveQL. First of all, we analyze the features supported by HiveQL and SQL respectively, and identify the operations that can be supported by the traditional relational algebra model. On the basis of the relational algebra model, we add the other necessary operators to construct an extended algebraic language model, which can fully support the operations of HiveQL and standard SQL. Operators of higher complexity are split into smaller sub-operators or optimized by other methods. For ML analysis, we survey the commonly used analytical processing methods, such as classification and clustering, and define abstract interfaces for them. 
For graph analysis processing, we likewise survey the commonly used analytical processing methods, such as shortest-path algorithms, and define abstract interfaces for them. This module also covers the implementation of OLAP on the relational database as well as ML and graph data processing tasks on the distributed framework. Traditional relational database query optimization methods are no longer applicable in this situation. According to the different characteristics of the relational storage management query engine and the computing engine over the distributed file system, we summarize the query information and optimize performance. Firstly, we investigate the statistical index system used in traditional databases and analyze the interactions among the indexes in the system. Then, for each index in the statistical index system, we design efficient and accurate sampling methods to calculate the cost model in query optimization. Based on the above statistics, we can also design storage and maintenance schemes that are easy to update and manage, and we may adapt the cost model of traditional relational databases to design a new cost model that reflects the query cost of the mixed data. Figure \ref{fig:query} displays the query analysis. The main architectural components of the query analysis are \emph{Query} and \emph{Data Analysis Process Tools (DAP Tools)}. In the first part, queries can be issued in SQL or via user-defined functions in a specified format. The DAP Tools contain classical OLAP, DAP on machine learning and DAP on graphs. \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{query.jpg}\\ \vspace*{-10pt} \caption{Query analysis.}\label{fig:query} \vspace*{-15pt} \end{figure} \subsection{Business process}\label{sec:business} Our framework provides an analysis method for the large-scale data analysis process. 
However, to face the complex business processes of different fields, we need domain knowledge, according to which we can design multi-paradigm fusion analysis tasks. We can draw lessons from the service composition methods in service-oriented architecture design. In this module, we need to do two things: develop the syntax of a multi-paradigm fusion analysis process orchestration language, and develop a scheduling method for complex business processes. For the first part, we analyze the patterns and characteristics of service orchestration languages in service-oriented architecture design and design an abstract model of the executable process. On the basis of the abstract model, we summarize the basic activities of complex business process analysis. Finally, we define the grammar of the business process. On the semantic side, we analyze the meanings of the basic business activities and define the start point, end point and basic commands. For the second part, we study and analyze complex business processes in practical applications. Then, we build complex business process models and refine the way messages are exchanged in public business processes. After that, we control the interaction of each part of the resources through the interaction sequence of messages, achieving a reasonable call for each resource service. We also investigate the applicability of existing object-oriented design patterns. For the analysis of complex business process integration models, we design data business processes, and we refine the design patterns in complex business processes based on the advantages and principles of existing design patterns. In the real world, the business process model is complex and takes a lot of time to analyze. Figure \ref{fig:business} illustrates the details of the business process in our framework. The user needs to write the configuration files before he or she submits the query. 
The format of the configuration files is shown in Section \ref{sec:demonstration}. When the user submits a query to the framework, the query analysis module starts to parse the user's query. This module parses queries according to predefined semantics, expressed for example in XML (Extensible Markup Language). The module transforms the user's query into two parts: the query over relational databases and the query in machine learning. By default, we assume that the user's query includes the query over relational databases, and the module determines whether or not to carry out the query in machine learning: when the result of the query over relational databases is null, the framework begins the query in machine learning. After the analysis module, the framework uses the query over relational databases, together with the database information read from the configuration files, to query the relational databases. Then, the framework runs the query in machine learning. The input of the machine learning step is the result of a relational database query whose statement is stored in the configuration files. The parameters of the machine learning algorithm are also stored in the configuration files. When the framework obtains the information of the machine learning algorithm, it starts to train and compute, with the training parameters of the machine learning algorithm also coming from the configuration files. Finally, the framework joins the results of the two parts. \begin{figure} \centering \begin{minipage}[t]{0.95\linewidth} \scalebox{1.8}{ \includegraphics[width=0.5\textwidth]{business.jpg}\\ } \vspace*{-15pt} \caption{Business process.}\label{fig:business} \vspace*{-15pt} \end{minipage} \end{figure} \vspace*{-10pt} \section{Demonstration}\label{sec:demonstration} In this section, we present the interface of hMDAP, implemented in Javascript, which communicates with a service in Java. 
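The fallback control flow of the business process (run the relational query first, switch to machine learning only when the relational result is null, then return the results) can be sketched as follows; the data, query strings, and the stand-in ``model'' are purely illustrative and not part of hMDAP:

```python
# Hypothetical sketch of the hMDAP business-process flow: run the relational
# query first, fall back to the machine learning step only when the relational
# result is null, then return the results.

def run_business_process(relational_query, config, db, ml_model=None):
    """db maps query strings to lists of rows; all names are illustrative."""
    relational_result = db.get(relational_query, [])
    if relational_result:                 # the relational query answered it
        return relational_result
    # Null relational result: fetch training data via the query statement
    # stored in the configuration, then "train" and predict.
    training_rows = db.get(config["training_sql"], [])
    if ml_model is None:
        # trivial stand-in model: predict the most frequent label
        labels = [row["label"] for row in training_rows]
        prediction = max(set(labels), key=labels.count) if labels else None
        return [{"prediction": prediction}]
    return ml_model(training_rows, config["parameters"])

# Toy in-memory "database":
db = {
    "SELECT * FROM congestion WHERE road = 'A1'": [],   # null result
    "SELECT road, label FROM history": [
        {"road": "A1", "label": "busy"},
        {"road": "A1", "label": "busy"},
        {"road": "A1", "label": "free"},
    ],
}
config = {"training_sql": "SELECT road, label FROM history", "parameters": {}}
result = run_business_process("SELECT * FROM congestion WHERE road = 'A1'",
                              config, db)
# result -> [{"prediction": "busy"}]
```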
We show the screenshot of hMDAP in Figure \ref{fig:screenshot} and the configuration file mentioned above in Figure \ref{fig:ml}. The interface is composed as follows: \begin{compactitem} \item Configuration of Machine Learning: a text box for the path of the configuration file of the machine learning algorithm, such as its parameters. \item Configuration of Relation Database: a text box for the path of the configuration file of the relational databases, such as the user name. \item Results: a text box displaying the results from the backend. \item Run: a button to start the program; when the program finishes, the results are shown in \emph{Results}. \item Save: a button to save the contents of \emph{Results} to a text file and, at the same time, clear all the text boxes. \item Cancel: cancel the running of the program and clear all the text boxes. \end{compactitem} \begin{figure} \centering \begin{minipage}[t]{0.95\linewidth} \scalebox{1.8}{ \includegraphics[width=0.5\textwidth]{screenshot.jpg}\\ } \vspace*{-10pt} \caption{Query interface of hMDAP.}\label{fig:screenshot} \end{minipage} \vspace*{-10pt} \end{figure} \begin{figure}[h] \centering \begin{minipage}[t]{0.95\linewidth} \scalebox{1.8}{ \includegraphics[width=0.5\textwidth]{ml.jpg}\\ } \vspace*{-10pt} \caption{The configuration file of the machine learning.}\label{fig:ml} \vspace*{-20pt} \end{minipage} \end{figure} The details of the configuration file are as follows: \begin{compactitem} \item configuration: the beginning of the configuration file. \item input: the training dataset of the machine learning algorithm. \item database: indicates that the input dataset comes from the relational database described by the following information. \item url, user, password: the parameters to connect to the relational database: the location of the database, the user name and the password of the user. 
\item sql: the statement to query the relational database. \item parameter: the contents under this label are the parameters of the machine learning algorithm, except the input parameter. \item value: a series of these labels gives the values of the parameters. \item algorithm: the name of the algorithm. For example, if the value of \emph{algorithm} is \emph{KMeans}, our framework runs the algorithm named \emph{KMeans} defined in our library. Users can also customize the algorithm and give its location in this label. \end{compactitem} Before running the interface, the user should write two configuration files: the configuration of the machine learning algorithms, as in Figure \ref{fig:ml}, and the configuration of the relational databases, whose contents are the \emph{<database>} part in Figure \ref{fig:ml}. After writing the two files, the user enters their paths in the text boxes on the interface and then clicks the button \emph{Run}. If the user wants to save the results, he or she clicks the button \emph{Save}; if the user does not need the results, he or she clicks the button \emph{Cancel}. \vspace*{-10pt} \section{Conclusion}\label{sec:conclusion} In this demonstration, we proposed hMDAP, a hybrid framework for large-scale data analytical processing that supports multi-paradigm processing on Spark. The multi-paradigm processing mechanism of hMDAP provides the interoperability of data analytical processing techniques to handle data that might not be processed effectively by any single technique. On the other hand, hMDAP takes advantage of the high performance of Spark in processing large-scale data. We believe that hMDAP provides a new approach to big data analysis in a multi-paradigm way. 
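To make the label structure concrete, the following is a hypothetical configuration file built from the labels listed above, together with a minimal parser using Python's standard library; the actual file in Figure \ref{fig:ml} may be laid out differently:

```python
# An illustrative configuration file following the labels described above
# (a sketch, not the exact file of Figure "ml"), plus a minimal parser.
import xml.etree.ElementTree as ET

CONFIG = """
<configuration>
  <input>
    <database>
      <url>jdbc:mysql://localhost:3306/traffic</url>
      <user>demo</user>
      <password>demo</password>
      <sql>SELECT speed, volume FROM sensors</sql>
    </database>
  </input>
  <parameter>
    <value>3</value>
    <value>20</value>
  </parameter>
  <algorithm>KMeans</algorithm>
</configuration>
"""

def parse_config(text):
    """Extract the connection info, query, parameters, and algorithm name."""
    root = ET.fromstring(text)
    db = root.find("input/database")
    return {
        "url": db.findtext("url"),
        "user": db.findtext("user"),
        "sql": db.findtext("sql"),
        "parameters": [v.text for v in root.findall("parameter/value")],
        "algorithm": root.findtext("algorithm"),
    }

cfg = parse_config(CONFIG)
# cfg["algorithm"] -> "KMeans"; cfg["parameters"] -> ["3", "20"]
```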
\vspace*{-10pt} \section*{Acknowledgments}\label{sec:acknowledgments} This work is supported by the programs of the Key Technology Research and Development Program of Tianjin (16YFZCGX00210), the National Key Research and Development Program of China (2016YFB1000603), the National Natural Science Foundation of China (NSFC) (61672377), and the Open Project of Key Laboratory of Computer Network and Information Integration, Ministry of Education (K93-9-2016-05). Xiaowang Zhang is supported by Tianjin Thousand Young Talents Program. \vspace*{-10pt}
\section{Introduction}\label{sec:introduction} Verse-chorus song form is a very common structure for popular music. In it, verses alternate with choruses, with the lyrics of the verses varying and the choruses repeating more strictly and more frequently. The authors of~\cite{vanbalen2013analysis} cite other generalizations used to define choruses, including that they are the `most prominent' and `most catchy' sections of a piece. These traits make it desirable to detect choruses automatically, whether for generating ``thumbnails''~\cite{bartsch2001catch, chai2003thumbnailing, muller2012robust}, for finding the emotional ``highlights'' of a piece~\cite{huang2018pop}, or for enabling convenient navigation based on the song structure~\cite{goto2003smartmusickiosk}. However, most previous approaches to chorus detection and thumbnailing~\cite{goto2006chorus,eronen2007chorus,bartsch2001catch,chai2003thumbnailing} are unsupervised. They begin with an observation about what typifies chorus sections, and search for them on this basis: e.g., finding the loudest, most frequently repeated, and/or the most homogeneous section. Since the definition of `chorus' is a generalization that does not apply in all cases, even a perfectly-designed system of this type will fail to detect the chorus in many songs. A better approach may be to let a model \emph{learn} what defines `chorusness' from labeled examples; this would allow a system to leverage the timbral and spectral features identified by~\cite{vanbalen2013analysis} in a study of what acoustic features differentiate choruses. This approach, when applied to the related task of music boundary detection by~\cite{ullrich2014_ismir}, led to a huge leap in the state of the art. Prior segmentation algorithms would generally focus on a definable proxy task (e.g., detecting points of change or onsets of repetitions), assisted by sensible heuristics (e.g., rounding boundary estimates to the nearest downbeat). 
In~\cite{ullrich2014_ismir}, by contrast, a convolutional neural network (CNN) was trained to detect whether the center of a 16-second input is a boundary; when its output was post-processed with an appropriate threshold, this approach demonstrated a 10\% improvement in f-measure over the state of the art. We propose a similar approach: train a neural network to predict the ``chorusness'' of an excerpt directly from the audio, and without the context of the rest of the song. We train a binary classifier to predict the ``chorusness'' of each point in a window, and slide this window throughout the song to obtain a chorus probability curve. However, this leaves the problem of finding an appropriate threshold for post-processing. To ease this, we propose to jointly model the chorus activation and boundary activation curves, so that the loss on the signals around the boundaries is naturally emphasized. At the inference phase, it also eases the process of converting the raw probability curve to a binary output for a song. Chorus detection is clearly related to two tasks with a long tradition of MIR research: thumbnailing and music structure analysis (MSA)~\cite{muller2015a}. The objective of thumbnailing is to find a short excerpt of a song that would be an effective preview. However, there is no definition of what makes a good preview; \cite{chai2003thumbnailing} cited several. In practice, thumbnailing systems are evaluated by testing how often they select all or part of a chorus~\cite{bartsch2001catch}, or whichever segment is repeated most often~\cite{muller2012robust}. Recently, \cite{huang2018pop} proposed a novel, related objective---to find the emotional highlights of pop songs---and evaluated their system based on whether it captured the choruses, which were assumed to correspond to the highlights, but their system used a neural network trained to detect emotion, not choruses. 
In music structure analysis, it is assumed that one family of segments corresponds to the chorus, but predicting which one is only rarely attempted. We are aware of three prior systems: \cite{maddage2004content}, who assumed a highly restricted template for song structures and used heuristics to predict labels; \cite{paulus2010improving}, who paired a standard structure analysis system with an HMM trained to label the sections; and \cite{shibata2020music}, published very recently, who proposed a hierarchical generative model (with section parts generating chord progressions, and these in turn generating observed feature sequences). This last model benefits from supervision, but still relies on a hand-set strategy of detecting homogeneity and repetitions, based on handcrafted features (chroma and MFCCs). The lack of attention paid to chorus detection may be due to the difficulty of obtaining sufficient training data. SALAMI~\cite{smith2011design} contains 1446 songs, but these come from diverse genres, so it may be difficult to learn a coherent notion of ``chorusness'' from it. Introduced in 2019, the Harmonix Set~\cite{NietoISMIR2019} contains 912 songs, 888 with ``chorus'' sections; it is the most frequent label, with over 3100 choruses altogether, which is 41\% more than the ``verse'' instances. We also have the annotated chorus locations for an internal dataset (denoted as \emph{In-House}) of 2480 Asian pop songs. We use these three sources to train or evaluate our system. Since the data sources all have different properties, we investigate the cross-dataset performance of our system. In addition to the usefulness of detecting choruses for other applications, the annotations of choruses (that we depend on) seem more reliable than for other sections. 
In SALAMI, we observed that if one annotator perceives a segment starting at time $t$, there is a 66\% chance that the other annotator placed a boundary at the same time (within 0.5 seconds)---but this probability rises to 78\% if the boundary marks the start of a `chorus'. This greater agreement could be the result of choruses having more salient beginnings than other section types~\cite{bruderer2009perception}. Therefore, the reliability of the annotations makes a supervised system more feasible. \section{Proposed Approach}\label{sec:method} This section details the three main stages of the system. The overall pipeline is illustrated in Figure~\ref{fig:SYS}. \begin{figure} \centering \includegraphics[width=\columnwidth]{figs/flow_chart.png} \caption{The system diagram.} \label{fig:SYS} \end{figure} \subsection{Feature and Label Pre-processing} \label{sec:method_stage1} We use the mel-spectrogram of a song as input. The model takes a window of $N$ frames (defined as a \emph{chunk}) with a hop size of $S$ frames at a time. Note that $N$ is appropriately large to allow the model to see longer contexts of the audio. The annotations include the starting and ending timestamps of each chorus. For each song, we create two types of target labels: a \textit{chorus activation curve} $\mathbf{c}$ and a \textit{boundary activation curve} $\mathbf{b}$. For a song of length $L$, we define $\mathbf{c}=\left[c_1, \ldots, c_L\right]$, with $c_t=1$ if $t$ lies within a chorus section, and $c_t=0$ otherwise. To smooth the transitions, half of a 2-second wide Hann window is used to ramp from 0 to 1 prior to the chorus onset; a similar ramp down is added after the chorus offset. To create the boundary activation curve, we convert each boundary instant into a ``boundary section'' of duration 0.5 seconds, and then apply the same ramp up and down. Thus, each boundary produces a 2.5-second wide bump in $\mathbf{b}$. 
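A sketch of this label construction (ours, not the authors' code; the frame rate, the placement of the 0.5-second boundary section, and the edge handling are assumptions) is:

```python
# Build the chorus activation curve c and boundary activation curve b from
# annotations, with "half of a 2-second Hann window" ramps on each side.
# Assumed frame rate: fps frames per second (illustrative).
import numpy as np

def chorus_curve(length, chorus_spans, fps=10, ramp_sec=1.0):
    """1 inside each chorus span (in frames), Hann ramps before/after."""
    ramp = int(ramp_sec * fps)
    up, down = np.hanning(2 * ramp)[:ramp], np.hanning(2 * ramp)[ramp:]
    c = np.zeros(length)
    for start, end in chorus_spans:
        c[start:end] = 1.0
        n = start - max(0, start - ramp)            # ramp up to the onset
        if n:
            c[start - n:start] = np.maximum(c[start - n:start], up[-n:])
        m = min(length, end + ramp) - end           # ramp down after offset
        if m:
            c[end:end + m] = np.maximum(c[end:end + m], down[:m])
    return c

def boundary_curve(length, boundaries, fps=10):
    """Each boundary becomes a 0.5 s 'section' (assumed to start at the
    boundary frame) with the same ramps: a 2.5 s wide bump in total."""
    half = int(0.5 * fps)
    b = np.zeros(length)
    for t in boundaries:
        span = [(t, min(length, t + half))]
        b = np.maximum(b, chorus_curve(length, span, fps))
    return b

c = chorus_curve(100, [(30, 50)])    # 1.0 on frames 30..49, ramps outside
b = boundary_curve(100, [30])
```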
We use a wider target than in~\cite{ullrich2014_ismir} to tolerate greater deviations from the true boundaries in our case, since our goal is to predict the full extent of the chorus. In previous works~\cite{ullrich2014_ismir, grill2015_eusipco}, the system models the probability of a single target (i.e., a boundary) at the center of a chunk. By contrast, we design the system to model the probabilities of the entire activation curve in the chunk, with each probability aligned with a frame in the mel-spectrogram. This enables the network to explicitly learn the contextual dependency from the target activation curve. To sum up, a chunk-level training sample for the CNN is represented as $\{\mathbf{X}\in\mathbb{R}^{N\times D}, \mathbf{c}\in\mathbb{R}^{N}, \mathbf{b}\in\mathbb{R}^{N}\}$, where $\mathbf{X}$ is the mel-spectrogram using $D$ components. \subsection{CNN-based Model} \label{sec:method_stage2} The model is shown in the center part of Figure~\ref{fig:SYS}. To facilitate reproducibility, we adopt the model architecture proposed in~\cite{pons2018end}, which has shown excellent performance in music classification/tagging tasks. We make three modifications to meet the requirements of our task: First, we add a temporal max-pooling layer prior to the spectrum front-end model to sub-sample the input mel-spectrogram. We use a pool size of [6, 1] with a stride of [6, 1]. To ensure synchronization with the mel-spectrogram, we also apply median-pooling for $\mathbf{c}$ and $\mathbf{b}$ with a pool size of 6 with a stride of 6. Second, we replace the global pooling (for mean- and max-pooling over time) with a \emph{local pooling} at the penultimate layer of the back-end model. A pool size of [24, 1] and a stride of [12, 1] are used. This design serves the need to model the entire temporal activation curve. 
Third, we add a final dense layer to output the chorus and boundary predictions, denoted by $\mathbf{\hat c}\in\mathbb{R}^{N/6}$ and $\mathbf{\hat b}\in\mathbb{R}^{N/6}$, respectively. All the model parameters remain the same as in~\cite{pons2018end} except those mentioned above. To achieve multi-task learning, we calculate the losses for $\mathbf{\hat c}$ and $\mathbf{\hat b}$ separately. Then, the final loss is the weighted combination: $\alpha \cdot \text{loss}(\mathbf{\hat c}) + (1-\alpha) \cdot \text{loss}(\mathbf{\hat b})$, where $\alpha \in [0, 1]$ and $\text{loss}(\cdot)$ is a reduce-mean operation that averages the element-wise losses. \subsection{Output Merging and Post-processing} \label{sec:method_stage3} We obtain the chunks from a song using a large overlap (e.g., 95\%), so that during training, the model sees the labels multiple times with different shifts of the mel-spectrogram, which is expected to speed convergence. At the prediction stage, we can merge the predictions of multiple overlapping windows to improve robustness. We take the average of the overlapped probabilities to obtain the merged activation $y[t]$ at each global time step $t \in [1, \dots, L]$ of a song, which can be formulated as follows: \begin{equation} y[t] = \frac{1}{|Q(t)|}\sum_{i \in Q(t)} \hat y_i[m(i,t)], \end{equation} where $\left\{\hat y_i[t']\right\}$, $t' \in \{1,\dots,N\}$, is the predicted activation of the $i$-th chunk, $m(i,t)$ is the function that maps a global time step $t$ to a local time step $t'$ for the $i$-th chunk, and the function $Q(t)$ returns the set of chunks that are available at $t$. For example, using 95\% overlap, $|Q(t)|$ would be 20 for most of the song, ramping down to $|Q(1)| = |Q(L)| = 1$ at the start and end of the song. This method is used to obtain the final predicted curves for both chorus and boundary activations.
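The overlap-averaging merge above can be sketched as follows (a minimal illustration with toy constant predictions; the chunk length and hop here are hypothetical):

```python
import numpy as np

def merge_chunk_predictions(chunk_preds, starts, L):
    """Average overlapping per-chunk activations into one song-level curve."""
    total = np.zeros(L)
    count = np.zeros(L)
    for pred, s in zip(chunk_preds, starts):
        total[s:s + len(pred)] += pred
        count[s:s + len(pred)] += 1          # count[t] plays the role of |Q(t)|
    return total / np.maximum(count, 1)

# Toy example: constant 0.5 predictions with heavy chunk overlap
N, S, L = 20, 1, 60
starts = list(range(0, L - N + 1, S))
preds = [np.full(N, 0.5) for _ in starts]
y = merge_chunk_predictions(preds, starts, L)
```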
To obtain a binary prediction, we must apply some peak-picking or thresholding heuristics to the predicted activation curves. However, we observed in our pilot study that the overall probability values can be very low for some songs that the model is less confident about. Setting a global threshold to binarize the curves could thus lead to no choruses or boundaries being detected in these songs. To avoid this, we develop a more flexible method which makes use of the relative likelihoods of the segmented curve. The post-processing includes three phases: (1) select top $P$ peaks from the boundary curve to partition the song into segments; (2) calculate the chorus likelihood by averaging the chorus probabilities within each segment; (3) select the top $R$ segments (by likelihood) as the choruses, and assign the others as non-choruses. For the first phase, we follow the peak-picking method in \cite{ullrich2014_ismir} to select boundary candidates: any boundary having the maximum probability within a 10-second non-overlapped window throughout the curve is kept. Each candidate is assigned a boundary score by subtracting the average of the activation curve in the past 10 and future 5 seconds. We tailor $P$ and $R$ to the dataset, since the annotation guidelines and hence the typical number of segments for each dataset are different. For example, in Harmonix it is possible for two chorus sections to occur back-to-back with a boundary in between, but this arrangement was not possible in the In-House dataset. Accordingly, we calculate $\theta$, the average number of choruses per 3-minutes, from the training set as prior knowledge. We use it to set $P$ and $R$ as follows: $P = 2.5 \times R$ and $R = 2\times d \times (\theta/180)$, where $d$ is the test song's duration in seconds. Intuitively, $d \times (\theta/180)$ is the expected number of chorus sections for a test song. 
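The three post-processing phases can be sketched as follows (a simplified illustration: the peak picking here just keeps the top-$P$ frames of the boundary curve rather than applying the windowed scoring of \cite{ullrich2014_ismir}, and the frame rate is hypothetical):

```python
import numpy as np

def pick_choruses(chorus_curve, boundary_curve, theta, fps=10):
    """Sketch of the three post-processing phases (simplified peak picking)."""
    d = len(chorus_curve) / fps                  # song duration in seconds
    R = max(1, round(2 * d * (theta / 180.0)))   # number of chorus segments
    P = max(1, round(2.5 * R))                   # number of boundaries to keep

    # Phase 1: the top-P frames of the boundary curve partition the song
    bounds = sorted(np.argsort(boundary_curve)[-P:])
    bounds = [0] + list(bounds) + [len(chorus_curve)]
    segments = [(s, e) for s, e in zip(bounds[:-1], bounds[1:]) if e > s]

    # Phase 2: likelihood = mean chorus probability inside each segment
    likes = [chorus_curve[s:e].mean() for s, e in segments]

    # Phase 3: the top-R segments by likelihood are labeled as choruses
    top = sorted(np.argsort(likes)[-R:])
    return [segments[i] for i in top]

# Toy song: one clear chorus between frames 600 and 900 (10 fps, 3 minutes)
c = np.zeros(1800); c[600:900] = 0.9
b = np.zeros(1800); b[600] = 1.0; b[900] = 0.9
found = pick_choruses(c, b, theta=0.5)
```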
Our choice of $R$ thus reflects a strategy to slightly over-segment the song at first, which is reasonable since adjacent sections with the same predicted label will be merged. \section{Experiments}\label{sec:experiment} \subsection{Implementation Details} LibROSA \cite{mcfee2015librosa} is used to extract the log-scaled mel-spectrogram with $D$ = 96 components. The waveform is resampled at 32~kHz, and an FFT window of 2048 samples with a 1024-sample hop size is applied. For segmenting chunks, we adopt a window size of $N$ = 600 frames (19.2 seconds) with a hop size of $S$ = 30. In our preliminary experiments, we found the value of $S$ does not significantly affect the validation accuracy when it is sufficiently small (e.g., $< 50$). Since $S$ determines the amount of data to be processed, increasing it reduces the computational cost. We use $\alpha$ = 0.1, as we observed in validation that the boundary curve is more difficult to learn. We note that our model is not sensitive to $\alpha$ when $\alpha<0.5$. A smaller $\alpha$, which emphasizes learning the boundary curve, can produce better overall results. This observation makes intuitive sense: there are far fewer positive training examples for boundary frames than for chorus frames (the ratio is smaller than 0.1), so emphasizing this loss forces the model to be more careful with frames near boundaries, which eventually helps the post-processing make better decisions. Our model is implemented with TensorFlow 1.15 and trained using the Adam optimizer to minimize the cross-entropy loss. We use a mini-batch of 256 examples and apply batch normalization with momentum 0.9 at every layer of the network. The initial learning rate is 0.0005 and is halved every 15,000 training steps.
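The chunk segmentation described above can be sketched as follows (a random array stands in for the actual LibROSA log-mel features):

```python
import numpy as np

N, S, D = 600, 30, 96           # chunk size, hop size, mel components
mel = np.random.rand(3000, D)   # stand-in for a log-scaled mel-spectrogram

# Slice the song into overlapping chunks of N frames with hop S
starts = range(0, len(mel) - N + 1, S)
chunks = np.stack([mel[s:s + N] for s in starts])
```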
\subsection{Experimental Settings} We use three datasets to evaluate the proposed approach: the subset of SALAMI in the ``popular'' genre (denoted by \emph{SALAMI-pop}) \cite{smith2011design}; the Harmonix Set \cite{NietoISMIR2019} with $\theta$ = $\sim$3.7 (training sets); and an internal music collection (In-House) with $\theta$ = $\sim$2.2 (training sets). SALAMI-pop was used for testing only, so its $\theta$ was never computed or used; the other datasets were used to conduct 4-fold cross-validation and cross-dataset evaluations. SALAMI-pop contains 210 songs. Since some songs are annotated twice, we treat each annotation of a song as a separate test case, yielding 320 test cases. For both SALAMI and the Harmonix Set, we categorized ``pre-chorus'' as non-chorus (to disentangle the build from the true chorus) and ``post-chorus'' as chorus (since they seem more related to the chorus than to the rest of the song), and merged the segments accordingly. The In-House dataset was compiled for the purpose of training a chorus detector. It contains 2480 full tracks covering many genres of popular music, including Chinese-pop, J-pop, K-pop, hip-hop, rock, folk, electronic, and instrumental. At least one chorus section is annotated in each track. We study the performance of the raw chorus activation curve using the area under the ROC (AUC), and the final binary output using the pairwise F1 score, which is the standard metric for evaluating music structure analysis~\cite{muller2015a} and related tasks like beat/downbeat tracking~\cite{dixon2007evaluation}. Our main proposed model is named the \textbf{Temporal} model (Section~\ref{sec:method}), because it predicts the entire temporal activation of a chunk. We also introduce a variant, termed the \textbf{Scalar} model, that predicts a scalar chorus and boundary probability (two values) at the center of an input chunk (as in~\cite{ullrich2014_ismir,maezawa2019music}).
Specifically, we set $S$ = 6, use global pooling in the back-end model, and skip the output merging stage. To study the potential accuracy loss due to the post-processing design, we create \textbf{OracleBound}, which uses the ground-truth boundaries and uses the number of choruses for $R$ to parse the predicted chorus curve of the best-performing Temporal model. We compare these models to four open-source baseline systems that use existing approaches: pychorus~\cite{Jayaram2018blog}, which is based on~\cite{goto2006chorus}, and three algorithms implemented in MSAF~\cite{nieto2016systematic}. We optimized pychorus using the following heuristics: we modified it to output up to 4 top candidates (default is one); and, when no chorus is found with an initial reference duration (15 seconds), we iteratively reduce the duration by 3 seconds until it finds a chorus. MSAF provides implementations of many algorithms for segmenting songs and grouping segments. None give explicit function labels like ``verse'' or ``chorus,'' but we can take the predicted segment groups as chorus candidates, and try two heuristics to guess which group represents the choruses: (1) \textit{Max-freq}: choose the most frequent label as the chorus, and (2) \textit{Max-dur}: choose the segment group that covers the greatest duration of a song as the choruses. We use the CNMF \cite{nieto2013convex}, SCluster \cite{mcfee2014analyzing}, and VMO \cite{wang2016structural} algorithms, all with default settings. As \textit{Max-dur} consistently outperformed \textit{Max-freq} for each algorithm, we report these results only. 
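The \textit{Max-dur} heuristic can be sketched as follows (hypothetical segment times and labels):

```python
from collections import defaultdict

def max_dur_chorus(segments):
    """Max-dur heuristic: the segment-group label covering the greatest total
    duration is guessed to be the chorus.

    segments: list of (start_sec, end_sec, label) from a structure analyzer.
    """
    dur = defaultdict(float)
    for s, e, lab in segments:
        dur[lab] += e - s
    chorus = max(dur, key=dur.get)
    return chorus, [(s, e) for s, e, lab in segments if lab == chorus]

# Toy analysis output: group 'A' covers 90 s in total, group 'B' only 50 s
label, spans = max_dur_chorus(
    [(0, 30, 'A'), (30, 60, 'B'), (60, 120, 'A'), (120, 140, 'B')])
```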
\subsection{Results and Discussion}\label{sec:discussion} \begin{table} \begin{center} \begin{tabular}{|l|ccc|ccc|} \hline Metric & \multicolumn{3}{c|}{AUC} & \multicolumn{3}{c|}{F1} \\ \hline \hline Model~\textbackslash~Test & HS & IH & SP & HS & IH & SP \\ \hline Temporal-HS & \textbf{.827} & .767 & .723 & \textbf{.692} & .624 & .602 \\ Scalar-HS & \textbf{.826} & .728 & .706 & \textbf{.688} & .597 & .585 \\ Temporal-IH & .775 & \textbf{.868} & .736 & .630 & \textbf{.668} & .596 \\ Scalar-IH & .764 & \textbf{.860} & .735 & .616 & \textbf{.665} & .592 \\ \hline OracleBound & - & - & - & \textbf{.738} & \textbf{.825} & .709 \\ \hline pychorus & .629 & .585 & .557 & .466 & .378 & .330 \\ CNMF~\cite{nieto2013convex} & .570 & .524 & .525 & .479 & .367 & .416 \\ SCluster~\cite{mcfee2014analyzing} & .603 & .523 & .506 & .534 & .297 & .418 \\ VMO~\cite{wang2016structural} & .455 & .463 & .481 & .272 & .229 & .277 \\ \hline \end{tabular} \end{center} \caption{Mean score comparison on the three datasets: Harmonix Set (HS), In-House (IH), and SALAMI-pop (SP). Temporal-`X' and Scalar-`X' indicate the results of each model when trained on dataset `X'. Results in bold were obtained using 4-fold cross-validation. All results of the proposed models (upper 4 rows) are significantly greater than results of the existing systems (lower 4) with p-value $< {10}^{-20}$. } \label{tab:summary_result} \end{table} The results are summarized in Table \ref{tab:summary_result}, where each value is the mean score averaged over a complete dataset. To perform cross-dataset (CD) evaluation (e.g., Temporal-HS on IH or SP), we select the best-performing model in terms of F1 from the four models trained in the cross-validation (CV) (i.e., among folds of Temporal-HS on HS), and use it to test all the songs of the other dataset (i.e., IH or SP). 
We observe, first of all, that our proposed models outperform the existing ones by a large margin: the worst of the proposed models was, on average, 0.14 greater than the best of the baseline models, for both AUC and F1. This outcome validates our expectation that ``chorusness'' can be learned in a supervised fashion. Second, the Temporal models consistently outperform their Scalar counterparts; in particular, the difference between Temporal-HS and Scalar-HS is statistically significant (p-value $< {10}^{-5}$). This indicates that modeling a longer context of the activation is the better approach, perhaps because it exploits the temporal dependency of the activation curves. Third, although training on a dataset tends to improve performance on that dataset, we observe strong CD performance: the CD F1 scores all lie within 0.61 $\pm$ 0.03 across the three datasets, demonstrating the generalizability of our approach. Since $\theta$ is fixed by the training set, high CD performance indicates robustness to different values of $\theta$. On the other hand, the margin between our results and the OracleBound suggests that an orthogonal approach---e.g., one based on repetition---could improve the post-processing. \section{Conclusion and Future Work}\label{sec:conclusion} We have presented a supervised approach to detecting choruses in music audio. In experiments, our systems performed better than several existing ones, even when trained on other datasets. Given this promising result, we believe that more types of segment labels, such as verse, bridge, and solo, can be detected with supervised learning, and with less dependence on context. The current model is relatively simple: it considers only the local context of the audio signal. It could be improved with features and techniques that provide greater context, such as structure features~\cite{serra2012unsupervised}, recurrent architectures, and attention modelling~\cite{huang2018pop}. \bibliographystyle{IEEEbib}
\section{Introduction} Recent studies have shown that despite claims of spectral scarcity, the actual licensed spectrum remains unoccupied for long periods of time~\cite{FCC}. Thus, cognitive radio (CR) systems have been proposed~\cite{CR01} in order to efficiently exploit these spectral holes. CRs or secondary users (SUs) are wireless devices that can intelligently monitor and adapt to their environment, and hence are able to share the spectrum with the licensed primary users (PUs), operating whenever the PUs are idle. Three key design challenges are active topics of research in cognitive radio networks, namely, distributed implementation, spectral efficiency, and the tradeoff between sensing and spectrum access. Previous studies have tackled various aspects of spectrum sensing and spectrum access. In \cite{CS00}, the performance of spectrum sensing, in terms of throughput, is investigated when the SUs share their instantaneous knowledge of the channel. The work in \cite{DT00} studies the performance of different detectors for spectrum sensing, while in \cite{CS01} spatial diversity methods are proposed for improving the SUs' probability of detecting the PU. Other aspects of spectrum sensing are discussed in \cite{CS02} and \cite{CS04}. Furthermore, spectrum access has also received increased attention, e.g., \cite{SA00,SA01,SA02,SA03,SA04}. In \cite{SA00}, a dynamic programming approach is proposed to allow the SUs to maximize their channel access time while taking into account a penalty factor from any collision with the PU. The work in \cite{SA00} (and the references therein) establishes that, in practice, the sensing time of CR networks is large and affects the access performance of the SUs. In \cite{SA01}, the authors model the spectrum access problem as a non-cooperative game, and propose learning algorithms to find the correlated equilibria of the game.
Non-cooperative solutions for dynamic spectrum access are also proposed in \cite{SA02} while taking into account changes in the SUs' environment such as the arrival of new PUs, among others. When multiple SUs compete for spectral opportunities, the issues of fairness and efficiency arise. On one hand, it is desirable for an SU to access a channel with high availability. On the other hand, the effective achievable rate of an SU decreases when contending with many SUs over the most available channel. Consequently, efficiency of spectrum utilization in the system reduces. Therefore, an SU should explore transmission opportunities in other channels if available and refrain from transmission in the same channel all the time. Intuitively, diversifying spectrum access in both frequency (exploring more channels) and time (refraining from continuous transmission attempts) would be beneficial to achieving fairness among multiple SUs, in that SUs experiencing poorer channel conditions are not starved in the long run. The objective of this paper is to design a mechanism that enables fair and efficient sharing of spectral resources among SUs. We model spectrum access in cognitive radio networks as a repeated auction game with entry and monitoring costs. Auctioning the spectral opportunities is carried out repeatedly. At the beginning of each period, each SU that wishes to participate in the spectrum access submits a bid to a coordinator based on its view of the channel and past auction history. Knowledge regarding other secondary users' activities is limited due to the distributed nature of the network. The resulting formulation is thus a dynamic game with incomplete information. The bidder with the highest bid gains spectrum access. Entry fees are charged for all bidders who participate in the auction irrespective of the outcome of the auction. An SU can also choose to stay out (SO) of the current round, in which case no entry fee is incurred. 
At the end of each auction period, information regarding bidding and allocation are made available to all SUs, and in turn a monitoring fee is incurred. To achieve efficient bidding, a learning algorithm is proposed based on the outcome of past transactions. Each SU decides on local actions with the objective of increasing its long-term cost effectiveness. As demonstrated through extensive simulations, the proposed distributed scheme outperforms a myopic one-stage algorithm where an SU always participates in the spectrum access game in both single channel and multi-channel networks. A comment is in order on the feasibility of such an auction-based approach to spectrum access in practice. Due to commercial and industrial exploitation and different stake holders' interests, the functional architectures and cognitive signaling schemes are currently under discussion within standardization forums, including IEEE SCC 41 and ETSI TC RRS (Reconfigurable Radio Systems). Cognitive pilot channel (CPC) has gained attention as a potential enabler of data-aided mitigation techniques between secondary and primary communication systems as well as a mechanism to support optimized radio resource and data management across heterogeneous networks. In CPC, a common control channel is used to provide the information corresponding to the operators, Radio Access Technology and frequencies allocated in a given area. We can thus leverage the intelligence of the CPC coordinator and the control channel to solicit bidding and broadcast the outcome of auctions. The main contributions of this paper are: \begin{enumerate} \item We have formulated the spectrum access problem in cognitive radio networks as a repeated auction game. \item A distributed learning algorithm is proposed for single-channel networks, and a non-regret learning algorithm is investigated for multi-channel networks. \end{enumerate} The rest of the paper is organized as follows. 
In Section~\ref{sec:model}, the system model and terminology are introduced. Mechanism design of the repeated auction with learning is presented in Section~\ref{sec:algorithm}. Simulation results are given in Section~\ref{sec:simulation} followed by conclusions and a discussion of future work in Section~\ref{sec:conclusion}. \section{Physical Layer and System Model} \label{sec:model} We consider a cognitive radio network consisting of $K$ channels to be occupied by $N$ SUs who compete repeatedly for access to the channels at each discrete time $t$. At time $t$, the $i^{th}$ SU can reasonably estimate its channel rate $\theta_{i,k}^t$ in the $k^{th}$ channel while having no knowledge of that of other SUs. In other words, each SU has imperfect information. We assume that both $N$ and $K$ are known to all SUs. The primary user's activity follows a Bernoulli distribution, i.e., at time $t$ the $k^{th}$ channel is occupied with probability $\Theta_k$. Without loss of generality, all secondary users use a common transmit power $P_0$ with a thermal noise level $\sigma^2$ at the basestation. The channel gain for the $i^{th}$ secondary user is assumed to be $G_ih_{i,k}^t$, where $G_i$ is the propagation path loss and $h_{i,k}^t$ follows the Rayleigh fading distribution. The rate for the $i^{th}$ user on the $k^{th}$ channel at time $t$ can be written as \begin{equation} \theta_{i,k}^t=W\log_2\left(1+\frac{P_0G_ih_{i,k}^t}{\sigma^2}\right), \end{equation} where $W$ is the bandwidth for each channel. All channels are assumed to have the same bandwidth for ease of exposition.
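A minimal sketch of the per-channel rate computation (variable names mirror the symbols above; the numerical values are purely illustrative):

```python
import math

def su_rate(W, P0, G, h, sigma2):
    """Shannon rate of an SU on one channel: W * log2(1 + P0*G*h / sigma2)."""
    return W * math.log2(1.0 + P0 * G * h / sigma2)

# With all factors equal to 1, the SNR is 1 and the rate is W * log2(2) = W
r = su_rate(W=1.0, P0=1.0, G=1.0, h=1.0, sigma2=1.0)
```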
\begin{figure}[htb] \centerline{\epsfig{figure=system_model.eps,width=3.5in}} \caption{Illustration of the System Model} \label{fig:system_model} \end{figure} At time $t$, an SU may incur two types of costs, namely, the access cost $c_t$, which accounts for the energy expenditure needed for spectrum access; and the monitoring cost $e_t$, which is the cost of sensing and subscribing to the control channel (e.g., CPC) to obtain information regarding past auctions. The spectrum access among SUs is modeled as a repeated auction. The access cost, also called the entry fee, is charged only when the user decides to participate in the auction at time $t$. On the other hand, SUs always pay the monitoring cost regardless of their decisions. At the beginning of a slot $t$, an SU chooses whether to stay out (SO) or participate in spectrum access. If the latter option is chosen, the SU (or bidder) sends a confidential message to the coordinator (or auctioneer) containing its bid. Let the bid submitted by SU $i$ be $\textbf m_{i}^t$, which is a $K\times 1$ vector with component $m_{i,k}^t$ for the $k^{th}$ channel. We define the set of actions of user $i$'s opponents as \begin{equation} \textbf m_{-i}^t=\{\textbf m_{j}^t|j\in N\backslash i\}. \end{equation} The cost incurred is thus $$e_t+c_t 1(m_{i,k}^t\neq SO),$$ where $1(\cdot)$ is the indicator function. In each round, the bidder with the maximum bid wins and is granted the spectrum access opportunity. A payment is incurred accordingly. There are two key differences compared to existing spectrum access models, where upon sensing an idle channel, all SUs contend for spectrum access. First, an SU may choose to stay out if participating in the spectrum access game is deemed unfavorable (because of a low data rate or a large number of contending SUs). Second, the transmission opportunity in each available channel is granted only to a single SU after the auction. Therefore, no further contention occurs.
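A single round of this process can be sketched as follows (a simplified, single-channel illustration; the second-price payment rule assumed here is the one adopted later in the paper, and the SU identifiers are hypothetical):

```python
def run_round(bids, pu_present=False):
    """One auction round: bids maps SU id -> bid, or None for stay out (SO).

    The highest bidder wins; under the second-price rule it pays the
    second-highest bid (0 if it is the sole bidder). No allocation is made
    while the primary user occupies the channel.
    """
    active = {i: bid for i, bid in bids.items() if bid is not None}
    if pu_present or not active:
        return None, 0.0
    order = sorted(active, key=active.get, reverse=True)
    winner = order[0]
    payment = active[order[1]] if len(order) > 1 else 0.0
    return winner, payment
```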
Each SU is assumed to follow a symmetric strategy based on its local state and the information learned. Figure \ref{fig:system_model} gives an illustration of the system model. An analogy can be drawn with a casino, in which different gamblers select which table to play and how much to bet. Each secondary user must decide which channel to sense and bid for, and how much to bid. These two issues will be addressed in the following sections for single-channel (i.e., $K=1$) and multi-channel (i.e., $K>1$) cognitive radio networks, respectively. \section{Repeated Auction with Learning} \label{sec:algorithm} In this section, we investigate the spectrum access problem among multiple SUs in cognitive radio networks. We first discuss the auction mechanism and then define the utility function. Finally, an efficient bidding mechanism in repeated auctions with learning is proposed. \subsection{Mechanism} Recall from Section~\ref{sec:model} that at the beginning of a slot $t$, an SU decides to either place a bid, or stay out and monitor the results. Based on the SUs' actions, the allocation strategy at time $t$ for channel $k$ can be written as $$X_t(k) \stackrel{\Delta}{=} \{\chi_{1,k}^t, \ldots, \chi_{n,k}^t \mid \chi_{i,k}^t \in \{0, 1\} \wedge \sum_{i}{\chi_{i,k}^t} \le 1\}, \forall k.$$ If SU $i$ chooses to stay out, then its allocation equals zero, i.e., \begin{equation} \chi_{i,k}^t(m_{i,k}^t=SO)=0. \end{equation} The SU with the highest bid wins the right to access the channel, provided the primary user is absent, i.e., \begin{equation} \chi_{i,k}^t(m_{i,k}^t\neq SO)=\left\{ \begin{array}{ll} 1, & m_{i,k}^t>m_{j,k}^t, \forall j\neq i,\\ & \mbox{and the primary user is absent};\\ 0, & \mbox{otherwise}. \end{array} \right. \end{equation} The winner pays \begin{equation} p_{i,k}^t=\left\{ \begin{array}{ll} 0, & m_{i,k}^t=SO, \mbox{ or } \exists j, m_{j,k}^t>m_{i,k}^t,\\ & \mbox{or the primary user is present};\\ \psi_{i,k}^t, & \mbox{otherwise} \end{array} \right.
\end{equation} for its bid, where \begin{equation} \psi_{i,k}^t=\left\{ \begin{array}{ll} m_{i,k}^t, & \mbox{First Price Auction};\\ \max(\textbf m_{-i,k}^t), & \mbox{Second Price Auction}. \end{array} \right. \end{equation} For ease of presentation, a second price auction is assumed in the remaining discussion of the paper. The auction mechanism can be written as follows: \begin{enumerate} \item SU $i$ observes its current valuation (rate) $\theta_{i,k}^t$; \item SU $i$ decides $m_{i,k}^t$; \item The mechanism implements $\chi_{i,k}^t$ and $\psi_{i,k}^t$; and \item SU $i$ observes $\chi_{i,k}^{t}$ and $p_{i,k}^t$. \end{enumerate} Mechanisms and results for ``one-shot'' auctions with and without entry fees have been well established in the literature~\cite{RePEc:nwu:cmsems:1096}. Typically, a symmetric and known strategy is assumed. In our formulation, at each stage of the auction, an SU decides on its action according to the bidding history monitored thus far. The number of participants varies from stage to stage depending on each SU's valuation and its knowledge regarding other players. \subsection{Utility Function} To assess the expenditure in the course of the game, we define the accumulated cost for SU $i$ at time $t$ as \begin{equation} c_{i}^t(h^{t-1}_i)=\sum_{k=1}^K \sum_{\tau=1}^{t-1} \left[ p_{i,k}^\tau\chi_{i,k}^\tau+c_\tau 1(m_{i,k}^\tau \neq SO)+ e_\tau\right], \end{equation} where $h^{t-1}_i$ is the bidding history observed by SU $i$ up to time $t$. The cost includes the payment for spectrum access opportunities and the entry and monitoring fees, accumulated over the history and across the different channels. The accumulated reward of SU $i$ is given by \begin{equation}\label{eqn:reward} r_{i}^t(h^{t-1}_i)=\sum_{k=1}^K \sum_{\tau=1}^{t-1} \theta_{i,k}^\tau \chi_{i,k}^\tau. \end{equation} The utility is thus defined as the accumulated reward over the total cost, i.e., \begin{equation}\label{eqn:utility} \gamma_{i}^t(h^{t-1}_i)=\frac{r_{i}^t(h^{t-1}_i)}{c_{i}^t(h^{t-1}_i)}.
\end{equation} The utility function is essentially the revenue-to-cost ratio of the SU's actions over time. An SU will try to maximize its utility. Intuitively, when an SU's valuation is low compared with others', it is beneficial for the SU to stay out so that the entry cost is not incurred unnecessarily. On the other hand, staying out all the time leads to zero accumulated revenue and starvation of the SU, and thus should be avoided. Optimizing (\ref{eqn:utility}) is difficult even in a centralized manner due to the large decision space. Therefore, distributed heuristic learning algorithms are warranted to determine $m_{i,k}^t$ at each SU individually. At time $t$, an SU decides whether to participate in the auction and, if so, its bid. If SU $i$'s decision is to participate (i.e., $m_{i,k}^t \neq SO$), it is straightforward to prove that SU $i$'s dominant strategy is to bid its own valuation in the second price auction. More formally, we have \begin{proposition} The equilibrium of the repeated auction with utility function \eqref{eqn:utility} consists of each bidder using the following strategy at time $t$: \begin{equation} m_{i,k}^t = \left\{\begin{array}{cc} SO & f(\theta_{i,k}^t, h_i^{t-1}) \ge 0 \\ \theta_{i,k}^t & else, \\ \end{array}\right. \end{equation} \end{proposition} where $f(\theta_{i,k}^t, h_i^{t-1})$ is a function of SU $i$'s current valuation and bidding history. The above strategy implies a thresholding criterion for participating in the game. The form of $f(\theta_{i,k}^t, h_i^{t-1})$ differs in the single-channel and multi-channel scenarios, and will be discussed in more detail in subsequent sections. \subsection{Repeated Auction in a Single Channel} When there is only a single channel, we can drop $k$ in the notation. An SU stays out of the bidding if it deems that participation is likely to reduce its payoff.
Formally, $m_{i}^t = SO$, if \begin{equation} \gamma_{i}^{t+1}(\theta_i^t,h^{t-1}_i:m_{i}^t = SO) \ge \mathbb{E}_{\theta_{-i}^t}\left(\gamma_{i}^{t+1}(\theta_i^t,h^{t-1}_i:m_{i}^t = \theta_i^t)\right). \label{eq:thresh} \end{equation} In \eqref{eq:thresh}, the expectation is taken over all possible valuations of SU $i$'s opponents. In the first auction, no past history is available. The same thresholding function is applied at each SU under the assumption that the valuations of SUs are independent and identically distributed with cumulative distribution function (CDF) $F(\cdot)$ and probability density function (PDF) $f(\cdot)$. Therefore, the CDF and PDF of the largest valuation among the $N-1$ other users are $G(y) = F(y)^{N-1}$ and $g(y) = (N-1)F(y)^{N-2}f(y)$, respectively. Let $r_i^1 = c_i^1 = 1$, for all $i$. The strategy for the first auction is stated as follows. \begin{proposition} In the first auction, $m_i^1 = SO$ if and only if $\theta_i^1 < \overline{\theta}$, where $$\overline{\theta}G(\overline{\theta}) = \frac{c_1}{1+e_1}. $$ \label{prop:first} \end{proposition} \begin{proof} Since $\overline{\theta}$ is the lowest valuation at which any SU participates in the auction, an SU $i$ with valuation $\overline{\theta}$ wins the auction only when all other SUs have valuations less than $\overline{\theta}$. Therefore, $$\gamma_{i}^{1}(\theta_i^1:m_{i}^1 = SO) = \frac{1}{1+e_1},$$ and $$\mathbb{E}_{\theta_{-i}^1}\left(\gamma_{i}^{1}(\theta_i^1:m_{i}^1 = \overline{\theta})\right) = \frac{1+G(\overline{\theta})\overline{\theta}}{1+e_1+c_1}.$$ Equating the two expressions yields $$\overline{\theta}G(\overline{\theta}) = \frac{c_1}{1+e_1}. $$ \end{proof} Direct evaluation of \eqref{eq:thresh} is difficult after the first auction. This is because the accumulated reward, cost, and current valuation are only available to each SU individually (although the auctioneer provides the information regarding the highest bid and associated payment at the end of each stage).
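The first-auction threshold can be computed numerically. As an illustrative sketch (not the authors' code), assume valuations uniform on $[0,1]$, so $F(y)=y$ and $G(y)=y^{N-1}$, and write the fee-dependent right-hand side of the threshold equation as a generic constant \texttt{target}:

```python
def first_round_threshold(N, target, iters=60):
    """Bisection for theta satisfying theta * G(theta) = target, assuming
    uniform valuations on [0, 1]: G(y) = y**(N-1), so theta**N = target."""
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mid ** N < target:    # theta*G(theta) is increasing in theta
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# With N = 4 users and target = 0.0625 the root is 0.5 (0.5**4 = 0.0625)
theta_bar = first_round_threshold(N=4, target=0.0625)
```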
Next, we introduce a simple heuristic to approximate the right-hand side of \eqref{eq:thresh}. SU $i$ maintains a private threshold value $\overline{\theta_i}$, initialized according to Proposition~\ref{prop:first}. At time $t$, SU $i$ updates $\gamma_{i}^{t}(\theta_i^t:m_{i}^t = SO) = \frac{r_{i}^t}{c_{i}^t + e_t}$. Furthermore, $$\mathbb{E}_{\theta_{-i}^t}\left(\gamma_{i}^{t}(\theta_{i}^t:m_{i}^t = \theta_i^t)\right) \approx \frac{r_{i}^t + \theta_i^t-\overline{\theta_i}}{c_{i}^t + e_t + c_t}.$$ SU $i$'s action is thus \begin{equation} m_{i}^t = \left\{\begin{array}{cc} SO & \frac{r_{i}^t}{c_{i}^t + e_t} > \frac{r_{i}^t + \theta_i^t-\overline{\theta_i}}{c_{i}^t + e_t + c_t} \\ \theta_{i}^t & else \\ \end{array}\right.. \end{equation} At the end of stage $t$, the SU observes the largest bid and the associated payment. If SU $i$ chooses to stay out but the payment of the winner is less than $\overline{\theta_{i}}$, then $\overline{\theta_{i}}$ is set to the payment amount. Otherwise, $\overline{\theta_{i}}$ remains the same. On the other hand, if SU $i$ participates in the auction but either loses the auction or is required to make a payment higher than $\overline{\theta_{i}}$, its $\overline{\theta_{i}}$ is set to the payment amount. To avoid fluctuations of the $\overline{\theta_{i}}$ estimate, a moving average of the old and new values can be applied. The above mechanism is summarized in Algorithm~\ref{algo_singlechannel}.
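The decision rule and the moving-average threshold update above can be sketched in plain Python as follows (a simplification; the variable names and the `None`-for-stay-out convention are ours):

```python
def decide(theta, r, c_acc, e_t, c_t, thresh):
    """Compare the utility of staying out against the heuristic estimate
    of the utility of bidding; return None (stay out) or the bid, which
    is the current valuation theta."""
    stay_out = r / (c_acc + e_t)
    bid = (r + theta - thresh) / (c_acc + e_t + c_t)
    return None if stay_out > bid else theta


def update_threshold(thresh, bid, won, payment, alpha=0.05):
    """Moving-average update of the private threshold after the
    auctioneer announces the winning payment p_m(t)."""
    stayed_out = bid is None
    if stayed_out and payment < thresh:
        return alpha * payment + (1.0 - alpha) * thresh
    if not stayed_out and (not won or payment >= thresh):
        return alpha * payment + (1.0 - alpha) * thresh
    return thresh
```

The two update branches mirror the two cases in the text: a stay-out SU lowers its threshold when the winner paid less than it, while a participating SU adjusts toward the payment when it loses or overpays.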
\begin{algorithm} \label{algo_singlechannel} \SetKwData{Left}{left} \SetKwInOut{Input}{input} \SetKwInOut{Output}{output} \SetKwInOut{Init}{init} \SetKwFor{For}{for}{do}{endfor} \caption{Strategy in single-channel spectrum access} \Input{Number of SUs $N$, monitoring fee $e_t$ and entry fee $c_t$ at time $t$} \Init{Set $\overline{\theta_i} = \overline{\theta}$ s.t. $\overline{\theta}G(\overline{\theta}) = \frac{c_1}{1+e_1}$} \BlankLine {$a = \gamma_{i}^{t}(\theta_{i}^t:m_{i}^t = SO)$}\; {$b = \frac{r_{i}^t + \theta_i^t-\overline{\theta_i}}{c_{i}^t + e_t + c_t}$}\; \If{$a > b$ or PU detected} { $m_{i}^t = SO$; } \Else{ $m_{i}^t = \theta_{i}^t$ } Let the maximum payment at stage $t$ be $p_m(t)$\; \If{$m_{i}^t = SO$ and $p_m(t) < \overline{\theta_i}$} { $\overline{\theta_i} = \alpha\, p_m(t) + (1-\alpha)\overline{\theta_i}$ } \If{$m_{i}^t \neq SO$ and ($\chi_{i}^t = 0$ or $p_m(t) \ge \overline{\theta_i}$)} { $\overline{\theta_i} = \alpha\, p_m(t) + (1-\alpha)\overline{\theta_i}$ } \end{algorithm} In Algorithm~\ref{algo_singlechannel}, $\chi_{i}^t$ is the indicator that SU $i$ wins the auction at stage $t$, and $\alpha$ is the moving-average weight. \subsection{Non-Regret Algorithm for the Multi-channel Case}\label{sec:multi-channel} In this section, we address the spectrum access problem in multi-channel networks. A class of algorithms called regret matching \cite{Hart_Mas-Colell00} is explored. The resulting stationary solution of the learning algorithm exhibits no regret by setting the probability of a particular action proportional to the ``regrets'' for not having played other actions.
In particular, for any two distinct actions $\textbf{m}_i^t\neq \bar {\bf m}_i^{t}$ at every time $t$, the regret of SU $i$ at time $t$ for not playing $\bar{\bf m}_i^t$ is \begin{equation}\label{eqn:Regret2} \mathbb{R}_i^t(\textbf{m}_i^t,\bar {\bf m}_i^{t}):=\max\{D_i^t(\textbf{m}_i^t,\bar {\bf m}_i^{t}),0\}, \end{equation} where \begin{equation}\label{eqn:Regret1} D_i^t(\textbf{m}_i^t,\bar {\bf m}_i^{t})=\frac{1}{\nu}\sum_{t-\nu \leq \tau < t} \left(r^\tau_i(\bar {\bf m}_i^{t},\textbf{m}_{-i}^{\tau})-r^\tau_i(\textbf{m}^\tau_i,\textbf{m}^\tau_{-i})\right), \end{equation} with $\nu$ denoting the size of the time window. $D_i^t(\textbf{m}_i^t,\bar {\bf m}_i^{t})$ has the interpretation of the average gain in payoff that SU $i$ would have obtained, had it bid $\bar {\bf m}_i^{t}$ every time in the past instead of choosing $\textbf{m}_i^t$. The quantity $\mathbb{R}_i^t(\textbf{m}_i^t,\bar {\bf m}_i^{t})$ can be viewed as a measure of the average regret. In the context of spectrum access in multi-channel networks, the alternative actions correspond to participating in the auction game in different channels\footnote{Each user decides which channel to bid on, and then uses the value $\theta_{i,k}^t$ to bid on that channel.}. The probability $P_{i,k}^t$ for SU $i$ to join the auction in channel $k$ is a linear function of the regret. Here $\textbf P_i^t$ is a $K$-by-$1$ probability vector with $\sum_{k=1}^K P_{i,k}^t=1$. Define $\textbf 1 (\textbf m_i^t)$ as the indicator vector whose $k$th entry indicates whether SU $i$ competes in the $k$th channel. The detailed regret-matching algorithm is given in Algorithm \ref{table:regretmatching}. The complexity of the algorithm is $O(K)$ and it can be implemented distributively. Furthermore, its convergence has been proved in the literature \cite{Hart_Mas-Colell00}. Once SU $i$ chooses the channel to access, its action is decided by Algorithm~\ref{algo_singlechannel}.
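A single regret-matching update of the channel-selection probabilities can be sketched as below. We assume the counterfactual payoffs over the last $\nu$ stages are available for every channel (in practice they must be estimated from the broadcast bidding history); the function name and array layout are our own:

```python
import numpy as np


def regret_matching_probs(payoffs, played, kappa):
    """One regret-matching step over K channels.

    payoffs: (nu, K) array of the payoff the SU would have received on
             each channel in each of the last nu stages.
    played:  index of the channel actually used in the window.
    kappa:   normalization constant, chosen large enough that the
             returned probability vector has nonnegative entries.
    """
    avg = payoffs.mean(axis=0)                   # time-averaged payoff per channel
    regret = np.maximum(avg - avg[played], 0.0)  # average regret per channel
    p = regret / kappa                           # probability ~ regret, k != played
    p[played] = 0.0
    p[played] = 1.0 - p.sum()                    # leftover mass on the played channel
    return p
```

Channels with larger regret are explored with proportionally higher probability, and all remaining probability mass stays on the channel currently in use.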
Note that even though an SU can access only one channel at a time, the bidding history on all data channels is made available through the control channel to all SUs. \begin{algorithm} \SetKwData{Left}{left} \SetKwInOut{Input}{input} \SetKwInOut{Output}{output} \SetKwInOut{Init}{init} \SetKwFor{For}{for}{do}{endfor} \caption{Non-regret learning algorithm for the multi-channel case}\label{table:regretmatching} \Init{The probability vector of SU $i$, $\textbf P_{i}^1$, is set arbitrarily} \ForEach{$t=1,2,3,...$} { Find $D_i^t(\textbf{m}_i^t,\bar {\bf m}_i^{t})$ as in (\ref{eqn:Regret1})\; Find the average regret $\mathbb{R}_i^t(\textbf{m}_i^t,\bar {\bf m}_i^{t})$ as in (\ref{eqn:Regret2})\; $\textbf P_i^{t+1}(\bar {\bf m}_i^t ) =\frac{1}{\kappa}\mathbb{R}_i^t(\textbf{m}_i^t,\bar {\bf m}_i^{t})\textbf 1 (\bar{\bf m}_i^t), \forall \bar{\bf m}_i^t\neq \textbf{m}^t_i$\; $\textbf P_i^{t+1}(\textbf m_i^t)=\left[ 1-\sum_{\bar {\bf m}_i^t\neq \textbf{m}^t_i}\textbf P_i^{t+1}(\bar {\bf m}_i^t )\right] \textbf 1 (\textbf m_i^t) $,\ where $\kappa$ is a constant chosen sufficiently large that all probabilities are nonnegative; } \end{algorithm} \begin{figure}[htb] \centerline{\epsfig{figure=r_vs_t_2user.eps,width=8cm}} \caption{Convergence of repeated auction games under the proposed and myopic schemes; 2 SUs, 1 channel.} \label{fig:r_vs_t_2user} \end{figure} \section{Simulation}\label{sec:simulation} In this section, we investigate the performance of the proposed schemes by simulations. We construct a network of dimensions $100$~m by $100$~m, in which the SUs are randomly placed. All SUs transmit to a base station at a fixed location $1000$~m away from the center of the network. The propagation loss exponent is set to $3$. The common transmission power level of all SUs is set to 100~mW and the noise level to $-90$~dBm. A unit bandwidth is assumed, with a frame length of 100~$\mu$s and a Doppler frequency of 100~Hz. We set $\alpha=0.05$ and $\nu=10$.
The proposed schemes are compared to a myopic scheme, in which SUs always participate in the auctions. \begin{figure}[htb] \centerline{\epsfig{figure=r_vs_ct_e_2user.eps,width=8cm}} \caption{Effects of monitoring and entry costs; 2 SUs.} \label{fig:r_vs_ct_e_2user} \end{figure} \paragraph*{Single channel} First, we consider a simple 2-user case to understand the convergence of the proposed algorithm. Figure \ref{fig:r_vs_t_2user} shows a snapshot of the change of the utility function $\gamma_{i}^t$ for user $i$ over time. The entry and monitoring fees are fixed at $c_t=10$ and $e_t=1$, respectively. From the figure, we can see that the proposed scheme converges quickly and then tracks the changes in the channel. Furthermore, compared to the myopic scheme in which the SUs always bid, the utilities attained are higher for both users. This is because the SUs can decide whether or not to bid based on their valuations and the outcomes of past auctions. Between time $4000$ and $6000$, a primary user is active, and all SUs stay out of the auction but still pay the monitoring fee. The average value of $\gamma$ decreases during that period of time. After the primary user stops transmitting, the auction game resumes. Since the effects of a primary user are very predictable, in the remaining simulations we assume the primary user is always idle. \begin{figure}[htb] \centerline{\epsfig{figure=fairness.eps,width=8cm}} \caption{Fairness achieved by SUs} \label{fig:fairness} \end{figure} In Figure \ref{fig:r_vs_ct_e_2user}, we demonstrate the effects of entry and monitoring costs on performance. The proposed scheme is shown to achieve better performance in all cases, with an average gain of up to 15\%. We can see that when the monitoring fee is fixed, the average utility decreases as the entry fee ($c_t$) increases. This is expected, as it is more expensive to participate in the auction.
Furthermore, the gap between the utilities attained by the proposed scheme and the myopic scheme also increases. This is because in the proposed scheme SUs are selective and participate in the auction only if they are likely to win. The myopic scheme incurs high losses in revenue as the result of a higher entry cost. In Figure \ref{fig:fairness}, we show the fairness achieved by the proposed and myopic schemes. We adopt Jain's fairness index~\cite{jain84}: $$F = \frac{(\sum_{i=1}^N{\gamma_i})^2}{N\sum_{i=1}^N{\gamma_i^2}}.$$ Clearly, $F$ lies between $0$ and $1$; the larger the value, the better the fairness. We can see that the proposed scheme results in a fairer resource allocation than the myopic bidding scheme. As the entry cost increases, the fairness of the myopic scheme decreases. This is because users experiencing worse channels are repeatedly penalized by losing the game and paying entry fees. In comparison, when $e = 10$ and $e=5$, the proposed scheme achieves slightly better fairness as the entry cost $c$ increases. \begin{figure}[htb] \centerline{\epsfig{figure=multiuser.eps,width=8cm}} \caption{Valuation and bids of SUs} \label{fig:multiuser} \end{figure} \begin{figure}[htb] \centerline{\epsfig{figure=gamma_no_users.eps,width=8cm}} \caption{Utility achieved as the number of SUs changes} \label{fig:gamma_no_users} \end{figure} In the next set of experiments, we set the number of SUs to $N=16$. Figure \ref{fig:multiuser} ($c_t=e_t=5$) compares the SUs' average valuation $R$, average bid $b$ and instantaneous bid at time $5000$. Several observations are in order. First, users with a higher average valuation generally also place higher average bids, though not always. This is because the average bid also includes the case in which an SU stays out (treated as a zero bid).
Second, as expected, not all users bid in each slot; only the users with low cost and a high chance of winning participate. \begin{figure}[htb] \centerline{\epsfig{figure=fairness_no_users.eps,width=8cm}} \caption{Fairness achieved as the number of SUs changes} \label{fig:fairness_no_users} \end{figure} \begin{figure}[htb] \centerline{\epsfig{figure=two_user_two_channel.eps,width=8cm}} \caption{Utility in two-SU two-channel networks under different algorithms} \label{fig:multi_channel} \end{figure} In Figure \ref{fig:gamma_no_users} and Figure \ref{fig:fairness_no_users}, we show the utility and fairness as the number of SUs varies from 2 to 16. The costs are set as $c_t=10$ and $e_t=1$, respectively. We can see that as the number of SUs increases, the utility decreases due to the limited radio resources. The fairness also decreases, since there are more chances for some users to dominate when the number of users is large. The proposed scheme outperforms the myopic scheme in both utility and fairness. The gain in utility ranges from 12\% to 25\%. \paragraph*{Multiple channels} In this set of experiments, we study the performance and convergence of a two-user two-channel case. The parameters are set as $c_t=5$, $e_t=5$, and $\nu=10$. Three schemes are compared, namely, Best Channel Bidding (BCB), Genie Aided (GA), and Non-Regret Learning (NRL). In BCB, the users choose to bid on the channel with the highest channel gain. In GA, a genie tells the SUs not to bid on channels that they would not win and instead to bid on the other channels. The GA solution is thus a performance upper bound for practical systems. We can see that BCB has the worst performance, since the SUs might bid on the same channel while the other channels are vacant. The proposed NRL solution, on the other hand, performs close to the GA solution and can be easily implemented in a distributed manner.
\section{Conclusions}\label{sec:conclusion} In this paper, we have investigated the problem of spectrum access in single- and multi-channel cognitive radio networks. A repeated-auction based framework has been adopted. In single-channel spectrum access, SUs selectively participate in the auction based on their valuations and past auction history. This scheme has been shown to outperform a myopic scheme in which SUs always compete. In multi-channel networks, a non-regret approach has been proposed. Its performance has been shown to be significantly better than that of a naive greedy solution and to come close to that of the genie-aided solution. As future work, we plan to improve the convergence speed and optimality of the proposed scheme. Also of interest is the study of robust mechanisms for situations in which the monitored information may be inaccurate. \section*{Acknowledgments} This research was supported in part by the Air Force Office of Scientific Research under Grant FA 9550-08-1-0480 and the National Science Foundation under Grant CNS-0832084. \bibliographystyle{abbrv}
\section{Introduction} \label{sec:intro} Disorder is ubiquitous in materials and can drastically affect their properties, especially their electronic structure and transport properties. It can induce localization of electrons and lead to a metal-insulator transition, which is known as the Anderson localization transition~\cite{p_anderson_1958,e_abrahams_1979}. Signatures of Anderson localization are reported in many materials such as doped semiconductors~\cite{a_richardella_10,m_winkler_2011}, polycrystalline phase-change materials~\cite{t_siegrist2011,w_zhang2012} and single-crystal Li$_x$Fe$_7$Se$_8$~\cite{Ying1501283}, where disorder plays an important role in the detected metal-insulator transition. In order to capture Anderson localization in an effective medium theory, Dobrosavljevi\'{c} \etal\ introduced the typical medium theory (TMT)~\cite{v_dobrosavljevic_03}, which is an extension of the coherent potential approximation (CPA).~\cite{p_soven_67,b_velicky_68} In the CPA, the disordered lattice is replaced by an impurity embedded in an arithmetically averaged momentum-independent effective medium. While the CPA has been successful in describing some one-particle properties, such as the density of states (DOS) in substitutionally disordered alloys~\cite{p_soven_67, s_kirpatrick_70}, it fails to describe the Anderson localization transition. This failure stems from the arithmetic average used to define the effective medium, which always favors the metallic state. In the TMT, the arithmetic average DOS is replaced by the geometric average, or typical DOS~\cite{v_dobrosavljevic_03,m_janssen_98,m_janssen_94,a_mirlin_94,e_crow_88}, which vanishes at the localization transition and can therefore serve as a proper order parameter for Anderson localization. Although the TMT captures the localization transition, it underestimates the critical disorder strength, due to its local nature.
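The difference between the arithmetic and geometric averages is easy to see numerically. For a hypothetical broad (log-normal) distribution of the local DOS, of the kind found near the localization transition, the arithmetic mean is dominated by rare large values, while the geometric mean tracks the typical value. A minimal Python sketch (the parameters are illustrative, not derived from any model in the text):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0  # width of log(rho); a larger sigma mimics stronger disorder
# Hypothetical local-DOS samples, log-normally distributed:
rho = rng.lognormal(mean=0.0, sigma=sigma, size=100_000)

rho_avg = rho.mean()                    # arithmetic (CPA-like) average
rho_typ = np.exp(np.mean(np.log(rho)))  # geometric (typical) average
# For a log-normal, the mean exp(sigma**2 / 2) far exceeds the typical
# value exp(0) = 1, so rho_avg >> rho_typ.
```

The arithmetic average stays large no matter how broad the distribution becomes, which is why it cannot signal localization, whereas the geometric average tracks the collapsing typical value.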
A cluster extension of TMT, the typical medium dynamical cluster approximation (TMDCA) was developed recently~\cite{c_ekuma_14}. It predicts a more accurate critical disorder strength and captures the re-entrant behavior of the mobility edge for the single band Anderson model with a box disorder potential. Generalizations of the TMDCA for multiband systems~\cite{y_zhang_15}, and systems with off-diagonal disorder~\cite{h_ter_14} using the Blackman, Esterling, and Berk (Blackman)~\cite{j_blackman_71} formalism, were also developed to study more complicated disordered systems. In this paper, we combine these methods to study a multiband model with both diagonal and off-diagonal disorder, which is suitable for any general disordered system, and we introduce and validate a new ansatz for the treatment of disorder which greatly enhances the stability and applicability of the method. We apply this method to one of the diluted magnetic semiconductors, (Ga,Mn)N. Diluted magnetic semiconductors (DMS) are ideal candidates for spintronic device applications~\cite{i_zutic_04} including the design of nonvolatile computer memory~\cite{g_prinz_98a,s_dassarma_01}, electric field controlled magnetization~\cite{Ohno00,i_stolichnov_08,d_chiba_08} and spin-generating solar cells~\cite{i_zutic_01,b_endres_13}. The Ga$_{1-x}$Mn$_x$N based DMS has attracted great interest and is extensively studied, due to its close relationship to blue LED~\cite{h_amano_86,i_akasaki_93,s_nakamura_94} technology whose host compound is GaN. Mn doping induces ferromagnetism making Ga$_{1-x}$Mn$_x$N a good candidate for spintronic devices. The efficiency of these devices depends on the Curie temperature (T$_c$) of the ferromagnetic order. 
Extensive experimental studies of the ferromagnetic order have been done on both zincblende and wurtzite Ga$_{1-x}$Mn$_x$N, leading to various values of T$_c$, including low T$_c$ around 10 K in zincblende~\cite{s_novikov_05} and wurtzite~\cite{m_overberg_01,s_stef_13,m_sawicki_12} structures, and high T$_c$ around room temperature in zincblende~\cite{v_chitta_04} and wurtzite~\cite{g_thaler_02,m_reed_01,t_sasaki_02} samples. There are three predominant theoretical models proposed to understand the ferromagnetism in DMS. First, the mean-field Zener model of Dietl has been the accepted paradigm for these systems~\cite{t_dietl_00,t_dietl_01,t_jungwirth_06,a_macdonald_05a} until relatively recently. Here, a magnetic exchange between localized Mn moments mediated by the valence band holes drives the magnetism. Based on this, the T$_c$ of zincblende Ga$_{1-x}$Mn$_x$N with $x$=5\% is predicted to be higher than room temperature. Second, an impurity-band based theory states that the ferromagnetism is due to a double-exchange coupling mediated by the impurity levels~\cite{k_sato_10,k_sato_04}. This theory is supported by evidence that the magnetic properties of (Ga,Mn)As are determined by the location of the chemical potential in this distinct impurity band brought on by Mn doping, and even by the Anderson localization of the impurity band carriers~\cite{Dobrowolska12,m_sawicki_10a,m_flatte_11,n_samarth_12}. A direct experimental probe of the impurity band states in (Ga,Mn)As by scanning tunneling spectroscopy shows that the local density of states obtains a log-normal distribution on the verge of the localization transition~\cite{a_richardella_10}, with a vanishing typical value. This could also happen in (Ga,Mn)N, and the fact that DMSs can undergo an Anderson localization transition makes their study more challenging, especially when considering the competition between localization and magnetism in (Ga,Mn)N, which is not well understood.
Third, and most recently, the ferromagnetic coupling in insulating systems was also interpreted in terms of superexchange~\cite{s_stef_13,m_sawicki_12}. In this paper, as an illustration of our formalism, we systematically study the metal-insulator transition due to localization in the ferromagnetic phase of (Ga,Mn)N. We find that (Ga,Mn)N is always insulating within the compositional limit, consistent with transport measurements.\cite{transport_GaMnN} Our results indicate that both the second and the third models are important to explain the magnetism in this material. For relatively high doping, double exchange might be more important due to the finite density of delocalized states in the impurity band, and for low concentrations, since the impurity band is completely localized, superexchange should play a more important role. \section{Formalism} \label{sec:formalism} To study the effect of disorder in zincblende (Ga,Mn)N, we use the generalized spin-Fermion Hamiltonian~\cite{r_nelson_15} generated by the first-principles Effective Hamiltonian Method~\citep{t_berl_11}: $H_{eff}=H_0+\Delta$, where \begin{equation}\label{eq:DFT1} H_{0}=\sum_{\mathbf{i,i'} m,m',\sigma}t_{\mathbf{ii'}}^{mm'}c_{\mathbf{i}m\sigma}^{\dagger}c_{\mathbf{i'} m'\sigma}+h.c. \end{equation} is the Hamiltonian of the pure GaN, and \begin{equation}\label{eq:DFT2} \begin{split} \Delta&=\sum_{\mathbf{j}}\Delta_{\mathbf{j}}^{imp} =\sum_{\mathbf{j,i,i'},m,m',\sigma}T_{\mathbf{jii'}}^{mm'}c_{\mathbf{i}m\sigma}^{\dagger}c_{\mathbf{i'}m'\sigma}\\& +\sum_{\mathbf{j,i,i'},m,m',\sigma,\sigma'}J_{\mathbf{jii'}}^{mm'}c_{\mathbf{i}m\sigma}^{\dagger} \boldsymbol{\mathbf{\tau}}_{\sigma\sigma'}c_{\mathbf{i'}m'\sigma'}\cdot\mathbf{S_{j}}+h.c. \end{split} \end{equation} contains the impurity potentials $\Delta_{\mathbf{j}}^{imp}$ induced by replacing one Ga with one Mn in the primitive unit cell $\mathbf{j}$.
Here, $c_{\mathbf{i}m\sigma}^{\dagger}$ and $c_{\mathbf{i}m\sigma}$ are the creation and annihilation operators of an electron with spin $\sigma$ at unit cell $\mathbf{i}$ in the $m$-th effective $\widetilde{N}$-$sp^3$ Wannier orbital. $t_{\mathbf{ii'}}^{mm'}$ contains the bare orbital energy and hopping integral of the pure GaN. $T_{\mathbf{jii'}}^{mm'}$ and $J_{\mathbf{jii'}}^{mm'}$ are the spin-independent and spin-dependent impurity potentials, respectively. $\mathbf{S_{j}}$ is the unit vector of the spin-$\frac{5}{2}$ Mn moment, $\tau_{\sigma\sigma'}$ are the elements of the Pauli matrices, and $h.c.$ denotes the Hermitian conjugate. More details of the first-principles calculation can be found in Ref.~\onlinecite{r_nelson_15}. For impurity concentration $x$, i.e. Ga$_{1-x}$Mn$_x$N, the disorder potential is sampled independently on each cluster site with a binary probability density function: \begin{equation} P(\Delta_{\mathbf{j}})=x\delta(\Delta_{\mathbf{j}}-\Delta_{\mathbf{j}}^{imp})+(1-x)\delta(\Delta_{\mathbf{j}}), \end{equation} with $\Delta_\mathbf{j}$ the potential at site $\mathbf{j}$. As pointed out in the Appendix of Ref.~\onlinecite{y_zhang_15}, the critical behavior of the typical DOS for each orbital is independent of the basis, as long as the basis is local. In the downfolding procedure, used here to obtain the Hamiltonian, a series of local rotations is used to generate a Hamiltonian which is block diagonal in a low energy window, while retaining the local nature of the orbital basis. Since the choice of the downfolding basis will not change the critical behavior of the typical DOS, i.e., the order parameter of the Anderson localization, it is possible to incorporate the first-principles calculation within TMDCA to study localization effects in real materials with disorder without further approximations.
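Drawing a disorder configuration from the binary distribution above amounts to marking each cluster site as Mn with probability $x$. A minimal sketch (the function name and the use of NumPy are our own):

```python
import numpy as np


def sample_disorder(n_sites, x, rng):
    """Boolean mask of impurity sites: each unit cell hosts a Mn
    impurity (and hence carries the full potential Delta_imp) with
    probability x, independently of all other cells."""
    return rng.random(n_sites) < x


# Example: one configuration on an Nc = 32 cluster at x = 0.05.
rng = np.random.default_rng(42)
occ = sample_disorder(32, 0.05, rng)
# occ[j] == True marks unit cell j as Mn; the corresponding blocks of
# the cluster Hamiltonian acquire the impurity potentials.
```

Averaging observables over many such configurations yields the disorder averages $\langle\cdots\rangle$ used throughout the self-consistency.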
From the leading parameters of the impurity potential listed in Table~\ref{tab:impurity_p}, we see that the diagonal disorder potentials $T_{\mathbf{jii}}^{mm}$ and $J_{\mathbf{jii}}^{mm}$ are very strong and short-ranged, extending only up to nearest neighbors. The off-diagonal disorder potentials $T_{\mathbf{jji}}^{mm'}$ and $J_{\mathbf{jji}}^{mm'}$, which are directly related to the hopping integrals from and to the impurity site, are not weak and cannot be ignored. To solve this Hamiltonian using an effective medium theory, we therefore need to adopt the Blackman formalism to deal with the off-diagonal disorder potentials between pairs. In addition to the disorder potentials listed in Table~\ref{tab:impurity_p}, we also find that the impurity has a significant effect on the hopping integrals between sites that are different from but close to the impurity site. We denote these hopping integrals as the non-local off-diagonal disorder potentials, represented by the parameters $T_{\mathbf{jii'}}^{mm'}$ and $J_{\mathbf{jii'}}^{mm'}$, whose leading strengths are about 498 meV for $T$ and 336 meV for $J$. In order to consider these impurity potentials in our calculation, we slightly modify the Blackman formalism. These developments are described in Appendix~\ref{appendixB}. \begin{table}[h] \begin{tabular}{|c|c|c|c|c|} \hline & $T_{\mathbf{jii}}^{mm}$ & $T_{\mathbf{jji}}^{mm'}$ & $J_{\mathbf{jii}}^{mm}$ & $J_{\mathbf{jji}}^{mm'}$\tabularnewline \hline $\mathbf{i}=\mathbf{j}$ & 2488 & -170 & 1752 & -633\tabularnewline \hline $\mathbf{i}=NN(\mathbf{j})$ & 406 & 885 & 449 & 800\tabularnewline \hline $\mathbf{i}=NNN(\mathbf{j})$ & 15 & 68 & $<$10 & 38\tabularnewline \hline \end{tabular} \caption{Leading parameters of the impurity potential in meV near the impurity site $\mathbf{j}$. NN($\mathbf{j}$) and NNN($\mathbf{j}$) denote the nearest-neighbor and next-nearest-neighbor sites, and $m\protect\neq m'$, from Ref.
~\onlinecite{r_nelson_15}.} \label{tab:impurity_p} \end{table} We use a combination of the multiband DCA and TMDCA with a modified Blackman formalism to study Anderson localization in the first-principles effective Hamiltonian of (Ga,Mn)N. We assume the system is already in the ferromagnetic phase, so that the local spins induced by the Mn impurities are pinned to a certain direction, which we take as the spin quantization axis of the electrons. The two spin species are then decoupled in the Hamiltonian, and each contains four effective $\widetilde{N}$-$sp^3$ Wannier orbitals. Thus, for each spin species, we are dealing with a four-band Anderson model with both diagonal and off-diagonal disorder. If we directly generalize the multiband TMDCA ansatz of Ref.~\onlinecite{y_zhang_15} to its Blackman version, we encounter severe numerical instability problems when solving the self-consistent equations. The source of the instability is the Hilbert transformation used to calculate the real part of the typical Green function from the typical density of states. The Hilbert transformation connects the typical Green function at all frequencies and makes the real component of the typical Green function a functional of its imaginary part. This means that a small error at one frequency can spread to neighboring frequencies, which makes the calculation numerically unstable, especially for systems with multiple bands and complicated disorder potentials. This frequency mixing is also somewhat unphysical, since the scattering processes are purely elastic, and processes at different energies are independent. The Hilbert transformation does not cause problems for simple Hamiltonians, but for complex first-principles effective Hamiltonians with multiple bands and bare gap structures, together with off-diagonal disorder, it causes numerical instabilities which interfere with the convergence of the calculation.
In this paper, we therefore introduce a new ansatz that calculates the typical Green function directly, without invoking the Hilbert transformation. A proper ansatz should incorporate the typical value of the local density of states, which serves as the order parameter of Anderson localization; become exact when the cluster size approaches infinity; promote numerical stability; and converge quickly as the cluster size increases. The ansatz of Eq.~(\ref{eq:ansatz}) satisfies all of these requirements and thus captures the physics of Anderson localization. We find that the new ansatz yields an algorithm which is more numerically stable, converges quickly with cluster size and produces physical results. It is defined as: \begin{widetext} \begin{equation}\label{eq:ansatz} G_{typ}^{mm'}(\mathbf{K},\omega)=e^{\frac{1}{N_{c}}\sum_{\mathbf{i}}\left\langle \ln\left(\sum_{m}\rho_{\mathbf{i}\mathbf{i}}^{mm}(\omega)\right)\right\rangle } \left(\begin{array}{cc} \left\langle \frac{G_{c,AA}^{mm'}(\mathbf{K},\omega)}{\frac{1}{N_c}\sum_{\mathbf{i},m}\rho_{\mathbf{i}\mathbf{i}}^{mm}(\omega)}\right\rangle & \left\langle \frac{G_{c,AB}^{mm'}(\mathbf{K},\omega)}{\frac{1}{N_c}\sum_{\mathbf{i},m}\rho_{\mathbf{i}\mathbf{i}}^{mm}(\omega)}\right\rangle \\ \left\langle \frac{G_{c,BA}^{mm'}(\mathbf{K},\omega)}{\frac{1}{N_c}\sum_{\mathbf{i},m}\rho_{\mathbf{i}\mathbf{i}}^{mm}(\omega)}\right\rangle & \left\langle \frac{G_{c,BB}^{mm'}(\mathbf{K},\omega)}{\frac{1}{N_c}\sum_{\mathbf{i},m}\rho_{\mathbf{i}\mathbf{i}}^{mm}(\omega)}\right\rangle \end{array}\right) \end{equation} with \begin{equation} G_{c,\mathbf{i}\mathbf{i}}^{mm'}(\omega)=\sum_{\mathbf{K}}\left(G_{c,AA}^{mm'}(\mathbf{K},\omega) +G_{c,BB}^{mm'}(\mathbf{K},\omega)+G_{c,AB}^{mm'}(\mathbf{K},\omega)+G_{c,BA}^{mm'}(\mathbf{K},\omega)\right) \end{equation} \begin{equation}\label{eq:rho}
\rho_{\mathbf{i}\mathbf{i}}^{mm'}(\omega)=-\frac{1}{\pi}\mathrm{Im}[G_{c,\mathbf{i}\mathbf{i}}^{mm'}(\omega)] \end{equation} \end{widetext} where $\left\langle (\cdots)\right\rangle$ represents averaging over disorder configurations, $m,m'$ denote the band indices, $\mathbf{i}$ denotes the site index, and $A$,$B$ are the component indices in the Blackman formalism, with $A$ representing the host atoms (Ga here) and $B$ representing the impurity atoms (Mn here). Our calculation is carried out on the real frequency axis, so there is no need to perform analytic continuation, and the density of states can be calculated directly from the imaginary part of the Green function (see Appendix~\ref{appendixB} for more details). This ansatz consists of a prefactor $e^{\frac{1}{N_{c}}\sum_{\mathbf{i}}\left\langle \ln\left(\sum_{m}\rho_{\mathbf{i}\mathbf{i}}^{mm}(\omega)\right)\right\rangle }$, which is just the geometric average, or typical value, of the local DOS, and a normalized algebraically averaged Green function in the Blackman formalism. The prefactor can be regarded as the order parameter of the Anderson localization transition, which can be calculated as: \begin{equation} \rho_{typ}(\omega)=\frac{1}{N_c}\sum_{\mathbf{K},m}\sum_{AA,BB,AB,BA}-\frac{1}{\pi}\mathrm{Im}G_{typ}^{mm}(\mathbf{K},\omega), \end{equation} so Eq.~(\ref{eq:ansatz}) contains a proper order parameter. For very weak disorder, since the geometric and algebraic averages are the same, the ansatz reduces to the multiband DCA. It directly calculates the typical Green function without invoking the Hilbert transformation, which makes the typical DOS at each frequency completely independent of the others. This feature is consistent with elastic scattering in disordered systems and dramatically increases the numerical stability of the calculation. This ansatz does not recover the TMT in the $N_c=1$ limit, but for large cluster sizes it converges quickly and approaches the exact result, as we will show below.
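The geometric-average prefactor of the ansatz can be evaluated at a single frequency as in the sketch below, given sampled local DOS values for each disorder realization, cluster site, and orbital (the array layout and function name are our own illustration):

```python
import numpy as np


def typical_prefactor(rho_samples):
    """exp[(1/Nc) * sum_i < ln sum_m rho_ii^mm >] at one frequency.

    rho_samples: (n_disorder, Nc, n_orb) array of local DOS values
    rho_ii^{mm}(omega) for each disorder realization, site i, orbital m.
    """
    site_dos = rho_samples.sum(axis=2)        # sum over orbitals m
    log_avg = np.log(site_dos).mean(axis=0)   # disorder average of the log
    return np.exp(log_avg.mean())             # then average over sites i
```

For a uniform sample the prefactor equals the common site DOS, while for broadly distributed samples it falls below the arithmetic mean, vanishing at the localization transition.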
Since the multiband TMT is not a physical limit, we believe our ansatz does not need to recover it for a cluster of size $N_c=1$. \section{Results}\label{sec:results} We first test the new ansatz on the single-band Anderson model Hamiltonian with nearest-neighbor hopping $t$ and onsite disorder potential $V$. As shown in Fig.~\ref{fig:single}, the new ansatz reproduces the results of the previous ansatz~\cite{c_ekuma_14}, where a Hilbert transformation is used, and captures the physics of Anderson localization for large cluster sizes. \begin{figure}[h!] \includegraphics[trim = 0mm 0mm 0mm 0mm,width=1\columnwidth,clip=true]{Fig_single.eps} \caption{The typical DOS at the band center ($\omega=0$) vs.\ disorder strength with cluster sizes $N_c$=38, 64, and 92. The critical disorder strength is estimated by linear extrapolation and converges to around 2.2, as was the case for the previous ansatz. Inset: plots of the typical DOS($\omega$) with $N_c$=38 for two disorder strengths for the old and new ansatz. The curves practically overlap.} \label{fig:single} \end{figure} Next, we apply the multiband DCA and multiband TMDCA with the new ansatz to the Hamiltonian for Ga$_{1-x}$Mn$_x$N. In all figures we calculate the DOS of the system using DCA and the typical DOS using TMDCA. The self-consistent loop is described in Appendix~\ref{appendixB}. It is similar to that in Ref.~\onlinecite{y_zhang_15}, generalized to the Blackman formalism as in Ref.~\onlinecite{h_ter_14}, and with some modifications to deal with non-local off-diagonal disorder. We first calculate the DOS of both spin species for $x=0.05$. As shown in Fig.~\ref{fig:dos_up_down}, the impurity band contains only the spin-up species, as the spin-down species has no impurity band around the chemical potential and is always fully filled.
This can be understood by looking at the leading-order diagonal disorder potential in Table~\ref{tab:impurity_p}, where $T_{\mathbf{j}\mathbf{j}\mathbf{j}}^{mm}=2488$ meV and $J_{\mathbf{j}\mathbf{j}\mathbf{j}}^{mm}=1752$ meV, so the disorder potential felt by the spin-down channel, $T_{\mathbf{j}\mathbf{j}\mathbf{j}}^{mm}-J_{\mathbf{j}\mathbf{j}\mathbf{j}}^{mm}=736$ meV, is much weaker than that felt by the spin-up channel, $T_{\mathbf{j}\mathbf{j}\mathbf{j}}^{mm}+J_{\mathbf{j}\mathbf{j}\mathbf{j}}^{mm}=4240$ meV. Since we are interested in the localization of the states near the chemical potential, we will only focus on the spin-up channel in our calculations below. \begin{figure}[h!] \includegraphics[trim = 0mm 0mm 0mm 0mm,width=1\columnwidth,clip=true]{dos_up_down.eps} \caption{The DOS of Ga$_{1-x}$Mn$_x$N with $x=0.05$ for both spin species (calculated with the DCA and a cluster of size $N_c=32$) compared with the DOS of pure GaN (calculated directly from the downfolded first-principles Hamiltonian). The chemical potential is calculated assuming each Mn impurity contributes one hole to the system.} \label{fig:dos_up_down} \end{figure} Next, we check the convergence of the DCA and the TMDCA with cluster size and find that the DOS and typical DOS calculated for clusters of size $N_c=16$ and $32$ are quite close, indicating that the result is nearly converged for $N_c=32$, as shown in Fig.~\ref{fig:dos_tdos_Nc}. \begin{figure}[h!] \includegraphics[trim = 0mm 0mm 0mm 0mm,width=1\columnwidth,clip=true]{dos_tdos_Nc.eps} \caption{The spin-up DOS and TDOS of Ga$_{1-x}$Mn$_x$N with $x=0.05$ for two cluster sizes $N_c=16$ and $N_c=32$. The inset shows the same plot around the impurity band region.} \label{fig:dos_tdos_Nc} \end{figure} Then, to study localization, we calculate the evolution of the typical DOS and average DOS around the chemical potential as the Mn concentration $x$ increases. To determine the chemical potential, indicated by $\omega=0$, we suppose that each Mn impurity contributes one hole to the system.
Then the total electron density in the effective $\widetilde{N}$-$sp^3$ basis is $8-x$, and since the spin-down species is always fully filled, the density of the spin-up species is $4-x$. We then use the DCA to calculate the average DOS and determine the position of the chemical potential. As shown in Fig.~\ref{fig:dos_tdos_mu_x}, the typical DOS of the impurity band vanishes for $x<0.03$, which means that the midgap states induced by the impurities are completely localized. As $x$ increases, the impurity band starts to hybridize with the valence band and the TDOS of the impurity band gradually increases, indicating the occurrence of delocalized states in the impurity band. Since the chemical potential still lies above, or on, the localization edge within our numerical accuracy, the system remains insulating. In Fig.~\ref{fig:dos_tdos_mu_x2} the chemical potential starts to cross the localization edge for $x>0.25$, indicating a transition to the metallic state. This means that Ga$_{1-x}$Mn$_x$N is always insulating due to Anderson localization within the compositional limit of about 10\%, consistent with the experimental results\cite{transport_GaMnN}. For $x<0.03$, since the impurity band is completely localized, the carriers are trapped and are not able to mediate the double-exchange interactions between local spins to form long-range ferromagnetic order. In terms of our mean-field theory, the host Green function is no longer polarized. The possible ferromagnetic phase may come from superexchange, as pointed out in Refs.~\onlinecite{s_stef_13,m_sawicki_12}. For $x>0.03$, doping with a non-magnetic element such as Zn or, e.g., the application of a gate bias could move the chemical potential down, leading to a metallic phase with ferromagnetism induced by double exchange as well as superexchange\cite{s_stef_13,m_sawicki_12}, possibly enhancing the Curie temperature significantly.
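The chemical-potential determination from the filling can be sketched numerically: integrate the spin-up DOS until the target density $4-x$ is reached. The flat toy DOS below is a stand-in for the actual DCA spectrum:

```python
import numpy as np

# Toy spin-up DOS on a frequency grid: a flat band of width 8
# normalized to 4 states per site (not the actual GaMnN DOS).
omega = np.linspace(-4.0, 4.0, 801)
dos = np.full_like(omega, 4.0 / 8.0)

x = 0.05              # Mn concentration
n_target = 4.0 - x    # spin-up filling: each Mn contributes one hole

# Cumulative number of states up to each frequency (trapezoid rule);
# the chemical potential is where it reaches the target filling.
cum = np.concatenate(([0.0],
                      np.cumsum(0.5 * (dos[1:] + dos[:-1]) * np.diff(omega))))
mu = np.interp(n_target, cum, omega)
print(mu)
```

For the flat toy band this lands 0.1 state below the band edge; with the real DCA DOS the same procedure places $\omega=0$ just below the top of the impurity band.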
In this work, we focus on the Anderson localization of electrons in (Ga,Mn)N, rather than the mechanism of its ferromagnetism. So we do not directly compute the magnetic properties of Ga$_{1-x}$Mn$_x$N (and, as such, we do not evaluate the superexchange either). Instead we assume the Mn moments to be aligned ferromagnetically and investigate from first principles the Anderson localization in the impurity band. This is important because Refs.~\onlinecite{Dobrowolska12,m_sawicki_10a,m_flatte_11,n_samarth_12} proposed that Anderson localization suppresses the double-exchange mechanism of ferromagnetism in this class of materials. The generalized first-principles spin-fermion Hamiltonian of Ref.~\onlinecite{r_nelson_15} provides an impurity potential which has both spin-dependent and spin-independent components. The assumption of ferromagnetism implies that we are underestimating the effects of disorder, since we only include chemical but not magnetic disorder. Nevertheless, we find that for Mn concentrations less than 3\% the impurity band is fully localized and, therefore, the ferromagnetism is unlikely to be due to the double-exchange mechanism. This supports the dominance of ferromagnetic superexchange for concentrations less than 3\%. The generalized first-principles spin-fermion Hamiltonian of Ref.~\onlinecite{r_nelson_15} used in the current work does not incorporate the superexchange processes described in Refs.~\onlinecite{s_stef_13,m_sawicki_12}. One may wonder whether such high-order perturbative corrections could suppress the Anderson localization. We have two reasons to think this is not the case. First, experimentally it is found that Ga$_{1-x}$Mn$_x$N is an insulator with variable-range hopping behavior\cite{transport_GaMnN}, a hallmark of Anderson localization. Second, in the current model we have assumed the moments to be perfectly ordered, which underestimates the effects of disorder, since only chemical but not magnetic disorder is considered.
So we expect the finding of Anderson localization to be robust against the inclusion of superexchange. It will be interesting to study the Anderson localization in the presence of superexchange. To include superexchange we would need to redo the analysis in Ref.~\onlinecite{r_nelson_15} without removing the Mn-d charge degrees of freedom. However, determining localization in such an interacting and disordered multi-band model is a formidable task, beyond the scope of the current manuscript. \begin{figure}[h!] \includegraphics[trim = 0mm 0mm 0mm 0mm,width=1\columnwidth,clip=true]{dos_tdos_mu_x.eps} \caption{DOS (blue) and typical DOS (red) of Ga$_{1-x}$Mn$_x$N for various Mn concentrations: x=0.02, 0.03, 0.05, 0.1, with N$_c$=32, showing that the impurity band is completely localized for $x\le 0.03$. The chemical potential is set to zero and indicated by the dashed line. Inset: Zoom-in of the DOS and TDOS around the chemical potential.} \label{fig:dos_tdos_mu_x} \end{figure} \begin{figure}[h!] \includegraphics[trim = 0mm 0mm 0mm 0mm,width=1\columnwidth,clip=true]{dos_tdos_mu_x2.eps} \caption{DOS (blue) and typical DOS (red) of Ga$_{1-x}$Mn$_x$N for higher Mn concentrations: x=0.15, 0.2, 0.25, 0.3, with N$_c$=32, showing that a transition to the metallic state occurs around $x=0.25$. The chemical potential is set to zero and indicated by the dashed line.} \label{fig:dos_tdos_mu_x2} \end{figure} \section{Conclusion}\label{sec:conclusion} We combine the multiband TMDCA with the Blackman formalism to study multiband systems with both diagonal and off-diagonal disorder. We extend the Blackman formalism to describe off-diagonal disorder potentials which are not pairwise. We also introduce a new TMDCA ansatz to overcome the numerical instability caused by the Hilbert transformation. We tested our new ansatz on the single-band Anderson model, where it reproduces previous results for large cluster sizes.
Our developed method will allow first-principles studies of many functional materials in which Anderson localization plays an important role. We apply our new generalized multiband TMDCA ansatz to the diluted magnetic semiconductor Ga$_{1-x}$Mn$_x$N, using a first-principles tight-binding spin-fermion model, and predict that the impurity band is completely localized for a Mn concentration of less than 3\% and, since the chemical potential lies at or above the localization edge, the system is always insulating within the compositional limit of 10\%. This implies that ferromagnetism in Ga$_{1-x}$Mn$_x$N for x $\le$ 0.03 cannot be mediated by double exchange, which would require itinerant carriers in the impurity band. For larger concentrations, chemical doping or the application of a gate bias could move the chemical potential down, leading to a metallic phase with ferromagnetism induced by double exchange as well as superexchange\cite{s_stef_13,m_sawicki_12} possibly enhancing the Curie temperature significantly. \textit{Acknowledgments}-- We thank D. Young for useful discussion on the results. This material is based upon work supported by the National Science Foundation under the Cooperative Agreement No. EPS-1003897 with additional support from the Louisiana Board of Regents. Work by TB was performed at the Center for Nanophase Materials Sciences, a DOE Office of Science user facility. This manuscript has been authored by UT-Battelle, LLC under Contract No. DE-AC05-00OR22725 with the U.S. Department of Energy. WK was supported by DOE Contract No. DEAC02-98CH10886. This work used the high performance computational resources provided by the Louisiana Optical Network Initiative (http://www.loni.org), and HPC@LSU computing. 
The United States Government retains and the publisher, by accepting the article for publication, acknowledges that the United States Government retains a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes. The Department of Energy will provide public access to these results of federally sponsored research in accordance with the DOE Public Access Plan (http://energy.gov/downloads/doe-public-access-plan).
\section{Introduction} The optical lattice, formed by the interference of counter-propagating laser beams, is one of the most fruitful technical inventions in the progress of atomic-gas physics~\cite{FGOL}. The seminal reports (see, e.g., Ref.~\cite{quantum simulator}) indicate that this system can be regarded as a quantum simulator to emulate electronic structures in solid-state systems, with controllability of model parameters and flexibility of lattice geometric structures. The interaction can be tuned widely, from strongly attractive to repulsive couplings, by the Feshbach resonance. Different lattice structures, such as a 1D chain, a bipartite 2D square lattice\,\cite{bipartite} as in high-$T_{\rm c}$ cuprate superconductors, and frustrated triangular lattices\,\cite{triangular}, are available by tuning the laser interference. Simulating solid-state electronic structures in optical lattices requires treating the orbital degrees of freedom, as well as the charge and spin degrees. Orbital degeneracy plays a crucial role in transition metals, for example. Such materials are currently targets in applied physics, owing to their wide use in various industrial settings. Ultracold atomic gases with multiple band degeneracy enable us to directly address quantum phenomena associated with the orbital degrees of freedom~\cite{bipartite,p-band,spin3/2,orbital order,Yb,Tsuchiya;Paramekanti:2012,Zi Cai,Kobayashi,topological ladder}. In this paper, focusing on the $p$-orbitals next above the lowest $s$-orbital, we show that the low-lying doubly degenerate orbitals lead to a rich phase structure of the ground states in an attractively interacting 1D chain (see Fig.~\ref{schematic fig}). All the numerical calculations are done by the density-matrix renormalization group (DMRG) method (see, e.g., Refs.~\cite{DMRG1,DMRG2}). We also derive an effective model to clarify the origin of the resultant quantum phases.
Identifying the ground state of a many-body system is a primary issue for understanding quantum many-body effects. The nature of the ground state depends on the degeneracy intrinsic to the system. In our system, the $p$-orbital degeneracy is a key ingredient of the various quantum phases. The $p$-orbitals in a 1D chain along the $z$-axis lead to double degeneracy with respect to the $p_{x}$ and $p_{y}$ orbitals, as seen in Fig.~\ref{schematic fig}. \begin{figure}[h] \begin{center} \includegraphics[width=1.0\linewidth]{Fig1.eps} \end{center} \caption{(Color Online) Schematic diagram of fermionic gases on an optical lattice, with multiple bands (multi-band Hubbard chain). The intra-orbital interaction ($U_{pp}$) forms fermion pairs. The fermion pairs hop between different orbitals via the pair-hopping interaction ($U_{p_x p_y}$) [see Eq.~(\ref{eq:p1d})].} \label{schematic fig} \end{figure} In this paper, we catalog the ground-state properties of this attractive 1D chain. First, we show that the \textit{inter-orbital} interaction leads to the emergence of the Haldane phase~\cite{Haldane} close to half filling. Below half filling, the Luther-Emery phase~\cite{Luther} occurs, in the same way as in \textit{single-band} attractive Hubbard chains~\cite{Machida_attra,Gao}. In contrast to the gapless charge excitations of the Luther-Emery phase, the Haldane phase exhibits a gapped charge excitation. We remark that the Haldane phase with a charge excitation gap, a nonlocal string order, and edge states is known as a Haldane insulator phase~\cite{Haldane boson1,Haldane boson2,Haldane boson3,Haldane boson4, Haldane fermi1,Haldane fermi2}. Such a realization has been proposed in bosonic chains with dipole interactions~\cite{Haldane boson1,Haldane boson2,Haldane boson3,Haldane boson4} and multi-component fermionic chains~\cite{Haldane fermi1,Haldane fermi2}. A two-leg spin-$1/2$ ladder also shows the presence of a similar gapped phase~\cite{Hund}.
Our numerical calculations and effective model reveal the occurrence of such an intriguing insulator phase in the present system. Next, in the presence of population imbalance, we show that a phase separation of polarized components occurs in the low fermion-pair-density case. This behavior comes from the double-exchange interaction~\cite{double exchange1,double exchange2} between fermion pairs. Finally, we study the effect of a trap potential. We propose that the trapped system allows the direct verification of the Haldane gap and the phase-separated polarized phase. This paper is organized as follows. The $p$-orbital 1D Hubbard Hamiltonian is derived in Sec.~\ref{sec:hamiltonian}. In Sec.~\ref{sec:results}A, applying the DMRG method to this model, we calculate the ground state of the $p$-orbital 1D chain, and we show the presence of the charge gap and the edge state. Furthermore, by deriving an effective spin-$1$ Hamiltonian, the realization of the Haldane insulator phase becomes more evident. Section \ref{sec:results}B shows the results with population imbalance. We suggest a phase separation of polarized components. This phenomenon is explained in terms of the double-exchange interaction between fermion-pair particles. The effect of a harmonic trap potential is shown in Sec.~\ref{sec:results}C. Section \ref{sec:summary} is devoted to the summary. \section{Model} \label{sec:hamiltonian} We start with the Hamiltonian for two-component Fermi gases, \be &&H= \sum_{\sigma=\uparrow,\downarrow}\int d^{3} \bo{x} \left( \psi_{\sigma}^{\dagger}h\psi_{\sigma} +\frac{g}{2}\psi_{\sigma}^{\dagger}\psi_{\bar{\sigma}}^{\dagger} \psi_{\bar{\sigma}}\psi_{\sigma} \right), \ee with \( h= -(\hbar^{2}/2m)\nabla^{2}+V_{\rm opt}(\bo{x}) \) and two-body interaction strength $g$. The optical lattice potential is $V_{\rm opt}(\bo{x})=\sum_{\alpha=x,y,z}V_{\alpha}\sin^2(2\pi \alpha/\lambda_\alpha)$\,.
When the lattice potential is highly elongated along the $z$-axis (i.e., $V_x=V_y\gg V_z$ and $\lambda_x=\lambda_y\neq\lambda_z$), this 3D atomic gas can be decomposed into an array of independent 1D chains, as seen in Fig.~\ref{schematic fig}. Throughout this paper, we focus on the case where the multiple higher orbitals are partially filled and the lower orbitals are fully occupied, inside each well of the optical lattice. Such a high-density filling is attainable by tuning either the total particle number or the confinement of the harmonic trap potential. Among the $p$-orbitals, the Bloch band formed by $p_{z}$ (i.e., the component along the elongated direction) has a different character from $p_{x}$ and $p_{y}$, as shown in Fig.~\ref{schematic fig}. Hence, we focus on the doubly degenerate $p_x$- and $p_y$-orbitals hereafter. Now, we derive a 1D Hamiltonian with $p$-orbital degeneracy. We first approximate the optical lattice potential as $ V_{\rm opt}(\bo{x})\simeq V_{z}\sin^2(2\pi z/\lambda_z)+ \sum_{\alpha=x,y}V_{\alpha}(2\pi \alpha/\lambda_\alpha)^2 $. Then, we expand the field operator as \begin{equation} \psi(\bm{x}) =\sum_{i}\sum_{p=p_x,p_y} c_{p,\sigma,i}\,u_{p}(\bm{x}_{\bot})w_{i}(z), \end{equation} with two kinds of functions, $u_{p}$ and $w_{i}$. The former is the exact solution of $h_{\bot}u_{p}=\epsilon_p u_{p}$, with $h_{\bot}=[-(\hbar^2 /2m) \nabla_{\bot}^{2} +\sum_{\alpha=x,y}V_{\alpha}(2\pi \alpha/\lambda_\alpha)^2]$, and the latter is the Wannier function formed by the optical lattice potential.
Using the tight-binding approximation, we obtain \be H=\sum_{p,\sigma} \sum_{<i,j>}h_{p,\sigma,i,j}^{({\rm t})}+ \sum_{p,\sigma,i}h_{p,\sigma,i}^{(\mu)}+ \sum_{p,p',i}h_{p,p',i}^{({\rm U})}\,, \lb{eq:p1d} \ee with \be && h_{p,\sigma,i,j}^{({\rm t})}=-t c_{p,\sigma,i}^{\dagger}c_{p,\sigma,j}\,, \nn \\ && h_{p,\sigma,i}^{(\mu)}=-\bar{\mu} n_{p,\sigma,i}\,, \nn\\ && h_{p,p'=p,i}^{({\rm U})} =U_{pp} \left(n_{p,\uparrow,i}-\frac{1}{2}\right) \left(n_{p,\downarrow,i}-\frac{1}{2}\right)\,, \nn \\ &&h_{p,p'\neq p,i}^{({\rm U})} =U_{pp'} \Big{ ( } \bo{\rho}_{p,i}\cdot\bo{\rho}_{p',i} -\bo{S}_{p,i}\cdot\bo{S}_{p',i} \Big{ ) }\,, \nn \ee where $\bar{\mu}=\mu-(U_{pp}+U_{p_xp_y})/2$. The hopping and the on-site interaction energy integrals are defined by, respectively, \begin{eqnarray} && t=-\int dz\, w_{i+1} \Big{(} \frac{-\hbar^2}{2m}\frac{d^2}{d z^2} +V_{z}\sin^{2} \frac{2\pi z}{\lambda_z} \Big{)}\, w_{i}, \\ && U_{pp'}=g\int d^{3}\bo{x}\,w_{i}^{4}u_{p}^2u_{p'}^2 . \end{eqnarray} The on-site number density of the $p$ orbital is \mbox{ \( n_{p,\sigma,i}= c_{p,\sigma,i}^\dagger c_{p,\sigma,i} \)}. The spin-$1/2$ operator is \mbox{ \( \bo{S}_{p,i} = \frac{1}{2}\sum_{\sigma,\sigma'} c_{p,i,\sigma}^{\dagger}\bo{\tau}_{\sigma,\sigma'} c_{p,i,\sigma'} \)}, with the $2\times 2$ Pauli matrices $\bo{\tau}=(\tau^{(x)},\,\tau^{(y)},\,\tau^{(z)})$, whereas the pseudo-spin-$1/2$ one is defined as \be &&\rho_{p,j}^{(x)}=\frac{1}{2}(\rho_{p,j}^{(+)}+\rho_{p,j}^{(-)})\,, \quad \rho_{p,j}^{(y)}=\frac{1}{2i}(\rho_{p,j}^{(+)}-\rho_{p,j}^{(-)})\,, \nn \\ &&\rho_{p,j}^{(z)}=\frac{1}{2}\Big(\sum_{\sigma}n_{p,\sigma,j}-1\Big)\,,\nn \ee with \( \rho_{p,j}^{(+)}=c_{p,\uparrow,j}^{\dagger}c_{p,\downarrow,j}^{\dagger} \) and \( \rho_{p,j}^{(-)}=[\rho_{p,j}^{(+)}]^{\dagger} \). Throughout this paper, we use the symbol $U_{pp}$ for representing the intra-orbital interaction strength, since $U_{p_{x}p_{x}} = U_{p_{y} p_{y}}$. We also find that $U_{p_x p_y}=U_{p_y p_x}$.
The intra- and inter-orbital interaction strengths are evaluated with the exact solution of $h_{\bot}u_{p}=\epsilon_p u_{p}$. We find the relation $U_{p_xp_y}=(4/9)U_{pp}$, which is also used throughout this paper. In this paper, we set $U_{pp^{\prime}}$ to a negative value, i.e., an attractive two-body interaction $g<0$. The virtue of using the spin representation by $\bo{S}_{p,j}$ and $\bo{\rho}_{p,j}$ in the Hamiltonian is to clarify an underlying symmetry feature of this model. Furthermore, our numerical results are intuitively understood in this representation; in the subsequent sections a fermion-pair particle will be discussed in terms of $\bo{\rho}_{p,j}$. Let us now show the symmetry of Eq.~(\ref{eq:p1d}) explicitly. Taking the summation over $p_{x}$ and $p_{y}$, we build two kinds of operators, \( S_{i}^{(l)}=\sum_{p}S_{p,i}^{(l)} \) and \( \rho_{i}^{(l)}=\sum_{p}\rho_{p,i}^{(l)} \), with $l=x,\,y,\,z$. We can obtain the operators for $l = \pm $ in a similar manner to the definition of $\bo{\rho}_{p,j}$. The former is a (local) spin-$1$ operator, whereas the latter is a (local) pseudo-spin-$1$ operator. After straightforward calculations, we obtain the algebraic relations, \be [H,S^{(l)}]=0\,,\quad [H,\rho^{(\pm)}]=\mp 2\bar{\mu}\rho^{(\pm)}\,,\quad [H,\rho^{(z)}]=0\,, \ee with $S^{(l)} = \sum_{i}S_{i}^{(l)}$ and $\rho^{(l)} = \sum_{i}\rho_{i}^{(l)}$. The first relation indicates that Hamiltonian (\ref{eq:p1d}) is isotropic with respect to the global spin rotation. The latter two relations mean that this model possesses a highly symmetric property at half filling ($\bar{\mu}=0$). In other words, the present Hamiltonian has \( SU(2)_{\rm spin} \times SU(2)_{\text{pseudo-spin}} \simeq SO(4) \) symmetry at $\bar{\mu}=0$. \section{Results} \label{sec:results} Let us study the quantum phases of Eq.~(\ref{eq:p1d}) at zero temperature. All the numerical calculations are performed with the DMRG method~\cite{DMRG1,DMRG2}.
Our DMRG code is directly extended toward ladder systems by parallelizing the superblock matrix diagonalization~\cite{YAMADA}. The number of states kept is varied from $m=400$ up to $1000$, depending on the convergence tendency of the calculations. The boundary condition is open in all the calculations. The subsequent subsection shows the results in a spatially uniform case, without any population imbalance. Next, we turn to the case with spatial uniformity and population imbalance. In the third subsection, the effect of the confining harmonic trap potential is studied. \subsection{Zero population imbalance} \begin{figure}[h] \begin{center} \includegraphics[width=1.00\linewidth]{Fig2.eps} \end{center} \caption{(Color online) (a) Charge gap $E_{\rm C}$ versus filling rate $\tilde{n}$. (b) Particle density $n(i)$, with $\tilde{n}=1.0,\,0.8,\,0.2$. (c) Absolute value of the Fourier-transformed density fluctuations, $|\bar{n}(k)|$ on \mbox{$k$--$\tilde{n}$} plane. In all the figures, the population imbalance is $P=0$, the coupling parameters are $U_{pp}=-10$ and $U_{p_x p_y}=(4/9)U_{pp}$, and the total lattice site number is $L=100$. } \label{phase1} \end{figure} Figure \ref{phase1}(a) shows our DMRG results for the charge gap as a function of the filling rate $\tilde{n}= (1/2L)\sum_{i}n(i)$, where $n(i) = \sum_{p,\sigma}n_{p,\sigma,i}$ is the particle density and $L$ is the total lattice site number. The charge gap is evaluated by \( E_{\rm C} = E(N+\uparrow\downarrow)+E(N-\uparrow\downarrow)-2E(N) \), with the DMRG ground-state energy $E(\cdot)$. In Fig.~\ref{phase1}, the population imbalance $P$ is zero; \( P \equiv \sum_{p,i}(n_{p,\uparrow,i}-n_{p,\downarrow,i}) = 0 \). We find that the charge gap grows drastically when $\tilde{n}$ is close to half filling (i.e., $\tilde{n}\simeq 1.0$). In contrast, when $\tilde{n}$ decreases from $1$, the gap reduces.
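The charge-gap formula above is simple enough to wrap as a one-liner; the energies below are illustrative placeholders, not our DMRG values:

```python
# Charge gap from ground-state energies E(N): the cost of adding
# plus removing one fermion pair (up-down).  Numbers are toy values.
def charge_gap(E_plus, E_minus, E_0):
    """E_C = E(N + pair) + E(N - pair) - 2 E(N)."""
    return E_plus + E_minus - 2.0 * E_0

# A gapped case: adding and removing a pair both cost extra energy.
print(charge_gap(-49.2, -50.0, -50.0))   # -> 0.8
```

A vanishing $E_{\rm C}$ signals gapless charge excitations; a finite positive value signals the gapped phase discussed next.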
These behaviors indicate that in this doubly-degenerate-band attractive 1D model a gapped phase emerges close to half filling. We stress that the charge excitation gap does not open in single-band attractive 1D Hubbard chains~\cite{Machida_attra,Gao}. We show that the emergence of this charge gap can be attributed to the Haldane gap, by mapping the present Hamiltonian onto an interacting spin-$1$ chain. Using second-order perturbation theory~\cite{BW} and the attractive-repulsive transformation~\cite{ARtrans}, we obtain an effective model of Eq.~(\ref{eq:p1d}) in the strong-coupling regime \mbox{$|U_{pp'}|\gg t$}. The attractive-repulsive transformation~\cite{ARtrans} turns Eq.~(\ref{eq:p1d}) into a half-filled system. This transformation is defined by $ c_{p,\uparrow,i} = \bar{c}_{p,\uparrow,i} $ and $ c_{p,\downarrow,i}=(-1)^i \bar{c}_{p,\downarrow,i}^\dagger $. Taking the interaction term $\sum_{p,p',i}h_{p,p',i}^{({\rm U})}$ as the \textit{unperturbed} Hamiltonian, we obtain \be &&H_{\rm eff} = \sum_{p,\sigma,i} \bar{\mathcal{P}}h_{p,\sigma,i}^{({\rm \mu})}\bar{\mathcal{P}} - \!\!\!\!\!\!\!\!\! \sum_{p,\sigma,<i,j>\atop p',\sigma',<i',j'>} \!\!\!\!\!\!\!\!\! V_{p,\sigma,i,j} \, H_{0}^{-1} \, V_{p^{\prime},\sigma^{\prime},i^{\prime},j^{\prime}}^{\dagger}, \lb{eq:BW} \ee with \( V_{p,\sigma,i,j} = \bar{\mathcal{P}}h_{p,\sigma,i,j}^{({\rm t})}\bar{\mathcal{Q}} \).
The operators $\bar{\mathcal{P}}$ and $\bar{\mathcal{Q}}$ are, respectively, the projectors onto the subspaces, \be \mathcal{\bar{H}}_{P}&=&\otimes_{i} \big{\{} |\bar{\uparrow},\bar{\uparrow}\rangle\,, |\bar{\downarrow},\bar{\downarrow}\rangle\,, (|\bar{\downarrow},\bar{\uparrow}\rangle+ |\bar{\uparrow},\bar{\downarrow}\rangle)/\sqrt{2} \big{\}} \\ \mathcal{\bar{H}}_{Q}&=&\otimes_{i} \big{\{} |\bar{\uparrow},\bar{0}\rangle\,, |\bar{0},\bar{\uparrow}\rangle\,, |\bar{\downarrow},\bar{0}\rangle\,, |\bar{0},\bar{\downarrow}\rangle\,, \nn \\ &&\quad|\bar{\uparrow},\bar{\uparrow}\bar{\downarrow}\rangle\,, |\bar{\uparrow}\bar{\downarrow},\bar{\uparrow}\rangle\,, |\bar{\downarrow},\bar{\uparrow}\bar{\downarrow}\rangle\,, |\bar{\uparrow}\bar{\downarrow},\bar{\downarrow}\rangle \big{\}}\,. \ee Here, $|\bar{\cdot},\bar{\cdot}\rangle$ means $|\bar{\cdot},\bar{\cdot}\rangle = |\bar{\cdot}\rangle_{p_{x}} |\bar{\cdot}\rangle_{p_{y}}$ and the ket vector $|\bar{0}\rangle_{p_{x(y)}}$ is defined by $\bar{c}_{p_{x(y)},\sigma,i}|\bar{0}\rangle_{p_{x(y)}}=0$. Equation~(\ref{eq:BW}) can be rewritten in terms of the pseudo-spin-$1$ operators $\rho_{i}^{(\pm)}$ and $\rho_{i}^{(z)}$\,. The effective low-energy Hamiltonian of the system is thus reduced to a 1D pseudo-spin-$1$ chain, \be H_{\rm eff}&=& J_{\rm ex}\sum_{<i,j>}\Big{ [} \rho_{i}^{(z)}\rho_{j}^{(z)}- \frac{1}{2}(\rho_{i}^{(+)}\rho_{j}^{(-)}+\rho_{i}^{(-)}\rho_{j}^{(+)}) \Big{ ]}\nn \\ &&-\sum_{i}2\bar{\mu}\rho_{i}^{z}\,,\lb{eq:xxz} \ee with $J_{\rm ex}=2t^2/\left(|U_{pp}|+|U_{p_x p_y}|\right)$\,. Thus, we find that the charge gap (i.e., the pseudo-spin gap) opens, in accordance with Haldane's conjecture\,\cite{Haldane}. Next, we show another evidence of the Haldane phase in our system. In the open boundary condition, it is well known that the Haldane phase forms a free half-spin state near the boundaries. Figure \ref{phase1}(b)-1 shows that a staggered charge density modulation occurs and decays exponentially toward the bulk region.
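The gapped spectrum of the effective pseudo-spin-$1$ chain, Eq.~(\ref{eq:xxz}), can be checked directly by exact diagonalization on a small ring; a staggered rotation on one sublattice maps it onto the antiferromagnetic spin-$1$ Heisenberg chain, so a finite excitation gap (the Haldane gap) is expected. The sketch below (with $J_{\rm ex}=1$, $\bar{\mu}=0$, and an illustrative size $L=6$) is an independent toy check, not our DMRG calculation:

```python
import numpy as np

# Effective chain H = J * sum_i [ Sz_i Sz_{i+1}
#   - (1/2)(S+_i S-_{i+1} + S-_i S+_{i+1}) ] on a periodic ring.
L, J = 6, 1.0
Sp = np.sqrt(2.0) * np.diag([1.0, 1.0], k=1)   # spin-1 raising operator
Sm = Sp.T                                      # spin-1 lowering operator
Sz = np.diag([1.0, 0.0, -1.0])

def two_site(op_a, op_b, i, j):
    """Embed op_a on site i and op_b on site j of the L-site ring."""
    ops = [np.eye(3)] * L
    ops[i], ops[j] = op_a, op_b
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

H = np.zeros((3**L, 3**L))
for i in range(L):
    j = (i + 1) % L
    H += J * (two_site(Sz, Sz, i, j)
              - 0.5 * (two_site(Sp, Sm, i, j) + two_site(Sm, Sp, i, j)))

E = np.linalg.eigvalsh(H)
gap = E[1] - E[0]   # finite-size estimate of the Haldane gap
print(gap)
```

Even at this small size the gap is clearly finite, of the order of the known Haldane gap $\simeq 0.41 J_{\rm ex}$ in the thermodynamic limit.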
This behavior corresponds to the $S=1/2$ edge state~\cite{AKLT,Miyashita}. Thus, a free half spin emerges as a free fermion-pair particle near the edges. This free fermion-pair particle induces a \textit{gapless} charge excitation when the system is at half filling. Figure \ref{phase1}(a) shows this behavior; the charge gap occurs right below half filling, whereas a gapless behavior is found at $\tilde{n}=1.0$. We stress that the gapless behavior at $\tilde{n}=1.0$ comes from the edge contribution. Now, we study the case when the filling rate is much lower than half filling. Figures \ref{phase1}(b)-2 and (b)-3 show that the periodic oscillations dominate over the whole spatial region, not only the boundaries, when going below half filling. We obtain the charge density wave (CDW) below half filling. The spatial periodicity indicates the presence of the Luther-Emery phase. If the Luther-Emery phase occurs, the periodicity of the CDW should be characterized by the Fermi wave vector of the equivalent spinless fermions~\cite{LL1,1Dbook}, $2k_{\rm F}=2\pi(\rho-|\sum_{i}\rho_i^{(z)}|/L)$. Here, $\rho$ is the pseudo-spin length (i.e., $\rho=1$). Let us calculate the Fourier-transformed density fluctuations, \( \bar{n}(k)= \sum_j [n(j)-2\tilde{n}] e^{ikj}/\sqrt{L} \). Figure \ref{phase1}(c) shows the absolute value \( |\bar{n}(k)| \), varying $k$ and $\tilde{n}$. We find two kinds of peaks, a strong peak around $(k,\tilde{n})\simeq (3.14,1.0)$ caused by the staggered CDW (edge states) and a peak consistent with the prediction of the Luttinger theory, $k=2k_{\rm F}=2\pi\tilde{n}$\,. The latter peak indicates the emergence of the Luther-Emery phase below half filling. In other words, our model below half filling behaves like an attractive 1D Hubbard chain.
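As a minimal illustration of how the CDW wave vector is read off, the sketch below builds a synthetic density profile with the period predicted by the Luttinger theory and locates the peak of $|\bar{n}(k)|$; the profile and its amplitude are toy values, not DMRG data:

```python
import numpy as np

# Synthetic CDW profile n(j) = 2*n_tilde + A*cos(2 k_F j) with
# 2 k_F = 2*pi*n_tilde; the Fourier-transformed fluctuation
# n_bar(k) = sum_j [n(j) - 2*n_tilde] e^{ikj} / sqrt(L)
# should peak exactly at k = 2 k_F.
L, n_tilde = 100, 0.2
j = np.arange(L)
n = 2.0 * n_tilde + 0.05 * np.cos(2.0 * np.pi * n_tilde * j)

k = 2.0 * np.pi * np.arange(L // 2 + 1) / L      # momenta in [0, pi]
n_bar = np.array([np.sum((n - 2.0 * n_tilde) * np.exp(1j * kk * j))
                  for kk in k]) / np.sqrt(L)

k_peak = k[np.argmax(np.abs(n_bar))]
print(k_peak)   # -> 2*pi*0.2, approximately 1.2566
```

The same peak-locating procedure applied to the DMRG density profiles yields the $k=2k_{\rm F}=2\pi\tilde{n}$ line in Fig.~\ref{phase1}(c).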
\subsection{Nonzero population imbalance} \begin{figure}[htbp] \begin{center} \includegraphics[width=1.00\linewidth]{Fig3.eps} \end{center} \caption{(Color online) Spatial distributions of the particle density $n(i)$ (solid line) and the spin density $m(i)$ (dashed line), with filling rates (a) $\tilde{n}=0.99$, (b) $\tilde{n}=0.6$, (c) $\tilde{n}=0.4$, and (d) $\tilde{n}=0.2$. In all figures, the population imbalance is fixed at $P=12$. Other physical parameters are the same as in Fig.~\ref{phase1}. } \label{open2} \end{figure} We study the effects of population imbalance in a spatially uniform case. Figure \ref{open2} shows the spatial distributions of the particle density $n(i)$ and the spin density \( m(i) = \sum_{p}(n_{p,\uparrow,i}-n_{p,\downarrow,i}) \), for fixed population imbalance $P=12$ and various filling rates. At higher filling rates ($\tilde{n}=0.99,\,0.6$), we obtain a spin density wave, as seen in Figs.~\ref{open2}(a,b) (dashed lines). We find that the spatial period is characterized by \( 2\Delta k_{{\rm F},p} = \pi P/L \) (i.e., \( 2\Delta k_{{\rm F},p}L/ 2\pi = 6 \)), with the difference of the $p$-orbital Fermi wave vectors \( \Delta k_{{\rm F},p} = (\pi/L)\sum_{i}( n_{p,\uparrow,i} - n_{p,\downarrow,i} ) \). The density profile depends more sensitively on the filling rate. Right below half filling ($\tilde{n}=0.99$), a uniform density profile is found in the bulk region, and a staggered CDW occurs at the edges. These results are similar to the case without population imbalance; the charge gap opens in the bulk region, while a free half-spin state may induce gapless charge excitations near the boundaries. When the filling rate decreases slightly ($\tilde{n}=0.6$), a small spatial modulation is found over the whole spatial region. At much lower filling rates ($\tilde{n}=0.4,\,0.2$), we find a drastic effect of the population imbalance. Figures \ref{open2}(c,d) show a phase separation of polarized components.
The small spatial modulation of the particle density for $\tilde{n}=0.6$ [Fig.~\ref{open2}(b)] is regarded as a precursory phenomenon of this phase separation. Decreasing the filling rate enhances the amplitude of the density-profile oscillation. Then, the low-particle-density regions are created for lower filling rates~\cite{note1}. \begin{figure}[h] \begin{center} \includegraphics[width=1.00\linewidth]{Fig4.eps} \end{center} \caption{(Color online) Schematic diagram of hopping processes of unpaired fermions (single up-arrow) in fermion-pair particles (pair of up- and down-arrow). } \label{2exchange} \end{figure} To clarify the origin of the phase separation, we focus on the kinetic energy of unpaired fermions. Applying first-order perturbation theory to Eq.~(\ref{eq:p1d}), we obtain \be H_{\rm K}=-t\sum_{p,\sigma}\sum_{<i,j>} \mathcal{P}_i\mathcal{P}_j h_{p,\sigma,i,j}^{({\rm t})} \mathcal{P}_i\mathcal{P}_j . \ee The operator $\mathcal{P}_i$ is the projector onto the subspace spanned by the nonzero spin-imbalance states and the pseudo-spin-$1$ states at the spatial site $i$, \be \mathcal{H}^{(i)}_{P>0,\,\rho=1} &=& \Big{\{} |\uparrow,\uparrow\downarrow\rangle\,, |\uparrow\downarrow,\uparrow\rangle\,, |\uparrow,0\rangle\,, |0,\uparrow\rangle \,, |\uparrow,\uparrow\rangle , \nn \\ && \quad |0,0\rangle\,, |\Psi_{+}\rangle\,, |\uparrow\downarrow,\uparrow\downarrow\rangle \Big{\}}\,, \ee with \mbox{ \( |\Psi_{+}\rangle = (|\uparrow\downarrow,0\rangle+ |0,\uparrow\downarrow\rangle)/\sqrt{2} \)}. The ket vector symbol $|\cdot,\cdot\rangle$ means $|\cdot,\cdot\rangle = |\cdot\rangle_{p_{x}} |\cdot\rangle_{p_{y}}$, with \mbox{$c_{p_{x(y)},\sigma,i}|0\rangle_{p_{x(y)}}=0$}. Figure \ref{2exchange} shows hopping processes of the unpaired fermion (single up-arrow) embedded in the continuum formed by the fermion-pair particles (pair of up- and down-arrows).
If a low fermion-pair-particle-density region spreads, as seen in case I of Fig.~\ref{2exchange}(b), the transfer amplitude of the unpaired fermions is $-t$. This fact is confirmed by, for example, \mbox{ \( \,_{i_{\rm I}+1}\langle \uparrow,0 |\,_{i_{\rm I}}\langle 0,\uparrow| H_{\rm K} |\uparrow,\uparrow\rangle_{i_{\rm I}}|0,0\rangle_{i_{\rm I}+1} = -t \)}. Similarly, in a high fermion-pair-particle-density region [see case \mbox{II} in Fig.~\ref{2exchange}(b)], the transfer amplitude is $-t$, since, for example, \mbox{ \( \,_{i_{\rm II}+1}\langle \uparrow,\uparrow\downarrow | \,_{i_{\rm II}}\langle \uparrow\downarrow,\uparrow\downarrow| H_{\rm K} |\uparrow,\uparrow\downarrow\rangle_{i_{\rm II}} |\uparrow\downarrow,\uparrow\downarrow\rangle_{i_{\rm II}+1} =-t \)}. In contrast, when the fermion-pair-particle density is intermediate [e.g., a CDW-like configuration, as in case \mbox{III} of Fig.~\ref{2exchange}(b)], the transfer amplitude changes, since, for example, \mbox{ \( \,_{i_{\rm III}+1} \langle \Psi_{+} | \,_{i_{\rm III}} \langle \uparrow,0 | H_{\rm K} |0,0\rangle_{i_{\rm III}} |\uparrow,\uparrow\downarrow\rangle_{i_{\rm III}+1} = -t/\sqrt{2} \). } Thus, the unpaired fermions prefer either low or high fermion-pair-particle-density regions. Now, we apply the above arguments to our numerical results. The second-order perturbation terms in Eq.~(\ref{eq:xxz}) vanish as $J_{\rm ex} \to 0$. Furthermore, since a lower filling ($\tilde{n}\ll1$) corresponds to a larger pseudo-magnetic field $2\bar{\mu}$, the pseudo spin-spin interaction of Eq.~(\ref{eq:xxz}) is irrelevant to the total energy. Thus, the kinetic energy of the unpaired fermions is predominant for a strong attractive interaction $|U_{pp}|\gg t$ and a low filling rate ($\tilde{n} \ll 1$). The results shown in Fig.~\ref{open2} correspond to the case below half filling (i.e., a low filling case). 
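The $1/\sqrt{2}$ suppression of the last matrix element comes entirely from the weight of the final on-site configuration in the superposition $|\Psi_{+}\rangle$. A minimal sketch (ignoring fermionic sign conventions, which here only enter through the overall factor of $-t$) reproduces the amplitudes quoted above:

```python
from math import sqrt

t = 1.0  # hopping amplitude in units of t

def overlap(bra, ket):
    """Inner product of states given as {configuration: coefficient} dicts."""
    return sum(c * ket.get(cfg, 0.0) for cfg, c in bra.items())

# On-site configurations written as (p_x content, p_y content).
psi_plus = {('ud', '0'): 1 / sqrt(2), ('0', 'ud'): 1 / sqrt(2)}

# Case III: hopping the p_x up-fermion from site i+1 onto site i leaves
# site i+1 in |0, up-down>; the amplitude is -t times the overlap of that
# configuration with the final state <Psi_+|.
amp_III = -t * overlap(psi_plus, {('0', 'ud'): 1.0})

# Cases I and II: the final on-site configuration is a single basis state,
# so the overlap is 1 and the amplitude is simply -t.
amp_I = -t * overlap({('u', '0'): 1.0}, {('u', '0'): 1.0})

print(amp_I, amp_III)  # -t and -t/sqrt(2)
```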
Therefore, from the consideration of the kinetic energy, the unpaired fermions in this figure prefer the low fermion-pair-particle-density regions. In other words, the spin imbalance excludes the fermion-pair particles and leads to a locally spin-polarized state. This process can be regarded as a double-exchange interaction~\cite{double exchange1,double exchange2} between the fermion-pair particles. \subsection{Harmonic trap potential} \begin{figure}[h] \begin{center} \includegraphics[width=1.00\linewidth]{Fig5.eps} \end{center} \caption{(Color online) Spatial distributions of the particle density $n(i)$ (solid line) and the spin density $m(i)$ (dashed line) in a harmonic trap potential, with different population imbalances $P=0$, $20$, $40$. The trap potential strength is $V/t=0.4$. The total particle number is $180$. The upper panels (a1)--(c1) show the results for the present double degenerate $p$-orbital 1D chain. For comparison, we show in the lower panels (a2)--(c2) the results for zero inter-orbital interaction ($U_{p_xp_y}=0$), i.e., a single-band attractive Hubbard chain. } \label{trap_V=0.4} \end{figure} We now take the effect of the harmonic trap potential into account. A trap potential is typically employed in atomic-gas experiments to prevent the atoms from escaping. When the harmonic trap is considered, we simply add a potential term to our model, so that the total Hamiltonian is \mbox{$H+\sum_{p,\sigma,i}V_{\rm ho}(i)n_{p,\sigma,i}$}, with \mbox{$V_{\rm ho}(i)=V[2/(L-1)]^2 [i-(L+1)/2]^2$}. We obtain the ground state of this modified Hamiltonian using the DMRG method. Figure \ref{trap_V=0.4}(a1) shows the results without population imbalance ($P=0$). The significant feature of the trapped system is the emergence of the Mott core. This result contrasts with the case of single-band attractive Fermi gases, as seen in Fig.~\ref{trap_V=0.4}(a2), where there is no Mott-core structure. 
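The trap profile $V_{\rm ho}(i)$ above is normalized so that it vanishes at the trap center and reaches $V$ at the outermost sites; a short sketch (with a hypothetical odd chain length $L$, which is not fixed in the text) makes this explicit:

```python
import numpy as np

V, L = 0.4, 151   # V/t = 0.4 as in Fig. 5; L is a hypothetical odd chain length

# V_ho(i) = V * [2/(L-1)]^2 * [i - (L+1)/2]^2 for sites i = 1..L.
i = np.arange(1, L + 1)
V_ho = V * (2.0 / (L - 1))**2 * (i - (L + 1) / 2.0)**2

center = (L - 1) // 2   # array index of the site i = (L+1)/2
print(V_ho[0], V_ho[center], V_ho[-1])  # V at both edges, 0 at the center
```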
This structure implies that the Haldane insulator phase is formed at half filling $n(i)=2$. From the viewpoint of Eq.~(\ref{eq:xxz}), the effect of the trap potential is regarded as a spatially dependent pseudo-magnetic field, $\bar{\mu}(i)=\bar{\mu}-V(i)$. Thus, the Mott core can be regarded as a ``magnetization'' plateau associated with the opening of the Haldane gap. The edge state of the Haldane phase is not observed in our trapped system; the smooth change of this magnetic field leads to the disappearance of the staggered CDW. Below half filling [$n(i)<2$], the present system is expected to show the CDW oscillation discussed in Sec.~\ref{sec:results}\,A. However, the density oscillation is not sharply observed in Fig.~\ref{trap_V=0.4}(a1), whereas the 1D single-band case shows a clear CDW oscillation [Fig.~\ref{trap_V=0.4}(a2)]. Therefore, the CDW order in the double degenerate $p$-orbital system is weaker than in the single-band case. Next, we study the results in the presence of population imbalance. Figures \ref{trap_V=0.4}(b1,c1) show that the Haldane insulator phase occurs in the same way as for zero population imbalance. Again, there is no Mott-core structure, and therefore no Haldane insulator phase, in the single-band attractive Hubbard chain [Figs.~\ref{trap_V=0.4}(b2,c2)]. The effect of the population imbalance appears as a phase separation of the polarized components at the edges of the trap potential [Figs.~\ref{trap_V=0.4}(b1,c1)]. As discussed in Sec.~\ref{sec:results}\,B, the kinetic energy drives the unpaired fermions toward low fermion-pair-particle-density regions. As a result, the polarized components concentrate at the edges of the trap potential (the low-density regions). The phase separation of the polarized components is also observed in single-band attractive Fermi gases with a trap potential~\cite{Feiguin,Tezuka,phase separation1,phase separation2} [see Fig.~\ref{trap_V=0.4}(c2) as well]. 
However, the phase separation in the single-band case is less pronounced than in the double degenerate $p$-orbital system, as seen in Figs.~\ref{trap_V=0.4}(b1,b2). In the $p$-orbital Fermi gas, the phase separation is strong and is easily induced by the double-exchange interaction between the fermion-pair particles. Summarizing the results for the trapped system, we can suggest direct and concrete experimental checks of our predictions. The Mott-core structure is induced by the Haldane gap and is detectable via measurement of the particle density profile. The phase separation of the polarized states is identified by comparing the spin density at the trap center with that at the trap edges. One drawback of the harmonic trap is that it erases the edge states associated with the Haldane phase. The realization of a box trap potential\,\cite{box trap} may allow such states to be captured. In addition to measuring the $p$-band Mott core and the edge states, observing the string order parameter would give a strong signature of the Haldane insulator phase. We suggest that a single-site addressing technique (see, e.g., Ref.~\cite{string order}) allows a direct check of this quantity. Thus, calculating the string order parameter on the $p$-band Mott core is an interesting future work. \section{Summary} \label{sec:summary} We explored the quantum phases in a 1D $p$-orbital Fermi gas with attractive interaction, via DMRG calculations and a mapping onto an effective spin-$1$ model. Tuning the filling rate and the population imbalance induces different phases, including the Haldane insulator phase, the Luther-Emery phase, and the phase separation of the polarized components. We also examined the effect of the harmonic trap potential. We found the emergence of the Mott-core structure (i.e., the Haldane insulator phase), despite the attractive interaction. Moreover, the strong phase separation induced by the population imbalance appears at the edges of the trap potential. 
Thus, the trapped system allows the direct verification of our predictions. \begin{acknowledgments} We wish to thank Y.~Nagai for useful discussions. This research was partially supported by a Grant-in-Aid for Scientific Research from JSPS (Grant No. 23500056). This work was partially supported by the Strategic Programs for Innovative Research, MEXT, and the Computational Materials Science Initiative (CMSI), Japan. We are indebted to T. Toyama for his support. The numerical work was partially performed on Fujitsu BX900 in JAEA. We acknowledge support from the CCSE staff. \end{acknowledgments}
\section{Introduction} We consider the problem of training a machine learning model when the true evaluation metric is difficult to optimize on the training set. This general problem arises in many flavors and in different scenarios. For example, we may have a black-box metric whose mathematical expression is unknown or difficult to approximate with a convex training loss. The latter is particularly true with non-decomposable evaluation metrics, such as the F-measure or ranking metrics like Precision@$K$, where it is not straightforward to construct a differentiable objective that closely approximates the metric. Another example is when the training labels are only a proxy for the true label. This arises in problems where one has access to cheap-to-acquire noisy labels, such as clicks, but wishes to optimize for a more expensive label, such as whether users rate a result as good. If we have access to a small auxiliary validation set with true labels, how can this information be used to influence the training loss? Similar examples also arise when the training data has noisy features and we have a small validation set with clean features, or in machine learning fairness problems where the training data contains group-dependent noise, but we may have access to a small set of auxiliary clean data. In many of the above scenarios, one wishes to optimize a black-box metric $M$ over $d$ model parameters, but does not have access to explicit gradients for $M$, nor is it practical to obtain reliable gradient estimates when $d$ is large. We provide a general solution to this problem by choosing $K \ll d$ convex surrogate losses, and expressing $M$ as an \textit{unknown} monotonic function $\psi: \mathbb{R}_+^K \rightarrow \mathbb{R}$ of the $K$ surrogates. We then reformulate the original problem as an optimization of $\psi$ over the $K$-dimensional surrogate space. 
The choice of surrogates can be as simple as the hinge losses on positive and negative samples, which should work well for metrics like the F-measure, or the surrogates can be chosen to be a family of different convex losses to handle robustness to training noise given a small set of clean validation samples. Our strategy is to estimate gradients for the unknown function $\psi$ with respect to its $K$ inputs by measuring changes in the metric $M$ and the $K$ surrogates for different perturbations on the model, and use the estimates for $\nabla\psi$ to perform \textit{projected gradient descent} over the $K$-dimensional surrogate space. We show how the projection step can be implemented inexactly but with convergence guarantees by solving a convex problem in the original $d$ parameters. We are thus able to \textit{adaptively} combine the $K$ surrogates to align well with the target metric $M$. The main contributions of this paper include: \vspace{-8pt} \begin{enumerate} \itemsep-0.1em \item A novel formulation that poses the problem of optimizing a black-box metric as a lower-dimensional problem in a surrogate space. \item A projected gradient descent based training algorithm using finite-differences and local linear interpolations to estimate gradients. \item Theoretical results showing convergence to a stationary point under smoothness assumptions on $\psi$. \item Experiments showing that the proposed approach works as well as methods that take advantage of the form of the metric if known, but can give substantial gains when the metric truly is a black-box. \end{enumerate} \section{Related Work} There has been much work on directly optimizing specialized classes of evaluation metrics during training. 
These include approaches that relax the metric using convex surrogates \citep{Joachims:2005,Kar+14,Narasimhan+15b,Kar+16}, plug-in or post-shift methods that tune a threshold on estimates of class probabilities \citep{Ye+12,Koyejo+14,Narasimhan+14,Yan+18}, reduction approaches that formulate a sequence of cost-sensitive learning tasks \citep{Parambath+14,Narasimhan+15,Alabi+18, Narasimhan18}, and approaches that use constrained optimization and game-based formulations \citep{Eban+17,Narasimhan+19}. However, all the above approaches require the evaluation metric to be available in closed-form. Of these, the closest to ours is the approach of \citet{Narasimhan+15}, which reformulates the learning problem as an optimization problem over the space of confusion matrices. To ensure the constraint set is convex, this approach requires the use of stochastic classifiers, and the theoretical guarantees assume that the metrics are convex or pseudo-convex in the confusion matrix. In contrast, we do not require stochastic classifiers, and can handle general metrics. Recently, there has been some work on optimizing evaluation metrics that are only available as a black-box. \citet{Zhao+19} approximate black-box metrics with a weighted training loss where the weighting function acts on a low-dimensional embedding of each example, and a validation set is used to estimate the parameters of the example-weighting function. A related approach by \citet{Ren+18} uses meta-gradient descent to re-weight the training examples to handle training set biases and label noise. In contrast we model the unknown metric as a function of surrogate losses, and directly estimate the metric gradients, rather than estimating a weighting function on each example. \citet{Huang+19} also propose jointly adaptively learning a metric with the model training. They use a parametric form for their learned metric, whereas we nonparametrically estimate the metric gradients. 
They use reinforcement learning to align the training objective's optimum with that of the true metric, whereas we use gradient descent over a surrogate space. They do not provide any theoretical guarantees. \citet{Grabocka+19} express the metric as a set function that maps each prediction to an embedding and maps the average embedding across all examples to the predicted metric. They jointly optimize the parameters for the loss and the model. This approach is similar to ours in that it expresses the metric as a function on surrogate losses, and attempts to learn that function. However, our approach is different in two key points. First, we take as \emph{given} known-useful surrogate losses, whereas they \emph{learn} decomposable surrogate mappings from scratch. Second, they parameterize their surrogate functions and final mapping as neural networks, whereas we nonparametrically adaptively estimate the local gradients. They provide limited theoretical guarantees. Similar to \citet{Grabocka+19}, the work of \citet{Wu+18} also learns a parameterized metric (e.g., as a neural network). An auxiliary parametric ``teacher'' model is used to adaptively learn the parameters for the metric that will maximize performance on a validation set. They do not provide theoretical guarantees. \section{Problem Setup and High-level Approach} \label{sec:formulation} Let ${\mathcal{X}}$ be some instance space and ${\mathcal{Y}}$ be the label space. Let $f_\theta: {\mathcal{X}} \rightarrow \mathbb{R}$ be a model parametrized by $\theta \in \R^d$ that outputs a score $f_\theta(\mathbf{x})$ for instance $\mathbf{x} \in {\mathcal{X}}$. One can use this score to make a prediction; e.g., for binary classification problems, one predicts $\textrm{\textup{sign}}(f_\theta(\mathbf{x}))$. We measure performance w.r.t.\ a test distribution $D$ over ${\mathcal{X}} \times {\mathcal{Y}}$. 
We consider two scenarios, one where we are provided a training sample $S$ of $n$ examples directly drawn from $D$, and the other where the training sample $S$ is drawn from a noisy distribution, and we are provided a smaller clean validation set from $D$. The performance of $f_\theta$ is evaluated by a metric $M: \R^d \rightarrow [0,1]$ computed on $D$, where $M$ may be as simple as the error rate $M_{err}(\theta) = \mathbf{E}_{(\mathbf{x}, y)\sim D}\left[\mathbf{1}\{yf_\theta(\mathbf{x}) \leq 0\}\right]$ (or an estimate), or $M$ may be a complex, non-decomposable metric such as Precision@$K$ that depends on the scores and the distribution in a more intricate manner. We consider settings where the form of $M$ is unknown, and the metric is available only as a black-box, i.e.,\ for a given $\theta \in \R^d$, we can evaluate $M(\theta)$. The goal is to learn a good $f_\theta$ by solving: \begin{equation} \min_{\theta \in \R^d}\, M(\theta). \label{eq:opt} \vspace{-5pt} \end{equation} \subsection{Reformulation with Surrogates} \label{sec:re-formulation} To optimize (\ref{eq:opt}), one could directly estimate gradients of $M$ with respect to the $d$ parameters, but $d$ is usually too large for that to be practical. To relax (\ref{eq:opt}) to a more tractable problem, we take as given $K$ \textit{convex} surrogate loss functions $\ell_1, \ldots, \ell_K \colon \R^d \rightarrow \mathbb{R}_+$ where $K \ll d$, and express $M$ as an \textit{unknown} non-decreasing function of the $K$ surrogates, with an \textit{unknown} slack: \begin{align*} M(\theta) = \psi(\ell_1(\theta), \ldots, \ell_K(\theta)) \,+\, \epsilon(\theta), \end{align*} where $\psi\colon \mathbb{R}_+^K \rightarrow [0,1]$ is \textit{monotonic} but possibly non-convex, and the slack $\epsilon\colon \mathbb{R}^d \rightarrow [-1,1]$ determines how well the metric can be approximated by the $K$ surrogates. Note that this decomposition of $M$ is not unique. 
Our results hold for any such decomposition, but to enable a tighter analysis we consider a $\psi$ for which the associated worst-case slack over all $\theta$, i.e.,\ $\max_{\theta\in \mathbb{R}^d} |\epsilon(\theta)|$, is minimized. Here are examples of target metrics and convex surrogates. \begin{example}[\textbf{Classification Metrics}] \label{exmp:gmean} \emph{ Consider the task of minimizing the G-mean metric given by $ 1 - \sqrt{\textrm{\textup{TPR}} \times \textrm{\textup{TNR}}}, $ where TPR is the true positive rate and TNR is the true negative rate. This metric promotes high accuracies on both the positive and negative class and is popular for classification tasks where there is class imbalance \citep{Daskalaki+06}. Possible surrogates for this metric include the average logistic or hinge losses on the positive and negative examples, as these serve as proxies for the TPR and TNR. It is reasonable to assume a monotonic $\psi$ here, since lower surrogate values tend to produce better TPR and TNR values, and in turn lower G-means. The F-measure is another popular metric that can be written as a monotonic function of the TPR and TNR \cite{Koyejo+14}, and there again the average positive and negative losses would make good surrogates.} \end{example} \begin{example}[\textbf{Misaligned Training Data}] \emph{ Consider minimizing a metric using a training dataset that is noisy or misaligned with the test distribution, while we have access to a small validation set with clean data. The metric $M$ here is evaluated on the clean validation set, and the surrogates $\ell_1, \ldots, \ell_K$ might be convex $l_p$ losses on the training data with different values of $p>1$ to tune the noise robustness. In this case, the precise mathematical relationship $\psi$ between the validation metric and the surrogates is unknown. 
} \end{example} \begin{example}[\textbf{ML Fairness Problems}] \emph{ For blackbox ML fairness metrics, good surrogates might be logistic losses on the positive and negative samples for different groups. } \end{example} \begin{example}[\textbf{Ranking Metrics}] \emph{ Consider optimizing a ranking metric such as precision@$K$. While there are different convex surrogates available for this metric \cite{Joachims05, Narasimhan+15b}, the surrogate that performs the best can vary with the application. We have also observed in practice that sometimes setting a different value of $K$ in the training loss produces a better precision@$K$ during evaluation time. The proposed set-up gives us a way to combine multiple available ranking surrogates (possibly with different $K$ values) to align well with the test metric. } \end{example} \subsection{High-level Approach} Let ${\mathcal{L}} := \{(\ell_1(\theta), \ldots, \ell_K(\theta))\,|\,\theta \in\mathbb{R}^d\}$ be the set of feasible surrogate profiles. We then seek to approximate \eqref{eq:opt} by ignoring the slack $\epsilon$ and posing the problem as an optimization of $\psi$ over the $K$-dimensional set ${\mathcal{L}}$: \begin{equation} \min_{\boldsymbol{\ell} \in {\mathcal{L}}}\, \psi(\boldsymbol{\ell}). \label{eq:hippo} \end{equation} Our high-level idea is to solve this re-formulated problem by applying \textit{projected gradient descent} over ${\mathcal{L}}$. However, there are many challenges in implementing this idea. First, while each $\ell_k$ is convex, the space of feasible surrogates ${\mathcal{L}}$ is not necessarily a convex set. Second, the function $\psi$ is unknown to us, and therefore we need to estimate gradients for $\psi$ with only access to the metric $M$ and the surrogates $\boldsymbol{\ell}$. Third, we would need to implement projections onto the $K$-dimensional surrogate space without explicitly constructing this set. 
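As a concrete illustration of Example \ref{exmp:gmean}, the following sketch evaluates the G-mean metric on synthetic scores together with two candidate surrogates, the average hinge losses on the positive and negative examples; the monotonic relationship $\psi$ between the two is an assumption of the setup, not something the code verifies:

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.choice([-1, 1], size=1000)              # labels in {-1, +1}
scores = y + rng.normal(scale=1.5, size=1000)   # synthetic model scores f_theta(x)

# Black-box-style metric: G-mean = 1 - sqrt(TPR * TNR).
tpr = np.mean(scores[y == 1] > 0)
tnr = np.mean(scores[y == -1] <= 0)
M = 1.0 - np.sqrt(tpr * tnr)

# Two convex surrogates: average hinge losses on the positive and negative
# examples, serving as proxies for 1 - TPR and 1 - TNR, respectively.
ell_pos = np.mean(np.maximum(0.0, 1.0 - scores[y == 1]))
ell_neg = np.mean(np.maximum(0.0, 1.0 + scores[y == -1]))

print(M, ell_pos, ell_neg)
```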
\vspace{-3pt} \section{Surrogate Projected Gradient Descent} \label{sec:pgd} We now explain how we tackle the above challenges. \vspace{-5pt} \subsection{Convexifying the Surrogate Space} To turn (\ref{eq:hippo}) into a problem over a convex domain, we define the \textit{epigraph} of the convex surrogate function profiles: \[ {\mathcal{U}} := \{\mathbf{u} \in \mathbb{R}_+^K~|~ \mathbf{u} \geq \boldsymbol{\ell}(\theta) ~\text{for some}~ \theta \in \R^d\}\, . \] \begin{observation} \label{obs:u-convex} ${\mathcal{U}}$ is a convex superset of ${\mathcal{L}}$. \end{observation} \vspace{-4pt} We then optimize $\psi$ over this $K$-dimensional convex set: \begin{equation} \min_{\mathbf{u} \in {\mathcal{U}}}\, \psi(\mathbf{u}). \label{eq:surrogate-opt} \end{equation} This relaxation preserves the optimizer for \eqref{eq:hippo} because $\psi$ is monotonic and ${\mathcal{U}}$ consists of upper bounds on surrogate profiles in ${\mathcal{L}}$: \begin{observation} \label{obs:u-opt} For any $\mathbf{u}^* \in \argmin{\mathbf{u} \in {\mathcal{U}}}\,\psi(\mathbf{u})$, there exists $\boldsymbol{\ell}^* \in {\mathcal{L}}, \, \boldsymbol{\ell}^* \leq \mathbf{u}^*$, such that $\psi(\boldsymbol{\ell}^*) = \psi(\mathbf{u}^*)$.\vspace{-1pt} \end{observation} \subsection{Projected Gradient Descent over ${\mathcal{U}}$} We then perform projected gradient descent over ${\mathcal{U}}$. We maintain iterates $\mathbf{u}^t$ in ${\mathcal{U}}$, and at each step, (i) estimate the gradient of $\psi$ w.r.t.\ the $K$-dimensional point $\mathbf{u}^t$, (ii) perform a descent step: $\tilde{\mathbf{u}}^{t+1} = \mathbf{u}^{t} - \eta\nabla\psi(\mathbf{u}^t)$, for some $\eta > 0$, and (iii) project $\tilde{\mathbf{u}}^{t+1}$ onto ${\mathcal{U}}$ to get the next iterate $\mathbf{u}^{t+1}$. 
In order to implement these steps without knowing $\psi$, or having direct access to the set ${\mathcal{U}}$, we simultaneously maintain iterates $\theta^t$ in the original parameter space that map to iterates $\mathbf{u}^t \in {\mathcal{U}}$, i.e., for which $\mathbf{u}^t =\boldsymbol{\ell}(\theta^t)$. Now, to estimate gradients without direct access to $\psi$, we measure changes in the $K$ surrogates $\boldsymbol{\ell}(\cdot)$ and changes in the metric $M(\cdot)$ at different perturbations of $\theta^t$ and compute estimates of $\nabla\psi(\mathbf{u}^t)$ based on finite differences or local linear interpolations. To compute projections without direct access to ${\mathcal{U}}$, we formulate a convex optimization problem over the original parameters $\theta$, and show that this results in an over-constrained projection onto ${\mathcal{U}}$. Thus we maintain iterates $(\mathbf{u}^t, \theta^t)$ such that $\mathbf{u}^t = \boldsymbol{\ell}(\theta^t)$, and execute the following at every iteration: $$ \tilde{\mathbf{u}}^{t+1} = \mathbf{u}^t \,-\, \eta\, \text{{gradient}}_\psi(\theta^t;\, M, \boldsymbol{\ell}) \vspace{-5pt} $$ $$ (\mathbf{u}^{t+1}, \theta^{t+1}) = \text{{project}}_{{\mathcal{U}}}(\tilde{\mathbf{u}}^{t+1};\, \boldsymbol{\ell}). $$ Figure \ref{fig:surrogate-pgd} gives a schematic description of the updates. The gradient computation takes the current $\theta^t$ as input and probes $M$ and $\boldsymbol{\ell}$ to return an estimate of $\nabla\psi(\mathbf{u}^t)$. We elaborate on how we estimate gradients in Section \ref{sec:grad-estimation}. The projection computation takes the updated $\tilde{\mathbf{u}}^{t+1}$ as input and returns a point $\mathbf{u}^{t+1}$ in ${\mathcal{U}}$ and an associated $\theta^{t+1}$ such that $\mathbf{u}^{t+1} = \boldsymbol{\ell}(\theta^{t+1})$. We explain this next. 
\begin{figure} \centering \includegraphics[scale=0.25]{plots/pgd.png} \vspace{-6pt} \caption{\textbf{PGD over $K$-dimensional set ${\mathcal{U}}$}.\ `project' performs an over-constrained projection onto ${\mathcal{U}}$. `gradient' probes $M$ and $\boldsymbol{\ell}$ and returns an estimate $\hat{\mathbf{g}}^{t+1} \in \mathbb{R}^K$ for $\nabla \psi$.} \label{fig:surrogate-pgd} \vspace{-8pt} \end{figure} \begin{figure} \centering \begin{tikzpicture}[scale=6.5] \fill[fill=green!20!white] (0,0) -- (-3mm,0mm) arc (-160:-87:3mm); \fill[fill=green!20!white] (-0.1mm,0) rectangle (-3mm,2mm); \fill[fill=green!20!white] (-0.1mm,2mm) rectangle (1mm,-2mm); \draw[line width=0.5mm,color=green] (-3mm,0) arc[radius = 3mm, start angle= -160, end angle= -87]; \draw[line width=0.5mm,color=green] (-3mm,0) -- (-3mm, 2mm); \draw[line width=0.5mm,color=green] (-0.01mm,-1.97mm) -- (1mm,-1.97mm); \draw[->, dashed] (-0.28,-0.05) -- (-0.38,-0.05); \node at (-0.4,-0.05) {${\mathcal{L}}$}; \node at (-1mm,1mm) {${\mathcal{U}}$}; \draw[->] (-4.3mm,-2.5mm) -- (1mm,-2.5mm); \draw[->] (-4.3mm,-2.5mm) -- (-4.3mm,2mm); \node at (1.6mm,-2.5mm) {\scriptsize $\ell_1(\theta)$}; \node at (-4.3mm,2.3mm) {\scriptsize $\ell_2(\theta)$}; \node at (-1.75mm,-1.2mm) {\scriptsize${\mathbf{u}_a}$}; \node at (-2.1mm,-1.3mm)[circle,fill,inner sep=1.5pt]{}; \draw[->] (-2.6mm,-1.95mm) -- (-2.17mm,-1.37mm); \node at (-0.5mm,-1.95mm)[circle,fill,inner sep=1.5pt]{}; \node at (-0.5mm,0.1mm) {\scriptsize$\tilde{\mathbf{u}}_b$}; \draw[->] (-0.5mm,-0.2mm) -- (-0.5mm,-1.85mm); \node at (-0.5mm,-0.2mm)[circle,fill,inner sep=1.5pt]{}; \node at (-0.5mm,-2.3mm) {\scriptsize$\mathbf{u}_b$}; \node at (-2.62mm,-1.95mm)[circle,fill,inner sep=1.5pt]{}; \node at (-3mm,-1.9mm) {\scriptsize$\tilde{\mathbf{u}}_a$}; \end{tikzpicture} \vspace{-5pt} \caption{\textbf{Over-constrained projection.} The space of surrogate profiles ${\mathcal{L}} = \{(\ell_1(\theta), \ell_2(\theta)) \,|\, \theta \in \mathbb{R}^d\}$ is a non-convex set (solid line), and its 
epigraph ${\mathcal{U}} = \{\mathbf{u} \geq \boldsymbol{\ell}\,|\, \boldsymbol{\ell} \in {\mathcal{L}}\}$ is convex (shaded region). For the point $\tilde{\mathbf{u}}_a$ outside ${\mathcal{U}}$, the solution $\mathbf{u}_a$ to \eqref{eq:project} is the same as the exact projection $\Pi(\tilde{\mathbf{u}}_a)$ onto ${\mathcal{U}}$. For the point $\tilde{\mathbf{u}}_b$ inside the set, $\Pi(\tilde{\mathbf{u}}_b) = \tilde{\mathbf{u}}_b$, whereas $\mathbf{u}_b$ is one of many solutions to \eqref{eq:project} on the boundary and with $\mathbf{u}_b \leq \Pi(\tilde{\mathbf{u}}_b)$ in each coordinate. } \vspace{-7pt} \label{fig:project} \end{figure} \subsection{Over-constrained Projection} To implement the projection without explicit access to ${\mathcal{U}}$, we set up an optimization over $\theta$ by penalizing a clipped $L_2$-distance between the surrogate profile $\boldsymbol{\ell}(\theta)$ and $\tilde{\mathbf{u}}^{t+1}$: \begin{align} \theta^{t+1} &\in \argmin{\theta \in \R^d}\,\|\big(\boldsymbol{\ell}(\theta) \,-\, \tilde{\mathbf{u}}^{t+1}\big)_+\|^2 \nonumber \\ \mathbf{u}^{t+1} &= \boldsymbol{\ell}(\theta^{t+1}), \label{eq:project} \end{align} where $(z)_+ := \max\{0, z\}$ is applied element-wise and $\|\cdot\|$ is the $L_2$-norm. Note that we penalize errors in only one direction (i.e., the errors where $\ell_k(\theta) \geq \tilde{u}^{t+1}_k$). This has the advantage that the optimization problem is convex. Moreover, as we show below, \eqref{eq:project} results in an over-constrained projection: any solution $\mathbf{u}^{t+1}$ to \eqref{eq:project} is feasible (i.e.\ is in ${\mathcal{U}}$), and for a monotonic $\psi$, yields a $\psi$-value that is no worse than what we would get with an exact projection. \begin{lemma} Let $\mathbf{u}^+$ be the exact projection of $\tilde{\mathbf{u}}^{t+1} \in \mathbb{R}^K_+$ onto ${\mathcal{U}}$. 
For any solution $\mathbf{u}^{t+1}$ to \eqref{eq:project}, we have $\mathbf{u}^{t+1} \in {\mathcal{U}}$, $\mathbf{u}^{t+1} \leq \mathbf{u}^+$, and for a monotonic $\psi$, $\psi(\mathbf{u}^{t+1}) \leq \psi(\mathbf{u}^+)$. \label{lem:u-project} \end{lemma} \begin{figure} \vspace{-12pt} \begin{algorithm}[H] \caption{Surrogate Projected Gradient Descent} \label{algo:pgd} \begin{algorithmic}[1] \STATE \textbf{Input:} Black-box metric $M$, surrogate loss functions $\ell_1, \ldots, \ell_K:\mathbb{R}^d \rightarrow \mathbb{R}_+$, hyper-parameters: $T, \eta$ \STATE Initialize $\theta^1 \in \R^d, \mathbf{u}^1 = \boldsymbol{\ell}(\theta^1)$ \FOR{$t=1$ {\bfseries to} T} \STATE \textbf{Gradient estimate:}\ Obtain an estimate $\hat{\mathbf{g}}^t$ for gradient $\nabla\psi(\mathbf{u}^t)$ by invoking Algorithms 2 or 3 with inputs $\theta^t$, $M$ and $\ell_1, \ldots, \ell_K$ \STATE \textbf{Gradient update:} $\tilde{\mathbf{u}}^{t+1} = \mathbf{u}^t \,-\, \eta\, \hat{\mathbf{g}}^t$ \STATE \textbf{Over-constrained projection:} Solve: \vspace{-5pt} $$ \theta^{t+1} \in \argmin{\theta \in \R^d}\,\|\big(\boldsymbol{\ell}(\theta) \,-\, \tilde{\mathbf{u}}^{t+1}\big)_+\|^2 \vspace{-5pt} $$ to accuracy $\mathcal{O}\big(\frac{1}{\beta^2 T}\big)$ and set $\mathbf{u}^{t+1} = \boldsymbol{\ell}(\theta^{t+1})$ \ENDFOR \end{algorithmic} \end{algorithm} \vspace{-15pt} \end{figure} Problem \eqref{eq:project} may not have a unique solution. For example, when $\tilde{\mathbf{u}}^{t+1}$ is in the interior of ${\mathcal{U}}$, the exact projection $\mathbf{u}^+$ is the same as $\tilde{\mathbf{u}}^{t+1}$, whereas the solutions to \eqref{eq:project} are the points $\mathbf{u}$ on the boundary of ${\mathcal{U}}$ with $\mathbf{u} \leq \mathbf{u}^+$ (see Figure \ref{fig:project}). As $\psi$ is monotonic, picking any of these solutions for the next iterate does not hurt the convergence of the algorithm. 
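Since the projection step is a smooth convex program in $\theta$, it can be solved with any first-order method. The sketch below uses toy quadratic surrogates standing in for $\boldsymbol{\ell}$ and plain gradient descent standing in for the solver; it illustrates the one-sided (over-constrained) penalty, not the paper's implementation:

```python
import numpy as np

# Toy convex surrogates ell_k(theta) and their Jacobian (hypothetical stand-ins).
def ell(theta):
    return np.array([np.sum(theta**2), np.sum((theta - 1.0)**2)])

def ell_jac(theta):
    # Row k holds d ell_k / d theta.
    return np.stack([2.0 * theta, 2.0 * (theta - 1.0)])

def overconstrained_project(u_tilde, theta, lr=0.05, steps=2000):
    """Minimize ||(ell(theta) - u_tilde)_+||^2 over theta by gradient descent."""
    for _ in range(steps):
        viol = np.maximum(0.0, ell(theta) - u_tilde)  # penalize one side only
        if np.all(viol <= 1e-10):
            break
        theta = theta - lr * 2.0 * viol @ ell_jac(theta)
    return theta, ell(theta)

u_tilde = np.array([0.5, 2.0])
theta_new, u_new = overconstrained_project(u_tilde, np.zeros(3))
# The returned surrogate profile satisfies u_new <= u_tilde element-wise
# (up to solver tolerance), i.e. it is a feasible over-constrained projection.
```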
An outline of the projected gradient descent with this inexact projection is presented in Algorithm \ref{algo:pgd}. One can interpret the algorithm as \textit{adaptively} combining the $K$ surrogates $\ell_k$ to optimize the metric $M$ (see Appendix \ref{app:prox} for the details). \vspace{-2pt} \subsection{Convergence Guarantee} We show convergence of Algorithm \ref{algo:pgd} to a stationary point of $\psi(\boldsymbol{\ell}(\cdot))$. Since we probe $M$ to estimate gradients for $\psi$, the errors in the estimate would depend on how closely $\psi(\boldsymbol{\ell}(\cdot))$ approximates $M$, and in turn on the magnitude of the slack term $\epsilon$. We assume here that the gradient estimation error $\mathbf{E}\left[\|\hat{\mathbf{g}}^t \,-\, \nabla\psi(\boldsymbol{\ell}(\theta^t))\|^2\right]$ at each step $t$ is bounded by a $\kappa_\epsilon \in \mathbb{R}_+$ that depends on the slack $\epsilon$. In Section \ref{sec:grad-estimation}, we present gradient estimates that satisfy this condition. \begin{theorem}[Convergence of Algorithm \ref{algo:pgd}] \label{thm:meta-result} Let $M(\theta) = \psi(\boldsymbol{\ell}(\theta)) + \epsilon(\theta)$, for a $\psi$ that is monotonic, $\beta$-smooth and $L$-Lipschitz, and the worst-case slack $\max_{\theta \in \mathbb{R}^d}|\epsilon(\theta)|$ is the minimum among all such decompositions of $M$. Suppose each $\ell_k$ is $\gamma$-smooth and $\Phi$-Lipschitz in $\theta$ with $\|\boldsymbol{\ell}(\theta)\|\leq G, \, \forall \theta$. Suppose the gradient estimates $\hat{\mathbf{g}}^t$ satisfy $\mathbf{E}\left[\|\hat{\mathbf{g}}^t \,-\, \nabla\psi(\boldsymbol{\ell}(\theta^t))\|^2\right] \leq \kappa_{\epsilon}, ~\forall t \in [T]$ and the projection step satisfies $\|(\boldsymbol{\ell}(\theta^{t+1}) - \tilde{\mathbf{u}}^{t+1})_+\|^2 \leq \min_{\theta \in \R^d}\|(\boldsymbol{\ell}(\theta)- \tilde{\mathbf{u}}^{t+1})_+\|^2 \,+\, \mathcal{O}(\frac{1}{\beta^2 T}), ~\forall t \in [T]$. Set stepsize $\eta = \frac{1}{\beta^2}$. 
Then Algorithm \ref{algo:pgd} converges to an approximate stationary point of $\psi(\boldsymbol{\ell}(\cdot))$: \vspace{-3pt} \begin{align*} \min_{1\leq t\leq T}&\mathbf{E}\left[\|\nabla \psi(\boldsymbol{\ell}(\theta^t))\|^2\right] \le C\bigg( \frac{\beta}{\sqrt{T}} + \sqrt{\kappa_\epsilon} + \sqrt{L}\kappa_\epsilon^{1/4} \bigg), \vspace{-3pt} \end{align*} where the expectation is over the randomness in the gradient estimates, and $C = \mathcal{O}\big(KL\big(\gamma\big(G+\frac{L}{\beta^2}\big)+\Phi^2\big)\big)$. \end{theorem} \begin{rem}[\textbf{Stationary point of $M$}]\, \emph{ When the gradient estimation error $\kappa_\epsilon$ is small and the number of steps $T \to \infty$, the algorithm reaches a model $\theta$ with a small gradient norm $\|\nabla\psi(\boldsymbol{\ell}(\cdot))\|$. If additionally the slack term $\epsilon$ is Lipschitz in $\theta$, then this implies that the algorithm also converges to an approximate stationary point of the metric $M$. } \vspace{-5pt} \end{rem} The proof of Theorem \ref{thm:meta-result} proceeds in two parts. We first show that the algorithm converges to an approximate stationary point of $\psi$ over ${\mathcal{U}}$. For this, we extend recent results \cite{Ghadimi2016} on convergence of projected gradient descent for smooth non-convex objectives. \if 0 \begin{lemma}[Convergence in ${\mathcal{U}}$-space] \label{lem:converge_in_u} Define the gradient mapping at $\mathbf{u} \in {\mathcal{U}}$ for a vector $g \in \mathbb{R}^K$ as $P(\mathbf{u},\,g) := \frac{1}{\eta}(\mathbf{u}-\Pi_{{\mathcal{U}}}(\mathbf{u} \,-\, \eta \cdot g))$, where $\Pi_{{\mathcal{U}}}(z)$ denotes the projection of $z$ onto ${\mathcal{U}}$. Then under the assumptions of Theorem \ref{thm:meta-result}, \begin{eqnarray*} \min_{1\leq t\leq T}\mathbf{E}\left[\|{P}(\mathbf{u}^t, \nabla \psi(\mathbf{u}^t))\|^2\right]~\leq~ \mathcal{O}\left(\frac{\beta^2}{T} \,+\, \kappa \,+\, L\sqrt{\kappa} \right).
\end{eqnarray*} \end{lemma} \fi We then exploit the smoothness of the surrogates $\boldsymbol{\ell}$ to show that this result translates to the algorithm converging to an approximate stationary point of $\psi(\boldsymbol{\ell}(\cdot))$ w.r.t.\ $\theta$. \begin{rem}[\textbf{Prior convergence results}] \emph{ A key difference between our analysis and prior works on zeroth-order gradient methods \citep{Duchi, Ghadimi2016, Nesterov+17} is that we do not directly optimize the given objective over the space of parameters $\theta$, and instead perform an optimization over a relaxed surrogate space ${\mathcal{U}}$ that is not directly specified, and do so using inexact projections and approximate gradient estimates. } \end{rem} \section{Gradient Estimation Techniques} \label{sec:grad-estimation} We now address the issue of estimating the gradient $\hat{\mathbf{g}}^t$ of $\psi$ at a given $\boldsymbol{\ell}(\theta^t)$ without explicit access to $\psi$. We provide an algorithm based on finite-differences, and another based on local linear interpolations. We also show error bounds for these algorithms, i.e.\ bound the errors $\kappa_\epsilon$ in Theorem \ref{thm:meta-result}. \subsection{Finite Differences} \label{sec:fd} We first consider the case where both the surrogates $\boldsymbol{\ell}$ and metric $M$ are evaluated on the same sample. Let $\mathbf{f}_\theta := [f_\theta(\mathbf{x}_1), \ldots, f_\theta(\mathbf{x}_n)]^\top \in \mathbb{R}^n $ denote the scores of the model $\theta$ computed on the $n$ training examples. We overload notation and use $M(\mathbf{f}_\theta, \mathbf{y})$ to denote the value of the evaluation metric $M$ on the model scores $\mathbf{f}_\theta \in \mathbb{R}^n$ and labels $\mathbf{y} \in {\mathcal{Y}}^n$. Similarly, we use $\ell_k(\mathbf{f}_\theta, \mathbf{y})$ to denote the value of surrogate loss $\ell_k$ on $\mathbf{f}_\theta$ and $\mathbf{y}$. We present our method in Algorithm \ref{algo:finite-diff}. 
We adopt a standard finite-difference gradient estimate \citep{Nesterov+17}, which requires us to perturb the surrogates $\boldsymbol{\ell}$ with random Gaussian vectors $Z^1, \ldots, Z^m \sim \mathcal{N}({\mathbf{0}}, \mathbf{I}_K)$, evaluate $\psi$ at the perturbed surrogate profiles, and calculate \[\frac{1}{m}\sum_{j=1}^m\frac{\psi(\boldsymbol{\ell}(\mathbf{f}_\theta, \mathbf{y}) \,+\, \sigma Z^j) \,-\, \psi(\boldsymbol{\ell}(\mathbf{f}_\theta, \mathbf{y}))}{\sigma},\] for $\sigma > 0$. In our case, we cannot directly perturb the surrogates $\boldsymbol{\ell}$ and evaluate changes in $\psi$. Instead, we perturb the scores $\mathbf{f}_\theta$ so that the corresponding changes in $\boldsymbol{\ell}$ follow a Gaussian distribution, and evaluate the difference between the metric $M$ at the original and perturbed scores. This is possible, for example, when each $\ell_k(\mathbf{f}_\theta, \mathbf{y})$ is an average of point-wise losses $\phi_k(y_if_\theta(x_i))$ on different subsets of the data, for some invertible function $\phi_k: \mathbb{R} \to \mathbb{R}$, in which case it is easy to compute the right amount of perturbation to the scores $\mathbf{f}_\theta$ to produce the desired perturbation in $\ell_k$. \begin{lemma}[Finite difference estimate] \label{lem:fd} Let $M$ be as defined in Theorem \ref{thm:meta-result} and $|\epsilon(\theta)| \leq \bar{\epsilon}, \forall \theta$. Let $\hat{\mathbf{g}}$ be returned by Algorithm \ref{algo:finite-diff} for a given $\theta'$, $m$ perturbations and $\sigma = \frac{\sqrt{\bar{\epsilon}}}{\sqrt{K}\beta^2}$. \begin{align*} \mathbf{E}\left[\|\hat{\mathbf{g}} \,-\, \nabla\psi(\boldsymbol{\ell}(\theta'))\|^2\right] \,\leq\, \mathcal{O}\left(\frac{L^2K}{m} + \bar{\epsilon}K^2\beta^2\right), \end{align*} where the expectation is over the random perturbations. \vspace{-5pt} \end{lemma} This gives a bound on $\kappa_\epsilon$ in Theorem \ref{thm:meta-result} when Algorithm \ref{algo:finite-diff} is used for gradient estimates.
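As a toy numerical check of this estimator, consider the easy case mentioned above where each surrogate is a plain average of scores over a disjoint group of examples, so the score perturbation $\Delta^j$ inverting a surrogate perturbation is explicit; the quadratic $\psi$ and the group structure are hypothetical choices that let us compare against the exact gradient:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two disjoint example groups; surrogate k is the mean score over group k.
groups = [np.arange(0, 50), np.arange(50, 100)]
ell = lambda f: np.array([f[g].mean() for g in groups])

# Black-box metric M(f) = psi(ell(f)) with psi(u) = u_1^2 + u_2^2,
# unknown to the estimator, which only queries M.
M = lambda f: np.sum(ell(f) ** 2)

def finite_diff_grad(M, f, m=5000, sigma=0.1):
    # Estimate grad psi at ell(f): perturb scores so ell shifts by sigma*Z.
    K = len(groups)
    g_hat = np.zeros(K)
    for _ in range(m):
        Z = rng.standard_normal(K)
        delta = np.zeros_like(f)
        for k, gk in enumerate(groups):
            delta[gk] = sigma * Z[k]      # shifts ell_k by exactly sigma*Z_k
        g_hat += (M(f + delta) - M(f)) / sigma * Z
    return g_hat / m

f = rng.standard_normal(100)
g_hat = finite_diff_grad(M, f)
g_true = 2 * ell(f)                        # exact grad of psi at ell(f)
```

With $m$ in the thousands the estimate matches the exact gradient to a couple of decimal places, consistent with the $\mathcal{O}(L^2K/m)$ term in the lemma.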
Note the error depends on the slack magnitude $\bar{\epsilon}$, and decreases with more perturbations. \begin{figure} \vspace{-10pt} \begin{algorithm}[H] \caption{Finite-difference Gradient Estimate} \label{algo:finite-diff} \begin{algorithmic}[1] \STATE \textbf{Input:} $\theta' \in \mathbb{R}^d$, $M$, $\ell_1, \ldots, \ell_K$ \STATE \textbf{Hyper-parameters}: Num\ of perturbations $m$, $\sigma$ \STATE Draw $Z^1, \ldots, Z^m \sim \mathcal{N}({\mathbf{0}}, \mathbf{I}_K)$ \STATE Find $\Delta^j \in \mathbb{R}^n$ s.t.\ $\boldsymbol{\ell}(\mathbf{f}_{\theta'} \,+\, \Delta^j, \mathbf{y}) \,=\, \boldsymbol{\ell}(\mathbf{f}_{\theta'}, \mathbf{y}) \,+\, \sigma Z^j$, for $j=1, \ldots, m$ \STATE $\displaystyle \hat{\mathbf{g}} = \frac{1}{m}\sum_{j=1}^m\frac{M(\mathbf{f}_{\theta'} \,+\, \Delta^j, \mathbf{y}) \,-\, M(\mathbf{f}_{\theta'}, \mathbf{y}) }{\sigma}Z^j$ \STATE \textbf{Output}: $\hat{\mathbf{g}}$ \end{algorithmic} \end{algorithm} \vspace{-15pt} \begin{algorithm}[H] \caption{Linear Interpolation Gradient Estimate} \label{algo:ls} \begin{algorithmic}[1] \STATE \textbf{Input:} $\theta' \in \mathbb{R}^d$, $M$, $\ell_1, \ldots, \ell_K$ \STATE \textbf{Hyper-parameters}: Num\ of perturbations $m$, $\sigma$ \STATE Draw $Z_1^1, \ldots, Z_1^m, Z_2^1, \ldots, Z_2^m \sim \mathcal{N}({\mathbf{0}}, \mathbf{I}_d)$ \STATE $\mathbf{H}_{j,:} = \boldsymbol{\ell}(\theta' + \sigma Z_1^j) \,-\, \boldsymbol{\ell}(\theta' + \sigma Z_2^j),~j = 1, \ldots, m $ \STATE $\mathbf{M}_{j,:} = M(\theta' + \sigma Z_1^j) \,-\, M(\theta' + \sigma Z_2^j),~j = 1, \ldots, m$ \STATE $\displaystyle \hat{\mathbf{g}} \,\in\,\argmin{\hat{\mathbf{g}} \in \mathbb{R}^K}\, \|\mathbf{H}\hat{\mathbf{g}} - \mathbf{M}\|^2 $ \STATE \textbf{Output}: $\hat{\mathbf{g}}$ \end{algorithmic} \end{algorithm} \vspace{-20pt} \end{figure} \subsection{Local Linear Interpolations} The finite-difference approach is not applicable to settings where the metric is evaluated on a validation sample but the surrogates are evaluated on 
training examples (as in Example 2), or where finding the right amount of perturbation on the scores is difficult. For such cases we present a local linear interpolation-based approach in Algorithm \ref{algo:ls}, where we perturb the model parameters $\theta$ instead of the scores. We use the fact that a smooth function $\psi$ can be locally approximated by a linear function, and estimate the gradient of $\psi$ at $\boldsymbol{\ell}(\theta)$ by perturbing $\theta$, measuring the corresponding differences in the surrogates $\boldsymbol{\ell}$ and the metric $M$, and fitting a linear function from the surrogate differences to the metric differences. Specifically, for $d$ model parameters, we draw two independent sets of $d$-dimensional Gaussian perturbations $Z_1^1, \ldots, Z_1^m, Z_2^1, \ldots, Z_2^m \sim \mathcal{N}({\mathbf{0}}, \mathbf{I}_d)$, and return a linear fit from $\mathbf{H} = [\boldsymbol{\ell}(\theta + \sigma Z_1^j) \,-\, \boldsymbol{\ell}(\theta + \sigma Z_2^j)]_{j=1}^m$ to $\mathbf{M} = [M(\theta + \sigma Z_1^j) \,-\, M(\theta + \sigma Z_2^j)]_{j=1}^m$. \begin{lemma}[Linear interpolation estimate] \label{thm:ls} Let $M$ be defined as in Theorem \ref{thm:meta-result} and $|\epsilon(\theta)| \leq \bar{\epsilon}, \forall \theta$. Assume each $\ell_k$ is $\Phi$-Lipschitz in $\theta$ w.r.t.\ the $L_\infty$-norm, and $\|\boldsymbol{\ell}(\theta)\|\leq G\,\, \forall \theta$. Suppose for a given $\theta'$, $\sigma$ and perturbation count $m$, the expected covariance matrix for the left-hand side of the linear system $\mathbf{H}$ is well-conditioned, and has smallest singular value $\lambda_{\min}(\sum_{i=1}^m \mathbf{E}[\mathbf{H}_i\mathbf{H}_i^{\top}]) = \mathcal{O}(m\sigma^2\Phi^2)$.
Then setting $\sigma = \tilde{\mathcal{O}}\left(\frac{G^{1/3}\bar{\epsilon}^{1/3}}{\Phi K^{3/2}\beta^{1/3}}\right)$ and $m = \tilde{\mathcal{O}}\left(\frac{G^4K^9\beta^2}{\bar{\epsilon}^2}\right)$, Algorithm \ref{algo:ls} returns w.h.p.\ (over draws of random perturbations) a gradient estimate $\hat{\mathbf{g}}$ that satisfies: $$\|\hat{\mathbf{g}} \,-\, \nabla\psi(\boldsymbol{\ell}(\theta'))\|^2 \,\leq\, \tilde{\mathcal{O}}\left(G^{1/3}\bar{\epsilon}^{1/3}K^3\beta^{2/3}\right)\, . \vspace{-5pt} $$ \end{lemma} We show in Appendix \ref{app:ls-proof} how this high-probability statement can then be used to derive a bound on the expected errors $\kappa_\epsilon$ in Theorem \ref{thm:meta-result}. Prior works provide error bounds on a similar gradient estimate under an assumption that the perturbation matrix $\mathbf{H}$ can be chosen to be invertible \citep{Conn08, Conn09, Berahas19}. In our case, however, $\mathbf{H}$ is not chosen explicitly, but instead contains measurements of changes in surrogates for random perturbations on $\theta$. Hence to show an error bound, we need a slightly subtle condition on the correlation structure of the surrogates (that essentially says the variances of the perturbed surrogates are large enough and the surrogates are not strongly correlated with each other), which we express as a condition on the smallest singular value of the covariance of $\mathbf{H}$. \subsection{Handling Non-smooth Metrics} For $\psi$ that is non-smooth and Lipschitz, we extend the finite difference gradient estimate in Section \ref{sec:fd} with a two-step perturbation.
We draw two sets of Gaussian vectors $Z_1^1, \ldots, Z_1^m, Z_2^1, \ldots, Z_2^m \sim \mathcal{N}({\mathbf{0}}, \mathbf{I}_K)$ and approximately calculate \[\frac{1}{m}\sum_{j=1}^m\frac{\psi(\boldsymbol{\ell}(\mathbf{f}_\theta, \mathbf{y}) + \sigma_1 Z_1^j + \sigma_2 Z_2^j) - \psi(\boldsymbol{\ell}(\mathbf{f}_\theta, \mathbf{y}) + \sigma_1Z_1^j)}{\sigma_2}\] for $\sigma_1, \sigma_2 > 0$, by perturbing $\boldsymbol{\ell}$ through the scores $\mathbf{f}_\theta$ and measuring changes in $M$ instead of $\psi$. This approach computes a finite-difference gradient estimate for a smooth approximation to the original $\psi$, given by $\psi_{\sigma_1}(\mathbf{u}) := \mathbf{E}\left[\psi(\mathbf{u} \,+\, \sigma_1 Z_1)\right]$, where $Z_1 \sim \mathcal{N}({\mathbf{0}}, \mathbf{I}_K)$. We provide error bounds in Appendix \ref{app:non-smooth} by building on recent work by \citet{Duchi}, and discuss asymptotic convergence of Algorithm \ref{algo:pgd} as $\sigma_1 \to 0$. \section{Experiments} \label{sec:expts} We present experiments to show that the proposed approach, Algorithm \ref{algo:pgd}, is able to perform as well as methods that take advantage of a metric's form where available, and is also able to provide gains for metrics that are truly a black box. We consider a simulated classification task, fair classification with noisy features, a ranking task and classification with proxy labels. The datasets we use are listed in Table \ref{tab:datasets}. We use the linear interpolation approach in Algorithm \ref{algo:ls} for estimating gradients, as this is the most practical among the proposed estimation methods, and applicable when the surrogates and metrics are evaluated on different samples. We use linear models, and tune hyper-parameters such as step sizes and the perturbation parameter $\sigma$ for gradient estimation using a held-out validation set. We run the projected gradient descent with 250 outer iterations and 1000 perturbations.
For the projection step, we run 100 iterations of Adagrad. See Appendix \ref{app:expts} for more details and a discussion on perturbations. The code has been made available. \begin{table} \centering \vspace{-5pt} \caption{Datasets used in our experiments.} \vspace{3pt} \label{tab:datasets} \begin{tabular}{lrrr} \hline Dataset & \#instances & \#features & Groups \\ \hline Simulated & 5000 & 2 & - \\ COMPAS & 4073 & 31 & M/F \\ Adult & 32561 & 122 & M/F \\ Credit & 30000 & 89 & M/F \\ Business & 11560 & 36 & C/NC \\ KDD Cup 08 & 102294 & 117 & - \\ \hline \end{tabular} \vspace{-5pt} \end{table} \begin{table}[t] \centering \vspace{-5pt} \caption{Test G-mean on sim.\ data. \emph{Lower} is better.} \vspace{3pt} \label{tab:gmean} \begin{tabular}{lccc} \hline & LogReg & PostShift & Proposed \\\hline Simulated &1.000 &0.848 &\textbf{0.803} \\ \hline \end{tabular} \vspace{-10pt} \end{table} \begin{figure}[t] \centering \includegraphics[scale=0.4]{plots/simulated.png} \caption{Hyperplanes learned by the proposed method and PostShift on simulated data.} \vspace{-10pt} \label{fig:hyperplanes} \end{figure} \begin{table*}[t] \centering \vspace{-5pt} \caption{Average test macro F-measure across groups with clean features. \textit{Higher} is better.
Despite having only black-box access to the metric, our approach performs comparably to methods that take advantage of the form of the metric.} \vspace{3pt} \label{tab:fm-clean} \begin{tabular}{lccccc} \hline & LogReg & PostShift & RelaxedFM & GenRates & Proposed \\\hline Business &0.793 &0.789 &0.794 &0.793 &\textbf{0.796} \\ COMPAS &0.560 &\textbf{0.631} &0.614 &0.620 &0.629 \\ Adult &\textbf{0.668} &0.664 &{0.665} &0.654 &{0.665} \\ Default &0.467 &\textbf{0.536} &0.525 &0.532 &0.533 \\ \hline \end{tabular} \vspace{-10pt} \end{table*} \begin{figure*}[t] \centering \begin{subfigure}[b]{0.245\textwidth} \centering \includegraphics[scale=0.42]{plots/business.png} \caption{Business} \end{subfigure}\hspace{-2pt} \begin{subfigure}[b]{0.245\textwidth} \includegraphics[scale=0.43]{plots/adult.png} \caption{Adult} \end{subfigure} \begin{subfigure}[b]{0.245\textwidth} \includegraphics[scale=0.43]{plots/default.png} \caption{Default} \end{subfigure} \begin{subfigure}[b]{0.245\textwidth} \includegraphics[scale=0.42]{plots/compas.png} \caption{COMPAS} \end{subfigure} \vspace{-6pt} \caption{Test macro F-measure across groups for varying noise levels, averaged over 5 trials. \textit{Higher} is better.} \label{fig:fairness-noise} \end{figure*} \subsection{Optimizing G-mean on Simulated Data} We first apply our approach to optimize a non-black-box evaluation metric: $ \text{G-mean} = 1 - \sqrt{\textrm{\textup{TPR}} \times \textrm{\textup{TNR}}}, $ described in Example \ref{exmp:gmean}. We consider a simulated binary classification task in two dimensions, containing 10\% positives and 90\% negatives. The positive examples are drawn from a Gaussian with mean $[0,0]$ and covariance matrix $0.2 \times \mathbf{I}_2$. The negative examples are drawn from a mixture of two Gaussians centered at $[-1,-1]$ and $[1,1]$, with equal priors, and with a covariance matrix of $0.1 \times \mathbf{I}_2$.
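The simulated data can be generated as follows (a sketch; the seed and the $\pm 1$ label encoding are our choices):

```python
import numpy as np

def make_simulated_data(n=5000, pos_frac=0.1, seed=0):
    rng = np.random.default_rng(seed)
    n_pos = int(pos_frac * n)
    n_neg = n - n_pos
    # Positives: one Gaussian at the origin with covariance 0.2*I.
    X_pos = rng.multivariate_normal([0, 0], 0.2 * np.eye(2), n_pos)
    # Negatives: equal-prior mixture of Gaussians at (-1,-1) and (1,1),
    # each with covariance 0.1*I.
    centers = np.array([[-1, -1], [1, 1]])[rng.integers(0, 2, n_neg)]
    X_neg = centers + rng.multivariate_normal([0, 0], 0.1 * np.eye(2), n_neg)
    X = np.vstack([X_pos, X_neg])
    y = np.concatenate([np.ones(n_pos), -np.ones(n_neg)])
    return X, y

X, y = make_simulated_data()
```

Since the negatives straddle the positives along the diagonal, no single threshold on a one-dimensional projection separates the classes well, which is what makes the metric-aware combination of surrogates useful here.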
We apply our method with two surrogate functions: the average hinge losses on the positive and negative examples. The results are shown in Table \ref{tab:gmean}. We compare against two baselines: logistic regression that optimizes a standard cross-entropy loss, and a plug-in or post-shift approach that shifts a threshold on the logistic regression model to optimize G-mean \citep{Narasimhan+14}. Because of the class imbalance, logistic regression learns to always predict the majority negative class, and yields a zero true positive rate and, as a result, a poor G-mean. Post-shift produces a better G-mean, but the proposed method performs the best. It is clear from the resulting decision boundaries shown in Figure \ref{fig:hyperplanes} that the proposed method learns the better linear separator. \subsection{Macro F-measure with Noisy Features} \label{sec:expt-fm} For this experiment we consider training a classifier with fairness goals defined on binary protected attributes. We seek to maximize the average F-measure across the groups: \[ \text{Macro $F_1$} ~=~ \frac{1}{2}\sum_{G \in \{0,1\}} \frac{2 \times \text{Precision}_G \times \text{Recall}_G}{\text{Precision}_G + \text{Recall}_G}, \] where $\text{Precision}_G$ and $\text{Recall}_G$ are the precision and recall on protected group $G$. Optimizing a sum of F-measures is harder than optimizing the binary F-measure because the summation destroys its pseudo-convexity property \citep{Narasimhan+19}.
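For reference, a sketch of how the macro F-measure is computed from hard predictions and group membership (the $\{0,1\}$ encoding of labels and groups is our convention):

```python
import numpy as np

def macro_f1(y_true, y_pred, group):
    # Average of per-group binary F1 scores; labels and groups in {0, 1}.
    scores = []
    for g in (0, 1):
        t, p = y_true[group == g], y_pred[group == g]
        tp = np.sum((p == 1) & (t == 1))
        prec = tp / max(np.sum(p == 1), 1)   # precision within group g
        rec = tp / max(np.sum(t == 1), 1)    # recall within group g
        scores.append(0.0 if tp == 0 else 2 * prec * rec / (prec + rec))
    return np.mean(scores)
```

The per-group F1 is a ratio of counts, so the macro average is a sum of ratios of the kind the generalized-rates approach targets; our method only ever queries this function as a black box.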
We use four fairness datasets: (1) \textit{COMPAS}, where the goal is to predict recidivism with \textit{gender} as the protected attribute \citep{Angwin+16}; (2) \textit{Adult}, where the goal is to predict if a person's income is more than 50K/year, and we take \textit{gender} as the protected group \citep{uci}; (3) \textit{Credit Default}, where the task is to predict whether a customer would default on his/her credit card payment, and we take \textit{gender} as the protected group \citep{uci}; (4) \textit{Business Entity Resolution}, a proprietary dataset from a large internet services company, where the goal is to predict whether a pair of business descriptions refer to identical businesses, and we consider \textit{non-chain} businesses as protected. In each case, we split the data into train-validation-test sets in the ratio $4/9:2/9:1/3$. \textbf{Training with no noise.}\ The first set of experiments tests if the proposed approach is able to match the performance of existing methods that are customized to optimize the macro F-measure. We compare against (i) plain logistic regression, (ii) a plug-in or post-shift method (PostShift) that tunes a threshold on the logistic regression model to maximize the F-measure \citep{Koyejo+14, Narasimhan+14}, (iii) an approach (RelaxedFM) that optimizes a continuous relaxation to the F-measure, replacing the indicators with the hinge loss, and (iv) the recent ``generalized rates'' approach (GenRates) of \citet{Narasimhan+19} for optimizing metrics that are a sum of ratios. We apply our approach using four surrogate losses, each of which is the hinge loss averaged over either the positive or negative examples, calculated separately for each of the two groups. As seen in Table \ref{tab:fm-clean}, despite having only black-box access to the metric, the proposed approach performs comparably to the other methods that are directly tailored to optimize the macro F-measure.
\textbf{Training with noisy features.} The second set of experiments evaluates the performance of these methods when the training set has noisy features for just one of the groups, while the smaller validation set contains clean features. We use our approach to adaptively combine the same four surrogate losses computed on the noisy training set to best optimize the macro F-measure on the clean validation set. We choose a certain fraction of the examples at random from one of the groups, which we refer to as group 0, and for these examples, we add Gaussian noise to the real features (with mean 0 and the same standard deviation as the feature), and flip the binary features with probability 0.9. Figure \ref{fig:fairness-noise} shows the test F-measure for the different methods with a varying fraction of noisy examples in group 0. Except for logistic regression, all other methods have access to the validation set: post-shift uses the validation set to tune a threshold on the logistic regression model; the RelaxedFM and GenRates methods optimize their loss on the training set, but pick the best model iterate using the validation set. The proposed approach is able to make the best use of the validation set, and consistently performs the best across most noise levels. \subsection{Ranking to Optimize PRBEP} \label{sec:expt-ranking} We next consider a ranking task, where the goal is to learn a scoring function $f$ that maximizes the precision-recall break-even point (PRBEP), i.e.\ yields maximum precision at the threshold where precision and recall are equal. PRBEP is a special case of Precision@$K$ when $K$ is set to the number of positive examples in the dataset. For this task, we experiment with the KDD Cup 2008 breast cancer detection data set \citep{Rao+08} widely used in this literature \citep{Kar+15, Mackey+18}. We randomly split this dataset 60/20/20 for training, validation, and test.
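Given scores, PRBEP can be computed directly by thresholding at $K$ equal to the number of positives (a sketch): predicting exactly $K$ positives makes precision and recall both equal $\mathrm{TP}/K$, so precision within the top-$K$ is the break-even value.

```python
import numpy as np

def prbep(y_true, scores):
    # Precision at the threshold where precision equals recall:
    # take the top-K scored examples, K = number of positives.
    K = int(np.sum(y_true == 1))
    top_k = np.argsort(-scores)[:K]
    return np.sum(y_true[top_k] == 1) / K
```

This is the black-box metric queried by the gradient estimator in this experiment; it is a step function of the scores, so gradient estimation goes through the smoothing discussed in Section 4.3.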
Since the break-even point for a dataset is not known beforehand, we use surrogates that approximate precision at different recall thresholds $\tau$. We use the quantile-based surrogate losses of \citet{Mackey+18} with $\tau = 0.25, 0.5, 0.75$. As a comparison, we optimize the avg-precision@$K$ surrogate provided by \citet{Kar+15}. As seen in Table \ref{tab:prbep}, the proposed approach is able to learn a better training loss by combining the three quantile surrogates, and yields the best PRBEP on both the training and test sets. \begin{table}[t] \centering \caption{Train and test PRBEP on KDD Cup 2008 data. \textit{Higher} is better. } \vspace{3pt} \label{tab:prbep} \begin{tabular}{lccc} \hline & LogReg & \citet{Kar+15} & Proposed \\\hline Train & 0.480 & 0.473 &\textbf{0.546} \\ Test & 0.472 & 0.441 &\textbf{0.480} \\ \hline \end{tabular} \end{table} \begin{table}[t] \centering \caption{Test classification error where the training labels are only proxy labels with unknown relationship to the true labels. The proposed method was run with both hinge and sigmoid surrogates. \textit{Lower} is better. } \vspace{3pt} \label{tab:proxy} \begin{tabular}{lcccc} \hline & LogReg & PostShift & Hinge & Sigmoid \\\hline Adult & 0.333 & 0.322 & \textbf{0.314} & \textbf{0.314} \\ Business & 0.340 & {0.251} & 0.256 & \textbf{0.236} \\ \hline \end{tabular} \vspace{-10pt} \end{table} \subsection{Classification with Proxy Labels} \label{sec:expt-proxy} Next, we consider classification tasks where the training labels are proxies for the true labels, but the validation data has the true labels. We seek to minimize the classification error on the validation set by combining hinge loss surrogates evaluated separately on the positive and negative training examples. While the theory requires convex losses, we also experiment with running the algorithm with non-convex sigmoid losses as surrogates.
For the Adult data, we predict whether a candidate's gender is female, and take the marital-status-wife feature as the proxy label. For the Business Entity Resolution data, we predict whether a pair of business descriptions refer to the same business, and use the has-same-phone-number feature as a proxy label. We compare with a logistic regression model trained with the proxy labels and a post-shift method that corrects the logistic regression threshold to minimize classification error on the validation data. As expected, logistic regression yields the highest test error. On Adult, both variants of the proposed method are better than PostShift. On Business, the proposed method performs slightly worse than PostShift when run with hinge surrogates, but yields notable improvements when run with sigmoid surrogates, which are tighter relaxations to the true errors. \section{Discussion} There is currently a lot of interest in training models with better alignment with evaluation metrics. Here, we have investigated a simple method that directly estimates only the needed gradients for gradient descent training, and does not require assuming a parametric form. This simplicity enabled us to provide rigorous theoretical guarantees. Experimentally, our approach was as good as strategies that take advantage of a metric's form (where available), gave notable gains over baselines for black-box ranking, and was significantly better than post-shifting for experiments with group-dependent noise. For the proxy label experiments, however, the results were mixed, with the proposed method requiring a tighter surrogate relaxation to perform better than post-shift. Post-shift is a strong baseline -- in theory, for many metrics it is optimal to simply post-shift the Bayes class probability model $\P(y=1|x)$ with a suitable threshold $\beta$ \citep{Koyejo+14, Yan+18}.
Post-shift only has one degree of freedom, which limits it, but also enables choosing $\beta$ to directly optimize the \textit{true} metric. In contrast, our method acts through surrogate losses to optimize the target metric. We argue that post-shift should be a required baseline for experiments on custom metric optimization. We look forward to seeing further theoretical analysis for handling black-box metrics, and further experimentation comparing methods with fewer but smarter parameters to those with more flexible modeling. \bibliographystyle{apalike}
\subsection*{\protect\bigskip Abstract} In this manuscript we consider the consequences for the stable probability density function when standard multiplicative cascades are generalised to cascades based on the $q$-product of Borges, which emerged in the context of non-extensive statistical mechanics. \section{Introduction} \label{intro} In the twenty years that have elapsed since the publication of the non-additive entropy $S_{q}$, also widely known as \emph{Tsallis entropy} \cite{ct-1988}, many applications and connections to natural and man-made phenomena have been established \cite{applications}. One of the most exciting developments that has emerged within the non-extensive scope is the definition of a whole new set of mathematical operations and functions, ranging from the generalised algebra independently defined by Borges \cite{borges} and Nivanen \textit{et al.} \cite{nivanen}, and the integro-differential operators of Borges, to the $q$-trigonometric functions \cite{borges-tese}. Besides their inherent beauty, these generalisations have found their own fields of applicability. Namely, the $q$-product plays a primary role in the definition of the $q$-Fourier transform \cite{sabir}, and thus in the $q$-Central Limit Theorem \cite{sabir1}, whereas the generalised trigonometric functions have been quite successful in describing the critical behaviour of a class of composed materials known as manganites \cite{manganites}. In this article, we inquire into the possible applications of the $q$-product in the generation of random variables and its consequences for the definition of a new class of probability density functions.
\section{Preliminaries: the $q$-product} \label{preliminar} The $q$-product, $\otimes _{q}$, has been introduced with the purpose of finding a functional form that generalises in a non-extensive way the mathematical identity, \begin{equation} \exp \left[ \ln \,x+\ln \,y\right] =x\times y,\qquad \left( x,y>0\right) , \end{equation} so that the equality, \begin{equation} x\otimes _{q}y\equiv \exp _{q}\left[ \ln _{q}\,x+\ln _{q}\,y\right] , \label{q-product} \end{equation} holds. The representations $\ln _{q}\left( .\right) $ and $\exp _{q}\left( .\right) $ correspond to the $q$-logarithm \cite{ct-quimica}, \begin{equation} \ln _{q}\left( x\right) \equiv \frac{x^{1-q}-1}{1-q},\qquad \left( x>0,\,q\in \Re \right) , \end{equation} and its inverse, the $q$-exponential, \begin{equation} \exp _{q}\left( x\right) \equiv \left[ 1+\left( 1-q\right) \,x\right] ^{\frac{1}{1-q}},\qquad \left( x,q\in \Re \right) , \end{equation} respectively ($\exp _{q}\left( x\right) =0$ if $1+(1-q)\,x\leq 0$). For $q\rightarrow 1$, Eq.~(\ref{q-product}) recovers the usual property, \begin{equation*} \ln \left( x\times y\right) =\ln \,x+\ln \,y \end{equation*} ($x,y>0$), with $x\times y\equiv x\otimes _{1}y$. Its inverse operation, the $q$-division, $x\oslash _{q}y$, satisfies the equality $\left( x\otimes _{q}y\right) \oslash _{q}y=x$. Bearing in mind that the $q$-exponential is a non-negative function, the $q$-product must be restricted to the values of $x$ and $y$ that respect the condition, \begin{equation} \left\vert x\right\vert ^{1-q}+\left\vert y\right\vert ^{1-q}-1\geq 0. \label{cond-q-prod} \end{equation} Moreover, we can extend the domain of the $q$-product to negative values of $x$ and $y$ by writing it as, \begin{equation} x\otimes_{q}y\equiv \mathrm{sign}\left( x\,y\right) \exp_{q}\left[ \ln _{q}\,\left\vert x\right\vert +\ln_{q}\,\left\vert y\right\vert \right] .
\label{q-product-new} \end{equation} Regarding some key properties of the $q$-product we mention: \begin{enumerate} \item $x\otimes_{1}y=x\ y$; \item $x\otimes_{q}y=y\otimes_{q}x$; \item $\left( x\otimes_{q}y\right) \otimes_{q}z=x\otimes_{q}\left( y\otimes_{q}z\right) =\left[ x^{1-q}+y^{1-q}+z^{1-q}-2\right] ^{\frac{1}{1-q}}$; \item $\left( x\otimes_{q}1\right) =x$; \item $\ln_{q}\left[ x\otimes_{q}y\right] \equiv\ln_{q}\,x+\ln_{q}\,y$; \item $\ln_{q}\left( x\,y\right) =\ln_{q}\left( x\right) +\ln_{q}\left( y\right) +\left( 1-q\right) \ln_{q}\left( x\right) \ln_{q}\left( y\right) $; \item $\left( x\otimes_{q}y\right) ^{-1}=x^{-1}\otimes_{2-q}y^{-1}$; \item $\left( x\otimes _{q}0\right) =\left\{ \begin{array}{ccc} 0 & & \mathrm{if\ }\left( q\geq 1\ \mathrm{and\ }x\geq 0\right) \ \mathrm{or\ if\ }\left( q<1\ \mathrm{and\ }0\leq x\leq 1\right) \\ & & \\ \left( x^{1-q}-1\right) ^{\frac{1}{1-q}} & & \mathrm{otherwise}\end{array}\right. $ \end{enumerate} For particular values of $q$, \textit{e.g.}, $q=1/2$, the $q$-product provides nonnegative values at points for which the inequality $\left\vert x\right\vert ^{1-q}+\left\vert y\right\vert ^{1-q}-1<0$ is verified. According to the cut-off of the $q$-exponential, a value of zero for $x\otimes _{q}y$ is assigned in these cases. Restricting our analysis of Eq.~(\ref{cond-q-prod}) to the sub-space $x,y>0$, we observe that for $q\rightarrow -\infty $ the region $\left\{ 0\leq x\leq 1,0\leq y\leq 1\right\} $ is not defined. As the value of $q$ increases, the forbidden region shrinks, and when $q=0$ we have the limiting line given by $x+y=1$, for which $x\otimes _{0}y=0$. Only for $q=1$ does the entire set of real values of $x$ and $y$ have a defined value for the $q$-product. For $q>1$, the condition (\ref{cond-q-prod}) implies a frontier, $\left\vert x\right\vert ^{1-q}+\left\vert y\right\vert ^{1-q}=1$, at which the $q$-product diverges. This undefined region grows as $q$ goes to infinity.
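These definitions and the cut-off convention can be checked numerically with a minimal scalar implementation (a sketch):

```python
import numpy as np

def q_log(x, q):
    # q-logarithm: ln_q(x) = (x^(1-q) - 1)/(1-q); ordinary log at q = 1.
    if q == 1:
        return np.log(x)
    return (x ** (1 - q) - 1) / (1 - q)

def q_exp(x, q):
    # q-exponential, with the cut-off exp_q(x) = 0 when 1 + (1-q)x <= 0.
    if q == 1:
        return np.exp(x)
    base = 1 + (1 - q) * x
    return base ** (1 / (1 - q)) if base > 0 else 0.0

def q_product(x, y, q):
    # sign(xy) * exp_q(ln_q|x| + ln_q|y|), the extended definition above.
    return np.sign(x * y) * q_exp(q_log(abs(x), q) + q_log(abs(y), q), q)
```

For instance, with $q = 1/2$ one can verify property 4 ($x \otimes_q 1 = x$), the closed form of the triple product, and that $0.2 \otimes_{1/2} 0.2$ falls in the cut-off region and evaluates to zero.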
When $q=\infty $, the $q$-product is only defined in $\left\{ x\geq 0,0\leq y\leq 1\right\} \cup \left\{ 0\leq x\leq 1,y>1\right\} $. Illustrative plots are presented in Fig.\ (1) of Ref. \cite{part1}. \section{Multiplicative processes as generators of distributions} \label{multiplicative} Multiplicative processes, particularly stochastic multiplicative processes, have been the source of plenty of models applied in several fields of science and knowledge. In this context, we can name the study of fluid turbulence \cite{turbulence}, fractals \cite{feder}, finance \cite{mandelbrot}, linguistics \cite{murilinho}, etc. Specifically, multiplicative processes play a very important role in the emergence of the log-Normal distribution as a natural and ubiquitous distribution. In simple terms, the log-Normal distribution is the distribution of a random variable whose logarithm is associated with a Normal distribution \cite{log-normal-book}, \begin{equation} p\left( x\right) =\frac{1}{\sqrt{2\pi }\sigma x}\exp \left[ -\frac{\left( \ln x-\mu \right) ^{2}}{2\sigma ^{2}}\right] . \label{log-normal} \end{equation} With regard to the dynamical origins of the log-Normal distribution, several processes have been thought up to generate it. Here we highlight the two most famous --- the \emph{law of proportionate effect}~\cite{gibrat} and the \emph{theory of breakage}~\cite{kolmogorov} --- as well as Langevin-like processes \cite{fa}. We shall now give a brief view of the former. Let us consider a variable $\tilde{Z}$ obtained from a multiplicative random process, \begin{equation} \tilde{Z}=\prod\limits_{i=1}^{N}\tilde{\zeta}_{i}, \label{product} \end{equation} where $\tilde{\zeta}_{i}$ are nonnegative microscopic variables associated with a distribution $f^{\prime }\left( \tilde{\zeta}\right) $.
If we consider the transformation of variables $Z\equiv \ln \tilde{Z}$, then we have, \begin{equation*} Z=\sum\limits_{i=1}^{N}\zeta _{i}, \end{equation*} with $\zeta \equiv \ln \tilde{\zeta}$. Assume now $\zeta $ to be a variable associated with a distribution $f\left( \zeta \right) $ with average $\mu $ and variance $\sigma ^{2}$. Then, $Z$ converges to the Gaussian distribution in the limit of $N$ going to infinity, as entailed by the Central Limit Theorem \cite{araujo}. Explicitly, considering that the variables $\zeta $ are independently and identically distributed, the Fourier Transform of $p\left( Z^{\prime }\right) $ is given by, \begin{equation} \mathcal{F}\left[ p\left( Z^{\prime }\right) \right] \left( k\right) =\left[ \int_{-\infty }^{+\infty }e^{i\,k\,\frac{\zeta }{N}}\,f\left( \zeta \right) \,d\zeta \right] ^{N}, \label{fourier1} \end{equation} where $Z^{\prime }=N^{-1}Z$. For all $N$, the integrand can be expanded so that, \begin{equation} \begin{array}{c} \mathcal{F}\left[ p\left( Z^{\prime }\right) \right] \left( k\right) =\left[ \sum\limits_{n=0}^{\infty }\frac{\left( ik\right) ^{n}}{n!}\frac{\left\langle \zeta ^{n}\right\rangle }{N^{n}}\right] ^{N}, \\ \\ \mathcal{F}\left[ p\left( Z^{\prime }\right) \right] \left( k\right) =\exp \left\{ N\ln \left[ 1+ik\frac{\left\langle \zeta \right\rangle }{N}-\frac{1}{2}k^{2}\frac{\left\langle \zeta ^{2}\right\rangle }{N^{2}}+O\left( N^{-3}\right) \right] \right\} . \end{array} \end{equation} Expanding the logarithm, \begin{equation} \mathcal{F}\left[ p\left( Z^{\prime }\right) \right] \left( k\right) \approx \exp \left[ ik\mu -\frac{1}{2N}k^{2}\sigma ^{2}\right] . \end{equation} Applying the inverse Fourier Transform, and reverting the $Z^{\prime }$ change of variables, we finally obtain, \begin{equation} p\left( Z\right) =\frac{1}{\sqrt{2\,\pi \,N}\sigma }\exp \left[ -\frac{\left( Z-N\,\mu \right) ^{2}}{2\,\sigma ^{2}\,N}\right] .
\end{equation} Transforming back to the variable of the original multiplicative random process, $\bar{Z}=\exp Z$, we obtain the log-Normal distribution \cite{log-normal-book}, \begin{equation} p\left( \bar{Z} \right) =\frac{1}{\sqrt{2\,\pi \,N}\sigma \,\bar{Z}}\exp \left[ -\frac{\left( \ln \bar{Z}-N\,\mu \right) ^{2}}{2\,\sigma ^{2}\,N}\right] . \end{equation} Although this distribution with two parameters, $\mu $ and $\sigma $, is able to appropriately describe a large variety of data sets, there are cases for which the log-Normal distribution fails statistical testing \cite{log-normal-book}. In some of these cases, such a failure has been overcome by introducing different statistical distributions (e.g., Weibull distributions) or by replacing the 2-parameter log-Normal distribution with a 3-parameter log-Normal distribution, \begin{equation} p\left( x\right) =\frac{1}{\sqrt{2\,\pi }\sigma \,\left( x-\theta \right) }\exp \left[ -\frac{\left( \ln \left[ x-\theta \right] -\mu \right) ^{2}}{2\,\sigma ^{2}}\right] . \end{equation} In the sequel of this work we present an alternative procedure to generalise Eq. (\ref{log-normal}). The motivation for this proposal comes from replacing the $N$ ordinary products in Eq. (\ref{product}) with $N$ $q$-products, \begin{equation} \tilde{Z}={\prod\limits_{i=1}^{N}}^{(q)}\tilde{\zeta}_{i}\equiv \tilde{\zeta}_{1}\otimes _{q}\tilde{\zeta}_{2}\otimes _{q}\ldots \otimes _{q}\tilde{\zeta}_{N}. \label{p-product} \end{equation} Applying the $q$-logarithm we obtain a sum of $N$ terms. If every term is independently and identically distributed, then for variables $\zeta _{i}=\ln _{q}\tilde{\zeta}_{i}$ with finite variance the Gaussian is the stable attracting distribution, \textit{i.e.}, we obtain a Gaussian distribution in the $q$-logarithm variable.
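The $q$-deformed process just defined is easy to experiment with numerically. The sketch below (plain Python/numpy; the helper names \texttt{q\_product} and \texttt{q\_log} are our own, not from any library) implements the $q$-product with its cut-off and verifies that applying the $q$-logarithm to the chain of Eq. (\ref{p-product}) indeed produces a plain sum of $N$ terms:

```python
import numpy as np

def q_log(x, q):
    """q-logarithm: ln_q(x) = (x^(1-q) - 1)/(1 - q); ordinary log at q = 1."""
    if q == 1:
        return np.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

def q_product(x, y, q):
    """q-product [x^(1-q) + y^(1-q) - 1]^(1/(1-q)), with the cut-off that
    sets the result to 0 when the bracket is non-positive (q != 1)."""
    if q == 1:
        return x * y
    s = x ** (1.0 - q) + y ** (1.0 - q) - 1.0
    return s ** (1.0 / (1.0 - q)) if s > 0 else 0.0

# Items 1, 2 and 4 of the property list.
assert q_product(3.0, 4.0, 1) == 12.0
assert np.isclose(q_product(2.0, 3.0, 0.5), q_product(3.0, 2.0, 0.5))
assert np.isclose(q_product(2.0, 1.0, 0.5), 2.0)

# Generalised multiplicative process Z = zeta_1 (x)_q ... (x)_q zeta_N.
# Factors are drawn in [1, 2] so the cut-off is never triggered and the
# additivity of ln_q (item 5) holds exactly along the whole chain.
rng = np.random.default_rng(0)
q, N = 0.5, 50
zeta = rng.uniform(1.0, 2.0, size=N)
Z = zeta[0]
for z in zeta[1:]:
    Z = q_product(Z, z, q)
assert np.isclose(q_log(Z, q), sum(q_log(z, q) for z in zeta))
```

At $q=1$ the chain reduces to an ordinary product and the sum becomes $\sum_i \ln \tilde{\zeta}_i$, recovering the classical log-Normal mechanism.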
From this scenario we can obtain our $q$\emph{-log Normal probability density function}, \begin{equation} p\left( x\right) =\frac{1}{\mathcal{Z}_{q}\,x^{q}}\exp \left[ -\frac{\left( \ln _{q}\,x-\mu \right) ^{2}}{2\,\sigma ^{2}}\right] ,\qquad \left( x\geq 0\right) , \label{qlog-normal} \end{equation} with the normalisation, \begin{equation} \mathcal{Z}_{q}=\left\{ \begin{array}{ccc} \sqrt{\frac{\pi }{2}}\,\mathrm{erfc}\left[ -\frac{1}{\sqrt{2}\sigma }\left( \frac{1}{1-q}+\mu \right) \right] \sigma & \mathrm{if} & q<1 \\ & & \\ \sqrt{\frac{\pi }{2}}\,\mathrm{erfc}\left[ \frac{1}{\sqrt{2}\sigma }\left( \frac{1}{1-q}+\mu \right) \right] \sigma & \mathrm{if} & q>1. \end{array} \right. \end{equation} In the limit of $q$ equal to $1$, $\ln _{q\rightarrow 1}x=\ln x$ and $\mathcal{Z}_{q\rightarrow 1}=\sqrt{2\,\pi }\sigma $, and the usual log-Normal distribution is thus recovered (erfc stands for the complementary error function). Typical plots for the cases $q=\frac{4}{5}$, $q=1$, $q=\frac{5}{4}$ are depicted in Fig.~\ref{fig-pdf}. \begin{figure}[tbh] \begin{center} \includegraphics[width=0.65\columnwidth,angle=0]{dist.eps} \includegraphics[width=0.65\columnwidth,angle=0]{distloglinear.eps} \includegraphics[width=0.65\columnwidth,angle=0]{distloglog.eps} \end{center} \caption{Plots of Eq.
(\protect\ref{qlog-normal}) \textit{vs} $x$ for $q=\frac{4}{5}$ (dotted line), $q=1$ (full line) and $q=\frac{5}{4}$ (dashed line) in linear-linear scale (upper), log-linear (centre), log-log (lower).} \label{fig-pdf} \end{figure} The raw statistical moments, \begin{equation} \left\langle x^{n}\right\rangle \equiv \int_{0}^{\infty }x^{n}\,p\left( x\right) \,dx, \label{momentos} \end{equation} can be analytically computed for $q<1$, giving \cite{gradshteyn}, \begin{equation} \left\langle x^{n}\right\rangle =\frac{\Gamma \left[ \nu \right] \exp \left[ -\frac{\gamma ^{2}}{8\,\beta }\right] D_{-\nu }\left[ \frac{\gamma }{\sqrt{2\,\beta }}\right] }{\sqrt{\beta ^{\nu }\,\pi }\sigma \left( 1-q\right) \mathrm{erfc}\left[ -\frac{1}{\sqrt{2}\sigma }\left( \frac{1}{1-q}+\mu \right) \right] }, \label{raw moment} \end{equation} with \begin{equation} \beta =\frac{1}{2\sigma ^{2}\left( 1-q\right) ^{2}};\quad \gamma =-\frac{1+\mu \,\left( 1-q\right) }{\left( 1-q\right) ^{2}\,\sigma ^{2}};\quad \nu =1+\frac{n}{1-q}, \end{equation} where $D_{-\nu }\left[ z\right] $ is the \emph{parabolic cylinder function} \cite{paraboliccylinderd}. For $q>1$, the raw moments are given by an expression quite similar to Eq. (\ref{raw moment}), with the argument of the erfc replaced by $\frac{1}{\sqrt{2}\sigma }\left( \frac{1}{1-q}+\mu \right) $. However, the finiteness of the raw moments is not guaranteed for every $q>1$, for two closely related reasons. First, according to the definition of $D_{-\nu }\left[ z\right] $, $\nu $ must be greater than $0$. Second, the core of the probability density function, $\exp \left[ -\frac{\left( \ln _{q}\,x-\mu \right) ^{2}}{2\,\sigma ^{2}}\right] $, does not vanish in the limit of $x$ going to infinity, \begin{equation} \lim_{x\rightarrow \infty }\exp \left[ -\frac{\left( \ln _{q}\,x-\mu \right) ^{2}}{2\,\sigma ^{2}}\right] =\exp \left[ -\frac{\gamma ^{2}}{4\,\beta }\right] .
\end{equation} This means that the limit $p\left( x\rightarrow \infty \right) =0$ is enforced by the normalisation factor $x^{-q}$, which comes from rewriting the Gaussian in the variable, \begin{equation} y\equiv \ln _{q}x, \end{equation} as a distribution of the variable $x$. Because of that, if the order $n$ of the moment surpasses the value of $q$, then the integral (\ref{momentos}) diverges. \section{Examples of cascade generators} \label{examplo} In this section, we discuss the upshot of two simple cases in which the dynamical process described in the previous section is applied. We are going to verify that the value of $q$ influences the nature of the attractor in probability space. \subsection{Compact distribution $[0,b]$} Let us consider a compact (uniform) distribution for identically and independently distributed variables $x$ within the interval $\left[ 0,b\right] $. Following what we have described in the preceding section, we can transform our generalised multiplicative process into a simple additive process of $y_{i}$ variables, which are now distributed in conformity with the distribution, \begin{equation} p^{\prime}\left( y\right) =\frac{1}{b}\left[ 1+\left( 1-q\right) y\right] ^{\frac{q}{1-q}}, \label{q-uniform} \end{equation} with $y$ defined between $\frac{1}{q-1}$ and $\frac{b^{1-q}-1}{1-q}$ if $q<1$, whereas $y$ ranges over the interval between $-\infty$ and $\frac{b^{1-q}-1}{1-q}$ when $q>1$. Some curves for the special case $b=2$ are plotted in Fig.~\ref{fig-flat}. \begin{figure}[tbh] \begin{center} \includegraphics[width=0.65\columnwidth,angle=0]{flat-dist.eps} \end{center} \caption{Plots of Eq. (\protect\ref{q-uniform}) \textit{vs} $y$ for $b=2$ and the values of $q$ presented in the text.
} \label{fig-flat} \end{figure} If we look at the variance of this independent variable, \begin{equation} \sigma _{y}^{2}=\left\langle y^{2}\right\rangle -\left\langle y\right\rangle ^{2}, \end{equation} which is the moment whose finiteness plays the leading role in the Central Limit Theorem, we verify that, \begin{equation} \sigma _{y}^{2}=\frac{b^{2-2\,q}}{\left( 3-2\,q\right) \left( 2-q\right) ^{2}}, \end{equation} which is finite only for $q<\frac{3}{2}$ and diverges otherwise. Hence, if $q<\frac{3}{2}$, we can apply Lyapunov's central limit theorem and our attractor in the probability space is the Gaussian distribution. On the other hand, if $q>\frac{3}{2}$, the L\'{e}vy-Gnedenko version of the central limit theorem \cite{levy} asserts that the attracting distribution is a L\'{e}vy distribution with tail exponent, \begin{equation} \alpha =\frac{1}{q-1}. \end{equation} Furthermore, it is simple to verify that the interval $\left( \frac{3}{2},\infty \right) $ of $q$ values maps onto the interval $\left( 0,2\right) $ of $\alpha $ values, which is precisely the interval of validity of the L\'{e}vy class of distributions, defined by its Fourier Transform, \begin{equation} \mathcal{F}\left[ L_{\alpha }\left( Y\right) \right] \left( k\right) =\exp \left[ -a\,\left\vert k\right\vert ^{\alpha }\right] . \label{eq-levy} \end{equation} In Fig.~\ref{fig-gen} we depict some sets generated by this process for different values of $q$. \begin{figure}[tbh] \begin{center} \includegraphics[width=0.65\columnwidth,angle=0]{generator.eps} \includegraphics[width=0.65\columnwidth,angle=0]{generatorloglinear.eps} \end{center} \caption{Sets of random variables generated from the process (\protect\ref{p-product}) with $N=100$ and $q=-\frac{1}{2}$ (green), $0$ (red), $\frac{1}{2}$ (blue), $1$ (black), $\frac{5}{4}$ (magenta) in linear (upper panel) and log scales (lower panel).
The generating variable is uniformly distributed within the interval $\left[ 0,1\right] $, the same for all of the cases that we present. As is visible, the value of $q$ deeply affects the values of $X_{N}=\tilde{Z}$. } \label{fig-gen} \end{figure} \subsection{$q$-log Normal distribution} In this example, we consider the case of generalised multiplicative processes in which the variables follow a $q$-log Normal distribution. In agreement with what we have referred to in Sec. \ref{multiplicative}, the outcome strongly depends on the value of $q$. Consequently, in the associated $x$ space, if we apply the generalised process to $N$ variables $y=\ln _{q}\,x$ ($x\in \left[ 0,\infty \right) $) which follow a Gaussian-like functional\footnote{Strictly speaking, we cannot use the term Gaussian distribution because it is not defined in the interval $\left( -\infty ,\infty \right) $. The limitations in the domain do affect the Fourier transform and thus the result of the convolution of the probability density function.} form with average $\mu $ and finite standard deviation $\sigma $, \textit{i.e.}, for $q<1$ or $q>3$ in Eq.~(\ref{qlog-normal}), the resulting distribution in the limit of $N$ going to infinity corresponds to the probability density function (\ref{qlog-normal}) with $\mu \rightarrow N\,\mu $ and $\sigma ^{2}\rightarrow N\,\sigma ^{2}$. With respect to the conditions on $q$ we have just mentioned, the $q$-log Normal can be seen as an asymptotic attractor, a stable attractor for $q=1$, and an unstable distribution for the remaining cases, with the resulting attracting distribution being computed by applying the convolution operation. \section{Final remarks} In this manuscript we have introduced a modification of the multiplicative process that has enabled us to present a generalisation of the log-Normal distribution as well as other distributions with slow decay.
Compared with the regular 2-parameter log-Normal distribution, this distribution is controlled by an extra parameter, $q$, which can be dynamically related to a change in the multiplicative random process. Besides, it provides interesting mechanisms of on-off dynamics. Regarding further applications, it is known that the standard log-Normal distribution is a poor fit for several data sets. This 3-parameter $q$-log Normal probability function is expected to provide a better description of such data \cite{stats}. \bigskip {\small SMDQ thanks his colleagues at Unilever for discussions and financial support of the Marie Curie Fellowship programme (European Union).}
\section{Introduction} The Einstein-Maxwell theory forms the basis for other gravitational-electromagnetic theories. The Einstein-Maxwell theory arises from the Einstein-Hilbert gravitational action plus the Maxwell action. It is minimally coupled because there is no coupling in the Lagrangian between the Maxwell part and the curvature part. It also gives equations which are second order in the derivatives of the metric (as opposed to higher order), because the Lagrangian does not contain generic products of curvature terms (the second derivatives of the metric that appear in the Einstein-Hilbert gravitational Lagrangian form a divergence of some vector and do not give rise to equations of higher order). In addition, the Einstein-Maxwell theory is linear in the electrodynamics, which means that the Maxwell Lagrangian is quadratic in the Maxwell tensor. The property that most interests us here is the coupling between the electromagnetic and the gravitational parts. Thus we are led to classify gravitational-electromagnetic theories in a useful way into two classes, according to whether the theory is minimally or non-minimally coupled. The first class is minimally coupled gravitational-electromagnetism. It can have different subclasses. One can subdivide into two subclasses, according to whether the corresponding electrodynamics is linear or non-linear. One can then also subdivide into new subclasses according to the gravitational action, whether the gravitational part yields a second order theory (such as the Einstein-Hilbert theory) or a higher order theory. For instance, a minimally coupled theory, linear in the electrodynamics, and second order in the gravity part is the standard Einstein-Maxwell theory \cite{mtw}. There are many exact results in the framework of this theory, such as the Reissner-Nordstr\"om solution for a charged black hole, gravity wave solutions in electrovacuum, and cosmological models with a magnetic field, to name a few.
Another instance is a minimally coupled theory, with non-linear electrodynamics, and second order Einstein gravity. The well-known models of Born and Infeld \cite{BornInf} and of Heisenberg and Euler \cite{HeiEuler}, when coupled to gravity, belong to this subclass. One of the most interesting problems in this theory is the search for regular, non-singular black holes. This search, started by Bardeen in 1968 \cite{Bardeen} and developed by many authors (see, e.g., \cite{Shikin,Bronn,Burin,Berej}), led to the recent success of Ay\'on-Beato and Garc\'ia \cite{ay1,ay2,ay3} in finding exact solutions to the Einstein equations coupled with specific four-parameter models of non-linear electrodynamics. And of course there are the other cases of higher order theories coupled to Maxwell theory, or to non-linear electrodynamic theories. The second class is non-minimally coupled gravitational-electromagnetism. It can be subdivided according to whether the corresponding electrodynamics is linear or non-linear. Now, in non-minimally coupled models one can no longer divide, a priori, into second order and higher order theories, since by definition curvature terms appear in these models which in general give rise to higher order terms in the equations. This second class includes non-minimal equations for the electrodynamics containing couplings with the Riemann and Ricci tensors and the Ricci scalar. This class is very wide and comprises several subclasses, such as: non-minimal linear electrodynamics plus the Einstein-Hilbert term, non-minimal non-linear electrodynamics plus the Einstein-Hilbert term, non-minimal linear electrodynamics plus the Einstein-Hilbert and other pure curvature terms, non-minimal non-linear electrodynamics plus the Einstein-Hilbert and other pure curvature terms, and others.
All these subclasses of models belong to this second class since they share one specific feature: the Lagrangian contains an interaction part with specific cross-terms, involving scalar products of the Riemann tensor and its convolutions with the Maxwell tensor. Our goal is to study this second class. This class is of great interest, since the appearance of cross-terms in the Lagrangian leads to modifications of the coefficients involving the second-order derivatives in both the Maxwell and the Einstein equations. This means, in particular, that gravitational waves can propagate with a velocity different from the velocity of light in vacuum, in a similar fashion as electromagnetic waves propagate in a material medium. This added feature has many interesting applications in various systems and models, such as cosmological scenarios, gravitational waves interacting with electromagnetic fields, and charged black holes. More specifically, in cosmology the evolution of the gravitational perturbations may have another rate and scale. In astrophysics, the interaction of gravitational with electromagnetic waves may lead to time delays in the arrival of those waves, and the gravitational waves themselves would change their own properties in a way noticeable in gravitational wave detection. It also leads to important modifications of the electromagnetic and gravitational structure of a charged black hole. First we consider the simpler case of minimal coupling between the electromagnetic and gravitational parts, in Section 2. Then in Section 3 we study non-minimally coupled theories. In section 3.1 we obtain the structure of the master equations of the non-minimal gravitational-electromagnetic theory, for both non-linear and linear electrodynamics. In section 3.2 we consider in detail the linear version of the theory. In section 3.3 we briefly study an example.
\section{Minimal coupling of gravity and electromagnetism} \subsection{General formalism} In order to explain the novelty of our approach, let us first introduce the nomenclature in the well-known case of minimally coupled gravitational-electromagnetic theories. We will consider, generically, higher-order terms in the gravitational part, and non-linear terms in the electromagnetic part. The action functional is \cite{mtw} \begin{equation} S = \int d^4 x \sqrt{-g} \, {\cal L}_{\rm min} \,, \label{actionminimal} \end{equation} where, \begin{equation} {\cal L}_{\rm min} = \pounds \left[\frac{R}{\kappa}, R_{ik}R^{ik}, R_{ikmn}R^{ikmn}, ... \right] + {\cal L}(I_{(11)}, I^2_{(12)}) \,, \label{actionminimal2} \end{equation} $g$ is the determinant of the metric tensor $g_{ik}$, and ${\cal L}_{\rm min}$ is the Lagrangian for the minimally coupled theory. It is composed of two distinct parts, which do not mix: the Lagrangian $\pounds$, related to the metric field, and the Lagrangian ${\cal L}$, related to the electromagnetic field. The Lagrangian $\pounds\left[\frac{R}{\kappa}, R_{ik}R^{ik}, R_{ikmn}R^{ikmn}, ... \right]$ contains geometrical scalars only, \begin{equation} R\,,\quad R_{ik}R^{ik}\,,\quad R_{ikmn}R^{ikmn}, ...\,, \label{geometricscalars} \end{equation} where $R$ is the Ricci scalar, $R_{ik}$ is the Ricci tensor, and $R_{ikmn}$ is the Riemann tensor. The constant $\kappa$ equals $\frac{2G}{c^4}$. The Lagrangian ${\cal L}(I_{(11)}, I^2_{(12)})$ is an arbitrary function of the quantities $I_{(11)}$ and $I^2_{(12)}$. $I_{(11)}$ and $I_{(12)}$ form a first set (first subscript 1) of electromagnetic field invariants. This first set is composed of two invariants, distinguished by the second subscript: the first, $I_{(11)}$, and the second, $I_{(12)}$.
These invariants are quadratic in the anti-symmetric Maxwell tensor $F_{ik}$, and given by \begin{equation} I_{(11)} \equiv \frac{1}{2} F_{ik} F^{ik} \,, \quad I_{(12)} \equiv \frac{1}{2} F^{*}_{ik} F^{ik}\,. \label{2} \end{equation} The asterisk denotes the dualization procedure, defined as follows \begin{equation} F^{*ik} = \frac{1}{2}\epsilon^{ikls} F_{ls} \,. \label{3} \end{equation} Here $\epsilon^{ikls} = \frac{1}{\sqrt{-g}}\, {\rm e}^{ikls}$ is the Levi-Civita tensor and ${\rm e}^{ikls}$ is the completely anti-symmetric symbol with ${\rm e}^{0123} = - {\rm e}_{0123} = 1$. The Maxwell tensor satisfies the condition \begin{equation} \nabla_{k} F^{*ik} =0 \,, \label{conditiononmaxwelltensor} \end{equation} where $\nabla_{k}$ is the covariant derivative. Equation (\ref{conditiononmaxwelltensor}) can also be written as $\nabla_i F_{kl} + \nabla_l F_{ik} + \nabla_k F_{li} = 0$. Due to (\ref{conditiononmaxwelltensor}), the Maxwell tensor may be represented in terms of a four-vector potential $A_i$ as \begin{equation} F_{ik} = \nabla_i A_{k} - \nabla_k A_{i} = \frac{\partial A_{k}}{\partial x^i} - \frac{\partial A_{i}}{\partial x^k} \,. \label{6} \end{equation} Now, the variation of the action functional (\ref{actionminimal}) with respect to the four-vector potential $A_i$ gives the minimal vacuum Maxwell equations \begin{equation} \nabla_{k} \left[\frac{\partial {\cal L}}{\partial I_{(11)}} \ F^{ik} + \frac{\partial{\cal L}}{\partial I_{(12)}} \ F^{*ik} \right] = 0 \,. \label{equationmaxwellminimal} \end{equation} On the other hand, the variation of the action functional (\ref{actionminimal}) with respect to $g^{ik}$ yields the gravitational equations \begin{equation} \frac{1}{\kappa}{\rm Ein}_{ik} = T_{ik} \,, \label{einsteinequationsminimal} \end{equation} where ${\rm Ein}_{ik}$ is the corresponding non-linear generalization of the Einstein tensor, $G_{ik}= R_{ik} - \frac{1}{2} R g_{ik} \,$. 
The tensor $T_{ik}$, defined by \begin{equation} T_{ik} \equiv \frac{1}{2} {\cal L} g_{ik} - \frac{\partial{\cal L}}{\partial I_{(11)}} F_{in}F_{k}^{\cdot n} - \frac{1}{2}\frac{\partial{\cal L}}{\partial I_{(12)}} (F^{*}_{il}F_{k}^{\cdot l} + F_{il}F_{k}^{* l}) \,, \label{energymomentumminimal} \end{equation} is the symmetric stress-energy tensor of the electromagnetic field in vacuum. The tensor $T_{ik}$ is conserved in accordance with the Bianchi identities \begin{equation} \nabla^k T_{ik} = 0 \,. \label{11} \end{equation} \subsection{Example: linear Einstein-Maxwell theory} Before we leave this section, we give in this subsection, as a simple example, the usual linear Einstein-Maxwell theory. It can be obtained from equations (\ref{actionminimal})-(\ref{energymomentumminimal}) when the gravitational Lagrangian is given by the Einstein-Hilbert term $\frac{R}{\kappa}$ only, and ${\cal L}(I_{(11)}, I^2_{(12)}) \equiv I_{(11)}$. The relations (\ref{equationmaxwellminimal})-(\ref{energymomentumminimal}) reduce, respectively, to \begin{equation} \nabla_{k} F^{ik} = 0 \,, \label{liniearmaxwell} \end{equation} \begin{equation} \frac{1}{\kappa} G_{ik} = T^{(0)}_{ik} \,, \label{einsteinlinearmaxwell} \end{equation} and \begin{equation} \quad T^{(0)}_{ik} \equiv \frac{1}{4} g_{ik} F_{mn}F^{mn} - F_{in}F_{k}^{\cdot n} \,, \label{linearenergymomentum} \end{equation} where the superscript $(0)$ denotes that the tensor $T^{(0)}_{ik}$ is the simplest part of a more general electromagnetic stress-energy tensor. Such a formalism describes a minimal coupling of gravitation and electromagnetism, since the right-hand-side of the Einstein equations (\ref{einsteinequationsminimal}), as well as the Maxwell equations (\ref{conditiononmaxwelltensor}) and (\ref{equationmaxwellminimal}) contain metric couplings and covariant derivatives only, while the curvature tensor appears exclusively in the left-hand-side of (\ref{einsteinlinearmaxwell}). 
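As a consistency check of the minimal-coupling formulas above, the numerical sketch below evaluates the invariants $I_{(11)}$, $I_{(12)}$, the dualization procedure, and the tensor $T^{(0)}_{ik}$ of Eq. (\ref{linearenergymomentum}) for a constant electromagnetic field in flat spacetime. The signature $\mathrm{diag}(1,-1,-1,-1)$, the identification $F_{0i}=E_i$, $F_{ij}=\epsilon_{ijk}B_k$, and all variable names are our own illustrative conventions, not fixed by the text; the symbol ${\rm e}^{0123}=1$ and $\sqrt{-g}=1$ follow the definitions given above.

```python
import itertools
import numpy as np

# Flat (Minkowski) metric; the signature diag(1,-1,-1,-1) is our assumption,
# chosen so that T^(0)_{00} below comes out as the usual energy density.
eta = np.diag([1.0, -1.0, -1.0, -1.0])

def levi_civita4():
    """Anti-symmetric symbol e^{ikls} with e^{0123} = +1; since det g = -1,
    the Levi-Civita tensor eps^{ikls} = e^{ikls}/sqrt(-g) coincides with it."""
    e = np.zeros((4, 4, 4, 4))
    for p in itertools.permutations(range(4)):
        s, q = 1, list(p)
        for a in range(4):
            for b in range(a + 1, 4):
                if q[a] > q[b]:
                    s = -s
        e[p] = s
    return e

def field_tensor(E, B):
    """Covariant Maxwell tensor with F_{0i} = E_i and F_{ij} = eps_{ijk} B_k."""
    F = np.zeros((4, 4))
    F[0, 1:], F[1:, 0] = E, -np.asarray(E)
    F[1, 2], F[2, 3], F[3, 1] = B[2], B[0], B[1]
    F[2, 1], F[3, 2], F[1, 3] = -B[2], -B[0], -B[1]
    return F

E = np.array([0.3, -1.2, 0.5])        # arbitrary constant fields
B = np.array([0.7, 0.4, -0.9])
e = levi_civita4()
F = field_tensor(E, B)                           # F_{ik}
F_up = eta @ F @ eta                             # F^{ik}
Fs_up = 0.5 * np.einsum('ikls,ls->ik', e, F)     # F*^{ik} = (1/2) eps^{ikls} F_{ls}
Fs = eta @ Fs_up @ eta                           # F*_{ik}

I11 = 0.5 * np.einsum('ik,ik->', F, F_up)        # = B^2 - E^2
I12 = 0.5 * np.einsum('ik,ik->', Fs, F_up)       # = 2 E.B in these conventions

# Stress-energy tensor: T_{ik} = (1/4) g_{ik} F_{mn}F^{mn} - F_{in} F_k^{.n}.
T = 0.25 * eta * np.einsum('mn,mn->', F, F_up) - np.einsum('in,kn->ik', F, F @ eta)
```

The checks worth running on this sketch: the double dual gives back $-F^{ik}$ (as it must for a 2-form in Lorentzian signature), $T^{(0)}_{ik}$ is traceless, and $T^{(0)}_{00}=\frac{1}{2}(E^2+B^2)$ is the familiar electromagnetic energy density.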
In the following section, and in contrast to the minimal gravitational-electromag\-netic equations discussed in this section, we consider in some detail a non-minimal gravitational-electromagnetic theory, with both non-linear and linear electrodynamics, generalizing the Einstein-Maxwell theory and other minimal theories. This approach deals with self-consistent modifications to both the Einstein and the Maxwell equations. \section{Non-minimal extensions \\ of the Einstein-Maxwell Lagrangian} \subsection{Full formalism and equations} \subsubsection{Invariants containing the Maxwell and dual-Maxwell tensors coupled with the Ricci scalar and the Riemann and Ricci tensors} Let us introduce the invariant scalars, quadratic in the tensors $F^{ik}$ and $F^{*}_{ik}$ and containing the Riemann and Ricci tensors and the Ricci scalar. These scalars yield cross-terms and are the appropriate quantities for the description of non-minimal interactions. They can be formally divided into five sets. The first set is the trivial one, containing $I_{(11)}$ and $I_{(12)}$ alone, as described before. The second set contains $I_{(11)}$ and $I_{(12)}$ multiplied by $R$, \begin{equation} I_{(21)} \equiv \frac{R}{2} g^{im}g^{kn}F_{ik}F_{mn} \,, \quad I_{(22)} \equiv \frac{R}{2} g^{im}g^{kn}F^{*}_{ik}F_{mn} \,. \label{13} \end{equation} The third set includes the Ricci tensor $R_{mn}$, \begin{equation} I_{(31)} \equiv \frac{1}{2} R^{im}g^{kn}F_{ik}F_{mn} \,, \quad I_{(32)} \equiv \frac{1}{2} R^{im}g^{kn}F^{*}_{ik}F_{mn} \,. \label{14} \end{equation} The fourth set is based on the convolutions of the quadratic combinations of $F^{ik}$ and $F^{*}_{ik}$ with the Riemann tensor, \begin{equation} I_{(41)} \equiv \frac{1}{2} R^{ikmn}F_{ik}F_{mn} \,, \quad I_{(42)} \equiv \frac{1}{2} R^{ikmn}F^{*}_{ik}F_{mn} \,. \label{15} \end{equation} The invariants $I_{(21)}-I_{(42)}$ are chosen to be linear in the curvature.
Note also that the scalar $\frac{1}{2} R^{im}g^{kn}F^{*}_{ik}F^{*}_{mn}$ can be reduced to a linear combination of $I_{(21)}$ and $I_{(31)}$, and the scalar $\frac{1}{2} R^{ikmn}F^{*}_{ik}F^{*}_{mn}$ can be represented as a linear combination of $I_{(21)}$, $I_{(31)}$, $I_{(41)}$. Finally, the fifth set includes the various scalars non-linear in the curvature. Below we introduce a few of them, \begin{equation} I_{(51)} \equiv \frac{1}{2} g^{im}g^{kn}F_{ik}F_{mn} f_{{\rm R}} \,, \quad I_{(52)} \equiv \frac{1}{2} g^{im}g^{kn}F^{*}_{ik}F_{mn} F_{{\rm R}} \,, \label{fifthsetequation1} \end{equation} \begin{equation} I_{(53)} \equiv \frac{1}{2} R^{im}R^{kn}F_{ik}F_{mn} \,, \quad I_{(54)} \equiv \frac{1}{2} R^{im}R^{kn}F^{*}_{ik}F_{mn} \,, \label{17} \end{equation} \begin{equation} I_{(55)} \equiv \frac{1}{2} R^{ikab}R_{abmn}F_{ik}F^{mn} \,, \quad I_{(56)} \equiv \frac{1}{2} R^{ikab}R_{abmn}F^{*}_{ik}F^{mn} \,, \label{18} \end{equation} \begin{equation} I_{(57)} \equiv \frac{1}{2} R^{*ikab}R^{*}_{abmn}F_{ik}F^{mn} \,, \quad I_{(58)} \equiv \frac{1}{2} R^{*ikab}R^{*}_{abmn}F^{*}_{ik}F^{mn} \,, \label{19} \end{equation} \begin{equation} I_{(59)} \equiv \frac{1}{2} R^{ikab}R_{abcd}R^{cdmn}F_{ik}F_{mn} \,, ... \,. \label{20} \end{equation} In equation (\ref{fifthsetequation1}), $f_{{\rm R}}$ and $F_{{\rm R}}$ denote arbitrary functions of all the possible independent non-linear scalar invariants of the gravitational field, such as $R^2$, $R_{mn}R^{mn}$, $R_{ikmn}R^{ikmn}$, $R^{*}_{ikmn}R^{ikmn}$, ... Thus, the non-minimal Lagrangian can be written in the form \begin{equation} {\cal L}_{{\rm non{-}min}} {=} \pounds\left(\frac{R}{\kappa}, R_{mn}R^{mn}, ...\right) + {\cal L}(I_{(11)}, I^2_{(12)}, I_{(21)}, I^2_{(22)}, I_{(31)}, I^2_{(32)}, I_{(41)}, I^2_{(42)}, ...) \,. \label{actionfunctionalnonminimal} \end{equation} This non-minimal Lagrangian is $U(1)$ gauge invariant since it contains the Maxwell tensor $F_{ik}$ only, and does not include the four-vector potential $A^i$.
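The $U(1)$ gauge invariance just stated can be made concrete with a short symbolic check (sympy; the sample potential $A_i$ and gauge function $\lambda$ are arbitrary choices of ours): the tensor $F_{ik}=\partial_i A_k-\partial_k A_i$ is unchanged under $A_i \rightarrow A_i + \partial_i \lambda$, so any Lagrangian built from $F_{ik}$ alone inherits the invariance.

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
X = (t, x, y, z)

# Sample four-potential and gauge function (arbitrary illustrative choices).
A = [x * y, t * z, sp.sin(x), t + y ** 2]
lam = t * x + sp.cos(y) * z

def maxwell_tensor(A):
    """F_{ik} = d_i A_k - d_k A_i; in the anti-symmetrised combination the
    covariant derivatives reduce to partial derivatives."""
    return sp.Matrix(4, 4, lambda i, k: sp.diff(A[k], X[i]) - sp.diff(A[i], X[k]))

F = maxwell_tensor(A)
F_gauged = maxwell_tensor([A[i] + sp.diff(lam, X[i]) for i in range(4)])

# Gauge invariance (mixed partial derivatives of lambda cancel) and anti-symmetry.
assert sp.simplify(F - F_gauged) == sp.zeros(4, 4)
assert sp.simplify(F + F.T) == sp.zeros(4, 4)
```

The cancellation is just the equality of mixed partial derivatives, $\partial_i\partial_k\lambda=\partial_k\partial_i\lambda$; the same mechanism underlies the non-minimal Lagrangian above.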
\subsubsection{Non-minimal non-linear electrodynamics} The variation, with respect to the 4-vector $A_k$, of the action functional with Lagrangian (\ref{actionfunctionalnonminimal}) yields the equation for the non-minimal non-linear electromagnetic field, \begin{equation} \nabla_k H^{ik} = 0 \,, \label{non-minimalnon-linearelectromagneticequation} \end{equation} where $H^{ik}$ is the induction tensor given by \begin{equation} H^{ik} = {\cal V}^{ikmn}F_{mn} + {\cal W}^{ikmn}F^{*}_{mn} \,, \label{non-minimalnon-linearelectromagneticfield} \end{equation} with \begin{eqnarray} {\cal V}^{ikmn} &\equiv& \frac{1}{2}\,(g^{im} g^{kn} - g^{km} g^{in})\left[ \frac{\partial {\cal L}}{\partial I_{(11)}} + \frac{\partial {\cal L}}{\partial I_{(21)}} R \right] + \nonumber\\ &&+\frac{1}{4}\,(R^{im} g^{kn} - R^{in} g^{km} + R^{kn} g^{im} - R^{km} g^{in}) \frac{\partial {\cal L}}{\partial I_{(31)}} \nonumber\\ &&+ R^{ikmn} \frac{\partial {\cal L}}{\partial I_{(41)}} + ... \,, \label{definitonofnu} \end{eqnarray} and \begin{eqnarray} &&{\cal W}^{ikmn} \equiv \frac{1}{2}\,(g^{im} g^{kn} - g^{km} g^{in})\left[ \frac{\partial {\cal L}}{\partial I_{(12)}} + \frac{\partial {\cal L}}{\partial I_{(22)}} R \right] + \nonumber\\ &&+ \frac{1}{8} \left[R(g^{im} g^{kn} - g^{km} g^{in}) - (R^{im} g^{kn} - R^{in} g^{km} + R^{kn} g^{im} - R^{km} g^{in}) \right] \frac{\partial {\cal L}}{\partial I_{(32)}} \nonumber\\ &&+\left[R^{ikmn} {+} \frac{R}{4}(g^{im} g^{kn} {-} g^{km} g^{in}) -\frac{1}{2} (R^{im} g^{kn} {-} R^{in} g^{km} {+} R^{kn} g^{im} {-} R^{km} g^{in}) \right]\frac{\partial {\cal L}}{\partial I_{(42)}} \nonumber\\ &&+ ... \,. 
\label{definitonofW} \end{eqnarray} \subsubsection{The generalized higher order Einstein equations} The variation, with respect to the metric coefficients $g^{ik}$, of the action functional with Lagrangian (\ref{actionfunctionalnonminimal}) yields the mon-minimal extension of the Einstein equations, \begin{eqnarray} 0 & =& {\rm Ein}_{ik} - \frac{1}{2} {\cal L} g_{ik} + \left( \frac{\partial{\cal L}}{\partial I_{(11)}} + R \frac{\partial{\cal L}}{\partial I_{(21)}} \right) F_{in}F_{k}^{\cdot n} + \nonumber\\ && + \frac{1}{2}\left(\frac{\partial{\cal L}}{\partial I_{(12)}} + R \frac{\partial{\cal L}}{\partial I_{(22)}} \right) (F^{*}_{il}F_{k}^{\cdot l} + F^{*}_{kl}F_{i}^{\cdot l}) + R_{ik} \left( I_{(11)} \frac{\partial{\cal L}}{\partial I_{(21)}} + I_{(12)} \frac{\partial{\cal L}}{\partial I_{(22)}} \right) \nonumber\\ && + \left( g_{ik} \nabla^l\nabla_l - \nabla_i \nabla_k \right) \left(I_{(11)} \frac{\partial{\cal L}}{\partial I_{(21)}} + I_{(12)} \frac{\partial{\cal L}}{\partial I_{(22)}}\right) + \nonumber\\ && {+} \frac{1}{2} \frac{\partial{\cal L}}{\partial I_{(31)}} \left[ F^{ln}(R_{il}F_{kn} {+} R_{kl}F_{in}) {+} R^{mn}F_{im}F_{kn} \right] {+} \frac{1}{4}g_{ik} \nabla_{m} \nabla_{l} \left(\frac{\partial{\cal L}}{\partial I_{(31)}}F^{mn}F^{l}_{\cdot n} \right) \nonumber\\ && {+} \frac{1}{4} \nabla^m \nabla_m \left(\frac{\partial{\cal L}}{\partial I_{(31)}}F_{in}F_{k}^{\cdot n} \right) {-} \frac{1}{4}\nabla_l \left[ \nabla_i \left(\frac{\partial{\cal L}}{\partial I_{(31)}}F_{kn}F^{ln}\right) {+} \nabla_k \left( \frac{\partial{\cal L}}{\partial I_{(31)}} F_{in}F^{ln}\right) \right] \nonumber\\ && {+} \frac{1}{8} \nabla_m \left\{\nabla^m \left[\frac{\partial{\cal L}}{\partial I_{(32)}} ( F^{*}_{in}F_{k}^{\cdot n} {+} F_{in}F_{k}^{* n}) \right] {+} g_{ik} \nabla_{l} \left[\frac{\partial{\cal L}}{\partial I_{(32)}}(F^{*mn}F^{l}_{\cdot n} {+} F^{mn}F^{*l}_{ \ \cdot n})\right] \right\} \nonumber\\ && {-} \frac{1}{8}\nabla_l \left\{ \nabla_i\left[\frac{\partial{\cal 
L}}{\partial I_{(32)}} (F^{*}_{kn}F^{ln} {+} F_{kn}F^{*ln})\right] {+} \nabla_k \left[ \frac{\partial{\cal L}}{\partial I_{(32)}} (F^{*}_{in}F^{ln} {+} F_{in}F^{*ln})\right] \right\} {+} \nonumber\\ && + \frac{1}{16}\frac{\partial{\cal L}}{\partial I_{(32)}}\left[ R (F^{*}_{in}F_{k}^{\cdot n} + F^{*}_{kn}F_{i}^{\cdot n} + F_{in}F_{k \cdot }^{* n} + F_{kn}F_{i \cdot}^{* n}) + \right. \nonumber\\ &&\quad\quad\quad\quad\quad \left. + (F^{*}_{in} R_{km} + F^{*}_{kn} R_{im})F^{mn} + (F_{in} R_{km} + F_{kn} R_{im})F^{*mn} \right] \nonumber\\ && + \frac{3}{4} \frac{\partial{\cal L}}{\partial I_{(41)}} F^{ls}(F_{i}^{\cdot n}R_{knls}+F_{k}^{\cdot n}R_{inls}) + \frac{1}{2}\nabla_{m} \nabla_{n} \left[\frac{\partial{\cal L}}{\partial I_{(41)}} \left(F_{i}^{\cdot n}F_{k}^{\cdot m} + F_{k}^{\cdot n}F_{i}^{\cdot m} \right)\right] + \nonumber\\ && + \frac{3}{8} \frac{\partial{\cal L}}{\partial I_{(42)}} (F_{i}^{* m}R_{kmls} F^{ls} + F_{k}^{* m}R_{imls} F^{ls} + F_{i}^{\cdot m}R_{kmls} F^{*ls} + F_{k}^{\cdot m}R_{imls} F^{*ls}) + \nonumber\\ && + \frac{1}{2}g_{ik} \frac{\partial{\cal L}}{\partial I_{(42)}} I_{(42)} + \frac{1}{4} \nabla_{m} \nabla_{n}\left[\frac{\partial{\cal L}}{\partial I_{(42)}} \left(F_{i}^{* m}F_{k}^{\cdot n} {+} F_{k}^{* m}F_{i}^{\cdot n} + F_{i}^{\cdot m}F_{k}^{* n} {+} F_{k}^{\cdot m}F_{i}^{* n} \right)\right] \nonumber\\ && +...\, . \label{einsteinnonminimal} \end{eqnarray} These equations can be rewritten in the well-known form ${\rm Ein}_{ik} = \kappa T^{({\rm eff})}_{ik}$, but this is not the canonical form when one is dealing with general non-minimal non-linear electrodynamics. The reason is the following: even if the Lagrangian for the pure gravitational field is of the Einstein-Hilbert form, equation (\ref{einsteinnonminimal}) contains higher order derivatives of the metric, coming from the curvature tensor in terms containing non-minimal scalars, $\frac{\partial{\cal L}}{\partial I_{(ab)}}$.
Thus, a generic non-minimal non-linear electrodynamics is associated with a higher order theory of gravitation. One can then ask whether non-minimal non-linear electrodynamics models exist for which gravity is of second order. We believe that this is possible for a special choice of the dependence ${\cal L}(I_{(ab)})$ and for specific symmetric space-times. Below we consider a simple example confirming this idea. \subsection{Non-minimal coupling models, with the coupling linear in the curvature, in Einstein-Hilbert gravity} \subsubsection{The action} A special case worthy of discussion is obtained when one restricts the above theory to a Lagrangian that is Einstein-Hilbert in the gravity term, quadratic in the Maxwell tensor, and with a coupling between electromagnetism and the metric that is linear in the curvature. Thus the theory may contain only the invariants $I_{(11)}$, $I_{(21)}$, $I_{(31)}$, $I_{(41)}$. Such a Lagrangian takes the form \begin{equation} {\cal L} = \frac{R}{\kappa} + \frac{1}{2} F_{mn}F^{mn} + \frac{1}{2} \,{\chi}^{ikmn} F_{ik}F_{mn} \,, \label{simplifiednonminimal} \end{equation} where the quantity ${\chi}^{ikmn}$ is the susceptibility tensor. The origin of this terminology is the following. One obtains from the Lagrangian (\ref{simplifiednonminimal}) with the definition (\ref{non-minimalnon-linearelectromagneticfield}) that the induction tensor $H^{ik}$ and the Maxwell tensor $F_{mn}$ are linked by the linear constitutive law (see, e.g., \cite{Mauginbook,HehlObukhov}) \begin{equation} H^{ik} \equiv F^{ik} + {\chi}^{ikmn} F_{mn} \,.
\label{inductiontensor} \end{equation} Another important tensor, appearing in the electrodynamics of continuum media, is the polarization-magnetization tensor $M^{ik}$, defined by \begin{equation} 4 \pi M^{ik} \equiv H^{ik} - F^{ik} \,, \label{polarization-magnetizationtensor} \end{equation} and equal to \begin{equation} 4 \pi M^{ik} = {\chi}^{ikmn} F_{mn} \,, \label{1polarization-magnetizationtensor} \end{equation} according to (\ref{inductiontensor}). In the standard terminology of continuum electrodynamics \cite{Nye,landau} the proportionality coefficients ${\chi}^{ikmn}$ form the so-called susceptibility tensor. Generally, it has the same index-transposition symmetries as the Riemann tensor, and has 21 independent components. In our case the susceptibility tensor is linear in the curvature. \subsubsection{Susceptibility tensor} According to the specifications above, the susceptibility tensor has to be of the form \begin{equation} {\chi}^{ikmn} \equiv \frac{q_1 R}{2}(g^{im}g^{kn} {-} g^{in}g^{km}) {+} \frac{q_2}{2} (R^{im}g^{kn} {-} R^{in}g^{km} {+} R^{kn}g^{im} {-} R^{km}g^{in}) {+} q_3 R^{ikmn} \,. \label{susceptibilitytensor1} \end{equation} The parameters $q_1$, $q_2$, and $q_3$ are in general arbitrary. They have to be chosen by some ad hoc constraint, phenomenological or otherwise. For instance, the Lagrangian of the type given by equations (\ref{simplifiednonminimal}) and (\ref{susceptibilitytensor1}), with $q_1=q_2=0$, $q_3= - \lambda$, and $\lambda$ a constant, has been proposed phenomenologically by Prasanna in the context of non-minimal modifications of the electrodynamics \cite{Prasanna1,Prasanna2}. Some general phenomenological properties of the Lagrangian (\ref{simplifiednonminimal}) and (\ref{susceptibilitytensor1}) have been discussed by Goenner in \cite{Go}. The problem of a phenomenological introduction of non-minimal terms into the electrodynamic equations has been exhaustively studied by Hehl and Obukhov \cite{hehl2}.
Drummond and Hathrell \cite{drummond} made a qualitatively new step: they obtained modified Maxwell equations from one-loop corrections of quantum electrodynamics in curved spacetime. Their model is not phenomenological and corresponds to the Lagrangian (\ref{simplifiednonminimal}) and (\ref{susceptibilitytensor1}) with specific choices for $q_1$, $q_2$, and $q_3$, which involve the fine structure constant and the Compton wavelength of the electron. A quantum electrodynamics motivation for the use of generalized Maxwell equations can also be found, for instance, in the work of Kostelecky and Mewes \cite{Kost1}. Accioly, Azeredo, Arag\~ao, and Mukai \cite{Accioly} used the Prasanna electrodynamic equations to construct a special example of a conserved non-minimal effective stress-energy tensor containing the Riemann tensor. Exact solutions of master equations of non-minimal electrodynamics in a non-linear gravitational wave background were obtained and discussed in \cite{B1}-\cite{BL1}. The susceptibility tensor ${\chi}^{ikmn}$ has the same index symmetries as the Riemann tensor $R^{ikmn}$. Its convolutions yield \begin{eqnarray} g_{kn}{\chi}^{ikmn} &=& R^{im}(q_2 + q_3) + \frac{1}{2}R g^{im}(3 q_1 + q_2) \,, \nonumber\\ g_{kn}g_{im}{\chi}^{ikmn}&=& R (6 q_1 + 3 q_2 + q_3) \,. \label{susceptconvoluted} \end{eqnarray} The coefficients $q_1$, $q_2$, and $q_3$ are considered to be independent phenomenological parameters. They introduce specific cross-terms, which describe non-minimal interactions of the electromagnetic and gravitational fields. Thus, one has a three-parameter family of non-minimal models. We now consider three specific variants in the choice of the set $q_1$, $q_2$ and $q_3$, and see how each choice influences the expression for the susceptibility tensor.
\vskip0.5cm \noindent{\it (a) {The susceptibility tensor is proportional to the double-dual Riemann tensor}} \vskip0.2cm The gravitational analogue of the dual Maxwell tensor $F^{*}_{ik}$, is given by the double-dual Riemann tensor \begin{equation} {\cal G}_{ikmn} \equiv {}^{*}R_{ikmn}\,^{*} \equiv \frac{1}{4} \epsilon_{ikab}R^{abcd} \epsilon_{cdmn} \,. \label{31} \end{equation} The analogy is due to the similarity of the identity $\nabla^n F^{*}_{in} = 0$ for the Maxwell tensor, with the identity $\nabla^n{\cal G}_{ikmn} = 0$ for the double-dual Riemann tensor. The convolution of the double-dual Riemann tensor gives the Einstein tensor \begin{equation} g^{kn}\,{\cal G}_{ikmn} = R_{im} - \frac{1}{2}\,R \,g_{im} \,. \label{33} \end{equation} Now, the double-dual Riemann tensor is given by \begin{equation} {\cal G}^{ikmn} \equiv - \frac{R}{2}(g^{im}g^{kn} - g^{in}g^{km}) + (R^{im}g^{kn} - R^{in}g^{km} + R^{kn}g^{im} - R^{km}g^{in}) - R^{ikmn} \,. \label{34} \end{equation} Thus, if one imposes that the susceptibility tensor ${\chi}^{ikmn}$ is proportional to the double-dual Riemann tensor, i.e., \begin{equation} {\chi}^{ikmn} = q \ {\cal G}^{ikmn} \,, \label{susceptequaldoubledual} \end{equation} one obtains from equation (\ref{susceptibilitytensor1}) a one-parameter model with the following values for $q_1$, $q_2$, and $q_3$: $q_1 = q_3 = - q$, and $q_2 = 2 q$. This can also be written as, \begin{equation} q_1 + q_2 + q_3 = 0 \,, \quad 2q_1 + q_2 = 0 \,. \label{qsdoubledual2} \end{equation} For this one-parameter model the non-minimal Lagrangian (\ref{simplifiednonminimal}) can be rewritten in terms of the Ricci scalar, the Maxwell tensor, the dual Maxwell tensor and the standard Riemann tensor as follows, \begin{equation} {\cal L} = \frac{R}{\kappa} + \frac{1}{2} F_{mn}F^{mn} + \frac{q}{2} R^{ikmn} F^{*}_{ik}F^{*}_{mn} \,. 
\label{37} \end{equation} \vskip0.5cm \noindent{\it (b) {The susceptibility tensor is proportional to the Weyl conformal tensor}} \vskip0.2cm The Weyl tensor is given by \begin{equation} {\cal C}^{ikmn} \equiv R^{ikmn} + \frac{R}{6}(g^{im}g^{kn} {-} g^{in}g^{km}) {-} \frac{1}{2} (R^{im}g^{kn} {-} R^{in}g^{km} {+} R^{kn}g^{im} {-} R^{km}g^{in}) \,. \label{38} \end{equation} It has vanishing trace, i.e., $g_{kn}{\cal C}^{ikmn}= 0$. If one imposes that the susceptibility tensor ${\chi}^{ikmn}$ is proportional to the Weyl tensor, i.e., \begin{equation} {\chi}^{ikmn} = q \ {\cal C}^{ikmn} \,, \label{susceptequalweyl} \end{equation} one obtains from equation (\ref{susceptibilitytensor1}) that \begin{equation} 3q_1 + q_2 = 0 \,, \quad q_2 + q_3 = 0 \,. \label{qssuceptweyl} \end{equation} This is also a one-parameter model for which one can easily explicitly give the non-minimal Lagrangian (\ref{simplifiednonminimal}). \vskip0.5cm \noindent{\it (c) {The susceptibility tensor is equal to the Drummond-Hathrell tensor}} \vskip0.2cm Drummond and Hathrell \cite{drummond} have obtained modified Maxwell equations from one-loop corrections in quantum electrodynamics in curved spacetime. Their model corresponds to the Lagrangian (\ref{simplifiednonminimal}),(\ref{susceptibilitytensor1}) with the following coefficients \begin{equation} 2q_1-q_3=0 \,, \quad 13q_1+q_2=0 \,, \quad q_1 = - \frac{\alpha \lambda^2_e}{180 \pi} \,, \label{qsdrummondhathrell} \end{equation} where $\alpha$ is the fine structure constant and $\lambda_{\rm e}$ is the Compton wavelength of the electron. This is also a one-parameter model for which one can easily explicitly give the non-minimal Lagrangian (\ref{simplifiednonminimal}). 
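For definiteness, the three one-parameter models can be made fully explicit; the following consistency checks are straightforward to verify. In case (a), substituting $q_1 = q_3 = - q$ and $q_2 = 2q$ into the convolutions (\ref{susceptconvoluted}) gives \begin{equation} g_{kn}{\chi}^{ikmn} = q \left( R^{im} - \frac{1}{2} R \, g^{im} \right) \,, \end{equation} in agreement with the convolution property (\ref{33}) of the double-dual Riemann tensor. In case (b), the constraints (\ref{qssuceptweyl}) with $q_3 = q$ yield $q_1 = q/3$ and $q_2 = - q$; both convolutions in (\ref{susceptconvoluted}) then vanish identically, reflecting the tracelessness of the Weyl tensor. In case (c), the relations (\ref{qsdrummondhathrell}) fix the Drummond-Hathrell coefficients explicitly as \begin{equation} q_1 = - \frac{\alpha \lambda^2_{\rm e}}{180 \pi} \,, \quad q_2 = \frac{13 \, \alpha \lambda^2_{\rm e}}{180 \pi} \,, \quad q_3 = - \frac{\alpha \lambda^2_{\rm e}}{90 \pi} \,. \end{equation}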
\subsubsection{Non-minimal constitutive equations for the electromagnetic field} The relation (\ref{inductiontensor}) is of the type of a linear constitutive equation \cite{Mauginbook,HehlObukhov} \begin{equation} H^{ik} = C^{ikmn} F_{mn}\,, \label{link} \end{equation} where the material tensor $C^{ikmn}$ links the induction tensor with the Maxwell tensor. Comparing (\ref{inductiontensor}) with (\ref{link}) one finds \begin{equation} C^{ikmn} \equiv \frac{1}{2}(g^{im}g^{kn} - g^{in}g^{km}) + {\chi}^{ikmn} \,. \label{materialtensor} \end{equation} The material tensor $C^{ikmn}$ describes the properties of the linear response of the material to an electromagnetic field, and contains the information about dielectric and magnetic permeabilities, as well as about the magneto-electric coefficients \cite{Mauginbook,landau}. Using the medium four-velocity $U^i$, normalized such that $U^iU_i=1$, one can decompose $C^{ikmn}$ uniquely as \begin{eqnarray} &C^{ikmn} = \frac12 \left( \varepsilon^{im} U^k U^n - \varepsilon^{in} U^k U^m + \varepsilon^{kn} U^i U^m - \varepsilon^{km} U^i U^n \right) + \nonumber \\& +\frac12 \left[ -\eta^{ikl}(\mu^{-1})_{ls} \eta^{mns} + \eta^{ikl}(U^m\nu_{l \ \cdot}^{\ n} - U^n \nu_{l \ \cdot}^{\ m}) + \eta^{lmn}(U^i \nu_{l \ \cdot}^{\ k} - U^k \nu_{l \ \cdot}^{\ i} ) \right] \, . & \label{44} \end{eqnarray} Here $\varepsilon^{im}$ is the dielectric tensor, $(\mu^{-1})_{pq}$ is the magnetic permeability tensor, and $\nu_{p \ \cdot}^{\ m}$ is the magneto-electric coefficients tensor. These quantities are defined through \begin{eqnarray} \varepsilon^{im} &=& 2 C^{ikmn} U_k U_n\, \nonumber\\ (\mu^{-1})_{pq} &=& - \frac{1}{2} \eta_{pik} C^{ikmn} \eta_{mnq}\,, \nonumber\\ \nu_{p \ \cdot}^{\ m} &=& \eta_{pik} C^{ikmn} U_n =U_k C^{mkln} \eta_{lnp}\,. 
\label{varepsilonmagneticpermeabilitymagnetoelectriccoefficients} \end{eqnarray} The tensors $\eta_{mnl}$ and $\eta^{ikl}$ are anti-symmetric tensors orthogonal to $U^i$ and defined as \begin{equation} \eta_{mnl} \equiv \epsilon_{mnls} U^s \,, \quad \eta^{ikl} \equiv \epsilon^{ikls} U_s \,. \label{47} \end{equation} They are connected by the useful identity \begin{equation} - \eta^{ikp} \eta_{mnp} = \delta^{ikl}_{mns} U_l U^s = \Delta^i_m \Delta^k_n - \Delta^i_n \Delta^k_m \,, \label{usefulidentity} \end{equation} where the projection tensor $\Delta^{ik}$ is defined as \begin{equation} \Delta^{ik} = g^{ik} - U^i U^k \,. \label{49} \end{equation} The generalized 6-indices $\delta-$Kronecker tensor $\delta^{ikl}_{mns}$ (see, e.g., \cite{mtw}) may be defined by a recurrent formula through the $\delta-$Kronecker tensor with four indices, $\delta^{ik}_{mn}$, as \begin{equation} \delta^{ikl}_{mns} \equiv \delta^{i}_{m}\delta^{kl}_{ns} +\delta^{i}_{n}\delta^{kl}_{sm} +\delta^{i}_{s}\delta^{kl}_{mn} \,, \quad \delta^{ik}_{mn} \equiv \delta^{i}_{m}\delta^{k}_{n} - \delta^{i}_{n}\delta^{k}_{m} \,. \label{6indices} \end{equation} Upon contraction, equation (\ref{usefulidentity}) yields another useful identity \begin{equation} \frac{1}{2} \eta^{ikl} \eta_{klm} = - \delta^{il}_{ms} U_l U^s = - \Delta^i_m \,. \label{50} \end{equation} The tensors $\varepsilon_{ik}$ and $(\mu^{-1})_{ik}$ are symmetric, but $\nu_{l \ \cdot}^{\ k}$ is in general non-symmetric. The dot denotes the position of the second index when lowered. These three tensors are orthogonal to $U^i$, \begin{equation} \varepsilon_{ik} U^k = 0, \quad (\mu^{-1})_{ik} U^k = 0, \quad \nu_{l \ \cdot}^{\ k} U^l = 0 = \nu_{l \ \cdot}^{\ k} U_k \,. 
\label{orthog} \end{equation} Using the equation (\ref{materialtensor}), one can show through straightforward calculations that \begin{eqnarray} &\varepsilon^{im} = \Delta^{im} + 2 {\chi}^{ikmn} U_k U_n \,, \nonumber \\ & (\mu^{-1})_{pq} = \Delta_{pq} - \frac{1}{2} \eta_{pik} {\chi}^{ikmn} \eta_{mnq} = \Delta_{pq} - 2 \ ^{*}{\chi}^{*}_{plqs} U^l U^s \,, \nonumber \\ & \nu_{p \ \cdot}^{\ m} = \eta_{pik} {\chi}^{ikmn} U_n = - ^{*}{\chi}_{pln}^{\ \cdot \cdot \cdot \ m} U^l U^n \,, \label{materialtensors} \end{eqnarray} which in turn satisfy the relations (\ref{orthog}). From the relations given in (\ref{materialtensors}) one sees that the non-minimal interaction of the gravitational and electromagnetic fields effectively changes the dielectric and magnetic properties of the vacuum, and produces a specific magnetoelectric interaction. In this sense, under the influence of non-minimal interactions the vacuum behaves as a material medium, called a quasi-medium. Note from (\ref{materialtensors}) that the tensor ${\chi}^{ikmn}$ predetermines the changes in the dielectric properties of this quasi-medium, the double-dual tensor $^{*}{\chi}^{*}_{plqs}$ influences its magnetic properties, while the dual tensor $^{*}{\chi}_{pln}^{\ \cdot \cdot \cdot \ m}$ produces magneto-electric effects. In order to complete this analogy, one can write the relationships between the four-vectors electric induction $D^i$ and magnetic field $H^i$, on one hand, and the four-vectors electric field $E^i$ and the magnetic induction $B^i$ on the other hand. These relations are \begin{equation} D^i = \varepsilon^{im} E_m - B^l \nu_{l \ \cdot}^{\ i} \,, \quad H_i = \nu_{i \ \cdot}^{\ m} E_m + (\mu^{-1})_{im} B^m \,. \label{53} \end{equation} The vectors $D^i$, $H^i$, $E^i$ and $B^i$ are defined by the following formulae: \begin{equation} D^i = H^{ik} U_k \,, \quad H^i = H^{*ik} U_k \,, \quad E^i = F^{ik} U_k \,, \quad B^i = F^{*ik} U_k \,. 
\label{54} \end{equation} These vectors are orthogonal to the velocity four-vector $U^i$, \begin{equation} D^i U_i = 0 = E^i U_i \,, \quad H^i U_i = 0 = B^i U_i \,, \label{55} \end{equation} and form the basis for the decomposition of the tensors $F_{mn}$ and $H_{mn}$, \begin{equation} F_{mn} = E_m U_n - E_n U_m - \eta_{mnl} B^l \,, \quad H_{mn} = D_m U_n - D_n U_m - \eta_{mnl} H^l \,. \label{56} \end{equation} \subsubsection{Master equations for the gravitational field} We are working with a non-minimal electro-gravitational system, with the coupling terms linear in the curvature, with the additional restrictions that the Lagrangian is quadratic in the Maxwell tensor, so that the electrodynamic equations are linear in it, and that the gravity part is Einstein-Hilbert. In this non-minimal theory, linear in the curvature terms, the equations for the gravity field (\ref{einsteinnonminimal}) can be written in such a way as to look like the standard form of the Einstein equation, i.e., as \begin{equation} R_{ik} - \frac{1}{2} R \ g_{ik} = \kappa T^{({\rm eff})}_{ik} \,. \label{standardform} \end{equation} The effective stress-energy tensor $T^{({\rm eff})}_{ik}$ on the right-hand side of (\ref{standardform}) is quad\-ratic in the Maxwell tensor and takes the following form \begin{equation} T^{({\rm eff})}_{ik} = T^{(0)}_{ik} + q_1 T^{(1)}_{ik} + q_2 T^{(2)}_{ik} + q_3 T^{(3)}_{ik} \,. \label{effectivestress} \end{equation} The linear part of the electromagnetic stress-energy tensor $T^{(0)}_{ik}$ is given in equation (\ref{linearenergymomentum}).
The definitions for the other three parts of the stress-energy tensor, $T^{(1)}_{ik}$, $T^{(2)}_{ik}$ and $T^{(3)}_{ik}$, are \begin{equation} T^{(1)}_{ik} = R \ T^{(0)}_{ik} - \frac{1}{2} R_{ik} F_{mn}F^{mn} - \frac{1}{2} g_{ik} \nabla^l \nabla_l (F_{mn}F^{mn}) + \frac{1}{2} \nabla_{i} \nabla_{k} (F_{mn}F^{mn}) \,, \label{part1} \end{equation} \begin{eqnarray} T^{(2)}_{ik} &=& - \frac{1}{2}g_{ik}\left[ \nabla_{m} \nabla_{l}(F^{mn}F^{l}_{\cdot n} ) - R_{lm}F^{mn}F^{l}_{\cdot n}\right] - F^{ln}(R_{il}F_{kn} + R_{kl}F_{in}) - \nonumber\\ && {-} R^{mn} F_{im} F_{kn} {-} \frac{1}{2} \nabla^l \nabla_l (F_{in}F_{k}^{\cdot n}) {+} \frac{1}{2}\nabla_l \left[ \nabla_i(F_{kn}F^{ln}) {+} \nabla_k(F_{in}F^{ln}) \right] \,, \label{part2} \end{eqnarray} \begin{equation} T^{(3)}_{ik} = \frac{1}{4}g_{ik} R^{mnls}F_{mn}F_{ls} {-} \frac{3}{4}F^{ls}(F_{i}^{\cdot n}R_{knls}+F_{k}^{\cdot n}R_{inls}) {-} \frac{1}{2}\nabla_{m} \nabla_{n}(F_{i}^{\cdot n}F_{k}^{\cdot m} {+} F_{k}^{\cdot n}F_{i}^{\cdot m})\,. \label{part3} \end{equation} Note that $T^{(3)}_{ik}$ in equation (\ref{part3}) takes the same form as the stress-energy tensor constructed in \cite{Accioly}. While the stress-energy tensor of the electromagnetic field, $T^{(0)}_{ik}$, has zero trace, the effective stress-energy tensor $T^{({\rm eff})}_{ik}$ has a nonvanishing trace. Indeed, ${T^{({\rm eff})}} \equiv g^{ik} T^{({\rm eff})}_{ik}$, is given by \begin{eqnarray} T^{({\rm eff})}&=& - q_1 \left[ \frac{1}{2} R F_{mn}F^{mn} + \frac{3}{2} \nabla^{k} \nabla_{k} (F_{mn}F^{mn}) \right] \nonumber\\ && - q_2 \left[ R^{mn}F^{k}_{\cdot m}F_{kn} + \frac{1}{2} \nabla^k \nabla_k (F_{mn}F^{mn}) + \nabla^{m} \nabla_{n}(F^{kn}F_{km}) \right] \nonumber\\ && - q_3 \left[ \frac{1}{2} R^{mnls}F_{mn}F_{ls} + \nabla^{m} \nabla_{n}(F^{kn}F_{km}) \right] \nonumber\\ &=& - \frac{1}{2} {\chi}^{mnls}F_{mn}F_{ls} - ( q_2 + q_3) \nabla^{m} \nabla_{n}(F^{kn}F_{km}) \nonumber\\ && - \frac{1}{2}(3 q_1 + q_2)\nabla^k \nabla_k (F_{mn}F^{mn}) \,. 
\label{traceefective} \end{eqnarray} Note that the sign of the trace is not defined a priori; it depends on the specific model one uses. This feature also happens in non-linear electrodynamic models (see, e.g., \cite{LemosKerner}). Equations (\ref{standardform})-(\ref{part3}) contain covariant derivatives of the Max\-well tensor only, and do not involve derivatives of the Riemann tensor, Ricci tensor and Ricci scalar. Thus for a given electromagnetic field they form a system of differential equations containing second order partial derivatives of the metric. Nevertheless, these equations have to be completed by the self-consistent equations of non-minimal electrodynamics (\ref{conditiononmaxwelltensor}), (\ref{non-minimalnon-linearelectromagneticequation}), and (\ref{inductiontensor}), which contain the covariant derivatives of the Riemann tensor, Ricci tensor and Ricci scalar. In general, the Maxwell tensor, envisaged as a solution to equations (\ref{conditiononmaxwelltensor}), (\ref{non-minimalnon-linearelectromagneticequation}), and (\ref{inductiontensor}), depends on the second order partial derivatives of the metric. Thus, in general, the equations for the gravitational field become of fourth order. However, the parameters $q_1$, $q_2$ and $q_3$ are arbitrary and may be fixed in an appropriate way. So, the question of whether or not there are models which are effectively of second order in the derivatives of the metric is pertinent. Below, in section 3.3, we explicitly show one such model. \subsubsection{Bianchi identities} Since the Einstein tensor in the left-hand-side of equation (\ref{standardform}) is divergence-free, the effective stress-energy tensor (\ref{effectivestress})-(\ref{part3}) has to be conserved, i.e., \begin{equation} \nabla^k T^{({\rm eff})}_{ik} = 0 \,.
\label{bianchiI} \end{equation} In order to check directly that this is true, one has to use, first, the Maxwell equations (\ref{conditiononmaxwelltensor}) and (\ref{non-minimalnon-linearelectromagneticequation}) with (\ref{inductiontensor}), and second, the Bianchi identities and the properties of the Riemann tensor, $\nabla_i R_{klmn} + \nabla_l R_{ikmn} + \nabla_k R_{limn} = 0$ and $R_{klmn} + R_{mkln} + R_{lmkn} = 0\,$, as well as the rules for the commutation of covariant derivatives, which for vectors yields $(\nabla_l \nabla_k - \nabla_k \nabla_l) W^i = W^m R^i_{\cdot mlk}\,$. \subsection{An example: static spherically symmetric gravitational and electromagnetic fields non-minimally coupled} The line element for the static spherically symmetric model has the form \begin{equation} ds^2 = B(r) \ c^2 dt^2 - A(r) \ dr^2 - r^2(d\theta^2 + \sin^2\theta \ d\varphi^2) \,. \label{metricstatictspheric} \end{equation} Assume also that the electromagnetic field inherits the static and spherical symmetries. Then the electric field potential $A_i$ has the form $A_i = \varphi(r) \delta^0_i$. The Maxwell tensor happens to be equal to $F_{ik}= \varphi^{\prime} (\delta^{r}_{i}\delta^{0}_{k} - \delta^{0}_{i}\delta^{r}_{k})$, where a prime denotes the derivative with respect to $r$. To characterize the electric field, it is convenient to introduce a new scalar quantity $E(r)$ as follows, \begin{equation} E^2(r) \equiv - E^i E_i = - \frac{1}{2} F_{ik}F^{ik} = \frac{1}{AB}F^2_{r0} = \frac{1}{AB}{\varphi^{\prime}}\,^2 \,, \label{71} \end{equation} where the four-vector $E^i$ is defined in equation (\ref{54}), and the velocity four-vector is chosen to be equal to $U^i = \delta^i_0 B^{-\frac{1}{2}}$. To fix the sign we choose $F_{r0}= - (AB)^{\frac{1}{2}}\,E(r)$ and $F^{r0}= (AB)^{-\frac{1}{2}}\,E(r)$. 
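It is useful to note why the field equations will reduce to a single ordinary differential equation. For the metric (\ref{metricstatictspheric}) one has $\sqrt{-g} = (AB)^{\frac{1}{2}} \, r^2 \sin\theta$, and for any antisymmetric tensor the covariant divergence reduces to \begin{equation} \nabla_k H^{ik} = \frac{1}{\sqrt{-g}} \, \partial_k \left( \sqrt{-g} \, H^{ik} \right) \,. \end{equation} Moreover, the only non-vanishing components of the induction tensor are $H^{r0} = - H^{0r} = F^{r0} \left( 1 + 2 \chi^{0r}_{\cdot \cdot 0r} \right)$, and all quantities depend on $r$ only.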
For this electromagnetic field the Maxwell equations (\ref{conditiononmaxwelltensor}) are satisfied identically, while equations (\ref{non-minimalnon-linearelectromagneticequation}) and (\ref{inductiontensor}) give only one non-trivial equation when $i=0$, \begin{equation} \left[r^2 E(r) \left(1 + 2 \chi^{0r}_{\cdot \cdot 0r}(r) \right) \right]^{\prime} = 0 \,. \label{72} \end{equation} The function $E(r)$ can then be found to be \begin{equation} E(r) = \frac{Q}{r^2 \varepsilon^r_r(r)} \,, \quad {\rm where} \quad \varepsilon^r_r(r) \equiv 1 + 2 \chi^{0r}_{\cdot \cdot 0r}(r) \,, \label{electricfieldspheric} \end{equation} and $Q$ is a constant. Assume now that the space-time with metric (\ref{metricstatictspheric}) is asymptotically flat and $\chi^{0r}_{\cdot \cdot 0r}(\infty) = 0$. Then, the constant $Q$ in (\ref{electricfieldspheric}) coincides with the total charge of the object if $\varphi(r)\to \frac{Q}{r}$ at $r \to \infty$. Using (\ref{susceptibilitytensor1}) one can compute the term $\chi^{0r}_{\cdot \cdot 0r}(r)$, \begin{eqnarray} \chi^{0r}_{\cdot \cdot 0r}(r) &=& (q_1+q_2+q_3) \left[\frac{B^{\prime \prime}}{2AB} - \frac{(B^{\prime})^2}{4 AB^2} - \frac{A^{\prime}B^{\prime}}{4 A^2 B} \right] +\nonumber\\ &&+ (2q_1 + q_2) \frac{1}{2rA} \left(\frac{B^{\prime}}{B} - \frac{A^{\prime}}{A} \right) - q_1 \frac{1}{r^2} \left(1 - \frac{1}{A} \right) \,. \label{75} \end{eqnarray} So, from equation (\ref{electricfieldspheric}), one sees that generally, $E(r)$ contains derivatives of the metric up to the second order. With such an electric field, equations (\ref{standardform})-(\ref{part3}) for the gravitational field become of the fourth order. To illustrate this statement take the trace of equation (\ref{standardform}), $R = - \kappa T^{({\rm eff})}$, where the trace $T^{({\rm eff})}$ is given in (\ref{traceefective}). 
For the metric (\ref{metricstatictspheric}) and the electric field (\ref{electricfieldspheric})-(\ref{75}) the trace equation takes the form $$ \frac{1}{\kappa}\left[\frac{B^{\prime \prime}}{B} - \frac{(B^{\prime})^2}{2 B^2} - \frac{A^{\prime}B^{\prime}}{2 A B} + \frac{2}{r} \left( \frac{B^{\prime}}{B} - \frac{A^{\prime}}{A}\right) - \frac{2}{r^2} \left( A - 1 \right) \right] = $$ $$ = (E^2)^{\prime \prime} (3q_1 {+} 2q_2 {+} q_3) {+} (E^2)^{\prime} \left[ (3q_1 {+} 2q_2 {+} q_3) \left( \frac{B^{\prime}}{2B} {-} \frac{A^{\prime}}{2A} {+} \frac{2}{r} \right) {+} \frac{2}{r}(q_2 {+} q_3) \right] {+} $$ $$ + E^2 \left[ (q_1 {+} q_2 {+} q_3) \left( {-} \frac{B^{\prime \prime}}{B} {+} \frac{(B^{\prime})^2}{2B^2} {+} \frac{A^{\prime}B^{\prime}}{2 A B} + \frac{2}{r^2} \right) {-} \right. $$ \begin{equation} \left. - \frac{(2q_1 {-} q_3)}{r} \left( \frac{B^{\prime}}{B} {-} \frac{A^{\prime}}{A} \right) + \frac{2q_1 }{r^2} (A {-} 2) \right] \,. \label{spur} \end{equation} Generally, equation (\ref{spur}) includes the first and the second derivatives of the square of the electric field $E(r)$, which in turn contains the first and the second derivatives of the metric coefficients. Thus, for generic $q_1$, $q_2$ and $q_3$ we obtain a fourth order scalar equation for the gravity field. Direct calculations show that the equations derived from (\ref{standardform}) for the sets of indices $tt$, $rr$, $\theta \theta$, $\varphi \varphi$ display the same features. Now, when the susceptibility tensor is proportional to the double-dual Riemann tensor, i.e., $q_1+q_2+q_3=0$ and $2q_1+q_2=0$ or $q_1=q_3=-q$ and $q_2=2\,q$, all the derivatives disappear from the expression for $E(r)$, providing the formula \begin{equation} E(r) = \frac{Q}{r^2 + 2q \left(1 - \frac{1}{A} \right)} \,. \label{76} \end{equation} Thus, we recover the result obtained by M\"uller-Hoissen and Sippel in \cite{MH} for the special model with $q_1=q_3=\gamma$, $q_2=-2\gamma$.
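The way the derivatives cancel can be checked directly in (\ref{75}): the condition $q_1 + q_2 + q_3 = 0$ eliminates the first bracket, the condition $2 q_1 + q_2 = 0$ eliminates the second term, and the remaining term gives \begin{equation} \chi^{0r}_{\cdot \cdot 0r}(r) = - q_1 \, \frac{1}{r^2} \left( 1 - \frac{1}{A} \right) = \frac{q}{r^2} \left( 1 - \frac{1}{A} \right) \,, \end{equation} so that $\varepsilon^r_r(r) = 1 + \frac{2q}{r^2} \left( 1 - \frac{1}{A} \right)$, and (\ref{electricfieldspheric}) indeed reduces to (\ref{76}) without any derivatives of the metric.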
Moreover, equations (\ref{standardform})-(\ref{part3}) simplify significantly, in particular, equation (\ref{spur}) yields \begin{equation} R = \frac{2 \kappa q}{r^2 A} \left[ r (E^2)^{\prime} + E^2 (2 - A) + \frac{r}{2} E^2 \left( \frac{B^{\prime}}{B} {-} \frac{A^{\prime}}{A} \right) \right] \,. \label{spur1} \end{equation} This equation is, evidently, of second order with respect to the derivative ${d}/{dr}$. For $A(\infty) = 1$ this electric field is asymptotically Coulombian. Formally, (\ref{76}) has a form of the type discussed in \cite{Bardeen,ay1,ay2,ay3}. We intend to consider such a model in a future work. \section{Conclusion} We have established a new self-consistent system of equations for the gravitational and electromagnetic fields. The procedure was based on a non-minimal and non-linear extension of the standard Einstein-Hilbert$-$Maxwell Lagrangian. The class of systems we have studied includes non-minimal electrodynamic equations, containing the Riemann and Ricci tensors and the Ricci scalar both in the non-linear and linear versions. This class of models of non-minimal and non-linear coupling of the gravitational and electromagnetic fields is of great interest, since the appearance of cross-terms in the Lagrangian leads to modifications of the coefficients involving the higher-order derivatives both in the Maxwell and Einstein equations. This means, in particular, that the velocity of the coupled gravito-electromagnetic waves should differ from the speed of light in vacuum. The general field equations obtained in the paper can in principle be classified using the explicit dependence of ${\cal L} (I_{(ab)})$ on $I_{(ab)}$ in the non-linear theory, whereas in the linear theory one uses the phenomenological parameters $q_1$, $q_2$ and $q_3$. This is important for two reasons. First, one should search for non-minimal models in which the gravitational field is described by equations of the second order in the derivatives of the metric. 
We have shown explicitly that static spherically symmetric configurations satisfy such a requirement if the susceptibility tensor is proportional to the double-dual Riemann tensor. This model requires a detailed analysis and we intend to consider it in a separate paper. Second, for the non-minimal non-linear coupling between electrodynamics and gravitation one should search for master equations (no matter whether they are of second or of higher order) admitting non-singular, regular solutions for the gravitational and electromagnetic fields. \section{Acknowledgments} AB is grateful for the hospitality of CENTRA/IST in Lisbon and a grant from the Portuguese FCT. This work was partially funded by the Portuguese FCT through project POCTI/FNU/44648/2002, and by the Russian RFBR through project no 04-05-64895.
\section{Introduction} Percolation phenomena~\cite{staufferbook} represent probably the simplest examples of phase transitions that one could possibly imagine. On infinite lattices, the process consists in occupying sites or bonds/links with some probability $p$. Nearest-neighboring occupied sites or links form clusters. When $p$ exceeds a given system-dependent threshold value $p_c$, a macroscopic cluster, i.e. a cluster occupying a finite fraction of all available sites or links, is formed ({\it percolation cluster}). This transition is continuous, or second-order, as the order parameter varies smoothly from zero to values greater than zero. The same type of connectedness transition occurs not only on regular graphs like lattices, but on any type of graph. On random networks {\`a} la Erd\H{o}s-R\'enyi (ER)~\cite{erdos}, for instance, one starts from a set of $N$ nodes and adds links such that the probability $p$ that two nodes are joined by a link is the same for all pairs of nodes. When $p$ exceeds the value $p_c\sim 1/N$, a percolation cluster, or {\it giant component}, emerges and the transition is again continuous. Another well studied example is that of random networks with a power law distribution of the degree (the number of neighbors of a node), usually called {\it scale-free} (SF) {\it networks}~\cite{Newman:2003,vitorep,barratbook}. Here the process is better defined by removing, rather than adding, links. Links are removed until the graph is fragmented into microscopic clusters, i.e. there is no giant component. Remarkably, it has been shown that, if the exponent $\lambda$ of the degree distribution is smaller than $3$, the giant component disappears only if one removes nearly all links of the graph, so that the fraction of remaining links with respect to the initial number goes to zero in the limit of infinite system size~\cite{cohen00,newman01,pastor00,dorogovtsev08,vazquez04}. This can be equivalently stated by saying that the percolation threshold is zero.
Nevertheless, whether the threshold is zero or non-zero, the percolation transition is still continuous. In fact, the continuous character of the transition is a feature of all known percolation processes. This holds, however, for {\it random percolation}, where links are randomly placed on the system, as in the examples above. Recently, Achlioptas {\it et al.} have shown that, if links are placed according to special cooperative rules, the percolation transition may become discontinuous~\cite{achlioptas09}. Such rules are non-local in character, as they require information from different parts of the system. Achlioptas {\it et al.} introduced their rules in the growth of random networks, and found an abrupt jump in the size of the giant component at the percolation threshold, hence the name {\it explosive percolation}. This peculiar type of transition is due to the fact that links are placed so as to considerably slow down the formation of large clusters, so that clusters are mostly of about the same size~\cite{friedman09,moreira09}. In this way one reaches a point at which the insertion of a vanishingly small fraction of links leads to the merger of most of these small clusters, generating a big macroscopic cluster. The same effect has been observed by Ziff on 2-dimensional lattices~\cite{ziff09}. For SF networks, the problem has been studied by Cho {\it et al.}~\cite{cho09} and by the authors of this paper~\cite{radicchi09}, with different conclusions. In Ref.~\cite{cho09} the authors conclude that the percolation transition is discontinuous for any value of the degree distribution exponent $\lambda$ greater than a critical value $\lambda_c \sim 2.3$; we found that, for $\lambda_c \leq \lambda \leq 3$, the behaviour at the percolation threshold is consistent with that of a continuous transition, while for $\lambda>3$ the expected behavior of a discontinuous transition is recovered.
In this paper, we carry out an extensive numerical analysis of the phenomenon of explosive percolation. We will describe the case of SF networks~\cite{radicchi09}, which we studied in our previous paper, but we will also present results on lattices and random networks. The results of random percolation in all graph topologies will be presented too, for comparison. As in the paper by Achlioptas {\it et al.}, all graphs discussed in this paper will be built through dynamic growth processes. The paper is organized as follows. In Section~\ref{sec:models}, we describe the growth models considered in this paper. Section~\ref{sec:analysis} contains the results obtained from our numerical simulations. In Section~\ref{sec:concl} we discuss the results of the analysis. A summary is presented in Section~\ref{sec:summ}. \section{Growth models} \label{sec:models} Our simulations of random percolation will be performed according to the Random Growth (RG) model, i.e., by iteratively adding one link to a system with $N$ nodes, where the link is randomly selected among all possible links. This procedure~\cite{keramiotis85,newman2000} is equivalent to classical bond percolation. In Achlioptas processes, too, links are added one by one. The difference is that the link to be added is chosen among two or more randomly selected links, according to a deterministic rule. In this paper we focus on the Product Rule (PR), which prescribes that the link to be picked is the one minimizing the product of the sizes of the two clusters joined by the link. The process is schematically illustrated in Fig.~\ref{fig1}. In this paper we shall often refer to this specific Achlioptas process as the PR model, or simply PR. \begin{figure} \begin{center} \includegraphics[angle=-90,width=\columnwidth]{pr2} \end{center} \caption{(Color online) Scheme of an Achlioptas process with product rule.
One of the two links represented by the dashed lines, which are selected at random among all possible pairs of non-adjacent nodes, has to be eventually added to the system. According to the product rule, the winner is the link that joins the pair of clusters with the smaller product of sizes. In this case the winning link is the one between clusters $c_1$ and $c_2$, whose size product is $7\cdot 2=14$, smaller than the size product of clusters $c_3$ and $c_4$ ($4\cdot 4=16$).} \label{fig1} \end{figure} Other options are available too: for instance, one could use the sum of the cluster sizes instead of the product. If, among two or more randomly selected links, the choice is random, one recovers random percolation. The systematic minimization criterion slows down the process of cluster growth, delaying the percolation transition, which may then become ``explosive''. For both RG and PR models, the growth proceeds until one reaches the desired density of links $p$. We define $p$ as the number of links of the graph divided by the total number of links present in the graph when it has been ``completed'', i.e., when the last link has been added. All graphs considered in this paper are ``sparse'', i.e., the ratio $\langle k\rangle$ between twice the number of links and the number of nodes $N$, which is the average degree of the nodes, does not depend on $N$. Therefore, since at time $t$ of the growth process there are exactly $t$ links in the system, their density $p$, according to our definition, is $2t/(N{\langle k\rangle})$. \section{Numerical analysis} \label{sec:analysis} Our numerical analysis aims to understand and characterize the nature of the percolation transition induced by RG and PR. In order to do that, we make use of finite size scaling~\cite{landau00}, a well-known technique adopted in numerical studies of phase transitions.
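In practice, a single growth step of the PR model can be implemented efficiently with a union-find (disjoint-set) structure that tracks cluster sizes. The sketch below is a minimal Python illustration of the rule, not the code used for our simulations; following our convention, the two candidate links are drawn among all possible node pairs, including pairs inside the same cluster.

```python
import random

class UnionFind:
    """Disjoint-set structure that keeps track of cluster sizes."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra          # merge smaller root into larger
        self.size[ra] += self.size[rb]

def product_rule_step(uf, n, rng=random):
    """Draw two candidate links and add the one whose endpoint
    clusters have the smaller product of sizes (Product Rule)."""
    e1 = (rng.randrange(n), rng.randrange(n))
    e2 = (rng.randrange(n), rng.randrange(n))
    prod1 = uf.size[uf.find(e1[0])] * uf.size[uf.find(e1[1])]
    prod2 = uf.size[uf.find(e2[0])] * uf.size[uf.find(e2[1])]
    winner = e1 if prod1 <= prod2 else e2
    uf.union(*winner)
    return winner
```

Iterating `product_rule_step` $t$ times produces the PR graph at the link density corresponding to $t$; replacing the minimization by a random choice between the two candidates recovers the RG model.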
For continuous phase transitions, every variable $X$ near the threshold $p_c$ is scale-independent, due to the infinite correlation length of the system at $p_c$, so it has a power law form, \begin{equation} X \sim |p-p_c|^{\omega}, \label{eqscal1} \end{equation} where $\omega$ is a critical exponent. On a finite system of size $N$, the variable $X$ has the following scaling form near the threshold \begin{equation} X = N^{-\omega/\nu}F\left[ \left(p-p_c\right)\, N^{1/\nu} \right], \label{eqscal2} \end{equation} where $\nu$ is another critical exponent and $F$ a universal function. For $p=p_c$, the variable displays the simple scaling $X \sim N^{-\omega/\nu}$, which can be used to deduce the exponents' ratio $\omega/\nu$ from the examination of several systems with different sizes. Also, if $p_c$, $\nu$ and $\omega$ are known, by plotting the expression $XN^{\omega/\nu}$ as a function of $\left(p-p_c\right)\, N^{1/\nu}$ one obtains the universal function $F$, which does not depend on $N$, so curves corresponding to different system sizes collapse. In this work we examined the two main variables measured in percolation, i.e. the {\it percolation strength} $P$ and the {\it average cluster size} $S$. The percolation strength $P$ is the order parameter of the transition, and measures the relative size of the percolating cluster(s) with respect to the total system size $N$. On generic graphs there is no operational criterion to define a percolating cluster (as opposed to lattices), so one usually takes $P$ as the relative size of the largest connected cluster. The critical exponent of the percolation strength is indicated with $\beta$ and the scaling ansatz of $P$ is \begin{equation} P = N^{-\beta/\nu}F^{(1)}\left[ \left(p-p_c\right)\, N^{1/\nu} \right]. \label{eqP} \end{equation} The average cluster size $S$ is defined as \begin{equation} S = \frac{\sum_{s} n_s s^2}{\sum_{s}n_s s} \;, \end{equation} where $n_s$ stands for the number of clusters of size $s$ per node.
The sums run over all possible values of $s$ except for that of the largest cluster. The critical exponent of the average cluster size is indicated with $\gamma$ and the scaling ansatz of $S$ is \begin{equation} S = N^{\gamma/\nu}F^{(2)}\left[ \left(p-p_c\right)\, N^{1/\nu} \right]. \label{eqS} \end{equation} The universal functions $F^{(1)}$ and $F^{(2)}$ of Eqs.~(\ref{eqP}) and~(\ref{eqS}) are different from each other, although they are related. We remark that in Ref.~\cite{radicchi09} we used the susceptibility $\chi$ of the order parameter, which measures the size of its fluctuations, rather than the average cluster size $S$. Therefore, the values of $\gamma$ that we present here are different from those of Ref.~\cite{radicchi09}. In random percolation, the probability distribution $P(s)$ of the sizes of the ``finite'' clusters, i.e. of all clusters except the largest, decreases at the percolation threshold as the power law $P(s)\sim s^{-\tau}$ with the cluster size $s$. In our simulations we have also measured the critical exponent $\tau$ (usually called the Fisher exponent). We remark that, for a given system, $P(s)$ is proportional to $n_s$. Their relation is $P(s)=Nn_s/n_c$, where $n_c$ is the total number of ``finite'' clusters. This is why in the paper we shall use the symbol $n_s$ to indicate $P(s)$ as well. In the plots, however, $n_s$ is normalized as $P(s)$, for consistency. In lattice percolation, as well as in spin models, the exponents $\beta_L$, $\gamma_L$ and $\nu_L$ (where $L$ stands for lattice) are linked by the so-called {\it hyperscaling relation} \begin{equation} \frac{\gamma_L}{\nu_L}+\frac{2\beta_L}{\nu_L}=d, \label{hyper} \end{equation} where $d$ is the dimension of the lattice. In the general case of graphs, we do not have a space dimension, so the scaling is done in terms of the ``volume'' $N$, as we have done in Eqs.~(\ref{eqscal2}),~(\ref{eqP}) and~(\ref{eqS}).
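Given the list of cluster sizes recorded at some link density, both observables follow directly from their definitions. The Python sketch below (function name ours) excludes the largest cluster from the sums defining $S$, as stated above; note that summing $s^2 n_s$ over cluster sizes is the same as summing $s^2$ over individual clusters, which is what the sketch does.

```python
def percolation_observables(cluster_sizes):
    """Return (P, S): the percolation strength P is the relative
    size of the largest cluster; the average cluster size S is
    sum(s^2 n_s) / sum(s n_s), with the sums excluding the
    largest cluster."""
    n_nodes = sum(cluster_sizes)
    finite = sorted(cluster_sizes)
    largest = finite.pop()            # drop one copy of the largest cluster
    P = largest / n_nodes
    if not finite:                    # single-cluster system
        return P, 0.0
    S = sum(s * s for s in finite) / sum(finite)
    return P, S
```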
In lattices $N=L^d$ and the hyperscaling relation for the exponents expressing the scaling of the variables in terms of the volume $N$ reads \begin{equation} \frac{\gamma}{\nu}+\frac{2\beta}{\nu}=1. \label{hyper1} \end{equation} Eq.~(\ref{hyper1}) is actually very general, and holds for random percolation on any system below the upper critical dimension~\cite{cohen02}. The identification of the percolation threshold $p_c$ is performed in two independent ways. One way consists in using the scaling of the pseudo-critical points $p_c(N)$, \begin{equation} p_c = p_c(N) + b N^{-1/\nu} \,, \label{eq:chi2} \end{equation} where $b$ is a constant which has to be determined from the fit together with the other parameters $\nu$ and $p_c$. The pseudocritical points can be defined in several ways; we took the positions of the peaks of $S$ at different system sizes $N$. The second method is based on Eq.~(\ref{eqP}): by plotting the percolation strength $P$ at a given value of $p$ as a function of the system size $N$, the percolation threshold can be determined as the value of $p$ which yields the best power law fit. In~\cite{achlioptas09} a new method for determining the nature of the transition was proposed. The method consists in studying the behaviour of the width of the transition window as a function of the system size. As a measure of the width of the transition window we consider the quantity $\Delta p=p_2-p_1$, where $p_2$ is the lowest value of $p$ for which $P > 0.5$ and $p_1$ is the lowest value of $p$ for which $P > 1/\sqrt{N}$. As we will see, the width of the transition window generally scales as a power law with the system size, so its dependence on $N$ can be written as $\Delta p \sim N^{-\alpha}$.
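The extraction of the window width and of the exponent $\alpha$ is straightforward; the Python sketch below (function names ours) takes the measured curve $P(p)$ on a grid of densities and estimates $\alpha$ as minus the least-squares slope of $\log \Delta p$ versus $\log N$.

```python
import math

def transition_window(p_values, P_values, n):
    """Width of the transition window, Delta p = p2 - p1:
    p1 is the lowest p with P > 1/sqrt(N),
    p2 is the lowest p with P > 0.5."""
    p1 = next(p for p, P in zip(p_values, P_values) if P > 1.0 / math.sqrt(n))
    p2 = next(p for p, P in zip(p_values, P_values) if P > 0.5)
    return p2 - p1

def window_exponent(sizes, widths):
    """Fit Delta p ~ N^{-alpha}: alpha is minus the least-squares
    slope of log(width) against log(size)."""
    xs = [math.log(n) for n in sizes]
    ys = [math.log(w) for w in widths]
    xbar, ybar = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return -slope
```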
Achlioptas {\it et al.} argued that, for continuous transitions, $\Delta p$ should be independent of the system size ($\alpha=0$), whereas, if there is an explosive first-order transition, $\Delta p$ should decrease with $N$ ($\alpha>0$). Actually, in 2-dimensional lattices Ziff found that $\alpha>0$ even in the case of random percolation~\cite{ziff09}. This is, however, due to the fact that in the particular case of the lattice $p_2$ essentially coincides with the actual critical threshold of the system; on the lattice one should therefore take a value $p_3$ appreciably larger than $p_c$ (like the point at which $P>0.7$, for instance). We stress that the choice of $p_1$ and $p_2$ is completely arbitrary, so the robustness of the exponent $\alpha$ needs to be tested. Therefore we also used another definition of the window, namely $\Delta\tilde{p}=\tilde{p}_2-p_1$, where $\tilde{p}_2$ is the lowest value of $p$ for which $P > 0.2$. Also in this case, we can generally write $\Delta \tilde{p} \sim N^{-\tilde{\alpha}}$. The robustness of the scaling of $\Delta p$ would be indicated by the equality of the exponents $\alpha$ and $\tilde{\alpha}$. \subsection{Lattices} \label{sec:2d} We consider first the case of $2d$-lattices (square lattices) with periodic boundary conditions. The results obtained from our simulations with RG (see Fig.~\ref{fig:d2}) confirm the well-known fact that the transition is continuous. We also recover the correct critical exponents; the scaling is done in terms of the linear dimension $L$ of the lattice, as is customary in this case. PR on $2d$-lattices has only recently been studied by Ziff~\cite{ziff09}, who has shown that the transition is explosive, like that observed by Achlioptas {\it et al.} on random networks. In Figs.~\ref{fig:d2}c,~\ref{fig:d2}d,~\ref{fig:d2}e and \ref{fig:d2}f we report the results obtained by applying PR on $2d$-lattices.
We find a trivial scaling for the order parameter $P$, with the exponents' ratio $\beta/\nu$ basically equal to zero [$0.07(3)$] (Fig.~\ref{fig:d2}c). This is consistent with what one expects to find for a discontinuous transition. On the other hand, we find a clean non-trivial power law scaling at $p_c$ for the average cluster size $S$, with exponent $\gamma/\nu=1.7(1)$ (Fig.~\ref{fig:d2}d). This had been observed by Cho {\it et al.} in SF networks~\cite{cho09}. An explanation is provided by Fig.~\ref{fig:d2}e, which shows the distribution of sizes $n_s$ for all clusters except the largest one. The distribution is a clear power law [exponent $1.9(1)$], which is unexpected for a classic discontinuous transition, as such behavior usually occurs at continuous transitions. Therefore, all variables derived from $n_s$, like the average cluster size $S$, display power law scaling. \begin{figure}[htb] \includegraphics[width=0.22\textwidth, height=0.16\textwidth]{beta_d2_cl} \qquad \includegraphics[width=0.22\textwidth, height=0.16\textwidth]{gamma_d2_cl} \vskip .4cm \includegraphics[width=0.22\textwidth, height=0.16\textwidth]{beta_d2_pr} \quad \includegraphics[width=0.22\textwidth, height=0.16\textwidth]{gamma_d2_pr} \vskip .4cm \includegraphics[width=0.22\textwidth, height=0.16\textwidth]{tau_d2.eps} \quad \includegraphics[width=0.22\textwidth, height=0.16\textwidth]{ac_d2_.eps} \caption{(Color online) Analysis of $2d$-lattices. (a) RG model: the percolation strength $P$ is plotted as a function of the lattice side $L$ for three different values of the occupation probability: $p=0.499$ (violet diamonds), $p=0.5$ (orange circles) and $p=0.501$ (grey squares). The dashed line stands for the best fit obtained at the critical point $p=p_c=0.5$, which yields $\beta/\nu=0.11(1)$. (b) RG model: the average cluster size $S$ is plotted as a function of the lattice side $L$ for the same values of $p$ as those used in (a). The dashed line has slope $\gamma/\nu=1.76(1)$.
(c) PR model: the percolation strength $P$ is plotted as a function of the lattice side $L$ for three different values of the occupation probability: $p=0.5256$ (violet diamonds), $p=0.5266$ (orange circles) and $p=0.5276$ (grey squares). The dashed line stands for the best fit obtained at the critical point $p=p_c=0.5266(2)$, which yields $\beta/\nu=0.07(3)$. (d) PR model: $S$ is plotted as a function of the lattice side $L$ for the same values of $p$ used in (c). The dashed line has slope $\gamma/\nu=1.7(1)$. (e) Comparison between the cluster size distributions measured at the critical threshold for both growth models. For RG (orange circles) $\tau=2.05(1)$ (black dashed line), while for PR (grey squares) $\tau=1.9(1)$ (red dotted line). Simulations have been performed on systems with $L=4096$. (f) $\Delta p$ as a function of the system size $N$: $\alpha=0.15(1)$ (dashed black line) for RG (orange circles) and $\alpha=0.24(1)$ (dotted red line) for PR (grey squares). The first value is questionable, as the scaling should yield a plateau ($\alpha\sim 0$), as we have indirectly verified (see text). To see the correct scaling one should simulate much larger systems.} \label{fig:d2} \end{figure} This striking feature, as we will see below, is common to all ``explosive'' transitions we have investigated here. \FloatBarrier Finally, in Fig.~\ref{fig:d2}f we show the results of the Achlioptas test for both RG and PR. For RG, we find $\alpha=0.15(1)$. As we remarked above, $\alpha$ is non-zero despite the continuous percolation transition, which seems to go against the argument by Achlioptas {\it et al.}. However, this happens because $p_2$ is very close to the critical point $p_c$. The correct behavior can be seen if one considers a window clearly including $p_c$, which could be done by taking a larger value for the upper limit of the window, e.g. the smallest value $p_3$ at which the relative size of the giant component exceeds $0.7$.
Actually the scaling of $p_3-p_1$ (not shown) still shows sublinear behavior, but we believe that this is due to the fact that $p_1$ grows too rapidly for the systems we were able to simulate. In fact, $p_3-p_2$ is approximately constant for the lattice sizes we have taken, so $p_3-p_1>p_3-p_2$ cannot go to zero in the limit of infinite lattice size. For PR, we obtain $\alpha=0.24(1)$. This result is quite different from the value found by Ziff ($0.34$). However, in his simulations, Ziff considered only links between clusters, whereas we have considered all possible links, including those within clusters. Simulations of the process {\`a} la Ziff have confirmed that this is indeed the reason for the discrepancy with our result. We have performed the same analysis for the window $\Delta\tilde{p}$ defined in Section~\ref{sec:analysis}: the exponents $\tilde{\alpha}$ for RG and PR are consistent with the corresponding values of $\alpha$ (see Table~\ref{table}). In $3d$-lattices, the general picture is consistent with that in two dimensions (Fig.~\ref{fig3d}). Classic percolation results, threshold and exponents, are recovered (Figs.~\ref{fig3d}a,~\ref{fig3d}b,~\ref{fig3d}e). The scaling at $p_c$ of the order parameter $P$ for the PR process is again trivial, with exponent $\beta/\nu=0.02(2)$, essentially zero (Fig.~\ref{fig3d}c). The average cluster size $S$ scales with an exponent $\gamma/\nu=2.1(1)$ (Fig.~\ref{fig3d}d), again due to the power law shape of the distribution of cluster sizes (Fig.~\ref{fig3d}e). We remark that the exponent $\tau=1.99(4)$ is compatible with the one we found in two dimensions [$1.9(1)$]. The test of Achlioptas {\it et al.} (Fig.~\ref{fig3d}f) again yields a non-zero value of the exponent $\alpha$ for RG [$\alpha=0.10(1)$] (probably because our lattices are not yet large enough to see the actual behavior, as in $2d$), and a larger value for PR [$\alpha=0.30(1)$].
As in two dimensions, in $3d$ too the exponents $\tilde{\alpha}$ for RG and PR are consistent with the corresponding values of $\alpha$ (see Table~\ref{table}). \begin{figure} \includegraphics[width=0.22\textwidth, height=0.16\textwidth]{beta_d3_cl} \qquad \includegraphics[width=0.22\textwidth, height=0.16\textwidth]{gamma_d3_cl} \vskip .4cm \includegraphics[width=0.22\textwidth, height=0.16\textwidth]{beta_d3_pr} \qquad \includegraphics[width=0.22\textwidth, height=0.16\textwidth]{gamma_d3_pr} \vskip .4cm \includegraphics[width=0.22\textwidth, height=0.16\textwidth]{tau_d3.eps} \qquad \includegraphics[width=0.22\textwidth, height=0.16\textwidth]{ac_d3_.eps} \caption{(Color online) Analysis of $3d$-lattices. (a) RG model: the percolation strength $P$ is plotted as a function of the lattice side $L$ for three different values of the occupation probability: $p=0.2478$ (violet diamonds), $p=0.2488$ (orange circles) and $p=0.2498$ (grey squares). The dashed line stands for the best fit obtained at the critical point $p=p_c=0.2488(3)$, which yields $\beta/\nu=0.48(1)$. (b) RG model: the average cluster size $S$ is plotted as a function of the lattice side $L$ for the same values of $p$ used in (a). The dashed line has slope $\gamma/\nu=2.0(1)$. (c) PR model: the percolation strength $P$ is plotted as a function of the lattice side $L$ for three different values of the occupation probability: $p=0.3866$ (violet diamonds), $p=0.3876$ (orange circles) and $p=0.3886$ (grey squares). The dashed line stands for the best fit obtained at the critical point $p=p_c=0.3876(2)$, which yields $\beta/\nu=0.02(2)$. (d) PR model: the average cluster size $S$ is plotted as a function of the lattice side $L$ for the same values of $p$ used in (c). The dashed line has slope $\gamma/\nu=2.1(1)$. (e) Cluster size distributions $n_s$ for $3d$-lattices at the critical point. For both percolation models $n_s \sim s^{-\tau}$ as $s$ increases.
For RG (orange circles) $\tau=2.20(1)$ (black dashed line), while for PR (grey squares) $\tau=1.99(4)$ (red dotted line). Simulations have been performed on systems with $L=256$. (f) We plot the quantity $\Delta p$ as a function of the system size $N$. For both models $\Delta p$ decreases as a power law, $\Delta p \sim N^{-\alpha}$, as $N$ increases. In particular we have: $\alpha=0.10(1)$ (dashed black line) for RG (orange circles) and $\alpha=0.30(1)$ (dotted red line) for PR (grey squares). The first value is questionable, as the scaling should yield a plateau ($\alpha\sim 0$), as we have indirectly verified (see text). To see the correct scaling one should simulate much larger systems.} \label{fig3d} \end{figure} \subsection{Erd\"os-R\'enyi random networks} \label{sec:er} \begin{figure} \includegraphics[width=0.22\textwidth, height=0.16\textwidth]{beta_er_cl} \qquad \includegraphics[width=0.22\textwidth, height=0.16\textwidth]{gamma_er_cl} \vskip .4cm \includegraphics[width=0.22\textwidth, height=0.16\textwidth]{beta_er_pr.eps} \qquad \includegraphics[width=0.22\textwidth, height=0.16\textwidth]{gamma_er_pr.eps} \vskip .4cm \includegraphics[width=0.22\textwidth, height=0.16\textwidth]{tau_er.eps} \qquad \includegraphics[width=0.22\textwidth, height=0.16\textwidth]{ac_er_.eps} \caption{(Color online) Analysis of ER random networks. (a) RG model: the percolation strength $P$ is plotted as a function of the system size $N$ for three different values of the occupation probability: $p=0.495$ (violet diamonds), $p=0.5$ (orange circles) and $p=0.505$ (grey squares). The dashed line stands for the best fit obtained at the critical point $p=p_c=0.5$, which yields $\beta/\nu=0.33(1)$. (b) RG model: the average cluster size $S$ is plotted as a function of the network size $N$ for the same values of $p$ used in (a). The dashed line has slope $\gamma/\nu=0.34(1)$.
(c) PR model: the percolation strength $P$ is plotted as a function of the network size $N$ for three different values of the occupation probability: $p=0.8872$ (violet diamonds), $p=0.8882$ (orange circles) and $p=0.8892$ (grey squares). The dashed line stands for the best fit obtained at the critical point $p=p_c=0.8882(2)$, which yields $\beta/\nu=0.02(1)$. (d) PR model: the average cluster size $S$ is plotted as a function of the network size $N$ for the same values of $p$ used in (c). The dashed line has slope $\gamma/\nu=0.48(4)$. (e) Cluster size distributions $n_s$ for ER random networks at the critical point. For both percolation models $n_s \sim s^{-\tau}$ as $s$ increases. For RG (orange circles) $\tau=2.51(2)$ (black dashed line), while for PR (grey squares) $\tau=2.08(5)$ (red dotted line). Simulations have been performed on systems with $N=8192$. (f) We plot the quantity $\Delta p$ as a function of the system size $N$. For both models $\Delta p$ decreases as a power law, $\Delta p \sim N^{-\alpha}$, as $N$ increases. In particular we have: $\alpha=0.03(1)$ (dashed black line) for RG (orange circles) and $\alpha=0.36(1)$ (dotted red line) for PR (grey squares).} \label{figER} \end{figure} Percolation studies on random networks {\`a} la Erd\"os-R\'enyi (ER) have a long tradition, as we wrote in the Introduction. Fig.~\ref{figER} summarizes the results of our analysis. The well-known results of random percolation, threshold and exponents, are recovered, as illustrated in Figs.~\ref{figER}a,~\ref{figER}b and \ref{figER}e. In particular, we notice that the hyperscaling relation of Eq.~(\ref{hyper1}) is satisfied by the exponents' ratios $\beta/\nu$ and $\gamma/\nu$. For PR, instead, we see again a flat profile of the order parameter $P$ with $N$ [$\beta/\nu=0.02(1)$], which hints at a discontinuous transition, together with a power law scaling of the average cluster size $S$, with exponent $\gamma/\nu=0.48(4)$.
The exponent $\tau=2.08(5)$ (Fig.~\ref{figER}e) is still compatible with the values found for PR on both $2d$ and $3d$ lattices (see Section~\ref{sec:2d} and Table~\ref{table}). The Achlioptas test of Fig.~\ref{figER}f yields $\alpha=0.03(1)$ for RG, compatible with a window $\Delta p$ that is independent of $N$, while for PR $\alpha=0.36(1)$, in agreement with the calculations of Achlioptas {\it et al.}~\cite{achlioptas09}. Again, the same test performed with the window $\Delta\tilde{p}$ yields essentially the same values of the exponent for both RG and PR (Table~\ref{table}), so the results of the test appear to be quite robust. \subsection{Scale-free networks} \label{sec:sf} \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{thres_new} \end{center} \caption{(Color online) Achlioptas process with PR on random SF networks. The plot shows the percolation threshold $p_c(N)$ as a function of the degree exponent $\lambda$ for various network sizes $N$. The black line represents the infinite size limit extrapolation of the critical threshold, performed by applying Eq.~(\ref{eq:chi2}). The percolation threshold becomes non-zero for $\lambda>\lambda_c\sim 2.3$. Reprinted figure with permission from Ref.~\cite{radicchi09}.} \label{figthres} \end{figure} \begin{figure} \includegraphics[width=0.22\textwidth, height=0.16\textwidth]{beta_sf_g2.5_pr.eps} \qquad \includegraphics[width=0.22\textwidth, height=0.16\textwidth]{gamma_sf_g2.5_pr.eps} \caption{(Color online) Percolation transition induced by an Achlioptas process with PR on SF networks. The degree exponent $\lambda=2.5$. (a) The percolation strength $P$ is plotted as a function of the system size $N$ for three different values of the occupation probability: $p=0.0529$ (violet diamonds), $p=0.0629$ (orange circles) and $p=0.0729$ (grey squares). The dashed line stands for the best fit obtained at the critical point $p=p_c=0.0629(1)$, which yields $\beta/\nu=0.59(1)$.
(b) The average cluster size $S$ is plotted as a function of the network size $N$ for the same values of $p$ used in (a). The dashed line has slope $\gamma/\nu=0.24(1)$.} \label{figSF2.5} \end{figure} \begin{figure} \includegraphics[width=0.22\textwidth, height=0.16\textwidth]{beta_sf_g2.8_pr.eps} \quad \includegraphics[width=0.22\textwidth, height=0.16\textwidth]{gamma_sf_g2.8_pr.eps} \caption{(Color online) Percolation transition induced by an Achlioptas process with PR on SF networks. The degree exponent $\lambda=2.8$. (a) The percolation strength $P$ is plotted as a function of the system size $N$ for three different values of the occupation probability: $p=0.1229$ (violet diamonds), $p=0.1329$ (orange circles) and $p=0.1349$ (grey squares). The dashed line stands for the best fit obtained at the critical point $p=p_c=0.1329(1)$, which yields $\beta/\nu=0.50(1)$. (b) The average cluster size $S$ is plotted as a function of the network size $N$ for the same values of $p$ used in (a). The dashed line has slope $\gamma/\nu=0.42(1)$.} \label{figSF2.8} \end{figure} \begin{figure} \includegraphics[width=0.22\textwidth, height=0.16\textwidth]{scaling1_sf_g2.8_pr.eps} \qquad \includegraphics[width=0.22\textwidth, height=0.16\textwidth]{scaling2_sf_g2.8_pr.eps} \vskip 0.4cm \includegraphics[width=0.22\textwidth, height=0.16\textwidth]{scaling3_sf_g2.8_pr.eps} \qquad \includegraphics[width=0.22\textwidth, height=0.16\textwidth]{g_scaling1_sf_g2.8_pr.eps} \vskip 0.4cm \includegraphics[width=0.22\textwidth, height=0.16\textwidth]{g_scaling2_sf_g2.8_pr.eps} \qquad \includegraphics[width=0.22\textwidth, height=0.16\textwidth]{g_scaling3_sf_g2.8_pr.eps} \caption{(Color online) Rescaling of the percolation variables $P$ and $S$ near the percolation transition induced by an Achlioptas process with PR on SF networks with degree exponent $\lambda=2.8$.
System size $N$ goes from $256000$ to $4096000$ via successive doublings.} \label{fig:scaling_sf2.8} \end{figure} SF networks have been the object of intense investigation over the last few years~\cite{Newman:2003,vitorep,barratbook}. The main reason for their popularity is that they are a proxy for many natural, social and man-made systems, when the latter are represented as graphs. The ubiquity of networks with skewed degree distributions is not accidental. Such broad distributions indicate that there is a whole hierarchy of node roles based on degree, going from a large majority of nodes with low degree to a small subset of nodes with high degree, or ``hubs''. The hubs play a fundamental role in the structure and dynamics of networks. Random SF networks with degree exponent $\lambda<3$ have so many hubs that a very small fraction of links (vanishing in the limit of infinite system size) is enough to keep a macroscopic fraction of the nodes of the graph in the same connected component; equivalently, the percolation threshold is zero~\cite{cohen00,newman01,pastor00,dorogovtsev08,vazquez04}. In Ref.~\cite{radicchi09} we have already studied Achlioptas processes with PR on random SF networks. Here we present some more detailed calculations and add substantial new material. The networks are constructed as follows. The starting point is a set of $N$ nodes and a given degree sequence $\{k_1, k_2, \ldots , k_N \}$. The degrees of the sequence are drawn from a power law distribution with exponent $\lambda$. We set the average degree $\langle k\rangle$ equal to $5$. If links are placed randomly, the procedure can be carried out with the configuration model~\cite{molloy95}, i.e. by connecting randomly selected pairs of stubs attached to the nodes, until no more stubs are available. This is the procedure we have adopted for the RG model.
For PR, instead, at each iteration we pick two pairs of stubs and apply the PR to decide which pair of stubs is to be joined in a link (the PR is applied as schematically illustrated in Fig.~\ref{fig1}). In the range of degree exponents $\lambda<3$ we will present only results for PR, due to the absence of a percolation threshold for RG. A remarkable result found independently in Refs.~\cite{cho09} and \cite{radicchi09} is that the percolation transition of the Achlioptas process with PR has a non-vanishing threshold for $\lambda>\lambda_c\sim 2.3$ (Fig.~\ref{figthres}). In Figs.~\ref{figSF2.5} and \ref{figSF2.8} we show the scalings at $p_c$ of $P$ and $S$ for $\lambda=2.5$ and $2.8$, respectively. At variance with what we have seen in Sections~\ref{sec:2d} and \ref{sec:er}, here the scaling of $P$ at $p_c$ is non-trivial, as $P$ decreases with $N$ as a power law in both cases. This appears inconsistent with the typical scenario of a discontinuous transition, which generally yields the trivial scaling we have observed in Figs.~\ref{fig:d2}c, ~\ref{fig3d}c and \ref{figER}c. We shall come back to this issue in Section~\ref{sec:concl}. A clean power law scaling at $p_c$ is also found for $S$ (Figs.~\ref{figSF2.5}b and \ref{figSF2.8}b), although we have seen that the same happens for explosive discontinuous transitions as well. \begin{figure} \includegraphics[width=0.22\textwidth, height=0.16\textwidth]{ac_sf_g2.5_.eps} \quad \includegraphics[width=0.22\textwidth, height=0.16\textwidth]{ac_sf_g2.8_.eps} \caption{(Color online) Achlioptas test for SF networks. (a) $\lambda=2.5$: we plot the quantity $\Delta p$ as a function of the system size $N$. For both percolation models $\Delta p$ scales as a power law with the system size, $\Delta p \sim N^{-\alpha}$. In particular we have: $\alpha=-0.04(1)$ (dashed black line) for RG (orange circles) and $\alpha=-0.26(3)$ (dotted red line) for PR (grey squares).
We also consider the transition window $\Delta \tilde{p}$ (defined in Section~\ref{sec:analysis}), from which we obtain: $\tilde{\alpha}=-0.06(1)$ (lower black dashed line) for RG (orange diamonds); $\tilde{\alpha}=0.31(1)$ (lower red dotted line) for PR (grey triangles). (b) $\lambda=2.8$: same plot as the one of (a). The measured exponents are: $\alpha=-0.04(1)$ (dashed black line) for RG (orange circles); $\alpha=0.04(1)$ (dotted red line) for PR (grey squares); $\tilde{\alpha}=-0.07(1)$ (lower black dashed line) for RG (orange diamonds); $\tilde{\alpha}=0.32(1)$ (lower red dotted line) for PR (grey triangles).} \label{figAch2.5} \end{figure} In Fig.~\ref{fig:scaling_sf2.8} we show the rescaling of the variables $P$ and $S$. The data collapses observed in Figs.~\ref{fig:scaling_sf2.8}c and \ref{fig:scaling_sf2.8}f show the profiles of the universal scaling functions $F^{(1)}$ and $F^{(2)}$ of Eqs.~(\ref{eqP}) and~(\ref{eqS}). The results of the Achlioptas tests for $\lambda=2.5$ and $2.8$ are shown in Fig.~\ref{figAch2.5}. In each case we present the scaling of both $\Delta p$ and $\Delta\tilde{p}$, to check for the robustness of the results. We find that, while the scaling is clear for both variables, $\alpha\neq \tilde{\alpha}$. In fact, the exponents often indicate contradictory trends, with the transition window increasing with $N$ according to one definition and decreasing according to the other, which is clearly inconsistent. We cannot exclude that this is due to finite size effects (as we have seen on lattices) and that simulations on much larger systems would show consistent results instead. On the other hand, it might be that the results of the Achlioptas test indeed depend on the specific definition of the transition window.
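The product rule itself is cheap to implement on top of a union-find structure that tracks cluster sizes. The sketch below is our own illustration (not the code used for the simulations); it assumes ties are broken in favor of the first candidate pair, and a candidate link internal to one cluster is scored by that cluster's size squared:

```python
class DisjointSet:
    """Union-find with component sizes, for cluster bookkeeping."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]

def product_rule_step(ds, pair1, pair2):
    """Of two candidate links, keep the one whose endpoints' cluster
    sizes have the smaller product (ties: first pair), and add it."""
    def cost(pair):
        a, b = pair
        return ds.size[ds.find(a)] * ds.size[ds.find(b)]
    chosen = pair1 if cost(pair1) <= cost(pair2) else pair2
    ds.union(*chosen)
    return chosen
```

A full PR run simply iterates this step over candidate stub pairs drawn from the degree sequence, recording the largest component size along the way.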
\begin{figure} \includegraphics[width=0.22\textwidth, height=0.16\textwidth]{beta_sf_g3.5_cl.eps} \qquad \includegraphics[width=0.22\textwidth, height=0.16\textwidth]{gamma_sf_g3.5_cl.eps} \vskip 0.4cm \includegraphics[width=0.22\textwidth, height=0.16\textwidth]{beta_sf_g3.5_pr.eps} \qquad \includegraphics[width=0.22\textwidth, height=0.16\textwidth]{gamma_sf_g3.5_pr.eps} \vskip 0.4cm \includegraphics[width=0.22\textwidth, height=0.16\textwidth]{tau_sf_g3.5.eps} \qquad \includegraphics[width=0.22\textwidth, height=0.16\textwidth]{ac_sf_g3.5_.eps} \caption{(Color online) Analysis of SF networks with degree exponent $\lambda=3.5$. (a) RG model: the percolation strength $P$ is plotted as a function of the system size $N$ for three different values of the occupation probability: $p=0.074$ (violet diamonds), $p=0.078$ (orange circles) and $p=0.082$ (grey squares). The dashed line stands for the best fit obtained at the critical point $p=p_c=0.078(1)$, which allows us to determine $\beta/\nu=0.38(1)$. (b) RG model: the average cluster size $S$ is plotted as a function of the system size $N$ for the same values of $p$ used in (a). The dashed line has slope $\gamma/\nu=0.15(2)$. (c) PR model: the percolation strength $P$ is plotted as a function of the system size $N$ for three different values of the occupation probability: $p=0.2214$ (violet diamonds), $p=0.2224$ (orange circles) and $p=0.2234$ (grey squares). The dashed line stands for the best fit obtained at the critical point $p=p_c=0.2224(2)$, which yields $\beta/\nu=-0.06(3)$. (d) PR model: the average cluster size $S$ is plotted as a function of the network size $N$ for the same values of $p$ used in (c). The dashed line has slope $\gamma/\nu=0.40(9)$. (e) For both growth models $n_s \sim s^{-\tau}$ as $s$ increases. For RG (orange circles) $p_c=0.078(1)$ and $\tau=2.94(1)$ (black dashed line), while for PR (grey squares) $p_c=0.2224(2)$ and $\tau=2.2(1)$ (red dotted line).
Simulations have been performed on systems with $N=8192$. (f) We plot the quantity $\Delta p$ as a function of the system size $N$. For both growth models $\Delta p$ scales as a power law, $\Delta p \sim N^{-\alpha}$, with the system size $N$. In particular we have: $\alpha=-0.02(1)$ (upper dashed black line) for RG (orange circles) and $\alpha=0.34(1)$ (dotted red line) for PR (grey squares). We also consider the transition window $\Delta \tilde{p} = \tilde{p}_2-p_1$, where $\tilde{p}_2$ is the minimal value of the occupation probability at which $P=0.2$. In this case we find again a good power law fit. For RG (orange diamonds) the exponent is unchanged, since $\tilde{\alpha}=-0.02(1)$ (lower black dashed line). Similarly for PR (grey triangles) $\tilde{\alpha}=0.35(1)$.} \label{fig3.5} \end{figure} \begin{table*} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline System & Growth Model & $p_c$ & $\beta/\nu$ & $\gamma/\nu$ & $\tau$ & $\alpha$ & $\tilde{\alpha}$ \\ \hline \hline \multirow{2}{*}{$2d$-lattice} & RG & $0.5$ & $0.11(1)$ & $1.76(1)$ & $2.05(1)$ & $0.15(1)$ & $0.16(1)$ \\ \cline{2-8} & PR & $0.5266(2)$ & $0.07(3)$ & $1.7(1)$ & $1.9(1)$ & $0.24(1)$ & $0.23(1)$ \\ \hline \hline \multirow{2}{*}{$3d$-lattice} & RG & $0.2488(3)$ & $0.48(1)$ & $2.0(1)$ & $2.20(1)$ & $0.10(1)$ & $0.10(1)$ \\ \cline{2-8} & PR & $0.3876(2)$ & $0.02(2)$ & $2.1(1)$ & $1.99(4)$ & $0.30(1)$ & $0.31(1)$ \\ \hline \hline \multirow{2}{*}{ER network} & RG & $0.5$ & $0.33(1)$ & $0.34(1)$ & $2.51(2)$ & $0.03(1)$ & $0.04(1)$ \\ \cline{2-8} & PR & $0.8882(2)$ & $0.02(1)$ & $0.48(4)$ & $2.08(5)$ & $0.36(1)$ & $0.36(1)$ \\ \hline \hline \multirow{2}{*}{SF network $\lambda=2.5$} & RG & $0$ & $-$ & $-$ & $-$ & $-0.04(1)$ & $-0.06(1)$ \\ \cline{2-8} & PR & $0.0629(1)$ & $0.59(1)$ & $0.24(1)$ & $2.15(2)$ & $-0.26(3)$ & $0.31(1)$ \\ \hline \hline \multirow{2}{*}{SF network $\lambda=2.8$} & RG & $0$ & $-$ & $-$ & $-$ & $-0.04(1)$ & $-0.07(1)$ \\ \cline{2-8} & PR & $0.1329(1)$ & $0.50(1)$ & $0.42(1)$ &
$2.13(6)$ & $0.04(1)$ & $0.32(1)$ \\ \hline \hline \multirow{2}{*}{SF network $\lambda=3.5$} & RG & $0.078(1)$ & $0.38(1)$ & $0.15(2)$ & $2.94(1)$ & $-0.02(1)$ & $-0.02(1)$ \\ \cline{2-8} & PR & $0.2224(2)$ & $-0.06(3)$ & $0.40(9)$ & $2.2(1)$ & $0.34(1)$ & $0.35(1)$ \\ \hline \end{tabular} \caption{The table summarizes the results obtained from our numerical analysis. Percolation threshold and critical exponents are reported for each system and growth model analyzed.} \label{table} \end{table*} For $\lambda>3$, however, the situation is different. Fig.~\ref{fig3.5} reports the results of our finite size scaling analysis for percolation transitions induced by RG and PR on SF networks with exponent $\lambda=3.5$. In this case we also present the results of RG, because for $\lambda>3$ there is a non-zero threshold. In Ref.~\cite{cohen02} it has been proved that, for random percolation on SF networks, $\beta/\nu=1/(\lambda-1)$ and $\gamma/\nu=(\lambda-3)/(\lambda-1)$ for $3\leq\lambda\leq 4$. For $\lambda>4$, the process reaches the mean field limit and the exponents are frozen: $\beta/\nu=\gamma/\nu=1/3$. Interestingly, these are just the values of the exponents for the percolation transition of ER random networks. SF networks tend to ER random networks in the limit $\lambda\rightarrow\infty$. Our estimates of the exponents' ratios $\beta/\nu$ and $\gamma/\nu$ (Figs.~\ref{fig3.5}a and \ref{fig3.5}b) are consistent with the predicted values for random percolation presented above. For PR, instead, we recover the same scenario as on lattices and ER random networks. The scaling of $P$ at $p_c$ is trivial (Fig.~\ref{fig3.5}c), with $\beta/\nu=-0.06(3)$, which is essentially zero, while the power law scaling of $S$ at $p_c$ is non-trivial (Fig.~\ref{fig3.5}d), with $\gamma/\nu=0.40(9)$. The Fisher exponent $\tau=2.2(1)$ (Fig.~\ref{fig3.5}e). 
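The exponent ratios quoted above come from power-law fits of $P(p_c,N)$ and $S(p_c,N)$ against the system size, which at bottom are least-squares lines in log-log coordinates. A self-contained sketch (illustrative only; the synthetic input uses the ratio $\beta/\nu=0.59$ measured for $\lambda=2.5$, not real simulation data):

```python
import math

def loglog_slope(sizes, values):
    """Least-squares slope of log(values) vs log(sizes); for
    P(p_c, N) ~ N^(-beta/nu) the slope estimates -beta/nu."""
    xs = [math.log(n) for n in sizes]
    ys = [math.log(v) for v in values]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic check: P ~ N^(-0.59) over successive size doublings.
sizes = [256000 * 2 ** i for i in range(5)]
P = [n ** -0.59 for n in sizes]
beta_over_nu = -loglog_slope(sizes, P)
```

In practice one repeats the fit at several trial values of $p$ and retains the one giving the cleanest power law, which is how the critical points in the figure panels are bracketed.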
The Achlioptas test (Fig.~\ref{fig3.5}f) yields essentially the same set of values we had found for ER random networks (see Table~\ref{table}). Moreover, the values are stable regardless of whether one uses $\Delta p$ or $\Delta \tilde{p}$. \section{Discussion} \label{sec:concl} \begin{figure*} \begin{center} \includegraphics[width=8cm]{pdistr_sf_g2.1_pr_pcN.eps} \includegraphics[width=8cm]{pdistr_sf_g2.5_pr_pcN.eps} \vskip0.4cm \includegraphics[width=8cm]{pdistr_sf_g2.8_pr_pcN.eps} \includegraphics[width=8cm]{pdistr_sf_g3.5_pr_pc.eps} \end{center} \caption{(Color online) Achlioptas process with PR on SF networks. Distributions of the values of the order parameter $P$ at the pseudocritical point $p_c(N)$ for different degree exponents $\lambda$: $2.1$ (a), $2.5$ (b), $2.8$ (c), $3.5$ (d). The main frame of each plot shows the values of $P$ for each of $1000$ realizations. The insets display the distribution of the $P$-values (upper panel) and its cumulative (lower panel). The distributions are all bimodal, which indicates that the order parameter undergoes a discontinuous jump at the critical point. The network size is $N=8192000$ in all cases.} \label{fighist} \end{figure*} \begin{figure} \includegraphics[width=\columnwidth]{tau_pr_gloal.eps} \caption{(Color online) Cluster size distributions $n_s$ for Achlioptas processes with PR at the critical point. The cluster size distributions scale as power laws (i.e., $n_s \sim s^{-\tau}$) for all systems analyzed in this paper. The Fisher exponents $\tau$ are very close to each other and all distributions collapse into a unique curve with the only exception of those obtained for SF networks with $\lambda=2.5$ and $\lambda=2.8$. The dashed black line is a power law with exponent $-2$, plotted as a useful reference. The lattice side $L=4096$ for the $2d$-lattice and $L=256$ for the $3d$-lattice.
$N=8192000$ for all networks.} \label{tauall} \end{figure} In this section we want to discuss the results we have obtained, which are summarized in Table~\ref{table}. We have seen that our finite size scaling analysis leads to two different scenarios. The first scenario is consistent with the ``explosive'' transition observed by Achlioptas {\it et al.}, and occurs on ER random networks, lattices and SF networks with degree exponent $\lambda>3$. In all these cases we have derived the same picture from finite size scaling, in particular the saturation of the order parameter $P$ at $p_c$ with the size of the system $N$. On SF networks with $\lambda<3$ the situation looks different, as there we have observed a clear power law scaling of $P$ at $p_c$, just as one would expect to find in continuous transitions. Moreover, the pseudo-critical points also show the clean power law scaling of Eq.~(\ref{eq:chi2}) for $\lambda<3$ (that is how the critical thresholds of Fig.~\ref{figthres} were derived), which usually happens for continuous transitions. This appears to contradict the conclusion of Cho {\it et al.}, who claim that the transition is always discontinuous on SF networks~\cite{cho09}. Cho {\it et al.} have adopted the model by Chung and Lu~\cite{chung02} to build their networks, which is different from the procedure we used, but we have verified that the results obtained in this way are consistent with ours. However, the seemingly continuous transition we observe for SF networks with $\lambda<3$ has the surprising and somewhat disturbing feature that the hyperscaling relation of Eq.~(\ref{hyper1}) is violated, as one can easily verify through the values of $\beta/\nu$ and $\gamma/\nu$ reported in Table~\ref{table} for $\lambda=2.5$ and $\lambda=2.8$. Such a violation could imply that the transition is not continuous after all.
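Assuming the hyperscaling relation of Eq.~(\ref{hyper1}) takes the standard network form $2\beta/\nu+\gamma/\nu=1$, with $N$ playing the role of the volume $L^d$ (this specific form is our assumption here, since the equation itself appears earlier in the paper), the violation can be checked directly against the ratios of Table~\ref{table}:

```python
# Hedged numerical check of the hyperscaling combination, assuming
# Eq. (hyper1) has the network form 2*beta/nu + gamma/nu = 1.
# (beta/nu, gamma/nu) pairs as reported in Table I.
ratios = {
    "ER, RG": (0.33, 0.34),       # trivial-scaling ("explosive") case
    "SF 2.5, PR": (0.59, 0.24),   # apparent power-law scaling
    "SF 2.8, PR": (0.50, 0.42),
    "SF 3.5, RG": (0.38, 0.15),   # random-percolation-like
}
combo = {k: round(2 * b + g, 2) for k, (b, g) in ratios.items()}
# ER RG gives 1.00 on the nose, whereas both lambda < 3 PR cases
# give 1.42, quantifying the violation discussed in the text.
```

Under this assumed form, random percolation on SF networks with $3\leq\lambda\leq4$ satisfies the relation identically, since $2/(\lambda-1)+(\lambda-3)/(\lambda-1)=1$.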
In order to test this, we have computed the distribution of the values of the order parameter $P$ at the pseudo-critical point $p_c(N)$, for PR on SF networks with $\lambda=2.1, 2.5, 2.8, 3.5$. The results are reported in Fig.~\ref{fighist}. The two horizontal bands visible in the main frame of each of the four panels indicate that $P$ oscillates between two values at the pseudo-critical point, which means that the transition is discontinuous. Interestingly, this is also found for $\lambda=2.1<\lambda_c$. We could not carry out the finite size scaling analysis in this case, because the percolation threshold vanishes in the infinite size limit, but the result on finite systems, as shown in Fig.~\ref{fighist}a, is the same as those for $\lambda>\lambda_c$. We conclude that the percolation transition for an Achlioptas process with PR is discontinuous on SF networks, for any value of the degree exponent $\lambda$. Based on the results of this analysis, we therefore have to partially modify the conclusion we had drawn in Ref.~\cite{radicchi09}, where we had stated that, for $\lambda<3$, the transition is continuous. There actually is a discontinuous jump of the order parameter at $p_c$: nevertheless, all relevant percolation variables display power law scaling at the percolation threshold for $\lambda<3$, in particular Eqs.~(\ref{eqP}),~(\ref{eqS}) and~(\ref{eq:chi2}) hold, just like in standard continuous transitions. We thus hesitate to classify the transition as first- or second-order, as it looks like an unusual mixture of both. The regime of SF networks with $\lambda<3$ is therefore very intriguing and deserves further investigation. Furthermore, the explosive transition observed in the other cases, including the original transition discovered by Achlioptas {\it et al.}, is not a standard discontinuous transition either.
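The bimodality visible in Fig.~\ref{fighist} can also be quantified with a crude numerical diagnostic: histogram the $P$-values collected at $p_c(N)$ and measure the depth of the valley between the two peaks. The sketch below is our own illustration, not the analysis code used for the figure:

```python
def bimodality_gap(samples, bins=20):
    """Crude check for a bimodal order-parameter distribution:
    histogram the samples and report the deepest interior valley
    relative to the smaller of the two flanking peaks
    (0 -> unimodal-looking, 1 -> two cleanly separated modes)."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / bins or 1.0   # degenerate case: all equal
    counts = [0] * bins
    for s in samples:
        i = min(int((s - lo) / width), bins - 1)
        counts[i] += 1
    best = 0.0
    for i in range(1, bins - 1):
        left = max(counts[:i])
        right = max(counts[i + 1:])
        peak = min(left, right)
        if peak > 0:
            best = max(best, (peak - counts[i]) / peak)
    return best
```

A value close to $1$ signals two cleanly separated modes, as in the panels of Fig.~\ref{fighist}; kernel-density or clustering diagnostics would refine this, but are not needed for jumps as pronounced as those observed here.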
The most striking feature here is that the size distribution $n_s$ of the ``finite'' clusters at $p_c$ is a power law, not exponential or Gaussian as one usually observes at first-order phase transitions. As a consequence, all variables computed by means of $n_s$ also display non-trivial power law scaling at $p_c$, as we have seen with the average cluster size $S$. Interestingly, the Fisher exponent $\tau$ of every PR transition we have investigated is very close to $2$, and consistent with this value within errors. In Fig.~\ref{tauall} we plot all distributions $n_s$ we have computed. Indeed, we see that the curves are strongly overlapping, and that only the curves corresponding to the anomalous discontinuous transition found in SF networks with $\lambda<3$ perhaps deviate from the general pattern, though very little. Another striking feature of our findings is the existence of a non-zero percolation threshold for SF networks for $\lambda>\lambda_c\sim 2.3$ (Fig.~\ref{figthres}), in contrast with the fact that the threshold for random percolation is zero for all $\lambda<3$. In Ref.~\cite{cho09}, Cho {\it et al.} suggested an interesting explanation for this result. They noticed that, since in Achlioptas processes the networks are not constructed through the random addition of links, the degree distribution of the system during the growth deviates from that imposed by construction, which is reached only at the end of the process. In Fig.~\ref{figexp} we plot the degree exponent $\lambda_{eff}$ measured at the critical point as a function of the imposed exponent $\lambda$. We see that the two exponents are quite different, and that there is a simple linear relation between them. In particular, we notice that $\lambda_{eff}\sim 3$ when $\lambda=\lambda_c\sim 2.3$.
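Measuring $\lambda_{eff}$ amounts to fitting a power-law exponent to the degree sequence of the growing network at the threshold. One standard choice is the maximum-likelihood estimator for a continuous power law; the sketch below uses that estimator as an assumption (the actual fitting procedure behind Fig.~\ref{figexp} is not specified here):

```python
import math, random

def estimate_exponent(samples, xmin=1.0):
    """MLE for a continuous power law p(x) ~ x^(-lam) on x >= xmin:
    lam_hat = 1 + n / sum(ln(x / xmin)) over the tail samples."""
    tail = [x for x in samples if x >= xmin]
    return 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)

# Synthetic check: inverse-transform samples with lam = 2.5,
# x = xmin * (1 - u)^(-1 / (lam - 1)) for uniform u.
random.seed(1)
lam = 2.5
xs = [(1.0 - random.random()) ** (-1.0 / (lam - 1.0)) for _ in range(100000)]
lam_hat = estimate_exponent(xs)
```

For genuinely discrete degrees a discreteness correction to `xmin` is advisable, but the continuous estimator already recovers the input exponent to within a fraction of a percent at these sample sizes.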
Therefore, at the percolation threshold, SF networks constructed with an Achlioptas process with PR for $\lambda>\lambda_c$ are actually SF networks with degree exponent bigger than $3$. For SF networks with degree exponent bigger than $3$, random percolation has a non-zero threshold, and this could be the reason for the non-zero threshold we observe for $\lambda>\lambda_c$. However, we stress that the ``effective'' SF networks produced by Achlioptas processes are not random, so there is no {\it a priori} guarantee that one finds the same results as for random SF networks, as the argument above implies. Still, one could expect some qualitative agreement. This is confirmed by the fact that $\lambda_{eff}\sim 4$ when $\lambda=3$. For degree exponents larger than $4$, random SF networks are hardly distinguishable from ER random networks. From the point of view of percolation the two classes of systems are in fact fully equivalent (same exponents). This could explain why, for $\lambda>3$, the picture we recover from finite size scaling looks the same as for ER random networks, and why the corresponding critical exponents (including the exponent $\alpha$ of the Achlioptas test) are consistent with each other within errors (see Table~\ref{table}). \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{lr_leff} \end{center} \caption{(Color online) Achlioptas process with PR on SF networks. Relation between the degree exponent $\lambda$ imposed through the starting degree sequence and the effective exponent $\lambda_{eff}$ of the degree distribution of the system when it is at the percolation threshold. The relation is linear to a good approximation (black dashed line). The network size is $N=8192000$.} \label{figexp} \end{figure} \section{Summary} \label{sec:summ} In this paper we have performed a thorough numerical analysis of the percolation transitions induced by Achlioptas processes with product rule.
The typical outcome on lattices, ER random networks and SF networks with $\lambda>3$ is the ``explosive'' percolation transition originally observed by Achlioptas {\it et al.}~\cite{achlioptas09}. This transition is hybrid in character, as it combines the discontinuity of the order parameter at the critical point with features like the power law decay of the size distribution of finite clusters, which are typical of continuous transitions. Hybrid phase transitions are actually not new: they have been observed in a variety of domains, like spin glasses~\cite{gross84,kirkpatrick87}, constraint satisfaction problems (K-SAT)~\cite{monasson99} and models of jamming in granular materials~\cite{Ohern02,toninelli06,schwarz06}. A remarkable feature of our findings is that the value of the exponent $\tau$ of the cluster size distribution appears to be compatible with $2$ in all instances, despite the diversity of the systems we considered. For SF networks with degree exponent $\lambda<3$ the situation is even more extreme: on the one hand, all percolation variables display power law scaling at the critical point, just like one expects for a continuous second-order phase transition; on the other hand, the order parameter still undergoes a discontinuous jump at the critical point. This is certainly something worth investigating in the future. As is usual in numerical studies of phase transitions, despite the large graph sizes we have investigated here, we cannot exclude that the regime we have tested is not yet ``asymptotic'' and is therefore dominated by finite size effects, which give a distorted perception of what truly happens. We tend to discard this hypothesis, though, due to the remarkably clean scaling plots we have derived. Some theoretical arguments have been proposed to describe explosive percolation transitions~\cite{friedman09,moreira09}.
However, a real theory of such processes is still missing, and developing one is certainly a challenging but promising direction for future research. We hope that the results of our analysis will help to inspire new theoretical developments on this topic. \begin{acknowledgments} We would like to thank Antonio Coniglio, Sergey Dorogovtsev, Byungnam Kahng, Jinseop Kim, Jos\'e Fernando Mendes, Raissa D'Souza and Robert Ziff for stimulating discussions. S. F. gratefully acknowledges ICTeCollective, grant number 238597 of the European Commission. \end{acknowledgments}
\section{Introduction} The aim of this paper is to define and discuss two representations of the Cohomological Hall algebra, and combine them into a single representation of the algebra which is called ``full'' (or ``double'') COHA in \cite{Soi2014}. The Cohomological Hall algebra (COHA for short) was introduced in \cite{KoS2011}. The definition is similar to the definition of the conventional Hall algebra (see e.g. \cite{Sch2006}) or its motivic version (see e.g. \cite{KoS2008}). Instead of spaces of constructible functions on the stack of objects of an abelian category, one considers cohomology groups of the stacks. The product is defined through the pullback-pushforward construction. Details can be found in \cite{KoS2011}. By analogy with the conventional Hall algebra of a quiver, which gives the ``positive'' part of a quantization of the corresponding Lie algebra, one may want to define the ``double'' COHA, for which the one defined in \cite{KoS2011} would be a ``positive part''. Following the discussion in \cite{Soi2014}, we study the double of representations of COHA, and hope to find the double of COHA through its representations. This paper focuses on the $A_1$-quiver. Stable framed representations of the quiver are used to produce two representations of COHA. Since the moduli spaces of stable framed representations of the $A_1$-quiver are Grassmannians, we actually define two representations on the cohomology of Grassmannians. We show that the operators from these two representations form a $D_{n+1}$ Lie algebra. We also make a modification to the decreasing representation and form a twisted decreasing representation. The untwisted increasing operators together with the twisted decreasing operators form a finite Clifford algebra. These results confirm the conjecture from \cite{Soi2014} that the double of $A_1$-COHA is the infinite Clifford algebra. \section{Two geometric representations of $A_1$-COHA} \subsection{COHA} Let $Q$ be a quiver with $N$ vertices.
Given a dimension vector $\gamma=(\gamma_i)_{i=1}^N$, $ M_{\gamma}$ is the space of complex representations with fixed underlying vector space $\bigoplus_{i=1}^{N} \fC^{\gamma_i}$ of dimension vector $\gamma$, and $G_{\gamma}=\prod_{i=1}^NGL_{\gamma_i}(\fC)$ is the associated gauge group. $[ M_{\gamma}/G_{\gamma}]$ is the stack of representations of $Q$ with fixed dimension vector $\gamma$. As a vector space, the COHA of $Q$ is defined to be $\mathcal H:=\bigoplus_{\gamma}\mathcal H_{\gamma}:=\bigoplus_{\gamma}H^*([M_{\gamma}/G_{\gamma}]):=\bigoplus_{\gamma}H^*_{G_{\gamma}}( M_{\gamma})$. Here by the equivariant cohomology of a complex algebraic variety $M_{\gamma}$ acted on by a complex algebraic group $G_{\gamma}$ we mean the usual (Betti) cohomology with coefficients in $\mathbb{Q}$ of the bundle $EG_{\gamma}\times_{G_{\gamma}}M_{\gamma}$ associated to the universal $G_{\gamma}$-bundle $EG_{\gamma}\rightarrow BG_{\gamma}$ over the classifying space of $G_{\gamma}$. The product $*:\mathcal H\otimes \mathcal H\rightarrow \mathcal H$ is defined by means of the pullback-pushforward construction in \cite{KoS2011}. \subsection{$A_1$-COHA}\label{incre_basis} Let $Q$ be the quiver $A_1$, so that $N=1$. Since there is only one representation with fixed underlying vector space $\fC^{d}$ of dimension $d$, $M_{d}$ is a point and $G_{d}=GL_d(\fC)$. Therefore $\mathcal H_d=H^*_{GL_d(\fC)}( M_d)=\mathbb{Q}[x_{1,d},\ldots,x_{d,d}]^{S_d}$ is the algebra of symmetric polynomials in variables $x_{1,d},\ldots,x_{d,d}$. These variables have a geometric interpretation: they can be treated as the first Chern classes of the tautological bundles over the classifying space of $G_d$. For details see e.g. \cite{Xia2013}. The COHA $\mathcal H$ for the quiver $A_1$ is described in \cite{KoS2011}. It is the infinite exterior algebra generated by odd elements $\phi_0, \phi_1,\phi_2\ldots$ with wedge $\wedge$ as its product.
Generators $(\phi_i)_{i\geq 0}$ correspond to the additive generators $(x^i_{1,1})_{i\geq0}$ of $\mathcal H_1=\mathbb{Q}[x_{1,1}]$. A monomial in the exterior algebra $$\phi_{k_1}\wedge\ldots\wedge\phi_{k_d}\in\mathcal H_d,\quad 0\leq k_1<\ldots<k_d$$ corresponds to the Schur symmetric polynomial $s_{\lambda}(x_{1,d},\ldots,x_{d,d})$, where $\lambda=(\lambda_d,\ldots,\lambda_1)=(k_d-d+1,k_{d-1}-d+2,\ldots,k_1)$ is a partition. Let $\Phi_{\bf k}=\phi_{k_1}\wedge\ldots\wedge\phi_{k_d}$ with index ${\bf k}=(k_1,\ldots,k_d)$, $0\leq k_1<\ldots<k_d$. Denote by ${\bf k}(\lambda)$ the index related to the partition $\lambda$ and by $\lambda({\bf k})$ the partition related to the basis index $\bf k$. Then we have $\Phi_{{\bf k}(\lambda)}=s_{\lambda}$ and $\Phi_{\bf k}=s_{\lambda({\bf k})}$. \begin{comment} Following the discussion in \cite{KoS2011}, it is easy to see that the embedding $\mathcal H_1\hookrightarrow\mathcal H$ induces a homomorphism of graded algebras $\varphi:\bigwedge^*(\mathcal H_1)\cong\mathcal H$. $\mathcal H_1$ (equipped with the cup product $\cup$) can be identified with the polynomial ring $\mathbb{Q}[x]$. Let $\phi_0,\phi_1,\ldots$ be the basis of $\mathcal H_1$ that are related to $x^0,x^1,\ldots$ under the identification. Then $\{\phi_{k_1}\wedge\ldots\wedge\phi_{k_d}\}$, where $k_1<\ldots<k_d$ is an increasing sequence of $d$ non-negative integers, form a basis of $\bigwedge^d(\mathcal H_1)$. By ind \begin{equation} (\phi_{k_1}\wedge\ldots\wedge\phi_{k_d})(x_1,\ldots,x_d)=s_{\lambda}(x_1,\ldots,x_d), \end{equation} where $s_{\lambda}$ is the Schur polynomial belonging to the partition $\lambda=(k_d-d+1,\ldots,k_1)$. Thus $A_1$-COHA $\mathcal H$ is isomorphic to $\bigwedge^*(\mathcal H_1)$. \end{comment} \subsection{Stable framed representations}\label{section23} Fix a dimension vector ${\bf n}=(n_i)_{i=1}^N$.
A {\it framed representation} of $Q$ of dimension vector $\gamma$ is a pair $(V,f)$, where $V$ is an ordinary representation of $Q$ of dimension $\gamma$ and $f=(f_i)_{i=1}^{N}$ is a collection of linear maps from $\fC^{n_i}$ to $V_i$. The set of framed representations of dimension vector $\gamma$ with framed structure dimension vector $\bf n$ is denoted by ${\hat M}_{\gamma,{\bf n}}$. It carries a natural action of the gauge group $G_{\gamma}$. See e.g. \cite{Rei2008a}. For the notion of stable framed representation of a quiver, see e.g. \cite{Rei2008} (a more general framework of triangulated categories can be found in \cite{Soi2014}). We focus on the trivial stability condition. In this case, a framed representation is called {\it stable} if there is no proper (ordinary) subrepresentation of $V$ which contains the image of $f$. The set of stable framed representations of dimension vector $\gamma$ with framed structure dimension vector $\bf n$ is denoted by $ {\hat M}^{st}_{\gamma,{\bf n}}$. The gauge group $G_{\gamma}$ of $\hat M_{\gamma,{\bf n}}$ induces a $G_{\gamma}$-action on $\hat M^{st}_{\gamma,{\bf n}}$. The stack of stable framed representations $[\hat M^{st}_{\gamma,{\bf n}}/G_{\gamma}]$ is in fact a smooth projective scheme. We denote it by $\mathcal M^{st}_{\gamma,{\bf n}}$ and call it {\it the smooth model} of the quiver $Q$ with dimension $\gamma$ and framed structure $\bf n$. The pullback-pushforward construction is applied to the cohomology of the scheme of stable framed representations. This construction leads to two representations of COHA for the quiver $Q$ which we describe below. Fix two dimension vectors $\gamma_1$ and ${\gamma_2}$. Set $\gamma=\gamma_1+\gamma_2$.
Consider the scheme consisting of diagrams \begin{equation} \mathcal M^{st}_{\gamma_2,\gamma,{\bf n}}:=\{\xymatrix{0\ar[r]{}&E_1\ar[r]&E\ar[r]&E_2\ar[r]&0\\&&\fC^{\bf n}\ar[u]^{f}\ar[ur]_{f_2}&&}\}, \end{equation} where $E_1\in M_{\gamma_1}$, $(E,f)\in \mathcal M^{st}_{\gamma,{\bf n}}$, $(E_2,f_2)\in \mathcal M^{st}_{\gamma_2,{\bf n}}$. $f:\fC^{\bf n}\rightarrow E$ and $f_2:\fC^{\bf n}\rightarrow E_2$ are the framed structures attached to $E$ and $E_2$ respectively. The subgroup of the automorphism group of $E$ which preserves the embedding of $E_1$ is denoted by $P_{\gamma_2,\gamma,{\bf n}}$. It plays the role of the automorphism group of $\mathcal M^{st}_{\gamma_2,\gamma,{\bf n}}$. The natural projections from the diagram to its components give the following diagram: \begin{equation}\label{corres} \xymatrix{&\mathcal M^{st}_{\gamma,{\bf n}}&\\&[\hat M^{st}_{\gamma_2,\gamma,{\bf n}}/P_{\gamma_2,\gamma,{\bf n}}]\ar[u]^{p}\ar[dr]^{p_2}\ar[dl]_{p_1}&\\[ M_{\gamma_1}/G_{\gamma_1}]&&\mathcal M^{st}_{\gamma_2,{\bf n}}}. \end{equation} The map $p_*(p_1^*(\phi_1)\cup p_2^*(\varphi_2))$ defines a morphism from $H^*({\mathcal M}^{st}_{\gamma_2,{\bf n}})$ to $H^*({\mathcal M}^{st}_{\gamma,{\bf n}})$ for $\phi_1\in \mathcal H_{\gamma_1}$ and $\varphi_2\in H^*(\mathcal M^{st}_{\gamma_2,{\bf n}})$. This morphism induces a representation of $\mathcal H=\bigoplus_{\gamma}\mathcal H_{\gamma}$ on $\bigoplus_{\gamma}H^*(\mathcal M^{st}_{\gamma,{\bf n}})$. It is called {\it the increasing representation} of COHA for the quiver $Q$, and denoted by $R^+_{\bf n}$. Similarly, the map $(p_2)_*(p_1^*(\phi_1)\cup p^*(\varphi))$ for $\phi_1\in \mathcal H_{\gamma_1}$ and $\varphi\in H^*(\mathcal M^{st}_{\gamma,{\bf n}})$ gives {\it the decreasing representation} $R^-_{\bf n}$ on the cohomology of the smooth model. In order to have well-defined representations one needs to show that $p$ and $p_2$ are proper morphisms. In the $A_1$-case the properness is obvious (see Section \ref{2repn_A1} below).
\subsection{$A_1$-case} Let $n$ be the framed structure dimension. A framed representation $(\fC^d,f)$ of $A_1$-quiver is stable if and only if $f:\fC^n\rightarrow \fC^d$ is surjective. Thus the stable framed moduli space $\mathcal M^{st}_{d,n}$ is the Grassmannian (of quotient spaces) $Gr(d,n)$ for $0\leq d\leq n$, and empty for $d>n$. It is well known (see e.g. \cite{Ful1997}, p.$161$) that the cohomology of full flag variety $Fl(n)$ is isomorphic to $R(n)=\mathbb{Q}[x_1,\ldots,x_n]/(e_1(x_1,\ldots,x_n),\ldots,e_n(x_1,\ldots,x_n))$, where $e_i(x_1,\ldots,x_n)$ represents the $i$-th elementary symmetric polynomial. The cohomology of Grassmannian $Gr(d,n)$ is a subalgebra of $R(n)$ which is generated by Schur polynomials in variables $x_1,\ldots,x_d$. Thus we can use $s_{\lambda}(x_1,\ldots,x_d)$ to represent classes in $H^*(Gr(d,n))$. There is a natural projection $\pi: Fl(n)\rightarrow Gr(d,n)$. By abuse of notation, we use the same symbol $x_i$ to denote the classes in $Gr(d,n)$ whose pullback $\pi^*(x_i)$ is $x_i\in H^*(Fl(n))$. Classes in $H^*(Gr(d,n))$ have an alternative presentation. \begin{lemma}\label{Lemma2.1} In $H^*(Gr(d,n))$, $s_{\lambda}(x_1,\ldots,x_d)=(-1)^{|\lambda|}s_{\lambda'}(x_{d+1},\ldots,x_n)$, where $\lambda'$ is the transpose partition of $\lambda$. \end{lemma} \begin{proof} The above identity can be easily deduced from the identity $\prod_{i=1}^d\frac{1}{1-x_it}=\prod_{i=d+1}^n(1-x_it)$ (see e.g. \cite{Ful1997}, p.$163$) in the ring $R(n)[t]$. Since \begin{equation} \prod_{i=1}^d\frac{1}{1-x_it}=\sum_{r\geq0}h_r(x_1,\ldots,x_d)t^r \end{equation} and \begin{equation} \prod_{i=d+1}^n(1-x_it)=\sum_{r\geq0}e_r(x_{d+1},\ldots,x_n)(-t)^r \end{equation} where $h_r$ (resp. $e_r$) stands for the $r$-th complete symmetric polynomial (resp. elementary symmetric polynomial), we have \begin{equation} h_r(x_1,\ldots,x_d)=(-1)^re_r(x_{d+1},\ldots,x_n), \quad r\geq0.
\end{equation} By the same argument, comparing coefficients in $\prod_{i=d+1}^n\frac{1}{1-x_it}=\prod_{i=1}^d(1-x_it)$ gives $e_r(x_1,\ldots,x_d)=(-1)^rh_r(x_{d+1},\ldots,x_n)$ for $r\geq0$. By the Jacobi-Trudi identity, \begin{eqnarray*} s_{\lambda'}(x_1,\ldots,x_d)&=&det(e_{\lambda_i-i+j}(x_1,\ldots,x_d))\\ &=&det((-1)^{\lambda_i-i+j}h_{\lambda_i-i+j}(x_{d+1},\ldots,x_n))\\ &=&(-1)^{|\lambda|}det(h_{\lambda_i-i+j}(x_{d+1},\ldots,x_n))\\ &=&(-1)^{|\lambda|}s_{\lambda}(x_{d+1},\ldots,x_n). \end{eqnarray*} The third identity comes from the fact that \begin{eqnarray*} det(h_{\lambda_i-i+j}t^{\lambda_i-i+j})&=&\sum_{\omega}(-1)^{\omega}\prod_{i=1}^nh_{\lambda_i-i+\omega(i)}t^{\lambda_i-i+\omega(i)}\\ &=&\sum_{\omega}(-1)^{\omega}t^{\sum_{i=1}^n(\lambda_i-i+\omega(i))}\prod_{i=1}^nh_{\lambda_i-i+\omega(i)}\\ &=&t^{|\lambda|}\sum_{\omega}(-1)^{\omega}\prod_{i=1}^nh_{\lambda_i-i+\omega(i)}\\ &=&t^{|\lambda|}det(h_{\lambda_i-i+j}), \end{eqnarray*} since $\sum_{i=1}^n(\lambda_i-i+\omega(i))=|\lambda|$ for every permutation $\omega$. \end{proof} In the following we will use this ``transpose'' presentation to do some computations. \subsection{Two representations of $A_1$-COHA}\label{2repn_A1} The scheme $[M^{st}_{d_2,d,n}/P_{d_2,d,n}]$ in the $A_1$-quiver case is isomorphic as a scheme to the two-step flag variety $F_{d_2,d,n}$, which is the variety of flags $\{\fC^n\twoheadrightarrow \fC^d\twoheadrightarrow \fC^{d_2}\}$. Let $\phi_i$ be a generator of $\mathcal H_1$, and let $s_{\lambda}$ be the Schur polynomial with partition $\lambda$, considered as an element of the cohomology $H^*(Gr(d_2,n))$ of the Grassmannian. In this case, $p$ is the obvious projection from $F_{d_2,d,n}$ to $Gr(d,n)$ and $p_2$ is the obvious projection from $F_{d_2,d,n}$ to $Gr(d_2,n)$. Therefore both $p$ and $p_2$ are proper morphisms of stacks (which are in fact schemes), and the increasing and decreasing representations introduced in Section \ref{section23} are well defined. Now we want to compute the increasing representation by the formula $p_*(p_1^*(\phi_i)\cup p_2^*(s_{\lambda}))$. Note that in this case, $d_1=1$. Recall that $\phi_i$ represents the polynomial $\phi_i(x_{1,1})=x_{1,1}^i$.
Geometrically, $x_{1,1}$ is the first Chern class of the tautological line bundle $\mathscr O(-1)$ over the classifying space of $G_1$, and $x_{1,1}^i$ is its $i$-th power. This line bundle pulls back through $p_1$ to the line bundle over $F_{d_2,d,n}$ associated to the corresponding character of $G_{d_1}$, where $G_{d_1}$ is viewed as a subquotient of $P_{d_2,d,n}$. Hence $p_1^*(\phi_i)$ is the $i$-th power of the first Chern class of this line bundle, namely $\phi_i(x_{d_2+1})=x^i_{d_2+1}$. As homogeneous spaces, $Gr(d,n)\approx GL_n(\fC)/P_{d,n}$, $Gr(d_2,n)\approx GL_n(\fC)/P_{d_2,n}$ and $F_{d_2,d,n}\approx GL_n(\fC)/P_{d_2,d,n}$. We use the formula of \cite{Bri1996} to compute the pushforward. \begin{thm}\label{thm2.5}\cite{Bri1996}. Let $G$ be a connected reductive algebraic group over $\fC$ and $B$ a Borel subgroup. Choose a maximal torus $T\subset B$ with Weyl group $W$. The set of positive roots of the root system of $(G,T)$ is denoted by $\Delta^+$. Let $P\supset B$ be a parabolic subgroup of $G$, with set of positive roots $\Delta^+(P)$ and Weyl group $W_P$. Let $L_{\alpha}$ be the complex line bundle over $G/B$ associated to the root $\alpha$. The Gysin homomorphism $f_*:H^*(G/B)\rightarrow H^*(G/P)$ is given by \begin{equation} f_*(p)=\sum_{w\in W/W_P}w\cdot\frac{p}{\prod_{\alpha\in\Delta^+\backslash\Delta^+(P)}c_1(L_{\alpha})}. \end{equation} \end{thm} \begin{comment} \begin{thm}\label{thm2.5}\cite{Ped2007}. Let $G$ be a compact, connected Lie group and $T$ a maximal torus in $G$. Let $H$ be a closed, connected subgroup of maximal rank in $G$ which contains $T$. Denote by $W_H$ the Weyl group of $T$ in $H$, and by $\Delta^+(H)$ the set of positive roots of $T$ in $H$. Let $L_{\alpha}$ be the complex line bundle over $G/T$ which is associated to the root $\alpha$. The Gysin homomorphism $f_*:H^*(G/T)\rightarrow H^*(G/H)$ is given by \begin{equation} f_*(p)=\sum_{w\in W_H}w\cdot\frac{p}{\prod_{\alpha\in\Delta^+(H)}c_1(L_{\alpha})}.
\end{equation} \end{thm} \end{comment} Applying Theorem \ref{thm2.5}, for $s_{\lambda}\in H^{*}(Gr(d_2,n))$, \begin{equation}\label{incre-formula} (\phi_i^+\cdot s_{\lambda})(x_1,\ldots,x_{d_2+1})=\sum_{i_1<\ldots<i_{d_2}}\frac{s_{\lambda}(x_{i_1},\ldots,x_{i_{d_2}})\phi_i(x_{i_{d_2+1}})}{\prod_{j=1}^{d_2}(x_{i_j}-x_{i_{d_2+1}})}. \end{equation} Similarly, the formula for the decreasing action is \begin{equation}\label{decre-formula-1} (\phi_i^-\cdot s_{\lambda})(x_1,\ldots,x_{d_2-1})=\sum_{i_1<\ldots<i_{d_2}}\frac{s_{\lambda}(x_{i_1},\ldots,x_{i_{d_2}})\phi_i(x_{i_{d_2}})}{\prod_{j=d_2+1}^n(x_{i_{d_2}}-x_{j})}. \end{equation} \begin{comment} \begin{remark} It is well-known that $G/T\simeq G_{\fC}/B$ and $G/H\simeq G_{\fC}/P$, where $B$ is the Borel subgroup in the complexification $G_{\fC}$ of $G$ and $P$ is the parabolic subgroup in $G_{\fC}$ corresponding to $H$. For the reason we can use (\ref{incre-formula}) and (\ref{decre-formula-1}) in the framework of our paper. \end{remark} \end{comment} \begin{remark} In Formula (\ref{decre-formula-1}), the variables $x_i$ with $i>d_2-1$ appear on the right-hand side, although they are not among the variables on the left-hand side. This is not a contradiction, thanks to the identity $s_{\lambda}(x_1,\ldots,x_d)=(-1)^{|\lambda|}s_{\lambda'}(x_{d+1},\ldots,x_n)$ of Lemma \ref{Lemma2.1}. More details will be discussed in the following section. \end{remark} \begin{remark} The construction above actually only defines an increasing operator $\phi^+_{i,d}$ from $H^*(Gr(d,n))$ to $H^*(Gr(d+1,n))$ and a decreasing operator $\phi^-_{i,d}$ from $H^*(Gr(d,n))$ to $H^*(Gr(d-1,n))$. The increasing operator we need is $\phi^+_i=\sum_{d=0}^n\phi^+_{i,d}$. The decreasing operator we need is $\phi^-_i=\sum_{d=0}^n\phi^-_{i,d}$. We can then define {\it the twisted decreasing operator} by $\hat{\phi}^-_i=\sum_{d=0}^n(-1)^{d-1}\phi^-_{i,d}$.
We call the representation formed by these operators {\it the twisted decreasing representation} and denote it by $\hat{R}^-_n$. \end{remark} \section{Increasing and decreasing operators} \subsection{Increasing operators} The key result of this subsection is adapted from \cite{Fra2013}. \begin{prop}\cite{Fra2013}. The increasing representation structure is induced by the open embedding $j: \hat M^{st}_{d,n}\rightarrow \hat M_{d,n}$. The induced map $j^*:\mathcal H\rightarrow R^+_n$ is $\mathcal H$-linear and surjective. The kernel of $j^*$ equals $\sum_{p\geq0, q>0}\mathcal H_p\wedge(e_q^n\cup \mathcal H_q)$, where $e_q=\prod_{i=1}^qx_i$. \end{prop} \begin{proof} In \cite{Fra2013} the analogous result is proved for $n=1$. It generalizes easily to the case $n>1$ for the $A_1$-quiver. \end{proof} The next lemma follows immediately from the definition of Schur polynomials. \begin{lemma} $s_{(\lambda_d+1,\lambda_{d-1}+1,\ldots,\lambda_1+1)}=e_ds_{\lambda}$ for $s_{\lambda}\in \mathbb{Q}[x_1,\ldots,x_d]^{S_d}$ and $e_d=\prod_{i=1}^dx_i$. Thus $e^n_d\cup \Phi_{\bf k}=\Phi_{\bf k+n}$ for $\Phi_{\bf k}\in \mathcal H_d$, where ${\bf n}=(n,n,\ldots,n)$. \end{lemma} Finally, we come to the main result of this subsection, whose proof is straightforward. \begin{prop}\label{propincrebasis} The increasing representation $R^+_n$ is the quotient of $\mathcal H=\bigwedge^*(\mathcal H_1)$ by the ideal generated by $\{\phi_i\}_{i\geq n}$. Thus $R_n^+$ is isomorphic to $\bigwedge^*(V(n))$, where $V(n)$ is the linear space spanned by $\phi_0, \ldots, \phi_{n-1}$ and the action is given by wedge product on the left. The elements $\phi_{k_1}\wedge\ldots\wedge\phi_{k_d}$ with $0\leq k_1<\ldots<k_d\leq n-1$ and $0\leq d\leq n$ form a basis of $R^+_n$.
\end{prop} \subsubsection{Two presentations of classes in the cohomology of Grassmannian}\label{dual_1} Proposition \ref{propincrebasis} implies that we can use the notation introduced in Section \ref{incre_basis} to represent cohomology classes of Grassmannians, as well as classes in COHA, since they share the same product structure. Thus in $H^*(Gr(d,n))$, the element $\Phi_{\bf k}=\phi_{k_1}\wedge\ldots\wedge\phi_{k_d}(x_1,\ldots,x_d)$ with index ${\bf k}=(k_1,\ldots,k_d)$ represents the Schur polynomial $s_{\lambda({\bf k})}(x_{1,d},\ldots,x_{d,d})$, where $0\leq k_1<\ldots<k_d\leq n-1$ and $\lambda=(\lambda_d,\ldots,\lambda_1)=(k_d-d+1,k_{d-1}-d+2,\ldots,k_1)$ is a partition of length $\leq n$. Let $\lambda'$ be the transpose partition of $\lambda$, and ${\bf k'}={\bf k}(\lambda')$. By Lemma \ref{Lemma2.1}, $\Phi_{\bf k}(x_1,\ldots,x_d)=(-1)^{|\lambda|}\Phi_{\bf k'}(x_{d+1},\ldots,x_n)$. We call $\Phi_{\bf k}$ {\it the ordinary presentation} of the corresponding class $s_{\lambda}$, and $(-1)^{|\lambda|}\Phi_{\bf k'}$ {\it the transpose presentation}. \begin{comment} The increasing operators can be realized as the wedge product operators over $\bigwedge^n(V)$. From the previous section, the formula for the operators are \begin{equation} \begin{split} (f*g)(x_1,\ldots,x_d)&=\sum_{\text{shuffle}}\frac{f(x_{i_1},\ldots,x_{i_p})g(x_{j_1},\ldots,x_{i_q})}{\prod_{\mu=1}^p\prod_{v=1}^q(x_{j_v}-x_{i_{\mu}})}\\ sfd&=\\ dsf& \end{split} \end{equation} We want to prove that the representation is isomorphic to $\bigwedge^*(V)$. \end{comment} \subsection{Decreasing operators} \begin{comment} We use the basis used in the increasing representations to study the decreasing representation. The formula for decreasing representations is \begin{equation} (f*g)(x_1,\ldots,x_d)=\sum_{\text{shuffle}}\frac{f(x_{i_1},\ldots,x_{i_p})g(x_{j_1},\ldots,x_{i_q})}{\prod_{\mu=1}^p\prod_{v=1}^q(x_{j_v}-x_{i_{\mu}})}.
\end{equation} \end{comment} \begin{comment} \begin{lemma} Schur polynomials structure on $H^*(Gr(d,n))$. \end{lemma} \begin{lemma} In $H^*(Gr(d,N))$, $s_{\lambda}(x_1,\ldots,x_d)=(-1)^{|\lambda|}s_{\lambda'}(x_{d+1},\ldots,x_n)$. [dual] \end{lemma} \end{comment} Our goal is to understand the decreasing representation using the basis $\{\Phi_{\bf k}\}_{\bf k}$ of $R^+_n$. By Section \ref{dual_1}, equation (\ref{decre-formula-1}) can be rewritten as \begin{equation}\label{decre-formula-ture} \begin{split} (\phi_i^-\cdot \Phi_{\bf k})(x_1,\ldots,x_{d_2-1})&=\sum_{i_1<\ldots<i_{d_2}}\frac{\Phi_{\bf k}(x_{i_1},\ldots,x_{i_{d_2}})\phi_i(x_{i_{d_2}})}{\prod_{j=d_2+1}^n(x_{i_{d_2}}-x_{j})}\\ &= (-1)^{|\lambda(\bf k)|}\sum_{i_{d_2+1}<\ldots<i_n}\frac{\Phi_{\bf k'}(x_{i_{d_2+1}},\ldots,x_{i_n})\phi_i(x_{i_{d_2}})}{\prod_{j=d_2+1}^n(x_{i_{d_2}}-x_{i_{j}})}\\ &=(-1)^{|\lambda|+n-d_2}(\phi_i^+\cdot\Phi_{\bf k'})(x_{d_2},\ldots,x_n). \end{split} \end{equation} This formula suggests an algorithm. Start from the ordinary presentation of a class $\Phi_{\bf k}= \phi_{k_1}\wedge\ldots\wedge\phi_{k_d}$ in $H^*(Gr(d,n))$, where ${\bf k}=(k_1,\ldots,k_d)$ and $0\leq k_1<\ldots<k_d\leq n-1$. First we change $\Phi_{\bf k}(x_1,\ldots,x_d)$ to $(-1)^{|\lambda(\bf k)|}\Phi_{\bf k'}(x_{d+1},\ldots,x_n)$ by Lemma \ref{Lemma2.1}. Then we apply $\phi_i^-$ to $\Phi_{\bf k'}$ using formula (\ref{decre-formula-ture}) and Proposition \ref{propincrebasis}. Finally we change the result back to the ordinary presentation. We need the following lemma to carry out these transformations. \begin{lemma} If $\phi_r$ appears in $\Phi_{{\bf k}'(\lambda)}$, then $\phi_{n-r-1}$ does not appear in $\Phi_{{\bf k}(\lambda)}$. Conversely, if $\phi_r$ does not appear in $\Phi_{{\bf k}'(\lambda)}$, then $\phi_{n-r-1}$ appears in $\Phi_{{\bf k}(\lambda)}$.
\end{lemma} \begin{proof} By Section \ref{dual_1}, $\lambda=(\lambda_d,\ldots,\lambda_1)=(k_d-d+1,k_{d-1}-d+2,\ldots,k_1)$ is a partition of length $\leq n$. The transpose partition is defined by $\lambda'_j=\#\{\lambda_i\geq n-d+1-j\}$ for $1\leq j\leq n-d$. Thus we have \begin{equation} \lambda_{d-i+1}=\begin{cases} n-d&\text{if}\ 1\leq i\leq \lambda'_1,\\ n-d-j&\text{if}\ \lambda'_j+1\leq i\leq \lambda'_{j+1}\ \text{for}\ 1\leq j\leq n-d-1,\\ 0&\text{if}\ \lambda'_{n-d}+1\leq i\leq d. \end{cases} \end{equation} From $\lambda=(\lambda_d,\ldots,\lambda_1)=(k_d-d+1,k_{d-1}-d+2,\ldots,k_1)$, it follows immediately that \begin{equation} k_{d-i+1}=\begin{cases} n-i&\text{if}\ 1\leq i\leq \lambda'_1,\\ n-i-j&\text{if}\ \lambda'_j+1\leq i\leq \lambda'_{j+1}\ \text{for}\ 1\leq j\leq n-d-1,\\ d-i&\text{if}\ \lambda'_{n-d}+1\leq i\leq d. \end{cases} \end{equation} Then $n-k'_{j+1}= n-j-\lambda'_{j+1}\leq k_{d-i+1}=n-i-j\leq n-j-\lambda'_j-1= n-{k'_j}-2$ if $\lambda'_j+1\leq i\leq \lambda'_{j+1}$ for $1\leq j\leq n-d-1$; or $0=d-d\leq k_{d-i+1}=d-i\leq d-\lambda'_{n-d}-1=n-k'_{n-d}-2$ if $\lambda'_{n-d}+1\leq i\leq d$; or $n-k'_1=n-\lambda'_1\leq k_d=n-i\leq n-1$. Therefore $k_{d-i+1}$ runs over all integers between $n-k'_{j+1}$ and $n-k'_j-2$, or between $0$ and $n-k'_{n-d}-2$, or between $n-k'_1$ and $n-1$. If $\phi_r$ does not appear in $\Phi_{{\bf k'}(\lambda)}$, there are three cases. If $k'_s< r< k'_{s+1}$ for $1\leq s\leq n-d-1$, then $n-k'_{s+1}\leq n-r-1\leq n-k'_s-2$. If $k'_{n-d}<r\leq n-1$, then $0\leq n-r-1\leq n-k'_{n-d}-2$. If $0\leq r<k'_1$, then $n-k'_1\leq n-r-1\leq n-1$. This means that there exists some $1\leq i\leq d$ such that $k_{d-i+1}=n-r-1$. If $\phi_r$ appears in $\Phi_{{\bf k}'(\lambda)}=\phi_{k'_1}\wedge\ldots\wedge\phi_{k'_{n-d}}$, let $r=k'_s$. Then $k_{d-i+1}$ can never be $n-k'_s-1=n-r-1$ for $1\leq i\leq d$.
\end{proof} \begin{comment} since $n-k'_{j+1}\leq k_{d-i+1}\leq n-{k'_j}-2$, let $j=s$ and we get $n-k'_{s+1}\leq k_{d-i+1}\leq n-{k'_s}-2$ for $\lambda'_s+1\leq i\leq \lambda'_{s+1}$. $k'_s<r<k'_{s+1}$ is the same as $k'_s+1\leq r\leq k'_{s+1}-1$. Since the cohomology of Grassmannian is $S[x_1,\ldots,x_n]/R$, where $R$ is the subalgebra of symmetric polynomials in $S[x_1,\ldots,x_n]$, we have $S_{\lambda}(x_1,\ldots, x_d)=S_{\lambda'}(x_{d+1},\ldots,x_{n})$, where $\lambda$ is a partition of $d\times (n-d)$ and $\lambda'$ is the conjugate of $\lambda$. Therefore the formula for decreasing representation is[CHECK!] \begin{equation} (\phi_r^-\cdot s_{\lambda})(x_1,\ldots,x_d)=\sum_{\text{shuffle}}\frac{f(x_{i_1},\ldots,x_{i_p})g(x_{j_1},\ldots,x_{i_q})}{\prod_{\mu=1}^p\prod_{v=1}^q(x_{j_v}-x_{i_{\mu}})}, \end{equation} which can be written in the following forms \begin{equation} \phi_r\wedge f(x_{d+1},\ldots,x_n). \end{equation} The above argument For $\phi_i^-\cdot s_{\lambda}$ where $s_\lambda\in \bigwedge ^*(V)$, we first transfer $s_{\lambda}(x_1,\ldots,x_d)$ to $s_{\lambda'}(x_{d+1},\ldots,x_{n})$, do the wedge product, and then change the partition back (along with the basis). We now want to compute $\phi_r^-\cdot \phi_{k_1}\wedge\ldots\wedge\phi_{k_d}$. Let us start from $\phi_{k_1}\wedge\ldots\wedge\phi_{k_d}$. By section \ref{incre_basis}, the associated partition $\lambda=(\lambda_d,\ldots,\lambda_1)$ is given by $\lambda_i=k_i-i+1$ for $1\leq i\leq d$. The conjugate of $\lambda$ is $\lambda'=(\lambda'_{n-d},\ldots,\lambda'_{1})$ where $\lambda'_j=\#\{\lambda_i\geq n-d+1-j\}$ for $1\leq j\leq n-d$. Then the associated conjugate Schur polynomial is $(-1)^{|\lambda|}s_{\lambda'}(x_{d+1},\ldots,x_n)=(-1)^{|\lambda|}\phi_{k'_{1}}\wedge\ldots\wedge \phi_{k'_{n-d}}(x_{d+1},\ldots,x_n)$, where $k'_j=\lambda'_j+j-1$ for $1\leq j\leq n-d$. 
It immediately follows that $\#\{\lambda_j=n-d+1-i\}=\lambda'_i-\lambda'_{i-1}$ for $2\leq i\leq n-d$ and $\#\{\lambda_j=n-d\}=\lambda'_1$. Now assume $r$ does not appear in $k'$. Let $k'_{s}< r<k'_{s+1}$. After multiple $\phi_r$ to $\Phi_{\bf k'}$, the polynomial becomes $(-1)^{n-d-s}(-1)^{|\lambda|}\phi_{k'_1}\wedge\ldots\wedge\phi_{k'_s}\wedge\phi_r\wedge\phi_{k'_{s+1}}\wedge\ldots\wedge\phi_{k'_{n-d}}$, or $(-1)^{n-d-s+|\lambda|}\phi_{\tau'_1}\wedge\ldots\wedge\phi_{\tau'_{n-d+1}}$ where the index $\tau'=(\tau'_{n-d+1},\ldots,\tau'_1)$ is given by \begin{equation} \begin{split} \mu=&(\underbrace{n-d+1,\ldots,n-d+1}_{\lambda'_1},\underbrace{n-d,\ldots,n-d}_{\lambda'_2-\lambda'_1},\ldots,\underbrace{n-d-s+2,\ldots,n-d-s+2}_{\lambda'_s-\lambda'_{s-1}},\\ &\underbrace{n-d-s+1,\ldots,n-d-s+1}_{r-s-\lambda'_s},\underbrace{n-d-s,\ldots,n-d-s}_{\lambda'_{s+1}-1-r+s},\\ &\underbrace{n-d-s-1,\ldots,n-d-s-1}_{\lambda'_{s+2}-\lambda'_{s+1}},\ldots,\underbrace{1,\ldots,1}_{\lambda'_{n-d}-\lambda'_{n-d-1}}). \end{split} \end{equation} After checking these indexes, the only difference between $\lambda$ and $\mu$ is that in $\lambda$, $\#\{\lambda_j=n-d-s\}=\lambda'_{s+1}-\lambda'_{s}$, and in $\mu$, $\#\{\mu_j=n-d-s\}=\lambda'_{s+1}-1-r+s$ and $\#\{\mu_j=n-d-s+1\}=r-s-\lambda'_{s}$. Therefore we have the following partitions: \begin{equation} \lambda=(\underbrace{n-d,\ldots,n-d}_{\lambda'_1},\underbrace{n-d-1,\ldots,n-d-1}_{\lambda'_2-\lambda'_1},\ldots,\underbrace{2,\ldots,2}_{\lambda'_{n-d-1}-\lambda'_{n-d-2}},\underbrace{1,\ldots,1}_{\lambda'_{n-d}-\lambda'_{n-d-1}}), \end{equation} and \begin{equation} \lambda=(\underbrace{n-d,\ldots,n-d}_{\lambda'_1},\underbrace{n-d-1,\ldots,n-d-1}_{\lambda'_2-\lambda'_1},\ldots,\underbrace{2,\ldots,2}_{\lambda'_{n-d-1}-\lambda'_{n-d-2}},\underbrace{1,\ldots,1}_{\lambda'_{n-d}-\lambda'_{n-d-1}}), \end{equation} or Turn to the associated Schur polynomials. 
\end{comment} \begin{defn} We introduce the {\it right partial derivative operator} $\partial_i^R: \bigwedge^*(V(n))\rightarrow\bigwedge^*(V(n))$ in order to state the following proposition. For $\Phi_{\bf k}=\phi_{k_1}\wedge\ldots\wedge\phi_{k_d}$, if $\phi_{i}$ appears in $\Phi_{\bf k}$, say $i=k_s$, then $\partial_{i}^R(\Phi_{\bf k})=(-1)^{d-s}\phi_{k_1}\wedge\ldots\wedge\hat{\phi}_{k_s}\wedge\ldots\wedge\phi_{k_d}$, where $\hat{\phi}_{k_s}$ indicates that the factor is omitted. If $\phi_i$ does not appear in $\Phi_{\bf k}$, then $\partial_{i}^R(\Phi_{\bf k})=0$. The {\it left partial derivative operator} $\partial_i^L:\bigwedge^*(V(n))\rightarrow\bigwedge^*(V(n))$ is defined in a similar way: if $\phi_{i}$ appears in $\Phi_{\bf k}$, say $i=k_s$, then $\partial_{i}^L(\Phi_{\bf k})=(-1)^{s-1}\phi_{k_1}\wedge\ldots\wedge\hat{\phi}_{k_s}\wedge\ldots\wedge\phi_{k_d}$, and if $\phi_i$ does not appear in $\Phi_{\bf k}$, then $\partial_{i}^L(\Phi_{\bf k})=0$. It is easy to see that $\partial_i^{R}=(-1)^{d-1}\partial^L_i$ on $\bigwedge^d(V(n))$. \end{defn} \begin{prop} The decreasing operators act as right partial derivative operators on $\bigwedge ^*(V(n))$: $\phi_r^-\cdot\Phi_{\bf k}=\partial_{n-r-1}^R(\Phi_{\bf k})$. \end{prop} \begin{proof} We want to compute $\phi^-_r\cdot \Phi_{\bf k}$. By formula (\ref{decre-formula-ture}), we have \begin{equation} \begin{split} (\phi^-_r\cdot \Phi_{\bf k})(x_1,\ldots,x_{d-1})&=(-1)^{|\lambda|+n-d}(\phi_r^+\cdot\Phi_{\bf k'})(x_{d},\ldots,x_n)\\ &=(-1)^{|\lambda|+n-d}(\phi_r\wedge\phi_{k'_1}\wedge\ldots\wedge\phi_{k'_{n-{d}}})(x_{d},\ldots,x_n). \end{split} \end{equation} If $\phi_{n-r-1}$ does not appear in $\Phi_{\bf k}$, then $\phi_r$ appears in $\Phi_{\bf k'}$. Thus $\phi_r^-\cdot\Phi_{\bf k}(x_1,\ldots,x_{d-1})=(\phi_r\wedge\phi_{k'_1}\wedge\ldots\wedge\phi_r\wedge\ldots\wedge\phi_{k'_{n-d}})(x_{d},\ldots,x_{n})=0$. If $\phi_{n-r-1}$ appears in $\Phi_{\bf k}=\phi_{k_1}\wedge\ldots\wedge\phi_{k_d}$, then $\phi_r$ does not appear in $\Phi_{\bf k'}=\phi_{k'_1}\wedge\ldots\wedge\phi_{k'_{n-d}}$. Assume $k'_s< r<k'_{s+1}$.
We have \begin{equation} \phi_r\wedge\phi_{k'_1}\wedge\ldots\wedge\phi_{k'_{n-{d}}}=(-1)^{s}\phi_{k'_1}\wedge\ldots\wedge\phi_{k'_s}\wedge\phi_r\wedge\phi_{k'_{s+1}}\wedge\ldots\wedge\phi_{k'_{n-{d}}}. \end{equation} We now change this back to the ordinary presentation. First, let us find the partition associated to this polynomial. The index ${\bf l'}=(l'_{1},\ldots,l'_{n-d+1})$ is given by \begin{equation} l'_i=\begin{cases} k'_{i-1}&s+2\leq i\leq n-d+1,\\ r&i=s+1,\\ k'_i&1\leq i\leq s. \end{cases} \end{equation} Then the new partition $\mu'=(\mu'_{n-d+1},\ldots,\mu'_1)$ is given by \begin{equation} \mu'_i=\begin{cases} \lambda'_{i-1}-1&s+2\leq i\leq n-d+1,\\ r-s&i=s+1,\\ \lambda'_i&1\leq i\leq s. \end{cases} \end{equation} The next step is to recover the partition $\mu$ from its transpose $\mu'$. From the definition of the transpose partition, $\mu'_j=\#\{\mu_i\geq n-d+2-j\}$ for $1\leq j\leq n-d-1$. Then \begin{comment} We have \begin{enumerate} \item $\#\{\mu_j= n-d+2-i\}=\lambda'_{i-1}-\lambda'_{i-2}=\#\{\lambda_j=n-d+2-i\}$ for $s+3\leq i\leq n-d+1$, \item $\#\{\mu_j= n-d-s\}=\lambda'_{s+1}-1-r+s$, \item $\#\{\mu_j= n-d-s+1\}=r-s-\lambda'_s$, \item $\#\{\mu_j= n-d+2-i\}=\lambda'_i-\lambda'_{i-1}=\#\{\lambda_j=n-d+1-i\}$ for $2\leq i\leq s$, \item $\#\{\mu_j= n-d+1\}=\lambda'_1=\#\{\lambda_j=n-d\}$. \end{enumerate} \end{comment} \begin{equation} \mu_{d-i}=\begin{cases} n-d-j &\text{if}\ \lambda'_{j}\leq i\leq\lambda'_{j+1}-1\ \text{and}\ s+1\leq j\leq n-d-1,\\ n-d-s &\text{if}\ r-s+1\leq i\leq \lambda'_{s+1}-1,\\ n-d+1-s &\text{if}\ \lambda'_s+1\leq i\leq r-s,\\ n-d+1-j&\text{if}\ \lambda'_j+1\leq i\leq \lambda'_{j+1}\ \text{and}\ 2\leq j\leq s-1,\\ n-d+1&\text{if}\ 1\leq i\leq \lambda'_1.
\end{cases} \end{equation} By comparing it with \begin{equation} \lambda_{d-i+1}=\begin{cases} n-d-j&\text{if}\ \lambda'_j+1\leq i\leq \lambda'_{j+1}\ \text{and}\ 2\leq j\leq n-d-1,\\ n-d&\text{if}\ 1\leq i\leq \lambda'_1, \end{cases} \end{equation} we notice that $\mu_{i}=\lambda_{i+1}+1$ for $d-r+s\leq i\leq d-1$ and $\mu_{i}=\lambda_i$ for $1\leq i\leq d-r+s-1$. Therefore, since $l_i=\mu_i+i-1$ for $1\leq i\leq d-1$ and $k_j=\lambda_j+j-1$ for $1\leq j\leq d$, it is easy to see that $l_i=k_{i+1}$ for $d-r+s\leq i\leq d-1$ and $l_{i}=k_i$ for $1\leq i\leq d-r+s-1$. Thus the resulting presentation is $(-1)^{n-d+s+|\lambda|+|\mu|}\phi_{k_1}\wedge\ldots\wedge\hat{\phi}_{n-r-1}\wedge\ldots\wedge\phi_{k_d}$$=$$(-1)^{r+s}\phi_{k_1}\wedge\ldots\wedge\hat{\phi}_{n-r-1}\wedge\ldots\wedge\phi_{k_d}$$=$$\partial_{n-r-1}^R(\Phi_{\bf k})$, that is, $\Phi_{\bf k}$ acted on by the right partial derivative with respect to $\phi_{n-r-1}$. If $r<k'_1$ or $r>k'_{n-d}$, a similar computation leads to the same result. \end{proof} \subsection{Twisted decreasing operators} From the above computations, the following proposition about the twisted decreasing operators is immediate. \begin{prop} The twisted decreasing operators act as left partial derivative operators on $\bigwedge ^*(V(n))$: $\hat{\phi}_r^-\cdot\Phi_{\bf k}=\partial_{n-r-1}^L(\Phi_{\bf k})$. \end{prop} \section{The double of representations} \subsection{The double of untwisted representations} Let $V(n)$ be the $n$-dimensional vector space spanned by $\{\phi_i \}_{i=0}^{n-1}$. The increasing and decreasing representations can be realized as creation operators $\{\alpha_i^+\}_{i=0}^{n-1}$ and annihilation operators $\{\alpha_i^-\}_{i=0}^{n-1}$ on $\bigwedge^*(V(n))$. Here $\alpha_i^+=\phi_i^+$ is the left wedge product, and $\alpha_i^-=\phi_{n-i-1}^-$ is the right partial derivative $\partial_i^R$.
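For example, for $n=2$ the space $\bigwedge^*(V(2))$ has basis $1,\phi_0,\phi_1,\phi_0\wedge\phi_1$, and the annihilation operators act by
\begin{equation*}
\alpha_0^-(\phi_0\wedge\phi_1)=\partial_0^R(\phi_0\wedge\phi_1)=-\phi_1,\qquad \alpha_1^-(\phi_0\wedge\phi_1)=\partial_1^R(\phi_0\wedge\phi_1)=\phi_0,
\end{equation*}
the signs recording the position of the omitted factor. In particular $(\alpha_0^+\alpha_0^-+\alpha_0^-\alpha_0^+)(\phi_0\wedge\phi_1)=\alpha_0^+(-\phi_1)=-\phi_0\wedge\phi_1$, so the untwisted operators do not satisfy the Clifford relations; it is the twisted operators $\hat{\alpha}_i^-=\partial_i^L$ introduced below that do.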
Define $H=[\alpha_0^+,\alpha_0^-]$ and the following operators for $0\leq i\leq n-1$: \begin{equation*} T_i=\frac{\alpha_i^++[H,\alpha_i^+]/2}{2},\quad S_i=\frac{\alpha_i^--[H,\alpha_i^-]/2}{2}. \end{equation*} Then define the following operators \begin{equation*} \begin{split} E_{0}&=-\frac{\alpha_0^-+[H,\alpha_0^-]/2}{2}, \quad F_0=\frac{\alpha_0^+-[H,\alpha_0^+]/2}{2},\\ E_1&=S_0,\quad F_1=T_0,\\ E_i&=[T_{i-2},S_{i-1}],\quad F_i=[T_{i-1},S_{i-2}], \quad \text{for}\ \ 2\leq i\leq n,\\ H_i&=[E_i,F_i],\quad \text{for}\ \ 0\leq i\leq n. \end{split} \end{equation*} In the following, let $P_k$ be an arbitrary degree $k$ monomial in $\bigwedge^*(V(n))$. Denote by $R_i^j$ the operator which changes the factor $\phi_i$ in $P_k$ to $\phi_j$. \begin{lemma}\label{want-to-prove-matrix} For $2\leq i\leq n$, \begin{enumerate} \item $H(P_k)=(-1)^{k-1}P_k$. \item $E_0(P_k)=-\partial_0^R (P_k)$ if $k$ is even and $\phi_0$ appears in $P_k$; otherwise it is $0$. \item $F_0(P_k)=\phi_0\wedge P_k$ if $k$ is odd and $\phi_0$ does not appear in $P_k$; otherwise it is $0$. \item $E_1(P_k)=\partial_0^R (P_k)$ if $k$ is odd and $\phi_0$ appears in $P_k$; otherwise it is $0$. \item $F_1(P_k)=\phi_0\wedge P_k$ if $k$ is even and $\phi_0$ does not appear in $P_k$; otherwise it is $0$. \item $S_{i-1}(P_k)=\partial_{i-1}^R (P_k)$ if $k$ is odd and $\phi_{i-1}$ appears in $P_k$; otherwise it is $0$. \item $T_{i-1}(P_k)=\phi_{i-1}\wedge P_k$ if $k$ is even and $\phi_{i-1}$ does not appear in $P_k$; otherwise it is $0$. \item $E_i(P_k)=R_{i-1}^{i-2}(P_k)$ if $\phi_{i-1}$ appears in $P_k$ and $\phi_{i-2}$ does not; otherwise it is $0$. \item $F_i(P_k)=R_{i-2}^{i-1}(P_k)$ if $\phi_{i-2}$ appears in $P_k$ and $\phi_{i-1}$ does not; otherwise it is $0$. \item $H_0(P_k)=\begin{cases} -P_k \quad &\text{$k$ is even and $\phi_0$ appears in $P_k$,}\\ P_k &\text{$k$ is odd and $\phi_0$ does not appear in $P_k$,}\\ 0&\text{otherwise.} \end{cases}$
\item $H_1(P_k)=\begin{cases} P_k \quad &\text{$k$ is even and $\phi_0$ does not appear in $P_k$,}\\ -P_k &\text{$k$ is odd and $\phi_0$ appears in $P_k$,}\\ 0&\text{otherwise.} \end{cases}$ \item $H_i(P_k)=\begin{cases} -P_k \quad &\text{$\phi_{i-1}$ appears in $P_k$ and $\phi_{i-2}$ does not,}\\ P_k &\text{$\phi_{i-2}$ appears in $P_k$ and $\phi_{i-1}$ does not,}\\ 0&\text{otherwise.} \end{cases}$ \end{enumerate} \end{lemma} \begin{proof} The proof of the lemma is straightforward. \end{proof} \begin{comment} \begin{proof} $E_0(P_n)=\frac{1+(-1)^n}{2}\phi_1^-(P_n)$, $F_0(P_n)=\frac{1-(-1)^n}{2}\phi_1^+(P_n)$, $E_1(P_n)=\frac{1-(-1)^n}{2}\phi_1^-(P_n)$, $F_1(P_n)=\frac{1+(-1)^n}{2}\phi_1^+(P_n)$. \end{proof} \end{comment} The main theorem below shows that the double of the two representations $R^+_n$ and $R^-_n$ of the $A_1$-COHA forms a representation of the Lie algebra of type $D_{n+1}$. \begin{thm} The above operators satisfy the Serre relations for $0\leq i, j\leq n$: \begin{enumerate} \item $[H_i,H_j]=0$, \item $[E_i,F_j]=\delta_{ij}H_i$, \item $[H_i,E_j]=a_{ji}E_j,\quad [H_i,F_j]=-a_{ji}F_j$, \item $(\mathrm{ad}\,E_i)^{-a_{ji}+1}(E_j)=0$, if $i\neq j$, \item $(\mathrm{ad}\,F_i)^{-a_{ji}+1}(F_j)=0$, if $i\neq j$, \end{enumerate} where $(a_{ij})$ is the Cartan matrix of the $D_{n+1}$-Lie algebra. \end{thm} \begin{proof} The first statement holds since each $H_i$ is diagonal by Lemma \ref{want-to-prove-matrix}. The second, in the case $i=j$, holds by the definition of $H_i$.
For the other relations, it suffices to check the following identities, which can be easily verified using Lemma \ref{want-to-prove-matrix}: \begin{enumerate} \item $a_{ii}=2$, for $0\leq i\leq n$, \item $a_{21}=a_{20}=a_{12}=a_{02}=a_{i-1,i}=a_{i,i-1}=-1$ for $3\leq i\leq n$, \item $a_{10}=a_{01}=a_{0,i}=a_{i,0}=a_{1,i}=a_{i,1}=a_{i,j}=a_{j,i}=0$, for $3\leq i\leq n$, $2\leq j\leq n$ and $|i-j|>1$, \item $[E_0,F_1]=[E_1,F_0]=[E_0,F_j]=[E_1,F_j]=[E_i,F_0]=[E_i,F_1]=[E_i,F_j]=0$, for $2\leq i\neq j\leq n$, \item $[E_2,[E_2,E_0]]=[E_0,[E_0,E_2]]=[F_2,[F_2,F_0]]=[F_0,[F_0,F_2]]=0$, \item $[E_{i-1},[E_{i-1},E_i]]=[E_{i},[E_{i},E_{i-1}]]=[F_{i-1},[F_{i-1},F_i]]=[F_{i},[F_{i},F_{i-1}]]=0$, for $2\leq i\leq n$, \item $[E_0,E_1]=[E_0,E_i]=[E_1,E_i]=[E_i,E_j]=0$ for $3\leq i\leq n$, $2\leq j\leq n$ and $|i-j|>1$, \item $[F_0,F_1]=[F_0,F_i]=[F_1,F_i]=[F_i,F_j]=0$ for $3\leq i\leq n$, $2\leq j\leq n$ and $|i-j|>1$. \end{enumerate} \end{proof} \subsection{The double of twisted representations} We use the setting of the previous subsection. Let $\hat{\alpha}_i^-=\hat{\phi}_{n-i-1}^-$ be the left partial derivative $\partial_i^L$. We now use the creation operators $\{\alpha_i^+\}_{i=0}^{n-1}$ and the twisted annihilation operators $\{\hat{\alpha}_i^-\}_{i=0}^{n-1}$ to form representations. The relations in the following theorem show that the double of the twisted representations forms a finite Clifford algebra. \begin{thm} The operators $\{\alpha_i^{+}\}_{i=0}^{n-1}$ and $\{\hat{\alpha}_i^{-}\}_{i=0}^{n-1}$ satisfy the following relations: \begin{enumerate} \item $\alpha^+_i\alpha^+_j+\alpha^+_j\alpha^+_i=0$, \item $\hat{\alpha}^-_i\hat{\alpha}^-_j+\hat{\alpha}^-_j\hat{\alpha}^-_i=0$, \item $\alpha^+_i\hat{\alpha}^-_j+\hat{\alpha}^-_j\alpha^+_i=\delta_{i,j}$. \end{enumerate} \end{thm} \begin{proof} The proof is straightforward: apply the formulas in the definitions of the operators to the basis vectors of $\bigwedge^*(V(n))$.
\end{proof} \section{Further discussions} For fixed $n$, the double of $R^+_n$ and $R^-_n$ forms a representation of the $D_{n+1}$-Lie algebra, and the double of $R^+_n$ and $\hat{R}^-_n$ forms a finite Clifford algebra. This leads to the following conjecture stated in \cite{Soi2014}. \begin{conj}\cite{Soi2014} Full COHA for the quiver $A_1$ is isomorphic to the infinite Clifford algebra $Cl_c$ with generators $\phi_n^{\pm},\ n\in\mathbb{Z}$ and the central element $c$, subject to the standard anticommuting relations between the $\phi_n^+$ (resp. the $\phi_n^-$) as well as the relation $\phi_n^+\phi_m^-+\phi_m^-\phi_n^+=\delta_{n,m}c$. \end{conj} \begin{remark} As stated in \cite{Soi2014}, in the case of finite-dimensional representations we have $c\mapsto0$, and we see two representations of the infinite Grassmann algebra, which combine into representations of the orthogonal Lie algebra. \end{remark} \subsection*{Acknowledgement} I thank Yan Soibelman, who introduced me to this subject, posed the problem and made many comments on drafts of this paper. I thank Hans Franzen, whose paper helped me to simplify my proof, for helpful communications. I also thank Zongzhu Lin, Zhaobin Fan, Jie Ren and Hui Chen for helpful discussions. \begin{comment} \item $H(P_n)=(-1)^{n-1}P_n$. \begin{proof} \begin{eqnarray*} H(P_n)&=&[\phi_i^+,\phi_i^-](P_n)\\ &=&\phi_i^+(\phi_i^-(P_n))-\phi_i^-(\phi_i^+(P_n))\\ &=&\phi_i\wedge(P_n\partial_i)-(\phi_i\wedge P_n)\partial_i. \end{eqnarray*} If $P_n$ has $\phi_i$, $\phi_i\wedge(P_n\partial_i)=(-1)^{n-1}P_n$. If $P_n$ doesn't have $\phi_i$, $-(\phi_i\wedge P_n)\partial_i=-(-1)^nP_n$. To sum up, $H(P_n)=(-1)^{n-1}P_n$. \end{proof} \item $X_1^+(P_n)=\phi_1\wedge P_n$ if $n$ is even, and $0$ if $n$ odd. $X_1^-(P_n)=P_n\partial_1$ if $n$ is odd, and $0$ if $n$ even. $X_0^+(P_n)=\phi_1\wedge P_n$ if $n$ is odd, and $0$ if $n$ even. $X_0^-(P_n)=P_n\partial_i$ if $n$ is even, and $0$ if $n$ odd.
\begin{proof} \begin{eqnarray*} [H,\phi_1^+](P_n)&=&H(\phi_1^+(P_n))-\phi_1^+(H(P_n))\\ &=&(-1)^n\phi_1^+(P_n)-(-1)^{n-1}\phi_1^+(P_n)\\ &=&2(-1)^n\phi_1^+(P_n). \end{eqnarray*} \begin{eqnarray*} [H,\phi_1^-](P_n)&=&H(\phi_1^-(P_n))-\phi_1^-(H(P_n))\\ &=&(-1)^{n-2}\phi_1^-(P_n)-(-1)^{n-1}\phi_1^-(P_n)\\ &=&2(-1)^n\phi_1^-(P_n). \end{eqnarray*} Thus \begin{eqnarray*} X_1^+(P_n)&=&\frac{\phi_1^+(P_n)+[H,\phi_1^+]/2(P_n)}{2}\\ &=&\frac{1+(-1)^n}{2}\phi_1^+(P_n). \end{eqnarray*} \begin{eqnarray*} X_1^-(P_n)&=&\frac{\phi_1^-(P_n)-[H,\phi_1^-]/2(P_n)}{2}\\ &=&\frac{1-(-1)^n}{2}\phi_1^-(P_n). \end{eqnarray*} \begin{eqnarray*} X_0^+(P_n)&=&\frac{\phi_1^+(P_n)-[H,\phi_1^+]/2(P_n)}{2}\\ &=&\frac{1-(-1)^n}{2}\phi_1^+(P_n). \end{eqnarray*} \begin{eqnarray*} X_0^-(P_n)&=&\frac{\phi_1^-(P_n)+[H,\phi_1^-]/2(P_n)}{2}\\ &=&\frac{1+(-1)^n}{2}\phi_1^-(P_n). \end{eqnarray*} \end{proof} \item $T_i^+(P_n)=\phi_i\wedge P_n$ if $n$ is even, and $0$ if $n$ odd. $T_i^+(P_n)=P_n\partial_i$ if $n$ is odd, and $0$ if $n$ even. \item $[T_i^+,T_{i-1}^-]=-R_{i-1}^i$ for $1\leq i\leq n$. $[T_i^-,T_{i-1}^+]=-R_{i}^{i-1}$ for $1\leq i\leq n$. \item $H_i(P_n)=P_n$ if $P_n$ has $\phi_{i-1}$ and doesn't have $\phi_i$. $H_i(P_n)=-P_n$ if $P_n$ has $\phi_{i}$ and doesn't have $\phi_{i-1}$. Otherwise, $H_i(P_n)=0$. \end{comment}
1,116,691,499,973
arxiv
\section{Introduction}\label{sec:Introduction} Let $(X,g)$ be an oriented Riemannian $3$--manifold and $P \rightarrow X$ a principal $G$--bundle, where $G$ is a compact Lie group. (Magnetic) \emph{monopoles} are solutions $(A,\Phi )$ to the \emph{Bogomolny equation} \begin{equation}\label{eqn:Bogomolny} \ast F_{A}=d_{A}\Phi. \end{equation} Here $\ast$ is the Hodge star operator of $(X,g)$; $F_{A}$ is the curvature of a connection $A$ on $P$; and $\Phi$, the \emph{Higgs field}, is a section of the adjoint bundle $\textrm{ad} (P)$. The moduli space of monopoles on $P \rightarrow X$ is the space of equivalence classes of solutions to \eqref{eqn:Bogomolny} with respect to the action of the gauge group $\text{Aut} (P)$. The Bogomolny equation is the dimensional reduction of the Yang-Mills anti-self-duality (ASD) equation, \emph{i.e.} monopoles on $X$ are circle-invariant instantons on $X \times \mathbb{S} ^{1}$. An immediate consequence of equation (\ref{eqn:Bogomolny}) and the Bianchi identity is \begin{equation}\label{eqn:Harmonic:Higgs:Field} d_{A}^{\ast}d_{A}\Phi =0. \end{equation} In particular, when $X$ is compact smooth monopoles coincide with reducible (assuming $\Phi \neq 0$) flat connections. In order to find irreducible solutions to (\ref{eqn:Bogomolny}) one has to consider a non-compact base manifold $X$, in the sense that either $X$ is complete or we allow for singularities of the fields $(A,\Phi)$, or a combination of the two possibilities, as in this paper. The classical case of smooth monopoles on $\mathbb{R} ^{3}$ has been investigated from many different points of view, \emph{cf.} Atiyah and Hitchin's book \cite{Atiyah:Hitchin}. An important property of the moduli spaces of monopoles on $\mathbb{R} ^{3}$ is that they are hyperk\"aler manifolds by virtue of an infinite dimensional hyperk\"ahler quotient. 
In the 1980's Atiyah and Hitchin found an explicit formula for the metric on the moduli space of centred charge $2$ $SU(2)$ monopoles on $\mathbb{R} ^{3}$. From this formula it follows that the Atiyah--Hitchin manifold is a gravitational instanton (a complete hyperk\"ahler $4$--manifold with finite $L^{2}$--norm of the curvature) of type ALF: the volume of large geodesic balls of radius $r$ grows like $r^{3}$, the complement of a large ball is a circle bundle over $\mathbb{R} ^{3}/\mathbb{Z}_{2}$ and the metric is asymptotically adapted to this circle fibration. Pursuing the idea that moduli spaces of solutions to dimensional reductions of the Yang--Mills ASD equations on $\mathbb{R} ^{4}$ are ``a natural place to look for gravitational instantons'' \cite{Cherkis:Talk}, in \cite{Cherkis:Kapustin:1}, \cite{Cherkis:Kapustin:3} and \cite{Cherkis:Kapustin:2} Cherkis and Kapustin introduced the study of \emph{periodic monopoles}, \emph{i.e.} monopoles on $\mathbb{R} ^{2} \times \mathbb{S} ^{1}$, possibly with isolated singularities at a finite collection of points. They argued that, when $4$--dimensional, moduli spaces of periodic monopoles (with singularities) are gravitational instantons of type ALG: the volume of large balls grows quadratically and the metric is asymptotically adapted to a fibration by $2$--dimensional tori. In \cite{Foscolo:Deformation} we proved that for generic choices of parameters the moduli space $\mathcal{M}_{n,k}$ of $SO(3)$ periodic monopoles of charge $k$ with $n$ singularities is either empty or a smooth hyperk\"ahler manifold of dimension $4(k-1)$. Here the choice of generic parameters guarantees that $\mathcal{M}_{n,k}$ does not contain reducible solutions. In this paper we address the existence question and construct periodic solutions to \eqref{eqn:Bogomolny} by gluing methods. The main result of the paper is the following theorem. We refer to Corollary \ref{cor:Existence} for a more precise statement. 
\begin{thm}\label{thm:Main:Theorem} Under appropriate assumptions on the parameters defining the boundary conditions, there exist irreducible $SO(3)$ periodic monopoles (with singularities), \emph{i.e.} the moduli space $\mathcal{M}_{n,k}$ contains smooth points. \end{thm} Monopoles on $\mathbb{R} ^{3}$ (with structure group $SU(2)$ and without singularities) were themselves constructed via gluing methods in a seminal work by Taubes \cite[Theorem 1.1 \S IV.1]{Jaffe:Taubes}. Furthermore, Cherkis and Kapustin's physically-motivated computation of the asymptotics of the $L^{2}$--metric on the moduli space of periodic monopoles \cite{Cherkis:Kapustin:3} is based on the idea that a charge $k$ monopole is a non-linear superposition of particle-like charge $1$ components. The theorem confirms this expectation and we plan to exploit the gluing construction to recover Cherkis and Kapustin's asymptotic formula for the $L^{2}$--metric on the moduli spaces in a future paper. The main steps and ingredients of Taubes's original gluing construction for Euclidean monopoles without singularities are: \begin{itemize} \item[(i)] Charge 1 monopoles on $\mathbb{R} ^{3}$ are completely explicit, as we will see in Section \ref{sec:Prasad:Sommerfield:Monopole}. Up to translations and scaling there exists a unique solution, localised around the origin in $\mathbb{R}^{3}$. \item[(ii)] Given $k$ points far apart in $\mathbb{R}^{3}$, Taubes constructs an approximate solution to (\ref{eqn:Bogomolny}) by patching together $k$ charge 1 monopoles, each localised around one of these points; the choice of gluing maps accounts for a further $k-1$ parameters. \item[(iii)] The approximate solution is deformed to a genuine monopole by an application of the implicit function theorem. \end{itemize} The first difficulty in implementing the construction in the periodic case is that not even charge $1$ periodic monopoles are explicitly known. 
In fact, numerical experiments by Ward \cite{Ward} show that a very different behaviour should be expected depending on the sign of the \emph{mass}, the constant term $v$ in the expansion of $|\Phi|$ at infinity, \emph{cf.} Definition \ref{def:Boundary:Conditions}. When $v$ is positive and large, charge $1$ periodic monopoles are concentrated in an almost spherical region around their centre. When the mass is negative and large in absolute value, the monopoles are instead localised in a slab containing \emph{two} maxima of the energy density. As a consequence, the construction of a charge $k$ periodic monopole as a superposition of $k$ charge $1$ monopoles can be carried out only when the charge $1$ constituents have large positive mass. There are two ways of arranging this. On the one hand, one can consider periodic monopoles with large mass $v$. By scaling, the large mass limit $v \rightarrow +\infty$ is equivalent to the large radius limit $\mathbb{R} ^{2} \times \mathbb{R} /2\pi v\mathbb{Z} \rightarrow \mathbb{R} ^{3}$. Here nothing is special about the case $X=\mathbb{R} ^{2} \times \mathbb{S} ^{1}$ and it is conceivable that large mass monopoles exist on any $3$--manifold satisfying appropriate conditions. More interestingly, we will exploit the fact that the Green's function of $\mathbb{R} ^{2} \times \mathbb{S} ^{1}$ grows logarithmically at infinity (\emph{cf.} Lemma \ref{lem:Asymptotics:Periodic:Dirac:Higgs:Field}) to construct periodic monopoles with arbitrary mass $v$ and $n<2(k-1)$ singularities. These solutions are described qualitatively as the superposition of widely separated charge $1$ components which get more and more concentrated around their respective centres as these recede from each other. \subsection*{Plan of the paper} The proof of Theorem \ref{thm:Main:Theorem} proceeds in several steps. 
The basic building blocks used in the gluing construction---periodic Dirac monopoles and charge $1$ Euclidean monopoles---are introduced in Section \ref{sec:Preliminaries} together with further background material. The starting point of the construction is a certain singular solution to the Bogomolny equation described in Section \ref{sec:Sum:Dirac:Monopoles}: given singularities $p_{1}, \ldots, p_{n}$ and $k$ additional well-separated points $q_{1}, \ldots , q_{k}$, we construct a reducible solution to the Bogomolny equation on $(\R^{2} \times \Sph ^{1}) \setminus \{ p_{1}, \ldots, p_{n}, q_{1}, \ldots, q_{k} \}$ by taking a sum of periodic Dirac monopoles. Consideration of the asymptotic behaviour of this reducible solution leads to the definition of boundary conditions as in Cherkis--Kapustin \cite{Cherkis:Kapustin:1} and \cite{Cherkis:Kapustin:2}. We want to think of $q_{1}, \ldots, q_{k}$ as the centres of highly concentrated charge $1$ monopoles. To this end it is necessary to assume that either \begin{itemize} \item[(A)] the mass $v$ is sufficiently large, or \item[(B)] when the number of singularities $n$ is less than $2(k-1)$, $q_{1}, \ldots , q_{k}$ are sufficiently far away from each other and from the singularities $p_{1}, \ldots, p_{n}$. \end{itemize} We refer to (A) and (B) as the \emph{high mass} and \emph{large distance} cases respectively. Under either of these hypotheses, in Section \ref{sec:Approximate:Solutions} we construct initial approximate solutions to the Bogomolny equation by gluing scaled Euclidean charge $1$ monopoles in a neighbourhood of $q_{1}, \ldots, q_{k}$ to resolve the singularities of the sum of periodic Dirac monopoles. By varying the centres and phases (thought of as fixing the choice of gluing maps) of the glued-in charge $1$ monopoles, we obtain a $4(k-1)$--dimensional family of inequivalent approximate solutions. 
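The count $4(k-1)$ can be summarised by the following heuristic bookkeeping (this gloss is ours; the precise statement is the construction of Section \ref{sec:Approximate:Solutions}): each glued-in charge $1$ monopole carries a centre in $\R^{2} \times \Sph ^{1}$ and a gluing phase, while the overall centre of mass is absorbed by the normalisation of the points $q_{1}, \ldots, q_{k}$ and an overall phase rotation does not produce inequivalent solutions, so
\[
\underbrace{3k}_{\text{centres}}+\underbrace{k}_{\text{phases}}-\underbrace{3}_{\text{centre of mass}}-\underbrace{1}_{\text{overall phase}}=4(k-1).
\]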
The next step of the construction is to deform the initial approximate solutions into genuine monopoles by means of the Implicit Function Theorem. The crucial step, tackled in Section \ref{sec:Linear}, is to study the linearised equation. A first difficulty arises from the fact that, if one fixes the boundary conditions (\emph{i.e.} works with weighted Sobolev spaces forcing certain decay), there is a $3$--dimensional space of obstructions to the solvability of the linearised equation. There are two ways to proceed: \begin{itemize} \item[(i)] enlarge the Banach spaces in which to solve the Bogomolny equation by allowing the appropriate changes of asymptotics; \item[(ii)] consider the centre of mass of the centres of the glued-in Euclidean charge $1$ monopoles as a free parameter to be fixed only at the end of the construction to compensate for the obstructions. \end{itemize} We follow this second approach as it seems more appropriate to construct a whole $4(k-1)$--parameter family of monopoles in a fixed moduli space. In order to study the linearised problem, we adopt the strategy of studying the linearised equation separately on the building blocks, the charge $1$ Euclidean monopoles and the sum of periodic Dirac monopoles. In the former case, there are no obstructions to the solvability of the linearised equation and the use of weighted Sobolev spaces allows us to obtain uniform estimates for the norm of a right inverse. In the latter case, we can solve the linearised equation in the chosen weighted Sobolev spaces only modulo obstructions. Furthermore, for technical reasons we have to distinguish between the high mass and large distance cases (A) and (B) above. \begin{itemize} \item[(A)] Under the assumption that the points $q_{1}, \ldots, q_{k}$ are contained in a fixed compact set of $\R^{2} \times \Sph ^{1}$ and that the mass $v$ is sufficiently large, we use the weighted Sobolev spaces and estimates of \cite{Foscolo:Deformation} without major difficulties. 
\item[(B)] When the points $q_{1}, \ldots, q_{k}$ move off to infinity, instead, an additional technical difficulty arises from the following fact: it is well-known that for all $f \in C^{\infty}_{0}(\mathbb{R} ^{2})$ with mean value zero there exists a bounded solution $u$, unique up to the addition of a constant, to $\triangle u =f$ with $\| \nabla u \| _{L^{2}} < \infty$. However, if $f$ is supported on the union of balls $B_{1}(z_{1}) \cup B_{1}(z_{2})$, say, with non-zero mean value on each of them, then $\| \nabla u \| ^{2}_{L^{2}} \geq C \log{|z_{1}-z_{2}|}$. As a consequence, in the large distance case the linearised operator is not well-behaved on a certain finite-dimensional space of sections which has to be dealt with separately. \end{itemize} In Section \ref{sec:Linearised:Equation:Modulo:Obstructions} we patch together the local inverses of the linearised operator on the different building blocks and by a simple iteration solve the linearised equation globally modulo obstructions. Finally, in Section \ref{sec:Deformation} we conclude the construction of solutions to the Bogomolny equation satisfying the required boundary conditions by an application of the Implicit Function Theorem. \subsection*{Acknowledgments} The results of this paper are part of the author's Ph.D. thesis at Imperial College London. He wishes to thank his supervisor Mark Haskins for his support. Olivier Biquard guided early stages of this project; we thank him for suggesting this problem to us. The author is grateful to Hans-Joachim Hein for discussions while this work was being developed and to Simon Donaldson and Michael Singer for their careful comments on an early version of this paper. The writing process was completed while the author was a Simons Instructor at Stony Brook University. \section{Preliminaries}\label{sec:Preliminaries} In this section we describe in some detail the simple components to be patched together in the gluing construction. 
We begin with periodic Dirac monopoles, \emph{i.e.} abelian solutions to the Bogomolny equation on $\mathbb{R} ^{2} \times \mathbb{S} ^{1}$ with a singularity at one point, recalling the asymptotic expansions proved in \cite{Foscolo:Deformation}. Secondly, we collect the main properties of the basic Euclidean monopole, the Prasad--Sommerfield monopole. As a preliminary, and mainly to fix some notation, we give a brief overview of the deformation theory of monopoles on an arbitrary $3$--manifold. \subsection{Deformation theory of monopoles} Let $(X,g)$ be a non-compact oriented Riemannian $3$--manifold endowed with a principal $G$--bundle $P \rightarrow X$. Denote by $\mathcal{C}$ the infinite-dimensional space of smooth pairs $c=(A,\Phi)$, where $A$ is a connection on $P \rightarrow X$ and $\Phi \in \Omega ^{0}(X;\textrm{ad} P)$ a Higgs field. Since $X$ is non-compact, elements of $\mathcal{C}$ have to satisfy appropriate boundary conditions, which we suppose to be included in the definition of $\mathcal{C}$. The configuration space $\mathcal{C}$ is an affine space; the underlying vector space is the space of sections \[ \Omega (X;\textrm{ad} P):=\Omega ^{1}(X;\textrm{ad} P)\oplus\Omega ^{0}(X;\textrm{ad} P) \] satisfying appropriate decay conditions. Let $\mathcal{G}$ be the group of bounded smooth sections of $\text{Aut}(P)$ which preserve the chosen boundary conditions. Here $g \in \text{Aut}(P)$ acts on a pair $c=(A,\Phi) \in \mathcal{C}$ by $c \mapsto c + (d_{1}g)g^{-1}$, where \begin{equation}\label{eqn:Linearisation:Gauge:Action} d_{1}g = -\left( d_{A}g, [\Phi,g] \right) \in \Omega (X; \textrm{ad} P). \end{equation} Consider the gauge-equivariant map $\Psi\colon\thinspace \mathcal{C} \rightarrow \Omega ^{1}(X;\textrm{ad} P)$ defined by \[ \Psi (A,\Phi )= \ast F_{A}-d_{A}\Phi. \] By fixing a base point $c=(A,\Phi )\in\mathcal{C}$ we write $\Psi (A+a,\Phi +\psi )=\Psi (c)+d_{2}(a,\psi )+(a,\psi )\cdot (a, \psi )$ for all $(a,\psi )\in\Omega (X; \textrm{ad} P)$. 
The linearisation $d_{2}$ of $\Psi$ at $c$ and the quadratic term are defined by: \begin{equation}\label{eqn:Linearisation:Bogomolny} d_{2}(a,\psi )=\ast d_{A}a-d_{A}\psi +[\Phi ,a] \end{equation} \begin{equation}\label{eqn:Quadratic:Term:Bogomolny} (a,\psi )\cdot (a, \psi )=\ast [a,a]-[a,\psi ] \end{equation} The linearisation at $c$ of the action of $\mathcal{G}$ on $\mathcal{C}$ is the operator $d_{1}\colon\thinspace \Omega ^{0}(X;\textrm{ad}\,P) \rightarrow \Omega (X; \textrm{ad} P)$ defined as in \eqref{eqn:Linearisation:Gauge:Action}. We couple $d_{2}$ with $d_{1}^{\ast}$ to obtain an elliptic operator \begin{equation}\label{eqn:Dirac:Operator} D=d_{2}\oplus d_{1}^{\ast }\colon\thinspace \Omega (X; \textrm{ad} P)\longrightarrow \Omega (X; \textrm{ad} P). \end{equation} The moduli space $\mathcal{M}$ of monopoles in $\mathcal{C}$ is $\mathcal{M}=\Psi ^{-1}(0)/\mathcal{G}$. If $\mathcal{M}$ is smooth, the tangent space $T_{[c]}\mathcal{M}$ at a point $c=(A,\Phi)$ is identified with $\ker D$. The operator $D$ is a twisted Dirac operator on $\Omega (X; \textrm{ad} P)$. To see this, recall that Clifford multiplication is defined by \begin{equation}\label{eqn:Clifford:Multiplication} \gamma (\alpha)\beta = \alpha \wedge \beta - \alpha ^{\sharp} \lrcorner\, \beta \end{equation} for a $1$--form $\alpha$ and a $k$--form $\beta$. Then $D = \tau\slashed{D}_{A} + [\Phi ,\cdot\, ]$, where $\slashed{D}_{A}$ is the twisted Dirac operator \[ \Omega ^{1} \oplus \Omega ^{0} \xrightarrow{(\text{id},\ast)} \Omega ^{1} \oplus \Omega ^{3} \xrightarrow{\gamma\, \circ \nabla _{A}} \Omega ^{2} \oplus \Omega ^{0} \xrightarrow{(\ast, \text{id})} \Omega ^{1} \oplus \Omega ^{0}. \] and $\tau$ is a sign operator with $\tau = 1$ on $1$--forms and $\tau =-1$ on $0$--forms. \subsection{Periodic Dirac monopole}\label{sec:Periodic:Dirac:Monopole} When the structure group is $G=U(1)$, the Bogomolny equation \eqref{eqn:Bogomolny} reduces to a linear equation. 
By \eqref{eqn:Harmonic:Higgs:Field} the Higgs field $\Phi$ is a harmonic function such that $\frac{\ast d\Phi}{2\pi i}$ represents the first Chern class of a line bundle. The assumption that $\Phi$ is bounded thus forces $\Phi$ to be constant. Non-trivial abelian solutions, so-called \emph{Dirac monopoles}, are obtained by allowing an isolated singularity. On $\mathbb{R}^{3}$ these are defined as follows. \begin{definition}\label{def:Euclidean:Dirac:Monopole} Fix a point $q \in \mathbb{R} ^{3}$ and let $H_{q}$ denote the radial extension of the Hopf line bundle to $\mathbb{R} ^{3} \setminus \{ q \}$. Fix $k \in \mathbb{Z}$ and $v \in \mathbb{R}$. The \emph{Euclidean Dirac monopole} of charge $k$ and mass $v$ with singularity at $q$ is the abelian monopole $(A^{0}, \Phi^{0})$ on $H_{q}^{k}$, where \[ \Phi^{0} =i\left( v-\frac{k}{2|x-q|} \right), \] $x \in \mathbb{R} ^{3}$, and $A^{0}$ is the $SO(3)$--invariant connection on $H_{q}^{k}$ with curvature $\ast d\Phi ^{0}$. \end{definition} Periodic Dirac monopoles are defined in a similar way. Fix coordinates $(z,t)\in \mathbb{C} \times \mathbb{R} /2\pi \mathbb{Z}$ and a point $q=(z_{0},t_{0})\in \mathbb{R} ^{2} \times \mathbb{S} ^{1}$. Line bundles of a fixed degree on $(\R^{2} \times \Sph ^{1}) \setminus \{ q \}$ differ by tensoring by flat line bundles. We can distinguish connections with the same curvature by comparing their holonomy around loops $\gamma _{z}:=\{ z \} \times \mathbb{S} ^{1}_{t}$ for $z \neq z_{0}$. Set $\theta _{q}=\text{arg}(z-z_{0})$ and fix an origin in the circle parametrised by $\theta _{q}$. Denote by $L_{q}$ the degree $1$ line bundle on $(\R^{2} \times \Sph ^{1}) \setminus \{ q \}$ with connection $A_{q}$ whose holonomy around $\gamma _{z}$ is $e^{-i\theta _{q}}$. Any line bundle of degree $1$ is of the form $L_{q} \otimes L_{b}$ for some flat line bundle $L_{b}$. \begin{definition}\label{def:Periodic:Dirac:Monopole} Fix a point $q \in \R^{2} \times \Sph ^{1}$. 
The \emph{periodic Dirac monopole} of charge $k \in \mathbb{Z}$, with singularity at $q$ and twisted by the flat line bundle $L_{v,b}$ for some $v \in \mathbb{R}$ and $b \in \mathbb{R} /\mathbb{Z}$ is the pair $(A, \Phi)$ on $L_{q}^{k} \otimes L_{v,b}$, where \[ \Phi =i \left( v+kG_{q} \right) \] and up to gauge transformations the connection is $A= kA_{q}+ib\, dt$. Here $G_{q}$ is the Green's function of $\R^{2} \times \Sph ^{1}$ with singularity at $q$ defined in Lemma \ref{lem:Asymptotics:Periodic:Dirac:Higgs:Field} below. \end{definition} In \cite{Foscolo:Deformation} we derived asymptotic expansions for the Green's function $G_{q}$ and the connection $A_{q}$, both at infinity and close to the singularity. As these expansions will be essential for the gluing construction, we recall them here. By taking coordinates centred at $q \in \R^{2} \times \Sph ^{1}$, we can assume that the singularity is located at $q=0$. We use polar coordinates $z=re^{i\theta} \in \mathbb{C}$. \begin{lemma}[{Lemma 3.4 in \cite{Foscolo:Deformation}}]\label{lem:Asymptotics:Periodic:Dirac:Higgs:Field} There exists a Green's function $G$ of $\R^{2} \times \Sph ^{1}$ with singularity at $0$ such that the following holds. \begin{itemize} \item[(i)] There exists a constant $C_{1}>0$ such that \[ \left| \nabla ^{k}\left( G(z,t)-\frac{1}{2\pi }\log {r}\right)\right| \leq C_{1}e^{-r} \] for all $r\geq 2$ and $k=0,1,2$. \item[(ii)] There exists a constant $C_{2}>0$ such that \[ \left| \nabla ^{k}\left( G(z,t)-\frac{a_{0}}{2}+\frac{1}{2\rho }\right)\right| \leq C_{2}\rho ^{2-k} \] for all $(z,t)$ with $\rho=\sqrt{r^{2}+t^{2}}< \frac{\pi}{2}$ and $k=0,1,2$. \end{itemize} \end{lemma} Fix a constant $v \in \mathbb{R}$ and consider the Higgs field $\Phi =iv+iG$. The $2$--form $i\ast dG$ represents the curvature of a line bundle $L$ over $(\R^{2} \times \Sph ^{1})\setminus\{ 0\}$. 
In a neighbourhood of the singularity $L$ is isomorphic to the Hopf line bundle extended radially from a small sphere $\mathbb{S} ^{2}$ enclosing the origin. Any connection $A$ on $L$ with $F_{A}=\ast d\Phi$ is asymptotically gauge equivalent to $A^{0}$ as $\rho \rightarrow 0$. At infinity $L$ is isomorphic to the radial extension of a line bundle of degree $1$ over the torus $\mathbb{T}^{2}_{\infty}$. We choose a representative for $A$ in this asymptotic model as follows. Consider the connection $A^{\infty}=-i\frac{t}{2\pi}d\theta$ on the trivial line bundle $\underline{\mathbb{C}}$ over $\mathbb{S} ^{1}_{\theta} \times \mathbb{R}_{t}$. Let $\tau$ be the map of $\underline{\mathbb{C}}$ into itself defined by $\tau (e^{i\theta },t,\xi )=(e^{i\theta },t+2\pi,e^{i\theta}\xi )$ and define a line bundle with connection over $\mathbb{T}^{2}_{\theta ,t}$ as the quotient $(\underline{\mathbb{C}},A^{\infty })/\tau$. As $r\rightarrow\infty$, up to gauge transformations $A$ is asymptotic to $A^{\infty}+i\alpha\, d\theta +ib\, dt$ for some $\alpha ,b\in\mathbb{R}/\mathbb{Z}$. The monodromy of this limiting connection is $e^{-i\theta-2\pi ib}$ around the circle $\{\theta\}\times \mathbb{S} ^{1}_{t}$ and $e^{it-2\pi i\alpha}$ around the circle $\mathbb{S} ^{1}_{\theta}\times\{ t\}$. While $b$ can be chosen arbitrarily, by \cite[Lemma 3.6]{Foscolo:Deformation} $\alpha$ is fixed by the Bogomolny equation \eqref{eqn:Bogomolny}, $\alpha =\frac{1}{2}$ modulo $\mathbb{Z}$. \begin{lemma}[{Lemma 3.6 in \cite{Foscolo:Deformation}}]\label{lem:Asymptotics:Periodic:Dirac:Connection} Fix parameters $(v,b) \in \mathbb{R} \times \mathbb{R} /\mathbb{Z}$. Let $(A,\Phi )$ be a solution to (\ref{eqn:Bogomolny}) with $\Phi =i \left( v+G \right)$ and such that the holonomy of $A$ around circles $\{ re^{i\theta} \} \times \mathbb{S} ^{1}_{t}$, $r \neq 0$, is $e^{-i\theta-2\pi ib}$. 
\begin{itemize} \item[(i)] In the region where $r\geq 2$ the connection $A$ is gauge equivalent to \[ A^{\infty}+\frac{i}{2}\, d\theta +ib\, dt+a \] for a $1$--form $a$ such that $d^{\ast}a=0=\partial _{r} \lrcorner\, a$ and $|a|+|\nabla a|=O(e^{-r})$. \item[(ii)] In a ball of radius $\frac{\pi}{2}$ centred at the singular point $z=0=t$, $A$ is gauge equivalent to $A^{0}+a'$ where $|a'|+\rho |\nabla a'|=O(\rho ^{2})$ and $d^{\ast}a'=0=\partial _{\rho}\lrcorner \, a'$. \end{itemize} \end{lemma} Given an arbitrary point $q=(z_{0},t_{0})$ in $\R^{2} \times \Sph ^{1}$ the same formulas describe the asymptotic behaviour of the periodic Dirac monopole $(A_{q},\Phi _{q})$ with singularity at $q$ in coordinates centred at $q$. It will be useful to express the behaviour of $(A_{q},\Phi _{q})$ at large distances from $q$ in a fixed coordinate system. \begin{lemma}[{Lemma 3.7 in \cite{Foscolo:Deformation}}]\label{lem:Asymptotics:Periodic:Dirac:Translations} For $r\geq 2|z_{0}|$ we have \[ \begin{gathered} \frac{1}{i}\Phi _{q}(z,t) = v+\frac{1}{2\pi}\log{r}-\frac{1}{2\pi}\, \text{Re}\left( \frac{z_{0}}{z}\right) +O(r^{-2}) \\ A_{q}(z,t) = A^{\infty } + ib\, dt + i\, \frac{t_{0}+\pi}{2\pi}\, d\theta -\frac{i}{2\pi}\, \text{Im}\left( \frac{z_{0}}{z}\right) dt+O(r^{-2}). \end{gathered} \] \end{lemma} Finally, notice that the parameters $(v,b) \in \mathbb{R} \times \mathbb{R} /\mathbb{Z}$ are related to rotations and dilations. By a rotation in the $z$--plane, we can always assume that $b=0$. On the other hand, given any $\lambda>0$ consider the homothety \[ h_{\lambda }:\mathbb{R} ^{2} \times \mathbb{R}/2\pi\mathbb{Z}\longrightarrow \mathbb{R} ^{2} \times \mathbb{R}/2\pi\lambda\mathbb{Z} \] of ratio $\lambda$. 
Since the Bogomolny equation is the dimensional reduction of the ASD equation, which is conformally invariant, $(h_{\lambda }^{\ast}A ,\lambda\, h_{\lambda }^{\ast }\Phi )$ is a monopole on $\mathbb{R} ^{2} \times \mathbb{R}/2\pi\mathbb{Z}$ if and only if $(A,\Phi )$ solves the Bogomolny equation on $\mathbb{R} ^{2} \times \mathbb{R}/2\pi\lambda\mathbb{Z}$. A simple but crucial observation for the gluing construction is that given a periodic Dirac monopole $(A_{q},\Phi_{q})$ with mass $v$, then as $v\rightarrow \infty$ \[ \lambda^{-1}h_{\lambda ^{-1}}^{\ast }\Phi\longrightarrow i\left( 1-\frac{1}{2\sqrt{r^{2}+t^{2}}}\right), \] where we set $\lambda =v+\frac{a_{0}}{2}$. In other words, the limit $v\rightarrow \infty$ corresponds to the limit $\mathbb{R}^{2}\times \mathbb{S} ^{1}\rightarrow\mathbb{R}^{3}$ and in this limit a periodic Dirac monopole converges to a Euclidean Dirac monopole. \subsection{Charge 1 monopoles on $\mathbb{R}^{3}$}\label{sec:Prasad:Sommerfield:Monopole} As we will see in the next section, periodic Dirac monopoles will serve to construct an initial \emph{singular} solution to the Bogomolny equation. We now describe the model solution that we will use to desingularise this initial solution. In 1975 Prasad and Sommerfield \cite{Prasad:Sommerfield} found an explicit smooth finite energy solution to the Bogomolny equation on $\mathbb{R} ^{3}$ with structure group $SU(2)$. By translations and scaling, this explicit solution accounts for all $SU(2)$ Euclidean monopoles of charge $1$. We collect the main properties of the Prasad--Sommerfield monopole following Atiyah--Hitchin \cite{Atiyah:Hitchin} and Taubes \cite{Jaffe:Taubes}. First, we fix some notation. For $x \in \mathbb{R} ^{3}$, set $\rho = |x|$ and $\hat{x}=\rho ^{-1}x$. 
Denote by $\sigma$ the vector $\sigma =(\sigma _{1},\sigma _{2},\sigma _{3}) \in \mathbb{R} ^{3} \otimes \Lie{su}_{2}$, where $\{ \sigma _{1}, \sigma _{2} ,\sigma _{3} \}$ is the standard orthonormal basis of $\Lie{su}_{2}$ defined in terms of the Pauli matrices. In particular, $\sigma _{3} = \operatorname{diag}{(\frac{i}{2},-\frac{i}{2})}$. The Prasad--Sommerfield (PS) monopole $c_{\text{PS}}=(A_{\text{PS}},\Phi _{\text{PS}})$ is given by the explicit formula, \emph{cf.} \cite[IV.1, Equation 1.15]{Jaffe:Taubes}: \begin{alignat}{2}\label{eqn:PS:Monopole} \Phi _{\text{PS}}(x)=\left( \frac{1}{\tanh{(\rho )}}-\frac{1}{\rho }\right) \, \hat{x}\cdot \sigma \qquad & A_{\text{PS}}(x)=\left( \frac{1}{\sinh{(\rho )}}-\frac{1}{\rho }\right) \, (\hat{x}\times\sigma)\cdot dx \end{alignat} Here $\cdot$ and $\times$ are the scalar and vector product in $\mathbb{R} ^{3}$, respectively. To simplify the notation we will often drop the subscript $\text{PS}$ throughout this section. In \eqref{eqn:PS:Monopole} we fixed the mass $v=1$. A monopole with arbitrary mass $v>0$ is obtained by scaling. The following properties of the PS monopole follow directly from \eqref{eqn:PS:Monopole}, \emph{cf.} \cite[\S IV.1]{Jaffe:Taubes}. \begin{lemma}\label{lem:Properties:PS:Monopole} The pair $(A_{\text{PS}},\Phi _{\text{PS}})$ is a solution of the Bogomolny equation (\ref{eqn:Bogomolny}) with finite energy. \begin{itemize} \item[(i)] $\Phi$ has exactly one zero, $\Phi (0)=0$. \item[(ii)] $|\Phi (x)|<1$ and $1-|\Phi (x)|=\frac{1}{\rho} +O(e^{-2\rho})$. \item[(iii)] By (i), over $\mathbb{R} ^{3}\setminus\{ 0\}$ we can decompose each $\Lie{su}_{2}$--valued form $u$ into diagonal and off-diagonal part $u =u_{D}+u_{T}$, where $u_{D}= |\Phi |^{-2} \langle u,\Phi\rangle\Phi $. Then $|(d_{A}\Phi )_{D}|=O(\rho ^{-2})$ and $|(d_{A}\Phi )_{T}|=O(e ^{-\rho })$. 
\end{itemize} \end{lemma} In particular, over $\mathbb{R}^{3}\setminus \{ 0\}$ the trivial rank 2 complex vector bundle splits as a sum $H\oplus H^{-1}$ of eigenspaces of $\Phi$. By Lemma \ref{lem:Properties:PS:Monopole}.(i) $H$ is the radial extension of the Hopf line bundle. The adjoint bundle $\left( \mathbb{R} ^{3} \setminus \{ 0 \} \right) \times \Lie{su}_{2}$ splits as a sum $\underline{\mathbb{R}}\oplus H^{2}$. We refer to such a gauge over $\mathbb{R} ^{3} \setminus \{ 0 \}$ as the \emph{asymptotically abelian gauge} because it yields an asymptotic isomorphism between the PS monopole and a charge $1$ Euclidean Dirac monopole. The isomorphism $\eta\colon\thinspace \underline{\mathbb{C}}^{2} \rightarrow H\oplus H^{-1}$ can be made explicit, \emph{cf.} \cite[\S IV.7, 7.1 and 7.2]{Jaffe:Taubes}. A direct computation then yields the following asymptotic expansions. \begin{lemma}\label{lem:Abelian:Gauge:PS:Monopole} Let $(A,\Phi )$ be the PS monopole defined in (\ref{eqn:PS:Monopole}). \begin{itemize} \item[(i)] There exists an isomorphism $\eta :\underline{\mathbb{C}}^{2} \rightarrow H\oplus H^{-1}$ over $\mathbb{R} ^{3}\setminus\{ 0\}$ such that $\eta (A,\Phi )=(A^{0},\Phi ^{0}) \, \sigma_{3}+(a,\psi )$, with $a$ a $1$--form and $\psi$ a $0$--form with values in the $SO(3)$--bundle $\underline{\mathbb{R}} \oplus H^{2}$. \item[(ii)] $(a,\psi )$ satisfies $d_{ A^{0} \sigma_{3} }^{\ast}a=0=[\Phi ^{0} \sigma_{3},\psi ]$ and $\partial_{\rho}\, \lrcorner\, a=0$. Moreover, as $\rho\rightarrow\infty$: \[ |a|+|\psi |+|d_{ A^{0}\sigma_{3} }a|+|[\Phi ^{0}\sigma_{3},a]|+|d_{ A^{0}\sigma_{3} }\psi |=O(e^{-\rho }) \] \end{itemize} \end{lemma} Therefore $\eta$ puts $(A,\Phi)$ in ``Coulomb gauge'' with respect to $(A^{0}, \Phi ^{0})\, \sigma _{3}$. Without altering properties (i) and (ii), we have the freedom to change $\eta$ by composing with an element in the stabiliser $U(1)$ of $(A^{0}, \Phi ^{0})\, \sigma _{3}$. 
By abuse of notation, we will not distinguish between $\eta$ and the induced isomorphism of $SO(3)$--bundles $\left( \mathbb{R} ^{3} \setminus \{ 0 \} \right) \times \Lie{su}_{2} \simeq \underline{ \mathbb{R} } \oplus H^{2}$. As an isomorphism of $SO(3)$--bundles, we have the freedom to compose $\eta$ with an element of $SO(2)$, where $U(1) \rightarrow SO(2)$ is the double cover induced by the adjoint representation $SU(2) \rightarrow SO(3)$. The deformation theory of charge $1$ monopoles on $\mathbb{R} ^{3}$ can be understood explicitly. Let $D$ be the Dirac operator (\ref{eqn:Dirac:Operator}) twisted by the PS monopole. The $L^{2}$--kernel of $D$ is $4$--dimensional, spanned over $\mathbb{H}$ by the vector $(d_{A}\Phi ,0)$, \emph{i.e.} \[ \ker{D}=\langle\, (d_{A}\Phi ,0), \gamma (dx_{h}) \, (d_{A}\Phi ,0 ), h=1,2,3\, \rangle _{\mathbb{R}}, \] where $\gamma (dx_{h})$ denotes the Clifford multiplication. We can explicitly integrate these infinitesimal deformations. Choose $x_{0}\in\mathbb{R}^{3}$ and let $T_{x_{0}}$ be the translation $x\mapsto x-x_{0}$. Then $T_{x_{0}}^{\ast }c_{\text{PS}}$ is still a solution to the Bogomolny equation. The corresponding infinitesimal deformation is $-\gamma (x_{0})\, (d_{A}\Phi, 0)=-x_{0} \, \lrcorner \, (F_{A},d_{A}\Phi)$. On the other hand, $(d_{A}\Phi,0)$ is the infinitesimal action of the gauge transformation $\exp{(-\Phi)}$. For future use, in the next lemma we put $T_{x_{0}}^{\ast }c_{\text{PS}}$ in ``Coulomb gauge'' with respect to $c_{\text{PS}}$ and derive some useful estimates. \begin{lemma}\label{lem:PS:Translations} There exist $\kappa$, $C$ and $\rho _{0}>0$ such that the following holds. For any $x_{0} \in \mathbb{R} ^{3}$ with $|x_{0}| < \kappa$ there exists a solution $(A_{x_{0}},\Phi _{x_{0}})$ to the Bogomolny equation which can be written \[ (A_{x_{0}},\Phi _{x_{0}}) = (A,\Phi) -x_{0} \, \lrcorner \, (F_{A},d_{A}\Phi) + (a_{x_{0}}, \psi _{x_{0}}), \] where $d_{1}^{\ast}(a_{x_{0}}, \psi _{x_{0}})=0$. 
Here $d_{1}$ is the linearisation at $c_{\text{PS}}=(A,\Phi)$ of the action of gauge transformations. Moreover, $|(a_{x_{0}}, \psi _{x_{0}})| \leq C\frac{|x_{0}|^{2}}{\rho ^{3}}$ for all $\rho \geq \rho _{0}$. \proof We only sketch the proof, as the statement seems to be well known. We will prove later, \emph{cf.} Proposition \ref{prop:Linearised:Equation:Uj}, that the operator $DD^{\ast}\colon\thinspace W^{2,2}_{w, \delta} \rightarrow L^{2}_{w, \delta -2}$ is an isomorphism for all $\delta \in (-1, 0)$. Here the spaces $W^{m,2}_{w,\delta}$ are defined in Definition \ref{def:Weighted:Spaces:Uj}. Using Lemma \ref{lem:Properties:PS:Monopole}.(iii) to see that $d_{A}\Phi \in W^{1,2}_{w,\delta-1}$ one then shows that $DD^{\ast} - 2x_{0}\, \lrcorner \, (F_{A},d_{A}\Phi)\cdot D^{\ast}$ remains an isomorphism if $\kappa$ is sufficiently small. The Implicit Function Theorem implies that there exists a unique solution $u \in W^{2,2}_{w,\delta}$ to the equation \[ DD^{\ast}u + \big( - x_{0}\, \lrcorner \, (F_{A},d_{A}\Phi) + D^{\ast}u \big) \cdot \big( - x_{0}\, \lrcorner \, (F_{A},d_{A}\Phi) + D^{\ast}u \big) =0, \] \emph{i.e.} such that $(A,\Phi) - x_{0}\, \lrcorner \, (F_{A},d_{A}\Phi) + D^{\ast}u $ satisfies the Bogomolny equation and the gauge fixing condition $d_{1}^{\ast}D^{\ast}u=0$. Moreover, $\| D^{\ast}u \| _{W^{1,2}_{w,\delta-1}} \leq C |x_{0}|^{2}$ and the map $x_{0} \mapsto u \in W^{2,2}_{w,\delta}$ is smooth. It remains to show that $(a_{x_{0}}, \psi _{x_{0}})=D^{\ast}u = O(\rho ^{-3})$. Consider the equation $D\xi + \xi \cdot \xi =0$ for $\xi \in W^{1,2}_{w,\delta -1}$. By Lemma \ref{lem:Abelian:Gauge:PS:Monopole}, in the asymptotically abelian gauge we write \[ D_{0}\xi + 2 (a,\psi) \cdot \xi + \xi \cdot \xi =0, \] where $(a,\psi)$ is exponentially decaying and $D_{0}$ is the Dirac operator twisted by the Dirac monopole $(A^{0},\Phi ^{0})$. 
Decomposing $\xi = \xi_{D}+\xi_{T}$ into diagonal and transverse part, a Moser iteration argument as in \cite[Lemma 7.10]{Foscolo:Deformation} yields $\xi _{T}=O(\rho ^{-\mu})$ and $\triangle \xi _{D}=O(\rho ^{-\mu})$ for all $\mu$. Apply this result to $\xi = - x_{0}\, \lrcorner \, (F_{A},d_{A}\Phi) + D^{\ast}u$. It follows that we can write \[ D^{\ast}u=x'_{0} \, \lrcorner \, (F_{A},d_{A}\Phi) + \tau\, (d_{A}\Phi, 0) + O(\rho ^{-3}) \] for some $x'_{0} \in \mathbb{R} ^{3}$ and $\tau \in \mathbb{R}$. However, since $u \in W^{2,2}_{w,\delta}$ and $\delta \in (-1,0)$, an integration by parts shows that $D^{\ast}u$ is $L^{2}$--orthogonal to $\text{ker}\,{D}$ and therefore $x'_{0}=0=\tau$. \endproof \end{lemma} Of course, there exists a gauge transformation such that $g(A_{x_{0}},\Phi_{x_{0}}) = T^{\ast}_{x_{0}}(A,\Phi)$. Indeed, there must exist $x'_{0} \in \mathbb{R} ^{3}$ and $g$ such that $g(A_{x_{0}},\Phi_{x_{0}}) = T^{\ast}_{x'_{0}}(A,\Phi)$. On the other hand, comparing $|\Phi _{x_{0}}|$ with $|T^{\ast}_{x'_{0}}\Phi|$, one concludes that $x'_{0}=x_{0}$. \section{A sum of periodic Dirac monopoles}\label{sec:Sum:Dirac:Monopoles} In this section we construct a singular \emph{reducible} solution $c_{\text{ext}}$ to the Bogomolny equation \eqref{eqn:Bogomolny} by summing periodic Dirac monopoles. Consideration of the asymptotic behaviour of this singular solution suggests the definition of boundary conditions for periodic monopoles (with singularities) as in Cherkis--Kapustin \cite{Cherkis:Kapustin:1} and \cite{Cherkis:Kapustin:2}. The aim of the rest of the paper will be to construct periodic monopoles by desingularising $c_\text{ext}$ while preserving the boundary conditions. 
\medskip The construction of the pair $c_{\text{ext}} = (A_\text{ext},\Phi_\text{ext})$ depends on the choice of the following data: \begin{itemize} \item[(i)] A \emph{vacuum background}, \emph{i.e.} constants $v\in\mathbb{R}$ and $b\in\mathbb{R}/\mathbb{Z}$ corresponding to the flat line bundle $L_{v,b}$ on $\R^{2} \times \Sph ^{1}$ with constant Higgs field $iv$ and flat connection $d+ib\, dt$. \item[(ii)] The \emph{singularities}: a collection $S$ of $n$ distinct points $p_{i}=(m_{i},a_{i})\in \R^{2} \times \Sph ^{1}$ for $i=1,\ldots ,n$. \item[(iii)] The \emph{centres of non-abelian monopoles}: further $k$ points, pair-wise distinct and distinct from the $p_{i}$'s, which we denote by $q_{j}=(z_{j},t_{j})$ for $j=1,\ldots ,k$. \end{itemize} Fix an origin in $\mathbb{C} \times \mathbb{R} /2\pi \mathbb{Z}$ so that $z_{1}+\ldots +z_{k}=0$ and $t_{1}+\ldots +t_{k}=0$ modulo $2\pi$. Set $\mu =m_{1}+\ldots +m_{n}$ and $\alpha =a_{1}+\ldots +a_{n}$ modulo $2\pi$. Denote by $d$ the \emph{minimum distance}: \begin{equation}\label{eqn:Distance} d=\min{\left\{ |z_{j}-z_{h}|,|z_{j}-m_{i}|\, \text{ for all }\, j,h=1,\ldots k , j\neq h, \text{ and all } i=1,\ldots n\right\}} \end{equation} We assume that $d\geq 5$. Throughout the paper constants are allowed to depend on a lower bound for $d$ and on the position of the points $p_{1}, \ldots , p_{n}$. Define a harmonic function $\Phi_{\text{ext}}$ on $(\R^{2} \times \Sph ^{1}) \setminus (S \cup \{ q_{1},\ldots ,q_{k}\} )$ by \begin{equation}\label{eqn:Sum:Dirac:Higgs:Field} \Phi _{\text{ext}}= v+2\sum _{j=1}^{k}{G _{q_{j}}}-\sum _{i=1}^{n}{G _{p_{i}}}. \end{equation} Here $G_{p}$ is the Green's function of $\R^{2} \times \Sph ^{1}$ with singularity at $p$ given in Lemma \ref {lem:Asymptotics:Periodic:Dirac:Higgs:Field}. 
By Lemmas \ref{lem:Asymptotics:Periodic:Dirac:Higgs:Field} and \ref{lem:Asymptotics:Periodic:Dirac:Translations}, for large $|z|$ \begin{equation}\label{eqn:Asymptotics:Infinity:Sum:Dirac:Higgs:Field} \Phi _{\text{ext}}= v +\frac{2k-n}{2\pi}\log{|z|}+\frac{1}{2\pi}\text{Re }\left(\frac{\mu}{z}\right) +O(|z|^{-2}), \end{equation} while close to the singularity $p_{i}$ \begin{equation}\label{eqn:Asymptotics:Singularity:Sum:Dirac:Higgs:Field} \Phi _{\text{ext}}= \text{const} +\frac{1}{2\rho _{i}}+O( \rho _{i} ). \end{equation} Here $\rho _{i}=\text{dist}(p_{i},\cdot )$ and the constant term is defined by \begin{equation}\label{eqn:Mass:Singularity} \sum _{m \neq i}{ G_{p_{m}}(p_{i}) }-\frac{a_{0}}{2}+v+\frac{1}{\pi}\sum _{j=1}^{k}{\log{|z_{j}-m_{i}|}}+O\left( e^{-d}\right). \end{equation} Lemmas \ref{lem:Asymptotics:Periodic:Dirac:Higgs:Field} and \ref{lem:Asymptotics:Periodic:Dirac:Translations} yield similar expansions also for the derivatives of $\Phi _{\text{ext}}$. Finally, in the ball $B_{\frac{\pi}{2}}(q_{j})$ \begin{equation}\label{eqn:Asymptotics:Centre:Higgs:Field:Ext} \Phi _{\text{ext}} = \lambda_{j} -\frac{1}{\rho _{j}} + O\left( \frac{\rho _{j}}{d} + \rho _{j} ^{2} \right) \end{equation} where the constant $\lambda _{j}$, using once again Lemma \ref{lem:Asymptotics:Periodic:Dirac:Higgs:Field}, is defined by \begin{equation}\label{eqn:Scale} \lambda _{j}=v + a_{0} + \frac{1}{\pi}\sum _{h=1,h\neq j}^{k}{\log{|z_{h}-z_{j}|}}-\frac{1}{2\pi}\sum _{i=1}^{n}{\log{|m_{i}-z_{j}|}}+O\left( e^{-d}\right). \end{equation} We will refer to $\lambda _{j}$ as the \emph{local mass} attached to the point $q_{j}$. 
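For orientation, the leading behaviour in \eqref{eqn:Asymptotics:Infinity:Sum:Dirac:Higgs:Field} can be recovered directly from the two-dimensional logarithmic growth of the Green's functions: up to terms decaying as $|z| \rightarrow \infty$ we have $G_{(z_{0},t_{0})} \sim \frac{1}{2\pi}\log{|z-z_{0}|}$ and \[ \frac{1}{2\pi}\log{|z-z_{0}|} = \frac{1}{2\pi}\log{|z|} - \frac{1}{2\pi}\text{Re}\left( \frac{z_{0}}{z} \right) + O(|z|^{-2}). \] Summing with weight $2$ at each $q_{j}$ and $-1$ at each $p_{i}$ yields the coefficient $\frac{2k-n}{2\pi}$ of $\log{|z|}$ together with the term $\frac{1}{2\pi}\text{Re}\left( \frac{\mu -2(z_{1}+\dots +z_{k})}{z}\right)$; the normalisation $z_{1}+\dots +z_{k}=0$ then gives exactly \eqref{eqn:Asymptotics:Infinity:Sum:Dirac:Higgs:Field}.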
\begin{definition}\label{def:Hypothesis:Background:Data} Given $\lambda _{0},K>1$ and $d_{0} \geq 5$ we say that $v\in\mathbb{R}$ and the points $p_{1},\ldots ,p_{n},q_{1},\ldots ,q_{k}$ are $(\lambda _{0},d_{0},K)$--\emph{admissible} if: \begin{itemize} \item[(i)] The minimum distance $d$ (\ref{eqn:Distance}) satisfies $d \geq d_{0}$; \item[(ii)] $\lambda :=\min _{j}{\lambda _{j}}>\lambda _{0}$; \item[(iii)] $\overline{\lambda}:=\max _{j}{\lambda _{j}} \leq K \lambda$; \item[(iv)] $v>0$ if $n=2k$. \end{itemize} \end{definition} The constant $\lambda_0$ should be thought of as very large. Therefore it is important to determine which sets of data are admissible as $\lambda _{0} \rightarrow \infty$. \begin{itemize} \item[(A)] By \eqref{eqn:Scale}, a first possibility is to fix the points $p_{1}, \ldots ,p_{n}$, $q_{1}, \ldots, q_{k}$ so that (i) in Definition \ref{def:Hypothesis:Background:Data} is satisfied and then choose $v$ sufficiently large. We refer to this as the \emph{high mass} case. \item[(B)] Consider instead the limit $d \rightarrow +\infty$ and assume that in addition to Definition \ref{def:Hypothesis:Background:Data} we have \[ \overline{d}=\max{\left\{ |z_{j}-z_{h}|,|z_{j}-m_{i}|\, \text{ for all }\, j,h=1,\ldots k , j\neq h, i=1,\ldots n\right\}} \leq K' d \] for some $K'>1$. Then as $d \rightarrow \infty$ \begin{equation}\label{eqn:Limit:Mass:Large:Distance} \lambda_{j} \sim \frac{1}{\pi}\left( k-1-\frac{n}{2} \right) \log{d} \end{equation} for all $j=1, \ldots, k$. Thus if $n<2(k-1)$ we can fix $v$ and $p_{1}, \ldots ,p_{n}$ arbitrarily and then choose the additional $k$ points $q_{1}, \ldots ,q_{k}$ so that $d \geq d_{0}$ for $d_{0}$ sufficiently large. We call this the \emph{large distance} case. It is the most interesting case because, in analogy with Taubes's result \cite{Jaffe:Taubes} for Euclidean monopoles, it is expected that it corresponds to (an open subset of) the end of the moduli space.
\end{itemize} An important consequence of choosing data with large $\lambda$ is that zeroes of $\Phi _{\text{ext}}$ are localised around the points $q_{1},\dots ,q_{k}$. \begin{lemma}\label{lem:Localisation:Zeroes:Higgs:Field} Fix $d_{0}=5$ and suppose that $v > 1$ if $n=2k$. There exists $\lambda _{0}$ such that the following holds. Suppose that the initial data are $(\lambda _{0},5,K)$--admissible. Then for all $j=1, \ldots ,k$ there exists $\frac{1}{\lambda _{j}} < \delta (\lambda _{j}) \leq \frac{2}{\lambda _{j}}$ such that $\Phi _{\text{ext}} \geq 1$ on $(\R^{2} \times \Sph ^{1})\setminus \left( S \cup \bigcup _{j=1}^{k}B_{\delta (\lambda_{j} )}(q_{j}) \right)$. \proof By Lemmas \ref{lem:Asymptotics:Periodic:Dirac:Higgs:Field} and \ref{lem:Asymptotics:Periodic:Dirac:Translations} there exists a constant $C$ such that in the annulus $\delta \leq \rho _{j}\leq \frac{\pi}{2}$ \[ \Phi _{\text{ext}} \geq \lambda _{j} -\frac{1}{\delta} -C. \] Fix some $\lambda _{0}$ such that $\lambda _{0} \geq 1 +\frac{2}{\pi}+C$ and set $\delta = \delta (\lambda_{j} )=\left(\lambda_{j} -1 - C \right) ^{-1}$. By choosing $\lambda _{0}$ larger if necessary, we can assume that $\lambda _{0} \geq 2(1+C)$, so that $\delta (\lambda _{j}) \leq \frac{2}{\lambda _{j}}$. Now, if $\lambda_{j} >\lambda _{0}$ then $\delta <\frac{\pi}{2}$ and $\Phi _{\text{ext}} \geq 1$ in the annulus $\delta \leq \rho _{j}\leq \frac{\pi}{2}$. On the other hand, $\Phi _{\text{ext}} \geq 1$ in a neighbourhood of the singularities $p_{1},\ldots ,p_{n}$ and at infinity. Here we need to use the hypothesis $v> 1$ if $n=2k$. The Lemma follows from the minimum principle. 
\qed \end{lemma} The form $\ast d\Phi _{\text{ext}}$ is the curvature of the line bundle $M \rightarrow (\R^{2} \times \Sph ^{1}) \setminus \left( S \cup \{ q_{1}, \ldots ,q_{k} \} \right)$ \begin{equation}\label{eqn:Sum:Dirac:Bundle} M=L_{v,b}\otimes\bigotimes _{j=1}^{k}{L^{2}_{q_{j}}}\otimes \bigotimes _{i=1}^{n}{L^{-1}_{p_{i}}}, \end{equation} where $L_{q}$ is the line bundle on $(\R^{2} \times \Sph ^{1}) \setminus \{ p \}$ of Definition \ref{def:Periodic:Dirac:Monopole}. Consider the reducible $SO(3)$--bundle $\underline{\mathbb{R}}\oplus M$. By a result of Whitney \cite[\S III.7]{Whitney}, isomorphism classes of $SO(3)$--bundles over a CW--complex of dimension at most 3 are completely classified by the second Stiefel--Whitney class $w_{2}$. In our case $w_{2}(\underline{\mathbb{R}}\oplus M)=c_{1}(M)$ modulo $2$, so that $w_{2}$ evaluated on the torus at infinity is $2k-n$ (mod $2$), $1$ on a small sphere enclosing one of the $n$ singularities and it vanishes on spheres enclosing each of the $k$ points. Denote by $\hat{\sigma}$ the trivialising section of the first factor in $\underline{\mathbb{R}}\oplus M$. Multiplying by $\hat{\sigma}$, $\Phi _{\text{ext}}$ \eqref{eqn:Sum:Dirac:Higgs:Field} defines a Higgs field on $\underline{\mathbb{R}}\oplus M$. Fix the connection \begin{equation}\label{eqn:Sum:Dirac:Connection} A_{\text{ext}}=\left( b\, dt+2\sum _{j=1}^{k}{A_{q_{j}}}-\sum _{i=1}^{n}{A_{p_{i}}}\right) \hat{\sigma} \end{equation} on $\underline{\mathbb{R}} \oplus M$, where $A_{p}$ is the connection on $L_{p}$ of Definition \ref{def:Periodic:Dirac:Monopole}. Lemmas \ref{lem:Asymptotics:Periodic:Dirac:Connection} and \ref{lem:Asymptotics:Periodic:Dirac:Translations} yield asymptotic expansions of representatives of $A_{\text{ext}}$ close to $p_{i}$, close to $q_{j}$ and at infinity. 
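The degrees of $M$ quoted above can be checked by integrating the curvature: the local models \eqref{eqn:Asymptotics:Singularity:Sum:Dirac:Higgs:Field}, \eqref{eqn:Asymptotics:Centre:Higgs:Field:Ext} and \eqref{eqn:Asymptotics:Infinity:Sum:Dirac:Higgs:Field} give (with our orientation conventions) \[ \frac{1}{2\pi}\int _{\mathbb{S}^{2}_{p_{i}}}{\ast d\Phi _{\text{ext}}}=-1, \qquad \frac{1}{2\pi}\int _{\mathbb{S}^{2}_{q_{j}}}{\ast d\Phi _{\text{ext}}}=2, \qquad \frac{1}{2\pi}\int _{T^{2}_{\infty}}{\ast d\Phi _{\text{ext}}}=2k-n, \] consistently with \eqref{eqn:Sum:Dirac:Bundle} and with $w_{2}(\underline{\mathbb{R}}\oplus M)=c_{1}(M)$ modulo $2$. Note also that the three fluxes are compatible: since $\Phi _{\text{ext}}$ is harmonic, the flux through the torus at infinity equals the sum $k\cdot 2 + n \cdot (-1) = 2k-n$ of the fluxes out of the singular points.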
\subsection{Boundary conditions}\label{sec:Boundary:Conditions} By Whitney's result mentioned above, given the collection $S$ of $n$ distinct points $p_{1}, \ldots, p_{n} \in \R^{2} \times \Sph ^{1}$, there exists a unique $SO(3)$--bundle $V$ up to isomorphism on $(\R^{2} \times \Sph ^{1}) \setminus S$ with $w_{2}(V)\cdot [\mathbb{S}^{2}_{p_{i}}]=1$ for all $i=1,\ldots ,n$. Notice that $V$ does not lift to an $SU(2)$--bundle whenever $n>0$, which explains the choice of $SO(3)$ as structure group. Now, outside of $\{ q_{1}, \ldots, q_{k} \}$ we have an isomorphism $V \simeq \underline{\mathbb{R}} \oplus M$, showing that a desingularisation of the pair $(A_{\text{ext}},\Phi_{\text{ext}})$ is topologically possible. Our aim is to find a desingularisation which solves the Bogomolny equation and has the same asymptotic behaviour as $(A_{\text{ext}},\Phi_{\text{ext}})$ at infinity and close to the points $p_{1}, \ldots, p_{n}$. This leads us to consider the boundary conditions for periodic monopoles (with singularities) introduced by Cherkis and Kapustin in \cite[\S 1.4]{Cherkis:Kapustin:1} and \cite[\S 2]{Cherkis:Kapustin:2}. \begin{definition}\label{def:Boundary:Conditions} Given a non-negative integer $k_{\infty} \in \mathbb{Z} _{\geq 0}$, parameters $(v,b) \in \mathbb{R} \times \mathbb{R} / \mathbb{Z}$ and a point $q=(\mu ,\alpha) \in \R^{2} \times \Sph ^{1}$, let $\mathcal{C}=\mathcal{C}(p_{1},\ldots ,p_{n},k_{\infty},v,b,q)$ be the space of smooth pairs $c=(A,\Phi )$ of a connection $A$ on $V$ and a section $\Phi$ of $V$ satisfying the following boundary conditions.
\begin{enumerate} \item For each $p_{i} \in S$ there exists a ball $B_{\sigma}(p_{i})$ and a gauge $V|_{B_{\sigma}(p_{i}) \setminus \{ p_{i} \} } \simeq \underline{\mathbb{R}} \oplus H_{p_{i}}$ such that $(A,\Phi)$ can be written \begin{alignat*}{2} \Phi = -\frac{1}{2\rho_{i} }\, \hat{\sigma} +\psi\qquad & A = A^{0}\, \hat{\sigma} +a \end{alignat*} with $\xi= (a,\psi)=O(\rho_{i}^{-1+\tau})$ and $|\nabla _{A}\xi| + |[\Phi,\xi]|=O(\rho_{i}^{-2+\tau})$ for some rate $\tau >0$. Here $\rho_{i}$ is the distance from $p_{i}$ and $A^{0}$ is the unique $SO(3)$--invariant connection on $H_{p_{i}}$. \item There exists $R>0$ and a gauge $V \simeq \underline{\mathbb{R}} \oplus \left( L^{k_{\infty}}_{q} \otimes L_{v,b} \right)$ over $\left( \mathbb{R}^{2} \setminus B_{R} \right) \times \mathbb{S} ^{1}$ such that $(A,\Phi)$ can be written \begin{gather*} \Phi =\left(v + \frac{k_{\infty}}{2\pi }\log{r} +\frac{1}{2\pi}\text{Re}\left( \frac{\mu}{z}\right)\right) \,\hat{\sigma} +\psi\\ \\ A = \left( b\, dt +k_{\infty}A^{\infty }+\frac{1}{2\pi }(\alpha + k_{\infty}\pi)d\theta +\frac{1}{2\pi}\text{Im}\left( \frac{\mu}{z}\right) dt\right) \, \hat{\sigma} + a \end{gather*} with $\xi= (a,\psi)=O(r^{-1-\tau})$ and $|\nabla _{A}\xi| + |[\Phi,\xi]|=O(r^{-2-\tau})$ for some $\tau >0$. Here $A^{\infty}$ is the connection on $L_{q}$ of Lemma \ref{lem:Asymptotics:Periodic:Dirac:Connection}. \end{enumerate} \end{definition} The reducible pair $(A_{\text{ext}},\Phi_{\text{ext}})$ satisfies these boundary conditions with rate $\tau=1$ and charge at infinity $k_{\infty}=2k-n$. In fact, for topological reasons one always has $k_{\infty} \equiv n$ modulo $2$ and then defines $k$ by the formula above as the (non-abelian) \emph{charge} of the $SO(3)$--monopole $(A,\Phi) \in \mathcal{C}$, \emph{cf.} \cite[Section 4.2]{Foscolo:Deformation}. We call the parameter $q$ in Definition \ref{def:Boundary:Conditions} the \emph{centre} of the monopole. 
Observe that it is necessary to fix $q$ in order to have $L^{2}$--integrable infinitesimal deformations. \section{The family of initial approximate solutions}\label{sec:Approximate:Solutions} Let $c_{\text{ext}}$ be the reducible solution $(A_{\text{ext}},\Phi _{\text{ext}})$ to the Bogomolny equation obtained in \eqref{eqn:Sum:Dirac:Higgs:Field} and \eqref{eqn:Sum:Dirac:Connection} from $(\lambda _{0},5,K)$--admissible data as in Definition \ref{def:Hypothesis:Background:Data}. Here $\lambda _{0}$ is chosen large enough so that Lemma \ref{lem:Localisation:Zeroes:Higgs:Field} holds. In fact we will reserve the freedom to take $\lambda _{0}$ as large as needed until the end of the construction. The aim of this section is to construct a family of approximate solutions to the Bogomolny equation \eqref{eqn:Bogomolny}. We desingularise the singular solution $c_{\text{ext}}$ by gluing rescaled PS monopoles in small balls centred at the $k$ points $q_{1}, \ldots, q_{k}$. By varying the centres of the PS monopoles and the gluing maps, we obtain a $(4k-1)$--parameter family of inequivalent smooth pairs $c(x_{0},\tau)$ on $(\R^{2} \times \Sph ^{1}) \setminus S$. \begin{comment} \begin{figure*}[h] \centering \includegraphics[width=.6\textwidth]{annuli.pdf} \caption{Gluing regions} \end{figure*} \end{comment} \begin{definition}\label{def:Gluing:Data} We define \emph{gluing regions}, adapted cut-off functions and \emph{gluing parameters}. \begin{itemize} \item[(i)] Let $N>2$ be a number to be fixed later and set $\delta _{j}=\lambda _{j}^{-\frac{1}{2}}$. Taking $\lambda _{0}$ large enough depending on $N$, we assume that $2N\delta_{j} < \frac{1}{2}$ and $\frac{\delta _{j}}{2N}> \frac{2}{\lambda_{j}}$ for all $j$. 
Write $(\R^{2} \times \Sph ^{1}) \setminus S$ as the union of open sets \begin{alignat*}{2} U_{j}=B_{N\delta _{j}}(q_{j}) \text{ for } j=1,\ldots ,k, \qquad & U_{\text{ext}}= (\R^{2} \times \Sph ^{1}) \setminus \left( S \cup \bigcup _{j=1}^{k}{B_{N^{-1}\delta_{j}}(q_{j})}\right). \end{alignat*} Let $\text{Ann}_{j}$, $\text{Ann}_{j,\text{ext}}$, $\text{Ann}_{j,\text{int}}$ be the annuli, respectively, \[ U_{j}\cap U_{\text{ext}},\qquad B_{2N\delta_{j}}(q_{j})\setminus B_{N\delta_{j}}(q_{j}), \qquad B_{N^{-1}\delta_{j}}(q_{j})\setminus B_{(2N)^{-1}\delta_{j}}(q_{j}). \] \item[(ii)] Let $\chi$ be a smooth decreasing function of one variable such that $\chi(s) \equiv 1$ if $s \leq 1$ and $\chi (s) \equiv 0$ when $s \geq 2$. For each $j$, let $\rho _{j}$ be the distance function from $q_{j}$ and define cut-off functions $\chi ^{j}_{\text{int}}=\chi \left( \frac{2N\rho_{j}}{\delta _{j}}\right)$ and $\chi ^{j}_{\text{ext}}=1-\chi \left( \frac{\rho_{j}}{N\delta _{j}}\right)$ on $B_{1}(q_{j})$, so that $d\chi ^{j}_{\text{int}}$ is supported on $\text{Ann}_{j,\text{int}}$ and $d\chi ^{j}_{\text{ext}}$ on $\text{Ann}_{j,\text{ext}}$. \item[(iii)] Fix $\kappa \in (0,1)$ so that Lemma \ref{lem:PS:Translations} holds. Let $\mathcal{P}=\mathcal{P}_{\kappa}$ be the trivial $\mathbb{T} ^{k}$--bundle over the product of $k$ balls $B_{\kappa}(0) \times \dots \times B_{\kappa}(0) \subset \left( \mathbb{R} ^{3} \right) ^k$. We denote points in $\mathcal{P}$ by $k$--tuples $(x_{0},\tau)$ of points $\left( x^{j}_{0}, \exp{(\tau _{j}\hat{\sigma})} \right) \in B_{\kappa}(0) \times SO(2)$. \end{itemize} \end{definition} We think of $x^{j}_{0} \in B_{\kappa}(0)$ as parametrising the charge $1$ monopole $(A_{x_{0}^{j}}, \Phi _{x_{0}^{j}})$ on $\mathbb{R}^{3}$ of Lemma \ref{lem:PS:Translations}.
Identifying the ball $B_{\pi}(q_{j}) \subset \R^{2} \times \Sph ^{1}$ with $B_{\lambda _{j}\pi}(0)\subset\mathbb{R}^{3}$ via the homothety \begin{equation}\label{eqn:homothety:hj} h_j\colon\thinspace q_{j}+x \longmapsto \lambda _{j}x, \end{equation} the $j$th copy of $B_{\kappa}(0)$ in the definition of $\mathcal{P}$ can also be thought of as $B_{\lambda_{j}^{-1}\kappa}(q_{j}) \subset \R^{2} \times \Sph ^{1}$. As for the interpretation of the phase factors $(\tau _{1}, \ldots, \tau _{k})$, let \begin{equation}\label{eqn:gauge:etaj} \eta_{j} \colon\thinspace \text{Ann}_{j}\times\mathfrak{su}(2)\rightarrow\underline{\mathbb{R}}\oplus H^{2}\simeq\underline{\mathbb{R}}\oplus M \end{equation} be the isomorphism obtained by composing the gauge transformation $\eta$ of Lemma \ref{lem:Abelian:Gauge:PS:Monopole} with a fixed isomorphism $H^{2} \simeq M$ over $\text{Ann} _{j}$. The choice of $\tau _{j}$ fixes the freedom to compose $\eta _{j}$ with a constant diagonal gauge transformation $\exp{(\tau _{j}\hat{\sigma})}$. It is clear how to define a family of $SO(3)$--bundles $V(\tau)$ over $(\R^{2} \times \Sph ^{1}) \setminus S$ with an isomorphism $V(\tau) \simeq \underline{\mathbb{R}} \oplus M$ outside of $\{ q_{1}, \ldots, q_{k} \}$: given a $k$--tuple $\tau = (e^{\tau _{1}\hat{\sigma}}, \ldots, e^{\tau _{k}\hat{\sigma}}) \in SO(2) \times \ldots \times SO(2)$, define $V(\tau)$ identifying $\left( U_{j},U_{j}\times\mathfrak{su}(2)\right)$ and $\left( U_{\text{ext}},\underline{\mathbb{R}}\oplus M\right)$ over $\text{Ann}_{j}$ using $\exp{ (\tau _{j}\hat{\sigma}) } \circ \eta_{j}$. Since $w_{2}\left( V(\tau) \right) \cdot [\mathbb{S} ^{2}_{p_{i}}] \equiv 1$ the isomorphism class of $V(\tau)$ does not depend on the choice of $\tau$. 
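The independence of $\tau$ can also be seen directly, without appealing to Whitney's theorem: for $s \in [0,1]$ the maps \[ \exp{(s\, \tau _{j}\hat{\sigma})}\circ \eta _{j} \colon\thinspace \text{Ann}_{j}\times\mathfrak{su}(2)\rightarrow\underline{\mathbb{R}}\oplus M \] interpolate between the clutching maps used to define $V(\tau)$ and the maps $\eta _{j}$ themselves, and bundles defined by homotopic clutching maps are isomorphic.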
In order to define a smooth pair $c(x_{0},\tau)$ on $V(\tau)$ for all $(x_{0},\tau) \in \mathcal{P}$ we are now going to patch together $c_{\text{ext}}$ and a rescaled Prasad--Sommerfield monopole in a neighbourhood of each of the points $q_{1}, \ldots, q_{k}$. Some care is needed to implement the construction in such a way that the resulting family of approximate solutions to the Bogomolny equation satisfies a number of properties. In particular, the most na\"ive approach to the construction would yield an error term in the Bogomolny equation not sufficiently small to apply the Implicit Function Theorem. Obstructions to matching $c_{\text{ext}}$ with scaled PS monopoles at a higher order appear if we also require a fixed behaviour at infinity. Pull back a rescaled Prasad--Sommerfield monopole with centre $x^{j}_{0}$ to $U_{j}$ via the homothety $h_{j}$ \eqref{eqn:homothety:hj}, obtaining a pair $c_{j}(x_{0})=h_{j}^{\ast}(A_{x_{0}^{j}}, \lambda _{j}\Phi _{x_{0}^{j}})$. By the abelian gauge of Lemma \ref{lem:Abelian:Gauge:PS:Monopole} and Lemma \ref{lem:PS:Translations}, over $U_{j} \setminus \{ q_{j} \}$ we write $e^{\tau_{j}\hat{\sigma}}\eta _{j}\left( c_{j}(x_{0}) \right) = c^{0}_{j}(x_{0}) + (a,\psi)$, where $|(a,\psi)| = O(\lambda _{j}^{-2}\rho_{j} ^{-3})$ and the leading order term $c_{j}^{0}(x_{0})$ is \begin{equation}\label{eqn:c0:j:x0} c_{j}^{0}(x_{0}) = c_{j}^{0} -\left( \frac{\langle x \times x^{j}_{0}, dx \rangle}{\lambda _{j}\rho_{j} ^{3}}, \frac{\langle x, x^{j}_{0}\rangle}{\lambda _{j}\rho_{j} ^{3}} \right) \hat{\sigma}. \end{equation} Here $c_{j}^{0}$ is the Euclidean Dirac monopole of charge $2$, mass $\lambda _{j}$ and singularity at the origin pulled back to a neighbourhood of $q_{j}$ in $\R^{2} \times \Sph ^{1}$. Define a modified pair $c'_{j}(x_{0})$ by \begin{equation}\label{eqn:c:j:x0:'} e^{\tau_{j}\hat{\sigma}}\eta _{j}\left( c'_{j}(x_{0}) \right) = c^{0}_{j}(x_{0}) + \chi ^{j}_{\text{int}}(a,\psi).
\end{equation} Next, we modify $c_{\text{ext}}$ so that it coincides with $c^{0}_{j}(x_{0})$ over the annulus $\text{Ann} _{j}$. Set \begin{equation}\label{eqn:c:ext:x0} c_{\text{ext}}(x_{0})=c_{\text{ext}} -2 \sum_{j=1}^{k}{ \frac{x^{j}_{0}}{ \lambda _{j} } \, \lrcorner \, \left( \ast dG _{q_{j}}, dG_{q_{j}} \right) \otimes \hat{\sigma} }. \end{equation} By Lemmas \ref{lem:Asymptotics:Periodic:Dirac:Higgs:Field} and \ref{lem:Asymptotics:Periodic:Dirac:Translations}, over the punctured ball $B_{\frac{\pi}{2}}(q_{j}) \setminus \{ q_{j} \}$ we can write $c_{\text{ext}}(x_{0}) = c_{j}^{0}(x_{0}) + (a,\psi)$ with $|(a,\psi)| = O \left( \frac{\rho _{j}}{d} + \rho _{j}^{2} +\frac{1}{\lambda _{j}}\right)$. Define $c'_{\text{ext}}(x_{0})$ by \begin{equation}\label{eqn:c:ext:x0:'} c'_{\text{ext}}(x_{0}) = c^{0}_{j}(x_{0}) + \chi ^{j}_{\text{ext}}(a,\psi). \end{equation} The collection $\left( U_{\text{ext}},c'_{\text{ext}}(x_{0})\right)$ and $\left( U_{j},c'_{j}(x_{0}) \right)$ for $j=1, \ldots, k$ defines a pair $c(x_{0},\tau)$ on $V(\tau)$. \begin{remark} Notice that the choice $\delta _{j} = \lambda _{j}^{-\frac{1}{2}}$ minimises the size of both $e^{\tau_{j}\hat{\sigma}}\eta _{j}\big( c'_{j}(x_{0}) \big) - c^{j}_{0}(x_{0})$ and $c_{\text{ext}}(x_{0}) - c^{j}_{0}(x_{0})$. \end{remark} In fact, as in \cite[Lemma 7.2.46]{Donaldson:Kronheimer}, it is more convenient to fix a base point $\tau_{0}=(\text{id}, \ldots,\text{id})$ and regard the pairs $c(x_{0},\tau)$ as a family of configurations on the fixed $SO(3)$--bundle $V=V(\tau_{0})$. Let $\gamma _{1}, \ldots, \gamma _{k},\gamma _{\text{ext}}$ be a partition of unity subordinate to the cover $U_{1}, \ldots, U_{k},U_{\text{ext}}$. Define a gauge transformation $g_{j}$ on $U_{j}$ by $\eta _{j}\circ g_{j} \circ \eta _{j}^{-1}=\exp{(\tau _{j}\gamma _{\text{ext}}\hat{\sigma})}$. 
Similarly, let $g_{\text{ext}}$ be the gauge transformation on $U_{\text{ext}}$ with the property that $g_{\text{ext}} = \exp{(-\tau _{j}\gamma _{j}\hat{\sigma})}$ on $\text{Ann}_{j}$ and $g_{\text{ext}} \equiv 1$ on the complement of $\text{Ann}_{1} \cup \ldots \cup \text{Ann}_{k}$. Then $\eta _{j} \, g_{j}\, \eta _{j}^{-1}\, g_{\text{ext}}^{-1} = e^{\tau_{j}\hat{\sigma}}$ over $\text{Ann}_{j}$ and therefore $\eta _{j}\, g_{j}\left( c'_{j}(x_{0}) \right) = g_{\text{ext}}\left( c'_{\text{ext}}(x_{0}) \right)$. Define a pair on $V$ by \begin{equation}\label{eqn:c:x0:tau} \begin{dcases*} g_{j}\left( c'_{j}(x_{0}) \right) & on $U_{j},$\\ g_{\text{ext}}\left( c'_{\text{ext}}(x_{0}) \right) & on $U_{\text{ext}}.$ \end{dcases*} \end{equation} Then $(g_{1}, \ldots, g_{k},g_{\text{ext}})$ defines an isomorphism $g\colon\thinspace V(\tau) \xrightarrow{\sim} V$ such that $g\big( c(x_{0},\tau) \big)$ is precisely \eqref{eqn:c:x0:tau}. Let $\Gamma$ be the stabiliser of $c_{\text{ext}}$, \emph{i.e.} $\Gamma$ is the group of constant diagonal gauge transformations of $\underline{\mathbb{R}} \oplus M$. There is a natural diagonal action of $\Gamma$ on $\tau$ by composition on the left. Since the Prasad--Sommerfield monopole is irreducible, $c(x_{0},\tau)$ and $c(x'_{0},\tau ')$ are gauge equivalent if and only if $x'_{0}=x_{0}$ and $\tau '$ belongs to the $\Gamma$--orbit of $\tau$. \subsection{The centre of mass}\label{sec:Centre:Mass} By \eqref{eqn:c:ext:x0}, the family $c(x_{0},\tau)$ does not satisfy fixed boundary conditions as $(x_{0},\tau)$ varies in $\mathcal{P}$. Indeed, the centre of the pair $c(x_{0},\tau)$ in the sense of Definition \ref{def:Boundary:Conditions} depends on the \emph{centre of mass} \begin{equation}\label{eqn:Zeta} \zeta = -\sum _{j=1}^{k}{ \frac{ x^{j}_{0} }{\lambda_{j}}} \end{equation} of the points $\frac{x_{0}^{1}}{\lambda_{1}}, \ldots, \frac{x_{0}^{k}}{\lambda_{k}}$.
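Schematically, this can be seen from the first order Taylor expansion of the Green's functions (identifying a neighbourhood of $q_{j}$ with a ball in $\mathbb{R}^{3}$): \[ G_{q_{j}} - \frac{x^{j}_{0}}{\lambda _{j}} \, \lrcorner \, dG_{q_{j}} = G_{q_{j}+x^{j}_{0}/\lambda _{j}} + O\big( |x^{j}_{0}|^{2}\lambda _{j}^{-2} \big), \] so to first order the modification \eqref{eqn:c:ext:x0} replaces each singularity $q_{j}$ by $q_{j}+\frac{x^{j}_{0}}{\lambda _{j}}$. The normalisation $z_{1}+\dots +z_{k}=0$, $t_{1}+\dots +t_{k}=0$ used to fix the origin then fails precisely by $-\zeta$, and correspondingly the centre $q$ of Definition \ref{def:Boundary:Conditions} is displaced.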
Thus the family $\{ c(x_{0},\tau) \, | \, (x_{0},\tau) \in \mathcal{P} \}$ belongs to the fixed configuration space $\mathcal{C}$ of Definition \ref{def:Boundary:Conditions} if and only if $(x_{0},\tau)$ satisfies the ``balancing'' condition $\zeta =0$. Notice that the necessity of this constraint and the action of the stabiliser of $c_{\text{ext}}$ agree with the fact that the dimension of the moduli space of periodic monopoles of charge $k$ is $4(k-1)$ \cite[Theorem 1.5]{Foscolo:Deformation}. Nonetheless, we will not require $\zeta =0$ at this stage. Since $\R^{2} \times \Sph ^{1}$ is a \emph{parabolic} manifold, \emph{i.e.} it has no strictly positive Green's function, if $\triangle u = f \in C^{\infty}_{0}(\R^{2} \times \Sph ^{1})$, then $u$ grows logarithmically at infinity unless $f$ has mean value zero. As a consequence, when deforming the approximate solution $c(x_{0},\tau)$ into a genuine monopole by the Implicit Function Theorem it is necessary to allow appropriate changes of the asymptotics at infinity by varying the centre $q$ in Definition \ref{def:Boundary:Conditions}. Since our aim is to construct a whole family of solutions to the Bogomolny equation in a fixed moduli space, however, we regard the gluing problem as obstructed. In order to compensate for the obstructions we have to introduce a family of initial approximate solutions depending on parameters. These are precisely the coordinates of the centre of mass $\zeta$. Thus we don't require the ``balancing'' condition $\zeta =0$ at the beginning but will rather fix $\zeta$ only at the end of the construction. This brief discussion motivates the following definition. 
\begin{definition}\label{def:Obstruction:Basis} Define sections $o_{1}, \ldots, o_{4}$ of $(\Lambda ^{1} \oplus \Lambda ^{0}) \otimes V$ over $(\mathbb{R}^2 \times \mathbb{S}^1) \setminus S$ by \begin{alignat*}{2} o_{h}=-\frac{1}{2\pi k} \sum _{j=1}^{k} { \gamma (dx_{h}) \, \big( \chi ^{j}_{\text{ext}}\, dG_{q_{j}},0 \big) \, \otimes \hat{\sigma}} \qquad & o_{4}=-\frac{1}{2\pi k} \sum _{j=1}^{k} {\big( \chi ^{j}_{\text{ext}}\, dG_{q_{j}},0 \big) \, \otimes \hat{\sigma} }. \end{alignat*} Here $h=1,2,3$, $\left( dG_{q_{j}},0 \right)$ is an element of $\Omega(\underline{\mathbb{R}} \oplus M)$ and, by abuse of notation, $dx_{1}=dx$, $dx_{2}=dy$ and $dx_{3}=dt$. Moreover, $\gamma (dx_{h}) \left( a,0 \right)=\partial _{x_{h}} \lrcorner \left( \ast a, a \right)$ is the Clifford multiplication \eqref{eqn:Clifford:Multiplication} of $dx_{h}$ with the $1$--form $a$. \end{definition} As we will see later, the span of $o_{1},o_{2},o_{3}$ is the space of obstructions to solving the Bogomolny equation with fixed asymptotics. By Lemmas \ref{lem:Asymptotics:Periodic:Dirac:Higgs:Field} and \ref{lem:Asymptotics:Periodic:Dirac:Translations} and Definition \ref{def:Gluing:Data}.(ii) there exists a constant $C>0$ such that: \begin{alignat}{2}\label{eqn:Obstruction:Basis} \begin{dcases*} |o_{h}| \leq C\rho _{j}^{-2} & in $B_{1}(q_{j}) \setminus B_{N\delta _{j}}(q_{j})$\\ |o_{h}| \leq C & outside of $\bigcup _{j=1}^{k}{ B_{\frac{1}{2}}(q_{j}) }$ \end{dcases*} \qquad & \begin{dcases*} |\nabla o_{h}| \leq C\rho _{j}^{-3} & in $B_{1}(q_{j}) \setminus B_{N\delta _{j}}(q_{j})$\\ |\nabla o_{h}| \leq C & outside of $\bigcup _{j=1}^{k}{ B_{\frac{1}{2}}(q_{j}) }$ \end{dcases*} \end{alignat} We complete the definition of the pair $c(x_{0},\tau)$ by replacing \eqref{eqn:c:x0:tau} with \begin{equation}\label{eqn:Approximate:Solution} c(x_{0},\tau)+4\pi \sum_{h=1}^{3} { \zeta_{h}\, o _{h} } \end{equation} where $\zeta$ is given by \eqref{eqn:Zeta}.
By Lemmas \ref{lem:Asymptotics:Periodic:Dirac:Higgs:Field}, \ref{lem:Asymptotics:Periodic:Dirac:Connection} and \ref{lem:Asymptotics:Periodic:Dirac:Translations} this modification guarantees that the pairs $c(x_{0},\tau)$ satisfy fixed asymptotics for all $(x_{0},\tau) \in \mathcal{P}$, \emph{i.e.} $c(x_{0},\tau)$ lies in the fixed configuration space $\mathcal{C}$ as $(x_{0},\tau)$ varies in $\mathcal{P}$. By abuse of notation, in the rest of the paper we will take \eqref{eqn:Approximate:Solution} as the definition of the pair $c(x_{0},\tau)$. \subsection{Estimate of the error and the geometry of the approximate solutions} Fix a point $(x_{0},\tau) \in \mathcal{P}$ and set $(A,\Phi)=c(x_{0},\tau)$. We want to estimate how far $(A,\Phi)$ is from being a solution to the Bogomolny equation, \emph{i.e.} we want to control $\Psi (x_{0},\tau)=\ast F_{A}-d_{A}\Phi$. For later use, we also estimate the size of the Higgs field $\Phi$ and of the ``curvature'' term $d_{A}\Phi = \ast F_{A}-\Psi(x_{0},\tau)$. \begin{prop}\label{prop:Pregluing:Map} There exist $\lambda _{0}$ and $\kappa$ such that the following holds. Suppose that $v,S,q_{1}, \ldots, q_{k}$ are $(\lambda _{0}, 5,K)$--admissible and let $\mathcal{P}=\mathcal{P}_{\kappa}$ be as in Definition \ref{def:Gluing:Data}.(iii). Then the construction of $c(x_{0},\tau)$ in \eqref{eqn:Approximate:Solution} defines a map $c\colon\thinspace \mathcal{P} \rightarrow \mathcal{C}$, the \emph{pre-gluing map}, that factors through $c \colon\thinspace \mathcal{P}/\Gamma \rightarrow \mathcal{C}/\mathcal{G}$, where $\Gamma \simeq SO(2)$ acts on $\mathcal{P}$ diagonally and $\mathcal{G}$ is the space of bounded gauge transformations which preserve the boundary conditions of Definition \ref{def:Boundary:Conditions}. Furthermore, there exists a uniform constant $C>0$ depending only on $\lambda _{0}$, $\kappa$ and $p_{1}, \ldots, p_{n}$ with the following significance.
For every $(x_{0},\tau) \in \mathcal{P}$ define $\zeta$ by \eqref{eqn:Zeta} and set $(A,\Phi) = c(x_{0},\tau)$. \begin{itemize} \item[(i)] The error $\Psi(x_{0},\tau) = \ast F_{A} - d_{A}\Phi$ is supported on $\text{Ann}_{j,\text{int}}$ and $\text{Ann}_{j,\text{ext}}$. Moreover, define \begin{equation}\label{eqn:Error:Obstruction} \Psi _{\zeta} = 4\pi\, d_{2} \left( \sum_{h=1}^{3}{ \zeta_{h}\, o_{h} } \right). \end{equation} Then $\Psi _{\zeta}$ is supported on $\text{Ann}_{j,\text{ext}}$ and we have estimates \begin{alignat*}{3} |\Psi(x_{0},\tau) - \Psi _{\zeta}| \leq C & \qquad \text{ and } \qquad & \rho _{j}^{2}|\Psi _{\zeta}| \leq \frac{C}{\sqrt{\lambda}}. \end{alignat*} \item[(ii)] On every ball $B_{1}(q_{j})$, $j=1, \dots, k$, we have \begin{equation}\label{eqn:Curvature:Uj} \left( \lambda _{j}^{-2} + \rho _{j}^{2} \right) |d_{A}\Phi| \leq C. \end{equation} \item[(iii)] $|\Phi| \geq \frac{1}{2}$ over $U_{\text{ext}}$. \end{itemize} \proof The first part of the Proposition is a simple restatement of the construction of the pair $c(x_{0},\tau)$ for $(x_0,\tau) \in \mathcal{P}$. We have to verify the estimates in (i), (ii) and (iii). From the construction of $c(x_{0},\tau)$ recall that \[ \begin{dcases*} c(x_{0},\tau) = c_{j}^{0}(x_{0}) + \chi ^{j}_{\text{int}}\, O(\lambda _{j}^{-2}\rho _{j}^{-3}) & over $\text{Ann}_{j,\text{int}}$\\ c(x_{0},\tau) = c_{j}^{0}(x_{0}) + \chi ^{j}_{\text{ext}}\, O(\lambda _{j}^{-1}+\rho _{j}) + 4\pi k \, \sum_{h=1}^{3}{ \zeta_{h}\, o_{h} } & over $\text{Ann}_{j,\text{ext}}$ \end{dcases*} \] with $c_{j}^{0}(x_{0})$ defined explicitly in \eqref{eqn:c0:j:x0}. Observe also that if $(A,\Phi )$ and $(A,\Phi )+(a,\psi )$ both solve the Bogomolny equation and $\chi $ is a smooth function, then \[ \ast F_{A+\chi a}-d_{A+\chi a}(\Phi +\chi\psi )=\ast (d\chi \wedge a) -(d\chi )\psi +\chi (\chi -1)(\ast [a,a]-[a,\psi ]). 
\] A direct computation using Definition \ref{def:Gluing:Data}.(ii) therefore yields \begin{equation}\label{eqn:Error} \begin{dcases*} | \Psi (x_{0},\tau) | \leq C N^{4} & over $\text{Ann}_{j,\text{int}}$,\\ |\Psi (x_{0},\tau) - \Psi _{\zeta}| \leq C & over $\text{Ann}_{j,\text{ext}}$. \end{dcases*} \end{equation} In order to control $|\Psi _{\zeta}|$, we use the fact that $G_{q_{j}}$ is harmonic outside of $q_{j}$ to obtain $|d_{2}o_{h}| \leq C \rho _{j}^{-2}|\nabla \chi ^{j}_{\text{ext}}|$. The estimate now follows from the properties of $\chi ^{j}_{\text{ext}}$ in Definition \ref{def:Gluing:Data}, the definition of $\delta _{j}$ and \eqref{eqn:Zeta}. This concludes the proof of (i). We prove \eqref{eqn:Curvature:Uj} separately in different regions. \begin{itemize} \item[1.] On the ball $\rho _{j} \leq \frac{\delta _{j}}{2N}$, $c(x_{0},\tau)$ is gauge equivalent to a translation of a PS monopole rescaled by $\lambda _{j}$. The quantity $(1+\rho ^{2})|d_{A}\Phi|$ is scale invariant and therefore Lemma \ref{lem:Properties:PS:Monopole} and the fact that $|x^{j}_{0}|<\kappa$ imply the estimate. \item[2.] On the annulus $\text{Ann}_{j}$, $c(x_{0},\tau)$ coincides with the modified Dirac monopole $c_{j}^{0}(x_{0})$: \[ \left( \lambda _{j}^{-2} + \rho _{j}^{2} \right) |d_{A}\Phi| \leq C\left( 1+N\lambda _{j}^{-\frac{1}{2}} \right) \] follows directly from the definition \eqref{eqn:c0:j:x0} of $c^{j}_{0}(x_{0})$. \item[3.] We deduce the estimate on the annulus $\text{Ann}_{j,\text{int}}$ from the previous two and Lemma \ref{lem:PS:Translations}. Write $c(x_{0},\tau) = c^{0}_{j}(x_{0})+ \chi _{j}^{\text{int}}(a,\psi)$, where $(a,\psi)=O\left( \lambda _{j}^{-2}\rho _{j}^{-3}\right)$. In Lemma \ref{lem:PS:Translations} we did not calculate the decay of the covariant derivative of $(a,\psi)$, but we can argue as follows. 
Observe that \[ d_{A+\chi a}\left( \Phi + \chi \psi \right) = (1-\chi)\, d_{A}\Phi + \chi\, d_{A+a}(\Phi + \psi) + d\chi \wedge \psi + \chi (\chi -1)[a,\psi] \] where $(A,\Phi)=c_{j}^{0}(x_{0})$ and $\chi = \chi ^{j}_{\text{int}}$. Since $(A+a,\Phi+\psi)$ is a translation of the PS monopole, we deduce that \[ \rho _{j}^{2}|d_{A}\Phi| \leq C\big( 1+N^{2}\lambda _{j}^{-1}+ N^{4}\lambda _{j}^{-2} \big). \] \item[4.] Finally, on the annulus $B_{1}(q_{j}) \setminus B_{N\delta _{j}}$ write \[ c(x_{0},\tau)= c_{j}^{0}(x_{0}) + \chi ^{j}_{\text{ext}}(a,\psi) + 4\pi k \sum _{h=1}^{3}{ \zeta _{h}\, o_{h} }, \] where $(a,\psi)=O(\rho _{j}+\lambda _{j}^{-1})$ and $(\nabla a,\nabla \psi) = O(1)$. A direct computation using \eqref{eqn:Obstruction:Basis} yields $\rho _{j}^{2}|d_{A}\Phi| = O(1+\lambda _{j}^{-\frac{1}{2}})$. \end{itemize} Finally, the statement in (iii) is a consequence of the following lemma. \endproof \end{prop} \begin{lemma}\label{lem:Nonvanishing:Higgs:Field} If $(A,\Phi) = c'_{\text{ext}}(x_{0})+ 4\pi \sum_{h=1}^{3} { \zeta_{h}\, o _{h} }$ then, possibly after taking $\lambda _{0}$ larger and $\kappa$ smaller if necessary, there exists $\frac{1}{\lambda _{j}} < \delta (\lambda _{j}) < \frac{\sqrt{2}}{\lambda _{j}}$ such that \begin{equation}\label{eqn:Localisation:Zeroes:Higgs:Field} |\Phi| \geq \frac{1}{2} \end{equation} outside of $\bigcup _{j=1}^{k}{ B_{\delta (\lambda _{j})}(q_{j}) }$. \proof We prove the lemma by controlling the size of $|\Phi|$ through each step of the construction of the pair $c(x_{0},\tau)$. First, by the definition \eqref{eqn:c:ext:x0} of $c_{\text{ext}}(x_{0})$, the function $\langle \Phi _{\text{ext}}(x_{0}), \hat{\sigma} \rangle$ is harmonic outside of the points $\{ p_{1}, \ldots, p_{n}, q_{1}, \ldots, q_{k} \}$. 
One can argue as in Lemma \ref{lem:Localisation:Zeroes:Higgs:Field} to show that there exists $\lambda _{0} > 0$ such that if $\lambda _{j} > \lambda _{0}$ then $\langle \Phi _{\text{ext}}(x_{0}), \hat{\sigma} \rangle \geq 1$ outside of $\bigcup _{j=1}^{k}{ B_{\delta (\lambda _{j})}(q_{j}) }$ for some $\frac{1}{\lambda _{j}}< \delta (\lambda _{j}) \leq \frac{1+\sqrt{1+\kappa}}{2\lambda _{j}}$. Secondly, picking $\lambda _{0}$ even larger if necessary (depending on $N$), one can make sure that the term of order $O\left( \frac{\rho _{j}}{d}+ \rho _{j}^{2}+ \frac{1}{\lambda _{j}} \right)$ multiplied by the cut-off function $\chi ^{j}_{\text{ext}}$ in \eqref{eqn:c:ext:x0:'} is smaller than $\frac{1}{4}$. Finally, by \eqref{eqn:Obstruction:Basis}, Definition \ref{def:Hypothesis:Background:Data}.(iii) and the fact that $|\zeta| \leq \frac{C\kappa}{\lambda}$, we can choose $\kappa$ small enough (depending on $K$ of Definition \ref{def:Hypothesis:Background:Data}) so that $\left| 4\pi \sum_{h=1}^{3} { \zeta_{h}\, o_{h} } \right| \leq \frac{1}{4}$. \endproof \end{lemma} \section{The linearised equation}\label{sec:Linear} Having constructed a family of approximate solutions to the Bogomolny equation, our aim is now to find $\xi=\xi(x_{0},\tau)$ for every $(x_{0},\tau) \in \mathcal{P}$ with the appropriate decay at infinity and at the singularities $p_{i}\in S$ such that $c(x_{0},\tau)+\xi$ is a solution to the Bogomolny equation \eqref{eqn:Bogomolny}. Hence we look for a solution $\xi$ of the equation \begin{equation}\label{eqn:NonLinear:Equation} d_{2}\xi + \xi \cdot \xi + \Psi(x_{0},\tau) =0, \end{equation} where $d_{2}$ is the linearisation \eqref{eqn:Linearisation:Bogomolny} of the Bogomolny equation at $c(x_{0},\tau)$. In this section, the technical chore of the paper, we study the linearised equation $d_{2}\xi=f$. This is the crucial step to solve \eqref{eqn:NonLinear:Equation}. 
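To indicate how the linear theory will eventually be used, we record the standard fixed point scheme (a schematic sketch only: $P$ and $T$ are placeholder names, the Banach norms are left implicit, and $c$ stands for the constant in a bilinear estimate for $\xi \cdot \xi$). If $P$ is a bounded right inverse of $d_{2}$, then any fixed point of \[ T(\xi) = -P\big( \Psi(x_{0},\tau) + \xi \cdot \xi \big) \] solves \eqref{eqn:NonLinear:Equation}, since $d_{2}T(\xi) = -\Psi(x_{0},\tau) - \xi \cdot \xi$. Moreover, \[ \| T(\xi _{1}) - T(\xi _{2}) \| \leq c\, \| P \| \left( \| \xi _{1} \| + \| \xi _{2} \| \right) \| \xi _{1} - \xi _{2} \|, \] so $T$ is a contraction of the ball of radius $2\| P \|\, \| \Psi(x_{0},\tau) \|$ provided $c\, \| P \|^{2}\, \| \Psi(x_{0},\tau) \|$ is sufficiently small.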
The strategy we adopt to understand the invertibility properties of $d_{2}$ is to first solve the equation separately on $U_{j}$ and $U_{\text{ext}}$, in the latter case only modulo obstructions. A global solution of the linearised equation modulo obstructions is then obtained from the local right inverses of $d_{2}$ by a simple iteration. As the main technical tool we will employ a range of weighted Sobolev spaces to carry out the analysis. For technical reasons we will have to distinguish the high mass and large distance limit when studying the equation $d_{2}\xi=f$ over $U_{\text{ext}}$. Due to the gauge invariance of the Bogomolny equation, $d_{2}$ is not elliptic. We will look for a solution of the form $\xi = d_{2}^{\ast}u$ for a $1$--form $u$ with values in $V$. The Weitzenb\"ock formula \begin{equation}\label{eqn:Weitzenbock} d_{2}d_{2}^{\ast}u = \nabla _{A}^{\ast}\nabla _{A}u - \textrm{ad}^{2}(\Phi)u + \ast[\Psi,u] \end{equation} can be deduced from \cite[Lemma 18]{Floer:Monopoles:2}; here $\Psi = \Psi(x_{0},\tau)$. At times it will be convenient to pair a $V$--valued $1$--form $u$ with the zero section of $V$ and consider the $V$--valued form of mixed degree $(u,0)$. Observe that $d_{2}^{\ast}u=D^{\ast}(u,0)$, where $D$ is the Dirac operator \eqref{eqn:Dirac:Operator}, and the equation $d_{2}d_{2}^{\ast}u=f$ is equivalent to \begin{equation}\label{eqn:d2:D} DD^{\ast}(u,0)=(f,\ast[\Psi, \ast u]). \end{equation} We will make use of the Weitzenb\"ock formulas \cite[Lemma 18]{Floer:Monopoles:2} \begin{alignat}{3}\label{eqn:Weitzenbock:D} DD^{\ast}=\nabla _{A}^{\ast}\nabla _{A}-\text{ad}(\Phi)^{2}+\Psi \qquad & \text{ and } & \qquad D^{\ast}D=DD^{\ast}+2d_{A}\Phi. \end{alignat} \subsection{The linearised equation on $U_{j}$} There are no obstructions to the invertibility of the operator $d_{2}d_{2}^{\ast}$ over $U_{j}$. 
The only issue is that, by \eqref{eqn:Curvature:Uj}, the curvature term $d_{A}\Phi$ blows up as $\lambda _{j} \rightarrow \infty$. In view of the Weitzenb\"ock formula \eqref{eqn:Weitzenbock:D} for $D^{\ast}D$, the norm of the inverse of the operator $d_{2}d_{2}^{\ast}\colon\thinspace W^{2,2} \rightarrow L^{2}$ cannot be uniformly bounded. Following a standard approach in gluing problems, we resolve this difficulty by introducing appropriate weighted Sobolev spaces. For each $j=1, \ldots ,k$ consider the weight function $w_{j} = \sqrt{\lambda _{j}^{-2} + \rho _{j}^{2} }$. By abuse of notation, we will not distinguish between the globally defined function $w_{j}$ on $\mathbb{R} ^{3}$ and a fixed smooth increasing function on $(\R^{2} \times \Sph ^{1}) \setminus S$ with the properties $w_{j} \leq 1$ and \begin{equation}\label{eqn:Weight:Function:Uj} w_{j}=\begin{dcases*} \sqrt{ \lambda _{j}^{-2} + \rho _{j}^{2} } & if $\rho _{j} \leq \frac{1}{2}$,\\ \; 1 & if $\rho _{j} \geq 1$. \end{dcases*} \end{equation} By scaling, we will work on $\mathbb{R} ^{3}$ endowed with the weight function $w=\sqrt{ 1+\rho ^{2} }$. On the trivial $SO(3)$--bundle $\mathbb{R} ^{3} \times \Lie{su}(2)$ we fix a pair $(A,\Phi)$ which coincides with the monopole $(A_{x_{0}},\Phi_{x_{0}})$ of Lemma \ref{lem:PS:Translations} if $\rho \leq (2N)^{-1}\sqrt{\lambda _{j}}$ and with the reducible pair induced by a charge $1$ Euclidean Dirac monopole of unit mass when $\rho \geq N^{-1}\sqrt{\lambda _{j}}$. In other words, we work with the pair obtained from $c'_{j}(x_{0})$ by scaling, but the estimates of Proposition \ref{prop:Linearised:Equation:Uj} below will only depend on the curvature control \eqref{eqn:Curvature:Uj} and the fact that $A$ is a smooth metric connection. By Kato's inequality, standard results about functions can be extended to $\Lie{su}_{2}$--valued forms and their covariant derivatives. 
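The scaling just described interacts with the weight functions in a simple way: a direct computation (with $\rho = \lambda _{j}\rho _{j}$ the rescaled radial coordinate centred at $q_{j}$) gives \[ w_{j} = \sqrt{ \lambda _{j}^{-2} + \rho _{j}^{2} } = \lambda _{j}^{-1}\sqrt{ 1 + (\lambda _{j}\rho _{j})^{2} } = \lambda _{j}^{-1}\, w, \] so on the region $\rho _{j} \leq \frac{1}{2}$ the weight $w_{j}$ is, up to the overall factor $\lambda _{j}^{-1}$, the pull-back of $w=\sqrt{1+\rho ^{2}}$ under the dilation centred at $q_{j}$. An overall constant rescales a weighted norm by a fixed power of $\lambda _{j}$ on both sides of an estimate, so uniform constants on $\mathbb{R} ^{3}$ translate into constants independent of $\lambda _{j}$.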
In addition to the Euclidean Sobolev inequality $\| u \| _{L^{6}} \leq C_{Sob}\| \nabla u \| _{L^{2}}$ we will make use of the following Hardy-type inequality, whose proof is obtained by a simple integration by parts \cite[Lemma 2]{Lewis}. \begin{lemma}\label{lem:Poincare:Inequality:Uj} For all $\delta \in (-1,0)$ and $u \in C^{\infty}_{0}(\mathbb{R} ^{3}; \Lie{su}_{2})$ \[ \int{ w^{-2\delta -3}\, |u|^{2} } \leq \frac{1}{\delta ^{2}} \int{ w^{-2\delta -1}\, |\nabla _{A}u|^{2} }. \] \end{lemma} \begin{definition}\label{def:Weighted:Spaces:Uj} For all $\delta \in \mathbb{R}$ and all smooth forms $u \in \Omega (\mathbb{R} ^{3}; \Lie{su}_{2})$ with compact support define: \begin{alignat*}{2} \| u \| _{L^{2}_{w,\delta}} = \| w^{-\delta -\frac{3}{2}} u \| _{L^{2}}, \qquad & \| u \| ^{2}_{W^{1,2}_{w,\delta}} = \| u \| ^{2}_{L^{2}_{w,\delta}} + \| \nabla _{A}u \| ^{2}_{L^{2}_{w,\delta -1}} + \| [\Phi,u] \| ^{2}_{L^{2}_{w,\delta -1}}. \end{alignat*} Define spaces $L^{2}_{w,\delta}$ and $W^{1,2}_{w,\delta}$ as the completion of $C^{\infty}_{0}$ with respect to these norms. Finally, we say that $u \in W^{2,2}_{w,\delta}$ if $u \in W^{1,2}_{w,\delta}$ and \[ \| \nabla _{A}(D^{\ast}u) \| _{L^{2}_{w,\delta -2}} + \| [\Phi, D^{\ast}u] \| _{L^{2}_{w,\delta -2}} < \infty . \] \end{definition} \begin{prop}\label{prop:Linearised:Equation:Uj} For all $-1 < \delta <0$ there exist $\varepsilon>0$ and $C>0$ such that if $\| w \Psi \| _{L^{3}} < \varepsilon$ then the following holds. For all $f \in L^{2}_{w,\delta -2}$ there exists a unique solution $u \in W^{2,2}_{w,\delta}$ to $d_{2}d_{2}^{\ast}u = f$. Moreover, \[ \| u \| _{W^{2,2}_{w,\delta}} \leq C \| f \| _{L^{2}_{w,\delta -2}}. \] \proof By an approximation argument, we can assume that $f \in C^{\infty}_{0}$. In view of the Weitzenb\"ock formula \eqref{eqn:Weitzenbock}, the solution $u$ can be found by variational methods. 
Indeed, by H\"older's inequality \[ \left| \langle \ast[\Psi,u] , u \rangle _{L^{2}} \right| \leq \| w\Psi \| _{L^{3}} \| w^{-1}u \|_{L^{2}} \| u \| _{L^{6}} \leq 2C_{Sob}\| w\Psi \| _{L^{3}}\| \nabla _{A}u\| _{L^{2}}^{2}. \] The last inequality follows from Lemma \ref{lem:Poincare:Inequality:Uj} with $\delta = -\frac{1}{2}$ and the Sobolev inequality. Thus $\| d_{2}^{\ast}u \| _{L^{2}}$ is a norm on $W^{1,2}_{w,-\frac{1}{2}}$ provided that $\| w\Psi \| _{L^{3}} < \frac{1}{2C_{Sob}}$. Moreover, since $f\in C^{\infty}_{0}$ the functional $\langle f, u \rangle _{L^{2}}$ is continuous on $W^{1,2}_{w,-\frac{1}{2}}$. Hence there exists a unique solution $u$ to $d_{2}d_{2}^{\ast}u = f$, which, by standard elliptic regularity, lies in $W^{1,2}_{w,-\frac{1}{2}} \cap C^{\infty}_{loc}$. We have to prove that $u \in W^{2,2}_{w,\delta}$. In order to justify the integrations by parts below, observe that $|u|=O(\rho ^{-1})$ as $\rho \rightarrow \infty$. This is because $d_{2}d_{2}^{\ast}u=0$ outside of the support of $f$ and $(A,\Phi)$ is reducible outside of a large compact set; thus $u=u_{D}+u_{T}$ with $u_{D}$ harmonic and $u_{T}$ exponentially decaying due to the non-vanishing of $|\Phi|$ at infinity, \emph{cf.} \cite[Lemma 7.10 and Remark 7.11]{Foscolo:Deformation}. 
A first integration by parts yields \begin{align*} \| u \| _{L^{2}_{w,\delta}} \| f \| _{L^{2}_{w, \delta -2}} &\geq \int{ \langle f, u \rangle\, w^{-2\delta -1} } = \int{ \langle \ast[\Psi,u], u \rangle\, w^{-2\delta -1} }\\ &{} + \int{ (|\nabla _{A}u|^{2} + |[\Phi,u]|^{2}) w^{-2\delta -1}} + (1+2\delta)|\delta | \int{ |u|^{2} w^{-2\delta -3} }. \end{align*} As before, we control the term involving the error $\Psi$ as follows: \[ \left| \langle \ast[\Psi,u] , u\, w^{-2\delta -1} \rangle _{L^{2}} \right| \leq \| w\Psi \| _{L^{3}} \| u \|_{L^{2}_{w,\delta}} \| w^{-\delta -\frac{1}{2}}u \| _{L^{6}} \leq C \| w\Psi \| _{L^{3}} \| u \| ^{2}_{W^{1,2}_{w,\delta}} \] using Lemma \ref{lem:Poincare:Inequality:Uj}, the Sobolev inequality and the fact that $\nabla _{A}(w^{-\delta -\frac{1}{2}}u) \in L^{2}$ if $u \in W^{1,2}_{w,\delta}$. Thus \[ \| u \| _{W^{1,2}_{w,\delta}} \leq C \| f \| _{L^{2}_{w, \delta -2}} \] provided that $\| w\Psi \| _{L^{3}}$ is sufficiently small. Set $\xi = d_{2}^{\ast}u=D^{\ast}(u,0)$. Since $\| \ast[\Psi, \ast u] \| _{L^{2}_{w,\delta -2}} \leq \| w\Psi \| _{L^{3}} \| w^{-\delta -\frac{1}{2}}u \| _{L^{6}}$, the Sobolev inequality and \eqref{eqn:d2:D} imply that $\| D\xi \| _{L^{2}_{w,\delta -2}}$ is controlled by $\| f \| _{L^{2}_{w,\delta -2}}$. We will conclude the proof of the Proposition by establishing the a priori estimate \[ \| \nabla _{A}\xi \| _{L^{2}_{w, \delta -2}} + \| [\Phi, \xi] \| _{L^{2}_{w, \delta -2}} \leq C \left( \| D\xi \| _{L^{2}_{w, \delta -2}} + \| \xi \| _{L^{2}_{w, \delta -1}} \right). 
\] Integrate the Weitzenb\"ock formula \eqref{eqn:Weitzenbock:D} for $D^{\ast}D$ against $w^{-2\delta +1}\xi$: \begin{align*} \int{ \left( |\nabla _{A}\xi |^{2}+|[\Phi ,\xi ]|^{2}\right) w^{-2\delta +1} } &\leq c_{1} \int{ |\xi|^{2}w^{-2\delta -1} }+ c_{2} \int{ \langle D\xi ,\xi \rangle\, w^{-2\delta} }\\ &{} + \int{ w^{-2\delta +1} |D\xi|^{2} }+\int{ w^{-2\delta +1} |\Psi | \, |\xi|^{2}}\\ &{} + \int{ w^{-2\delta +1} |d_{A}\Phi| \, |\xi|^{2}}, \end{align*} using $|\nabla w| \leq 1$. By Proposition \ref{prop:Pregluing:Map}.(ii) $w^{2}|d_{A}\Phi|$ is uniformly bounded. The term involving $\Psi$ is controlled as before using the smallness of $\| w\Psi \| _{L^{3}}$. \endproof \end{prop} \subsection{The linearised equation on $U_{\text{ext}}$: the high mass case} We move on to study the equation $d_{2}d_{2}^{\ast}u=f$ over the exterior region $U_{\text{ext}}$. The background is now the pair $c'_{\text{ext}}(x_{0}) + 4\pi k \sum_{h=1}^{3} { \zeta_{h}\, o _{h} }$ of \eqref{eqn:c:ext:x0:'} and \eqref{eqn:Approximate:Solution}. Recall that this is a reducible solution to the Bogomolny equation on the complement of $\bigcup _{j=1}^{k}{ B_{2N\delta _{j}}(q_{j}) }$. If $u$ is a section of the reducible $SO(3)$--bundle $\underline{\mathbb{R}} \oplus M$ of \eqref{eqn:Sum:Dirac:Bundle}, we write $u=u_{D} + u_{T}$. By Lemma \ref{lem:Nonvanishing:Higgs:Field} \begin{equation}\label{eqn:Control:OffDiagonal:Infinity} 4|[\Phi , u]|^{2} \geq |u_{T}|^{2} \end{equation} outside of small neighbourhoods of $\{ q_{1}, \ldots, q_{k}\}$. By Fourier analysis with respect to the circle variable $t$ we can further decompose $u_{D} = \Pi _{0}u_{D} + \Pi _{\perp}u_{D}$ into $\mathbb{S} ^{1}$--invariant and oscillatory parts. On each circle $\{ z \} \times \mathbb{S} _{t}^{1}$ one has the Poincar\'e inequality \begin{equation}\label{eqn:Control:Oscillatory:Infinity} \int_{\mathbb{S} ^{1}}{ |\nabla ( \Pi _{\perp}u_{D} )|^{2} } \geq \int_{\mathbb{S} ^{1}}{ |\Pi _{\perp}u_{D}|^{2} }. 
\end{equation} The inequalities \eqref{eqn:Control:OffDiagonal:Infinity} and \eqref{eqn:Control:Oscillatory:Infinity} suggest that, via the Weitzenb\"ock formula \eqref{eqn:Weitzenbock}, we have extremely good control of the off-diagonal and oscillatory piece of $u$ in terms of $d_{2}d_{2}^{\ast}u$. In order to control the $\mathbb{S} ^{1}$--invariant diagonal piece $\Pi _{0}u_{D}$ we introduce appropriate weighted spaces. The choice of weight functions is different in the two distinct situations (A) and (B) of Section \ref{sec:Sum:Dirac:Monopoles}, \emph{i.e.} the high mass and large distance case, respectively. \begin{itemize} \item[(A)] If we are constructing monopoles in the high mass limit $v \rightarrow \infty$ and $q_{1}, \ldots, q_{k},S$ are contained in a fixed set $B_{R_{0}} \times \mathbb{S} ^{1} \subset \R^{2} \times \Sph ^{1}$, the framework of \cite{Foscolo:Deformation} applies and some care is needed only to check that the constants are uniform as $v \rightarrow \infty$. We will use the weight function $\omega = \sqrt{1+|z|^{2}}$ and let all constants depend on $R_{0}$ without further notice. \item[(B)] If instead $n \leq 2(k-1)$ and we allow $d \rightarrow \infty$, then the error is concentrated around $k$ points $q_{1}, \ldots, q_{k}$ moving off to infinity and we would like to replace $\sqrt{1+|z|^{2}}$ with a weight function which is uniformly bounded above and below in a neighbourhood of each $q_{j}$ but maintains the same behaviour $O(|z|)$ at large distances. \end{itemize} We begin with the high mass case (A) and explain how to extend the results to case (B) in a second step. Thus set $\omega = \sqrt{1+|z|^{2}}$ and consider additional weight functions $\hat{\rho}_{j}, \hat{\rho}_{i}$ defined in a neighbourhood of the point $q_{j}$ and $p_{i} \in S$, respectively. 
The weight function $\hat{\rho}_{j}$ is a fixed smooth increasing function with $\hat{\rho}_{j} \leq 1$ and \begin{equation}\label{eqn:Weight:Function:Uext:Singularities} \hat{\rho}_{j}=\begin{dcases*} \rho _{j} & if $\rho _{j} \leq \frac{1}{2}$,\\ \; 1 & if $\rho _{j} \geq 1$. \end{dcases*} \end{equation} The function $\hat{\rho}_{i}$ is defined in a similar way, but the transition between $\rho _{i}$ and $1$ takes place on the annulus $B_{2\sigma}(p_{i}) \setminus B_{\sigma}(p_{i})$, where $\sigma >0$ is chosen so that the balls $B_{2\sigma}(p_{i})$ are all disjoint. Constants will be allowed to depend on $\sigma$ without further notice. We denote by $U _{\sigma}$ the complement of the union $\bigcup _{i=1}^{n}{ B_{\sigma}(p_{i}) } \cup \bigcup _{j=1}^{k}{ B_{\frac{1}{2}}(q_{j}) }$. In the definition below, we introduce the relevant Sobolev norms. \begin{definition}\label{def:Weighted:Spaces:Uext} Given a triple $(\delta _{1}, \delta _{2}, \delta _{3})$ and a smooth compactly supported form $u \in \Omega (\underline{\mathbb{R}} \oplus M)$ define $\| u \| _{L^{2}_{(\delta _{1}, \delta _{2}, \delta _{3})} }$ as the maximum of the semi-norms: \begin{alignat*}{3} \left\| \omega ^{-\delta _{1}-1}u \right\| _{L^{2}(U _{\sigma})}, \qquad & \| \hat{\rho}_{i}^{-\delta _{2}-\frac{3}{2} } u \| _{L^{2}\left( B_{2\sigma}(p_{i}) \right)}, \qquad & \| \hat{\rho}_{j}^{-\delta _{3}-\frac{3}{2} } u \| _{L^{2}\left( B_{1}(q_{j}) \right)}. \end{alignat*} Given $\delta >0$, set $\underline{\delta} = (-\delta, \delta , -\delta)$ and for each $m \in \mathbb{Z}$ let $\underline{\delta} + m$ denote the triple $\underline{\delta} + (m,m,m)$. 
For a smooth compactly supported form $u \in \Omega (\underline{\mathbb{R}} \oplus M)$ we say that \begin{enumerate} \item $u \in L^{2}_{\underline{\delta} -2}$ if the corresponding norm is finite; \item $u \in W^{1,2}_{\underline{\delta} -1}$ if $u \in L^{2}_{\underline{\delta} -1}$ and $\nabla _{A}u, [\Phi, u] \in L^{2}_{\underline{\delta} -2}$; \item $u \in W^{2,2}_{\underline{\delta}}$ if $D^{\ast}u \in W^{1,2}_{\underline{\delta}-1}$ and $u \in L^{2}_{(\delta, -\delta , -\delta)}$. \end{enumerate} The space $W^{m,2}_{\underline{\delta} - 2 +m}$ is defined as the completion of $C^{\infty}_{0}$ with respect to the corresponding norm. By convention $W^{0,2}_{\underline{\delta}-2}=L^{2}_{\underline{\delta}-2}$. \end{definition} We stress two aspects of this definition, referring to \cite{Foscolo:Deformation} and the rest of the section for further details. First, we distinguished the points $q_{j}$ from the singularities $p_{i}$. More precisely, around each of the singularities $p_{i}$ we imposed a stronger norm. As we will see later, this choice is necessary to control the non-linearities of \eqref{eqn:NonLinear:Equation}. Secondly, observe that if $u \in W^{2,2}_{\underline{\delta}}$ then the transversal and oscillatory parts $u_{T}$ and $\Pi _{\perp}u_{D}$ have stronger decay and lie in $L^{2}_{\underline{\delta}}$. The reason for the odd definition of $W^{2,2}_{\underline{\delta}}$ is to include diagonal sections which have non-zero limits over the punctures and at infinity. This is necessary to ensure the surjectivity of the operator $d_{2}d_{2}^{\ast}$. The main result of the subsection is the following proposition. \begin{prop}\label{prop:Linearised:Equation:Uext} For all $0< \delta < \frac{1}{2}$ there exist $\varepsilon>0$ and $C>0$ with the following significance. Suppose that $\| \hat{\rho}_{j}\Psi |_{B_{1}(q_{j})} \| _{L^{3}} < \varepsilon$ for all $j=1, \ldots ,k$. 
Then for all $f \in L^{2}_{\underline{\delta}-2}$ such that $\int{ \langle f , \hat{\sigma } \otimes dx_{h} \rangle }=0$ for $h=1,2,3$ there exists a unique solution $\xi \in W^{1,2}_{\underline{\delta}-1}$ to $d_{2}\xi=f$ of the form $\xi =d_{2}^{\ast}u$ with $\int{ \langle u , \hat{\sigma } \otimes dx_{h} \rangle\, \omega ^{-2(\delta +1)} }=0$. Moreover, \[ \| \xi \| _{W^{1,2}_{\underline{\delta}-1}} \leq C \| f \| _{L^{2}_{\underline{\delta}-2}}. \] \end{prop} The proof is given in two steps. First we prove the existence of a weak solution. \begin{lemma}\label{lem:Existence:Weak:Solutions:Uext} For all $0< \delta \leq \frac{1}{2}$ there exist $\varepsilon>0$ and $C>0$ such that the following holds. Suppose that $\| \hat{\rho}_{j}\Psi|_{B_{1}(q_{j}) } \| _{L^{3}} < \varepsilon$ for all $j=1, \ldots ,k$. Let $f \in L^{2}_{\underline{\delta}-2}$ be a $(\underline{\mathbb{R}} \oplus M)$--valued $1$--form satisfying $\int{ \langle f , \hat{\sigma } \otimes dx_{h} \rangle }=0$ for $h=1,2,3$. Then there exists a unique weak solution $u$ to $d_{2}d_{2}^{\ast}u=f$ with \[ \int{ \langle u , \hat{\sigma } \otimes dx_{h} \rangle\, \omega ^{-2(\delta +1)} }=0 \qquad \text{ and } \qquad \| d_{2}^{\ast}u \| _{L^{2}} \leq C \| f \| _{L^{2}_{\underline{\delta}-2}}. \] \proof The first claim is that $\| d_{2}^{\ast}u \| ^{2}_{L^{2}}$ is uniformly equivalent to $\| \nabla _{A}u \| ^{2}_{L^{2}} + \| [\Phi, u] \| ^{2}_{L^{2}}$ for all $u \in C^{\infty}_{0}$ provided that $\| \hat{\rho}_{j}\Psi|_{B_{1}(q_{j}) } \| _{L^{3}}$ is sufficiently small. Indeed, integrate by parts the Weitzenb\"ock formula \eqref{eqn:Weitzenbock} for $d_{2}d_{2}^{\ast}$ and use the Sobolev and Hardy inequalities to control \[ \left| \langle \Psi \cdot u , u \rangle _{L^{2}}\right| \leq \| \hat{\rho}_{j}\Psi \| _{L^{3}}\| \hat{\rho}_{j}^{-1}u \| _{L^{2}} \| u \|_{L^{6}} \leq C \| \hat{\rho}_{j}\Psi \| _{L^{3}} \| \nabla _{A}u \| ^{2}_{L^{2}}. 
\] The second observation is that $\| \nabla _{A}u \| ^{2}_{L^{2}} + \| [\Phi, u] \| ^{2}_{L^{2}}$ is a norm on the space of smooth compactly supported forms with $\int{ \langle u , \hat{\sigma} \rangle \,\omega ^{-2(\delta +1)} } = 0$. More precisely, we are going to show that there exists a uniform constant $C$ such that \begin{equation}\label{eqn:Existence:Weak:Solutions:Uext} \| u \| ^{2}_{L^{2}_{( \delta, -\frac{1}{2}, -\frac{1}{2} ) }} \leq C \int{ |\nabla _{A}u|^{2} + |[\Phi, u]|^{2} }. \end{equation} Indeed, set $B=B_{1}(q_{j})$ and let $\chi$ be a smooth cut-off function supported in $B$ with $\chi \equiv 1$ in $\frac{1}{2}B$. Lemma \ref{lem:Poincare:Inequality:Uj} applied to $\chi u$ with $\delta = -\frac{1}{2}$ yields \[ \| \hat{\rho}_{j}^{-1}u \| _{L^{2}(\frac{1}{2}B)} \leq C \left( \| \nabla _{A}u \| _{L^{2}} + \| u \| _{L^{2}(B \setminus \frac{1}{2}B)} \right) \] with a uniform constant $C>0$. In the same way we can control the norm $\| \hat{\rho}_{i}^{-1}u\| _{L^{2}}$ in a punctured neighbourhood of $p_{i}$ in terms of $\| \nabla _{A}u \| _{L^{2}}$ and the $L^{2}$--norm of $u$ on an annulus around $p_{i}$. Thus \eqref{eqn:Existence:Weak:Solutions:Uext} will follow once we establish the weighted Poincar\'e inequality \[ \int{ \omega ^{-2(\delta +1)} |u|^{2} } \leq C \int{ |\nabla _{A}u|^{2} + |[\Phi, u]|^{2} }. \] Decompose $u=\Pi_{0}u_{D}+\Pi_{\perp}u_{D}+u_{T}$. Since $\omega \geq 1$, the estimate for $u_{T}$ and $\Pi _{\perp}u_{D}$ follows from \eqref{eqn:Control:OffDiagonal:Infinity} and \eqref{eqn:Control:Oscillatory:Infinity}, respectively. If $u=\Pi_{0}u_{D} \in C^{\infty}_{0}(\mathbb{R} ^{2})$ and $\int{ u \,\omega ^{-2(\delta +1)}} =0$, the stated weighted Poincar\'e inequality is proved in \cite[Corollary 8.4]{Amrouche:Girault:Giroire:1}. 
Finally, notice that, for $\delta$ in the range specified, $L^{2}_{\underline{\delta}-2}$ is contained in the dual of $L^{2}_{(\delta, -\frac{1}{2} , -\frac{1}{2} )}$ and therefore the existence of a weak solution $u$ to $d_{2}d_{2}^{\ast}u=f$ follows by variational methods. \endproof \end{lemma} It remains to prove that $\xi \in W^{1,2}_{\underline{\delta}-1}$. This is a consequence of the following a priori estimate. \begin{lemma}\label{lem:APriori:Estimates:Uext} For all $0 < \delta <\frac{1}{2}$ there exists $C>0$ such that for all smooth compactly supported $\xi$ \[ \| \xi \| _{W^{1,2}_{\underline{\delta}-1}} \leq C \left( \| D\xi \| _{L^{2}_{\underline{\delta}-2}} + \| \xi \| _{L^{2}} \right). \] \proof The estimate is equation (7.8) in the proof of \cite[Proposition 7.7]{Foscolo:Deformation}. The fact that the constant $C$ is independent of the mass of the monopole is proved in \cite[Lemma 8.11]{Foscolo:Deformation}. For the convenience of the reader we summarise the main aspects of the argument. The first step is to prove the weighted elliptic estimate \begin{equation}\label{eqn:APriori:Estimates:Uext} \| \xi \| _{W^{1,2}_{\underline{\delta}-1}} \leq C \left( \| D\xi \| _{L^{2}_{\underline{\delta}-2}} + \| \xi \| _{L^{2}_{\underline{\delta}-1}} \right). \end{equation} As in the proof of Proposition \ref{prop:Linearised:Equation:Uj}, the main tool is the Weitzenb\"ock formula \eqref{eqn:Weitzenbock:D} for the operator $D^{\ast}D$. The constant $C$ thus depends on appropriate norms of $d_{A}\Phi$ and $\Psi$. Using a partition of unity we can always reduce to proving the estimate under the additional assumption that $\xi$ is supported in a specific given region. 
\begin{itemize} \item[(1)] If $\xi$ is supported in $B_{1}(q_{j})$, as in Proposition \ref{prop:Linearised:Equation:Uj}, integrate by parts the Weitzenb\"ock formula and use Proposition \ref{prop:Pregluing:Map} to show that $|\hat{\rho}_{j}^{2}\Psi|$ and $\hat{\rho}_{j}^{2}|d_{A}\Phi|$ are uniformly bounded. \item[(2)] Assume now that $\xi \in C^{\infty}_{0}(U_{\sigma})$. Since $\Psi \equiv 0$ on $U_{\sigma}$, the existence of a uniform constant $C$ follows from the fact that $\omega |d_{A}\Phi|$ is uniformly bounded. In order to check this last statement, recall that $\Phi '_{\text{ext}}(x_{0})$ is a sum of Green's functions and their derivatives. By Lemma \ref{lem:Asymptotics:Periodic:Dirac:Higgs:Field}.(ii), for any $p=(z_{0},t_{0}) \in \R^{2} \times \Sph ^{1}$ \[ |\nabla G_{p}| + |z-z_{0}|\, |\nabla ^{2}G_{p}| \leq \frac{C}{|z-z_{0}|} \] for all $(z,t)$ such that $|z-z_{0}|>2$. We conclude that there exists $C$ depending only on $\sigma$ and $R_{0}$ such that $\omega |d_{A}\Phi| \leq C$. \item[(3)] Finally, suppose that $\xi$ is supported on $B_{2\sigma}(p_{i})$. In view of the Weitzenb\"ock formula, we have to check that $\hat{\rho }_{i}^{2}|d_{A}\Phi|$ is uniformly bounded. This follows from \eqref{eqn:Asymptotics:Singularity:Sum:Dirac:Higgs:Field} and the fact that the modifications of $\Phi_{\text{ext}}$ in \eqref{eqn:c:ext:x0} and \eqref{eqn:Approximate:Solution} introduce smooth terms controlled by $d^{-1}$ in a neighbourhood of $p_{i}$. \end{itemize} In order to conclude the proof of the Lemma we have to improve \eqref{eqn:APriori:Estimates:Uext} to the required estimate, \emph{i.e.} replace the $L^{2}_{\underline{\delta}-1}$--norm of $\xi$ in the RHS with its $L^{2}$--norm. We distinguish diagonal and off-diagonal part. If $\xi = \xi _{D}$ the statement is deduced from standard theory for the Laplacian in weighted Sobolev spaces, \emph{cf.} part (1) in the proof of \cite[Proposition 7.7]{Foscolo:Deformation}. Now suppose that $\xi = \xi _{T}$. 
If $\xi$ is supported on $U_{\sigma}$ we apply the argument of part (2) in the proof of \cite[Proposition 7.7]{Foscolo:Deformation} word for word. Indeed, the argument only relies on \eqref{eqn:Control:OffDiagonal:Infinity} and the fact that $\omega \rightarrow \infty$ as $|z| \rightarrow \infty$. Similarly, the proof of (8.12) in \cite[Lemma 8.11]{Foscolo:Deformation}, case (3), yields the result when $\xi=\xi_{T}$ is supported on $B_{2\sigma}(p_{i})$. \endproof \end{lemma} Proposition \ref{prop:Linearised:Equation:Uext} follows from the two lemmas, since we can control $\| D\xi \| _{L^{2}_{\underline{\delta}-2}}$ in terms of $f$ as in the proof of Proposition \ref{prop:Linearised:Equation:Uj}. \subsection{The linearised equation on $U_{\text{ext}}$: the large distance case}\label{sec:Linearised:Equation:Uext:Large:Distance} We come to the task of adapting the analysis to deal with the situation in which the points $q_{1}, \ldots , q_{k}$ move off to infinity. The first step is to define an adapted weight function. By the assumption $d > d_{0}=5$ of Definition \ref{def:Hypothesis:Background:Data}, the set $B_{2}(z_{j}) \times \mathbb{S}^{1}$ does not contain any of the points $q_{1}, \ldots ,q_{k}, p_{1}, \ldots ,p_{n}$ other than $q_{j}$. By taking $d_{0}$ larger, we can also assume that there exists $R_{0}>0$ such that the ball $B_{R_{0}}(0) \subset \mathbb{R} ^{2}$ is disjoint from $B_{2}(z_{j})$ for all $j$ and $B_{R_{0}} \times \mathbb{S} ^{1}$ contains all the singularities $p_{1}, \ldots, p_{n}$. Set $z_{0}=0$. In addition to the condition (iii) in Definition \ref{def:Hypothesis:Background:Data}, we will need an extra assumption. \begin{assumption}\label{assumption} There exists $K'>1$ such that \[ \overline{d}=\max{\left\{ |z_{j}-z_{h}|,|z_{j}-m_{i}|\, \text{ for all }\, j,h=1,\ldots, k , j\neq h, i=1,\ldots, n\right\}} \leq K' d. \] \end{assumption} The assumption clearly implies Definition \ref{def:Hypothesis:Background:Data}.(iii). 
Moreover, once the centre of mass of $q_{1}, \ldots, q_{k}$ is fixed, this extra requirement is vacuous when $k \leq 2$. Fix a cover $\{ \Omega_{j} \} _{j=0}^{k}$ of $\R^{2} \times \Sph ^{1}$ such that $\Omega _{j}$ is an open neighbourhood of the set \begin{equation}\label{eqn:Voronoi} \{ (z,t) \in \R^{2} \times \Sph ^{1} \text{ such that } |z-z_{j}| \leq |z-z_{h}| \text{ for all }h=0, \ldots, k \} \end{equation} for all $j=0, \ldots, k$. Let $\chi _{0}, \ldots, \chi _{k}$ be a partition of unity subordinate to this cover and set $\omega _{j}(z,t)=\sqrt{1+|z-z _{j}|^{2}}$. We want to define a global weight function $\omega$ such that: \begin{subequations}\label{eqn:Properties:Weight:Function:Uext} \begin{align} &\frac{1}{C_{1}}\, \omega_{j} \leq \omega \leq C_{1}\, \omega _{j} \quad \text{ on }\Omega _{j} \quad \text{ and } \quad \omega \leq C_{1}\omega _{j} \quad\text{ on }\R^{2} \times \Sph ^{1} \\ &|\nabla \omega| \leq C_{2}, \quad \text{ and }\quad \left| \omega\, \triangle \omega \right| \leq C_{3} \end{align} \end{subequations} Given $z_{1}, \ldots ,z_{k} \in \mathbb{C}$, rescale by $d$ around $z_{0}=0$. By Assumption \ref{assumption} $z_{1}, \ldots ,z_{k}$ are mapped to a collection of $k$ points in $\mathbb{C}$ such that the maximum and the minimum of the mutual distances are uniformly bounded above and below. Fix a function $\tilde{r}(z)$ which is a smoothing of $\min_{j= 0,\ldots, k}{\{ |z-d^{-1}z_{j}|\} }$ outside of $z_{0},\ldots, z_{k}$. Since the distance function on $\mathbb{R} ^{2}$ satisfies $|\nabla r| = 1$ and $r\triangle r = -1$ outside of the origin, $\| \nabla \tilde{r} \| _{L^{\infty}}$ and $\| \tilde{r}\triangle \tilde{r}\| _{L^{\infty}}$ are bounded. Now set \begin{equation}\label{eqn:Weight:Uext:Large:Distance} \omega(z,t)=\sqrt{1+d^{2}\, \tilde{r}^{2}(d^{-1}z)}. 
\end{equation} Then (\ref{eqn:Properties:Weight:Function:Uext}b) holds with constants depending only on $\| \nabla\tilde{r} \| _{L^{\infty}}$ and $\| \tilde{r}\triangle \tilde{r}\| _{L^{\infty}}$. Define weighted Sobolev spaces $W^{m,2}_{\underline{\delta} - 2 +m}$, $m=0,1,2$, as in Definition \ref{def:Weighted:Spaces:Uext} using weight functions $\hat{\rho}_{i}, \hat{\rho}_{j}$ and $\omega$. It will also be useful to consider the space $L^{2}_{\underline{\delta}-2,j}$ defined as in Definition \ref{def:Weighted:Spaces:Uext}.(1) using the weight function $\omega _{j}$ instead of $\omega$. By (\ref{eqn:Properties:Weight:Function:Uext}a), $\chi _{j}f \in L^{2}_{\underline{\delta}-2,j}$ for all $f \in L^{2}_{\underline{\delta}-2}$. We study the equation $d_{2}d_{2}^{\ast}u=f$ in these newly defined spaces. When we restrict to the off-diagonal component, only minor modifications to the proof of Proposition \ref{prop:Linearised:Equation:Uext} are necessary to show that $d_{2}d_{2}^{\ast}\colon\thinspace W^{2,2}_{\underline{\delta}} \rightarrow L^{2}_{\underline{\delta}-2}$ is an isomorphism. \begin{prop}\label{prop:Linearised:Equation:Uext:Offdiagonal} For all $0< \delta < \frac{1}{2}$ there exist $\varepsilon>0$ and $C>0$ with the following significance. Suppose that $\| \hat{\rho}_{j}\Psi|_{B_{1}(q_{j})} \| _{L^{3}} < \varepsilon$ for all $j=1, \ldots ,k$. Then for all $f=f_{T} \in L^{2}_{\underline{\delta}-2}$ there exists a unique solution $u=u_{T} \in W^{2,2}_{\underline{\delta}}$ to $d_{2}d_{2}^{\ast}u=f$. Moreover, \[ \| u \| _{W^{2,2}_{\underline{\delta}}} \leq C \| f \| _{L^{2}_{\underline{\delta}-2}}. \] \proof Given that $\omega \geq 1$, $\omega \rightarrow \infty$ as $|z| \rightarrow \infty$ and (\ref{eqn:Properties:Weight:Function:Uext}b) holds, the precise definition of $\omega$ is only used in Lemma \ref{lem:APriori:Estimates:Uext} to show that $\omega |d_{A}\Phi|$ is uniformly bounded on the exterior domain $U_{\sigma}$. 
Therefore we only have to explain why this quantity remains bounded. Recall that the Higgs field is a sum of Green's functions and their derivatives. Then, by Lemma \ref{lem:Asymptotics:Periodic:Dirac:Higgs:Field}.(ii) and (\ref{eqn:Properties:Weight:Function:Uext}a) \begin{alignat*}{3} \omega \left( |\nabla G_{q_{j}}| + |\nabla^{2} G_{q_{j}}| \right) \leq C \frac{ \omega _{j} }{|z-z_{j}|} \leq C & \qquad \text{ and } \qquad \omega |dG_{p_{i}}| \leq C \frac{ \omega _{0} }{|z-m_{i}|} \leq C_{i} \end{alignat*} if $|z-z_{j}| > 2$ and $|z-m_{i}|>2$, respectively, for a constant $C_{i}$ depending only on $|m_{i}|$. \qed \end{prop} On the diagonal component there is an additional technical difficulty arising from the following finite dimensional family of functions on which the Laplacian is not well-behaved. \begin{definition}\label{def:vj} For all $j=0,1,\ldots ,k$ let $\psi_{j}$ be a smooth cut-off function with $\psi _{j} \equiv 0$ if $|z-z_{j}| \leq 1$ and $\psi _{j} \equiv 1$ if $|z-z_{j}| \geq 2$. Define functions $v_{j}=-\frac{1}{4\pi ^{2}}\psi _{j} \log{|z-z_{j}|}$. \end{definition} The following two properties of $v_{j}$ are easily verified. \begin{itemize} \item[(i)] There exists a constant $C>0$ such that $\| \nabla v_{j} \| _{L^{\infty}} + \| \nabla ^{2} v_{j} \| _{L^{\infty}} \leq C$; \item[(ii)] $\int_{\R^{2} \times \Sph ^{1}}{ \triangle v_{j} }=1$. \end{itemize} Given $h \neq j$, set $u=v_{j}-v_{h}$. By (ii) $\triangle u$ has mean value zero and (i) implies that $\| \triangle u \| _{L^{2}_{\underline{\delta}-2} }\leq C$ for a uniform constant $C$. However, restricting to the annulus $2 \leq |z-z_{j}| \leq \frac{1}{2}|z_{j}-z_{h}|$, \[ \int{ |\nabla u|^{2} } \geq c_{1}\log{|z_{j}-z_{h}|} - \frac{c_{2}}{|z_{j}-z_{h}|^{2}} \xrightarrow{d\rightarrow\infty} \infty \] and this fact explains the special role of these functions.
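\begin{remark}
The logarithmic divergence can be verified by a direct computation; we sketch it here, normalising the circle $\Sph ^{1}$ to have length $2\pi$ (a different normalisation only changes the constants). Set $D=|z_{j}-z_{h}|$ and suppose $D \geq 4$, so that $\psi _{j} \equiv 1 \equiv \psi _{h}$ on the annulus $2 \leq |z-z_{j}| \leq \frac{1}{2}D$. There
\[ \nabla u = -\frac{1}{4\pi ^{2}}\left( \frac{z-z_{j}}{|z-z_{j}|^{2}} - \frac{z-z_{h}}{|z-z_{h}|^{2}} \right) \qquad \text{ and } \qquad |\nabla u| \geq \frac{1}{4\pi ^{2}}\left( \frac{1}{|z-z_{j}|} - \frac{2}{D} \right) \geq 0, \]
since $|z-z_{h}| \geq \frac{1}{2}D$ on the annulus. Integrating in polar coordinates $r=|z-z_{j}|$ over the annulus times $\Sph ^{1}$,
\[ \int{ |\nabla u|^{2} } \geq \frac{1}{4\pi ^{2}} \int_{2}^{D/2}{ \left( \frac{1}{r} - \frac{2}{D} \right) ^{2} r \, dr } = \frac{1}{4\pi ^{2}} \left( \log{\frac{D}{4}} - \frac{3}{2} + \frac{8}{D} - \frac{8}{D^{2}} \right) , \]
which grows like $\frac{1}{4\pi ^{2}}\log{D}$ as $D \rightarrow \infty$.
\end{remark}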
\begin{definition}\label{def:W} \begin{itemize} \item[(i)] Let $W$ be the finite dimensional subspace of $1$--forms with values in $\underline{\mathbb{R}} \oplus M$ \[ W= \left\{ \sum_{h=1}^{3}{ \sum_{j=0}^{k}{ \alpha_{h,j}\, d_{2}^{\ast}\left( v_{j}\, \hat{\sigma} \otimes dx_{h} \right) } } \, \text{ such that } \, \sum _{j=0}^{k}{ \alpha _{h,j} }=0 \text{ for all } h=1,2,3 \right\} . \] Define a norm on $W$ by declaring $d_{2}^{\ast}\left( v_{j}\, \hat{\sigma} \otimes dx_{h} \right)$ an orthonormal system. \item[(ii)] Given $f \in L^{2}_{\underline{\delta}-2}$ with $\int{ \langle f, \hat{\sigma} \otimes dx_{h} \rangle }=0$, denote by $\alpha (f)$ the element of $W$ defined by $\alpha _{h,j}=\int{ \langle \chi _{j}f, \hat{\sigma} \otimes dx_{h} \rangle }$. \end{itemize} \end{definition} Notice that the inclusion $L^{2}_{\underline{\delta}-2} \hookrightarrow L^{1}$ is continuous. Indeed, since $\delta >0$, (\ref{eqn:Properties:Weight:Function:Uext}a) implies that $\int{ \omega ^{-2(\delta +1)} } \leq C_{1} \sum _{j=0}^{k}{ \int{ \omega_{j} ^{-2(\delta +1)} } } < +\infty$. In particular, there exists a constant $C>0$ such that \begin{equation}\label{eqn:Bounded:alpha} |\alpha (f)| \leq C \| f \| _{L^{2}_{\underline{\delta}-2}}. \end{equation} \begin{prop}\label{prop:Linearised:Equation:Uext:Diagonal} For all $0< \delta < \frac{1}{2}$ there exists a constant $C>0$ with the following significance. For all $f= f_{D} \in L^{2}_{\underline{\delta}-2}$ such that $\int{ \langle f , \hat{\sigma } \otimes dx_{h} \rangle }=0$ for $h=1,2,3$ there exists $\xi = \xi_{D} \in W^{1,2}_{\underline{\delta}-1}$ such that $d_{2}\xi = f-d_{2}\alpha(f)$. Moreover, \[ \| \xi \| _{W^{1,2}_{\underline{\delta}-1}} \leq C \| f \| _{L^{2}_{\underline{\delta}-2}}. \] \proof The idea is to write $f=\sum _{j=0}^{k}{\chi _{j}f}$ and use Proposition \ref{prop:Linearised:Equation:Uext} to find a solution to $d_{2}\xi=f$ of the form $\xi = \sum _{j=0}^{k}{d_{2}^{\ast}u_{j}}$. 
Set $\alpha _{h,j}=\int { \langle \chi_{j}f, \hat{\sigma} \otimes dx_{h}\rangle }$. The $1$--form $f_{j}=\chi_{j}f-\sum _{h=1}^{3}{\alpha _{h,j}\, d_{2}d_{2}^{\ast} \left( v_{j}\, \hat{\sigma} \otimes dx_{h}\right) }$ is now orthogonal to constant forms, \emph{i.e.} $\int{\langle f_{j}, \hat{\sigma} \otimes dx_{h} \rangle } =0$ for all $h=1,2,3$. Moreover, $\| f_{j} \| _{L^{2}_{\underline{\delta}-2,j}} \leq C \| f \| _{L^{2}_{\underline{\delta}-2}}$ by (\ref{eqn:Properties:Weight:Function:Uext}a) and property (i) after Definition \ref{def:vj}. We can therefore apply Proposition \ref{prop:Linearised:Equation:Uext}: there exists $u_{j}$, unique up to the addition of a constant, with the following properties: \begin{itemize} \item[(i)] $u_{j}$ is defined on $(\R^{2} \times \Sph ^{1}) \setminus S$ if $j=0$ and on $(\R^{2} \times \Sph ^{1}) \setminus \{ q_{j} \}$ otherwise; \item[(ii)] $d_{2}d_{2}^{\ast} u_{j}=f_{j}$; \item[(iii)] $\| \omega _{j}^{\delta}d_{2}^{\ast}u _{j} \| _{L^{2}} + \| \omega _{j}^{\delta+1}\nabla (d_{2}^{\ast}u_{j})\| _{L^{2}} \leq C \| f \| _{L^{2}_{\underline{\delta}-2}}$; \item[(iv)] $ \| u_{0}|_{B_{2\sigma}(p_{i})} \| _{W^{2,2}_{\underline{\delta}}} \leq C \| f \| _{L^{2}_{\underline{\delta}-2}}$ for all $i=1, \ldots, n$; \item[(v)] $\| u_{j}|_{B_{1}(q_{j})} \| _{W^{2,2}_{\underline{\delta}}} \leq C \| f \| _{L^{2}_{\underline{\delta}-2}}$ for $j \neq 0$. \end{itemize} Set $\xi = \sum _{j=0}^{k}{ d_{2}^{\ast}u_{j} }$. We have to show that $\xi \in W^{1,2}_{\underline{\delta}-1}$. Restrict first to the exterior domain $U_{\sigma}$. The fact that $\xi \in W^{1,2}_{\underline{\delta}-1}$ follows immediately from the second statement in (\ref{eqn:Properties:Weight:Function:Uext}a) and (iii) above. Next, consider the ball $B_{1}(q_{j})$. Since $\omega \geq 1$, (iii) implies that $d_{2}^{\ast}u_{h}|_{B_{1}(q_{j})} \in W^{1,2}$ if $h$ and $j$ are distinct and neither equal to $0$ and similarly $d_{2}^{\ast}u_{h}|_{B_{2\sigma}(p_{i})} \in W^{1,2}$ if $h\neq 0$.
Then the Sobolev embedding $W^{1,2} \hookrightarrow L^{6}$ implies that \[ \| d_{2}^{\ast}u_{h} \| _{W^{1,2}_{\underline{\delta}-1}(B_{1}(q_{j}))} \leq \| \hat{\rho}_{j}^{\delta -\frac{1}{2}} \| _{L^{3}}\, \| d_{2}^{\ast}u_{h} \| _{L^{6}(B_{1}(q_{j}))} + \| \hat{\rho}_{j}^{\delta +\frac{1}{2}}\| _{L^{\infty}} \, \| \nabla (d_{2}^{\ast}u_{h} ) \| _{L^{2}(B_{1}(q_{j}))} \leq C \| f \| _{L^{2}_{\underline{\delta}-2}}. \] The norms of $\hat{\rho}_{j}$ here are bounded because $\delta \in (0,\frac{1}{2})$. Up to changing $\delta$ into $-\delta$, the same argument yields a similar estimate on $B_{2\sigma}(p_{i})$. Together with (iv) and (v) above this concludes the proof. \qed \end{prop} \subsection{Solving the linearised equation modulo obstructions}\label{sec:Linearised:Equation:Modulo:Obstructions} With these technical details out of the way, we combine Propositions \ref{prop:Linearised:Equation:Uj}, \ref{prop:Linearised:Equation:Uext}, \ref{prop:Linearised:Equation:Uext:Offdiagonal} and \ref{prop:Linearised:Equation:Uext:Diagonal} to solve the equation $d_{2}\xi=f$ modulo obstructions. Fix $0< \delta < \frac{1}{2}$. We define weighted Sobolev spaces as in Definition \ref{def:Weighted:Spaces:Uext}, with the difference that over $B_{1}(q_{j})$ we replace $\hat{\rho}_{j}$ of \eqref{eqn:Weight:Function:Uext:Singularities} with the smooth weight function $w_{j}$ of \eqref{eqn:Weight:Function:Uj}. Moreover, the weight function $\omega$ is defined differently in the two situations: \begin{itemize} \item[(A)] $S \cup \{ q_{1}, \ldots , q_{k} \} \subset B_{R_{0}} \times \mathbb{S} ^{1}$ for some $R_{0}>0$; \item[(B)] $d \rightarrow \infty$ and $S, q_{1}, \ldots , q_{k}$ satisfy Assumption \ref{assumption} for some $K'>0$. 
\end{itemize} With these modifications and distinctions understood, the $W^{m,2}_{\underline{\delta}+m-2}$--norm coincides with the $W^{m,2}_{w,\delta}$--norm of Definition \ref{def:Weighted:Spaces:Uj} over $U_{j}$; over $U_{\text{ext}}$, the spaces $W^{m,2}_{\underline{\delta}+m-2}$ are equivalent to the ones used in Proposition \ref{prop:Linearised:Equation:Uext} in case (A) and to those introduced in Propositions \ref{prop:Linearised:Equation:Uext:Offdiagonal} and \ref{prop:Linearised:Equation:Uext:Diagonal} in case (B). In the latter case, consider also the finite dimensional space $W$ introduced in Definition \ref{def:W}. It is necessary to introduce cut-off functions $\gamma _{j},\gamma _{\text{ext}}, \beta _{j}, \beta _{\text{ext}}$ with some specific properties. Let $\gamma_{j}$ be a smooth function supported in $B_{ 2\delta_{j} }(q_{j})$ and such that $\gamma _{j} \equiv 1$ when $\rho _{j} \leq \frac{\delta_{j}}{2}$. We may moreover assume that $|\nabla \gamma _{j}|\leq \frac{2}{\delta_{j}}$. Define $\gamma _{\text{ext}}$ by $\gamma _{\text{ext}} = 1- \gamma _{j}$ if $\rho _{j} \leq 2\delta_{j}$ and $\gamma _{\text{ext}} \equiv 1$ otherwise. The cut-off functions $\beta _{j}$ and $\beta _{\text{ext}}$ are defined in \cite[Lemma 7.2.10]{Donaldson:Kronheimer}: $\beta _{j}$ is a smooth function such that $\beta _{j} \equiv 1$ on $B_{2\delta _{j}}(q_{j})$, $\beta _{j} \equiv 0$ if $\rho _{j} \geq N\delta _{j}$ and \begin{equation}\label{eqn:CutOff:Gluing} | \nabla \beta _{j} | \leq \frac{C}{ \log{N} }\frac{1}{\rho _{j}}. \end{equation} Similarly, $\beta _{\text{ext}} \equiv 1$ outside of $\bigcup _{j=1}^{k}{ B_{\frac{\delta _{j}}{2}}(q_{j}) }$, $\beta _{\text{ext}} \equiv 0$ on $B_{N^{-1}\delta _{j}}(q_{j})$ and $| \nabla \beta _{\text{ext}} | \leq \frac{C}{ \log{N} }\frac{1}{\rho _{j}}$. Recall also that in Definition \ref{def:Obstruction:Basis} we distinguished smooth sections $o_{h} \in \Omega(V)$.
They are supported on $U_{\text{ext}}$ and under the identification $V|_{U_{\text{ext}}} \simeq \underline{\mathbb{R}} \oplus M$ have only diagonal component. The crucial property of $o_{h}$, $h=1,2,3$, is given by the following lemma. \begin{lemma}\label{lem:Obstruction:Pairing} For all $h,l=1,2,3$, $\langle d_{2}o_{h}, \hat{\sigma } \otimes dx_{l} \rangle _{L^{2}} = \delta _{hl}$. \proof Since $c(x_{0},\tau)$ is abelian on $U_{\text{ext}}$, $d_{2}o_{h} \oplus d_{1}^{\ast}o_{h}=\slashed{D}o_{h}$, where $\slashed{D}$ is the Dirac operator of $\R^{2} \times \Sph ^{1}$. Since the Clifford multiplication by $dx_{l}$ commutes with $\slashed{D}$, it is enough to prove that \begin{alignat*}{2} \langle \slashed{D}o_{4}, \hat{\sigma } \otimes dx_{h} \rangle _{L^{2}} = 0, & \qquad \langle \slashed{D}o_{4}, \hat{\sigma } \rangle _{L^{2}} = 1. \end{alignat*} This follows by direct calculation using the definition $o _{4}=-\frac{1}{2\pi k}\sum_{j=1}^{k}{\left( \chi^{j}_{\text{ext}}\, dG_{q_{j}}, 0 \right) }\hat{\sigma}$. For example, \[ -\langle \slashed{D}(\chi\, dG,0), \hat{\sigma} \rangle _{L^{2}} = \int_{B} {d(\chi \ast dG)}=\int_{\partial B}{\ast dG}= 2\pi \] implies the second identity. \endproof \end{lemma} We refer to $\operatorname{span}\{d_{2}o_{h}\, | \, h=1,2,3\} \subset L^{2}_{\underline{\delta}-2}$ as the \emph{obstruction space}. Define $\pi\colon\thinspace L^{2}_{\underline{\delta}-2} \rightarrow L^{2}_{\underline{\delta}-2}$ by \begin{equation}\label{eqn:Obstruction:Projection} \pi (f)=f-\sum_{h=1}^{4}{ \langle f , \gamma _{\text{ext}}\, \hat{\sigma }\otimes dx_{h} \rangle _{L^{2}} \,d_{2}o_{h} }. \end{equation} \begin{lemma}\label{lem:Continuity:Projection:Obstructions} There exists a constant $C$ such that \[ \|\pi(f)\| _{L^{2}_{\underline{\delta}-2}} \leq C \left( N^{-2}\lambda \right) ^{\frac{1-\delta}{2}} \| f \| _{L^{2}_{\underline{\delta}-2}}. 
\] Furthermore, if $f$ is supported on the union of the annuli $\text{Ann}_{j,\text{int}} \cup \text{Ann}_{j} \cup \text{Ann}_{j,\text{ext}}$ for $j=1, \ldots, k$ then the estimate can be improved to \[ \| \pi(f) \| _{L^{2}_{\underline{\delta}-2}} \leq C \| f \| _{L^{2}_{\underline{\delta}-2}}. \] \proof Recall that $L^{2}_{\underline{\delta}-2} \hookrightarrow L^{1}$ is continuous. Moreover, if $f$ is supported on $\text{Ann}_{j,\text{int}} \cup \text{Ann}_{j} \cup \text{Ann}_{j,\text{ext}}$ \[ \| f \| _{L^{1}} \leq C \left( N^{-2}\lambda \right) ^{-\frac{1-\delta}{2}} \| f \| _{L^{2}_{\underline{\delta}-2}}. \] Therefore, it is enough to estimate $\| d_{2}o_{h}\| _{L^{2}_{\underline{\delta}-2}}$. From Definition \ref{def:Obstruction:Basis} $|d_{2}o_{h}| \leq C \sum _{j=1}^{k}{ \rho ^{-2}_{j}|\nabla \chi ^{j}_{\text{ext}}| }$ and, since $\nabla \chi ^{j}_{\text{ext}}$ is supported in the region where $w_{j} \sim \rho _{j}$ uniformly, we conclude \begin{equation}\label{eqn:Bound:d2:oh} \| d_{2}o_{h} \| _{L^{2}_{\underline{\delta}-2}} \leq C \left( N^{-2}\lambda \right) ^{\frac{1-\delta}{2}}. \qedhere \end{equation} \end{lemma} The following theorem yields the solution to the linear problem modulo obstruction. \begin{thm}\label{thm:Linearised:Equation} Fix $0< \delta < \frac{1}{2}$. \begin{itemize} \item[(A)] Fix $R_{0}>0$ such that $S \cup \{ q_{1}, \ldots, q_{k} \} \subset B_{R_{0}} \times \mathbb{S} ^{1}$. There exist $\varepsilon>0$, $N_{0} > 2$ and $C$ with the following significance. Suppose that $\| w_{j}\Psi |_{B_{1}(q_{j})} \| _{L^{3}} < \varepsilon$ for all $j=1, \ldots ,k$ and $N > N_{0}$. Then there exists a map $Q\colon\thinspace \text{im }\pi \subset L^{2}_{\underline{\delta}-2} \rightarrow W^{1,2}_{\underline{\delta}-1}$ such that $\pi \circ d_{2} \circ Q(f)=f$ and \[ \| Qf \| _{W^{1,2}_{\underline{\delta}-1}} \leq C \| f \| _{L^{2}_{\underline{\delta}-2}}.
\] \item[(B)] Suppose that $S,q_{1}, \ldots ,q_{k}$ satisfy Assumption \ref{assumption} for some $K'>0$. There exist $\varepsilon>0$, $N_{0} > 2$ and $C>0$ with the following significance. Suppose that $\| w_{j}\Psi|_{B_{1}(q_{j})} \| _{L^{3}} < \varepsilon$ for all $j=1, \ldots ,k$ and $N > N_{0}$. Then there exists a map $Q=(Q_{1},Q_{2})$, where \[ Q_{1}\colon\thinspace \text{im }\pi \subset L^{2}_{\underline{\delta}-2} \rightarrow W^{1,2}_{\underline{\delta}-1}, \qquad Q_{2} \colon\thinspace \text{im }\pi \subset L^{2}_{\underline{\delta}-2} \rightarrow W, \] such that $\pi \circ d_{2} \circ Q_{1}(f)+\pi \circ d_{2}\big( \beta_{\text{ext}}\, Q_{2}(f) \big)=f$. Moreover, \[ \| Q_{1}(f) \| _{W^{1,2}_{\underline{\delta}-1}} + |Q_{2}(f)| \leq C \| f \| _{L^{2}_{\underline{\delta}-2}}. \] \end{itemize} \proof We prove the statement in (B). The statement in (A) follows in a similar way using Proposition \ref{prop:Linearised:Equation:Uext} instead of Propositions \ref{prop:Linearised:Equation:Uext:Offdiagonal} and \ref{prop:Linearised:Equation:Uext:Diagonal}. By abuse of notation, regard $d_{2}$ as the operator $d_{2}\colon\thinspace W^{1,2}_{\underline{\delta}-1} \oplus W \rightarrow L^{2}_{\underline{\delta}-2}$ defined by \[ d_{2}(\xi,\eta)= d_{2}( \xi + \beta _{\text{ext}}\, \eta ). \] For $f \in L^{2}_{\underline{\delta}-2}$ with $f=\pi(f)$, write $f= \sum _{j=1}^{k}{ \gamma_{j}f }+\gamma_{\text{ext}}f$ and define maps $Q_{1}'\colon\thinspace \text{im }\pi \rightarrow W^{1,2}_{\underline{\delta}-1}$ and $Q_{2}'\colon\thinspace \text{im }\pi \rightarrow W$ as follows: $Q_{2}'(f)=\alpha(\gamma _{\text{ext}}\, f)$, where $\alpha$ is the map of Definition \ref{def:W}.(ii), while \[ Q_{1}'(f)=\sum_{j=1}^{k}{\beta _{j}\,\xi_{j}} + \beta_{\text{ext}}\, \xi _{\text{ext}}.
\] Here $\xi_{j}=d_{2}^{\ast}u_{j}$ for the solution $u_{j}$ to $d_{2}d_{2}^{\ast}u_{j}=\gamma _{j}f$ of Proposition \ref{prop:Linearised:Equation:Uj} and $\xi_{\text{ext}}$ is the solution to $d_{2}\xi_{\text{ext}}=\gamma _{\text{ext}}f-\alpha (\gamma _{\text{ext}}f)$ obtained combining Propositions \ref{prop:Linearised:Equation:Uext:Offdiagonal} and \ref{prop:Linearised:Equation:Uext:Diagonal}. In particular, using \eqref{eqn:Bounded:alpha} and \eqref{eqn:CutOff:Gluing}, we deduce the existence of a constant $C>0$ such that \[ \| Q_{1}'(f) \| _{W^{1,2}_{\underline{\delta}-1}} + |Q_{2} '(f)| \leq C \| f \| _{L^{2}_{\underline{\delta}-2}}. \] Set $Q'=(Q_{1}',Q_{2}')$. We have $f-d_{2} Q'(f) = \sum_{j=1}^{k}{ \nabla \beta _{j} \cdot \xi_{j} } + \nabla \beta _{\text{ext}} \cdot \xi_{\text{ext}} + \nabla \beta _{\text{ext}} \cdot \alpha (\gamma _{\text{ext}}f)$. By \eqref{eqn:CutOff:Gluing} and H\"older's inequality \[ \| f-d_{2} Q'(f) \| _{L^{2}_{\underline{\delta}-2}} \leq \frac{C}{\log{N} } \| Q_{1}'(f) \| _{W^{1,2}_{\underline{\delta}-1}} + C\frac{\lambda ^{-\frac{1+\delta}{2}}}{\log{N} } \| Q_{2} '(f) \| _{L^{\infty}} \leq \frac{C}{\log{N} } \| f \| _{L^{2}_{\underline{\delta}-2}} \] because $\| \eta \| _{L^{\infty}} \leq C |\eta|$ for all $\eta \in W$. Since $f-d_{2} Q'(f) $ is supported in $\bigcup_{j=1}^{k}{\text{Ann}_{j}}$, Lemma \ref {lem:Continuity:Projection:Obstructions} yields $\| f- \pi \circ d_{2}\circ Q'(f) \| _{L^{2}_{\underline{\delta}-2}} \leq \frac{C}{\log{N}} \| f \| _{L^{2}_{\underline{\delta}-2}}$ and if $N$ is sufficiently large we can iterate. \endproof \end{thm} \section{Deformation}\label{sec:Deformation} In this section we complete the construction of a family of solutions to the Bogomolny equation by deforming the approximate solutions $c(x_0,\tau)$ given by the pre-gluing map of Proposition \ref{prop:Pregluing:Map}. 
Using the projection $\pi$ of \eqref{eqn:Obstruction:Projection}, we split the non-linear equation \eqref{eqn:NonLinear:Equation} into an infinite dimensional and a finite dimensional equation. We solve the infinite dimensional equation first. The following lemma, an immediate consequence of the contraction mapping principle, is a quantitative version of the Implicit Function Theorem adapted to case (B) in Theorem \ref{thm:Linearised:Equation}. The statement in case (A) is obtained by setting $W=\{ 0 \}$ (and therefore $\eta =0$). \begin{comment} Given $(x_{0},\tau) \in \mathcal{P}$ let $c(x_{0},\tau)$ be the associated pair via the pre-gluing map of Lemma \ref{lem:Pregluing:Map}. \end{comment} \begin{lemma}\label{lem:Implicit:Function:Theorem} Given $c(x_{0},\tau)$ for some $(x_{0},\tau) \in \mathcal{P}$, let $\Psi\colon\thinspace W^{1,2}_{\underline{\delta}-1} \oplus W \rightarrow L^{2}_{\underline{\delta}-2}$ be the smooth map \[ \Psi (\xi, \eta)=d_{2}(\xi +\beta _{\text{ext}}\eta ) + (\xi +\beta _{\text{ext}}\eta ) \cdot (\xi +\beta _{\text{ext}}\eta ) + \Psi (x_{0},\tau). \] Suppose that the following conditions hold. \begin{itemize} \item[(i)] There exists a projection $\pi\colon\thinspace L^{2}_{\underline{\delta}-2} \rightarrow L^{2}_{\underline{\delta}-2}$ such that the map $\pi \circ d_{2}\colon\thinspace W^{1,2}_{\underline{\delta}-1} \oplus W \rightarrow \text{im }\pi$ admits a right inverse $Q=(Q_{1},Q_{2})$ with \[ \| Q_{1}(f) \| _{ W^{1,2}_{\underline{\delta}-1} } + |Q_{2} (f)| \leq C \| f\| _{L^{2}_{\underline{\delta}-2}} \] for all $f \in L^{2}_{\underline{\delta}-2}$ with $\pi(f)=f$. \item[(ii)] There exists $q>0$ such that \[ \left\| \pi \Big( (\xi ,\eta ) \cdot (\xi ,\eta )\Big) - \pi\Big( (\xi' ,\eta' ) \cdot (\xi' ,\eta' ) \Big) \right\| _{ L^{2}_{\underline{\delta}-2} } \leq q \left( \| \xi + \xi' \| _{W^{1,2}_{\underline{\delta}-1} }+| \eta + \eta' | \right) \left( \| \xi - \xi' \| _{W^{1,2}_{\underline{\delta}-1} }+| \eta - \eta' | \right) .
\] Here $(\xi ,\eta ) \cdot (\xi' ,\eta' ) = (\xi +\beta _{\text{ext}}\eta ) \cdot (\xi' +\beta _{\text{ext}}\eta' )$. \item[(iii)] The error $\Psi (x_{0},\tau)$ satisfies $\| \pi \big( \Psi(x_{0},\tau) \big) \| _{ L^{2}_{\underline{\delta}-2} }\leq \frac{1}{8qC^{2}}$. \end{itemize} Then there exists a unique $(\xi ,\eta) \in \text{im }Q \subset W^{1,2}_{\underline{\delta}-1} \oplus W$ such that $\pi \big( \Psi (\xi, \eta) \big)=0$. Moreover, \[ \| \xi \| _{ W^{1,2}_{\underline{\delta}-1} }+|\eta| \leq 2C\| \pi \big( \Psi (x_{0},\tau) \big) \| _{ L^{2}_{\underline{\delta}-2} }. \] \end{lemma} Theorem \ref{thm:Linearised:Equation} shows that (i) holds provided $\| w_{j}\Psi (x_{0},\tau) \| _{L^{3}} < \varepsilon$ in every ball $B_{1}(q_{j})$, $j=1, \ldots, k$. The next two lemmas imply that (ii) and (iii) are also satisfied if $\lambda$ is sufficiently large. \begin{lemma}\label{lem:Quadratic:Term} There exists $C>0$ such that condition (ii) of Lemma \ref{lem:Implicit:Function:Theorem} holds with $q=C\lambda^{\frac{1+\delta}{2}}$. \proof Observe that the product $\cdot$ induced by the Clifford multiplication and the Lie bracket is commutative. In particular, $(\xi ,\eta ) \cdot (\xi ,\eta )-(\xi' ,\eta' ) \cdot (\xi' ,\eta' ) = (\xi +\xi', \eta + \eta') \cdot (\xi -\xi', \eta - \eta')$. We will show that $\cdot$ defines a continuous map $\left( W^{1,2}_{\underline{\delta}-1} \oplus W \right) \times \left( W^{1,2}_{\underline{\delta}-1} \oplus W \right) \rightarrow L^{2}_{\underline{\delta}-2}$ such that \begin{equation}\label{eqn:Quadratic:Term} \| (\xi,\eta) \cdot (\xi',\eta') \| _{L^{2}_{\underline{\delta}-2}} \leq C \lambda ^{\delta} \| (\xi, \eta) \| _{W^{1,2}_{\underline{\delta}-1} \oplus W} \| (\xi', \eta') \| _{W^{1,2}_{\underline{\delta}-1} \oplus W}. \end{equation} The lemma then follows combining \eqref{eqn:Quadratic:Term} with Lemma \ref{lem:Continuity:Projection:Obstructions}. 
The continuity \eqref{eqn:Quadratic:Term} of the product follows from H\"older's inequality and the Sobolev embedding $W^{1,2} \hookrightarrow L^{6}$ along the lines of \cite[Lemmas 5.18 and 6.10]{Foscolo:Deformation}. The crucial observation is that, with respect to the decomposition $V \simeq \underline{\mathbb{R}} \oplus M$ over $U_{\text{ext}}$, there is no $\xi _{D} \cdot \xi'_{D}$ term in the product because $\cdot$ is induced by the Lie bracket on $\Lie{su}(2)$. Moreover, recall that $\beta _{\text{ext}}\eta$ for $\eta \in W$ has only diagonal component. We now provide some details. Restrict first to the ball $B_{1}(q_{j})$. If $\xi \in W^{1,2}_{\underline{\delta}-1}$ and $\eta \in W$ \[ \int_{B_{1}(q_{j})}{ w_{j}^{2\delta +1}|\xi \cdot \beta_{\text{ext}}\eta|^{2}} \leq C \| w_{j}\eta \|^{2}_{L^{\infty}} \int_{B_{1}(q_{j})}{ w_{j}^{2\delta -1}|\xi |^{2} } \leq C|\eta|^{2} \|\xi \| ^{2}_{W^{1,2}_{\underline{\delta}-1}} \] because $w_{j} \leq 1$ and $\| \eta \| _{L^{\infty}} \leq C|\eta|$ by Definition \ref{def:W}. On the other hand, in order to estimate the norm $\| \xi \cdot \xi' \| _{L^{2}_{\underline{\delta}-2}}$ for $\xi,\xi' \in W^{1,2}_{\underline{\delta}-1}$, by H\"older's inequality it is enough to observe that \[ \int_{B_{1}(q_{j})}{ w_{j}^{2\delta +1} |\xi|^{4} } \leq \lambda _{j}^{2\delta}\, \| w_{j}^{\delta -\frac{1}{2}}\xi \| _{L^{2}}\, \| w_{j}^{\delta +\frac{1}{2}}\xi \| ^{3}_{L^{6}} \leq C \lambda ^{2\delta}_{j} \| \xi \| ^{4}_{W^{1,2}_{\underline{\delta}-1}}. \] The last inequality follows from the continuity of the Sobolev embedding $W^{1,2}_{\underline{\delta}-1} \hookrightarrow w_{j}^{-\delta -\frac{1}{2}}L^{6}$ on the ball $B_{1}(q_{j})$. The factor of $\lambda _{j}$ in the first inequality is due to the fact that $w_{j} \geq \lambda _{j}^{-1}$. This establishes \eqref{eqn:Quadratic:Term} on $B_{1}(q_{j})$.
The same calculations with $-\delta$ in place of $\delta$ yield the desired estimate also on the ball $B_{2\sigma}(p_{i})$ for all $i=1, \ldots, n$, \emph{cf.} \cite[Lemma 5.18]{Foscolo:Deformation}. On the exterior domain $U _{\sigma }$ write $\xi = \xi _{D} + \xi _{T}$ with respect to the decomposition $V \simeq \underline{\mathbb{R}} \oplus M$. Because of the properties \eqref{eqn:Properties:Weight:Function:Uext} of the weight function $\omega$, if $\xi \in W^{1,2}_{\underline{\delta}-1}$ then $\omega ^{\delta}\xi _{D}, \omega ^{\delta +1}\xi _{T} \in W^{1,2}$. Moreover, $W^{1,2} \hookrightarrow L^{p}$ for all $2 \leq p \leq 6$ by the Sobolev embedding. Therefore \[ \| \omega ^{\delta +1}(\xi \cdot \xi') \| _{L^{2}} \leq \| \xi \| _{L^{3}} \| \omega ^{\delta +1}\xi '_{T} \| _{L^{6}} + \| \xi' \| _{L^{3}} \| \omega ^{\delta +1}\xi _{T} \| _{L^{6}} \quad \text{ and } \quad \| \omega ^{\delta +1}(\xi \cdot \eta) \| _{L^{2}} \leq \| \eta \| _{L^{\infty}} \| \omega ^{\delta +1}\xi_{T} \| _{L^{2}} \] by H\"older's inequality. The estimate \eqref{eqn:Quadratic:Term} follows. \endproof \end{lemma} \begin{remark} The continuity \eqref{eqn:Quadratic:Term} of the product $\cdot$ justifies the claim that the map $\Psi$ in Lemma \ref{lem:Implicit:Function:Theorem} is smooth. \end{remark} \begin{lemma}\label{lem:Smallness:Error} There exists $C>0$ such that $\| \pi\big( \Psi(x_{0},\tau) \big) \| _{L^{2}_{\underline{\delta}-2}} \leq C\lambda ^{-1-\frac{\delta}{2}}$ and $\| w_{j}\Psi \|_{L^{3}} \leq C \lambda ^{-\frac{1}{2}}$ on each ball $B_{1}(q_{j})$, $j=1, \ldots, k$. \proof By Definition \ref{def:Obstruction:Basis} and Lemma \ref{lem:Obstruction:Pairing}, if $\Psi_{\zeta}$ is the component of $\Psi(x_{0},\tau)$ defined by \eqref{eqn:Error:Obstruction} then $\pi (\Psi_{\zeta}) = 0$.
By Proposition \ref{prop:Pregluing:Map}.(i) \begin{equation}\label{eqn:Error:Orthogonal:Obtruction} \| \Psi(x_{0},\tau) - \Psi_{\zeta} \| _{L^{2}_{\underline{\delta}-2}} \leq C \lambda ^{-1-\frac{\delta}{2}}. \end{equation} The second estimate in Lemma \ref{lem:Continuity:Projection:Obstructions} therefore implies the first statement of the lemma. Similarly, $\| w_{j}\left( \Psi(x_{0},\tau) - \Psi_{\zeta} \right) \| _{L^{3}} = O (\lambda ^{-1})$ on each ball $B_{1}(q_{j})$, while the estimate $\rho _{j}^{2}|\Psi _{\zeta}| \leq \frac{C}{\sqrt{\lambda}}$ in Proposition \ref{prop:Pregluing:Map}.(i) yields $\| w_{j}\Psi _{\zeta} \|_{L^{3}} =O(\lambda ^{-\frac{1}{2}})$. \endproof \end{lemma} \subsection{Existence results} We have all the ingredients to prove the main result of the paper, an existence theorem for periodic monopoles (with singularities). We begin with a rewriting of the pre-gluing map of Proposition \ref{prop:Pregluing:Map}. Fix $d_{0}\geq 5$, $K>1$, parameters $v,b$, the set $S$ of singularities and the centres of non-abelian monopoles $q_{1},\dots, q_{k}$. For all $N>2$ we fix $\lambda _{0}(N)$ sufficiently large and assume that $v,S,q_{1}, \ldots , q_{k}$ are $(\lambda _{0},d_{0},K)$--admissible. Moreover, we assume that either: \begin{itemize} \item[(A)] There exists $R_{0}>0$ such that $S, \{ q_{1}, \ldots, q_{k} \} \subset B_{R_{0}} \times \mathbb{S} ^{1}$; or \item[(B)] There exist $R_{0}, K'>0$ such that $S \subset B_{R_{0}} \times \mathbb{S} ^{1}$, $q_{1}, \ldots, q_{k} \in (\mathbb{R} ^2 \times \mathbb{S}^1) \setminus \left( B_{R_{0}} \times \mathbb{S} ^{1} \right)$ and Assumption \ref{assumption} is satisfied. \end{itemize} Finally, for $\kappa \in (0,1)$ sufficiently small, let $\mathcal{P}=\mathcal{P}_{\kappa}$ be the set of gluing data of Definition \ref{def:Gluing:Data}. Consider the family $c(x_{0},\tau)$ of Proposition \ref{prop:Pregluing:Map}. 
If $\kappa$ is sufficiently small and $\lambda _{0}(N)$ sufficiently large, we can assume that \eqref{eqn:Curvature:Uj}, \eqref{eqn:Error} and \eqref{eqn:Localisation:Zeroes:Higgs:Field} are satisfied, uniformly for all $(x_{0},\tau) \in \mathcal{P}$. \begin{definition}\label{def:Boundary:Conditions:1} Fix a base point $(0,\tau_{0}) \in \mathcal{P}$ and $\delta >0$ sufficiently small. Set $k_{\infty}=2k-n$ and $q=p_{1}+ \ldots + p_{n}$. Let $\mathcal{C}_{\underline{\delta}} = \mathcal{C}_{\underline{\delta}}(p_{1},\ldots ,p_{n},k_{\infty},v,b,q)$ be the configuration space of pairs $(A,\Phi)$ of the form $(A,\Phi) = c(0,\tau_{0}) + \xi$ with $\xi \in W^{1,2}_{\underline{\delta}-1} \oplus W$. Here and in the rest of the section we set $W=\{ 0 \}$ when condition (A) above holds. \end{definition} Observe that once $q_{1}, \ldots, q_{k}$ are fixed the weighted Sobolev norms used to define $\mathcal{C}_{\underline{\delta}}$ are equivalent to those used in \cite{Foscolo:Deformation} to define a smooth structure on the moduli space of periodic monopoles. Then the pre-gluing map of Proposition \ref{prop:Pregluing:Map} can be considered as a smooth map \[ c\colon\thinspace \mathcal{P} \rightarrow \mathcal{C}_{\underline{\delta}}. \] Smoothness follows from Lemma \ref{lem:PS:Translations} and the explicit construction of $c(x_{0},\tau)$. The group $\Gamma \simeq SO(2)$ acts on $\mathcal{P}$ by $e^{is} \cdot (x_{0},\tau)=(x_{0},\tau +s)$ and on $\mathcal{C}_{\underline{\delta}}$ as the gauge transformation $\exp{(s\gamma _{\text{ext}}\hat{\sigma})}$; the map $c$ is $\Gamma$--equivariant. \begin{comment} Write $c(x_{0},\tau)=c(0,\tau_{0}) + \xi _{0}(x_{0},\tau)$.
Going through the construction of $c(x_{0},\tau)$ in Section \ref{sec:Approximate:Solution}, we can easily estimate $\| \xi _{0}(x_{0},\tau) \| _{W^{1,2}_{\underline{\delta}-1} \oplus W}$: \begin{itemize} \item From Lemma \ref{lem:PS:Translations} and a scaling argument we deduce that over $B_{N^{-1}\delta_{j}}(q_{j})$ \[ \| \xi _{0}(x_{0},\tau) \| _{W^{1,2}_{\underline{\delta}-1}} \leq C\kappa \lambda ^{-\delta}. \] \item By the discussion after \eqref{eqn:Pregluing}, changing $\tau$ amounts to introduce a term of the form $|\tau -\tau_{0}| \left( |\nabla \gamma _{j} | + |\nabla \gamma _{\text{ext}}| \right)$. Moreover, $\|\nabla \gamma _{j} \| _{W^{1,2}_{\underline{\delta}-1}} = O(\lambda ^{-\frac{\delta}{2}})$. \item Finally over $U_{\text{ext}}$, by \eqref{eqn:c:ext:x0}, Definition \ref{def:Obstruction:Basis} and an explicit computation of the $\left( W^{1,2}_{\underline{\delta}-1} \oplus W \right)$--norm of $\nabla G_{q_{j}}$ using Lemma \ref{lem:Asymptotics:Periodic:Dirac:Higgs:Field}.(ii) \[ \| \xi _{0}(x_{0},\tau)|_{U_{\text{ext}}} \| _{W^{1,2}_{\underline{\delta}-1} \oplus W} \leq C\kappa (\lambda ^{-1}+ \lambda ^{-1-\frac{\delta}{2}}). \] \end{itemize} \end{comment} \begin{thm}\label{thm:Existence} Fix data as above and $N>N_{0}$, where $N_{0}$ is given by Theorem \ref{thm:Linearised:Equation}. Then there exists $\lambda '_{0} \geq \lambda _{0}(N)$ such that if $v,S,q_{1}, \ldots ,q_{k}$ are $(\lambda '_{0},d_{0},K)$--admissible then the following holds. \begin{itemize} \item[(i)] There exists a smooth $\Gamma$--equivariant map $c_{1}\colon\thinspace \mathcal{P} \rightarrow \mathcal{C}_{\underline{\delta}}$ such that $\pi \circ \Psi \circ c_{1}=0$. Furthermore, $c_1$ takes the form $c_{1}(x_{0},\tau)=c(x_{0},\tau) + \xi(x_{0},\tau)$ with $\xi(x_{0},\tau) \in W^{1,2}_{\underline{\delta}-1} \oplus W$ and \[ \| \xi(x_{0},\tau) \| _{W^{1,2}_{\underline{\delta}-1} \oplus W} \leq C\lambda ^{-1-\frac{\delta}{2}}. 
\] \item[(ii)] There exist smooth $\Gamma$--invariant maps $H,h\colon\thinspace \mathcal{P} \rightarrow \mathbb{R} ^{3}$ with \[ H(x_{0},\tau)=-\sum _{j=1}^{k}{ \frac{x_{0}^{j}}{\lambda _{j}} } \] and $|h(x_{0},\tau)| = O(\lambda ^{-\frac{3}{2}})$, such that $c_{1}(x_{0},\tau)$ is a solution to the Bogomolny equation if and only if $H(x_{0},\tau) + h(x_{0},\tau)=0$. \item[(iii)] Given $(x_{0},\tau) \in \mathcal{P}$ and $\zeta \in \mathbb{R} ^{3}$, let $x_{0}+\zeta$ denote the $k$--tuple $x_{0}+(\zeta, \ldots, \zeta)$. For all $(x_{0},\tau) \in \mathcal{P}_{\frac{\kappa}{2}}$ such that $H(x_{0},\tau)=0$ there exists $\zeta \in \mathbb{R} ^{3}$ such that $|\zeta|=O(\lambda ^{-\frac{1}{2}})$ and \[ H(x_{0}+\zeta,\tau)+h(x_{0}+\zeta,\tau)=0. \] \end{itemize} \proof \begin{itemize} \item[(i)] Consider the map \[ \pi \circ \Psi\colon\thinspace \mathcal{P} \times \left( W^{1,2}_{\underline{\delta}-1} \oplus W \right) \rightarrow L^{2}_{\underline{\delta}-2}, \] $(\pi\circ \Psi) (x_{0},\tau, \xi) = \pi \big( d_{2}\xi + \xi\cdot\xi + \Psi(x_{0},\tau) \big)$, which is smooth because of the smoothness of $c\colon\thinspace \mathcal{P} \rightarrow \mathcal{C}_{\underline{\delta}}$, Lemma \ref{lem:Continuity:Projection:Obstructions} and the continuity of the product $\cdot$ established in \eqref{eqn:Quadratic:Term}. Choosing $\lambda _{0}(N)$ larger if necessary, assume that $\| w_{j}\Psi(x_{0},\tau) \| _{L^{3}} = O(\lambda ^{-\frac{1}{2}}) < \varepsilon$, where $\varepsilon$ is given by Theorem \ref{thm:Linearised:Equation}. Fix $N>N_{0}$ so that Theorem \ref{thm:Linearised:Equation} holds. Finally, we can choose $\lambda '_{0} \geq \lambda _{0}(N)$ so that \[ \| \pi\big( \Psi(x_{0},\tau) \big) \| _{L^{2}_{\underline{\delta}-2}} =O(\lambda^{-1-\frac{\delta}{2}}) \leq \frac{1}{8qC^{2}} = O(\lambda^{-\frac{1}{2}-\frac{\delta}{2}}) \] whenever $\lambda > \lambda '_{0}$.
Here $q$ and $C$ are given by Lemma \ref{lem:Quadratic:Term} and Theorem \ref{thm:Linearised:Equation}, respectively, and we used Lemma \ref{lem:Smallness:Error}. Lemma \ref{lem:Implicit:Function:Theorem} therefore implies the existence of the map $c_{1}$. The fact that $c_{1}$ is smooth follows from the fact that the family of right inverses $Q$ of Theorem \ref{thm:Linearised:Equation} depends smoothly on $(x_{0},\tau)$. \item[(ii)] We are left with the three equations \[ \langle d_{2}\xi(x_{0},\tau) + \xi(x_{0},\tau) \cdot \xi(x_{0},\tau) +\Psi(x_{0},\tau) , \gamma _{\text{ext}}\,\hat{\sigma }\otimes dx_{h} \rangle_{L^{2}} =0, \] for $h=1,2,3$. We have to show that these can be written as $H(x_{0},\tau)+h(x_{0},\tau)=0$ as claimed. Write $\xi(x_{0},\tau)=\xi' + \beta _{\text{ext}}\eta$, with $\xi' \in W^{1,2}_{\underline{\delta}-1}$ and $\eta \in W$. First, we control all the negligible terms: \begin{itemize} \item[(a)] $\langle d_{2}\xi', \gamma _{\text{ext}}\,\hat{\sigma }\otimes dx_{h} \rangle_{L^{2}} =O(\lambda ^{-\frac{3}{2}})$. Indeed, integrating by parts \[ \left| \langle d_{2}\xi', \gamma _{\text{ext}}\,\hat{\sigma }\otimes dx_{h} \rangle_{L^{2}} \right| \leq \| \xi' \| _{ W^{1,2}_{\underline{\delta}-1} }\, \| w_{j}^{-\delta +\frac{1}{2} }\nabla \gamma _{\text{ext}} \| _{L^{2}} = O(\lambda ^{-\frac{3}{2}}). \] \item[(b)] $\langle d_{2}(\beta _{\text{ext}}\eta ), \gamma _{\text{ext}}\,\hat{\sigma }\otimes dx_{h} \rangle_{L^{2}} = 0$, because $\gamma _{\text{ext}} \equiv 0$ on the support of $\nabla \beta _{\text{ext}}$, $\beta _{\text{ext}}\equiv 1 \equiv \gamma_{\text{ext}}$ on the support of $d_{2}\eta$ and $\langle d_{2}\eta, \hat{\sigma}\otimes dx_{h} \rangle_{L^{2}} =0$ by Definition \ref{def:W}.
\item[(c)] By the continuity of the embedding $L^{2}_{\underline{\delta}-2} \hookrightarrow L^{1}$ and \eqref{eqn:Quadratic:Term} \[ \left| \langle \xi(x_{0},\tau) \cdot \xi(x_{0},\tau) , \gamma _{\text{ext}}\,\hat{\sigma }\otimes dx_{h} \rangle_{L^{2}} \right| \leq C \lambda ^{\delta}\| \xi(x_{0},\tau) \| ^{2}_{W^{1,2}_{\underline{\delta}-1} \oplus W} = O(\lambda^{-2}). \] \item[(d)] Finally, as in the proof of Lemma \ref {lem:Continuity:Projection:Obstructions}, \[ \left| \langle \Psi(x_{0},\tau)-\Psi _{\zeta} , \gamma _{\text{ext}} \hat{\sigma}\otimes dx_{h} \rangle_{L^{2}} \right| \leq C \lambda ^{-\frac{1-\delta}{2}} \| \Psi(x_{0},\tau)-\Psi _{\zeta} \| _{L^{2}_{\underline{\delta}-2}} = O(\lambda ^{-\frac{3}{2}}) \] because $\Psi (x_{0},\tau)-\Psi _{\zeta}$ is supported on $\bigcup_{j=1}^{k}{ (A_{j,\text{int}}\cup A_{j} \cup A_{j,\text{ext}}) }$. \end{itemize} On the other hand, by Lemma \ref {lem:Obstruction:Pairing} \[ \langle \Psi _{\zeta} , \gamma _{\text{ext}} \hat{\sigma}\otimes dx_{h} \rangle_{L^{2}} = -\sum_{j=1}^{k}{\frac{x^{j}_{0}}{\lambda _{j}}}. \] \item[(iii)] The claim follows from Brouwer's Fixed Point Theorem by writing \[ \zeta = -\frac{h(x_{0}+\zeta,\tau)}{\sum_{j=1}^{k}{\lambda _{j}^{-1}}} = O(\lambda ^{-\frac{1}{2}}). \] In the last equality we used Definition \ref{def:Hypothesis:Background:Data}.(iii). \qedhere \end{itemize} \end{thm} In view of \cite[Lemma 7.2]{Foscolo:Deformation}, if the boundary conditions are chosen generically then the monopoles constructed in the theorem are necessarily irreducible. Indeed, it is shown in \cite[Lemma 7.2]{Foscolo:Deformation} that there exists no reducible monopole satisfying the boundary conditions of Definition \ref{def:Boundary:Conditions} provided that no subset $\{ p_{i_{1}}, \ldots, p_{i_{k}} \} \subset S$ of cardinality $k$ has centre of mass at the origin. The condition is non-vacuous only when $n \geq k$. 
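To spell out the fixed-point argument used in part (iii) above (our gloss on the proof): $H$ is affine in $\zeta$, so for $(x_{0},\tau)$ with $H(x_{0},\tau)=0$ we have \[ H(x_{0}+\zeta,\tau) = -\sum_{j=1}^{k}{\frac{x_{0}^{j}+\zeta}{\lambda _{j}}} = H(x_{0},\tau) - \Big( \sum_{j=1}^{k}{\lambda _{j}^{-1}} \Big)\zeta = - \Big( \sum_{j=1}^{k}{\lambda _{j}^{-1}} \Big)\zeta, \] and therefore $H(x_{0}+\zeta,\tau)+h(x_{0}+\zeta,\tau)=0$ is precisely the fixed-point problem $\zeta = -h(x_{0}+\zeta,\tau)/\sum_{j=1}^{k}{\lambda _{j}^{-1}}$. Since $|h|=O(\lambda ^{-\frac{3}{2}})$ and, by Definition \ref{def:Hypothesis:Background:Data}.(iii), $\sum_{j=1}^{k}{\lambda _{j}^{-1}}$ is bounded below by a multiple of $\lambda^{-1}$, the right-hand side defines a continuous map of a closed ball of radius $O(\lambda ^{-\frac{1}{2}})$ in $\mathbb{R} ^{3}$ to itself, so Brouwer's Fixed Point Theorem applies.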
In conjunction with \cite[Theorem 1.5]{Foscolo:Deformation}, Theorem \ref{thm:Existence} then shows that, for generic choices of parameters and provided the mass $v$ is sufficiently large when $n \geq 2(k-1)$, the moduli space $\mathcal{M}_{n,k}$ of charge $k$ $SO(3)$ periodic monopoles with $n$ singularities is a non-empty smooth hyperk\"ahler manifold. \begin{cor}\label{cor:Existence} Fix integers $k>0$ and $0 \leq n \leq 2k$, constants $(v,b) \in \mathbb{R} \times \mathbb{R} /\mathbb{Z}$ and distinct points $p_{1}, \ldots, p_{n}, q_{1}, \ldots, q_{k} \in \mathbb{R} ^{2} \times \mathbb{S} ^{1}$ with $q_{1}+\ldots+q_{k}=0$. If $n \geq k$ assume that no subset $\{ p_{i_{1}}, \ldots, p_{i_{k}} \}$ has the origin as its centre of mass. Furthermore, assume that conditions (i), (iii) and (iv) of Definition \ref{def:Hypothesis:Background:Data} are satisfied with $d_{0}=5$ and some $K>1$. Let $\mathcal{C}_{\underline{\delta}}=\mathcal{C}_{\underline{\delta}}(p_{1},\ldots ,p_{n},k_{\infty},v,b,q)$ be the configuration space of Definition \ref{def:Boundary:Conditions:1} for some fixed $\delta >0$ sufficiently small. \begin{itemize} \item[(i)] Fix $R_{0}>0$ such that $p_{1}, \ldots, p_{n},q_{1}, \ldots, q_{k} \in B_{R_{0}} \times \mathbb{S} ^{1}$. Then there exists $v_{0}>0$ such that if $v > v_{0}$ the configuration space $\mathcal{C}_{\underline{\delta}}$ contains irreducible monopoles. \item[(ii)] Assume that $0 \leq n < 2(k-1)$ and that Assumption \ref{assumption} holds for some $K'>1$. Then there exists $d_{0}>0$ such that if the minimum distance $d$ of \eqref{eqn:Distance} satisfies $d >d_{0}$, then the configuration space $\mathcal{C}_{\underline{\delta}}$ contains irreducible monopoles. \item[(iii)] If $n=2(k-1)$ and the conditions of (ii) are satisfied, there exist $v_{0}$ and $d_{0}$ such that if $v > v_{0}$ and $d >d_{0}$ then irreducible monopoles exist in $\mathcal{C}_{\underline{\delta}}$. 
\end{itemize} \end{cor} The statement in (i) establishes the existence of monopoles for any $0 \leq n \leq 2k$ in the high mass case. In case (ii) the mass $v$ is arbitrary and we require that the points $q_{1}, \ldots, q_{k}$ are widely separated. The statement in (iii) follows from \eqref{eqn:Limit:Mass:Large:Distance}: when $n=2(k-1)$, $\lambda _{j}=O(v)$ for all $j$ as $d \rightarrow \infty$. The large distance case considered in (ii) and (iii) is interesting because, in analogy with Taubes's gluing result for Euclidean monopoles, we expect to be able to use Corollary \ref{cor:Existence} to give a description of the asymptotic geometry of the moduli spaces of periodic monopoles (with singularities). We will address this question in a future paper.
\section{Introduction} Recently we have proposed a vector theory of gravity~\cite{ref:1}. In this theory the four-dimensional Spacetime is assumed to be equipped with a non-dynamical metric $g_{\mu\nu}$ which is the background metric to give the Spacetime the notion of world length. Any given metric is legitimate, but only those metrics that extremize our proposed action are the metrics that will be observed classically. Our proposed action comes from our observation that some fundamental laws of Nature, namely the Law of Inertia and the Causality Principle, are preserved under a family of transformations of reference frames called the affine transformations $GL(4\ R)$, on every local patch of the Spacetime. Therefore we believe that it is natural for us to assume that physics should be invariant under these local transformations. These local transformations form a Lie group, and hence we are tempted to construct a local Yang-Mills theory~\cite{ref:1} based on this local group. This Yang-Mills theory is, of course, a vector theory, and has 16 gauge vector bosons $A^m_{\ n\mu}$. Two sets of equations will follow when we vary the metric $g_{\mu\nu}$ and the gauge potentials $A^m_{\ n\mu}$ independently so as to extremize the Yang-Mills action when we are looking for classical solutions to the theory. The first set of equations, which will be called the Stephenson Equation~\cite{ref:2}, gives algebraic relations among the various components of the Yang-Mills strength tensor. The second set of equations is the Yang-Mills equation in the presence of a background metric, and will be called the Stephenson-Kilmister-Yang Equation~\cite{ref:3}. 
The Yang-Mills action can be transformed into a form that might look familiar to us when we make a variable substitution of the 64 variables $A^m_{\ n\mu}$ by a new set of 64 variables $\Gamma^{\rho}_{\ \tau\mu}$ through \begin{equation} \label{eq:YM_potential} A^{m}_{\ n\mu} = e^{m}_{\ \rho} e_{n}^{\ \tau} \Gamma^{\rho}_{\ \tau\mu} + e^{m}_{\ \tau} \partial_{\mu} e_{n}^{\ \tau}, \end{equation} where $e^m_{\ \rho}$ are the vierbein fields for the background metric in reference to a local Minkowskian frame (we shall use the Latin and Greek indices to denote, respectively, local Minkowskian and world coordinates. A hat is always put on an index when we want to emphasize that we are talking about local coordinates). With the new variables $\Gamma^{\rho}_{\ \tau\mu}$, our proposed action will look like~\cite{ref:1} \begin{eqnarray} \label{eq:YM_action} {\rm S_{YM}}\left[g, A, \partial A \right] = {\rm S_{YM}}\left[g, \Gamma, \partial\Gamma \right] &=& \kappa \int \sqrt{-g} d^4x g^{\mu\mu'} g^{\nu\nu'} ( F^{m}_{\ n\mu\nu}F^{n}_{\ m\mu'\nu'} ) \\ &=& \kappa \int \sqrt{-g} d^4x g^{\mu\mu'} g^{\nu\nu'} ( R^{\lambda}_{\ \sigma\mu\nu}R^{\sigma}_{\ \lambda\mu'\nu'} ). \nonumber \end{eqnarray} where $F^{m}_{\ n\mu\nu} = {\partial_{\mu}A^{m}_{\ n\nu} - \partial_{\nu}A^{m}_{\ n\mu} + A^{m}_{\ p\mu}A^{p}_{\ n\nu}- A^{m}_{\ p\nu}A^{p}_{\ n\mu} }$ stands for the Yang-Mills field strength tensor, $R^{\lambda}_{\ \sigma\mu\nu} = {\partial_{\mu}\Gamma^{\lambda}_{\ \sigma\nu} - \partial_{\nu}\Gamma^{\lambda}_{\ \sigma\mu} + \Gamma^{\lambda}_{\ \rho\mu}\Gamma^{\rho}_{\ \sigma\nu}- \Gamma^{\lambda}_{\ \rho\nu}\Gamma^{\rho}_{\ \sigma\mu} }$ stands for the Riemannian curvature tensor, and $\kappa$ is a dimensionless coupling constant for the theory. 
The corresponding Stephenson and Stephenson-Kilmister-Yang Equations will be, respectively \begin{eqnarray} \label{eq:S_SKY_eq} && R_{\ \sigma\theta\rho}^{\lambda} R_{ \ \lambda\tau}^{\sigma \ \ \ \rho} -\frac{1}{4}g_{\theta \tau}R^{\lambda \ \xi\rho}_{\ \sigma }R_{\ \lambda\xi\rho}^{\sigma } = \frac{1}{2\kappa}T_{\theta\tau}, \\ && \nabla_{\rho}(\Gamma)(\sqrt{-g}R^{ \ \beta\rho\lambda}_{\sigma }) = \frac{1}{\kappa} \sqrt{-g}S^{\ \beta\lambda}_{\sigma} \nonumber. \end{eqnarray} The $T_{\theta\tau}$ and $S^{\ \beta\lambda}_{\sigma}$ are respectively the metric energy-momentum tensor and the gauge current tensor of the matter source~\cite{ref:1}. Before we make further discussions, we find it imperative for us to stress that the action given in Eq.~\ref{eq:YM_action}, though it looks very similar to the ones found on many occasions in the discussion of the theory of gravity, is fundamentally different from them because there exists no prior relationship between $g_{\mu\nu}$ and $\Gamma^{\rho}_{\ \tau\mu}$ in our theory. The geometric objects here, such as the connections and the Riemann curvature tensor, are, in fact, the Yang-Mills gauge potentials and the Yang-Mills field strength tensors of $GL(4\ R)$ in disguise~\cite{ref:1}. We have converted the original Yang-Mills theory into a theory in geometric language because we want to make use of the many works done in the past decades by people working in the geometric theory of gravity. The theory here is not a higher derivative theory of the metric and the only dynamical variables are the Yang-Mills potentials which satisfy a second order differential equation. The theory given in Eqs.~\ref{eq:YM_potential},~\ref{eq:YM_action} and~\ref{eq:S_SKY_eq} is shown to be able to comply with the gravitational tests in the solar system because the Yang-Mills potentials induce the Schwarzschild metric as one of the admissible metrics~\cite{ref:1}. 
Furthermore there are some other physical phenomena that are predicted by the above theory which are not predicted by the theory of General Relativity. These new predictions include the existence of two gravitational copies of matter which reproduce the effects of Dark Matter as observed in astronomy and also the existence of a primordial torsion which mimics the effects of Dark Energy in the late time evolution of the Universe~\cite{ref:1}. \section{Cosmology From The Point of View Of The $GL(4\ R)$ YANG-MILLS Theory} In order to be sure that this vector theory of gravity, based on the local gauge theory of $GL(4\ R)$ with a non-dynamical world metric, is a viable theory of gravity, we must put this theory to the test with the well-known thermal history of the Universe. In this article, we are going to show that this vector theory of gravity can accommodate an inflationary expansion of the Universe during its early phase of evolution, and this inflationary expansion will then slow down to a much slower accelerating expansion during its late time evolution. To this end, we will start by searching for solutions in which the Yang-Mills potentials with symmetric gauge indices are vanishing, and the cosmic metric will be taken as the spatially flat Friedmann-Lemaître-Robertson-Walker (FLRW) form \begin{equation} \label{eq:FLRW} ds^2 = -dt^2 + a^2(t)( dr^2 + r^2d\theta^2 + r^2\sin^2\theta d\phi^2). \end{equation} Furthermore we will make some assumptions on the form of our cosmic Yang-Mills potential $A^m_{\ n\mu}$. Because of the fact that the Yang-Mills potentials are related to the connections $\Gamma^{\rho}_{\ \tau\mu}$ through Eq.~\ref{eq:YM_potential}, an assumption on the form of the cosmic Yang-Mills potentials will be equivalent to an assumption on the form of the cosmic connections, which are now compatible with the metric because the Yang-Mills potentials that we are seeking are anti-symmetric in their gauge indices. 
There are in total 24 components for the torsion tensor $\tau_{\alpha\beta}^{\ \ \rho}$, which are the anti-symmetric parts of the connections. The connections will then have two parts, namely the Christoffel symbols and terms composed of the torsions and the metric. We choose to describe the cosmic torsion tensor by what is observed by a local Minkowskian observer. This observer will also see 24 local components. These 24 local components will fall into categories in accordance with their parity signatures under the local spatial parity operations (which consist of either one spatial inversion, two spatial inversions or three spatial inversions) of the local Minkowskian frame. Of all these 24 components, only 3 are invariant under the above-said spatial parity operations. They are $\tau_{\hat{0}\hat{1}\hat{1}}$, $\tau_{\hat{0}\hat{2}\hat{2}}$, and $\tau_{\hat{0}\hat{3}\hat{3}}$. We have used a hat for each index because we want to emphasize that these are the components measured by a local Minkowskian observer. We shall then assume that only the torsion components that are invariant under local parity operations will show up in the cosmic evolution of the Universe. Local isotropy will also play a role in the determination of the form of the local torsion components. Local isotropy will require that $\tau_{\hat{0}\hat{1}\hat{1}}$, $\tau_{\hat{0}\hat{2}\hat{2}}$ and $\tau_{\hat{0}\hat{3}\hat{3}}$ are all equal. Homogeneity will require that they are functions of time only. Hence we have arrived at the conclusion that we will concern ourselves only with cosmologies which have the following local torsion components \begin{equation} \label{eq:remain_Tau} \tau_{\hat{0}\hat{1}\hat{1}} = \tau_{\hat{0}\hat{2}\hat{2}} = \tau_{\hat{0}\hat{3}\hat{3}} \equiv \frac{\xi(t)}{2}. 
\end{equation} See also the works of Ramaswamy and Yasskin~\cite{ref:4}, Baekler and Hehl~\cite{ref:5}, and Chen, Hsu and Yeung~\cite{ref:6} for the selection of the local torsion components in their discussions of cosmologies in the Poincaré Gauge Theory of Gravity. With the torsion components given in Eq.~\ref{eq:remain_Tau} and the metric given in Eq.~\ref{eq:FLRW}, we can calculate the connections, and, in turn, the Riemann curvature tensor. We will have the following non-vanishing local components: \begin{eqnarray} \label{eq:remain_R} R_{\hat{0}\hat{1}\hat{0}\hat{1}} &\equiv& D = \frac{\ddot{a}}{a} - \xi\frac{\dot{a}}{a} - \dot{\xi}, \nonumber \\ R_{\hat{0}\hat{2}\hat{0}\hat{2}} = R_{\hat{0}\hat{3}\hat{0}\hat{3}} &\equiv& E = -\frac{\ddot{a}}{a} + \xi\frac{\dot{a}}{a} + \dot{\xi}, \\ R_{\hat{1}\hat{2}\hat{1}\hat{2}} = R_{\hat{1}\hat{3}\hat{1}\hat{3}} &\equiv& M = (\frac{\dot{a}}{a} - \xi )^2, \nonumber \\ R_{\hat{2}\hat{3}\hat{2}\hat{3}} &\equiv& N = -(\frac{\dot{a}}{a} - \xi )^2 \nonumber. \end{eqnarray} Using these local curvature components, we can write the Stephenson and the Stephenson-Kilmister-Yang equations as \begin{eqnarray} \label{eq:SSKY} (D^2 + 2E^2 - 2M^2 - N^2) &=& \frac{1}{2\kappa}T_{\hat{0}\hat{0}}, \nonumber \\ (2E^2 + N^2 - D^2 - 2M^2) &=& \frac{1}{2\kappa}T_{\hat{1}\hat{1}}, \\ (D^2 - N^2) &=& \frac{1}{2\kappa}T_{\hat{2}\hat{2}} = \frac{1}{2\kappa}T_{\hat{3}\hat{3}}, \nonumber \end{eqnarray} \begin{equation} \label{eq:SSKY_2} 2(\frac{\dot{a}}{a}) E + \dot{E} + 2(\frac{\dot{a}}{a} - \xi)M = 0. \end{equation} In Eq.~\ref{eq:SSKY_2} and in the subsequent discussions, we will assume that the cosmic gauge current tensor $S_{\sigma}^{\ \beta\lambda}$ of the matter field source will be averaged out during the course of evolution of the Universe. Note the fact that Eq.~\ref{eq:SSKY_2} is the Yang-Mills Equation for the $GL(4\ R)$ group while Eq.~\ref{eq:SSKY} gives the algebraic relations for the various components of the Yang-Mills strength tensor. 
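As a quick symbolic sanity check (our own sketch, not part of the original text), the left-hand sides of Eq.~\ref{eq:SSKY} can be evaluated using the relations $E=-D$ and $N=-M$ implied by Eq.~\ref{eq:remain_R}:

```python
import sympy as sp

D, M = sp.symbols('D M')
E, N = -D, -M          # relations implied by Eq. (remain_R)

# left-hand sides of Eq. (SSKY), equal to T_00/(2*kappa), T_11/(2*kappa), T_22/(2*kappa)
T00 = D**2 + 2*E**2 - 2*M**2 - N**2
T11 = 2*E**2 + N**2 - D**2 - 2*M**2
T22 = D**2 - N**2

ratio_check = sp.simplify(T00 - 3*T11)      # vanishes identically
isotropy_check = sp.simplify(T11 - T22)     # vanishes identically
```

The three left-hand sides are therefore in the ratio $3:1:1$, which is the origin of the constraint on the matter stress tensor discussed in the text.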
See also Refs.~\cite{ref:4}, \cite{ref:5} and \cite{ref:6} for the discussions of cosmic expansion under different situations. There is a hidden constraint built into Eq.~\ref{eq:SSKY} if we are going to impose $T_{\hat{1}\hat{1}}$ = $T_{\hat{2}\hat{2}}$ = $T_{\hat{3}\hat{3}}$ because of the isotropy requirement. Namely, the matter metric energy-momentum tensor components should satisfy the algebraic relation of \begin{equation} \label{eq:EM_tensors} T_{\hat{1}\hat{1}} = T_{\hat{2}\hat{2}} = T_{\hat{3}\hat{3}} = \frac{1}{3}T_{\hat{0}\hat{0}}. \end{equation} If we use $\rho$ to denote $T_{\hat{0}\hat{0}}$ and identify it as the local energy density and use $p$ to denote $T_{\hat{i}\hat{i}}$, where $i$ = 1, 2 or 3, and identify it as the pressure, then the above relation is just the equation of state for the matter fields, \begin{equation} \label{eq:p_rho} p = \frac{1}{3}\rho. \end{equation} During the expansion of the Universe, the Universe is supposed to undergo an adiabatic process, and hence satisfies the First Law of Thermodynamics, and the equation of state of Eq.~\ref{eq:p_rho} can be integrated to give \begin{equation} \label{eq:rho} \rho = \frac{6\kappa A^4}{a^4}, \end{equation} where $A$ is an integration constant. Hence the cosmic equations that we are going to solve are \begin{equation} \label{eq:Stph_3} (\frac{\ddot{a}}{a} - \xi\frac{\dot{a}}{a} )^2 - ( \frac{\dot{a}}{a} - \xi)^4 = (\frac{A}{a})^4, \end{equation} \begin{equation} \label{eq:SKY_3} 2(\frac{\dot{a}}{a}) ( -\frac{\ddot{a}}{a} + \xi\frac{\dot{a}}{a} +\dot{\xi} ) + \frac{d}{dt}( -\frac{\ddot{a}}{a} + \xi\frac{\dot{a}}{a} +\dot{\xi} ) + 2(\frac{\dot{a}}{a} - \xi )^3 = 0. \end{equation} These two equations are far more complicated than the Friedmann Equations. In the following we shall investigate the implications of these two equations on the evolution of the Universe. 
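For completeness, the integration leading to Eq.~\ref{eq:rho} is the standard radiation-era argument: for an adiabatic expansion the First Law of Thermodynamics gives $d(\rho a^{3}) = -p\, d(a^{3})$, and with the equation of state $p=\frac{1}{3}\rho$, \[ a^{3}\,d\rho + \rho\, d(a^{3}) = -\frac{1}{3}\rho\, d(a^{3}) \quad\Longrightarrow\quad \frac{d\rho}{\rho} = -4\,\frac{da}{a} \quad\Longrightarrow\quad \rho \propto a^{-4}, \] with the constant of integration written as $6\kappa A^{4}$ for later convenience.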
\section{THE EVOLUTION OF THE EARLY UNIVERSE} Now consider the situation in which $\xi$ is \emph{an extremely small constant}, say $\xi$ = $10^{-18}$sec$^{-1}$. And also consider the case in which the initial condition for $a(t)$ at $t = 0$ (i.e. $a(0)$) of our cosmic equation is \emph{an extremely small number}. With an extremely small $a(t)$, the matter in the Universe will be highly relativistic and we will have $p = \frac{1}{3}\rho$ as the equation of state, and hence Eq.~\ref{eq:rho} will be satisfied. At the moment when $t \approx 0$, the right-hand side of Eq.~\ref{eq:Stph_3} will be huge for the reason that $a(t)$ is extremely small. Then every term in Eq.~\ref{eq:Stph_3} and Eq.~\ref{eq:SKY_3} will be a very large number, and as a result the extremely small number $\xi$ will be of no importance in determining the evolution of $a(t)$. Therefore these two equations will be simplified to \begin{equation} \label{eq:Stph_4} (\frac{\ddot{a}}{a})^2 - (\frac{\dot{a}}{a})^4 = (\frac{A}{a})^4, \end{equation} \begin{equation} \label{eq:SKY_4} 2(\frac{\dot{a}}{a})( -\frac{\ddot{a}}{a} ) + \frac{d}{dt}( -\frac{\ddot{a}}{a} ) + 2( \frac{\dot{a}}{a} )^3 = 0. \end{equation} Eq.~\ref{eq:Stph_4} and Eq.~\ref{eq:SKY_4} will then describe the evolution of the early Universe. It is interesting to point out that Eq.~\ref{eq:Stph_4} and Eq.~\ref{eq:SKY_4} will admit a simultaneous solution of the form of \begin{equation} \label{eq:a_solution} a(t) = a_0( \cosh 2\beta t )^{\frac{1}{2}}, \end{equation} where $a_0$ and $\beta$ are two integration constants related to the other integration constant $A$ given in Eq.~\ref{eq:rho} by \begin{equation} \label{eq:A_int_const} A^4 = 4 \beta^4 a_0^4. 
\end{equation} The $a_0$ here is the initial value of $a(t)$ at $t = 0$, and when we compare Eq.~\ref{eq:rho} and Eq.~\ref{eq:A_int_const}, we will see immediately that the initial value of the energy density of the Universe at $t = 0$ is related to $\beta$ through \begin{equation} \label{eq:rho_and_kappa_beta} \rho ( t = 0 ) = 24 \kappa \beta^4. \end{equation} If Eq.~\ref{eq:a_solution} is to describe the evolution of the Universe starting at $t = 0$, and if $\beta$ is \emph{an extremely large number}, say $\beta = \frac{1}{4} \times 10^{35}$sec$^{-1}$, and if the initial value of $a(t)$ at $t = 0$ (which is $a_0$) is extremely small, then it is evident that the Universe will undergo an inflationary phase at the very beginning of its evolution. When the Universe evolves from the age of $10^{-36}$ sec to $10^{-33}$ sec, its radius will increase by a factor \begin{equation} \label{eq:inflation_factor} \exp[ 2\beta(10^{-33} {\rm sec})] / \exp[ 2 \beta(10^{-36} {\rm sec}) ] \approx 5 \times 10^{21}. \end{equation} \section{The Evolution of the Late-Time Universe} However the Universe won't keep on expanding forever like what is given by Eq.~\ref{eq:a_solution} because the simplified Eq.~\ref{eq:Stph_4} and Eq.~\ref{eq:SKY_4} will no longer describe the Universe when $a(t)$ is no longer extremely small. At the time the Universe is getting large enough, the right-hand side of the cosmic equations of Eq.~\ref{eq:Stph_3} will no longer be very large. Instead it gets very small very fast as time goes by, and at a point when the importance of $\xi$ cannot be neglected, the simplified equations of Eq.~\ref{eq:Stph_4} and Eq.~\ref{eq:SKY_4} will no longer describe the Universe. What describe the Universe now are the cosmic equations of Eq.~\ref{eq:Stph_3} and Eq.~\ref{eq:SKY_3} with their right-hand sides vanishing. 
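Before turning to the late-time regime, the early-universe solution Eq.~\ref{eq:a_solution} can be verified symbolically against the simplified equations Eq.~\ref{eq:Stph_4} and Eq.~\ref{eq:SKY_4} (our own consistency check, not part of the original text):

```python
import sympy as sp

t, a0, beta = sp.symbols('t a_0 beta', positive=True)

# candidate early-universe solution: a(t) = a_0 * cosh(2*beta*t)**(1/2)
a = a0*sp.sqrt(sp.cosh(2*beta*t))
H = sp.diff(a, t)/a                       # adot/a
acc = sp.diff(a, t, 2)/a                  # addot/a

# residual of the Stephenson equation (Stph_4) with A^4 = 4*beta^4*a_0^4
res_S = sp.simplify((acc**2 - H**4 - 4*beta**4*a0**4/a**4).rewrite(sp.exp))

# residual of the Stephenson-Kilmister-Yang equation (SKY_4)
res_SKY = sp.simplify((2*H*(-acc) + sp.diff(-acc, t) + 2*H**3).rewrite(sp.exp))
```

Both residuals vanish identically, confirming that the hyperbolic-cosine profile solves the two equations simultaneously.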
Eq.~\ref{eq:Stph_3} and Eq.~\ref{eq:SKY_3} with vanishing right-hand sides have a trivial simultaneous solution of \begin{equation} \label{eq:a_sol_later} \frac{\dot{a}}{a} - \xi = 0, \end{equation} because \begin{equation} \label{eq:a_sol_later_2} \left[ \frac{\ddot{a}}{a} - \xi(\frac{\dot{a}}{a}) \right] = \left[ (\frac{\dot{a}}{a})( \frac{\dot{a}}{a} -\xi ) + \frac{d}{dt}( \frac{\dot{a}}{a} - \xi ) \right], \end{equation} for a constant $\xi$. The solution for Eq.~\ref{eq:a_sol_later} is \begin{equation} \label{eq:a_sol_later_3} a = r_0 \exp(\xi t), \end{equation} with $r_0$ as an integration constant. In this solution, $\rho$ and $p$ are both equal to zero and hence satisfy $p = \frac{1}{3}\rho$. \section{Conclusions} According to the $GL(4\ R)$ Yang-Mills theory of gravity, our Universe would inflate like $a(t) = a_0(\cosh 2\beta t)^{\frac{1}{2}}$ if it has a huge matter density $\rho_0 = 24 \kappa \beta^4$ at the very beginning of time. It would then slow down to the much slower rate of expansion $a(t) = r_0 \exp(\xi t)$ in its late time evolution, if a torsion of extremely small local value $\xi$ was created at the birth of the Universe. This small primordial torsion, however, has no effect on the evolution of the early Universe. \section{Discussions} Even though we have carried out most of our discussions by using geometric language such as the connections and the curvature tensor, these geometric objects are, in fact, derived from the gauge ideas of the gauge potentials and the Yang-Mills field strength tensor. The cosmic metric is driven in accordance with the evolution of an underlying gauge field configuration because Eq.~\ref{eq:SSKY_2} is the Yang-Mills Equation for the $GL(4\ R)$ group while Eq.~\ref{eq:SSKY} gives the algebraic relations for the various components of the Yang-Mills strength tensor. 
We have shown that the Yang-Mills gauge theory of gravity based on the $GL(4\ R)$ local group admits solutions which can give good descriptions of the early-time and late-time evolution of the Universe. Neither a cosmological constant nor a rolling scalar field is required. Because the solution given in Eq.~\ref{eq:a_solution} can be extrapolated back in time, it will take an infinite time for the Universe to evolve to any finite size if it starts at a state with zero size. So according to the $GL(4\ R)$ Yang-Mills gauge theory of gravity, our Universe has existed in the infinite past, and takes an infinite time to evolve to our present-day status, on the condition that it has started at zero size and that we can ignore all the quantum effects during its evolution. A solution of the form of $a(t) = a_0(\cos 2\beta t )^{\frac{1}{2}}$ is also possible for our cosmic equations. This solution doesn't seem to give a proper description of our Universe, though.
\section{Introduction} In water wave theory, the Euler equations describe the irrotational flow of an ideal incompressible fluid of infinite depth with a free surface. Their symplectic formulation was discovered by \cite{Zakharov1968} in terms of the free-surface elevation $\eta(x,t)$ and the velocity potential $\ensuremath{\varphi}(x,t) = \phi(x, z = \eta(x,t),t)$ evaluated at the free surface of the fluid. Here, $\eta(x, t)$ and $\ensuremath{\varphi}(x, t)$ are canonically conjugate variables with respect to the Hamiltonian $\H$ given by the total wave energy. It is well known that the Euler equations are completely integrable in several important limiting cases. For example, in a two-dimensional (2-D) ideal fluid, unidirectional weakly nonlinear narrowband wave trains are governed by the Nonlinear Schr\"odinger (NLS) equation, which is integrable \cite{Zakharov1972}. Integrability also holds for certain equations that model long waves in shallow water, in particular the Korteweg--de Vries (KdV) equation (see, for example, \cite{Ablowitz1974, Ablowitz1981, John, Whitham1999}) or the Camassa--Holm (CH) equation \cite{Camassa1993}. For these equations, the associated Lax pairs have been discovered and the Inverse Scattering Transform \cite{Ablowitz1974, Ablowitz1981, John, Whitham1999} unveiled the dynamics of solitons, which interact elastically while an infinite number of conserved quantities remains invariant in time. An important limiting case of the Euler equations for an ideal free-surface flow was formulated by Zakharov \cite{Zakharov1968, Zakharov1999}. By expanding the Hamiltonian $\H$ up to third order in the wave steepness, he derived an integro-differential equation in terms of canonical conjugate Fourier amplitudes, which has no restrictions on the spectral bandwidth. 
To derive the Zakharov (Z) equation, fast non-resonant interactions are eliminated via a canonical transformation that preserves the Hamiltonian structure \cite{Krasitskii1994, Zakharov1999}. The integrability of the Z equation is still an open question, but the fully nonlinear Euler equations are non-integrable \cite{Dyachenko1996b}. Indeed, non-integrability can be easily proven by considering the terms of the perturbation series of the Hamiltonian in powers of the wave steepness restricted to their resonant manifolds. Integrability does not hold if at least one of these resonant amplitudes is nonzero. In this regard, \cite{Dyachenko1996b} conjectured that the Z equation for unidirectional water waves (2-D) is integrable since the nonlinear fourth-order term of the Hamiltonian vanishes on the resonant manifold leaving only trivial wave-wave interactions, which just cause nonlinear frequency shifts of the Fourier amplitudes. Recently, Dyachenko \& Zakharov realized that such trivial resonant quartet-interactions can be further removed by a canonical transformation \cite{Dyachenko2011}. This drastically simplifies the Z equation to the compact form \begin{equation}\label{eq:cDZ} ib_t = \Omega b + \frac{i}{8}\Bigl(b^\ast(b_x^2)_x - \bigl(b_x^\ast(b^2)_x\bigr)_x\Bigr) - \frac{1}{4}\Bigl[b\ensuremath{\mathbf{K}}\{|b_x|^2\} - \bigl(b_x\ensuremath{\mathbf{K}}\{|b|^2\}\bigr)_x\Bigr], \end{equation} where the canonical variable $b$ scales with the wave surface $\eta$ as $b \sim \sqrt{\frac{2g}{\omega_0}}\eta$ and the subscripts $t$ and $x$ denote partial derivatives with respect to time and space, respectively. The symbols of the pseudo-differential operators $\Omega$ and $\ensuremath{\mathbf{K}}$ are given, respectively, by $\sqrt{g|k|}$ and $|k|$, where $k$ is the Fourier transform parameter. In this study, we wish to explore (\ref{eq:cDZ}), hereafter referred to as cDZ, for a numerical investigation of special solutions in the form of solitary waves. 
This Letter is structured as follows. We first derive the envelope equation associated to cDZ. Then, ground states and traveling waves are numerically computed by means of the Petviashvili method \cite{Petviashvili1976, Yang2010}. Finally, their nonlinear interactions are discussed. \section{Envelope equation} Consider the following ansatz for wave trains in deep water \begin{equation}\label{eq:ansatz} b(X,T) = \ensuremath{\varepsilon}\sqrt{\frac{2g}{\omega_0}} a_0 B(X,T) e^{i(X-T)}, \end{equation} where $B$ is the envelope of the carrier wave $e^{i(X-T)}$, and $X=\ensuremath{\varepsilon} k_0(x-c_g t)$, $T = \ensuremath{\varepsilon}^2\omega_0 t$, with $k_0 = \frac{\omega_0^2}{g}$ and $\omega_0$ as the characteristic wavenumber and frequency. The small parameter $\ensuremath{\varepsilon} = k_0a_0$ is a characteristic wave steepness and $c_g$ is the wave group velocity in deep water. Using ansatz (\ref{eq:ansatz}), the cDZ equation (\ref{eq:cDZ}) reduces to the envelope form \begin{multline}\label{eq:cDZenv} iB_T = \Omega_\ensuremath{\varepsilon} B - \frac{\ensuremath{\varepsilon}}{2}\Bigl[B\ensuremath{\mathbf{K}}\{|\S B|^2\} - \S\bigl(\S B\ensuremath{\mathbf{K}}\{|B|^2\}\bigr)\Bigr] \\ + \frac{i}{4}\bigl(B^\ast\S((\S B)^2) + iB^\ast(\S B)^2 - 2\S\bigl(B|\S B|^2\bigr)\bigr), \end{multline} where $\S = \ensuremath{\varepsilon}\partial_X + i$. The approximate dispersion operator $\Omega_\ensuremath{\varepsilon}$ is defined as follows \begin{equation*} \Omega_\ensuremath{\varepsilon} := \frac18\partial_{XX} + \frac{i}{16}\ensuremath{\varepsilon}\partial_{XXX} - \frac{5}{128}\ensuremath{\varepsilon}^2\partial_{XXXX} + O(\ensuremath{\varepsilon}^3), \end{equation*} where $O(\ensuremath{\varepsilon}^3)$ dispersion terms are neglected. Equation (\ref{eq:cDZenv}) admits three invariants, viz. 
the action $\ensuremath{\mathcal{A}}$, momentum $\ensuremath{\mathcal{M}}$ and the Hamiltonian $\H$ given, respectively, by \begin{equation*} \ensuremath{\mathcal{A}} = \int_\ensuremath{\mathbb{R}} B^\ast B\,\ensuremath{\mathrm{d}} x, \quad \ensuremath{\mathcal{M}} = \int_\ensuremath{\mathbb{R}} i\bigl(B^\ast\S B - B(\S B)^\ast\bigr)\,\ensuremath{\mathrm{d}} x, \end{equation*} and \begin{equation*} \H = \int_\ensuremath{\mathbb{R}}\Bigl[B^\ast\Omega_\ensuremath{\varepsilon} B + \frac{i}{4}|\S B|^2[B(\S B)^\ast - B^\ast\S B] - \frac{\ensuremath{\varepsilon}}{2}|\S B|^2\ensuremath{\mathbf{K}}(|B|^2)\Bigr]\,\ensuremath{\mathrm{d}} x. \end{equation*} If we expand the operator $\S$ in terms of $\ensuremath{\varepsilon}$, (\ref{eq:cDZenv}) can be written in the form of the generalized derivative NLS equation \begin{equation*} iB_T = \Omega_\ensuremath{\varepsilon} B + |B|^2B - 3i\ensuremath{\varepsilon}|B|^2B_X - \frac{\ensuremath{\varepsilon}}{2}B\ensuremath{\mathbf{K}}\{|B|^2\} + \ensuremath{\varepsilon}^2 \ensuremath{\mathcal{N}}_2(B) + \ensuremath{\varepsilon}^3 \ensuremath{\mathcal{N}}_3(B), \end{equation*} where \begin{multline*} \ensuremath{\mathcal{N}}_{2}(B) = -\frac{3}{2}B^\ast(B_X)^2 + B|B_X|^2 - |B|^2B_{XX} + \frac12B^2B_{XX}^\ast + \frac{i}{2}\bigl(B\ensuremath{\mathbf{K}}|B|^2\bigr)_X + \\ \frac{i}{2}\Bigl[B\ensuremath{\mathbf{K}}(B^\ast B_X - BB_X^\ast) + B_X\ensuremath{\mathbf{K}}|B|^2\Bigr], \end{multline*} and \begin{equation*} \ensuremath{\mathcal{N}}_{3}(B) = -\frac{i}{2}|B_X|^2B_X + \frac{i}{2}B_{XX}(B^\ast B_X - BB_X^\ast) - \frac12 B B_X B_{XX}^\ast - \frac12\Bigl[B\ensuremath{\mathbf{K}}|B_X|^2 - \bigl(B_X\ensuremath{\mathbf{K}}|B|^2\bigr)_X\Bigr]. \end{equation*} To leading order the NLS equation is recovered, and keeping terms up to $O(\ensuremath{\varepsilon})$ yields a Hamiltonian version of the Dysthe equation \cite{Dysthe1979}, viz. 
\begin{equation}\label{eq:cDZDysthe} iB_T = \Bigl(\frac{1}{8}\partial_{XX} + \frac{i\ensuremath{\varepsilon}}{16}\partial_{XXX}\Bigr)B + |B|^2B - 3i\ensuremath{\varepsilon}|B|^2B_X - \frac12\ensuremath{\varepsilon} B\ensuremath{\mathbf{K}}|B|^2, \end{equation} hereafter referred to as cDZ-Dysthe (see also \cite{Gramstad2011}). Note that the original temporal Dysthe equation \cite{Dysthe1979} is not Hamiltonian since it is expressed in terms of multiscale variables, which are usually non-canonical (see, for example, \cite{Fedele2011}). \section{Ground states and travelling waves} Consider the envelope cDZ equation (\ref{eq:cDZenv}). We construct numerically ground states and travelling waves (TW) of the form $B(X, T) = F(X - cT)e^{-i\omega T}$, where $c$ and $\omega$ are generic parameters and the function $F(\cdot)$ is in general complex. After substituting this ansatz in (\ref{eq:cDZenv}) we obtain the following nonlinear steady equation (in the moving frame $X - cT$) \[ \L F = \ensuremath{\mathcal{N}}(F), \] where $\L = \omega - ic\partial_X - \Omega_\ensuremath{\varepsilon}$ and $\ensuremath{\mathcal{N}}(F)$ denotes the nonlinear part of the right-hand side of (\ref{eq:cDZenv}). This equation is solved using the Petviashvili method \cite{Petviashvili1976, Yang2010}, which has been successfully applied in deriving TWs of the spatial version of the Dysthe equation \cite{Fedele2011}. Without loss of generality, hereafter we just consider the leading term of the dispersion operator, viz. $\Omega_\ensuremath{\varepsilon} = \frac18\partial_{XX}$, since the soliton shape is only marginally sensitive to the higher-order dispersion terms (see \cite{Fedele2012} for more details). The dependence of the invariant $\ensuremath{\mathcal{A}}$ on the frequency $\omega$ is shown in Figure \ref{fig:action} for different values of the propagation speed $c = 0$, $0.1$ and $0.2$, respectively. 
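To make the scheme concrete, here is a minimal sketch of the Petviashvili iteration (our own illustration, not the authors' code; grid size and tolerance are arbitrary choices) applied to the leading-order $c = 0$ reduction of the TW equation, $\omega F = \frac18 F_{XX} + F^3$, for which the exact ground state $F = \frac{\kappa}{2}\,\mathrm{sech}(\kappa X)$ with $\omega = \kappa^2/8$ is available for comparison:

```python
import numpy as np

# Petviashvili iteration for the ground-state equation
#   omega*F = (1/8) F_XX + F^3
# (the c = 0, NLS-level reduction of the TW equation).

def petviashvili(omega, N=1024, L=80.0, tol=1e-12, maxit=500):
    X = np.linspace(-L/2, L/2, N, endpoint=False)
    k = 2*np.pi*np.fft.fftfreq(N, d=L/N)
    Lhat = omega + k**2/8.0                 # Fourier symbol of L = omega - (1/8) d_XX
    F = np.exp(-X**2)                       # localized initial guess
    for _ in range(maxit):
        Fhat = np.fft.fft(F)
        Nhat = np.fft.fft(F**3)             # cubic nonlinearity N(F)
        # stabilizing factor; exponent 3/2 is standard for a cubic nonlinearity
        gamma = (np.sum(Lhat*np.abs(Fhat)**2) / np.sum(np.conj(Fhat)*Nhat)).real
        Fnew = np.fft.ifft(gamma**1.5 * Nhat / Lhat).real
        if np.max(np.abs(Fnew - F)) < tol:
            return X, Fnew
        F = Fnew
    return X, F

omega = 0.5
X, F = petviashvili(omega)
kappa = np.sqrt(8*omega)                    # exact solution: (kappa/2)*sech(kappa*X)
err = np.max(np.abs(F - (kappa/2)/np.cosh(kappa*X)))
```

In our runs the iteration reproduces the analytic sech profile to spectral accuracy, with the stabilizing factor $\gamma$ tending to 1 at convergence.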
In the same Figure we also report the action $\ensuremath{\mathcal{A}}$ of solitary waves of the cDZ-Dysthe equation (\ref{eq:cDZDysthe}), which shows qualitative behaviour similar to that of the cDZ equation. The monotonic increase of $\ensuremath{\mathcal{A}}$ with $\omega$ indicates that ground states are stable in agreement with the Vakhitov--Kolokolov criterion \cite{Vakhitov1973}, since $\od{\ensuremath{\mathcal{A}}}{\omega} > 0$ (see also \cite{Zakharov2001, Yang2010}). This conclusion is also confirmed by direct numerical simulations of the evolution of ground states under the cDZ dynamics using a highly-accurate Fourier-type spectral scheme \cite{Boyd2000, Trefethen2000}, see also \cite{Fedele2011}. In particular, to improve the stability of the time marching scheme, we employ the integrating factor technique \cite{Fructus2005}, and the resulting system of ODEs is integrated in time by Verner's embedded adaptive Runge--Kutta 9(8) scheme. In all the performed simulations the accuracy has been checked by following the evolution of the invariants $\ensuremath{\mathcal{A}}$, $\ensuremath{\mathcal{M}}$ and $\H$. From a numerical point of view the cDZ equation becomes gradually stiffer as the steepness parameter $\ensuremath{\varepsilon}$ increases. As a consequence, the number of Fourier modes was always chosen to ensure conservation of the invariants to within $\sim 10^{-13}$. \begin{figure} \centering \includegraphics[width=0.99\textwidth]{figs/JETPAction.eps} \caption{Dependence of the action $\ensuremath{\mathcal{A}}$ on the frequency $\omega$ for $\ensuremath{\varepsilon} = 0.2$ and several values of the propagation speed: $c = 0$, $0.1$ and $0.2$.} \label{fig:action} \end{figure} We also investigate the interaction of smooth travelling waves under the cDZ dynamics (\ref{eq:cDZenv}) using the developed Fourier-type pseudo-spectral method.
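The time integrator can be sketched in the same simplified setting. For the leading-order reduction $iB_T = \frac18 B_{XX} + |B|^2B$ the stiff linear term is removed exactly by an integrating factor in Fourier space; below, a classical fixed-step RK4 stands in for the adaptive Verner 9(8) pair, and the exact soliton $B = \sqrt{2\omega}\,\mathrm{sech}(\sqrt{8\omega}X)\,e^{-i\omega T}$ serves as the accuracy check (an illustration under these assumptions, not the authors' code):

```python
import numpy as np

def solve_nls(B0, Ldom, T, nsteps):
    """Integrating-factor RK4 for i B_T = (1/8) B_XX + |B|^2 B."""
    N = B0.size
    k = 2*np.pi*np.fft.fftfreq(N, d=Ldom/N)
    Lsym = 1j*k**2/8.0                      # linear part, diagonal in Fourier space
    dt = T/nsteps
    E, E2 = np.exp(Lsym*dt/2), np.exp(Lsym*dt)
    def NL(vhat):                           # nonlinear term: -i * FFT(|B|^2 B)
        B = np.fft.ifft(vhat)
        return -1j*np.fft.fft(np.abs(B)**2*B)
    v = np.fft.fft(B0)
    for _ in range(nsteps):                 # RK4 in the "interaction picture"
        a = NL(v)
        b = NL(E*(v + 0.5*dt*a))
        c = NL(E*v + 0.5*dt*b)
        d = NL(E2*v + dt*E*c)
        v = E2*v + dt/6.0*(E2*a + 2.0*E*(b + c) + d)
    return np.fft.ifft(v)

w, Ldom, T = 0.125, 80.0, 5.0
x = (np.arange(512) - 256)*(Ldom/512)
B0 = np.sqrt(2*w)/np.cosh(np.sqrt(8*w)*x)
BT = solve_nls(B0, Ldom, T, 2000)
exact = B0*np.exp(-1j*w*T)
print(np.max(np.abs(BT - exact)))
```

Monitoring the discrete action $\sum|B|^2$ after the run reproduces, on this toy problem, the invariant-conservation check described above.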
Consider the interaction of a system of four travelling wave solutions under the cDZ equation dynamics for $\ensuremath{\varepsilon} = 0.10$, where a solitary wave ($\omega = 0.20$, $c = 0.30$) travels through an array of three equally spaced ground states ($\omega = 0.05$, $c = 0$). Figure \ref{fig:cDZXT} shows the evolution of the system in time. One can see how the solitary wave passes through the ground states without altering its shape, but with a slight phase shift. The interaction appears elastic, as clearly seen in Figure \ref{fig:cDZ4} (see also the zoomed detail in the upper left corner). This suggests the integrability of the cDZ equation (\ref{eq:cDZenv}), in agreement with the recent results of Dyachenko \emph{et al.} \cite{Dyachenko2012}. We also perform a similar numerical experiment for the associated Hamiltonian version of the Dysthe equation, viz. (\ref{eq:cDZDysthe}). Namely, the numerical set-up consists of two counter-propagating solitary waves ($\ensuremath{\varepsilon} = 0.1$, $\omega = 0.20$ and $c = \pm 0.20$), which encounter two ground states ($c = 0$) along their paths. The space-time plot of the envelope evolution is shown in Figure \ref{fig:DXT}, and in Figure \ref{fig:4D} one can observe that the collision is inelastic. \begin{figure} \centering \includegraphics[trim = 0mm 1cm 0mm 5cm, clip=true, width=0.99\textwidth]{figs/cDZ_XT.eps} \caption{Elastic collision of four solitary waves under the cDZ dynamics ($\ensuremath{\varepsilon} = 0.10$).} \label{fig:cDZXT} \end{figure} \begin{figure} \centering \includegraphics[width=0.99\textwidth]{figs/cDZ4.eps} \caption{
Shape of a travelling wave ($\omega = 0.20$, $c = 0.30$, $\ensuremath{\varepsilon} = 0.1$) before (1) and after (2) its collision with three equally spaced ground states ($\omega = 0.05$, $c = 0$, $\ensuremath{\varepsilon} = 0.1$).} \label{fig:cDZ4} \end{figure} \begin{figure} \centering \includegraphics[trim = 0mm 1cm 0mm 5cm, clip=true, width=0.99\textwidth]{figs/DystheXT.eps} \caption{Collision of solitary waves under the cDZ-Dysthe dynamics ($\ensuremath{\varepsilon} = 0.10$).} \label{fig:DXT} \end{figure} \begin{figure} \centering \includegraphics[width=0.99\textwidth]{figs/4Dysthe.eps} \caption{Inelastic collision of four solitary waves under the cDZ-Dysthe dynamics ($\ensuremath{\varepsilon} = 0.10$).} \label{fig:4D} \end{figure} \section{Conclusions} Special travelling wave solutions of the cDZ equation derived by Dyachenko \& Zakharov \cite{Dyachenko2011} are numerically constructed using the Petviashvili method. The stability of ground states agrees with the Vakhitov--Kolokolov criterion \cite{Vakhitov1973}. Furthermore, by means of an accurate Fourier-type pseudo-spectral scheme, it is shown that solitary waves of the cDZ equation appear to collide elastically, suggesting its integrability, whereas collisions under the associated Hamiltonian Dysthe equation are inelastic. \section*{Acknowledgements} D.~Dutykh acknowledges the support from French Agence Nationale de la Recherche, project MathOc\'ean (Grant ANR-08-BLAN-0301-01).
\section{Introduction} Let $\omega:\, [0,1]\rightarrow [0,\infty)$ be a measurable function. The {\it weighted Hardy operator} $H_{\omega}$ is defined on all complex-valued measurable functions $f$ on $\mathbb R^n$ as follows: $$H_{\omega}f(x):=\int^1_{0}f(tx)\omega(t)\,dt,\hspace{3mm}x\in \mathbb{ R}^n.$$ Under certain conditions on $\omega$, Carton-Lebrun and Fosset \cite{CF} proved that $H_{\omega}$ maps $L^{p}(\mathbb{ R}^n)$ into itself for $1<p<\infty$. They also pointed out that the operator $H_{\omega}$ commutes with the Hilbert transform when $n=1$, and with certain Calder\'{o}n-Zygmund singular integral operators including the Riesz transform when $n\geq2$. A further extension of the results obtained in \cite{CF} was due to Xiao in \cite{X}. \medskip \noindent{\bf Theorem A \cite{X}.\,} Let $1<p<\infty$ and $\omega : [0,1]\rightarrow [0,\infty)$ be a measurable function. Then, $H_{\omega}$ is bounded on $L^{p}(\mathbb{ R}^n)$ if and only if $$\mathbb{A}:=\int^{1}_{0}t^{-n/p}\omega(t)\,dt<\infty. \eqno(1.1)$$ Moreover, $$\|H_{\omega}\|_{L^{p}(\mathbb{R}^n) \rightarrow L^{p}(\mathbb{R}^n)}=\mathbb{A}.\eqno(1.2)$$ \medskip Notice that the condition (1.1) implies that $\omega$ is integrable on [0,1]. The constant $\mathbb{A}$ seems to be of interest as it equals $\frac{p}{p-1}$ if $\omega\equiv 1$ and $n=1$. In this case, $H_{\omega}$ is reduced to the {\it classical Hardy operator} $H$ defined by $$Hf(x):=\frac{1}{x}\int^x_{0}f(t)\,dt,\, x\neq0,$$ which is the most fundamental averaging operator in analysis. Also, a celebrated integral inequality, due to Hardy \cite{HLP}, can be deduced from Theorem A immediately: $$\|Hf\|_{L^{p}(\mathbb{R})}\leq \frac{p}{p-1}\|f\|_{L^{p}(\mathbb{R})}, \eqno(1.3)$$ where $1<p<\infty$ and the constant $\frac{p}{p-1}$ is the best possible. Another interesting application of Theorem A is the sharp estimate of the Riemann-Liouville integral operator on the Lebesgue spaces.
To be precise, let $n=1$ and we take $$\omega(t):=\frac{1}{\Gamma(\alpha)(1-t)^{1-\alpha}}, \quad t\in[0,1],$$ where $0<\alpha<1$. Then $$H_{\omega}f(x)=x^{-\alpha}I_{\alpha}f(x),\quad x>0,$$ where $I_{\alpha}$ is the {\it Riemann-Liouville integral operator} defined by $$I_{\alpha}f(x):=\frac{1}{\Gamma(\alpha)} \int_{0}^{x}\frac{f(t)}{(x-t)^{1-\alpha}}\,dt, \quad x>0.$$ Note that the operator $I_{\alpha}$ is exactly the one-sided version of the well-known Riesz potential $$\mathcal{I}_{\alpha}f(x):=C_{n, \alpha}\int_{\mathbb{R}^{n}} \frac{f(t)}{|x-t|^{n-\alpha}}\,dt,~~~x\in{\mathbb R}^n.$$ Clearly, Theorem A implies the celebrated result of Hardy, Littlewood and P\'olya in \cite[Theorem 329]{HLP}, namely, for all $0<\alpha<1$ and $1<p<\infty$, $$\|I_{\alpha}\|_{L^{p}({\mathbb R})\to L^p(x^{-p\alpha} dx)} =\frac{\Gamma(1-1/p)}{\Gamma(1+\alpha-1/p)}.\eqno(1.4)$$ Now we recall the commutators of weighted Hardy operators introduced in \cite{FLL}. For any locally integrable function $b$ on $\mathbb{R}^n$ and integrable function $\omega :\, [0,1]\rightarrow [0,\infty)$, the {\it commutator of the weighted Hardy operator} $H_{\omega}^{b}$ is defined by $$H_{\omega}^{b}f:=bH_{\omega}f-H_{\omega}(bf).$$ It is easy to see that the commutator $H_{\omega}^{b}$ is bounded on $L^{p}(\mathbb{R}^{n})$ for $1<p<\infty$ when $b\in L^{\infty}({\mathbb{R}}^{n})$ and $\omega$ satisfies the condition (1.1). An interesting choice of $b$ is one belonging to the class $\mathrm{BMO}({\mathbb{R}}^{n})$. Recall that $\mathrm{BMO}({\mathbb{R}}^{n})$ is defined to be the space of all $b\in L_{loc}{({\mathbb{R}}^{n})}$ such that $$\|b\|_{BMO}:=\sup_{Q\subset\mathbb{R}^{n}}\frac{1}{|Q|}\int_{Q}|b(x)-b_{Q}| \,dx< \infty,$$ where $b_{Q}:=\frac{1}{|Q|}\int_{Q}b$ and the supremum is taken over all cubes $Q$ in ${\mathbb{R}^n}$ with sides parallel to the axes.
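The identity (1.4) is easy to check numerically from Theorem A: with the Riemann-Liouville weight, $\mathbb{A} = \frac{1}{\Gamma(\alpha)}\int_0^1 t^{-1/p}(1-t)^{\alpha-1}\,dt$ is a Beta integral equal to the Gamma-function ratio above. A quick sketch (an illustration, not part of the paper; the parameter values are arbitrary):

```python
from scipy.integrate import quad
from scipy.special import gamma

p, alpha = 2.5, 0.4
f = lambda t: t**(-1/p)*(1-t)**(alpha-1)
# split at 1/2 so each piece has a single integrable endpoint singularity
A = (quad(f, 0, 0.5)[0] + quad(f, 0.5, 1)[0])/gamma(alpha)
rhs = gamma(1 - 1/p)/gamma(1 + alpha - 1/p)   # sharp constant in (1.4)
print(A, rhs)                                  # the two values coincide
```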
It is well known that $L^{\infty}({\mathbb{R}}^{n})\varsubsetneq \mathrm{BMO}({\mathbb{R}}^{n})$ since $\mathrm{BMO}({\mathbb{R}}^{n})$ contains unbounded functions such as $\log|x|$. When the symbol $b\in\mathrm{BMO}({\mathbb{R}}^{n})$, the condition (1.1) on weight functions $\omega$ does not ensure the boundedness of $H_{\omega}^{b}$ on $L^{p}(\mathbb{R}^{n})$. By controlling $H_{\omega}^{b}$ by the Hardy-Littlewood maximal operators instead of sharp maximal functions, Fu, Liu and Lu \cite{FLL} established necessary and sufficient conditions on weight functions $\omega$ which ensure that $H_{\omega}^{b}$ is bounded on $L^{p}(\mathbb{R}^{n})$ when $1<p<\infty$. Precisely, they obtained the following conclusion. \medskip \noindent{\bf Theorem B.\,} Let \[ \mathbb{C}:=\int^{1}_{0}t^{-n/p}\omega(t)\log\frac{2}{t}\,dt \] and $1<p<\infty$. Then the following statements are equivalent:\\ $\rm(i)$\quad $\omega$ is integrable and $H^{b}_\omega$ is bounded on $L^{p}(\mathbb{R}^{n})$ for all $b\in \mathrm{BMO}(\mathbb{R}^{n})$;\\ $\rm(ii)$\quad $\mathbb{C}<\infty. $ \medskip We remark that the condition (1.1), i.\,e., $\mathbb{A}<\infty$, is weaker than $\mathbb{C}<\infty$ in Theorem B. In fact, if we let \[ \mathbb{B}:=\int^{1}_{0}t^{-n/p}\omega(t)\log\frac{1}{t}\,dt, \] then $\mathbb{C}=\mathbb{A}\log2+\mathbb{B}$. Hence $\mathbb{C}<\infty$ implies $\mathbb{A}<\infty$. However, $\mathbb{A}<\infty$ does not imply $\mathbb{C}<\infty$. To see this, for $0<\alpha<1$, let \[ e^{s(n/p-1)}\tilde{\omega}(s)=\left\{ \begin{array}{ll} s^{-1+\alpha},&\quad 0<s\leq 1,\\ s^{-1-\alpha},&\quad 1<s<\infty,\\ 0,&\quad s=0, \infty \end{array}\right.\eqno(1.5) \] and $\omega(t):=\tilde{\omega}(\log\frac{1}{t})$, $0\leq t\leq1$. Then it is not difficult to verify $\mathbb{A}<\infty$ and $\mathbb{C}=\infty$.
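The last claim can also be confirmed numerically. After the substitution $s=\log\frac1t$, the integrands of $\mathbb{A}$ and $\mathbb{B}$ become $e^{s(n/p-1)}\tilde{\omega}(s)$ and $s\,e^{s(n/p-1)}\tilde{\omega}(s)$, so for the weight (1.5) one gets $\mathbb{A}=2/\alpha$, while the partial integrals of $\mathbb{B}$ over $[1,R]$ grow like $R^{1-\alpha}$, so $\mathbb{C}=\infty$. A short check (an illustration only):

```python
import numpy as np
from scipy.integrate import quad

alpha = 0.5
# e^{s(n/p-1)} * w~(s) from (1.5): s^(alpha-1) on (0,1], s^(-alpha-1) on (1,inf)
g = lambda s: s**(alpha-1) if s <= 1 else s**(-alpha-1)

A = quad(g, 0, 1)[0] + quad(g, 1, np.inf)[0]
print(A)                                  # converges: 2/alpha = 4.0
for R in (1e2, 1e4, 1e6):                 # partial integrals of B grow like R^(1-alpha)
    print(R, quad(lambda s: s*g(s), 1, R)[0])
```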
Later on in \cite{FZW}, the conclusions in Theorems A and B were further generalized to the central Morrey spaces $\dot{B}^{p,\lambda}({{\rr}^n})$ and the central BMO space $C\dot{M}O^q({{\rr}^n})$. Here the space $C\dot{M}O^q({{\rr}^n})$ was first introduced by Lu and Yang in \cite{LY2}, and the space $\dot{B}^{p,\lambda}({{\rr}^n})$ is a generalization of $C\dot{M}O^q({{\rr}^n})$ introduced by Alvarez, Guzman-Partida and Lakey in \cite{AGL}; see also \cite{bg}. \begin{definition} Let $\lambda\in \mathbb{R}$ and $1<p<\infty$. The \emph{central Morrey space} $\dot{B}^{p,\,\lambda}(\mathbb{R}^{n})$ is defined to be the space of all locally $p$-integrable functions $f$ satisfying $$\|f\|_{\dot{B}^{p,\,\lambda}}=\sup_{R>0}\biggl(\frac{1}{|B(0, R)|^{1+\lambda p}} \int_{B(0, R)}|f(x)|^{p}dx\biggr)^{1/p}<\infty.$$ \end{definition} Obviously, $\dot{B}^{p,\,\lambda}({{\rr}^n})$ is a Banach space. One can easily check that $\dot{B}^{p,\lambda}(\mathbb{R}^n)=\{0\}$ if $\lambda<-1/p$, $\dot{B}^{p,0}(\mathbb{R}^n)=\dot{B}^{p}(\mathbb{R}^n)$, $\dot{B}^{q,-1/q}(\mathbb{R}^n)=L^{q}(\mathbb{R}^n)$, and $\dot{B}^{p,\lambda}(\mathbb{R}^n)\supsetneq L^{p}(\mathbb{R}^n)$ if $\lambda>-1/p$, where the space $\dot{B}^{p}(\mathbb{R}^n)$ was introduced by Beurling in \cite{B}. Similar to the classical Morrey space, we only consider the case $-1/p<\lambda\leq0$ in this paper. In the past few years, there has been increasing interest in the study of Morrey-type spaces, their various generalizations, and the related theory of operators; see, for example, \cite{AGL,GAK,FZW,MN,KMNY}. \begin{definition} Let $1<q<\infty$.
A function $f\in L_{\mathrm{loc}}^{q}(\mathbb{R}^{n})$ is said to belong to the \emph{central bounded mean oscillation space} $C\dot{M}O^{q}(\mathbb{R}^{n})$ if $$\|f\|_{C\dot{M}O^{q}}=\sup_{R>0}\biggl(\frac{1}{|B(0, R)|} \int_{B(0, R)}|f(x)-f_{B(0,\, R)}|^{q}dx\biggr)^{1/q}<\infty.\eqno(1.6)$$ \end{definition} The space $C\dot{M}O^{q}({{\rr}^n})$ is a Banach space in the sense that two functions that differ by a constant are regarded as the same function in this space. Moreover, {\rm(1.6)} is equivalent to the following condition $$\sup_{R>0}\inf_{c\in\mathbb{C}}\biggl(\dfrac{1}{|B(0, R)|} \int_{B(0, R)}|f(x)-c|^{q}dx\biggr)^{1/q}<\infty.$$ For more detailed properties of these two spaces, we refer to \cite{FZW}. For $1<p<\infty$ and $-1/p< \lambda\le0$, it was proved in \cite[Theorem 2.1]{FZW} that the norm $$\|H_\omega\|_{\dot{B}^{p,\,\lambda}({{\rr}^n})\to \dot{B}^{p,\,\lambda}({{\rr}^n})} =\int_0^1 t^{n\lambda} \omega(t)\,dt.$$ Moreover, if $1<p_1<p<\infty$, $1/p_1=1/p+1/q$ and $-1/p<\lambda<0$, then it was proved in \cite[Theorem 3.1]{FZW} that $H^b_\omega$ is bounded from $\dot{B}^{p,\,\lambda}({{\rr}^n})$ to $\dot{B}^{p_1,\,\lambda}({{\rr}^n})$ if and only if $$\int_0^1 t^{n\lambda} \omega(t) \log\frac2{t}\,dt<\infty,$$ where the symbol $b\in C\dot{M}O^{q}({{\rr}^n})$. In this paper, we consider the multilinear version of the above results. Recall that the weighted multilinear Hardy operator is defined as follows. \begin{definition} Let $m\in\mathbb{N}$, and $$\omega:\, \overbrace{{[0,1]\times[0,1]\times\cdots\times[0,1]}}^{m\,\text{times}} \rightarrow [0,\infty)$$ be an integrable function.
The {\it weighted multilinear Hardy operator $\mathcal{H}_{\omega}^m$} is defined as \[ \mathcal{H}_{\omega}^m(\vec{f})(x):= \int\limits_{0<t_{1},t_{2},\ldots,t_{m}<1} \left(\prod_{i=1}^{m}f_{i}(t_{i}x)\right)\omega(\vec{t})\,d\vec t,\quad x\in \mathbb{R}^n, \] where $\vec f:=(f_1,\ldots, f_m)$, $\omega(\vec{t}):=\omega(t_{1},t_{2},\ldots,t_{m})$, $d\vec t:= dt_1\,\cdots\,dt_m$, and $f_{i}~(i=1,\ldots,m)$ are complex-valued measurable functions on $\mathbb{R}^n$. When $m=2$, $\mathcal{H}_{\omega}^m$ is referred to as bilinear. \end{definition} The study of multilinear averaging operators can be traced back to the theory of multilinear singular integral operators (see, for example, \cite{CM}), and is motivated not only by the generalization of the theory of linear operators but also by their natural appearance in analysis. For a more complete account on multilinear operators, we refer to \cite{FL}, \cite{GL}, \cite{L} and the references therein. The main aim of the paper is to establish the sharp bounds of weighted multilinear Hardy operators on the product of Lebesgue spaces and central Morrey spaces. In addition, we find necessary and sufficient conditions on the weight functions so that commutators of such weighted multilinear Hardy operators (with symbols in $\lambda$-central BMO space) are bounded on the product of central Morrey spaces. The paper is organized as follows: Section 2 is devoted to the sharp estimates of $\mathcal{H}_{\omega}^m$ on the products of Lebesgue spaces and also central Morrey spaces. In Section 3, we present the sharp estimates of the commutator generated by $\mathcal{H}_{\omega}^m$ with symbols in $C\dot{M}O^q({{\rr}^n})$. Section 4 focuses on weighted Ces\`{a}ro operators of multilinear type related to weighted multilinear Hardy operators.
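For orientation, the bilinear case of the definition above can be evaluated directly by two-dimensional quadrature; the helper below and its test functions are ours, not the authors'. For $\omega\equiv1$, $n=1$ and $f_1(u)=f_2(u)=u$ one has $\mathcal{H}^2_\omega(f_1,f_2)(x)=\int_0^1\!\int_0^1 (t_1x)(t_2x)\,dt_1dt_2=x^2/4$:

```python
from scipy.integrate import dblquad

def H2(f1, f2, w, x):
    """Bilinear weighted Hardy operator H^2_w(f1, f2)(x), n = 1, by 2-D quadrature."""
    # dblquad integrates func(t2, t1) with t1 in [0, 1], t2 in [0, 1]
    return dblquad(lambda t2, t1: f1(t1*x)*f2(t2*x)*w(t1, t2), 0, 1, 0, 1)[0]

w = lambda t1, t2: 1.0
val = H2(lambda u: u, lambda u: u, w, 2.0)
print(val)   # (x^2)/4 with x = 2, i.e. 1.0
```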
\section{Sharp boundedness of $\mathcal{H}_{\omega}^m$ on the product of central Morrey spaces} We begin with the following sharp boundedness of $\mathcal{H}_{\omega}^m$ on the product of Lebesgue spaces, which when $m=1$ goes back to Theorem A. \begin{theorem}\label{t1} Let $1<p, p_i<\infty$, $i=1,\ldots, m$ and $1/p=1/p_1+\cdots+1/p_m$. Then, $\mathcal{H}_{\omega}^m$ is bounded from $L^{p_1}({{\rr}^n})\times \dots \times L^{p_m}({{\rr}^n})$ to $ L^p({{\rr}^n})$ if and only if \begin{eqnarray}\label{A} \mathbb{A}_m:=\int\limits_{0<t_{1},t_{2},...,t_{m}<1} \left(\prod_{i=1}^{m}t_{i}^{-n/p_{i}}\right)\omega(\vec{t})\,d\vec{t}<\infty. \end{eqnarray} Moreover, $$\|\mathcal{H}_{\omega}^m\|_{L^{p_1}({{\rr}^n})\times \dots \times L^{p_m}({{\rr}^n}) \rightarrow L^{p}({{\rr}^n})}=\mathbb{A}_m.$$ \end{theorem} \begin{proof}[Proof] In order to simplify the proof, we only consider the case that $m=2$. Actually, a similar procedure works for all $m\in \mathbb{N}$. Suppose that (\ref{A}) holds. Using Minkowski's inequality yields $$\begin{array}{rl} \displaystyle \|\mathcal{H}_{\omega}^2(f_{1}, f_{2}) \|_{L^p(\mathbb{R}^{n})} &=\displaystyle\left( \int_{\mathbb{R}^n}\left|\int\limits_{0<t_{1},t_{2}<1}f_{1} (t_{1}x)f_{2}(t_{2}x)\omega(t_{1},t_{2})\,dt_{1}dt_{2} \right|^{p}dx\right)^{1/p}\\ &\leq\displaystyle \int\limits_{0<t_{1},t_{2}<1}\left(\int_{\mathbb{R}^n}\left|f_{1}(t_{1}x)f_{2}(t_{2}x)\right|^{p}dx\right)^{1/p}\omega(t_{1},t_{2})\,dt_{1}dt_{2}. 
\end{array}$$ By H\"{o}lder's inequality with $1/p=1/p_{1}+1/p_{2}$, we see that $$\begin{array}{rl} \displaystyle \|\mathcal{H}_{\omega}^2(f_{1}, f_{2})\|_{L^p(\mathbb{R}^{n})}&\leq\displaystyle \int\limits_{0<t_{1},t_{2}<1}\prod_{i=1}^{2}\left(\int_{\mathbb{R}^n}\left|f_{i}(t_{i}x)\right|^{p_{i}}dx\right)^{1/p_{i}}\omega(t_{1},t_{2})\,dt_{1}dt_{2}\\ &\leq\displaystyle \left(\prod_{i=1}^{2}\|f_{i}\|_{L^{p_i}(\mathbb{R}^{n})}\right)\int\limits_{0<t_{1},t_{2}<1}\left(\prod_{i=1}^{2}t_{i}^{-n/p_{i}}\right)\omega(t_{1},t_{2})\,dt_{1}dt_{2}.\end{array}$$ Thus, $\mathcal{H}_{\omega}^2$ maps the product of Lebesgue spaces $L^{p_1}(\mathbb{R}^{n})\times L^{p_2}(\mathbb{R}^{n})$ to $ L^p(\mathbb{R}^{n})$ and \begin{eqnarray}\label{2.1} \|\mathcal{H}_{\omega}^2\|_{L^{p_1}(\mathbb{R}^{n})\times L^{p_2}(\mathbb{R}^{n})\rightarrow L^p(\mathbb{R}^{n})}\leq\mathbb{A}_2. \end{eqnarray} To see the necessity, for sufficiently small $\varepsilon\in (0, 1)$, we set \begin{eqnarray}\label{2.2} f^{\varepsilon}_{1}(x):= \begin{cases} 0,&\quad |x|\leq\frac{\sqrt{2}}{2},\\ \displaystyle|x|^{-\frac{n}{p_1}-\frac{p_2\varepsilon}{p_1}},&\quad |x|>\frac{\sqrt{2}}{2},\end{cases} \end{eqnarray} and \begin{eqnarray}\label{2.3} f^{\varepsilon}_{2}(x):= \begin{cases} 0,&\quad |x|\leq\frac{\sqrt{2}}{2},\\ \displaystyle|x|^{-\frac{n}{p_2}-\varepsilon},&\quad |x|>\frac{\sqrt{2}}{2}. \end{cases} \end{eqnarray} An elementary calculation gives that $$ \|f_1^\varepsilon\|_{L^{p_1}(\mathbb{R}^{n})}^{p_1} =\|f_2^\varepsilon\|_{L^{p_2}(\mathbb{R}^{n})} ^{p_2}=\frac{\omega_n}{p_2\varepsilon} \Big(\frac{\sqrt{2}}{2}\, \Big)^{-p_2\varepsilon}, $$ where $\omega_n=\frac{n\pi^{n/2}}{\Gamma(1+n/2)}$ is the surface area of the unit sphere in $\mathbb{R}^n$.
Consequently, we have \begin{eqnarray*} &&\|\mathcal{H}_{\omega}^2(f_{1}^\varepsilon, f_{2}^\varepsilon)\|_{L^p(\mathbb{R}^{n})}\\ &&\hspace{0.2cm} =\left\{\int_{\mathbb{R}^n} |x|^{-n-p_2\varepsilon} \left[\int\limits_{E_{x}(t_{1}, t_{2})}t_{1}^{-\frac{n}{p_1} -\frac{p_2\varepsilon}{p_1}} t_{2}^{-\frac{n}{p_2}-\varepsilon}\omega(t_{1},t_{2}) \,dt_{1}dt_{2}\right]^p\,dx\right\}^{1/p}, \end{eqnarray*} where \[ E_{x}(t_{1}, t_{2}):=\left\{(t_{1}, t_{2})|\, 0<t_{1},t_{2}<1;\, t_1>\frac{\sqrt {2}}{2|x|};\, t_2>\frac{\sqrt {2}}{2|x|}\right\}. \] Hence, $$\begin{array}{rl}\displaystyle&\|\mathcal{H}_{\omega}^2 (f^{\varepsilon}_{1}, f^{\varepsilon}_{2})\|^{p}_{L^p(\mathbb{R}^{n})}\\ &\hspace{0.2cm}\displaystyle\geq\int_{|x|>1/\varepsilon}|x|^{-n-p_2\varepsilon} \left(\int\limits_{E_{\frac{1}{\varepsilon}}(t_{1}, t_{2})} t_{1}^{-\frac{n}{p_1}-\frac{p_2\varepsilon}{p_1}} t_{2}^{-\frac{n}{p_2}-\varepsilon}\omega(t_{1},t_{2})dt_{1}dt_{2}\right)^{p}dx \\ \displaystyle &\hspace{0.2cm}=\displaystyle\frac{\varepsilon^{p_2\varepsilon}\omega_{n}}{p_2 \varepsilon} \left(\int\limits_{E_{\frac{1}{\varepsilon}}(t_{1}, t_{2})} t_{1}^{-\frac{n}{p_1}-\frac{p_2\varepsilon}{p_1}} t_{2}^{-\frac{n}{p_2}-\varepsilon}\omega(t_{1},t_{2})dt_{1}dt_{2}\right)^{p} \\ \displaystyle &\hspace{0.2cm}=\displaystyle\left(\frac{\sqrt{2}}{2}\varepsilon \right)^{p_2\varepsilon}\prod_{i=1}^{2}\|f_{i}^\varepsilon \|_{L^{p_i}(\mathbb{R}^{n})}^{p}\displaystyle\left(\int\limits_{E_{\frac{1}{\varepsilon}}(t_{1}, t_{2})}t_{1}^{-\frac{n}{p_1}-\frac{p_2\varepsilon}{p_1}} t_{2}^{-\frac{n}{p_2}-\varepsilon}\omega(t_{1},t_{2})dt_{1}dt_{2}\right)^{p}.
\end{array}$$ Therefore, \begin{eqnarray*} &&\|\mathcal{H}_{\omega}^2\|_{L^{p_1}(\mathbb{R}^{n})\times L^{p_2}(\mathbb{R}^{n}) \rightarrow L^p(\mathbb{R}^{n})}\\ &&\hspace{0.2cm}\geq\displaystyle \left(\frac{\sqrt{2}}{2} \varepsilon\right)^{p_2\varepsilon/p} \int\limits_{E_{\frac{1}{\varepsilon}}(t_{1}, t_{2})} t_{1}^{-\frac{n}{p_1}-\frac{p_2\varepsilon}{p_1}} t_{2}^{-\frac{n}{p_2}-\varepsilon}\omega(t_{1},t_{2})\,dt_{1}\,dt_{2}. \end{eqnarray*} Since $(\sqrt{2}\varepsilon/2)^{p_2\varepsilon/p}\rightarrow1$ as $\varepsilon\rightarrow 0^{+}$, by letting $\varepsilon\rightarrow 0^{+}$, we know that \begin{eqnarray}\label{2.4} \|\mathcal{H}_{\omega}^2\|_{L^{p_1}(\mathbb{R}^{n})\times L^{p_2}(\mathbb{R}^{n})\rightarrow L^p(\mathbb{R}^{n})}\geq \mathbb{A}_2. \end{eqnarray} Combining (\ref{2.1}) and (\ref{2.4}) then finishes the proof. \end{proof} Observe that when $n=1$ and $\alpha\in(0,m)$, if we take $$\omega(\vec{t}):=\frac{1}{\Gamma(\alpha)|(1-t_{1}, \dots, 1-t_{m})|^{m-\alpha}},$$ then $$\mathcal{H}_{\omega}^m(\vec{f})(x)=x^{-\alpha}I^{m}_{\alpha}\vec{f}(x),\quad x>0,$$ where $$I^{m}_{\alpha}\vec{f}(x):=\frac{1}{\Gamma(\alpha)} \int\limits_{0<t_{1},t_{2},...,t_{m}<x} \frac{\prod_{i=1}^{m}f_{i}(t_{i})}{|(x-t_{1}, \dots, x-t_{m})|^{m-\alpha}}\,d\vec{t}.$$ The operator $I^{m}_{\alpha}$ turns out to be the one-sided analogue of the one-dimensional multilinear Riesz operator $\mathcal{I}^{m}_{\alpha}$ studied by Kenig and Stein in [9], where $$\mathcal{I}^{m}_{\alpha}\vec{f}(x):= \int\limits_{t_{1},t_{2},...,t_{m}\in\mathbb{R}} \frac{\prod_{i=1}^{m}f_{i}(t_{i})}{|(x-t_{1}, \dots, x-t_{m})|^{m-\alpha}}\,d\vec{t},\qquad x\in{\mathbb R}.$$ As an application of Theorem \ref{t1} we obtain the following sharp estimate of the boundedness of $I^{m}_{\alpha}$. \begin{corollary} Let $0<\alpha<m$.
With the same assumptions as in Theorem \ref{t1}, the operator $I^{m}_{\alpha}$ maps $L^{p_1}({\mathbb R})\times \dots \times L^{p_m}({\mathbb R})$ to $ L^p(x^{-p\alpha} dx)$ and its operator norm equals $$\frac{1}{\Gamma(\alpha)}\int\limits_{0<t_{1},t_{2},\ldots,t_{m}<1}\left(\prod_{i=1}^{m} t_{i}^{-1/p_{i}}\right)\frac{1}{|(1-t_{1}, \dots, 1-t_{m})|^{m-\alpha}}\,d\vec{t}.$$ \end{corollary} Next we extend the result in Theorem \ref{t1} to the product of central Morrey spaces. \begin{theorem}\label{t2} Let $1<p<p_i<\infty,$ $1/p=1/p_1+\cdots+1/p_m$, $\lambda=\lambda_1+\cdots+\lambda_m$ and $-1/p_i\leq \lambda_i<0~(i=1,2,\ldots,m)$. {\rm (i)} If \begin{eqnarray}\label{Am} \widetilde{\mathbb{A}}_{m}:=\int\limits_{0<t_1,t_2,\ldots,t_m<1}\left(\prod_{i=1}^m t_i^{n\lambda_i}\right)\omega(\vec{t})d\vec{t}<\infty, \end{eqnarray} then $\mathcal{H}_\omega^m$ is bounded from $\dot{B}^{p_1,\lambda_1}({{\rr}^n})\times\cdots \times\dot{B}^{p_m,\lambda_m}({{\rr}^n})$ to $\dot{B}^{p,\lambda}({{\rr}^n})$ with its operator norm not more than $\widetilde{\mathbb{A}}_{m}$. {\rm (ii)} Assume that $\lambda_1p_1=\cdots=\lambda_mp_m$. In this case the condition \eqref{Am} is also necessary for the boundedness of $\mathcal{H}_\omega^m:\ \dot{B}^{p_1,\lambda_1}({{\rr}^n})\times\cdots \times\dot{B}^{p_m,\lambda_m}({{\rr}^n})\to\dot{B}^{p,\lambda}({{\rr}^n})$. Moreover, $$\|\mathcal{H}_\omega^m\|_{\dot{B}^{p_1,\lambda_1}({{\rr}^n}) \times\cdots\times\dot{B}^{p_m,\lambda_m}({{\rr}^n}) \rightarrow\dot{B}^{p,\lambda}({{\rr}^n})}=\widetilde{\mathbb{A}}_{m}.$$ \end{theorem} \begin{proof} By similarity, we only give the proof in the case $m=2$. When $-1/p_i=\lambda_i$, $i=1,2$, Theorem \ref{t2} is just Theorem \ref{t1}. Next we consider the case that $-1/p_i<\lambda_i<0$, $i=1,2$. First, we assume $\widetilde{\mathbb{A}}_2<\infty$.
Since $1/p=1/p_1+1/p_2$, by Minkowski's inequality and H\"{o}lder's inequality, we see that, for all balls $B=B(0,R)$, \begin{eqnarray}\label{H2f} &&\left(\frac{1}{|B|^{1+\lambda p}}\int_B|\mathcal{H}_{\omega}^2(\vec{f})(x)|^pdx\right)^{1/p}\nonumber\\ & &\hspace{0.2cm}\leq \int\limits_{0<t_1,t_2<1}\left(\frac{1}{|B|^{1+\lambda p}}\int_B\Big|\prod_{i=1}^2f_i(t_i x)\Big|^pdx\right)^{1/p}\omega(\vec{t})d\vec{t}\nonumber\\ & &\hspace{0.2cm}\leq \int\limits_{0<t_1,t_2<1}\prod_{i=1}^2\left(\frac{1}{|B|^{1+\lambda_i p_i}}\int_B\Big|f_i(t_i x)\Big|^{p_i}dx\right)^{1/p_i}\omega(\vec{t})d\vec{t}\nonumber\\ &&\hspace{0.2cm}= \int\limits_{0<t_1,t_2<1}t_1^{n\lambda_1}t_2^{n\lambda_2}\prod_{i=1}^2\left(\frac{1}{|t_i B|^{1+\lambda_i p_i}}\int_{t_i B}\Big|f_i( x)\Big|^{p_i}dx\right)^{1/p_i}\omega(\vec{t})d\vec{t}\nonumber\\ &&\hspace{0.2cm}\le \|f_1\|_{\dot{B}^{p_1,\lambda_1}}\|f_2\|_{\dot{B}^{p_2,\lambda_2}} \int\limits_{0<t_1,t_2<1}t_1^{n\lambda_1}t_2^{n\lambda_2}\omega(\vec{t})d\vec{t}. \end{eqnarray} This means that $\|\mathcal{H}_\omega^2\|_{\dot{B}^{p_1,\lambda_1}({{\rr}^n})\times\dot{B}^{p_2,\lambda_2}({{\rr}^n}) \rightarrow\dot{B}^{p,\lambda}({{\rr}^n})}\le\widetilde{\mathbb{A}}_{2}.$ For the necessity when $\lambda_1p_1=\lambda_2p_2$, let $f_1(x):=|x|^{n\lambda_1}$ and $f_2(x):=|x|^{n\lambda_2}$ for all $x\in{{\rr}^n}\setminus\{0\}$, and $f_1(0)=f_2(0):=0$. Then for any $B:=B(0,R)$, \begin{eqnarray*} \left(\frac{1}{|B|^{1+\lambda_i p_i}}\int_B|f_i(x)|^{p_i}dx\right)^{1/p_i}&=&\left(\frac{1}{|B|^{1+\lambda_i p_i}}\int_B|x|^{n\lambda_i p_i}dx\right)^{1/p_i}= \left(\frac{\omega_n}{n}\right)^{-\lambda_i}\left(\frac{1}{1+\lambda_ip_i}\right)^{1/p_i}. \end{eqnarray*} Hence $\|f_i\|_{\dot{B}^{p_i,\lambda_i}} =(\omega_n/n)^{-\lambda_i}(\frac{1}{1+\lambda_ip_i})^{1/p_i}$, $i=1,2$.
Since $\lambda=\lambda_1+\lambda_2$ and $-1/p_i< \lambda_i<0, 1<p<p_i<\infty,~i=1,2$, we have \begin{eqnarray} &&\left(\frac{1}{|B|^{1+\lambda p}}\int_B|\mathcal{H}_{\omega}^2(\vec{f})(x)|^{p}dx\right)^{1/p} \nonumber\\ &&\hspace{0.2cm}=\left(\frac{1}{|B|^{1+\lambda p}}\int_B|x|^{n\lambda p}\,dx\right)^{1/p}\int\limits_{0<t_1,t_2<1}t_1^{n\lambda_1} t_2^{n\lambda_2}\omega(\vec{t})d\vec{t} \nonumber\\ &&\hspace{0.2cm}=\left(\frac{\omega_n}{n}\right)^{-\lambda}\left(\frac{1}{1+\lambda p}\right)^{1/p}\int\limits_{0<t_1,t_2<1}t_1^{n\lambda_1} t_2^{n\lambda_2}\omega(\vec{t})d\vec{t} \nonumber\\ &&\hspace{0.2cm}= \|f_1\|_{\dot{B}^{p_1,\lambda_1}} \|f_2\|_{\dot{B}^{p_2,\lambda_2}} \frac{(1+\lambda_1p_1)^{1/p_1} (1+\lambda_2p_2)^{1/p_2}}{(1+\lambda p)^{1/p}} \int\limits_{0<t_1,t_2<1}t_1^{n\lambda_1}t_2^{n\lambda_2}\omega(\vec{t})d\vec{t} \nonumber\\ &&\hspace{0.2cm}= \|f_1\|_{\dot{B}^{p_1,\lambda_1}} \|f_2\|_{\dot{B}^{p_2,\lambda_2}} \int\limits_{0<t_1,t_2<1}t_1^{n\lambda_1}t_2^{n\lambda_2}\omega(\vec{t})d\vec{t}, \label{eq2-6} \end{eqnarray} since $\lambda_1p_1=\lambda_2p_2.$ Then, $\widetilde{\mathbb{A}}_2\leq \|\mathcal{H}_{\omega}^2\|_{\dot{B}^{p_1,\lambda_1}\times \dot{B}^{p_2,\lambda_2}\rightarrow\dot{B}^{p,\lambda}}<\infty.$ Combining (\ref{H2f}) and (\ref{eq2-6}) then finishes the proof of Theorem \ref{t2}. \end{proof} We remark that Theorem \ref{t2} when $m=1$ goes back to \cite[Theorem 2.1]{FZW}. A corresponding conclusion for $I^{m}_{\alpha}$ also holds. \begin{corollary} Let $0<\alpha<m$.
With the same assumptions as in Theorem \ref{t2}, the operator $I^{m}_{\alpha}$ maps $\dot{B}^{p_1,\lambda_1}({\mathbb R})\times \cdots\times\dot{B}^{p_m,\lambda_m}({\mathbb R})$ to $\dot{B}^{p,\lambda}(x^{-p\alpha}dx)$ with the operator norm not more than $$\frac{1}{\Gamma(\alpha)}\int\limits_{0<t_{1},t_{2},\ldots,t_{m}<1}\left(\prod_{i=1}^{m} t_{i}^{\lambda_{i}}\right)\frac{1}{|(1-t_{1}, \dots, 1-t_{m})|^{m-\alpha}}\,d\vec{t}.$$ In particular, when $\lambda_1p_1=\cdots=\lambda_mp_m$, the operator norm of $I^{m}_{\alpha}$ equals the above quantity. \end{corollary} \begin{remark} Notice that in the necessary part of Theorem \ref{t2}, we need an additional condition $\lambda_1p_1=\cdots=\lambda_mp_m$. In the case of Lebesgue spaces, this condition holds true automatically. For the case of Morrey spaces, such a condition is known to be necessary and sufficient for the interpolation properties of Morrey spaces; see, for example, \cite{LR}. \end{remark} \section{Commutators of weighted multilinear Hardy operators} In this section, we consider the sharp estimates of the multilinear commutator generated by $\mathcal{H}_{\omega}^m$ with symbols in $C\dot{M}O^q({{\rr}^n})$. Before presenting the main results of this section, we first introduce the following well-known Riemann-Lebesgue-type lemma, which plays a key role in the proof below. For completeness, we give a detailed proof. \begin{lemma}\label{LA} Let $m\in\mathbb{N}$ and $\omega:\,[a,b]^m\to[0,\infty)$ be an integrable function. Then $$\lim_{r\to\infty}\int_{[a,b]^m}\omega(t_1,\cdots,t_m)\,\prod_{i\in E} \sin(\pi r t_i)\,dt_1\,\cdots\,dt_m=0,$$ where $E$ is an arbitrary nonempty subset of $\{1,\cdots,m\}$.
\end{lemma} \begin{proof} For simplicity, we only give the proof for the case that $m=2$ and $E=\{1\}$, namely, to show $$\lim_{r\to\infty}\int_{[a,b]^2}\omega(t_1,t_2)\, \sin(\pi r t_1)\,dt_1\,dt_2=0.$$ Since $\omega$ is integrable, for any $\varepsilon>0$, there exists a partition $\{I_i\times J_j:\ i=1,\cdots,k\hspace{0.3cm}\mathrm{and}\hspace{0.3cm}j=1,\cdots,l\} $ such that $I_i=[a_{I_i},b_{I_i}]$, $J_j=[a_{J_j},b_{J_j}]$, $[a,b]=\cup_{i=1}^k I_i=\cup_{j=1}^l J_j$, $I_i\cap I_j=\emptyset=J_i\cap J_j$ if $i\neq j$, and $$0\le \int_a^b\int_a^b \omega(t_1,t_2)\,dt_1\,dt_2-\sum_{i=1}^k \sum_{j=1}^l m_{ij}|I_i||J_j|<\varepsilon/2,$$ where $m_{ij}$ is the infimum of $\omega$ on $I_i\times J_j$. Let $$g(t_1,t_2):= \sum_{i=1}^k \sum_{j=1}^l m_{ij}\chi_{I_i}(t_1)\chi_{J_j}(t_2),\quad t_1,t_2\in[a,b].$$ Then $$\int_a^b\int_a^b g(t_1,t_2)\,dt_1\,dt_2=\sum_{i=1}^k \sum_{j=1}^l m_{ij}|I_i||J_j|$$ and $$0\le \int_a^b\int_a^b [\omega(t_1,t_2)-g(t_1,t_2)]\,dt_1\,dt_2<\varepsilon/2.$$ It follows from $\omega-g\ge 0$ that \begin{eqnarray*} &&\left|\int_{[a,b]^2}\omega(t_1,t_2)\, \sin(\pi r t_1)\,dt_1\,dt_2\right|\\ &&\hspace{0.2cm}\le \left|\int_{[a,b]^2}[\omega(t_1,t_2)-g(t_1,t_2)]\, \sin(\pi r t_1)\,dt_1\,dt_2\right|+\left|\int_{[a,b]^2}g(t_1,t_2)\, \sin(\pi r t_1)\,dt_1\,dt_2\right|\\ &&\hspace{0.2cm}\le \int_{[a,b]^2}[\omega(t_1,t_2)-g(t_1,t_2)]\,dt_1\,dt_2+\left|\int_{[a,b]^2}g(t_1,t_2)\, \sin(\pi r t_1)\,dt_1\,dt_2\right|\\ &&\hspace{0.2cm}\le \varepsilon/2+\left|\frac{1}{\pi r}\sum_{i=1}^k \sum_{j=1}^l m_{ij}|J_j|[\cos(\pi ra_{I_i})-\cos(\pi rb_{I_i})]\right|. \end{eqnarray*} Choosing $r$ large enough such that $$\left|\frac{1}{\pi r}\sum_{i=1}^k \sum_{j=1}^l m_{ij}|J_j|[\cos(\pi ra_{I_i})-\cos(\pi rb_{I_i})]\right|<\varepsilon/2,$$ we then know that $$\left|\int_{[a,b]^2}\omega(t_1,t_2)\, \sin(\pi r t_1)\,dt_1\,dt_2\right|<\varepsilon.$$ This finishes the proof.
\end{proof} Now we recall the definition of the multilinear version of the commutator of the weighted Hardy operators. Let $m\geq 2$, $\omega :\, [0,1]^m\rightarrow [0,\infty)$ be an integrable function, and $b_{i}\ (1\leq i\leq m)$ be locally integrable functions on ${{\rr}^n}$. We define $$\mathcal{H}_{\omega}^{\vec{b}}(\vec{f})(x):=\int\limits_{0<t_{1},t_{2},...,t_{m}<1} \left(\prod_{i=1}^{m}f_{i}(t_{i}x)\right) \left(\prod_{i=1}^{m}(b_{i}(x)-b_{i}(t_{i}x))\right)\omega(\vec{t})\,d\vec{t},\quad x\in \mathbb{ R}^n.$$ In what follows, we set $$\mathbb{B}_{m}:=\int\limits_{0<t_{1},t_{2},\ldots,t_{m}<1} \left(\prod_{i=1}^{m}t_{i}^{n\lambda_i}\right) \omega(\vec{t})\prod_{i=1}^{m}\log\frac{1}{t_{i}}\,d\vec{t}$$ and $$\mathbb{C}_{m}:=\int\limits_{0<t_{1},t_{2},\ldots,t_{m}<1} \left(\prod_{i=1}^{m}t_{i}^{n\lambda_i}\right) \omega(\vec{t})\prod_{i=1}^{m}\log\frac{2}{t_{i}}\,d\vec{t}.$$ Then we have the following multilinear generalization of Theorem B. \begin{theorem}\label{t3} Let $1<p<p_i<\infty, 1<q_i<\infty$, $-1/p_i<\lambda_i<0$, $i=1,\ldots, m$, such that $1/p=1/p_1+\cdots+1/p_m+1/q_1+\cdots+1/q_m$, $\lambda=\lambda_1+\cdots+\lambda_m $. Assume further that $\omega$ is a non-negative integrable function on $[0,1]\times\cdots \times [0,1]$. {\rm (i)} If $\mathbb{C}_{m}<\infty,$ then $\mathcal{H}_{\omega}^{\vec{b}} $ is bounded from $\dot{B}^{p_1,\lambda_1}(\mathbb{R}^{n})\times \cdots \times \dot{B}^{p_m,\lambda_m}(\mathbb{R}^{n})$ to $ \dot{B}^{p,\lambda}(\mathbb{R}^{n})$ for all $\vec{b}=(b_1,b_2,\ldots,b_m)\in \dot{\mathrm{CMO}}^{q_1}(\mathbb{R}^{n}) \times\cdots\times\dot{\mathrm{CMO}}^{q_m}(\mathbb{R}^{n})$. {\rm (ii)} Assume that $\lambda_1p_1=\cdots=\lambda_mp_m$. In this case the condition $\mathbb{C}_{m}<\infty$ in (i) is also necessary. \end{theorem} \begin{remark} It is easy to verify that the condition (\ref{Am}) in Theorem \ref{t2} is weaker than the condition $\mathbb{C}_{m}<\infty$ in Theorem \ref{t3}.
\end{remark} \begin{proof}[Proof] By similarity, we only consider the case that $m=2$. We first show (i). That is, we assume $\mathbb{C}_{2}<\infty$ and show that $$\|\mathcal{H}_{\omega}^{\vec{b}}\|_{ \dot{B}^{p_1,\lambda_1}(\mathbb{R}^{n})\times\dot{B}^{p_2,\lambda_2}(\mathbb{R}^{n}) \rightarrow \dot{B}^{p,\lambda}(\mathbb{R}^{n})}<\infty$$ whenever $\vec b=(b_1, b_2)\in\dot{\mathrm {CMO}}^{q_1}({{\rr}^n})\times\dot{\mathrm {CMO}}^{q_2}({{\rr}^n})$. By Minkowski's inequality and the triangle inequality applied to each factor $|b_i(x)-b_i(t_ix)|$, we have \begin{eqnarray*} && \Big(\frac{1}{|B|}\int_B |\mathcal{H}_{\omega}^{\vec{b}}(\vec{f})(x)|^p\,dx\Big)^{1/p}\\ &&\displaystyle\leq \Big(\frac{1}{|B|}\int_B\Big(\int_0^1\int_0^1\prod_{i=1}^{2}|f_i(t_i x)|\prod_{i=1}^{2}|b_i(x)-b_i(t_ix)|\omega(t_1,t_2)dt_1dt_2\Big)^pdx\Big)^{1/p}\\ &&\leq \int_0^1\int_0^1\Big(\frac{1}{|B|}\int_B\Big(\prod_{i=1}^{2}|f_i(t_i x)|\prod_{i=1}^{2}|b_i(x)-b_i(t_ix)|\Big)^pdx\Big)^{1/p}\omega(t_1,t_2)dt_1dt_2\\ &&\leq I_1+I_2+I_3+I_4+I_5+I_6, \end{eqnarray*} where \begin{eqnarray*} &&I_1:=\displaystyle \int\limits_{0<t_{1},t_{2}<1}\left(\frac{1}{|B|}\int_{B} \left(\Big(\prod_{i=1}^{2}|f_{i}(t_{i}x)|\Big)\Big(\prod_{i=1}^{2}|b_{i}(x) -b_{i, B}|\Big)\right)^p\,dx\right)^{\frac 1p}\,\omega(\vec{t})\,d\vec{t},\\ &&I_2:=\displaystyle \int\limits_{0<t_{1},t_{2}<1}\left(\frac{1}{|B|}\int_{B} \left(\Big(\prod_{i=1}^{2}|f_{i}(t_{i}x)|\Big)\Big(\prod_{i=1}^{2}|b_{i}(t_ix) -b_{i, t_iB}|\Big)\right)^p\,dx\right)^{\frac 1p}\,\omega(\vec{t})\,d\vec{t},\\ &&I_3:=\displaystyle \int\limits_{0<t_{1},t_{2}<1}\left(\frac{1}{|B|}\int_{B} \left(\Big(\prod_{i=1}^{2}|f_{i}(t_{i}x)|\Big)\Big(\prod_{i=1}^{2}|b_{i,B} -b_{i, t_iB}|\Big)\right)^p\,dx\right)^{\frac 1p}\,\omega(\vec{t})\,d\vec{t},\\ &&I_4:=\displaystyle \int\limits_{0<t_{1},t_{2}<1}\left(\frac{1}{|B|}\int_{B} \left(\Big(\prod_{i=1}^{2}|f_{i}(t_{i}x)|\Big)\Big(\sum_{D(i, j)} |b_{i}(x)-b_{i, B}||b_{j, B}-b_{j, t_{j}B}|\Big)\right)^p\,dx\right)^{\frac 1p}\,\omega(\vec{t})\,d\vec{t},\\ &&I_5:=\displaystyle
\int\limits_{0<t_{1},t_{2}<1}\left(\frac{1}{|B|}\int_{B} \left(\Big(\prod_{i=1}^{2}|f_{i}(t_{i}x)|\Big)\Big(\sum_{D(i, j)} |b_{i}(x)-b_{i, B}||b_{j}(t_j x)-b_{j, t_{j}B}|\Big)\right)^p\,dx\right)^{\frac 1p}\,\omega(\vec{t})\,d\vec{t},\\ &&I_6:=\displaystyle \int\limits_{0<t_{1},t_{2}<1}\left(\frac{1}{|B|}\int_{B} \left(\Big(\prod_{i=1}^{2}|f_{i}(t_{i}x)|\Big)\Big(\sum_{D(i, j)} |b_{i,B}-b_{i,t_i B}||b_{j}(t_j x)-b_{j, t_{j}B}|\Big)\right)^p\,dx\right)^{\frac 1p}\,\omega(\vec{t})\,d\vec{t}, \end{eqnarray*} where the sum $\sum_{D(i, j)}$ is taken over \[ (i, j)\in D:=\{(1, 2), (2, 1)\},\quad\quad b_{i, B}:=\frac{1}{|B|}\int_{B}b_{i},\quad i=1, 2. \] Choose $p<s_{1}<\infty, p<s_{2}<\infty $ such that $1/s_1=1/p_1+1/q_1$, $1/s_2=1/p_2+1/q_2$. Then by H\"{o}lder's inequality, we know that \begin{eqnarray*} \displaystyle I_{1}&\leq &\displaystyle \int\limits_{0<t_{1},t_{2}<1}\prod_{i=1}^{2}\left(\frac{1}{|B|}\int_{B} \left|f_{i}(t_{i}x)\right|^{p_{i}}dx\right)^{1/p_{i}}\prod_{i=1}^{2}\left(\frac{1}{|B|}\int_{B} \left|b_{i}(x)-b_{i, B}\right|^{q_{i}}dx\right)^{1/q_{i}}\omega(\vec{t})\,d\vec{t}\\ &\leq&\displaystyle |B|^{\lambda}\int\limits_{0<t_{1},t_{2}<1}\prod_{i=1}^{2}t_i^{n\lambda_i}\prod_{i=1}^{2}\left(\frac{1}{|t_{i}B|^{1+\lambda_i p_i}}\int_{t_{i}B} \left|f_{i}(x)\right|^{p_{i}}dx\right)^{1/p_{i}}\\ &&\quad\displaystyle\times\prod_{i=1}^{2}\left(\frac{1}{|B|}\int_{B} \left|b_{i}(x)-b_{i, B}\right|^{q_{i}}dx\right)^{1/q_{i}}\omega(\vec{t})\,d\vec{t}\\ &\leq&\displaystyle C|B|^{\lambda}\|b_1\|_{\dot{CMO}^{q_1}}\|b_2\|_{\dot{CMO}^{q_2}} \|f_1\|_{\dot{B}^{p_1,\lambda_1}}\|f_2\|_{\dot{B}^{p_2,\lambda_2}} \int\limits_{0<t_{1},t_{2}<1}\prod_{i=1}^{2}t_i^{n\lambda_i}\omega(\vec{t})\,d\vec{t}.
\end{eqnarray*} Similarly, we obtain \begin{eqnarray*} \displaystyle I_{2}&\leq &\displaystyle \int\limits_{0<t_{1},t_{2}<1}\prod_{i=1}^{2}\left(\frac{1}{|B|}\int_{B} \left|f_{i}(t_{i}x)\right|^{p_{i}}dx\right)^{1/p_{i}}\prod_{i=1}^{2}\left(\frac{1}{|B|}\int_{B} \left|b_{i}(t_{i}x)-b_{i, t_{i}B}\right|^{q_{i}}dx\right)^{1/q_{i}}\omega(\vec{t})\,d\vec{t}\\ &\leq&\displaystyle |B|^{\lambda}\int\limits_{0<t_{1},t_{2}<1}\prod_{i=1}^{2}t_i^{n\lambda_i}\prod_{i=1}^{2}\left(\frac{1}{|t_{i}B|^{1+\lambda_i p_i}}\int_{t_{i}B} \left|f_{i}(x)\right|^{p_{i}}dx\right)^{1/p_{i}}\\ &&\quad\displaystyle\times\prod_{i=1}^{2}\left(\frac{1}{|t_iB|}\int_{t_iB} \left|b_{i}(x)-b_{i, t_iB}\right|^{q_{i}}dx\right)^{1/q_{i}}\omega(\vec{t})\,d\vec{t}\\ &\leq&\displaystyle C|B|^{\lambda}\|b_1\|_{\dot{CMO}^{q_1}}\|b_2\|_{\dot{CMO}^{q_2}}\|f_1\|_{\dot{B}^{p_1,\lambda_1}} \|f_2\|_{\dot{B}^{p_2,\lambda_2}} \int\limits_{0<t_{1},t_{2}<1}\prod_{i=1}^{2}t_i^{n\lambda_i}\omega(\vec{t})\,d\vec{t}. \end{eqnarray*} It follows from $1/p=1/s_{1}+1/{s_2}$ that $1=p/s_{1}+p/{s_2}$. 
From $1/s_1=1/p_1+1/q_1,1/s_2=1/p_2+1/q_2$ and H\"{o}lder's inequality, we deduce that \begin{eqnarray*} \displaystyle I_3&=&\displaystyle \int\limits_{0<t_{1},t_{2}<1}\left(\frac{1}{|B|}\int_{B} \left(\Big(\prod_{i=1}^{2}|f_{i}(t_{i}x)|\Big)\Big(\prod_{i=1}^{2}|b_{i,B} -b_{i, t_iB}|\Big)\right)^p\,dx\right)^{1/p}\,\omega(\vec{t})\,d\vec{t}\\ &\leq&\displaystyle\int\limits_{0<t_{1},t_{2}<1}\prod_{i=1}^{2}\left(\frac{1}{|B|}\int_B|f_{i}(t_{i}x)|^{s_i}\,dx\right)^{1/s_{i}}\left(\prod_{i=1}^{2}|b_{i, B}-b_{i, t_{i}B}|\right)\omega(\vec{t})\,d\vec{t}\\ &\leq&\displaystyle C|B|^{\lambda}\int\limits_{0<t_{1},t_{2}<1}t_1^{n\lambda_1}t_2^{n\lambda_2}\prod_{i=1}^{2}\left(\frac{1}{|t_iB|^{1+\lambda_i p_i}}\int_{t_iB}|f_{i}(x)|^{p_i}\,dx\right)^{1/p_{i}}\\ &&\quad\times\displaystyle\left(\prod_{i=1}^{2}|b_{i, B}-b_{i, t_{i}B}|\right)\omega(\vec{t})\,d\vec{t}\\ &\leq&\displaystyle C|B|^{\lambda}\|f_1\|_{\dot{B}^{p_1,\lambda_1}}\|f_2\|_{\dot{B}^{p_2,\lambda_2}}\int\limits_{0<t_{1},t_{2}<1}t_1^{n\lambda_1}t_2^{n\lambda_2}\prod_{i=1}^{2}|b_{i, B}-b_{i, t_{i}B}|\omega(\vec{t})\,d\vec{t}\\ &\leq&\displaystyle C|B|^{\lambda}\|f_1\|_{\dot{B}^{p_1,\lambda_1}} \|f_2\|_{\dot{B}^{p_2,\lambda_2}}\sum_{\ell=0}^\infty\sum^{\infty}_{k=0} \int\limits_{{2^{-\ell-1}}\leq t_1<{2^{-\ell}}}\int\limits_{{2^{-k-1}}\leq t_2<{2^{-k}}}t_1^{n\lambda_1}t_2^{n\lambda_2}\\ &&\quad\displaystyle\times\left(\sum^\ell_{j=0}\left|b_{1,2^{-j}B}-b_{1,2^{-j-1}B}\right| +\left|b_{1,2^{-\ell-1}B}-b_{1,t_1B}\right|\right) \\ &&\quad\times \left( \sum^k_{j=0}\left|b_{2,2^{-j}B}-b_{2,2^{-j-1}B}\right| +\left|b_{2,2^{-k-1}B}-b_{2,t_2B}\right|\right)\omega(\vec{t})\,d\vec{t}\\ &\leq&\displaystyle C|B|^{\lambda}\|b_1\|_{\dot{CMO}^{q_1}}\|b_2\|_{\dot{CMO}^{q_2}} \|f_1\|_{\dot{B}^{p_1,\lambda_1}}\|f_2\|_{\dot{B}^{p_2,\lambda_2}}\\ &&\quad\displaystyle\times\int\limits_{0<t_{1},t_{2}<1}\prod_{i=1}^{2} t_i^{n\lambda_i}\omega(\vec{t})\log\frac{2}{t_1}\log\frac{2}{t_2}\,d\vec{t}, \end{eqnarray*} where we use the fact that
\begin{eqnarray*} |b_{1,B}-b_{1,t_1B}|&\le&\sum^\ell_{j=0}\left|b_{1,2^{-j}B}-b_{1,2^{-j-1}B}\right| +\left|b_{1,2^{-\ell-1}B}-b_{1,t_1B}\right|\\ &\leq& C(\ell+1)\|b_1\|_{\dot{CMO}^{q_1}}\leq C\log\frac{2}{t_1}\|b_1\|_{\dot{CMO}^{q_1}} \end{eqnarray*} for $2^{-\ell-1}\leq t_1<2^{-\ell}$, and \begin{eqnarray*} |b_{2,B}-b_{2,t_2B}|\leq C\log\frac{2}{t_2}\|b_2\|_{\dot{CMO}^{q_2}}. \end{eqnarray*} We now estimate $I_{4}$. Similarly, we choose $1<s<\infty$ such that $1/p=1/p_1+1/p_2+1/s$ and $1/s=1/q_1+1/q_2$. Using Minkowski's inequality and H\"{o}lder's inequality yields \begin{eqnarray*} I_4&=& \int\limits_{0<t_{1},t_{2}<1}\left(\frac{1}{|B|}\int_{B} \left(\Big(\prod_{i=1}^{2}|f_{i}(t_{i}x)|\Big)\Big(\sum_{D(i, j)} |b_{i}(x)-b_{i, B}||b_{j, B}-b_{j, t_{j}B}|\Big)\right)^p\,dx\right)^{1/p}\,\omega(\vec{t})\,d\vec{t}\\ &\leq& \int\limits_{0<t_{1},t_{2}<1}\left[\left(\frac{1}{|B|}\int_{B} \left(\bigg(\prod_{i=1}^{2}|f_{i}(t_{i}x)|\bigg) \bigg(|b_{1}(x)-b_{1, B}||b_{2, B}-b_{2, t_{2}B}|\bigg)\right)^pdx\right)^{1/p}\right. \\ &&\quad\left. +\left(\frac{1}{|B|}\int_{B} \left(\bigg(\prod_{i=1}^{2}|f_{i}(t_{i}x)|\bigg)\bigg(|b_{2}(x)-b_{2, B}||b_{1, B}-b_{1, t_{1}B}|\bigg)\right)^p\,dx\right)^{1/p}\right]\,\omega(\vec{t})\,d\vec{t}\\ &\leq& \int\limits_{0<t_{1},t_{2}<1}\prod_{i=1}^{2}\left(\frac{1}{|B|}\int_{B} \left|f_{i}(t_{i}x)\right|^{p_{i}}dx\right)^{1/p_{i}}\Bigg\{\left(\frac{1}{|B|}\int_{B} \left|b_{1}(x)-b_{1, B}\right|^{s}dx\right)^{1/s}\\ &&\quad\times|b_{2, B}-b_{2, t_{2}B}|+ \left(\frac{1}{|B|}\int_{B}\left|b_{2}(x)-b_{2, B}\right|^{s}dx\right)^{1/s}|b_{1, B}-b_{1, t_{1}B}|\Bigg\}\omega(\vec{t})\,d\vec{t}\\ &\leq&\displaystyle C|B|^{\lambda}\int\limits_{0<t_{1},t_{2}<1}t_1^{n\lambda_1} t_2^{n\lambda_2}\prod_{i=1}^{2}\left(\frac{1}{|t_i B|^{1+\lambda_i p_i}}\int_{t_iB} \left|f_{i}(x)\right|^{p_{i}}dx\right)^{1/p_{i}}\\ &&\quad\times\Bigg\{\left(\frac{1}{|B|}\int_{B} \left|b_{1}(x)-b_{1, B}\right|^{s}dx\right)^{1/s}|b_{2, B}-b_{2, t_{2}B}|\\ &&\quad+\left(\frac{1}{|B|}\int_{B}\left|b_{2}(x)-b_{2,
B}\right|^{s}dx\right)^{1/s}|b_{1, B}-b_{1, t_{1}B}|\Bigg\}\omega(\vec{t})\,d\vec{t}\\ &\leq& C|B|^{\lambda}\|f_1\|_{\dot{B}^{p_1,\lambda_1}}\|f_2\|_{\dot{B}^{p_2,\lambda_2}}\int\limits_{0<t_{1},t_{2}<1}t_1^{n\lambda_1}t_2^{n\lambda_2}\Bigg\{\left(\frac{1}{|B|}\int_{B} \left|b_{1}(x)-b_{1, B}\right|^{s}dx\right)^{1/s}\\ &&\quad\times|b_{2, B}-b_{2, t_{2}B}|+\left(\frac{1}{|B|}\int_{B}\left|b_{2}(x)-b_{2, B}\right|^{s}dx\right)^{1/s}|b_{1, B}-b_{1, t_{1}B}|\Bigg\}\omega(\vec{t})\,d\vec{t}. \end{eqnarray*} From the estimates of $I_{1}$ and $I_{3}$, we deduce that \begin{eqnarray*}I_{4}&\leq & C|B|^{\lambda}\|f_1\|_{\dot{B}^{p_1,\lambda_1}}\|f_2\|_{\dot{B}^{p_2,\lambda_2}}\|b_1\|_{\dot{CMO}^{q_1}}\|b_2\|_{\dot{CMO}^{q_2}}\\ &&\quad\times\int_{0}^{1}\int_{0}^{1} t_1^{n\lambda_1}t_2^{n\lambda_2}\omega(t_{1}, t_{2})\left(1+\sum_{i=1}^{2}\log\frac{1}{t_{i}}\right)\,d t_{1}dt_{2}. \end{eqnarray*} It can be deduced from the estimates of $I_{1}$, $I_{2}$, $I_{3}$ and $I_{4}$ that $$I_{5}\leq C|B|^{\lambda}\|f_1\|_{\dot{B}^{p_1,\lambda_1}}\|f_2\|_{\dot{B}^{p_2,\lambda_2}}\|b_1\|_{\dot{CMO}^{q_1}}\|b_2\|_{\dot{CMO}^{q_2}}\int\limits_{0<t_{1},t_{2}<1}t_1^{n\lambda_1}t_2^{n\lambda_2}\omega(\vec{t})\,d\vec{t}$$ and \begin{eqnarray*}I_{6}&\leq & C|B|^{\lambda}\|f_1\|_{\dot{B}^{p_1,\lambda_1}}\|f_2\|_{\dot{B}^{p_2,\lambda_2}}\|b_1\|_{\dot{CMO}^{q_1}}\|b_2\|_{\dot{CMO}^{q_2}}\\ &&\quad\times\int_{0}^{1}\int_{0}^{1} t_1^{n\lambda_1}t_2^{n\lambda_2}\omega(t_{1}, t_{2})\left(1+\sum_{i=1}^{2}\log\frac{1}{t_{i}}\right)\,d t_{1}dt_{2}.
\end{eqnarray*} Combining the estimates of $I_{1}$, $I_{2}$, $I_{3}$, $I_{4}$, $I_{5}$ and $I_{6}$ gives \begin{eqnarray*} \displaystyle \left(\frac{1}{|B|^{1+\lambda p}}\int_{B}|\mathcal{H}_{\omega}^{\vec{b}}\vec{f}(x)|^pdx\right)^{1/p}&\leq& C\|f_1\|_{\dot{B}^{p_1,\lambda_1}}\|f_2\|_{\dot{B}^{p_2,\lambda_2}}\|b_1\|_{\dot{CMO}^{q_1}}\|b_2\|_{\dot{CMO}^{q_2}}\\ &&\quad\displaystyle \times\int_{0}^{1}\int_{0}^{1} t_1^{n\lambda_1}t_2^{n\lambda_2}\omega(t_{1}, t_{2})\prod_{i=1}^{2}\log\frac{2}{t_{i}}\,d t_{1}dt_{2}. \end{eqnarray*} This proves (i). Now we prove the necessity in (ii). Assume that $$\|\mathcal{H}_{\omega}^{\vec{b}}\|_{ \dot{B}^{p_1,\lambda_1}(\mathbb{R}^{n})\times\dot{B}^{p_2,\lambda_2}(\mathbb{R}^{n}) \rightarrow \dot{B}^{p,\lambda}(\mathbb{R}^{n})}<\infty$$ whenever $\vec b=(b_1, b_2)\in\dot{\mathrm {CMO}}^{q_1}({{\rr}^n})\times\dot{\mathrm {CMO}}^{q_2}({{\rr}^n})$. To show $\mathbb{C}_{2}<\infty$, it suffices to prove that $\mathbb{A}_2<\infty$, $\mathbb{B}_{2}<\infty$, $$\begin{array}{rl} \mathbb{D}:=\displaystyle \int\limits_{0<t_{1},t_{2}<1} \left(\prod_{i=1}^{2}t_{i}^{n\lambda_i}\right)\omega(t_1,t_2)\log \frac{1}{t_1}\,dt_1\,dt_2<\infty, \end{array}$$ and $$\begin{array}{rl} \mathbb{E}:=\displaystyle \int\limits_{0<t_{1},t_{2}<1} \left(\prod_{i=1}^{2}t_{i}^{n\lambda_i}\right)\omega(t_1,t_2)\log \frac{1}{t_2}\,dt_1\,dt_2<\infty. \end{array}$$ To prove $\mathbb{B}_2<\infty$, we set $b_{1}(x):=\log|x|\in \mathrm{BMO}(\mathbb{R}^{n})\subset\dot{\mathrm {CMO}}^{q_1}({{\rr}^n})$, and $b_{2}(x):=\log|x|\in \mathrm{BMO}(\mathbb{R}^{n})\subset\dot{\mathrm {CMO}}^{q_2}({{\rr}^n})$. Define $f_{1}:=|x|^{n\lambda_1}$ and $f_{2}:=|x|^{n\lambda_2}$ if $x\in{{\rr}^n}\setminus\{0\}$, and $f_1(0)=f_2(0):=0$.
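The norm identities stated next can be verified by a direct computation. Writing $\omega_n$ for the surface area of the unit sphere $S^{n-1}$, so that $|B(0,R)|=\frac{\omega_n}{n}R^n$, one has, for every $R>0$,

```latex
\left(\frac{1}{|B(0,R)|^{1+\lambda_1 p_1}}\int_{B(0,R)}|x|^{n\lambda_1 p_1}\,dx\right)^{1/p_1}
=\left(\frac{\omega_n}{n}R^{n}\right)^{-\frac{1+\lambda_1 p_1}{p_1}}
\left(\frac{\omega_n R^{n(1+\lambda_1 p_1)}}{n(1+\lambda_1 p_1)}\right)^{1/p_1}
=\left(\frac{\omega_n}{n}\right)^{-\lambda_1}\Big(\frac{1}{1+\lambda_1 p_1}\Big)^{1/p_1},
```

which is independent of $R$; the same computation with $(p_2,\lambda_2)$ in place of $(p_1,\lambda_1)$ gives the norm of $f_2$.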
Then $$\|f_1\|_{\dot{B}^{p_1, \lambda_1}}=\left(\frac{\omega_n}{n}\right)^{-\lambda_1}\Big(\frac{1}{1+\lambda_1 p_1}\Big)^{1/p_1},\quad \|f_2\|_{\dot{B}^{p_2, \lambda_2}}=\left(\frac{\omega_n}{n}\right)^{-\lambda_2}\Big(\frac{1}{1+\lambda_2 p_2}\Big)^{1/p_2}$$ and $$\mathcal{H}_\omega^{\vec{b}}(\vec{f})(x)=|x|^{n\lambda}\int_0^1\int_0^1t_1^{n\lambda_1}t_2^{n\lambda_2}\omega(t_1,t_2) \log\frac{1}{t_1}\log\frac{1}{t_2}dt_1dt_2.$$ Since $1<p<p_i<\infty$, $-1/p_i<\lambda_i<0$ $(i=1, 2)$ and $\lambda=\lambda_1+\lambda_2$, we see that, for all $B=B(0,R)$, \begin{eqnarray*} &&\Big(\frac{1}{|B|^{1+\lambda p}}\int_B |\mathcal{H}_\omega^{\vec{b}}(\vec{f})(x)|^{p}dx\Big)^{1/p}\\ &&\quad=\Big(\frac{1}{|B|^{1+\lambda p}}\int_B |x|^{n\lambda p}dx\Big)^{1/p}\int_0^1\int_0^1t_1^{n\lambda_1}t_2^{n\lambda_2}\omega(t_1,t_2) \log\frac{1}{t_1}\log\frac{1}{t_2}dt_1dt_2\\ &&\quad =\left(\frac{\omega_n}{n}\right)^{-\lambda}\Big(\frac{1}{1+\lambda p}\Big)^{1/p}\int_0^1\int_0^1t_1^{n\lambda_1}t_2^{n\lambda_2}\omega(t_1,t_2) \log\frac{1}{t_1}\log\frac{1}{t_2}dt_1dt_2\\ &&\quad= \|f_1\|_{\dot{B}^{p_1,\lambda_1}}\|f_2\|_{\dot{B}^{p_2,\lambda_2}}\int_0^1\int_0^1t_1^{n\lambda_1}t_2^{n\lambda_2}\omega(t_1,t_2) \log\frac{1}{t_1}\log\frac{1}{t_2}dt_1dt_2. \end{eqnarray*} Thus $\mathbb{B}_{2}\leq\|\mathcal{H}_{\omega}^{\vec{b}}\|_{ \dot{B}^{p_1,\lambda_1}(\mathbb{R}^{n})\times\dot{B}^{p_2,\lambda_2}(\mathbb{R}^{n}) \rightarrow \dot{B}^{p,\lambda}(\mathbb{R}^{n})}<\infty.$ Since the proof for $\mathbb{D}<\infty$ is similar to that for $\mathbb{E}<\infty$, we only show $\mathbb{E}<\infty$. To this end, for any $r\in\mathbb N$ and $R\in(0,+\infty)$, we choose $b_{1}(x):=\chi_{[B(0,R/2)]^c}(x)\,\sin(\pi r|x|),$ and $b_{2}(x):=\log|x|, $ where $[B(0,R/2)]^c:={{\rr}^n}\setminus B(0, R/2)$.
Obviously, we have $\vec{b}=(b_{1},b_2)\in \mathrm{\dot{CMO}^{q_1}}(\mathbb{R}^{n})\times\mathrm{\dot{CMO}^{q_2}}(\mathbb{R}^{n}),$ and hence, $$\|\mathcal{H}_{\omega}^{\vec{b}}\|_{ \dot{B}^{p_1,\lambda_1}(\mathbb{R}^{n})\times\dot{B}^{p_2,\lambda_2}(\mathbb{R}^{n}) \rightarrow \dot{B}^{p,\lambda}(\mathbb{R}^{n})}<\infty.$$ Let \begin{equation}\label{f1} f_{1}(x):= \begin{cases} 0,&\quad |x|\leq\frac{R}{2},\\ \displaystyle|x|^{n\lambda_1},&\quad |x|>\frac{R}{2},\end{cases} \end{equation} and \begin{equation}\label{f2} f_{2}(x):= \begin{cases} 0,&\quad |x|\leq\frac{R}{2},\\ \displaystyle|x|^{n\lambda_2},&\quad |x|>\frac{R}{2}. \end{cases} \end{equation} Then, we have $$\begin{array}{rl} \displaystyle &\mathcal{H}_{\omega}^{\vec{b}} \vec{f}(x)\\ &\quad=\displaystyle\int\limits_{0<t_{1},t_{2}<1} \left(\prod_{i=1}^{2}f_{i}(t_{i}x)\right) \left(\prod_{i=1}^{2}(b_{i}(x)-b_{i}(t_{i}x)) \right)\omega(\vec{t})\,d\vec{t}\\ &\quad=\displaystyle |x|^{n\lambda} \int_{{\frac{R}{2|x|}}}^{1}\int_{{\frac{R}{2|x|}}}^{1} t_{1}^{n\lambda_1}t_{2}^{n\lambda_2}\left(b_{1}(x)-b_{1}(t_{1}x) \right)\omega(t_{1}, t_{2})\log\frac1{t_2}\,d t_{1} dt_{2}\\ &\quad = \displaystyle |x|^{n\lambda}b_{1}(x) \int_{{\frac{R}{2|x|}}}^{1}\int_{{\frac{R}{2|x|}}}^{1} t_{1}^{n\lambda_1}t_{2}^{n\lambda_2} \omega(t_{1}, t_{2})\log\frac1{t_2}\,d t_{1} dt_{2}-\eta_{d}, \end{array}$$ whenever $R/2<|x|<R$, where $$\begin{array}{rl} \eta_{d}&= |x|^{n\lambda} \displaystyle\int_{{\frac{R}{2|x|}}}^{1}\int_{{\frac{R}{2|x|}}}^{1} t_{1}^{n\lambda_1}t_{2}^{n\lambda_2}\omega(t_{1}, t_{2})b_{1}(t_1x) \log\frac1{t_2}\,d t_{1} dt_{2}\\ &=\displaystyle |x|^{n\lambda} \int_{{\frac{R}{2|x|}}}^{1}\int_{{\frac{R}{2|x|}}}^{1} t_{1}^{n\lambda_1}t_{2}^{n\lambda_2}\omega(t_{1}, t_{2})\sin(\pi rt_1|x|) \log\frac1{t_2}\,d t_{1}\,dt_{2}.
\end{array}$$ Since $\omega$ is integrable on $[0,1]\times[0,1]$ and $\mathbb{B}_2<\infty$, we know that $$\displaystyle t_{1}^{n\lambda_1}t_{2}^{n\lambda_2}\omega(t_{1},t_{2})\log\frac1{t_2}$$ is integrable on $(\frac{1}{2},1)\times(\frac{1}{2},1)$. Then, it follows from Lemma \ref{LA} that for any $\delta>0$, there exists a positive constant $C_{R,\delta}$ that depends on $R$ and $\delta$ such that $$\begin{array}{rl} \displaystyle\left|\int_{{\frac{1}{2}}}^{1}\int_{{\frac{1}{2}}}^{1} t_{1}^{n\lambda_1}t_{2}^{n\lambda_2}\omega(t_{1}, t_{2})\,\sin(\pi rt_1) \log\frac1{t_2}\,d t_{1} dt_{2}\right|<\delta/2, \end{array}$$ for all $r>C_{R,\delta}$. Now we choose $r>\max(1/R,1)C_{R,\delta}$. Then for any $R/2<|x|<R$, $r|x|>C_{R,\delta}$, and hence $$\begin{array}{rl} \displaystyle\left|\int_{{\frac{1}{2}}}^{1}\int_{{\frac{1}{2}}}^{1} t_{1}^{n\lambda_1}t_{2}^{n\lambda_2}\omega(t_{1}, t_{2})\,\sin(\pi rt_1|x|) \log\frac1{t_2}\,d t_{1} dt_{2}\right|<\delta/2, \end{array}$$ which further implies that $|\eta_{d}|<\frac{\delta}2 |x|^{n\lambda}.$ Therefore, for any $R/2<|x|<R$, $$\begin{array}{rl} |\mathcal{H}_{\omega}^{\vec{b}}\vec{f}(x)|&\geq \displaystyle |x|^{n\lambda} \left(\int_{{\frac{R}{2|x|}}}^{1}\int_{{\frac{R}{2|x|}}}^{1} t_{1}^{n\lambda_1}t_{2}^{n\lambda_2} \omega(t_{1}, t_{2})\log\frac1{t_2}\,d t_{1} dt_{2}-\frac{\delta}2\right). \end{array}$$ Let $\varepsilon>0$ be small enough and choose $\delta>0$ such that $$\begin{array}{rl} \delta<\displaystyle \int_{{\frac{R\varepsilon}{2}}}^{1} \int_{{\frac{R\varepsilon}{2}}}^{1} t_{1}^{n\lambda_1}t_{2}^{n\lambda_2} \omega(t_{1}, t_{2})\log\frac1{t_2}\,d t_{1} dt_{2}.
\end{array}$$ Then, for all balls $B=B(0,R)$, $$\begin{array}{rl} &\displaystyle\left(\frac{1}{|B|^{1+\lambda p}}\int_B|\mathcal{H}_{\omega}^{\vec{b}}\vec{f}(x)|^pdx\right)^{1/p}\\ &\quad\geq \displaystyle \left(\frac{1}{|B|^{1+\lambda p}}\int\limits_{R/2<|x|<R} |x|^{n\lambda p}\left(\int_{{\frac{R}{2|x|}}}^{1} \int_{{\frac{R}{2|x|}}}^{1} t_{1}^{n\lambda_1}t_{2}^{n\lambda_2} \omega(t_{1}, t_{2})\log\frac1{t_2}\,d t_{1} dt_{2} -\frac{\delta}2\right)^{p}\,dx\right)^{1/p}\\ &\quad\geq \displaystyle \left(\frac{1}{|B|^{1+\lambda p}}\int\limits_{R/2<|x|<R} |x|^{n\lambda p}\left(\int_{{\frac{R\varepsilon}{2}}}^{1} \int_{{\frac{R\varepsilon}{2}}}^{1} t_{1}^{n\lambda_1}t_{2}^{n\lambda_2}\omega(t_{1}, t_{2})\log\frac1{t_2}\,d t_{1} dt_{2}-\frac{\delta}2\right)^{p}\,dx\right)^{1/p}\\ &\quad\geq \displaystyle C \left(\frac{1}{|B|^{1+\lambda p}}\int\limits_{R/2<|x|<R} |x|^{n\lambda p}\left(\int_{{\frac{R\varepsilon}{2}}}^{1} \int_{{\frac{R\varepsilon}{2}}}^{1} t_{1}^{n\lambda_1}t_{2}^{n\lambda_2} \omega(t_{1}, t_{2})\log\frac1{t_2}\,d t_{1} dt_{2}\right)^{p}\,dx\right)^{1/p}\\ &\quad\geq\displaystyle C\left(\frac{\omega_n}{n}\right)^{-\lambda}\Big(\frac{1}{1+\lambda p}\Big)^{1/p}\int_{{\frac{R\varepsilon}{2}}}^{1} \int_{{\frac{R\varepsilon}{2}}}^{1} t_{1}^{n\lambda_1}t_{2}^{n\lambda_2}\omega(t_{1}, t_{2})\log\frac1{t_2}\,d t_{1} dt_{2}\\ &\quad=\displaystyle C \prod_{i=1}^{2}\|f_{i}\|_{\dot{B}^{p_i,\lambda_i}(\mathbb{R}^{n})} \int_{{\frac{R\varepsilon}{2}}}^{1} \int_{{\frac{R\varepsilon}{2}}}^{1} t_{1}^{n\lambda_1}t_{2}^{n\lambda_2}\omega(t_{1}, t_{2}) \log\frac1{t_2}\,d t_{1} dt_{2}, \end{array}$$ which further implies that \begin{eqnarray*} &&\|\mathcal{H}_{\omega}^{\vec{b}}\|_{\dot{B}^{p_1,\lambda_1}(\mathbb{R}^{n})\times \dot{B}^{p_2,\lambda_2}(\mathbb{R}^{n}) \rightarrow \dot{B}^{p,\lambda}(\mathbb{R}^{n})}\\ &&\quad\geq \displaystyle C \prod_{i=1}^{2}\|f_{i}\|_{\dot{B}^{p_i,\lambda_i}(\mathbb{R}^{n})} \int_{{\frac{R\varepsilon}{2}}}^{1} 
\int_{{\frac{R\varepsilon}{2}}}^{1} t_{1}^{n\lambda_1}t_{2}^{n\lambda_2}\omega(t_{1}, t_{2}) \log\frac1{t_2}\,d t_{1} dt_{2}. \end{eqnarray*} Letting $\varepsilon\to 0^+$ then yields $\mathbb{E}<\infty$. To show that $\mathbb{A}_2<\infty$, we let $$b_{1}(x)=b_{2}(x):=\chi_{[B(0,R/2)]^c}(x)\, \sin(\pi r|x|),$$ where $ R\in(0,+\infty)$ and $r\in \mathbb{N}$, and let $f_1,\ f_2$ be as in (\ref{f1}), (\ref{f2}), respectively. Repeating the proof for $\mathbb{E}<\infty$, we also obtain that $\mathbb{A}_2<\infty$. Combining all above estimates then yields $\mathbb{C}_2<\infty.$ This finishes the proof of Theorem \ref{t3}. \end{proof} We remark that Theorem \ref{t3} when $m=1$ is just \cite[Theorem 3.1]{FZW}. In particular, when $n=1$ and $$\omega(\vec{t}):=\frac{1}{\Gamma(\alpha)|(1-t_{1}, \dots, 1-t_{m})|^{m-\alpha}},$$ we know that $$\mathcal{H}_{\omega}^{\vec{b}}(\vec{f})(x)=x^{-\alpha}I^{m}_{\alpha, \vec{b}}\vec{f}(x),\,\quad x>0,$$ where $$I^{m}_{\alpha, \vec{b}}\vec{f}(x):=\frac{1}{\Gamma(\alpha)} \int\limits_{0<t_{1},t_{2},...,t_{m}<x} \frac{\left(\prod_{i=1}^{m}f_{i}(t_{i})\right)\prod_{i=1}^{m}(b_{i}(x)-b_{i}(t_{i}))}{|(x-t_{1}, \dots, x-t_{m})|^{m-\alpha}}d\vec{t}.$$ Then, as an immediate consequence of Theorem \ref{t3}, we have the following corollary. \begin{corollary} Let $0<\alpha<m$. Under the assumptions of Theorem \ref{t3}, the operator $I^{m}_{\alpha, \vec{b}}$ maps the product of central Morrey spaces $\dot{B}^{p_1,\lambda_1}(\mathbb{R})\times \dots \times \dot{B}^{p_m,\lambda_m}(\mathbb{R})$ to $ \dot{B}^{p,\lambda}(x^{-p\alpha}dx)$. \end{corollary} \section{Weighted Ces\`{a}ro operator of multilinear type and its commutator} In this section, we focus on the corresponding results for the adjoint operators of weighted multilinear Hardy operators.
Recall that, as the adjoint operator of the weighted Hardy operator, the \emph{weighted Ces\`{a}ro operator $G_{\omega}$} is defined by $$G_{\omega}f(x):=\int^1_{0}f(x/t)t^{-n}\omega(t)\,dt,\hspace{3mm}x\in \mathbb{ R}^n.$$ In particular, when $\omega\equiv 1$ and $n=1$, $G_{\omega}$ is the classical Ces\`{a}ro operator defined as $$\displaylines{ Gf(x):=\left\{\begin{array}{ll} \displaystyle\int^\infty_{x}\frac{f(y)}{y}\,dy,&\quad x>0,\\ \displaystyle-\int^x_{-\infty}\frac{f(y)}{y}\,dy,&\quad x<0.\end{array}\right.}$$ When $n=1$ and $\omega(t):=\frac{1}{\Gamma(\alpha)(\frac{1}{t}-1)^{1-\alpha}}$ with $0<\alpha<1$, the operator $G_{\omega}f(\cdot)$ is reduced to $(\cdot)^{1-\alpha}J_{\alpha}f(\cdot)$, where $J_{\alpha}$ is a variant of the Weyl integral operator defined by $$J_{\alpha}f(x):=\frac{1}{\Gamma(\alpha)}\int_{x}^{\infty} \frac{f(t)}{(t-x)^{1-\alpha}}\frac{dt}{t}, \quad x>0.$$ Moreover, it is well known that the weighted Hardy operator $H_{\omega}$ and the weighted Ces\`{a}ro operator $G_{\omega}$ are mutually adjoint, namely, $$\int_{\mathbb{R}^{n}}g(x)H_{\omega}f(x)\,dx=\int_{\mathbb{R}^{n}} f(x)G_{\omega}g(x)\,dx,\eqno(4.1)$$ for all $f\in L^p(\mathbb{R}^n)$ and $g\in L^q(\mathbb{R}^n)$ with $1<p<\infty, 1/p+1/q=1$. We refer to \cite{X,FZW} for more details. Let $m\geq 2$ be an integer, and $\omega : [0,1]^m\rightarrow [0,\infty)$ be an integrable function. Let $f_{i}$ be measurable complex-valued functions on $\mathbb{R}^n$, $1\leq i\leq m$. Corresponding to the weighted multilinear Hardy operators, we define the following \emph{weighted multilinear Ces\`{a}ro operator}: $$\mathcal{G}_{\omega}(\vec{f})(x):= \int\limits_{0<t_{1},t_{2},\ldots,t_{m}<1} \left(\prod_{i=1}^{m}f_{i}(x/t_{i})(t_{i})^{-n}\right)\omega(\vec{t})\,d\vec{t},\quad x\in \mathbb{ R}^n.$$ Notice that, in general, the multilinear operators $\mathcal{H}_{\omega}$ and $\mathcal{G}_{\omega}$ do not satisfy a duality relation analogous to (4.1).
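The duality (4.1) is easy to confirm numerically in the linear case $n=1$. In the sketch below, the weight $\omega(t)=3t^2$ and the Gaussian test functions are arbitrary choices of ours (the factor $t^2$ also tames the $t^{-n}$ factor in $G_\omega$); both sides of (4.1) are approximated by simple Riemann and midpoint sums.

```python
import numpy as np

# Check  ∫ g(x) H_ω f(x) dx = ∫ f(x) G_ω g(x) dx  for n = 1.
x = np.linspace(-8.0, 8.0, 1601)[:, None]        # spatial grid, dx = 0.01
t = ((np.arange(400) + 0.5) / 400)[None, :]      # midpoint grid on (0, 1)
dx, dt = 0.01, 1.0 / 400

f = lambda u: np.exp(-u**2)                      # arbitrary test functions
g = lambda u: np.exp(-u**2 / 2)
omega = 3.0 * t**2                               # arbitrary integrable weight

Hf = np.sum(f(t * x) * omega, axis=1) * dt       # H_ω f(x) = ∫ f(tx) ω(t) dt
Gg = np.sum(g(x / t) * omega / t, axis=1) * dt   # G_ω g(x) = ∫ g(x/t) t^{-1} ω(t) dt

lhs = np.sum(g(x[:, 0]) * Hf) * dx
rhs = np.sum(f(x[:, 0]) * Gg) * dx
# lhs and rhs agree up to quadrature error
```

The underlying identity is just the change of variables $y=tx$ together with Fubini's theorem.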
We also point out that, when $n=1$ and $$\omega(\vec{t}):=\frac{1}{\Gamma(\alpha)|(\frac{1}{t_{1}}-1, \dots, \frac{1}{t_{m}}-1)|^{m-\alpha}},$$ we have $$\mathcal{G}_{\omega}(\vec{f})(x)=x^{m-\alpha}J^{m}_{\alpha}\vec{f}(x),\,\quad x>0,$$ where $$J^{m}_{\alpha}\vec{f}(x):=\frac{1}{\Gamma(\alpha)} \int\limits_{x<t_{1},t_{2},...,t_{m}<\infty}\frac{\prod_{i=1}^{m}f_{i}(t_{i})}{|(t_{1}-x, \dots, t_{m}-x)|^{m-\alpha}}\frac{d\vec{t}}{\vec{t}}.$$ Similar to the argument used in Section 2, we have the following conclusions. \begin{theorem}\label{t4} Let $1<p, p_i<\infty$, $i=1,\ldots, m$, with $1/p=1/p_1+\cdots+1/p_m$. Then $\mathcal{G}_{\omega}$ is bounded from $L^{p_1}(\mathbb{R}^{n})\times \dots \times L^{p_m}(\mathbb{R}^{n})$ to $ L^p(\mathbb{R}^{n})$ if and only if $$\mathbb{F}:=\int\limits_{0<t_{1},t_{2},...,t_{m}<1} \left(\prod_{i=1}^{m}t_{i}^{-n(1-1/p_{i})}\right)\omega(\vec{t})\,d\vec{t}<\infty.\eqno(4.2)$$ Moreover, $$\|\mathcal{G}_{\omega}\|_{L^{p_1}(\mathbb{R}^n)\times \dots \times L^{p_m}(\mathbb{R}^n)\rightarrow L^{p}(\mathbb{R}^n)}=\mathbb{F}.\eqno(4.3)$$ \end{theorem} We can also deduce from Theorem \ref{t4} that \begin{corollary} Let $0<\alpha<m$.
Under the assumptions of Theorem \ref{t4}, the operator $J^{m}_{\alpha}$ maps the product of Lebesgue spaces $L^{p_1}(\mathbb{R})\times \dots \times L^{p_m}(\mathbb{R})$ to the weighted Lebesgue space $ L^p(x^{pm-p\alpha} dx)$, with norm $$\frac{1}{\Gamma(\alpha)}\int\limits_{0<t_{1},t_{2},...,t_{m}<1}\left(\prod_{i=1}^{m} t_{i}^{-(1-1/p_{i})}\right)\frac{1}{|(\frac{1}{t_{1}}-1, \dots, \frac{1}{t_{m}}-1)|^{m-\alpha}}\,d\vec{t}.$$ \end{corollary} Next, we define the commutator of weighted Ces\`{a}ro operators of multilinear type as $$\mathcal{G}_{\omega}^{\vec{b}}(\vec{f})(x):= \int\limits_{0<t_{1},t_{2},...,t_{m}<1}\left(\prod_{i=1}^{m}f_{i}(x/t_{i})(t_{i})^{-n}\right) \left(\prod_{i=1}^{m}\left(b_{i}(x)-b_{i}(\frac{x}{t_{i}})\right) \right)\omega(\vec{t})\,d\vec{t},\,\quad x\in \mathbb{ R}^n.$$ In particular, when $n=1$ and $\omega$ is as above, we know that $$\mathcal{G}_{\omega}^{\vec{b}}(\vec{f})(x)=x^{m-\alpha}J^{m}_{\alpha, \vec{b}}\vec{f}(x),\,\, x>0,$$ where $$J^{m}_{\alpha, \vec{b}}\vec{f}(x):=\frac{1}{\Gamma(\alpha)}\int\limits_{x<t_{1},t_{2},...,t_{m}<\infty} \frac{\left(\prod_{i=1}^{m}f_{i}(t_{i})\right)\prod_{i=1}^{m} (b_{i}(x)-b_{i}(t_{i}))}{|(t_{1}-x, \dots, t_{m}-x)|^{m-\alpha}}\frac{d\vec{t}}{\vec{t}}.$$ Let $m \in \mathbb{N}$ and $m\geq 2$. Define $$\mathbb{F}_{m}:=\int\limits_{0<t_{1},t_{2},\ldots,t_{m}<1}\left(\prod_{i=1}^{m}t_{i}^{-n\lambda_i-n}\right)\omega(\vec{t})\prod_{i=1}^{m}\log\frac{2}{t_{i}}\,d\vec{t}.$$ Similar to the arguments in Section 3, we have the following conclusion. \begin{theorem} \label{t5} Let $1<p< p_i<\infty, 1<q_i<\infty$, $-1/p_i<\lambda_i<0$, $i=1,\ldots, m$, be such that $\frac{1}{p}=\frac{1}{p_1}+ \cdots +\frac{1}{p_m}+\frac{1}{q_1}+ \cdots +\frac{1}{q_m}$ and $\lambda=\lambda_1+\cdots+\lambda_m$.
$\rm(i)$ If $\mathbb{F}_{m}<\infty$, then $\mathcal{G}_{\omega}^{\vec{b}} $ is bounded from $\dot{B}^{p_1, \lambda_1}(\mathbb{R}^{n})\times \cdots \times \dot{B}^{p_m, \lambda_m}(\mathbb{R}^{n})$ to $ \dot{B}^{p, \lambda}(\mathbb{R}^{n})$, for all $\vec{b}=(b_1,b_2,\ldots,b_m)\in \dot{\mathrm{CMO}}^{q_1}(\mathbb{R}^{n})\times\cdots \times\dot{\mathrm{CMO}}^{q_m}(\mathbb{R}^{n})$. $\rm(ii)$ Assume that $\lambda_1p_1=\cdots=\lambda_mp_m$. In this case the condition $\mathbb{F}_{m}<\infty$ in (i) is also necessary. \end{theorem} As an immediate corollary, we have the following consequence. \begin{corollary} Let $0<\alpha<m$. Under the assumptions of Theorem \ref{t5}, the operator $J^{m}_{\alpha, \vec{b}}$ maps the product of central Morrey spaces $\dot{B}^{p_1, \lambda_1}(\mathbb{R})\times \dots \times \dot{B}^{p_m, \lambda_m}(\mathbb{R})$ to the weighted central Morrey space $ \dot{B}^{p, \lambda}(x^{pm-p\alpha}dx)$. \end{corollary} Finally, we give some further comments on weighted product Hardy operators. Let $\omega : [0,1]\times[0,1]\rightarrow [0,\infty)$ be an integrable function. Let $f(x_{1}, x_{2})$ be a measurable complex-valued function on $\mathbb{R}^n\times\mathbb{R}^m$. The \emph{weighted product Hardy operator} is defined as $$\mathbb{H}_{\omega}f(x_{1}, x_{2}):= \int\limits_{0<t_{1},t_{2}<1}f(t_{1}x_{1}, t_{2}x_{2})\omega(t_{1}, t_{2}) \,dt_{1}dt_{2}, \quad (x_1,x_2)\in {{\rr}^n}\times \mathbb{R}^m.$$ If $\omega\equiv 1$ and $n=m=1$, then $\mathbb{H}_{\omega}f$ is reduced to the two dimensional Hardy operator $\mathbb{H}$ defined by $$\mathbb{H}f(x_{1}, x_{2}):= \frac{1}{x_{1}}\frac{1}{x_{2}}\int^{x_{1}}_{0} \int^{x_{2}}_{0}f(t_{1}, t_{2})\,dt_{1}dt_{2},\,\quad x_{1}, x_{2}\neq0,$$ which was first introduced by Sawyer \cite{S}. Sharp estimates for weighted product Hardy operators and their commutators on Lebesgue spaces remain interesting open questions. \medskip \noindent{\bf Acknowledgements.}\quad The authors cordially thank the referees for their careful reading and helpful comments.
\section{Introduction} Gravitational waves are a promising new tool for astrophysics, capable of bringing breakthroughs in our understanding of the Universe. Currently, an array of ground-based interferometers is undergoing upgrades that should lead to the first direct detection of gravitational waves within this decade~\cite{ligo,virgo}. A project for a space-based interferometer~\cite{2012CQGra..29l4016A} is also under way, and could be operational in the next decade. The likelihood of detecting gravitational waves improves when reliable and efficient waveform models are available for use in the data analysis~\cite{Finn:1992xs}. Systems with spins, unless they are exactly aligned or anti-aligned with the orbital angular momentum, will undergo a secular precession of the spins and of the orbital plane~\cite{Apostolatos:1994mx}. This precession will impact the waveforms, and it is important to take it into account to properly extract the spins of the binary components, as well as to break degeneracies between different system parameters~\cite{Lang:1900bz,Lang:2011je}. The purpose of this paper is to develop accurate analytical waveforms for spinning and precessing compact binary, quasi-circular inspirals. Currently, the most accurate waveforms are numerical, time-domain solutions of the PN equations~\cite{Lang:1900bz,Lang:2011je} that are then numerically Fourier transformed. Such numerical solutions, however, can be rather computationally expensive, especially when a large number of templates are needed, as is the case for spinning systems. We will here employ novel mathematical techniques (multiple scale analysis and uniform asymptotic expansions)~\cite{Bender} to produce analytic, Fourier-domain, waveform families that accurately reproduce their numerical counterparts. \subsection{Previous Waveforms for Spinning Binaries} Over the years, several groups have developed increasingly accurate waveforms for use in gravitational wave astrophysics.
Post-Newtonian (PN), quasi-circular and eccentric inspiral waveforms for compact binaries have been obtained to high-order as an expansion in the orbital velocity~\cite{blanchet-review}. The leading-order, spin-orbit and spin-spin contributions to the dynamics appear at 1.5PN and 2PN relative order respectively~\cite{PhysRevD.12.329,PhysRevD.47.R4183,PhysRevD.52.821}. These calculations have now been extended through 2.5PN~\cite{PhysRevD.63.044006,PhysRevD.74.104033,PhysRevD.77.064032} and 3PN~\cite{PhysRevD.77.081501}, and out to 3.5PN order~\cite{Marsat:2012fn,ANDP:ANDP201100094,Bohe:2012mr,Bohe:2013cla}. The spin-orbit contributions to the gravitational wave phase and amplitude are currently known to 3.5PN order~\cite{PhysRevD.81.089901,1475-7516-2012-09-028,Bohe:2013cla}. Presently, there are three special cases where purely analytic, inspiral waveform models exist for spinning binaries: \begin{enumerate} \item[(i)] {\bf{Aligned}}: Systems where the spins are co-aligned or anti-aligned with the orbital angular momentum; \item[(ii)] {\bf{Partially Non-Spinning}}: Systems where one of the compact bodies is not spinning; \item[(iii)] {\bf{Equal-Mass}}: Systems with both components spinning, but with a mass ratio of unity. \end{enumerate} For case (i), the spins stay aligned (or anti-aligned) with the orbital angular momentum throughout the evolution and there is no precession. The waveforms here bear strong resemblance to non-spinning waveforms, but with a spin-corrected chirping (see e.g.~\cite{Poisson:1995ef,Arun:2008kb}). For cases (ii) and (iii), the system undergoes {\em simple precession}\footnote{For case (iii), the system undergoes simple precession if the 2PN spin-spin corrections to the spin dynamics are neglected.}, i.e.~the precession of the orbital and spin angular momenta about the total angular momentum with a single (evolving) precession frequency (see e.g.~\cite{Apostolatos:1994mx}).
Cases (i) and (ii) are of particular interest since they provide good approximations to systems that are expected to be found in Nature. Case (i) pertains to binaries embedded in a gaseous environment, where the gas torque tends to align the spin with the orbital angular momentum~\cite{2007ApJ.661L.147B,Barausse:2012fy,2013ApJ.762.68D}. Most studies of spin alignment have focused on supermassive black hole mergers, but recently, similar mechanisms have been invoked for stellar-mass black hole mergers~\cite{2013arXiv1302.4442G}. Case (ii) pertains to systems in which the spin angular momentum of one body is much smaller than that of the other. Population synthesis models suggest that typical mass ratios in a binary black hole system may be of the order of ten or so, so that the spin of the more massive object will dominate, and the spin of the smaller body can be neglected. The data analysis of signals usually requires the Fourier transform of the wave signal, and the complexity of the latter varies strongly depending on the particular case considered. For the non-precessing case (i), the time-domain waveforms are especially simple, and they can easily be analytically recast in the frequency-domain using the stationary phase approximation (SPA)~\cite{cutlerflanagan,Droz:1999qx}. In the SPA, the waveform amplitude is assumed to vary much more slowly than the phase, which then allows for a Taylor expansion of the generalized Fourier integral~\cite{Bender}. For the simple precession cases (ii) and (iii), the waveforms have far more structure because of the precession of the orbital plane. Contrary to earlier claims~\cite{Apostolatos:1994mx}, however, such simple-precessing waveforms are not amenable to a formal stationary phase treatment. This is because the mapping between time and frequency becomes multi-valued and the SPA amplitude diverges. Precessing waveforms therefore have to be modeled differently in the frequency-domain.
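To make the SPA discussion above concrete in the simplest non-precessing setting, the sketch below (our own toy example: a linear chirp $h(t)=\cos(\pi\beta t^2)$ with arbitrary parameters, not a PN waveform) compares a discrete Fourier transform with the SPA prediction $|\tilde h(f)|\approx \tfrac{A}{2}\sqrt{2\pi/\ddot\phi}=1/(2\sqrt{\beta})$, which is flat across the band $0<f<\beta T$.

```python
import numpy as np

# Linear chirp: phase φ(t) = π β t², instantaneous frequency f(t) = β t, φ̈ = 2πβ.
beta, T, dt = 10.0, 4.0, 1.0 / 1024
t = np.arange(0.0, T, dt)
h = np.cos(np.pi * beta * t**2)

H = np.fft.rfft(h) * dt                    # continuum-normalized Fourier transform
freqs = np.fft.rfftfreq(t.size, dt)

spa = 1.0 / (2.0 * np.sqrt(beta))          # SPA amplitude, independent of f
band = (freqs > 12.0) & (freqs < 28.0)     # stay away from the band edges
err = float(np.max(np.abs(np.abs(H[band]) - spa)) / spa)
```

The residual ripples (and the large deviations outside the band, near the turning points of the time-frequency map) are the finite-window, non-stationary contributions that the leading-order SPA discards.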
One approach is to take the discrete Fourier transform (DFT) of the time-domain waveform, which requires fine sampling of the latter on the orbital timescale and can lead to a large computational cost. Another approach is to take a spin-aligned (non-precessing) waveform and transform it to a frame that tracks some characteristic quantity (like the total angular momentum vector) at some precession rate~\cite{Schmidt:2010it, O'Shaughnessy:2011fx, Boyle:2011gg, O'Shaughnessy:2012vm, Ochsner:2012dj, Schmidt:2012rh, Ossokine:2013zga, Lundgren:2013jla}. This approach has been validated to some degree with numerical simulations, but the matches are not perfect, as these waveforms miss some precession effects. Without an analytical treatment, one is left with educated guesses as to how to improve on these waveforms. Yet another approach is to use an effective SPA, where precession corrections to the amplitude are completely neglected and phase corrections are only partially modeled using insight from the spin-aligned case~\cite{Lang:2011je,Klein:2009gza}. All of these approaches are unattractive, due either to increased computational cost or to increased systematic mismodeling error. \pagebreak \subsection{Executive Summary} The goals of the present work are the following: \begin{enumerate} \item[(a)] To develop a general formalism to perturbatively solve the time-domain PN evolution equations and obtain analytic time-domain waveforms for precessing quasi-circular inspirals; \item[(b)] To develop a general formalism to perturbatively Fourier transform a precessing quasi-circular inspiral, time-domain waveform. 
\end{enumerate} Goal (a) will be achieved through the technique of {\emph{multiple scale expansions}}~\cite{Bender}, where one solves the evolution equations using the fact that $t_{{\mbox{\tiny orb}}} \ll t_{{\mbox{\tiny prec}}} \ll t_{{\mbox{\tiny rr}}}$, where $t_{{\mbox{\tiny orb}}}$, $t_{{\mbox{\tiny prec}}}$ and $t_{{\mbox{\tiny rr}}}$ are the orbital, precession and radiation-reaction timescales, respectively. Goal (b) will be reached through the technique of {\emph{uniform asymptotic expansions}}~\cite{Bender,PhysRevE.64.026215}, where one recasts the phase modulation induced by precession as a sum of Bessel functions that are then amenable to a formal SPA treatment. Both of these techniques have proven very successful in various areas, from quantum field theory to aerospace engineering. Although these formalisms are general, we exemplify them here by generalizing case (i) to accommodate systems where the spins are only partially aligned with the orbital angular momentum. The accretion torques that drive the spin-orbit alignment are not expected to produce perfect alignment, so it is useful to have analytic waveforms that cover the more realistic, partial alignment case. Expanding the spin precession equations in the misalignment angle leads to a system of coupled harmonic oscillators that diagonalizes to yield two precession frequencies $\omega_+$ and $\omega_-$. Multiple scale analysis is then used to derive an analytic expression for the evolution of these precession frequencies as the black holes spiral together~\cite{Bender}. Spin precession alters the phasing of the waveform and causes the mapping between gravitational wave frequency and time to become multi-valued, rendering the standard SPA inapplicable. The SPA returns singular results at turning points in the time-frequency mapping, resulting in what are known as {\emph{fold}} and {\emph{cusp catastrophes}} in the optical literature~\cite{Berry1980257}. 
We show that the singularities can be cured using uniform asymptotic expansions of the phase in terms of Bessel functions~\cite{Bender,PhysRevE.64.026215}. The final result is a family of fully analytic, approximate, time- and frequency-domain waveforms for spinning, precessing, quasi-circular binaries with moderately misaligned spins. The frequency-domain waveform family can be constructed from the following recipe: \begin{enumerate} \item The waveform is mode-decomposed as \begin{equation} \tilde{h}(f) = \sum_{n\geq0} \;\; \sum_{k \in \mathbb{Z}} \;\; \sum_{m = \{-2,2\}} \tilde{h}_{n,k,m}(f)\,, \label{htildef} \end{equation} \item Each Fourier mode is given by Eq.~\eqref{htildef-nkm} in terms of the carrier phase $\phi_{{\mbox{\tiny C}}}$, the precession phases $\phi_{{\mbox{\tiny P}},\pm}$, the time-frequency mapping $t=t(f)$, the mode-decomposed, time-domain amplitudes ${\cal{A}}_{n,k,m}$, and the additional constant, amplitude, and phase modulation corrections $A_{0,n,k,m}$, $A_{\pm,n,k,m}$, and $\phi_{\pm,n,k,m}$, respectively. \item The carrier phase $\phi_{{\mbox{\tiny C}}}$ and its second time derivative are given in Appendix~\ref{app:freqevol} as a function of the orbital frequency. \item The precession phases and their second time derivatives $\phi_{{\mbox{\tiny P}},\pm}$ are given in Appendix~\ref{app:precphases} as a function of the orbital frequency. \item The time-frequency mapping $t=t(f)$ is given as a function of the orbital frequency in Appendix~\ref{app:freqevol}. \item The mode-decomposed, time-domain amplitudes ${\cal{A}}_{n,k,m}$ are given as a function of the orbital frequency in Appendix~\ref{app:amplitudes}. \item The constant, amplitude, and phase modulation corrections $A_{0,n,k,m}$, $A_{\pm,n,k,m}$, and $\phi_{\pm,n,k,m}$ are given as a function of the orbital frequency in Appendix~\ref{app:amplitudeandphasemodulations}. \item The orbital frequency is given in terms of the Fourier frequency in Eqs.~\eqref{xi-eq}-\eqref{u-eq}. 
\end{enumerate} We show that these new waveforms are accurate (i.e.~faithful) by comparing them to the waveforms obtained by numerically solving for the orbital evolution and discretely Fourier transforming. We find typical matches of $0.99$--$0.999$, maximized only over time and phase of coalescence, when the misalignment angles do not exceed $25^\circ$. The benefit of our approach is twofold. On the one hand, we provide ready-to-use, analytic waveforms that are computationally inexpensive to produce. The computational cost of numerically solving for the Fourier transform of spinning and precessing systems is currently a roadblock in the data analysis of signals for advanced ground-based detectors. On the other hand, an analytical treatment provides insight into the physics of the problem. Our results analytically explain why the waveforms of spinning and precessing binaries are essentially simple harmonic oscillators, with a carrier band and side-bands induced by evolving precession frequencies~\cite{Lundgren:2013jla}. Moreover, our results extend the recently proposed kludge waveforms~\cite{Lundgren:2013jla} to account for amplitude and additional phase corrections induced by precession, which cannot be captured by educated guesses from numerical relativity waveforms. The general formalism presented here opens the door to several other applications. For instance, one can extend case (ii) to include the first-order correction in the ratio of the spins of the two bodies. In this case, our formalism can be viewed as a systematic extension of the simple-precessing treatment of Apostolatos et al.~\cite{Apostolatos:1994mx}, which allows us to compute the time-domain waveforms to higher order in the ratio of the timescales of the problem. Moreover, our method allows for the correct analytical calculation of the Fourier-domain waveforms, which cannot be obtained via a standard SPA treatment, contrary to older claims~\cite{Apostolatos:1994mx}. 
Our formalism can also be applied to other systems, such as inspiraling binary neutron stars, where the magnitudes of both spin angular momenta are much smaller than that of the orbital angular momentum. \subsection{Organization and Conventions} The remainder of this paper is organized as follows: Section~\ref{sec:msa} reviews multiple scale analysis through selected examples, which we use later to solve the equations of precession analytically; Section~\ref{sec:uaa} discusses the SPA and uniform asymptotic expansions applied to the Fourier transform of oscillatory functions, where the mapping between time and frequency is multi-valued and the standard SPA fails; Section~\ref{sec:near-alignment} derives an analytic formula for the evolution of the angular momenta in the case of nearly aligned spins; Section~\ref{sec:spa} uses the results of the previous sections to derive an analytical gravitational waveform valid for nearly aligned spins; Section~\ref{sec:comp} compares our waveform to the results obtained by taking a discrete Fourier transform of the time series. Throughout this article we use geometric units with $G = c = 1$. We also employ the following conventions: \begin{itemize} \item Three-dimensional vectors are written in boldface and unit vectors carry a hat over them, e.g.~$\bm{A} = (A_x, A_y, A_z)$, with norm $|\bm{A}| = A$, and unit vector $\uvec{A} = \bm{A}/A$. \item Three-dimensional matrices are written in mathematical boldface, e.g.~${\mathbb{M}}$ and ${\mathbb{A}}$. \item Total time derivatives are denoted with an overhead dot: $\dot{f} = df/dt$. \item $\omega$ is the angular frequency in a frame fixed to the orbital plane. \item $\bm{L}$ is the Newtonian orbital angular momentum $3$-vector. \item $\bm{S}_A$ is the spin angular momentum $3$-vector for component $A$. \item $m_A$ is the mass of component $A$, and we assume $m_1 \geq m_2$. \item $M = m_1 + m_2$ is the total mass. \item $\mu = m_1 m_2/M$ is the reduced mass. \item $\nu = m_1 m_2/M^2$ is the symmetric mass ratio. 
\item $\chi_{A} \equiv |\bm{S}_{A}|/m_{A}^{2}$ is the dimensionless spin parameter for component $A$. \item $\uvec{N}$ is a unit vector pointing from the observer to the source. \end{itemize} \section{Techniques from Asymptotic Analysis} \subsection{A Primer on Multiple Scale Analysis} \label{sec:msa} Multiple scale analysis is a powerful mathematical formalism that serves as the theoretical foundation of boundary-layer theory and the Wentzel-Kramers-Brillouin (WKB) approximation. In this section, we review some important features of this formalism, as they will be essential to the solution of the precession equations. We will mostly follow and summarize the treatment in Bender and Orszag~\cite{Bender}. Consider the non-linear oscillator ordinary differential equation $\ddot{y} + y + \epsilon y^{3} = 0$, where $y$ is a function of time $t$, with initial conditions $(y(0),\dot{y}(0)) = (1,0)$. If we attempted the series solution \begin{equation} y(t) = \sum_{n=0}^{\infty} \epsilon^{n} y_{n}(t)\,, \end{equation} assuming $\epsilon \ll 1$, and matched coefficients of the same order in $\epsilon$, we would find the solution \begin{equation} y(t) = \cos{t} + \epsilon \left[ \frac{1}{32} \cos{3 t} - \frac{1}{32} \cos{t} - \frac{3}{8} t \sin{t} \right] + {\cal{O}}(\epsilon^{2})\,. \label{series-exp} \end{equation} Clearly, this series approximation diverges as $t \to \infty$, but in fact, it becomes invalid much sooner, when $3\epsilon t /8\sim 1$. As we will show below, however, the exact solution to this differential equation remains perfectly bounded in the $t \to \infty$ limit; a multiple-scale expansion treatment will allow us to find such a solution. Let us then introduce a new variable $\tau = \epsilon t$ that defines a long time scale, as $\tau$ does not become negligible when $t \sim 1/\epsilon$. 
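Both the breakdown of the naive series in Eq.~\eqref{series-exp} and the boundedness of the exact solution are easy to confirm numerically. The following is a minimal sketch (the value of $\epsilon$ and the integration tolerances are arbitrary choices; SciPy assumed); it also checks the bounded, frequency-shifted multiple-scale solution that will be derived below, whose frequency shift $3\epsilon/8$ is a standard result for this oscillator:

```python
import numpy as np
from scipy.integrate import solve_ivp

eps = 0.05

# y'' + y + eps*y**3 = 0 with y(0) = 1, y'(0) = 0, as a first-order system
def rhs(t, y):
    return [y[1], -y[0] - eps * y[0] ** 3]

t_max = 2.0 / eps   # well past the breakdown time t ~ 8/(3*eps)
sol = solve_ivp(rhs, (0.0, t_max), [1.0, 0.0],
                rtol=1e-10, atol=1e-12, dense_output=True)

t = np.linspace(0.0, t_max, 4001)
y_num = sol.sol(t)[0]

# Naive perturbative series: the secular term grows linearly in t
y_naive = (np.cos(t)
           + eps * (np.cos(3 * t) / 32 - np.cos(t) / 32
                    - (3.0 / 8.0) * t * np.sin(t)))

# Multiple scale analysis instead predicts a bounded oscillation whose
# frequency is shifted to 1 + 3*eps/8
y_msa = np.cos(t * (1.0 + 3.0 * eps / 8.0))

print("max |y_num|        =", np.abs(y_num).max())            # bounded
print("naive series error =", np.abs(y_naive - y_num).max())  # large at late t
print("MSA error          =", np.abs(y_msa - y_num).max())    # small for all t
```

The numerical solution stays bounded, the naive series drifts away once $3\epsilon t/8$ is no longer small, and the multiple-scale solution tracks the true solution uniformly.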
In multiple scale analysis, we search for solutions that are functions of all timescales in the problem, in this case $t$ and $\tau$, treated as if they were independent variables. This, of course, is just a mathematical trick, since at the end of the day, we can replace $\tau$ by $\epsilon t$ to obtain a solution that is only $t$-dependent. We then assume a perturbative ansatz of the form \begin{equation} y(t) = \sum_{n=0}^{\infty} \epsilon^{n} Y_{n}(t,\tau)\,. \label{multiple-scale-expansion} \end{equation} Truncating the sum at $n=1$, the non-linear oscillator equation leads to the following two evolution equations \begin{align} \frac{\partial^{2} Y_{0}}{\partial t^{2}} + Y_{0} &= 0\,, \label{0th-order-eq} \\ \frac{\partial^{2} Y_{1}}{\partial t^{2}} + Y_{1} &= -Y_{0}^{3} - 2 \frac{\partial^{2} Y_{0}}{\partial \tau \partial t}\,. \label{1st-order-eq} \end{align} Notice that the differential operator (the terms on the left-hand side of both equations) is always the same, an expected result in perturbation theory. The most general solution to Eq.~\eqref{0th-order-eq} is $Y_{0} = A(\tau) e^{it} + A^{*}(\tau) e^{-it}$, where the star stands for complex conjugation. Inserting this solution into Eq.~\eqref{1st-order-eq}, we find \begin{equation} \frac{\partial^{2} Y_{1}}{\partial t^{2}} + Y_{1} = e^{it} \left[-3 A^{2} A^{*} - 2 i \frac{\partial A}{\partial \tau}\right] - e^{3 i t} A^{3} + {\rm{c.c.}}\,, \label{1st-order-eq-subed} \end{equation} where ${\rm{c.c.}}$ stands for the complex conjugate. The first term on the right-hand side of Eq.~\eqref{1st-order-eq-subed} is a solution to Eq.~\eqref{0th-order-eq}, and thus, it is this term that induces the secular growth. 
We can eliminate this secular growth by requiring that the term inside square brackets on the right-hand side of Eq.~\eqref{1st-order-eq-subed} vanishes, which then leads to a differential equation for $A(\tau)$, whose solution is \begin{equation} A(\tau) = R(0) e^{i \theta(0) + 3 i R^{2}(0) \tau/2}\,, \end{equation} where $R(0)$ and $\theta(0)$ are constants of integration. Using the initial conditions stipulated above and reassembling the full solution, we find \begin{equation} y(t) = \cos{\left[ t \left(1 + \frac{3}{8} \epsilon \right) \right]} + {\cal{O}}(\epsilon)\,. \end{equation} Notice that this solution is bounded for all $t$ and it is much more accurate than the series expansion in Eq.~\eqref{series-exp} for large $t$. An extra degree of complication arises when we consider differential equations whose coefficients depend implicitly on the long timescale. For example, let us consider the oscillator ordinary differential equation \begin{equation} \frac{d^{2}y}{dt^{2}} + \omega^{2}(\tau) y = 0\,, \label{oscillator-WKB-eq} \end{equation} where again $\tau = \epsilon t$, and let us try to solve it with multiple-scale analysis. Imposing the same expansion of the solution as in Eq.~\eqref{multiple-scale-expansion}, Eq.~\eqref{oscillator-WKB-eq} becomes \begin{align} \frac{\partial^{2} Y_{0}}{\partial t^{2}} + \omega^{2}(\tau) Y_{0} &= 0\,, \\ \frac{\partial^{2} Y_{1}}{\partial t^{2}} + \omega^{2}(\tau) Y_{1} &= -2 \frac{\partial^{2} Y_{0}}{\partial t \partial \tau}\,. \label{1st-order-eq-WKB} \end{align} The solution to the zeroth-order equation is now $Y_{0} = A(\tau) e^{i \omega(\tau) t} + A^{*}(\tau) e^{-i \omega(\tau) t}$, which when inserted into Eq.~\eqref{1st-order-eq-WKB} leads to \begin{equation} \frac{\partial^{2} Y_{1}}{\partial t^{2}} + \omega^{2}(\tau) Y_{1} = -2 i e^{i \omega(\tau) t} \left[ \frac{\partial(A \omega)}{\partial \tau} + i t A \omega \frac{\partial \omega}{\partial \tau} \right] + {\rm{c.c.}}\,. 
\label{1st-order-eq-WKB-subed} \end{equation} To eliminate secularity, we would want to set the term inside the square brackets on the right-hand side of Eq.~\eqref{1st-order-eq-WKB-subed} to zero, but due to the explicit appearance of $t$, this would force $A = 0$. Multiple scale analysis fails if the long time scale is proportional to the short time scale {\emph{and}} the frequency of oscillation is not a constant. We can remedy this by changing variables to $T=f(t) = f(\tau/\epsilon)$, which transforms Eq.~\eqref{oscillator-WKB-eq} into \begin{equation} \frac{d^{2}y}{dT^{2}} + \frac{f''(t)}{[f'(t)]^{2}} \frac{dy}{dT} + \frac{\omega^{2}(\epsilon t)}{[f'(t)]^{2}} y = 0\,. \end{equation} We can force the frequency of oscillation to be constant by choosing \begin{equation} T = f(t) = \int^{t} \omega(\epsilon s) ds = \frac{1}{\epsilon} \int^{\tau} \omega(s) ds\,, \end{equation} which then leads to \begin{equation} \frac{d^{2}y}{dT^{2}} + y + \epsilon \frac{\omega'(\tau)}{\omega^{2}(\tau)} \frac{dy}{dT} =0\,. \end{equation} Now, this equation can be solved via multiple-scale analysis. Using the expansion in Eq.~\eqref{multiple-scale-expansion}, the above equation leads to \begin{align} \frac{\partial^{2} Y_{0}}{\partial T^{2}} + Y_{0} &= 0\,, \\ \frac{\partial^{2} Y_{1}}{\partial T^{2}} + Y_{1} &= - \frac{\omega'(\tau)}{\omega^{2}(\tau)} \frac{\partial Y_{0}}{\partial T} - \frac{2}{\omega} \frac{\partial^{2} Y_{0}}{\partial \tau \partial T}\,. \label{1st-order-eq-WKB-new} \end{align} The solution to the zeroth-order equation is the same as that of the non-linear oscillator, and with this, the first-order equation becomes \begin{equation} \frac{\partial^{2} Y_{1}}{\partial T^{2}} + Y_{1} = -i e^{iT} \left[ \frac{2}{\omega} \frac{\partial A}{\partial \tau} + \frac{\omega'(\tau)}{\omega^{2}(\tau)} A \right] + {\rm{c.c.}}\,. 
\label{1st-order-eq-WKB-new-subed} \end{equation} This time we can eliminate the secularly growing terms by requiring that the terms inside square brackets in Eq.~\eqref{1st-order-eq-WKB-new-subed} vanish, which leads to an ordinary differential equation for $A(\tau)$, whose solution is $A(\tau) = A_{0}/\sqrt{\omega(\tau)}$. The full solution is then \begin{equation} y(t) = \frac{A_0}{\sqrt{\omega(\epsilon t)}} e^{\frac{i}{\epsilon} \int^{\epsilon t} \omega(s) ds} + \rm{c.c.}\,, \end{equation} which is the same as what one would have obtained through the WKB physical-optics approximation. We see then that multiple-scale analysis is a generic and powerful technique that, in certain cases, allows us to recover the WKB approximation, among others. Of course, the above examples employed only two scales, but multiple scale analysis is in principle valid for an arbitrary number of scales, provided they satisfy a certain scale hierarchy. \subsection{The Stationary Phase Approximation and Uniform Asymptotic Expansions} \label{sec:uaa} The leading-order gravitational wave signal from a quasi-circular binary inspiral can be expressed in the form \begin{equation}\label{ht} h(t) = A(t) e^{-i\Phi(t)}\,, \end{equation} where the amplitude $A(t)$ and the phase $\Phi(t)$ are slowly evolving functions of time. The full signal is given by a sum of such terms that form a harmonic series in the orbital frequency. The function $h(t)$ oscillates on the orbital timescale, with an amplitude and frequency that evolve on the slower spin-precession and radiation-reaction timescales. In gravitational wave data analysis, quantities of interest (such as the likelihood function) are usually calculated in the frequency domain, where the noise-autocorrelation function is assumed to take a simple form. 
Thus, we are faced with the task of Fourier transforming the waveform in Eq.~(\ref{ht}): \begin{equation}\label{hf} \tilde{h}(f) = \int \, A(t) e^{-i\Phi(t)} e^{2\pi i f t} \, dt = \int \, A(t) e^{i\phi(f,t)} \, dt\, . \end{equation} A direct numerical implementation using a fast Fourier transform algorithm is possible, but the computational cost can be high since the waveform needs to be sampled at a cadence set by the orbital period. The quadratic SPA is the standard analytic approach to solve Eq.~\eqref{hf}. At a given frequency $f$, the integral is dominated by the contributions where the phase $\phi(f,t)$ is a slowly-varying function of time. Away from this region, the integrand oscillates rapidly and contributes little. Defining the stationary phase points implicitly as the times $t_{\mbox{\tiny SPA}}$ where $\dot\phi(f,t_{\mbox{\tiny SPA}}) = 0$, or equivalently, \begin{equation}\label{spp} \dot\Phi(t_{\mbox{\tiny SPA}}) = 2 \pi f \,, \end{equation} the Fourier phase can be expanded as \begin{align}\label{spx} \phi(f,t) = \phi(f,t_{\mbox{\tiny SPA}}) + \frac{1}{2}\ddot\phi(f,t_{\mbox{\tiny SPA}}) (t-t_{\mbox{\tiny SPA}})^2 + \ldots \end{align} Given such an expansion, one can then analytically solve the generalized Fourier integral in Eq.~\eqref{hf} through a change of variables~\cite{Bender} \begin{align} \tilde{h}_{{\rm SPA}_2}(f) &= \left[ \frac{2}{\vert \ddot \Phi(t_{\mbox{\tiny SPA}}) \vert}\right]^{1/2} A(t_{\mbox{\tiny SPA}}) \; \Gamma(1/2) \nonumber \\ & e^{i[2\pi f t_{\mbox{\tiny SPA}} - \Phi(t_{\mbox{\tiny SPA}})-\sigma \pi/4]} \,, \label{sspa} \end{align} where $\sigma = {\rm sign} (\ddot \Phi(t_{\mbox{\tiny SPA}}))$, $\Gamma(\cdot)$ is the Gamma function and $t_{\mbox{\tiny SPA}}(f)$ is understood as a function of frequency. Several assumptions go into the solution of Eq.~\eqref{sspa}, which have been implicitly taken for granted in gravitational wave modeling. 
First, one assumes that there is a unique stationary phase time $t_{\mbox{\tiny SPA}}$ for a given frequency $f$, so that the time-frequency mapping $t_{\mbox{\tiny SPA}}(f)$ is single valued for each harmonic. Second, one assumes that the expansion for the phase about the stationary point in Eq.~(\ref{spx}) can be truncated at quadratic order, and that the amplitude $A(t)$ can be replaced by the constant value $A(t_{\mbox{\tiny SPA}})$. When the mapping in Eq.~(\ref{spp}) between frequency and time yields multiple stationary points, the full solution is given by summing up the contributions of the form (\ref{sspa}) for all the stationary points. But when this mapping is not single valued, the SPA can lead to divergent results, i.e.~$\ddot \Phi(t_{\mbox{\tiny SPA}})$ can vanish and the amplitude can diverge. The goal of {\em uniform asymptotic expansions} is to replace non-uniform expansions, like that of Eq.~\eqref{sspa}, by a new expansion that remains valid in a domain containing the singular point. A standard example is the Airy function uniformization of a fold catastrophe~\cite{Berry1980257}, which occurs when two stationary points coalesce and $\ddot\phi(f,t_{\mbox{\tiny SPA}})=\ddot \Phi(t_{\mbox{\tiny SPA}}) =0$. At these catastrophe points, the stationary point is defined by the last equation and the Taylor expansion of the phase in Eq.~(\ref{spx}) has to be continued to higher order. 
At cubic order, the integral in Eq.~(\ref{hf}) yields an Airy function, and the cubic SPA is \begin{align} \tilde{h}_{{\rm SPA}_3}(f) &= \left[\frac{2}{\vert \dddot \Phi(t_{\mbox{\tiny SPA}})\vert} \right]^{1/3} A(t_{\mbox{\tiny SPA}}) \nonumber \\ &2 \pi {\rm Ai}\left\{-\sigma [2 \pi f - \dot{\Phi}(t_{\mbox{\tiny SPA}})] \left[\frac{2}{\vert \dddot \Phi(t_{\mbox{\tiny SPA}})\vert} \right]^{1/3} \right\} \nonumber \\ & e^{i[2\pi f t_{\mbox{\tiny SPA}} - \Phi(t_{\mbox{\tiny SPA}})]} \, , \label{sspa3} \end{align} where $\sigma = {\rm sign} [\dddot \Phi(t_{\mbox{\tiny SPA}})]$, and the amplitude and phase are evaluated at the singular point $t_{\mbox{\tiny SPA}}$. The expression in Eq.~(\ref{sspa3}) matches the numerical Fourier transform for frequencies $f$ in the neighborhood of the critical point $f=\dot{\Phi}(t_{\mbox{\tiny SPA}})/2\pi$, where the phase is well approximated by a cubic Taylor expansion. In many instances, there is an overlap region where both approximations [Eqs.~(\ref{sspa}) and~(\ref{sspa3})] are valid simultaneously. In such cases, it is possible to construct a complete SPA waveform from a piecewise collection of the quadratic and cubic SPAs. A completely different uniformization is required, however, for situations where the singular points become so dense that the Airy function and related techniques break down. A good example is when the phase has an oscillatory component, which is exactly the situation for precessing black hole binaries. One solution is to re-express the original waveform as the sum of simpler waveforms that each have a well-behaved SPA~\cite{PhysRevE.64.026215}. 
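The re-expression rests on the Jacobi--Anger identity, $e^{-i\alpha\cos\beta} = \sum_{n} (-i)^n J_n(\alpha)\, e^{-in\beta}$, whose rapid convergence for moderate $\alpha$ is what makes the approach practical. A quick numerical check of the truncated sum (the value of $\alpha$ and the truncation order are arbitrary choices; SciPy assumed):

```python
import numpy as np
from scipy.special import jv  # Bessel function of the first kind, J_n

alpha = 0.4                   # modest precession-induced phase amplitude
beta = np.linspace(0.0, 2.0 * np.pi, 201)

exact = np.exp(-1j * alpha * np.cos(beta))

# Jacobi-Anger expansion truncated at |n| <= 5; since J_n(0.4) ~ (0.2)^n / n!,
# the neglected terms are of order 1e-7
N = 5
approx = sum((-1j) ** n * jv(n, alpha) * np.exp(-1j * n * beta)
             for n in range(-N, N + 1))

print("max truncation error:", np.abs(approx - exact).max())
```

For phase-modulation amplitudes of this size, keeping $\vert n\vert \leq 5$ already reproduces the oscillatory factor to better than one part in $10^{6}$.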
For example, if the GW phase can be written as the sum of a carrier phase and an oscillatory component: \begin{equation} \Phi(t) = \Phi_{\mbox{\tiny C}}(t) + \alpha(t) \cos \beta(t) \label{eq:correctionsimple} \end{equation} where $\Phi_{\mbox{\tiny C}}(t)$, $\alpha(t)$ and $\beta(t)$ are monotonic functions of time, then \begin{equation}\label{htx} h(t) = A(t) e^{-i\Phi_{\mbox{\tiny C}}(t)} \sum_{n=-\infty}^{\infty} (-i)^n J_n(\alpha(t)) e^{-i n \beta(t)} \end{equation} and \begin{eqnarray}\label{SPAx} && \tilde{h}_{\rm SPA}(f) = \sum_{n=-\infty}^{\infty} \left[ \frac{2 \pi}{\vert \ddot\Phi_C(t_n)+ n \ddot \beta(t_n) \vert}\right]^{1/2}A(t_n) \nonumber \\ && \times (-i)^n J_n[\alpha(t_n)] e^{i[2 \pi f t_n - \Phi_C(t_n)- n \beta(t_n)-\sigma \pi/4]} \,, \end{eqnarray} where $\sigma = {\rm sign} [\ddot \Phi_C(t_n)+ n \ddot \beta(t_n)]$. Notice that there is now a different stationary point $t_n$ for each harmonic $n$, defined by the stationary phase condition $2 \pi f = \dot\Phi_C(t_n)+ n \dot \beta(t_n)$. In Eq.~(\ref{SPAx}), we have assumed that the individual contributions are non-singular: $\ddot\Phi_C(t_n)+ n \ddot \beta(t_n)\neq 0$. If a singularity does occur in any of the terms, then the standard SPA for this term can be replaced by the Airy uniformization of Eq.~(\ref{sspa3}). The rapid decay of the Bessel functions with increasing order $\vert n\vert$ means that just a few terms are needed in the sum of Eq.~(\ref{SPAx}) to obtain a good approximation to the full Fourier transform. To illustrate the Bessel uniformization approach, let us consider a simple toy model that shares many of the features of the waveforms produced by spinning black hole binaries, with phase given by \begin{eqnarray} &&\Phi(t) = 2 \pi \left[ f_0 (t/T) + \frac{1}{2} \dot f_0( t/T)^2 \right. 
\nonumber \\ && \quad \left.+ \alpha_0 (t/T) \cos(\omega_0 (t/T)+ \frac{1}{2} \dot\omega_0 (t/T)^2)\right] \, , \end{eqnarray} and an amplitude given by a Tukey tapered cosine window \begin{equation} A(t) = \begin{cases} \frac{1}{2}\left[1+\cos\left(\pi\left(\frac{t}{T\kappa}-1\right)\right)\right] , & t \leq \kappa T \\ 1, & \kappa T < t < (1- \kappa)T \\ \frac{1}{2}\left[1+\cos\left(\pi\left(\frac{t-T}{T\kappa}+1\right)\right)\right] , & t \geq (1- \kappa)T \, . \end{cases} \end{equation} The Tukey window helps suppress spectral leakage in the numerical Fourier transform. Fig.~\ref{fig:SPA} shows the amplitude of the Fourier transform computed three different ways: (i) using a numerical DFT; (ii) using the quadratic SPA; and (iii) using the Bessel function uniformization of the SPA summing up to $\vert n \vert = 5$. The parameters chosen were $\{T=1, f_0 = 300, \dot f_0 = 900, \omega_0 = 30, \dot\omega_0 = 30, \alpha_0 = 0.4, \kappa = 0.2\}$. The uniform asymptotic expansion provides a near perfect match to the numerical Fourier transform, while the quadratic SPA diverges at turning points of the frequency. \begin{figure}[ht] \begin{center} \includegraphics[width=\columnwidth]{toymodel.pdf} \caption{\label{fig:SPA} The amplitude of the Fourier transform for the toy model waveform computed using a numerical fast Fourier transform (solid, black line); the standard quadratic SPA (dotted, red line); and the Bessel function uniformization of the SPA (dashed, blue line). The numerical transform and the uniform asymptotic expansion are indistinguishable, while the quadratic SPA diverges at turning points of the frequency. } \end{center} \end{figure} \section{Spin and Orbital Angular Momentum} \label{sec:near-alignment} In this section, we explore the evolution equations of the spin and orbital angular momentum vectors using techniques from multiple-scale analysis. 
We first develop the formalism of multiple-scale analysis as applicable to inspiraling binaries, and then apply it to systems where the spin angular momentum vectors are nearly aligned with the orbital angular momentum vector. Physically, this corresponds to the inspiral of binary black holes or binary neutron stars in a gas-rich environment, which tends to align the spin and orbital angular momenta. Such a system allows us to make several approximations that enable a perturbative analytic solution. First, we expand all quantities in the misalignment angle \begin{align} \bm{K} = \sum_{n\geq0} \bm{K}^{(n)}(t) \; \epsilon^{n}\, \end{align} for any vector $\bm{K}$, where $\bm{K}^{(n)}(t)$ are undetermined vector functions and $\epsilon \ll 1$ is the misalignment order-counting parameter. Second, we re-expand all quantities in a separation of timescales \begin{align} \bm{K} = \sum_{n,m\geq0} \bm{K}^{(n,m)}(t) \; \epsilon^{n} \sigma^{m}\,, \end{align} where $\bm{K}^{(n,m)}(t)$ are undetermined vector functions and we have defined the precession order-counting parameter \begin{equation} \sigma \equiv \frac{t_{{\mbox{\tiny prec}}}}{t_{{\mbox{\tiny rr}}}} \ll 1\,, \end{equation} with $t_{{\mbox{\tiny prec}}}$ and $t_{{\mbox{\tiny rr}}}$ the precession and radiation-reaction timescales. This last expansion is justified for binaries in the PN (slow-motion/weak-gravity) regime, where all three characteristic timescales of the problem separate. The precession order-counting parameter $\sigma$ and the PN expansion parameter $1/c$ are not independent, but rather ${\cal{O}}(\sigma) = {\cal{O}}(c^{-3})$. Henceforth, a term of ${\cal{O}}(c^{-2N})$ will be said to be of $N$PN order. 
\subsection{Precession Equations} The spin and orbital angular momentum precession equations for a compact binary system in a quasi-circular inspiral in the center of mass frame can be written as\footnote{These equations correct an error in Eq.~$(2.4)$ of~\cite{Kidder:1995zr}, but they are consistent with Eq.~$(4.17)$ in that paper, as well as with equations in~\cite{Apostolatos:1994mx} and~\cite{Buonanno:2002ft}.} \begin{align} \dvec{S}_1 &= \Omega_{LS_{1}} \uvec{L} \times \bm{S}_1 + \Omega_{S_{1}S_{2}} \bm{S}_2 \times \bm{S}_1, \\ \dvec{S}_2 &= \Omega_{LS_{2}} \uvec{L} \times \bm{S}_2 + \Omega_{S_{1}S_{2}} \bm{S}_1 \times \bm{S}_2, \\ \duvec{L} &= \frac{\Omega_{LS_{1}}}{L} \bm{S}_1 \times \uvec{L} + \frac{\Omega_{LS_{2}}}{L} \bm{S}_2 \times \uvec{L}\,, \label{eq:Lhatdot} \\ \dot{L} &= - \frac{1}{3} \frac{a_0}{M} (M\omega)^{8/3} \left[ 1 + \sum_{n=2}^N a_n (M\omega)^{n/3} \right] L , \label{eq:Ldot} \end{align} where we have defined \begin{align} \Omega_{LS_{1}} &= \frac{\omega^2}{M} \left[\left( 2 + \frac{3m_2}{2m_1} \right) L - \frac{3}{2} \left( \uvec{L} \cdot \bm{S}_2 \right) \right]\,, \\ \Omega_{LS_{2}} &= \frac{\omega^2}{M} \left[\left( 2 + \frac{3m_1}{2m_2} \right) L - \frac{3}{2} \left( \uvec{L} \cdot \bm{S}_1 \right) \right]\,, \\ \Omega_{S_{1}S_{2}} &= \frac{1}{2} \frac{\omega^2}{M}\,, \end{align} and where $m_{1,2}$ are the component masses, $\omega = M^2(\mu/L)^3$ is the orbital frequency of the system, with $L$ the magnitude of the {\emph{Newtonian}} orbital angular momentum $\bm{L}$. The spin angular momentum of the $A$th binary component is $\bm{S}_{A}$, while $\hat{\bm{L}} = \bm{L}/L$ is an orbital angular momentum unit vector. All cross- and dot-products represent the standard (Euclidean) three-dimensional operations of vector calculus. The quantities $a_i$ are functions of the symmetric mass ratio as well as $(\bm{S}_{A} \cdot \hat{\bm{L}})$ and $(\bm{S}_{A} \cdot \bm{S}_{B})$, and they are explicitly given in Appendix~\ref{app:freqevol}. 
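As a sanity check, these equations can be integrated numerically with the radiation-reaction term of Eq.~\eqref{eq:Ldot} switched off, so that $L$ is constant; the conservative precession must then preserve the spin norms and the total angular momentum $\bm{J} = L \uvec{L} + \bm{S}_1 + \bm{S}_2$. A minimal sketch in units $G = c = M = 1$ (the masses, spins and orbital frequency are arbitrary choices; SciPy assumed):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Leading-order precession equations with the radiation-reaction term
# switched off, so L and omega are constant and J = L*Lhat + S1 + S2
# is exactly conserved.  Units G = c = M = 1.
m1, m2 = 0.75, 0.25
mu = m1 * m2                          # reduced mass (total mass M = 1)
chi1, chi2 = 0.9, 0.8
S1mag, S2mag = chi1 * m1**2, chi2 * m2**2

omega = 0.01                          # fixed orbital frequency (no inspiral)
L = mu * omega**(-1.0 / 3.0)          # Newtonian L = mu M^{2/3} omega^{-1/3}

def rhs(t, y):
    Lhat, S1, S2 = y[0:3], y[3:6], y[6:9]
    O_LS1 = omega**2 * ((2.0 + 1.5 * m2 / m1) * L - 1.5 * np.dot(Lhat, S2))
    O_LS2 = omega**2 * ((2.0 + 1.5 * m1 / m2) * L - 1.5 * np.dot(Lhat, S1))
    O_SS = 0.5 * omega**2
    dS1 = O_LS1 * np.cross(Lhat, S1) + O_SS * np.cross(S2, S1)
    dS2 = O_LS2 * np.cross(Lhat, S2) + O_SS * np.cross(S1, S2)
    dLh = (O_LS1 / L) * np.cross(S1, Lhat) + (O_LS2 / L) * np.cross(S2, Lhat)
    return np.concatenate([dLh, dS1, dS2])

# Misaligned initial spins (tilt angles of 0.3 and 0.5 rad)
y0 = np.concatenate([
    [0.0, 0.0, 1.0],
    S1mag * np.array([np.sin(0.3), 0.0, np.cos(0.3)]),
    S2mag * np.array([0.0, np.sin(0.5), np.cos(0.5)]),
])
sol = solve_ivp(rhs, (0.0, 1.0e5), y0, rtol=1e-11, atol=1e-13)

Lhat, S1, S2 = sol.y[0:3], sol.y[3:6], sol.y[6:9]
J = L * Lhat + S1 + S2
print("spin-norm drift:", np.abs(np.linalg.norm(S1, axis=0) - S1mag).max())
print("|J| drift      :", np.ptp(np.linalg.norm(J, axis=0)))
```

Over several precession cycles, the spin norms, the norm of $\uvec{L}$, and $|\bm{J}|$ are all conserved to within the integrator tolerance, as the structure of the equations demands.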
The evolution equations presented above are in principle valid to different PN orders. The evolution equation for $L$ (or equivalently for $\omega$) is valid to $N/2$ PN order, where here we choose $N = 5$, i.e.~we model the evolution of $L$ to $2.5$PN order. The evolution equations for $\bm{S}_{A}$ and $\hat{\bm{L}}$, however, are only valid to first-subleading PN order, i.e.~leading order in spin-orbit (1.5PN) and spin-spin (2PN) coupling. Since spin enters at $1.5$PN order in the phase, the $2.5$PN order corrections that we are leaving out in the evolution of $\bm{S}_{A}$ would contribute at $4$PN order. We are then allowed to use Newtonian expressions to map between $L$ and $\omega$, and the norm of $\bm{S}_{A}$ is conserved. Of course, one could extend the analysis in this paper by adding corrections to $\Omega_{LS_{1,2}}$ and $\Omega_{S_{1}S_{2}}$, thus including sub-leading PN corrections to the evolution of $\bm{S}_{A}$. But to be consistent in PN order counting, one would also have to include 4PN corrections to the evolution of $L$, which are currently unknown. Let us now pick a frame and implement the near-alignment approximation. We choose $\uvec{z} = \uvec{J}(t=0)$, where $\bm{J} = \bm{L} + \bm{S}_1 + \bm{S}_2$ is the total angular momentum. Since $\bm{J}$ is not constant, we have to specify it at a given time for the frame to be inertial. In this frame, we can write \begin{align} \bm{K} &= K_z \uvec{z} + K_x \uvec{x} + K_y \uvec{y} , \end{align} for any $\bm{K} = \bm{L}$, $\bm{S}_1$, or $\bm{S}_2$. 
In the near-alignment approximation, the components of $\bm{K}$ in this frame can then be expanded as \begin{align} K_z &= \sum_{n \geq 0} K_{z}^{(2n)}(t) \;\epsilon^{2n}, \label{eq:defKz}\\ K_x &= \sum_{n \geq 0} K_{x}^{(2n+1)}(t) \; \epsilon^{2n+1}, \label{eq:defKx}\\ K_y &= \sum_{n \geq 0} K_{y}^{(2n+1)}(t) \; \epsilon^{2n+1}.\label{eq:defKy} \end{align} The structure of the equations of motion ensures that odd powers of $\epsilon$ in $K_z$, as well as even powers of $\epsilon$ in $K_j$, $j = x$ or $y$, vanish. In this paper, we will take these sums only up to $\mathcal{O}(\epsilon)$, but extending these results to higher-order is straightforward. Before proceeding, let us make an important comment on the near-alignment approximation. Consider a binary system at early times, where the misalignment angle is $\epsilon_{0}$. As the binary evolves, the norm of the orbital angular momentum $L$ shrinks by radiation-reaction. But since the norm of the spin angular momentum $S_{A}$ is conserved, the misalignment angle will grow. This implies that our perturbation parameter is not a constant, but rather an increasing function of time. Thus, just as the PN approximation is expected to break down in the late stages of inspiral because the orbital velocity increases, the near-alignment approximation is also expected to break down as $\epsilon$ increases and the series becomes asymptotic. \subsection{Analysis to ${\cal{O}}(\epsilon^{0})$} Let us first focus on the precession equations to leading-order in $\epsilon$. The $x$- and $y$-components of these equations are trivially satisfied. The $z$-component of the spin angular momentum equation requires that $S_{A,z}^{(0)}$ be a constant, which can be obtained by demanding that $|\bm{S}_{A}| = m_{A}^{2} \chi_{A}$, and thus \begin{align} |\bm{S}_A| = S_{A,z}^{(0)} + {\cal{O}}(\epsilon^{2}) = m_A^2 \chi_A\,. 
\end{align} We can use this property to solve for $S_{A,z}^{(2n)}$ at all orders, given the lower-order solutions for $S_{A,j}^{(2n-1)}$, $j=x$ or $y$. The $z$-component of the orbital angular momentum evolution equation requires a bit more work. First, let us define the quantity \begin{equation} \xi_0 \equiv \frac{\mu M}{L_z^{(0)}} = {\cal{O}}(c^{-1})\,, \end{equation} as a new PN quantity. This parameter is exactly the square root of the frequency parameter often used in the literature, $x = (M\omega)^{2/3} = \mu^2 M^2 L^{-2}$, only when the spins and orbital angular momentum are aligned. Using this parameter, we can rewrite the ${\cal{O}}(\epsilon^{0})$ part of the evolution equation for the $z$-component of orbital angular momentum as \begin{align} \dot{\xi}_0 = \frac{1}{3} \frac{a_0}{M} \xi_0^9 \left( 1 + \sum_{n=2}^N a_n \xi_0^n \right), \label{eq:xi0dot} \end{align} where the spin-dependent parts of the couplings were evaluated at leading-order in $\epsilon$: \begin{align} \uvec{L}^{(0)} &= (0,0,1), \\ \bm{S}_A^{(0)} &= \left(0,0,m_A^2 \chi_A \right) . \end{align} Since all coefficients are constants, we can directly integrate Eq.~\eqref{eq:xi0dot} by Taylor expanding $(\dot{\xi}_0)^{-1}$. After inverting the PN series and integrating, we obtain \begin{align} \xi_0(t) &= \zeta \bigg[ 1 - \frac{a_2}{6} \zeta^2 - \frac{a_3^{(0)}}{5} \zeta^3 \nonumber\\ &+ \frac{5a_2^2 -6a_4^{(0)}}{24} \zeta^4 + \frac{9 a_2 a_3^{(0)} - 5 a_5^{(0)}}{15} \zeta^5 + \mathcal{O}\left(c^{-6}\right) \bigg]\,, \label{eq:xi0ofzeta} \end{align} with \begin{align} \zeta &= \left[ \frac{3M}{8 a_0 (t_{\mbox{\tiny coal}} - t)} \right]^{1/8}\,. \label{eq:zeta} \end{align} The notation $a_{i}^{(n)}$ means the part of $a_{i}$ at ${\cal{O}}(\epsilon^{n})$, where recall that $a_{2}$ is a 1PN correction that is spin-independent. We have here kept terms up to $2.5$PN order, while the spin-dependent couplings are included to leading-order in $\epsilon$. 
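As an illustrative cross-check (not part of the derivation), the inverted series of Eq.~\eqref{eq:xi0ofzeta} can be compared against a direct numerical integration of Eq.~\eqref{eq:xi0dot}, here keeping only the 1PN coefficient $a_2$ (i.e.~$a_3 = a_4 = a_5 = 0$); the values of $a_0$, $a_2$, and $t_{\mbox{\tiny coal}}$ are arbitrary test inputs:

```python
import numpy as np

# Compare the inverted series xi0(zeta) against RK4 integration of
# d(xi0)/dt = (a0/3M) xi0^9 (1 + a2 xi0^2).  a0, a2 and t_coal are
# arbitrary test inputs; only the a2 (1PN) correction is kept.
M, a0, a2 = 1.0, 12.8, -1.0
t_coal = 1000.0

def zeta(t):
    return (3 * M / (8 * a0 * (t_coal - t))) ** 0.125

def xi0_series(t):
    z = zeta(t)
    return z * (1 - a2 / 6 * z**2 + 5 * a2**2 / 24 * z**4)

def f(x):
    return (a0 / (3 * M)) * x**9 * (1 + a2 * x**2)

# RK4 from t = 0 to t = 800, seeded with the series value at t = 0
x = xi0_series(0.0)
h, n = 0.01, 80000
for _ in range(n):
    k1 = f(x); k2 = f(x + 0.5*h*k1); k3 = f(x + 0.5*h*k2); k4 = f(x + h*k3)
    x += (h / 6) * (k1 + 2*k2 + 2*k3 + k4)

rel_err = abs(x - xi0_series(800.0)) / xi0_series(800.0)
```

The residual disagreement is of the order of the first neglected term in the $\zeta$ series, as expected for an asymptotic inversion.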
For convenience, let us also introduce a new quantity $\xi$, which coincides with $\xi_0$ at ${\cal{O}}(\epsilon^{0})$ but differs at higher orders. This quantity is defined via \begin{align} \xi(t) &= \zeta \bigg[ 1 - \frac{a_2}{6} \zeta^2 - \frac{a_3(t=0)}{5} \zeta^3 + \frac{5a_2^2 -6a_4(t=0)}{24} \zeta^4\nonumber\\ & + \frac{9 a_2 a_3(t=0) - 5 a_5(t=0)}{15} \zeta^5 + \mathcal{O}\left(c^{-6}\right) \bigg], \label{eq:xiofzeta} \end{align} which extends Eq.~\eqref{eq:xi0ofzeta} by using the full $a_{i}$ coefficients, evaluated at $t=0$, instead of their ${\cal{O}}(\epsilon^{0})$ parts $a_{i}^{(0)}$. Therefore, the difference between $\xi$ and $\xi_0$ is of $\mathcal{O}\left(\epsilon^2\right)$, which implies \begin{equation} L_z^{(0)} = \frac{\mu M}{\xi_0} = \frac{\mu M}{\xi} + \mathcal{O}\left(\epsilon^2\right). \end{equation} With such a definition, $\xi$ coincides with $x^{1/2} = (M\omega)^{1/3}$ when the scalar products between $\uvec{L}$, $\bm{S}_1$, and $\bm{S}_2$ are time-independent. That is, in the near-alignment approximation, we can write \begin{align} \xi &= (M\omega)^{1/3} + \mathcal{O}(\epsilon^2). \end{align} Henceforth, we will use $\xi$ as our independent variable. \subsection{Analysis to ${\cal{O}}(\epsilon^{1})$} Let us now look at the evolution equations to first-order in $\epsilon$. 
We can write them in matrix notation as \begin{align} \frac{d \bm{W}_1^{(1)}}{dt} &= -{\mathbb{M}} \, \bm{W}_2^{(1)} - a {\mathbb{A}} \, \bm{W}_1^{(1)}, \nonumber \\ \frac{d \bm{W}_2^{(1)}}{dt} &= {\mathbb{M}} \, \bm{W}_1^{(1)} - a {\mathbb{A}} \, \bm{W}_2^{(1)}, \label{eq:matrix} \end{align} where we have defined the vectors \begin{align} \bm{W}_1^{(1)} &= \left( \begin{array}{c} L_x^{(1)}\\ S_{1,x}^{(1)}\\ S_{2,x}^{(1)} \end{array} \right), \qquad \bm{W}_2^{(1)} = \left( \begin{array}{c} L_y^{(1)}\\ S_{1,y}^{(1)}\\ S_{2,y}^{(1)} \end{array} \right), \end{align} and the matrices \begin{align} {\mathbb{M}} &= \left( \begin{array}{c c c} (b+c) & -d & -e \\ -b & (d+f) & -g \\ -c & -f & (e+g) \end{array} \right), \quad {\mathbb{A}} = \left( \begin{array}{c c c} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{array} \right)\,, \end{align} with \begin{align} a &= \frac{1}{3} \frac{a_0}{M} \xi(t)^8 \left( 1 + \sum_{n=2}^N a_n \xi(t)^n \right), \label{eq:a} \\ b &= \frac{\xi(t)^6}{M} \bigg[ \left( 2 + \frac{3m_2}{2m_1} \right) \frac{m_1^2}{M^2} \chi_1 - \frac{3}{2} \xi(t) \nu \chi_1 \chi_2 \bigg] , \\ c &= \frac{\xi(t)^6}{M} \bigg[ \left( 2 + \frac{3m_1}{2m_2} \right) \frac{m_2^2}{M^2} \chi_2 - \frac{3}{2} \xi(t) \nu \chi_1 \chi_2 \bigg] , \\ d &= \frac{\xi(t)^5}{M} \bigg[ \left( 2 + \frac{3m_2}{2m_1} \right) \nu - \frac{3}{2} \xi(t) \frac{m_2^2}{M^2} \chi_2 \bigg] , \\ e &= \frac{\xi(t)^5}{M} \bigg[ \left( 2 + \frac{3m_1}{2m_2} \right) \nu - \frac{3}{2} \xi(t) \frac{m_1^2}{M^2} \chi_1 \bigg] , \\ f &= \frac{1}{2} \frac{\xi(t)^6}{M} \frac{m_2^2}{M^2} \chi_2 , \qquad g = \frac{1}{2} \frac{\xi(t)^6}{M} \frac{m_1^2}{M^2} \chi_1. \end{align} The solution to the system in Eq.~\eqref{eq:matrix} can be obtained via a standard linear algebra approach. 
First, we diagonalize $\mathbb{M}$ via a similarity transformation with a matrix ${\mathbb{R}}$, ${\mathbb{R}}^{-1} {\mathbb{M}} {\mathbb{R}} = {\mathbb{D}}$, where ${\mathbb{D}}$ is the diagonal matrix \begin{align} {\mathbb{D}} &= \left(\begin{array}{c c c} 0 & 0 & 0\\ 0 & \omega_{{\mbox{\tiny P}},+} & 0\\ 0 & 0 & \omega_{{\mbox{\tiny P}},-} \end{array} \right)\,, \end{align} with eigenvalues \begin{multline} \omega_{{\mbox{\tiny P}},\pm} = \frac{1}{2} \Big[ b+c+d+e+f+g \\ \pm \sqrt{(b-c+d-e+f-g)^2 + 4(c-f)(b-g)} \Big]. \end{multline} The transformation matrix ${\mathbb{R}}$ is given explicitly in Appendix~\ref{app:RandRm1}. The first few terms of the PN expansion of $\omega_{{\mbox{\tiny P}},\pm}$ are \begin{align} \omega_{{\mbox{\tiny P}},+} &= \frac{1}{M}\left( 2 \nu + \frac{3}{2} \frac{m_1^2}{M^2} \right) \xi^5 \nonumber\\ &+ \frac{1}{M}\left[ \left( 2 \frac{m_2^2}{M^2} + \frac{3}{2} \nu \right) \chi_2 - \frac{m_1^2}{M^2} \chi_1 \right] \xi^6 + \mathcal{O} \left( c^{-7} \right), \\ \omega_{{\mbox{\tiny P}},-} &= \frac{1}{M}\left( 2 \nu + \frac{3}{2} \frac{m_2^2}{M^2} \right) \xi^5 \nonumber\\ &+ \frac{1}{M}\left[ \left( 2 \frac{m_1^2}{M^2} + \frac{3}{2} \nu \right) \chi_1 - \frac{m_2^2}{M^2} \chi_2 \right] \xi^6 + \mathcal{O} \left( c^{-7} \right). 
\end{align} With this at hand, Eq.~\eqref{eq:matrix} can be transformed into \begin{align} \frac{d\bm{Q}_1^{(1)}}{dt} &= - {\mathbb{D}} \, \bm{Q}_2^{(1)} - {\mathbb{E}} \, \bm{Q}_1^{(1)}, \label{normal-mode-eq-1}\\ \frac{d\bm{Q}_2^{(1)}}{dt} &= {\mathbb{D}} \, \bm{Q}_1^{(1)} - {\mathbb{E}} \, \bm{Q}_2^{(1)}, \label{normal-mode-eq} \end{align} where we have defined the transformed $\bm{W}_{i}$, i.e.~the eigenvectors or quasi-normal modes, via \begin{align} \bm{Q}_j^{(1)} &\equiv {\mathbb{R}}^{-1} \bm{W}_j^{(1)} = \left(\begin{array}{c} Q_{0,j}^{(1)}\\ Q_{+,j}^{(1)}\\ Q_{-,j}^{(1)} \end{array} \right)\,, \label{eq:1st-o-system} \end{align} with $j = 1$ or $2$, and where the remainder matrix is \begin{align} {\mathbb{E}} &= {\mathbb{R}}^{-1} \left(a {\mathbb{A}} {\mathbb{R}} + \frac{d{\mathbb{R}}}{dt} \right)\,. \end{align} The second term in the above equation is necessary because the rotation matrix ${\mathbb{R}}$ is not constant. We choose to normalize the eigenvectors such that \begin{align} L^{(1)}_x &= Q^{(1)}_{0,1} + Q^{(1)}_{+,1} + Q^{(1)}_{-,1}, \\ L^{(1)}_y &= Q^{(1)}_{0,2} + Q^{(1)}_{+,2} + Q^{(1)}_{-,2}\,, \end{align} as explained in Appendix~\ref{app:RandRm1}. We decouple the system in Eq.~\eqref{normal-mode-eq} by taking an extra time-derivative to obtain \begin{align} \frac{d^2\bm{Q}_1^{(1)}}{dt^2} &= - {\mathbb{D}}^2 \bm{Q}_1^{(1)} \nonumber\\ &+ \left( \{{\mathbb{D}},{\mathbb{E}}\} - \frac{d{\mathbb{D}}}{dt} \right) \bm{Q}_2 + \left( {\mathbb{E}}^2 - \frac{d{\mathbb{E}}}{dt} \right) \bm{Q}_1^{(1)}, \label{normal-mode-eq-decomp1} \\ \frac{d^2\bm{Q}_2^{(1)}}{dt^2} &= - {\mathbb{D}}^2 \bm{Q}_2^{(1)} \nonumber\\ &- \left( \{{\mathbb{D}},{\mathbb{E}}\} - \frac{d{\mathbb{D}}}{dt} \right) \bm{Q}_1^{(1)} + \left( {\mathbb{E}}^2 - \frac{d{\mathbb{E}}}{dt} \right) \bm{Q}_2^{(1)}, \label{normal-mode-eq-decomp2} \end{align} where $\{{\mathbb{A}},{\mathbb{B}}\} \equiv {\mathbb{A}} {\mathbb{B}} + {\mathbb{B}} {\mathbb{A}}$ denotes the matrix anticommutator. 
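The diagonalization above is easy to sanity-check numerically: since the columns of ${\mathbb{M}}$ sum to zero, ${\mathbb{M}}$ always possesses a zero eigenvalue, and its two nonzero eigenvalues are the closed-form precession frequencies $\omega_{{\mbox{\tiny P}},\pm}$. The sketch below (illustrative only; the coupling values are arbitrary) verifies this:

```python
import numpy as np

# Check that M has eigenvalues {0, omega_+, omega_-} with omega_± given by
# the closed-form expression.  b..g are arbitrary illustrative couplings
# chosen so that the discriminant is positive.
b, c, d, e, f, g = 0.70, 0.30, 0.50, 0.20, 0.10, 0.05

Mmat = np.array([[b + c, -d,    -e   ],
                 [-b,    d + f, -g   ],
                 [-c,    -f,    e + g]])

S = b + c + d + e + f + g
disc = (b - c + d - e + f - g) ** 2 + 4 * (c - f) * (b - g)
w_plus = 0.5 * (S + np.sqrt(disc))
w_minus = 0.5 * (S - np.sqrt(disc))

eig = np.linalg.eigvals(Mmat)
```

The identity follows because the characteristic polynomial factors as $\lambda(\lambda^2 - S\lambda + \omega_{{\mbox{\tiny P}},+}\omega_{{\mbox{\tiny P}},-})$.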
In what follows, we solve this system of equations via multiple scale analysis. \subsubsection{Separation of Scales} Inspection of Eqs.~\eqref{normal-mode-eq-decomp1} and~\eqref{normal-mode-eq-decomp2} reveals that this is a system of perturbed harmonic oscillators with time-dependent frequencies. Thus, the multiple scale analysis methods presented in Sec.~\ref{sec:msa} are well-suited to solve this problem. As explained in that section, however, we must first transform to a new independent variable such that the problem becomes that of a system of perturbed harmonic oscillators with constant frequencies. Because the eigenvalues of the $\mathbb{D}$ matrix are generically not the same, i.e.~$\omega_{{\mbox{\tiny P}},+} \neq \omega_{{\mbox{\tiny P}},-}$, we are forced to introduce a different transformation for $+$ and $-$ modes. Changing variables $t \to \phi_{{\mbox{\tiny P}},\pm}(t)$ in the $Q_{\pm,1}^{(1)}$ equation, Eq.~\eqref{normal-mode-eq-decomp1} becomes \begin{align} \frac{dQ_{0,1}^{(1)}}{dt} &= - \left[ {\mathbb{E}} \, \bm{Q}_1^{(1)} \right]\cdot \bm{P}_0 , \\ \frac{d^2 Q_{+,1}^{(1)}}{d\phi_{{\mbox{\tiny P}},+}^2} &= -\left( \frac{dt}{d\phi_{{\mbox{\tiny P}},+}} \right)^2 \omega_{{\mbox{\tiny P}},+}^2 Q_{+,1}^{(1)} \nonumber\\ &- \left( \frac{dt}{d\phi_{{\mbox{\tiny P}},+}} \right)^2 \frac{d^2\phi_{{\mbox{\tiny P}},+}}{dt^2} \frac{dQ_{+,1}^{(1)}}{d\phi_{{\mbox{\tiny P}},+}} \nonumber\\ &+ \left( \frac{dt}{d\phi_{{\mbox{\tiny P}},+}} \right)^2 \left[ \left( \{{\mathbb{D}},{\mathbb{E}}\} - \frac{d{\mathbb{D}}}{dt} \right) \bm{Q}_2^{(1)} \right]\cdot \bm{P}_+ \nonumber\\ &+ \left( \frac{dt}{d\phi_{{\mbox{\tiny P}},+}} \right)^2\left[ \left( {\mathbb{E}}^2 - \frac{d{\mathbb{E}}}{dt} \right) \bm{Q}_1^{(1)} \right]\cdot \bm{P}_+, \\ \frac{d^2 Q_{-,1}^{(1)}}{d\phi_{{\mbox{\tiny P}},-}^2} &= -\left( \frac{dt}{d\phi_{{\mbox{\tiny P}},-}} \right)^2 \omega_{{\mbox{\tiny P}},-}^2 Q_{-,1}^{(1)} \nonumber\\ &- \left( \frac{dt}{d\phi_{{\mbox{\tiny P}},-}} \right)^2 
\frac{d^2\phi_{{\mbox{\tiny P}},-}}{dt^2} \frac{dQ_{-,1}^{(1)}}{d\phi_{{\mbox{\tiny P}},-}} \nonumber\\ &+ \left( \frac{dt}{d\phi_{{\mbox{\tiny P}},-}} \right)^2 \left[ \left( \{{\mathbb{D}},{\mathbb{E}}\} - \frac{d{\mathbb{D}}}{dt} \right) \bm{Q}_2^{(1)} \right]\cdot \bm{P}_- \nonumber\\ &+ \left( \frac{dt}{d\phi_{{\mbox{\tiny P}},-}} \right)^2\left[ \left( {\mathbb{E}}^2 - \frac{d{\mathbb{E}}}{dt} \right) \bm{Q}_1^{(1)} \right]\cdot \bm{P}_-, \end{align} where we introduced the projectors \begin{align} \bm{P}_0 = \left(\begin{array}{c} 1\\0\\0 \end{array} \right)\, , \quad \bm{P}_+ = \left(\begin{array}{c} 0\\1\\0 \end{array} \right)\, , \quad \bm{P}_- = \left(\begin{array}{c} 0\\0\\1 \end{array} \right)\, . \end{align} We obtain similar equations for $\dot{Q}_{i,2}^{(1)}$ with $i = 0$, $+$, or $-$. Notice that we have not transformed the time coordinate for the zero-frequency mode. For the above equations to have a constant normal frequency, we must set \begin{align} \frac{d \phi_{{\mbox{\tiny P}},\pm}}{dt} = \omega_{{\mbox{\tiny P}},\pm}, \label{eq-sep-var} \end{align} modulo a proportionality constant, which we choose to be unity so that $\phi_{{\mbox{\tiny P}},\pm}$ are exactly the precession angles. 
With this rescaling of the independent variable, the differential system becomes \begin{align} \frac{dQ_{0,1}^{(1)}}{dt} &= - \left[ {\mathbb{E}} \, \bm{Q}_1^{(1)} \right]\cdot \bm{P}_0 , \\ \frac{d^2 Q_{+,1}^{(1)}}{d\phi_{{\mbox{\tiny P}},+}^2} &= - Q_{+,1}^{(1)} - \frac{\dot{\omega}_{{\mbox{\tiny P}},+}}{\omega_{{\mbox{\tiny P}},+}^2} \frac{dQ_{+,1}^{(1)}}{d\phi_{{\mbox{\tiny P}},+}} \nonumber\\ &+ \frac{1}{\omega_{{\mbox{\tiny P}},+}^2} \left[ \left( \{{\mathbb{D}},{\mathbb{E}}\} - \frac{d{\mathbb{D}}}{dt} \right) \bm{Q}_2^{(1)} \right]\cdot \bm{P}_+ \nonumber\\ &+ \frac{1}{\omega_{{\mbox{\tiny P}},+}^2} \left[ \left( {\mathbb{E}}^2 - \frac{d{\mathbb{E}}}{dt} \right) \bm{Q}_1^{(1)} \right]\cdot \bm{P}_+, \\ \frac{d^2 Q_{-,1}^{(1)}}{d\phi_{{\mbox{\tiny P}},-}^2} &= - Q_{-,1}^{(1)} - \frac{\dot{\omega}_{{\mbox{\tiny P}},-}}{\omega_{{\mbox{\tiny P}},-}^2} \frac{dQ_{-,1}^{(1)}}{d\phi_{{\mbox{\tiny P}},-}} \nonumber\\ &+ \frac{1}{\omega_{{\mbox{\tiny P}},-}^2} \left[ \left( \{{\mathbb{D}},{\mathbb{E}}\} - \frac{d{\mathbb{D}}}{dt} \right) \bm{Q}_2^{(1)} \right]\cdot \bm{P}_- \nonumber\\ &+ \frac{1}{\omega_{{\mbox{\tiny P}},-}^2} \left[ \left( {\mathbb{E}}^2 - \frac{d{\mathbb{E}}}{dt} \right) \bm{Q}_1^{(1)} \right]\cdot \bm{P}_-. \end{align} Note that the source to these oscillators depends on $t$, which must, in principle, be solved for through inversion of the solution to Eq.~\eqref{eq-sep-var}. We will here leave these expressions as implicit functions of $\phi_{{\mbox{\tiny P}},\pm}$. Now, let us perform an expansion of the above differential equations in powers of $\sigma$, which we recall is a book-keeping parameter of ${\cal{O}}(t_{{\mbox{\tiny prec}}}/t_{{\mbox{\tiny rr}}})$. In terms of the $\xi$ variable, $\sigma$ counts the powers in $(\dot{\xi}/\xi) \omega_{{\mbox{\tiny P}},\pm}^{-1} = a/\omega_{{\mbox{\tiny P}},\pm}$, since $\dot{\xi} = a \, \xi$. 
The differential equations then become \begin{align} &\frac{dQ_{0,1}^{(1)}}{dt} = - \sigma a \left[ \mathbb{R}^{-1} \left(\mathbb{A} \mathbb{R} + \xi\pdfrac{\mathbb{R}}{\xi} \right) \bm{Q}_1^{(1)} \right]\cdot \bm{P}_0 , \label{eq:ddQ0}\\ &\frac{d^2 Q_{+,1}^{(1)}}{d\phi_{{\mbox{\tiny P}},+}^2} = - Q_{+,1}^{(1)} + \sigma \frac{a}{\omega_{{\mbox{\tiny P}},+}^2} \left[ \mathbb{F} \, \bm{Q}_2^{(1)} \right]\cdot \bm{P}_+ \nonumber\\ &+ \sigma^{2} \frac{a^2}{\omega_{{\mbox{\tiny P}},+}^2} \bigg\{ \bigg[ \frac{\xi}{\omega_{{\mbox{\tiny P}},+}}\pdfrac{\omega_{{\mbox{\tiny P}},+}}{\xi}\mathbb{R}^{-1} \nonumber\\ &\times \left(\mathbb{A} \mathbb{R} + \xi\pdfrac{\mathbb{R}}{\xi} \right) + \mathbb{G} \bigg] \bm{Q}_1^{(1)} \bigg\}\cdot \bm{P}_+, \label{eq:ddQp}\\ &\frac{d^2 Q_{-,1}^{(1)}}{d\phi_{{\mbox{\tiny P}},-}^2} = - Q_{-,1}^{(1)} + \sigma \frac{a}{\omega_{{\mbox{\tiny P}},-}^2} \left[ \mathbb{F} \, \bm{Q}_2^{(1)} \right]\cdot \bm{P}_- \nonumber\\ &+ \sigma^{2} \frac{a^2}{\omega_{{\mbox{\tiny P}},-}^2} \bigg\{ \bigg[ \frac{\xi}{\omega_{{\mbox{\tiny P}},-}}\pdfrac{\omega_{{\mbox{\tiny P}},-}}{\xi}\mathbb{R}^{-1}\nonumber\\ &\times \left(\mathbb{A} \mathbb{R} + \xi\pdfrac{\mathbb{R}}{\xi} \right) + \mathbb{G} \bigg] \bm{Q}_1^{(1)} \bigg\}\cdot \bm{P}_- , \label{eq:ddQm} \end{align} where we have defined the new matrices \begin{align} &\mathbb{F} = \left\{\mathbb{D},\mathbb{R}^{-1} \left(\mathbb{A} \mathbb{R} + \xi\pdfrac{\mathbb{R}}{\xi} \right)\right\} , \\ &\mathbb{G} = \mathbb{R}^{-1} \bigg[ \mathbb{A}\mathbb{R} + \xi \mathbb{A} \pdfrac{\mathbb{R}}{\xi} + \xi \pdfrac{\mathbb{R}}{\xi} \mathbb{R}^{-1}\mathbb{A}\mathbb{R} \nonumber\\ &+ \xi^2 \pdfrac{\mathbb{R}}{\xi} \mathbb{R}^{-1} \pdfrac{\mathbb{R}}{\xi} - \left(\frac{\xi}{a} \pdfrac{a}{\xi} + \xi \mathbb{R} \pdfrac{\mathbb{R}^{-1}}{\xi} \right) \nonumber\\ &\times \left(\mathbb{A} \mathbb{R} + \xi \pdfrac{\mathbb{R}}{\xi}\right) - \xi \mathbb{A} \pdfrac{\mathbb{R}}{\xi} - \xi \pdfrac{\mathbb{R}}{\xi} - \xi^2 
\pdfrac{^2\mathbb{R}}{\xi^2} \bigg]. \end{align} We can now proceed to the separation of timescales by introducing a new time variable $\tau$ such that $\dot{\tau}/\dot{\phi}_{{\mbox{\tiny P}},\pm} = {\cal{O}}(\sigma)$. In Sec.~\ref{sec:msa}, we used a linear relation between $\tau$ and $t$, i.e. $\tau = \sigma t$. Although we could do the same here, we find it more convenient to use the non-linear relation $d\tau/dt = \sigma a$, or in angle-variables $d\tau/d\phi_{{\mbox{\tiny P}},\pm} = \sigma a/\omega_{{\mbox{\tiny P}},\pm}$. Such a non-linear mapping between $\tau$, $t$, and $\phi_{{\mbox{\tiny P}}\pm}$ leads to a better match with numerical solutions because it allows for the ratio of timescales $t_{{\mbox{\tiny prec}}}/t_{{\mbox{\tiny rr}}}$ to vary as the inspiral proceeds. Of course, we can solve for $\tau$ in terms of $t$ via \begin{align} \tau &= \sigma \int a \, dt = \sigma \int \frac{1}{\xi} d\xi = \sigma \log\xi \,. \label{eq:tau} \end{align} We then postulate the series ansatz \begin{align} \bm{Q}_j^{(1)}(t) = \sum_{n \geq 0} \sigma^n \bm{Q}_j^{(1,n)}(t,\tau) = \sum_{n \geq 0} \sigma^n \bm{Q}_j^{(1,n)}(\phi_{{\mbox{\tiny P}},\pm},\tau), \end{align} where $j = 1$ or $2$. Recall here that the first superscript $1$ reminds us that these are quantities of ${\cal{O}}(\epsilon)$, while the second superscript labels the orders in $\sigma$. Thus, the quantity $Q^{(m,n)}_{j}$ is of bivariate ${\cal{O}}(\epsilon^{m},\sigma^{n})$. With this ansatz, we convert the system of ordinary differential equations of Eqs.~\eqref{eq:ddQ0}-\eqref{eq:ddQm} into a system of partial differential equations (PDEs). 
In doing so, we transform the differential operators via \begin{align} \frac{d}{dt} &= \pdfrac{}{t} + \sigma a \pdfrac{}{\tau}, \\ \frac{d^2}{d\phi_{{\mbox{\tiny P}},\pm}^2} &= \pdfrac{^2}{\phi_{{\mbox{\tiny P}},\pm}^2} + 2 \sigma \frac{a}{\omega_{{\mbox{\tiny P}},\pm}} \pdfrac{^2}{\phi_{{\mbox{\tiny P}},\pm}\partial\tau} \nonumber\\ &+ \sigma^2 \frac{a^2}{\omega_{{\mbox{\tiny P}},\pm}^2} \pdfrac{^2}{\tau^2} + \sigma^2 \frac{a^2}{\omega_{{\mbox{\tiny P}},\pm}^2} \left( \frac{\dot{a}}{a} - \frac{\dot{\omega}_{{\mbox{\tiny P}},\pm}}{\omega_{{\mbox{\tiny P}},\pm}} \right) \pdfrac{}{\tau}, \end{align} and re-expand all quantities in $\sigma \ll 1$. In what follows, we solve the resulting system of PDEs order by order in $\sigma$. \subsubsection{Solution to ${\cal{O}}(\epsilon^{1},\sigma^{0})$} At lowest order in $\sigma$, the system of PDEs becomes \begin{align} \pdfrac{Q_{0,j}^{(1,0)}}{t} &= 0, \: \pdfrac{^2Q_{+,j}^{(1,0)}}{\phi_{{\mbox{\tiny P}},+}^2} = - Q_{+,j}^{(1,0)}, \: \pdfrac{^2Q_{-,j}^{(1,0)}}{\phi_{{\mbox{\tiny P}},-}^2} = - Q_{-,j}^{(1,0)}, \end{align} with $j = 1$ or $2$. Solving these equations and requiring that they satisfy the original first-order differential system of Eqs.~(\ref{normal-mode-eq-1}-\ref{normal-mode-eq}) at leading order in $\sigma$, we find \begin{align} Q_{0,j}^{(1,0)} &= A_{0,j}^{(1,0)}(\tau), \\ Q_{\pm,1}^{(1,0)} &= A_{\pm,1}^{(1,0)}(\tau) \cos \phi_{{\mbox{\tiny P}},\pm} - A_{\pm,2}^{(1,0)}(\tau) \sin \phi_{{\mbox{\tiny P}},\pm} , \\ Q_{\pm,2}^{(1,0)} &= A_{\pm,2}^{(1,0)}(\tau) \cos \phi_{{\mbox{\tiny P}},\pm} + A_{\pm,1}^{(1,0)}(\tau) \sin \phi_{{\mbox{\tiny P}},\pm}, \end{align} with $j = 1$ or $2$ as usual. \subsubsection{Solution to ${\cal{O}}(\epsilon^{1},\sigma^{1})$} Let us now consider the differential system at ${\cal{O}}(\sigma^{1})$. 
The zero-frequency equations are \begin{align} \pdfrac{Q_{0,j}^{(1,1)}}{t} &= - a \pdfrac{A_{0,j}^{(1,0)}}{\tau} - aA_{0,j}^{(1,0)} \nonumber\\ &- a\left(Q_{+,j}^{(1,0)} + Q_{-,j}^{(1,0)} \right) \left[1 + \mathcal{O}\left(c^{-1}\right) \right] , \label{eq:O11-zero-freq} \end{align} with $j = 1$ or $2$. As we saw in Sec.~\ref{sec:msa}, we must require that secular terms do not appear, which then leads to the equation \begin{align} \pdfrac{A_{0,j}^{(1,0)}}{\tau} + A_{0,j}^{(1,0)} &= 0, \end{align} whose solution is \begin{align} A_{0,j}^{(1,0)}(\tau) &= B_{0,j}^{(1,0)} e^{-\tau} = \frac{B_{0,j}^{(1,0)}}{\xi}. \end{align} The terms depending on $Q_{\pm,j}^{(1,0)}$ in Eq.~\eqref{eq:O11-zero-freq} will induce an oscillatory term in $Q_{0,j}^{(1,1)}$. The non-zero frequency equations at ${\cal{O}}(\sigma^{1})$ are \begin{align} \pdfrac{^2Q_{\pm,1}^{(1,1)}}{\phi_{{\mbox{\tiny P}},\pm}^2} &+ Q_{\pm,1}^{(1,1)} \nonumber\\ &= \frac{a}{\omega_{{\mbox{\tiny P}},\pm}} \left[2 \pdfrac{Q_{\pm,2}^{(1,0)}}{\tau} + \frac{1}{\omega_{{\mbox{\tiny P}},\pm}} \left[ \mathbb{F} \bm{Q}_{2}{^{(1,0)}} \right]\cdot \bm{P}_\pm \right], \\ \pdfrac{^2Q_{\pm,2}^{(1,1)}}{\phi_{{\mbox{\tiny P}},\pm}^2} &+ Q_{\pm,2}^{(1,1)} \nonumber\\ &= - \frac{a}{\omega_{{\mbox{\tiny P}},\pm}} \left[2 \pdfrac{Q_{\pm,1}^{(1,0)}}{\tau} + \frac{1}{\omega_{{\mbox{\tiny P}},\pm}} \left[ \mathbb{F} \bm{Q}_{1}{^{(1,0)}} \right]\cdot \bm{P}_\pm \right]. 
\end{align} Expanding these equations, we find \begin{widetext} \begin{align} \pdfrac{^2Q_{+,1}^{(1,1)}}{\phi_{{\mbox{\tiny P}},+}^2} + Q_{+,1}^{(1,1)} &= \frac{a}{\omega_{{\mbox{\tiny P}},+}} \Bigg\{ 2 \left(\pdfrac{A_{+,2}^{(1,0)}}{\tau} \cos\phi_{{\mbox{\tiny P}},+} + \pdfrac{A_{+,1}^{(1,0)}}{\tau} \sin\phi_{{\mbox{\tiny P}},+} \right) + \omega_{{\mbox{\tiny P}},+}^{-1} \Big[ \mathbb{F}_{++} \left( A_{+,2}^{(1,0)} \cos \phi_{{\mbox{\tiny P}},+} + A_{+,1}^{(1,0)} \sin \phi_{{\mbox{\tiny P}},+} \right)\nonumber\\ & + \mathbb{F}_{+0} A_{0,2}^{(1,0)} + \mathbb{F}_{+-} \left( A_{-,2}^{(1,0)} \cos \phi_{{\mbox{\tiny P}},-} + A_{-,1}^{(1,0)} \sin \phi_{{\mbox{\tiny P}},-} \right) \Big] \Bigg\}, \label{eq:eqQp11} \\ \pdfrac{^2Q_{+,2}^{(1,1)}}{\phi_{{\mbox{\tiny P}},+}^2} + Q_{+,2}^{(1,1)} &= - \frac{a}{\omega_{{\mbox{\tiny P}},+}} \Bigg\{ 2 \left(\pdfrac{A_{+,1}^{(1,0)}}{\tau} \cos\phi_{{\mbox{\tiny P}},+} - \pdfrac{A_{+,2}^{(1,0)}}{\tau} \sin\phi_{{\mbox{\tiny P}},+} \right) + \omega_{{\mbox{\tiny P}},+}^{-1} \Big[ \mathbb{F}_{++} \left( A_{+,1}^{(1,0)} \cos \phi_{{\mbox{\tiny P}},+} - A_{+,2}^{(1,0)} \sin \phi_{{\mbox{\tiny P}},+} \right)\nonumber\\ & + \mathbb{F}_{+0} A_{0,1}^{(1,0)} + \mathbb{F}_{+-} \left( A_{-,1}^{(1,0)} \cos \phi_{{\mbox{\tiny P}},-} - A_{-,2}^{(1,0)} \sin \phi_{{\mbox{\tiny P}},-} \right) \Big] \Bigg\}, \label{eq:eqQp21} \\ \pdfrac{^2Q_{-,1}^{(1,1)}}{\phi_{{\mbox{\tiny P}},-}^2} + Q_{-,1}^{(1,1)} &= \frac{a}{\omega_{{\mbox{\tiny P}},-}} \Bigg\{ 2 \left(\pdfrac{A_{-,2}^{(1,0)}}{\tau} \cos\phi_{{\mbox{\tiny P}},-} + \pdfrac{A_{-,1}^{(1,0)}}{\tau} \sin\phi_{{\mbox{\tiny P}},-} \right) + \omega_{{\mbox{\tiny P}},-}^{-1} \Big[ \mathbb{F}_{--} \left( A_{-,2}^{(1,0)} \cos \phi_{{\mbox{\tiny P}},-} + A_{-,1}^{(1,0)} \sin \phi_{{\mbox{\tiny P}},-} \right)\nonumber\\ & + \mathbb{F}_{-0} A_{0,2}^{(1,0)} + \mathbb{F}_{-+} \left( A_{+,2}^{(1,0)} \cos \phi_{{\mbox{\tiny P}},+} + A_{+,1}^{(1,0)} \sin \phi_{{\mbox{\tiny P}},+} \right) \Big] \Bigg\}, 
\label{eq:eqQm11} \\ \pdfrac{^2Q_{-,2}^{(1,1)}}{\phi_{{\mbox{\tiny P}},-}^2} + Q_{-,2}^{(1,1)} &= - \frac{a}{\omega_{{\mbox{\tiny P}},-}} \Bigg\{ 2 \left(\pdfrac{A_{-,1}^{(1,0)}}{\tau} \cos\phi_{{\mbox{\tiny P}},-} - \pdfrac{A_{-,2}^{(1,0)}}{\tau} \sin\phi_{{\mbox{\tiny P}},-} \right) + \omega_{{\mbox{\tiny P}},-}^{-1} \Big[ \mathbb{F}_{--} \left( A_{-,1}^{(1,0)} \cos \phi_{{\mbox{\tiny P}},-} - A_{-,2}^{(1,0)} \sin \phi_{{\mbox{\tiny P}},-} \right)\nonumber\\ & + \mathbb{F}_{-0} A_{0,1}^{(1,0)} + \mathbb{F}_{-+} \left( A_{+,1}^{(1,0)} \cos \phi_{{\mbox{\tiny P}},+} - A_{+,2}^{(1,0)} \sin \phi_{{\mbox{\tiny P}},+} \right) \Big] \Bigg\}, \label{eq:eqQm21} \end{align} \end{widetext} To prevent secular terms, we must require that there are no source terms proportional to solutions of the homogeneous equation. This then imposes \begin{align} \pdfrac{A_{+,j}^{(1,0)}}{\tau} + \frac{1}{2} \omega_{{\mbox{\tiny P}},+}^{-1} \mathbb{F}_{++} A_{+,j}^{(1,0)} &= 0, \\ \pdfrac{A_{-,j}^{(1,0)}}{\tau} + \frac{1}{2} \omega_{{\mbox{\tiny P}},-}^{-1} \mathbb{F}_{--} A_{-,j}^{(1,0)} &= 0, \end{align} where $j = 1$ or $2$. The solutions to these equations are \begin{align} A_{\pm,j}^{(1,0)}(\tau) &= B_{\pm,j}^{(1,0)} \exp\left[ - \frac{1}{2}\int (\mathbb{F}_{\pm\pm} / \omega_{{\mbox{\tiny P}},\pm}) d\tau \right] \nonumber\\ &= B_{\pm,j}^{(1,0)} \exp\left\{ - \frac{1}{2}\int [\mathbb{F}_{\pm\pm} / (\omega_{{\mbox{\tiny P}},\pm} \xi) ] d\xi \right\} \nonumber\\ &= B_{\pm,j}^{(1,0)} \left[ 1 + \mathcal{O}(c^{-1}) \right], \label{eq:Apmj10} \end{align} where $B_{\pm,j}^{(1,0)}$ are integration constants and we have used Eq.~\eqref{eq:tau}. A proof of Eq.~\eqref{eq:Apmj10} is given in Appendix~\ref{app:RandRm1}. 
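The renormalization-group content of the zero-mode no-secular condition is simple: at leading order, the slow amplitude obeys $dA_0/dt = -a A_0$ while $\dot{\xi} = a\xi$, so the product $A_0 \xi$ is an exact invariant, reproducing $A_{0,j}^{(1,0)} = B_{0,j}^{(1,0)} e^{-\tau} = B_{0,j}^{(1,0)}/\xi$. A minimal numerical sketch (illustrative only; $a_0$, the initial data, and the integration range are arbitrary test choices, and only the leading-order $a = (a_0/3M)\xi^8$ is kept):

```python
import numpy as np

# Check the zero-mode decay law A0 ∝ 1/xi: integrate dA0/dt = -a A0
# together with d(xi)/dt = a xi, so A0*xi should be exactly conserved.
# a0, initial data and integration range are arbitrary test choices.
a0, M = 12.8, 1.0

def deriv(state):
    xi, A0 = state
    a = (a0 / (3 * M)) * xi**8     # leading-order radiation-reaction rate
    return np.array([a * xi, -a * A0])

state = np.array([0.25, 1.0])      # (xi, A0) at t = 0
h = 0.05
for _ in range(30000):             # RK4 to t = 1500 (well before coalescence)
    k1 = deriv(state)
    k2 = deriv(state + 0.5 * h * k1)
    k3 = deriv(state + 0.5 * h * k2)
    k4 = deriv(state + h * k3)
    state = state + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

xi_f, A0_f = state
```

The oscillatory amplitudes $A_{\pm,j}^{(1,0)}$, by contrast, are constant up to the ${\cal{O}}(c^{-1})$ corrections of Eq.~\eqref{eq:Apmj10}.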
\subsubsection{Precession phases} \label{sec:precphases} The precession angles $\phi_{{\mbox{\tiny P}},\pm}$ can be computed using Eq.~\eqref{eq-sep-var} and~\eqref{eq:xiofzeta}, which leads to \begin{align} \phi_{{\mbox{\tiny P}},\pm} &= \int \omega_{{\mbox{\tiny P}},\pm} dt = \int \frac{\omega_{{\mbox{\tiny P}},\pm}}{a\xi} d\xi. \label{eq:phipm} \end{align} Care must be exercised when computing this integral because $\delta\omega_p = \omega_{{\mbox{\tiny P}},+} - \omega_{{\mbox{\tiny P}},-}$ satisfies \begin{align} \delta \omega_p^2 &= \mathcal{O}( \delta m^2) \xi^{10} + \mathcal{O}(\delta m) \xi^{11} \\ &+ \mathcal{O}(\delta m^0) \xi^{12} + \mathcal{O}\left( \delta m^0,\xi^{13} \right), \end{align} where $\delta m = (m_1 - m_2)/M$ is the dimensionless mass difference. Depending on the magnitude of $\delta m$ relative to the magnitude of $\xi$, the PN expansion will be somewhat different: if $\delta m \gg {\cal{O}}(c^{-1})$, $\delta \omega_p^2 \sim \xi^{10}$; if $\delta m \ll {\cal{O}}(c^{-1})$, $\delta \omega_{{\mbox{\tiny P}}}^{2} \sim \xi^{12}$. Notice that this is not a problem in the PN treatment of non-spinning inspirals, as there $\delta m$ does not appear in the controlling factor of the approximation. In order to address this feature of the solution, let us separate the precession phases via $\phi_{{\mbox{\tiny P}},\pm} = \phi_{{\mbox{\tiny P}},m} \pm \delta \phi_{\mbox{\tiny P}}$. The mean precession phase $\phi_{{\mbox{\tiny P}},m} \equiv (\phi_{{\mbox{\tiny P}},+} + \phi_{{\mbox{\tiny P}},-})/2$ can be computed using standard PN methods: \begin{align} \phi_{{\mbox{\tiny P}},m} &= \frac{1}{2} \int \left( \omega_{{\mbox{\tiny P}},+} + \omega_{{\mbox{\tiny P}},-} \right) \frac{dt}{d\xi} d\xi \nonumber\\ &= \frac{1}{2} \int \frac{ b+c+d+e+f+g}{a\xi} d\xi \nonumber\\ &= -\frac{5}{128} \left[ \frac{8}{3} + \left( \frac{m_1}{m_2} + \frac{m_2}{m_1} \right) \right] \xi^{-3} \left[1 + \mathcal{O}(c^{-1}) \right]. 
\label{phim} \end{align} We can expand the integrand of this expression to any relevant PN order, and we provide higher-order PN terms in Appendix~\ref{app:precphases}. The calculation of the precession phase difference $\delta \phi_{\mbox{\tiny P}} \equiv (\phi_{{\mbox{\tiny P}},+} - \phi_{{\mbox{\tiny P}},-})/2$ must be studied in two different cases: $\delta m \ll {\cal{O}}(c^{-1})$ and $\delta m \gg {\cal{O}}(c^{-1})$. Since the PN parameter that controls the expansion in $c^{-1}$ evolves with time, some systems might fall into the former case at some point in time, and into the latter at another. Thus, we find it useful to consider a third case, i.e.~when $\delta m \sim {\cal{O}}(c^{-1})$. As we will see below, this will allow us to use a uniform approximation depending only on the value of $\delta m$. Let us first focus on the $\delta m \gg {\cal{O}}(c^{-1})$ case. In this case, $\delta m$ can be treated as a quantity of order unity, and we can carry out a standard PN expansion: \begin{align} \delta \phi_{{\mbox{\tiny P}},1} &= \frac{1}{2} \int \left( \omega_{{\mbox{\tiny P}},+} - \omega_{{\mbox{\tiny P}},-} \right) \frac{dt}{d\xi} d\xi \nonumber\\ &= \frac{1}{2} \int \frac{1}{a\xi} \big[(b-c+d-e+f-g)^2 \nonumber\\ &\qquad \qquad\qquad \qquad + 4(c-f)(b-g) \big]^{1/2} d\xi \nonumber\\ &= \frac{15}{256} \left\{ - \frac{1}{3} \left( \frac{m_1}{m_2} - \frac{m_2}{m_1} \right) \xi^{-3} \right. \nonumber \\ &- \left. \frac{1}{2} \left[ \chi_1 - \chi_2 + 2 \left(\frac{m_1}{m_2} \chi_1 - \frac{m_2}{m_1} \chi_2 \right) \right] \xi^{-2} + \mathcal{O}(c) \right\}. \label{phipxi} \end{align} Let us now concentrate on the case where $\delta m \sim {\cal{O}}(c^{-1})$. In this case, we must treat $\delta m$ as a quantity of the same order as the PN parameter $\xi$. 
Doing so and PN expanding, we find \begin{align} \delta \phi_{{\mbox{\tiny P}},2} &= \frac{1}{2} \int \left( \omega_{{\mbox{\tiny P}},+} - \omega_{{\mbox{\tiny P}},-} \right) \frac{dt}{d\xi} d\xi \nonumber\\ &= \frac{1}{2} \int \frac{1}{a\xi} \big[(b-c+d-e+f-g)^2 \nonumber\\ &+ 4(c-f)(b-g) \big]^{1/2} d\xi \nonumber\\ &= -\frac{5}{8192 \delta m^2 \xi^3} \bigg\{ T_{0} \left[ 32 \delta m^2 - 12 (\chi_1 - \chi_2 ) \delta m \, \xi \right. \nonumber \\ &+ \left. \left( 9 \chi_1^2 - 50 \chi_1 \chi_2 + 9 \chi_2^2 \right) \xi^2 \right] + 144 \chi_1 \chi_2 (\chi_1 - \chi_2) \xi^3 \nonumber\\ &\times \log \left[ 4 \frac{\delta m}{\xi} - 3 (\chi_1 - \chi_2) + \frac{T_{0}}{\xi} \right] + \mathcal{O}(c^{-4}) \bigg\}, \label{phipDelta} \end{align} where we have defined the quantity \begin{multline} T_{0} = \big[16 \delta m^2 - 24 (\chi_1 - \chi_2) \delta m \xi \\ + \left( 9 \chi_1^2 - 2 \chi_1 \chi_2 + 9 \chi_2^2 \right) \xi^2 \big]^{1/2}. \end{multline} Finally, when $\delta m \ll {\cal{O}}(c^{-1})$, we expand $\delta \omega_p$ in $\delta m$, but leave the $\xi$ factors unexpanded: \begin{align} \delta \phi_{{\mbox{\tiny P}},3} &= \frac{1}{2} \int \left( \omega_{{\mbox{\tiny P}},+} - \omega_{{\mbox{\tiny P}},-} \right) \frac{dt}{d\xi} d\xi \nonumber\\ &= \frac{1}{2} \int \frac{1}{a\xi} \big[(b-c+d-e+f-g)^2 \nonumber\\ &+ 4(c-f)(b-g) \big]^{1/2} d\xi \nonumber\\ &= - \frac{15}{512 T_1 \xi^2} \left\{ T_2 T_3 + \frac{20}{\sqrt{T_1}} \chi_1^2 \chi_2^2 (\chi_1 - \chi_2)^2 \xi^2 \right. \nonumber \\ &\times \left. 
\log \left[ \frac{1}{\xi} \left( T_3 + \sqrt{T_1} T_2 \right) \right] \right\} + \mathcal{O}(\delta m), \label{phipdeltam} \end{align} where we have defined the quantities \begin{align} T_1 &= 9 \chi_1^2 - 2 \chi_1 \chi_2 + 9 \chi_2^2, \\ T_2 &= \big[ 9 \chi_1^2 - 2 \chi_1 \chi_2 + 9 \chi_2^2 \nonumber\\ & -8 \chi_1 \chi_2 (\chi_1 + \chi_2) \xi + 4 \chi_1^2 \chi_2^2 \xi^2 \big]^{1/2}, \\ T_3 &= 9 \chi_1^2 - 2 \chi_1 \chi_2 + 9 \chi_2^2 - 4 \chi_1 \chi_2 (\chi_1 + \chi_2) \xi. \end{align} In this way, we can reconstruct $\phi_{{\mbox{\tiny P}},\pm}$ by combining $\phi_{{\mbox{\tiny P}},m}$ in Eq.~\eqref{phim} with one of the $\delta \phi_{{\mbox{\tiny P}}}$ in Eqs.~\eqref{phipxi}, \eqref{phipDelta} and \eqref{phipdeltam}, depending on the magnitude of $\delta m$. Later on, we will use the particular implementation described in Appendix~\ref{app:precphases}, where we found it useful, in practice, to use $\delta \phi_{{\mbox{\tiny P}}} = \delta \phi_{{\mbox{\tiny P}},1}$ for $\delta m \geq 0.2$, $\delta \phi_{{\mbox{\tiny P}}} = \delta \phi_{{\mbox{\tiny P}},2}$ for $10^{-5} \leq \delta m < 0.2$, and $\delta \phi_{{\mbox{\tiny P}}} = \delta \phi_{{\mbox{\tiny P}},3}$ for $\delta m < 10^{-5}$. Different classes of compact binaries will, of course, have a different natural range of $\delta m$. Neutron star binaries must have $\delta m \in \left(0,0.375\right)$, with typical values $\delta m \sim 0.08$, for which $\delta \phi_{{\mbox{\tiny P}}} = \delta \phi_{{\mbox{\tiny P}},2}$. Neutron star/black hole binaries can have $\delta m \in \left(0.375,0.96\right)$, with typical values $\delta m \sim 0.75$, for which $\delta \phi_{{\mbox{\tiny P}}} = \delta \phi_{{\mbox{\tiny P}},1}$. Black hole binaries detectable by advanced ground detectors with total mass less than $50 M_{\odot}$ must have $\delta m \in \left(0,0.82 \right)$, with typical values $\delta m \lesssim 0.4$ for the best gravitational wave candidates. 
In this case, one may have to switch between $\delta \phi_{{\mbox{\tiny P}}} = \delta \phi_{{\mbox{\tiny P}},1}$ and $\delta \phi_{{\mbox{\tiny P}}} = \delta \phi_{{\mbox{\tiny P}},2}$. The choice $\delta \phi_{{\mbox{\tiny P}}} = \delta \phi_{{\mbox{\tiny P}},3}$ is only relevant for almost exactly equal masses, which has a very low probability of occurring. We argue in Sec.~\ref{subsec:discontinuity} that the discontinuity in the solution $\delta \phi_{{\mbox{\tiny P}}}$ at $\delta m = 0.2$ is of no concern due to the high faithfulness between the two approximations at the boundary. \subsubsection{Solutions to higher order in $\sigma$} Going to higher order in $\sigma$ is straightforward, except perhaps for the treatment of certain timescale mixing that will generically occur; see e.g.~Eqs.~(\ref{eq:eqQp11}-\ref{eq:eqQm21}) at higher order in $\sigma$. In order to exemplify how a higher-order in $\sigma$ calculation would proceed, let us return to Eq.~\eqref{eq:eqQp11}. This equation is that of a harmonic oscillator with frequency $\omega_{{\mbox{\tiny P}},+}$, sourced by an oscillatory term of frequency $\omega_{{\mbox{\tiny P}},-}$. Let us transform the left-hand side of this equation from $\phi_{{\mbox{\tiny P}},+}$ to $t$: \begin{align} \pdfrac{^2Q_{+,1}^{(1,1)}}{\phi_{{\mbox{\tiny P}},+}^2} = \frac{1}{\omega_{{\mbox{\tiny P}},+}^2} \pdfrac{^2Q_{+,1}^{(1,1)}}{t^2} - \frac{1}{\omega_{{\mbox{\tiny P}},+}^{3}} \frac{d \omega_{{\mbox{\tiny P}},+}}{dt} \pdfrac{Q_{+,1}^{(1,1)}}{t}. \end{align} The last term in this equation contains a $d \omega_{{\mbox{\tiny P}},+}/dt$ factor, which introduces an extra factor of $\sigma$, and must therefore be kept to ${\cal{O}}(\sigma^{2})$.
Equation~\eqref{eq:eqQp11} then becomes \begin{align} \pdfrac{^2Q_{+,1}^{(1,1)}}{t^2} &+ \omega_{{\mbox{\tiny P}},+}^2 Q_{+,1}^{(1,1)} = a \Big[ \mathbb{F}_{+0} A_{0,2}^{(1,0)} \nonumber\\ &+ \mathbb{F}_{+-} \left( A_{-,2}^{(1,0)} \cos \phi_{{\mbox{\tiny P}},-} + A_{-,1}^{(1,0)} \sin \phi_{{\mbox{\tiny P}},-} \right) \Big], \label{eq:eqQp111} \end{align} and its solution is \begin{align} Q_{+,1}^{(1,1)} &= A_{+,1}^{(1,1)}(\psi_{{\mbox{\tiny P}},+}) \cos \phi_{{\mbox{\tiny P}},+} - A_{+,2}^{(1,1)}(\psi_{{\mbox{\tiny P}},+}) \sin \phi_{{\mbox{\tiny P}},+} \nonumber\\ &- \frac{a}{\omega_{{\mbox{\tiny P}},+} - \omega_{{\mbox{\tiny P}},-}} \left[ \frac{m_2^2 \chi_2}{m_1(m_1-m_2)} \xi + \mathcal{O}\left(c^{-2}\right) \right] \nonumber\\ &\times \left( A_{-,2}^{(1,0)} \cos \phi_{{\mbox{\tiny P}},-} + A_{-,1}^{(1,0)} \sin \phi_{{\mbox{\tiny P}},-} \right). \end{align} Notice that the above solution has a pole at $\omega_{{\mbox{\tiny P}},+} = \omega_{{\mbox{\tiny P}},-}$, because the terms oscillating with frequency $\omega_{{\mbox{\tiny P}},-}$ drive a resonance in Eq.~\eqref{eq:eqQp111}. In practice, however, this happens only when $\chi_2 = 0$ and at a single point in time. Such a limit, therefore, must be excluded from higher-order solutions. We can now use this ${\cal{O}}(\epsilon,\sigma)$ solution in the source to the ${\cal{O}}(\epsilon,\sigma^{2})$ differential equation and require that terms oscillating at frequency $\omega_{{\mbox{\tiny P}},+}$ vanish so as not to produce secularly growing terms. This would then lead to differential equations for $A_{+,j}^{(1,1)}(\psi_{{\mbox{\tiny P}},+})$, just as we obtained for $A_{+,j}^{(1,0)}(\psi_{{\mbox{\tiny P}},+})$ at ${\cal{O}}(\epsilon,\sigma)$. The solution to these equations would then lead to a solution to the precession equations at ${\cal{O}}(\epsilon,\sigma)$. We will not carry out a higher-order development here. 
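The structure of this particular solution can be illustrated with a minimal numerical sketch (ours, not part of the waveform model): for a constant-coefficient toy oscillator $\ddot{q} + \omega_+^2 q = \cos(\omega_- t)$, the particular solution carries a $1/(\omega_+^2 - \omega_-^2)$ prefactor whose divergence as $\omega_+ \to \omega_-$ is the constant-frequency analogue of the resonance pole discussed above.

```python
import math

def particular_solution(omega_p, omega_m, t):
    """Particular solution of q'' + omega_p^2 q = cos(omega_m t); its amplitude
    diverges as omega_p -> omega_m, the analogue of the resonance pole above."""
    return math.cos(omega_m * t) / (omega_p ** 2 - omega_m ** 2)

def ode_residual(omega_p, omega_m, t, h=1e-4):
    """Central-difference check that the particular solution satisfies the ODE."""
    q = lambda s: particular_solution(omega_p, omega_m, s)
    qdd = (q(t + h) - 2.0 * q(t) + q(t - h)) / h ** 2
    return qdd + omega_p ** 2 * q(t) - math.cos(omega_m * t)
```

The residual vanishes to finite-difference accuracy away from the resonance, and the amplitude grows without bound as the two frequencies approach each other, which is why the limit $\omega_{{\mbox{\tiny P}},+} = \omega_{{\mbox{\tiny P}},-}$ must be excluded from higher-order solutions.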
\subsection{Summary} Let us here collect all the pieces of the solution to ${\cal{O}}(\epsilon,\sigma)$ obtained in the previous subsections. Using the initial conditions $\uvec{J}(0) = \uvec{z}$, we can write \begin{align} \bm{W}_j^{(1)}(t=0) &= \left( \begin{array}{c} -S_{1,k}^{(1)}(t=0) - S_{2,k}^{(1)}(t=0)\\ S_{1,k}^{(1)}(t=0)\\ S_{2,k}^{(1)}(t=0) \end{array} \right), \end{align} where $j = 1$ and $k=x$, or $j=2$ and $k=y$. Furthermore, \begin{align} \bm{Q}_j^{(1)}(t=0) = \left( \begin{array}{c} B_{0,j}^{(1,0)}\\ B_{+,j}^{(1,0)}\\ B_{-,j}^{(1,0)} \end{array} \right) + \mathcal{O}(\sigma, c^{-1}), \end{align} and since \begin{align} \bm{Q}_j^{(1)}(t=0) = \mathbb{R}^{-1}(t=0) \bm{W}_j^{(1)}(t=0), \end{align} we find that \begin{align} B_{0,j}^{(1,0)} = 0. \end{align} This happens because we chose $\uvec{z} = \uvec{J}(t=0)$. Any equivalent but different choice would result in a nonvanishing $B_{0,j}^{(1,0)}$. The solution for the orbital angular momentum is then \begin{align} \label{eq:LzNAS} L_z &= \frac{\mu M}{\xi} + \mathcal{O} \left(\epsilon^2\right), \\ L_x &= \epsilon \Big( B_{+,1}^{(1,0)} \cos \phi_{{\mbox{\tiny P}},+} - B_{+,2}^{(1,0)} \sin \phi_{{\mbox{\tiny P}},+} \nonumber\\ &+ B_{-,1}^{(1,0)} \cos \phi_{{\mbox{\tiny P}},-} - B_{-,2}^{(1,0)} \sin \phi_{{\mbox{\tiny P}},-} \Big) + \mathcal{O} \left(\sigma,\epsilon^3\right), \label{eq:LxNAS}\\ L_y &= \epsilon \Big( B_{+,2}^{(1,0)} \cos \phi_{{\mbox{\tiny P}},+} + B_{+,1}^{(1,0)} \sin \phi_{{\mbox{\tiny P}},+} \nonumber\\ &+ B_{-,2}^{(1,0)} \cos \phi_{{\mbox{\tiny P}},-} + B_{-,1}^{(1,0)} \sin \phi_{{\mbox{\tiny P}},-} \Big) + \mathcal{O} \left(\sigma,\epsilon^3\right), \label{eq:LyNAS} \end{align} where $\xi$ is given by Eq.~\eqref{eq:xiofzeta}, the precession angles $\phi_{{\mbox{\tiny P}},\pm}$ are shown in Sec.~\ref{sec:precphases} and Appendix~\ref{app:precphases}, and $B_{i,j}^{(1,0)}$ are given in Appendix~\ref{app:RandRm1}.
The $z$-components of the spins are simply \begin{align} S_{1,z} &= m_1^2 \chi_1 + \mathcal{O} \left(\epsilon^2\right), \\ S_{2,z} &= m_2^2 \chi_2 + \mathcal{O} \left(\epsilon^2\right), \end{align} and the $x$ and $y$-components are \begin{widetext} \begin{align} S_{1,x} = \epsilon \bigg[& \frac{2(g-b)}{b+c - (d-e)-(f+g) + \delta \omega_p} (B_{+,1}^{(1,0)} \cos \phi_{{\mbox{\tiny P}},+} - B_{+,2}^{(1,0)} \sin \phi_{{\mbox{\tiny P}},+}) \nonumber\\ &+\frac{2(g-b)}{b+c- (d-e)-(f+g) - \delta \omega_p} (B_{-,1}^{(1,0)} \cos \phi_{{\mbox{\tiny P}},-} - B_{-,2}^{(1,0)} \sin \phi_{{\mbox{\tiny P}},-}) \bigg] + \mathcal{O}(\sigma, \epsilon^3), \\ S_{1,y} = \epsilon \bigg[& \frac{2(g-b)}{b+c - (d-e)-(f+g) + \delta \omega_p} (B_{+,2}^{(1,0)} \cos \phi_{{\mbox{\tiny P}},+} + B_{+,1}^{(1,0)} \sin \phi_{{\mbox{\tiny P}},+}) \nonumber\\ &+\frac{2(g-b)}{b+c- (d-e)-(f+g) - \delta \omega_p} (B_{-,2}^{(1,0)} \cos \phi_{{\mbox{\tiny P}},-} + B_{-,1}^{(1,0)} \sin \phi_{{\mbox{\tiny P}},-}) \bigg] + \mathcal{O}(\sigma, \epsilon^3), \\ S_{2,x} = \epsilon \bigg[& \frac{2(f-c)}{b+c - (d-e)-(f+g) + \delta \omega_p} (B_{+,1}^{(1,0)} \cos \phi_{{\mbox{\tiny P}},+} - B_{+,2}^{(1,0)} \sin \phi_{{\mbox{\tiny P}},+}) \nonumber\\ &+\frac{2(f-c)}{b+c- (d-e)-(f+g) - \delta \omega_p} (B_{-,1}^{(1,0)} \cos \phi_{{\mbox{\tiny P}},-} - B_{-,2}^{(1,0)} \sin \phi_{{\mbox{\tiny P}},-}) \bigg] + \mathcal{O}(\sigma, \epsilon^3), \\ S_{2,y} = \epsilon \bigg[& \frac{2(f-c)}{b+c - (d-e)-(f+g) + \delta \omega_p} (B_{+,2}^{(1,0)} \cos \phi_{{\mbox{\tiny P}},+} + B_{+,1}^{(1,0)} \sin \phi_{{\mbox{\tiny P}},+}) \nonumber\\ &+\frac{2(f-c)}{b+c- (d-e)-(f+g) - \delta \omega_p} (B_{-,2}^{(1,0)} \cos \phi_{{\mbox{\tiny P}},-} + B_{-,1}^{(1,0)} \sin \phi_{{\mbox{\tiny P}},-}) \bigg] + \mathcal{O}(\sigma, \epsilon^3), \end{align} where $\delta \omega_p = \omega_{{\mbox{\tiny P}},+} - \omega_{{\mbox{\tiny P}},-}$. 
\end{widetext} \subsection{Comparison with Simple Precession} Another physically relevant case where the equations of precession can be solved analytically is simple precession. This occurs when one of the spins vanishes or when the masses are equal, provided we neglect spin-spin interactions. In simple precession, the orbital and spin angular momentum vectors precess around the total angular momentum vector at exactly the same frequency. Let us begin to study simple precession by rewriting the evolution equations without spin-spin couplings: \begin{align} \duvec{S}_A &= \frac{\omega^2}{M} \left( 2 + \frac{3 m_B}{2m_A} \right) \bm{L} \times \uvec{S}_A,\\ \duvec{L} &= \frac{\omega^2}{M} \left[ \left( 2 + \frac{3 m_2}{2m_1} \right) \bm{S}_1 + \left( 2 + \frac{3 m_1}{2m_2} \right) \bm{S}_2 \right] \times \uvec{L} , \end{align} where $\bm{L}$ as before is the Newtonian orbital angular momentum with norm $L = \mu M^{2/3} \omega^{-1/3}$. If either spin vanishes or if the masses are equal, the derivative of the total spin vector $\bm{S} = \bm{S}_1 + \bm{S}_2$ is perpendicular to $\bm{S}$. We can then rewrite the equations of precession for $\bm{S}$ and $\bm{L}$ as \begin{align} \dot{S} &= 0, \\ \dot{L} &= - \frac{1}{3} \frac{a_0}{M} \omega^{8/3} \left\{ 1 + \sum_{n \geq 2}^N a_n \omega^{n/3} \right\} L, \\ \duvec{S} &= \frac{\omega^2}{M} \left( 2 + \frac{3 m_{\mbox{\tiny V}}}{2m_{\mbox{\tiny NV}}} \right) \bm{J} \times \uvec{S},\\ \duvec{L} &= \frac{\omega^2}{M} \left( 2 + \frac{3 m_{\mbox{\tiny V}}}{2m_{\mbox{\tiny NV}}} \right) \bm{J} \times \uvec{L} , \end{align} where $\bm{J} = \bm{L} + \bm{S}$ is the total angular momentum vector, the vanishing spin, if any, is labeled by the subscript ${\mbox{\tiny V}}$, while the non-vanishing one is labeled by the subscript ${\mbox{\tiny NV}}$.
We then see that in simple precession both $\uvec{S}$ and $\uvec{L}$ precess around $\uvec{J}$ at a frequency \begin{align} \omega_{{\mbox{\tiny P}},sp} = \frac{\omega^2}{M} \left( 2 + \frac{3 m_{\mbox{\tiny V}}}{2m_{\mbox{\tiny NV}}} \right) J . \end{align} If the spins are only slightly misaligned with the orbital angular momentum, we have to leading order in $\epsilon$ \begin{align} J &= L + S_1 + S_2 = \frac{\mu M}{\xi} + m_{\mbox{\tiny NV}}^2 \chi_{\mbox{\tiny NV}} + m_{\mbox{\tiny V}}^2 \chi_{\mbox{\tiny V}} , \end{align} which then leads to \begin{align} M \omega_{{\mbox{\tiny P}},sp} =& \left( 2 + \frac{3 m_{\mbox{\tiny V}}}{2m_{\mbox{\tiny NV}}} \right) \bigg[ \nu \xi^5 + \frac{1}{M^2} \left(m_{\mbox{\tiny NV}}^2 \chi_{\mbox{\tiny NV}} + m_{\mbox{\tiny V}}^2 \chi_{\mbox{\tiny V}} \right) \xi^6 \bigg], \end{align} where we recall that $\omega = \xi^3/M$ and $\xi$ was defined in Eq.~\eqref{eq:xiofzeta}. Let us now return to our results for the near-alignment, multiple-scale analysis calculation. In order to map our results to those of simple precession, we must neglect spin-spin interactions, which naturally vanish in the single-spin case. This implies using the following relations: \begin{align} b &= \frac{\xi(t)^6}{M^3} \bigg[ \left( 2 + \frac{3m_2}{2m_1} \right) m_1^2 \chi_1 \bigg] , \\ c &= \frac{\xi(t)^6}{M^3} \bigg[ \left( 2 + \frac{3m_1}{2m_2} \right) m_2^2 \chi_2 \bigg] , \\ d &= \frac{\xi(t)^5}{M} \bigg[ \left( 2 + \frac{3m_2}{2m_1} \right) \nu \bigg] , \\ e &= \frac{\xi(t)^5}{M} \bigg[ \left( 2 + \frac{3m_1}{2m_2} \right) \nu \bigg] , \\ f &= 0 , \qquad g = 0. \end{align} Thus, in the equal-mass case we have \begin{align} M\omega_{{\mbox{\tiny P}},+} &= \frac{7}{8} \left[ \xi^5 + \xi^6 (\chi_1 + \chi_2 ) \right] , \\ M\omega_{{\mbox{\tiny P}},-} &= \frac{7}{8} \xi^5 , \\ B_{+,j}^{(1,0)} &= - S_{1,j} - S_{2,j} , \\ B_{-,j}^{(1,0)} &= 0 .
\end{align} On the other hand, in the case where one of the spins vanishes, $\chi_{{\mbox{\tiny V}}} = 0$, we get \begin{align} M\omega_{{\mbox{\tiny P}},+} &= \left( 2 + \frac{3 m_{\mbox{\tiny V}}}{2 m_{\mbox{\tiny NV}}} \right) \left( \nu \xi^5 + \frac{m_{\mbox{\tiny NV}}^2}{M^{2}} \chi_{\mbox{\tiny NV}} \xi^6 \right) , \\ M\omega_{{\mbox{\tiny P}},-} &= \left( 2 + \frac{3 m_{\mbox{\tiny NV}}}{2 m_{\mbox{\tiny V}}} \right) \nu \xi^5 ,\\ B_{+,j}^{(1,0)} &= - S_{{\mbox{\tiny NV}},j} , \\ B_{-,j}^{(1,0)} &= 0 , \end{align} provided \begin{align} \xi &\geq \xi_{c} \equiv \frac{3 \left( m_{\mbox{\tiny NV}}^2 - m_{\mbox{\tiny V}}^2 \right)}{ \left(4 m_{\mbox{\tiny NV}} + 3 m_{\mbox{\tiny V}} \right) m_{\mbox{\tiny NV}} \chi_{\mbox{\tiny NV}}}. \label{eq:chi0condition} \end{align} In the complementary case, when $\xi < \xi_{c}$, the results are the same modulo $\omega_{{\mbox{\tiny P}},+} \leftrightarrow \omega_{{\mbox{\tiny P}},-}$ and $B_{+,j}^{(1,0)} \leftrightarrow B_{-,j}^{(1,0)}$. We thus see clearly that, in the nearly aligned limit, our results reproduce exactly those of simple precession. That is, both in the equal-mass case and in the vanishing single-spin case, $\omega_{{\mbox{\tiny P}},+}$ becomes equal to $\omega_{{\mbox{\tiny P}},sp}$, while the $\omega_{{\mbox{\tiny P}},-}$ mode is irrelevant because its amplitude vanishes. An interesting transition occurs if $\xi_{c} > 0$: the only evolution frequency continuously switches between $\omega_{{\mbox{\tiny P}},+}$ and $\omega_{{\mbox{\tiny P}},-}$. This transition only occurs if the vanishing spin is $\chi_{2}$, because then $m_{{\mbox{\tiny V}}} = m_{2}$ and, by the conventions used in this paper, the numerator of Eq.~\eqref{eq:chi0condition} is positive, i.e.~$m_{{\mbox{\tiny NV}}}^{2} - m_{{\mbox{\tiny V}}}^{2} = m_{1}^{2} - m_{2}^{2} > 0$. The transition occurs at a particular time, given by $\xi = \xi_{c}$.
At this time, however, $\omega_{{\mbox{\tiny P}},+} = \omega_{{\mbox{\tiny P}},-}$, and thus, the transition is continuous. \section{Gravitational Waves} \label{sec:spa} The results of the previous sections can be used to derive a purely analytic time-domain waveform for precessing nearly aligned binaries. In the rest of this section, we will derive such a waveform. \subsection{Time-Domain Waveforms: \\ Standard Representation} An impinging GW will induce the following response in an interferometer with perpendicular arms in the long wavelength approximation: \begin{align} h(t) &= \sum_{n\geq0} \left[ F_+ h_{n,+} + F_\times h_{n,\times} \right], \label{h-time-domain}\\ h_{n,+} &= \mathcal{A}_{n,+}(i_L) \cos n\phi + \mathcal{B}_{n,+}(i_L) \sin n \phi , \label{h-time-domain-plus}\\ h_{n,\times} &= \mathcal{A}_{n,\times}(i_L) \cos n\phi + \mathcal{B}_{n,\times}(i_L) \sin n \phi, \label{h-time-domain-cross} \end{align} where $n \in \mathbb{N}$ is the harmonic number, $\cos i_L = \uvec{L} \cdot \uvec{N}$, and the antenna pattern functions are \begin{align} F_+(\theta_N, \phi_N, \psi_N) &= \frac{1}{2} \left( 1 + \cos^2 \theta_N \right) \cos 2\phi_N \cos 2\psi_N \nonumber\\ &- \cos \theta_N \sin 2\phi_N \sin 2\psi_N, \label{eq:Fplus}\\ F_\times(\theta_N, \phi_N, \psi_N) &= F_+(\theta_N,\phi_N,\psi_N-\pi/4), \label{eq:Fcross} \end{align} with $(\theta_N,\phi_N)$ the spherical angles that label the position of the binary in the detector frame, and $\psi_N$ the polarization angle defined through \begin{equation} \tan \psi_N = \frac{\uvec{L}\cdot\uvec{z} - (\uvec{L}\cdot\uvec{N}) (\uvec{z}\cdot\uvec{N})}{\uvec{N}\cdot( \uvec{L}\times\uvec{z} )}\,, \label{eq:psiN} \end{equation} where $\uvec{z}$ is the unit normal vector to the detector plane. The time-domain GW phase can be decomposed into a carrier phase and a precession perturbation $\phi = \phi_{{\mbox{\tiny C}}} + \delta\phi$~\cite{Apostolatos:1994mx}. 
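Eqs.~\eqref{eq:Fplus} and \eqref{eq:Fcross} can be transcribed directly; the sketch below (function names are ours, not from any waveform library) also checks that $F_+^2 + F_\times^2$ is independent of $\psi_N$, as it must be, since a polarization rotation merely mixes the two patterns.

```python
import math

def f_plus(theta_n, phi_n, psi_n):
    """Plus-polarization antenna pattern of Eq. (Fplus), long-wavelength limit."""
    return (0.5 * (1.0 + math.cos(theta_n) ** 2)
            * math.cos(2.0 * phi_n) * math.cos(2.0 * psi_n)
            - math.cos(theta_n) * math.sin(2.0 * phi_n) * math.sin(2.0 * psi_n))

def f_cross(theta_n, phi_n, psi_n):
    """Cross pattern of Eq. (Fcross): the plus pattern evaluated at psi_N - pi/4."""
    return f_plus(theta_n, phi_n, psi_n - math.pi / 4.0)
```

Writing $F_+ = \mathcal{A}\cos 2\psi_N - \mathcal{B}\sin 2\psi_N$ and $F_\times = \mathcal{A}\sin 2\psi_N + \mathcal{B}\cos 2\psi_N$ makes the $\psi_N$-independence of $F_+^2 + F_\times^2 = \mathcal{A}^2 + \mathcal{B}^2$ manifest.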
Defining the reference of the orbital phase in the orbital plane as $\uvec{L} \times \uvec{N}$, the equation of motion for the orbital phase is \begin{align} \label{eq:phieqmot} \dot{\phi} &= \dot{\phi}_{{\mbox{\tiny C}}} + \delta\dot{\phi}, \qquad \dot{\phi}_{{\mbox{\tiny C}}} = \omega , \\ \delta\dot{\phi} &= \frac{1}{L} \frac{\bm{L} \cdot \uvec{N}}{\bm{L}^2 - \left( \bm{L} \cdot \uvec{N} \right)^2} \left( \bm{L} \times \uvec{N} \right) \cdot \dvec{L}\,. \label{eq:phieqmot2} \end{align} The carrier $\phi_{{\mbox{\tiny C}}}$ is a secular, non-precessing phase, while the perturbation $\delta \phi$ models the precession of the orbital plane. \subsection{Time-Domain Waveforms: Fourier Representation} Before we can Fourier transform the GW response via uniform asymptotics, we need to first figure out the relative scale and variability of all relevant quantities. This is important as it will tell us which quantities can be safely left in the slowly-varying signal amplitude, and which ones have to be promoted to the rapidly-varying phase. The part of the amplitude that depends only on the sky location $(\theta_{N},\phi_{N})$ varies on the timescale of variation of the normal to the detector. For ground-based instruments, this is roughly $t_{{\mbox{\tiny det}}} \sim \mathcal{O}(1\ \mathrm{day})$, much larger than the typical observation time of $t_{{\mbox{\tiny obs}}} \sim \mathcal{O}(100\ \mathrm{s})$. For space-based instruments, $t_{{\mbox{\tiny det}}} \sim \mathcal{O}(1\ \mathrm{year})$, which is of the same order as the typical observation time, but bigger than the typical precession timescale $t_{{\mbox{\tiny prec}}} \sim \mathcal{O}(1\ \mathrm{month})$. This implies that it is safe to leave such terms in the slowly-varying signal amplitude. The different phases, however, can vary on a much shorter timescale. 
Using the equations of motion for $\phi$, one can show that \begin{align} \dot{\phi}_{{\mbox{\tiny C}}} &\sim \mathcal{O}(c^{-3}) , & \ddot{\phi}_{{\mbox{\tiny C}}} & \sim \mathcal{O}(c^{-11}), \nonumber\\ \delta\dot{\phi} &\sim \mathcal{O}(c^{-6}) , & \delta\ddot{\phi} & \sim \mathcal{O}(c^{-11})\,, \end{align} while from Eq.~\eqref{eq:psiN} and $\cos i_L = \uvec{L} \cdot \uvec{N}$, one finds \begin{align} \dot{\psi}_N &\sim \mathcal{O}(c^{-6}) , & \ddot{\psi}_N & \sim \mathcal{O}(c^{-11}), \nonumber\\ \dot{i}_L &\sim \mathcal{O}(c^{-6}) , & \ddot{i}_L & \sim \mathcal{O}(c^{-11}). \end{align} Clearly then, $\ddot{\phi}_{{\mbox{\tiny C}}}$, $\delta\ddot{\phi}$, $\ddot{\psi}_N$ and $\ddot{i}_L$ are all of the same order, and thus, they must all be promoted to the rapidly varying signal phase. The phase $\phi$ in $h_{n,+,\times}$ can be put into an exponential via Euler's formula. Similarly, the polarization angle $\psi_N$ can be included in the phase by rewriting the antenna pattern functions and the harmonic polarizations in Eqs.~\eqref{h-time-domain-plus} and~\eqref{h-time-domain-cross} via Euler's formula as \begin{align} F_+ &= \frac{1}{2} \left( \mathcal{A}_F + i \mathcal{B}_F \right) e^{2i\psi_N} + {\rm{c.c.}} \, ,\\ F_\times &= \frac{1}{2} \left( \mathcal{B}_F - i \mathcal{A}_F \right) e^{2i\psi_N} + {\rm{c.c.}} \, , \\ h_{n,+} &= \frac{1}{2} \left(\mathcal{A}_{n,+} - i \mathcal{B}_{n,+} \right) e^{in(\phi_{{\mbox{\tiny C}}} + \delta\phi)} + {\rm{c.c.}} \, , \\ h_{n,\times} &= \frac{1}{2} \left(\mathcal{A}_{n,\times} - i \mathcal{B}_{n,\times} \right) e^{in(\phi_{{\mbox{\tiny C}}} + \delta\phi)} + {\rm{c.c.}} \,, \end{align} where we have defined the slowly-varying amplitudes \begin{align} \mathcal{A}_F &= \frac{1}{2} \left( 1 + \cos^2 \theta_N \right) \cos 2\phi_N, \\ \mathcal{B}_F &= \cos \theta_N \sin 2\phi_N\,. 
\end{align} Finally, the inclination angle $i_{L}$ can also be included in the signal phase if the amplitudes $\mathcal{A}_{n,+}$, $\mathcal{B}_{n,+}$, $\mathcal{A}_{n,\times}$, and $\mathcal{B}_{n,\times}$ are rewritten as Fourier series. Combining all of these results, one can rewrite Eq.~\eqref{h-time-domain} as \begin{align} h(t) &= \sum_{n \geq 0} \ \sum_{k \in \mathbb{Z}} \ \sum_{m = -2,2} \ h_{n,k,m}(t), \\ h_{n,k,m} &= \mathcal{A}_{n,k,m}(\theta_N, \phi_N) e^{i ( n\phi_{{\mbox{\tiny C}}} + n\delta\phi + k i_L + m \psi_N)} + \rm{c.c.}, \label{eq:hoft} \end{align} where the slowly-varying amplitudes $\mathcal{A}_{n,k,m}$ can be computed from~\cite{abiq} and are also given explicitly at 2PN order in Appendix~\ref{app:amplitudes}. \subsection{Preparing for a Uniform Asymptotic Expansion} \label{sec:timedomainphases} The time-domain waveform in Eq.~\eqref{eq:hoft} is almost ready for a uniform asymptotic treatment; the last step is to convert $\phi_{{\mbox{\tiny C}}} + \delta \phi + \psi_{N} + i_{L}$ into a phase of the form $\Phi_{{\mbox{\tiny C}}} + \alpha(t) \cos(\beta(t))$ as in Eq.~\eqref{eq:correctionsimple}. Recall that here $\alpha(t)$ varies on the radiation reaction timescale, while $\beta(t)$ varies on the precession timescale. The carrier phase can be solved for using standard techniques as a function of the orbital frequency: \begin{align} \phi_{{\mbox{\tiny C}}} &= \int \frac{\xi^3}{M} dt = \int \frac{\xi^2}{Ma} d\xi \nonumber\\ &= \phi_{\mbox{\tiny coal}} - \frac{3}{5 a_0} \xi^{-5} \bigg[ 1 - \frac{5 a_2}{3} \xi^2 - \frac{5}{2} a_3 \xi^3 \nonumber\\ &- 5 \left( a_4 - a_2^2 \right) \xi^4 + 5 \left( a_5 - 2 a_2 a_3 \right) \xi^5 \log\xi \bigg] + \mathcal{O}\left( c^{-1}\right), \label{eq:phi0} \end{align} where we used $d\xi/dt = a\xi$ with $\xi = (M\omega)^{1/3} + \mathcal{O}(\epsilon^2)$ as in the previous section; recall that the $a_{i}$ coefficients are given in Appendix~\ref{app:freqevol}, and should be evaluated at $t=0$.
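Eq.~\eqref{eq:phi0} can be spot-checked by comparing the closed form against a direct quadrature of $d\phi_{{\mbox{\tiny C}}}/d\xi = \xi^{2}/(Ma)$. The sketch below does so with $M = 1$ and purely illustrative values of the $a_{i}$ (the physical coefficients are those of Appendix~\ref{app:freqevol}); the function names are ours.

```python
import math

# Illustrative stand-ins for the a_i of the frequency evolution (not physical values).
A0, A2, A3, A4, A5 = 0.1, 0.3, -0.2, 0.15, 0.05

def a_of_xi(xi):
    """Drift rate a(xi) in d(xi)/dt = a*xi, modeled as a truncated series (M = 1)."""
    return (A0 / 3.0) * xi ** 8 * (1.0 + A2 * xi ** 2 + A3 * xi ** 3
                                   + A4 * xi ** 4 + A5 * xi ** 5)

def phi_carrier(xi):
    """Closed-form carrier phase of Eq. (phi0), with phi_coal = 0 and M = 1."""
    return -(3.0 / (5.0 * A0)) * xi ** -5 * (
        1.0 - (5.0 * A2 / 3.0) * xi ** 2 - 2.5 * A3 * xi ** 3
        - 5.0 * (A4 - A2 ** 2) * xi ** 4
        + 5.0 * (A5 - 2.0 * A2 * A3) * xi ** 5 * math.log(xi))

def phi_quadrature(xi_lo, xi_hi, n=20000):
    """Simpson integration of d(phi_C)/d(xi) = xi^2 / a(xi)."""
    f = lambda x: x ** 2 / a_of_xi(x)
    h = (xi_hi - xi_lo) / n
    s = f(xi_lo) + f(xi_hi)
    for i in range(1, n):
        s += (4.0 if i % 2 else 2.0) * f(xi_lo + i * h)
    return s * h / 3.0
```

The two agree to the order kept in the bracket, with the residual set by the neglected $\mathcal{O}(\xi^6)$ terms of the re-expanded integrand.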
In our implementation, in order to isolate the effects of spin precession, we artificially increase the order of the above equation to 6PN; that is, we keep terms in $\dot{\omega}$ up to relative $\mathcal{O}(c^{-5})$, but we also keep all induced terms up to relative $\mathcal{O}(c^{-12})$. The resulting PN series contains terms of relative $\mathcal{O}(c^{-6})$ and higher that will not match the expected result from full GR; yet, they provide a more accurate result for the integration of the truncated $\dot{\omega}$ equation relative to a numerical solution. The exact form of Eq.~\eqref{eq:phi0} that we used in our comparisons is given in Appendix~\ref{app:freqevol}. In principle, the remaining phase terms can be rewritten in the desired form only if one first distinguishes between two complementary cases: \begin{align} &\mbox{case (i):} & \hat{N}_x^2 + \hat{N}_y^2 \sim \mathcal{O}(\epsilon^0) , \nonumber\\ &\mbox{case (ii):} & \hat{N}_x \lesssim \mathcal{O}(\epsilon), \quad \hat{N}_y \lesssim \mathcal{O}(\epsilon) . \nonumber \end{align} Assuming case (i), the equation of motion for the correction to the orbital phase is \begin{align} \delta \dot{\phi} &= \epsilon \frac{\hat{N}_z}{1 - \hat{N}_z^2}\Bigg[ \hat{N}_x \left( \frac{\dot{L}_y^{(1)}}{L_z^{(0)}} - \frac{\dot{L}_z^{(0)} L_y^{(1)}}{\left.L_z^{(0)}\right.^2} \right) \nonumber\\ &- \hat{N}_y \left( \frac{\dot{L}_x^{(1)}}{L_z^{(0)}} - \frac{\dot{L}_z^{(0)} L_x^{(1)}}{\left.L_z^{(0)}\right.^2} \right) \Bigg] + \mathcal{O}\left(\epsilon^2\right) \nonumber\\ &= \epsilon \frac{\hat{N}_z}{1 - \hat{N}_z^2} \frac{d}{dt} \left[ \hat{N}_x \frac{L_y^{(1)}}{L_z^{(0)}} - \hat{N}_y \frac{L_x^{(1)}}{L_z^{(0)}} \right] + \mathcal{O}\left(\epsilon^2\right)\,, \end{align} and therefore \begin{align} \delta\phi = \epsilon \frac{\hat{N}_z}{1 - \hat{N}_z^2} \left[ \hat{N}_x \frac{L_y^{(1)}}{L_z^{(0)}} - \hat{N}_y \frac{L_x^{(1)}}{L_z^{(0)}} \right] + \mathcal{O}\left(\epsilon^2\right).
\label{eq:dphi1} \end{align} Similarly, in case (i) the inclination phase becomes \begin{align} i_L &= \arccos \left( \hat{N}_z \right) - \epsilon \frac{(L_x^{(1)} \hat{N}_x + L_y^{(1)} \hat{N}_y)}{L_z^{(0)} \sqrt{\hat{N}_x^2 + \hat{N}_y^2}} + \mathcal{O}\left( \epsilon^2 \right), \end{align} and the polarization angle is \begin{widetext} \begin{align} \psi_N &= \arctan \left( \frac{\hat{z}_z - \hat{N}_z \uvec{N} \cdot \uvec{z}}{\hat{N}_y \hat{z}_x - \hat{N}_x \hat{z}_y} \right) + \epsilon \bigg\{ \frac{(\hat{N}_y \hat{z}_x - \hat{N}_x \hat{z}_y)[\hat{L}_x^{(1)} \hat{z}_x + \hat{L}_y^{(1)} \hat{z}_y - (\hat{L}_x^{(1)} \hat{N}_x + \hat{L}_y^{(1)} \hat{N}_y) \uvec{N} \cdot \uvec{z}]}{L_z^{(0)} [(\hat{N}_y \hat{z}_x - \hat{N}_x \hat{z}_y)^2 + (\hat{z}_z - \hat{N}_z \uvec{N} \cdot \uvec{z})^2]} \nonumber\\ &+ \frac{(\hat{z}_z - \hat{N}_z \uvec{N} \cdot \uvec{z} )[L_x^{(1)}(\hat{N}_y \hat{z}_z - \hat{N}_z \hat{z}_y) + L_y^{(1)}(\hat{N}_z \hat{z}_x - \hat{N}_x \hat{z}_z)] }{L_z^{(0)} [(\hat{N}_y \hat{z}_x - \hat{N}_x \hat{z}_y)^2 + (\hat{z}_z - \hat{N}_z \uvec{N} \cdot \uvec{z})^2]} \bigg\} . \end{align} \end{widetext} Case (ii) leads at first to different expressions, but when these are re-expanded in the PN approximation, assuming that $L_{x,y}^{(1)}/L_{z}^{(0)} \ll 1$, one recovers the above expressions. Furthermore, the expansions for $\psi_N$ also depend on whether $\uvec{z}$ is nearly aligned with $\uvec{N}$ or not. But as before, the results obtained when they are not aligned are recovered by re-expanding the nearly aligned result in a PN expansion. However, we expect our result to yield a poorer match to the numerical solutions when $\uvec{N}$ and $\uvec{z}$ are nearly aligned. Using the results from Eqs.~(\ref{eq:LzNAS}-\ref{eq:LyNAS}), we can express $i_{L}$, $\psi_{N}$ and $\delta \phi$ in a Fourier series of the precession phases $\phi_{{\mbox{\tiny P}},+}$ and $\phi_{{\mbox{\tiny P}},-}$.
That is, we can rewrite the phase modulation in Eq.~\eqref{eq:hoft} as \begin{widetext} \begin{align} n\delta\phi + k i_L + m \psi_N &= A_{0,n,k,m} + n A_{\delta\phi,+} \cos (\phi_{{\mbox{\tiny P}},+} + \phi_{\delta\phi,+}^{(0)}) + n A_{\delta\phi,-} \cos (\phi_{{\mbox{\tiny P}},-} + \phi_{\delta\phi,-}^{(0)}) + k A_{i_L,+} \cos (\phi_{{\mbox{\tiny P}},+} + \phi_{i_L,+}^{(0)}) \nonumber\\ &+ k A_{i_L,-} \cos (\phi_{{\mbox{\tiny P}},-} + \phi_{i_L,-}^{(0)}) + m A_{\psi_N,+} \cos (\phi_{{\mbox{\tiny P}},+} + \phi_{\psi_N,+}^{(0)}) + m A_{\psi_N,-} \cos (\phi_{{\mbox{\tiny P}},-} + \phi_{\psi_N,-}^{(0)}) + {\cal{O}}(\epsilon,c^{-1}) \nonumber\\ &=A_{0,n,k,m} + A_{+,n,k,m} \cos(\phi_{{\mbox{\tiny P}},+} + \phi_{+,n,k,m}) + A_{-,n,k,m} \cos(\phi_{{\mbox{\tiny P}},-} + \phi_{-,n,k,m}) + {\cal{O}}(\epsilon,c^{-1}), \label{conversion} \end{align} \end{widetext} where the amplitudes $A_{0,n,k,m}$ and $A_{\alpha,\pm}$, and the phases $\phi_{\alpha,\pm}^{(0)}$ are given in Appendix~\ref{app:amplitudeandphasemodulations}, while the harmonic amplitudes are given by \begin{align} A_{\pm,n,k,m} &= \mbox{sign}(A_{c,\pm}) \sqrt{A_{c,\pm}^2 + A_{s,\pm}^2}, \\ \phi_{\pm,n,k,m} &= \arctan \left( \frac{A_{s,\pm}}{A_{c,\pm}} \right), \\ A_{c,\pm} &= n A_{\delta\phi,\pm} \cos (\phi_{\delta\phi,\pm}^{(0)}) + k A_{i_L,\pm} \cos (\phi_{i_L,\pm}^{(0)}) \nonumber\\ &+ m A_{\psi_N,\pm} \cos ( \phi_{\psi_N,\pm}^{(0)}), \\ A_{s,\pm} &= n A_{\delta\phi,\pm} \sin (\phi_{\delta\phi,\pm}^{(0)}) + k A_{i_L,\pm} \sin (\phi_{i_L,\pm}^{(0)}) \nonumber\\ &+ m A_{\psi_N,\pm} \sin ( \phi_{\psi_N,\pm}^{(0)}) . \end{align} This then puts the time-domain waveforms in the desired form to carry out a uniform asymptotic expansion. Before proceeding, let us comment on the remainders of Eq.~\eqref{conversion}. In going from the left-hand side of this equation to the right-hand side, we have neglected terms of ${\cal{O}}(\epsilon)$ and terms of ${\cal{O}}(c^{-1})$, when $\uvec{N}$ and $\uvec{J}$ are aligned.
When these vectors are misaligned, the remainders are actually smaller, namely of ${\cal{O}}(\epsilon^{2})$. We will see later on that the neglect of higher-order terms in $c^{-1}$ is the dominant source of discrepancy between our analytic frequency-domain waveforms and the DFT of numerical time-series waveforms. \subsection{Frequency-Domain Gravitational Waveform via Uniform Asymptotic Expansions} \label{sec:GW} We are interested in the Fourier transform of the GW signal. Taking advantage of the linearity of the Fourier transform, Eq.~\eqref{hf} can be rewritten as \begin{align} \tilde{h}(f) &= \sum_{n \geq 0} \ \sum_{k \in \mathbb{Z}} \ \sum_{m = -2,2} \ \tilde{h}_{n,k,m}(t), \end{align} where the Fourier harmonic components are \begin{align} \tilde{h}_{n,k,m}(f) &= \int \mathcal{A}_{n,k,m}^* e^{i(2 \pi ft - n \phi_{{\mbox{\tiny C}}} - n \delta\phi - k i_L - m \psi_N)} dt \nonumber\\ &+ \int \mathcal{A}_{n,k,m} e^{i(2 \pi ft + n \phi_{{\mbox{\tiny C}}} + n \delta\phi + k i_L + m \psi_N)} dt, \label{eq:htildedef} \end{align} and recall that the star denotes complex conjugation. Our particular asymptotic uniformization requires that we transform the above integrands via \begin{align} &e^{-i(n \delta\phi + k i_L + m \psi_N)} \nonumber\\ &= e^{-i A_{0,n,k,m}} \sum_{\{k_+,k_-\} \in \mathbb{Z}^2} J_{k_+} ( A_{+,n,k,m} ) J_{k_-} ( A_{-,n,k,m} ) \nonumber\\ &\times e^{-i [k_+ (\phi_{{\mbox{\tiny P}},+} + \phi_{+,n,k,m} + \pi/2) + k_- (\phi_{{\mbox{\tiny P}},-} + \phi_{-,n,k,m} + \pi/2)]}, \end{align} and similarly for the second term. After this transformation, we can apply the SPA to compute the integrals in Eq.~\eqref{eq:htildedef}. Since $\dot{\phi}_{{\mbox{\tiny C}}} \gg \dot{\phi}_{{\mbox{\tiny P}},\pm}$, we can safely neglect the second term as it will only contribute to negative frequencies. 
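The transformation above is the Jacobi--Anger expansion applied to each precession phase. A self-contained sketch (the series implementation of $J_k$ is ours, included only for self-containedness) verifies the single-phase identity $e^{-iA\cos\theta} = \sum_k J_k(A)\, e^{-ik(\theta+\pi/2)}$ numerically:

```python
import cmath
import math

def bessel_j(k, x, terms=30):
    """Bessel function of the first kind, integer order k >= 0, via its power series."""
    return sum((-1.0) ** m / (math.factorial(m) * math.factorial(m + k))
               * (x / 2.0) ** (2 * m + k) for m in range(terms))

def jn(k, x):
    """Any integer order, using J_{-k}(x) = (-1)^k J_k(x)."""
    return (-1.0) ** (-k) * bessel_j(-k, x) if k < 0 else bessel_j(k, x)

def phase_expansion(amp, theta, kmax=12):
    """Truncated right-hand side of the expansion used in the text."""
    return sum(jn(k, amp) * cmath.exp(-1j * k * (theta + math.pi / 2.0))
               for k in range(-kmax, kmax + 1))
```

Since the modulation amplitudes are small, $J_k(A) \sim (A/2)^k/k!$ decays rapidly with $k$, so only a few terms of the double sum over $(k_+, k_-)$ are needed in practice.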
We then obtain \begin{widetext} \begin{align} \tilde{h}_{n,k,m}(f) &= \sum_{\{k_+,k_-\} \in \mathbb{Z}^2} \sqrt{\frac{2\pi}{\ddot{\phi}_{{\mbox{\tiny C}}} + k_+ \ddot{\phi}_{{\mbox{\tiny P}},+} + k_- \ddot{\phi}_{{\mbox{\tiny P}},-}}} \mathcal{A}_{n,k,m}^* J_{k_+} ( A_{+,n,k,m} ) J_{k_-} (A_{-,n,k,m} ) \nonumber \\ & \qquad \times\exp[ i (2\pi f t - n \phi_{{\mbox{\tiny C}}} - A_{0,n,k,m} - k_+ (\phi_{{\mbox{\tiny P}},+} + \phi_{+,n,k,m} + \pi/2) - k_- (\phi_{{\mbox{\tiny P}},-} + \phi_{-,n,k,m} + \pi/2) - \pi/4)], \label{htildef-nkm} \end{align} \end{widetext} where all time-dependent functions are evaluated at $t=t_{\mbox{\tiny SPA}}$, defined via \begin{align} 2 \pi f = n\dot{\phi}_{{\mbox{\tiny C}}}(t_{\mbox{\tiny SPA}}) + k_+ \dot{\phi}_{{\mbox{\tiny P}},+}(t_{\mbox{\tiny SPA}}) + k_- \dot{\phi}_{{\mbox{\tiny P}},-}(t_{\mbox{\tiny SPA}}). \end{align} We can invert the above equation to find \begin{align} \xi &= u - \frac{1}{24 n} [ k_+(7 + 6 \delta m - \delta m^2) + k_- (7 - 6 \delta m - \delta m^2) ] u^3 \nonumber\\ &+ \frac{1}{24 n} \{ k_+ [ 2( 1 - \delta m)^2 \chi_2 - (7 + 8 \delta m + \delta m^2)\chi_1] \nonumber\\ &+ k_- [ 2( 1 + \delta m)^2 \chi_1 - (7 - 8 \delta m + \delta m^2)\chi_2] \} u^4 + \mathcal{O}(u^5), \label{xi-eq} \end{align} where the dimensionless mass difference $\delta m = (m_1 - m_2)/M$, and we have defined the reduced frequency parameter \begin{align} u &= \left(\frac{2\pi M f}{n}\right)^{1/3}, \label{u-eq} \end{align} and integrate $t = \int (d\xi/dt)^{-1} d\xi$ to find \begin{align} t_{\mbox{\tiny SPA}} &= t_{\mbox{\tiny coal}} - \frac{3M}{8a_0} \xi^{-8} \bigg[ 1 - \frac{4 a_2}{3} \xi^2 - \frac{8 a_3}{5} \xi^3 \nonumber\\ &+ 2 \left( a_2^2 - a_4 \right) \xi^4 + \frac{8}{3} \left( 2 a_2 a_3 - a_5 \right) \xi^5 + \mathcal{O}(c^{-6}) \bigg] .\label{eq:toff} \end{align} In our implementation, similar to Eq.~\eqref{eq:phi0}, we chose to artificially increase the above equation to 6PN.
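Like Eq.~\eqref{eq:phi0}, Eq.~\eqref{eq:toff} can be spot-checked against a direct quadrature of $dt/d\xi = 1/(a\xi)$. The sketch below again sets $M = 1$ and uses purely illustrative values of the $a_{i}$ (not the physical coefficients of Appendix~\ref{app:freqevol}):

```python
import math

# Illustrative stand-ins for the a_i coefficients (not the physical PN values).
A0, A2, A3, A4, A5 = 0.1, 0.3, -0.2, 0.15, 0.05

def a_of_xi(xi):
    """a(xi) in d(xi)/dt = a*xi, as a truncated series (M = 1)."""
    return (A0 / 3.0) * xi ** 8 * (1.0 + A2 * xi ** 2 + A3 * xi ** 3
                                   + A4 * xi ** 4 + A5 * xi ** 5)

def t_spa(xi):
    """Closed-form time-frequency relation of Eq. (toff), with t_coal = 0 and M = 1."""
    return -(3.0 / (8.0 * A0)) * xi ** -8 * (
        1.0 - (4.0 * A2 / 3.0) * xi ** 2 - (8.0 * A3 / 5.0) * xi ** 3
        + 2.0 * (A2 ** 2 - A4) * xi ** 4
        + (8.0 / 3.0) * (2.0 * A2 * A3 - A5) * xi ** 5)

def t_quadrature(xi_lo, xi_hi, n=20000):
    """Simpson integration of dt/d(xi) = 1/(a(xi) * xi)."""
    f = lambda x: 1.0 / (a_of_xi(x) * x)
    h = (xi_hi - xi_lo) / n
    s = f(xi_lo) + f(xi_hi)
    for i in range(1, n):
        s += (4.0 if i % 2 else 2.0) * f(xi_lo + i * h)
    return s * h / 3.0
```

The residual is again controlled by the neglected $\mathcal{O}(\xi^6)$ terms of the re-expanded integrand.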
The exact result can be found in Appendix~\ref{app:freqevol}. By inspecting the results of Sec.~\ref{sec:timedomainphases}, we can see that $A_{\pm,n,k,m} \sim \mathcal{O} (c^{-1})$, and therefore the Bessel functions $J_{k_\pm}(A_{\pm,n,k,m})$ will be rapidly suppressed for high values of $k_{\pm}$. This suggests that only a few terms may be needed in the Bessel expansion to accurately approximate the Fourier transform of the time-domain signal. \section{Waveform Comparisons} \label{sec:comp} In this section, we study how well the analytic frequency-domain waveform calculated in the previous section compares to others presented in the literature. First, we compare the phase and amplitude of the waveforms against each other. Then, we use a {\emph{faithfulness}} measure to carry out integrated comparisons, without maximizing over intrinsic parameters. We perform a Monte Carlo study over a variety of systems with different spin misalignments, positions in the sky and relative orientation with respect to the detector plane. \subsection{Comparison Preliminaries} \subsubsection{Waveform Models} The waveforms we compare against each other are the following: \begin{itemize} \item {\bf{DFT:}} The discrete Fourier transform of the numerically-calculated, time-domain response function, tapered by a Tukey window to remove spectral leakage. The time-domain response is constructed from Eq.~\eqref{h-time-domain}, with all angular momenta and phases obtained numerically by solving the evolution equations in the time-domain. \item {\bf{UAA:}} The fully-analytic, frequency-domain, uniform asymptotic approximate waveform of Sec.~\ref{sec:GW}. 
\item {\bf{HSPA~\cite{Lang:1900bz}:}} A hybrid, semi-analytic, frequency-domain template, given by the non-precessing, spin-aligned SPA waveform with higher harmonics (un-restricted PN), where the spin couplings are promoted {\emph{a posteriori}} to functions of the frequency and the phase is enhanced by the precession correction $\delta \phi$ obtained by numerically integrating Eq.~\eqref{eq:phieqmot2}. All angular momenta are obtained by solving all evolution equations numerically in the time-domain, and then numerically inverting them to find $\bm{S}_{1,2}$ and $\bm{L}$ as a function of orbital frequency. \item {\bf{Aligned SPA:}} A non-precessing, spin-aligned, frequency-domain waveform, computed in the SPA with higher harmonics (un-restricted PN). \end{itemize} The different waveforms described above have different advantages and disadvantages. Perhaps the most accurate one is the DFT family, where the only mis-modeling systematic is induced by numerical error, from the numerical solutions to the evolution equations and the DFT. Unfortunately, however, this is also the most computationally expensive family to evaluate and the one that provides the least analytical insight. The aligned SPA family contains the largest mis-modeling systematics, since it attempts to model the system as non-precessing, but it is also the cheapest to evaluate. The HSPA family is somewhere in between the DFT and aligned SPA families: it is computationally less expensive than the DFT, but it contains some systematics due to the improper use of the stationary phase approximation. Moreover, the HSPA family is more expensive to evaluate than the fully analytic waveforms, since each template requires the numerical solution and inversion of the evolution equations. Let us make an important clarification regarding the DFT family.
In multiple scale analysis, one usually compares approximations to some exact answer to determine, for example, the region of validity and accuracy of the former. Here, however, we lack such an exact solution. The DFT is perhaps the closest quantity to an exact Fourier transform that we possess, but of course, it is not an exact solution, as numerical error is non-negligible and filtering has been employed to prevent spectral leakage. We have checked, however, that the DFT is robust upon changes to the Tukey filter and numerical resolution. Thus, we here adopt the DFT as ``exact'' and compare the different approximations to it. Care must be exercised, however, when comparing analytical and numerical spinning waveforms. Even when spins are exactly aligned with the orbital angular momentum, the analytical expansion of the carrier phase does not match the numerical integration of Eq.~\eqref{eq:phi0} to sufficiently high accuracy. Similarly, the analytic, perturbative inversion of the time-frequency relation, Eq.~\eqref{eq:toff}, is not sufficiently accurate relative to the numerical inversion. Therefore, to isolate spin precession effects, we will keep terms in Eqs.~\eqref{eq:phi0} and~\eqref{eq:toff} up to 6PN order. This is sufficient to guarantee that any discrepancies in the compared waveforms arise due to spinning effects only. The exact relations we use in our comparisons are given in Appendix~\ref{app:freqevol}. \subsubsection{Detector Models} The comparisons of response functions are, of course, sensitive to the particular detector considered. We here consider both a typical ground-based detector and a typical space-based detector, both in the long-wavelength approximation. Different detectors will operate in different frequency bands, for different observation times, and they will lead to different relations between the detector frame and a fixed frame tied to the distant stars. 
The latter will impact the functional form of the angles $\theta_N$, $\phi_N$, and $\psi_N$ in Eqs.~(\ref{eq:Fplus}-\ref{eq:psiN}): for a typical ground-based detector, since the observation time is very short, we can approximate the angles $\theta_N$ and $\phi_N$ as constant; for a typical space-based detector, the observation time is not short, and thus, one must properly model the time-dependence of the angles. We here use an eLISA configuration~\cite{elisa}, where a LISA-type configuration trails behind Earth at a rate of $7.5^\circ$ per year. The relation between the detector frame $(\uvec{x}_{\mbox{\tiny det}}, \uvec{y}_{\mbox{\tiny det}}, \uvec{z}_{\mbox{\tiny det}})$ and a frame tied to the distant stars $(\uvec{x}, \uvec{y}, \uvec{z})$ for the space-based detector is given by \begin{align} \uvec{x}_{\mbox{\tiny det}} &= \left( \frac{3}{4} - \frac{1}{4} \cos 2 \Phi_{\mbox{\tiny eLISA}}(t) \right) \uvec{x} - \frac{1}{4} \sin 2 \Phi_{\mbox{\tiny eLISA}}(t) \uvec{y}\nonumber \\ &+ \frac{\sqrt{3}}{2} \cos \Phi_{\mbox{\tiny eLISA}}(t) \uvec{z}, \\ \uvec{y}_{\mbox{\tiny det}} &= - \frac{1}{4} \sin 2 \Phi_{\mbox{\tiny eLISA}}(t) \uvec{x} + \left( \frac{3}{4} + \frac{1}{4} \cos 2 \Phi_{\mbox{\tiny eLISA}}(t) \right) \uvec{y} \nonumber\\ &+ \frac{\sqrt{3}}{2} \sin \Phi_{\mbox{\tiny eLISA}}(t) \uvec{z}, \\ \uvec{z}_{\mbox{\tiny det}} &= - \frac{\sqrt{3}}{2} \cos \Phi_{\mbox{\tiny eLISA}}(t) \uvec{x} - \frac{\sqrt{3}}{2} \sin \Phi_{\mbox{\tiny eLISA}}(t) \uvec{y} + \frac{1}{2} \uvec{z}, \end{align} where the detector barycenter is located at $(\cos{\Phi_{\mbox{\tiny eLISA}}(t)},\sin{\Phi_{\mbox{\tiny eLISA}}(t)},0)$ in the Solar System frame with $\Phi_{\mbox{\tiny eLISA}}(t) = \omega_{{\mbox{\tiny eLISA}}} \; t$ and $\omega_{{\mbox{\tiny eLISA}}} = 2 \pi (352.5/360) \; \mathrm{yr}^{-1}$. 
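The detector-frame basis above can be coded directly as a sanity check; the following sketch (our own illustrative helper, not part of any published pipeline, with time measured in years and $\omega_{{\mbox{\tiny eLISA}}}$ as quoted above) verifies that the basis remains orthonormal and right-handed at all times.

```python
import numpy as np

def elisa_frame(t_yr):
    """Detector-frame basis vectors for the eLISA-like configuration above.

    t_yr is time in years; Phi_eLISA(t) = omega_eLISA * t, with
    omega_eLISA = 2*pi*(352.5/360) rad/yr (the constellation drifts
    behind Earth at 7.5 degrees per year)."""
    omega = 2.0 * np.pi * (352.5 / 360.0)  # rad / yr
    P = omega * t_yr
    c, s = np.cos(P), np.sin(P)
    c2, s2 = np.cos(2.0 * P), np.sin(2.0 * P)
    r3 = np.sqrt(3.0) / 2.0
    x_det = np.array([0.75 - 0.25 * c2, -0.25 * s2, r3 * c])
    y_det = np.array([-0.25 * s2, 0.75 + 0.25 * c2, r3 * s])
    z_det = np.array([-r3 * c, -r3 * s, 0.5])
    return x_det, y_det, z_det
```

At $t=0$ the expressions reduce to $\uvec{x}_{\mbox{\tiny det}} = (1/2, 0, \sqrt{3}/2)$, $\uvec{y}_{\mbox{\tiny det}} = (0, 1, 0)$ and $\uvec{z}_{\mbox{\tiny det}} = (-\sqrt{3}/2, 0, 1/2)$, i.e.~the detector plane is tilted by $60^{\circ}$ with respect to the ecliptic, as expected for a LISA-like constellation.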
In addition, we modify the carrier phase by adding the so-called Doppler term~\cite{cutler1998} \begin{align} \phi_{\mbox{\tiny C}} \to \phi_{\mbox{\tiny C}} + \omega R \sin {\theta}_N \cos (\Phi_{\mbox{\tiny eLISA}}(t) - {\phi}_{N}), \end{align} due to the fact that the barycenter of the detector moves in the frame tied to the distant stars. Here, $R = 1$~AU, and $\theta_N$ and $\phi_{N}$ are the spherical angles of $\uvec{N}$ in the frame tied to the distant stars. \subsubsection{Comparison Measures} We use two distinct comparison measures: \begin{itemize} \item {\bf{Waveform Comparison.}} A direct waveform amplitude and phase comparison as a function of GW frequency and PN expansion parameter. \item {\bf{Match Comparison.}} An integrated overlap waveform comparison, with white noise and without maximizing over intrinsic parameters. \end{itemize} The waveform comparison consists of comparing the Fourier amplitudes (and the Fourier phases) computed with different waveform families against each other, as a function of the dimensionless PN parameter $x$ and the GW frequency in Hz. The dimensionless PN parameter $x$ corresponding to the frequency $f$ for harmonic $n$ is computed using the standard SPA relation $x^{3/2} = 2 \pi M f/n$. When comparing amplitudes and phases, we isolate spin precession effects by normalizing or subtracting by the controlling factors in the non-precessing case. 
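The SPA relation $x^{3/2} = 2 \pi M f/n$ is simple to invert; the following minimal sketch (the function name and interface are ours) converts a GW frequency in Hz to the PN parameter $x$ of harmonic $n$, with the total mass given in solar masses and converted to geometric units.

```python
import numpy as np

MSUN_SEC = 4.925490947e-6   # G * Msun / c**3 in seconds

def pn_parameter(f_hz, total_mass_msun, n=2):
    """Dimensionless PN parameter x of harmonic n at GW frequency f,
    from the SPA relation x**(3/2) = 2*pi*M*f/n, with the total mass
    converted from solar masses to geometric units (seconds)."""
    M = total_mass_msun * MSUN_SEC
    return (2.0 * np.pi * M * f_hz / n) ** (2.0 / 3.0)
```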
The Fourier amplitude of the $n$-th harmonic is normalized by the amplitude pre-factor of the SPA: \begin{equation} {\cal{A}}_{0} = \sqrt{5 \pi \nu} \frac{M^{2}}{8 D_{L}} \left( \frac{2 \pi M f}{n} \right)^{-7/6}\,. \end{equation} The non-precessing SPA phase is subtracted from the Fourier phase: \begin{equation} \Psi_{0} = 2 \pi f t(f) - n \phi_{{\mbox{\tiny C}}}(f) - \frac{\pi}{4}\,, \end{equation} where $t(f)$ and $\phi_{{\mbox{\tiny C}}}[t(f)]$ are obtained from the numerical inversion of $n \dot{\phi}_{{\mbox{\tiny C}}}(t) = 2 \pi f$ and from the numerical solution to the evolution equations, respectively. The match comparison is carried out through the so-called {\emph{faithfulness}}: \begin{align} F_{\tilde{h}_1,\tilde{h}_2} &\equiv \frac{\left(\tilde{h}_{1}\left|\right.\tilde{h}_{2}\right)}{\sqrt{\left(\tilde{h}_{1}\left|\right.\tilde{h}_{1}\right) \left(\tilde{h}_{2}\left|\right.\tilde{h}_{2}\right)}}\,, \label{match} \end{align} where $\tilde{h}_{1,2}$ are different Fourier-domain waveforms with the {\emph{same}} physical parameters. We define the inner-product in the usual way: \begin{align} \left(\tilde{h}_{1}\left|\right.\tilde{h}_{2}\right) &\equiv 4 \Re \int_{f_{\min}}^{f_{\max}} \frac{\tilde{h}_1 \tilde{h}_2^*}{S_{n}} \; df\,, \end{align} where $\Re[\cdot]$ is the real part operator, $(f_{\min},f_{\max})$ are the boundaries of the detector's sensitivity band and $S_{n}$ is the detector's spectral noise density. For ground-based detectors, we choose $f_{\min}=10$ Hz and $f_{\max}=10^{3}$ Hz, with a maximum observation time of $1$ hr. For space-based detectors, we choose $f_{\min} = 10^{-5}$ Hz and $f_{\max}=1$ Hz, with a maximum observation time of $2$ yrs. In all cases, we also terminate the comparisons if the system reaches a separation of 6 times the total mass, i.e.~the innermost stable circular orbit of a point particle in a Schwarzschild spacetime, prior to reaching the frequency $f_{\max}$.
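For constant (white) noise, $S_{n}$ cancels in Eq.~\eqref{match} and the faithfulness reduces to a normalized discrete overlap. A minimal numerical sketch (our own helper, not the paper's code) on a common frequency grid:

```python
import numpy as np

def faithfulness(h1, h2, df):
    """Normalized overlap of two frequency-domain waveforms sampled on a
    common grid with spacing df, for white noise (constant S_n, which
    cancels between numerator and denominator)."""
    inner = lambda a, b: 4.0 * np.real(np.sum(a * np.conj(b))) * df
    return inner(h1, h2) / np.sqrt(inner(h1, h1) * inner(h2, h2))
```

Note that this raw overlap involves no maximization over $t_{{\mbox{\tiny coal}}}$ or $\phi_{{\mbox{\tiny coal}}}$; an overall rescaling of one waveform leaves it unchanged, while a constant $90^{\circ}$ phase offset drives it to zero.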
We here employ white noise, so $S_{n}$ can be taken out of the integral and it cancels when computing the faithfulness parameter. We expect the fitting factor, maximized over physical parameters, to be in general higher when computed with colored noise than when computed with white noise. This is because colored noise has the effect of weighing one part of the frequency spectrum more heavily, and thus, the maximization occurs in a reduced frequency window, leading to a higher overlap. The faithfulness parameter exists in the interval $[-1,1]$ and it indicates how well waveforms agree with each other, with unity representing perfect agreement. The integrations required are carried out numerically, with errors of ${\cal{O}}(10^{-5})$; thus, we consider $F_{\tilde{h}_{1}\tilde{h}_{2}} = 0.9999$ to be consistent with perfect agreement. A $98\%$ fitting factor is sometimes argued to be ``good enough'' for detection purposes, but this of course depends on the detection tolerance chosen. When using either the waveform measure or the match measure, we will compare the UAA and aligned SPA models to the DFT model. That is, we will treat the DFT model as a reference waveform, say $\tilde{h}_{2}$, and let $\tilde{h}_{1}$ be the UAA, the aligned SPA or the HSPA waveform. The comparisons we show below should be considered {\emph{conservative}} because the faithfulness parameter is maximized only over the non-physical parameters $t_{{\mbox{\tiny coal}}}$ (appearing in the Fourier phase of the analytical models through Eq.~\eqref{eq:toff}) and $\phi_{{\mbox{\tiny coal}}}$ (appearing in the Fourier phase of the analytical models through Eq.~\eqref{eq:phi0}). All other parameters, including the physical ones, such as the total mass and mass ratio, are kept unchanged. Higher matches would be obtained by allowing all parameters to vary, as is done in parameter estimation. \subsubsection{Systems Considered} \begin{figure*}[th!]
\begin{center} \includegraphics[width=\columnwidth]{stat_align_misalign_LIGO.pdf} \includegraphics[width=\columnwidth]{stat_align_misalign_LISA.pdf} \caption{\label{fig:misalignment}Median faithfulness and $1$-$\sigma$ deviations between DFT-UAA waveforms (red dotted curve) and DFT-aligned SPA waveforms (blue dashed curve), as a function of the misalignment angle $\epsilon$ in degrees for ground-based (left panel) and space-based systems (right panel). All waveforms satisfy $\cos \epsilon = \uvec{L} \cdot \uvec{S}_1 = \uvec{L} \cdot \uvec{S}_2$. The solid horizontal line corresponds to a faithfulness of $98\%$. Observe how the UAA waveforms are systematically better than the aligned-SPA ones for misalignments $\epsilon \gtrsim 5^{\circ}$.} \end{center} \end{figure*} \begin{figure*}[h!t] \begin{center} \includegraphics[width=\columnwidth]{stat_align_misalign_LIGO_HSPA.pdf} \includegraphics[width=\columnwidth]{stat_align_misalign_LISA_HSPA.pdf} \caption{\label{fig:misalignment-HSPA}Median faithfulness and $1$-$\sigma$ quantiles between DFT-HSPA waveforms, as a function of the misalignment angle $\epsilon$ in degrees for ground-based (left) and space-based systems (right). All waveforms used for this plot satisfy $\cos \epsilon = \uvec{L} \cdot \uvec{S}_1 = \uvec{L} \cdot \uvec{S}_2$. The solid horizontal line corresponds to a faithfulness of $98\%$.} \end{center} \end{figure*} Different systems will be studied when using different comparison measures. When using the waveform measure, we will investigate the following four systems: \begin{itemize} \item{\bf{System A}}: $\delta m = 0.5$, $\alpha_{0} = 57^{\circ}$, \item{\bf{System B}}: $\delta m = 0.1$, $\alpha_{0} = 57^{\circ}$, \item{\bf{System C}}: $\delta m = 0.5$, $\alpha_{0} = 23^{\circ}$, \item{\bf{System D}}: $\delta m = 0.1$, $\alpha_{0} = 23^{\circ}$, \end{itemize} where $\alpha_{0}$ is the angle between the line of sight vector $\uvec{N}$ and the Newtonian orbital angular momentum vector $\bm{L}$ at $t=0$.
For these four systems, we choose $(\chi_1,\chi_{2}) = (0.89,0.77)$, and initial misalignment angles of $22^\circ$ and $25^\circ$. When considering space-based detectors, we choose a total redshifted mass of $M = 5 \times 10^6 M_\odot$ and a total observation time of $T_{{\mbox{\tiny obs}}} = 2$~yrs. When considering ground-based detectors, we choose a total mass of $M = 20 M_{\odot}$ and a total observation time of $T_{{\mbox{\tiny obs}}} = 100$~secs. We have also investigated other systems, but the results presented will be representative. When computing a UAA waveform, we use for $\delta\phi_{\mbox{\tiny P}}$ Eq.~\eqref{phipxi} for Systems $A$ and $C$, and Eq.~\eqref{phipDelta} for Systems $B$ and $D$ (recall that for systems with small mass differences, different PN expansions are needed). When using the match measure, we will perform a Monte-Carlo study over 200 points in parameter space for each type of detector, involving systems randomized over all waveform parameters. The misalignment angles will be set equal to each other, but they will be allowed to vary between $0^{\circ}$ and $90^{\circ}$. All throughout, we consider typical systems for ground-based detectors with masses in $(5,20) M_\odot$, and systems for space-based detectors with masses in $(10^5,10^{8}) M_\odot$ and mass ratio $m_1/m_2 \leq 10$. The distribution of the spin magnitudes $\chi_1$ and $\chi_2$ is chosen to be flat in $[0,1]$, and the distributions of unit vectors are chosen to be flat on the sphere. \subsection{Match Comparison} \label{subsec:matchmeasure} Fig.~\ref{fig:misalignment} shows the median match and $1$-$\sigma$ deviations between DFT-UAA waveforms (red dotted curve) and DFT-aligned SPA waveforms (blue dashed curve), as a function of the misalignment angle $\epsilon$ in degrees for ground-based systems (left panel) and space-based systems (right panel). Several observations are due at this time. 
First, observe that the match for the aligned-SPA family is significantly worse than that of the UAA family, as soon as the system is even slightly misaligned. This is mostly because the spin couplings in the phase evolution equation are greatly overestimated in the aligned-SPA model. Second, observe also that even for misalignment angles around $50^\circ$ the UAA family achieves matches around $98\%$ for half the systems considered. This is surprising given that UAA waveforms rely on an expansion in misalignment angle. Third, observe that the lower $1$-$\sigma$ match deviation for space-based systems is significantly lower than that for ground-based systems. This is in part because of the impact of the detector's motion on the waveform, and in part because typical space-based systems spend more time in the detector band than ground-based ones, thus leading to more important phase discrepancies. Fourth, observe that the median and upper $1$-$\sigma$ match deviations are slightly better for space-based than for ground-based systems. Fig.~\ref{fig:misalignment-HSPA} shows a similar match calculation, but this time using the HSPA family of~\cite{Lang:1900bz}. Observe that, for ground-based systems, these waveforms fail to provide a high median match for misalignments $\epsilon \gtrsim 30^{\circ}$. Observe also that similarly poor behavior occurs for space-based systems, which have a lower 1-$\sigma$ deviation that dips below $98\%$ at roughly the same value of $\epsilon$. Recall that such poor behavior occurs in spite of HSPA waveforms using the same numerical solution to the precession equations used to compute DFT waveforms. The poor behavior arises because one of the requirements of the SPA used to derive HSPA waveforms (that the amplitude of the signal varies much more slowly than its phase) breaks down for highly misaligned systems.
While the first time derivative of the phase is much larger than that of the amplitude, their second time derivatives are of the same order. One should keep in mind when comparing HSPA waveforms to UAA ones that the former require the numerical integration of the equations of precession, while the latter are fully analytic. \subsection{Discontinuity in the Solution to the Equations of Precession} \label{subsec:discontinuity} One concern with the waveform family developed here is that the precession phase difference $\delta \phi_{\mbox{\tiny P}}$ is a discontinuous function of the mass difference $\delta m$. This quantity satisfies $\delta \phi_{\mbox{\tiny P}} = \delta \phi_{{\mbox{\tiny P}},1}$ as given by Eq.~\eqref{phipxi} if $\delta m \geq 0.2$, $\delta \phi_{\mbox{\tiny P}} = \delta \phi_{{\mbox{\tiny P}},2}$ as given by Eq.~\eqref{phipDelta} if $10^{-5} \leq \delta m < 0.2$ and $\delta \phi_{\mbox{\tiny P}} = \delta \phi_{{\mbox{\tiny P}},3}$ as given by Eq.~\eqref{phipdeltam} if $\delta m < 10^{-5}$. Formally then, the waveform derivatives with respect to $\delta m$ are ill-defined at the boundaries of the piecewise function. Let us then investigate whether this discontinuity is a problem. To do so, we compute the match at $\delta m = 0.2$ between a waveform that uses $\delta \phi_{{\mbox{\tiny P}},2}$ and one that uses $\delta \phi_{{\mbox{\tiny P}},1}$. Fig.~\ref{fig:misalignment-deltam} shows cumulative distributions of faithfulnesses for ground-based (dotted red curve) and space-based (dashed blue curve) detections. Observe that the match is above $0.999$ for over $95\%$ of the systems investigated. This then implies that the formal discontinuity in the waveform derivative with respect to $\delta m$ at the boundary of the piecewise function should not affect parameter estimation. 
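The piecewise prescription for $\delta \phi_{\mbox{\tiny P}}$ can be summarized as a simple branch selection; in the sketch below the three arguments are hypothetical placeholders standing in for the values of Eqs.~\eqref{phipxi}, \eqref{phipDelta} and \eqref{phipdeltam}, which we do not reproduce here.

```python
def select_delta_phi_P(dm, dphi_P1, dphi_P2, dphi_P3):
    """Select the precession phase branch by mass difference dm,
    following the piecewise rule in the text: branch 1 for dm >= 0.2,
    branch 2 for 1e-5 <= dm < 0.2, and branch 3 for dm < 1e-5.
    The dphi_P* arguments are placeholders for the evaluated
    expressions of Eqs. (phipxi), (phipDelta) and (phipdeltam)."""
    if dm >= 0.2:
        return dphi_P1
    if dm >= 1e-5:
        return dphi_P2
    return dphi_P3
```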
\begin{figure}[th] \begin{center} \includegraphics[width=\columnwidth]{stat_misalign_LIGO_LISA_cumulative.pdf} \caption{\label{fig:misalignment-deltam} Cumulative distributions of faithfulnesses between a waveform that uses $\delta \phi_{{\mbox{\tiny P}},1}$ and one that uses $\delta \phi_{{\mbox{\tiny P}},2}$ at $\delta m = 0.2$ for a ground-based (dotted red curve) and space-based (dashed blue curve) set of detections. Observe that for over $95\%$ of the systems investigated, the match is higher than $0.999$, implying the discontinuity would not have a serious effect in parameter estimation.} \end{center} \end{figure} \subsection{Waveform Measure} \label{subsec:waveformmeasure} Figs.~\ref{fig:comp_LIGO} and~\ref{fig:comp_LISA} compare the dominant $\ell=2$ Fourier waveform amplitude (left panels) and phase (right panels) for Systems A through D as a function of the PN parameter $x$ (bottom axis) and the GW frequency in Hz (top axis) for ground-based and space-based systems, respectively. The solid black curves correspond to the DFT waveform, the red dotted curves to the UAA waveform and the blue dashed curves to the aligned-SPA waveforms. For reference, the total accumulated phase of the time-domain $\ell=2$ harmonic is about $850$ cycles for all ground-based systems and $2000$ cycles for space-based systems. Let us make several observations about these figures. First, recall that all phase quantities are here presented relative to the carrier, non-spinning phase of the corresponding system. Therefore, the $\sim \mathcal{O}(10)$ oscillations in the phase plots (right panels) occur on a precession timescale, while in reality $850$ and $2000$ total GW cycles have elapsed for ground-based and space-based systems, respectively. Second, observe that spin precession clearly induces modulations on the phase and amplitude that depend sensitively on $\delta m$ and $\alpha_{0}$. These modulations are captured much better by the UAA family than the aligned-SPA one.
Third, observe that all approximations agree on the frequency of these modulations but not on their amplitudes or overall trends, i.e.~the troughs and valleys occur at roughly the same values of $x$ for all waveforms. Fourth, for ground-based system C, the Fourier amplitude of the DFT shows peculiar features (e.g.~at $x \approx 0.06$) that are reproduced by the UAA waveform. Thus, these features are not an artifact of the DFT, and we have checked that they are not induced by spectral leakage. Fifth, we can observe a spike in the DFT and UAA phase difference $\Psi - \Psi_0$ for space-based system $A$ around $x = 0.017$ that is missed by the aligned SPA approximation. By inspecting the corresponding amplitude plots, we can see that these spikes correspond to moments when the amplitudes of the waveforms almost vanish, i.e.~the detector is going through a node in the waveform power distribution. The UAA waveform reproduces this feature, and we have checked that it agrees with the DFT whenever the feature is present. Sixth, the phase discrepancy between the aligned SPA and DFT models does not seem to be consistent from system to system. This is because the match is too small for the maximization method that we used to yield trustworthy results for $\phi_{\mbox{\tiny coal}}$ and $t_{\mbox{\tiny coal}}$ in the aligned SPA case. Seventh, the amplitudes are much better recovered by the UAA for systems A and B than for C and D. This is because the precession modulation angles $\delta\phi$ and $\psi_N$ are worse approximations for the latter systems, as discussed in Sec.~\ref{sec:timedomainphases}, and shown in Fig.~\ref{fig:DFT-UAAcomp}.
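The connection between phase spikes and amplitude nodes is generic and easy to reproduce with a toy model (unrelated to the specific waveforms of this paper): for a signal $h(t) = A(t)\, e^{i\Psi(t)}$ whose real amplitude crosses zero, the phase extracted from $h$ jumps by $\pi$ at the node.

```python
import numpy as np

# Toy model: a complex signal with a slowly sweeping carrier phase and
# a real amplitude that crosses zero at t = 0 (a node in the power
# distribution). The extracted phase, relative to the carrier, jumps
# by pi across the node, mimicking the spikes seen in Psi - Psi_0.
t = np.linspace(-1.0, 1.0, 2001)
A = t                       # amplitude with a node at t = 0
Psi = 40.0 * t              # carrier phase
h = A * np.exp(1j * Psi)
phase = np.unwrap(np.angle(h))
# residual phase on either side of the node (skipping the zero sample)
residual_jump = (phase[1001] - 40.0 * t[1001]) - (phase[999] - 40.0 * t[999])
```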
\begin{figure*}[!htp] \begin{center} \includegraphics[width=\columnwidth]{comparisonamplitude_dm0p5_nhat57_LIGO.pdf} \includegraphics[width=\columnwidth]{comparisonphase_dm0p5_nhat57_LIGO.pdf} \\ \includegraphics[width=\columnwidth]{comparisonamplitude_dm0p1_nhat57_LIGO.pdf} \includegraphics[width=\columnwidth]{comparisonphase_dm0p1_nhat57_LIGO.pdf} \\ \includegraphics[width=\columnwidth]{comparisonamplitude_dm0p5_nhat23_LIGO.pdf} \includegraphics[width=\columnwidth]{comparisonphase_dm0p5_nhat23_LIGO.pdf} \\ \includegraphics[width=\columnwidth]{comparisonamplitude_dm0p1_nhat23_LIGO.pdf} \includegraphics[width=\columnwidth]{comparisonphase_dm0p1_nhat23_LIGO.pdf} \caption{\label{fig:comp_LIGO} Comparison of the Fourier amplitude (left) and phase (right) of the $\ell=2$ waveform harmonic for ground-based systems as a function of the PN parameter $x=(\pi M f)^{2/3}$ (bottom axis) and as a function of the frequency in Hz (the top axis). The solid black curve corresponds to the DFT result and the dashed red curve to the UAA. The accumulated phase of the time-domain $\ell=2$ harmonic for each system is $2 \Delta \phi_{orb} \sim 850$~cycles. 
From top to bottom, we present results for Systems A, B, C and D.} \end{center} \end{figure*} \begin{figure*}[!htp] \begin{center} \includegraphics[width=\columnwidth]{comparisonamplitude_dm0p5_nhat57_LISA.pdf} \includegraphics[width=\columnwidth]{comparisonphase_dm0p5_nhat57_LISA.pdf} \\ \includegraphics[width=\columnwidth]{comparisonamplitude_dm0p1_nhat57_LISA.pdf} \includegraphics[width=\columnwidth]{comparisonphase_dm0p1_nhat57_LISA.pdf} \\ \includegraphics[width=\columnwidth]{comparisonamplitude_dm0p5_nhat23_LISA.pdf} \includegraphics[width=\columnwidth]{comparisonphase_dm0p5_nhat23_LISA.pdf} \\ \includegraphics[width=\columnwidth]{comparisonamplitude_dm0p1_nhat23_LISA.pdf} \includegraphics[width=\columnwidth]{comparisonphase_dm0p1_nhat23_LISA.pdf} \caption{\label{fig:comp_LISA} Same as Fig.~\ref{fig:comp_LIGO} for space-based systems. The accumulated phase of the $\ell=2$ harmonic is $2 \Delta \phi_{orb} \sim 2000$~cycles.} \end{center} \end{figure*} We compare four waveform models in Fig.~\ref{fig:DFT-UAAcomp}, using ground-based system C. Three of those models are based on a discrete Fourier transform, and the fourth one is the UAA model. The first DFT model, DFT1, is the one used in the rest of this section, constructed using the full numerical solution to the equations of motion. The second one, DFT2, is computed using the carrier orbital phase $\phi_{{\mbox{\tiny C}}}(t)$ from Eq.~\eqref{eq:phi0}, together with precession modulation phases $\psi_N(t)$, and $i_L(t)$ computed with the analytical solution for $\bm{L}(t)$ derived in Sec.~\ref{sec:near-alignment}, and using \begin{align} \delta\phi(t) &= \hat{N}_z \arctan \left( \frac{L_z \hat{N}_x - L_x}{L_z \hat{N}_y - L_y}\right), \end{align} an approximation valid for any degree of misalignment between $\uvec{N}$ and $\uvec{L}$. 
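The $\delta\phi(t)$ expression above translates directly into code; the sketch below (an illustrative helper of ours) evaluates it from $\bm{L}$ and $\uvec{N}$, using \texttt{atan2} for quadrant bookkeeping, so the scalar arctangent of the displayed equation is recovered modulo $\pi$.

```python
import numpy as np

def delta_phi(L, N_hat):
    """Precession modulation phase delta-phi from the Newtonian orbital
    angular momentum L = (Lx, Ly, Lz) and the line-of-sight unit vector
    N_hat, following the arctan expression above (valid for any degree
    of N-L misalignment). atan2 resolves the quadrant ambiguity of the
    plain arctan."""
    Lx, Ly, Lz = L
    Nx, Ny, Nz = N_hat
    return Nz * np.arctan2(Lz * Nx - Lx, Lz * Ny - Ly)
```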
The third one, DFT3, is identical to DFT2 except for the precession modulation phases $\delta\phi(t)$, $\psi_N(t)$, and $i_L(t)$, for which it uses the expressions derived for the UAA waveform in Sec.~\ref{sec:timedomainphases}. The top panel of Fig.~\ref{fig:DFT-UAAcomp} shows that the amplitude discrepancy between the DFT1 and DFT2 models is much smaller than that between the DFT1 and UAA models (Fig.~\ref{fig:comp_LIGO}, third plot from the top in the left panel), meaning that the main source of inaccuracy is not the inaccuracy in $\bm{L}(t)$. The bottom panel shows that the DFT3 amplitude is very well approximated by the UAA amplitude. The main source of amplitude discrepancy between the DFT and UAA models for systems C and D observed in Figs.~\ref{fig:comp_LIGO} and~\ref{fig:comp_LISA} is thus the Fourier decomposition derived in Sec.~\ref{sec:timedomainphases}, which is less accurate when $\uvec{N}$ and $\uvec{L}$ are close to being aligned. \begin{figure}[!htp] \begin{center} \includegraphics[width=\columnwidth]{comparisonamplitude_DFTnum_DFTan.pdf} \\ \includegraphics[width=\columnwidth]{comparisonamplitude_DFT_UAA.pdf} \caption{\label{fig:DFT-UAAcomp} Comparison between DFT and UAA waveform amplitudes, with the same parameters as the third row of Fig.~\ref{fig:comp_LIGO} (ground-based system C). On the top, comparison between the amplitudes of two DFT waveforms, one constructed with the fully numerical solution to the equations of motion (black solid line, the DFT waveform that we used in the rest of this section), and the other constructed with the analytical solution for $\bm{L}(t)$ derived in Sec.~\ref{sec:near-alignment}, as well as $\phi_{{\mbox{\tiny C}}}(t)$ from Eq.~\eqref{eq:phi0} (dotted red line).
At the bottom, comparison between the amplitudes of a DFT waveform constructed with the phases $\delta\phi(t)$, $\psi_N(t)$, and $i_L(t)$ derived in Sec.~\ref{sec:timedomainphases}, as well as $\phi_{{\mbox{\tiny C}}}(t)$ from Eq.~\eqref{eq:phi0} (solid black line), and the UAA waveform (dotted red line).} \end{center} \end{figure} \section{Discussion} The coming enhancements of ground-based detectors will allow for the first direct detection of gravitational waves. In order to carry out efficient searches, one needs computationally cheap and accurate waveforms. Systems with spins will generically undergo precession, unless the spins are perfectly aligned with the orbital angular momentum. Precession will induce a drastic modification to the waveform, generating corrections in both the phase and amplitude. Such modifications cannot be captured by spin-aligned waveform families, as we demonstrate in this paper. Binaries in the presence of gas, however, will tend to have spin vectors almost aligned with the orbital angular momentum vector~\cite{2007ApJ.661L.147B,Barausse:2012fy,2013arXiv1302.4442G}, i.e.~the misalignment angles should be small. Motivated by this, we have constructed a waveform family that faithfully captures the main features of GWs emitted by compact binaries with small spin-orbital angular momentum misalignment angles. One can think of this waveform family as a perturbation of the spin-aligned family, with corrections that model precession effects that enter both the waveform amplitude and phase. The waveforms calculated here are purely analytical, constructed both in the time- and in the frequency-domain. Such analytical waveforms have several advantages. On the one hand, analytical waveforms are usually computationally more efficient to evaluate. Given the large number of templates needed for parameter estimation of spinning systems, computational efficiency is highly desirable.
On the other hand, the analytic structure provides important physical insight into how each precession effect comes into play. Such insight can then be used to construct simpler, phenomenological waveforms, such as those recently constructed for binaries where one object is not spinning~\cite{Lundgren:2013jla}. The mathematical methods used to construct these analytical waveforms had never been used in waveform modeling, to our knowledge. These methods, however, are very well-known in other fields, such as non-linear optics and aerodynamics. The first method is that of multiple scale analysis, amenable to problems with several timescales that separate. This method allows us to solve the precession equations analytically as an expansion in the ratio of the precession to the radiation-reaction timescale. The second method is that of uniform asymptotics, which allows us to construct a single asymptotic expansion to the solution of a given problem, instead of a series of asymptotic expansions in different regimes. This method is essential to cure the stationary phase approximation, which fails in the presence of precession due to the coalescence of stationary points. Many other problems would benefit from the application of the mathematical methods implemented here. For example, one could study compact binary inspirals, where the spin angular momenta has a small magnitude (relative to the orbital angular momentum) but arbitrary orientation. This application would be complementary to the example studied here. The resulting analytic waveform can be thought of as a perturbation of the non-spinning SPA waveform. Similarly, one can study compact binaries where one component has arbitrary angular momentum, but the companion has a small spin with arbitrary orientation. This case would be intermediate between the one studied here and the one where both binary components have small spin. 
The resulting analytic waveform can be thought of as a perturbation of a simple precessing waveform. We are currently investigating both of these cases. \acknowledgments We thank Katerina Chatziioannou for her useful comments and suggestions. A.~K.~and N.~C.~acknowledge support from the NSF Award PHY-1205993 and NASA grant NNX10AH15G. N.~Y.~acknowledges support from NSF grant PHY-1114374 and NASA grant NNX11AI49G, under sub-award 00001944.
\section{Background information}\label{Background} For any field $K$ and directed graph $E$ the Leavitt path algebra $L_K(E)$ has been the focus of sustained investigation since 2004. We put off until Section \ref{Sect:K_0lpasC_n^3} a detailed description of these algebras, choosing instead to focus first on the construction of a monoid which can be carried out for any directed graph. For a directed graph $E$ having $n$ vertices $v_1, v_2, \dots, v_n$ we denote by $A_E$ the usual {\it incidence matrix} of $E$, namely, the matrix $A_E = (a_{i,j})$ where, for each pair $1\leq i,j \leq n$, the entry $a_{i,j}$ denotes the number of edges $e$ in $E$ for which $s(e)=v_i$ and $r(e) = v_j$. Let $E$ be a directed graph with vertices $v_1, v_2, \hdots, v_n$ and incidence matrix $A_E = (a_{i,j})$. We let $F_n$ denote the free abelian monoid on the generators $v_1, v_2, \hdots, v_n$ (so $F_n \cong \bigoplus_{i=1}^n \mathbb{Z}^+$ as monoids). We denote the identity element of this monoid by $z$. We define a relation $\approx$ on $F_n$ by setting $$v_i \approx \sum_{j=1}^n a_{i,j}v_j$$ for each non-sink $v_i$, and denote by $\sim_E$ the equivalence relation on $F_n$ generated by $\approx$. For two $\sim_E$ equivalence classes $[x]$ and $[y]$ we define $[x] + [y] = [x+y]$; it is straightforward to show that this gives a well-defined associative binary operation on the set of $\sim_E$ equivalence classes, and that $[z]$ acts as an identity element for this operation. We denote the resulting {\it graph monoid} $F_n / \sim_E$ by $M_E.$ Specifically, \medskip {\bf Definition.} For any $n\geq 1$ and $0\leq j \leq n-1$, the graph monoid $M_{C_n^j}$ is the free abelian monoid $F_n$ generated by $[v_1], [v_2], \hdots, [v_n]$, subject to the relations $$[v_i] = [v_{i+1}] + [v_{i+j}]$$ (for all $1\leq i \leq n$), where subscripts are interpreted ${\rm mod} \ n$. \medskip For examples of the graph monoid $M_{C_n^j}$, see both \cite[p.
3]{ASch} (in which the graph monoid $M_{C_3^2}$ is shown to consist of the five elements $\{[z], \ [v_1], \ [v_2], \ [v_3], \ [v_1]+[v_2]+[v_3] \}$) and \cite[pp. 3, 4]{AA} (where the graph monoid $M_{C_4^2}$ associated to the graph $C_4^2$ is explicitly described). We present now a streamlined version of the germane background information which will be utilized throughout the remainder of the article. Much of this discussion appears in \cite{AA}. For a unital $K$-algebra $A$, the set of isomorphism classes of finitely generated projective left $A$-modules is denoted by $\mathcal{V}(A)$. We denote the elements of $\mathcal{V}(A)$ using brackets; for example, $[A] \in \mathcal{V}(A)$ represents the isomorphism class of the left regular module ${}_AA$. $\mathcal{V}(A)$ is a monoid, with operation $\oplus$, and zero element $[\{0\}]$. The monoid $(\mathcal{V}(A), \oplus)$ is {\it conical}: this means that the sum of any two nonzero elements of $\mathcal{V}(A)$ is nonzero, or, rephrased, that $\mathcal{V}(A)^* = \mathcal{V}(A) \setminus \{0\}$ is a semigroup under $\oplus$. The following striking property of Leavitt path algebras was established in \cite[Theorem 3.5]{AMP}. $$ \mathcal{V}(L_K(E)) \cong M_E \ \mbox{as monoids}. \ \ \ \ \ \mbox{Moreover, } \ [L_K(E)] \leftrightarrow \sum_{v\in E^0} [v] \ \mbox{under this isomorphism.}$$ A unital $K$-algebra $A$ is called {\it purely infinite simple} in case $A$ is not a division ring, and $A$ has the property that for every nonzero element $x$ of $A$ there exist $b,c\in A$ for which $bxc=1_A$. It is shown in \cite[Corollary 2.2]{AGP} that if $A$ is a unital purely infinite simple $K$-algebra, then the semigroup $(\mathcal{V}(A)^*, \oplus)$ is in fact a group, and, moreover, that $\mathcal{V}(A)^* \cong K_0(A)$, the Grothendieck group of $A$. 
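These facts can be checked numerically for the $M_{C_3^2}$ example quoted above. The sketch below (our own illustration) builds the incidence matrix of $C_n^j$ and computes $|{\rm Coker}(I - A^t)|$ from the determinant, assuming the standard identification in the Leavitt path algebra literature of $K_0(L_K(E))$ with ${\rm Coker}(I - A_E^t)$; for $C_3^2$ the order is $4$, matching the five-element monoid $M_{C_3^2}$ with its identity $[z]$ removed.

```python
import numpy as np

def incidence_cn_j(n, j):
    """Incidence matrix of the graph C_n^j: vertex v_i emits one edge
    to v_{i+1} and one to v_{i+j}, indices mod n (multiplicities add,
    e.g. j = 1 gives a double edge to v_{i+1})."""
    A = np.zeros((n, n), dtype=int)
    for i in range(n):
        A[i, (i + 1) % n] += 1
        A[i, (i + j) % n] += 1
    return A

A = incidence_cn_j(3, 2)
M = np.eye(3, dtype=int) - A.T
# When det(M) != 0 the cokernel is finite of order |det(M)|.
order = int(abs(round(np.linalg.det(M))))
```

Here ${\rm det}(I - A^t) = -4$, so $|K_0(L_K(C_3^2))| = 4 = |M_{C_3^2}^*|$, consistent with the five elements $\{[z], [v_1], [v_2], [v_3], [v_1]+[v_2]+[v_3]\}$ listed above.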
Summarizing: when $L_K(E)$ is unital purely infinite simple we have the following isomorphisms of groups: $$ K_0(L_K(E)) \cong \mathcal{V}(L_K(E))^* \cong M_E^*.$$ In particular, in this situation we have $|K_0(L_K(E))| = | M_E^* |$. The finite graphs $E$ for which the Leavitt path algebra $L_K(E)$ is purely infinite simple have been explicitly described (using only graph-theoretic properties of $E$) in \cite{AAP2}. This result, together with the preceding discussion, immediately yields \medskip {\bf Proposition.} \cite[Proposition 1.3]{AA} For each $n\geq 1$, and for each $0\leq j \leq n-1$, the $K$-algebra $L_K(C_n^j)$ is unital purely infinite simple. In particular, $M_{C_n^j}^* = (M_{C_n^j} \setminus \{[z]\},+)$ is a group, necessarily isomorphic to $K_0(L_K(C_n^j))$. \medskip {\it Our goal here} is to describe the group $M_{C_n^j}^*$. Our motivation for doing so is twofold. First, the description turns out to be inherently interesting in its own right (involving the aforementioned Haselgrove sequences). Second, we may utilize the structure of this group, viewed as $ K_0(L_K(C_n^j))$, to (quite surprisingly) glean information about the algebra $L_K(C_n^j)$ itself. This is done by invoking the so-called Restricted Algebraic Kirchberg-Phillips Theorem, which we describe fully in Section \ref{Sect:K_0lpasC_n^3}. We now review a useful computational tool. Let $M \in {\rm M}_n(\mathbb{Z})$ and view $M$ as a linear transformation $M:\mathbb{Z}^n \rightarrow \mathbb{Z}^n$ via left multiplication on columns. 
The cokernel of $M$ is a finitely generated abelian group, having at most $n$ cyclic direct summands; as such, by the invariant factors version of the Fundamental Theorem of Finitely Generated Abelian groups, we have $${\rm Coker}(M)\cong \mathbb{Z}_{s_\ell} \oplus \mathbb{Z}_{s_{\ell + 1}} \oplus \cdots \oplus \mathbb{Z}_{s_n}$$ for some $1\leq \ell \leq n$, where either $n=\ell$ and $s_n=1$ (i.e., ${\rm Coker}(M)$ is the trivial group), or there are (necessarily unique) nonnegative integers $s_\ell, s_{\ell + 1}, \dots , s_n$, for which the nonzero values $s_\ell, s_{\ell + 1}, \dots , s_r$ satisfy $s_i \geq 2$ and $s_i \vert s_{i+1}$ for $ \ell \leq i \leq r-1$, and $s_{r+1} = \cdots = s_n = 0$. ${\rm Coker}(M)$ is a finite group if and only if $r=n$ (i.e., there are no zeros in the sequence $s_\ell, \dots, s_n$). In case $\ell > 1$, we define $s_1 = s_2 = \cdots = s_{\ell-1} = 1$. Clearly then we have $${\rm Coker}(M)\cong \mathbb{Z}_{s_1} \oplus \mathbb{Z}_{s_{2}} \oplus \cdots \oplus \mathbb{Z}_{s_{\ell}} \oplus \cdots \oplus \mathbb{Z}_{s_n},$$ since any additional direct summands are isomorphic to the trivial group $\mathbb{Z}_1$. \begin{remark}\label{isomorphicCokerremark} {\rm It is straightforward to establish that if $P,Q$ are invertible in ${\rm M}_n(\mathbb{Z})$, then ${\rm Coker}(M)\cong {\rm Coker}(PMQ)$. (We note that any such $P$ and $Q$ must have determinant $\pm 1$.) This means that if $N \in {\rm M}_n(\mathbb{Z})$ is a matrix which is constructed by performing any sequence of $\mathbb{Z}$-elementary row and/or column operations starting with $M$, then ${\rm Coker}(M)\cong {\rm Coker}(N)$ as abelian groups. } \end{remark} \begin{definition}\label{Def:Smithnormalform}{\rm Let $M \in {\rm M}_n(\mathbb{Z})$, and suppose ${\rm Coker}(M)\cong \mathbb{Z}_{s_1}\oplus\mathbb{Z}_{s_2}\oplus \cdots \oplus \mathbb{Z}_{s_n}$ as described above. 
The \emph{Smith Normal Form} of $M$ is the $n\times n$ diagonal matrix $ {\rm diag}(s_1, s_2, \ldots, s_r, 0, \dots, 0)$. } \end{definition} For any matrix $M \in {\rm M}_n(\mathbb{Z})$, the Smith Normal Form of $M$ exists and is unique. If $D \in {\rm M}_n(\mathbb{Z})$ is a diagonal matrix with entries $d_1,d_2,\ldots,d_n$ then clearly ${\rm Coker}(D)\cong \mathbb{Z}_{d_1}\oplus\mathbb{Z}_{d_2}\oplus \cdots \oplus \mathbb{Z}_{d_n}$. In the end we have the following. \begin{proposition}\label{Prop:Smithnormalform} Let $M \in {\rm M}_n(\mathbb{Z})$, and let $S$ denote the Smith Normal Form of $M$. Suppose the diagonal entries of $S$ are $s_1,s_2,\ldots,s_n$. Then $${\rm Coker}(M)\cong \mathbb{Z}_{s_1}\oplus\mathbb{Z}_{s_2}\oplus \cdots \oplus \mathbb{Z}_{s_n}.$$ In particular, if there are no zero entries in the Smith Normal Form of $M$, then $|{\rm Coker}(M)| = s_1 s_2 \cdots s_n = |{\rm det}(S)| = |{\rm det}(M)|$. \end{proposition} The key computational device we will utilize to compute the Smith Normal Form of a matrix $M$ is the following. \medskip {\bf Determinant Divisors Theorem.} \cite[Theorem II.9]{N} Let $M \in {\rm M}_n(\mathbb{Z})$. Define $\alpha_0 := 1$, and, for each $1\leq i \leq n$, define the \emph{$i^{th}$ determinant divisor of} $M$ to be the integer $$\alpha_i := \text{ the greatest common divisor of the set of all } i \times i \text{ minors of } M.$$ Let $s_1, s_2, \ldots, s_n$ denote the diagonal entries of the Smith Normal Form of $M$, and assume that each $s_i$ is nonzero. Then $$s_i = \frac{\alpha_{i}}{\alpha_{i-1}}$$ for each $1 \leq i \leq n$. \medskip Suppose now that $E$ is a finite directed graph having $n$ vertices $v_1, v_2, \dots, v_n$. Consider the matrix $I_n - A_E^t$, where $I_n$ is the $n \times n$ identity matrix. As above, we may view $I_n - A_E^t$ as a linear transformation $\mathbb{Z}^n \rightarrow \mathbb{Z}^n$.
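The Determinant Divisors Theorem yields a direct (if computationally inefficient) way to obtain the diagonal of a Smith Normal Form without performing any row or column reduction. A minimal Python sketch, with function names of our own choosing and assuming the cokernel is finite (so that no $s_i$ is zero):

```python
from itertools import combinations
from math import gcd

# Sketch of the Determinant Divisors Theorem (helper names ours): alpha_i is
# the gcd of all i x i minors, and the SNF diagonal is s_i = alpha_i/alpha_{i-1}.
def det(M):
    """Integer determinant by Laplace expansion along the first row
    (adequate for the small matrices considered here)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** c * M[0][c] * det([row[:c] + row[c + 1:] for row in M[1:]])
               for c in range(n))

def snf_diagonal(M):
    """Diagonal entries of the Smith Normal Form of an integer matrix M,
    assuming all determinant divisors are nonzero."""
    n = len(M)
    alphas = [1]  # alpha_0 := 1
    for i in range(1, n + 1):
        g = 0
        for rows in combinations(range(n), i):
            for cols in combinations(range(n), i):
                g = gcd(g, abs(det([[M[r][c] for c in cols] for r in rows])))
        alphas.append(g)
    return [alphas[i] // alphas[i - 1] for i in range(1, n + 1)]

# alpha_1 = gcd(1,2,3,4) = 1 and alpha_2 = |det| = 2, so the SNF is diag(1,2).
assert snf_diagonal([[1, 2], [3, 4]]) == [1, 2]
```

The double gcd over all row and column subsets is exponential in $n$, which is harmless here precisely because the theorem below reduces the relevant computation to a fixed $j \times j$ matrix.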
Invoking the discussion in \cite[Section 3]{AALP}, in the case $L_K(E)$ is purely infinite simple we have that $$K_0(L_K(E)) \cong M_E^* \cong {\rm Coker}(I_n - A_E^t).$$ Under this isomorphism $[v_i ] \mapsto \vec{b_i} + {\rm Im }(I_n - A_E^t)$, where $\vec{b_i}$ is the element of $\mathbb{Z}^n$ which is $1$ in the $i^{th}$ coordinate and $0$ elsewhere. In other words, when $L_K(E)$ is purely infinite simple, then $K_0(L_K(E))$ is the cokernel of the linear transformation $I_n - A_E^t: \mathbb{Z}^n \rightarrow \mathbb{Z}^n$ induced by matrix multiplication. \begin{example} Suppose $E=R_m$ ($m \geq 2$), the ``rose with $m$ petals" graph having one vertex and $m$ loops. Because $m\geq 2$, $L_K(R_m)$ is purely infinite simple. Here $A_E$ is the $1 \times 1$ matrix $(m)$, so $I_1 -A_{R_m}^t$ is the $1 \times 1$ matrix $(1-m)$, and we have $$K_0(L_K(R_m))\cong \mathbb{Z}/(1-m)\mathbb{Z} \cong \mathbb{Z}_{m-1}.$$ \end{example} Proposition \ref{Prop:Smithnormalform} then yields the following. \begin{proposition}\label{Prop:K_0andSNF} Suppose $E$ is a finite graph with $|E^0|=n$, and suppose also that $L_K(E)$ is purely infinite simple. Let $S$ be the Smith Normal Form of the matrix $I_n-A_E^t$, with diagonal entries $s_1,s_2,\ldots,s_n$. Then $$K_0(L_K(E))\cong \mathbb{Z}_{s_1}\oplus\mathbb{Z}_{s_2}\oplus \cdots \oplus \mathbb{Z}_{s_n}.$$ \end{proposition} Moreover, if $K_0(L_K(E))$ is finite, then an analysis of the Smith Normal Form of the matrix $I_n - A_E^t$ yields $$| K_0(L_K(E)) | = | {\rm det}(I_n - A_E^t) |.$$ (This is immediate, since as noted above any invertible matrix in ${\rm M}_n(\mathbb{Z})$ has determinant $\pm 1$.) Conversely, $K_0(L_K(E))$ is infinite if and only if ${\rm det}(I_n - A_E^t) = 0.$ In \cite{H}, Haselgrove introduces for each pair of positive integers $n,k$ the integer $$ a_k(n) := \prod_{\ell = 0}^{n-1} (1 - \omega_{\ell} - \omega_{\ell}^k), $$ where $\omega_\ell = \cos( \frac{2 \pi \ell}{n}) + i \sin (\frac{2 \pi \ell}{n})$ in $\mathbb{C}$.
(That this expression indeed yields an integer follows from some elementary number theory.) Subsequently, in \cite[Definition 2.2]{AA} the integer $$|a_k(n)| \ \mbox{is denoted by } H_k(n),$$ and, for fixed $k$, the sequence $$H_k(1), H_k(2), H_k(3), \dots$$ is referred to as the {\it $k^{th}$ Haselgrove sequence}. It is of historical interest to note that Haselgrove's motivation for considering these integers $a_k(n)$ was for their potential use in establishing a connection between a resolution of Fermat's Last Theorem (at the time, of course, Fermat's Last {\it Conjecture}) and some integers which share properties of the Mersenne numbers. We recall some properties of the integers $a_k(n)$ and $H_k(n)$ (established in \cite{AA}) which show why these are germane in the current context, and then finish the section with a new result which will be useful in the sequel. \medskip {\bf Proposition.} (See \cite[Section 2]{AA}) Let $n\in \mathbb{N}$ and $0\leq k \leq n-1$. \begin{enumerate} \item $a_k(n) = \det(I_n-A_{C_n^k}^t)$. (This is established using the notion of {\it circulant} matrices.) \item $ \det(I_n-A_{C_n^k}^t) \leq 0.$ (This is established by some elementary analysis in $\mathbb{C}$.) \newline \hspace{.5in} In particular, $\det(I_n-A_{C_n^k}^t) = -H_k(n)$. \item If $H_k(n)>0$, then $ |K_0(L_K(C_n^k))| = H_k(n) = | {\rm Coker}(I_n-A_{C_n^k}^t)|.$ \item $H_k(n)=0$ if and only if $K_0(L_K(C_n^k))$ is infinite. \end{enumerate} \begin{proposition}\label{Lem:H_k(n)=0} Let $n \in \mathbb{N}$ and $0\leq k \leq n-1$. Then $H_k(n) = 0$ if and only if $k \equiv 5 \pmod{6}$ and $n \equiv 0 \pmod{6}$. \end{proposition} \begin{proof} By the previous recollections from \cite{AA} we have \[ H_k(n) = -\prod_{\ell = 0}^{n-1} (1 - \omega_{\ell} - \omega_{\ell}^k), \] where $\omega_\ell = \cos( \frac{2 \pi \ell}{n}) + i \sin (\frac{2 \pi \ell}{n})$ in $\mathbb{C}$.
Then $H_k(n) = 0$ if and only if one of the factors $1 - \omega_{\ell} - \omega_{\ell}^k$ equals $0$ for some $0 \leq \ell \leq n-1$; that is, if and only if $\omega_{\ell} + \omega_{\ell}^k = 1$ for some such $\ell$. Letting $\theta = \frac{2 \pi \ell}{n}$ (so that $0 \leq \theta < 2 \pi$), we have $$\cos(\theta) + \cos(k \theta) = 1 \ \ \ \ \mbox{and} \ \ \ \ \sin(\theta) + \sin(k \theta) = 0. $$ The second equation implies that $k \theta \equiv \pi + \theta \pmod {2 \pi}$ or that $k \theta \equiv -\theta \equiv 2\pi - \theta \pmod{2 \pi}$. In the first case, \[ \cos(\theta) + \cos(k \theta) = \cos(\theta) + \cos(\pi + \theta) = 0, \] which contradicts the first equation. Hence the second condition must hold, and \[ \cos(\theta) + \cos(k \theta) = \cos(\theta) + \cos(2\pi-\theta) = 2 \cos(\theta) = 1. \] Hence, $\theta = \frac{\pi}{3}$ or $\theta = \frac{5\pi}{3}$. Substituting $\theta = \frac{2 \pi \ell}{n}$, we see that \[ \frac{2 \pi \ell}{n} = \frac{\pi}{3} \implies n = 6\ell, \ \ \ \ \mbox{or} \ \ \ \ \frac{2 \pi \ell}{n} = \frac{5\pi}{3} \implies 5n = 6\ell. \] Since $\gcd(5,6)=1$, in either case $n \equiv 0 \pmod{6}$. Similarly, $k \theta \equiv 2\pi - \theta \pmod{2\pi}$ implies that $(k+1)\theta \equiv 2 \pi \equiv 0 \pmod{2\pi}$. Hence, for some integer $m$, \[ (k+1) \frac{\pi}{3} = 2 m \pi \implies k+1 = 6 m, \ \ \ \ \mbox{or} \ \ \ \ (k+1) \frac{5\pi}{3} = 2 m \pi \implies 5(k+1) = 6 m. \] In either case, $k+1 \equiv 0 \pmod{6}$, that is, $k \equiv 5 \pmod{6}$. Conversely, when $n \equiv 0 \pmod{6}$ and $k\equiv 5 \pmod{6}$, then letting $\ell = \frac{n}{6}$ gives \[ 1 - \omega_{\ell} - \omega_{\ell}^k = 1 - e^{\frac{2 \pi i}{6}} - (e^{\frac{2 \pi i}{6}})^k = 1 - e^{\frac{2 \pi i}{6}} - (e^{\frac{2 \pi i}{6}})^{-1} = 1 - 2\cos(\tfrac{\pi}{3}) = 0. \] Since one of the factors of $H_k(n)$ is zero, we conclude that $H_k(n) = 0$.
\end{proof} \medskip \section{The Smith Normal Form of the matrix $I_n - A_{C_n^j}^t$}\label{Section:SmithnfC_n^3} In order to investigate the Leavitt path algebras $\{L_K(C_n^j) \ | \ n\in \mathbb{N}\}$, we begin in a manner similar to that used in the case $C_n^2$ studied in \cite[Section 4]{AA}. The generating relations for $M_{C_n^j}^*$ are given by $$[v_i] = [v_{i+1}] + [v_{i+j}] $$ for $1\leq i \leq n$, where subscripts are interpreted ${\rm mod }\ n$. We will focus on the element $[v_1]$ of $M_{C_n^j}^*$; corresponding to any statement established for $[v_1]$ in $M_{C_n^j}^*$, there will be (by the symmetry of the relations) an analogous statement in $M_{C_n^j}^*$ for each $[v_i]$, $1\leq i \leq n$. The computation of the Smith Normal Form of the $n\times n$ matrix $I_n - A_{C_n^j}^t$ is the key tool for determining $K_0$ of the Leavitt path algebra $L_K(C_n^j)$. We show below that this computation reduces to calculating the Smith Normal Form of a $j \times j$ matrix. The authors are quite grateful to M. Iovanov for suggesting this approach. \begin{definition}{\rm Let $ p(x) = x^j + c_{j-1} x^{j-1} + \dots + c_1 x + c_0 $ be a degree $j$ monic polynomial with integer coefficients. The \emph{companion matrix} $M(p)$ of $p(x)$ is the $j \times j$ matrix \[ M(p) := \mat{ 0 & 0 & 0 & \dots & 0 & -c_0 \\ 1 & 0 & 0 & \dots & 0 & -c_1 \\ 0 & 1 & 0 & \dots & 0 & -c_2 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \dots & 0 & -c_{j-2} \\ 0 & 0 & 0 & \dots & 1 & -c_{j-1} }. \] For $j \geq 2$ we define $p_j(x) = x^j - x^{j-1} - 1 \in \mathbb{Z}[x]$. The companion matrix of $p_j(x)$, which we will denote by $M_j$, is then the $j \times j$ matrix \[ M_j := M(p_j(x)) = \mat{ 0 & 0 & 0 & \dots & 0 & 1 \\ 1 & 0 & 0 & \dots & 0 & 0 \\ 0 & 1 & 0 & \dots & 0 & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & \dots & 0 & 0 \\ 0 & 0 & 0 & \dots & 1 & 1 }.
\] } \end{definition} \begin{remark} Clearly the two matrices $I_n - A_{C_n^j}^t$ and $A_{C_n^j} - I_n$ have the same Smith Normal Form (i.e., have isomorphic cokernels). In the sequel we choose to analyze the latter, because it is easier to work with computationally. A similar statement holds for the matrices $M_j^n - I_j$ and $(M_j^n)^t - I_j$. We note that in a more general analysis of the structure of Leavitt path algebras than the one carried out here, such an interchange might possibly forfeit some important information. \end{remark} \begin{theorem}\label{Thm:SNF} Let $n \geq j$. Then ${\rm Coker}(A_{C_n^j} - I_n) \cong {\rm Coker}(M_j^{n} - I_j)$. \end{theorem} \begin{proof} The proof in the case $j \leq n \leq 2j$ is quite similar to the proof which we give here for the case $n>2j$, but requires some extra computational and notational energy; we therefore leave the proof of the $j \leq n \leq 2j$ case to the interested reader. Thus we assume that $n>2j$. By definition, the entry in the $k^{\mathrm{th}}$ row and $\ell^{\mathrm{th}}$ column of $A_{C_n^j} - I_n$ is given by \[ ( A_{C_n^j} - I_n)_{k \ell} = \begin{cases} -1 & \text{ if $\ell = k$ } \\ 1 & \text{ if $\ell \equiv k+1 \text{ or } k+j \pmod{n}$ } \\ 0 & \text{ otherwise. } \end{cases} \] Our goal is to obtain a diagonal matrix with integers along the diagonal through elementary row and column operations involving only integer multiples.
We first note that the bottom left $j \times j$ submatrix of the matrix $A_{C_n^j} - I_n$ can be written as $M_j$ with the columns cyclically permuted: \[ \mat{ 1 & 0 & 0 & \dots & 0 & 0 \\ 0 & 1 & 0 & & 0 & 0 \\ 0 & 0 & 1 & & 0 & 0 \\ \vdots & & & \ddots & & \vdots \\ 0 & 0 & 0 & \dots & 1 & 0 \\ 1 & 0 & 0 & \dots & 0 & 1 \\ } = M_j P, \] where $P$ is the $j \times j$ permutation matrix \[ P = \mat{ 0 & 1 & 0 & \dots & 0 & 0 \\ 0 & 0 & 1 & & 0 & 0 \\ 0 & 0 & 0 & & 0 & 0 \\ \vdots & & & \ddots & & \vdots \\ 0 & 0 & 0 & \dots & 0 & 1 \\ 1 & 0 & 0 & \dots & 0 & 0 \\ } \] The first $(n-2j)$ reduction steps of the Smith Normal Form will result in an $(n-2j) \times (n-2j)$ identity submatrix in the upper left corner. On the bottom $j$ rows, the $i^{\mathrm{th}}$ reduction step adds the $i^{\mathrm{th}}$ column to the $(i+1)^{\mathrm{th}}$ column and the $(i+j)^{\mathrm{th}}$ column, then zeroes out the $i^{\mathrm{th}}$ column. The matrix that accomplishes this reduction step is \[ \mat{ \mathbf{v}_i & \mathbf{v}_{i+1} & \dots & \mathbf{v}_{i+j-1} \\ } \cdot \mat{ 1 & 0 & 0 & \dots & 0 & 1 \\ 1 & 0 & 0 & & 0 & 0 \\ 0 & 1 & 0 & & 0 & 0 \\ \vdots & & & \ddots & & \vdots \\ 0 & 0 & 0 & & 0 & 0 \\ 0 & 0 & 0 & \dots & 1 & 0 \\ } = \mat{ \mathbf{v}_i + \mathbf{v}_{i+1} & \dots & \mathbf{v}_{i+j-1} & \mathbf{v}_{i} \\ }, \] and this matrix is conjugate to the companion matrix $M_j$ via the permutation matrix $P$: \[ \mat{ 1 & 0 & 0 & \dots & 0 & 1 \\ 1 & 0 & 0 & & 0 & 0 \\ 0 & 1 & 0 & & 0 & 0 \\ \vdots & & & \ddots & & \vdots \\ 0 & 0 & 0 & & 0 & 0 \\ 0 & 0 & 0 & \dots & 1 & 0 \\ } = P^{-1} M_j P. \] After $i$ reduction steps, the first $j \times j$ submatrix with nonzero column vectors on the bottom $j$ rows will be \[ M_j P \cdot (P^{-1} M_j P)^{i} = M_j^{i+1} P. 
\] Denote by $Q$ the $j \times j$ matrix \[ Q = \mat{ -1 & 1 & 0 & \dots & 0 & 0 \\ 0 & -1 & 1 & & 0 & 0 \\ 0 & 0 & -1 & & 0 & 0 \\ \vdots & & & \ddots & & \vdots \\ 0 & 0 & 0 & & -1 & 1 \\ 0 & 0 & 0 & \dots & 0 & -1 \\ }. \] Then after $n-2j$ reduction steps, we have \[ A_{C_{n}^{j}} - I_n \sim \mat{ I_{n-2j} & 0_{(n-2j) \times j} & 0_{(n-2j) \times j} \\ 0_{j \times (n-2j)} & Q & M_j P \\ 0_{j \times (n-2j)} & M_j^{n-2j+1} P & Q \\ }. \] Because each reduction step only adds previous columns to the following existing columns, the next $j$ reduction steps result in \[ A_{C_{n}^{j}} - I_n \sim \mat{ I_{n-j} & 0_{(n-j) \times j} \\ 0_{j \times (n-j)} & M_j^{n-j+1} P + Q \\ }. \] Finally, we adjust the bottom right $j \times j$ matrix by adding the $(n-j+1)^{\mathrm{th}}$ column through the $(n-1)^{\mathrm{th}}$ column to the $n^{\mathrm{th}}$ column, adding the $(n-j+1)^{\mathrm{th}}$ column through the $(n-2)^{\mathrm{th}}$ column to the $(n-1)^{\mathrm{th}}$ column, and so on until adding the $(n-j+1)^{\mathrm{th}}$ column to the $(n-j+2)^{\mathrm{th}}$ column. This procedure is equivalent to multiplying $M_j^{n-j+1} P + Q$ on the right by the $j \times j$ matrix \[ R = \mat{ 1 & 1 & 1 & \dots & 1 & 1 \\ 0 & 1 & 1 & & 1 & 1 \\ 0 & 0 & 1 & & 1 & 1 \\ \vdots & & & \ddots & & \vdots \\ 0 & 0 & 0 & & 1 & 1 \\ 0 & 0 & 0 & \dots & 0 & 1 \\ }. \] A straightforward calculation shows that \[ P R = M_j^{j-1} \qquad \text{and} \qquad Q R = -I_j, \] so that the final bottom right $j \times j$ submatrix is \[ (M_j^{n-j+1} P + Q) R = M_j^n - I_j. \] Therefore, \[ A_{C_{n}^{j}} - I_n \sim \mat{ I_{n-j} & 0_{(n-j) \times j} \\ 0_{j \times (n-j)} & M_j^n - I_j \\ } \] and thus by Remark \ref{isomorphicCokerremark}, we conclude that ${\rm Coker}(A_{C_n^j} - I_{n}) \cong {\rm Coker}(M_j^n - I_j)$.
\end{proof} \section{The case $j=3$, and the case $j=2$ (briefly) revisited}\label{Sectionjequals2and3} We have shown in Theorem \ref{Thm:SNF} that for any $n \geq j$, the cokernel of the $n \times n$ matrix $A_{C_n^j} - I_n$ is isomorphic to the cokernel of the $j\times j$ matrix $M_j^n - I_j$. In this section we investigate in detail the specific situation when $j=3$. We do so for two reasons: this case will provide some insight as to how the general case works, and, as it turns out, the $j=3$ case provides a sort of ``sweet spot" in the general setting. We conclude the section by showing how Theorem \ref{Thm:SNF} dramatically simplifies the proof of the corresponding result in the $j=2$ case as compared to the proof given in \cite{AA}. An important role in the $j=3$ case will be played by the elements of the {\it Narayana's Cows sequence} $G$, defined recursively by setting $$G(1) = 1, \ \ G(2) = 1, \ \ G(3) = 1, \ \mbox{ and } \ G(n) = G(n-1) + G(n-3) \mbox{ for all $n \geq 4$}.$$ (We may also define $G(0) = 0$, $G(-1)=0$, $G(-2)=1$, and $G(-3)=0$, consistent with the given recursion equation.) This name is used in the Online Encyclopedia of Integer Sequences \cite[Sequence A000930]{OEIS}. (Indeed, this sequence has gained some notoriety in popular culture, including the composition of a musical piece based on it.) The first few terms of the sequence $G(n)$ ($n\geq 1$) are: $$G: \ \ 1,1,1,2,3,4,6,9,13,19,28,41,60,88,129, \dots$$ By the aforementioned \cite[Proposition 1.3]{AA} we have that $M_{C_n^3}^*$ is indeed a group.
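The defining recursion of $G$ is immediate to implement; the following short sketch (the function name is ours) reproduces the displayed terms:

```python
# Sketch (function name ours): the Narayana's Cows sequence, with
# G(0) = 0 and G(1) = G(2) = G(3) = 1, and G(k) = G(k-1) + G(k-3).
def narayana(k):
    """Return G(k) for k >= 0."""
    vals = [0, 1, 1, 1]          # G(0), G(1), G(2), G(3)
    while len(vals) <= k:
        vals.append(vals[-1] + vals[-3])
    return vals[k]

# The first fifteen terms, matching the list displayed above.
assert [narayana(k) for k in range(1, 16)] == \
    [1, 1, 1, 2, 3, 4, 6, 9, 13, 19, 28, 41, 60, 88, 129]
```

(Negative indices, used occasionally in the text, can be handled by running the recursion $G(k-3) = G(k) - G(k-1)$ backwards; the sketch above restricts to $k \geq 0$.)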
In $M_{C_n^3}^*$ we have \begin{align*} [v_1] & = [v_2] + [v_4] = ([v_3] + [v_5]) + [v_4] = ([v_4] + [v_6]) + [v_5] + [v_4] \\ & = 2[v_4] + [v_5] + [v_6] = 2([v_5] + [v_7]) + [v_5] + [v_6] \\ & = 3[v_5] + [v_6] + 2[v_7] = \cdots \end{align*} which by an easy induction gives, for $1\leq i \leq n$, $$[v_1] = G(i-1)[v_{i-1}] + G(i-3)[v_i] + G(i-2)[v_{i+1}].$$ Thus the Narayana's Cows sequence is intimately related to the structure of $M_{C_n^3}^*$. Setting $i=n$, and using that $[v_{n+1}] = [v_1]$ by notational convention, we get in particular that $[v_1] = G(n-1)[v_{n-1}] + G(n-3)[v_n] + G(n-2)[v_1]$, so that \begin{equation*} 0 = G(n-1)[v_{n-1}] + G(n-3)[v_n] + (G(n-2)-1)[v_1] \ \ \ \ \mbox{in} \ M_{C_n^3}^*. \end{equation*} \bigskip It will be quite useful to have an expression for the $3\times 3$ matrix $M_{3}^n$ in terms of the Narayana's Cows sequence, which is the content of the following lemma. \begin{lemma}\label{Lem:M3n} Let $G$ denote the Narayana's Cows sequence. Then for each $n \in \mathbb{N}$, \[ M_3^{n} = \mat{ G(n-2) & G(n-1) & G(n) \\ G(n-3) & G(n-2) & G(n-1) \\ G(n-1) & G(n) & G(n+1) } . \] \end{lemma} \begin{proof} The proof is by induction. As mentioned previously, we may extend the $G(n)$ sequence by setting $G(0)=0$, $G(-1)=0$, and $G(-2)=1$. Thus we have that the statement is true for $n=1$: \[ M_3 = \mat{ G(-1) & G(0) & G(1) \\ G(-2) & G(-1) & G(0) \\ G(0) & G(1) & G(2) }. \] Now suppose that \[ M_3^{n-1} = \mat{ G(n-3) & G(n-2) & G(n-1) \\ G(n-4) & G(n-3) & G(n-2) \\ G(n-2) & G(n-1) & G(n) }. \] Then \begin{align*} M_3^n = M_3^{n-1} \cdot M_3 &= \mat{ G(n-3) & G(n-2) & G(n-1) \\ G(n-4) & G(n-3) & G(n-2) \\ G(n-2) & G(n-1) & G(n) } \cdot \mat{ 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 1 } \\ & = \mat{ G(n-2) & G(n-1) & G(n-3) + G(n-1) \\ G(n-3) & G(n-2) & G(n-4) + G(n-2) \\ G(n-1) & G(n) & G(n-2) + G(n) } \\ & = \mat{ G(n-2) & G(n-1) & G(n) \\ G(n-3) & G(n-2) & G(n-1) \\ G(n-1) & G(n) & G(n+1) } \end{align*} as desired.
\end{proof} \begin{corollary}\label{Smithnf3} For all $n \geq 3$, \[ {\rm Coker}(A_{C_n^3} - I_n) \cong {\rm Coker} \mat{ G(n-2)-1 & G(n-3) & G(n-1) \\ G(n-1) & G(n-2)-1 & G(n) \\ G(n) & G(n-1) & G(n+1)-1 }, \] where the $G$'s are the Narayana's Cows numbers. \end{corollary} \begin{proof} The result follows immediately from Theorem~\ref{Thm:SNF} and Lemma~\ref{Lem:M3n}. \end{proof} We now analyze the Smith Normal Form for the matrix $M_3^n - I_3$. \begin{definition}\label{DefDeterminantDivisors} Let $n \in \mathbb{N}$. By Lemma \ref{Lem:M3n} we have \[ (M_3^n)^t - I_3 = \mat{ G(n-2)-1 & G(n-3) & G(n-1) \\ G(n-1) & G(n-2)-1 & G(n) \\ G(n) & G(n-1) & G(n+1)-1 }. \] For $i=1,2,3$ and each $n\geq 1$, the corresponding $i$-th determinant divisors $ \alpha_1(n),\alpha_2(n),$ and $\alpha_3(n)$ of $(M_3^n)^t - I_3$ have the following values. \medskip \underline{$i=1$}: \ $\alpha_1(n)$ is the greatest common divisor of the nine $1\times 1$ minors of $(M_3^n)^t - I_3$, i.e., of the nine entries of $(M_3^n)^t - I_3$. By eliminating repeated entries, we see that $\alpha_1(n)$ is the greatest common divisor of five integers, to wit: $$\alpha_1(n) = \gcd \{ \ G(n-2)-1, \ G(n-3), \ G(n-1), \ G(n), \ G(n+1)-1 \ \}. $$ \medskip \underline{$i=2$}: \ $\alpha_2(n)$ is the greatest common divisor of the nine $2\times 2$ minors of $(M_3^n)^t - I_3$, i.e., of the determinants of the nine $2\times 2$ submatrices of $(M_3^n)^t - I_3$.
By doing the standard computation for determinants of $2\times 2$ matrices, and then eliminating repeated results, we see that $\alpha_2(n)$ is the greatest common divisor of six integers, to wit: \medskip $\alpha_2(n) = \ \gcd \{ \ (G(n-2)-1)^2-G(n-1)G(n-3), \ \ (G(n-2)-1)G(n-1)-G(n)G(n-3), \ $ \vspace{.05in} $\hspace{1in} (G(n-2)-1)(G(n+1)-1)-G(n)G(n-1), \ \ $ $ \ (G(n-2)-1)G(n)-G(n-1)^2,\ $ \vspace{.05in} $\hspace{1in} G(n-1)(G(n+1)-1)-G(n)^2, \ \ G(n-3)(G(n+1)-1)-G(n-1)^2 \ \} .$ \medskip \underline{$i=3$}: \ $\alpha_3(n)$ is the greatest common divisor of the single $3\times 3$ minor of $(M_3^n)^t - I_3$; in other words, $$\alpha_3(n) = \ | {\rm det}((M_3^n)^t - I_3) |.$$ \end{definition} \medskip By Proposition \ref{Lem:H_k(n)=0}, since $3 \not\equiv 5 \pmod{6}$, we have that ${\rm Coker}(I_n - A_{C_n^3}^t)$ is finite, so that by Theorem \ref{Thm:SNF} ${\rm Coker}((M_3^n)^t - I_3)$ is also finite, which yields that necessarily none of the entries in the Smith Normal Form of $(M_3^n)^t - I_3$ is zero. Therefore, by the Determinant Divisors Theorem, the Smith Normal Form of the matrix $(M_3^n)^t - I_3$ is given by: \[ {\rm SNF}((M_3^n)^t - I_3) = \mat{ \alpha_1(n) & 0 & 0 \\ 0 & \frac{\alpha_2(n)}{\alpha_1(n)} & 0 \\ 0 & 0 & \frac{\alpha_3(n)}{\alpha_2(n)} }, \] \smallskip \noindent where the values of $\alpha_1(n), \alpha_2(n),$ and $ \alpha_3(n)$ are as presented in Definition \ref{DefDeterminantDivisors}. As a consequence, Corollary \ref{Smithnf3} immediately yields the following result. \begin{corollary}\label{Cor:K_0intermsofalpha} Let $n \in \mathbb{N}$. Then $$K_0(L_K(C_n^3)) \cong \mathbb{Z}_{\alpha_1(n)} \oplus \mathbb{Z}_{\frac{\alpha_2(n)}{\alpha_1(n)}} \oplus \mathbb{Z}_{\frac{\alpha_3(n)}{\alpha_2(n)}}.$$ \end{corollary} Corollary \ref{Cor:K_0intermsofalpha} is the key ingredient we will utilize to prove the main result about the structure of $K_0(L_K(C_n^3))$.
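The computation just described is easy to mechanize. The following Python sketch (helper names are ours; $n \geq 3$ is assumed) computes $\alpha_1(n)$, $\alpha_2(n)$, and $\alpha_3(n)$ directly from the Narayana's Cows entries of $(M_3^n)^t - I_3$, and returns the diagonal of its Smith Normal Form:

```python
from itertools import combinations
from math import gcd

# Sketch (helper names ours): the invariant factors of K_0(L_K(C_n^3)),
# via the determinant divisors of (M_3^n)^t - I_3 (entries are Narayana's
# Cows numbers, per the lemma above).  Assumes n >= 3.
def G(k):
    vals = [0, 1, 1, 1]                  # G(0), G(1), G(2), G(3)
    while len(vals) <= k:
        vals.append(vals[-1] + vals[-3])
    return vals[k]

def invariant_factors(n):
    M = [[G(n - 2) - 1, G(n - 3), G(n - 1)],
         [G(n - 1), G(n - 2) - 1, G(n)],
         [G(n), G(n - 1), G(n + 1) - 1]]
    # alpha_1: gcd of the nine entries.
    a1 = 0
    for row in M:
        for x in row:
            a1 = gcd(a1, abs(x))
    # alpha_2: gcd of the nine 2x2 minors.
    a2 = 0
    for r in combinations(range(3), 2):
        for c in combinations(range(3), 2):
            a2 = gcd(a2, abs(M[r[0]][c[0]] * M[r[1]][c[1]]
                             - M[r[0]][c[1]] * M[r[1]][c[0]]))
    # alpha_3 = |det M|, which equals H_3(n).
    a3 = abs(M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
             - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
             + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))
    return [a1, a2 // a1, a3 // a2]
```

For instance, $n=7$ gives $\alpha_1(7)=2$, $\alpha_2(7)=4$, $\alpha_3(7)=8$, so `invariant_factors(7)` returns `[2, 2, 2]`.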
\begin{remark} {\rm Our goal for the remainder of this section will be to present a more efficient description of the determinant divisors $\alpha_1(n), \alpha_2(n)$ and $\alpha_3(n)$ than those given in Definition \ref{DefDeterminantDivisors}. Our motivation is as follows. In \cite{AA}, a description of $K_0(L_K(C_n^2))$ is given in terms of greatest common divisors of pairs of integers involving terms of the Fibonacci sequence. Our aim here is to establish the analogous result, by describing $K_0(L_K(C_n^3))$ in terms of greatest common divisors of triples of integers involving terms of the Narayana's Cows sequence. } \end{remark} For integers $a,b$, $\gcd \{a,b\}$ denotes the greatest common divisor of $a$ and $b$. A key role will be played by the following integer. \begin{definition}\label{Def:d_3} {\rm For any positive integer $n$ we define $$d_3(n) := \gcd \{ \ G(n-1), \ G(n-3), \ G(n-2)-1 \ \}.$$ \noindent The first few terms of the sequence $d_3(n)$ ($n\geq 1$) are: $$d_3: \ \ 1,1,1,1,1,1,2,3,1,1,1,1,1,4,1,3,1,1, \dots $$} \end{definition} To begin with, we show that the first determinant divisor $\alpha_1(n)$ coincides with the integer $d_3(n)$ given in Definition \ref{Def:d_3}. To achieve this, we recall the following well-known property of greatest common divisors: if $b_{n+1}$ is a $\mathbb{Z}$-linear combination of the integers $b_1, b_2, \dots, b_n$, then $$ (\ast) \ \ \ \ \ \ \ \ {\rm gcd} \{ b_1, b_2, \dots, b_n, b_{n+1} \}= {\rm gcd}\{b_1, b_2, \dots, b_n\}.$$ \begin{lemma}\label{Lem:alpha_1=d_3} Let $n \in \mathbb{N}$ and $d_3(n) = {\rm gcd} \{G(n-1),G(n-3), G(n-2)-1\}$. Then $\alpha_1(n)= d_3(n)$.
\end{lemma} \begin{proof} According to Definition \ref{DefDeterminantDivisors}, $$\alpha_1(n) := {\rm gcd}\{G(n-2)-1,G(n-3),G(n-1),G(n),G(n+1)-1\}.$$ But $G(n+1)-1 = G(n) + (G(n-2) - 1)$, and in turn $G(n) = G(n-3) + G(n-1)$, so applying $(\ast)$ twice in order gives \begin{align*} \alpha_1(n) & = {\rm gcd}\{ \ G(n-2)-1, \ G(n-3), \ G(n-1), \ G(n), \ G(n+1)-1\ \} \\ & = {\rm gcd}\{ \ G(n-2)-1, \ G(n-3), \ G(n-1), \ G(n) \ \} \\ & = {\rm gcd}\{ \ G(n-2)-1, \ G(n-3), \ G(n-1) \ \} = d_3(n). \hfill \qedhere \end{align*} \end{proof} We now focus our attention on analyzing the second determinant divisor $\alpha_2(n)$ given in Definition \ref{DefDeterminantDivisors}. Thanks to property $(\ast)$ we may reduce the number of terms appearing in its expression. \begin{definition}{\rm Let $n \in \mathbb{N}$. We define \begin{align*} d'_3(n) = {\rm gcd} & \{G(n-1)G(n-3)-(G(n-2)-1)^2, \\ & \ G(n)G(n-3)-G(n-1)(G(n-2)-1), \\ & \ G(n-1)^2-G(n)(G(n-2)-1)\}. \end{align*}} \end{definition} The first terms of the $d'_3(n)$ sequence ($n\geq 1$) are: $$d'_3: \ \ 1,1,1,1,1,1,4,9,1,1,1,1,1,16,1,9,1,1,\dots$$ \begin{lemma}\label{Lem:alpha_2} Consider the second determinant divisor $\alpha_2(n)$ from Definition \ref{DefDeterminantDivisors}. Then $$\alpha_2(n) = d'_3(n).$$ \end{lemma} \begin{proof} By definition we have that \begin{align*} \alpha_2(n) := & \ {\rm gcd}\{(G(n-2)-1)^2-G(n-1)G(n-3),\\ & \ (G(n-2)-1)G(n-1)-G(n)G(n-3), \\ & \ (G(n-2)-1)(G(n+1)-1)-G(n)G(n-1),\\ & \ (G(n-2)-1)G(n)-G(n-1)^2,\\ & \ G(n-1)(G(n+1)-1)-G(n)^2, \\ & \ G(n-3)(G(n+1)-1)-G(n-1)^2 \}.
\end{align*} Taking into account that $G(n+1)-1 = G(n) + (G(n-2) - 1)$ and $G(n) = G(n-3) + G(n-1)$, and applying $(\ast)$ we have \begin{align*} \alpha_2(n) & = {\rm gcd}\{(G(n-2)-1)^2-G(n-1)G(n-3),\\ & \hskip1.1cm (G(n-2)-1)G(n-1)-G(n)G(n-3), \\ & \hskip1.1cm (G(n-2)-1)G(n)-G(n-1)^2,\\ & \hskip1.1cm G(n-1)(G(n+1)-1)-G(n)^2, \\ & \hskip1.1cm G(n-3)(G(n+1)-1)-G(n-1)^2 \}\\ & = {\rm gcd}\{(G(n-2)-1)^2-G(n-1)G(n-3),\\ & \hskip1.1cm(G(n-2)-1)G(n-1)-G(n)G(n-3), \\ & \hskip1.1cm (G(n-2)-1)G(n)-G(n-1)^2, \\ & \hskip1.1cm G(n-3)(G(n+1)-1)-G(n-1)^2 \}\\ & = {\rm gcd}\{(G(n-2)-1)^2-G(n-1)G(n-3),\\ & \hskip1.1cm (G(n-2)-1)G(n-1)-G(n)G(n-3), \\ & \hskip1.1cm (G(n-2)-1)G(n)-G(n-1)^2 \} \\ & = {\rm gcd}\{G(n-1)G(n-3)-(G(n-2)-1)^2,\\ & \hskip1.1cm G(n)G(n-3)-G(n-1)(G(n-2)-1), \\ & \hskip1.1cm G(n-1)^2-G(n)(G(n-2)-1)\}\\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \mbox{(since ${\rm gcd}\{a,b\}={\rm gcd}\{-a,b\}$ for any integers $a,b$) }\\ & = d'_3(n). \hfill \qedhere \end{align*} \end{proof} Finally, we show that the third determinant divisor $\alpha_3(n)$ appearing in Definition~\ref{DefDeterminantDivisors} exactly coincides with $H_3(n)$, the $n^{\mathrm{th}}$ term of the third Haselgrove sequence. As described above, \[ H_k(n) := \left| \det(I_n - A_{C_n^k}^t) \right| = \left| \prod_{\ell = 0}^{n-1} \left( 1 - \omega_{\ell} - \omega_{\ell}^k \right) \right| \] where $\omega_{\ell} = e^{\tfrac{2 \pi i \ell}{n}}$ ($0\leq \ell \leq n-1$) are the $n$ distinct $n^{\mathrm{th}}$ roots of unity in $\mathbb{C}$. We are particularly interested in the third Haselgrove sequence $H_3(n)$, of which the first few terms ($n\geq 1$) are: \[ H_3: \ \ 1,3,1,3,11,9,8,27,37,33,67,117,131,192,341,\dots \] This sequence has many interesting number-theoretic characteristics (e.g., $H_3(n)$ is a divisibility sequence); we investigate this and additional properties in \cite{AEG2}. \begin{proposition}\label{Cor:alpha_3=H_3} Let $n \geq 1$. 
Consider the $3 \times 3$ matrix \[ (M_3^n)^t - I_3 = \mat{ G(n-2)-1 & G(n-3) & G(n-1) \\ G(n-1) & G(n-2)-1 & G(n) \\ G(n) & G(n-1) & G(n+1)-1 }, \] and recall that $\alpha_3(n) := |{\rm det}((M_3^n)^t - I_3)|$. Then $\alpha_3(n) = H_3(n) \neq 0$. \end{proposition} \begin{proof} By Proposition~\ref{Lem:H_k(n)=0} we have $H_3(n) \neq 0$ for all $n\geq 1$. Thus we may invoke the previously cited~\cite[Proposition 2.3]{AA} and~\cite[Proposition 2.5]{AA} to get $ H_3(n) = |\det(I_n - A_{C_n^3}^t)|. $ By Theorem~\ref{Thm:SNF}, $A_{C_n^3} - I_n$ and $M_3^n - I_3$ have isomorphic cokernels. Therefore, using Proposition \ref{Prop:Smithnormalform} we get \[ |\det(I_n - A_{C_n^3}^t)| = |\det(A_{C_n^3} - I_n)| = |\det(M_3^n - I_3)|= |\det((M_3^n)^t - I_3)| = \alpha_{3}(n). \] We conclude that $\alpha_{3}(n) = H_{3}(n)$ for all $n \geq 1$. \end{proof} We are now in a position to give the main result of this section. Applying Corollary \ref{Cor:K_0intermsofalpha}, Lemma \ref{Lem:alpha_1=d_3}, Lemma \ref{Lem:alpha_2} and Proposition \ref{Cor:alpha_3=H_3}, we finally get: \begin{theorem}\label{Cor:K_0oflpasC_n^3} Let $n \in \mathbb{N}$. Then $$K_0(L_K(C_n^3)) \cong \mathbb{Z}_{d_3(n)} \oplus \mathbb{Z}_{\frac{d'_3(n)}{d_3(n)}} \oplus \mathbb{Z}_{\frac{H_3(n)}{d'_3(n)}}.$$ \end{theorem} \begin{example} Using Theorem \ref{Cor:K_0oflpasC_n^3}, here are explicit descriptions of the Grothendieck groups $K_0(L_K(C_n^3))$ which arise for small values of $n$. \begin{align*} & n=3: \ \ K_0(L_K(C_3^3)) \cong \{0\}\\ & n=4: \ \ K_0(L_K(C_4^3)) \cong \mathbb{Z}_3\\ & n=5: \ \ K_0(L_K(C_5^3)) \cong \mathbb{Z}_{11}\\ & n=6: \ \ K_0(L_K(C_6^3)) \cong \mathbb{Z}_9 \\ & n=7: \ \ K_0(L_K(C_7^3)) \cong \mathbb{Z}_2 \oplus \mathbb{Z}_2 \oplus \mathbb{Z}_2. \end{align*} It is also possible for the $K_0$ group to consist of exactly two nontrivial direct summands; for instance, $n=30$: \ $K_0(L_K(C_{30}^3)) \cong \mathbb{Z}_{31} \oplus \mathbb{Z}_{3069}$.
\end{example} \bigskip To finish this section we demonstrate that the results of~\cite{AA} follow from Theorem~\ref{Thm:SNF}. When $j=2$, the companion matrix is \[ M_2 = \mat{ 0 & 1 \\ 1 & 1 \\ }. \] Let $F(n)$ be the $n^\mathrm{th}$ Fibonacci number. Then a well-known Fibonacci identity states that \[ M_2^n - I_2 = \mat{ F(n-1) - 1 & F(n) \\ F(n) & F(n+1) - 1 \\ }. \] So the determinant divisors will be \[ \alpha_1 \overset{(*)}{=} \gcd(F(n-1)-1,F(n)) := d_2(n) \] and \[ \alpha_2 = \det(M_2^n - I_2) = (F(n+1)-1) (F(n-1)-1) - F(n)^2. \] Another Fibonacci identity states that \[ F(n+1) F(n-1) - F(n)^2 = (-1)^n. \] Putting these two together, \begin{align*} \alpha_2 &= F(n+1) F(n-1) - F(n+1) - F(n-1) + 1 - F(n)^2 \\ &= (-1)^n - F(n+1) - F(n-1) + 1 \\ &= -(F(n+1) + F(n-1) - 1 - (-1)^n), \end{align*} which is the negative of the formula for $H_2(n)$ in Equation (HtoF) of~\cite[Proposition 4.4]{AA}. Applying the Smith Normal Form, we obtain the main result of~\cite[Theorem 4.13]{AA}: \[ K_0(L_K(C_n^2)) \cong \mathbb{Z}_{d_2(n)} \oplus \mathbb{Z}_{\frac{H_2(n)}{d_2(n)}} . \] \section{Leavitt path algebras of the graphs $C_n^3$}\label{Sect:K_0lpasC_n^3} As mentioned in the introductory remarks, the primary motivation for identifying the structure of the group $M_{C_n^j}^*$ is in its realization as the Grothendieck group of the Leavitt path algebra $L_K(C_n^j)$, and subsequent utilization in identifying $L_K(C_n^j)$ up to isomorphism. In the final section of the article we bring this program to fruition. We begin by recalling briefly some of the basic ideas; for additional information, see e.g. \cite{AAS}. \smallskip {\bf Definition of Leavitt path algebra.} Let $K$ be a field, and let $E = (E^0, E^1, r,s)$ be a directed graph with vertex set $E^0$ and edge set $E^1$.
The {\em Leavitt path $K$-algebra} $L_K(E)$ {\em of $E$ with coefficients in $K$} is the $K$-algebra generated by a set $\{v\mid v\in E^0\}$, together with a set of variables $\{e,e^*\mid e\in E^1\}$, which satisfy the following relations: (V) \ \ \ \ $vw = \delta_{v,w}v$ for all $v,w\in E^0$, \ (E1) \ \ \ $s(e)e=er(e)=e$ for all $e\in E^1$, (E2) \ \ \ $r(e)e^*=e^*s(e)=e^*$ for all $e\in E^1$, (CK1) \ $e^*e'=\delta _{e,e'}r(e)$ for all $e,e'\in E^1$, (CK2) \ \ $v=\sum _{\{ e\in E^1\mid s(e)=v \}}ee^*$ for every $v\in E^0$ for which $0 < |s^{-1}(v)| < \infty$. An alternate description of $L_K(E)$ may be given as follows. For any graph $E$ let $\widehat{E}$ denote the ``double graph" of $E$, obtained by adding to $E$ an edge $e^*$ in a reversed direction for each edge $e\in E^1$. Then $L_K(E)$ is the usual path algebra $K\widehat{E}$, modulo the ideal generated by the relations (CK1) and (CK2). \hfill $\Box$ \smallskip It is easy to show that $L_K(E)$ is unital if and only if $|E^0|$ is finite. This is of course the case when $E = C_n^j$. We now have the necessary background information in hand which allows us to present the powerful tool which will yield a number of key results. \smallskip {\bf The Restricted Algebraic KP Theorem.} \cite[Corollary 2.7]{ALPS} Suppose $E$ and $F$ are finite graphs for which the Leavitt path algebras $L_K(E)$ and $L_K(F)$ are purely infinite simple. Suppose that there is an isomorphism $\varphi : K_0(L_K(E)) \rightarrow K_0(L_K(F))$ for which $\varphi([L_K(E)]) = [L_K(F)]$, and suppose also that the two integers ${\rm det}(I_{|E^0|} - A_E^t)$ and ${\rm det}(I_{|F^0|}-A_F^t)$ have the same sign (i.e., are either both nonnegative, or both nonpositive). Then $L_K(E) \cong L_K(F)$ as $K$-algebras. \smallskip The proof of the Restricted Algebraic KP Theorem utilizes deep results and ideas in the theory of symbolic dynamics. The letters K and P in its name derive from E. Kirchberg and N.C.
Phillips, who (independently in 2000) proved an analogous result for graph $C^*$-algebras. (We note that this analogous result does not include the hypothesis on the signs of the germane determinants; it is not known whether this hypothesis is required for the algebraic result, hence the addition of the word ``Restricted" to the name.) We are now also in a position to apply the Restricted Algebraic KP Theorem to explicitly realize the algebras $L_K(C_n^3)$ as the Leavitt path algebras of graphs having four vertices. The following will be important here: by \cite[Proposition 1.5]{AA}, for any pair $n,j$ we have that the identity element of the group $K_0(L_K(C_n^j))$ is the element $[L_K(C_n^j)] = \sum_{v\in E^0} [v]$. \begin{proposition}\label{Propgraphfourvertices} Let $n\in \mathbb{N}$. The Leavitt path algebra $L_K(C_n^3)$ is isomorphic to the Leavitt path algebra $L_K(E_n)$, where $E_n$ is the graph with four vertices given by $$\xymatrix{ & {\bullet}^{u_1} \ar@(lu,ru)^{(2)} \ar@/^{5pt}/[dl]\ar@/^{5pt}/[dr] \ar@/^{5pt}/[dd] & \\ {\bullet}^{u_2} \ar@/^{5pt}/[ur] \ar@/^{5pt}/[dr] \ar@/^{5pt}/[rr] \ar@(l,d)_{(2+d_3(n))}& & {\bullet}^{u_3} \ar@/^{5pt}/[ll] \ar@/^{5pt}/[dl] \ar@/^{5pt}/[ul] \ar@(u,r)^{\left(2+\tfrac{d'_3(n)}{d_3(n)}\right)} \\ & {\bullet}^{u_4} \ar@/^{5pt}/[ur] \ar@/^{5pt}/[ul] \ar@/^{5pt}/[uu] \ar@(rd,ld)^{\left(2+\tfrac{H_3(n)}{d'_3(n)}\right)} & }$$ (where the numbers in parentheses indicate the number of loops at the indicated vertex). \end{proposition} \begin{proof} Using the characterization given in \cite{AAP2}, it follows easily that the graph $E_n$ satisfies the conditions for $L_K(E_n)$ to be unital purely infinite simple.
The incidence matrix of $E_n$ is $$A_{E_n}=\left(\begin{matrix} 2 & 1 & 1 & 1\\ 1 & 2+d_3(n) & 1 & 1\\ 1 & 1 & 2+\tfrac{d'_3(n)}{d_3(n)} & 1\\ 1 & 1 & 1 & 2+\tfrac{H_3(n)}{d'_3(n)} \end{matrix}\right),$$ so that $$I_4-A^t_{E_n}=-\left(\begin{matrix} 1 & 1 & 1 & 1\\ 1 & 1+d_3(n) & 1 & 1\\ 1 & 1 & 1+\tfrac{d'_3(n)}{d_3(n)} & 1\\ 1 & 1 & 1 & 1+\tfrac{H_3(n)}{d'_3(n)} \end{matrix}\right).$$ \noindent A straightforward computation yields that the Smith Normal Form of $I_4-A^t_{E_n}$ is $$\left(\begin{matrix} 1 & 0 & 0 & 0\\ 0 & d_3(n) & 0 & 0\\ 0 & 0 & \tfrac{d'_3(n)}{d_3(n)} & 0\\ 0 & 0 & 0 & \tfrac{H_3(n)}{d'_3(n)}\end{matrix}\right),$$ which immediately yields that $K_0(L_K(E_n))$ is isomorphic to $\mathbb{Z}_{d_3(n)}\oplus \mathbb{Z}_{\frac{d'_3(n)}{d_3(n)}} \oplus \mathbb{Z}_{\frac{H_3(n)}{d'_3(n)}}$. Also, it is straightforward to check that $\det(I_4-A^t_{E_n})=-H_3(n)<0$. (We remind the reader that the sign of the determinant of a matrix cannot be gleaned from the Smith Normal Form of the matrix; specifically, one must compute the determinant of $I_4-A^t_{E_n}$ directly from the matrix itself.) Finally, by invoking the relation in $K_0(L_K(E_n))$ at $u_1$, we have \begin{align*} [u_1] + [u_2] + [u_3] + [u_4] & = (2[u_1] + [u_2] + [u_3] + [u_4]) + [u_2] + [u_3] + [u_4] \\ & = 2([u_1] + [u_2] + [u_3]+[u_4]), \end{align*} so that $\sigma = [u_1] + [u_2] + [u_3] +[u_4]$ satisfies $\sigma =2\sigma$ in the group $K_0(L_K(E_n))$, which gives that $\sigma = [u_1] + [u_2] + [u_3]+[u_4]$ is the identity element of $K_0(L_K(E_n))$.
So the purely infinite simple unital Leavitt path algebras $L_K(C_n^3)$ and $L_K(E_n)$ have these properties: \begin{enumerate} \item $K_0(L_K(C_n^3)) \cong K_0(L_K(E_n))$ (as each is isomorphic to $\mathbb{Z}_{d_3(n)}\oplus \mathbb{Z}_{\frac{d'_3(n)}{d_3(n)}} \oplus \mathbb{Z}_{\frac{H_3(n)}{d'_3(n)}}$), \item this isomorphism necessarily takes $[L_K(C_n^3)]$ to $[L_K(E_n)]$ (as each of these is the identity element in their respective $K_0$ groups), and \item both $\det(I_n-A^t_{C_n^3})$ and $\det(I_4-A^t_{E_n})$ are negative. \end{enumerate} Thus the graphs $C_n^3$ and $E_n$ satisfy the hypotheses of the Restricted Algebraic KP Theorem, and so the desired isomorphism $L_K(C_n^3) \cong L_K(E_n)$ follows. \end{proof} \begin{remark} {\rm Although it is relatively easy to produce graphs $F_n$ having three vertices for which $K_0(L_K(F_n))\cong K_0(L_K(C_n^3))$, we do not know how to produce such graphs for which $[L_K(F_n)]$ is the identity element in $K_0(L_K(F_n))$, which therefore precludes us from applying the Restricted Algebraic KP Theorem to the Leavitt path algebras of these graphs. } \end{remark} \begin{remark} {\rm Using the template afforded by the $4$-vertex graph presented in Proposition \ref{Propgraphfourvertices}, for each pair $j,n$ with $0\leq j \leq n-1$ one can easily construct a graph $E_n(j)$ having $j+1$ vertices, for which the Leavitt path algebras $L_K(E_n(j))$ and $L_K(C_n^j)$ are isomorphic. } \end{remark} A number of intriguing number-theoretic properties of the Haselgrove sequences and group-theoretic properties of the groups $K_0(L_K(C_n^j))$ arose in the context of the investigation presented in this article. For instance, as mentioned previously, the $H_k(n)$ sequences can be shown to be divisibility sequences. For another example, in the $j=3$ case we may give a more explicit description of the integers $d_3^\prime(n)$, as a product of a power of $d_3(n)$ with an ``indicator factor".
(In retrospect, we see that an analogous statement arose in the proof of the corresponding result in the $j=2$ case carried out in \cite{AA}, but this indicator factor turned out not to play a role in the invariant factors representation of the abelian group $K_0(L_K(C_n^2))$.) However, such an ``indicator factor" description does not extend to the cases $j\geq 4$; for this reason we refer to the $j=3$ case as a ``sweet spot" in this setting. These and many additional properties will be presented in \cite{AEG2}. \section*{Acknowledgements} The authors would like to thank G. Aranda Pino and M. Iovanov for fruitful discussions during the preparation of this paper. Some of these results were anticipated and suggested by looking at output from the software package \emph{Magma}. The authors are grateful to A. Viruel for his valuable help with this software. The first author was partially supported by a Simons Foundation Collaboration Grant \#208941. The third author was partially supported by the Spanish MEC and Fondos FEDER through projects MTM2013-41208-P and MTM2016-76327-C3-1-P; by the Junta de Andaluc\'{\i}a and Fondos FEDER, jointly, through project FQM-7156; and by the grant ``Ayudas para la realizaci\'on de estancias en centros de investigaci\'on de calidad" of the ``Plan Propio de Investigaci\'on y Transferencia" of the University of M\'alaga, Spain. Part of this work was carried out during a visit of the third author to the University of Colorado, Colorado Springs, USA. The third author thanks this host institution for its warm hospitality and support.
\section{Introduction} It has long been anticipated that the DESY $ep$ collider HERA would provide a good opportunity to study prompt photon production in photoproduction processes \cite{aur1}. Over the past few years various calculations of this process have been performed leading to continuous improvements in their theoretical precision \cite{bks,afg,gs,gv1}. In the most recent studies \cite{gv1,gv2} the inclusive cross section for producing a single photon was calculated fully in NLO with photon isolation effects incorporated. Gordon and Vogelsang \cite{gv1,gv2} use an approximate but nevertheless accurate analytic technique \cite{gv3,ggrv} for including isolation effects in the NLO calculation, including the fragmentation contributions. This analytic technique is only applicable to single inclusive prompt photon production and cannot be applied when a jet is also observed. The ZEUS Collaboration have reported prompt photon data \cite{zeus} and have first chosen to analyse events with a jet balancing the transverse momentum ($p_T^{\gamma}$) of the photon. In order to compare with this data a new calculation is necessary as described in outline in the next section. In all previous studies of prompt photon production at HERA, one of the common themes was the possibility of using it for measuring the photon distribution functions, particularly the gluon distribution, $g^{\gamma}(x,Q^2)$ which is presently poorly constrained by the available data. This latter fact is still true even with the availability of jet photoproduction data from both HERA and TRISTAN. Prompt photon production is particularly attractive since it is dominated in Leading Order (LO) by the hard scattering subprocess $q g\rightarrow \gamma q$, resulting in a cross section which is very sensitive to the gluon distribution. At HERA the situation is more complicated than at hadron colliders for two reasons. 
Firstly there are two particles involved in the reaction, namely the quasi-real photon (emitted by the electron which scatters at a small angle) and the proton. Both particles have distinct gluon distribution functions $g^{\gamma}$ and $g^p$, hence two different $qg$ initiated subprocesses are present, $q^p g^{\gamma}\rightarrow \gamma q$ and $q^{\gamma} g^p\rightarrow \gamma q$. Since they contribute to the cross section in different regions of pseudo-rapidity, $\eta$, it has been proposed that this may provide a means of separating them, but this has proven to be difficult to implement in the experiments. Secondly, there are two contributions to the cross section in photoproduction processes, usually labelled the direct and resolved. In the former case the quasi-real photon participates directly in the hard scattering subprocess and gives up all its energy, while in the latter, resolved, case it interacts via its partonic substructure. Thus the resolved subprocesses are sensitive to the photon structure functions whereas the direct are not. Again it was proposed that they may be separated experimentally with suitable rapidity cuts, but these studies assumed a fixed initial photon energy. Since the initial photon energy is not fixed but forms a continuous spectrum, even this separation is not straightforward \cite{gs}. This is because the spectrum of initial photon energies causes the sharply separated peaks in the rapidity spectrum of the resolved and direct components, present when the initial photon energy is fixed, to become smeared out and so less sharply defined. Separation of the resolved and direct processes is better achieved by tagging of the spectator jet from the resolved photon. In section II a brief outline of the theoretical background to the cross section as well as the technique of calculation is given. In section III numerical results are presented, and section IV contains the summary and conclusions.
\section{The Inclusive Photon Plus Jet Cross Section} \subsection{Contributing Subprocesses} In addition to the direct and resolved photon contributions to the cross section there are the non-fragmentation and fragmentation contributions. In the former case the observed final state photon is produced directly in the hard scattering whereas in the latter it is produced by long distance fragmentation off a final state parton. The fragmentation processes involve fragmentation functions, which cannot be calculated perturbatively and must be taken from experiment. So far they have not been satisfactorily measured. There are various parametrizations of these functions available using different models for the input distributions. As the numerical results will show in the next section, these contributions are small at HERA energies and so do not provide a significant source of uncertainty in the present calculation. This point has already been noted in previous studies and will be returned to below. The only direct non-fragmentation process contributing to the cross section in LO is the so-called QCD Compton process (fig.1a) \begin{displaymath} q \gamma \rightarrow \gamma q. \end{displaymath} The corresponding direct fragmentation processes in LO (fig.1b) are \begin{displaymath} q \gamma \rightarrow g q\;\;\; {\rm and}\;\;\; g \gamma \rightarrow q\bar{q}. \end{displaymath} As discussed in many places (see e.g., \cite{gv1}) the photon fragmentation function is formally $O(\alpha_{em}/\alpha_s)$, thus although the hard subprocess cross sections in the fragmentation case are $O(\alpha_{em}\alpha_s)$, after convolution with the photon fragmentation functions the process contributes at $O(\alpha_{em}^2)$, the same as the non-fragmentation part. Thus in a fixed order calculation the two contributions must be added together to provide the physical cross section.
At NLO for the non-fragmentation part there are the virtual corrections to the LO Compton process plus the additional three-body processes \begin{displaymath} q \gamma \rightarrow \gamma g q\;\;\; {\rm and}\;\;\; g \gamma \rightarrow \gamma q\bar{q}. \end{displaymath} These processes have been calculated previously by various authors. In this study the virtual corrections are taken from \cite{gvx}. In addition there are $O(\alpha_s)$ corrections to the fragmentation processes to take into account, but in this calculation these processes are included in LO only. It has been shown previously \cite{gv1} that the fragmentation contributions are not as significant here as at hadron colliders, which generally have higher cms energies. They are also reduced drastically when isolation cuts are implemented. Thus ignoring NLO corrections to the fragmentation contributions, while in principle theoretically inconsistent, will not lead to significant error in estimates of the cross section. In the resolved case, for non-fragmentation there are only the two processes \begin{displaymath} q g\rightarrow \gamma q\;\;\; {\rm and}\;\;\; q \bar{q}\rightarrow \gamma g \end{displaymath} in LO (fig.2). At NLO there are virtual and three-body corrections to these as well as other three-body processes, for example, $g g\rightarrow \gamma q \bar{q}$ etc. For a complete list of these plus the fragmentation processes see, for example, ref.\cite{gv1}. As with the direct case, the fragmentation contributions are included here in LO. \subsection{Some Calculational Details} The calculation was performed using the phase space slicing method which makes it possible to perform photon isolation exactly as well as to implement the jet definition in the NLO calculation. More details of parts of the calculation can be found in ref.\cite{gordon2}.
The two-body matrix elements for the resolved case, after the soft and collinear poles have been canceled and factorized in the $\overline{MS}$ scheme, can be found in the appendices of refs.\cite{gordon2,boo}. Those for the direct contributions can be obtained from these by appropriately removing non-abelian couplings. These matrix elements depend on the soft and collinear cut-off parameters, $\delta_s$ and $\delta_c$, and must be added to the three-body matrix elements, also included in the appendix of \cite{gordon2}, in order to cancel the dependence of the cross section on these arbitrary cut-off parameters. Following the ZEUS experiment, the cone isolation method is used to isolate the photon signal. This method restricts the hadronic energy allowed in a cone of radius $R_{\gamma}=\sqrt{\Delta \phi^2 + \Delta \eta^2}$, centred on the photon, to be below the value $\epsilon E_{\gamma}$, where $E_{\gamma}$ is the photon energy. The fixed value $\epsilon= 0.1$ is used in this study, which corresponds to the value used in the ZEUS analysis. By contrast the CDF collaboration in their analysis \cite{cdf} uses a value of $\epsilon = 2\;{\rm GeV}/p_T^{\gamma}$, which varies with the photon energy ($p_T^{\gamma}$ is the transverse momentum of the observed photon). The cone algorithm is also used to define the jet. This defines a jet as hadronic energy deposited in a cone of radius $R_J= \sqrt{\Delta \phi^2 + \Delta \eta^2}$. If two partons form the jet then the kinematic variables are combined to form that of the jet according to the formulae \begin{eqnarray} p_J&=&p_1+p_2 \nonumber \\ \eta_J&=& \frac{(\eta_1 p_1 + \eta_2 p_2)}{p_1+p_2} \nonumber \\ \phi_J&=& \frac{(\phi_1 p_1 + \phi_2 p_2)}{p_1+p_2}. \end{eqnarray} In the ZEUS analysis $R_{\gamma}=1.0$ and $R_J=1.0$ are chosen and these values will also be used in this study. In order to estimate the flux of quasi-real photons from the electron beam the Weizs\"{a}cker-Williams approximation is used.
Thus the `electron structure function' $f_e(x_e,Q^2)$ is given by a convolution of the photon structure function $f^{\gamma}(x_{\gamma},Q^2)$ and the Weizs\"{a}cker-Williams (WW) function \begin{eqnarray} f_{\gamma/e}(z)&=&\frac{\alpha_{em}}{2\pi}\left[\left\{ \frac{1+(1-z)^2}{z}\right\} \ln\frac{Q^2_{max}(1-z)}{m_e^2 z^2} \right. \nonumber \\ & - & \left. 2 m_e^2 z \left\{ \frac{(1-z)}{m_e^2 z^2}-\frac{1}{Q^2_{max}}\right\} \right] \end{eqnarray} by \begin{equation} f_e(x_e,Q^2)=\int^1_{x_e}\frac{dz}{z}f_{\gamma/e}(z)f^{\gamma}\left(\frac{x_e} {z},Q^2\right). \end{equation} The expression for $f_{\gamma/e}(z)$ was taken from ref.\cite{gas}. Following the ZEUS analysis the value $Q^2_{max}=1$ GeV$^2$ is used throughout. \section{Results} \subsection{Effect of Experimental Selections} The numerical results presented in this section are obtained using the GS96 \cite{gs96} photon distribution functions, the CTEQ4M \cite{cteq} parton distributions for the proton and the GRVLO \cite{grvf} fragmentation functions as standard. Furthermore the two-loop expression for $\alpha_s$ is used, four flavours of quarks are assumed active and the factorization/renormalization scales are taken to be equal to the photon $p_T$ ($Q^2=(p_T^{\gamma})^2$). The maximum virtuality of the initial state photon is fixed at $Q^2_{max}=1$ GeV$^2$. The calculation is performed in the $ep$ laboratory frame using $P_e=27.5$ GeV for the electron energy and $P_p=820$ GeV for the proton energy. The electron is moving toward negative rapidity. In order to make contact with the results of previous calculations, it is convenient to start by examining the inclusive single prompt photon cross section, $ep\rightarrow \gamma X$. As more data are taken at HERA this cross section (with isolation cuts) will certainly be measured since it is the largest cross section involving prompt photon production.
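As a small numerical illustration (a sketch under the stated parameters, not code from the analysis), the WW function of Eq. (2.2) can be evaluated directly; the flux falls steeply with $z$, concentrating the quasi-real photon spectrum at low $z$:

```python
from math import log, pi

ALPHA = 1 / 137.036      # fine-structure constant
ME = 0.511e-3            # electron mass [GeV]
Q2MAX = 1.0              # maximum photon virtuality [GeV^2], as used in the text

def ww_flux(z, q2max=Q2MAX, me=ME):
    """Weizsaecker-Williams photon flux f_{gamma/e}(z), Eq. (2.2)."""
    bracket = ((1 + (1 - z) ** 2) / z) * log(q2max * (1 - z) / (me ** 2 * z ** 2)) \
              - 2 * me ** 2 * z * ((1 - z) / (me ** 2 * z ** 2) - 1 / q2max)
    return ALPHA / (2 * pi) * bracket

# Evaluate at the edges of the ZEUS z range and one intermediate point;
# the steep fall-off with z is visible directly.
for z in (0.16, 0.3, 0.8):
    print(f"f_gamma/e({z}) = {ww_flux(z):.4f}")
```

This makes concrete why cutting away low $z$ removes a large fraction of the photon flux while leaving the high-$z$ tail essentially intact.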
In fig.3a the non-isolated single inclusive prompt photon cross section is shown as a function of photon rapidity at $p_T^{\gamma}=5$ GeV. No experimental cuts are implemented. In the positive rapidity region the resolved contributions are roughly twice as large as the direct and thus this is the region of interest if information on the gluon distribution of the photon is to be obtained. At negative rapidity, the direct and resolved contributions are comparable in size. When the WW spectrum is cut as done by the ZEUS Collaboration ($0.16\leq z \leq 0.8$) the cross section changes as shown in fig.3b (also at the same $p_T^\gamma= 5$ GeV). Both the resolved and direct contributions remain essentially unchanged at negative rapidities but are reduced in the positive rapidity region. The effect on the direct contribution is large, being reduced by a factor of $10$ at $\eta^{\gamma}=2$. Thus sensitivity to the photon structure function is enhanced in this region since the resolved contribution does not fall by as much. The reason for the asymmetric response of the two contributions to this cut is that the WW distribution is largest at small-$z$ ($x_e=z$, for the direct events). Cutting out this region removes a large fraction of the direct events with lower energy initial photons. When the convolution in eq.(2.3) is taken for the resolved processes on the other hand, for a given $x_{\gamma}=x_e/z$, all regions of $x_e$ contribute and thus the cut on $z$ does not have the same dramatic effect in this region. In all the following results the cut on $z$ is implemented. Using the standard parameters, the fragmentation contribution constitutes less than $20\%$ of the cross section at $p_T^{\gamma}=5$ GeV (before isolation) and as expected, falls rapidly with increasing $p_T^{\gamma}$. After isolation, the fragmentation contribution is reduced to about $3\%$ of the cross section. 
Fig.3c shows the contribution from fragmentation processes to the resolved and direct contributions, as well as their sum, at $p_T^\gamma=5$ GeV before isolation cuts are implemented. The higher order corrections enhance the cross section by $O(20\%)$ before isolation. As indicated by fig.3d, the corrections are numerically more significant in the positive rapidity region, but they are still modest, indicating good perturbative stability for the predictions. In Fig.4 the single inclusive prompt photon cross section at $p_T^\gamma=5$ GeV, with only the cut $0.16\leq z \leq 0.8$, is compared to the photon plus jet cross section with isolation cuts and jet definition incorporated as done by the ZEUS collaboration. The rapidity and $p_T$ cuts $-1.5\leq \eta^J \leq 1.8$ and $p_T^J\geq 5$ GeV are placed on the jet. As expected, the photon plus jet cross section is significantly smaller than the single photon cross section, but does not show much difference in shape. It could thus still potentially be used to measure the photon distributions in the positive rapidity region. The lower dot-dashed curve in fig.4 is the resolved contribution to the photon plus jet cross section after the further cut $x_{\gamma}\geq 0.8$ is imposed. This cut essentially removes most of the resolved contribution to the cross section and therefore most of the sensitivity to the photon distribution functions. It is still nevertheless not a pure direct sample and, as seen in fig.5a, it still shows sensitivity to the photonic parton distributions. One of the main differences between the GRV and GS96 photon distributions is in the quark distributions at large-$x_\gamma$. In fig.5a the rapidity distribution is plotted at $p_T^\gamma=5$ GeV with all the cuts used in the ZEUS analysis implemented, including the cut on $x_\gamma$. At negative rapidities the photonic quark distributions are probed at large-$x$, which is where the largest differences between the results of GS96 and GRV are seen.
By contrast, as fig.5b demonstrates, there are almost no differences between the results when the proton distributions are changed. This cross section may thus potentially be used to distinguish between these two models of the photon structure function. In fig.6 the cross section is plotted vs $p_T^{\gamma}$ with the ZEUS rapidity cuts on the photon imposed ($-0.7\leq \eta_{\gamma}\leq 0.8$). It shows the well-known fact, common to this type of photoproduction process, that the resolved contribution only competes with the direct at low values of $p_T^\gamma$, while the direct dominates as $p_T^\gamma$ is increased. One thus needs to look in the lower $p_T^\gamma$ region if sensitivity to the photon structure function is desired and look at higher $p_T^\gamma$ if the aim is to eliminate the resolved events. Fig.7 shows a partial breakdown of the isolated photon plus jet cross section into initial state contributions as a function of $\eta^\gamma$. The photon $p_T$ is integrated between $5$ and $10$ GeV as done by the ZEUS Collaboration. The solid curve is the sum, the dot-dashed curve the resolved and the dashed curve the direct. The contributions to the resolved process are the labelled dotted curves. The dotted curve with error bars is the $g^\gamma q^p$ initiated process as predicted using the GRV photon distributions. Clearly it is only distinguishable from the GS96 result in the far positive rapidity region. All other features of the curves except for the absolute sizes of the contributions are similar to the results of previous studies done on single non-isolated prompt photon production in the $ep$ laboratory frame \cite{bks,gs,gv1}. \subsection{Comparison with HERA Data} Table 1 lists predictions for the resolved and direct contributions to the cross section and their sum for various choices of parameters. As stated above, in order to obtain a sample of direct events the ZEUS Collaboration have imposed the cut $x_{\gamma}\geq 0.8$ on their data.
This cut, which is also imposed on the results in Table 1, favours the direct contributions since they contribute at $x_{\gamma}=1$, but there is still a contribution from the resolved processes and hence some sensitivity to the photon distributions chosen. In addition the cuts $5$ GeV $\leq p_T^{\gamma} \leq 10$ GeV, $p_T^J\geq 5$ GeV, $-1.5\leq \eta^J\leq 1.8$, $-0.7\leq \eta^{\gamma}\leq 0.8$ and $0.16\leq z=E_{\gamma}/E_e\leq 0.8$ along with the isolation cuts and jet definitions discussed in Section II are imposed. The first column of numbers gives the results for the standard choice of parameters, while the 2nd and 3rd columns show the effect of changing the scales. The results show a remarkable stability to scale changes. This is in contrast to, for example, the $p_T^{\gamma}$ distribution which generally shows significant scale sensitivity. The 4th and 5th columns show the effect of changing the photon and proton distribution functions used respectively. In the latter case, as already indicated by the results shown in figs.5a and 5b, there are hardly any changes in the predictions, while in the former case the changes are very significant. Since with these cuts the cross section is mostly sensitive to the quark distributions in the photon at large-$x$, this measurement may potentially be used to discriminate between the GS96 and GRV photon parametrizations which differ most significantly in this region. The preliminary experimental value given by the ZEUS Collaboration of $15.3\pm 3.8\pm 1.8$ pb agrees well with the NLO theoretical predictions but the errors are still too large to make any distinction between GS and GRV. \section{Conclusions} An NLO calculation of isolated single photon plus jet production at HERA was presented. The effects of various experimental cuts on the cross section were studied in some detail, and comparisons were made with the preliminary data from the ZEUS Collaboration, where good agreement was found.
The kinematic cuts chosen favour the direct contribution but there is still a significant sensitivity to the quark distributions in the photon at large-$x_\gamma$. At the moment the error in the data is still too large to distinguish between the GRV and GS96 photon distributions, but it is expected that analysis of more data will soon remedy this situation. \section*{Acknowledgments} I am grateful to P. Bussey, M. Derrick and T. Vaiciulis of the ZEUS Collaboration for very helpful discussions. This work was funded in part by the US Department of Energy, Division of High Energy Physics, Contract No. W-31-109-ENG-38. \pagebreak
\section{Introduction} \label{sec:1} The explosion of a supernova triggered by the collapse of a massive star produces several solar masses of stellar ejecta expanding at $\sim 10^4 {\rm\ km\ s}^{-1}$ into surrounding circumstellar (CSM) and interstellar (ISM) material. The resulting forward shock compresses and heats the ambient gas. As the shock sweeps up material, the deceleration drives a reverse shock (RS) \index{reverse shock} back into the cold ejecta, heating the metal-enhanced gas to X-ray-emitting temperatures. In many cases (though the actual fraction remains an open question), what remains of the collapsed core is a rapidly-spinning, highly magnetic neutron star that generates an energetic wind of particles and magnetic field confined by the surrounding ejecta. [All current evidence indicates that pulsar winds are composed of electrons and positrons, with little or no ion component. Here, and throughout, the term ``particles'' is used interchangeably for electrons/positrons.] The evolution of this pulsar wind nebula (PWN) \index{pulsar wind nebula} is determined by the properties of the central pulsar, its host supernova remnant (SNR), and the structure of the surrounding CSM/ISM. In discussing the structure and evolution of PWNe, it is important to distinguish two important points at the outset. First, while PWNe have, in the past, sometimes been referred to as SNRs (most often as a ``center-filled'' variety), they are, in fact, {\it not} SNRs. As discussed below, PWNe are created entirely by a confined magnetic wind produced by an energetic pulsar. At early times, the confining material is supernova ejecta, but at later times it can simply be the ISM.
Although a PWN, like the neutron star itself, is a product of a supernova explosion, we reserve the term SNR for the structure produced by the expanding supernova ejecta and its interaction with the surrounding CSM/ISM (and, indeed, an entire population of SNRs have no association with PWNe whatsoever; see Chapter ``Type Ia supernovae''). Second, when describing the evolutionary phase of a PWN (or a composite SNR -- an SNR that contains a PWN), it is not necessarily the true age of the system that describes its structure. Rather, it is the {\it dynamical} age, which accounts for the fact that identical pulsars expanding into very different density distributions, for example, will evolve differently. The outline of this paper is as follows. In Section 2 we review the basic properties of pulsars themselves, including a description of pulsar magnetospheres and the subsequent pulsar winds that form PWNe. Section 3 discusses the emission from PWNe and provides examples of the constraints that multiwavelength observations place on the determination of the system evolution. In Section 4 we investigate the different stages of evolution for a PWN, starting with its initial expansion inside an SNR and ending with the bow shock stage after the PWN escapes into the ISM. Section 5 presents a brief summary. Crucially, in the spirit of this Handbook, this paper is not intended as a literature review. A small set of examples have been selected to illustrate particular properties, and a subset of the recent theoretical literature has been summarized to provide the framework for our basic understanding of these systems. The reader is referred to more thorough PWN reviews by Gaensler \& Slane (2006), Bucciantini (2011), and Kargaltsev et al. (2015), and to the many references and subsequent citations in those works, for a more detailed treatment.
\section{Basic Properties} \label{sec:2} \subsection{Pulsars} \index{pulsars} \label{sec:2.1} The discovery and basic theory of pulsars have been summarized in many places. First discovered through their radio pulsations, these objects were quickly identified as rapidly-rotating, highly-magnetic neutron stars (NSs). Observations show that the spin period $P$ of a given pulsar increases with time, indicating a gradual decrease in rotational kinetic energy: \begin{equation} \dot{E} = I \Omega \dot{\Omega}, \end{equation} where $\Omega = 2\pi/P$ and $I$ is the moment of inertia of the NS (nominally $I = \frac{2}{5}MR^2$, where $M$ and $R$ are the mass and radius of the star; $I \approx 10^{45}{\rm\ g\ cm}^2$ for $M = 1.4 \ M_\odot$ and $R = 10$~km). This spin-down \index{spin-down} energy loss is understood to be the result of a magnetized particle wind produced by the rotating magnetic star. Treated as a simple rotating magnetic dipole, the energy loss rate is \begin{equation} \dot{E} = -\frac{B_p^2R^6\Omega^4}{6c^3}\sin^2\chi, \end{equation} where $B_p$ is the magnetic dipole strength at the pole and $\chi$ is the angle between the magnetic field and the pulsar rotation axis. Typical values for $P$ range from $\sim 0.03 - 3$~s, with period derivatives of $10^{-17} - 10^{-13} {\rm\ s\ s}^{-1}$ (though values outside these ranges are also observed, particularly for so-called magnetars and millisecond pulsars). This leads to inferred magnetic field strengths of order $10^{11} - 10^{13}$~G. As the pulsar rotates, a charge-filled magnetosphere \index{magnetosphere} is created, with particle acceleration occurring in charge-separated gaps in regions near the polar cap or in the outer magnetosphere, which extends to the so-called light cylinder \index{light cylinder} at radius $R_{\rm LC} = c/\Omega$.
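These spin-down relations are straightforward to evaluate numerically. A minimal sketch (assuming the fiducial $I = 10^{45}{\rm\ g\ cm}^2$, $R = 10$~km, and $\sin\chi = 1$; the Crab-like $P$ and $\dot P$ values are chosen purely for illustration):

```python
import math

# Fiducial NS parameters from the text (cgs); sin(chi) = 1 assumed
C = 3.0e10       # speed of light [cm/s]
I_NS = 1.0e45    # moment of inertia [g cm^2]
R_NS = 1.0e6     # radius [cm]

def spin_down_power(P, Pdot):
    """Spin-down luminosity |dE/dt| = 4 pi^2 I Pdot / P^3 [erg/s]."""
    return 4.0 * math.pi**2 * I_NS * Pdot / P**3

def dipole_field(P, Pdot):
    """Polar field inferred from vacuum-dipole spin-down (sin chi = 1):
    B_p = sqrt(6 c^3 I P Pdot / (4 pi^2 R^6)) [G]."""
    return math.sqrt(6.0 * C**3 * I_NS * P * Pdot / (4.0 * math.pi**2 * R_NS**6))

# Crab-like illustrative values: P = 33 ms, Pdot = 4.2e-13 s/s
Edot = spin_down_power(0.033, 4.2e-13)   # ~4.6e38 erg/s
B_p = dipole_field(0.033, 4.2e-13)       # ~7.5e12 G, inside the quoted 1e11-1e13 G range
```

The inferred field lies comfortably within the $10^{11} - 10^{13}$~G range quoted above.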
The maximum potential generated by the rotating pulsar field, under the assumption of co-alignment of the magnetic and spin axes, is \begin{equation} \Phi = \left(\frac{\dot{E}}{c}\right)^{1/2} \approx 6 \times 10^{13} \left( \frac{\dot{E}}{10^{38}{\rm\ erg\ s}^{-1}}\right)^{1/2} {\rm\ V}. \end{equation} The minimum particle current required to sustain the charge density in the magnetosphere is \begin{equation} \dot{N}_{GJ} = \frac{c \Phi}{e} \approx 4 \times 10^{33} \left( \frac{\dot{E}}{10^{38}{\rm\ erg\ s}^{-1}}\right)^{1/2} {\rm\ s}^{-1}, \end{equation} where $e$ is the electron charge (Goldreich \& Julian 1969). As the particles comprising this current are accelerated, they produce curvature radiation that initiates an electron-positron pair cascade. Based on observations of PWNe, values approaching $\dot{N} = 10^{40}{\rm\ s}^{-1}$ are required to explain the radio synchrotron emission. The implied multiplicity (i.e., the number of pairs created per primary particle) of $\sim 10^5 - 10^7$ appears difficult to obtain from pair production in the acceleration regions within pulsar magnetospheres (Timokhin \& Harding 2015), suggesting that a relic population of low energy electrons created by some other mechanism early in the formation of the PWN may be required (e.g., Atoyan \& Aharonian 1996). \subsection{Pulsar Wind Nebulae} \index{pulsar wind nebula} \label{sec:2.2} \begin{figure}[t] \includegraphics[width=4.65in]{figure1.pdf} \caption{ a) Composite image of the Crab Nebula with X-ray emission (blue) from \chandra, optical emission (red and yellow) from {\sl HST}, and IR emission (purple) from {\sl Spitzer}. b) Composite image of G54.1+0.3 (Temim et al. 2010) with X-ray emission (blue) from \chandra, and IR emission from {\sl Spitzer} (red-yellow, $24 \mu$m; green, $8 \mu$m). [Images courtesy NASA/CXO.]
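The multiplicity argument above can be made concrete with the scalings of Eqs. 3 and 4; a short sketch (the Crab-like $\dot{E}$ is an assumed, illustrative value):

```python
def max_potential_V(Edot):
    """Scaling of Eq. 3: Phi ~ 6e13 (Edot / 1e38 erg/s)^(1/2) V."""
    return 6e13 * (Edot / 1e38) ** 0.5

def gj_current(Edot):
    """Scaling of Eq. 4: Ndot_GJ ~ 4e33 (Edot / 1e38 erg/s)^(1/2) s^-1."""
    return 4e33 * (Edot / 1e38) ** 0.5

# For a Crab-like Edot ~ 4.6e38 erg/s (illustrative):
Ndot_GJ = gj_current(4.6e38)      # ~8.6e33 s^-1
# Radio synchrotron emission requires Ndot ~ 1e40 s^-1, implying a pair
# multiplicity well above what magnetospheric cascades easily provide:
multiplicity = 1e40 / Ndot_GJ     # ~1e6, within the quoted 1e5-1e7 range
```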
} \label{fig:figure1} \end{figure} For pulsars with a magnetic axis that is inclined relative to the rotation axis, the result of the above is a striped wind, with an alternating poloidal magnetic field component separated by a current sheet (Bogovalov 1999). The magnetization \index{magnetization} of the wind, $\sigma$, is defined as the ratio between the Poynting flux and the particle energy flux: \begin{equation} \sigma = \frac{B^2}{4 \pi m n \gamma_0 c^2}, \label{eq_sigma} \end{equation} where $B$, $n$, and $\gamma_0$ are the magnetic field, number density of particles of mass $m$, and bulk Lorentz factor in the wind, respectively. The energy density of the wind is expected to be dominated by the Poynting flux as it leaves the magnetosphere, with $\sigma \sim 10^4$. Ultimately, the wind is confined by ambient material (slow-moving ejecta in the host SNR at early times; the ISM once the pulsar has exited the SNR), forming an expanding magnetic bubble of relativistic particles -- the PWN. As the fast wind entering the nebula decelerates to meet the boundary condition imposed by the much slower expansion of the PWN, a wind termination shock (TS) \index{termination shock} is formed at a radius $R_{\rm TS}$ where the ram pressure of the wind is balanced by the pressure within the nebula: \begin{equation} R_{\rm TS} = \sqrt{\dot{E}/(4 \pi \omega c P_{\rm PWN})}, \end{equation} where $\omega$ is the equivalent filling factor for an isotropic wind and $P_{\rm PWN}$ is the total pressure in the nebula. The geometry of the pulsar system results in an axisymmetric wind (Lyubarsky 2002), forming a torus-like structure in the equatorial plane, along with collimated jets along the rotation axis. The higher magnetization at low latitudes confines the expansion there to a higher degree, resulting in an elongated shape along the pulsar spin axis for the large-scale nebula (Begelman \& Li 1992, van der Swaluw 2003).
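Eq. 6 can be evaluated directly for Crab-like numbers; in this sketch the nebular pressure $P_{\rm PWN} \sim 7\times10^{-9}{\rm\ erg\ cm}^{-3}$ is an assumed, illustrative value:

```python
import math

C = 3.0e10       # speed of light [cm/s]
PC = 3.086e18    # cm per parsec

def r_termination_shock(Edot, P_pwn, omega=1.0):
    """Eq. 6: R_TS = sqrt(Edot / (4 pi omega c P_PWN)) [cm]."""
    return math.sqrt(Edot / (4.0 * math.pi * omega * C * P_pwn))

# Crab-like inputs: Edot ~ 5e38 erg/s, P_PWN ~ 7e-9 erg/cm^3 (assumed)
R_ts_pc = r_termination_shock(5e38, 7e-9) / PC   # ~0.14 pc
```

The resulting $\sim 0.1$~pc scale is comparable to the innermost X-ray ring of the Crab Nebula discussed below.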
This structure is evident in Figure~\ref{fig:figure1} (left), where X-ray and optical observations of the Crab Nebula \index{Crab Nebula} clearly reveal the jet/torus structure surrounded by the elongated wind nebula bounded by filaments of swept-up ejecta. The innermost ring corresponds to the TS, and its radius is well-described by Eqn. 6. MHD models of the jet/torus structure in pulsar winds reproduce many of the observed details of these systems (see Bucciantini 2011 for a review). As discussed in Section 3, the relativistic particles in the PWN produce synchrotron radiation extending from the radio to the X-ray band, and upscatter ambient low-energy photons (from the cosmic microwave background, the stellar radiation field, and emission from ambient dust) producing inverse-Compton (IC) emission in the $\gamma$-ray band. Curiously, models of the dynamical structure and emission properties of the Crab Nebula require $\sigma \sim 10^{-3}$ just upstream of the termination shock (Kennel \& Coroniti 1984). Thus, somewhere between the pulsar magnetosphere and the termination shock, the wind converts from being Poynting-dominated to being particle-dominated. Magnetic reconnection in the current sheet has been suggested as a mechanism for dissipating the magnetic field, transferring its energy into that of the particles (e.g., Lyubarsky 2003). Recent particle-in-cell simulations of relativistic shocks show that shock compression of the wind flow can drive regions of opposing magnetic fields together, causing the reconnection (Sironi \& Spitkovsky 2011). As discussed in Section 3, this process can result in a broad particle spectrum, with a power-law-like shape $dN/dE \propto E^{-p}$ with $p \sim 1.5$. High energy particles in the equatorial regions can diffuse upstream of the shock, generating turbulence that supports acceleration of subsequent particles to high energies through a Fermi-like process, potentially creating a steeper high-energy tail with $p \sim 2.5$. 
The energy range spanned by the flat spectral region, and the maximum energy to which the steep spectrum extends, depend on properties of the striped wind that change with latitude, suggesting that the integrated particle injection spectrum may be quite complex (e.g., Slane et al. 2008). However, the maximum Lorentz factor that appears achievable is limited by the requirement that the diffusion length of the particles be smaller than the termination shock radius: $\gamma_{max} \sim 8.3 \times 10^6 \dot{E}_{38}^{3/4} \dot{N}_{40}^{-1/2}$. This is insufficient to explain the observed X-ray synchrotron emission in PWNe, suggesting that an alternative picture for acceleration of the highest energy particles in PWNe is required (Sironi et al. 2013). \section{Radiation from PWNe} \label{sec:3} \begin{figure}[t] \sidecaption \includegraphics[width=11.5cm]{figure2.pdf} \caption{ Synchrotron (left) and IC (right) emission (for scattering off of the CMB) from a PWN at ages of 1000 (solid), 2000 (dotted), and 5000 (dashed) years. Here we have assumed $E_{51} = 1$, $M_{ej} = 8 M_\odot$, and $n_0 = 0.1 {\rm\ cm}^{-3}$ for the SNR evolution, and $n = 3$, $\dot{E}_0 = 10^{40}{\rm\ erg\ s}^{-1}$, and $\tau_0 = 500$~yr for the pulsar. For the wind, we assume that 99.9\% of the energy is in the form of electrons/positrons with a power law spectrum with $\gamma = 1.6$. } \label{fig:figure2} \end{figure} The emission from PWNe can be divided into two broad categories -- that originating from the relativistic particles within the nebula and that produced by material that has been swept up by the nebula. \subsection{Emission from nebula} \index{PWN emission} \label{sec:3.1} The emission from the relativistic particles is a combination of synchrotron radiation \index{synchrotron radiation} and IC radiation \index{inverse-Compton radiation} associated with the upscattering of ambient photons.
If we characterize the injected spectrum as a power law, \begin{equation} Q(E,t) = Q_0(t)(E/E_0)^{-\gamma}, \end{equation} the integrated particle energy injection rate is then \begin{equation} \int Q(E,t) E \, dE = \dot{E}(t)/(1 + \sigma). \end{equation} The resulting emission spectrum is found by integrating the electron spectrum over the emissivity function for synchrotron and IC radiation using, respectively, the nebular magnetic field and the spectral density of the ambient photon field. As noted above, the low energy particles in PWNe actually appear to have a flatter spectrum, leading to a flat radio spectrum ($\alpha \sim 0.0 - 0.3$, where $S_\nu \propto \nu^{-\alpha}$). [Note: In X-rays, it is conventional to express the photon spectrum as $dN_\gamma/dE \propto E^{-\Gamma}$, where $\Gamma = \alpha + 1$.] The spectrum generally steepens near the mm or optical band. For young PWNe with very high magnetic fields, up-scattering of the high energy synchrotron spectrum can produce $\gamma$-ray photons through so-called synchrotron self-Compton emission. The resulting spectrum thus depends on the age, magnetic field, and pulsar spin-down power (e.g., Torres et al. 2013). As illustrated in Figure~\ref{fig:figure2}, the build-up of particles in the nebula results in an IC spectrum that increases with time. The synchrotron flux decreases with time due to the steadily decreasing magnetic field strength associated with the adiabatic expansion of the PWN (see Section 4). This behavior is reversed upon arrival of the SNR RS (not shown in Figure 2), following which the nebula is compressed and the magnetic field strength increases dramatically, inducing an episode of rapid synchrotron losses. Upon re-expanding, however, the IC emission again begins to increase relative to the synchrotron emission. At the latest phases of evolution, when the nebula is very large and the magnetic field is low, the IC emission can provide the most easily-detected signature.
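The normalization of the injection spectrum in Eqs. 7 and 8 follows from elementary integration. A minimal sketch, written in terms of a generic injected particle power $L_{\rm particle}$ (the particle share of $\dot{E}$ for the wind magnetization in question; for the small $\sigma$ inferred downstream of the termination shock this is nearly the full spin-down power):

```python
def q0_norm(L_particle, gamma, E_min, E_max, E0=1.0):
    """Normalization Q0 of Q(E) = Q0 (E/E0)^(-gamma), chosen so that the
    integral of Q(E) E dE from E_min to E_max equals the injected particle
    power L_particle. Energies in erg; valid for gamma != 2."""
    integral = E0**gamma * (E_max**(2.0 - gamma) - E_min**(2.0 - gamma)) / (2.0 - gamma)
    return L_particle / integral
```

With $\gamma = 1.6$ (the value assumed in Figure 2), most of the injected energy resides near the upper cutoff, which is why the choice of $E_{max}$ matters for the predicted emission.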
As described below, such behavior is seen for a number of PWNe that have been identified based on their emission at TeV energies, and for which only faint synchrotron emission near the associated pulsars is seen in the X-ray band. For electrons with energy $E_{e,100}$, in units of 100~TeV, the typical energy of synchrotron photons is \begin{equation} E_\gamma^s \approx 2.2 E_{e,100}^2 B_{10} {\rm\ keV}, \label{eqn:E_syn} \end{equation} where $B_{10}$ is the magnetic field strength in units of $10 \ \mu$G. The associated synchrotron lifetime \index{synchrotron lifetime} for the particles is \begin{equation} \tau_{syn} \approx 820 E_{e,100}^{-1} B_{10}^{-2} {\rm\ yr}, \end{equation} which results in a break \index{synchrotron break} in the photon spectrum at \begin{equation} E_{\gamma,br} \approx 1.4 B_{10}^{-3} t_{\rm kyr}^{-2} {\rm\ keV} \end{equation} for electrons injected over a lifetime $t_{\rm kyr}$. Beyond this energy, the photon power law spectrum steepens by $\Delta \Gamma = 0.5$. For young PWNe, with large magnetic fields, the result is a steepening of the X-ray spectrum with radius due to synchrotron burn-off \index{synchrotron burn-off} of the higher energy particles on timescales shorter than their transit time to the outer portions of the PWN. This is readily observed in young systems such as G21.5$-$0.9 and 3C~58 (see below), although the spectral index actually steepens more slowly with radius than expected unless rapid particle diffusion is in effect (Tang \& Chevalier 2012). For $\gamma$-rays produced by IC-scattering off of the CMB, \begin{equation} E_\gamma^{IC} \approx 0.32 E_{e,10}^2 {\rm\ TeV}, \label{eqn:E_IC} \end{equation} where $E_{e,10} = E_e/(10 {\rm\ TeV})$. Note that while the synchrotron energy depends upon both the electron energy and the magnetic field strength, the IC energy (from CMB scattering) depends only on the particle energy.
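Eqs. 10-13 are simple scaling relations and are easily transcribed; the $5\ \mu$G example anticipates the evolved TeV-detected nebulae discussed in Section 4:

```python
def e_sync_keV(E_e_100TeV, B_10uG):
    """Eq. 10: synchrotron photon energy [keV]."""
    return 2.2 * E_e_100TeV**2 * B_10uG

def tau_sync_yr(E_e_100TeV, B_10uG):
    """Eq. 11: synchrotron lifetime [yr]."""
    return 820.0 / (E_e_100TeV * B_10uG**2)

def e_break_keV(B_10uG, t_kyr):
    """Eq. 12: cooling-break photon energy [keV]."""
    return 1.4 / (B_10uG**3 * t_kyr**2)

def e_ic_TeV(E_e_10TeV):
    """Eq. 13: IC photon energy from CMB scattering [TeV]."""
    return 0.32 * E_e_10TeV**2

# In a 5 uG nebula (B_10 = 0.5): ~100 TeV electrons produce ~1 keV
# synchrotron X-rays, while ~20 TeV electrons produce ~1.3 TeV IC photons
E_x_keV = e_sync_keV(1.0, 0.5)    # 1.1 keV
E_g_TeV = e_ic_TeV(2.0)           # 1.28 TeV
tau_yr = tau_sync_yr(1.0, 0.5)    # 3280 yr
```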
Modeling of both emission components for a particular PWN thus allows determination of the magnetic field strength. Because of the short synchrotron lifetime for the X-ray emitting particles, the X-ray luminosity is related to the current spin-down power of the pulsar. From a variety of studies, $L_x \sim 10^{-3} \dot{E}$ (e.g., Possenti et al. 2002). Although flux values for individual pulsars may differ from this relationship by as much as a factor of 10, determination of the X-ray luminosity can provide a modest constraint on $\dot{E}$ for systems in which pulsations are not directly detected. The broadband spectrum of a PWN, along with the associated dynamical information provided by measurements of the pulsar spin properties, and the size of the PWN and its SNR, place very strong constraints on its evolution and on the spectrum of the particles injected from the pulsar. Combined with estimates of the swept-up ejecta mass, this information can be used to probe the properties of the progenitor star and to predict the long-term fate of the energetic particles in the nebula. Recent multiwavelength studies of PWNe, combined with modeling efforts of their evolution and spectra, have provided unique insights into several of these areas. \subsection{Emission from shocked ejecta} \index{SNR ejecta} \label{sec:3.2} As the PWN expands into the surrounding supernova ejecta, as described below, it heats the ejecta. The resulting emission, often confined to filaments, is a combination of radiation from shocked gas and continuum emission from dust condensed from the cold ejecta in the early adiabatic expansion of the SNR. The thermal emission depends on the velocity of the PWN shock driven into the ejecta which, in turn, depends on the spin-down power of the central pulsar and the density and velocity profile of the ejecta. 
For slow shocks, line emission may be observed in the IR and optical bands, such as that observed from the Crab Nebula (see Chapter ``Supernova of 1054 and its remnant, the Crab Nebula''), G21.5$-$0.9, and G54.1+0.3 (see below), while for faster shocks the emission may appear in the X-ray band, as observed in 3C~58. This line emission can provide important information on the ejecta composition and expansion velocity. The dust emission is in the form of a blackbody-like spectrum whose properties depend on the temperature, composition, and grain-size distribution of the dust. Measurements of emission from ejecta dust prior to interaction with the SNR RS (see below) are of particular importance in estimating dust formation rates in supernovae (e.g., Temim et al. 2015). \section{PWN Evolution} \index{PWN evolution} \label{sec:4} \begin{figure}[t] \sidecaption \includegraphics[height=2.25in]{figure3.png} \caption{ Left: Density image from a hydrodynamical simulation of a PWN expanding into an SNR that is evolving into a medium with a CSM density gradient increasing to the right. The pulsar itself is moving upward. The reverse shock is propagating inward, approaching the PWN preferentially from the upper right due to the combined effects of the pulsar motion and the CSM density gradient. Right: Density profile for a radial slice through the simulated composite SNR. Colored regions correspond to different physical regions identified in the SNR image. } \label{fig:figure3} \end{figure} The evolution of a PWN within the confines of its host SNR is determined by both the rate at which energy is injected by the pulsar and by the density structure of the ejecta material into which the nebula expands. The location of the pulsar itself, relative to the SNR center, depends upon any motion given to the pulsar in the form of a kick velocity during the explosion, as well as on the density distribution of the ambient medium into which the SNR expands.
At the earliest times, the SNR blast wave expands freely at a speed of $\sim (5-10)\times10^3$~km~s$^{-1}$, much higher than typical pulsar velocities of $\sim 200-1500$~km~s$^{-1}$. As a result, for young systems the pulsar will always be located near the SNR center. The energetic pulsar wind is injected into the SNR interior, forming a high-pressure bubble that expands supersonically into the surrounding ejecta, driving a shock. The input luminosity is generally assumed to have the form (e.g., Pacini \& Salvati 1973) \begin{equation} \dot{E} = \dot{E}_0 \left( 1 + \frac{t}{\tau_0} \right)^{-\frac{(n+1)}{(n-1)}}, \label{eqn_edot_vs_t} \end{equation} where \begin{equation} \tau_0 \equiv \frac{P_0}{(n-1)\dot{P}_0} \label{eqn_tau0} \end{equation} is the initial spin-down time scale of the pulsar. Here $\dot{E}_0$ is the initial spin-down power, $P_0$ and $\dot{P}_0$ are the initial spin period and its time derivative, and $n$ is the so-called ``braking index'' \index{braking index} of the pulsar ($n = 3$ for magnetic dipole spin-down). The pulsar has roughly constant energy output until a time $\tau_0$, beyond which the output declines fairly rapidly with time. Figure \ref{fig:figure3} illustrates the evolution of a PWN within its host SNR. The left panel shows a hydrodynamical simulation of an SNR evolving into a non-uniform medium, with a density gradient increasing from left to right. The pulsar is moving upward. The SNR forward shock (FS), RS, and contact discontinuity (CD) separating the shocked CSM and shocked ejecta are identified, as is the PWN shock driven by expansion into the cold ejecta. The right panel illustrates the radial density distribution, highlighting the PWN TS as well as the SNR FS, CD, and RS. \subsection{Early Expansion} \label{sec:4.1} The energetic pulsar wind injected into the SNR interior forms a high-pressure bubble that drives a shock into the surrounding ejecta.
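The spin-down history of Eq. 14 that drives this expansion can be sketched directly; the parameter values below are the fiducial ones quoted in the Figure 2 caption:

```python
def edot(t_yr, Edot0=1e40, tau0_yr=500.0, n=3):
    """Eq. 14: spin-down power vs time for braking index n [erg/s]."""
    return Edot0 * (1.0 + t_yr / tau0_yr) ** (-(n + 1.0) / (n - 1.0))

# With Edot0 = 1e40 erg/s, tau0 = 500 yr, and n = 3, the output is nearly
# constant for t << tau0 and falls roughly as t^-2 afterward:
early = edot(50.0)     # ~8.3e39 erg/s (within ~20% of Edot0)
late = edot(5000.0)    # ~8.3e37 erg/s (down by a factor of ~121)
```

This behavior (roughly constant output until $\tau_0$, then a rapid decline) is what makes the initial spin-down timescale such an important parameter for the PWN evolution discussed below.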
The sound speed in the relativistic fluid within the PWN is sufficiently high ($c_s = c/\sqrt{3}$) that any pressure variations experienced during the expansion are quickly balanced within the bubble; at early stages, the pulsar thus remains located at the center of the PWN, even if the pulsar itself is moving through the inner SNR, which is often the case because pulsars can be born with high velocities ($\sim 200 - 1500 {\rm\ km\ s}^{-1}$; Arzoumanian et al. 2002) due to kicks imparted in the supernova explosions. The wind is confined by the innermost slow-moving ejecta, and the PWN expansion drives a shock into these ejecta, heating them and producing thermal emission. Magnetic tension in the equatorial regions exceeds that elsewhere in the nebula, resulting in a prolate morphology with the long axis aligned with the pulsar rotation axis (Begelman \& Li 1992). As illustrated in Figure~\ref{fig:figure3} (left), the PWN/ejecta interface is susceptible to Rayleigh-Taylor (R-T) instabilities. These structures are readily observed in the Crab Nebula (Figure~\ref{fig:figure1}a; also see Hester 2008 as well as Chapter ``Supernova of 1054 and its remnant, the Crab Nebula''), where highly-structured filaments of gas and dust are observed in the optical and infrared. Spectral studies of these filaments provide information on the composition, mass, and velocity of the ejecta. This, along with information about the associated SNR, can place strong constraints on the progenitor system. In the Crab Nebula, for example, the total mass of the ejecta swept up by the PWN is $\sim 5 M_\odot$ (Fesen et al. 1997), and the expansion velocity is $\sim 1300{\rm\ km\ s}^{-1}$ (Temim et al. 2006). The Crab is one of a small set of young PWNe for which there is no evidence of the surrounding SNR, other than the swept-up ejecta.
Other examples include 3C~58 and perhaps G54.1$+$0.3, although there is some evidence for radio and X-ray emission that might be associated with an SNR shell in the latter (Bocchino et al. 2010). The lack of bright (or any) SNR emission in these systems is generally assumed to result from some combination of low explosion energy, as might result from low-mass progenitors that produce electron-capture SNe, and a very low surrounding density, as could result from mass loss through stellar winds in the late phase of massive star evolution. For the Crab Nebula, the available evidence appears to be consistent with a low-mass progenitor (Yang \& Chevalier 2015). For G54.1$+$0.3, on the other hand, an infrared shell surrounding the X-ray PWN is observed to encompass a collection of what appear to be O-type stars that presumably formed in the same stellar cluster as the PWN progenitor, indicating that this system resulted from a high mass star (Temim et al. 2010). The IR emission appears to arise from a combination of slow shocks driven into the surrounding ejecta and unshocked supernova dust that is being radiatively heated by emission from the embedded stars. \begin{figure}[t] \includegraphics[width=4.65in]{figure4.pdf} \caption{a) \chandra\ image of 3C~58 (Slane et al. 2004). Low (high) energy X-rays are shown in red (blue). b) Expanded view of the central region of 3C~58 showing the toroidal structure and jet associated with the central pulsar. [Images courtesy NASA/CXO.] } \label{fig:figure4} \end{figure} While the optical emission from 3C~58 \index{3C 58} shows evidence for R-T structures, high resolution X-ray observations show a network of filamentary structures that do not appear to be associated with the optical filaments (Figure~\ref{fig:figure4}). 
The origin of these structures is currently not understood, although kink instabilities \index{kink instabilities} in the termination shock region may result in magnetic structures whose size scale is similar to what is observed in 3C~58 (Slane et al. 2004). Thermal X-ray emission is observed in the outer regions of the PWN (which appear red in Figure~\ref{fig:figure4} due to both the low energy thermal flux and the steepening of the synchrotron spectrum with radius associated with burn-off of high energy particles), with indications of enhanced metals as would be expected from shocked ejecta. Mass and abundance measurements, combined with expansion measurements, can provide the velocity and composition distribution of the ejecta, placing constraints on the total ejecta mass and explosion energy of the supernova (e.g., Yang \& Chevalier 2015, Gelfand et al. 2015). For more typical systems, the ambient density (and/or supernova explosion energy) is sufficiently high to form a distinct SNR shell of swept-up CSM/ISM material, accompanied by RS-shocked ejecta, as illustrated in Figure~\ref{fig:figure3}. An excellent example is G21.5$-$0.9. \index{G21.5$-$0.9} X-ray observations (Figure~\ref{fig:figure5}a) show a bright central nebula that coincides with a flat-spectrum radio nebula. The nebula is accompanied by a faint SNR shell (Slane et al. 2000; Matheson \& Safi-Harb 2005), and radio timing measurements with the Parkes telescope reveal the 62~ms pulsar J1833$-$1034 at the center of the nebula (Camilo et al. 2006). Ground-based IR observations (Zajczyk et al. 2012) reveal a ring of \FeII\ emission associated with ejecta that has been swept up by the expanding PWN (Figure~\ref{fig:figure5}b; contours are X-ray emission from the dashed square region of Figure~\ref{fig:figure5}a). The emission directly around the pulsar is extended in X-rays (see innermost contours), possibly associated with a surrounding torus as is seen in the Crab Nebula and other PWNe.
The IR emission surrounding the pulsar is polarized. The electric field vectors are shown in Figure~\ref{fig:figure5}c, with the length of the white bars proportional to the polarization fraction. The magnetic field, which is perpendicular to the electric vectors, is largely toroidal, consistent with the picture of wound-up magnetic flux from the spinning pulsar, as described above. \begin{figure}[t] \includegraphics[width=4.65in]{figure5.pdf} \caption{a) \chandra\ image of G21.5$-$0.9. The pulsar is located at the center and is surrounded by a PWN. The faint outer shell is the SNR, and a portion of the emission between the PWN and the outer shell is scattered flux from the PWN. b) Infrared image at 1.64 $\mu$m showing a shell of \FeII\ emission from ejecta that has been swept up and shocked by the expanding PWN. c) Infrared polarization image. The pulsar is within the red circle. White bars show the direction of the electric field vectors, with the length proportional to the polarization fraction. The inferred magnetic field, which is orthogonal to the electric vectors, is largely toroidal. [From Zajczyk et al. 2012, A\&A, 542, A12 - Reproduced with permission from Astronomy \& Astrophysics, \textcopyright ESO] } \label{fig:figure5} \end{figure} \subsection{Reverse-shock Interaction} \index{reverse shock} \label{sec:4.2} As the SNR blast wave sweeps up increasing amounts of material, the RS propagates back toward the SNR center. In the absence of a central PWN, it reaches the center at a time $t_{c} \approx 7 (M_{ej}/10~M_\odot)^{5/6} E_{51}^{-1/2} n_0^{-1/3}~{\rm kyr}$, where $E_{51}$ is the explosion energy in units of $10^{51}$~erg, $M_{ej}$ is the ejecta mass, and $n_0$ is the number density of ambient gas (Reynolds \& Chevalier 1984). When a PWN is present, however, the RS interacts with the nebula before it can reach the center (Figure~\ref{fig:figure3}).
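The RS-return estimate above is easily evaluated; the parameter values chosen here are the fiducial ones used for Figures 2 and 6:

```python
def t_rs_center_kyr(M_ej_Msun, E_51=1.0, n0=1.0):
    """Reynolds & Chevalier (1984): RS arrival time at the SNR center, absent
    a PWN: t_c ~ 7 (M_ej / 10 Msun)^(5/6) E_51^(-1/2) n0^(-1/3) kyr."""
    return 7.0 * (M_ej_Msun / 10.0) ** (5.0 / 6.0) * E_51 ** -0.5 * n0 ** (-1.0 / 3.0)

# For M_ej = 8 Msun, E_51 = 1, n0 = 0.1 cm^-3 (the fiducial values above),
# the RS would converge in ~13 kyr, short compared with typical SNR lifetimes
t_c_kyr = t_rs_center_kyr(8.0, 1.0, 0.1)
```

This short convergence time is why, as noted below, all but the youngest observed PWNe have already undergone an RS interaction.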
The shock compresses the PWN, increasing the magnetic field strength and resulting in enhanced synchrotron radiation that burns off the highest energy particles. In the simplified case of SNR expansion into a uniform medium, with a spherically-symmetric PWN, the system evolves approximately as illustrated in Figure~\ref{fig:figure6} (from Gelfand et al. 2009), where the Sedov solution \index{Sedov solution} has been assumed for the SNR evolution, \begin{equation} R_{SNR} \approx 6.2 \times 10^4 \left(\frac{E_{SN}}{n_0}\right)^{1/5} t^{2/5}, \end{equation} and the PWN evolves approximately as \begin{equation} R_{PWN} \approx 1.5 \dot{E}_0^{1/5} E_{SN}^{3/10} M_{ej}^{-1/2} t^{6/5} \end{equation} (Chevalier 1977) prior to the RS interaction. [In reality, the SNR expands freely at the outset, approaching the Sedov solution as $t \rightarrow t_c$.] Here, $E_{SN}$ is the supernova explosion energy, $n_0$ is the number density of the ambient medium, and $M_{ej}$ is the mass of the supernova ejecta. \begin{figure}[t] \sidecaption \includegraphics[width=11.5cm]{figure6.pdf} \caption{ Time evolution of the SNR and PWN radii for a range of values for the ambient density and initial spin-down power of the pulsar. The solid curves correspond to models from Gelfand et al. (2009) using $\dot{E}_0 = 10^{40} {\rm\ erg\ s}^{-1}$, $M_{ej} = 8 M_\odot$, $n_0 = 0.1 {\rm\ cm}^{-3}$, and $E_{51} = 1$. } \label{fig:figure6} \end{figure} If the ambient CSM/ISM is significantly nonuniform (and it typically is, because massive stars form in turbulent regions of dense clouds and strongly modify the CSM through powerful, potentially-asymmetric winds), the FS expands more (less) rapidly in regions of lower (higher) density. This has two significant effects. First, it changes the morphology of the SNR to a distorted shell for which the associated pulsar is no longer at the center.
Second, the RS also propagates asymmetrically, reaching the center more quickly from the direction of the higher density medium (Blondin et al. 2001). The return of the RS ultimately creates a collision with the PWN. During the compression phase, \index{compression phase} the magnetic field of the nebula increases, resulting in enhanced synchrotron radiation and significant radiative losses from the highest energy particles. The PWN/RS interface is R-T unstable, and is subject to the formation of filamentary structure where the dense ejecta material is mixed into the relativistic fluid. If the SNR has evolved in a nonuniform medium, an asymmetric RS will form, disrupting the PWN and displacing it in the direction of lower density (Figure~\ref{fig:figure3}). The nebula subsequently re-forms as the pulsar injects fresh particles into its surroundings, but a significant relic nebula of mixed ejecta and relativistic gas will persist. Because the SNR RS typically reaches the central PWN on a timescale that is relatively short compared with the SNR lifetime, all but the youngest PWNe that we observe have undergone an RS interaction (see Figure~\ref{fig:figure6}). This has a significant impact on the large-scale geometry of the PWN, as well as on its spectrum and dynamical evolution. Remnants such as G328.4+0.2 (Gelfand et al. 2007), MSH 15$-$56 (Temim et al. 2013), and G327.1$-$1.1 (Temim et al. 2015) \index{G327.1$-$1.1} all show complex structure indicative of RS/PWN interactions, and observations of extended sources of very high energy (VHE) $\gamma$-rays indicate that many of these objects correspond to PWNe that have evolved beyond the RS-crushing stage. An example of such an RS-interaction stage is presented in Figure~\ref{fig:figure7}, where we show the composite SNR G327.1$-$1.1 (Temim et al. 2015).
Radio observations (a) show a complete SNR shell surrounding an extended flat-spectrum PWN in the remnant interior, accompanied by a finger-like structure extending to the northwest. X-ray observations (b) show faint emission from the SNR shell along with a central compact source located at the tip of the radio finger, accompanied by a tail of emission extending back into the radio PWN. The X-ray properties of the compact source are consistent with emission from a pulsar (though pulsations have not yet been detected) which, based on its position relative to the geometric center of the SNR, appears to have a northward motion. Spectra from the SNR shell indicate a density gradient in the surrounding medium, increasing from east to west. Results from hydrodynamical modeling of the evolution of such a system, using these measurements as constraints along with an estimate of the spin-down power of the pulsar based upon the observed X-ray emission of its PWN (see Section 3.1), are shown in Figure~\ref{fig:figure7}c, where we plot the density (compare with Figure~\ref{fig:figure3}). The RS has approached rapidly from the west, sweeping past the pulsar and disrupting the PWN. The result is a trail of emission swept back into the relic PWN, in excellent agreement with the radio morphology. The X-ray spectrum of the tail shows a distinct steepening with distance from the pulsar, consistent with synchrotron cooling of the electrons based on the estimated magnetic field and the age of the injected particles tracked in the hydro simulation. Detailed investigation shows that the central source is actually resolved, suggesting that the pulsar is surrounded by a compact nebula (panel d). This is embedded in a cometary structure produced by a combination of the northward motion of the pulsar and the interaction with the RS propagating from the west. However, extended prong-like structures are observed in X-rays, whose origin is currently not understood.
\begin{figure}[t] \includegraphics[width=4.65in]{figure7.pdf} \caption{ a) Radio emission from G327.1$-$1.1 (SIFA/MOST, CSIRO/ATNF/ATCA) showing SNR shell surrounding central PWN. b) \chandra\ image showing faint X-ray SNR shell and PWN. c) Hydrodynamical simulation of evolved composite SNR with properties similar to G327.1$-$1.1. (See text for details.) d) Expanded \chandra\ view of central region of G327.1$-$1.1. A compact nebula surrounding the neutron star is embedded in a cometary structure with an extended tail, formed by a combination of northward pulsar motion and an interaction with the SNR reverse shock approaching from the west. Prong-like structures of unknown origin extend from several regions around the nebula. [After Temim et al. 2015. All images have north at top and west at the right.] } \label{fig:figure7} \end{figure} \subsection{Late-phase Evolution} \index{late-phase evolution} As illustrated in Figure~\ref{fig:figure2}, as a PWN ages, the ratio of the IC to synchrotron luminosity increases due to the declining magnetic field in the nebula. As a result, in late phases of the evolution, the $\gamma$-ray emission may dominate that observed in the radio or X-ray bands. Indeed, PWNe dominate the population of TeV $\gamma$-ray sources in the Galactic Plane (e.g., Carrigan et al. 2013). For many such TeV-detected PWNe, the inferred magnetic field strengths are only $\sim 5 \ \mu$G (e.g., de Jager et al. 2008). In such a case, 1~TeV gamma-rays originate from electrons with energies of $\sim 20$~TeV (assuming IC scattering of CMB photons) while 1~keV synchrotron X-rays originate from electrons with energies of $\sim 100$~TeV (see Eqns. \ref{eqn:E_syn}, \ref{eqn:E_IC}). The higher energy X-ray producing electrons fall beyond the cooling break, while those producing the $\gamma$-rays are predominantly uncooled. The result is a bright TeV nebula accompanied by a fainter X-ray nebula. 
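The electron energies quoted above can be recovered from the standard synchrotron and inverse-Compton scalings. The sketch below is an order-of-magnitude check, not an evaluation of the chapter's Eqns.~\ref{eqn:E_syn} and \ref{eqn:E_IC}: it assumes Thomson-limit scattering of CMB photons, a pitch angle near $90^\circ$, and the $B = 5\ \mu$G field mentioned in the text.

```python
import math

# Order-of-magnitude check of the electron energies behind TeV IC photons
# and keV synchrotron photons, for an assumed B = 5 microgauss field.

M_E_EV = 0.511e6      # electron rest energy [eV]
H_EV_S = 4.1357e-15   # Planck constant [eV s]
E_CMB_EV = 6.3e-4     # mean CMB photon energy, ~2.7 kT at T = 2.725 K [eV]

def electron_energy_for_ic(e_gamma_ev):
    """Electron energy [TeV] that up-scatters CMB photons to e_gamma_ev
    in the Thomson limit, E_gamma ~ (4/3) gamma^2 <E_CMB>."""
    gamma = math.sqrt(e_gamma_ev / (4.0 / 3.0 * E_CMB_EV))
    return gamma * M_E_EV / 1e12

def synchrotron_energy_kev(e_e_tev, b_gauss):
    """Characteristic synchrotron photon energy [keV] for electrons of
    energy e_e_tev in a field b_gauss (pitch angle ~ 90 degrees)."""
    gamma = e_e_tev * 1e12 / M_E_EV
    nu_gyro = 2.8e6 * b_gauss          # non-relativistic gyrofrequency [Hz]
    nu_c = 1.5 * gamma**2 * nu_gyro    # synchrotron critical frequency
    return H_EV_S * nu_c / 1e3

print(electron_energy_for_ic(1e12))          # ~18 TeV electrons for 1 TeV gamma-rays
print(synchrotron_energy_kev(100.0, 5e-6))   # a few keV from 100 TeV electrons at 5 muG
```

Both numbers land where the text says they should: $\sim 20$~TeV electrons for 1~TeV IC photons, and keV-band synchrotron emission from $\sim 100$~TeV electrons.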
Such results are seen clearly for HESS J1825$-$137, for which measurements show that the TeV emission extends to much larger distances than the X-ray emission due to more rapid cooling of the X-ray emitting particles. Indeed, for this PWN, the $\gamma$-ray size is observed to decline with increasing energy, indicating that even some of the $\gamma$-ray emitting electrons fall beyond the cooling break although, as observed in younger PWNe in X-rays, the high energy emission extends to larger radii than can be explained unless rapid diffusion of the associated electrons is occurring (Van Etten \& Romani 2011). Deep surveys with future VHE $\gamma$-ray telescopes are expected to reveal many older systems for which emission in other wavebands is now faint. \subsection{Escaping the SNR -- Bow Shock PWNe} \index{bow shock nebulae} \label{sec:4.3} \begin{figure}[t] \sidecaption \includegraphics[width=11.5cm]{figure8.pdf} \caption{ Left: Hydrodynamic simulation of a pulsar bow shock nebula (see text). [From Gaensler \& Slane 2006. Reprinted by permission.] Right: H$\alpha$ image of bow shock created by PSR~J0437$-$4715. [Image courtesy of R. Romani. See Brownsberger \& Romani 2014.] } \label{fig:figure8} \end{figure} Late in the evolution of a PWN, the pulsar will exit its host SNR and begin traveling through the ISM. Since the sound speed in the cold, warm, and hot phases of the ISM is $v_s \sim 1,\ 10,\ {\rm and}\ 100 {\rm\ km\ s}^{-1}$, respectively, while typical pulsar space velocities are several hundred ${\rm km\ s}^{-1}$, the pulsar motion will generally be supersonic. The relative motion of the ISM sweeps the pulsar wind back into a bow shock structure. As illustrated in Figure~\ref{fig:figure8} (left), the structure is still characterized by an FS, CD, and TS, but the gas behind the FS is now shocked ISM material, and the CD separates the shocked pulsar wind from the shocked ISM. Inside the TS, the pulsar wind flows freely. 
The distance from the pulsar to the TS depends on the angle $\theta$ relative to the pulsar motion (as does that to the FS), and is approximately described by (Wilkin 1996) \begin{equation} R_w(\theta) = R_{w0} \frac{\sqrt{3(1-\theta \cot\theta)}}{\sin \theta}. \end{equation} Here $R_{w0}$ is the stand-off distance from the pulsar, in the direction of motion, where the wind pressure matches the ram pressure of the inflowing ISM (in the pulsar frame): \begin{equation} R_{w0} = \sqrt{\dot{E}/(4 \pi \omega c \rho_0 v_{\rm PSR}^2)}, \end{equation} where $v_{\rm PSR}$ is the pulsar velocity and $\rho_0$ is the density of the unshocked ISM (cf. Equation 6). Although this description was derived for a non-relativistic, unmagnetized radiative fluid, whereas the pulsar wind is magnetized and relativistic and the radiative time for the ISM is long relative to the flow timescale in pulsar bow shock nebulae, the overall geometric description provides an adequate representation (Bucciantini \& Bandiera 2001). The non-radiative shock formed in the ISM interaction results in the emission of optical Balmer lines, dominated by H$\alpha$, providing a distinct signature from which properties of the pulsar motion and wind luminosity can be inferred. An exceptional example is the bow shock nebula associated with PSR~J0437$-$4715 (Figure~\ref{fig:figure8}, right), a nearby ms pulsar in a binary system, for which timing measurements have established $M_{NS} \sim 1.8 M_\odot$ (Verbiest et al. 2008). Parallax measurements establish a distance of 0.16~kpc, and proper motion measurements of the pulsar (and nebula) provide $v_\perp = 107 {\rm\ km\ s}^{-1}$. With the measured spin-down power $\dot{E} = 5.5 \times 10^{33} {\rm\ erg\ s}^{-1}$, modeling of the bow shock structure provides a direct limit on the NS moment of inertia that indicates a relatively stiff equation of state (Brownsberger \& Romani 2014). Radio and X-ray measurements of bow shock nebulae probe the shocked pulsar wind. 
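The Wilkin shape and the stand-off scale are easy to evaluate numerically. The sketch below is illustrative only: it takes an isotropic wind ($\omega = 1$), and the ambient density adopted for the PSR~J0437$-$4715 example is a placeholder value, not a measurement from the text.

```python
import math

# Sketch of the Wilkin (1996) bow-shock shape,
# R_w(theta)/R_w0 = sqrt(3*(1 - theta*cot(theta)))/sin(theta).

def wilkin_shape(theta):
    """Contact-discontinuity radius in units of the stand-off distance R_w0."""
    return math.sqrt(3.0 * (1.0 - theta / math.tan(theta))) / math.sin(theta)

def standoff_distance_cm(edot, n0, v_psr_kms):
    """Stand-off distance [cm] where the wind pressure Edot/(4 pi c R^2)
    balances the ram pressure rho_0 v^2 (isotropic wind, omega = 1)."""
    c = 3.0e10              # speed of light [cm/s]
    m_h = 1.67e-24          # hydrogen mass [g]
    rho0 = n0 * m_h
    v = v_psr_kms * 1.0e5   # [cm/s]
    return math.sqrt(edot / (4.0 * math.pi * c * rho0 * v * v))

# Along the direction of motion the shape reduces to the stand-off distance:
print(wilkin_shape(1e-4))  # -> 1.0 to numerical precision

# Rough scale for PSR J0437-4715 (Edot and v from the text;
# n0 ~ 0.2 cm^-3 is an assumed placeholder density):
print(standoff_distance_cm(5.5e33, 0.2, 107))  # a few x 10^16 cm
```

The small-$\theta$ limit recovering $R_{w0}$ follows from $1-\theta\cot\theta \to \theta^2/3$, so $\sqrt{3(1-\theta\cot\theta)}/\sin\theta \to 1$.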
Observations of PSR~J1747$-$2958 and its associated nebula G359.23$-$0.82 reveal a long radio tail and an X-ray morphology that shows both a highly magnetized tail from wind shocked in the forward direction and a weakly magnetized tail from wind flowing in the direction opposite that of the pulsar motion (Gaensler et al. 2004). High resolution measurements of the emission near several pulsars have also provided evidence for asymmetric pulsar winds imprinting additional structure on the bow shock (e.g., Romani et al. 2010). \section{Summary} \label{sec:5} The structure of a PWN is determined by both the properties of the host pulsar and the environment into which the nebula expands. Observations across the electromagnetic spectrum allow us to constrain the nature of the pulsar wind, including both its magnetization and geometry, and the global properties of the PWN allow us to constrain the evolutionary history as it evolves through the ejecta of the supernova remnant in which it was born. Spectroscopic observations yield information on the mass and composition of shocked ejecta into which the nebula expands, and on the expansion velocity. Measurements of the broadband spectrum provide determinations of the nebular magnetic field and the maximum energy of the particles injected into the PWN. These observations continue to inform theoretical models of relativistic shocks which, in turn, have broad importance across the realm of high-energy astrophysics. At late phases, interactions between the PWN and the SNR RS produce a complex combination of the relic nebula and freshly-injected particles. Hydrodynamical simulations of the entire composite SNR system can reveal information on the long-term evolution, which depends on the details of the pulsar motion, its wind properties, the ejecta mass and explosion energy of the SNR, and the properties of the surrounding medium. 
Such systems may eventually fade into obscurity, with $\gamma$-ray emission from the relic electrons providing an important signature before the pulsars exit their SNRs and traverse the ISM at supersonic speeds, producing elongated bow shock nebulae whose structures continue to provide a glimpse of the relativistic outflows from the aging pulsars. \bigskip \noindent {\large{\bf Acknowledgements}} \noindent The author would like to thank the many colleagues with whom he has collaborated on studies that have been briefly summarized in this Handbook contribution. Partial support for this effort was provided by NASA Contract NAS8-03060. \bigskip \noindent {\large{\bf Cross-References}} \noindent $\bullet$ Supernova of 1054 and its remnant, the Crab Nebula \noindent $\bullet$ The Historical Supernova of AD1181 and its remnant, 3C58 \noindent $\bullet$ Supernovae from super AGB Stars (8-12 $M_\odot$) \noindent $\bullet$ Explosion Physics of Core - Collapse Supernovae \noindent $\bullet$ Radio Neutron Stars \noindent $\bullet$ Distribution of the spin periods of neutron stars \noindent $\bullet$ Dynamical Evolution and Radiative Processes of Supernova Remnants \noindent $\bullet$ X-ray Emission Properties of supernova remnants \noindent $\bullet$ Infrared Emission from Supernova Remnants: Formation and Destruction of Dust \bibliographystyle{spbasic}
\section{Introduction} Let $\mathbb{F}_q$ be the finite field with order $q$, where $q=p^l$ and $p$ is an odd prime. In the vector space $\mathbb{F}_q^d$, we can consider the following distance map \begin{equation}\label{distance} \lambda: (x,y)\longmapsto\|x-y\|=(x_1-y_1)^2+\ldots+(x_d-y_d)^2. \end{equation} For $E\subset \mathbb{F}_q^d$, let $\Delta(E)$ denote the set of distances determined by the points of $E$; that is, $$\Delta(E):=\{\|x-y\|: x,y\in E\}.$$ The Erd\H{o}s-Falconer distance problem in $\mathbb{F}_q^d$ asks for a threshold on the size of $E\subset \mathbb{F}_q^d$ so that $\Delta(E)$ contains a positive proportion of $\mathbb{F}_q.$ In \cite{IR07}, Iosevich and Rudnev proved that if $E\subset \mathbb{F}_q^d$ satisfies $|E|>cq^{\frac{d+1}{2}}$ for a sufficiently large constant $c$, then $\Delta(E)=\mathbb{F}_q$. The Erd\H{o}s-Falconer distance problem in modules $\mathbb{Z}_q^d$ over the cyclic rings $\mathbb{Z}_q$ was studied by Covert, Iosevich and Pakianathan in \cite{CIP}. More precisely, it is proven there that for $E\subset \mathbb{Z}_q^d$, where $q=p^l$, if $|E|\gg l(l+1)q^{\frac{(2l-1)d}{2l}+\frac{1}{2l}}$, then $\Delta(E)$ contains all unit elements of $\mathbb{Z}_q$. For more literature on the distance introduced in (\ref{distance}) and related geometric configurations, we refer to \cite{BHIPR13, BIP, CEHIK12, CHISU, HI, HIKR11, IRZ12} and the references therein. In this paper, we tackle a similar problem related to coding theory. Instead of the distance given in (\ref{distance}), we consider the Hamming distance in $\mathbb{F}_q^d$, a key notion in coding theory, and ask about similar geometric configurations in $\mathbb{F}_q^d$. We note that the approach we use to prove the main theorem of this paper is analogous to the one employed in \cite{CIP} and \cite{IR07}. Let us first recall the necessary notions. 
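For small parameters the distance set of (\ref{distance}) can be computed by brute force; the sketch below (illustrative only, not part of the argument) confirms that the full grid $\mathbb{F}_5^2$ determines every distance.

```python
import itertools

# Brute-force computation of the distance set Delta(E) for the quadratic
# distance map over a small prime field; purely illustrative.

def distance_set(points, q):
    """All values ||x - y|| = sum_i (x_i - y_i)^2 mod q over pairs from points."""
    return {sum((a - b) ** 2 for a, b in zip(x, y)) % q
            for x in points for y in points}

q, d = 5, 2
all_points = list(itertools.product(range(q), repeat=d))

# The full grid determines every distance in F_5:
print(sorted(distance_set(all_points, q)))  # -> [0, 1, 2, 3, 4]
```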
For two vectors $x=(x_1,\dots ,x_d),y=(y_1,\dots, y_d)\in \mathbb{F}_q^d$, the Hamming distance between $x$ and $y$ is defined as $$|x-y|=\sum_{i=1}^{d}d(x_i,y_i)$$ where \begin{equation*} d(x_i,y_i)=1-\delta_{x_i,y_i}= \left\{ \begin{array}{rl} 0 & \text{if } x_i=y_i,\\ 1 & \text{if } x_i\ne y_i. \end{array} \right. \end{equation*} In other words, the Hamming distance $|x-y|$ between $x$ and $y$ is the number of coordinates in which $x$ and $y$ differ. In particular, $|x|$ is the number of nonzero coordinates of $x$. We will denote the Hamming weight of $x$ by $wt(x)$. The question we address in this note is the following: for a subset $E$ of $\mathbb{F}_q^d$, which can be seen as a code over $\mathbb{F}_q$, how large does $E$ need to be to guarantee that $E$ determines the desired set of Hamming distances? \subsection{Main Result} \begin{theorem}\label{main} Let $E\subset \mathbb{F}_q^d$ where $4|d$. If $|E|>\frac{q^{d-1}}{d}\binom{d}{d/2}\binom{d/2}{d/4}$, then the points of $E$ determine a Hamming distance $r$ for every even $r$. \end{theorem} \subsection{Fourier Analysis in $\mathbb{F}_q^d$} Let $ f : \mathbb {F}_{q}^d\to \mathbb{C}$. The Fourier transform of $f$ is defined as $$\widehat {f}(m)=q^{-d}\sum_{x\in \mathbb {F}_{q}^d}\chi(-x\cdot m)f(x),$$ where $\chi(z)=e^{\frac{2\pi i Tr{(z)}}{q}}$, $q=p^l$ with $p$ prime, and $Tr: \mathbb{F}_q\to \mathbb{F}_p$ is the Galois trace. We recall the following properties of the Fourier transform. \begin{equation*} q^{-d}\sum_{x\in \mathbb {F}_{q}^d}\chi(x\cdot m) = \left\{ \begin{array}{rl} 1, & \text{if } m=0\\ 0, & \text{otherwise} \end{array} \right. \qquad \text{(Orthogonality)} \end{equation*} \begin{equation*} f(x)=\sum_{m\in \mathbb {F}_{q}^d}\chi(x\cdot m)\widehat{f}(m) \qquad \text{(Inversion)} \end{equation*} \begin{equation*} \sum_{m\in \mathbb {F}_{q}^d}|\widehat{f}(m)|^2=q^{-d}\sum_{x\in \mathbb {F}_{q}^d}|f(x)|^2. 
\qquad \text{(Plancherel)} \end{equation*} \vskip.125in \section{Proof of Main Result} For the proof of Theorem \ref{main}, we make use of the following lemmas: \begin{lemma} Let $S_r(u)=\{v\in \mathbb{F}_q^d: |u-v|=r\}$ be the sphere of radius $r$ centered at $u\in \mathbb{F}_q^d$. Then $$|S_r(u)|=(q-1)^r\binom{d}{r}.$$ \end{lemma} \begin{proof} If $v\in S_r(u)$, then $u$ and $v$ differ in $r$ coordinates. Note that we have $\binom{d}{r}$ ways of choosing those $r$ coordinates, and for each of these $r$ coordinates of $v$ we have $q-1$ choices. \end{proof} \begin{lemma}\label{supshat} Let $S_r:=S_r(0)=\{v\in \mathbb{F}_q^d: |v|=r\}$ denote the sphere of radius $r$ centered at $0\in \mathbb{F}_q^d$, where $4|d$, and identify $S_r$ with its indicator function. Then \begin{eqnarray*} \sup_{0\ne m\in \mathbb{F}_q^d} |\widehat{S}_r(m)|&= &q^{-d}\sup_{0\ne m\in \mathbb{F}_q^d} |K_{r}(wt(m))|\\ &\le& \begin{cases} q^{-d} \binom{d}{d/2} \binom{d/2}{d/4} & \text{if } wt(m) \text{ is even} \\ q^{-d}(q-1)^{r-1}\frac{\binom{d}{r}}{d}\binom{d}{d/2}\binom{d/2}{d/4} & \text{if } wt(m) \text{ is odd and } r \text{ is even} \end{cases} \end{eqnarray*} \end{lemma} \begin{proof} \begin{eqnarray*} \widehat{S}_r(m)&=&q^{-d}\sum_{x\in \mathbb{F}_q^d}\chi(-x\cdot m)S_r(x)\\ &=&q^{-d}\sum_{\substack{x_{i_1},\dots,x_{i_r}\in \mathbb{F}_q^{*}\\i_j\in \{1,\dots d\}\\ i_j\ne i_k } } \chi (-x_{i_1}m_{i_1}-\dots-x_{i_r}m_{i_r})\\ &=&q^{-d}\sum_{\substack{x_{i_1},\dots,x_{i_r}\in \mathbb{F}_q^{*}\\i_j\in \{1,\dots d\}\\ i_j\ne i_k } }e^{\frac{-2\pi i}{q}(x_{i_1}m_{i_1}+\dots +x_{i_r}m_{i_r}) }\\ &=&q^{-d}\sum_{\substack{|\mathcal{I}^k|=r\\\mathcal{I}^k=(k_1,\dots,k_r)}} \prod_{i=1}^{r}\sum_{x_i\in \mathbb{F}_q^{*}} e^{\frac{-2\pi i}{q}(x_im_{k_i})}\\ &=&q^{-d}\sum_{\substack{\{k_1,\dots,k_r\}\subset \{1,\dots,d\}\\k_i<k_j\; \text{for}\; i<j }}\left(\sum_{x_1\in \mathbb{F}_q^*}e^{\frac{-2\pi i}{q}(x_1m_{k_1})} \dots\sum_{x_r\in \mathbb{F}_q^{*}}e^{\frac{-2\pi i}{q}(x_rm_{k_r})} \right) 
\end{eqnarray*} First note that \begin{equation*} \sum_{x_i\in \mathbb{F}_q^*}e^{\frac{-2\pi i}{q}(x_im_{k_i})} =\left\{ \begin{array}{rl} q-1& \text{if } m_{k_i}=0\\ -1& \text{if } m_{k_i}\ne0. \end{array} \right. \end{equation*} Now let $wt(m)=t$, $m=(m_1,\dots,m_t,\dots,m_d)$, where $m_i\ne0$ for $i=1,\dots, t$ and $m_i=0$ for $i=t+1,\dots,d$. For a fixed $\mathcal{I}^k=(k_1,\dots,k_r)$, let $$S_{\mathcal{I}^k}=\left(\sum_{x_1\in \mathbb{F}_q^*}e^{\frac{-2\pi i}{q}(x_1m_{k_1})} \dots \sum_{x_r\in \mathbb{F}_q^{*}}e^{\frac{-2\pi i}{q}(x_rm_{k_r})} \right).$$ Here, if $i$ of the coordinates $(m_{k_1},\dots,m_{k_r})$ are nonzero, then $$S_{\mathcal{I}^k}=(-1)^{i}(q-1)^{r-i},$$ and there are $\binom{t}{i}\binom{d-t}{r-i}$ such tuples $(m_{k_1},\dots,m_{k_r})$. Summing over all possible values of $i$, $i=0,\dots,r$ (the terms with $i>t$ vanish), we get \begin{eqnarray}\label{shat} \widehat{S}_r(m)&=&q^{-d}\sum_{i=0}^{r}\binom{t}{i}\binom{d-t}{r-i}(-1)^{i}(q-1)^{r-i}\\ &=&q^{-d}K_r(t)=q^{-d}K_r(wt(m))\nonumber \end{eqnarray} where $K_r(\cdot)$ denotes the Krawtchouk polynomial. We will make use of the following two lemmas from \cite{KL}. \begin{lemma}\emph{\cite[Lemma 1]{KL}}\label{first} For $d$ and $i$ even, $$|K_{k}(i)|\le |K_{d/2}(i)|.$$ \end{lemma} \begin{lemma}\emph{\cite[Lemma 2]{KL}} \label{second} For $k$ an integer, and $d$ and $i$ even, $$|K_i(k)|\le \frac{\binom{d}{d/2} \binom{d/2}{i/2} }{\binom{d}{k}}.$$ \end{lemma} Now using Lemmas \ref{first} and \ref{second}, we immediately obtain that if $wt(m)$ is even, then $$\sup |\widehat{S}_r(m) | \le q^{-d} \binom{d}{d/2} \binom {d/2}{d/4}. $$ On the other hand, if $wt(m)=i$ is odd, then using the symmetry relation of the Krawtchouk polynomials, and assuming now that $r$ is even, we obtain \begin{eqnarray*} |\widehat{S}_r(m) |&=&q^{-d}|K_r(wt(m))|\\ &=&q^{-d}|K_{r}(i)|\\ &=&q^{-d}\frac{(q-1)^r\binom{d}{r}|K_i(r)| }{(q-1)^{i}\binom{d}{i}}\\ &\le& q^{-d}(q-1)^{r-i} \frac{\binom{d}{r}}{ \binom{d}{i}} \binom{d}{d/2} \binom {d/2}{d/4}\\ &\le& q^{-d}(q-1)^{r-1}\frac{ \binom{d}{r} }{d} \binom{d}{d/2} \binom {d/2}{d/4} \end{eqnarray*} \end{proof} \begin{proof}[Proof of Theorem \ref{main}] Let $0<r< d$ be even. Let $\lambda_r=|\{(x,y)\in E\times E: |x-y|=r\}|$. 
Then \begin{eqnarray}\label{lambda} \lambda_r&=&\sum_{x,y\in \mathbb{F}_q^d}E(x)E(y)S_r(x-y)\nonumber \\ &=&\sum_{x,y,m\in \mathbb{F}_q^d}E(x)E(y)\widehat{S}_r(m)\chi(m\cdot(x-y))\nonumber\\ &=&q^{2d}\sum_{m}|\widehat{E}(m)|^2\widehat{S}_r(m)\nonumber\\ &=&q^{2d}|\widehat{E}(0)|^2\widehat{S}_r(0)+q^{2d}\sum_{m\ne 0}|\widehat{E}(m)|^2\widehat{S}_r(m)\nonumber\\ &=&q^{-d}|E|^2|S_r|+q^{2d}\sum_{m\ne 0}|\widehat{E}(m)|^2\widehat{S}_r(m)\nonumber\\ &=&q^{-d}|E|^2(q-1)^r\binom{d}{r}+q^{2d}\sum_{\substack{m\ne 0\\wt(m) \text{ is even}}}|\widehat{E}(m)|^2\widehat{S}_r(m)+q^{2d}\sum_{\substack{m\ne 0\\wt(m) \text{ is odd}}}|\widehat{E}(m)|^2\widehat{S}_r(m)\nonumber\\ &=&q^{-d}|E|^2(q-1)^r\binom{d}{r}+I+II \end{eqnarray} where $$ I=q^{2d}\sum_{\substack{m\ne 0\\wt(m) \text{ is even}}}|\widehat{E}(m)|^2\widehat{S}_r(m) $$ and $$ II=q^{2d}\sum_{\substack{m\ne 0\\wt(m) \text{ is odd}}}|\widehat{E}(m)|^2\widehat{S}_r(m) $$ We will first estimate $| I |$. By Lemma \ref{supshat} and Plancherel identity, it follows that \begin{eqnarray*} |I|&\leq& q^{2d} q^{-d} \binom{d}{d/2} \binom{d/2}{d/4} \sum_{\substack{m\ne 0\\wt(m) \text{ is even}}}|\widehat{E}(m)|^2\\ &\le& q^{d} \binom{d}{d/2} \binom{d/2}{d/4}\sum_{m}|\widehat{E}(m)|^2\\ &\leq& q^{d} \binom{d}{d/2} \binom{d/2}{d/4}q^{-d}|E|\\ &=& \binom{d}{d/2} \binom{d/2}{d/4}|E| \end{eqnarray*} Now we will estimate $|II|$. 
Again using Lemma \ref{supshat} and the Plancherel identity, we obtain that \begin{eqnarray*} |II|&\le& q^{2d}q^{-d}(q-1)^{r-1}\frac{\binom{d}{r}}{d}\binom{d}{d/2}\binom{d/2}{d/4} \sum_{\substack{m\ne 0\\wt(m) \text{ is odd}}}|\widehat{E}(m)|^2\\ &\le& q^{d}(q-1)^{r-1}\frac{\binom{d}{r}}{d}\binom{d}{d/2}\binom{d/2}{d/4} \sum_{m}|\widehat{E}(m)|^2\\ &\le& q^{d+r-1}\frac{\binom{d}{r}}{d}\binom{d}{d/2}\binom{d/2}{d/4}q^{-d}|E|\\ &=&q^{r-1}\frac{\binom{d}{r}}{d}\binom{d}{d/2}\binom{d/2}{d/4}|E|. \end{eqnarray*} Clearly, $|I|\le |II|$. It follows from (\ref{lambda}) that if $q^{-d}|E|^2(q-1)^r\binom{d}{r}> q^{r-1}\frac{\binom{d}{r}}{d}\binom{d}{d/2}\binom{d/2}{d/4}|E|$, that is, if $$|E|>\frac{q^{d-1}}{d}\binom{d}{d/2}\binom{d/2}{d/4},$$ then $\lambda_r>0.$ \end{proof}
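The Krawtchouk identity (\ref{shat}) at the heart of the proof can be checked numerically for small parameters. The sketch below (illustrative, and restricted to prime $q$ so that the trace in the character is trivial) compares the Fourier transform of the sphere indicator, computed by direct summation, against the Krawtchouk polynomial.

```python
import cmath
import math
from itertools import product

# Numerical check of q^d * \hat{S}_r(m) = K_r(wt(m)) for prime q.

def krawtchouk(q, d, r, t):
    """Krawtchouk polynomial K_r(t); math.comb returns 0 when its second
    argument exceeds the first, so terms with i > t vanish automatically."""
    return sum(math.comb(t, i) * math.comb(d - t, r - i)
               * (-1) ** i * (q - 1) ** (r - i)
               for i in range(r + 1))

def fourier_sphere(q, d, r, m):
    """q^d * \hat{S}_r(m): sum of chi(-x.m) over Hamming-weight-r vectors x."""
    total = 0.0 + 0.0j
    for x in product(range(q), repeat=d):
        if sum(1 for xi in x if xi != 0) == r:
            dot = sum(xi * mi for xi, mi in zip(x, m)) % q
            total += cmath.exp(-2j * cmath.pi * dot / q)
    return total

q, d, r = 3, 4, 2
for m in [(0, 0, 0, 0), (1, 0, 0, 0), (1, 2, 0, 0), (1, 1, 1, 2)]:
    t = sum(1 for mi in m if mi != 0)
    assert abs(fourier_sphere(q, d, r, m) - krawtchouk(q, d, r, t)) < 1e-8
print("identity verified for q=3, d=4, r=2")
```

At $m=0$ the identity reduces to the sphere-size lemma: $K_r(0) = (q-1)^r\binom{d}{r}$, which equals $24$ here.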
\documentclass[twocolumn,showpacs,preprintnumbers,amsmath,amssymb,epsfig]{revtex4} \usepackage{graphicx} \usepackage{dcolumn} \usepackage{bm} \usepackage{epsfig} \usepackage{hyperref} \usepackage{float} \usepackage{amsmath} \usepackage{floatflt} \usepackage{subfigure} \usepackage[usenames]{color} \usepackage{comment} \usepackage{ulem} \maxdeadcycles=1000 \begin{document} \title{Current data are consistent with flat spatial hypersurfaces in the $\Lambda$CDM cosmological model but favor more lensing than the model predicts} \author{Javier de Cruz P\'erez${}^{1}$, Chan-Gyung Park${}^{2}$, and Bharat Ratra${}^{1}$} \affiliation{ ${}^{1}$Department of Physics, Kansas State University, 116 Cardwell Hall, Manhattan, KS 66506, USA \\ ${}^{2}$Division of Science Education and Institute of Science Education, Jeonbuk National University, Jeonju 54896, Republic of Korea } \email{[email protected], [email protected], [email protected]} \date{\today} \begin{abstract} We study the performance of three pairs of tilted $\Lambda$CDM cosmological models, two pairs allowing for non-flat spatial hypersurfaces, using cosmic microwave background (CMB) temperature and polarization power spectrum data (P18), measurements of the Planck 2018 lensing potential power spectrum (lensing), and a large compilation of non-CMB data (non-CMB). 
For the six models, we measure cosmological parameters and study whether or not pairs of the data sets (as well as subsets of them) are mutually consistent in these models. Half of these models allow the lensing consistency parameter $A_L$, which re-scales the gravitational potential power spectrum, to be an additional free parameter to be determined from data, while the other three have $A_L = 1$, which is the theoretically expected value. The tilted spatially-flat models assume the usual primordial spatial inhomogeneity power spectrum that is a power law in wave number. The tilted non-flat models assume either the primordial power spectrum used in the Planck group analyses [Planck $P(q)$], which has recently been numerically shown to be a good approximation to what is quantum-mechanically generated from a particular choice of closed inflation model initial conditions, or a recently computed power spectrum [new $P(q)$] that quantum-mechanically follows from a different set of non-flat inflation model initial conditions. In the tilted non-flat models with $A_L=1$ we find differences between P18 data and non-CMB data cosmological parameter constraints, which are large enough to rule out the Planck $P(q)$ model at 3$\sigma$ but not the new $P(q)$ model. No significant differences are found when cosmological parameter constraints obtained with two different data sets are compared within the standard tilted flat $\Lambda$CDM model. While both P18 data and non-CMB data separately favor a closed geometry, with spatial curvature density parameter $\Omega_k<0$, when P18+non-CMB data are jointly analyzed the evidence in favor of non-flat hypersurfaces subsides. Differences between P18 data and non-CMB data cosmological constraints subside when $A_L$ is allowed to vary. 
From the most restrictive P18+lensing+non-CMB data combination we get almost model-independent constraints on the cosmological parameters and find that the $A_L>1$ option is preferred over the $\Omega_k<0$ one, with the $A_L$ parameter, for all models, being larger than unity by $\sim 2.5\sigma$. According to the deviance information criterion, in the P18+lensing+non-CMB analysis, the varying $A_L$ option is on the verge of being {\it strongly} favored over the $A_L=1$ one, which could indicate a problem for the standard tilted flat $\Lambda$CDM model. These data are consistent with flat spatial hypersurfaces but more and better data could improve the constraints on $\Omega_k$ and might alter this conclusion. Error bars on some cosmological parameters are significantly reduced when non-CMB data are used jointly with P18+lensing data. For example, in the tilted flat $\Lambda$CDM model for P18+lensing+non-CMB data the Hubble constant $H_0=68.09\pm 0.38$ km s$^{-1}$ Mpc$^{-1}$, which is consistent with that from a median statistics analysis of a large compilation of $H_0$ measurements as well as with a number of local measurements of the cosmological expansion rate. This $H_0$ error bar is 31\% smaller than that from P18+lensing data alone. \end{abstract} \pacs{98.80.-k, 95.36.+x} \maketitle \section{Introduction} \label{sec:intro} General relativity is the current best description of gravity on cosmological scales. In general relativity gravity is responsible for the observed expansion of the Universe and can be sourced by non-relativistic (cold dark and baryonic) matter, relativistic radiation/matter, a cosmological constant (or a dynamical dark energy density), and the curvature of space. In an influential 1932 paper Einstein and de Sitter, \citep{EinsteindeSitter1932}, noted that available data then were unable to measure spatial curvature and so decided to study whether a spatially-flat cosmological model was observationally consistent. 
They acknowledged that the cosmological model had to be dynamical, and so Einstein's original argument for a cosmological constant --- to make the Universe static --- was no longer valid and so the cosmological constant did not have to be included in this Einstein-de Sitter model. They ignored relativistic radiation/matter in this model (which was not under discussion then, and is known to be negligible at late times when the model was meant to be applicable). This Einstein-de Sitter model only included non-relativistic (and then only baryonic) matter. A little over half a century later, motivated by observations indicating a lower than critical non-relativistic matter energy density and the first inflation model, an improved standard model, the spatially-flat $\Lambda$CDM model, was proposed, \citep{Peebles:1984ge}. In this model the cosmological constant $\Lambda$, which has a time- and space-independent energy density, is the dominant contributor to the current cosmological energy budget, followed by non-relativistic non-baryonic cold dark matter (CDM), and then non-relativistic baryonic matter. Like the Einstein-de Sitter model, the standard spatially-flat $\Lambda$CDM model assumes vanishing spatial curvature, motivated by early models of spatially-flat inflation, \citep{Guth1981, Sato1981a, Sato1981b, Kazanas1980}. Soon after, spatially-non-flat, open and closed, inflation models were developed, \citep{Gott1982, Hawking1984, Ratra1985}. A decade and a half later, the observed currently accelerated cosmological expansion, discovered from type Ia supernova (SNIa) measurements \cite{Riess:1998cb, Perlmutter:1998np}, greatly strengthened support for a cosmological constant or a dynamical dark energy density that slowly varied in time and space \citep{PeeblesRatra1988, RatraPeebles1988} --- if general relativity is an accurate model for gravity on cosmological length scales --- and for the spatially-flat $\Lambda$CDM model or a model close to it. 
For reviews of the current situation see Refs.\ \citep{DiValentino:2021izs, Perivolaropoulos:2021jda, Abdalla:2022yfr}. A half-decade prior to the first SNIa observations indicating currently accelerated cosmological expansion, evidence for a lower than critical value of non-relativistic matter density, along with the development of an open inflation model, \citep{Gott1982}, led to some discussion of an open CDM model, \citep{RatraPeebles1994, RatraPeebles1995, Bucheretal1995, Yamamotoetal1995, Kamionkowskietal1994, Gorskietal1998, Ratraetal1999}, but with cosmic microwave background (CMB) observations indicating that space curvature had to be a subdominant contributor to the current cosmological energy budget, \citep{WMAP:2012nax, Planck:2018vyg}, and with SNIa observations favoring a significant contribution to the energy budget from a cosmological constant, interest in open CDM models soon faded. More recently, especially because of results from Planck CMB anisotropy data, \citep{Planck:2018vyg}, there has been renewed interest in non-flat models. In these models the current cosmological energy budget is dominated by $\Lambda$, to be consistent with the observed currently accelerated cosmological expansion, but they now have very mildly closed spatial hypersurfaces instead of open ones. This is because from an analysis of the final Planck 2018 TT,TE,EE+lowE (hereafter P18) data, that makes use of a specific primordial power spectrum (see below for a fuller discussion of these data and the power spectrum they use in this analysis), they find a spatial curvature energy density parameter value $\Omega_k = -0.044^{+0.018}_{-0.015}$ that is closed and 2.7$\sigma$ away from flat, \citep{Planck:2018vyg}, when $\Omega_k$ is included as an additional free parameter in the analysis. 
We note that from a combination of Atacama Cosmology Telescope (ACT) and Wilkinson Microwave Anisotropy Probe CMB anisotropy data Ref.\ \citep{ACT:2020gnv} finds $\Omega_k = -0.001^{+0.014}_{-0.010}$, which is 2.1$\sigma$ from the P18 value and consistent with flat spatial hypersurfaces, while the South Pole Telescope (SPT) CMB anisotropy data results in $\Omega_k = 0.001^{+0.018}_{-0.019}$, \citep{SPT-3G:2021wgf}, which is 1.7$\sigma$ from the P18 value and also consistent with flat spatial hypersurfaces. Both these analyses use the primordial power spectrum used in the P18 analysis. The above result led to the study of the so-called lensing anomaly. The trajectories of CMB photons are bent by the gravitational effect of inhomogeneities present in the mass distribution along their way to us. This statistical phenomenon, predicted by general relativity, is known as weak gravitational lensing of the CMB. When computing the predicted CMB temperature and polarization spectra in a cosmological model that are to be compared to the observed spectra, it is important to account for this effect and compute what are known as lensed CMB spectra. If we use the tilted flat $\Lambda$CDM model to measure cosmological parameter values from Planck CMB data, we can use this model, with these parameter values, to compute the expected amount of CMB weak gravitational lensing, \cite{Lewis:2006fu}. Incorrectly predicting the amount of weak lensing present in the CMB power spectra would indicate an inconsistency in the standard model when it is used to fit Planck CMB temperature and polarization anisotropy data. It turns out that this is actually the case, since an excess of CMB weak lensing is observed in the CMB power spectra, compared to what is expected in the standard model with parameter values determined from CMB data \cite{Calabreseetal2008, Planck:2018vyg}. 
This is known as the lensing anomaly, since the effect is not yet thought to be statistically significant enough to reject the standard spatially-flat $\Lambda$CDM model. A number of solutions have been proposed, with two being more widely debated. The first of these is related to the aforementioned non-zero value for $\Omega_k$ in the P18 data analysis, which favors closed spatial hypersurfaces, when $\Omega_k$ is taken to be an additional free parameter, e.g.\ \citep{Planck:2018vyg, Handley:2019tkm, DiValentino:2019qzk, DiValentino:2022rdg,Yang:2022kho}. Due to the excess of CMB weak lensing found, it is desirable to have a higher value of the non-relativistic matter energy density parameter $\Omega_m$ in order to increase the amount of gravitational lensing of CMB photons. Because of the tight constraints imposed by CMB data on this parameter there is no room, within the tilted flat $\Lambda$CDM model, to do this. By allowing non-flat spatial hypersurfaces, a closed model with $\Omega_k<0$ can resolve this problem, since the CMB power spectra are affected by the combination $(\Omega_m + \Omega_k)h^2$, where $h$ is the Hubble constant $H_0$ in units of $100~\textrm{km}~\textrm{s}^{-1}~\textrm{Mpc}^{-1}$, which can be held constant by making $\Omega_k$ slightly more negative while slightly increasing $\Omega_m$ to give more CMB weak lensing, and also slightly adjusting $h$. Cosmological distances also depend on spatial curvature, therefore in a non-flat cosmological model the positions of the acoustic peaks are shifted relative to the flat model case. This would not be a welcome change, since the constraints from the observed CMB power spectra are tight. This can be resolved by reducing the value of $h$ which shifts the acoustic peaks in the opposite direction. 
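The $H_0$-$\Omega_m$-$\Omega_k$ degeneracy described above is simple to illustrate numerically: holding $(\Omega_m + \Omega_k)h^2$ fixed, a slightly closed geometry with a larger $\Omega_m$ forces a lower $h$. The fiducial numbers below are assumptions loosely modeled on tilted flat $\Lambda$CDM best-fit values, not constraints from this paper.

```python
# Illustrative arithmetic for the geometrical degeneracy: hold the
# combination (Omega_m + Omega_k) h^2 fixed while trading a mildly
# closed geometry and a higher matter density against a lower h.
# All numbers here are assumed fiducial values, not fit results.

h_flat, om_flat, ok_flat = 0.674, 0.315, 0.0
combo = (om_flat + ok_flat) * h_flat ** 2   # quantity the CMB spectra tightly constrain

# Pick a mildly closed geometry and a higher matter density...
ok_closed, om_closed = -0.044, 0.48
# ...then solve for the h that preserves the degeneracy combination:
h_closed = (combo / (om_closed + ok_closed)) ** 0.5

print(round(h_closed, 3))  # a substantially lower Hubble constant than h_flat
```

The lower $h$ is exactly the compensation described in the text: it shifts the acoustic peaks back toward their observed positions while the closed geometry supplies the extra lensing.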
The fact that almost the same temperature and polarization power spectra can be produced with different combinations of cosmological parameter values points to a geometrical degeneracy among these three parameters, $H_0$-$\Omega_m$-$\Omega_k$. While the first of the more widely debated resolutions is based on a change of more-conventional cosmological parameters, the second one is more phenomenological, e.g.\ \citep{Planck:2018vyg, SPT:2017jdf, SPT:2019fqo, DiValentino:2019qzk, DiValentino:2022rdg}. Reference \cite{Calabreseetal2008} introduces the lensing consistency parameter $A_L$ which re-scales the gravitational potential power spectrum in such a way that when $A_L=1$ we recover the theoretically predicted amount of weak lensing. If $A_L$ is allowed to vary in the analysis, to be determined from data, then $A_L>1$ corresponds to more predicted lensing than in the $A_L=1$ case. In Ref.\ \cite{Planck:2018vyg}, when P18 data are used to analyze the tilted flat $\Lambda$CDM+$A_L$ model, the result is $A_L = 1.180\pm 0.065$, which represents a 2.8$\sigma$ deviation from the theoretically expected value $A_L=1$. We emphasize however that the measured Planck lensing likelihood is consistent with $A_L = 1$, see Fig.\ 3 of Ref.\ \citep{Planck:2018vyg} and Ref.\ \citep{Planck:2018lbu}. We also note that from ACT CMB anisotropy data $A_L = 1.01 \pm 0.11$, \citep{ACT:2020gnv}, consistent with $A_L = 1$ and 1.3$\sigma$ smaller than the P18 value, while from SPT CMB anisotropy data $A_L = 0.81 \pm 0.14$, \citep{Henning:2017nuy}, 1.4$\sigma$ smaller than $A_L = 1$ and 2.4$\sigma$ smaller than the P18 value. To analyze CMB anisotropy data one must assume a form for the primordial power spectrum of spatial inhomogeneities as a function of wavenumber. In the inflation scenario zero-point quantum-mechanical fluctuations during inflation generate the spatial inhomogeneities, \citep{Hawking:1982cz, Starobinsky:1982ee, Guth:1982ec, Bardeen:1983qw, Fischler:1985ky}. 
In spatially-flat inflation models, if the inflaton field slowly rolls down an almost flat potential energy density the scale factor increases exponentially with time and the primordial power spectrum is almost scale-invariant with hardly any tilt, \citep{Harrison:1969fb, Peebles:1970ag, Zeldovich:1972zz}. A steeper inflaton potential energy density makes the inflaton evolve more rapidly, can cause the scale factor to grow only as a power of time, and will increase the power spectral tilt \citep{Lucchin:1984yf, Ratra:1989uv, Ratra:1989uz}. There has been much less study of the quantum-mechanical generation of spatial inhomogeneities in non-flat inflation models. Power spectra have been derived in spatially open and closed inflation models, \citep{Gott1982, Hawking1984, Ratra1985}, with a slowly-rolling inflaton potential energy density, \citep{RatraPeebles1995, Ratra:2017ezv}, but these are untilted power spectra. The power spectrum assumed in the non-flat analyses of Refs.\ \citep{Planck:2018vyg, Handley:2019tkm, DiValentino:2019qzk} is tilted but was not derived from an inflation model computation. Very recently, a numerical study in closed inflation models that computes primordial power spectra generated for a few different, initially slow-roll, inflation initial conditions finds that it is possible to generate, in the closed case, a tilted power spectrum very close to that used in Refs.\ \citep{Planck:2018vyg, Handley:2019tkm, DiValentino:2019qzk}, \cite{Guth:2022xyz}. Also recently, a different set of initial conditions in closed and open inflation models were used to compute a different tilted power spectrum, \citep{Ratra:2022ksb}. In this paper we consider cosmological models with four different power spectra. In the tilted flat $\Lambda$CDM model we use the usual spatially-flat inflation model tilted power spectrum. 
In the untilted non-flat $\Lambda$CDM model, we use the untilted non-flat inflation model power spectrum, \citep{RatraPeebles1995, Ratra:2017ezv}. In the two different tilted non-flat $\Lambda$CDM models, we use the power spectrum assumed in Refs.\ \citep{Planck:2018vyg, Handley:2019tkm, DiValentino:2019qzk} --- which we call the Planck $P(q)$ --- as well as the power spectrum computed in Ref.\ \citep{Ratra:2022ksb}, which we call the new $P(q)$. See Sec.\ \ref{sec:method} below for a fuller description of the four power spectra we use. We emphasize that we use only non-flat inflation model power spectra that can be derived using a straightforward extension of the spatially-flat inflation model initial conditions to the non-flat inflation case. The issue of non-flat inflation model initial conditions is more complex than the flat inflation case, see discussion in Ref.\ \citep{Ratra:2022ksb}, so we focus on the simplest physically-consistent options, which also makes the analysis tractable. We note that a number of other power spectra have been considered in closed cosmological models, see Refs.\ \citep{Lasenby:2003ur, Masso:2006gv, Asgari:2015spa, Bonga:2016iuf, Handley:2019wlz, Thavanesan:2020lov, Kiefer:2021iko, Hergt:2022fxk}. A desire to measure the spatial curvature energy density parameter $\Omega_k$ provides part of the motivation for our work. The CMB anisotropy data are currently the most restrictive cosmological data, but to use these to measure $\Omega_k$ requires assumption of a primordial power spectrum for spatial inhomogeneities. Other, less-restrictive, data that do not require assuming a power spectrum can also be used to measure $\Omega_k$. 
These include better-established lower redshift data (that reach to $z \sim 2.3$), such as SNIa, Hubble parameter as a function of redshift [$H(z)$], and (non-growth-rate) baryon acoustic oscillation (BAO) measurements, \citep{Scolnic:2017caz, Yu:2017iju, eBOSS:2020yzd}, as well as emerging probes that reach to higher $z$, such as H \textsc{ii} starburst galaxy apparent magnitude observations as a function of $z$ that reach to $z \sim 2.5$, \citep{Gonzalez-Moran:2019uij,Cao:2020jgu,Cao:2020evz, Johnson:2021wou, Mehrabi:2021feg}; quasar angular size measurements that reach to $z \sim 2.7$, \citep{Cao:2017ivt,Ryan:2019uor, Lian:2021tca, Cao:2021cix}; Mg \textsc{ii} and C \textsc{iv} reverberation measured quasar data that reach to $z \sim 3.4$, \citep{OzDES:2021byt, Khadka:2021ukv, Khadka:2021sxe, Khadka:2022ooh, Cao:2022pdv, OzDES:2022ysr, Czerny:2022xfj}; possibly quasar flux measurements that reach to $z \sim 7.5$, \citep{Risaliti:2018reu, Khadka:2020whe, Khadka:2020vlh, Lusso:2020pdb, Khadka:2020tlm, Khadka:2021xcc, Rezaei:2021qwd, Dainotti:2022rfz, Petrosian:2022tlp}; and gamma-ray burst data that reach to $z \sim 8.2$, \citep{Dirirsa:2019fcs, Khadka:2020hvb, Khadka:2021vqa, Wang:2021hcx, Hu:2021ycz, Cao:2021irf, Luongo:2021pjs, Cao:2022wlg, Liu:2022srx, Dainotti:2022wli, Cao:2022yvi}. Individually these low- and intermediate-redshift data sets are only able to provide relatively weaker constraints on cosmological parameters in general, and specifically on $\Omega_k$, compared to those from CMB data. However, when many (or all) low- and intermediate-redshift data are analyzed jointly they provide useful constraints on $\Omega_k$ --- currently still not nearly as restrictive as the CMB ones --- favoring flat spatial hypersurfaces but still allowing a small amount of spatial curvature energy density, \citep{Park:2018tgj, Cao:2021ldv, Cao:2022ugh}. 
For other recent discussions of constraints on spatial curvature, see Refs.\ \citep{Arjona:2021hmg, Dhawan:2021mel, Gonzalez:2021ojp, Geng:2021hqc, Wei:2022plg, Mukherjee:2022ujw, Wu:2022fmr} and references therein, and see Refs.\ \citep{Baumgartner:2022jdz, Anselmi:2022uvj, Jimenez:2022asc} and references therein for recent, more general, discussions of non-flat cosmological models. While the standard spatially-flat $\Lambda$CDM cosmological model is attractive because of its simplicity --- the model only has 6 free cosmological parameters --- it is not straightforward to understand how to consistently generalize the current quantum-mechanical standard model of particle physics to one that accommodates the cosmological constant that is part of the standard $\Lambda$CDM model. Nonetheless, the standard cosmological model is consistent with a wide variety of measurements, including CMB anisotropy measurements \cite{Planck:2018vyg}, SNIa apparent magnitude observations \citep{Scolnic:2017caz}, BAO data \citep{eBOSS:2020yzd}, $H(z)$ observations \citep{Yu:2017iju}, and measurements of the growth of structure as a function of redshift ($f\sigma_8$). It is important to bear in mind that these data do not rule out mild evolution of the dark energy density \cite{Gomez-Valent:2018nib, Ooba:2018dzf, Ryan:2018aif, SolaPeracaula:2018wwm, Singh:2018izf, Park:2019emi, Gomez-Valent:2020mqn, Moreno-Pulido:2020anb, Sinha:2020vob, Cao:2020jgu, Urena-Lopez:2020npg, Cao:2021ldv, SolaPeracaula:2021gxi, Khadka:2021vqa, Cao:2021cix, Xu:2021xbt, Cao:2021irf, Jesus:2021bxq, Cao:2022wlg, Moreno-Pulido:2022phq, Cao:2022pdv, Adil:2022hkj} or, as discussed in detail above, mildly curved spatial hypersurfaces. These extensions, among others, might alleviate some of the issues affecting the standard spatially-flat $\Lambda$CDM model, such as differences in $H_0$ and $\sigma_8$ values determined using different techniques, \citep{DiValentino:2021izs, Perivolaropoulos:2021jda, Abdalla:2022yfr}. 
In this paper however we focus our efforts on the study of the lensing anomaly and on the measurement of $\Omega_k$. We study eight cosmological models, namely, the tilted flat $\Lambda$CDM (+$A_L$) models, the untilted non-flat $\Lambda$CDM (+$A_L$) models, the tilted non-flat $\Lambda$CDM (+$A_L$) Planck $P(q)$ models, and the tilted non-flat $\Lambda$CDM (+$A_L$) new $P(q)$ models. Six of these are non-flat models, characterized by three different primordial power spectra (see Sec.\ \ref{sec:method} for the form of the power spectra). By using a number of cosmological models with compilations of observational data to test how well the models fit these data, and to constrain the cosmological parameters of the models, we can measure, among other things, $\Omega_k$ and also determine whether the cosmological parameter constraints set by different data are model-dependent or not. The data sets we employ in this work are P18 data, Planck 2018 CMB weak lensing data, non-growth-factor BAO (BAO$^{\prime}$) data, BAO (including growth-factor) data, and non-CMB data [composed of BAO, $f\sigma_8$, $H(z)$, and SNIa data]. These data are described in more detail in Sec.\ \ref{sec:data}. A brief summary of the more significant results we find follows. These assume that the data sets we use are correct and do not have unaccounted-for systematic errors. The untilted non-flat $\Lambda$CDM model with and without a varying $A_L$ parameter is not able to properly fit the P18 CMB anisotropy power spectra, due to the lack of the tilt ($n_s$) degree of freedom. Consequently, its performance in comparison with the tilted models turns out to be very poor. Significant evidence in favor of a closed Universe is found when P18 data are considered alone, and the tilted non-flat models better fit these data than does the standard tilted flat $\Lambda$CDM model. 
There are disagreements between P18 data cosmological constraints and non-CMB data cosmological constraints in the context of the tilted non-flat models with $A_L=1$, with the tilted non-flat $\Lambda$CDM Planck $P(q)$ model ruled out at 3$\sigma$. These tensions completely fade when the $A_L$ parameter is allowed to vary. On the other hand no significant tension is found when the cosmological parameter constraints obtained with two different data sets are compared within the standard tilted flat $\Lambda$CDM model. The most-restrictive P18+lensing+non-CMB data set clearly favors the varying $A_L$ option (with $A_L>1$) over the $A_L=1$ one --- which could be a problem for the standard tilted flat $\Lambda$CDM model --- and when this data set is utilized we get almost model-independent cosmological parameter constraints. These data are consistent with flat spatial hypersurfaces --- so we conclude that current data do not favor curved geometry --- but more and better data could improve the constraints on $\Omega_k$ and might alter this conclusion. We note that even though both P18 data and non-CMB data favor closed geometry, the larger $H_0$ and smaller $\Omega_m$ values favored by non-CMB data (compared to those favored by P18 data) result in P18+lensing+non-CMB data favoring flat spatial hypersurfaces. The Hubble constant value measured using these data in the tilted flat $\Lambda$CDM model is $H_0=68.09\pm 0.38$ km s$^{-1}$ Mpc$^{-1}$, which is consistent with that from a median statistics analysis of a large compilation of Hubble constant measurements as well as with a number of local measurements of the cosmological expansion rate. This $H_0$ error bar is 31\% smaller than that from P18+lensing data alone; similarly augmenting the P18+lensing data with our non-CMB data compilation reduces the $\Omega_m$ error bar by 33\% and also reduces error bars on all the other cosmological parameters by smaller amounts. The layout of our paper is as follows. 
In Sec.\ \ref{sec:data} we detail the observational data sets we employ to test the different cosmological models. In Sec.\ \ref{sec:method} we describe the cosmological models and primordial power spectra we study and summarize the methods we use in our analyses. We dedicate Sec.\ \ref{sec:results} to a detailed discussion of the results obtained by testing the different cosmological models against the several data sets we consider. In this section we also utilize different statistical estimators to compare the performance of the models in fitting data and to study possible tensions between different data sets in a given model. In Sec.\ \ref{sec:discussion} we summarize the more significant results of the previous (long) section. Finally, in Sec.\ \ref{sec:conclusion} we present our conclusions. \section{Data} \label{sec:data} We use CMB anisotropy data and non-CMB data to constrain cosmological parameters, to determine how well the cosmological models we study fit these data, and to study how mutually consistent these data sets are in each of the cosmological models. We now list the data sets we use. {\bf P18}. Planck 2018 CMB temperature anisotropy data together with polarization data and their corresponding cross-spectra (TT,TE,EE+lowE), \cite{Planck:2018vyg}, which contain: TT power spectra at low-$\ell$ ($2\leq \ell \leq 29$) and high-$\ell$ ($30\leq \ell\leq 2508$) --- where $\ell$ is multipole number, TE data at high-$\ell$ ($30\leq \ell \leq 1996$), and EE data at low-$\ell$ ($2\leq \ell \leq 29$) and high-$\ell$ ($30\leq \ell\leq 1996$). We use the Planck 2018 baseline \texttt{Plik} $\ell \geq 30$ likelihood, which is described in Sec.\ 2.2.1 of Ref.\ \cite{Planck:2018vyg}. {\bf (P18) lensing}. Planck 2018 lensing potential power spectrum, see Sec.\ 2.3 of Ref.\ \cite{Planck:2018vyg} or Sec.\ 2 of Ref.\ \cite{Planck:2018lbu} for more details. {\bf BAO$^\prime$}. 
Twelve BAO data points from both anisotropic and isotropic BAO estimators that probe the redshift range $0.122 \leq z \leq 2.334$ \cite{Gil-Marin:2020bct,Bautista:2020ahg,Hou:2020rse,Neveux:2020voa,Carter:2018vce,DES:2017rfo,duMasdesBourboux:2020pck}. These are BAO data with growth rates excluded from the original papers, and are listed, along with the appropriate covariance matrices, in Sec.\ 3 of Ref.\ \cite{Cao:2022ugh}. \begin{table} \caption{BAO measurements.} \begin{ruledtabular} \begin{tabular}{ccc} $z_\textrm{eff}$ & Measurement & Reference \\[+0mm] \hline \\[-2mm] $0.122$ & $D_V\left(r_{d,\textrm{fid}}/r_d\right)$ [Mpc] $= 539\pm 17$ [Mpc] & \cite{Carter:2018vce} \\[+1mm] \hline \\[-2mm] $0.38$ & $D_M/r_d$ $= 10.274 \pm 0.151$ & \cite{Gil-Marin:2020bct} \\[+1mm] $0.38$ & $D_H/r_d$ $= 24.888\pm 0.582$ & \cite{Gil-Marin:2020bct} \\[+1mm] $0.51$ & $D_M/r_d$ $= 13.381 \pm 0.179$ & \cite{Gil-Marin:2020bct} \\[+1mm] $0.51$ & $D_H/r_d$ $= 22.429 \pm 0.482$ & \cite{Gil-Marin:2020bct} \\[+1mm] $0.38$ & $f \sigma_8 =0.49729\pm0.04508$ & \cite{Gil-Marin:2020bct} \\[+1mm] $0.51$ & $f \sigma_8 =0.45902\pm 0.03784$ & \cite{Gil-Marin:2020bct} \\[+1mm] \hline \\[-2mm] $0.698$ & $D_M/ r_d$ $= 17.646\pm0.302$ & \cite{Gil-Marin:2020bct,Bautista:2020ahg} \\[+1mm] $0.698$ & $D_H / r_d$ $= 19.770\pm0.469$ & \cite{Gil-Marin:2020bct,Bautista:2020ahg} \\[+1mm] $0.698$ & $f\sigma_8$ $= 0.47300\pm 0.04429$ & \cite{Gil-Marin:2020bct,Bautista:2020ahg} \\[+1mm] \hline \\[-2mm] $0.81$ & $D_A/r_d$ $= 10.75\pm 0.43$ & \cite{DES:2017rfo} \\[+1mm] \hline \\[-2mm] $1.48$ & $D_M/ r_d$ $= 30.21\pm 0.79$ & \cite{Hou:2020rse,Neveux:2020voa} \\[+1mm] $1.48$ & $D_H / r_d$ $= 13.23\pm 0.47$ & \cite{Hou:2020rse,Neveux:2020voa} \\[+1mm] $1.48$ & $f\sigma_8$ $= 0.462\pm 0.045$ & \cite{Hou:2020rse,Neveux:2020voa} \\[+1mm] \hline \\[-2mm] $2.334$ & $D_M / r_d$ $= 37.5^{+1.2}_{-1.1}$ & \cite{duMasdesBourboux:2020pck} \\[+1mm] $2.334$ & $D_H / r_d$ $= 8.99^{+0.20}_{-0.19}$ & 
\cite{duMasdesBourboux:2020pck} \\[+0mm] \end{tabular} \\[+1mm] \begin{flushleft} Note: For the data point at $z = 0.122$ the sound horizon size (at the drag epoch) of the fiducial model is $r_{d,\textrm{fid}}=147.5~\textrm{Mpc}$ \cite{Carter:2018vce}. \end{flushleft} \end{ruledtabular} \label{tab:bao} \end{table} {\bf BAO}. An extension of the BAO$^\prime$ data described above, that also probe the redshift range $0.122 \leq z \leq 2.334$, but now include the correlated growth rate ($f\sigma_8$) data points provided in Refs.\ \cite{Gil-Marin:2020bct,Bautista:2020ahg,Hou:2020rse,Neveux:2020voa}. Table \ref{tab:bao} lists these BAO data points. The quantities listed in Table \ref{tab:bao} include the transverse comoving distance at redshift $z$ \begin{equation} D_M(z) = (1+z)D_A(z), \end{equation} where $D_A(z)$ is the angular size distance at $z$, the Hubble distance \begin{equation} D_H(z) = \frac{c}{H(z)}, \end{equation} where $H(z)$ is the Hubble parameter and $c$ the speed of light, and the angle-averaged distance \begin{equation} D_V(z) = \left[czD^2_M(z)/H(z)\right]^{1/3}. \end{equation} The measurements are provided as relative distances with respect to the radius of the sound horizon at the drag epoch redshift $z_d$ \begin{equation} r_d = \int^{\infty}_{z_d}\frac{c_s(z)dz}{H(z)}, \end{equation} where $c_s(z)$ is the speed of sound in the photon-baryon fluid. 
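For concreteness, the distance measures above can be sketched numerically. The following minimal example assumes a flat $\Lambda$CDM background with radiation neglected; the values of $H_0$ and $\Omega_m$ are illustrative placeholders, not fitted values:

```python
import math

# Sketch of the distance measures entering the BAO table: D_M, D_H = c/H,
# and the angle-averaged D_V (flat LambdaCDM assumed, radiation neglected).
# H0 and OMEGA_M below are illustrative placeholders, not fitted values.

C_KM_S = 299792.458  # speed of light [km/s]
H0 = 67.4            # [km/s/Mpc], illustrative
OMEGA_M = 0.315      # illustrative

def hubble(z):
    """H(z) for flat LambdaCDM."""
    return H0 * math.sqrt(OMEGA_M * (1 + z)**3 + (1 - OMEGA_M))

def comoving_distance(z, n=2000):
    """c * integral_0^z dz'/H(z') by the trapezoid rule; in the flat case
    this line-of-sight comoving distance equals D_M(z) = (1+z) D_A(z)."""
    dz = z / n
    s = 0.5 * (1 / hubble(0.0) + 1 / hubble(z))
    for i in range(1, n):
        s += 1 / hubble(i * dz)
    return C_KM_S * s * dz

def d_h(z):
    """Hubble distance D_H(z) = c/H(z) [Mpc]."""
    return C_KM_S / hubble(z)

def d_v(z):
    """Angle-averaged distance D_V = [c z D_M^2 / H]^(1/3) [Mpc]."""
    return (d_h(z) * z * comoving_distance(z)**2)**(1 / 3)
```

Dividing any of these by a model value of $r_d$ gives the dimensionless ratios quoted in Table \ref{tab:bao}.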
For BAO data from Ref.\ \cite{Gil-Marin:2020bct} the appropriate covariance matrix is now \begin{widetext} \begin{equation} \label{eq:cov_BOSS} \begin{pmatrix} 0.022897 & -0.02007 & 0.0026481 & 0.013487 & -0.0081402 & 0.0010292 \\ -0.02007 & 0.33849 & -0.0085213 & -0.016024 & 0.13652 & -0.0038002 \\ 0.0026481 & -0.0085213 & 0.0020319 & 0.001325 & -0.0023012 & 0.000814158 \\ 0.013487 & -0.016024 & 0.001325 & 0.032158 & -0.020091 & 0.0026409 \\ -0.0081402 & 0.13652 & -0.0023012 & -0.020091 & 0.23192 & -0.0055377 \\ 0.0010292 & -0.0038002 & 0.000814158 & 0.0026409 & -0.0055377 & 0.0014322 \end{pmatrix}, \end{equation} \end{widetext} while the covariance matrix for BAO data from Refs.\ \cite{Gil-Marin:2020bct,Bautista:2020ahg} is \begin{small} \begin{equation} \label{eq:cov_LRG} \begin{pmatrix} 0.09114 & -0.033789 & 0.0024686 \\ -0.033789 & 0.22009 & -0.0036088 \\ 0.0024686 & -0.0036088 & 0.0019616 \end{pmatrix}, \end{equation} \end{small} and that for BAO data from Refs.\ \cite{Hou:2020rse,Neveux:2020voa} is \begin{small} \begin{equation} \label{eq:cov_Quasar} \begin{pmatrix} 0.6227 & 0.01424 & 0.02257 \\ 0.01424 & 0.2195 & -0.007315 \\ 0.02257 & -0.007315 & 0.002020 \end{pmatrix}. \end{equation} \end{small} {${\boldsymbol f\bm{\sigma}_8}$}. $f\sigma_8$ data points, in addition to those correlated with BAO data that are listed in Table \ref{tab:bao}. These independent $f\sigma_8$ measurements are obtained either from peculiar velocity data \cite{Turnbull:2011ty,Hudson:2012gt,Said:2020epb} or from redshift space distortion (RSD) analyses \cite{Shi:2017qpr,Simpson:2015yfa,Blake:2013nif,Mohammad:2018mdy,Okumura:2015lvp}. These are listed in Table \ref{tab:fs8}. The combination $f(z)\sigma_8(z)$ is used to quantify the growth rate of the matter density perturbation. Here, the growth rate \begin{equation} f(z) = -(1+z)\frac{d\ln{D(z)}}{dz} \end{equation} where $D(z)$ is the growth function. 
The other function involved, $\sigma_8(z)$, is the root mean square of matter fluctuations smoothed over spheres of radius $R_8 = 8h^{-1}\textrm{Mpc}$ at a given value of the redshift. It is computed as \begin{equation} \sigma^2_8(z) = \int\frac{d^{3}k}{(2\pi)^3}P_m(z,\vec{k})W^2(k{R_8}), \end{equation} where $P_m(z,\vec{k})$ is the matter power spectrum and $W(k{R_8})$ is the window function. \begin{table} \caption{$f\sigma_8$ measurements.} \begin{ruledtabular} \begin{tabular}{ccc} $z_\textrm{eff}$ & $f\sigma_8$ & Reference \\[+0mm] \hline \\[-2mm] $0.02$ & $ 0.398\pm 0.065$ & \cite{Turnbull:2011ty,Hudson:2012gt} \\[+1mm] \hline \\[-2mm] $0.035$ & $0.338\pm 0.027$ & \cite{Said:2020epb} \\[+1mm] \hline \\[-2mm] $0.1$ & $0.376\pm 0.038$ & \cite{Shi:2017qpr} \\[+1mm] \hline \\[-2mm] $0.18$ & $ 0.29\pm 0.10$ & \cite{Simpson:2015yfa} \\[+1mm] $0.38$ & $0.44\pm 0.06$ & \cite{Blake:2013nif} \\[+1mm] \hline \\[-2mm] $0.6$ & $0.49\pm 0.12$ & \cite{Mohammad:2018mdy}\\[+1mm] $0.86$ & $0.46\pm 0.09$ & \cite{Mohammad:2018mdy} \\[+1mm] \hline \\[-2mm] $1.36$ & $0.482\pm 0.116$ & \cite{Okumura:2015lvp} \\[+0mm] \end{tabular} \end{ruledtabular} \label{tab:fs8} \end{table} {\bf SNIa}. Apparent magnitude as a function of redshift measurements for 1048 Pantheon SNIa \cite{Scolnic:2017caz}, probing the redshift range $0.01 < z < 2.3$, and 20 compressed data points, spanning the redshift range $0.015 \leq z \leq 0.7026$, representing 207 DES 3yr SNIa \cite{DES:2018paw}. The Pantheon and DES 3yr data are independent of each other, but the data points within each sample are correlated and we account for the corresponding covariance matrices in our analyses. {${\bm{H(z)}}$}. Hubble parameter measurements over the redshift range $0.070 \leq z \leq 1.965$ obtained using the differential age technique. The 31 data points employed are listed in Table 2 of Ref.\ \cite{Park:2017xbl}. Hereafter we denote the combination of BAO, $f\sigma_8$, SNIa, and $H(z)$ data sets as the non-CMB data set. 
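The growth rate entering $f\sigma_8$ can be sketched with the standard integral solution for the linear growth factor in flat $\Lambda$CDM, $D(a) \propto H(a)\int_0^a da'/[a'H(a')]^3$; the value of $\Omega_m$ below is an illustrative placeholder, and the $\sigma_8$ normalization is omitted:

```python
import math

# Rough sketch of f(z) = -(1+z) dlnD/dz = dlnD/dlna for flat LambdaCDM,
# using the standard integral solution for the linear growth factor.
# OMEGA_M is an illustrative placeholder; sigma_8 normalization is omitted.

OMEGA_M = 0.315

def e2(a):
    """E^2(a) = H^2(a)/H0^2 for flat LambdaCDM (radiation neglected)."""
    return OMEGA_M / a**3 + (1 - OMEGA_M)

def growth(a, n=4000):
    """Unnormalized growth factor D(a) ∝ H(a) ∫_0^a da'/[a' H(a')]^3,
    evaluated with the midpoint rule."""
    da = a / n
    s = 0.0
    for i in range(n):
        ai = (i + 0.5) * da
        s += da / (ai * math.sqrt(e2(ai)))**3
    return math.sqrt(e2(a)) * s

def growth_rate(a, eps=1e-3):
    """f(a) = dlnD/dlna via central differences."""
    return (math.log(growth(a * (1 + eps)))
            - math.log(growth(a * (1 - eps)))) / (2 * eps)
```

As expected, the sketch gives $f \to 1$ deep in the matter-dominated era and a smaller $f$ today, when the cosmological constant suppresses growth.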
\section{Methods} \label{sec:method} We apply the Markov chain Monte Carlo (MCMC) method, implemented in the \texttt{CAMB}/\texttt{COSMOMC} package (version of Oct.\ 2018), \cite{Challinor:1998xk,Lewis:1999bs,Lewis:2002ah}, to explore the parameter space of the different models under study. The \texttt{CAMB} program computes the matter and CMB power spectra based on the evolution of density perturbations of the matter and radiation components and the \texttt{COSMOMC} program uses the MCMC method to estimate the parameter constraints favored by the given observational data sets. We have performed cross-checks using the \texttt{CLASS}/\texttt{MontePython} package, \cite{Blas:2011rf,Audren:2012wb}. In general good agreement between the results is obtained unless significant degeneracies between some of the fitting parameters are present. When this happens, differences in the central values are found, but the two sets of results remain compatible at 1$\sigma$ due to large error bars. The inclusion of more data breaks the aforementioned degeneracies and the two sets of results then agree very well. In this paper we consider four cosmological models: the tilted flat, (two) tilted non-flat, and the untilted non-flat $\Lambda$CDM models, as well as their extensions through the inclusion of the $A_L$ parameter, for a total of eight cases. $A_L$ is a phenomenological parameter that scales the theoretical prediction of the gravitational potential power spectrum, with its theoretically expected value being $A_L=1$, see Ref.\ \citep{Calabreseetal2008}. $A_L>1$ causes smoothing of the acoustic peaks in the CMB angular power spectrum, and Planck CMB data tend to prefer $A_L>1$ \cite{Planck:2018vyg}. 
The tilted flat $\Lambda$CDM model is characterized by six cosmological parameters ($\Omega_b h^2$, $\Omega_c h^2$, $\theta_\textrm{MC}$, $\tau$, $A_s$, $n_s$), where $\Omega_b$ and $\Omega_c$ are the current values of the non-relativistic baryonic and cold dark matter density parameters, $\theta_\textrm{MC}$ is the angular size of the sound horizon at recombination defined in the \texttt{CAMB}/\texttt{COSMOMC} program, $\tau$ is the reionization optical depth, and $A_s$ and $n_s$ are the amplitude and the spectral index of the primordial scalar-type energy density perturbation power spectrum \begin{equation}\label{eq:tilted_flat_PS} P_\delta(k) = A_s \left(\frac{k}{k_0} \right)^{n_s}, \end{equation} where $k$ is wavenumber and the pivot scale for $A_s$ is $k_0=0.05~\textrm{Mpc}^{-1}$. This power spectrum is generated by quantum-mechanical fluctuations during an early epoch of power-law inflation in an exponential potential energy density scalar field cosmological model with flat spatial hypersurfaces, \cite{Lucchin:1984yf, Ratra:1989uv, Ratra:1989uz}. In the non-flat very-slow-roll (so untilted) inflation $\Lambda$CDM model, the presence of non-zero spatial curvature determines a new length scale, and the power-law part of the primordial power spectrum is not relevant. Thus, this model still has six cosmological parameters, with the spectral index $n_s$ being replaced by the current value of the spatial curvature density parameter $\Omega_k$. For very-slow-roll inflation in this non-flat inflation model, the primordial power spectrum is, \cite{RatraPeebles1995, Ratra:2017ezv}, \begin{equation}\label{eq:untilted_nonflat_PS} P_\delta(q) \propto \frac{(q^2-4K)^2}{q(q^2-K)} \end{equation} where $q=\sqrt{k^2+K}$ is the wavenumber in a model with non-zero spatial curvature $K=-(H_0^2 / c^2)\Omega_k$, and $A_s$ is defined to be the amplitude of the power spectrum at the pivot scale $k_0$. 
This power spectrum form holds in both the open ($\Omega_k > 0$) and closed ($\Omega_k < 0$) cases, with $q|K|^{-1/2} \geq 0$ and continuous in the open case and $q|K|^{-1/2} = 3, 4, 5\dots$ in the closed case. It is the power spectrum used in the non-flat model analyses in Refs.\ \cite{Ooba:2017ukj, Ooba:2017npx, Ooba:2017lng, Park:2017xbl,Park:2018bwy, Park:2018fxx, Park:2019emi}. For the tilted non-flat $\Lambda$CDM model, there are seven cosmological parameters, with $\Omega_k$ added to the six of the tilted flat $\Lambda$CDM model. In this model it has been usual to assume, e.g.\ \cite{Planck:2018vyg}, a primordial power spectrum of the form \begin{equation}\label{eq:tilted_nonflat_Planck_PS} P_\delta(q) \propto \frac{(q^2-4K)^2}{q(q^2-K)} \left( \frac{k}{k_0} \right)^{n_s -1}, \end{equation} where $q$ (and $A_s$) are defined in the previous paragraph. The above expression, which we refer to as the Planck $P(q)$, is a phenomenologically modified version of the non-flat very-slow-roll untilted primordial density perturbation power spectrum, given in Eq.\ \eqref{eq:untilted_nonflat_PS}, extended to also allow for tilt, \cite{Lesgourgues:2013bra}. It assumes that tilt in a non-flat space works in a way similar to how it does in flat space. This expression was known to be physically reasonable in the cases when $K = 0$ or $n_s=1$, since Eqs.\ \eqref{eq:tilted_flat_PS} and \eqref{eq:untilted_nonflat_PS} are recovered, respectively, and these two expressions hold in physically-consistent inflation models. Very recently, a numerical study in closed inflation models that computes primordial power spectra generated for a few different, initially slow-roll, inflation initial conditions finds that it is possible to generate, in the closed case, a power spectrum very close to that given in Eq.\ \eqref{eq:tilted_nonflat_Planck_PS}, \cite{Guth:2022xyz}. In this paper we also study another not-necessarily very-slowly-rolling non-flat (closed and open) inflation model, \cite{Ratra:2022ksb}. 
These tilted non-flat inflation models assume a different inflation initial condition than those studied in Ref.\ \cite{Guth:2022xyz} and result in a primordial power spectrum that differs from that of Eq.\ \eqref{eq:tilted_nonflat_Planck_PS}. For the closed and open inflation models, the resulting power spectrum is \begin{equation} { P_\delta(q) \propto (q^2 -4K)^2|P_{\zeta}(A)|, } \label{eq:tilted_nonflat_new_PS} \end{equation} where $P_\zeta(A)$ differs between the closed and open cases. For the closed inflation model \begin{widetext} \begin{equation} \sqrt{|P_{\zeta}(A)|} = \left(\frac{16\pi}{m^2_p}\right)^{1/2}\!\!\!\!Q^{1/p}\frac{(2+q_s)p}{\sqrt{\pi q_s}} \Bigg|-1 + \frac{W(A)}{p}\Bigg| \,\, \frac{2^{-(6-4q_s+2A -W(A))/p}}{\sqrt{A}(A-1)(A+3)} \,\, \Bigg|\frac{\Gamma\left(1 + W(A)/p\right)\Gamma\left((2+q_s)/(2p)\right)}{\Gamma\left((2+W(A))/p\right)}\Bigg|, \end{equation} \end{widetext} with \begin{equation} W(A) = \sqrt{-8-4q_s + q_s^2 + 4A(A+2)}, \end{equation} and \begin{equation} A = \frac{q}{\sqrt{|K|}} -1. \end{equation} For the open inflation model \begin{widetext} \begin{equation} \sqrt{|P_{\zeta}(A)|} = \left(\frac{16\pi}{m^2_p}\right)^{1/2}\!\!\!\!Q^{1/p}\frac{(2+q_s)p}{\sqrt{\pi q_s}}\Bigg|-1 + \frac{W(A)}{p}\Bigg|\,\, \frac{2^{-(6-4q_s)/p +\textrm{Re}(W(A)/p)}}{\sqrt{A}(A^2 + 4)}\,\, \Bigg|\frac{\Gamma\left(1 + W(A)/p\right)\Gamma\left((2+q_s)/(2p)\right)}{\Gamma\left((2+W(A))/p\right)}\Bigg|, \end{equation} \end{widetext} with \begin{equation} W(A) = \sqrt{-12 -4q_s + q_s^2 -4A^2}, \end{equation} and \begin{equation} A = \frac{q}{\sqrt{|K|}}. \end{equation} In these equations, $\Gamma(x)$ is the Gamma function, $m_p$ is the Planck mass, $Q$ is a normalization constant, $q_s = (2 -2n_s)/(3-n_s)$, and finally $p = 2-q_s$. In both the closed and open inflation models $0 < q_s < 2$, so $-\infty < n_s < 1$. 
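As a rough numerical illustration (not part of our analysis pipeline, which uses \texttt{CAMB}), the tilted flat spectrum, the Planck $P(q)$, and the closed-case new $P(q)$ amplitude can be evaluated as follows. The values of $A_s$ and $n_s$ are illustrative placeholders, the non-flat form is normalized to $A_s$ at the pivot scale, and we set $Q = m_p = 1$ (arbitrary units):

```python
import math

# Illustrative sketch of the tilted flat spectrum, the Planck P(q), and the
# closed-model new P(q) amplitude. A_S and N_S are placeholder values, the
# normalization constant Q and the Planck mass m_p are set to 1.

A_S, N_S, K0 = 2.1e-9, 0.965, 0.05  # amplitude, tilt, pivot scale [1/Mpc]

def p_tilted_flat(k):
    """Tilted flat spectrum: P(k) = A_s (k/k0)^{n_s}."""
    return A_S * (k / K0)**N_S

def p_planck(k, K):
    """Planck P(q): untilted non-flat shape times (k/k0)^{n_s-1},
    with q^2 = k^2 + K, normalized to A_s at the pivot scale."""
    def shape(kk):
        q2 = kk**2 + K
        return (q2 - 4 * K)**2 / (math.sqrt(q2) * (q2 - K))
    return A_S * shape(k) / shape(K0) * (k / K0)**(N_S - 1)

def sqrt_p_zeta_closed(A, n_s=N_S):
    """sqrt|P_zeta(A)| for the closed-model new P(q), with Q = m_p = 1;
    A = q/sqrt|K| - 1 takes the values 2, 3, 4, ... (W(A) is then real)."""
    q_s = (2 - 2 * n_s) / (3 - n_s)
    p = 2 - q_s
    W = math.sqrt(-8 - 4 * q_s + q_s**2 + 4 * A * (A + 2))
    pre = math.sqrt(16 * math.pi) * (2 + q_s) * p / math.sqrt(math.pi * q_s)
    return (pre * abs(-1 + W / p)
            * 2**(-(6 - 4 * q_s + 2 * A - W) / p)
            / (math.sqrt(A) * (A - 1) * (A + 3))
            * abs(math.gamma(1 + W / p) * math.gamma((2 + q_s) / (2 * p))
                  / math.gamma((2 + W) / p)))
```

In the $K \to 0$ limit the Planck $P(q)$ reduces to the tilted flat form, as noted above.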
In this paper we refer to the power spectrum in this tilted non-flat $\Lambda$CDM as the new $P(q)$, which is shown in Eq.\ \eqref{eq:tilted_nonflat_new_PS}, and following the procedure applied to the other power spectra, $A_s$ gives the amplitude of the new $P(q)$ at the pivot-scale $k_0$. Figure \ref{fig:pinit} compares the initial scalar-type perturbation spectra of the tilted flat, untilted non-flat, and two tilted non-flat models with the Planck $P(q)$ and the new $P(q)$. In this figure we set the values of the cosmological parameters, for all the models, to the mean values of the tilted non-flat $\Lambda$CDM model with Planck $P(q)$ constrained by the P18+lensing data (see Table \ref{tab:para_NL_ns_nonCMB} for the parameters), except in panel (b) for the open models where we change the sign of $\Omega_k$. \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=78mm,trim=0cm 5cm 0cm 5cm]{pinit_closed_fig3a.pdf}} \mbox{\includegraphics[width=78mm,trim=0cm 5cm 0cm 5cm]{pinit_open_fig3b.pdf}} \caption{Initial scalar-type perturbation spectra of the tilted flat, untilted non-flat, and two tilted non-flat $\Lambda$CDM models. For the tilted non-flat closed models, the cosmological parameters of the tilted non-flat $\Lambda$CDM model with Planck $P(q)$ constrained by using P18+lensing data (including $\Omega_k=-0.0103$) are used (see Table \ref{tab:para_NL_ns_nonCMB}). For closed models, the same value of $A_s$ was assumed for all models and the same value of $n_s$ was assumed for all tilted models. The powers at the first 11 large-scale wavenumbers are indicated by the filled (open) circles for the tilted closed model with the new (Planck) $P(q)$. For open non-flat models, $\Omega_k=+0.0103$ was assumed. For the tilted flat model, the generalized wavenumber $q$ is equivalent to $k$. 
} \label{fig:pinit} \end{figure*} In the cases where we include the $A_L$ parameter in the analysis, this increases by one the number of cosmological model parameters to be determined from data, so depending on model we then have either seven or eight cosmological model parameters in these cases. At the background level, the evolution of the scale factor $a$ in all models we study is described by the Hubble function \begin{equation} \begin{split} \label{eq:Hubble_function} H^2(a) = H^2_0[\Omega_\gamma{a^{-4}} &+ (\Omega_b + \Omega_c){a^{-3}} \\ &+ \Omega_k{a^{-2}} + \Omega_\nu(a)+ \Omega_\Lambda]. \end{split} \end{equation} Here $a=1/(1+z)$ is the cosmic scale factor normalized to unity at present, $\Omega_\Lambda$ represents the cosmological constant dark energy density parameter, $\Omega_\gamma$ is the current value of the CMB photon energy density parameter, and $\Omega_\nu(a)$ represents the contribution of the massless and massive neutrinos, for which it is not possible to get an analytical expression. In all cases we study, we determine the contribution of photons, and massless and massive neutrinos by assuming a present CMB temperature $T_0=2.7255$ K, the effective number of neutrino species $N_\textrm{eff}=3.046$, and a single massive neutrino species with neutrino mass $0.06$ eV. During parameter exploration using the MCMC method, we set wide non-zero flat priors on parameters in order that they not affect the parameter estimation; these priors are listed in Table \ref{tab:Priors}. 
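A minimal sketch of the Hubble function of Eq.\ \eqref{eq:Hubble_function} follows; the neutrino term $\Omega_\nu(a)$ is omitted for simplicity (in the actual analyses it has no analytic form and is computed numerically by \texttt{CAMB}), and the density parameter values are illustrative placeholders:

```python
import math

# Sketch of the Hubble function H(a), with the massive-neutrino term
# Omega_nu(a) omitted. All density parameter values are placeholders.

OMEGA_G = 5.4e-5  # photons, illustrative
OMEGA_B = 0.049   # baryons, illustrative
OMEGA_C = 0.264   # cold dark matter, illustrative
OMEGA_K = 0.0     # spatial curvature
OMEGA_L = 1.0 - OMEGA_G - OMEGA_B - OMEGA_C - OMEGA_K  # cosmological constant

def hubble(a, H0=67.4):
    """H(a) in km/s/Mpc from the Friedmann equation, neutrinos neglected."""
    e2 = (OMEGA_G / a**4 + (OMEGA_B + OMEGA_C) / a**3
          + OMEGA_K / a**2 + OMEGA_L)
    return H0 * math.sqrt(e2)
```

With the density parameters summing to unity, $H(a=1)$ recovers $H_0$ by construction.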
\begin{table} \caption{Flat priors of the fitting parameters.} \begin{ruledtabular} \begin{tabular}{cccc} $\textrm{Parameters}$ & $\textrm{Our}$ & $\textrm{Handley}$ & $\textrm{Handley}$+$\Omega_k$ \\[+0mm] \hline \\[-2mm] $\Omega_b h^2$ & [0.005,0.1] & [0.019,0.025]& [0.019,0.025] \\[+1mm] \hline \\[-2mm] $\Omega_c h^2$ & [0.001,0.99] & [0.095,0.145] & [0.095,0.145] \\[+1mm] \hline \\[-2mm] 100$\theta_\textrm{MC}$ & [0.5,10] & [1.03,1.05]& [1.03,1.05]\\[+1mm] \hline \\[-2mm] $\tau$ & [0.01,0.8] & [0.01,0.4]& [0.01,0.4]\\[+1mm] \hline \\[-2mm] $\Omega_k$ & [-0.5,0.5] & [-0.1,0.05]& [-0.3,0.15]\\[+1mm] \hline \\[-2mm] $n_s$ & [0.8,1.2] & [0.885,1.04]& [0.885,1.04]\\[+1mm] \hline \\[-2mm] $\ln\left(10^{10}A_s\right)$ & [1.61,3.91] & [2.5,3.7]& [2.5,3.7]\\[+1mm] \hline \\[-2mm] $A_L$ & [0,10] & -& -\\[+1mm] \end{tabular} \\[+1mm] \begin{flushleft} Note: In almost all the computations reported in this paper we use the priors listed in the Our column in this table. A general exception is that in almost all the computations in the tilted non-flat $\Lambda$CDM model with the new $P(q)$ we use a more restrictive prior range for the spectral index, $0.8\le n_s < 1$. In addition to these choices, in all cases, for the derived parameter $H_0$ we restrict its range of variation to $0.2 \le h \le 1$. In Table \ref{tab:para_sigmap} when only lensing data is used, in order to test the impact of different choices of priors, we also provide results for the narrower priors employed in Ref.\ \cite{Handley:2019tkm} (listed in the Handley column above). The Handley+$\Omega_k$ column priors above differ from Handley priors by allowing for a broader prior for the $\Omega_k$ parameter. \end{flushleft} \end{ruledtabular} \label{tab:Priors} \end{table} Due to the lack of constraining power of some of the data sets, when they are considered alone, we have to fix the values of some of the cosmological parameters in the analyses of these data sets. 
In BAO$^\prime$, BAO, (P18) lensing, or non-CMB data alone analyses we set the values of $\tau$ and $n_s$ to those obtained in the P18 data alone analysis for each model. Additionally, in BAO$^\prime$ data alone analyses we also fix the value of $\ln\left(10^{10}A_s\right)$, again, to the corresponding P18 data analysis value. Finally, in Sec. \ref{sec:P18+lensing_vs_non-CMB}, when we compare the constraints obtained from P18+lensing data and non-CMB data, in the non-CMB data analyses the values of $\tau$ and $n_s$ are fixed to the ones we get in the P18+lensing data analysis for each model. We use the converged MCMC chains to compute mean values, their confidence limits, and the posterior distributions of the model parameters with the \texttt{GetDist} code \cite{Lewis:2019xzd}. The MCMC chains are considered to converge when the Gelman and Rubin $R$ statistic provided by \texttt{COSMOMC} becomes $R-1<0.01$. In addition to using the various combinations of data sets (see Sec.\ \ref{sec:data}) for constraining cosmological parameters in the models we study, we want to also determine which of these models better fit the data sets we study. For a fair comparison between competing cosmological models with different numbers of free parameters it is necessary to be able to conveniently penalize for extra degrees of freedom. In this work we employ two different statistical criteria, that differently penalize for extra degrees of freedom, to compare the performance of the models. The first one we use is the Akaike information criterion (AIC) \cite{Akaike} which is defined as \begin{equation} \label{eq:AIC} \textrm{AIC} = \chi^2_{\textrm{min}} + 2n. 
\end{equation} Here $n$ is the number of independent cosmological parameters $\theta$ and $\chi^2_{\textrm{min}}\equiv \chi^2(\hat{\theta}) = -2\ln\mathcal{L}(\hat{\theta})$ is the minimum value of $\chi^2(\theta) = -2\ln\mathcal{L}(\theta)$ evaluated at the best-fit cosmological parameter values $\hat{\theta}$ where $\mathcal{L}(\theta)$ is the likelihood function. The expression in eq.\ \eqref{eq:AIC} is valid only for a large number of data points. According to Ref.\ \cite{Burnham2002}, when the number of data points $N$ obeys $N/n<40$, the expression in eq.\ \eqref{eq:AIC} should be replaced by \begin{equation}\label{eq:AIC_modified} \textrm{AIC}_{c} = \textrm{AIC} + \frac{2n(n+1)}{N-n-1} = \chi^2_{\textrm{min}} + \frac{2nN}{N-n-1}. \end{equation} Note that when $N$ is large compared to $n$ we have $N/(N-n-1)\simeq 1$ and then $\textrm{AIC}_c\simeq\textrm{AIC}$. This is the case for P18 data and non-CMB data but not for the BAO, BAO$^\prime$, and lensing data sets. In particular for BAO data $N = 16$, for BAO$^\prime$ data $N = 12$, for the lensing data set $N = 9$, and in all three cases $N/n<40$ so $\textrm{AIC}_c\neq \textrm{AIC}$. The second one we use is the deviance information criterion (DIC) \cite{DIC} which is defined as \begin{equation} \label{eq:DIC} \textrm{DIC} = \chi^2(\hat{\theta}) + 2p_D \end{equation} where $p_D= \overline{\chi^2} - \chi^2(\hat{\theta}) $ is the penalization for those models with more degrees of freedom. Here an overbar represents the mean value of the corresponding quantity. Unlike the AIC, the DIC is both computed from Monte Carlo posterior samples and also uses the effective number of constrained parameters by taking into account whether or not a parameter is unconstrained by data, see Refs.\ \cite{DIC, Liddle:2007fy}. Therefore, we may say that the DIC is more reliable than the AIC. 
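Both criteria reduce to a few lines of code. The sketch below (Python, with illustrative inputs) implements Eqs.\ \eqref{eq:AIC_modified} and \eqref{eq:DIC}, with a list of posterior $\chi^2$ values standing in for MCMC output:

```python
def aic_c(chi2_min, n, N):
    """Small-sample corrected AIC of Eq. (AIC_modified); reduces to
    AIC = chi2_min + 2n when N >> n."""
    return chi2_min + 2.0 * n * N / (N - n - 1.0)

def dic(chi2_samples, chi2_best_fit):
    """DIC of Eq. (DIC): chi2(best fit) + 2 p_D, where
    p_D = mean(chi2 over posterior samples) - chi2(best fit)."""
    p_d = sum(chi2_samples) / len(chi2_samples) - chi2_best_fit
    return chi2_best_fit + 2.0 * p_d
```

For example, for BAO data with $N = 16$ and a six-parameter model, the correction term $2nN/(N-n-1) = 192/9 \approx 21.3$ rather than $2n = 12$, so $\textrm{AIC}_c$ differs appreciably from AIC, as stated above.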
We mostly use the differences in the AIC$_c$ and DIC values that are defined as \begin{eqnarray} \label{eq:diff_AIC_BIC} &\Delta\textrm{AIC}_c \equiv &\textrm{AIC}_{c,\textrm{X}} - \textrm{AIC}_{c,\textrm{Y}}\\ &\Delta\textrm{DIC} \equiv &\textrm{DIC}_{\textrm{X}} - \textrm{DIC}_{\textrm{Y}}. \end{eqnarray} Here Y represents the tilted flat $\Lambda$CDM model and X represents the model under study. When $-2 \leq \Delta\textrm{AIC}_c,\Delta\textrm{DIC}<0$ there is {\it weak} evidence in favor of the model under study relative to the tilted flat $\Lambda$CDM model. If $-6 \leq \Delta\textrm{AIC}_c,\Delta\textrm{DIC} < -2$ there is {\it positive} evidence, whereas if $-10 \leq \Delta\textrm{AIC}_c,\Delta\textrm{DIC} < -6$ there is {\it strong} evidence for the model under study. Finally if $\Delta\textrm{AIC}_c,\Delta\textrm{DIC} < -10$ there is {\it very strong} evidence in favor of the model under study relative to the tilted flat $\Lambda$CDM model. This scale also holds when $\Delta\textrm{AIC}_c$ and $\Delta\textrm{DIC}$ are positive, and then favors the tilted flat $\Lambda$CDM model over the model under study. We also want to determine whether some of the data sets we consider are mutually consistent (or inconsistent) in a specified cosmological model, and also whether or not the data set consistency (inconsistency) is model dependent. We utilize two different statistical estimators for this purpose. The first one makes use of DIC values and is presented in Sec.\ 2.1.7 of Ref.\ \cite{Joudaki:2016mvz}. This estimator is based on \begin{equation} \label{eq:Tension_estimator_1} \mathcal{I}(D_1,D_2) \equiv \textrm{exp}\left(-\frac{\mathcal{G}(D_1,D_2)}{2}\right), \end{equation} where \begin{equation} \mathcal{G}(D_1,D_2) = \textrm{DIC}(D_1\cup D_2) - \textrm{DIC}(D_1) - \textrm{DIC}(D_2).
\end{equation} Here $D_1$ and $D_2$ represent the two data sets under comparison, $\textrm{DIC}(D_1)$ and $\textrm{DIC}(D_2)$ are the DIC values that result when data set $D_1$ and $D_2$, respectively, are individually used to constrain cosmological parameters of the specified cosmological model, and $\textrm{DIC}(D_1\cup D_2)$ is the DIC value that results when data sets $D_1$ and $D_2$ are jointly used to constrain cosmological parameters of the specified model. The intuitive idea behind this estimator is that if two data sets are mutually consistent in a given cosmological model, which means that the cosmological parameter best-fit values determined from each data set are approximately similar, we would have $\chi^2_{\textrm{min}}(D_1\cup D_2)\simeq \chi^2_{\textrm{min}}(D_1) + \chi^2_{\textrm{min}}(D_2)$. This would lead to negative values of $\mathcal{G}(D_1,D_2)$, see eq.\ \eqref{eq:DIC}, which in turn would lead to $\mathcal{I}(D_1,D_2)>1$. However if $\chi^2_{\textrm{min}}(D_1\cup D_2) > \chi^2_{\textrm{min}}(D_1) + \chi^2_{\textrm{min}}(D_2)$, and is large enough, then we would find $\mathcal{I}(D_1,D_2)<1$. Therefore $\log_{10}\mathcal{I}>0$ when the two data sets are mutually consistent and when $\log_{10}\mathcal{I}<0$ the two data sets are mutually inconsistent, in the cosmological model under study. Applying Jeffreys' scale, the level of consistency or inconsistency between the two data sets is {\it substantial} if $\lvert \log_{10}\mathcal{I} \rvert >0.5$, is {\it strong} if $\lvert \log_{10}\mathcal{I} \rvert >1$, and is {\it decisive} if $\lvert \log_{10}\mathcal{I} \rvert >2$, \cite{Joudaki:2016mvz}. We now summarize the second statistical estimator we utilize to determine whether two data sets are mutually consistent (or inconsistent) in a specified cosmological model. This is described in Refs.\ \cite{Handley:2019pqx,Handley:2019wlz,Handley:2019tkm}, also see references therein. 
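The DIC-based estimator of Eq.\ \eqref{eq:Tension_estimator_1} and the Jeffreys'-scale classification above reduce to a few lines of code. In the sketch below, the label for $\lvert \log_{10}\mathcal{I} \rvert \le 0.5$ is an illustrative choice, since the text does not name that range:

```python
import math

def log10_I(dic_joint, dic_1, dic_2):
    """log10 of Eq. (Tension_estimator_1): I = exp(-G/2) with
    G = DIC(D1 u D2) - DIC(D1) - DIC(D2), so log10 I = -G / (2 ln 10)."""
    g = dic_joint - dic_1 - dic_2
    return -g / (2.0 * math.log(10.0))

def jeffreys_level(log10_i):
    """Jeffreys'-scale label used in the text; 'not significant' for
    |log10 I| <= 0.5 is an illustrative label, not one used in the text."""
    size = abs(log10_i)
    if size > 2.0:
        level = "decisive"
    elif size > 1.0:
        level = "strong"
    elif size > 0.5:
        level = "substantial"
    else:
        level = "not significant"
    sign = "consistency" if log10_i > 0 else "inconsistency"
    return level, sign
```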
Given a data set $D$ and a given model $M$, we can express the posterior distribution for the independent model parameters $\theta$ through Bayes' theorem \begin{eqnarray}\label{eq:BayesTheorem} p(\theta|D,M)=\frac{p(D|\theta,M) p(\theta | M)}{p(D|M)}\,. \end{eqnarray} In the above expression $\mathcal{L}_D(\theta)\equiv p(D|\theta,M)$ is the likelihood function, $\pi(\theta) \equiv p(\theta | M) $ are the priors for the model parameters $\theta$, $\mathcal{Z}_D\equiv p(D|M)$ represents the evidence, and $\mathcal{P}_D(\theta)\equiv p(\theta|D,M)$ is the posterior distribution. Taking advantage of the fact that $\mathcal{P}_D(\theta)$ is a probability distribution function in $\theta$, which means that $\int \mathcal{P}_D(\theta)d\theta = 1$, we can express the evidence as \begin{equation} \label{eq:Evidence} \mathcal{Z}_D = \int \mathcal{L}_D(\theta)\pi(\theta)d\theta . \end{equation} We are interested in quantifying the tension between two independent data sets $D_1$ and $D_2$. The total likelihood from a joint analysis of both these data sets is the product of the likelihoods for each data set, $\mathcal{L}_{12} = \mathcal{L}_1\mathcal{L}_2$. Consequently, $\mathcal{Z}_{12}=\int \mathcal{L}_1(\theta)\mathcal{L}_2(\theta)\pi(\theta)d\theta$. Here and in what follows we index quantities with ``1'' or ``2'' when they have been computed using data set $D_1$ or $D_2$, respectively, and we use index ``12'' when the two data sets are jointly used. We define the Bayes ratio as \begin{equation} \label{eq:Bayes_ratio} R_D\equiv \frac{\mathcal{Z}_{12}}{\mathcal{Z}_1\mathcal{Z}_2}. \end{equation} This statistic is constructed in such a way that when $R_D\gg 1$ we can say that data sets $D_1$ and $D_2$ are consistent in the context of the particular model, while if $R_D\ll 1$ the two data sets are inconsistent.
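A toy one-dimensional example makes the behavior of $R_D$ concrete. In the sketch below the two ``data sets'' are represented by unit-width Gaussian likelihoods over a single parameter, with a flat prior on $[-10,10]$ and evidences computed by trapezoid-rule integration of Eq.\ \eqref{eq:Evidence}; all numbers are illustrative.

```python
import math

def gauss(x, mu, sigma=1.0):
    """Gaussian likelihood peaked at mu with width sigma."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def evidence(like, lo=-10.0, hi=10.0, n=4000):
    """Z = integral of L(theta) pi(theta) dtheta, Eq. (Evidence),
    with a flat prior pi = 1/(hi - lo), trapezoid rule."""
    h = (hi - lo) / n
    total = 0.5 * (like(lo) + like(hi)) + sum(like(lo + i * h) for i in range(1, n))
    return total * h / (hi - lo)

def bayes_ratio(mu1, mu2, sigma=1.0):
    """R = Z12 / (Z1 Z2), Eq. (Bayes_ratio), for two Gaussian 'data sets'
    whose likelihoods peak at mu1 and mu2."""
    l1 = lambda t: gauss(t, mu1, sigma)
    l2 = lambda t: gauss(t, mu2, sigma)
    return evidence(lambda t: l1(t) * l2(t)) / (evidence(l1) * evidence(l2))
```

For coincident likelihood peaks this gives $R_D \approx 5.6 \gg 1$ (consistent), while peaks separated by $5\sigma$ give $R_D \approx 0.011 \ll 1$ (inconsistent). Note that $R_D$ scales with the assumed prior width, which is the prior dependence that motivates the suspiciousness statistic.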
However, $R_D$ is strongly prior-dependent and to avoid this problem we instead use the suspiciousness $S_D$, \cite{Handley:2019pqx,Handley:2019wlz,Handley:2019tkm}, which we define in the following. To define $S_D$ we will need the Shannon information \cite{Shannon:1948zz} \begin{equation} \mathcal{I}_{S,D}(\theta) = \ln\frac{\mathcal{P}_D(\theta)}{\pi(\theta)}, \end{equation} which is a measure of the amount of information, about the parameters $\theta$, that has been gained when moving from the priors to the posterior. The average value over the posterior of the Shannon information \begin{equation} \label{eq:KL_divergence} \mathcal{D}_D = \int \mathcal{P}_D(\theta)\mathcal{I}_{S,D}(\theta)d\theta \equiv \langle \mathcal{I}_{S,D}\rangle_{\mathcal{P}_D}, \end{equation} is known as the Kullback-Leibler divergence and measures how data compresses from prior to posterior. The suspiciousness $S_D$ is defined in terms of the Bayes ratio $R_D$ and the information ratio $I_D$ \begin{equation} S_D = \frac{R_D}{I_D}, \end{equation} where \begin{equation} \ln(I_D) = \mathcal{D}_1 + \mathcal{D}_2 - \mathcal{D}_{12}. \end{equation} By considering a Gaussian analogy we can turn $\ln(S_D)$ into the tension probability $p$ of two data sets being inconsistent, \cite{Handley:2019pqx,Handley:2019wlz,Handley:2019tkm}, \begin{equation} \label{eq:Tension_estimator_2} p = \int^{\infty}_{d-2\ln(S_D)}\!\!\!\!\!\!\!\!\chi^2_d(x)dx = \int^{\infty}_{d-2\ln(S_D)}\!\!\frac{x^{d/2 -1}e^{-x/2}}{2^{d/2}\Gamma(d/2)}dx, \end{equation} where $d$ is the Bayesian model dimensionality \begin{equation} d = \Tilde{d}_1 + \Tilde{d}_2 - \Tilde{d}_{12}, \qquad \Tilde{d}/2 = \langle \mathcal{I}_{S,D}^2\rangle_{\mathcal{P}_D} - \langle \mathcal{I}_{S,D}\rangle^2_{\mathcal{P}_D} . \end{equation} If $p\lesssim 0.05$ the data sets are in moderate tension whereas if $p\lesssim 0.003$ they are in strong tension. 
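In practice the tail integral of Eq.\ \eqref{eq:Tension_estimator_2} and its conversion to a Gaussian number of standard deviations can be evaluated numerically. A minimal sketch (trapezoid-rule integration of the $\chi^2_d$ density and bisection on the complementary error function, valid for $d \ge 2$ so that the integrand is bounded at the lower limit):

```python
import math

def chi2_pdf(x, d):
    """Chi-squared density with d degrees of freedom (d may be
    non-integer, as for the Bayesian model dimensionality)."""
    return (x ** (0.5 * d - 1.0) * math.exp(-0.5 * x)
            / (2.0 ** (0.5 * d) * math.gamma(0.5 * d)))

def tension_probability(d, ln_s, upper=200.0, n=200000):
    """p of Eq. (Tension_estimator_2): the chi^2_d tail integral from
    d - 2 ln(S), evaluated by the trapezoid rule on a truncated range."""
    lo = max(d - 2.0 * ln_s, 1e-12)
    h = (upper - lo) / n
    total = 0.5 * (chi2_pdf(lo, d) + chi2_pdf(upper, d))
    total += sum(chi2_pdf(lo + i * h, d) for i in range(1, n))
    return total * h

def sigma_value(p):
    """Solve p = erfc(sigma / sqrt(2)) for sigma by bisection,
    using the fact that erfc is monotonically decreasing."""
    lo, hi = 0.0, 40.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if math.erfc(mid / math.sqrt(2.0)) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

As a check, \texttt{tension\_probability(2.0, 0.0)} reproduces the analytic $\chi^2_2$ tail $e^{-1} \approx 0.368$, and \texttt{sigma\_value(0.05)} returns $\approx 1.96$, consistent with $p \lesssim 0.05$ corresponding to about a $2\sigma$ tension.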
The value of $p$ can be converted into a ``sigma value'' using the Gaussian formula \begin{equation} \label{eq:Tension_estimator_2_sigma} \sigma = \sqrt{2}\textrm{Erfc}^{-1}(p), \end{equation} where $\textrm{Erfc}^{-1}$ is the inverse complementary error function. In particular $p\lesssim 0.05$ and $p\lesssim 0.003$ correspond to 2$\sigma$ and 3$\sigma$ Gaussian standard deviations, respectively. In Sec.\ \ref{subsec:data_set_tensions} we use both these statistical estimators to examine the consistency of five pairs of data, namely: P18 and lensing, P18 and BAO, P18 and BAO$^\prime$, P18 and non-CMB, and P18+lensing and non-CMB, in the context of different cosmological models. We shall see in Sec.\ \ref{subsec:cosmological_parameters} that when $A_L$ is allowed to vary, error bars and two-dimensional cosmological constraint contours determined from each data set broaden (compared to the $A_L = 1$ case) and so become mutually consistent between different data sets (even if they are not mutually consistent when $A_L = 1$). We find, in Sec.\ \ref{subsec:data_set_tensions}, a similar improvement in consistency when $A_L$ is allowed to vary (compared to the $A_L = 1$ case). \section{Results} \label{sec:results} \subsection{Cosmological parameters} \label{subsec:cosmological_parameters} The cosmological parameter mean values and error bars favored by the P18, P18+lensing, and P18+lensing+non-CMB data sets are summarized in Tables \ref{tab:para_FL_nonCMB}-\ref{tab:para_TNL_nonCMB} for the tilted flat $\Lambda$CDM (+$A_L$) models, the untilted non-flat $\Lambda$CDM (+$A_L$) models, the tilted non-flat $\Lambda$CDM (+$A_L$) models with the Planck $P(q)$, and the tilted non-flat $\Lambda$CDM ($+A_L$) models with the new $P(q)$, respectively.
Likelihood distributions of cosmological parameters of the four models with $A_L=1$ are shown in Figs.\ \ref{fig:like_P18}, \ref{fig:like_P18_lensing}, and \ref{fig:like_P18_lensing_nonCMB} for the P18, P18+lensing, and P18+lensing+non-CMB data sets, respectively. The likelihood results for these four models, but now with $A_L$ allowed to vary, are shown in Figs.\ \ref{fig:like_Alens_P18}, \ref{fig:like_Alens_P18_lensing}, and \ref{fig:like_Alens_P18_lensing_nonCMB}. Figures \ref{fig:like_FL_compar}--\ref{fig:like_TNL_Alens_compar} show, in each of the eight cosmological models we study, the cosmological parameter constraints for P18, P18+lensing, and P18+lensing+non-CMB data, to illustrate how the cosmological parameter constraints change as we include more data. These results are discussed in Secs.\ \ref{subsubsec:P18_data_constraints}--\ref{subsubsec:contour_plots}. In the third paragraph of Sec.\ \ref{subsec:data_set_tensions} we briefly discuss some cosmological parameter constraints from (P18) lensing only data and in Sec.\ \ref{sec:discussion} we discuss whether P18, P18+lensing, P18+non-CMB, and P18+lensing+non-CMB data cosmological parameter constraints are model-independent or not. Our results may indicate tensions between some of the CMB data sets and some non-CMB low-redshift data in the context of the non-flat models. Tension between P18 data and BAO$^\prime$/BAO data in the tilted non-flat $\Lambda$CDM Planck $P(q)$ model has been noted in Refs.\ \cite{Handley:2019tkm, DiValentino:2019qzk, DiValentino:2020hov} (our updated BAO$^\prime$/BAO data differ from those used in these references, see Sec.\ \ref{sec:data}). Here we want to check whether this tension is observed for our updated BAO$^\prime$/BAO data, whether it is observed in the context of the other models we study, and how this tension is affected when we allow the $A_L$ parameter to vary.
In addition to the P18 vs.\ BAO$^\prime$/BAO comparison, we also compare P18 data and non-CMB data as well as P18+lensing and non-CMB data. These comparisons are discussed in Secs.\ \ref{sec:P18_vs_BAO}-\ref{sec:P18+lensing_vs_non-CMB}. \begin{table*} \caption{Mean and 68.3\% confidence limits of tilted flat $\Lambda\textrm{CDM}$ (+$A_L$) model parameters constrained by TT,TE,EE+lowE (P18), P18+lensing, and P18+lensing+non-CMB data sets. $H_0$ has units of km s$^{-1}$ Mpc$^{-1}$. } \begin{ruledtabular} \begin{tabular}{lccc} \\[-1mm] & \multicolumn{3}{c}{Tilted flat $\Lambda$CDM model} \\[+1mm] \cline{2-4}\\[-1mm] Parameter & P18 & P18+lensing & P18+lensing+non-CMB \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.02236 \pm 0.00015$ & $0.02237 \pm 0.00014$ & $0.02250 \pm 0.00013$ \\[+1mm] $\Omega_c h^2$ & $0.1202 \pm 0.0014$ & $0.1200 \pm 0.0012$ & $0.11838 \pm 0.00083$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.04090 \pm 0.00031$ & $1.04091 \pm 0.00031$ & $1.04110 \pm 0.00029$ \\[+1mm] $\tau$ & $0.0542 \pm 0.0079$ & $0.0543 \pm 0.0073$ & $0.0569 \pm 0.0071$ \\[+1mm] $n_s$ & $0.9649 \pm 0.0043$ & $0.9649 \pm 0.0041$ & $0.9688 \pm 0.0036$ \\[+1mm] $\ln(10^{10} A_s)$ & $3.044 \pm 0.016$ & $3.044 \pm 0.014$ & $3.046 \pm 0.014$ \\[+1mm] \hline \\[-1mm] $H_0$ & $67.28 \pm 0.61$ & $67.34 \pm 0.55$ & $68.09 \pm 0.38$ \\[+1mm] $\Omega_m$ & $0.3165 \pm 0.0084$ & $0.3155 \pm 0.0075$ & $0.3053 \pm 0.0050$ \\[+1mm] $\sigma_8$ & $0.8118 \pm 0.0074$ & $0.8112 \pm 0.0059$ & $0.8072 \pm 0.0058$ \\[+1mm] \hline \hline \\[-1mm] & \multicolumn{3}{c}{Tilted flat $\Lambda$CDM+$A_L$ model} \\[+1mm] \cline{2-4}\\[-1mm] Parameter & P18 & P18+lensing & P18+lensing+non-CMB \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.02259 \pm 0.00017$ & $0.02251 \pm 0.00017$ & $0.02258 \pm 0.00014$ \\[+1mm] $\Omega_c h^2$ & $0.1180 \pm 0.0015$ & $0.1183 \pm 0.0015$ & $0.11747 \pm 0.00091$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.04114 \pm 0.00032$ & $1.04109 \pm 0.00032$ & $1.04118 \pm 0.00029$ \\[+1mm] $\tau$ & $0.0496 
\pm 0.0082$ & $0.0487 \pm 0.0087$ & $0.0476 \pm 0.0085$ \\[+1mm] $n_s$ & $0.9710 \pm 0.0050$ & $0.9695 \pm 0.0048$ & $0.9715 \pm 0.0038$ \\[+1mm] $\ln(10^{10} A_s)$ & $3.030 \pm 0.017$ & $3.028 \pm 0.018$ & $3.023 \pm 0.018$ \\[+1mm] $A_{L}$ & $1.181 \pm 0.067$ & $1.073 \pm 0.041$ & $1.089 \pm 0.035$ \\[+1mm] \hline \\[-1mm] $H_0$ & $68.31 \pm 0.71$ & $68.14 \pm 0.69$ & $68.52 \pm 0.42$ \\[+1mm] $\Omega_m$ & $0.3029 \pm 0.0093$ & $0.3048 \pm 0.0091$ & $0.2998 \pm 0.0053$ \\[+1mm] $\sigma_8$ & $0.7997 \pm 0.0088$ & $0.7996 \pm 0.0089$ & $0.7955 \pm 0.0075$ \\[+1mm] \end{tabular} \\[+1mm] \end{ruledtabular} \label{tab:para_FL_nonCMB} \end{table*} \begin{table*} \caption{Mean and 68.3\% confidence limits of untilted non-flat $\Lambda\textrm{CDM}$ (+$A_L$) model parameters constrained by TT,TE,EE+lowE (P18), P18+lensing, and P18+lensing+non-CMB data sets. $H_0$ has units of km s$^{-1}$ Mpc$^{-1}$. } \begin{ruledtabular} \begin{tabular}{lccc} \\[-1mm] & \multicolumn{3}{c}{Untilted non-flat $\Lambda$CDM model} \\[+1mm] \cline{2-4}\\[-1mm] Parameter & P18 & P18+lensing & P18+lensing+non-CMB \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.02320 \pm 0.00015$ & $0.02307 \pm 0.00014$ & $0.02301 \pm 0.00014$ \\[+1mm] $\Omega_c h^2$ & $0.11098 \pm 0.00088$ & $0.11108 \pm 0.00086$ & $0.11176 \pm 0.00083$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.04204 \pm 0.00030$ & $1.04196 \pm 0.00029$ & $1.04189 \pm 0.00029$ \\[+1mm] $\tau$ & $0.0543 \pm 0.0091$ & $0.0580 \pm 0.0087$ & $0.0799 \pm 0.0089$ \\[+1mm] $\Omega_k$ & $-0.095 \pm 0.024$ & $-0.0322 \pm 0.0075$ & $-0.0065 \pm 0.0014$ \\[+1mm] $\ln(10^{10} A_s)$ & $3.021 \pm 0.019$ & $3.027 \pm 0.018$ & $3.075 \pm 0.018$ \\[+1mm] \hline \\[-1mm] $H_0$ & $47.1 \pm 3.2$ & $58.9 \pm 2.1$ & $67.90 \pm 0.56$ \\[+1mm] $\Omega_m$ & $0.617 \pm 0.082$ & $0.390 \pm 0.027$ & $0.2938 \pm 0.0049$ \\[+1mm] $\sigma_8$ & $0.730 \pm 0.017$ & $0.765 \pm 0.011$ & $0.7997 \pm 0.0076$ \\[+1mm] \hline \hline \\[-1mm] & \multicolumn{3}{c}{Untilted non-flat 
$\Lambda$CDM+$A_L$ model} \\[+1mm] \cline{2-4}\\[-1mm] Parameter & P18 & P18+lensing & P18+lensing+non-CMB \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.02320 \pm 0.00015$ & $0.02312 \pm 0.00014$ & $0.02310 \pm 0.00014$ \\[+1mm] $\Omega_c h^2$ & $0.11097 \pm 0.00087$ & $0.11092 \pm 0.00087$ & $0.11100 \pm 0.00085$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.04202 \pm 0.00030$ & $1.04193 \pm 0.00029$ & $1.04195 \pm 0.00030$ \\[+1mm] $\tau$ & $0.0540 \pm 0.0087$ & $0.0554 \pm 0.0097$ & $0.0566 \pm 0.0083$ \\[+1mm] $\Omega_k$ & $-0.12 \pm 0.12$ & $0.0161 \pm 0.0094$ & $-0.0060 \pm 0.0014$ \\[+1mm] $\ln(10^{10} A_s)$ & $3.020 \pm 0.018$ & $3.021 \pm 0.020$ & $3.024 \pm 0.017$ \\[+1mm] $A_{L}$ & $1.08 \pm 0.27$ & $1.44 \pm 0.15$ & $1.162 \pm 0.036$ \\[+1mm] \hline \\[-1mm] $H_0$ & $52 \pm 18$ & $85.7 \pm 8.5$ & $68.48 \pm 0.58$ \\[+1mm] $\Omega_m$ & $0.70 \pm 0.42$ & $0.190 \pm 0.043$ & $0.2874 \pm 0.0050$ \\[+1mm] $\sigma_8$ & $0.721 \pm 0.053$ & $0.7805 \pm 0.0094$ & $0.7764 \pm 0.0078$ \\[+1mm] \end{tabular} \\[+1mm] \end{ruledtabular} \label{tab:para_NL_nonCMB} \end{table*} \begin{table*} \caption{Mean and 68.3\% confidence limits of Planck-$P(q)$-based tilted non-flat $\Lambda\textrm{CDM}$ ($+A_L$) model parameters constrained by TT,TE,EE+lowE (P18), P18+lensing, and P18+lensing+non-CMB data sets. $H_0$ has units of km s$^{-1}$ Mpc$^{-1}$. 
} \begin{ruledtabular} \begin{tabular}{lccc} \\[-1mm] & \multicolumn{3}{c}{Tilted non-flat $\Lambda$CDM model [Planck $P(q)$]} \\[+1mm] \cline{2-4}\\[-1mm] Parameter & P18 & P18+lensing & P18+lensing+non-CMB \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.02260 \pm 0.00017$ & $0.02249 \pm 0.00016$ & $0.02249 \pm 0.00015$ \\[+1mm] $\Omega_c h^2$ & $0.1181 \pm 0.0015$ & $0.1186 \pm 0.0015$ & $0.1187 \pm 0.0013$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.04116 \pm 0.00032$ & $1.04107 \pm 0.00032$ & $1.04106 \pm 0.00031$ \\[+1mm] $\tau$ & $0.0483 \pm 0.0083$ & $0.0495 \pm 0.0082$ & $0.0563 \pm 0.0073$ \\[+1mm] $\Omega_k$ & $-0.043 \pm 0.017$ & $-0.0103 \pm 0.0066$ & $0.0004 \pm 0.0017$ \\[+1mm] $n_s$ & $0.9706 \pm 0.0047$ & $0.9687 \pm 0.0046$ & $0.9681 \pm 0.0044$ \\[+1mm] $\ln(10^{10} A_s)$ & $3.027 \pm 0.017$ & $3.030 \pm 0.017$ & $3.046 \pm 0.014$ \\[+1mm] \hline \\[-1mm] $H_0$ & $54.5 \pm 3.6$ & $63.7 \pm 2.3$ & $68.17 \pm 0.55$ \\[+1mm] $\Omega_m$ & $0.481 \pm 0.062$ & $0.351 \pm 0.024$ & $0.3051 \pm 0.0053$ \\[+1mm] $\sigma_8$ & $0.775 \pm 0.015$ & $0.796 \pm 0.011$ & $0.8080 \pm 0.0066$ \\[+1mm] \hline \hline \\[-1mm] & \multicolumn{3}{c}{Tilted non-flat $\Lambda$CDM+$A_L$ model [Planck $P(q)$]} \\[+1mm] \cline{2-4}\\[-1mm] Parameter & P18 & P18+lensing & P18+lensing+non-CMB \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.02258 \pm 0.00017$ & $0.02251 \pm 0.00017$ & $0.02259 \pm 0.00016$ \\[+1mm] $\Omega_c h^2$ & $0.1183 \pm 0.0015$ & $0.1183 \pm 0.0015$ & $0.1173 \pm 0.0014$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.04116 \pm 0.00033$ & $1.04110 \pm 0.00032$ & $1.04118 \pm 0.00032$ \\[+1mm] $\tau$ & $0.0478 \pm 0.0081$ & $0.0489 \pm 0.0085$ & $0.0479 \pm 0.0085$ \\[+1mm] $\Omega_k$ & $-0.130 \pm 0.095$ & $-0.005 \pm 0.027$ & $-0.0002 \pm 0.0017$ \\[+1mm] $n_s$ & $0.9704 \pm 0.0048$ & $0.9696 \pm 0.0049$ & $0.9718 \pm 0.0045$ \\[+1mm] $\ln(10^{10} A_s)$ & $3.027 \pm 0.017$ & $3.028 \pm 0.018$ & $3.024 \pm 0.017$ \\[+1mm] $A_{L}$ & $0.88 \pm 0.15$ & $1.09 \pm 0.16$ & $1.090 
\pm 0.036$ \\[+1mm] \hline \\[-1mm] $H_0$ & $45 \pm 11$ & $69 \pm 11$ & $68.49 \pm 0.56$ \\[+1mm] $\Omega_m$ & $0.80 \pm 0.35$ & $0.32 \pm 0.11$ & $0.2998 \pm 0.0055$ \\[+1mm] $\sigma_8$ & $0.733 \pm 0.045$ & $0.796 \pm 0.016$ & $0.7952 \pm 0.0085$ \\[+1mm] \end{tabular} \\[+1mm] \end{ruledtabular} \label{tab:para_NL_ns_nonCMB} \end{table*} \begin{table*} \caption{Mean and 68.3\% confidence limits of new-$P(q)$-based tilted non-flat $\Lambda\textrm{CDM}$ ($+A_L$) model parameters constrained by TT,TE,EE+lowE (P18), P18+lensing, and P18+lensing+non-CMB data sets. $H_0$ has units of km s$^{-1}$ Mpc$^{-1}$. } \begin{ruledtabular} \begin{tabular}{lccc} \\[-1mm] & \multicolumn{3}{c}{Tilted non-flat $\Lambda$CDM model [new $P(q)$]} \\[+1mm] \cline{2-4}\\[-1mm] Parameter & P18 & P18+lensing & P18+lensing+non-CMB \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.02255 \pm 0.00017$ & $0.02248 \pm 0.00016$ & $0.02248 \pm 0.00015$ \\[+1mm] $\Omega_c h^2$ & $0.1188 \pm 0.0015$ & $0.1188 \pm 0.0014$ & $0.1186 \pm 0.0013$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.04109 \pm 0.00032$ & $1.04104 \pm 0.00032$ & $1.04106 \pm 0.00031$ \\[+1mm] $\tau$ & $0.0525 \pm 0.0083$ & $0.0515 \pm 0.0081$ & $0.0566 \pm 0.0074$ \\[+1mm] $\Omega_k$ & $-0.033 \pm 0.014$ & $-0.0086 \pm 0.0057$ & $0.0003 \pm 0.0017$ \\[+1mm] $n_s$ & $0.9654 \pm 0.0045$ & $0.9661 \pm 0.0043$ & $0.9679 \pm 0.0042$ \\[+1mm] $\ln(10^{10} A_s)$ & $3.039 \pm 0.017$ & $3.035 \pm 0.016$ & $3.046 \pm 0.014$ \\[+1mm] \hline \\[-1mm] $H_0$ & $56.9 \pm 3.6$ & $64.2 \pm 2.0$ & $68.13 \pm 0.54$ \\[+1mm] $\Omega_m$ & $0.444 \pm 0.055$ & $0.345 \pm 0.021$ & $0.3054 \pm 0.0051$ \\[+1mm] $\sigma_8$ & $0.786 \pm 0.014$ & $0.799 \pm 0.010$ & $0.8079 \pm 0.0067$ \\[+1mm] \hline \hline \\[-1mm] $ $ & \multicolumn{3}{c}{Tilted non-flat $\Lambda$CDM+$A_L$ model [new $P(q)$]} \\[+1mm] \cline{2-4}\\[-1mm] Parameter & P18 & P18+lensing & P18+lensing+non-CMB \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.02257 \pm 0.00017$ & $0.02252 \pm 0.00017$ & 
$0.02260 \pm 0.00016$ \\[+1mm] $\Omega_c h^2$ & $0.1187 \pm 0.0016$ & $0.1183 \pm 0.0015$ & $0.1174 \pm 0.0013$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.04111 \pm 0.00033$ & $1.04108 \pm 0.00032$ & $1.04118 \pm 0.00032$ \\[+1mm] $\tau$ & $0.0512 \pm 0.0086$ & $0.0495 \pm 0.0093$ & $0.0486 \pm 0.0086$ \\[+1mm] $\Omega_k$ & $-0.10 \pm 0.11$ & $0.003 \pm 0.016$ & $-0.0002 \pm 0.0017$ \\[+1mm] $n_s$ & $0.9654 \pm 0.0057$ & $0.9688 \pm 0.0053$ & $0.9713 \pm 0.0042$ \\[+1mm] $\ln(10^{10} A_s)$ & $3.036 \pm 0.018$ & $3.030 \pm 0.019$ & $3.025 \pm 0.017$ \\[+1mm] $A_{L}$ & $0.94 \pm 0.20$ & $1.13 \pm 0.15$ & $1.088 \pm 0.035$ \\[+1mm] \hline \\[-1mm] $H_0$ & $51 \pm 14$ & $72.0 \pm 9.2$ & $68.48 \pm 0.56$ \\[+1mm] $\Omega_m$ & $0.70 \pm 0.43$ & $0.287 \pm 0.076$ & $0.2999 \pm 0.0055$ \\[+1mm] $\sigma_8$ & $0.752 \pm 0.052$ & $0.801 \pm 0.011$ & $0.7956 \pm 0.0082$ \\[+1mm] \end{tabular} \\[+1mm] \end{ruledtabular} \label{tab:para_TNL_nonCMB} \end{table*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_TNL_P18_fig2.pdf}} \caption{Planck 2018 TT,TE,EE+lowE (P18) data likelihood distributions of parameters of the tilted non-flat $\Lambda$CDM model with the new initial power spectrum [new $P(q)$] (red contours), of the tilted non-flat $\Lambda$CDM model with the Planck team's initial spectrum [Planck $P(q)$] (green), of the untilted non-flat $\Lambda$CDM model (grey), and of the tilted flat $\Lambda$CDM model (blue contours).
} \label{fig:like_P18} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_TNL_P18_lensing_fig3.pdf}} \caption{P18+lensing data likelihood distributions of parameters of the tilted non-flat $\Lambda$CDM model with the new initial power spectrum [new $P(q)$] (red contours), of the tilted non-flat $\Lambda$CDM model with the Planck team's initial spectrum [Planck $P(q)$] (green), of the untilted non-flat $\Lambda$CDM model (grey), and of the tilted flat $\Lambda$CDM model (blue contours). } \label{fig:like_P18_lensing} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_TNL_P18_lensing_nonCMBv2_fig4.pdf}} \caption{P18+lensing+non-CMB data likelihood distributions of parameters of the tilted non-flat $\Lambda$CDM model with the new initial power spectrum [new $P(q)$] (red contours), of the tilted non-flat $\Lambda$CDM model with the Planck team's initial spectrum [Planck $P(q)$] (green), of the untilted non-flat $\Lambda$CDM model (grey), and of the tilted flat $\Lambda$CDM model (blue contours). } \label{fig:like_P18_lensing_nonCMB} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_TNL_Alens_P18_fig5.pdf}} \caption{Planck 2018 TT,TE,EE+lowE (P18) data likelihood distributions of parameters of the tilted non-flat $\Lambda$CDM+$A_L$ model with the new initial power spectrum [new $P(q)$] (red contours), of the tilted non-flat $\Lambda$CDM+$A_L$ model with the Planck team's initial spectrum [Planck $P(q)$] (green), of the untilted non-flat $\Lambda$CDM+$A_L$ model (grey), and of the tilted flat $\Lambda$CDM+$A_L$ model (blue contours).
} \label{fig:like_Alens_P18} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_TNL_Alens_P18_lensing_fig6.pdf}} \caption{P18+lensing data likelihood distributions of parameters of the tilted non-flat $\Lambda$CDM+$A_L$ model with the new initial power spectrum [new $P(q)$] (red contours), of the tilted non-flat $\Lambda$CDM+$A_L$ model with the Planck team's initial spectrum [Planck $P(q)$] (green), of the untilted non-flat $\Lambda$CDM+$A_L$ model (grey), and of the tilted flat $\Lambda$CDM+$A_L$ model (blue contours). } \label{fig:like_Alens_P18_lensing} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_TNL_Alens_P18_lensing_nonCMBv2_fig7.pdf}} \caption{P18+lensing+non-CMB data likelihood distributions of parameters of the tilted non-flat $\Lambda$CDM+$A_L$ model with the new initial power spectrum [new $P(q)$] (red contours), of the tilted non-flat $\Lambda$CDM+$A_L$ model with the Planck team's initial spectrum [Planck $P(q)$] (green), of the untilted non-flat $\Lambda$CDM+$A_L$ model (grey), and of the tilted flat $\Lambda$CDM+$A_L$ model (blue contours). } \label{fig:like_Alens_P18_lensing_nonCMB} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_FL_compar_fig8.pdf}} \caption{Likelihood distributions of tilted flat $\Lambda$CDM model parameters constrained by P18 (gray contours), P18+lensing (red contours), P18+lensing+non-CMB (blue contours) data sets. } \label{fig:like_FL_compar} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_NL_compar_fig9.pdf}} \caption{Likelihood distributions of untilted non-flat $\Lambda$CDM model parameters constrained by P18 (gray contours), P18+lensing (red contours), P18+lensing+non-CMB (blue contours) data sets.
} \label{fig:like_NL_compar} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_NL_ns_compar_fig10.pdf}} \caption{Likelihood distributions of tilted non-flat $\Lambda$CDM model parameters with the Planck team's initial power spectrum [Planck $P(q)$] constrained by P18 (gray contours), P18+lensing (red contours), P18+lensing+non-CMB (blue contours) data sets. } \label{fig:like_NL_ns_compar} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_TNL_ns1_compar_fig11.pdf}} \caption{Likelihood distributions of tilted non-flat $\Lambda$CDM model parameters with the new initial power spectrum [new $P(q)$] constrained by P18 (gray contours), P18+lensing (red contours), P18+lensing+non-CMB (blue contours) data sets.} \label{fig:like_TNL_compar} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_FL_Alens_compar_fig12.pdf}} \caption{Likelihood distributions of tilted flat $\Lambda$CDM+$A_L$ model parameters constrained by P18 (gray contours), P18+lensing (red contours), P18+lensing+non-CMB (blue contours) data sets. } \label{fig:like_FL_Alens_compar} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_NL_Alens_compar_fig13.pdf}} \caption{Likelihood distributions of untilted non-flat $\Lambda$CDM+$A_L$ model parameters constrained by P18 (gray contours), P18+lensing (red contours), P18+lensing+non-CMB (blue contours) data sets. } \label{fig:like_NL_Alens_compar} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_NL_Alens_ns_compar_fig14.pdf}} \caption{Likelihood distributions of tilted non-flat $\Lambda$CDM+$A_L$ model parameters with the Planck team's initial power spectrum [Planck $P(q)$] constrained by P18 (gray contours), P18+lensing (red contours), P18+lensing+non-CMB (blue contours) data sets. 
} \label{fig:like_NL_ns_Alens_compar} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_TNL_Alens_ns1_compar_fig15.pdf}} \caption{Likelihood distributions of tilted non-flat $\Lambda$CDM+$A_L$ model parameters with the new initial power spectrum [new $P(q)$] constrained by P18 (gray contours), P18+lensing (red contours), P18+lensing+non-CMB (blue contours) data sets. } \label{fig:like_TNL_Alens_compar} \end{figure*} We now discuss the results obtained from the different data sets we consider. \subsubsection{P18 data cosmological constraints} \label{subsubsec:P18_data_constraints} In the case of the tilted flat $\Lambda$CDM model, with just six primary (not derived) cosmological parameters, and with $\Omega_k = 0$, from P18 data alone (see Table \ref{tab:para_FL_nonCMB} and Figs.\ \ref{fig:like_P18} and \ref{fig:like_Alens_P18}) we find the derived parameters $\Omega_m = 0.3165\pm 0.0084$ and $H_0 = 67.28\pm 0.61$ km s$^{-1}$ Mpc$^{-1}$, which are consistent with many other measurements of these quantities and which differ from the low-redshift data measurements of Ref.\ \cite{Cao:2022ugh}, $\Omega_m = 0.295\pm 0.017$ and $H_0 = 69.7\pm 1.2$ km s$^{-1}$ Mpc$^{-1}$, by $1.1\sigma$ and $1.8\sigma$. The improvement in the fit to P18 data when the $A_L$ parameter is allowed to vary, in the tilted flat $\Lambda$CDM$+A_L$ model, is positive, according to the DIC statistical criterion described in Sec.\ \ref{sec:method} (see the results in Sec.\ \ref{subsec:model_selection}). This fact is reflected in the measured (from P18 data) value of this phenomenological parameter, $A_L=1.181\pm 0.067$, which differs from the theoretically expected $A_L = 1$ by $2.7\sigma$.
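The $\sigma$ differences quoted here and throughout this subsubsection are the absolute difference between two independent measurements divided by the quadrature sum of their error bars (for a difference from a fixed theoretical value, such as $A_L = 1$, the second error bar is zero). A minimal illustrative check in Python, reproducing the values quoted above; the function name is ours:

```python
from math import sqrt

def tension_sigma(x1, s1, x2, s2):
    # |x1 - x2| in units of the quadrature sum of the two error bars
    return abs(x1 - x2) / sqrt(s1**2 + s2**2)

# Tilted flat LCDM P18 values vs. the low-redshift measurements of Ref. [Cao:2022ugh]
print(round(tension_sigma(0.3165, 0.0084, 0.295, 0.017), 1))  # Omega_m: 1.1 sigma
print(round(tension_sigma(67.28, 0.61, 69.7, 1.2), 1))        # H_0: 1.8 sigma
# A_L measured from P18 data vs. the theoretically expected A_L = 1
print(round(tension_sigma(1.181, 0.067, 1.0, 0.0), 1))        # A_L: 2.7 sigma
```

The same arithmetic, with the fixed reference value $\Omega_k = 0$, gives the ``away from flat'' significances quoted for the non-flat models below.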
The inclusion of the $A_L$ parameter, introduced to deal with the lensing anomaly, does not significantly affect the values of the other six primary parameters, leaving them close to the values found for the six-parameter ($\Omega_k = 0$) tilted flat $\Lambda$CDM model with $A_L = 1$ (the largest difference is for $\Omega_c h^2$, where it is 1.1$\sigma$ of the quadrature sum of the two error bars); it does however increase the error bars somewhat, with the largest increase being 16\% for $n_s$. In addition, in the case when $A_L$ is allowed to vary, the error bars on the derived parameters $\Omega_m$ and $H_0$ (as well as $\sigma_8$) mildly increase, by 11\% and 16\%, resulting in (for P18 data) $\Omega_m = 0.3029\pm 0.0093$ and $H_0 = 68.31\pm 0.71$ km s$^{-1}$ Mpc$^{-1}$, which are consistent with many other measurements of these quantities, now differing only by $0.41\sigma$ and $1.0\sigma$, respectively, from those of Ref.\ \cite{Cao:2022ugh}. These derived $\Omega_m$ and $H_0$ parameter values in the $A_L$-varying case also differ from those in the $A_L = 1$ case by at most 1.1$\sigma$. The addition of the $\Omega_k$ parameter to the six primary cosmological parameters of the tilted flat $\Lambda$CDM model introduces a strong degeneracy between increasing $\Omega_m$ and decreasing $H_0$. The non-flat models also show some degeneracy between $\Omega_m$ and $\Omega_k$ as well as between $H_0$ and $\Omega_k$. These degeneracies can be seen in the corresponding panels in Fig.\ \ref{fig:like_P18}. In the tilted non-flat $\Lambda$CDM models (see Tables \ref{tab:para_NL_ns_nonCMB} and \ref{tab:para_TNL_nonCMB} and Fig.\ \ref{fig:like_P18}) we see that P18 data alone is unable to break the strong geometrical degeneracy between $\Omega_m$-$H_0$-$\Omega_k$.
For the Planck $P(q)$ and the new $P(q)$, the measured values (from P18 data) $\Omega_m = 0.481\pm 0.062$ and $0.444\pm 0.055$, as well as $H_0 = 54.5\pm 3.6$ and $56.9\pm 3.6$ km s$^{-1}$ Mpc$^{-1}$, respectively, are in conflict with most other measurements of these parameters; for example, see the low-redshift data measurements of Ref.\ \cite{Cao:2022ugh} in the paragraph before last. Note that even though the values of, and error bars on, the six primary cosmological parameters in common between the two tilted non-flat models and the tilted flat model are very similar (the largest difference is 1.1$\sigma$ for $\Omega_b h^2$ between the tilted flat and the tilted non-flat Planck $P(q)$ models, and the biggest increase, 13\%, in the error bars is also for $\Omega_b h^2$, in both tilted non-flat models relative to the tilted flat model), the additional primary cosmological parameter $\Omega_k$ in the two tilted non-flat models is relatively poorly constrained, and the derived cosmological parameters $\Omega_m$ and $H_0$ error bars in the two tilted non-flat $\Lambda$CDM models are approximately factors of 7 and 6 larger than those in the tilted flat $\Lambda$CDM model (and $\Omega_m$ and $H_0$ in these tilted non-flat models differ by between 2.3$\sigma$ and 3.5$\sigma$ from the tilted flat model values). The evidence in favor of $\Omega_k < 0$ is significant in both of the tilted non-flat $\Lambda$CDM models. For the Planck $P(q)$ case we find $\Omega_k=-0.043\pm 0.017$ while for the new $P(q)$ case $\Omega_k= -0.033\pm 0.014$, these being 2.5$\sigma$ and 2.4$\sigma$ away from flat spatial hypersurfaces, respectively. In both cases there is a clear preference for a closed over an open spatial geometry. We shall see in Sec.\ \ref{subsec:model_selection} that the P18 data DIC statistical criterion strongly favors both tilted non-flat models over the tilted flat $\Lambda$CDM model.
Allowing the $A_L$ parameter to vary in the non-flat models introduces an additional strong degeneracy between $\Omega_k$, $\Omega_m$, $H_0$, and $A_L$; compare the corresponding panels in Figs.\ \ref{fig:like_P18} and \ref{fig:like_Alens_P18}. In the tilted non-flat $\Lambda$CDM+$A_L$ models with the Planck $P(q)$ and with the new $P(q)$, where the $A_L$ parameter is allowed to vary (see Tables \ref{tab:para_NL_ns_nonCMB} and \ref{tab:para_TNL_nonCMB} and Fig.\ \ref{fig:like_Alens_P18}), P18 data alone is unable to break the strong geometrical degeneracy between $\Omega_m$-$H_0$-$\Omega_k$-$A_L$. (In the tilted non-flat $\Lambda$CDM+$A_L$ new $P(q)$ model some parameters have a somewhat bimodal distribution for P18 data; see the one-dimensional posterior distributions in Fig.\ \ref{fig:like_Alens_P18}. This is not the case for the tilted non-flat $\Lambda$CDM+$A_L$ Planck $P(q)$ model.) As in the tilted flat $\Lambda$CDM case discussed in the paragraph before last, the extra $A_L$ parameter does not significantly affect any of the (primary, not derived) cosmological parameter constraints, compared to the $A_L = 1$ case, except, because of the additional $\Omega_k$-$A_L$ degeneracy, the $\Omega_k$ constraints, which are now $\Omega_k=-0.130\pm 0.095$ for the Planck $P(q)$ case and $\Omega_k= -0.10\pm 0.11$ for the new $P(q)$, being only 1.4$\sigma$ and 0.91$\sigma$, respectively, away from flat spatial hypersurfaces, with the $\Omega_k$ error bars now being factors of 6 and 8, respectively, larger than those in the $A_L = 1$ case. Also, unlike the tilted flat $\Lambda$CDM case of the paragraph before last, we measure, from the P18 data, $A_L=0.88\pm 0.15$ and $0.94\pm0.20$, which differ from the theoretically expected $A_L = 1$ by only 0.80$\sigma$ and 0.30$\sigma$. We will see in Sec.
\ref{subsec:model_selection} that in both these models the fit to P18 data is weakly or positively better when $A_L = 1$ compared to the case when the $A_L$ parameter is allowed to vary. However, when $A_L$ varies the DIC statistical criterion weakly favors [positively disfavors] the tilted non-flat Planck $P(q)$ [new $P(q)$] model over the tilted flat $\Lambda$CDM$+A_L$ model. In addition, in both these cases when $A_L$ is allowed to vary, the error bars on $\Omega_m$ and $H_0$ (as well as $\sigma_8$) significantly increase, resulting in (for P18 data) $\Omega_m = 0.80\pm 0.35$ and $0.70\pm 0.43$, as well as $H_0 = 45\pm 11$ and $51\pm 14$ km s$^{-1}$ Mpc$^{-1}$, which are consistent with many other measurements of these quantities. Again, the error bars on $\Omega_b h^2$, $\Omega_c h^2$, $\theta_{\rm MC}$, $\tau$, $n_s$, and $A_s$ are similar in the two tilted non-flat $\Lambda$CDM$+A_L$ models and the tilted flat $\Lambda$CDM$+A_L$ model, however the $A_L$ error bars are approximately a factor of 2.5 larger in the tilted non-flat models, with the introduction of the seventh primary cosmological parameter $\Omega_k$ (which is poorly constrained) also resulting in the $\Omega_m$ error bars being a factor $\sim$42 larger and the $H_0$ error bars being a factor $\sim$18 larger in the tilted non-flat $\Lambda$CDM$+A_L$ models compared to the tilted flat $\Lambda$CDM$+A_L$ model. The restriction that $n_s=1$ in the untilted non-flat $\Lambda$CDM (+$A_L$) models is an unwelcome feature when fitting the P18 CMB anisotropy spectra, according to the statistical criteria outlined in Sec.\ \ref{subsec:model_selection}. Because of this we will focus less attention on the untilted non-flat $\Lambda$CDM model compared to the two tilted non-flat models.
Despite the poor performance of the untilted non-flat $\Lambda$CDM model in this case (which also affects what happens when additional data are jointly analyzed with P18 data), the model shares some features with the two tilted non-flat $\Lambda$CDM models (see Table \ref{tab:para_NL_nonCMB} and Fig.\ \ref{fig:like_P18}), namely, the evidence in favor of closed spatial geometry, now with $\Omega_k= -0.095\pm 0.024$ (4.0$\sigma$), and the presence of the aforementioned geometrical degeneracy between $\Omega_m$-$H_0$-$\Omega_k$. Also, as in the two tilted non-flat models, in the untilted non-flat case the measured values (from P18 data) $\Omega_m = 0.617\pm 0.082$ and $H_0 = 47.1\pm 3.2$ km s$^{-1}$ Mpc$^{-1}$ are in conflict with most other measurements of these parameters. In the untilted non-flat $\Lambda$CDM+$A_L$ model where the $A_L$ parameter is allowed to vary (see Table \ref{tab:para_NL_nonCMB} and Fig.\ \ref{fig:like_Alens_P18}), P18 data alone is again unable to break the larger geometrical degeneracy between $\Omega_m$-$H_0$-$\Omega_k$-$A_L$. As in the tilted flat and non-flat $\Lambda$CDM cases discussed earlier, the extra $A_L$ parameter does not significantly affect any of the (primary, not derived) cosmological parameter constraints in the untilted non-flat model, compared to the $A_L = 1$ case, except, because of the additional $\Omega_k$-$A_L$ degeneracy, for the $\Omega_k$ constraint, which is now $\Omega_k=-0.12\pm 0.12$ and only 1.0$\sigma$ away from flat spatial hypersurfaces. Also, unlike the tilted flat $\Lambda$CDM case, but like the tilted non-flat cases of the paragraph before last, we measure, from the P18 data, $A_L=1.08\pm 0.27$, which differs from the theoretically expected value of $A_L = 1$ by only 0.30$\sigma$. We will see in Sec.\ \ref{subsec:model_selection} that in this model the fit to P18 data is slightly better when $A_L = 1$ compared to the case when the $A_L$ parameter is allowed to vary.
Similar to the two tilted non-flat models of the paragraph before last, when $A_L$ is allowed to vary in the untilted non-flat model the error bars on $\Omega_m$ and $H_0$ (as well as $\sigma_8$) significantly increase, resulting in (for P18 data) $\Omega_m = 0.70\pm 0.42$ and $H_0 = 52\pm 18$ km s$^{-1}$ Mpc$^{-1}$, which are consistent with most other measurements of these quantities. In Fig.\ \ref{fig:like_P18} we provide the 2$\sigma$ contour plots for all four of the $A_L = 1$ models. The contours for the untilted non-flat $\Lambda$CDM model overlap minimally or even do not overlap at all with those corresponding to the other models. This is likely due to the lack of the degree of freedom encapsulated in $n_s$ in the untilted non-flat model, which greatly hinders the fit to the CMB anisotropy power spectra and causes the other parameters to shift from the ranges preferred in the context of the other three models. As for the other three cosmological models, there is a significant overlap of contours, except when $\Omega_m$ or $H_0$ (or less so $\sigma_8$) is involved, which can even lead to the contours not overlapping at all. This is presumably related to the geometrical degeneracy previously mentioned. The corresponding plots for the four models now including allowing $A_L$ to vary are in Fig.\ \ref{fig:like_Alens_P18}. Allowing $A_L$ to vary broadens the contours, and for some parameters there are two disconnected 1$\sigma$ regions. While the untilted non-flat model contours still do not overlap in many cases with those of the other three models, in the other three models the contours overlap even when $\Omega_m$ or $H_0$ or $\sigma_8$ is involved.
\subsubsection{P18+lensing data cosmological constraints} \label{subsubsec:P18_lensing_data_constraints} Constraints on primary parameters derived from joint analyses of P18 and lensing data are quite similar to those derived from P18 data alone (see Tables \ref{tab:para_FL_nonCMB}-\ref{tab:para_TNL_nonCMB} and Figs.\ \ref{fig:like_P18_lensing} and \ref{fig:like_Alens_P18_lensing}), except for the $\Omega_k$ and $A_L$ constraints. On the other hand, constraints on the derived parameters $\Omega_m$ and $H_0$ are, in most non-flat cases, greatly affected by lensing data. In this subsubsection we discuss parameter constraints from jointly analyzed P18 and lensing data and compare these to the P18 data alone constraints. Ideally one would like to establish that cosmological parameter constraints derived from P18 data and from lensing data are mutually consistent, prior to using P18+lensing data in joint analyses. While it is not straightforward to derive (P18) lensing data alone cosmological parameter constraints for the wide flat priors of Table \ref{tab:Priors} that we use, we shall see, in Sec.\ \ref{subsec:data_set_tensions} (where we do briefly discuss some of these cosmological constraints), that P18 data and lensing data are not significantly mutually inconsistent. This is also consistent with the results we discuss in this subsubsection. Comparing the six-parameter tilted flat $\Lambda$CDM primary cosmological parameter constraints for P18 data and P18+lensing data, shown in the upper half of Table \ref{tab:para_FL_nonCMB}, we see that there are no significant changes in parameter values (the largest change is that $\Omega_c h^2$ is 0.11$\sigma$ smaller in the P18+lensing case) with all but the $\theta_{\rm MC}$ error bars being smaller in the P18+lensing case (the $\theta_{\rm MC}$ error bar is unchanged and the largest decrease is 14\% for the $\Omega_c h^2$ error bar).
For the derived parameters, the largest change is the 0.089$\sigma$ decrease in $\Omega_m$ relative to the P18 data value, and the 20\% smaller $\sigma_8$ error bar. For P18+lensing data we find $\Omega_m = 0.3155\pm 0.0075$ and $H_0 = 67.34\pm 0.55$ km s$^{-1}$ Mpc$^{-1}$ which are consistent with many other measurements of these quantities and 1.1$\sigma$ larger and 1.8$\sigma$ lower than the low-redshift data measurements of Ref.\ \cite{Cao:2022ugh}. Comparing the seven-parameter tilted flat $\Lambda$CDM$+A_L$ primary cosmological parameter constraints for P18 data and P18+lensing data, shown in the lower half of Table \ref{tab:para_FL_nonCMB}, we see more significant changes in the parameter values (the largest change is that $A_L$ is 1.4$\sigma$ smaller in the P18+lensing case, with the next largest being $\Omega_b h^2$ which is 0.33$\sigma$ smaller) with some of the error bars being larger in the P18+lensing case (the largest increase is 6\% for the $\tau$ and $\ln(10^{10}A_s)$ error bars) and some of the error bars being smaller (the largest decrease is 39\% for $A_L$). The reason the error bars of $\tau$ and $\ln (10^{10} A_s)$ increase, contrary to the common expectation that the error bars of the parameters will become smaller as more data is added, appears to be that the degeneracy between parameters is only partially broken by the lensing data. Interestingly, these characteristics are common to all other $A_L$-varying models (see Tables \ref{tab:para_NL_nonCMB}-\ref{tab:para_TNL_nonCMB}). For the derived parameters, the largest change is the 0.17$\sigma$ decrease in $H_0$ relative to the P18 data value, and the 3\% smaller $H_0$ error bar. 
For P18+lensing data in the varying $A_L$ case we measure $\Omega_m = 0.3048\pm 0.0091$ and $H_0 = 68.14\pm 0.69$ km s$^{-1}$ Mpc$^{-1}$ which are consistent with many other measurements of these quantities and 0.51$\sigma$ larger and 1.1$\sigma$ lower than the low-redshift data measurements of Ref.\ \cite{Cao:2022ugh}. The improvement in the fit to P18+lensing data when the $A_L$ parameter is allowed to vary, in the tilted flat $\Lambda$CDM$+A_L$ model, is only weak, as discussed in Sec.\ \ref{subsec:model_selection}. We now find $A_L = 1.073\pm 0.041$ which is 1.8$\sigma$ away from the theoretically expected $A_L = 1$. While there is still a deviation from the predicted value, the tendency of the lensing data is to push $A_L$ closer to 1, resulting in a smaller deviation than the 2.7$\sigma$ one found for $A_L = 1.181\pm 0.067$ from P18 data in the tilted flat $\Lambda$CDM$+A_L$ model. The inclusion of the $A_L$ parameter does not significantly affect the values of the other six primary parameters, leaving them close to the values found for the six-parameter tilted flat $\Lambda$CDM model with $A_L = 1$ (the largest difference is for $\Omega_c h^2$, where it is 0.88$\sigma$ of the quadrature sum of the two error bars); it does however increase the error bars, more than in the P18 data alone case discussed in Sec.\ \ref{subsubsec:P18_data_constraints}, with the largest increase being 29\% for $\ln(10^{10}A_s)$. In addition, in the case when $A_L$ is allowed to vary, the derived parameters change somewhat and their error bars increase, with the largest changes associated with $\sigma_8$, where it is now 1.1$\sigma$ smaller with a 51\% larger error bar. In the six-parameter untilted non-flat $\Lambda$CDM model, including lensing data in the mix results in a reduction in the size of the cosmological parameter error bars relative to those from P18 data (see Table \ref{tab:para_NL_nonCMB} and Fig.\ \ref{fig:like_P18_lensing}).
The most affected parameters are the primary parameter $\Omega_k$, whose error bars decrease by 69\%, and the derived parameters $H_0$, $\Omega_m$ and $\sigma_8$, for which we observe a shrinkage of the error bars by 34\%, 67\%, and 35\%, respectively. As happens in the tilted flat $\Lambda$CDM model, here also there are no significant changes in the values of the primary parameters, with the exception of the curvature parameter $\Omega_k$. This is not true for two of the derived parameters, $H_0$ and $\Omega_m$, which together with the curvature parameter are involved in the $\Omega_m$-$H_0$-$\Omega_k$ geometrical degeneracy. From P18+lensing data we find $\Omega_k=-0.095\pm 0.024$, $H_0=47.1\pm 3.2$ km s$^{-1}$ Mpc$^{-1}$, and $\Omega_m=0.390\pm 0.027 $. These values differ by 2.5$\sigma$, 3.1$\sigma$, and 2.6$\sigma$, respectively, from the corresponding values obtained in the P18 data alone analysis. From the results obtained for the untilted non-flat $\Lambda$CDM+$A_L$ model (see Table \ref{tab:para_NL_nonCMB} and Fig. \ref{fig:like_Alens_P18_lensing}), we observe significant changes in the values of the primary parameters $\Omega_k$ and $A_L$, as well as in the derived parameters $H_0$ and $\Omega_m$. For the P18+lensing data set we get $\Omega_k = 0.0161\pm 0.0094$ (1.7$\sigma$ away from flat) and $A_L = 1.44\pm 0.15$ (2.9$\sigma$ away from $A_L = 1$). These values differ by 1.1$\sigma$ and 1.2$\sigma$, respectively, from the corresponding values obtained in the P18 data alone analysis. For the derived parameters, from P18+lensing data we find $H_0=85.7\pm 8.5$ km s$^{-1}$ Mpc$^{-1}$ and $\Omega_m=0.190\pm 0.043$, which differ by 1.7$\sigma$ and 1.2$\sigma$ from the corresponding P18 data alone values. 
Joint analyses of the P18 and lensing data in the tilted non-flat models result in constraints that differ more from those derived using just P18 data compared to what happens in the tilted flat model case (see Tables \ref{tab:para_NL_ns_nonCMB} and \ref{tab:para_TNL_nonCMB} and Fig.\ \ref{fig:like_P18_lensing}). This is because lensing data partially breaks the $\Omega_m$-$H_0$-$\Omega_k$ geometrical degeneracy found in the P18 data alone analyses of the tilted non-flat models (compare the corresponding panels in Figs.\ \ref{fig:like_P18} and \ref{fig:like_P18_lensing}). Comparing the seven-parameter tilted non-flat $\Lambda$CDM Planck $P(q)$ and new $P(q)$ primary cosmological parameter constraints for P18 data and P18+lensing data, shown in the upper halves of Tables \ref{tab:para_NL_ns_nonCMB} and \ref{tab:para_TNL_nonCMB}, we see that aside from $\Omega_k$ (discussed next) there are no significant changes in parameter values [the largest change is that $\Omega_b h^2$ is 0.47$\sigma$ (0.30$\sigma$) smaller in the P18+lensing case, for the Planck (new) $P(q)$] with some of the error bars being smaller in the P18+lensing case [leaving aside $\Omega_k$ (discussed next) the largest decrease is 6\% (7\%) for the $\Omega_b h^2$ ($\Omega_c h^2$) error bar, for the Planck (new) $P(q)$]. On the other hand, $\Omega_k$ changes significantly when lensing data are added to the mix, becoming 1.8$\sigma$ (1.6$\sigma$) larger, and closer to flat for the Planck (new) $P(q)$, with 61\% (59\%) smaller error bars, still favoring closed geometry over flat but only by 1.6$\sigma$ (1.5$\sigma$), respectively. For the derived parameters, the largest change is the 2.2$\sigma$ (1.8$\sigma$) increase in $H_0$ relative to the P18 data value for the Planck (new) $P(q)$, with 61\% (62\%) smaller error bars for $\Omega_m$.
For P18+lensing data we find $\Omega_m = 0.351\pm 0.024$ ($0.345\pm 0.021$) and $H_0 = 63.7\pm 2.3$ ($64.2\pm 2.0$) km s$^{-1}$ Mpc$^{-1}$ for the Planck (new) $P(q)$, which are consistent with many other measurements of these quantities and 1.9$\sigma$ (1.9$\sigma$) larger and 2.3$\sigma$ (2.4$\sigma$) lower, respectively, than the low-redshift data measurements of Ref.\ \cite{Cao:2022ugh}. Comparing the eight-parameter tilted non-flat $\Lambda$CDM$+A_L$ Planck (new) $P(q)$ primary cosmological parameter constraints for P18 data and P18+lensing data, shown in the lower half of Table \ref{tab:para_NL_ns_nonCMB} (\ref{tab:para_TNL_nonCMB}), we see that there are smaller differences compared to the tilted flat $\Lambda$CDM$+A_L$ case. For the Planck $P(q)$ case we mostly find less significant changes (the largest changes are that $\Omega_k$ and $A_L$ are 1.3$\sigma$ and 0.95$\sigma$ larger in the P18+lensing case, with the next largest being $\Omega_b h^2$ which is 0.29$\sigma$ smaller) with some of the error bars being larger in the P18+lensing case (the largest increase is 7\% for the $A_L$ error bar, and this is the only model where the $A_L$ error bar is larger for P18+lensing data than for P18 data) and some of the error bars being smaller (the largest decrease is 72\% for $\Omega_k$). In the new $P(q)$ case we find roughly half the parameters change more significantly (the largest changes again are that $\Omega_k$ and $A_L$ are 0.93$\sigma$ and 0.76$\sigma$ larger in the P18+lensing case, with the next largest being $n_s$ which is 0.44$\sigma$ larger) with some of the error bars being larger in the P18+lensing case (the largest increase is 8\% for the $\tau$ error bar) and some of the error bars being smaller (the largest decrease is 85\% for $\Omega_k$). 
From the P18+lensing analyses, we measure $\Omega_k=-0.005\pm 0.027$ for the Planck $P(q)$ case and $\Omega_k= 0.003\pm 0.016$ for the new $P(q)$, both being only 0.19$\sigma$ away from flat spatial hypersurfaces, very different from the P18 data alone results. For the derived parameters, the largest change is the 1.5$\sigma$ (1.3$\sigma$) increase in $H_0$ relative to the P18 data value for the Planck (new) $P(q)$, with 69\% (82\%) smaller error bars for $\Omega_m$. For P18+lensing data we find $\Omega_m = 0.32\pm 0.11$ ($0.287\pm 0.076$) and $H_0 = 69\pm 11$ ($72.0\pm 9.2$) km s$^{-1}$ Mpc$^{-1}$ for the Planck (new) $P(q)$, which are consistent with many other measurements of these quantities and 0.22$\sigma$ larger (0.10$\sigma$ smaller) and 0.063$\sigma$ lower (0.25$\sigma$ higher), respectively, than the low-redshift data measurements of Ref.\ \cite{Cao:2022ugh}. (Note that the P18+lensing data Planck $P(q)$ $H_0$ error bar is unchanged, $\pm 11$ km s$^{-1}$ Mpc$^{-1}$, from the P18 data value, and this is the only model where this happens.) We will see in Sec.\ \ref{subsec:model_selection} that in both tilted non-flat $\Lambda$CDM$+A_L$ models the fit to P18+lensing data is weakly better when $A_L = 1$ compared to the case when the $A_L$ parameter is allowed to vary; this differs from what happens in the tilted flat $\Lambda$CDM$+A_L$ model. Also, unlike the tilted flat $\Lambda$CDM$+A_L$ P18+lensing case, we measure, from P18+lensing data, $A_L=1.089\pm 0.16$ and $1.13\pm0.15$, for the Planck $P(q)$ and the new $P(q)$, respectively, which differ from the theoretically expected $A_L = 1$ by only 0.56$\sigma$ and 0.87$\sigma$.
The inclusion of the $A_L$ parameter does not significantly affect the values of the other seven primary parameters, leaving them close to the values found for the seven-parameter tilted non-flat $\Lambda$CDM models with $A_L = 1$ [the largest difference is for $\Omega_k$, where it is 0.19$\sigma$ (0.68$\sigma$) for the Planck (new) $P(q)$]; it does however increase the error bars, but less than what happens in the P18 data alone case discussed in Sec.\ \ref{subsubsec:P18_data_constraints}, with the largest factor being 4 (3) for $\Omega_k$ for the Planck (new) $P(q)$. In addition, in the case when $A_L$ is allowed to vary, the derived parameters change somewhat and their error bars increase, with the largest changes associated with $H_0$, where it is now 0.47$\sigma$ (0.83$\sigma$) larger for the Planck (new) $P(q)$ with a factor of 5 (5) larger error bar. From the discussion above in this subsubsection, the fact that the cosmological constraint contours displayed in Fig.\ \ref{fig:like_P18_lensing} for the three tilted models overlap should not come as a surprise. Unlike in the previous P18 data alone case, the P18+lensing data contours that involve $\Omega_m$, $H_0$, or $\Omega_k$ now overlap for the tilted models, indicating that the geometrical degeneracy is, at least, partially broken. Figure \ref{fig:like_Alens_P18_lensing} shows the results when the $A_L$ parameter is included in the analysis. While the overlap already found in the P18 data alone analysis (see Fig.\ \ref{fig:like_Alens_P18}) remains, the bimodal 1$\sigma$ regions of that plot have now disappeared. \subsubsection{P18+lensing+non-CMB data cosmological constraints} \label{subsubsec:P18_lensing_nonCMB_data_constraints} In this subsubsection we comment on the results obtained from a joint analysis of the P18+lensing+non-CMB data set and how the cosmological constraints change when compared to those obtained using P18+lensing data.
As outlined in Sec.\ \ref{sec:data}, the non-CMB data we use here comprise BAO, $f\sigma_8$, SNIa, and $H(z)$ data, all of which provide useful information on the late-time Universe. Ideally one would like to establish that cosmological parameter constraints derived from P18+lensing data and from non-CMB data are mutually consistent, prior to using P18+lensing+non-CMB data in joint analyses. Given that P18 data dominate the P18+lensing data compilation, it is instructive to also study whether P18 data cosmological constraints are consistent with those from non-CMB data. We shall see in Sec.\ \ref{sec:P18_vs_BAO} that, in some of the models we study here, cosmological constraints from BAO$^\prime$ and BAO data, the dominant part of the non-CMB data compilation, are somewhat inconsistent with those derived using P18 data. This is also consistent with the results we discuss in this subsubsection, as well as with the results presented in Sec.\ \ref{sec:P18_vs_non-CMB}, where we compare the cosmological parameter values obtained using P18 data and using non-CMB data. In Sec.\ \ref{sec:P18+lensing_vs_non-CMB} we compare P18+lensing data cosmological constraints and non-CMB data cosmological constraints, and find similar tensions. In addition, in Sec.\ \ref{subsec:data_set_tensions}, we study tensions between some of the CMB data sets and some of the low-redshift data sets, including the case of P18+lensing data vs.\ non-CMB data, by using the two statistical estimators presented in Sec.\ \ref{sec:method}.
Comparing the six-parameter tilted flat $\Lambda$CDM primary cosmological parameter constraints for P18+lensing data and P18+lensing+non-CMB data, shown in the upper half of Table \ref{tab:para_FL_nonCMB}, we see that there are no significant changes in parameter values (the largest change is that $\Omega_c h^2$ is 1.1$\sigma$ smaller in the P18+lensing+non-CMB case) with all but the $\ln(10^{10}A_s)$ error bars being smaller in the P18+lensing+non-CMB case (the $\ln(10^{10}A_s)$ error bar is unchanged and the largest decrease is 31\% for the $\Omega_c h^2$ error bar). For the derived parameters, the largest changes are the 1.1$\sigma$ decrease in $\Omega_m$ and the 1.1$\sigma$ increase in $H_0$ relative to the P18+lensing data values, and the 33\% (31\%) smaller $\Omega_m$ ($H_0$) error bar. For P18+lensing+non-CMB data we find $\Omega_m = 0.3053\pm 0.0050$ and $H_0 = 68.09\pm 0.38$ km s$^{-1}$ Mpc$^{-1}$ which are consistent with many other measurements of these quantities and 0.58$\sigma$ larger and 1.3$\sigma$ lower than the low-redshift data measurements of Ref.\ \cite{Cao:2022ugh}. Comparing the seven-parameter tilted flat $\Lambda$CDM$+A_L$ primary cosmological parameter constraints for P18+lensing data and P18+lensing+non-CMB data, shown in the lower half of Table \ref{tab:para_FL_nonCMB}, we see smaller changes in the parameter values (the largest change is that $\Omega_c h^2$ is 0.47$\sigma$ smaller in the P18+lensing+non-CMB case, with the next largest being $n_s$ which is 0.33$\sigma$ larger) with all but the $\ln(10^{10}A_s)$ error bars being smaller in the P18+lensing+non-CMB case (the $\ln(10^{10}A_s)$ error bar is unchanged and the largest decrease is 39\% for the $\Omega_c h^2$ error bar). For the derived parameters, the largest changes are the 0.47$\sigma$ increase in $H_0$ and the 0.47$\sigma$ decrease in $\Omega_m$ relative to the P18+lensing data values, and the 42\% smaller $\Omega_m$ error bar.
For P18+lensing+non-CMB data in the varying $A_L$ case we measure $\Omega_m = 0.2998\pm 0.0053$ and $H_0 = 68.52\pm 0.42$ km s$^{-1}$ Mpc$^{-1}$ which are consistent with many other measurements of these quantities and 0.27$\sigma$ larger and 0.93$\sigma$ lower than the low-redshift data measurements of Ref.\ \cite{Cao:2022ugh}. The improvement in the fit to P18+lensing+non-CMB data when the $A_L$ parameter is allowed to vary, in the tilted flat $\Lambda$CDM$+A_L$ model, is positive, as discussed in Sec.\ \ref{subsec:model_selection}. We now find $A_L = 1.089\pm 0.035$, which is 2.5$\sigma$ away from the theoretically expected $A_L = 1$, larger than the 1.8$\sigma$ deviation for the P18+lensing case of Sec.\ \ref{subsubsec:P18_lensing_data_constraints}; the tendency of the non-CMB data is to push $A_L$ farther away from 1. The inclusion of the $A_L$ parameter does not significantly affect the values of the other six primary parameters, leaving them close to the values found for the six-parameter tilted flat $\Lambda$CDM model with $A_L = 1$ (the largest difference is for $\ln(10^{10}A_s)$, where it is 1.0$\sigma$ lower); it does however increase the error bars, comparable to what happens in the P18+lensing data case discussed in Sec.\ \ref{subsubsec:P18_lensing_data_constraints}, with the largest increase being 29\% for $\ln(10^{10}A_s)$. In addition, in the case when $A_L$ is allowed to vary, the derived parameters change somewhat and their error bars increase, with the largest changes associated with $\sigma_8$, where it is now 1.2$\sigma$ smaller with a 29\% larger error bar.
Adding non-CMB data to P18+lensing data strongly suppresses the P18+lensing data support for non-zero spatial curvature (see Tables \ref{tab:para_NL_ns_nonCMB} and \ref{tab:para_TNL_nonCMB}), except in the case of the untilted non-flat $\Lambda$CDM model, for which $\Omega_k= -0.0065\pm 0.0014$ (4.6$\sigma$ away from flat), and of the untilted non-flat $\Lambda$CDM$+A_L$ model, for which $\Omega_k = -0.0060\pm 0.0014$ (4.3$\sigma$ away from flat) (see Table \ref{tab:para_NL_nonCMB}). Comparing the seven-parameter tilted non-flat $\Lambda$CDM Planck $P(q)$ and new $P(q)$ primary cosmological parameter constraints for P18+lensing data and P18+lensing+non-CMB data, shown in the upper halves of Tables \ref{tab:para_NL_ns_nonCMB} and \ref{tab:para_TNL_nonCMB}, we see that aside from $\Omega_k$ (discussed next) there are no significant changes in parameter values [the largest change is that $\ln(10^{10}A_s)$ is 0.73$\sigma$ (0.52$\sigma$) larger in the P18+lensing+non-CMB case, for the Planck (new) $P(q)$], with all of the error bars being smaller in the P18+lensing+non-CMB case [leaving aside $\Omega_k$ (discussed next), the largest decrease is 18\% (13\%) for the $\ln(10^{10}A_s)$ error bar, for the Planck (new) $P(q)$]. On the other hand, $\Omega_k$ changes significantly when non-CMB data are added to the mix, becoming 1.6$\sigma$ (1.5$\sigma$) larger and closer to flat for the Planck (new) $P(q)$, with 74\% (70\%) smaller error bars, now favoring open geometry over flat but only by 0.24$\sigma$ (0.18$\sigma$), respectively. For the derived parameters, the largest changes are the 1.9$\sigma$ (1.9$\sigma$) increase in $H_0$ and the 1.9$\sigma$ (1.8$\sigma$) decrease in $\Omega_m$ relative to the P18+lensing data values for the Planck (new) $P(q)$, with 78\% (76\%) smaller error bars for $\Omega_m$ and 76\% (73\%) smaller error bars for $H_0$. 
For P18+lensing+non-CMB data we find $\Omega_m = 0.3051\pm 0.0053$ ($0.3054\pm 0.0051$) and $H_0 = 68.17\pm 0.55$ ($68.13\pm 0.54$) km s$^{-1}$ Mpc$^{-1}$ for the Planck (new) $P(q)$, which are consistent with many other measurements of these quantities and 0.57$\sigma$ (0.59$\sigma$) larger and 1.2$\sigma$ (1.2$\sigma$) lower, respectively, than the low-redshift data measurements of Ref.\ \cite{Cao:2022ugh}. Comparing the eight-parameter tilted non-flat $\Lambda$CDM$+A_L$ Planck (new) $P(q)$ primary cosmological parameter constraints for P18+lensing data and P18+lensing+non-CMB data, shown in the lower half of Table \ref{tab:para_NL_ns_nonCMB} (\ref{tab:para_TNL_nonCMB}), we find differences approximately comparable to those in the tilted flat $\Lambda$CDM$+A_L$ case. For the Planck (new) $P(q)$ case the largest change is that $\Omega_c h^2$ is 0.49$\sigma$ (0.45$\sigma$) smaller in the P18+lensing+non-CMB case, with the next largest being $\Omega_b h^2$ ($n_s$), which is 0.34$\sigma$ (0.37$\sigma$) smaller, with most of the error bars being smaller in the P18+lensing+non-CMB case [the largest decreases are 94\% (89\%) for $\Omega_k$ and 77\% (77\%) for $A_L$]. From the P18+lensing+non-CMB analyses, we measure $\Omega_k=-0.0002\pm 0.0017$ for both $P(q)$ cases, both being only 0.12$\sigma$ away from flat spatial hypersurfaces, very different from the P18 data alone results. For the derived parameters, the largest change is the 0.18$\sigma$ (0.39$\sigma$) decrease in $\Omega_m$ ($\sigma_8$) relative to the P18+lensing data value for the Planck (new) $P(q)$, with 95\% (93\%) smaller error bars for $\Omega_m$ and 95\% (94\%) smaller error bars for $H_0$. 
For P18+lensing+non-CMB data we find $\Omega_m = 0.2998\pm 0.0055$ ($0.2999\pm 0.0055$) and $H_0 = 68.49\pm 0.56$ ($68.48\pm 0.56$) km s$^{-1}$ Mpc$^{-1}$ for the Planck (new) $P(q)$, which are consistent with many other measurements of these quantities and 0.27$\sigma$ (0.27$\sigma$) larger and 0.91$\sigma$ (0.92$\sigma$) lower, respectively, than the low-redshift data measurements of Ref.\ \cite{Cao:2022ugh}. We will see in Sec.\ \ref{subsec:model_selection} that in both tilted non-flat $\Lambda$CDM$+A_L$ models the fit to P18+lensing+non-CMB data is positively better when the $A_L$ parameter is allowed to vary compared to the $A_L = 1$ case; this is similar to what happens in the tilted flat $\Lambda$CDM$+A_L$ model. Also, as in the tilted flat $\Lambda$CDM$+A_L$ case, from P18+lensing+non-CMB data we measure $A_L=1.090\pm 0.036$ and $1.088\pm0.035$ for the Planck $P(q)$ and the new $P(q)$, respectively, both of which differ from the theoretically expected $A_L = 1$ by 2.5$\sigma$. The inclusion of the $A_L$ parameter does not significantly affect the values of the other seven primary parameters, leaving them close to the values found for the seven-parameter tilted non-flat $\Lambda$CDM models with $A_L = 1$ [the largest difference is for $\ln(10^{10}A_s)$, which is 1.0$\sigma$ (0.95$\sigma$) smaller for the Planck (new) $P(q)$]; it does however increase the error bars, but less than what happens in the P18 alone and P18+lensing data cases discussed in Secs.\ \ref{subsubsec:P18_data_constraints} and \ref{subsubsec:P18_lensing_data_constraints}, with the largest increase being 21\% for $\ln(10^{10}A_s)$ for both $P(q)$ cases. In addition, when $A_L$ is allowed to vary, the derived parameters change somewhat and their error bars increase, with the largest changes associated with $\sigma_8$, which is now 1.3$\sigma$ (1.2$\sigma$) smaller for the Planck (new) $P(q)$ with a 21\% (22\%) larger error bar. 
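The ``$N\sigma$'' comparisons quoted in this subsubsection (e.g., ``0.27$\sigma$ larger'') are between two independent measurements $\mu_1 \pm \sigma_1$ and $\mu_2 \pm \sigma_2$; assuming the usual convention of independent Gaussian errors combined in quadrature (our reading of the convention, not spelled out above), the quoted number of standard deviations is
\begin{equation}
    N = \frac{\left|\mu_1 - \mu_2\right|}{\sqrt{\sigma_1^2 + \sigma_2^2}},
\end{equation}
so two measurements with comparable error bars must differ by somewhat more than either individual error bar before their separation exceeds $1\sigma$.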
When non-CMB data (which include $f \sigma_8$ data) are added to the mix and the $A_L$ parameter is allowed to vary, $A_L > 1$ is favored and there is a decrease in the value of $\sigma_8$ compared to the $A_L = 1$ case, which helps to alleviate the corresponding tension. Since $A_L>1$ helps to resolve the lensing anomaly, there is less or no need to increase the value of $\Omega_m$ to predict more lensing. A lower value of $\Omega_m$ means less structure formation in the past, consequently slightly alleviating the $\sigma_8$ tension. While $\Omega_k$ plays a role at both early and late times, the $A_L$ parameter only has an impact on CMB data. Since, as we shall see in Sec.\ \ref{sec:P18_vs_non-CMB}, non-CMB data prefer a flatter geometry than do P18 data, it is possible to understand why the evidence in favor of $\Omega_k\neq 0$ subsides, while the evidence for $A_L>1$ does not, when non-CMB data are added to the mix. A fairly large negative value of $\Omega_k$ is required to resolve the P18 data lensing anomaly, thus improving upon the performance shown by the tilted flat $\Lambda$CDM model; however, such a large value of the curvature parameter is not supported by lensing data or by non-CMB data. This fact raises the issue of whether it is consistent to jointly use P18, lensing, and non-CMB data sets in the context of the non-flat models. We try to answer this question, through the use of different statistical criteria, in Sec.\ \ref{subsec:data_set_tensions}. Note that Figs.\ \ref{fig:like_P18_lensing_nonCMB} and \ref{fig:like_Alens_P18_lensing_nonCMB} show that when P18+lensing+non-CMB data are used it is not necessary to consider $A_L\neq 1$ in order to make the three sets of tilted model contours overlap. \subsubsection{Comparing P18, P18+lensing, and P18+lensing+non-CMB data cosmological constraints for each model} \label{subsubsec:contour_plots} Cosmological parameter contour plots allow us to easily see the degree of correlation between the two variables. 
If the two variables are more correlated, the corresponding constraint contours are more line-like; if they are less correlated, the contours are broader and enclose two-dimensional areas. In this subsubsection we comment on how the constraint contours, for each cosmological model, change depending on whether we consider P18, P18+lensing, or P18+lensing+non-CMB data. Figures \ref{fig:like_FL_compar}-\ref{fig:like_TNL_Alens_compar} show, for each of the eight cosmological models we study, the cosmological parameter constraints for P18, P18+lensing, and P18+lensing+non-CMB data. The constraint contours shrink as more data are included in the analysis used to determine them. From Fig.\ \ref{fig:like_FL_compar} for the six-parameter tilted flat $\Lambda$CDM model we see that there are significant overlaps between the contours obtained from the three data sets considered. Along with the results discussed in Secs.\ \ref{subsubsec:P18_lensing_data_constraints} and \ref{subsubsec:P18_lensing_nonCMB_data_constraints}, this is an indication that there is no significant tension between P18, P18+lensing, and P18+lensing+non-CMB data when these data are analyzed in the tilted flat $\Lambda$CDM model. The $\Omega_m$-$H_0$ panel contours indicate that these two parameters are strongly correlated. The inclusion of lensing data and/or non-CMB data, which provide information about the late-time Universe, partially breaks this correlation and induces a shift in the one-dimensional posterior distributions of not only these two parameters but also other parameters. Non-CMB data cause the larger shift. For the six-parameter untilted non-flat $\Lambda$CDM model (see Fig.\ \ref{fig:like_NL_compar}), constraint contours determined from the three different data sets overlap only for some parameters. 
In particular, for constraint contours in panels that involve $\Omega_k$, $\Omega_m$, or $H_0$ there is no overlap between those determined using P18 data and those determined using P18+lensing+non-CMB data (there are larger than 2$\sigma$ differences between these contours when one of these three parameters is involved, and the differences are larger when two of these three parameters are involved), and there is only a slight amount of overlap between the P18 data contours and the P18+lensing data contours. The $\Omega_m$-$H_0$-$\Omega_k$ geometrical degeneracy is prominent for P18 data and is clearly seen in the $\Omega_k$-$\Omega_m$, $\Omega_k$-$H_0$, and $\Omega_m$-$H_0$ panels, as these three parameters are strongly correlated. Including lensing data and/or non-CMB data partially breaks this degeneracy, causing significant shifts in the one-dimensional posterior distributions of not only these three parameters but also other parameters. The shifts are bigger here than in the tilted flat $\Lambda$CDM model and indicate significant tension between the data sets, especially between the P18 and P18+lensing+non-CMB data sets, when they are analyzed in the untilted non-flat $\Lambda$CDM model. Non-CMB data again appear to cause the larger shift. As discussed in more detail in Sec.\ \ref{subsec:data_set_tensions}, these shifts mean that P18 and non-CMB data are mutually inconsistent in the untilted non-flat $\Lambda$CDM model and so cannot be jointly used to derive cosmological parameter constraints in this model. Similar, but quantitatively less discrepant, results are obtained for the seven-parameter tilted non-flat $\Lambda$CDM Planck $P(q)$ and tilted non-flat $\Lambda$CDM new $P(q)$ models (see Figs.\ \ref{fig:like_NL_ns_compar} and \ref{fig:like_TNL_compar}). The differences between the untilted non-flat and tilted non-flat results are likely a consequence of the additional $n_s$ parameter in the tilted non-flat models. 
In the tilted non-flat models, the more-discrepant P18 and P18+lensing+non-CMB data constraint contours overlap in all panels for pairs of the six primary cosmological parameters, excluding the $\Omega_k$ parameter as well as the derived $\Omega_m$ and $H_0$ parameters. The differences are larger in the Planck $P(q)$ case than in the new $P(q)$ case, largest for $H_0$, smallest for $\Omega_m$, with $\Omega_k$ in between. In the new $P(q)$ case, the 2$\sigma$ contours overlap for $\Omega_m$ and almost overlap for $\Omega_k$. These results may be an indication of the tension found, in the context of the tilted non-flat models, between P18 data and the BAO data set. We shall study this tension in more detail in Sec.\ \ref{subsec:data_set_tensions}. As in the untilted non-flat $\Lambda$CDM model, the $\Omega_m$-$H_0$-$\Omega_k$ geometrical degeneracy affects the tilted non-flat models. Again, including lensing data and/or non-CMB data partially breaks this degeneracy, causing significant shifts in the one-dimensional posterior distributions of not only these three parameters but also other parameters. The shifts here are bigger than in the tilted flat $\Lambda$CDM model but smaller than in the untilted non-flat $\Lambda$CDM model, and they still indicate some tension between the data sets, particularly between the P18 and P18+lensing+non-CMB data sets when they are analyzed in the tilted non-flat $\Lambda$CDM Planck $P(q)$ model. Non-CMB data again appear to cause the larger shift. When the $A_L$ parameter is allowed to vary, the three different sets of constraint contours overlap in all four models (see Figs.\ \ref{fig:like_FL_Alens_compar}--\ref{fig:like_TNL_Alens_compar}). In the non-flat models there is now a bigger degeneracy, among the $\Omega_m$-$H_0$-$\Omega_k$-$A_L$ parameters, which causes the constraint contours to expand relative to the $A_L = 1$ case, especially for P18 data. 
For some parameters in the untilted non-flat $\Lambda$CDM model and the tilted non-flat $\Lambda$CDM new $P(q)$ model we observe a bimodal distribution when only P18 data are used, and the same parameters in the tilted non-flat $\Lambda$CDM Planck $P(q)$ model have an almost bimodal distribution for P18 data. These bimodalities are likely a consequence of the above-mentioned geometrical degeneracy. \begin{table*} \caption{Mean and 68.3\% confidence limits of tilted flat $\Lambda\textrm{CDM}$ (+$A_L$) model parameters constrained by TT,TE,EE+lowE (P18), BAO, and BAO$^\prime$ data. $H_0$ has units of km s$^{-1}$ Mpc$^{-1}$. } \begin{ruledtabular} \begin{tabular}{lccccc} \\[-1mm] & \multicolumn{5}{c}{Tilted flat $\Lambda$CDM model} \\[+1mm] \cline{2-6}\\[-1mm] Parameter & P18 & P18+BAO & BAO & P18+BAO$^\prime$ & BAO$^\prime$ \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.02236 \pm 0.00015$ & $0.02243 \pm 0.00013$ & $0.043 \pm 0.016$ & $0.02241 \pm 0.00014$ & $0.043 \pm 0.016$ \\[+1mm] $\Omega_c h^2$ & $0.1202 \pm 0.0014$ & $0.11926 \pm 0.00097$ & $0.163 \pm 0.042$ & $0.11946 \pm 0.00098$ & $0.168 \pm 0.044$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.04090 \pm 0.00031$ & $1.04102 \pm 0.00029$ & $1.054 \pm 0.026$ & $1.04099 \pm 0.00029$ & $1.059 \pm 0.025$ \\[+1mm] $\tau$ & $0.0542 \pm 0.0079$ & $0.0581 \pm 0.0081$ & $0.0542$ & $0.0555 \pm 0.0077$ & $0.0542$ \\[+1mm] $n_s$ & $0.9649 \pm 0.0043$ & $0.9673 \pm 0.0037$ & $0.9649$ & $0.9665 \pm 0.0038$ & $0.9649$ \\[+1mm] $\ln(10^{10} A_s)$ & $3.044 \pm 0.016$ & $3.051 \pm 0.017$ & $3.01 \pm 0.27$ & $3.045 \pm 0.016$ & $3.044$ \\[+1mm] \hline \\[-1mm] $H_0$ & $67.28 \pm 0.61$ & $67.70 \pm 0.43$ & $83 \pm 12$ & $67.60 \pm 0.44$ & $83 \pm 12$ \\[+1mm] $\Omega_m$ & $0.3165 \pm 0.0084$ & $0.3106 \pm 0.0058$ & $0.294 \pm 0.015$ & $0.3119 \pm 0.0059$ & $0.300 \pm 0.016$ \\[+1mm] $\sigma_8$ & $0.8118 \pm 0.0074$ & $0.8119 \pm 0.0073$ & $0.874 \pm 0.037$ & $0.8102 \pm 0.0070$ & $0.92 \pm 0.12$ \\[+1mm] \hline \\[-1mm] 
$\chi_{\textrm{min}}^2$ (Total)& $2765.80$ & $2786.66$ & $15.92$ & $2777.75$ & $10.98$ \\[+1mm] $\chi_{\textrm{min}}^2$ (BAO/BAO$^\prime$) & $\cdots$ & $20.22$ & $15.92$ & $11.61$ & $10.98$ \\[+1mm] $\chi_{\textrm{BAO/BAO}^\prime}^2$ (at P18 B-F) & $\cdots$ & $22.24$ & $22.24$ & $12.58$ & $12.58$ \\[+1mm] $\textrm{DIC}$ & $2817.93$ & $2839.25$ & $21.93$ & $2829.61$ & $14.93$ \\[+1mm] $\textrm{AIC}_c$ & $2819.80$ & $2840.66$ & $27.56$ & $2831.75$ & $19.98$ \\[+1mm] \hline \hline \\[-1mm] & \multicolumn{5}{c}{Tilted flat $\Lambda$CDM+$A_L$ model} \\[+1mm] \cline{2-6}\\[-1mm] Parameter & P18 & P18+BAO & BAO & P18+BAO$^\prime$ & BAO$^\prime$ \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.02259 \pm 0.00017$ & $0.02258 \pm 0.00015$ & $0.043 \pm 0.015$ & $0.02256 \pm 0.00014$ & $0.045 \pm 0.013$ \\[+1mm] $\Omega_c h^2$ & $0.1180 \pm 0.0015$ & $0.1183 \pm 0.0010$ & $0.163 \pm 0.042$ & $0.1185 \pm 0.0010$ & $0.177 \pm 0.042$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.04114 \pm 0.00032$ & $1.04113 \pm 0.00029$ & $1.055 \pm 0.024$ & $1.04109 \pm 0.00030$ & $1.065 \pm 0.018$ \\[+1mm] $\tau$ & $0.0496 \pm 0.0082$ & $0.0522 \pm 0.0080$ & $0.0496$ & $0.0492 \pm 0.0084$ & $0.0496$ \\[+1mm] $n_s$ & $0.9710 \pm 0.0050$ & $0.9705 \pm 0.0038$ & $0.9710$ & $0.9698 \pm 0.0039$ & $0.9710$ \\[+1mm] $\ln(10^{10} A_s)$ & $3.030 \pm 0.017$ & $3.036 \pm 0.017$ & $3.00 \pm 0.27$ & $3.030 \pm 0.018$ & $3.030$ \\[+1mm] $A_{L}$ & $1.181 \pm 0.067$ & $1.170 \pm 0.060$ & $1.181$ & $1.174 \pm 0.061$ & $1.181$ \\[+1mm] \hline \\[-1mm] $H_0$ & $68.31 \pm 0.71$ & $68.21 \pm 0.46$ & $83 \pm 12$ & $68.11 \pm 0.47$ & $85 \pm 10$ \\[+1mm] $\Omega_m$ & $0.3029 \pm 0.0093$ & $0.3042 \pm 0.0060$ & $0.294 \pm 0.015$ & $0.3055 \pm 0.0061$ & $0.302 \pm 0.017$ \\[+1mm] $\sigma_8$ & $0.7997 \pm 0.0088$ & $0.8031 \pm 0.0077$ & $0.875 \pm 0.037$ & $0.8011 \pm 0.0079$ & $0.93 \pm 0.11$ \\[+1mm] \hline \\[-1mm] $\chi_{\textrm{min}}^2$ (Total)& $2756.12$ & $2776.71$ & $15.91$ & $2767.77$ & $10.98$ \\[+1mm] 
$\chi_{\textrm{min}}^2$ (BAO/BAO$^\prime$) & $\cdots$ & $20.47$ & $15.91$ & $11.37$ & $10.98$ \\[+1mm] $\chi_{\textrm{BAO/BAO}^\prime}^2$ (at P18 B-F) & $\cdots$ & $20.78$ & $20.78$ & $11.88$ & $11.88$ \\[+1mm] $\textrm{DIC}$ & $2812.41$ & $2832.92$ & $21.83$ & $2823.77$ & $15.04$ \\[+1mm] $\Delta\textrm{DIC}$ & $-5.52$ & $-6.33$ & $-0.10$ & $-5.90$ & $0.11$ \\[+1mm] $\textrm{AIC}_c$ & $2812.12$ & $2832.71$ & $27.55$ & $2823.77$ & $19.98$ \\[+1mm] $\Delta\textrm{AIC}_c$ & $-7.68$ & $-7.95$ & $-0.01$ & $-7.98$ & $0.00$ \\[+1mm] \end{tabular} \\[+1mm] \begin{flushleft} Note: $\Delta\textrm{DIC}$ ($\Delta\textrm{AIC}_c$) indicates an excess value relative to that of the tilted flat $\Lambda$CDM model constrained with the same data. The number of free parameters of the tilted flat $\Lambda$CDM model is 27 for P18, P18+BAO, and P18+BAO$^\prime$ data sets (including 21 internal calibration parameters), 4 for BAO data, and 3 for BAO$^\prime$ data. \end{flushleft} \end{ruledtabular} \label{tab:para_FL_BAO} \end{table*} \begin{table*} \caption{Mean and 68.3\% confidence limits of untilted non-flat $\Lambda\textrm{CDM}$ (+$A_L$) model parameters constrained by TT,TE,EE+lowE (P18), BAO, and BAO$^\prime$ data. $H_0$ has units of km s$^{-1}$ Mpc$^{-1}$. 
} \begin{ruledtabular} \begin{tabular}{lccccc} \\[-1mm] & \multicolumn{5}{c}{Untilted non-flat $\Lambda$CDM model} \\[+1mm] \cline{2-6}\\[-1mm] Parameter & P18 & P18+BAO & BAO & P18+BAO$^\prime$ & BAO$^\prime$ \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.02320 \pm 0.00015$ & $0.02298 \pm 0.00014$ & $0.040 \pm 0.015$ & $0.02299 \pm 0.00014$ & $0.040 \pm 0.015$ \\[+1mm] $\Omega_c h^2$ & $0.11098 \pm 0.00088$ & $0.11184 \pm 0.00089$ & $0.175 \pm 0.046$ & $0.11171 \pm 0.00089$ & $0.175 \pm 0.047$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.04204 \pm 0.00030$ & $1.04188 \pm 0.00029$ & $1.16 \pm 0.13$ & $1.04189 \pm 0.00030$ & $1.13 \pm 0.12$ \\[+1mm] $\tau$ & $0.0543 \pm 0.0091$ & $0.077 \pm 0.010$ & $0.0543$ & $0.073 \pm 0.010$ & $0.0543$ \\[+1mm] $\Omega_k$ & $-0.095 \pm 0.024$ & $-0.0066 \pm 0.0015$ & $-0.047 \pm 0.059$ & $-0.0074 \pm 0.0016$ & $-0.034 \pm 0.057$ \\[+1mm] $\ln(10^{10} A_s)$ & $3.021 \pm 0.019$ & $3.069 \pm 0.021$ & $2.70 \pm 0.43$ & $3.059 \pm 0.021$ & $3.021$ \\[+1mm] \hline \\[-1mm] $H_0$ & $47.1 \pm 3.2$ & $67.77 \pm 0.60$ & $84 \pm 12$ & $67.46 \pm 0.63$ & $83 \pm 12$ \\[+1mm] $\Omega_m$ & $0.617 \pm 0.082$ & $0.2950 \pm 0.0055$ & $0.303 \pm 0.019$ & $0.2975 \pm 0.0057$ & $0.307 \pm 0.019$ \\[+1mm] $\sigma_8$ & $0.730 \pm 0.017$ & $0.7977 \pm 0.0093$ & $0.850 \pm 0.048$ & $0.7927 \pm 0.0090$ & $1.00 \pm 0.18$ \\[+1mm] \hline \\[-1mm] $\chi_{\textrm{min}}^2$ (Total)& $2789.77$ & $2837.93$ & $15.91$ & $2828.81$ & $10.67$ \\[+1mm] $\chi_{\textrm{min}}^2$ (BAO/BAO$^\prime$) & $\cdots$ & $20.34$ & $15.91$ & $11.68$ & $10.67$ \\[+1mm] $\chi_{\textrm{BAO/BAO}^\prime}^2$ (at P18 B-F) & $\cdots$ & $1987.47$ & $1987.47$ & $1765.08$ & $1765.08$ \\[+1mm] $\textrm{DIC}$ & $2847.14$ & $2895.04$ & $24.31$ & $2884.90$ & $17.55$ \\[+1mm] $\Delta\textrm{DIC}$ & $29.21$ & $55.79$ & $2.38$ & $55.29$ & $2.62$ \\[+1mm] $\textrm{AIC}_c$ & $2843.77$ & $2891.93$ & $31.91$ & $2882.81$ & $24.39$ \\[+1mm] $\Delta\textrm{AIC}_c$ & $23.97$ & $51.27$ & $4.35$ & $51.06$ & $4.41$ 
\\[+1mm] \hline \hline \\[-1mm] & \multicolumn{5}{c}{Untilted non-flat $\Lambda$CDM+$A_L$ model} \\[+1mm] \cline{2-6}\\[-1mm] Parameter & P18 & P18+BAO & BAO & P18+BAO$^\prime$ & BAO$^\prime$ \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.02320 \pm 0.00015$ & $0.02318 \pm 0.00015$ & $0.041 \pm 0.015$ & $0.02320 \pm 0.00015$ & $0.042 \pm 0.014$ \\[+1mm] $\Omega_c h^2$ & $0.11097 \pm 0.00087$ & $0.11117 \pm 0.00086$ & $0.176 \pm 0.045$ & $0.11095 \pm 0.00087$ & $0.180 \pm 0.044$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.04202 \pm 0.00030$ & $1.04198 \pm 0.00030$ & $1.16 \pm 0.13$ & $1.04199 \pm 0.00030$ & $1.14 \pm 0.12$ \\[+1mm] $\tau$ & $0.0540 \pm 0.0087$ & $0.0598 \pm 0.0087$ & $0.0540$ & $0.0557 \pm 0.0089$ & $0.0540$ \\[+1mm] $\Omega_k$ & $-0.12 \pm 0.12$ & $-0.0064 \pm 0.0015$ & $-0.050 \pm 0.060$ & $-0.0073 \pm 0.0015$ & $-0.035 \pm 0.058$ \\[+1mm] $\ln(10^{10} A_s)$ & $3.020 \pm 0.018$ & $3.033 \pm 0.018$ & $2.68 \pm 0.41$ & $3.023 \pm 0.019$ & $3.020$ \\[+1mm] $A_{L}$ & $1.08 \pm 0.27$ & $1.310 \pm 0.062$ & $1.08$ & $1.319 \pm 0.063$ & $1.08$ \\[+1mm] \hline \\[-1mm] $H_0$ & $52 \pm 18$ & $68.27 \pm 0.61$ & $84 \pm 12$ & $67.93 \pm 0.62$ & $84 \pm 11$ \\[+1mm] $\Omega_m$ & $0.70 \pm 0.42$ & $0.2897 \pm 0.0054$ & $0.304 \pm 0.018$ & $0.2921 \pm 0.0055$ & $0.307 \pm 0.020$ \\[+1mm] $\sigma_8$ & $0.721 \pm 0.053$ & $0.7799 \pm 0.0083$ & $0.848 \pm 0.049$ & $0.7750 \pm 0.0085$ & $1.01 \pm 0.18$ \\[+1mm] \hline \\[-1mm] $\chi_{\textrm{min}}^2$ (Total)& $2787.76$ & $2809.82$ & $15.89$ & $2799.18$ & $10.68$ \\[+1mm] $\chi_{\textrm{min}}^2$ (BAO/BAO$^\prime$) & $\cdots$ & $21.96$ & $15.89$ & $11.38$ & $10.68$ \\[+1mm] $\chi_{\textrm{BAO/BAO}^\prime}^2$ (at P18 B-F) & $\cdots$ & $106.63$ & $106.63$ & $80.18$ & $80.18$ \\[+1mm] $\textrm{DIC}$ & $2846.45$ & $2869.28$ & $24.63$ & $2857.90$ & $17.89$ \\[+1mm] $\Delta\textrm{DIC}$ & $28.52$ & $30.03$ & $2.70$ & $28.29$ & $2.96$ \\[+1mm] $\textrm{AIC}_c$ & $2843.76$ & $2865.82$ & $31.89$ & $2855.18$ & $24.39$ \\[+1mm] 
$\Delta\textrm{AIC}_c$ & $23.96$ & $25.16$ & $4.33$ & $23.43$ & $4.41$ \\[+1mm] \end{tabular} \\[+1mm] \begin{flushleft} Note: $\Delta\textrm{DIC}$ ($\Delta\textrm{AIC}_c$) indicates an excess value relative to that of the tilted flat $\Lambda$CDM model constrained with the same data. \end{flushleft} \end{ruledtabular} \label{tab:para_NL_BAO} \end{table*} \begin{table*} \caption{Mean and 68.3\% confidence limits of Planck-$P(q)$-based tilted non-flat $\Lambda\textrm{CDM}$ (+$A_L$) model parameters constrained by TT,TE,EE+lowE (P18), BAO, and BAO$^\prime$ data. $H_0$ has units of km s$^{-1}$ Mpc$^{-1}$. } \begin{ruledtabular} \begin{tabular}{lccccc} \\[-1mm] & \multicolumn{5}{c}{Tilted non-flat $\Lambda$CDM model [Planck $P(q)$]} \\[+1mm] \cline{2-6}\\[-1mm] Parameter & P18 & P18+BAO & BAO & P18+BAO$^\prime$ & BAO$^\prime$ \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.02260 \pm 0.00017$ & $0.02241 \pm 0.00015$ & $0.040 \pm 0.015$ & $0.02241 \pm 0.00015$ & $0.040 \pm 0.016$ \\[+1mm] $\Omega_c h^2$ & $0.1181 \pm 0.0015$ & $0.1195 \pm 0.0014$ & $0.174 \pm 0.047$ & $0.1195 \pm 0.0014$ & $0.172 \pm 0.047$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.04116 \pm 0.00032$ & $1.04099 \pm 0.00032$ & $1.15 \pm 0.13$ & $1.04099 \pm 0.00032$ & $1.13 \pm 0.12$ \\[+1mm] $\tau$ & $0.0483 \pm 0.0083$ & $0.0578 \pm 0.0077$ & $0.0483$ & $0.0550 \pm 0.0078$ & $0.0483$ \\[+1mm] $\Omega_k$ & $-0.043 \pm 0.017$ & $0.0005 \pm 0.0018$ & $-0.046 \pm 0.060$ & $-0.0001 \pm 0.0018$ & $-0.033 \pm 0.055$ \\[+1mm] $n_s$ & $0.9706 \pm 0.0047$ & $0.9667 \pm 0.0045$ & $0.9706$ & $0.9666 \pm 0.0044$ & $0.9706$ \\[+1mm] $\ln(10^{10} A_s)$ & $3.027 \pm 0.017$ & $3.051 \pm 0.016$ & $2.74 \pm 0.43$ & $3.044 \pm 0.016$ & $3.027$ \\[+1mm] \hline \\[-1mm] $H_0$ & $54.5 \pm 3.6$ & $67.83 \pm 0.58$ & $83 \pm 12$ & $67.58 \pm 0.62$ & $83 \pm 12$ \\[+1mm] $\Omega_m$ & $0.481 \pm 0.062$ & $0.3100 \pm 0.0060$ & $0.303 \pm 0.019$ & $0.3122 \pm 0.0063$ & $0.306 \pm 0.019$ \\[+1mm] $\sigma_8$ & $0.775 \pm 0.015$ & $0.8130 \pm 
0.0079$ & $0.850 \pm 0.049$ & $0.8099 \pm 0.0081$ & $0.98 \pm 0.17$ \\[+1mm] \hline \\[-1mm] $\chi_{\textrm{min}}^2$ (Total)& $2754.73$ & $2786.20$ & $15.88$ & $2776.90$ & $10.68$ \\[+1mm] $\chi_{\textrm{min}}^2$ (BAO/BAO$^\prime$) & $\cdots$ & $20.09$ & $15.88$ & $11.71$ & $10.68$ \\[+1mm] $\chi_{\textrm{BAO/BAO}^\prime}^2$ (at P18 B-F) & $\cdots$ & $665.90$ & $665.90$ & $582.59$ & $582.59$ \\[+1mm] $\textrm{DIC}$ & $2810.59$ & $2840.62$ & $24.34$ & $2832.28$ & $17.58$ \\[+1mm] $\Delta\textrm{DIC}$ & $-7.34$ & $1.37$ & $2.41$ & $2.67$ & $2.65$ \\[+1mm] $\textrm{AIC}_c$ & $2810.73$ & $2842.20$ & $31.88$ & $2832.90$ & $24.39$ \\[+1mm] $\Delta\textrm{AIC}_c$ & $-9.07$ & $1.54$ & $4.32$ & $1.15$ & $4.41$ \\[+1mm] \hline \hline \\[-1mm] & \multicolumn{5}{c}{Tilted non-flat $\Lambda$CDM+$A_L$ model [Planck $P(q)$]} \\[+1mm] \cline{2-6}\\[-1mm] Parameter & P18 & P18+BAO & BAO & P18+BAO$^\prime$ & BAO$^\prime$ \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.02258 \pm 0.00017$ & $0.02260 \pm 0.00017$ & $0.041 \pm 0.014$ & $0.02262 \pm 0.00017$ & $0.044 \pm 0.013$ \\[+1mm] $\Omega_c h^2$ & $0.1183 \pm 0.0015$ & $0.1180 \pm 0.0015$ & $0.174 \pm 0.045$ & $0.1178 \pm 0.0015$ & $0.182 \pm 0.043$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.04116 \pm 0.00033$ & $1.04115 \pm 0.00033$ & $1.16 \pm 0.14$ & $1.04118 \pm 0.00032$ & $1.12 \pm 0.11$ \\[+1mm] $\tau$ & $0.0478 \pm 0.0081$ & $0.0522 \pm 0.0081$ & $0.0478$ & $0.0496 \pm 0.0085$ & $0.0478$ \\[+1mm] $\Omega_k$ & $-0.130 \pm 0.095$ & $-0.0004 \pm 0.0018$ & $-0.045 \pm 0.063$ & $-0.0012 \pm 0.0018$ & $-0.026 \pm 0.054$ \\[+1mm] $n_s$ & $0.9704 \pm 0.0048$ & $0.9712 \pm 0.0047$ & $0.9704$ & $0.9716 \pm 0.0047$ & $0.9704$ \\[+1mm] $\ln(10^{10} A_s)$ & $3.027 \pm 0.017$ & $3.035 \pm 0.017$ & $2.74 \pm 0.45$ & $3.029 \pm 0.018$ & $3.027$ \\[+1mm] $A_{L}$ & $0.88 \pm 0.15$ & $1.170 \pm 0.061$ & $0.88$ & $1.178 \pm 0.061$ & $0.88$ \\[+1mm] \hline \\[-1mm] $H_0$ & $45 \pm 11$ & $68.13 \pm 0.60$ & $84 \pm 11$ & $67.85 \pm 0.61$ & $85 \pm 10$ 
\\[+1mm] $\Omega_m$ & $0.80 \pm 0.35$ & $0.3044 \pm 0.0062$ & $0.303 \pm 0.019$ & $0.3064 \pm 0.0063$ & $0.307 \pm 0.019$ \\[+1mm] $\sigma_8$ & $0.733 \pm 0.045$ & $0.8020 \pm 0.0089$ & $0.851 \pm 0.048$ & $0.7983 \pm 0.0091$ & $0.99 \pm 0.16$ \\[+1mm] \hline \\[-1mm] $\chi_{\textrm{min}}^2$ (Total)& $2754.99$ & $2776.32$ & $15.91$ & $2767.04$ & $10.73$ \\[+1mm] $\chi_{\textrm{min}}^2$ (BAO/BAO$^\prime$) & $\cdots$ & $20.38$ & $15.91$ & $11.22$ & $10.73$ \\[+1mm] $\chi_{\textrm{BAO/BAO}^\prime}^2$ (at P18 B-F) & $\cdots$ & $593.77$ & $593.77$ & $518.08$ & $518.08$ \\[+1mm] $\textrm{DIC}$ & $2811.63$ & $2835.10$ & $24.31$ & $2825.27$ & $17.54$ \\[+1mm] $\Delta\textrm{DIC}$ & $-6.30$ & $-4.15$ & $2.38$ & $-4.34$ & $2.61$ \\[+1mm] $\textrm{AIC}_c$ & $2812.99$ & $2834.32$ & $31.91$ & $2825.04$ & $24.45$ \\[+1mm] $\Delta\textrm{AIC}_c$ & $-6.81$ & $-6.34$ & $4.35$ & $-6.71$ & $4.47$ \\[+1mm] \end{tabular} \\[+1mm] \begin{flushleft} Note: $\Delta\textrm{DIC}$ ($\Delta\textrm{AIC}_c$) indicates an excess value relative to that of the tilted flat $\Lambda$CDM model constrained with the same data. \end{flushleft} \end{ruledtabular} \label{tab:para_NL_ns_BAO} \end{table*} \begin{table*} \caption{Mean and 68.3\% confidence limits of new-$P(q)$-based tilted non-flat $\Lambda\textrm{CDM}$ (+$A_L$) model parameters constrained by TT,TE,EE+lowE (P18), BAO, and BAO$^\prime$ data. $H_0$ has units of km s$^{-1}$ Mpc$^{-1}$. 
} \begin{ruledtabular} \begin{tabular}{lccccc} \\[-1mm] & \multicolumn{5}{c}{Tilted non-flat $\Lambda$CDM model [new $P(q)$]} \\[+1mm] \cline{2-6}\\[-1mm] Parameter & P18 & P18+BAO & BAO & P18+BAO$^\prime$ & BAO$^\prime$ \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.02255 \pm 0.00017$ & $0.02242 \pm 0.00015$ & $0.039 \pm 0.015$ & $0.02243 \pm 0.00016$ & $0.041 \pm 0.016$ \\[+1mm] $\Omega_c h^2$ & $0.1188 \pm 0.0015$ & $0.1194 \pm 0.0014$ & $0.173 \pm 0.048$ & $0.1193 \pm 0.0014$ & $0.177 \pm 0.048$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.04109 \pm 0.00032$ & $1.04100 \pm 0.00032$ & $1.16 \pm 0.14$ & $1.04102 \pm 0.00032$ & $1.13 \pm 0.12$ \\[+1mm] $\tau$ & $0.0525 \pm 0.0083$ & $0.0582 \pm 0.0081$ & $0.0525$ & $0.0562 \pm 0.0080$ & $0.0525$ \\[+1mm] $\Omega_k$ & $-0.033 \pm 0.014$ & $0.0003 \pm 0.0018$ & $-0.051 \pm 0.061$ & $-0.0004 \pm 0.0018$ & $-0.032 \pm 0.059$ \\[+1mm] $n_s$ & $0.9654 \pm 0.0045$ & $0.9665 \pm 0.0043$ & $0.9654$ & $0.9665 \pm 0.0043$ & $0.9654$ \\[+1mm] $\ln(10^{10} A_s)$ & $3.039 \pm 0.017$ & $3.051 \pm 0.016$ & $2.72 \pm 0.45$ & $3.046 \pm 0.016$ & $3.039$ \\[+1mm] \hline \\[-1mm] $H_0$ & $56.9 \pm 3.6$ & $67.79 \pm 0.59$ & $83 \pm 12$ & $67.52 \pm 0.61$ & $83 \pm 12$ \\[+1mm] $\Omega_m$ & $0.444 \pm 0.055$ & $0.3102 \pm 0.0060$ & $0.304 \pm 0.019$ & $0.3124 \pm 0.0063$ & $0.307 \pm 0.020$ \\[+1mm] $\sigma_8$ & $0.786 \pm 0.014$ & $0.8128 \pm 0.0079$ & $0.846 \pm 0.048$ & $0.8098 \pm 0.0080$ & $0.99 \pm 0.18$ \\[+1mm] \hline \\[-1mm] $\chi_{\textrm{min}}^2$ (Total)& $2757.38$ & $2786.27$ & $15.90$ & $2777.01$ & $10.67$ \\[+1mm] $\chi_{\textrm{min}}^2$ (BAO/BAO$^\prime$) & $\cdots$ & $20.66$ & $15.90$ & $11.82$ & $10.67$ \\[+1mm] $\chi_{\textrm{BAO/BAO}^\prime}^2$ (at P18 B-F) & $\cdots$ & $278.54$ & $278.54$ & $236.71$ & $236.71$ \\[+1mm] $\textrm{DIC}$ & $2811.54$ & $2840.16$ & $24.57$ & $2831.65$ & $17.69$ \\[+1mm] $\Delta\textrm{DIC}$ & $-6.39$ & $0.91$ & $2.64$ & $2.04$ & $2.76$ \\[+1mm] $\textrm{AIC}_c$ & $2813.38$ & $2842.27$ & $31.90$ & 
$2833.01$ & $24.39$ \\[+1mm] $\Delta\textrm{AIC}_c$ & $-6.42$ & $1.61$ & $4.34$ & $1.26$ & $4.41$ \\[+1mm] \hline \hline \\[-1mm] & \multicolumn{5}{c}{Tilted non-flat $\Lambda$CDM+$A_L$ model [new $P(q)$]} \\[+1mm] \cline{2-6}\\[-1mm] Parameter & P18 & P18+BAO & BAO & P18+BAO$^\prime$ & BAO$^\prime$ \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.02257 \pm 0.00017$ & $0.02260 \pm 0.00017$ & $0.039 \pm 0.015$ & $0.02261 \pm 0.00017$ & $0.042 \pm 0.015$ \\[+1mm] $\Omega_c h^2$ & $0.1187 \pm 0.0016$ & $0.1180 \pm 0.0014$ & $0.174 \pm 0.047$ & $0.1178 \pm 0.0015$ & $0.177 \pm 0.046$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.04111 \pm 0.00033$ & $1.04117 \pm 0.00033$ & $1.17 \pm 0.14$ & $1.04117 \pm 0.00032$ & $1.13 \pm 0.13$ \\[+1mm] $\tau$ & $0.0512 \pm 0.0086$ & $0.0532 \pm 0.0081$ & $0.0512$ & $0.0495 \pm 0.0084$ & $0.0512$ \\[+1mm] $\Omega_k$ & $-0.10 \pm 0.11$ & $-0.0005 \pm 0.0017$ & $-0.055 \pm 0.060$ & $-0.0012 \pm 0.0018$ & $-0.035 \pm 0.059$ \\[+1mm] $n_s$ & $0.9654 \pm 0.0057$ & $0.9707 \pm 0.0044$ & $0.9654$ & $0.9715 \pm 0.0047$ & $0.9654$ \\[+1mm] $\ln(10^{10} A_s)$ & $3.036 \pm 0.018$ & $3.038 \pm 0.017$ & $2.69 \pm 0.43$ & $3.029 \pm 0.018$ & $3.036$ \\[+1mm] $A_{L}$ & $0.94 \pm 0.20$ & $1.168 \pm 0.061$ & $0.94$ & $1.176 \pm 0.062$ & $0.94$ \\[+1mm] \hline \\[-1mm] $H_0$ & $51 \pm 14$ & $68.09 \pm 0.60$ & $83 \pm 12$ & $67.85 \pm 0.63$ & $84 \pm 11$ \\[+1mm] $\Omega_m$ & $0.70 \pm 0.43$ & $0.3048 \pm 0.0062$ & $0.304 \pm 0.019$ & $0.3065 \pm 0.0065$ & $0.306 \pm 0.020$ \\[+1mm] $\sigma_8$ & $0.752 \pm 0.052$ & $0.8026 \pm 0.0086$ & $0.844 \pm 0.048$ & $0.7982 \pm 0.0092$ & $0.99 \pm 0.18$ \\[+1mm] \hline \\[-1mm] $\chi_{\textrm{min}}^2$ (Total)& $2756.33$ & $2776.32$ & $15.90$ & $2767.43$ & $10.68$ \\[+1mm] $\chi_{\textrm{min}}^2$ (BAO/BAO$^\prime$) & $\cdots$ & $20.30$ & $15.90$ & $11.21$ & $10.68$ \\[+1mm] $\chi_{\textrm{BAO/BAO}^\prime}^2$ (at P18 B-F) & $\cdots$ & $194.81$ & $194.81$ & $160.72$ & $160.72$ \\[+1mm] $\textrm{DIC}$ & $2814.83$ & $2834.67$ & 
$24.75$ & $2824.97$ & $17.76$ \\[+1mm] $\Delta\textrm{DIC}$ & $-3.10$ & $-4.58$ & $2.82$ & $-4.64$ & $2.83$ \\[+1mm] $\textrm{AIC}_c$ & $2814.33$ & $2834.32$ & $31.90$ & $2825.43$ & $24.39$ \\[+1mm] $\Delta\textrm{AIC}_c$ & $-5.47$ & $-6.34$ & $4.34$ & $-6.32$ & $4.41$ \\[+1mm] \end{tabular} \\[+1mm] \begin{flushleft} Note: $\Delta\textrm{DIC}$ ($\Delta\textrm{AIC}_c$) indicates an excess value relative to that of the tilted flat $\Lambda$CDM model constrained with the same data. \end{flushleft} \end{ruledtabular} \label{tab:para_TNL_ns_BAO} \end{table*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_FL_fig16.pdf}} \caption{Likelihood distributions constrained by the Planck 2018 TT,TE,EE+lowE (P18), BAO, and BAO$^\prime$ data sets in the tilted flat $\Lambda$CDM model. } \label{fig:like_FL_BAO} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_FL_Alens_fig17.pdf}} \caption{Likelihood distributions constrained by the Planck 2018 TT,TE,EE+lowE (P18), BAO, and BAO$^\prime$ data sets in the tilted flat $\Lambda$CDM$+A_L$ model. } \label{fig:like_FL_Alens_BAO} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_NL_fig18.pdf}} \caption{Likelihood distributions constrained by the Planck 2018 TT,TE,EE+lowE (P18), BAO, and BAO$^\prime$ data sets in the untilted non-flat $\Lambda$CDM model. } \label{fig:like_NL_BAO} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_NL_Alens_fig19.pdf}} \caption{Likelihood distributions constrained by the Planck 2018 TT,TE,EE+lowE (P18), BAO, and BAO$^\prime$ data sets in the untilted non-flat $\Lambda$CDM$+A_L$ model. 
} \label{fig:like_NL_Alens_BAO} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_NL_ns_fig20.pdf}} \caption{Likelihood distributions constrained by the Planck 2018 TT,TE,EE+lowE (P18), BAO, and BAO$^\prime$ data sets in the tilted non-flat $\Lambda$CDM model with the Planck $P(q)$. } \label{fig:like_NL_ns_BAO} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_NL_Alens_ns_fig21.pdf}} \caption{Likelihood distributions constrained by the Planck 2018 TT,TE,EE+lowE (P18), BAO, and BAO$^\prime$ data sets in the tilted non-flat $\Lambda$CDM$+A_L$ model with the Planck $P(q)$. } \label{fig:like_NL_Alens_ns_BAO} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_TNL_ns1_fig22.pdf}} \caption{Likelihood distributions constrained by the Planck 2018 TT,TE,EE+lowE (P18), BAO, and BAO$^\prime$ data sets in the tilted non-flat $\Lambda$CDM model with the new $P(q)$. } \label{fig:like_TNL_ns1_BAO} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_TNL_Alens_ns1_fig23.pdf}} \caption{Likelihood distributions constrained by the Planck 2018 TT,TE,EE+lowE (P18), BAO, and BAO$^\prime$ data sets in the tilted non-flat $\Lambda$CDM$+A_L$ model with the new $P(q)$. } \label{fig:like_TNL_Alens_ns1_BAO} \end{figure*} \subsubsection{Comparing P18 data and BAO/BAO$^{\prime}$ data cosmological constraints}\label{sec:P18_vs_BAO} In this subsubsection we compare BAO and BAO$^\prime$ data cosmological constraints to those obtained from P18 data. Prior to jointly analyzing P18+BAO/BAO$^\prime$ data, we need to determine whether P18 and BAO/BAO$^\prime$ data cosmological constraints are mutually consistent. In Sec.\ \ref{subsec:data_set_tensions} we use two other statistical estimators to examine whether or not P18 and BAO/BAO$^\prime$ data are in tension. 
The cosmological parameter mean values and error bars favored by the P18, BAO, BAO$^\prime$, P18+BAO, and P18+BAO$^\prime$ data sets are summarized in Tables \ref{tab:para_FL_BAO}-\ref{tab:para_TNL_ns_BAO} for the tilted flat $\Lambda$CDM (+$A_L$) models, the untilted non-flat $\Lambda$CDM (+$A_L$) models, the tilted non-flat $\Lambda$CDM (+$A_L$) models with the Planck $P(q)$, and the tilted non-flat $\Lambda$CDM ($+A_L$) models with the new $P(q)$, respectively. Likelihood distributions of cosmological parameters of the four models with $A_L=1$ and $A_L$ varying are shown in Figs.\ \ref{fig:like_FL_BAO}-\ref{fig:like_TNL_Alens_ns1_BAO} for the P18, BAO, BAO$^\prime$, P18+BAO, and P18+BAO$^\prime$ data sets. Since neither BAO$^{\prime}$ nor BAO data have the ability to constrain $\tau$ or $n_s$ or $A_L$, we set their values to those found in the corresponding P18 data analysis. In addition, for the same reason, in the BAO$^\prime$ data analyses, we also set the value of $\ln(10^{10}A_s)$ to that found in the corresponding P18 data analysis. We see from the upper and lower panels of Tables \ref{tab:para_FL_BAO}-\ref{tab:para_TNL_ns_BAO} that the BAO and BAO$^\prime$ data results for the $A_L=1$ and $A_L$-varying cases are similar, even though the fixed $\tau$ and $n_s$ [and $\ln(10^{10}A_s)$] values are slightly different for the $A_L = 1$ and $A_L$-varying cases. From Tables \ref{tab:para_FL_BAO}-\ref{tab:para_TNL_ns_BAO} we see that, in the six non-flat $\Lambda$CDM (+$A_L$) models, the constraints set by BAO$^\prime$/BAO data on $\Omega_m$ are tighter than the ones imposed by P18 data, and, in the three non-flat $\Lambda$CDM+$A_L$ models, the constraints set by BAO$^\prime$/BAO data on $\Omega_k$ are tighter than the ones imposed by P18 data. P18 data more restrictively constrain all other parameters in all eight cosmological models. 
As we discuss below, in most cases there is a significant level of disagreement in the non-flat models between P18 data cosmological constraints and BAO$^\prime$/BAO data cosmological constraints. From Tables \ref{tab:para_NL_BAO}-\ref{tab:para_TNL_ns_BAO} we see that all three data sets, P18, BAO$^\prime$, and BAO, favor negative values of the curvature parameter, with BAO$^\prime$ and BAO data favoring closed geometry only weakly, at 0.48$\sigma$ to 0.96$\sigma$. However, we should take into account the geometrical degeneracy among $H_0$, $\Omega_k$, and $\Omega_m$ and note that both BAO$^\prime$ and BAO data favor higher values of $H_0$ and lower values of $\Omega_m$ than do P18 data, and this is what causes the P18 and BAO/BAO$^\prime$ cosmological constraint differences. We first discuss BAO$^\prime$ data results (BAO$^\prime$ data do not include $f\sigma_8$ data points, see Sec.\ \ref{sec:data}) and then consider results from BAO data. This will allow us to test the impact of some $f\sigma_8$ data on the cosmological constraints. Comparing the six-parameter and the three-parameter tilted flat $\Lambda$CDM model primary cosmological parameter constraints for P18 and BAO$^\prime$ data, shown in the upper half of Table \ref{tab:para_FL_BAO}, we see that the values of $\Omega_b h^2$ and $\Omega_c h^2$ are in mild disagreement, at 1.3$\sigma$ and 1.1$\sigma$, respectively. We also observe a similar 1.3$\sigma$ level of tension in the derived $H_0$ values, whereas the other two derived parameters, $\Omega_m$ and $\sigma_8$, show better agreement, disagreeing by only 0.91$\sigma$ and 0.90$\sigma$, respectively. 
Comparing the seven-parameter and the three-parameter tilted flat $\Lambda$CDM+$A_L$ model primary cosmological parameter constraints for P18 and BAO$^\prime$ data, shown in the lower half of Table \ref{tab:para_FL_BAO}, we see that the values of $\Omega_b h^2$, $\Omega_c h^2$, and $100\theta_{\textrm{MC}}$ are in 1.7$\sigma$, 1.4$\sigma$, and 1.3$\sigma$ tension, respectively. As for the derived parameters, we find that the $H_0$ and $\sigma_8$ values are in 1.7$\sigma$ and 1.2$\sigma$ disagreement, while the $\Omega_m$ values differ by only 0.046$\sigma$. This means that only for the $\Omega_m$ parameter does the inclusion of a varying $A_L$ reduce the disagreement found in the $A_L=1$ case, while increasing the disagreement for a number of other parameters. P18 and BAO$^\prime$ data results obtained for the six-parameter and the three-parameter untilted non-flat $\Lambda$CDM model, shown in the upper half of Table \ref{tab:para_NL_BAO}, indicate more significant differences than found in the tilted flat $\Lambda$CDM model. The values of the primary cosmological parameters $\Omega_b h^2$ and $\Omega_c h^2$ disagree at 1.1$\sigma$ and 1.4$\sigma$, while the primary spatial curvature parameter value is $\Omega_k=-0.034\pm 0.057$ for BAO$^\prime$ data, which is 0.60$\sigma$ away from flat and in 0.99$\sigma$ tension with the P18 value $\Omega_k=-0.095\pm 0.024$, which is 4.0$\sigma$ away from flat. Regarding the derived parameters, we find that the $\Omega_m$, $H_0$, and $\sigma_8$ values are in 3.7$\sigma$, 2.9$\sigma$, and 1.5$\sigma$ disagreement. According to these results, P18 and BAO$^\prime$ data probably should not be jointly analyzed in the context of the untilted non-flat $\Lambda$CDM model. The results for the seven-parameter and the three-parameter untilted non-flat $\Lambda$CDM+$A_L$ model, obtained considering P18 and BAO$^\prime$ data, are in the lower half of Table \ref{tab:para_NL_BAO}. 
While there is a slight increase in the disagreement between the values of the primary parameters $\Omega_b h^2$ (1.3$\sigma$) and $\Omega_c h^2$ (1.6$\sigma$), there are significant decreases for the derived parameters $\Omega_m$ and $H_0$, but not for $\sigma_8$; these parameters now disagree by 0.93$\sigma$, 1.5$\sigma$, and 1.5$\sigma$, respectively. This is caused by the increase in the size of the error bars in the $A_L$-varying P18 case with respect to the corresponding values obtained with $A_L=1$. For the BAO$^\prime$ data primary spatial curvature parameter we find $\Omega_k=-0.035\pm 0.058$, which is 0.60$\sigma$ away from flat hypersurfaces and only in 0.64$\sigma$ tension with the P18 value $\Omega_k=-0.12\pm0.12$, which is now only 1.0$\sigma$ away from flat. According to these results, unlike in the $A_L=1$ case, in the $A_L$-varying case P18 and BAO$^\prime$ data can probably be jointly analyzed in the context of the untilted non-flat $\Lambda$CDM model. Note that in this case a joint analysis of P18+BAO$^\prime$ data favors closed geometry at 4.9$\sigma$, with $\Omega_k=-0.0073\pm0.0015$, although because of the lack of the tilt ($n_s$) degree of freedom this untilted non-flat $\Lambda$CDM+$A_L$ model does not provide a good fit to smaller-angular-scale P18 data, which is reflected in the large $\Delta$DIC and $\Delta$AIC$_c$ values for the P18+BAO$^\prime$ case in the lower half of Table \ref{tab:para_NL_BAO}. Comparing the seven-parameter and the four-parameter tilted non-flat $\Lambda$CDM Planck $P(q)$ model primary cosmological parameter constraints for P18 and BAO$^\prime$ data, we see, in the upper half of Table \ref{tab:para_NL_ns_BAO}, that the values of $\Omega_b h^2$ and $\Omega_c h^2$ are both in 1.1$\sigma$ disagreement. 
The BAO$^\prime$ data primary spatial curvature parameter value $\Omega_k=-0.033\pm 0.055$ is 0.6$\sigma$ away from flat and only in 0.17$\sigma$ tension with the P18 value $\Omega_k=-0.043\pm0.017$, which is 2.5$\sigma$ in favor of closed geometry. The derived parameters $\Omega_m$, $H_0$, and $\sigma_8$ are in 2.7$\sigma$, 2.3$\sigma$, and 1.2$\sigma$ tension. These results reveal that P18 and BAO$^\prime$ data cosmological constraints are somewhat inconsistent in the tilted non-flat $\Lambda$CDM Planck $P(q)$ model and these data probably should not be used jointly to constrain this model. Looking at the lower half of Table \ref{tab:para_NL_ns_BAO} we can compare results obtained for the eight-parameter and the four-parameter tilted non-flat $\Lambda$CDM+$A_L$ Planck $P(q)$ model from P18 and BAO$^\prime$ data respectively. We observe that the primary parameters $\Omega_b h^2$ and $\Omega_c h^2$ are in 1.6$\sigma$ and 1.5$\sigma$ tension. For the BAO$^\prime$ data primary spatial curvature parameter we find $\Omega_k= -0.026\pm 0.054$, which is only 0.48$\sigma$ away from flat and in 0.95$\sigma$ tension with the P18 value $-0.130\pm0.095$, which is 1.4$\sigma$ away from flat. Regarding the derived parameters we find that $\Omega_m$, $H_0$, and $\sigma_8$ are in 1.4$\sigma$, 2.7$\sigma$ and 1.6$\sigma$ disagreement. Compared to the $A_L = 1$ case, in the $A_L$-varying case we find a significant reduction only in the $\Omega_m$ tension, with most of the other parameter disagreements being more significant, which again suggests that P18 and BAO$^\prime$ data should not be jointly analyzed within the tilted non-flat $\Lambda$CDM+$A_L$ Planck $P(q)$ model. 
Comparing the seven-parameter and the four-parameter tilted non-flat $\Lambda$CDM new $P(q)$ model primary cosmological parameter constraints for P18 and BAO$^\prime$ data, from the upper half of Table \ref{tab:para_TNL_ns_BAO} we see that the values of $\Omega_b h^2$ and $\Omega_c h^2$ both disagree at 1.2$\sigma$. The BAO$^\prime$ data primary spatial curvature parameter value is $\Omega_k=-0.032\pm 0.059$, which is only a 0.54$\sigma$ deviation from flat and, similar to the Planck $P(q)$ model, is only in 0.016$\sigma$ disagreement with the P18 value $-0.033\pm 0.014$, which is 2.4$\sigma$ away from flat. Regarding the derived parameters $\Omega_m$, $H_0$, and $\sigma_8$, we find that their values disagree at 2.3$\sigma$, 2.1$\sigma$, and 1.1$\sigma$, respectively. While these disagreements are smaller than the ones found in the tilted non-flat $\Lambda$CDM Planck $P(q)$ model, they are still large enough to require that we more carefully test whether P18 and BAO$^\prime$ data can be jointly used to constrain cosmological parameters in this cosmological model. The results for the eight-parameter and the four-parameter tilted non-flat $\Lambda$CDM+$A_L$ new $P(q)$ model are in the lower half of Table \ref{tab:para_TNL_ns_BAO}, for P18 and BAO$^\prime$ data, respectively. As happens in the Planck $P(q)$ model, when the $A_L$ parameter is allowed to vary the tensions found for the primary parameters $\Omega_b h^2$ and $\Omega_c h^2$ do not decrease (in fact they slightly increase) with respect to the $A_L=1$ case, both now being 1.3$\sigma$. For the BAO$^\prime$ data primary spatial curvature parameter we find $\Omega_k= -0.035\pm 0.059$, which is 0.59$\sigma$ away from flat hypersurfaces and only in 0.52$\sigma$ tension with the P18 value $\Omega_k=-0.10\pm 0.11$, which is 0.91$\sigma$ away from flat. As for the values of the derived parameters $\Omega_m$, $H_0$, and $\sigma_8$, we find disagreements at 0.92$\sigma$, 1.9$\sigma$, and 1.3$\sigma$, respectively. 
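For reference, the $N\sigma$ disagreements quoted in this subsubsection can be understood as simple estimates that treat each pair of one-dimensional constraints as independent Gaussians; this is a shorthand indicator, not a full tension statistic:

```latex
% Approximate tension between two independent one-dimensional
% constraints \mu_1 \pm \sigma_1 and \mu_2 \pm \sigma_2:
\begin{equation}
  N\sigma \simeq \frac{\left|\mu_1 - \mu_2\right|}{\sqrt{\sigma_1^2 + \sigma_2^2}}.
\end{equation}
% Example, \Omega_k in the tilted non-flat \Lambda CDM new P(q) model:
% BAO' gives -0.032 +/- 0.059 and P18 gives -0.033 +/- 0.014, so
% N\sigma = 0.001/\sqrt{0.059^2 + 0.014^2} \simeq 0.016,
% the 0.016\sigma disagreement quoted above. A single constraint's
% deviation from flat is the special case \mu_2 = 0, \sigma_2 = 0,
% e.g. 0.032/0.059 \simeq 0.54\sigma.
```

The worked $\Omega_k$ numbers reproduce the 0.54$\sigma$ and 0.016$\sigma$ values quoted above for the tilted non-flat $\Lambda$CDM new $P(q)$ model.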
The tensions are reduced with respect to the case with $A_L=1$, due to the increase of the error bars, but are possibly still not small enough to allow the joint use of P18+BAO$^\prime$ data for constraining tilted non-flat $\Lambda$CDM+$A_L$ new $P(q)$ model cosmological parameters. We now comment on the consistency between the cosmological constraints obtained using the BAO data set (which contains some $f\sigma_8$ data points) and the P18 data cosmological constraints. Here we also have to deal with the $\sigma_8$ tension, namely the discrepancy between the larger value for $\sigma_8$ obtained when P18 data are considered and the typically smaller values that one gets from low-redshift structure formation data (the $f\sigma_8$ data points we consider) or from weak lensing measurements. Note that since BAO data include some $f\sigma_8$ measurements we allow for $\ln(10^{10}A_s)$ to vary in the BAO data only analyses (unlike the BAO$^\prime$ data only analyses where we fix the value of this parameter). We shall see that the tilted non-flat $\Lambda$CDM new $P(q)$ model is the model that best reconciles these measurements. Comparing the six-parameter and the four-parameter tilted flat $\Lambda$CDM primary cosmological parameter constraints for P18 and BAO data, shown in the upper half of Table \ref{tab:para_FL_BAO}, we see that the values of $\Omega_b h^2$ and $\Omega_c h^2$ are in 1.3$\sigma$ and 1.0$\sigma$ tension, respectively. A similar level of disagreement is found if we look at the values of the derived parameters. In particular, for $\Omega_m$, $H_0$, and $\sigma_8$ we find 1.3$\sigma$, 1.3$\sigma$, and 1.6$\sigma$ disagreement. Here the greatest disagreement is that affecting $\sigma_8$, which has to do with the $\sigma_8$ tension mentioned above. 
Considering the results presented in the lower half of Table \ref{tab:para_FL_BAO} for the seven-parameter and the four-parameter tilted flat $\Lambda$CDM+$A_L$ model, obtained for P18 and BAO data, respectively, we find that including a varying $A_L$ parameter does not decrease the primary parameter tensions found when $A_L=1$. For $\Omega_b h^2$ and $\Omega_c h^2$ the disagreement is now 1.4$\sigma$ and 1.1$\sigma$. On the other hand for the derived $\Omega_m$, $H_0$, and $\sigma_8$ we find that their corresponding values disagree at 0.50$\sigma$, 1.2$\sigma$, and 2.0$\sigma$. Once again, allowing $A_L$ to vary reduces the $\Omega_m$ disagreement and the largest disagreement is between the $\sigma_8$ values. Comparing the six-parameter and the five-parameter untilted non-flat $\Lambda$CDM model primary cosmological parameter constraints for P18 and BAO data, provided in the upper half of Table \ref{tab:para_NL_BAO}, we observe that the values of $\Omega_b h^2$ and $\Omega_c h^2$ show a disagreement of 1.1$\sigma$ and 1.4$\sigma$, respectively. The BAO data value for the primary spatial curvature parameter is $\Omega_k=-0.047\pm 0.059$, which is 0.80$\sigma$ away from flat hypersurfaces and in 0.75$\sigma$ tension with the P18 value $-0.095\pm 0.024$, which represents a 4.0$\sigma$ deviation from flat. The level of tension is worse for the derived parameters $\Omega_m$, $H_0$, and $\sigma_8$, the disagreements now being 3.7$\sigma$, 3.0$\sigma$, and 2.4$\sigma$. We may say that P18 and BAO data should not be jointly used to constrain cosmological parameters in the untilted non-flat $\Lambda$CDM model. Results for the seven-parameter and the five-parameter untilted non-flat $\Lambda$CDM+$A_L$ model for P18 and BAO data, respectively, can be seen in the lower half of Table \ref{tab:para_NL_BAO}. 
Again we do not observe a reduction in the tension for the primary parameters $\Omega_b h^2$ (1.2$\sigma$) and $\Omega_c h^2$ (1.4$\sigma$) compared with the results found for the $A_L = 1$ case. On the other hand, there is an important decrease for the derived parameters $\Omega_m$, $H_0$, and $\sigma_8$, the disagreement now being 0.94$\sigma$, 1.5$\sigma$, and 1.8$\sigma$, respectively. This is probably caused by the increase in the size of the error bars in the $A_L$-varying P18 case, with respect to the corresponding values obtained with $A_L=1$. For the BAO data primary spatial curvature parameter we find $\Omega_k=-0.050\pm 0.060$, which is 0.83$\sigma$ away from flat and in 0.52$\sigma$ tension with the P18 value $\Omega_k = -0.12\pm 0.12$, which is 1.0$\sigma$ in favor of a closed geometry. Comparing the seven-parameter and the five-parameter tilted non-flat $\Lambda$CDM Planck $P(q)$ model primary cosmological parameter constraints for P18 and BAO data, shown in the upper half of Table \ref{tab:para_NL_ns_BAO}, we see that the values of $\Omega_b h^2$ and $\Omega_c h^2$ are both in 1.2$\sigma$ tension. The BAO data primary spatial curvature parameter $\Omega_k=-0.046\pm 0.060$ is 0.77$\sigma$ away from flat hypersurfaces and, as in the BAO$^\prime$ case, in good agreement with the P18 result $-0.043\pm 0.017$ (which is 2.5$\sigma$ away from flat), differing from it by only 0.048$\sigma$. As for the derived parameters $\Omega_m$, $H_0$, and $\sigma_8$, we observe disagreements of 2.7$\sigma$, 2.3$\sigma$, and 1.5$\sigma$. These results reveal an inconsistency between P18 and BAO cosmological constraints, which probably means P18 and BAO data should not be used to jointly constrain cosmological parameters in the tilted non-flat $\Lambda$CDM Planck $P(q)$ model. We provide results for the eight-parameter and the five-parameter tilted non-flat $\Lambda$CDM+$A_L$ Planck $P(q)$ model, from P18 and BAO data, in the lower half of Table \ref{tab:para_NL_ns_BAO}. 
For the primary parameters $\Omega_b h^2$ and $\Omega_c h^2$ we find a tension between the P18 and BAO values of 1.3$\sigma$ and 1.2$\sigma$, respectively. For the BAO data primary spatial curvature parameter we find $\Omega_k= -0.045\pm 0.063$, which represents 0.71$\sigma$ evidence in favor of closed geometry and is only in 0.75$\sigma$ tension with the P18 value $-0.130\pm0.095$, which represents a 1.4$\sigma$ deviation from flat. Regarding the derived $\Omega_m$, $H_0$, and $\sigma_8$ parameters, the observed disagreements are 1.4$\sigma$, 2.5$\sigma$, and 1.8$\sigma$. The tension for $\Omega_m$ has decreased significantly with respect to the $A_L=1$ case; however, overall the disagreements are still large enough to argue against jointly analyzing P18 and BAO data in this cosmological model. Comparing the seven-parameter and the five-parameter tilted non-flat $\Lambda$CDM new $P(q)$ model primary cosmological parameter constraints for P18 and BAO data, shown in the upper half of Table \ref{tab:para_TNL_ns_BAO}, we see that the values of $\Omega_b h^2$ and $\Omega_c h^2$ are both in 1.1$\sigma$ disagreement. The BAO data value of the primary spatial curvature parameter is $\Omega_k=-0.051\pm 0.061$, which represents a 0.84$\sigma$ deviation from a flat geometry and is only in 0.29$\sigma$ disagreement with the P18 value $\Omega_k=-0.033\pm 0.014$, which is 2.4$\sigma$ away from flat. Regarding the derived parameters $\Omega_m$, $H_0$, and $\sigma_8$, we find 2.4$\sigma$, 2.1$\sigma$, and 1.2$\sigma$ disagreements between the corresponding values. It is necessary to further study the possible tension between P18 and BAO data within this model. Results for the eight-parameter and the five-parameter tilted non-flat $\Lambda$CDM+$A_L$ new $P(q)$ model, obtained from P18 and BAO data, can be seen in the lower half of Table \ref{tab:para_TNL_ns_BAO}. 
For the primary parameters $\Omega_b h^2$ and $\Omega_c h^2$ the disagreement is at 1.1$\sigma$ and 1.2$\sigma$, respectively. For the BAO data primary spatial curvature parameter we find $\Omega_k= -0.055\pm 0.060$, which represents 0.92$\sigma$ evidence in favor of closed geometry and is in only 0.36$\sigma$ disagreement with the P18 value $-0.10\pm 0.11$, which represents a 0.91$\sigma$ deviation from flat. Regarding the derived parameters $\Omega_m$, $H_0$, and $\sigma_8$, we find 0.92$\sigma$, 1.7$\sigma$, and 1.3$\sigma$ disagreements. The tensions for $H_0$ and $\Omega_m$ have decreased with respect to the $A_L=1$ case; however, they are still large enough to make one wonder whether P18 and BAO data can be jointly analyzed in the context of this model. In Tables \ref{tab:para_FL_BAO}-\ref{tab:para_TNL_ns_BAO}, $\chi^2_{\textrm{min}}$ (BAO/BAO$^{\prime}$) is the value of $\chi^2$ for BAO or BAO$^\prime$ data, respectively, at the best-fit position for BAO or BAO$^\prime$ data, while $\chi^2_{\textrm{BAO/BAO}^\prime}$ (at P18 B-F) is the value of $\chi^2$ for BAO or BAO$^{\prime}$ data evaluated at the best-fit position for P18 data. The values of $\chi_{\textrm{min}}^2$ (BAO/BAO$^\prime$) and $\chi_{\textrm{BAO/BAO}^{\prime}}^2$ (at P18 B-F) give a qualitative indication of the agreement or disagreement in the values of the cosmological parameters obtained by considering P18 data and by considering BAO/BAO$^\prime$ data. If the cosmological parameters agree one might expect that $\chi_{\textrm{min}}^2$ (BAO/BAO$^\prime$) $\simeq$ $\chi_{\textrm{BAO/BAO}^\prime}^2$ (at P18 B-F). We see that this is the case only for the tilted flat $\Lambda$CDM(+$A_L$) models for the BAO$^\prime$ data, but again emphasize that this is only a qualitative probe. Figures \ref{fig:like_FL_BAO}-\ref{fig:like_TNL_Alens_ns1_BAO} show one-dimensional likelihoods and two-dimensional contours for cosmological parameters obtained using P18, BAO$^\prime$, BAO, P18+BAO$^\prime$, and P18+BAO data. 
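As an illustration of the qualitative $\chi^2$ probe discussed above, consider the tilted non-flat $\Lambda$CDM$+A_L$ new $P(q)$ model with BAO$^\prime$ data, using the values listed in the lower half of Table \ref{tab:para_TNL_ns_BAO}:

```latex
% Qualitative probe: compare the BAO' chi^2 at its own best fit with
% the BAO' chi^2 evaluated at the P18 best-fit parameters.
\begin{equation}
  \chi^2_{\textrm{BAO}^\prime}\,(\textrm{at P18 B-F})
  - \chi^2_{\textrm{min}}\,(\textrm{BAO}^\prime)
  = 160.72 - 10.68 = 150.04,
\end{equation}
% far from the near-equality expected if P18 and BAO' data favored
% the same cosmological parameter values in this model.
```

The large excess reflects the P18 vs.\ BAO$^\prime$ parameter differences discussed above, but, as emphasized, this comparison is only a qualitative indicator.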
As mentioned above, BAO$^\prime$ data constraints (shown in green) and BAO data constraints (shown in grey) are less restrictive than P18 constraints (shown in dark blue), are unable to put tight constraints on the primary cosmological parameters (except for $\Omega_k$ in the three non-flat $\Lambda$CDM$+A_L$ models), in most cases overlap at 2$\sigma$ with each other, and in many cases also overlap with the P18 data constraints. Since the BAO data set contains more measurements than the BAO$^\prime$ data set, the BAO constraints are typically more restrictive, and BAO data, which include $f\sigma_8$ measurements, are much more effective at constraining $\sigma_8$ than are BAO$^\prime$ data. Figures \ref{fig:like_FL_BAO} and \ref{fig:like_FL_Alens_BAO} are for tilted flat $\Lambda$CDM (+$A_L$) models. The $\sim 1 \sigma$ disagreements between the BAO$^\prime$/BAO constraints and those obtained with P18 data, discussed above, can be clearly seen in the contour plots. For the tilted flat $\Lambda$CDM model the larger disagreements are in the panels for the derived cosmological parameters, with the largest being for $\sigma_8$. Some of these disagreements decrease when the $A_L$ parameter is allowed to vary. Looking at the contour plots for the untilted non-flat $\Lambda$CDM (+$A_L$) models (see Figs.\ \ref{fig:like_NL_BAO} and \ref{fig:like_NL_Alens_BAO}) we observe non-overlapping contours in those panels that involve the derived parameters $\Omega_m$ and $H_0$. These disagreements are smaller when $A_L$ is allowed to vary. This may indicate that in the context of this cosmological model we may jointly analyze P18 data with BAO$^\prime$/BAO data only when $A_L$ is allowed to vary. 
Figures \ref{fig:like_NL_ns_BAO} and \ref{fig:like_NL_Alens_ns_BAO} show cosmological parameter constraints for the tilted non-flat $\Lambda$CDM (+$A_L$) Planck $P(q)$ models, while those for the tilted non-flat $\Lambda$CDM (+$A_L$) new $P(q)$ models are displayed in Figs.\ \ref{fig:like_TNL_ns1_BAO} and \ref{fig:like_TNL_Alens_ns1_BAO}. As expected, considering the results discussed above in this subsubsection, the contour plots for these tilted non-flat models are quite similar. We see that in the panels that involve the primary cosmological parameters there is overlap at 1$\sigma$, not only when $A_L$ is allowed to vary but also when $A_L=1$. When $A_L=1$, for the Planck $P(q)$ model some P18 and BAO$^\prime$/BAO data constraint contours that involve $\Omega_m$ and $H_0$ do not overlap even at 2$\sigma$. This is not true for the new $P(q)$ model with $A_L=1$, where overlap is reached at $< 2 \sigma$. This may indicate that the new $P(q)$ model is better able to reconcile P18 and BAO$^\prime$/BAO data. In view of the results discussed in this subsubsection, further tests are needed to properly quantify the level of disagreement, in the context of non-flat models, between P18 data and BAO$^\prime$/BAO data cosmological constraints. We return to this issue in Sec.\ \ref{subsec:data_set_tensions}. \begin{table*} \caption{Mean and 68.3\% confidence limits of tilted flat $\Lambda\textrm{CDM}$ (+$A_L$) model parameters constrained by non-CMB, P18, and P18+non-CMB data sets. $H_0$ has units of km s$^{-1}$ Mpc$^{-1}$. 
} \begin{ruledtabular} \begin{tabular}{lccccc} \\[-1mm] & \multicolumn{3}{c}{Tilted flat $\Lambda$CDM} & \multicolumn{2}{c}{Tilted flat $\Lambda$CDM$+A_L$} \\[+1mm] \cline{2-4}\cline{5-6}\\[-1mm] Parameter & Non-CMB & P18 & P18+non-CMB & P18 & P18+non-CMB \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.0256 \pm 0.0025$ & $0.02236 \pm 0.00015$ & $0.02250 \pm 0.00012$ & $0.02259 \pm 0.00017$ & $0.02265 \pm 0.00014$ \\[+1mm] $\Omega_c h^2$ & $0.1129 \pm 0.0062$ & $0.1202 \pm 0.0014$ & $0.11825 \pm 0.00087$ & $0.1180 \pm 0.0015$ & $0.11736 \pm 0.00092$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.0323 \pm 0.0082$ & $1.04090 \pm 0.00031$ & $1.04112 \pm 0.00029$ & $1.04114 \pm 0.00032$ & $1.04120 \pm 0.00029$ \\[+1mm] $\tau$ & $0.0542$ & $0.0542 \pm 0.0079$ & $0.0548 \pm 0.0076$ & $0.0496 \pm 0.0082$ & $0.0484 \pm 0.0083$ \\[+1mm] $n_s$ & $0.9649$ & $0.9649 \pm 0.0043$ & $0.9692 \pm 0.0036$ & $0.9710 \pm 0.0050$ & $0.9726 \pm 0.0038$ \\[+1mm] $\ln(10^{10} A_s)$ & $3.10 \pm 0.11$ & $3.044 \pm 0.016$ & $3.041 \pm 0.015$ & $3.030 \pm 0.017$ & $3.026 \pm 0.017$ \\[+1mm] $A_{L}$ & $\cdots$ & $\cdots$ & $\cdots$ & $1.181 \pm 0.067$ & $1.201 \pm 0.061$ \\[+1mm] \hline \\[-1mm] $H_0$ & $69.8 \pm 1.7$ & $67.28 \pm 0.61$ & $68.15 \pm 0.39$ & $68.31 \pm 0.71$ & $68.62 \pm 0.43$ \\[+1mm] $\Omega_m$ & $0.286 \pm 0.011$ & $0.3165 \pm 0.0084$ & $0.3045 \pm 0.0051$ & $0.3029 \pm 0.0093$ & $0.2988 \pm 0.0054$ \\[+1mm] $\sigma_8$ & $0.787 \pm 0.027$ & $0.8118 \pm 0.0074$ & $0.8048 \pm 0.0068$ & $0.7997 \pm 0.0088$ & $0.7961 \pm 0.0074$ \\[+1mm] \hline \\[-1mm] $\chi_{\textrm{min}}^2$ (Total) & $1106.54$ & $2765.80$ & $3879.35$ & $2756.12$ & $3865.90$ \\[+1mm] $\chi_{\textrm{min}}^2$ (Non-CMB) & $1106.54$ & $\cdots$ & $1111.57$ & $\cdots$ & $1109.54$ \\[+1mm] $\textrm{DIC}$ & $1114.45$ & $2817.93$ & $3931.02$ & $2812.41$ & $3922.11$ \\[+1mm] $\Delta\textrm{DIC}$ & $\cdots$ & $\cdots$ & $\cdots$ & $-5.52$ & $-8.91$ \\[+1mm] $\textrm{AIC}_c$ & $1114.54$ & $2819.80$ & $3933.35$ & $2812.1$ & $3921.90$ 
\\[+1mm] $\Delta\textrm{AIC}_c$ & $\cdots$ & $\cdots$ & $\cdots$ & $-7.68$ & $-11.45$ \\[+1mm] \end{tabular} \\[+1mm] \begin{flushleft} Note: $\Delta\textrm{DIC}$ ($\Delta\textrm{AIC}_c$) indicates an excess value relative to that of the tilted flat $\Lambda$CDM model constrained with the same data. \end{flushleft} \end{ruledtabular} \label{tab:para_FL_P18_nonCMB} \end{table*} \begin{table*} \caption{Mean and 68.3\% confidence limits of untilted non-flat $\Lambda\textrm{CDM}$ (+$A_L$) model parameters constrained by non-CMB, P18, and P18+non-CMB data sets. $H_0$ has units of km s$^{-1}$ Mpc$^{-1}$. } \begin{ruledtabular} \begin{tabular}{lccccc} \\[-1mm] & \multicolumn{3}{c}{Untilted non-flat $\Lambda$CDM} & \multicolumn{2}{c}{Untilted non-flat $\Lambda$CDM$+A_L$} \\[+1mm] \cline{2-4}\cline{5-6}\\[-1mm] Parameter & Non-CMB & P18 & P18+non-CMB & P18 & P18+non-CMB \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.0243 \pm 0.0033$ & $0.02320 \pm 0.00015$ & $0.02300 \pm 0.00014$ & $0.02320 \pm 0.00015$ & $0.02320 \pm 0.00015$ \\[+1mm] $\Omega_c h^2$ & $0.120 \pm 0.013$ & $0.11098 \pm 0.00088$ & $0.11161 \pm 0.00086$ & $0.11097 \pm 0.00087$ & $0.11097 \pm 0.00085$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.10 \pm 0.10$ & $1.04204 \pm 0.00030$ & $1.04189 \pm 0.00029$ & $1.04202 \pm 0.00030$ & $1.04199 \pm 0.00030$ \\[+1mm] $\tau$ & $0.0543$ & $0.0543 \pm 0.0091$ & $0.0717 \pm 0.0095$ & $0.0540 \pm 0.0087$ & $0.0562 \pm 0.0086$ \\[+1mm] $\Omega_k$ & $-0.033 \pm 0.050$ & $-0.095 \pm 0.024$ & $-0.0062 \pm 0.0014$ & $-0.12 \pm 0.12$ & $-0.0062 \pm 0.0014$ \\[+1mm] $\ln(10^{10} A_s)$ & $2.87 \pm 0.34$ & $3.021 \pm 0.019$ & $3.057 \pm 0.019$ & $3.020 \pm 0.018$ & $3.024 \pm 0.018$ \\[+1mm] $A_{L}$ & $\cdots$ & $\cdots$ & $\cdots$ & $1.08 \pm 0.27$ & $1.324 \pm 0.063$ \\[+1mm] \hline \\[-1mm] $H_0$ & $70.2 \pm 1.7$ & $47.1 \pm 3.2$ & $68.07 \pm 0.56$ & $52 \pm 18$ & $68.45 \pm 0.58$ \\[+1mm] $\Omega_m$ & $0.294 \pm 0.018$ & $0.617 \pm 0.082$ & $0.2920 \pm 0.0050$ & $0.70 \pm 0.42$ & 
$0.2878 \pm 0.0050$ \\[+1mm] $\sigma_8$ & $0.771 \pm 0.034$ & $0.730 \pm 0.017$ & $0.7921 \pm 0.0085$ & $0.721 \pm 0.053$ & $0.7759 \pm 0.0078$ \\[+1mm] \hline \\[-1mm] $\chi_{\textrm{min}}^2$ (Total)& $1106.53$ & $2789.77$ & $3926.27$ & $2787.76$ & $3895.24$ \\[+1mm] $\chi_{\textrm{min}}^2$ (Non-CMB) & $1106.53$ & $\cdots$ & $1107.71$ & $\cdots$ & $1107.45$ \\[+1mm] $\textrm{DIC}$ & $1116.95$ & $2847.14$ & $3982.38$ & $2846.45$ & $3954.21$ \\[+1mm] $\Delta\textrm{DIC}$ & $2.50$ & $29.21$ & $51.36$ & $28.52$ & $23.19$ \\[+1mm] $\textrm{AIC}_c$ & $1116.53$ & $2843.77$ & $3980.27$ & $2843.76$ & $3951.24$ \\[+1mm] $\Delta\textrm{AIC}_c$ & $1.99$ & $23.97$ & $46.92$ & $23.96$ & $17.89$ \\[+1mm] \end{tabular} \\[+1mm] \begin{flushleft} Note: $\Delta\textrm{DIC}$ ($\Delta\textrm{AIC}_c$) indicates an excess value relative to that of the tilted flat $\Lambda$CDM model constrained with the same data. \end{flushleft} \end{ruledtabular} \label{tab:para_NL_P18_nonCMB} \end{table*} \begin{table*} \caption{Mean and 68.3\% confidence limits of Planck-$P(q)$-based tilted non-flat $\Lambda\textrm{CDM}$ ($+A_L$) model parameters constrained by non-CMB, P18, and P18+non-CMB data sets. $H_0$ has units of km s$^{-1}$ Mpc$^{-1}$. 
} \begin{ruledtabular} \begin{tabular}{lccccc} \\[-1mm] & \multicolumn{3}{c}{Tilted non-flat $\Lambda$CDM Planck $P(q)$} & \multicolumn{2}{c}{Tilted non-flat $\Lambda$CDM$+A_L$ Planck $P(q)$} \\[+1mm] \cline{2-4}\cline{5-6}\\[-1mm] Parameter & Non-CMB & P18 & P18+non-CMB & P18 & P18+non-CMB \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.0242 \pm 0.0033$ & $0.02260 \pm 0.00017$ & $0.02248 \pm 0.00015$ & $0.02258 \pm 0.00017$ & $0.02268 \pm 0.00017$ \\[+1mm] $\Omega_c h^2$ & $0.120 \pm 0.012$ & $0.1181 \pm 0.0015$ & $0.1185 \pm 0.0013$ & $0.1183 \pm 0.0015$ & $0.1170 \pm 0.0014$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.10 \pm 0.11$ & $1.04116 \pm 0.00032$ & $1.04107 \pm 0.00031$ & $1.04116 \pm 0.00033$ & $1.04125 \pm 0.00032$ \\[+1mm] $\tau$ & $0.0483$ & $0.0483 \pm 0.0083$ & $0.0543 \pm 0.0077$ & $0.0478 \pm 0.0081$ & $0.0485 \pm 0.0087$ \\[+1mm] $\Omega_k$ & $-0.032 \pm 0.051$ & $-0.043 \pm 0.017$ & $0.0004 \pm 0.0017$ & $-0.130 \pm 0.095$ & $-0.0006 \pm 0.0017$ \\[+1mm] $n_s$ & $0.9706$ & $0.9706 \pm 0.0047$ & $0.9687 \pm 0.0043$ & $0.9704 \pm 0.0048$ & $0.9735 \pm 0.0046$ \\[+1mm] $\ln(10^{10} A_s)$ & $2.90 \pm 0.34$ & $3.027 \pm 0.017$ & $3.040 \pm 0.016$ & $3.027 \pm 0.017$ & $3.025 \pm 0.018$ \\[+1mm] $A_{L}$ & $\cdots$ & $\cdots$ & $\cdots$ & $0.88 \pm 0.15$ & $1.203 \pm 0.062$ \\[+1mm] \hline \\[-1mm] $H_0$ & $70.1 \pm 1.7$ & $54.5 \pm 3.6$ & $68.25 \pm 0.56$ & $45 \pm 11$ & $68.48 \pm 0.56$ \\[+1mm] $\Omega_m$ & $0.294 \pm 0.018$ & $0.481 \pm 0.062$ & $0.3040 \pm 0.0055$ & $0.80 \pm 0.35$ & $0.2994 \pm 0.0055$ \\[+1mm] $\sigma_8$ & $0.771 \pm 0.035$ & $0.775 \pm 0.015$ & $0.8055 \pm 0.0076$ & $0.733 \pm 0.045$ & $0.7946 \pm 0.0088$ \\[+1mm] \hline \\[-1mm] $\chi_{\textrm{min}}^2$ (Total) & $1106.53$ & $2754.73$ & $3878.77$ & $2754.99$ & $3865.53$ \\[+1mm] $\chi_{\textrm{min}}^2$ (Non-CMB) & $1106.53$ & $\cdots$ & $1111.36$ & $\cdots$ & $1109.27$ \\[+1mm] $\textrm{DIC}$ & $1116.92$ & $2810.59$ & $3933.33$ & $2811.63$ & $3924.07$ \\[+1mm] $\Delta\textrm{DIC}$ & 
$2.47$ & $-7.34$ & $2.31$ & $-6.30$ & $-6.95$ \\[+1mm] $\textrm{AIC}_c$ & $1116.53$ & $2810.73$ & $3934.77$ & $2812.99$ & $3923.53$ \\[+1mm] $\Delta\textrm{AIC}_c$ & $1.99$ & $-9.07$ & $1.42$ & $-6.81$ & $-9.82$ \\[+1mm] \end{tabular} \\[+1mm] \begin{flushleft} Note: $\Delta\textrm{DIC}$ ($\Delta\textrm{AIC}_c$) indicates an excess value relative to that of the tilted flat $\Lambda$CDM model constrained with the same data. \end{flushleft} \end{ruledtabular} \label{tab:para_NL_ns_P18_nonCMB} \end{table*} \begin{table*} \caption{Mean and 68.3\% confidence limits of new-$P(q)$-based tilted non-flat $\Lambda\textrm{CDM}$ ($+A_L$) model parameters constrained by non-CMB, P18, and P18+non-CMB data sets. $H_0$ has units of km s$^{-1}$ Mpc$^{-1}$. } \begin{ruledtabular} \begin{tabular}{lccccc} \\[-1mm] & \multicolumn{3}{c}{Tilted non-flat $\Lambda$CDM new $P(q)$} & \multicolumn{2}{c}{Tilted non-flat $\Lambda$CDM$+A_L$ new $P(q)$} \\[+1mm] \cline{2-4}\cline{5-6}\\[-1mm] Parameter & Non-CMB & P18 & P18+non-CMB & P18 & P18+non-CMB \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.0241 \pm 0.0033$ & $0.02255 \pm 0.00017$ & $0.02249 \pm 0.00015$ & $0.02257 \pm 0.00017$ & $0.02269 \pm 0.00016$ \\[+1mm] $\Omega_c h^2$ & $0.120 \pm 0.013$ & $0.1188 \pm 0.0015$ & $0.1184 \pm 0.0013$ & $0.1187 \pm 0.0016$ & $0.1170 \pm 0.0013$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.11 \pm 0.11$ & $1.04109 \pm 0.00032$ & $1.04108 \pm 0.00031$ & $1.04111 \pm 0.00033$ & $1.04125 \pm 0.00032$\\[+1mm] $\tau$ & $0.0525$ & $0.0525 \pm 0.0083$ & $0.0549 \pm 0.0077$ & $0.0512 \pm 0.0086$ & $0.0490 \pm 0.0086$ \\[+1mm] $\Omega_k$ & $-0.036 \pm 0.051$ & $-0.033 \pm 0.014$ & $0.0003 \pm 0.0017$ & $-0.10 \pm 0.11$ & $-0.0006 \pm 0.0017$\\[+1mm] $n_s$ & $0.9654$ & $0.9654 \pm 0.0045$ & $0.9684 \pm 0.0041$ & $0.9654 \pm 0.0057$ & $0.9730 \pm 0.0043$ \\[+1mm] $\ln(10^{10} A_s)$ & $2.88 \pm 0.34$ & $3.039 \pm 0.017$ & $3.042 \pm 0.016$ & $3.036 \pm 0.018$ & $3.026 \pm 0.018$ \\[+1mm] $A_{L}$ & $\cdots$ & $\cdots$ & 
$\cdots$ & $0.94 \pm 0.20$ & $1.204 \pm 0.061$ \\[+1mm] \hline \\[-1mm] $H_0$ & $70.1 \pm 1.8$ & $56.9 \pm 3.6$ & $68.21 \pm 0.55$ & $51 \pm 14$ & $68.47 \pm 0.56$ \\[+1mm] $\Omega_m$ & $0.295 \pm 0.018$ & $0.444 \pm 0.055$ & $0.3043 \pm 0.0054$ & $0.70 \pm 0.43$ & $0.2994 \pm 0.0056$ \\[+1mm] $\sigma_8$ & $0.770 \pm 0.035$ & $0.786 \pm 0.014$ & $0.8057 \pm 0.0074$ & $0.752 \pm 0.052$ & $0.7948 \pm 0.0083$ \\[+1mm] \hline \\[-1mm] $\chi_{\textrm{min}}^2$ (Total)& $1106.49$ & $2757.38$ & $3878.76$ & $2756.33$ & $3865.41$ \\[+1mm] $\chi_{\textrm{min}}^2$ (Non-CMB) & $1106.49$ & $\cdots$ & $1111.36$ & $\cdots$ & $1109.32$ \\[+1mm] $\textrm{DIC}$ & $1117.31$ & $2811.54$ & $3932.56$ & $2814.83$ & $3923.86$ \\[+1mm] $\Delta\textrm{DIC}$ & $2.86$ & $-6.39$ & $1.54$ & $-3.10$ & $-7.16$ \\[+1mm] $\textrm{AIC}_c$ & $1116.49$ & $2813.38$ & $3934.76$ & $2814.33$ & $3923.41$ \\[+1mm] $\Delta\textrm{AIC}_c$ & $1.95$ & $-6.42$ & $1.41$ & $-5.47$ & $-9.94$ \\[+1mm] \end{tabular} \\[+1mm] \begin{flushleft} Note: $\Delta\textrm{DIC}$ ($\Delta\textrm{AIC}_c$) indicates an excess value relative to that of the tilted flat $\Lambda$CDM model constrained with the same data. \end{flushleft} \end{ruledtabular} \label{tab:para_TNL_P18_nonCMB} \end{table*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_FL_P18_nonCMBv2_fig24.pdf}} \caption{Likelihoods of the tilted flat $\Lambda$CDM model parameters constrained by P18, non-CMB, and P18+non-CMB data sets. } \label{fig:like_FL_P18_nonCMB} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_FL_Alens_P18_nonCMBv2_fig25.pdf}} \caption{Likelihoods of the tilted flat $\Lambda$CDM$+A_L$ model parameters constrained by P18, non-CMB, and P18+non-CMB data sets. The likelihoods for the non-CMB data set, which do not depend on $A_L$, are the same as in Fig.\ \ref{fig:like_FL_P18_nonCMB}. 
} \label{fig:like_FL_Alens_P18_nonCMB} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_NL_P18_nonCMBv2_fig26.pdf}} \caption{Likelihoods of the untilted non-flat $\Lambda$CDM model parameters constrained by P18, non-CMB, and P18+non-CMB data sets. } \label{fig:like_NL_P18_nonCMB} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_NL_Alens_P18_nonCMBv2_fig27.pdf}} \caption{Likelihoods of the untilted non-flat $\Lambda$CDM$+A_L$ model parameters constrained by P18, non-CMB, and P18+non-CMB data sets. The likelihoods for the non-CMB data set, which do not depend on $A_L$, are the same as in Fig.\ \ref{fig:like_NL_P18_nonCMB}. } \label{fig:like_NL_Alens_P18_nonCMB} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_NL_ns_P18_nonCMBv2_fig28.pdf}} \caption{Likelihoods of the tilted non-flat $\Lambda$CDM model [with Planck $P(q)$] parameters constrained by P18, non-CMB, and P18+non-CMB data sets. } \label{fig:like_NL_ns_P18_nonCMB} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_NL_Alens_ns_P18_nonCMBv2_fig29.pdf}} \caption{Likelihoods of the tilted non-flat $\Lambda$CDM$+A_L$ model [with Planck $P(q)$] parameters constrained by P18, non-CMB, and P18+non-CMB data sets. The likelihoods for the non-CMB data set, which do not depend on $A_L$, are the same as in Fig.\ \ref{fig:like_NL_ns_P18_nonCMB}. } \label{fig:like_NL_Alens_ns_P18_nonCMB} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_TNL_ns1_P18_nonCMBv2_fig30.pdf}} \caption{Likelihoods of the tilted non-flat $\Lambda$CDM model [with new $P(q)$] parameters constrained by P18, non-CMB, and P18+non-CMB data sets. 
} \label{fig:like_TNL_P18_nonCMB} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_TNL_Alens_ns1_P18_nonCMBv2_fig31.pdf}} \caption{Likelihoods of the tilted non-flat $\Lambda$CDM$+A_L$ model [with new $P(q)$] parameters constrained by P18, non-CMB, and P18+non-CMB data sets. The likelihoods for the non-CMB data set, which do not depend on $A_L$, are the same as in Fig.\ \ref{fig:like_TNL_P18_nonCMB}. } \label{fig:like_TNL_Alens_P18_nonCMB} \end{figure*} \subsubsection{Comparing P18 data and non-CMB data cosmological constraints}\label{sec:P18_vs_non-CMB} In the previous subsubsection we compared BAO and BAO$^\prime$ data cosmological constraints to those obtained from P18 data. In the non-flat models with $A_L =1$ there is somewhat significant disagreement between the values of the cosmological parameters (especially the derived parameters $\Omega_m$, $H_0$, and $\sigma_8$) determined using P18 data and those determined from BAO or BAO$^\prime$ data. This disagreement motivates additional tests to decide whether P18 data and BAO$^{\prime}$/BAO data can be used together to constrain parameters of the non-flat models. While both P18 data and BAO$^{\prime}$/BAO data favor negative $\Omega_k$ values, BAO$^{\prime}$/BAO data favor higher values of $H_0$ and lower values of $\Omega_m$ relative to the values obtained in the P18 analysis. Allowing for a varying $A_L$ parameter resolves these tensions, which may indicate that we can only jointly analyze P18 data and BAO$^{\prime}$/BAO data in the non-flat models when $A_L$ is allowed to vary. To further examine these inconsistencies, in this subsubsection we compare non-CMB data (which include BAO as well as BAO$^\prime$ data) cosmological constraints to those obtained from P18 data. (Prior to jointly analyzing P18+non-CMB data, we need to determine whether P18 and non-CMB data cosmological constraints are mutually consistent.) 
This allows us to examine how the inclusion of SNIa, $H(z)$, and $f\sigma_8$ data affects the P18 data vs.\ BAO$^\prime$/BAO data conclusions of Sec.\ \ref{sec:P18_vs_BAO} and provides a different, perhaps more expansive, test of the consistency of cosmological parameters obtained from high-redshift data and from low-redshift data. In Sec.\ \ref{subsec:data_set_tensions} we use two other statistical estimators to examine whether or not P18 and non-CMB data are in tension. The cosmological parameter mean values and error bars favored by the P18, non-CMB, and P18+non-CMB data sets are summarized in Tables \ref{tab:para_FL_P18_nonCMB}-\ref{tab:para_TNL_P18_nonCMB} for the tilted flat $\Lambda$CDM (+$A_L$) models, the untilted non-flat $\Lambda$CDM (+$A_L$) models, the tilted non-flat $\Lambda$CDM (+$A_L$) models with the Planck $P(q)$, and the tilted non-flat $\Lambda$CDM ($+A_L$) models with the new $P(q)$, respectively. Likelihood distributions of cosmological parameters of the four models with $A_L=1$ and with $A_L$ varying are shown in Figs.\ \ref{fig:like_FL_P18_nonCMB}-\ref{fig:like_TNL_Alens_P18_nonCMB} for the P18, non-CMB, and P18+non-CMB data sets. Since non-CMB data do not have the ability to constrain $\tau$ or $n_s$, we set their values to those found in the corresponding P18 data analysis. $A_L$ does not affect predictions for the non-CMB measurements we study so we do not include $A_L$ in the non-CMB data analyses. (We saw, in Sec.\ \ref{sec:P18_vs_BAO}, that BAO$^\prime$/BAO data constraints for $A_L = 1$ and for varying $A_L$ were very similar, see Tables \ref{tab:para_FL_BAO}-\ref{tab:para_TNL_ns_BAO}.) 
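For reference, the $N\sigma$ levels of (dis)agreement quoted in this comparison between two independent one-dimensional constraints $\mu_1 \pm \sigma_1$ and $\mu_2 \pm \sigma_2$ are consistent with the standard Gaussian estimator
\begin{equation}
    N_{\sigma} = \frac{\left|\mu_1 - \mu_2\right|}{\sqrt{\sigma_1^2 + \sigma_2^2}};
\end{equation}
for example, the non-CMB and P18 $\Omega_k$ constraints in the tilted non-flat $\Lambda$CDM Planck $P(q)$ model, $-0.032 \pm 0.051$ and $-0.043 \pm 0.017$, differ by $0.011/\sqrt{0.051^2 + 0.017^2} \simeq 0.20\sigma$.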
From Tables \ref{tab:para_NL_P18_nonCMB}-\ref{tab:para_TNL_P18_nonCMB} we see that, in the six non-flat $\Lambda$CDM (+$A_L$) models, the constraints set by non-CMB data on $H_0$ and $\Omega_m$ are tighter than those imposed by P18 data, and that, in the three non-flat $\Lambda$CDM+$A_L$ models, the constraints set by non-CMB data on $\Omega_k$ and $\sigma_8$ are tighter than those imposed by P18 data. P18 data more restrictively constrain all other parameters in all eight cosmological models. As we discuss below, in each of the three non-flat models with $A_L = 1$ there is at least one parameter for which the P18 data and non-CMB data cosmological constraints disagree at more than 3$\sigma$, and in the tilted flat $\Lambda$CDM model with $A_L = 1$ and in the tilted non-flat $\Lambda$CDM$+A_L$ model with the Planck $P(q)$ there is one parameter for which they disagree at more than 2$\sigma$. From Tables \ref{tab:para_NL_P18_nonCMB}-\ref{tab:para_TNL_P18_nonCMB} we see that both P18 data and non-CMB data favor negative values of the curvature parameter, with non-CMB data only weakly favoring closed spatial hypersurfaces, at 0.66$\sigma$ to 0.71$\sigma$. However, we should take into account the geometrical degeneracy among $H_0$, $\Omega_k$, and $\Omega_m$ and note that, like both BAO$^\prime$ and BAO data, non-CMB data favor higher values of $H_0$ and lower values of $\Omega_m$ than do P18 data, and this is what causes the P18 and non-CMB cosmological constraint differences. The dominant component of non-CMB data is BAO$^\prime$/BAO data. This is why the cosmological constraints obtained from BAO$^\prime$/BAO data are similar to the ones obtained from the complete non-CMB low-redshift data set. However, there are some differences between these sets of constraints that are worth mentioning.
As expected, the error bars obtained from non-CMB data are smaller than the ones from BAO$^\prime$/BAO data. While similar values for $\Omega_m$ are found in both cases, the values of $H_0$ favored by non-CMB data are $\sim 1\sigma$ smaller than those favored by BAO$^\prime$/BAO data. BAO$^\prime$ data favor closed spatial hypersurfaces at 0.48$\sigma$ to 0.60$\sigma$ while BAO data favor them at 0.71$\sigma$ to 0.96$\sigma$, which are on either side of the 0.66$\sigma$ to 0.71$\sigma$ favoring of closed spatial hypersurfaces from non-CMB data. We also find smaller values for the $\sigma_8$ parameter when non-CMB data are considered, with BAO$^\prime$ data favoring 1.1$\sigma$ to 1.3$\sigma$ larger values while BAO data favor $\sim 1.3 \sigma$ larger values in the non-flat models and a 1.9$\sigma$ larger value in the tilted flat $\Lambda$CDM case. This might be because the non-CMB data set contains additional $f\sigma_8$ data points that favor lower values of $\sigma_8$ than those in the BAO data set. Comparing the six-parameter and the four-parameter tilted flat $\Lambda$CDM model primary cosmological parameter constraints for P18 and non-CMB data, shown in the left half of Table \ref{tab:para_FL_P18_nonCMB}, we see that the values of $\Omega_b h^2$, $\Omega_c h^2$, and $\theta_\textrm{MC}$ are in mild disagreement, at 1.3$\sigma$, 1.1$\sigma$, and 1.0$\sigma$, respectively. We also observe a more significant 2.2$\sigma$ level of tension in the derived $\Omega_m$ values, the derived $H_0$ values differ by 1.4$\sigma$, and $\sigma_8$ values show a better agreement, disagreeing by only 0.89$\sigma$. Comparing the seven-parameter and the four-parameter tilted flat $\Lambda$CDM+$A_L$ model primary cosmological parameter constraints for P18 and non-CMB data, shown in Table \ref{tab:para_FL_P18_nonCMB}, we see that the values of $\Omega_b h^2$ and $\theta_{\textrm{MC}}$ are in 1.2$\sigma$ and 1.1$\sigma$ tension, respectively.
As for the derived parameters, we find $\Omega_m$ values differ by 1.2$\sigma$ while $H_0$ and $\sigma_8$ values are in only 0.81$\sigma$ and 0.45$\sigma$ disagreement. So unlike in the BAO data and the BAO$^\prime$ data comparisons with P18 data of Sec.\ \ref{sec:P18_vs_BAO}, the inclusion of a varying $A_L$ reduces the disagreement for all three derived parameters, but less successfully for $\Omega_m$ in the non-CMB case here compared to the BAO/BAO$^\prime$ cases there. P18 and non-CMB data results obtained for the six-parameter and the four-parameter untilted non-flat $\Lambda$CDM model, shown in the left half of Table \ref{tab:para_NL_P18_nonCMB}, indicate mostly less significant differences in primary parameters but more significant differences in derived parameters than found in the tilted flat $\Lambda$CDM model. The primary spatial curvature parameter value is $\Omega_k=-0.033\pm 0.050$ for non-CMB data, which is 0.66$\sigma$ away from flat and in 1.1$\sigma$ tension with the P18 value $\Omega_k=-0.095\pm 0.024$, which is 4.0$\sigma$ away from flat. Regarding the derived parameters, we find that $H_0$, $\Omega_m$, and $\sigma_8$ values are in 6.4$\sigma$, 3.8$\sigma$, and 1.1$\sigma$ disagreement. These results probably mean that P18 and non-CMB data should not be jointly analyzed in the context of the untilted non-flat $\Lambda$CDM model. The results for the seven-parameter and the four-parameter untilted non-flat $\Lambda$CDM+$A_L$ model, obtained considering P18 and non-CMB data, are in Table \ref{tab:para_NL_P18_nonCMB}. There is a slight increase in the disagreement between the values of the primary spatial curvature parameter $\Omega_k$ (now 0.67$\sigma$) and decreases for the derived parameters $H_0$, $\Omega_m$, and $\sigma_8$, that now disagree by 1.0$\sigma$, 0.97$\sigma$, and 0.79$\sigma$ respectively. This is caused by the larger error bars in the $A_L$-varying P18 case compared to the corresponding values obtained with $A_L=1$. 
According to these results, unlike in the $A_L=1$ case, in the $A_L$-varying case P18 and non-CMB data can probably be jointly analyzed in the context of the untilted non-flat $\Lambda$CDM model. Note that in this case a joint analysis of P18+non-CMB data favors closed geometry at 4.4$\sigma$, with $\Omega_k=-0.0062\pm0.0014$, although because of the lack of the tilt ($n_s$) degree of freedom this untilted non-flat $\Lambda$CDM+$A_L$ model does not provide a good fit to smaller-angular-scale P18 data, which is reflected in the large $\Delta$DIC and $\Delta$AIC$_c$ values for the P18+non-CMB case in the lower half of Table \ref{tab:para_NL_P18_nonCMB}. Comparing the seven-parameter and the five-parameter tilted non-flat $\Lambda$CDM Planck $P(q)$ model primary cosmological parameter constraints for P18 and non-CMB data, we see, in the left half of Table \ref{tab:para_NL_ns_P18_nonCMB}, that the primary parameter values do not much disagree. The non-CMB data primary spatial curvature parameter value $\Omega_k=-0.032\pm 0.051$ is 0.63$\sigma$ away from flat and only in 0.20$\sigma$ tension with the P18 value $\Omega_k=-0.043\pm0.017$, which is 2.5$\sigma$ in favor of closed geometry. The derived parameters $H_0$, $\Omega_m$, and $\sigma_8$ are in 3.9$\sigma$, 2.9$\sigma$, and 0.11$\sigma$ tension. These results show that P18 and non-CMB data cosmological constraints are inconsistent in the tilted non-flat $\Lambda$CDM Planck $P(q)$ model and these data probably should not be used jointly to constrain this model. Looking at Table \ref{tab:para_NL_ns_P18_nonCMB} we can compare results obtained for the eight-parameter and the five-parameter tilted non-flat $\Lambda$CDM+$A_L$ Planck $P(q)$ model from P18 and non-CMB data respectively. Aside from $\Omega_k$, the primary parameter disagreements do not change much compared to the $A_L=1$ case. 
For the non-CMB data primary spatial curvature parameter we have $\Omega_k= -0.032\pm 0.051$, which is 0.63$\sigma$ away from flat and in 0.91$\sigma$ tension with the P18 value $-0.130\pm0.095$, which is 1.4$\sigma$ away from flat. Regarding the derived parameters, we find that $H_0$, $\Omega_m$, and $\sigma_8$ are in 2.3$\sigma$, 1.4$\sigma$, and 0.67$\sigma$ disagreement. Compared to the $A_L = 1$ case, in the $A_L$-varying case we find significant reductions in the $H_0$ and $\Omega_m$ tensions, with both disagreements still being significant, which suggests that P18 and non-CMB data should not be jointly analyzed within the tilted non-flat $\Lambda$CDM+$A_L$ Planck $P(q)$ model. Comparing the seven-parameter and the five-parameter tilted non-flat $\Lambda$CDM new $P(q)$ model primary cosmological parameter constraints for P18 and non-CMB data, from the left half of Table \ref{tab:para_TNL_P18_nonCMB} we see that the primary parameter values do not much disagree. The non-CMB data primary spatial curvature parameter value is $\Omega_k=-0.036\pm 0.051$, which is only a 0.71$\sigma$ deviation from flat and, similar to the Planck $P(q)$ model, is only in 0.057$\sigma$ disagreement with the P18 value $-0.033\pm 0.014$, which is 2.4$\sigma$ away from flat. Regarding the derived parameters $H_0$, $\Omega_m$, and $\sigma_8$, we find that their values disagree at 3.3$\sigma$, 2.6$\sigma$, and 0.42$\sigma$, respectively. While the $H_0$ and $\Omega_m$ disagreements are a little smaller than the ones found in the tilted non-flat $\Lambda$CDM Planck $P(q)$ model, they are still large enough to require that we test more carefully whether P18 and non-CMB data can be jointly used to constrain cosmological parameters in this model. The results for the eight-parameter and the five-parameter tilted non-flat $\Lambda$CDM+$A_L$ new $P(q)$ model are in Table \ref{tab:para_TNL_P18_nonCMB}, for P18 and non-CMB data, respectively.
As happens in the Planck $P(q)$ model, when the $A_L$ parameter is allowed to vary the mild tensions found for the primary parameters, except for $\Omega_k$, do not change much compared to the $A_L=1$ case. For the non-CMB data primary spatial curvature parameter we have $\Omega_k= -0.036\pm 0.051$, which is 0.71$\sigma$ away from flat hypersurfaces and now in 0.53$\sigma$ tension with the P18 value $\Omega_k=-0.10\pm 0.11$, which is 0.91$\sigma$ away from flat. As for the values of the derived parameters $H_0$, $\Omega_m$, and $\sigma_8$, we find disagreements at 1.4$\sigma$, 0.94$\sigma$, and 0.29$\sigma$, respectively. The tensions are reduced with respect to the case with $A_L=1$, due to the increase of the error bars, but the $H_0$ tension is possibly still not small enough to allow the joint use of P18+non-CMB data for constraining tilted non-flat $\Lambda$CDM+$A_L$ new $P(q)$ model cosmological parameters. Figures \ref{fig:like_FL_P18_nonCMB}-\ref{fig:like_TNL_Alens_P18_nonCMB} show one-dimensional likelihoods and two-dimensional contours for cosmological parameters obtained using P18, non-CMB, and P18+non-CMB data. As mentioned above, non-CMB data constraints (shown with unfilled black lines) are comparatively less restrictive than P18 constraints (shown in grey), are unable to put tight constraints on the primary cosmological parameters (except on $\Omega_k$ in the three non-flat $\Lambda$CDM$+A_L$ models), and in many cases at least partially overlap with the P18 data constraints. Figures \ref{fig:like_FL_P18_nonCMB} and \ref{fig:like_FL_Alens_P18_nonCMB} are for tilted flat $\Lambda$CDM (+$A_L$) models. The $\sim 1 \sigma$ disagreements between the non-CMB constraints and those obtained with P18 data, discussed above, can be clearly seen in the contour plots. For the tilted flat $\Lambda$CDM model the larger disagreements are in panels for derived cosmological parameters, with the largest for $\Omega_m$ and the next largest for $H_0$.
These disagreements decrease when the $A_L$ parameter is allowed to vary. Looking at the contour plots for the untilted non-flat $\Lambda$CDM (+$A_L$) models (see Figs.\ \ref{fig:like_NL_P18_nonCMB} and \ref{fig:like_NL_Alens_P18_nonCMB}) we observe non-overlapping contours in those panels that involve the derived parameters $H_0$ and $\Omega_m$ or the primary parameter $\Omega_k$, especially in the $\Omega_k$-$\theta_{\rm MC}$ subpanel of Fig.\ \ref{fig:like_NL_P18_nonCMB}. These disagreements largely disappear when $A_L$ is allowed to vary, except perhaps for the $H_0$ one. This may indicate that in the context of this cosmological model we may jointly analyze P18 data with non-CMB data only when $A_L$ is allowed to vary. Figures \ref{fig:like_NL_ns_P18_nonCMB} and \ref{fig:like_NL_Alens_ns_P18_nonCMB} show cosmological parameter constraints for the tilted non-flat $\Lambda$CDM (+$A_L$) Planck $P(q)$ models, while the ones for the tilted non-flat $\Lambda$CDM(+$A_L$) new $P(q)$ models are displayed in Figs.\ \ref{fig:like_TNL_P18_nonCMB} and \ref{fig:like_TNL_Alens_P18_nonCMB}. As expected, considering the results discussed above in this subsubsection, the contour plots for these tilted non-flat models are quite similar. We see in the panels that involve the primary cosmological parameters that there is overlap at 1$\sigma$, not only when $A_L$ is allowed to vary but also when $A_L=1$. When $A_L=1$, for the Planck $P(q)$ model, P18 and non-CMB data constraint contours that involve $H_0$ and $\Omega_m$ do not overlap even at 2$\sigma$. These disagreements are less severe for the new $P(q)$ model with $A_L=1$, where overlap is reached in most cases at a little over $2 \sigma$. In view of the results discussed in this subsubsection, further tests are needed to properly quantify the level of disagreement, in the context of non-flat models, between P18 data and non-CMB data cosmological constraints. We return to this issue in Sec.\ \ref{subsec:data_set_tensions}. 
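As a reminder of the model-comparison estimators listed in the tables, in their standard forms
\begin{equation}
    \textrm{AIC}_c = \chi^2_{\textrm{min}} + 2k + \frac{2k(k+1)}{N-k-1}, \qquad \textrm{DIC} = \chi^2(\bar{\theta}) + 2p_D,
\end{equation}
where $k$ is the number of free parameters, $N$ is the number of data points, and $p_D = \overline{\chi^2} - \chi^2(\bar{\theta})$ is the effective number of parameters, with the overbar denoting an average over the posterior distribution. The tabulated $\Delta\textrm{AIC}_c$ and $\Delta\textrm{DIC}$ values are excesses relative to the tilted flat $\Lambda$CDM model constrained with the same data, so negative values indicate a model that performs better than tilted flat $\Lambda$CDM.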
\begin{table*} \caption{Mean and 68.3\% confidence limits of tilted flat $\Lambda\textrm{CDM}$ (+$A_L$) model parameters constrained by non-CMB, P18+lensing, and P18+lensing+non-CMB data sets. The Hubble constant $H_0$ has a unit of km s$^{-1}$ Mpc$^{-1}$. } \begin{ruledtabular} \begin{tabular}{lccccc} \\[-1mm] & \multicolumn{3}{c}{Tilted flat $\Lambda$CDM} & \multicolumn{2}{c}{Tilted flat $\Lambda$CDM$+A_L$} \\[+1mm] \cline{2-4}\cline{5-6}\\[-1mm] Parameter & Non-CMB & P18+lensing & P18+lensing+non-CMB & P18+lensing & P18+lensing+non-CMB \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.0257 \pm 0.0026$ & $0.02237 \pm 0.00014$ & $0.02250 \pm 0.00013$ & $0.02251 \pm 0.00017$ & $0.02258 \pm 0.00014$ \\[+1mm] $\Omega_c h^2$ & $0.1128 \pm 0.0061$ & $0.1200 \pm 0.0012$ & $0.11838 \pm 0.00083$ & $0.1183 \pm 0.0015$ & $0.11747 \pm 0.00091$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.0321 \pm 0.0080$ & $1.04091 \pm 0.00031$ & $1.04110 \pm 0.00029$ & $1.04109 \pm 0.00032$ & $1.04118 \pm 0.00029$ \\[+1mm] $\tau$ & $0.0543$ & $0.0543 \pm 0.0073$ & $0.0569 \pm 0.0071$ & $0.0487 \pm 0.0087$ & $0.0476 \pm 0.0085$ \\[+1mm] $n_s$ & $0.9649$ & $0.9649 \pm 0.0041$ & $0.9688 \pm 0.0036$ & $0.9695 \pm 0.0048$ & $0.9715 \pm 0.0038$ \\[+1mm] $\ln(10^{10} A_s)$ & $3.10 \pm 0.11$ & $3.044 \pm 0.014$ & $3.046 \pm 0.014$ & $3.028 \pm 0.018$ & $3.023 \pm 0.018$ \\[+1mm] $A_{L}$ & $\cdots$ & $\cdots$ & $\cdots$ & $1.073 \pm 0.041$ & $1.089 \pm 0.035$ \\[+1mm] \hline \\[-1mm] $H_0$ & $69.9 \pm 1.7$ & $67.34 \pm 0.55$ & $68.09 \pm 0.38$ & $68.14 \pm 0.69$ & $68.52 \pm 0.42$ \\[+1mm] $\Omega_m$ & $0.285 \pm 0.011$ & $0.3155 \pm 0.0075$ & $0.3053 \pm 0.0050$ & $0.3048 \pm 0.0091$ & $0.2998 \pm 0.0053$ \\[+1mm] $\sigma_8$ & $0.787 \pm 0.026$ & $0.8112 \pm 0.0059$ & $0.8072 \pm 0.0058$ & $0.7996 \pm 0.0089$ & $0.7955 \pm 0.0075$ \\[+1mm] \hline \\[-1mm] $\chi_{\textrm{min}}^2$ (Total) & $1106.55$ & $2774.71$ & $3888.41$ & $2771.24$ & $3881.55$ \\[+1mm] $\chi_{\textrm{min}}^2$ (Non-CMB) & $1106.55$ & $\cdots$ 
& $1112.05$ & $\cdots$ & $1109.64$ \\[+1mm] $\textrm{DIC}$ & $1114.38$ & $2826.45$ & $3940.70$ & $2825.53$ & $3935.15$ \\[+1mm] $\Delta\textrm{DIC}$ & $\cdots$ & $\cdots$ & $\cdots$ & $-0.92$ & $-5.55$ \\[+1mm] $\textrm{AIC}_c$ & $1114.55$ & $2828.71$ & $3942.41$ & $2827.24$ & $3937.55$ \\[+1mm] $\Delta\textrm{AIC}_c$ & $\cdots$ & $\cdots$ & $\cdots$ & $-1.47$ & $-4.86$ \\[+1mm] \end{tabular} \\[+1mm] \begin{flushleft} Note: $\Delta\textrm{DIC}$ ($\Delta\textrm{AIC}_c$) indicates an excess value relative to that of the tilted flat $\Lambda$CDM model constrained with the same data. \end{flushleft} \end{ruledtabular} \label{tab:para_FL_P18_lensing_nonCMB} \end{table*} \begin{table*} \caption{Mean and 68.3\% confidence limits of untilted non-flat $\Lambda\textrm{CDM}$ (+$A_L$) model parameters constrained by non-CMB, P18+lensing, and P18+lensing+non-CMB data sets. The Hubble constant $H_0$ has a unit of km s$^{-1}$ Mpc$^{-1}$. } \begin{ruledtabular} \begin{tabular}{lccccc} \\[-1mm] & \multicolumn{3}{c}{Untilted non-flat $\Lambda$CDM} & \multicolumn{2}{c}{Untilted non-flat $\Lambda$CDM$+A_L$} \\[+1mm] \cline{2-4}\cline{5-6}\\[-1mm] Parameter & Non-CMB & P18+lensing & P18+lensing+non-CMB & P18+lensing & P18+lensing+non-CMB \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.0241 \pm 0.0033$ & $0.02307 \pm 0.00014$ & $0.02301 \pm 0.00014$ & $0.02312 \pm 0.00014$ & $0.02310 \pm 0.00014$ \\[+1mm] $\Omega_c h^2$ & $0.121 \pm 0.013$ & $0.11108 \pm 0.00086$ & $0.11176 \pm 0.00083$ & $0.11092 \pm 0.00087$ & $0.11100 \pm 0.00085$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.11 \pm 0.11$ & $1.04196 \pm 0.00029$ & $1.04189 \pm 0.00029$ & $1.04193 \pm 0.00029$ & $1.04195 \pm 0.00030$ \\[+1mm] $\tau$ & $0.0580$ & $0.0580 \pm 0.0087$ & $0.0799 \pm 0.0089$ & $0.0554 \pm 0.0097$ & $0.0566 \pm 0.0083$ \\[+1mm] $\Omega_k$ & $-0.037 \pm 0.050$ & $-0.0322 \pm 0.0075$ & $-0.0065 \pm 0.0014$ & $0.0161 \pm 0.0094$ & $-0.0060 \pm 0.0014$ \\[+1mm] $\ln(10^{10} A_s)$ & $2.84 \pm 0.34$ & $3.027 \pm 0.018$ & 
$3.075 \pm 0.018$ & $3.021 \pm 0.020$ & $3.024 \pm 0.017$ \\[+1mm] $A_{L}$ & $\cdots$ & $\cdots$ & $\cdots$ & $1.44 \pm 0.15$ & $1.162 \pm 0.036$ \\[+1mm] \hline \\[-1mm] $H_0$ & $70.2 \pm 1.7$ & $58.9 \pm 2.1$ & $67.90 \pm 0.56$ & $85.7 \pm 8.5$ & $68.48 \pm 0.58$ \\[+1mm] $\Omega_m$ & $0.295 \pm 0.018$ & $0.390 \pm 0.027$ & $0.2938 \pm 0.0049$ & $0.190 \pm 0.043$ & $0.2874 \pm 0.0050$ \\[+1mm] $\sigma_8$ & $0.769 \pm 0.035$ & $0.765 \pm 0.011$ & $0.7997 \pm 0.0076$ & $0.7805 \pm 0.0094$ & $0.7764 \pm 0.0078$ \\[+1mm] \hline \\[-1mm] $\chi_{\textrm{min}}^2$ (Total)& $1106.51$ & $2813.13$ & $3938.22$ & $2807.91$ & $3915.05$ \\[+1mm] $\chi_{\textrm{min}}^2$ (Non-CMB) & $1106.51$ & $\cdots$ & $1108.60$ & $\cdots$ & $1107.39$ \\[+1mm] $\textrm{DIC}$ & $1117.24$ & $2869.06$ & $3992.71$ & $2856.10$ & $3973.55$ \\[+1mm] $\Delta\textrm{DIC}$ & $2.86$ & $42.61$ & $52.01$ & $29.65$ & $32.85$ \\[+1mm] $\textrm{AIC}_c$ & $1116.51$ & $2867.13$ & $3992.22$ & $2863.91$ & $3971.05$ \\[+1mm] $\Delta\textrm{AIC}_c$ & $1.96$ & $38.42$ & $49.81$ & $35.20$ & $28.64$ \\[+1mm] \end{tabular} \\[+1mm] \begin{flushleft} Note: $\Delta\textrm{DIC}$ ($\Delta\textrm{AIC}_c$) indicates an excess value relative to that of the tilted flat $\Lambda$CDM model constrained with the same data. \end{flushleft} \end{ruledtabular} \label{tab:para_NL_P18_lensing_nonCMB} \end{table*} \begin{table*} \caption{Mean and 68.3\% confidence limits of Planck-$P(q)$-based tilted nonflat $\Lambda\textrm{CDM}$ ($+A_L$) model parameters constrained by non-CMB, P18+lensing, and P18+lensing+non-CMB data sets. The Hubble constant $H_0$ has a unit of km s$^{-1}$ Mpc$^{-1}$. 
} \begin{ruledtabular} \begin{tabular}{lccccc} \\[-1mm] & \multicolumn{3}{c}{Tilted non-flat $\Lambda$CDM Planck $P(q)$} & \multicolumn{2}{c}{Tilted non-flat $\Lambda$CDM$+A_L$ Planck $P(q)$} \\[+1mm] \cline{2-4}\cline{5-6}\\[-1mm] Parameter & Non-CMB & P18+lensing & P18+lensing+non-CMB & P18+lensing & P18+lensing+non-CMB \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.0242 \pm 0.0033$ & $0.02249 \pm 0.00016$ & $0.02249 \pm 0.00015$ & $0.02251 \pm 0.00017$ & $0.02259 \pm 0.00016$ \\[+1mm] $\Omega_c h^2$ & $0.120 \pm 0.013$ & $0.1186 \pm 0.0015$ & $0.1187 \pm 0.0013$ & $0.1183 \pm 0.0015$ & $0.1173 \pm 0.0014$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.10 \pm 0.10$ & $1.04107 \pm 0.00032$ & $1.04106 \pm 0.00031$ & $1.04110 \pm 0.00032$ & $1.04118 \pm 0.00032$ \\[+1mm] $\tau$ & $0.0495$ & $0.0495 \pm 0.0082$ & $0.0563 \pm 0.0073$ & $0.0489 \pm 0.0085$ & $0.0479 \pm 0.0085$ \\[+1mm] $\Omega_k$ & $-0.033 \pm 0.050$ & $-0.0103 \pm 0.0066$ & $0.0004 \pm 0.0017$ & $-0.005 \pm 0.027$ & $-0.0002 \pm 0.0017$ \\[+1mm] $n_s$ & $0.9687$ & $0.9687 \pm 0.0046$ & $0.9681 \pm 0.0044$ & $0.9696 \pm 0.0049$ & $0.9718 \pm 0.0045$ \\[+1mm] $\ln(10^{10} A_s)$ & $2.90 \pm 0.34$ & $3.030 \pm 0.017$ & $3.046 \pm 0.014$ & $3.028 \pm 0.018$ & $3.024 \pm 0.017$ \\[+1mm] $A_{L}$ & $\cdots$ & $\cdots$ & $\cdots$ & $1.09 \pm 0.16$ & $1.090 \pm 0.036$ \\[+1mm] \hline \\[-1mm] $H_0$ & $70.1 \pm 1.8$ & $63.7 \pm 2.3$ & $68.17 \pm 0.55$ & $69 \pm 11$ & $68.49 \pm 0.56$ \\[+1mm] $\Omega_m$ & $0.294 \pm 0.018$ & $0.351 \pm 0.024$ & $0.3051 \pm 0.0053$ & $0.32 \pm 0.11$ & $0.2998 \pm 0.0055$ \\[+1mm] $\sigma_8$ & $0.771 \pm 0.036$ & $0.796 \pm 0.011$ & $0.8080 \pm 0.0066$ & $0.796 \pm 0.016$ & $0.7952 \pm 0.0085$ \\[+1mm] \hline \\[-1mm] $\chi_{\textrm{min}}^2$ (Total) & $1106.51$ & $2771.53$ & $3887.99$ & $2771.14$ & $3881.37$ \\[+1mm] $\chi_{\textrm{min}}^2$ (Non-CMB) & $1106.51$ & $\cdots$ & $1111.94$ & $\cdots$ & $1110.31$ \\[+1mm] $\textrm{DIC}$ & $1117.27$ & $2826.17$ & $3942.07$ & $2827.14$ & $3936.85$ 
\\[+1mm] $\Delta\textrm{DIC}$ & $2.89$ & $-0.28$ & $1.37$ & $0.69$ & $-3.85$ \\[+1mm] $\textrm{AIC}_c$ & $1116.51$ & $2827.53$ & $3943.99$ & $2829.14$ & $3939.37$ \\[+1mm] $\Delta\textrm{AIC}_c$ & $1.96$ & $-1.18$ & $1.58$ & $0.43$ & $-3.04$ \\[+1mm] \end{tabular} \\[+1mm] \begin{flushleft} Note: $\Delta\textrm{DIC}$ ($\Delta\textrm{AIC}_c$) indicates an excess value relative to that of the tilted flat $\Lambda$CDM model constrained with the same data. \end{flushleft} \end{ruledtabular} \label{tab:para_NL_ns_P18_lensing_nonCMB} \end{table*} \begin{table*} \caption{Mean and 68.3\% confidence limits of new-$P(q)$-based tilted nonflat $\Lambda\textrm{CDM}$ ($+A_L$) model parameters constrained by non-CMB, P18+lensing, and P18+lensing+non-CMB data sets. The Hubble constant $H_0$ has a unit of km s$^{-1}$ Mpc$^{-1}$. } \begin{ruledtabular} \begin{tabular}{lccccc} \\[-1mm] & \multicolumn{3}{c}{Tilted non-flat $\Lambda$CDM new $P(q)$} & \multicolumn{2}{c}{Tilted non-flat $\Lambda$CDM$+A_L$ new $P(q)$} \\[+1mm] \cline{2-4}\cline{5-6}\\[-1mm] Parameter & Non-CMB & P18+lensing & P18+lensing+non-CMB & P18+lensing & P18+lensing+non-CMB \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.0242 \pm 0.0033$ & $0.02248 \pm 0.00016$ & $0.02248 \pm 0.00015$ & $0.02252 \pm 0.00017$ & $0.02260 \pm 0.00016$ \\[+1mm] $\Omega_c h^2$ & $0.120 \pm 0.013$ & $0.1188 \pm 0.0014$ & $0.1186 \pm 0.0013$ & $0.1183 \pm 0.0015$ & $0.1174 \pm 0.0013$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.10 \pm 0.10$ & $1.04104 \pm 0.00032$ & $1.04106 \pm 0.00031$ & $1.04108 \pm 0.00032$ & $1.04118 \pm 0.00032$\\[+1mm] $\tau$ & $0.0515$ & $0.0515 \pm 0.0081$ & $0.0566 \pm 0.0074$ & $0.0495 \pm 0.0093$ & $0.0486 \pm 0.0086$ \\[+1mm] $\Omega_k$ & $-0.033 \pm 0.050$ & $-0.0086 \pm 0.0057$ & $0.0003 \pm 0.0017$ & $0.003 \pm 0.016$ & $-0.0002 \pm 0.0017$\\[+1mm] $n_s$ & $0.9654$ & $0.9661 \pm 0.0043$ & $0.9679 \pm 0.0042$ & $0.9688 \pm 0.0053$ & $0.9713 \pm 0.0042$ \\[+1mm] $\ln(10^{10} A_s)$ & $2.89 \pm 0.34$ & $3.035 \pm 
0.016$ & $3.046 \pm 0.014$ & $3.030 \pm 0.019$ & $3.025 \pm 0.017$ \\[+1mm] $A_{L}$ & $\cdots$ & $\cdots$ & $\cdots$ & $1.13 \pm 0.15$ & $1.088 \pm 0.035$ \\[+1mm] \hline \\[-1mm] $H_0$ & $70.1 \pm 1.8$ & $64.2 \pm 2.0$ & $68.13 \pm 0.54$ & $72.0 \pm 9.2$ & $68.48 \pm 0.56$ \\[+1mm] $\Omega_m$ & $0.295 \pm 0.017$ & $0.345 \pm 0.021$ & $0.3054 \pm 0.0051$ & $0.287 \pm 0.076$ & $0.2999 \pm 0.0055$ \\[+1mm] $\sigma_8$ & $0.771 \pm 0.036$ & $0.799 \pm 0.010$ & $0.8079 \pm 0.0067$ & $0.801 \pm 0.011$ & $0.7956 \pm 0.0082$ \\[+1mm] \hline \\[-1mm] $\chi_{\textrm{min}}^2$ (Total)& $1106.49$ & $2771.75$ & $3887.55$ & $2770.45$ & $3880.69$ \\[+1mm] $\chi_{\textrm{min}}^2$ (Non-CMB) & $1106.49$ & $\cdots$ & $1111.65$ & $\cdots$ & $1109.43$ \\[+1mm] $\textrm{DIC}$ & $1117.14$ & $2825.74$ & $3942.22$ & $2827.29$ & $3937.52$ \\[+1mm] $\Delta\textrm{DIC}$ & $2.76$ & $-0.71$ & $1.52$ & $0.84$ & $-3.18$ \\[+1mm] $\textrm{AIC}_c$ & $1116.49$ & $2827.75$ & $3943.55$ & $2828.45$ & $3938.69$ \\[+1mm] $\Delta\textrm{AIC}_c$ & $1.94$ & $-0.96$ & $1.14$ & $-0.26$ & $-3.72$ \\[+1mm] \end{tabular} \\[+1mm] \begin{flushleft} Note: $\Delta\textrm{DIC}$ ($\Delta\textrm{AIC}_c$) indicates an excess value relative to that of the tilted flat $\Lambda$CDM model constrained with the same data. \end{flushleft} \end{ruledtabular} \label{tab:para_TNL_P18_lensing_nonCMB} \end{table*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_FL_P18_lensing_nonCMBv2_fig32.pdf}} \caption{Likelihoods of the tilted flat $\Lambda$CDM model parameters constrained by P18+lensing, non-CMB, and P18+lensing+non-CMB data sets. } \label{fig:like_FL_P18_lensing_nonCMB} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_FL_Alens_P18_lensing_nonCMBv2_fig33.pdf}} \caption{Likelihoods of the tilted flat $\Lambda$CDM$+A_L$ model parameters constrained by P18+lensing, non-CMB, and P18+lensing+non-CMB data sets. 
The likelihoods for the non-CMB data set, which do not depend on $A_L$, are the same as in Fig.\ \ref{fig:like_FL_P18_lensing_nonCMB}. } \label{fig:like_FL_Alens_P18_lensing_nonCMB} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_NL_P18_lensing_nonCMBv2_fig34.pdf}} \caption{Likelihoods of the untilted non-flat $\Lambda$CDM model parameters constrained by P18+lensing, non-CMB, and P18+lensing+non-CMB data sets. } \label{fig:like_NL_P18_lensing_nonCMB} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_NL_Alens_P18_lensing_nonCMBv2_fig35.pdf}} \caption{Likelihoods of the untilted non-flat $\Lambda$CDM$+A_L$ model parameters constrained by P18+lensing, non-CMB, and P18+lensing+non-CMB data sets. The likelihoods for the non-CMB data set, which do not depend on $A_L$, are the same as in Fig.\ \ref{fig:like_NL_P18_lensing_nonCMB}. } \label{fig:like_NL_Alens_P18_lensing_nonCMB} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_NL_ns_P18_lensing_nonCMBv2_fig36.pdf}} \caption{Likelihoods of the tilted non-flat $\Lambda$CDM model [with Planck $P(q)$] parameters constrained by P18+lensing, non-CMB, and P18+lensing+non-CMB data sets. } \label{fig:like_NL_ns_P18_lensing_nonCMB} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_NL_Alens_ns_P18_lensing_nonCMBv2_fig37.pdf}} \caption{Likelihoods of the tilted non-flat $\Lambda$CDM$+A_L$ model [with Planck $P(q)$] parameters constrained by P18+lensing, non-CMB, and P18+lensing+non-CMB data sets. The likelihoods for the non-CMB data set, which do not depend on $A_L$, are the same as in Fig.\ \ref{fig:like_NL_ns_P18_lensing_nonCMB}. 
} \label{fig:like_NL_Alens_ns_P18_lensing_nonCMB} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_TNL_ns1_P18_lensing_nonCMBv2_fig38.pdf}} \caption{Likelihoods of the tilted non-flat $\Lambda$CDM model [with new $P(q)$] parameters constrained by P18+lensing, non-CMB, and P18+lensing+non-CMB data sets. } \label{fig:like_TNL_P18_lensing_nonCMB} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_TNL_Alens_ns1_P18_lensing_nonCMBv2_fig39.pdf}} \caption{Likelihoods of the tilted non-flat $\Lambda$CDM$+A_L$ model [with new $P(q)$] parameters constrained by P18+lensing, non-CMB, and P18+lensing+non-CMB data sets. The likelihoods for the non-CMB data set, which do not depend on $A_L$, are the same as in Fig.\ \ref{fig:like_TNL_P18_lensing_nonCMB}. } \label{fig:like_TNL_Alens_P18_lensing_nonCMB} \end{figure*} \subsubsection{Comparing P18+lensing data and non-CMB data cosmological constraints}\label{sec:P18+lensing_vs_non-CMB} In the previous subsubsection we compared non-CMB data cosmological constraints to those obtained from P18 data. We found significant tensions in the non-flat models with $A_L=1$ for the derived parameters $H_0$ and $\Omega_m$, and a 2.2$\sigma$ tension between the two $\Omega_m$ values in the flat $\Lambda$CDM model with $A_L = 1$. In view of these results, additional tests are needed if one wants to know whether P18 and non-CMB data can be jointly analyzed to determine cosmological constraints. We study this in Sec.\ \ref{subsec:data_set_tensions}. Interestingly, when the $A_L$ parameter is allowed to vary these tensions decrease significantly, with the largest tension being 2.3$\sigma$ between the two $H_0$ values in the tilted non-flat Planck $P(q)$ model, and the remaining tensions not exceeding 1.4$\sigma$, perhaps an indication that P18 and non-CMB data can be used jointly to constrain cosmological parameters when $A_L$ is allowed to vary. 
In Secs.\ \ref{subsubsec:P18_data_constraints} and \ref{subsubsec:P18_lensing_data_constraints} we discussed the cosmological constraints obtained from P18 data and from P18+lensing data. We shall see, in Sec.\ \ref{subsec:data_set_tensions}, that, in the non-flat models, P18 data and lensing data are less mutually inconsistent than P18 data and non-CMB data are. However, it is necessary to perform an additional test to determine whether or not P18, lensing, and non-CMB data can be jointly analyzed to derive cosmological constraints in the non-flat models with $A_L=1$. In this subsubsection we describe the results of this additional test that compares non-CMB data cosmological constraints to the ones obtained from P18+lensing data. P18+lensing+non-CMB data cannot be jointly used in the context of a given model unless the cosmological constraints obtained with P18+lensing data and with non-CMB data are consistent. While in the previous subsubsection we labeled the study of P18 vs.\ non-CMB as a study of high-redshift data cosmological constraints vs.\ low-redshift data cosmological constraints, we cannot do that in this subsubsection since most of the information in the lensing data is from low-redshift data. The cosmological parameter mean values and error bars favored by the P18+lensing, non-CMB, and P18+lensing+non-CMB data sets are summarized in Tables \ref{tab:para_FL_P18_lensing_nonCMB}-\ref{tab:para_TNL_P18_lensing_nonCMB} for the tilted flat $\Lambda$CDM (+$A_L$) model, the untilted non-flat $\Lambda$CDM (+$A_L$) model, the tilted non-flat $\Lambda$CDM (+$A_L$) Planck $P(q)$ model, and the tilted non-flat $\Lambda$CDM (+$A_L$) new $P(q)$ model. Likelihood distributions of cosmological parameters of the four models with $A_L=1$ and with varying $A_L$ are shown in Figs.\ \ref{fig:like_FL_P18_lensing_nonCMB}-\ref{fig:like_TNL_Alens_P18_lensing_nonCMB} for the P18+lensing, non-CMB, and P18+lensing+non-CMB data sets. 
Since non-CMB data do not have the ability to constrain $\tau$ or $n_s$, in this subsubsection we set their values in the non-CMB data only analyses to those found in the corresponding P18+lensing data analyses. Note that in the previous subsubsection, where the case of P18 data versus non-CMB data was studied, the values of $\tau$ and $n_s$ in the non-CMB data only analyses were set to those found in the corresponding P18 data analyses; nevertheless, the cosmological parameter constraints from the two non-CMB data analyses are practically identical. This indicates that non-CMB data are mostly insensitive to changes in the values of $\tau$ and $n_s$, as we have assumed. Again, we do not include the $A_L$ parameter in the analyses when only non-CMB data are considered, since it does not play a role at low redshift. As in the previous subsubsection, where we compared cosmological constraints from P18 data and from non-CMB data, looking at Tables \ref{tab:para_NL_P18_lensing_nonCMB}-\ref{tab:para_TNL_P18_lensing_nonCMB} we observe that for the six non-flat $\Lambda$CDM (+$A_L$) models the constraints imposed by non-CMB data on the $H_0$ and $\Omega_m$ parameters are tighter than those from P18+lensing data. P18+lensing data more restrictively constrain all other parameters in all eight cosmological models. Comparing the six-parameter and the four-parameter tilted flat $\Lambda$CDM model primary cosmological parameter constraints for P18+lensing and non-CMB data, shown in the left part of Table \ref{tab:para_FL_P18_lensing_nonCMB}, we observe that the values of $\Omega_b h^2$, $\Omega_c h^2$, and $\theta_{\textrm{MC}}$ are in mild disagreement, at 1.3$\sigma$, 1.2$\sigma$, and 1.1$\sigma$, respectively. We also see tensions in the derived parameters. In particular, for the non-relativistic matter density parameter $\Omega_m$, the level of tension reaches 2.3$\sigma$, whereas the values of $H_0$ disagree by 1.4$\sigma$. 
From Table \ref{tab:para_FL_P18_lensing_nonCMB} we can compare the seven-parameter and the four-parameter tilted flat $\Lambda$CDM+$A_L$ model cosmological parameter constraints for P18+lensing data and for non-CMB data. Regarding the primary cosmological parameters, we see that the values of $\Omega_b h^2$ and $\theta_{\textrm{MC}}$ disagree at 1.2$\sigma$ and 1.1$\sigma$, respectively. The inclusion of the varying $A_L$ parameter significantly reduces the tensions found in the $A_L=1$ case for $\Omega_m$ and $H_0$, which now disagree by only 1.4$\sigma$ and 0.96$\sigma$, respectively. We do not find any clear evidence that prevents us from jointly analyzing P18+lensing and non-CMB data, in the context of the tilted flat $\Lambda$CDM model, with and without a varying $A_L$ parameter. The results for the six-parameter and the four-parameter untilted non-flat $\Lambda$CDM model obtained from P18+lensing and non-CMB data are in Table \ref{tab:para_NL_P18_lensing_nonCMB}. While for the primary cosmological parameters we do not observe significant tensions, we do for the derived parameters. The primary spatial curvature parameter is $\Omega_k=-0.037\pm 0.050$ for non-CMB data, which is 0.74$\sigma$ away from flat hypersurfaces and in 0.095$\sigma$ tension with the P18+lensing analysis value $\Omega_k=-0.0322\pm 0.0075$, which is 4.3$\sigma$ away from flat. As for the derived parameters, we find that the $H_0$, $\Omega_m$, and $\sigma_8$ values are in 4.2$\sigma$, 2.9$\sigma$, and 0.11$\sigma$ disagreement, respectively. The high levels of tension reached for some of the parameters may indicate that P18+lensing and non-CMB data should not be jointly analyzed in the context of the untilted non-flat $\Lambda$CDM model. P18+lensing and non-CMB data results obtained for the seven-parameter and the four-parameter untilted non-flat $\Lambda$CDM+$A_L$ model are shown in Table \ref{tab:para_NL_P18_lensing_nonCMB}. 
Regarding the values of the primary cosmological parameters, as was observed in the $A_L=1$ case there are no significant tensions, except for $\Omega_k$ (discussed next). The value of the curvature parameter is $\Omega_k=-0.037\pm 0.050$ (0.74$\sigma$ away from flat) for the non-CMB data and $\Omega_k=0.0161\pm 0.0094$ for the P18+lensing data, which indicates 1.7$\sigma$ evidence in favor of an open spatial geometry. The two $\Omega_k$ values disagree at 1.0$\sigma$. The disagreements in the values of the derived parameters $H_0$, $\Omega_m$, and $\sigma_8$ are 1.8$\sigma$, 2.3$\sigma$, and 0.32$\sigma$, respectively, a clear reduction with respect to the $A_L=1$ case. This reduction is due to the enlargement of the error bars in the varying $A_L$ case compared to the $A_L=1$ case. Given these results, the P18+lensing and the non-CMB data should perhaps not be used together in the context of the untilted non-flat $\Lambda$CDM+$A_L$ model. Note, however, that when we do so, namely in the P18+lensing+non-CMB analysis, the obtained value for the curvature parameter is $\Omega_k=-0.0060\pm0.0014$, which is 4.3$\sigma$ away from flat. Nonetheless, according to the AIC and DIC this model is strongly disfavored when it is compared with the tilted models, due to the lack of the degree of freedom provided by the $n_s$ parameter. The results that allow us to compare the seven-parameter and the five-parameter tilted non-flat $\Lambda$CDM Planck $P(q)$ model primary cosmological parameter constraints for P18+lensing data and non-CMB data can be seen in Table \ref{tab:para_NL_ns_P18_lensing_nonCMB}. There are no significant tensions in the values of the primary cosmological parameters. 
The non-CMB data value of the spatial curvature parameter, $\Omega_k=-0.033\pm 0.050$, is 0.66$\sigma$ away from flat and in 0.45$\sigma$ tension with the value found in the P18+lensing analysis, namely $\Omega_k=-0.0103\pm0.0066$, which represents a 1.6$\sigma$ deviation from flat hypersurfaces. As for the values of the derived parameters $H_0$, $\Omega_m$, and $\sigma_8$, the tensions are 2.2$\sigma$, 1.9$\sigma$, and 0.66$\sigma$, respectively. Given these results, further tests are probably necessary in order to decide whether P18+lensing and non-CMB data can be jointly analyzed in the context of the tilted non-flat $\Lambda$CDM Planck $P(q)$ model. P18+lensing and non-CMB data results obtained for the eight-parameter and the five-parameter tilted non-flat $\Lambda$CDM+$A_L$ Planck $P(q)$ model are shown in Table \ref{tab:para_NL_ns_P18_lensing_nonCMB}. As in the $A_L=1$ case, we do not find significant disagreements in the values of the primary cosmological parameters. For the non-CMB data the value of the curvature parameter is $\Omega_k=-0.033\pm 0.050$, which is 0.66$\sigma$ away from flat and in 0.49$\sigma$ tension with the P18+lensing value, $\Omega_k=-0.005\pm 0.027$, which in turn is only 0.19$\sigma$ in favor of a closed geometry. An important reduction in the disagreements found in the derived parameters, with respect to the $A_L=1$ case, is observed. In particular, for $H_0$, $\Omega_m$, and $\sigma_8$ the disagreements are 0.099$\sigma$, 0.23$\sigma$, and 0.63$\sigma$, respectively. We may say that in the context of the tilted non-flat $\Lambda$CDM+$A_L$ Planck $P(q)$ model we are allowed to jointly analyze P18+lensing data and non-CMB data. By doing so, we get for P18+lensing+non-CMB data no evidence in favor of a non-flat geometry, $\Omega_k=-0.0002\pm 0.0017$, but still a clear 2.5$\sigma$ preference for $A_L\neq 1$ since $A_L=1.090\pm 0.036$. 
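The $\sigma$ values quoted in this paragraph (and throughout this subsubsection) follow from the usual Gaussian estimate for two independent measurements, $|\mu_1-\mu_2|/\sqrt{\sigma_1^2+\sigma_2^2}$, and from $|\Omega_k|/\sigma$ for deviations from flatness. A minimal sketch, assuming independent Gaussian errors and using the tilted non-flat $\Lambda$CDM Planck $P(q)$ values of Table \ref{tab:para_NL_ns_P18_lensing_nonCMB}:

```python
from math import hypot

def tension(m1, s1, m2, s2):
    """Gaussian tension, in units of sigma, between two independent
    parameter estimates m1 +/- s1 and m2 +/- s2."""
    return abs(m1 - m2) / hypot(s1, s2)

def deviation_from_flat(omega_k, sigma):
    """Number of sigma by which Omega_k deviates from zero (flatness)."""
    return abs(omega_k) / sigma

# P18+lensing vs non-CMB, tilted non-flat LCDM Planck P(q) model
t_H0 = tension(63.7, 2.3, 70.1, 1.8)        # ~2.2 sigma
t_Om = tension(0.351, 0.024, 0.294, 0.018)  # ~1.9 sigma
t_s8 = tension(0.796, 0.011, 0.771, 0.036)  # ~0.66 sigma
d_Ok = deviation_from_flat(-0.0103, 0.0066) # ~1.6 sigma from flat
```

The same two functions reproduce every tension and every "away from flat" figure quoted in this section.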
Comparing the seven-parameter and the five-parameter tilted non-flat $\Lambda$CDM new $P(q)$ model primary cosmological parameter constraints for P18+lensing data and non-CMB data, from the left part of Table \ref{tab:para_TNL_P18_lensing_nonCMB} we see no important differences in the values of the primary parameters. The value for the spatial curvature parameter is $\Omega_k=-0.033\pm 0.050$ for non-CMB data, which represents a 0.66$\sigma$ deviation from flat and is in 0.48$\sigma$ tension with the value obtained in the P18+lensing analysis, $\Omega_k=-0.0086\pm 0.0057$, which is 1.5$\sigma$ away from flat hypersurfaces. Regarding the triad of derived parameters $H_0$, $\Omega_m$, and $\sigma_8$, the disagreements are 2.2$\sigma$, 1.9$\sigma$, and 0.75$\sigma$, respectively. In light of these results we deem that more testing is required to decide whether the P18+lensing and non-CMB data can be jointly analyzed in the context of the tilted non-flat $\Lambda$CDM new $P(q)$ model. In Table \ref{tab:para_TNL_P18_lensing_nonCMB} we provide the results for the eight-parameter and the five-parameter tilted non-flat $\Lambda$CDM+$A_L$ new $P(q)$ model when P18+lensing data and non-CMB data are considered. The tensions found for the values of primary cosmological parameters are not significant, as in the $A_L=1$ case. When non-CMB data are considered we find $\Omega_k=-0.033\pm 0.050$, which shows 0.66$\sigma$ evidence in favor of a closed geometry and is in 0.69$\sigma$ tension with the P18+lensing data value, $\Omega_k=0.003\pm 0.016$, which shows only a 0.19$\sigma$ preference for an open geometry. As for the derived parameters $H_0$, $\Omega_m$, and $\sigma_8$, the level of agreement is very good, with the corresponding values only in 0.20$\sigma$, 0.10$\sigma$, and 0.80$\sigma$ tension, respectively. 
These results seem to indicate that in the context of the tilted non-flat $\Lambda$CDM+$A_L$ new $P(q)$ model P18+lensing data and non-CMB data can be jointly analyzed. In the P18+lensing+non-CMB analysis we find $\Omega_k=-0.0002\pm 0.0017$, indicating no clear preference for an open or a closed geometry. On the other hand, we find $A_L=1.088\pm 0.035$, which is 2.5$\sigma$ away from the predicted value $A_L=1$. In Figs.\ \ref{fig:like_FL_P18_lensing_nonCMB}-\ref{fig:like_TNL_Alens_P18_lensing_nonCMB} we show the one-dimensional likelihoods and the two-dimensional contours for cosmological parameters obtained using P18+lensing, non-CMB, and P18+lensing+non-CMB data. The constraints coming from non-CMB data (shown with unfilled black lines) are less restrictive than P18+lensing constraints (shown in grey), except for the $H_0$ and $\Omega_m$ constraints in the six non-flat models. Except for the untilted non-flat model with $A_L=1$, we observe at least partial overlaps between the three sets of contours even when the $A_L$ parameter is not allowed to vary. The contour plots for the tilted flat $\Lambda$CDM (+$A_L$) models are in Figs.\ \ref{fig:like_FL_P18_lensing_nonCMB} and \ref{fig:like_FL_Alens_P18_lensing_nonCMB}. The aforementioned $\sim$1$\sigma$ disagreements (and the $\sim 2\sigma$ $\Omega_m$ disagreement in the $A_L = 1$ case) found when we compared the one-dimensional likelihood P18+lensing and non-CMB results can also be observed here. The largest tensions are seen in the panels containing one of the derived parameters, and the inclusion of the varying $A_L$ parameter in the analysis clearly reduces them. Looking at the contour plots for the untilted non-flat $\Lambda$CDM (+$A_L$) models displayed in Figs.\ \ref{fig:like_NL_P18_lensing_nonCMB} and \ref{fig:like_NL_Alens_P18_lensing_nonCMB}, we observe significantly non-overlapping contours either when the primary parameter $\Omega_k$ is involved or when the derived parameter $H_0$ or $\Omega_m$ is involved. 
This reinforces the idea that when $A_L$ is not allowed to vary the P18+lensing and non-CMB data sets cannot be analyzed together in the untilted non-flat $\Lambda$CDM model. Quite different results are found when we do allow $A_L$ to vary. The disagreements observed in the $A_L=1$ case largely disappear. Therefore we may say that in the context of this varying $A_L$ cosmological model we can jointly analyze P18+lensing and non-CMB data. Figures \ref{fig:like_NL_ns_P18_lensing_nonCMB} and \ref{fig:like_NL_Alens_ns_P18_lensing_nonCMB} show cosmological parameter constraints for the tilted non-flat $\Lambda$CDM (+$A_L$) Planck $P(q)$ models, while the ones for the tilted non-flat $\Lambda$CDM (+$A_L$) new $P(q)$ models are displayed in Figs.\ \ref{fig:like_TNL_P18_lensing_nonCMB} and \ref{fig:like_TNL_Alens_P18_lensing_nonCMB}. The contour plots for these tilted non-flat models are very similar, something that was not unexpected given the results discussed above in this subsubsection. In both cases, when $A_L$ is not allowed to vary and when it is allowed to vary, we observe overlaps between the primary parameter panel contours at 1$\sigma$. When $A_L=1$ we observe improved overlap in the current P18+lensing data vs.\ non-CMB data case compared to the P18 data vs.\ non-CMB data case of the previous subsubsection: now, for both the Planck $P(q)$ model and the new $P(q)$ model, the contours overlap at 2$\sigma$. On the other hand, in the varying $A_L$ case we observe overlaps at 1$\sigma$, even in those panels that involve some of the derived parameters. As in the P18 data vs.\ non-CMB data cosmological constraints comparison discussed in the previous subsubsection, further tests are needed to determine whether or not P18+lensing data and non-CMB data can be jointly analyzed in the context of the non-flat models under study. We discuss this issue in detail in Sec.\ \ref{subsec:data_set_tensions}. 
\begin{table*} \caption{Individual and total $\chi^2$ values for the best-fit flat and non-flat $\Lambda\textrm{CDM}$ inflation models. The deviance information criterion (DIC) and the Akaike information criterion (AIC$_c$) are also listed. } {\scriptsize \begin{ruledtabular} \begin{tabular}{lcccccccccccccc} Data sets & $\chi_{\textrm{plik}}^2$ & $\chi_{\textrm{lowl}}^2$ & $\chi_{\textrm{simall}}^2$ & $\chi_{\textrm{lensing}}^2$ & $\chi_{\textrm{prior}}^2$ & $\chi_{\textrm{SN}}^2$ & $\chi_{\textrm{BAO}}^2$ & $\chi_{H(z)}^2$ & $\chi_{f\sigma_8}^2$ & $\chi^2_{\textrm{total}}$ & $\Delta\chi^2$ & DIC & $\Delta\textrm{DIC}$ & $\Delta\textrm{AIC}_c$ \\[+0mm] \hline \\[-2mm] \multicolumn{15}{c}{Tilted flat $\Lambda\textrm{CDM}$ model} \\ \hline \\[-2mm] P18 & 2344.71 & 23.39 & 396.05 & & 1.66 & & & & & 2765.80 & & 2817.93 & & \\[+1mm] P18+lensing & 2344.66 & 23.39 & 396.06 & 8.79 & 1.82 & & & & & 2774.71 & & 2826.45 & & \\[+1mm] P18+lensing+non-CMB & 2346.61 & 22.64 & 396.34 & 8.94 & 1.84 & 1058.99 & 20.10 & 14.76 & 18.20 & 3888.41 & & 3940.70 & & \\[+1mm] \hline\\[-2mm] \multicolumn{15}{c}{Tilted flat $\Lambda$\textrm{CDM}$+A_L$ model} \\ \hline \\[-2mm] P18 & 2337.23 & 21.92 & 395.66 & & 1.31 & & & & & 2756.12 & $-9.68$ & 2812.41 & $-5.52$ & $-7.68$ \\[+1mm] P18+lensing & 2341.62 & 22.29 & 395.68 & 9.94 & 1.71 & & & & & 2771.24 & $-3.47$ & 2825.53 & $-0.92$ & $-1.47$ \\[+1mm] P18+lensing+non-CMB & 2342.43 & 21.99 & 395.68 & 9.74 & 2.06 & 1059.14 & 21.46 & 14.73 & 14.31 & 3881.55 & $-6.86$ & 3935.15 & $-5.55$ & $-4.86$ \\[+1mm] \hline \\[-2mm] \multicolumn{15}{c}{Untilted non-flat $\Lambda\textrm{CDM}$ model} \\ \hline \\[-2mm] P18 & 2369.95 & 22.22 & 395.69 & & 1.92 & & & & & 2789.77 & $23.97$ & 2847.14 & $29.21$ & $23.97$ \\[+1mm] P18+lensing & 2383.06 & 20.88 & 396.13 & 10.63 & 2.43 & & & & & 2813.13 & $38.42$ & 2869.06 & $42.61$ & $38.42$ \\[+1mm] P18+lensing+non-CMB & 2396.21 & 19.89 & 399.59 & 11.65 & 2.28 & 1059.51 & 20.65 & 15.68 & 12.77 & 3938.22 & $49.81$ & 
3992.71 & $52.01$ & $49.81$ \\[+1mm] \hline\\[-2mm] \multicolumn{15}{c}{Untilted non-flat $\Lambda$\textrm{CDM}$+A_L$ model} \\ \hline \\[-2mm] P18 & 2369.32 & 20.34 & 395.87 & & 2.23 & & & & & 2787.76 & $21.96$ & 2846.45 & $28.52$ & $23.96$ \\[+1mm] P18+lensing & 2378.87 & 20.09 & 395.65 & 11.25 & 2.05 & & & & & 2807.91 & $33.20$ & 2856.10 & $29.65$ & $35.20$ \\[+1mm] P18+lensing+non-CMB & 2379.11 & 19.95 & 395.82 & 10.72 & 2.06 & 1060.16 & 22.50 & 15.47 & 9.26 & 3915.05 & $26.64$ & 3973.55 & $32.85$ & $28.64$ \\[+1mm] \hline \\[-2mm] \multicolumn{15}{c}{Tilted non-flat $\Lambda\textrm{CDM}$ model [Planck $P(q)$]} \\ \hline \\[-2mm] P18 & 2336.45 & 21.29 & 395.60 & & 1.38 & & & & & 2754.73 & $-11.07$ & 2810.59 & $-7.34$ & $-9.07$ \\[+1mm] P18+lensing & 2342.29 & 21.86 & 395.66 & 10.09 & 1.63 & & & & & 2771.53 & $-3.18$ & 2826.17 & $-0.28$ & $-1.18$ \\[+1mm] P18+lensing+non-CMB & 2345.82 & 22.90 & 396.53 & 8.92 & 1.88 & 1059.00 & 20.09 & 14.70 & 18.15 & 3887.99 & $-0.42$ & 3942.07 & $1.37$ & $1.58$ \\[+1mm] \hline\\[-2mm] \multicolumn{15}{c}{Tilted non-flat $\Lambda$\textrm{CDM}$+A_L$ model [Planck $P(q)$]} \\ \hline \\[-2mm] P18 & 2336.57 & 21.51 & 395.61 & & 1.29 & & & & & 2754.99 & $-10.81$ & 2811.63 & $-6.30$ & $-6.81$ \\[+1mm] P18+lensing & 2341.32 & 22.55 & 395.71 & 9.44 & 2.12 & & & & & 2771.14 & $-3.57$ & 2827.14 & $0.69$ & $0.43$ \\[+1mm] P18+lensing+non-CMB & 2341.91 & 22.16 & 395.77 & 9.62 & 1.60 & 1059.06 & 20.61 & 14.74 & 15.90 & 3881.37 & $-7.04$ & 3936.85 & $-3.85$ & $-3.04$ \\[+1mm] \hline\\[-2mm] \multicolumn{15}{c}{Tilted non-flat $\Lambda\textrm{CDM}$ model [new $P(q)$]} \\ \hline \\[-2mm] P18 & 2338.26 & 21.42 & 396.28 & & 1.42 & & & & & 2757.38 & $-8.42$ & 2811.54 & $-6.39$ & $-6.42$ \\[+1mm] P18+lensing & 2342.99 & 21.18 & 395.90 & 9.92 & 1.76 & & & & & 2771.75 & $-2.96$ & 2825.74 & $-0.71$ & $-0.96$ \\[+1mm] P18+lensing+non-CMB & 2346.63 & 22.53 & 396.30 & 8.91 & 1.53 & 1058.99 & 20.12 & 14.75 & 17.79 & 3887.55 & $-0.86$ & 3942.22 & $1.52$ & 
$1.14$ \\[+1mm] \hline\\[-2mm] \multicolumn{15}{c}{Tilted non-flat $\Lambda\textrm{CDM}$+$A_L$ model [new $P(q)$]} \\ \hline \\[-2mm] P18 & 2337.56 & 21.31 & 395.93 & & 1.52 & & & & & 2756.33 & $-9.47$ & 2814.83 & $-3.10$ & $-5.47$ \\[+1mm] P18+lensing & 2341.21 & 22.62 & 395.75 & 9.49 & 1.37 & & & & & 2770.45 & $-4.26$ & 2827.29 & $0.84$ & $-0.26$ \\[+1mm] P18+lensing+non-CMB & 2342.85 & 21.35 & 395.81 & 9.72 & 1.53 & 1059.13 & 21.27 & 14.77 & 14.27 & 3880.69 & $-7.72$ & 3937.52 & $-3.18$ & $-3.72$ \\[+1mm] \end{tabular} \\[+1mm] Note: $\Delta\chi^2$, $\Delta\textrm{DIC}$, and $\Delta\textrm{AIC}_c$ indicate the values relative to those of the tilted flat $\Lambda\textrm{CDM}$ model for the same combination of data sets. For the tilted flat $\Lambda$CDM model AIC$_c=2819.8$ (P18), $2828.7$ (P18+lensing), and $3942.4$ (P18+lensing+non-CMB). All $\chi^2$ values are computed at the corresponding model best-fit cosmological parameter values. \end{ruledtabular} } \label{tab:chi2_lcdm} \end{table*} \subsection{Model selection} \label{subsec:model_selection} In Sec.\ \ref{subsec:cosmological_parameters} we determined and discussed the cosmological parameter mean values and error bars in four pairs of cosmological models (with $A_L = 1$ and with varying $A_L$) from P18, P18+lensing, and P18+lensing+non-CMB data, as well as the differences in the values of the cosmological parameters obtained from P18 data and BAO/BAO$^\prime$ data, from P18 data and non-CMB data, and from P18+lensing data and non-CMB data. In this subsection we utilize the DIC, eq.\ \eqref{eq:DIC}, to determine which of these models best fit some combinations of these data sets. For the P18, P18+lensing, and P18+lensing+non-CMB data sets, the values of $\Delta \textrm{AIC}_c$, $\Delta \textrm{DIC}$, and the individual contributions to the $\chi^2_{\textrm{total}}$ for each model are in Table \ref{tab:chi2_lcdm}. 
Here the Planck CMB data $\chi^2$s are: $\chi^2_{\textrm{plik}}$ from the TT data power spectra $30\leq \ell\leq 2508$ multipoles, the TE data $30\leq \ell \leq 1996$ multipoles, and the EE data $30\leq \ell \leq 1996$ multipoles; $\chi^2_{\textrm{lowl}}$ from the TT data power spectra $2\leq \ell \leq 29$ multipoles; $\chi^2_{\textrm{simall}}$ from the EE data power spectra $2\leq \ell \leq 29$ multipoles; $\chi^2_{\textrm{lensing}}$ from the lensing potential data power spectrum; and $\chi^2_{\textrm{prior}}$ from the priors for the Planck calibration and dust foreground emission. The P18+BAO/BAO$^{\prime}$ data values of $\Delta \textrm{AIC}_c$ and $\Delta \textrm{DIC}$ are provided in Tables \ref{tab:para_FL_BAO}-\ref{tab:para_TNL_ns_BAO}, whereas the corresponding P18+non-CMB data results can be found in Tables \ref{tab:para_FL_P18_nonCMB}-\ref{tab:para_TNL_P18_nonCMB}. In this subsection we do not discuss the results obtained for the untilted non-flat $\Lambda$CDM models, without and with a varying $A_L$, since, as seen in the results presented in Tables \ref{tab:chi2_lcdm}, \ref{tab:para_NL_BAO}, and \ref{tab:para_NL_P18_nonCMB}, these models are not able to fit CMB data as well as the other (tilted) models do. According to the statistical criteria we use, the untilted non-flat $\Lambda$CDM model is very strongly disfavored when it is compared with the rest of the models that allow for a tilt ($n_s$) degree of freedom. We also do not discuss results obtained when only BAO$^\prime$, BAO, (P18) lensing (but see Table \ref{tab:para_lensing} and the brief related discussion in the third paragraph in Sec.\ \ref{subsec:data_set_tensions}), or non-CMB data are considered, because these data sets do not discriminate much between models. 
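For reference, the information criteria compared in Table \ref{tab:chi2_lcdm} are assumed here to take the standard forms, with $n$ the number of free model parameters, $N$ the number of data points, $\hat{\theta}$ the posterior-mean parameter vector, and $p_D$ the effective number of constrained parameters (the precise DIC definition used is eq.\ \eqref{eq:DIC}):

```latex
\begin{align*}
\textrm{AIC}_c &= \chi^2_{\textrm{min}} + 2n + \frac{2n(n+1)}{N-n-1}\,, \\
\textrm{DIC}   &= \chi^2(\hat{\theta}) + 2p_D\,, \qquad
p_D = \overline{\chi^2(\theta)} - \chi^2(\hat{\theta})\,,
\end{align*}
```

where the overbar denotes an average over the posterior distribution; $\Delta\textrm{AIC}_c$ and $\Delta\textrm{DIC}$ are then taken relative to the tilted flat $\Lambda$CDM model, as in the table note.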
From Tables \ref{tab:para_FL_BAO}-\ref{tab:para_TNL_ns_BAO} and \ref{tab:para_FL_P18_nonCMB}-\ref{tab:para_TNL_P18_nonCMB} one sees that for these three data sets the DIC values for all models, including the untilted non-flat $\Lambda$CDM model, are very similar. In order to find more significant differences among the models under study we must include CMB data. In what follows we summarize the results we find for the three tilted models from a number of different combinations of data sets. For clarity we focus on DIC results, since it is a more reliable indicator \cite{DIC, Liddle:2007fy}. The tables also list the AIC$_c$ values. {\bf P18.} The results for these data are listed in Table \ref{tab:chi2_lcdm}. When $A_L = 1$, the non-flat Planck $P(q)$ and the non-flat new $P(q)$ models are strongly favored over the tilted flat model, while the Planck $P(q)$ model is weakly favored over the new $P(q)$ model. When $A_L$ is allowed to vary, the non-flat Planck $P(q)$ model is weakly favored over the flat model, with both models being positively favored over the non-flat new $P(q)$ model. The flat$+A_L$ model is positively favored over the flat one, the Planck $P(q)$ model is weakly favored over the Planck $P(q)+A_L$ one, and the new $P(q)$ model is positively favored over the new $P(q)+A_L$ one. It is interesting that, in contrast to the varying $A_L$ case, when $A_L=1$ both tilted non-flat models are strongly favored over the tilted flat $\Lambda$CDM model. {\bf P18+lensing.} The results for these data are listed in Table \ref{tab:chi2_lcdm}. These data provide only weak discrimination between models. When $A_L = 1$, the non-flat new $P(q)$ model is weakly favored over the non-flat Planck $P(q)$ model and both are weakly favored over the flat model. When $A_L$ is allowed to vary, the tilted flat model is weakly favored over both non-flat models, while the non-flat Planck $P(q)$ model is weakly favored over the non-flat new $P(q)$ model. 
The flat$+A_L$ model is weakly favored over the flat one, the Planck $P(q)$ model is weakly favored over the Planck $P(q)+A_L$ one, and the new $P(q)$ model is weakly favored over the new $P(q)+A_L$ one. {\bf P18+BAO/P18+BAO$^\prime$.} The results for these data are listed in Tables \ref{tab:para_FL_BAO}, \ref{tab:para_NL_ns_BAO}, and \ref{tab:para_TNL_ns_BAO}. We discuss the P18+BAO data and P18+BAO$^\prime$ data results together since the conclusions are very similar. When $A_L = 1$, the tilted flat model is weakly (positively) favored over the non-flat Planck and new $P(q)$ models, with the non-flat new $P(q)$ model weakly (weakly) favored over the non-flat Planck $P(q)$ model, for P18+BAO (P18+BAO$^\prime$) data. When $A_L$ is allowed to vary, the tilted flat model is positively (weakly) favored over the non-flat Planck (new) $P(q)$ model, and the non-flat new $P(q)$ model is weakly favored over the non-flat Planck $P(q)$ model, for P18+BAO data, while for P18+BAO$^\prime$ data the tilted flat model is weakly favored over both non-flat Planck and new $P(q)$ models, and the non-flat new $P(q)$ model is weakly favored over the non-flat Planck $P(q)$ model. The flat$+A_L$ model is strongly (positively) favored over the flat one, the Planck $P(q)+A_L$ model is positively (strongly) favored over the Planck $P(q)$ one, and the new $P(q)+A_L$ model is positively (strongly) favored over the new $P(q)$ one for P18+BAO (P18+BAO$^\prime$) data. {\bf P18+non-CMB.} The results for these data are listed in Tables \ref{tab:para_FL_P18_nonCMB}, \ref{tab:para_NL_ns_P18_nonCMB}, and \ref{tab:para_TNL_P18_nonCMB}. Since the dominant component of non-CMB data is BAO/BAO$^\prime$ data, in the P18+non-CMB case here we find similar conclusions to the ones presented in the P18+BAO/P18+BAO$^\prime$ cases above. 
When $A_L = 1$, the tilted flat model is positively (weakly) favored over the non-flat Planck (new) $P(q)$ model with the non-flat new $P(q)$ model weakly favored over the non-flat Planck $P(q)$ model. When $A_L$ is allowed to vary, the tilted flat model is weakly favored over the non-flat Planck $P(q)$ and non-flat new $P(q)$ models, with the non-flat new $P(q)$ model weakly favored over the non-flat Planck $P(q)$ model. The flat$+A_L$ model is strongly favored over the flat one, the Planck $P(q)+A_L$ model is strongly favored over the Planck $P(q)$ one, and the new $P(q)+A_L$ model is strongly favored over the new $P(q)$ one. {\bf P18+lensing+non-CMB.} The results for these data are listed in Table \ref{tab:chi2_lcdm}. When $A_L = 1$, the tilted flat model is weakly favored over the non-flat Planck $P(q)$ and non-flat new $P(q)$ models with the non-flat Planck $P(q)$ model weakly favored over the non-flat new $P(q)$ model. When $A_L$ is allowed to vary, the tilted flat model is weakly (positively) favored over the non-flat Planck (new) $P(q)$ model, with the non-flat Planck $P(q)$ model weakly favored over the non-flat new $P(q)$ model. The flat$+A_L$ model is positively favored over the flat one, the Planck $P(q)+A_L$ model is positively favored over the Planck $P(q)$ one, and the new $P(q)+A_L$ model is positively favored over the new $P(q)$ one. In summary: P18 data and P18+non-CMB data both strongly disfavor the tilted flat $\Lambda$CDM model with $A_L =1$ relative to some of the tilted $\Omega_k < 0$ or varying $A_L$ options; P18+lensing data are largely agnostic; and P18+lensing+non-CMB data, P18+BAO data, and P18+BAO$^\prime$ data all positively favor the varying $A_L$ options over the $A_L=1$ cases. 
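The qualitative labels used above (``weakly,'' ``positively,'' and ``strongly'' favored) follow from DIC differences between pairs of models. The following minimal sketch illustrates the bookkeeping; the numerical thresholds (2 and 6) are the conventional Jeffreys-type cutoffs often adopted with DIC, assumed here for illustration rather than quoted from the text.

```python
# Sketch: turn a DIC difference into a qualitative preference label.
# The thresholds 2 and 6 are assumed conventional Jeffreys-type cutoffs,
# not values quoted in this section; a lower DIC indicates a favored model.

def dic_preference(dic_a, dic_b):
    """Compare two models by DIC; returns (favored, strength, delta)."""
    delta = dic_b - dic_a          # > 0 means model a has the lower DIC
    favored = "a" if delta > 0 else "b"
    magnitude = abs(delta)
    if magnitude < 2:
        strength = "weak"
    elif magnitude < 6:
        strength = "positive"
    else:
        strength = "strong"
    return favored, strength, delta

# P18 data, Our priors: DIC = 2810.6 for the tilted non-flat Planck P(q)
# model and 2817.9 for the tilted flat model, so the non-flat Planck P(q)
# model is strongly favored (Delta DIC = 7.3 > 6).
print(dic_preference(2810.6, 2817.9))
```

With the lensing-alone DIC entries (13.3 vs.\ 14.2) the same function returns only a weak preference, in line with how similar those DIC values are.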
\begin{table*} \caption{$\log_{10} \mathcal{I}$ and tension ($\sigma$ and $p$) parameters for P18 data versus lensing data, P18 data versus BAO (BAO$^\prime$) data, P18 data versus non-CMB data, and P18+lensing data versus non-CMB data in the six tilted flat and non-flat $\Lambda$CDM models. Table \ref{tab:Priors} lists the Our, Handley, and Handley+$\Omega_k$ priors. } {\scriptsize \begin{ruledtabular} \begin{tabular}{lccccccc} \\[-1mm] & \multicolumn{7}{c}{Tilted flat $\Lambda$CDM model} \\[+1mm] \cline{2-8}\\[-1mm] Data: & P18 vs.\ lensing & P18 vs.\ lensing & P18 vs.\ lensing & P18 vs.\ BAO & P18 vs.\ BAO$^\prime$ & P18 vs.\ non-CMB & P18+lensing vs.\ non-CMB \\[+1mm] Prior: & Our & Handley & Handley+$\Omega_k$ & Our & Our & Our & Our \\[+1mm] \hline \\[-1mm] $\log_{10} {\mathcal I}$ & $1.240$ & $1.166$ & $\ldots$ & $0.132$ & $0.707$ & $0.296$ & $0.029$ \\[+1mm] $\sigma$ & $0.718$ & $0.390$ & $\ldots$ & $1.533$ & $0.426$ & $1.749$ & $1.747$ \\[+1mm] $p$ (\%) & $47.3$ & $69.7$ & $\ldots$ & $12.5$ & $67.0$ & $8.03$ & $8.06$ \\[+1mm] \hline \hline \\[-1mm] \\[-1mm] & \multicolumn{7}{c}{Tilted flat $\Lambda$CDM$+A_L$ model} \\[+1mm] \cline{2-8}\\[-1mm] Data: & P18 vs.\ lensing & P18 vs.\ lensing & P18 vs.\ lensing & P18 vs.\ BAO & P18 vs.\ BAO$^\prime$ & P18 vs.\ non-CMB & P18+lensing vs.\ non-CMB \\[+1mm] Prior: & Our & Handley & Handley+$\Omega_k$ & Our & Our & Our & Our \\[+1mm] \hline \\[-1mm] $\log_{10} {\mathcal I}$ & $\ldots$ & $\ldots$ & $\ldots$ & $0.286$ & $0.810$ & $1.033$ & $1.033$ \\[+1mm] $\sigma$ & $\ldots$ & $\ldots$ & $\ldots$ & $1.402$ & $0.371$ & $0.835$ & $0.774$ \\[+1mm] $p$ (\%) & $\ldots$ & $\ldots$ & $\ldots$ & $16.1$ & $71.0$ & $40.4$ & $43.9$ \\[+1mm] \hline \hline \\[-1mm] \\[-1mm] & \multicolumn{7}{c}{Tilted non-flat $\Lambda$CDM model [Planck $P(q)$]} \\[+1mm] \cline{2-8}\\[-1mm] Data: & P18 vs.\ lensing & P18 vs.\ lensing & P18 vs.\ lensing & P18 vs.\ BAO & P18 vs.\ BAO$^\prime$ & P18 vs.\ non-CMB & P18+lensing vs.\ non-CMB \\[+1mm] 
Prior: & Our & Handley & Handley+$\Omega_k$ & Our & Our & Our & Our \\[+1mm] \hline \\[-1mm] $\log_{10} {\mathcal I}$ & $-0.486$ & $-0.316$ & $-0.360$ & $-1.236$ & $-0.891$ & $-1.263$ & $0.297$ \\[+1mm] $\sigma$ & $2.479$ & $2.411$ & $2.403$ & $3.000$ & $2.478$ & $3.005$ & $1.837$\\[+1mm] $p$ (\%) & $1.32$ & $1.59$ & $1.63$ & $0.270$ & $1.32$ & $0.265$ & $6.62$ \\[+1mm] \hline \hline \\[-1mm] \\[-1mm] & \multicolumn{7}{c}{Tilted non-flat $\Lambda$CDM$+A_L$ model [Planck $P(q)$]} \\[+1mm] \cline{2-8}\\[-1mm] Data: & P18 vs.\ lensing & P18 vs.\ lensing & P18 vs.\ lensing & P18 vs.\ BAO & P18 vs.\ BAO$^\prime$ & P18 vs.\ non-CMB & P18+lensing vs.\ non-CMB \\[+1mm] Prior: & Our & Handley & Handley+$\Omega_k$ & Our & Our & Our & Our \\[+1mm] \hline \\[-1mm] $\log_{10} {\mathcal I}$ & $\ldots$ & $\ldots$ & $\ldots$ & $0.182$ & $0.847$ & $0.972$ & $1.641$ \\[+1mm] $\sigma$ & $\ldots$ & $\ldots$ & $\ldots$ & $1.460$ & $0.465$ & $0.793$ & $0.516$ \\[+1mm] $p$ (\%) & $\ldots$ & $\ldots$ & $\ldots$ & $14.4$ & $64.2$ & $42.8$ & $60.6$ \\[+1mm] \hline \hline \\[-1mm] \\[-1mm] & \multicolumn{7}{c}{Tilted non-flat $\Lambda$CDM model [new $P(q)$]} \\[+1mm] \cline{2-8}\\[-1mm] Data: & P18 vs.\ lensing & P18 vs.\ lensing & P18 vs.\ lensing & P18 vs.\ BAO & P18 vs.\ BAO$^\prime$ & P18 vs.\ non-CMB & P18+lensing vs.\ non-CMB \\[+1mm] Prior: & Our & Handley & Handley+$\Omega_k$ & Our & Our & Our & Our \\[+1mm] \hline \\[-1mm] $\log_{10} {\mathcal I}$ & $-0.062$ & $-0.089$ & $-0.057$ & $-0.880$ & $-0.526$ & $-0.806$ & $0.143$ \\[+1mm] $\sigma$ & $2.201$ & $1.887$ & $1.843$ & $2.604$ & $2.108$ & $2.577$ & $1.886$ \\[+1mm] $p$ (\%) & $2.77$ & $5.91$ & $6.54$ & $0.922$ & $3.50$ & $0.996$ & $5.93$ \\[+1mm] \hline \hline \\[-1mm] \\[-1mm] & \multicolumn{7}{c}{Tilted non-flat $\Lambda$CDM$+A_L$ model [new $P(q)$]} \\[+1mm] \cline{2-8}\\[-1mm] Data: & P18 vs.\ lensing & P18 vs.\ lensing & P18 vs.\ lensing & P18 vs.\ BAO & P18 vs.\ BAO$^\prime$ & P18 vs.\ non-CMB & P18+lensing vs.\ non-CMB 
\\[+1mm] Prior: & Our & Handley & Handley+$\Omega_k$ & Our & Our & Our & Our \\[+1mm] \hline \\[-1mm] $\log_{10} {\mathcal I}$ & $\ldots$ & $\ldots$ & $\ldots$ & $1.066$ & $1.655$ & $1.798$ & $1.500$ \\[+1mm] $\sigma$ & $\ldots$ & $\ldots$ & $\ldots$ & $1.052$ & $0.145$ & $0.402$ & $0.573$ \\[+1mm] $p$ (\%) & $\ldots$ & $\ldots$ & $\ldots$ & $29.3$ & $88.4$ & $68.7$ & $56.7$ \\[+1mm] \end{tabular} \\[+1mm] \begin{flushleft} Note: The statistical estimator values in the tilted flat $\Lambda$CDM model for the Handley+$\Omega_k$ priors are the same as for the Handley priors because $\Omega_k = 0$ in the flat model. \end{flushleft} \end{ruledtabular} } \label{tab:para_sigmap} \end{table*} \subsection{Data set tensions} \label{subsec:data_set_tensions} In this subsection we check whether there is concordance (discordance) between pairs of some of the data sets we study (in the context of a given cosmological model), as well as whether or not this concordance (discordance) is model independent. To do this, we use the two Sec.\ \ref{sec:method} statistical estimators, in eq.\ \eqref{eq:Tension_estimator_1} and in eqs.\ \eqref{eq:Tension_estimator_2} and \eqref{eq:Tension_estimator_2_sigma}. The values of these statistical estimators for the six tilted flat and non-flat $\Lambda$CDM ($+A_L$) models are listed in Table \ref{tab:para_sigmap}; we do not compute these estimators in the untilted non-flat $\Lambda$CDM model which does not include the tilt ($n_s$) degree of freedom that is strongly favored by data. As in Sec.\ \ref{subsec:model_selection}, here we only study pairs of data sets in which one of the data sets is or includes the P18 data set. Conclusions based on either of the two statistical estimators qualitatively agree, for the five pairs of data sets we compare in this subsection, as discussed next. \begin{table*} \caption{Mean and 68.3\% confidence limits of tilted flat and non-flat $\Lambda\textrm{CDM}$ model parameters constrained by lensing data alone. 
Table \ref{tab:Priors} lists the Our, Handley, and Handley+$\Omega_k$ priors. The Hubble constant $H_0$ has units of km s$^{-1}$ Mpc$^{-1}$. } {\tiny \begin{ruledtabular} \begin{tabular}{lccc} \\[-1mm] & \multicolumn{3}{c}{Lensing data constraints with Our priors} \\[+1mm] \cline{2-4}\\[-1mm] Parameter & Tilted flat $\Lambda$CDM & Tilted non-flat $\Lambda$CDM [Planck $P(q)$] & Tilted non-flat $\Lambda$CDM [new $P(q)$] \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.049 \pm 0.023$ & $0.052 \pm 0.027$ & $0.048 \pm 0.026$ \\[+1mm] $\Omega_c h^2$ & $0.125 \pm 0.032$ & $0.120 \pm 0.023$ & $0.116 \pm 0.022$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.016 \pm 0.022$ & $1.41 \pm 0.33$ & $1.47 \pm 0.27$ \\[+1mm] $\tau$ & $0.0542$ & $0.0483$ & $0.0525$ \\[+1mm] $\Omega_k$ & $\ldots$ & $-0.26 \pm 0.11$ & $-0.279 \pm 0.095$ \\[+1mm] $n_s$ & $0.9649$ & $0.9706$ & $0.9654$ \\[+1mm] $\ln(10^{10} A_s)$ & $3.23 \pm 0.11$ & $3.10 \pm 0.19$ & $3.13 \pm 0.16$ \\[+1mm] \hline \\[-1mm] $H_0$ & $83 \pm 10$ & $65 \pm 17$ & $66 \pm 16$ \\[+1mm] $\Omega_m$ & $0.255 \pm 0.070$ & $0.54 \pm 0.48$ & $0.48 \pm 0.36$ \\[+1mm] $\sigma_8$ & $0.779 \pm 0.082$ & $0.85 \pm 0.16$ & $0.88 \pm 0.15$ \\[+1mm] \hline \\[-1mm] $\chi_{\textrm{min}}^2$ & $3.67$ & $3.12$ & $3.38$ \\[+1mm] $\textrm{DIC}$ (lensing) & $14.2$ & $13.3$ & $13.9$ \\[+1mm] $\textrm{AIC}$ (lensing) & $13.7$ & $15.1$ & $15.4$ \\[+1mm] \hline \\[-1mm] $\textrm{DIC}$ (P18) & $2817.9$ & $2810.6$ & $2811.5$ \\[+1mm] $\textrm{DIC}$ (P18+lensing) & $2826.5$ & $2826.2$ & $2825.7$ \\[+1mm] $\log_{10} \mathcal{I}$ & $1.240$ & $-0.486$ & $-0.062$ \\[+1mm] \hline \hline \\[-1mm] & \multicolumn{3}{c}{Lensing data constraints with Handley priors} \\[+1mm] \cline{2-4}\\[-1mm] Parameter & Tilted flat $\Lambda$CDM & Tilted non-flat $\Lambda$CDM [Planck $P(q)$] & Tilted non-flat $\Lambda$CDM [new $P(q)$] \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.0220 \pm 0.0018$ & $0.0221 \pm 0.0017$ & $0.0220 \pm 0.0017$ \\[+1mm] $\Omega_c h^2$ & $0.1121 \pm 0.0093$ & 
$0.1117 \pm 0.0099$ & $0.1134 \pm 0.0097$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.0397 \pm 0.0058$ & $1.0395 \pm 0.0058$ & $1.0395 \pm 0.0059$ \\[+1mm] $\tau$ & $0.21 \pm 0.11$ & $0.20 \pm 0.11$ & $0.21 \pm 0.11$ \\[+1mm] $\Omega_k$ & $\ldots$ & $-0.032 \pm 0.040$ & $-0.029 \pm 0.040$ \\[+1mm] $n_s$ & $0.957 \pm 0.043$ & $0.954 \pm 0.043$ & $0.939 \pm 0.033$ \\[+1mm] $\ln(10^{10} A_s)$ & $3.26 \pm 0.15$ & $3.20 \pm 0.16$ & $3.21 \pm 0.16$ \\[+1mm] \hline \\[-1mm] $H_0$ & $69.7 \pm 3.9$ & $62 \pm 14$ & $63 \pm 14$ \\[+1mm] $\Omega_m$ & $0.281 \pm 0.050$ & $0.40 \pm 0.15$ & $0.39 \pm 0.15$ \\[+1mm] $\sigma_8$ & $0.869 \pm 0.064$ & $0.826 \pm 0.083$ & $0.836 \pm 0.084$ \\[+1mm] \hline \\[-1mm] $\chi_{\textrm{min}}^2$ & $6.81$ & $6.89$ & $6.79$ \\[+1mm] $\textrm{DIC}$ (lensing) & $13.9$ & $14.1$ & $13.8$ \\[+1mm] $\textrm{AIC}$ (lensing) & $20.8$ & $22.9$ & $22.8$ \\[+1mm] \hline \\[-1mm] $\textrm{DIC}$ (P18) & $2817.9$ & $2810.6$ & $2811.5$ \\[+1mm] $\textrm{DIC}$ (P18+lensing) & $2826.5$ & $2826.2$ & $2825.7$ \\[+1mm] $\log_{10} \mathcal{I}$ & $1.166$ & $-0.316$ & $-0.088$ \\[+1mm] \hline \hline \\[-1mm] & \multicolumn{3}{c}{Lensing data constraints with Handley$+\Omega_k$ priors } \\[+1mm] \cline{2-4}\\[-1mm] Parameter & Tilted flat $\Lambda$CDM & Tilted non-flat $\Lambda$CDM [Planck $P(q)$] & Tilted non-flat $\Lambda$CDM [new $P(q)$] \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $\cdots$ & $0.0221 \pm 0.0017$ & $0.0221 \pm 0.0017$ \\[+1mm] $\Omega_c h^2$ & $\cdots$ & $0.1088 \pm 0.0088$ & $0.1104 \pm 0.0089$ \\[+1mm] $100\theta_\textrm{MC}$ & $\cdots$ & $1.0395 \pm 0.0058$ & $1.0396 \pm 0.0059$ \\[+1mm] $\tau$ & $\cdots$ & $0.20 \pm 0.11$ & $0.20 \pm 0.11$ \\[+1mm] $\Omega_k$ & $\cdots$ & $-0.123 \pm 0.095$ & $-0.122 \pm 0.096$ \\[+1mm] $n_s$ & $\cdots$ & $0.951 \pm 0.041$ & $0.939 \pm 0.032$ \\[+1mm] $\ln(10^{10} A_s)$ & $\cdots$ & $3.11 \pm 0.16$ & $3.11 \pm 0.16$ \\[+1mm] \hline \\[-1mm] $H_0$ & $\cdots$ & $48 \pm 15$ & $48 \pm 15$ \\[+1mm] $\Omega_m$ & $\cdots$ & 
$0.70 \pm 0.33$ & $0.71 \pm 0.33$ \\[+1mm] $\sigma_8$ & $\cdots$ & $0.745 \pm 0.096$ & $0.75 \pm 0.10$ \\[+1mm] \hline \\[-1mm] $\chi_{\textrm{min}}^2$ & $\cdots$ & $6.79$ & $6.77$ \\[+1mm] $\textrm{DIC}$ (lensing) & $\cdots$ & $13.9$ & $13.9$ \\[+1mm] $\textrm{AIC}$ (lensing) & $\cdots$ & $22.8$ & $22.8$ \\[+1mm] \hline \\[-1mm] $\textrm{DIC}$ (P18) & $\cdots$ & $2810.6$ & $2811.5$ \\[+1mm] $\textrm{DIC}$ (P18+lensing) & $\cdots$ & $2826.2$ & $2825.7$ \\[+1mm] $\log_{10} \mathcal{I}$ & $\cdots$ & $-0.360$ & $-0.057$ \\[+1mm] \end{tabular} \\[+1mm] \begin{flushleft} Note: $\mathcal{I}=\exp(-\mathcal{F}/2)$ where $\mathcal{F}=\textrm{DIC(P18+lensing)}-\textrm{DIC(P18)}-\textrm{DIC(lensing)}$. The cosmological parameter values in the tilted flat $\Lambda$CDM model for the Handley+$\Omega_k$ priors are the same as for the Handley priors because $\Omega_k = 0$ in the flat model. \end{flushleft} \end{ruledtabular} } \label{tab:para_lensing} \end{table*} \begin{itemize} \item {\bf P18 vs.\ lensing}. Since, as mentioned earlier, lensing data (see Sec.\ \ref{sec:data}) alone do not place significant constraints on cosmological parameters (even if we fix the values of some of them), the role played by the priors is more important in lensing data alone analyses than in other cases. Therefore, in this case, we use three different sets of priors (see Table \ref{tab:Priors}) in order to determine whether and how the lensing data alone cosmological parameter constraints and statistical estimator values depend on the priors used. In all three cases we report results obtained from converged chains. Due to the weak constraining power of lensing data alone, it is not possible to reach convergence when the $A_L$ parameter is allowed to vary. Consequently, we provide results only for the $A_L=1$ cases. Here we first briefly comment on the lensing data alone cosmological parameter constraints, which do depend on the set of priors used, see Table \ref{tab:para_lensing}. 
For instance, if we look at the value of the curvature parameter $\Omega_k$ (which is most affected by the choice of prior) obtained by employing Our priors, for the tilted non-flat $\Lambda$CDM Planck (new) $P(q)$ model, $\Omega_k= -0.26\pm 0.11$ ($\Omega_k=-0.279\pm 0.095$), we find a 1.9$\sigma$ (2.4$\sigma$) difference with the Handley priors analysis value $\Omega_k=-0.032\pm 0.040$ ($\Omega_k=-0.029\pm 0.040$) and a 0.94$\sigma$ (1.2$\sigma$) difference with the Handley+$\Omega_k$ priors analysis value $\Omega_k=-0.123\pm 0.095$ ($\Omega_k=-0.122\pm 0.096$). Reassuringly, we find that when we broaden the prior for $\Omega_k$, as we do when we move from Handley priors to Handley+$\Omega_k$ priors, the results get closer to those obtained with Our priors, the broadest priors we use. Additionally, our lensing data alone analysis (and the resulting cosmological parameter constraints) differs from that of the Planck team (Sec.\ 3.2.1 of Ref.\ \cite{Planck:2018lbu}) in that we fix $n_s$ and vary $\Omega_b h^2$ freely, whereas the Planck team use Gaussian priors for $n_s$ and $\Omega_b h^2$. Also, in our analysis $0.2 < h <1.0$ was chosen as the prior, while the Planck team used $0.4 < h < 1.0$. One notable difference is that when Our priors are used the value we find for $\Omega_b h^2$ is larger than the Gaussian prior value ($\Omega_b h^2 = 0.0222 \pm 0.0005$) adopted by the Planck team. In the tilted flat $\Lambda$CDM model we find $\Omega_b h^2=0.049 \pm 0.023$, and similar results are seen in the tilted non-flat models with the Planck and the new $P(q)$. However, when the Handley priors and the Handley+$\Omega_k$ priors are used, due to the very narrow range of $\Omega_b h^2$ (between $0.019$ and $0.025$) in these priors, such a deviation disappears, and $\Omega_b h^2$ is constrained to very consistent values in the tilted flat and the two tilted non-flat $\Lambda$CDM models, values that are also consistent with the Gaussian prior value adopted by the Planck team. 
Given the significant prior dependence of the lensing data alone cosmological constraints, it is not possible to compare lensing data alone cosmological constraints to the cosmological constraints we have derived from the other data sets. On the other hand, looking at Table \ref{tab:para_sigmap}, we do not see significant differences in the statistical estimator values from lensing only data analyses for the three different priors. This being the case, in the following, for the sake of consistency with our other discussions, we discuss only the lensing data alone results obtained using Our priors. For the tilted flat $\Lambda$CDM model we do not find discordance between P18 data and lensing data. We find $\textrm{log}_{10}\mathcal{I}=1.240$, which indicates a {\it strong} consistency between the two data sets. A similar conclusion is indicated by the other statistical estimator, $\sigma=0.718$ and $p=47.3\%$. We conclude that P18 and lensing data can be jointly analyzed in the context of the tilted flat $\Lambda$CDM model. Looking at the results for the tilted non-flat $\Lambda$CDM Planck $P(q)$ model, for the first statistical estimator $\textrm{log}_{10}\mathcal{I}=-0.486$, which is on the verge of indicating a {\it substantial} discordance, while for the second one $\sigma=2.479$ and $p = 1.32\%$, which indicate a moderate tension. These results, however, may not be significant enough to conclude that P18 and lensing data cannot be used together in an analysis of the tilted non-flat $\Lambda$CDM Planck $P(q)$ model. In the tilted non-flat new $P(q)$ $\Lambda$CDM model, the two statistical estimators considered here point to somewhat different conclusions. While for the first one we get $\textrm{log}_{10}\mathcal{I}=-0.062$, which indicates neither consistency nor inconsistency between the two data sets, the second one, $\sigma=2.201$ and $p = 2.77\%$, indicates a moderate tension between the two data sets. 
Taken together these results indicate that there is at most moderate inconsistency between P18 and lensing data within the tilted non-flat new $P(q)$ $\Lambda$CDM model. \item {\bf P18 vs.\ BAO$^\prime$}. In the context of the tilted flat $\Lambda$CDM model there is no sign of discordance between these two data sets. We find $\textrm{log}_{10}\mathcal{I}=0.707$, which indicates a {\it substantial} consistency. The other statistical estimator points to a similar conclusion, with $\sigma=0.426$ and $p=67\%$. Very similar results are found for the tilted flat $\Lambda$CDM+$A_L$ model. The value $\textrm{log}_{10}\mathcal{I}=0.810$, once again, indicates a {\it substantial} consistency between P18 and BAO$^\prime$ data, whereas for the second estimator we find $\sigma=0.371$ and $p=71\%$. The P18 and BAO$^\prime$ data sets are mutually consistent and can be jointly analyzed in the tilted flat $\Lambda$CDM (+$A_L$) models. On the other hand, the opposite is true in the tilted non-flat $\Lambda$CDM models (with $A_L = 1$). The comparison of P18 and BAO$^\prime$ data in the tilted non-flat Planck $P(q)$ model results in $\textrm{log}_{10}\mathcal{I}=-0.891$, which indicates a {\it substantial} disagreement between these two data sets. Reassuringly, the second statistical estimator points to the same conclusion, in particular, $\sigma =2.478$ and $p =1.32\%$. As expected (see Sec.\ \ref{sec:P18_vs_BAO}), inclusion of the varying $A_L$ parameter reduces the tensions with respect to the $A_L=1$ case. For the Planck $P(q)$+$A_L$ model we find $\textrm{log}_{10}\mathcal{I}=0.847$, which indicates a {\it substantial} degree of consistency between the two data sets, and $\sigma=0.465$ and $p=64.2\%$; therefore, there is no tension between P18 data and BAO$^\prime$ data in this model. We noted in Sec.\ \ref{sec:P18_vs_BAO} that the tilted non-flat $\Lambda$CDM new $P(q)$ model better accommodates P18 and BAO$^\prime$ data than does the tilted non-flat $\Lambda$CDM Planck $P(q)$ model. 
In particular, in the tilted non-flat $\Lambda$CDM new $P(q)$ model when $A_L=1$ we find $\textrm{log}_{10}\mathcal{I}=-0.526$, which is just within the range of {\it substantial} inconsistency. According to the values obtained for the other statistical estimator, $\sigma = 2.108$ and $p=3.50\%$, there is a moderate tension between the two data sets. The inclusion of a varying $A_L$ parameter in the analysis completely changes the conclusions with respect to the $A_L=1$ case. For the new $P(q)$+$A_L$ model we find $\textrm{log}_{10}\mathcal{I}=1.655$, indicating {\it strong} agreement. The values $\sigma=0.145$ and $p=88.4\%$ support this conclusion. \item {\bf P18 vs.\ BAO}. We comment now on the results obtained when the tension between P18 data and BAO data is studied in the context of the different cosmological models. We note that the BAO data set includes some $f\sigma_8$ data points which, as we shall see, induce some changes in the results with respect to the P18 data and BAO$^\prime$ data case. Neither statistical estimator indicates significant disagreement between P18 data and BAO data for the tilted flat $\Lambda$CDM model with $A_L=1$. For the first one we have $\textrm{log}_{10}\mathcal{I}= 0.132$, which indicates neither consistency nor inconsistency, and this is supported by the second one, for which we obtain $\sigma=1.533$ and $p=12.5\%$. It is important to note that in this case the statistical estimators are closer to indicating a moderate tension than they are in the P18 data vs.\ BAO$^\prime$ data case. This is related to the previously mentioned $\sigma_8$ tension. We get similar results for the tilted flat $\Lambda$CDM+$A_L$ model, in which case we find $\textrm{log}_{10}\mathcal{I}=0.286$, which again indicates neither agreement nor disagreement, while for the second estimator $\sigma = 1.402$ and $p=16.1\%$, and again no tension is revealed. 
In view of these results we find no evidence that P18 and BAO data cannot be considered together in the analysis of the tilted flat $\Lambda$CDM (+$A_L$) models. Given the P18 data vs.\ BAO$^\prime$ data comparison results in the tilted non-flat $\Lambda$CDM models, it should not come as a surprise that we find tensions when P18 data and BAO data are compared. In the tilted non-flat Planck $P(q)$ $\Lambda$CDM model with $A_L =1$ we find $\textrm{log}_{10}\mathcal{I}=-1.236$ for the first estimator, and $\sigma=3.000$ and $p=0.27\%$ for the second one. Both results indicate a {\it strong} inconsistency between the two data sets. This level of tension fades when the $A_L$ parameter is allowed to vary. For the Planck $P(q)$+$A_L$ model we obtain $\textrm{log}_{10}\mathcal{I}=0.182$, which indicates neither consistency nor inconsistency, and $\sigma=1.460$ and $p=14.4\%$. The P18 and BAO data can be jointly used in the Planck $P(q)$+$A_L$ model. As happens in the case of the P18 data vs.\ BAO$^\prime$ data comparison, the tilted non-flat new $P(q)$ $\Lambda$CDM model performs better than the Planck $P(q)$ model when it comes to accommodating the P18 and BAO data sets. For the $A_L=1$ case we find $\textrm{log}_{10}\mathcal{I}=-0.880$, revealing {\it substantial} disagreement, while for the other estimator $\sigma=2.604$ and $p = 0.922\%$, which indicates a moderate tension. Once again, the tensions observed when $A_L=1$, in the context of the non-flat models, disappear when this parameter is allowed to vary. For the tilted non-flat $\Lambda$CDM+$A_L$ new $P(q)$ model, we find $\textrm{log}_{10}\mathcal{I}=1.066$, which points to a {\it strong} consistency between the two data sets, and for the other estimator we obtain $\sigma=1.052$ and $p=29.3\%$. The P18 and BAO data can be jointly used in the new $P(q)$+$A_L$ model. 
In summary, for the tilted non-flat models: in the Planck $P(q)$ model P18 and BAO data should not be jointly analyzed unless the $A_L$ parameter is allowed to vary, while in the new $P(q)$ model these two data sets can be considered together to put constraints on the cosmological parameters even when $A_L =1$. \item {\bf P18 vs.\ non-CMB}. We now discuss whether or not there is tension between P18 data and non-CMB data in the context of the different cosmological models. Results similar to the ones obtained in the P18 data and BAO$^\prime$/BAO data comparisons are expected, since BAO$^\prime$ data and BAO data are dominant components of non-CMB data. For the tilted flat $\Lambda$CDM model with $A_L = 1$ we find $\textrm{log}_{10}\mathcal{I}=0.296$, which indicates neither agreement nor disagreement, and $\sigma=1.749$ together with $p=8.03\%$, with neither of the two estimators pointing to tension between P18 and non-CMB data in this model. Including a varying $A_L$ in the model improves the agreement between the two data sets. For the tilted flat $\Lambda$CDM+$A_L$ model we find $\textrm{log}_{10}\mathcal{I}=1.033$, which points to {\it strong} consistency between the two data sets, and for the other estimator we get $\sigma=0.835$ and $p=40.4\%$, a result consistent with the first. There is no tension that prevents us from jointly analyzing P18 data and non-CMB data in the tilted flat $\Lambda$CDM (+$A_L$) models. In the case of the tilted non-flat Planck $P(q)$ $\Lambda$CDM model with $A_L = 1$, the value $\textrm{log}_{10}\mathcal{I}=-1.263$ indicates a {\it strong} inconsistency between the P18 and non-CMB data sets. The second statistical estimator provides similar results, $\sigma = 3.005$ and $p=0.265\%$. In the light of these results, we conclude that P18 data and non-CMB data should not be jointly analyzed in the context of this tilted non-flat $A_L =1$ model. 
For the Planck $P(q)$+$A_L$ model, we get $\textrm{log}_{10}\mathcal{I}=0.972$, so {\it substantial} agreement is observed between P18 data and non-CMB data in this case. In agreement with the result obtained employing the first statistical estimator, for the second one we find $\sigma=0.793$ and $p=42.8\%$, which again does not indicate any tension. Once again the tilted non-flat $\Lambda$CDM new $P(q)$ model does better in jointly accommodating P18 and non-CMB data than does the tilted non-flat $\Lambda$CDM Planck $P(q)$ model. In the new $P(q)$ case with $A_L =1$, the values obtained for both statistical estimators, $\textrm{log}_{10}\mathcal{I}=-0.806$ and $\sigma=2.577$ and $p=0.996\%$, indicate a {\it substantial} discordance between P18 data and non-CMB data in the context of this model. Allowing $A_L$ to vary reduces the tension found in the $A_L=1$ cases. For the new $P(q)$+$A_L$ model we get $\textrm{log}_{10}\mathcal{I}=1.798$, which points to a {\it strong} agreement between the two data sets, whereas for the second estimator we find $\sigma=0.402$ and $p=68.7\%$ and no tension. Therefore, we may say that in the context of the tilted non-flat $\Lambda$CDM (+$A_L$) new $P(q)$ models, P18 and non-CMB data can be jointly analyzed. \item {\bf P18+lensing vs.\ non-CMB}. In the previous cases we have detected some tensions in the context of the non-flat models. Here we study the possible disagreement between P18+lensing data and non-CMB data. For the tilted flat $\Lambda$CDM model with $A_L =1$, both statistical estimators, with values $\textrm{log}_{10}\mathcal{I}=0.029$ and $\sigma=1.747$ and $p=8.06\%$, shed no light on a possible consistency or inconsistency between P18+lensing data and non-CMB data. For the tilted flat $\Lambda$CDM+$A_L$ model, we find $\textrm{log}_{10}\mathcal{I}=1.033$ which indicates a {\it strong} consistency between the two data sets. 
On the other hand, the second statistical estimator provides $\sigma=0.774$ and $p=43.9\%$, which indicate neither consistency nor inconsistency. As we noted at the beginning of this subsection, we do not always expect a perfect match in the conclusions from the two estimators. In the tilted non-flat $\Lambda$CDM Planck $P(q)$ model with $A_L=1$ we find $\textrm{log}_{10}\mathcal{I}=0.297$, which indicates neither consistency nor inconsistency between P18+lensing data and non-CMB data, whereas for the second estimator we find $\sigma=1.837$ and $p=6.62\%$, which does not reveal inconsistency. The consistency between P18+lensing and non-CMB data improves considerably in the context of the Planck $P(q)$+$A_L$ model. We get $\textrm{log}_{10}\mathcal{I}=1.641$, indicating a {\it strong} consistency between the two data sets, while the second one gives $\sigma=0.516$ and $p=60.6\%$, in agreement with the conclusion provided by the first estimator. Very similar conclusions are found for the new $P(q)$ (+$A_L$) and the Planck $P(q)$ (+$A_L$) models. When the $A_L$ parameter is not allowed to vary, for the new $P(q)$ $A_L = 1$ model we find $\textrm{log}_{10}\mathcal{I}=0.143$, which reveals neither an inconsistency nor a consistency, with the second estimator giving $\sigma =1.886$ and $p=5.93\%$, and again no tension is revealed. On the other hand, in the context of the new $P(q)$+$A_L$ model, we get $\textrm{log}_{10}\mathcal{I}=1.500$, indicating a {\it strong} consistency between the two data sets, and reassuringly we find similar conclusions from the second statistical estimator, $\sigma=0.573$ and $p=56.7\%$. Unlike in the comparisons of P18 data and BAO$^\prime$/BAO data and the comparisons of P18 data and non-CMB data, we do not find tensions in the context of the non-flat models between P18+lensing data and non-CMB data, even when the $A_L$ parameter is not allowed to vary. 
This may be suggesting that if we want to jointly analyze P18 data and a low-redshift data set, such as BAO$^\prime$/BAO data or non-CMB data, we should either consider a varying $A_L$ parameter or include (P18) lensing data in the mix. \end{itemize} We have studied the tensions between pairs of data sets, in the context of a given cosmological model, in three different ways based on Bayesian statistics. In Secs.\ \ref{sec:P18_vs_BAO}-\ref{sec:P18+lensing_vs_non-CMB} we quantified the level of tension by comparing the (one- and two-dimensional) cosmological parameter constraints favored by each of the pair of data sets. In the one-dimensional cases we estimated the tension by considering the quadrature sum of the two error bars for each parameter, while in the two-dimensional cases we looked at whether or not the two sets of contours shared a common parameter space area. In this subsection we study tensions between data set pairs by using the two more precise statistical estimators of Sec.\ \ref{sec:method}, see eq.\ \eqref{eq:Tension_estimator_1} and eqs.\ \eqref{eq:Tension_estimator_2} and \eqref{eq:Tension_estimator_2_sigma}. Reassuringly, all three techniques employed result in similar conclusions in most cases. Among all the data set comparisons we study there are two with discordances significant enough to be ruled out: we find in the tilted non-flat $\Lambda$CDM Planck $P(q)$ $A_L = 1$ model that P18 data and BAO data, as well as P18 data and non-CMB data, are not mutually consistent. In the first case, when P18 and BAO data are compared, we observe a 2.7$\sigma$ tension between the derived cosmological parameter values of $\Omega_m$ and of $H_0$, obtained with P18 data and with BAO data. Additionally, in Fig.\ \ref{fig:like_NL_ns_BAO}, contour plot panels that contain one of these derived parameters show non-overlapping regions at more than 2$\sigma$. As for the P18 vs.\ non-CMB case, the tensions are even greater than for P18 vs.\ BAO. 
Comparing the derived cosmological parameter values of $\Omega_m$ and of $H_0$, obtained with P18 data and non-CMB data, we observe disagreements at 2.9$\sigma$ and 3.9$\sigma$, respectively. Again, the contour plot panels in Fig.\ \ref{fig:like_NL_ns_P18_nonCMB} containing $\Omega_m$ and/or $H_0$ show non-overlapping regions at more than 2$\sigma$. For the two statistical estimators of Sec.\ \ref{sec:method}, if we choose to say two data sets are mutually inconsistent (in a given model) when $\textrm{log}_{10}\mathcal{I}\leq -1$ or $\sigma\geq 3$, then this is true only in the two cases discussed in the previous paragraph. For the tilted non-flat $\Lambda$CDM Planck $P(q)$ $A_L = 1$ model, in the P18 vs.\ BAO case we find $\textrm{log}_{10}\mathcal{I}=-1.236$ (meaning a {\it strong} disagreement between the two data sets) and $\sigma$ = 3.000 ($p=0.270\%$), while in the P18 vs.\ non-CMB analysis we find $\textrm{log}_{10}\mathcal{I}=-1.263$ (again a {\it strong} disagreement between the two data sets) and $\sigma$ = 3.005 ($p=0.265\%$). These results are qualitatively consistent with those of the previous paragraph. They mean that P18 data and BAO data, as well as P18 data and non-CMB data, cannot be jointly analyzed in this model; alternatively, they mean that the tilted non-flat $\Lambda$CDM Planck $P(q)$ $A_L = 1$ model is inconsistent with these data and is ruled out by them at approximately 3$\sigma$. We note that the levels of tension seen in the P18 vs.\ BAO and P18 vs.\ non-CMB comparisons are less severe in the context of the new $P(q)$ model, which does not strongly rule out the joint analyses of P18 data and BAO data, or of P18 data and non-CMB data, in the tilted non-flat $\Lambda$CDM new $P(q)$ $A_L = 1$ model. 
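The $\sigma$ and $p$ values quoted throughout are related by the usual two-tailed Gaussian correspondence, $p = \textrm{erfc}(\sigma/\sqrt{2})$. A minimal sketch (Python standard library only) that reproduces, to rounding, the two roughly 3$\sigma$ cases just discussed:

```python
from math import erfc, sqrt

def sigma_to_p(sigma):
    """Two-tailed Gaussian probability-to-exceed for a given sigma level."""
    return erfc(sigma / sqrt(2.0))

# The two discordant cases quoted for the tilted non-flat Planck P(q)
# A_L = 1 model: (sigma, quoted p in percent) for P18 vs. BAO and
# P18 vs. non-CMB respectively.
for sigma, p_quoted in [(3.000, 0.270), (3.005, 0.265)]:
    p_percent = 100.0 * sigma_to_p(sigma)
    print(f"sigma = {sigma:.3f}: p = {p_percent:.3f}% (quoted {p_quoted}%)")
```

Any residual difference in the last digit reflects rounding of the tabulated values, which are presumably computed from the samples rather than from this closed form.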
What is more, none of the other combinations studied, namely, P18 data vs.\ lensing data, P18 data vs.\ BAO$^{\prime}$ data, and P18+lensing data vs.\ non-CMB data, are strongly mutually inconsistent in the tilted non-flat $\Lambda$CDM new $P(q)$ model, even when the $A_L$ parameter is not allowed to vary. We now turn to a comparison between some of our results in Table \ref{tab:para_sigmap} and results presented in Refs.\ \cite{Handley:2019tkm} and \cite{DiValentino:2019qzk}. We emphasize that these are only semi-quantitative comparisons, since the data sets used are not identical and the priors used also differ. Reference \cite{Handley:2019tkm} compares P18 data and lensing data, as well as P18 data and BAO data (note that while we refer to both data sets as BAO there are some significant differences between the BAO data points used in Ref.\ \cite{Handley:2019tkm} and the updated BAO data we use here), in the tilted flat $\Lambda$CDM model and in the tilted non-flat $\Lambda$CDM Planck $P(q)$ model. As described in Sec.\ \ref{sec:method} we use the same ($p, \sigma$) statistical estimator as Ref.\ \cite{Handley:2019tkm} does and so these are the results we compare. For the tilted flat $\Lambda$CDM model, from the P18 vs.\ lensing analysis, Ref.\ \cite{Handley:2019tkm} Fig.\ 2 reports $\sigma \simeq 0.19$ and $p\simeq 85\%$, while we get $\sigma = 0.72$ and $p=47\%$ (for Our priors) and $\sigma=0.39$ and $p = 70\%$ (for Handley priors). Some differences are expected, given the different data sets and priors used, and this is reflected in these results. Reassuringly, when we employ the same priors for the lensing data (but not for P18 data) as used in Ref.\ \cite{Handley:2019tkm} the results get closer. 
From the P18 vs.\ BAO analysis in the tilted flat $\Lambda$CDM model Ref.\ \cite{Handley:2019tkm} finds $\sigma \simeq 0.95$ and $p\simeq 65\%$ while we get $\sigma = 1.5$ and $p = 13\%$, consequently the qualitative conclusions are the same, indicating that no tension is found. As for the tilted non-flat $\Lambda$CDM Planck $P(q)$ model, from the P18 vs.\ lensing analysis, Ref.\ \cite{Handley:2019tkm} reports $\sigma \simeq 2.5$ and $p\simeq 1.2\%$ and we find $\sigma=2.5$ and $p=1.3\%$ (for Our priors) and $\sigma=2.4$ and $p = 1.6\%$ (for Handley priors) so there is very good agreement between the results. Finally, in the tilted non-flat $\Lambda$CDM Planck $P(q)$ model, from a comparison of P18 data and BAO data, Ref.\ \cite{Handley:2019tkm} finds $\sigma \simeq 3.0$ and $p\simeq 0.3\%$ whereas we get $\sigma = 3.0$ and $p=0.3\%$. Considering all results, and the fact that somewhat different BAO data and priors are used in the two analyses, there is good agreement between the results and conclusions of Ref.\ \cite{Handley:2019tkm} and our results and conclusions. Reference \cite{DiValentino:2019qzk} uses $\textrm{log}_{10}\mathcal{I}$ to quantify tensions so here we compare our and their results for this statistical estimator. Reference \cite{DiValentino:2019qzk} compares P18 data and lensing data, as well as P18 and BAO$^{\prime}$ data (note that while we refer to both data sets as BAO$^{\prime}$ there are significant differences between the BAO$^{\prime}$ data used in Ref.\ \cite{DiValentino:2019qzk} and the updated BAO$^{\prime}$ data we use here), in the tilted flat $\Lambda$CDM model and in the tilted non-flat $\Lambda$CDM Planck $P(q)$ model. For the tilted flat $\Lambda$CDM model and the P18 data vs.\ lensing data analysis, Ref.\ \cite{DiValentino:2019qzk} finds $\textrm{log}_{10}\mathcal{I}=0.6$ ({\it substantial} concordance) while we get $\textrm{log}_{10}\mathcal{I}= 1.24$ ({\it strong} concordance). 
For the P18 data vs.\ BAO$^{\prime}$ data analysis in the tilted flat $\Lambda$CDM model, Ref.\ \cite{DiValentino:2019qzk} reports $\textrm{log}_{10}\mathcal{I}=0.2$ (neither a concordance nor a discordance) and we find $\textrm{log}_{10}\mathcal{I}=0.7$ ({\it substantial} concordance). On the other hand, in the tilted non-flat $\Lambda$CDM Planck $P(q)$ model, for the P18 vs.\ lensing data analysis, Ref.\ \cite{DiValentino:2019qzk} provides $\textrm{log}_{10}\mathcal{I}=-0.84$ ({\it substantial} discordance) while we obtain $\textrm{log}_{10}\mathcal{I}=-0.49$, which is on the verge of also indicating a {\it substantial} discordance between the two data sets. Finally, from the P18 vs.\ BAO$^{\prime}$ analysis in the tilted non-flat $\Lambda$CDM Planck $P(q)$ model, Ref.\ \cite{DiValentino:2019qzk} reports $\textrm{log}_{10}\mathcal{I}=-1.8$ ({\it strong} discordance) whereas we get $\textrm{log}_{10}\mathcal{I}=-0.89$ ({\it substantial} discordance). As can be appreciated from the preceding discussion, the agreement between our results and the results presented in Ref.\ \cite{DiValentino:2019qzk} is not as good as that between our results and those of Ref.\ \cite{Handley:2019tkm}. It is important to note that the ($p, \sigma$) statistical estimator of eqs.\ \eqref{eq:Tension_estimator_2} and \eqref{eq:Tension_estimator_2_sigma} is not as dependent on the priors as is the $\textrm{log}_{10}\mathcal{I}$ statistical estimator of eq.\ \eqref{eq:Tension_estimator_1}. This may explain the differences found in the comparisons of our results to those of Refs.\ \cite{Handley:2019tkm} and \cite{DiValentino:2019qzk}. All in all, we consider that there is reasonable, and so reassuring, agreement between our results and results available in the literature. 
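The qualitative labels ({\it substantial}, {\it strong}, etc.) attached to $\textrm{log}_{10}\mathcal{I}$ values follow a Jeffreys-like scale. A sketch of the classification consistent with the examples quoted above (the exact threshold placement is our reading, not a definitive convention):

```python
def jeffreys_label(log10_I):
    """Qualitative label for the concordance/discordance estimator log10(I):
    positive values indicate concordance, negative values discordance, with
    assumed thresholds at |log10 I| = 0.5, 1, and 2."""
    mag = abs(log10_I)
    if mag < 0.5:
        return "inconclusive"
    elif mag < 1.0:
        strength = "substantial"
    elif mag < 2.0:
        strength = "strong"
    else:
        strength = "decisive"
    kind = "concordance" if log10_I > 0 else "discordance"
    return f"{strength} {kind}"

# Values quoted in the text:
print(jeffreys_label(1.24))   # -> strong concordance
print(jeffreys_label(0.6))    # -> substantial concordance
print(jeffreys_label(-0.89))  # -> substantial discordance
print(jeffreys_label(-1.8))   # -> strong discordance
```

Under this reading, our $\textrm{log}_{10}\mathcal{I}=-0.49$ falls just inside the inconclusive band, hence "on the verge" of a {\it substantial} discordance.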
\section{Discussion} \label{sec:discussion} We have used P18 data, (P18) lensing data, BAO$^\prime$ data, BAO data, and non-CMB data to constrain cosmological parameters in eight cosmological models, the tilted flat $\Lambda$CDM (+$A_L$) model, the untilted non-flat $\Lambda$CDM (+$A_L$) model, the tilted non-flat $\Lambda$CDM (+$A_L$) Planck $P(q)$ model, and the tilted non-flat $\Lambda$CDM (+$A_L$) new $P(q)$ model, and to determine the goodness-of-fit of these models to the data sets. We have also used the models to examine whether or not pairs of data sets are mutually consistent, studying five cases: P18 data vs.\ lensing data, P18 data vs.\ BAO$^\prime$/BAO data, P18 data vs.\ non-CMB data, and P18+lensing data vs.\ non-CMB data. Assuming these data are correct and that there are no unaccounted-for systematic errors, three of the eight models we consider may be rejected because they are incompatible with some of these data at the levels of significance discussed in Sec.\ \ref{sec:results} and summarized next. These rejected models are the two untilted non-flat $\Lambda$CDM (+$A_L$) models and the tilted non-flat $\Lambda$CDM Planck $P(q)$ model. When P18 data are included in the analyses the untilted non-flat $\Lambda$CDM (+$A_L$) models are, according to the DIC, {\it very strongly} disfavored when compared with the tilted models. This is because the untilted models lack the degree of freedom encapsulated in the power spectrum tilt ($n_s$) parameter that is strongly favored by P18 data, and so they are incompatible with P18 data. When we use the tilted non-flat $\Lambda$CDM Planck $P(q)$ model to compare cosmological parameter values from P18 data and BAO$^{\prime}$/BAO data, as well as from P18 data and non-CMB data, we find disagreements in the one-dimensional values of the $H_0$ and $\Omega_m$ derived parameters of 2.3$\sigma$ and 2.7$\sigma$ (BAO$^\prime$), 2.3$\sigma$ and 2.7$\sigma$ (BAO), and 3.9$\sigma$ and 2.9$\sigma$ (non-CMB). 
In Figs.\ \ref{fig:like_NL_ns_BAO} and \ref{fig:like_NL_ns_P18_nonCMB}, in those panels containing $H_0$ and $\Omega_m$, the two-dimensional contours are non-overlapping at more than 2$\sigma$ significance. Additionally, in the P18 data vs.\ BAO data case we find $\textrm{log}_{10}\mathcal{I}=-1.236$ (meaning a {\it strong} disagreement between the two data sets) and $\sigma$ = 3.000 ($p=0.27\%$), while in the P18 data vs.\ non-CMB data analysis we get $\textrm{log}_{10}\mathcal{I}=-1.263$ (again a {\it strong} disagreement between the two data sets) and $\sigma$ = 3.005 ($p=0.265\%$). These results mean that the tilted non-flat $\Lambda$CDM Planck $P(q)$ model is unable to simultaneously accommodate P18 data and non-CMB data and so is ruled out at 3$\sigma$. Note that non-CMB data include BAO$^{\prime}$/BAO data, and Refs.\ \citep{Handley:2019tkm, DiValentino:2019qzk} have previously noted the incompatibility of P18 data and older BAO$^{\prime}$/BAO data in the tilted non-flat $\Lambda$CDM Planck $P(q)$ model. We return to this point below. The six-parameter tilted flat $\Lambda$CDM model is the simplest (largely, see below) observationally consistent, general-relativistic cosmological model. It assumes the existence of cold dark matter, a non-evolving dark energy density $\Lambda$, flat spatial hypersurfaces ($\Omega_k = 0$), and $A_L = 1$. This is the current standard cosmological model. We have found that this model passes all the consistency tests we use. The largest data set we have used is the P18+lensing+non-CMB data set. These data provide the most restrictive constraints on the parameters of this model, and if the tilted flat $\Lambda$CDM model is a reasonably good approximation of the Universe, the cosmological parameter values measured in this model from these data provide a reasonably good description of the parameters of the Universe. 
From P18+lensing+non-CMB data we find, for the six primary cosmological parameters, $\Omega_b{h^2}=0.02250\pm 0.00013$, $\Omega_c{h^2}=0.11838\pm 0.00083$, $100\theta_{\textrm{MC}}= 1.04110\pm 0.00029$, $\tau=0.0569\pm 0.0071$, $n_s = 0.9688\pm 0.0036$, and $\ln(10^{10}A_s)= 3.046\pm 0.014$. We also provide the values of three derived parameters, $\Omega_m = 0.3053\pm 0.0050$, $H_0=68.09\pm 0.38$ km s$^{-1}$ Mpc$^{-1}$, and $\sigma_8 = 0.8072\pm 0.0058$. The least precisely determined primary parameter is the reionization optical depth $\tau$, measured at 8.0$\sigma$, while the deviation of the scalar spectral index $n_s$ from unity is measured at 8.7$\sigma$. As we discuss below, the values of the cosmological parameters determined using any of the six tilted models we study are relatively independent of the cosmological model used, indicating that the values of the cosmological parameters listed above for the tilted flat $\Lambda$CDM model are relatively model independent. It is interesting that the Hubble constant value measured using P18+lensing+non-CMB data in the tilted flat $\Lambda$CDM model, $H_0=68.09\pm 0.38$ km s$^{-1}$ Mpc$^{-1}$, is consistent with that from an early median statistics analysis of a large compilation of Hubble constant measurements, $H_0=68\pm 2.8$ km s$^{-1}$ Mpc$^{-1}$, see Refs.\ \citep{ChenRatra2011, Gottetal2001, Calabreseetal2012}, as well as with some local measurements, e.g., $H_0=69.8\pm 1.7$ km s$^{-1}$ Mpc$^{-1}$ (quadrature sum of statistical and systematic uncertainties) from Ref.\ \citep{Freedman2021}, but not with some other local measurements, e.g., $H_0=73.04\pm 1.04$ km s$^{-1}$ Mpc$^{-1}$ from Ref.\ \citep{Riessetal2022}. As for the other derived parameter employed to quantify another tension affecting the tilted flat $\Lambda$CDM model, the $\sigma_8$ parameter, there are differences in its value depending on the data set considered. 
In the tilted flat $\Lambda$CDM model, using P18 data, we get $\sigma_8=0.8118\pm 0.0074$ whereas non-CMB data give $\sigma_8=0.787\pm 0.027$, with the two values differing by 0.89$\sigma$. In the P18+lensing+non-CMB data analysis case we obtain $\sigma_8=0.8072\pm 0.0058$, which is between the P18 value and the non-CMB value. The shifts in the cosmological parameter values obtained by jointly analyzing non-CMB data with P18+lensing data, compared to the cosmological parameter values obtained from ``Planck'' P18+lensing data, for the tilted flat $\Lambda$CDM model are: $-0.68\sigma$ ($\Omega_b{h^2}$), 1.1$\sigma$ ($\Omega_c{h^2}$), $-0.45\sigma$ (100$\theta_{\textrm{MC}}$), $-0.26\sigma$ ($\tau$), $-0.71\sigma$ ($n_s$), $-0.10\sigma$ [$\ln(10^{10}A_s)$], $-1.1\sigma$ ($H_0$), 1.1$\sigma$ ($\Omega_m$), and 0.48$\sigma$ ($\sigma_8$), with the largest shifts being 1.1$\sigma$, suggesting again that in this model non-CMB data and P18+lensing data are not inconsistent. As for the reduction in the error bars obtained by jointly analyzing non-CMB data with P18+lensing data, compared to the error bars obtained from ``Planck'' P18+lensing data, we find 7.1$\%$ ($\Omega_b{h^2}$), 31$\%$ ($\Omega_c{h^2}$), 6.5$\%$ (100$\theta_{\textrm{MC}}$), 2.7$\%$ ($\tau$), 12$\%$ ($n_s$), 0$\%$ [$\ln(10^{10}A_s)$], 31$\%$ ($H_0$), 33$\%$ ($\Omega_m$), and 1.7$\%$ ($\sigma_8$), with the biggest reductions being the 33$\%$ $\Omega_m$ one and the 31$\%$ $\Omega_c{h^2}$ and $H_0$ ones; adding non-CMB data to the mix quite significantly improves the constraints on some cosmological parameters. We mentioned above that P18 data and non-CMB data are incompatible in the seven-parameter tilted non-flat $\Lambda$CDM Planck $P(q)$ model. 
When this model is used to analyze P18 data it favors a closed geometry at 2.5$\sigma$ with $\Omega_k = -0.043 \pm 0.017$, when it is used to analyze P18+lensing data it favors a closed geometry at 1.6$\sigma$ with $\Omega_k = -0.0103 \pm 0.0066$, and when it is used to analyze non-CMB data it favors a closed geometry at 0.63$\sigma$ with $\Omega_k = -0.032 \pm 0.051$. However, since P18 data and non-CMB data are incompatible in this model, the model is ruled out at the relevant levels of significance and so cannot be used to measure the geometry of spatial hypersurfaces from P18+lensing+non-CMB data. On the other hand, the seven-parameter tilted non-flat $\Lambda$CDM new $P(q)$ model is not ruled out. According to the statistical estimators of Sec.\ \ref{sec:method} (see values in Table \ref{tab:para_sigmap}), in none of the cases studied using the new $P(q)$ model are our conditions for ruling out a model, $\textrm{log}_{10}\mathcal{I}\leq -1$ or $\sigma\geq 3$, fulfilled. For the new $P(q)$ model, in the P18 data analysis, we find $\Omega_k=-0.033\pm 0.014$, which favors closed geometry at 2.4$\sigma$. When the new $P(q)$ model is used to analyze P18+lensing data the results indicate a 1.5$\sigma$ preference for closed geometry with $\Omega_k = -0.0086\pm 0.0057$, and when non-CMB data are analyzed alone we find $\Omega_k = -0.036\pm 0.051$, which is 0.71$\sigma$ in favor of closed geometry. Contrary to what happens in the case of the Planck $P(q)$ model, in the new $P(q)$ model it is reasonable to jointly analyze P18 data, (P18) lensing data, and non-CMB data. In the P18+lensing+non-CMB and P18+non-CMB analysis cases we obtain $\Omega_k = 0.0003\pm 0.0017$, favoring open geometry by only 0.18$\sigma$ in both cases. 
It may come as a surprise that even though each data set individually favors a closed geometry, some even with a somewhat significant level of evidence, the joint consideration of all three (or just two) of them reveals a result consistent with flat spatial hypersurfaces, and also more consistent with open than with closed geometry. This is because of the $H_0$-$\Omega_k$-$\Omega_m$ degeneracy and the fact that, in the non-flat models, non-CMB data favor higher $H_0$ values and lower $\Omega_m$ values than do P18 data and P18+lensing data. We have found that with $A_L = 1$ the six-parameter untilted non-flat and the seven-parameter tilted non-flat $\Lambda$CDM Planck $P(q)$ models are incompatible with some data we consider. If these data are correct, these models are ruled out. On the other hand, we find that the most restrictive data compilation we consider, the P18+lensing+non-CMB data set, indicates that the seven-parameter tilted non-flat $\Lambda$CDM new $P(q)$ model has flat (or very close to flat) spatial hypersurfaces. Yes, P18 data alone favor closed geometry at 2.4$\sigma$, and while it would be valuable to have a much better understanding of this result than is currently available, at this point we feel that the P18+lensing+non-CMB data support for flat geometry should be given more credence. Perhaps more and better future non-CMB data might alter this conclusion; however, current data are consistent with flat spatial hypersurfaces when $A_L = 1$. In the seven-parameter tilted flat $\Lambda$CDM$+A_L$ model $A_L$ is allowed to vary and is constrained by data. In this model P18 data favor $A_L = 1.181 \pm 0.067$, $A_L > 1$ at 2.7$\sigma$; P18+non-CMB data favor $A_L = 1.204 \pm 0.061$, $A_L > 1$ at 3.3$\sigma$; P18+lensing data favor $A_L = 1.073 \pm 0.041$, $A_L > 1$ at 1.8$\sigma$; and, P18+lensing+non-CMB data favor $A_L = 1.089 \pm 0.035$, $A_L > 1$ at 2.5$\sigma$. 
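The significance levels attached to these $A_L$ constraints are simple deviations of the measured value from the theoretically expected $A_L = 1$, in units of the quoted error bar. A quick check (our own illustrative script, using the tilted flat $\Lambda$CDM$+A_L$ values above):

```python
def deviation_sigma(value, error, reference=1.0):
    """Number of error bars by which a measured value deviates from a
    reference value (here the theoretically expected A_L = 1)."""
    return (value - reference) / error

# A_L constraints quoted in the text for the tilted flat LCDM+A_L model:
for label, a_l, err in [
    ("P18",                 1.181, 0.067),
    ("P18+non-CMB",         1.204, 0.061),
    ("P18+lensing",         1.073, 0.041),
    ("P18+lensing+non-CMB", 1.089, 0.035),
]:
    print(f"{label}: A_L > 1 at {deviation_sigma(a_l, err):.1f} sigma")
```

This reproduces the 2.7$\sigma$, 3.3$\sigma$, 1.8$\sigma$, and 2.5$\sigma$ values quoted above.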
With P18+lensing+non-CMB data resulting in $\Delta{\rm DIC} = -5.55$ in favor of $A_L > 1$ over $A_L = 1$, just below the {\it strongly} favoring threshold of $-6$, the 2.5$\sigma$ $A_L > 1$ value indicates a more serious CMB weak lensing consistency issue than the preference for closed spatial geometry exhibited by some of the data sets. If these data are correct, these results are somewhat uncomfortable for the six-parameter tilted flat $\Lambda$CDM model --- the standard cosmological model. New, and better, data should help to clarify this issue. When $A_L$ is allowed to vary, the eight-parameter tilted non-flat $\Lambda$CDM+$A_L$ Planck $P(q)$ model is not ruled out by data set incompatibilities, unlike what happens in the $A_L = 1$ seven-parameter tilted non-flat $\Lambda$CDM Planck $P(q)$ model. The eight-parameter tilted non-flat $\Lambda$CDM+$A_L$ new $P(q)$ model also does not suffer from data set incompatibilities, similar to the $A_L = 1$ seven-parameter tilted non-flat $\Lambda$CDM new $P(q)$ model case. 
In the eight-parameter tilted non-flat $\Lambda$CDM$+A_L$ Planck (new) $P(q)$ model, P18 data favor $A_L = 0.88 \pm 0.15$ and $A_L < 1$ at 0.8$\sigma$ ($A_L = 0.94 \pm 0.20$ and $A_L < 1$ at 0.3$\sigma$) and $\Omega_k = -0.130 \pm 0.095$ and closed at 1.4$\sigma$ ($\Omega_k = -0.10 \pm 0.11$ and closed at 0.91$\sigma$); P18+non-CMB data favor $A_L = 1.203 \pm 0.062$ and $A_L > 1$ at 3.3$\sigma$ ($A_L = 1.204 \pm 0.061$ and $A_L > 1$ at 3.3$\sigma$) and $\Omega_k = -0.0006 \pm 0.0017$ and closed at 0.35$\sigma$ ($\Omega_k = -0.0006 \pm 0.0017$ and closed at 0.35$\sigma$); P18+lensing data favor $A_L = 1.09 \pm 0.16$ and $A_L > 1$ at 0.56$\sigma$ ($A_L = 1.13 \pm 0.15$ and $A_L > 1$ at 0.87$\sigma$) and $\Omega_k = -0.005 \pm 0.027$ and closed at 0.19$\sigma$ ($\Omega_k = 0.003 \pm 0.016$ and open at 0.19$\sigma$); and, P18+lensing+non-CMB data favor $A_L = 1.090 \pm 0.036$ and $A_L > 1$ at 2.5$\sigma$ ($A_L = 1.088 \pm 0.035$ and $A_L > 1$ at 2.5$\sigma$) and $\Omega_k = -0.0002 \pm 0.0017$ and closed at 0.12$\sigma$ ($\Omega_k = 0.0002 \pm 0.0017$ and open at 0.12$\sigma$). With P18+lensing+non-CMB data in the eight-parameter tilted non-flat $\Lambda$CDM$+A_L$ Planck (new) $P(q)$ model resulting in $\Delta{\rm DIC} = -5.22\ (-4.70)$, again (as in the seven-parameter tilted flat $\Lambda$CDM$+A_L$ model) {\it positively} favoring $A_L > 1$ over $A_L = 1$, there is a bit more evidence supporting the existence of a CMB weak lensing consistency issue, in all tilted, flat as well as non-flat, models, although the resulting $\Omega_k$ values in both non-flat cases are quite consistent with flat geometry. In the eight-parameter tilted non-flat $\Lambda$CDM$+A_L$ new $P(q)$ model, which unlike the Planck $P(q)$ model is not ruled out, allowing $A_L$ to vary reduces support for closed geometry. 
Compared to the seven-parameter new $P(q)$ model with $A_L = 1$, for P18 data, support for closed spatial hypersurfaces drops from 2.4$\sigma$ to 0.91$\sigma$, while for P18+lensing data the 1.5$\sigma$ support for closed geometry becomes 0.19$\sigma$ support for open geometry. We also note, from comparing P18 data results given in the two previous paragraphs for the seven-parameter tilted flat $\Lambda$CDM$+A_L$ model and for the eight-parameter tilted non-flat $\Lambda$CDM$+A_L$ Planck and new $P(q)$ models, that as one goes from the first to either of the second models, $A_L$ values become consistent with unity while $\Omega_k$ values deviate from flat by only 1.4$\sigma$ and 0.91$\sigma$. So, for P18 data, neither of the tilted non-flat models can be ruled out, while the seven-parameter tilted flat model, with $A_L > 1$ at 2.7$\sigma$ and a lower DIC value, indicates that the standard six-parameter tilted flat $\Lambda$CDM model with $A_L = 1$ is somewhat uncomfortably observationally squeezed. These and other results from our more comprehensive analyses of updated and more expansive data support and extend the earlier results of Refs.\ \cite{Planck:2018vyg, Handley:2019tkm, DiValentino:2019qzk} that indicate that P18 data support either a closed geometry with $\Omega_k<0$ or $A_L>1$, both of which make the amount of CMB weak lensing higher than in the tilted flat $\Lambda$CDM model. References \cite{Planck:2018vyg, Handley:2019tkm, DiValentino:2019qzk} have also noted that in the tilted non-flat Planck $P(q)$ model, when P18 data and BAO$^\prime$/BAO data are jointly analyzed, evidence for closed geometry dissipates, as we have found here for updated BAO$^\prime$/BAO data as well as for non-CMB data (even though, as we have found here, P18 data and to a lesser extent BAO$^\prime$/BAO data and non-CMB data are all by themselves not inconsistent with closed geometry). 
References \cite{Handley:2019tkm, DiValentino:2019qzk} have suggested that this might be because of a problem (possibly undetected systematic errors) with BAO$^\prime$/BAO data (and so also with non-CMB data) and so these results (from combinations of these data and P18 data) should not be taken to mean that spatial hypersurfaces are flat. Along these lines, we note that Ref.\ \cite{Glanville:2022xes} presents results from a full-shape analysis (instead of the compressed BAO and $f\sigma_8$ data points analysis here) of the 6dFGS, BOSS, and eBOSS catalogs and finds $\Omega_k = -0.0041^{+0.0026}_{-0.0021}$ (see their Table 6) when P18 data (not exactly the same P18 data used here) are jointly analyzed with the full-shape galaxy sample data, which is still in favor of a closed geometry, contrary to the conclusions we present here. New and better data and improved analysis techniques will help to shed some light on this issue. It is useful to determine which of the data sets we use are able to set model-independent constraints on the cosmological parameter values. Here we only consider the P18, P18+lensing, P18+non-CMB, and P18+lensing+non-CMB data sets, as the other data sets we study have less constraining power. In our analyses here we consider only the six tilted models, flat and non-flat, with $A_L = 1$ and varying $A_L$. In order to determine whether the constraints are model independent, we compute the shifts in the cosmological parameter values between pairs of models and say that the cosmological constraints are model independent if almost all the shifts are $<1\sigma$. Neither P18 data nor P18+lensing data are able to place model-independent constraints on the cosmological parameter values. In the case of P18 data, when we compare the flat model with the flat+$A_L$ model, we observe disagreements in the values of the derived parameters $H_0$, $\Omega_m$, and $\sigma_8$ at the $\sim 1\sigma$ confidence level. 
More significant are the discrepancies found when the flat model is compared with the tilted non-flat models. In particular, for the Planck (new) $P(q)$ models, we get for $H_0$ a shift of $-3.5\sigma$ ($-2.8\sigma$), for $\Omega_m$ a shift of 2.6$\sigma$ (2.3$\sigma$), and for $\sigma_8$ a shift of $-2.2\sigma$ ($-1.7\sigma$). As expected, when the flat model is compared with the tilted non-flat models with varying $A_L$ the differences are smaller, though still significant. Comparing the flat model cosmological parameter values with the Planck (new) $P(q)$+$A_L$ cosmological parameter values, we find for $H_0$ a shift of $-2.0\sigma$ ($-1.2\sigma$), for $\Omega_m$ a shift of 1.4$\sigma$ (0.89$\sigma$), and for $\sigma_8$ a shift of $-1.7\sigma$ ($-1.1\sigma$). Similar results are found when the flat+$A_L$ model is compared with the tilted non-flat models with and without a varying $A_L$ parameter. On the other hand, we do not find significant disagreements when we compare the cosmological parameter values of the four tilted non-flat models, the Planck $P(q)$ (+$A_L$) and the new $P(q)$ (+$A_L$) models, with each other, with the shifts always remaining below 1$\sigma$. The joint consideration of P18 data and (P18) lensing data reduces the disagreements discussed above, though it is not possible to claim that P18+lensing data impose model-independent constraints. In this case, when the cosmological parameter constraints for the flat and the flat+$A_L$ models are compared, the largest disagreement found is $-1.1\sigma$ for $\sigma_8$. When the flat model is compared with the Planck (new) $P(q)$ model, we get for $H_0$ a shift of $-1.5\sigma$ ($-1.5\sigma$), for $\Omega_m$ a shift of 1.4$\sigma$ (1.3$\sigma$), and for $\sigma_8$ a shift of $-1.2\sigma$ ($-1.1\sigma$), while when the flat model is compared with the Planck (new) $P(q)$+$A_L$ model, all differences remain $<1\sigma$. 
When we compare the cosmological parameter values obtained for the flat+$A_L$ model with those obtained for the Planck (new) $P(q)$ model, we observe disagreements at $-1.9\sigma$ ($-1.9\sigma$) for $H_0$ and 1.8$\sigma$ (1.8$\sigma$) for $\Omega_m$. As happens in the P18 analysis, in the P18+lensing analysis no significant differences are observed when we compare the Planck $P(q)$ (+$A_L$) and new $P(q)$ (+$A_L$) models with each other. It is the inclusion of non-CMB data that results in model-independent constraints. When P18 data are jointly analyzed with non-CMB data we do not find discrepancies $>1\sigma$. The most important differences in this case, in absolute value, are 0.78$\sigma$-0.96$\sigma$ ($\Omega_b{h^2}$) and 0.87$\sigma$-0.98$\sigma$ ($\sigma_8$), which are found when the results for models with a varying $A_L$ parameter are compared with the results obtained when $A_L = 1$. In the P18+lensing+non-CMB data case almost no significant model-to-model discrepancies are found. The largest ones are found when the varying $A_L$ models are compared with those with $A_L=1$. In particular, the two largest shifts are in $\ln(10^{10}A_s)$ (the largest one being 1$\sigma$ in absolute value) and in $\sigma_8$ (the largest one being 1.3$\sigma$ in absolute value). We note that P18+non-CMB data cosmological parameter constraints are slightly more model-independent than those determined using P18+lensing+non-CMB data. This is partly because (P18) lensing data change the $A_L$ parameter value, which in turn causes small shifts in some of the other parameter values. Consequently, when (P18) lensing data are included in the mix we observe larger differences between the cosmological parameter values of the varying $A_L$ models and those of the $A_L=1$ models. 
Also, the P18+lensing+non-CMB case error bars are smaller than those found in the P18+non-CMB analyses, and this contributes to increasing the significance of the differences in some of the cosmological parameter values in the P18+lensing+non-CMB cases. We may say that, as long as at least P18+non-CMB data are considered, if we start from the tilted flat $\Lambda$CDM model and then vary $A_L$ and/or $\Omega_k$ (which implies the consideration of one of the non-flat $P(q)$s we have used in this work), we obtain model-independent constraints as a result, since the shifts in the cosmological parameter values remain within or just slightly above 1$\sigma$. In light of these results we can conclude that the P18+lensing+non-CMB data set is powerful enough to result in model-independent cosmological parameter constraints and, if these data are correct and include all systematic errors, this data set is able to accurately measure these parameters of the (reasonably accurate tilted flat $\Lambda$CDM approximation of the) real Universe. \section{Conclusion} \label{sec:conclusion} In what follows we summarize our main conclusions. If the data sets we use are correct and free from unknown systematics, three of the eight cosmological models are ruled out due to incompatibilities with some of the data sets employed in the analyses. The untilted non-flat $\Lambda$CDM (+$A_L$) models are unable to properly fit the P18 data, while the tilted non-flat $\Lambda$CDM Planck $P(q)$ model is ruled out at 3$\sigma$ because it is not able to simultaneously accommodate P18 data and non-CMB (or some subset of these) data. Interestingly, the new $P(q)$ tilted non-flat inflation $\Lambda$CDM cosmological model, characterized by the primordial power spectrum in Eq.\ (\ref{eq:tilted_nonflat_new_PS}), does better than the Planck $P(q)$ model in being able to simultaneously accommodate P18 data and non-CMB data. 
In Sec.\ \ref{subsec:data_set_tensions} we study the mutual compatibility of pairs of data sets, and in none of the cases studied is the level of tension high enough to rule out this model. The same holds true for the flat (+$A_L$) models and the Planck and new $P(q)$+$A_L$ models. P18 data do not break the geometrical $\Omega_m$-$H_0$-$\Omega_k$-$A_L$ degeneracy present in the Planck and the new $P(q)$ (+$A_L$) models. In the tilted non-flat $\Lambda$CDM new $P(q)$ model the P18 data analysis reveals 2.4$\sigma$ evidence in favor of closed geometry with $\Omega_k=-0.033\pm 0.014$, and this model is {\it strongly} favored over the tilted flat $\Lambda$CDM model. In the tilted non-flat models, when the $A_L$ parameter is allowed to vary the evidence in favor of closed geometry subsides, yet they are either {\it strongly} favored (Planck $P(q)$+$A_L$) or {\it positively} favored (new $P(q)$+$A_L$) over the tilted flat model. The tilted flat $\Lambda$CDM+$A_L$ model better fits P18 data, compared to the tilted flat $\Lambda$CDM model fit, with an $A_L$ parameter value 2.7$\sigma$ larger than the theoretically expected value of $A_L=1$. These results update and strengthen those presented in Refs.\ \cite{Handley:2019tkm, DiValentino:2019qzk}; both the $\Omega_k<0$ and $A_L>1$ options appear indicative of a CMB weak lensing consistency issue. The joint consideration of P18 data and (P18) lensing data does not result in significant changes in the values of most primary cosmological parameters with respect to those from the P18 data alone analysis, the exceptions being $\Omega_k$ and $A_L$. From P18+lensing data in the seven-parameter tilted non-flat new $P(q)$ model we find 1.5$\sigma$ evidence in favor of closed geometry with $\Omega_k=-0.0086\pm 0.0057$, while in the seven-parameter tilted flat $\Lambda$CDM+$A_L$ model we find that $A_L>1$ is favored by 1.8$\sigma$ with $A_L=1.073\pm 0.041$. 
In these single-parameter extensions of the tilted flat $\Lambda$CDM model, the addition of (P18) lensing data to P18 data does not favor $\Omega_k<0$ over $A_L>1$ or vice versa. However, in the eight-parameter tilted non-flat Planck (new) $P(q)$ $\Lambda$CDM+$A_L$ models we find from P18+lensing data that $\Omega_k=-0.005\pm 0.027$, closed at 0.19$\sigma$ ($\Omega_k=0.003\pm 0.016$, open at 0.19$\sigma$), and $A_L=1.09\pm 0.16$ ($A_L=1.13\pm 0.15$), favoring $A_L>1$ at 0.56$\sigma$ (0.87$\sigma$), highlighting, if anything, the CMB weak lensing consistency issue. On the other hand, the values of the derived parameters $\Omega_m$ and $H_0$ are greatly affected by the inclusion of lensing data, and the geometrical degeneracy, when $A_L=1$, is partially broken. According to the DIC values, P18+lensing data do not strongly discriminate between models. The two statistical estimators ($\log_{10} {\mathcal I}$ and $\sigma$) tell us that there are only moderate tensions between P18 data and lensing data in the tilted non-flat models, and even less tension in the tilted flat model. Comparing the constraints from P18 data and non-CMB data allows for a robust test of the consistency of cosmological parameter values determined from high- and low-redshift data, respectively. For these data, the statistical estimators we consider do not show tensions between P18 data and non-CMB data, in the tilted flat model or in the varying $A_L$ models. Also, in the new $P(q)$ model with $A_L = 1$ we find $\textrm{log}_{10}\mathcal{I}=-0.806$ and $\sigma =2.577$, which indicates a non-negligible tension between P18 data results and non-CMB data results, but this is not high enough to rule out this model. No significant evidence is found in favor of non-flat hypersurfaces within the non-flat models. On the other hand, when the $A_L$ parameter is allowed to vary, the $A_L>1$ option is strongly preferred over the $A_L=1$ one. 
From P18+non-CMB data, for the flat+$A_L$ model we get $A_L=1.201\pm 0.061$ (3.3$\sigma$), for the Planck $P(q)$+$A_L$ model we find $A_L=1.203\pm 0.062$ (3.3$\sigma$), and for the new $P(q)$+$A_L$ model we obtain $A_L=1.204\pm 0.061$ (3.3$\sigma$). Amongst the data sets we consider in this paper, the P18+lensing+non-CMB data set provides the tightest constraints on cosmological parameters, and pins down the cosmological parameter values of the standard tilted flat $\Lambda$CDM model with impressive precision. (We emphasize that in most of the discussion in this paper we assume these data are accurate.) In fact, due to the great constraining power of this data set, almost all cosmological parameter values determined using this data set in the six tilted models considered are compatible at 1$\sigma$ (actually at slightly above 1$\sigma$ for the $\sigma_8$ parameter). Therefore we may say that the cosmological parameter values determined using P18+lensing+non-CMB data are very close to being model independent. From the P18+lensing+non-CMB analysis it is clear that the evidence in favor of $A_L>1$ remains while the evidence in favor of non-flat hypersurfaces subsides. We get $A_L=1.089\pm 0.035$ for the flat+$A_L$ model, $A_L=1.090\pm 0.036$ for the Planck $P(q)$+$A_L$ model, and $A_L=1.088\pm 0.035$ for the new $P(q)$+$A_L$ model, with a 2.5$\sigma$ deviation from $A_L=1$ in all cases. It is interesting that the large (in absolute value) negative $\Omega_k$ values demanded by P18 data in order to deal with the lensing anomaly are not supported by non-CMB data (although the non-CMB data do mildly favor a closed geometry), and the larger $H_0$ and smaller $\Omega_m$ favored by non-CMB data (compared to those favored by P18 data) result in P18+lensing+non-CMB data favoring flat spatial hypersurfaces. 
This is at the heart of the tensions found, in the context of the tilted non-flat models, when comparing P18 data and BAO$^\prime$/BAO data cosmological parameter constraints and P18 data and non-CMB data constraints. It is interesting that the Hubble constant value measured using P18+lensing+non-CMB data in the tilted flat $\Lambda$CDM model, $H_0=68.09\pm 0.38$ km s$^{-1}$ Mpc$^{-1}$, is consistent with that from a median statistics analysis of a large compilation of Hubble constant measurements as well as with some local measurements. More and better cosmological data are needed in order to shed additional light on the issues studied in this paper. In the meantime the P18+lensing+non-CMB data set looks like the most reliable among all those considered and consequently we conclude that current observational data do not favor curved spatial geometry --- consistent with the standard tilted flat $\Lambda$CDM model --- but do favor $A_L>1$ and so somewhat uncomfortably squeeze the standard tilted flat $\Lambda$CDM model. \acknowledgements We thank H\'ector Gil-Mar\'in for useful discussions about BAO data. J.d.C.P.\ was supported by an FPI fellowship associated to the project FPA2016-76005-C2-1-P (MINECO). C.-G.P.\ was supported by a National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No.\ 2020R1F1A1069250). B.R.\ was supported by DOE grant DE-SC0011840. 
\documentclass[twocolumn,showpacs,preprintnumbers,amsmath,amssymb,epsfig]{revtex4} \usepackage{graphicx} \usepackage{dcolumn} \usepackage{bm} \usepackage{epsfig} \usepackage{hyperref} \usepackage{float} \usepackage{amsmath} \usepackage{floatflt} \usepackage{subfigure} \usepackage[usenames]{color} \usepackage{comment} \usepackage{ulem} \maxdeadcycles=1000 \begin{document} \title{Current data are consistent with flat spatial hypersurfaces in the $\Lambda$CDM cosmological model but favor more lensing than the model predicts} \author{Javier de Cruz P\'erez${}^{1}$, Chan-Gyung Park${}^{2}$, and Bharat Ratra${}^{1}$} \affiliation{ ${}^{1}$Department of Physics, Kansas State University, 116 Cardwell Hall, Manhattan, KS 66506, USA \\ ${}^{2}$Division of Science Education and Institute of Science Education, Jeonbuk National University, Jeonju 54896, Republic of Korea } \email{[email protected], [email protected], [email protected]} \date{\today} \begin{abstract} We study the performance of three pairs of tilted $\Lambda$CDM cosmological models, two pairs allowing for non-flat spatial hypersurfaces, against cosmic microwave background (CMB) temperature and polarization power spectrum data (P18), measurements of the Planck 2018 lensing potential power spectrum (lensing), and a large compilation of non-CMB data (non-CMB). 
For the six models, we measure cosmological parameters and study whether or not pairs of the data sets (as well as subsets of them) are mutually consistent in these models. Half of these models allow the lensing consistency parameter $A_L$, which re-scales the gravitational potential power spectrum, to be an additional free parameter to be determined from data, while the other three have $A_L = 1$, the theoretically expected value. The tilted spatially-flat models assume the usual primordial spatial inhomogeneity power spectrum that is a power law in wave number. The tilted non-flat models assume either the primordial power spectrum used in the Planck group analyses [Planck $P(q)$], which has recently been numerically shown to be a good approximation to what is quantum-mechanically generated from a particular choice of closed inflation model initial conditions, or a recently computed power spectrum [new $P(q)$] that quantum-mechanically follows from a different set of non-flat inflation model initial conditions. In the tilted non-flat models with $A_L=1$ we find differences between P18 data and non-CMB data cosmological parameter constraints, which are large enough to rule out the Planck $P(q)$ model at 3$\sigma$ but not the new $P(q)$ model. No significant differences are found when cosmological parameter constraints obtained with two different data sets are compared within the standard tilted flat $\Lambda$CDM model. While both P18 data and non-CMB data separately favor a closed geometry, with spatial curvature density parameter $\Omega_k<0$, when P18+non-CMB data are jointly analyzed the evidence in favor of non-flat hypersurfaces subsides. Differences between P18 data and non-CMB data cosmological constraints subside when $A_L$ is allowed to vary. 
From the most restrictive P18+lensing+non-CMB data combination we get almost model-independent constraints on the cosmological parameters and find that the $A_L>1$ option is preferred over the $\Omega_k<0$ one, with the $A_L$ parameter, for all models, being larger than unity by $\sim 2.5\sigma$. According to the deviance information criterion, in the P18+lensing+non-CMB analysis, the varying $A_L$ option is on the verge of being {\it strongly} favored over the $A_L=1$ one, which could indicate a problem for the standard tilted flat $\Lambda$CDM model. These data are consistent with flat spatial hypersurfaces but more and better data could improve the constraints on $\Omega_k$ and might alter this conclusion. Error bars on some cosmological parameters are significantly reduced when non-CMB data are used jointly with P18+lensing data. For example, in the tilted flat $\Lambda$CDM model for P18+lensing+non-CMB data the Hubble constant $H_0=68.09\pm 0.38$ km s$^{-1}$ Mpc$^{-1}$, which is consistent with that from a median statistics analysis of a large compilation of $H_0$ measurements as well as with a number of local measurements of the cosmological expansion rate. This $H_0$ error bar is 31\% smaller than that from P18+lensing data alone. \end{abstract} \pacs{98.80.-k, 95.36.+x} \maketitle \section{Introduction} \label{sec:intro} General relativity is the current best description of gravity on cosmological scales. In general relativity gravity is responsible for the observed expansion of the Universe and can be sourced by non-relativistic (cold dark and baryonic) matter, relativistic radiation/matter, a cosmological constant (or a dynamical dark energy density), and the curvature of space. In an influential 1932 paper Einstein and de Sitter, \citep{EinsteindeSitter1932}, noted that available data then were unable to measure spatial curvature and so decided to study whether a spatially-flat cosmological model was observationally consistent. 
They acknowledged that the cosmological model had to be dynamical, and so Einstein's original argument for a cosmological constant --- to make the Universe static --- was no longer valid and so the cosmological constant did not have to be included in this Einstein-de Sitter model. They ignored relativistic radiation/matter in this model (which was not under discussion then, and is known to be negligible at late times when the model was meant to be applicable). This Einstein-de Sitter model only included non-relativistic (and then only baryonic) matter. A little over half a century later, motivated by observations indicating a lower than critical non-relativistic matter energy density and the first inflation model, an improved standard model, the spatially-flat $\Lambda$CDM model, was proposed, \citep{Peebles:1984ge}. In this model the cosmological constant $\Lambda$, which has a time- and space-independent energy density, is the dominant contributor to the current cosmological energy budget, followed by non-relativistic non-baryonic cold dark matter (CDM), and then non-relativistic baryonic matter. Like the Einstein-de Sitter model, the standard spatially-flat $\Lambda$CDM model assumes vanishing spatial curvature, motivated by early models of spatially-flat inflation, \citep{Guth1981, Sato1981a, Sato1981b, Kazanas1980}. Soon after, spatially-non-flat, open and closed, inflation models were developed, \citep{Gott1982, Hawking1984, Ratra1985}. A decade and a half later, the observed currently accelerated cosmological expansion, discovered from type Ia supernova (SNIa) measurements \cite{Riess:1998cb, Perlmutter:1998np}, greatly strengthened support for a cosmological constant or a dynamical dark energy density that slowly varied in time and space \citep{PeeblesRatra1988, RatraPeebles1988} --- if general relativity is an accurate model for gravity on cosmological length scales --- and for the spatially-flat $\Lambda$CDM model or a model close to it. 
For reviews of the current situation see Refs.\ \citep{DiValentino:2021izs, Perivolaropoulos:2021jda, Abdalla:2022yfr}. A half-decade prior to the first SNIa observations indicating currently accelerated cosmological expansion, evidence for a lower than critical value of non-relativistic matter density, along with the development of an open inflation model, \citep{Gott1982}, led to some discussion of an open CDM model, \citep{RatraPeebles1994, RatraPeebles1995, Bucheretal1995, Yamamotoetal1995, Kamionkowskietal1994, Gorskietal1998, Ratraetal1999}, but with cosmic microwave background (CMB) observations indicating that space curvature had to be a subdominant contributor to the current cosmological energy budget, \citep{WMAP:2012nax, Planck:2018vyg}, and with SNIa observations favoring a significant contribution to the energy budget from a cosmological constant, interest in open CDM models soon faded. More recently, especially because of results from Planck CMB anisotropy data, \citep{Planck:2018vyg}, there has been renewed interest in non-flat models. In these models the current cosmological energy budget is dominated by $\Lambda$, to be consistent with the observed currently accelerated cosmological expansion, but they now have very mildly closed spatial hypersurfaces instead of open ones. This is because from an analysis of the final Planck 2018 TT,TE,EE+lowE (hereafter P18) data, that makes use of a specific primordial power spectrum (see below for a fuller discussion of these data and the power spectrum they use in this analysis), they find a spatial curvature energy density parameter value $\Omega_k = -0.044^{+0.018}_{-0.015}$ that is closed and 2.7$\sigma$ away from flat, \citep{Planck:2018vyg}, when $\Omega_k$ is included as an additional free parameter in the analysis. 
We note that from a combination of Atacama Cosmology Telescope (ACT) and Wilkinson Microwave Anisotropy Probe CMB anisotropy data Ref.\ \citep{ACT:2020gnv} finds $\Omega_k = -0.001^{+0.014}_{-0.010}$, which is 2.1$\sigma$ from the P18 value and consistent with flat spatial hypersurfaces, while the South Pole Telescope (SPT) CMB anisotropy data result in $\Omega_k = 0.001^{+0.018}_{-0.019}$, \citep{SPT-3G:2021wgf}, which is 1.7$\sigma$ from the P18 value and also consistent with flat spatial hypersurfaces. Both these analyses use the primordial power spectrum used in the P18 analysis. The above result led to the study of the so-called lensing anomaly. The trajectories of CMB photons are bent by the gravitational effect of inhomogeneities present in the mass distribution along their way to us. This statistical phenomenon, predicted by general relativity, is known as weak gravitational lensing of the CMB. When computing the predicted CMB temperature and polarization spectra in a cosmological model that are to be compared to the observed spectra, it is important to account for this effect and compute what are known as lensed CMB spectra. If we use the tilted flat $\Lambda$CDM model to measure cosmological parameter values from Planck CMB data, we can use this model, with these parameter values, to compute the expected amount of CMB weak gravitational lensing, \cite{Lewis:2006fu}. Incorrectly predicting the amount of weak lensing present in the CMB power spectra would indicate an inconsistency in the standard model when it is used to fit Planck CMB temperature and polarization anisotropy data. It turns out that this is actually the case, since an excess of CMB weak lensing is observed in the CMB power spectra, compared to what is expected in the standard model with parameter values determined from CMB data \cite{Calabreseetal2008, Planck:2018vyg}. 
This is known as the lensing anomaly, since the effect is not yet thought to be statistically significant enough to reject the standard spatially-flat $\Lambda$CDM model. A number of solutions have been proposed, with two being more widely debated. The first of these is related to the aforementioned non-zero value for $\Omega_k$ in the P18 data analysis, which favors closed spatial hypersurfaces, when $\Omega_k$ is taken to be an additional free parameter, e.g.\ \citep{Planck:2018vyg, Handley:2019tkm, DiValentino:2019qzk, DiValentino:2022rdg,Yang:2022kho}. Due to the excess of CMB weak lensing found, it is desirable to have a higher value of the non-relativistic matter energy density parameter $\Omega_m$ in order to increase the amount of gravitational lensing of CMB photons. Because of the tight constraints imposed by CMB data on this parameter there is no room, within the tilted flat $\Lambda$CDM model, to do this. By allowing non-flat spatial hypersurfaces, a closed model with $\Omega_k<0$ can resolve this problem, since the CMB power spectra are affected by the combination $(\Omega_m + \Omega_k)h^2$, where $h$ is the Hubble constant $H_0$ in units of $100~\textrm{km}~\textrm{s}^{-1}~\textrm{Mpc}^{-1}$, which can be held constant by making $\Omega_k$ slightly more negative while slightly increasing $\Omega_m$ to give more CMB weak lensing, and also slightly adjusting $h$. Cosmological distances also depend on spatial curvature, therefore in a non-flat cosmological model the positions of the acoustic peaks are shifted relative to the flat model case. This would not be a welcome change, since the constraints from the observed CMB power spectra are tight. This can be resolved by reducing the value of $h$ which shifts the acoustic peaks in the opposite direction. 
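The parameter trade-off just described can be made concrete: if the CMB spectra effectively pin down the combination $(\Omega_m + \Omega_k)h^2$, then making $\Omega_k$ more negative at a slightly lower $h$ forces $\Omega_m$ up, which is the direction that produces more CMB weak lensing. A toy numerical sketch of this degeneracy direction follows; the parameter values are illustrative, not fits.

```python
def omega_m_on_degeneracy(omega_k, h, fixed_combo):
    """Omega_m required so that (Omega_m + Omega_k) * h^2 stays at
    fixed_combo, mimicking the CMB geometrical degeneracy direction."""
    return fixed_combo / h**2 - omega_k

# Anchor the combination at a flat-model-like point, then close the geometry
fixed = (0.315 + 0.0) * 0.674**2
for ok, h in [(0.0, 0.674), (-0.02, 0.66), (-0.04, 0.65)]:
    om = omega_m_on_degeneracy(ok, h, fixed)
    print(f"Omega_k = {ok:+.2f}, h = {h:.3f} -> Omega_m = {om:.3f}")
```

Moving along this direction, more negative $\Omega_k$ paired with lower $h$ yields larger $\Omega_m$, and hence more predicted lensing, while leaving the constrained combination unchanged.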
The fact that almost the same temperature and polarization power spectra can be produced with different combinations of the cosmological parameter values points to a geometrical degeneracy between these three parameters, $H_0$-$\Omega_m$-$\Omega_k$. While the first of the more widely debated resolutions is based on a change of more-conventional cosmological parameters, the second one is more phenomenological, e.g.\ \citep{Planck:2018vyg, SPT:2017jdf, SPT:2019fqo, DiValentino:2019qzk, DiValentino:2022rdg}. Reference \cite{Calabreseetal2008} introduces the lensing consistency parameter $A_L$ which re-scales the gravitational potential power spectrum in such a way that when $A_L=1$ we recover the theoretically predicted amount of weak lensing. If $A_L$ is allowed to vary in the analysis, to be determined from data, when $A_L>1$ the predicted amount of lensing is greater than in the $A_L=1$ case. In Ref.\ \cite{Planck:2018vyg}, when P18 data are used to analyze the tilted flat $\Lambda$CDM+$A_L$ model, the result is $A_L = 1.180\pm 0.065$, which represents a 2.8$\sigma$ deviation from the theoretically expected value $A_L=1$. We emphasize however that the measured Planck lensing likelihood is consistent with $A_L = 1$, see Fig.\ 3 of Ref.\ \citep{Planck:2018vyg} and Ref.\ \citep{Planck:2018lbu}. We also note that from ACT CMB anisotropy data $A_L = 1.01 \pm 0.11$, \citep{ACT:2020gnv}, consistent with $A_L = 1$ and 1.3$\sigma$ smaller than the P18 value, while from SPT CMB anisotropy data $A_L = 0.81 \pm 0.14$, \citep{Henning:2017nuy}, 1.4$\sigma$ smaller than $A_L = 1$ and 2.4$\sigma$ smaller than the P18 value. To analyze CMB anisotropy data one must assume a form for the primordial power spectrum of spatial inhomogeneities as a function of wavenumber. In the inflation scenario zero-point quantum-mechanical fluctuations during inflation generate the spatial inhomogeneities, \citep{Hawking:1982cz, Starobinsky:1982ee, Guth:1982ec, Bardeen:1983qw, Fischler:1985ky}. 
In spatially-flat inflation models, if the inflaton field slowly rolls down an almost flat potential energy density the scale factor increases exponentially with time and the primordial power spectrum is almost scale-invariant with hardly any tilt, \citep{Harrison:1969fb, Peebles:1970ag, Zeldovich:1972zz}. A steeper inflaton potential energy density makes the inflaton evolve more rapidly, can cause the scale factor to grow only as a power of time, and will increase the power spectral tilt \citep{Lucchin:1984yf, Ratra:1989uv, Ratra:1989uz}. There has been much less study of the quantum-mechanical generation of spatial inhomogeneities in non-flat inflation models. Power spectra have been derived in spatially open and closed inflation models, \citep{Gott1982, Hawking1984, Ratra1985}, with a slowly-rolling inflation potential energy density, \citep{RatraPeebles1995, Ratra:2017ezv}, but these are untilted power spectra. The power spectrum assumed in the non-flat analyses of Refs.\ \citep{Planck:2018vyg, Handley:2019tkm, DiValentino:2019qzk} is tilted but was not derived from an inflation model computation. Very recently, a numerical study in closed inflation models that computes primordial power spectra generated for a few different, initially slow-roll, inflation initial conditions finds that it is possible to generate, in the closed case, a tilted power spectrum very close to that used in Refs.\ \citep{Planck:2018vyg, Handley:2019tkm, DiValentino:2019qzk}, \cite{Guth:2022xyz}. Also recently, a different set of initial conditions in closed and open inflation models was used to compute a different tilted power spectrum, \citep{Ratra:2022ksb}. In this paper we consider cosmological models with four different power spectra. In the tilted flat $\Lambda$CDM model we use the usual spatially-flat inflation model tilted power spectrum. 
In the untilted non-flat $\Lambda$CDM model, we use the untilted non-flat inflation model power spectrum, \citep{RatraPeebles1995, Ratra:2017ezv}. In the two different tilted non-flat $\Lambda$CDM models, we use the power spectrum assumed in Refs.\ \citep{Planck:2018vyg, Handley:2019tkm, DiValentino:2019qzk} --- which we call the Planck $P(q)$ --- as well as the power spectrum computed in Ref.\ \citep{Ratra:2022ksb}, which we call the new $P(q)$. See Sec.\ \ref{sec:method} below for a fuller description of the four power spectra we use. We emphasize that we use only non-flat inflation model power spectra that can be derived using a straightforward extension of the spatially-flat inflation model initial conditions to the non-flat inflation case. The issue of non-flat inflation model initial conditions is more complex than the flat inflation case, see discussion in Ref.\ \citep{Ratra:2022ksb}, so we focus on the simplest physically-consistent options, which also makes the analysis tractable. We note that a number of other power spectra have been considered in closed cosmological models, see Refs.\ \citep{Lasenby:2003ur, Masso:2006gv, Asgari:2015spa, Bonga:2016iuf, Handley:2019wlz, Thavanesan:2020lov, Kiefer:2021iko, Hergt:2022fxk}. A desire to measure the spatial curvature energy density parameter $\Omega_k$ provides part of the motivation for our work. The CMB anisotropy data are currently the most restrictive cosmological data, but to use these to measure $\Omega_k$ requires assumption of a primordial power spectrum for spatial inhomogeneities. Other, less-restrictive, data that do not require assuming a power spectrum can also be used to measure $\Omega_k$. 
These include better-established lower redshift data (that reach to $z \sim 2.3$), such as SNIa, Hubble parameter as a function of redshift [$H(z)$], and (non-growth-rate) baryon acoustic oscillation (BAO) measurements, \citep{Scolnic:2017caz, Yu:2017iju, eBOSS:2020yzd}, as well as emerging probes that reach to higher $z$, such as H \textsc{ii} starburst galaxy apparent magnitude observations as a function of $z$ that reach to $z \sim 2.5$, \citep{Gonzalez-Moran:2019uij,Cao:2020jgu,Cao:2020evz, Johnson:2021wou, Mehrabi:2021feg}; quasar angular size measurements that reach to $z \sim 2.7$, \citep{Cao:2017ivt,Ryan:2019uor, Lian:2021tca, Cao:2021cix}; Mg \textsc{ii} and C \textsc{iv} reverberation measured quasar data that reach to $z \sim 3.4$, \citep{OzDES:2021byt, Khadka:2021ukv, Khadka:2021sxe, Khadka:2022ooh, Cao:2022pdv, OzDES:2022ysr, Czerny:2022xfj}; possibly quasar flux measurements that reach to $z \sim 7.5$, \citep{Risaliti:2018reu, Khadka:2020whe, Khadka:2020vlh, Lusso:2020pdb, Khadka:2020tlm, Khadka:2021xcc, Rezaei:2021qwd, Dainotti:2022rfz, Petrosian:2022tlp}; and gamma-ray burst data that reach to $z \sim 8.2$, \citep{Dirirsa:2019fcs, Khadka:2020hvb, Khadka:2021vqa, Wang:2021hcx, Hu:2021ycz, Cao:2021irf, Luongo:2021pjs, Cao:2022wlg, Liu:2022srx, Dainotti:2022wli, Cao:2022yvi}. Individually these low- and intermediate-redshift data sets are only able to provide relatively weaker constraints on cosmological parameters in general, and specifically on $\Omega_k$, compared to those from CMB data. However, when many (or all) low- and intermediate-redshift data are analyzed jointly they provide useful constraints on $\Omega_k$ --- currently still not nearly as restrictive as the CMB ones --- favoring flat spatial hypersurfaces but still allowing a small amount of spatial curvature energy density, \citep{Park:2018tgj, Cao:2021ldv, Cao:2022ugh}. 
For other recent discussions of constraints on spatial curvature, see Refs.\ \citep{Arjona:2021hmg, Dhawan:2021mel, Gonzalez:2021ojp, Geng:2021hqc, Wei:2022plg, Mukherjee:2022ujw, Wu:2022fmr} and references therein, and see Refs.\ \citep{Baumgartner:2022jdz, Anselmi:2022uvj, Jimenez:2022asc} and references therein for recent, more general, discussions of non-flat cosmological models. While the standard spatially-flat $\Lambda$CDM cosmological model is attractive because of its simplicity --- the model only has 6 free cosmological parameters --- it is not straightforward to understand how to consistently generalize the current quantum-mechanical standard model of particle physics to one that accommodates the cosmological constant that is part of the standard $\Lambda$CDM model. Nonetheless, the standard cosmological model is consistent with a wide variety of measurements, including CMB anisotropy measurements \cite{Planck:2018vyg}, SNIa apparent magnitude observations \citep{Scolnic:2017caz}, BAO data \citep{eBOSS:2020yzd}, $H(z)$ observations \citep{Yu:2017iju}, and measurements of the growth of structure as a function of redshift ($f\sigma_8$). It is important to bear in mind that these data do not rule out mild evolution of the dark energy density \cite{Gomez-Valent:2018nib, Ooba:2018dzf, Ryan:2018aif, SolaPeracaula:2018wwm, Singh:2018izf, Park:2019emi, Gomez-Valent:2020mqn, Moreno-Pulido:2020anb, Sinha:2020vob, Cao:2020jgu, Urena-Lopez:2020npg, Cao:2021ldv, SolaPeracaula:2021gxi, Khadka:2021vqa, Cao:2021cix, Xu:2021xbt, Cao:2021irf, Jesus:2021bxq, Cao:2022wlg, Moreno-Pulido:2022phq, Cao:2022pdv, Adil:2022hkj} or, as discussed in detail above, mildly curved spatial hypersurfaces. These extensions, among others, might alleviate some of the issues affecting the standard spatially-flat $\Lambda$CDM model, such as differences in $H_0$ and $\sigma_8$ values determined using different techniques, \citep{DiValentino:2021izs, Perivolaropoulos:2021jda, Abdalla:2022yfr}. 
In this paper, however, we focus our efforts on the study of the lensing anomaly and on the measurement of $\Omega_k$. We study eight cosmological models, namely, the tilted flat $\Lambda$CDM (+$A_L$) models, the untilted non-flat $\Lambda$CDM (+$A_L$) models, the tilted non-flat $\Lambda$CDM (+$A_L$) Planck $P(q)$ models, and the tilted non-flat $\Lambda$CDM (+$A_L$) new $P(q)$ models. Six of these are non-flat models, characterized by three different primordial power spectra (see Sec.\ \ref{sec:method} for the form of the power spectra). By using a number of cosmological models with compilations of observational data to test how well the models fit these data, and to constrain the cosmological parameters of the models, we can measure, among other things, $\Omega_k$ and also determine whether the cosmological parameter constraints set by different data are model-dependent or not. The data sets we employ in this work are P18 data, Planck 2018 CMB weak lensing data, non-growth-factor BAO (BAO$^{\prime}$) data, BAO (including growth-factor) data, and non-CMB data [composed of BAO, $f\sigma_8$, $H(z)$, and SNIa data]. These data are described in more detail in Sec.\ \ref{sec:data}. A brief summary of the more significant results we find follows. These assume that the data sets we use are correct and do not have unaccounted-for systematic errors. The untilted non-flat $\Lambda$CDM model with and without a varying $A_L$ parameter is not able to properly fit the P18 CMB anisotropy power spectra, due to the lack of the tilt ($n_s$) degree of freedom. Consequently its performance in comparison with the tilted models turns out to be very poor. Significant evidence in favor of a closed Universe is found when P18 data are considered alone and the tilted non-flat models better fit these data than does the standard tilted flat $\Lambda$CDM model. 
There are disagreements between P18 data cosmological constraints and non-CMB data cosmological constraints in the context of the tilted non-flat models with $A_L=1$, with the tilted non-flat $\Lambda$CDM Planck $P(q)$ model ruled out at 3$\sigma$. These tensions completely fade when the $A_L$ parameter is allowed to vary. On the other hand no significant tension is found when the cosmological parameter constraints obtained with two different data sets are compared within the standard tilted flat $\Lambda$CDM model. The most-restrictive P18+lensing+non-CMB data set clearly favors the varying $A_L$ option (with $A_L>1$) over the $A_L=1$ one --- which could be a problem for the standard tilted flat $\Lambda$CDM model --- and when this data set is utilized we get almost model-independent cosmological parameter constraints. These data are consistent with flat spatial hypersurfaces --- so we conclude that current data do not favor curved geometry --- but more and better data could improve the constraints on $\Omega_k$ and might alter this conclusion. We note that even though both P18 data and non-CMB data favor closed geometry, the larger $H_0$ and smaller $\Omega_m$ values favored by non-CMB data (compared to those favored by P18 data) result in P18+lensing+non-CMB data favoring flat spatial hypersurfaces. The Hubble constant value measured using these data in the tilted flat $\Lambda$CDM model is $H_0=68.09\pm 0.38$ km s$^{-1}$ Mpc$^{-1}$, which is consistent with that from a median statistics analysis of a large compilation of Hubble constant measurements as well as with a number of local measurements of the cosmological expansion rate. This $H_0$ error bar is 31\% smaller than that from P18+lensing data alone; similarly augmenting the P18+lensing data with our non-CMB data compilation reduces the $\Omega_m$ error bar by 33\% and also reduces error bars on all the other cosmological parameters by smaller amounts. The layout of our paper is as follows. 
In Sec.\ \ref{sec:data} we detail the observational data sets we employ to test the different cosmological models. In Sec.\ \ref{sec:method} we describe the cosmological models and primordial power spectra we study and summarize the methods we use in our analyses. We dedicate Sec.\ \ref{sec:results} to a detailed discussion of the results obtained by testing the different cosmological models against the several data sets we consider. In this section we also utilize different statistical estimators to compare the performance of the models in fitting data and to study possible tensions between different data sets in a given model. In Sec.\ \ref{sec:discussion} we summarize the more significant results of the previous (long) section. Finally, in Sec.\ \ref{sec:conclusion} we present our conclusions. \section{Data} \label{sec:data} We use CMB anisotropy data and non-CMB data to constrain cosmological parameters, to determine how well the cosmological models we study fit these data, and to study how mutually consistent these data sets are in each of the cosmological models. We now list the data sets we use. {\bf P18}. Planck 2018 CMB temperature anisotropy data together with polarization data and their corresponding cross-spectra (TT,TE,EE+lowE), \cite{Planck:2018vyg}, which contain: TT power spectra at low-$\ell$ ($2\leq \ell \leq 29$) and high-$\ell$ ($30\leq \ell\leq 2508$), where $\ell$ is the multipole number, TE data at high-$\ell$ ($30\leq \ell \leq 1996$), and EE data at low-$\ell$ ($2\leq \ell \leq 29$) and high-$\ell$ ($30\leq \ell\leq 1996$). We use the Planck 2018 baseline \texttt{Plik} $\ell \geq 30$ likelihood, which is described in Sec.\ 2.2.1 of Ref.\ \cite{Planck:2018vyg}. {\bf (P18) lensing}. Planck 2018 lensing potential power spectrum, see Sec.\ 2.3 of Ref.\ \cite{Planck:2018vyg} or Sec.\ 2 of Ref.\ \cite{Planck:2018lbu} for more details. {\bf BAO$^\prime$}. 
Twelve BAO data points from both anisotropic and isotropic BAO estimators that probe the redshift range $0.122 \leq z \leq 2.334$ \cite{Gil-Marin:2020bct,Bautista:2020ahg,Hou:2020rse,Neveux:2020voa,Carter:2018vce,DES:2017rfo,duMasdesBourboux:2020pck}. These are BAO data with growth rates excluded from the original papers, and are listed, along with the appropriate covariance matrices, in Sec.\ 3 of Ref.\ \cite{Cao:2022ugh}. \begin{table} \caption{BAO measurements.} \begin{ruledtabular} \begin{tabular}{ccc} $z_\textrm{eff}$ & Measurement & Reference \\[+0mm] \hline \\[-2mm] $0.122$ & $D_V\left(r_{d,\textrm{fid}}/r_d\right)$ [Mpc] $= 539\pm 17$ [Mpc] & \cite{Carter:2018vce} \\[+1mm] \hline \\[-2mm] $0.38$ & $D_M/r_d$ $= 10.274 \pm 0.151$ & \cite{Gil-Marin:2020bct} \\[+1mm] $0.38$ & $D_H/r_d$ $= 24.888\pm 0.582$ & \cite{Gil-Marin:2020bct} \\[+1mm] $0.51$ & $D_M/r_d$ $= 13.381 \pm 0.179$ & \cite{Gil-Marin:2020bct} \\[+1mm] $0.51$ & $D_H/r_d$ $= 22.429 \pm 0.482$ & \cite{Gil-Marin:2020bct} \\[+1mm] $0.38$ & $f \sigma_8 =0.49729\pm0.04508$ & \cite{Gil-Marin:2020bct} \\[+1mm] $0.51$ & $f \sigma_8 =0.45902\pm 0.03784$ & \cite{Gil-Marin:2020bct} \\[+1mm] \hline \\[-2mm] $0.698$ & $D_M/ r_d$ $= 17.646\pm0.302$ & \cite{Gil-Marin:2020bct,Bautista:2020ahg} \\[+1mm] $0.698$ & $D_H / r_d$ $= 19.770\pm0.469$ & \cite{Gil-Marin:2020bct,Bautista:2020ahg} \\[+1mm] $0.698$ & $f\sigma_8$ $= 0.47300\pm 0.04429$ & \cite{Gil-Marin:2020bct,Bautista:2020ahg} \\[+1mm] \hline \\[-2mm] $0.81$ & $D_A/r_d$ $= 10.75\pm 0.43$ & \cite{DES:2017rfo} \\[+1mm] \hline \\[-2mm] $1.48$ & $D_M/ r_d$ $= 30.21\pm 0.79$ & \cite{Hou:2020rse,Neveux:2020voa} \\[+1mm] $1.48$ & $D_H / r_d$ $= 13.23\pm 0.47$ & \cite{Hou:2020rse,Neveux:2020voa} \\[+1mm] $1.48$ & $f\sigma_8$ $= 0.462\pm 0.045$ & \cite{Hou:2020rse,Neveux:2020voa} \\[+1mm] \hline \\[-2mm] $2.334$ & $D_M / r_d$ $= 37.5^{+1.2}_{-1.1}$ & \cite{duMasdesBourboux:2020pck} \\[+1mm] $2.334$ & $D_H / r_d$ $= 8.99^{+0.20}_{-0.19}$ & 
\cite{duMasdesBourboux:2020pck} \\[+0mm] \end{tabular} \\[+1mm] \begin{flushleft} Note: For the data point at $z = 0.122$ the sound horizon size (at the drag epoch) of the fiducial model is $r_{d,\textrm{fid}}=147.5~\textrm{Mpc}$ \cite{Carter:2018vce}. \end{flushleft} \end{ruledtabular} \label{tab:bao} \end{table} {\bf BAO}. An extension of the BAO$^\prime$ data described above, that also probes the redshift range $0.122 \leq z \leq 2.334$, but now includes the correlated growth rate ($f\sigma_8$) data points provided in Refs.\ \cite{Gil-Marin:2020bct,Bautista:2020ahg,Hou:2020rse,Neveux:2020voa}. Table \ref{tab:bao} lists these BAO data points. The quantities listed in Table \ref{tab:bao} include the transverse comoving distance at redshift $z$ \begin{equation} D_M(z) = (1+z)D_A(z), \end{equation} where $D_A(z)$ is the angular size distance at $z$, the Hubble distance \begin{equation} D_H(z) = \frac{c}{H(z)}, \end{equation} where $H(z)$ is the Hubble parameter and $c$ the speed of light, and the angle-averaged distance \begin{equation} D_V(z) = \left[czD^2_M(z)/H(z)\right]^{1/3}. \end{equation} The measurements are provided as relative distances with respect to the radius of the sound horizon at the drag epoch redshift $z_d$ \begin{equation} r_d = \int^{\infty}_{z_d}\frac{c_s(z)dz}{H(z)}, \end{equation} where $c_s(z)$ is the speed of sound in the photon-baryon fluid.
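As a concrete illustration of the distance definitions above, the following Python sketch evaluates $D_H$, $D_M$, and $D_V$ by direct numerical quadrature for a hypothetical flat $\Lambda$CDM background; the parameter values are illustrative assumptions (not our fits), and radiation and neutrino contributions to $H(z)$ are neglected:

```python
import numpy as np
from scipy.integrate import quad

# Hypothetical illustrative parameters (not this paper's best-fit values).
H0 = 67.3            # Hubble constant, km/s/Mpc
Om = 0.315           # current matter density parameter
c = 299792.458       # speed of light, km/s

def H(z):
    # Hubble parameter for a spatially flat LambdaCDM background;
    # radiation and neutrino contributions are neglected for simplicity.
    return H0 * np.sqrt(Om * (1.0 + z)**3 + (1.0 - Om))

def D_H(z):
    # Hubble distance D_H(z) = c / H(z), in Mpc.
    return c / H(z)

def D_M(z):
    # Transverse comoving distance; for zero spatial curvature this is
    # the line-of-sight comoving distance, the integral of c dz' / H(z').
    integral, _ = quad(lambda zp: c / H(zp), 0.0, z)
    return integral

def D_V(z):
    # Angle-averaged distance D_V(z) = [c z D_M^2(z) / H(z)]^(1/3).
    return (c * z * D_M(z)**2 / H(z))**(1.0 / 3.0)
```

A BAO observable such as $D_M(z)/r_d$ then follows by dividing by a sound horizon value, e.g.\ the fiducial $r_d \simeq 147.5$ Mpc quoted in the note to Table \ref{tab:bao}.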
For BAO data from Ref.\ \cite{Gil-Marin:2020bct} the appropriate covariance matrix is now \begin{widetext} \begin{equation} \label{eq:cov_BOSS} \begin{pmatrix} 0.022897 & -0.02007 & 0.0026481 & 0.013487 & -0.0081402 & 0.0010292 \\ -0.02007 & 0.33849 & -0.0085213 & -0.016024 & 0.13652 & -0.0038002 \\ 0.0026481 & -0.0085213 & 0.0020319 & 0.001325 & -0.0023012 & 0.000814158 \\ 0.013487 & -0.016024 & 0.001325 & 0.032158 & -0.020091 & 0.0026409 \\ -0.0081402 & 0.13652 & -0.0023012 & -0.020091 & 0.23192 & -0.0055377 \\ 0.0010292 & -0.0038002 & 0.000814158 & 0.0026409 & -0.0055377 & 0.0014322 \end{pmatrix}, \end{equation} \end{widetext} while the covariance matrix for BAO data from Refs.\ \cite{Gil-Marin:2020bct,Bautista:2020ahg} is \begin{small} \begin{equation} \label{eq:cov_LRG} \begin{pmatrix} 0.09114 & -0.033789 & 0.0024686 \\ -0.033789 & 0.22009 & -0.0036088 \\ 0.0024686 & -0.0036088 & 0.0019616 \end{pmatrix}, \end{equation} \end{small} and that for BAO data from Refs.\ \cite{Hou:2020rse,Neveux:2020voa} is \begin{small} \begin{equation} \label{eq:cov_Quasar} \begin{pmatrix} 0.6227 & 0.01424 & 0.02257 \\ 0.01424 & 0.2195 & -0.007315 \\ 0.02257 & -0.007315 & 0.002020 \end{pmatrix}. \end{equation} \end{small} {${\boldsymbol f\bm{\sigma}_8}$}. $f\sigma_8$ data points, in addition to those correlated with BAO data that are listed in Table \ref{tab:bao}. These independent $f\sigma_8$ measurements are obtained either from peculiar velocity data \cite{Turnbull:2011ty,Hudson:2012gt,Said:2020epb} or from redshift space distortion (RSD) analyses \cite{Shi:2017qpr,Simpson:2015yfa,Blake:2013nif,Mohammad:2018mdy,Okumura:2015lvp}. These are listed in Table \ref{tab:fs8}. The combination $f(z)\sigma_8(z)$ is used to quantify the growth rate of the matter density perturbation. Here, the growth rate \begin{equation} f(z) = -(1+z)\frac{d\ln{D(z)}}{dz} \end{equation} where $D(z)$ is the growth function. 
The other function involved, $\sigma_8(z)$, is the root mean square of matter fluctuations smoothed over spheres of radius $R_8 = 8h^{-1}\textrm{Mpc}$ at a given value of the redshift. It is computed as \begin{equation} \sigma^2_8(z) = \int\frac{d^{3}k}{(2\pi)^3}P_m(z,\vec{k})W^2(k{R_8}), \end{equation} where $P_m(z,\vec{k})$ is the matter power spectrum and $W(k{R_8})$ is the window function. \begin{table} \caption{$f\sigma_8$ measurements.} \begin{ruledtabular} \begin{tabular}{ccc} $z_\textrm{eff}$ & $f\sigma_8$ & Reference \\[+0mm] \hline \\[-2mm] $0.02$ & $ 0.398\pm 0.065$ & \cite{Turnbull:2011ty,Hudson:2012gt} \\[+1mm] \hline \\[-2mm] $0.035$ & $0.338\pm 0.027$ & \cite{Said:2020epb} \\[+1mm] \hline \\[-2mm] $0.1$ & $0.376\pm 0.038$ & \cite{Shi:2017qpr} \\[+1mm] \hline \\[-2mm] $0.18$ & $ 0.29\pm 0.10$ & \cite{Simpson:2015yfa} \\[+1mm] $0.38$ & $0.44\pm 0.06$ & \cite{Blake:2013nif} \\[+1mm] \hline \\[-2mm] $0.6$ & $0.49\pm 0.12$ & \cite{Mohammad:2018mdy}\\[+1mm] $0.86$ & $0.46\pm 0.09$ & \cite{Mohammad:2018mdy} \\[+1mm] \hline \\[-2mm] $1.36$ & $0.482\pm 0.116$ & \cite{Okumura:2015lvp} \\[+0mm] \end{tabular} \end{ruledtabular} \label{tab:fs8} \end{table} {\bf SNIa}. Apparent magnitude as a function of redshift measurements for 1048 Pantheon SNIa \cite{Scolnic:2017caz}, probing the redshift range $0.01 < z < 2.3$, and 20 compressed data points, spanning the redshift range $0.015 \leq z \leq 0.7026$, representing 207 DES 3yr SNIa \cite{DES:2018paw}. The Pantheon and DES 3yr data are independent of each other, but the data points within each sample are correlated and we account for the corresponding covariance matrices in our analyses. {${\bm{H(z)}}$}. Hubble parameter measurements over the redshift range $0.070 \leq z \leq 1.965$ obtained using the differential age technique. The 31 data points employed are listed in Table 2 of Ref.\ \cite{Park:2017xbl}. Hereafter we denote the combination of BAO, $f\sigma_8$, SNIa, and $H(z)$ data sets as the non-CMB data set. 
\section{Methods} \label{sec:method} We apply the Markov chain Monte Carlo (MCMC) method, implemented in the \texttt{CAMB}/\texttt{COSMOMC} package (version of Oct.\ 2018), \cite{Challinor:1998xk,Lewis:1999bs,Lewis:2002ah}, to explore the parameter space of the different models under study. The \texttt{CAMB} program computes the matter and CMB power spectra based on the evolution of density perturbations of the matter and radiation components and the \texttt{COSMOMC} program uses the MCMC method to estimate the parameter constraints that are favored by the given observational data sets. We have performed cross-checks using the \texttt{CLASS}/\texttt{MontePython} package, \cite{Blas:2011rf,Audren:2012wb}. In general we find good agreement between the two sets of results unless significant degeneracies between some of the fitting parameters are present. When this happens, differences in the central values are found, but the two sets of results remain compatible at 1$\sigma$ due to the large error bars. The inclusion of more data breaks the aforementioned degeneracies and the two sets of results then agree very well. In this paper we consider four cosmological models: the tilted flat, (two) tilted non-flat, and the untilted non-flat $\Lambda$CDM models, as well as their extensions through the inclusion of the $A_L$ parameter, for a total of eight cases. $A_L$ is a phenomenological parameter that scales the theoretical prediction of the gravitational potential power spectrum, with theoretically expected value $A_L=1$, see Ref.\ \cite{Calabreseetal2008}. $A_L>1$ causes smoothing of the acoustic peaks in the CMB angular power spectrum, and Planck CMB data tend to prefer $A_L>1$ \cite{Planck:2018vyg}.
The tilted flat $\Lambda$CDM model is characterized by six cosmological parameters ($\Omega_b h^2$, $\Omega_c h^2$, $\theta_\textrm{MC}$, $\tau$, $A_s$, $n_s$), where $\Omega_b$ and $\Omega_c$ are the current values of the non-relativistic baryonic and cold dark matter density parameters, $\theta_\textrm{MC}$ is the angular size of the sound horizon at recombination defined in the \texttt{CAMB}/\texttt{COSMOMC} program, $\tau$ is the reionization optical depth, and $A_s$ and $n_s$ are the amplitude and the spectral index of the primordial scalar-type energy density perturbation power spectrum \begin{equation}\label{eq:tilted_flat_PS} P_\delta(k) = A_s \left(\frac{k}{k_0} \right)^{n_s}, \end{equation} where $k$ is the wavenumber and the pivot scale for $A_s$ is $k_0=0.05~\textrm{Mpc}^{-1}$. This power spectrum is generated by quantum mechanical fluctuations during an early epoch of power-law inflation in an exponential potential energy density scalar field cosmological model with flat spatial hypersurfaces, \cite{Lucchin:1984yf, Ratra:1989uv, Ratra:1989uz}. In the non-flat very-slow-roll (so untilted) inflation $\Lambda$CDM model, the presence of non-zero spatial curvature determines a new length scale, and the power-law part of the primordial power spectrum is not relevant. Thus, this model still has six cosmological parameters, with the spectral index $n_s$ being replaced by the current value of the spatial curvature density parameter $\Omega_k$. For very-slow-roll inflation in this non-flat inflation model, the primordial power spectrum is, \cite{RatraPeebles1995, Ratra:2017ezv}, \begin{equation}\label{eq:untilted_nonflat_PS} P_\delta(q) \propto \frac{(q^2-4K)^2}{q(q^2-K)} \end{equation} where $q=\sqrt{k^2+K}$ is the wavenumber in a model with non-zero spatial curvature $K=-(H_0^2 / c^2)\Omega_k$, and $A_s$ is defined to be the amplitude of the power spectrum at the pivot scale $k_0$.
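For illustration, the two spectra just defined can be evaluated with a few lines of Python; the amplitude, tilt, and pivot values below are assumptions for the example, and the overall normalization of the non-flat form is left arbitrary since it is fixed by matching $A_s$ at the pivot scale:

```python
import numpy as np

# Illustrative amplitude, tilt, and pivot scale (assumed example values).
A_s, n_s, k0 = 2.1e-9, 0.965, 0.05   # k0 in Mpc^-1

def P_tilted_flat(k):
    # Tilted flat power-law spectrum: P(k) = A_s (k/k0)^(n_s).
    return A_s * (k / k0)**n_s

def P_untilted_nonflat(q, K):
    # Untilted non-flat spectrum, proportional to (q^2 - 4K)^2 / [q (q^2 - K)];
    # the normalization constant is left out here (it is fixed by A_s at k0).
    return (q**2 - 4.0 * K)**2 / (q * (q**2 - K))
```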
This power spectrum form holds in both the open ($\Omega_k > 0$) and closed ($\Omega_k < 0$) cases, with $q|K|^{-1/2} \geq 0$ and continuous in the open case and $q|K|^{-1/2} = 3, 4, 5\dots$ in the closed case. It is the power spectrum used in the non-flat model analyses in Refs.\ \cite{Ooba:2017ukj, Ooba:2017npx, Ooba:2017lng, Park:2017xbl,Park:2018bwy, Park:2018fxx, Park:2019emi}. For the tilted non-flat $\Lambda$CDM model, there are seven cosmological parameters, with $\Omega_k$ added to the six of the tilted flat $\Lambda$CDM model. In this model it has been usual to assume, e.g.\ \cite{Planck:2018vyg}, a primordial power spectrum of the form \begin{equation}\label{eq:tilted_nonflat_Planck_PS} P_\delta(q) \propto \frac{(q^2-4K)^2}{q(q^2-K)} \left( \frac{k}{k_0} \right)^{n_s -1}, \end{equation} where $q$ (and $A_s$) are defined in the previous paragraph. The above expression, which we refer to as the Planck $P(q)$, is a phenomenologically modified version of the non-flat very-slow-roll untilted primordial density perturbation power spectrum, given in Eq.\ \eqref{eq:untilted_nonflat_PS}, that now also allows for tilt, \cite{Lesgourgues:2013bra}. It assumes that tilt in a non-flat space works in a way similar to how it does in flat space. This expression was known to be physically reasonable in the cases when $K = 0$ or $n_s=1$, since Eqs.\ \eqref{eq:tilted_flat_PS} and \eqref{eq:untilted_nonflat_PS} are recovered, respectively, and these two expressions hold in physically-consistent inflation models. Very recently, a numerical study of closed inflation models, which computes the primordial power spectra generated for a few different initially-slow-roll inflation initial conditions, found that it is possible to generate, in the closed case, a power spectrum very close to that given in Eq.\ \eqref{eq:tilted_nonflat_Planck_PS}, \cite{Guth:2022xyz}. In this paper we also study another not-necessarily very-slowly-rolling non-flat (closed and open) inflation model, \cite{Ratra:2022ksb}.
These tilted non-flat inflation models assume a different inflation initial condition than those studied in Ref.\ \cite{Guth:2022xyz} and result in a primordial power spectrum that differs from that of Eq.\ \eqref{eq:tilted_nonflat_Planck_PS}. For the closed and open inflation models, the resulting power spectrum is \begin{equation} { P_\delta(q) \propto (q^2 -4K)^2|P_{\zeta}(A)|, } \label{eq:tilted_nonflat_new_PS} \end{equation} where $P_\zeta(A)$ is different in the closed and open cases. For the closed inflation model \begin{widetext} \begin{equation} \sqrt{|P_{\zeta}(A)|} = \left(\frac{16\pi}{m^2_p}\right)^{1/2}\!\!\!\!Q^{1/p}\frac{(2+q_s)p}{\sqrt{\pi q_s}} \Bigg|-1 + \frac{W(A)}{p}\Bigg| \,\, \frac{2^{-(6-4q_s+2A -W(A))/p}}{\sqrt{A}(A-1)(A+3)} \,\, \Bigg|\frac{\Gamma\left(1 + W(A)/p\right)\Gamma\left((2+q_s)/(2p)\right)}{\Gamma\left((2+W(A))/p\right)}\Bigg|, \end{equation} \end{widetext} with \begin{equation} W(A) = \sqrt{-8-4q_s + q_s^2 + 4A(A+2)}, \end{equation} and \begin{equation} A = \frac{q}{\sqrt{|K|}} -1. \end{equation} For the open inflation model \begin{widetext} \begin{equation} \sqrt{|P_{\zeta}(A)|} = \left(\frac{16\pi}{m^2_p}\right)^{1/2}\!\!\!\!Q^{1/p}\frac{(2+q_s)p}{\sqrt{\pi q_s}}\Bigg|-1 + \frac{W(A)}{p}\Bigg|\,\, \frac{2^{-(6-4q_s)/p +\textrm{Re}(W(A)/p)}}{\sqrt{A}(A^2 + 4)}\,\, \Bigg|\frac{\Gamma\left(1 + W(A)/p\right)\Gamma\left((2+q_s)/(2p)\right)}{\Gamma\left((2+W(A))/p\right)}\Bigg|, \end{equation} \end{widetext} with \begin{equation} W(A) = \sqrt{-12 -4q_s + q_s^2 -4A^2}, \end{equation} and \begin{equation} A = \frac{q}{\sqrt{|K|}}. \end{equation} In these equations, $\Gamma(x)$ is the Gamma function, $m_p$ is the Planck mass, $Q$ is a normalization constant, $q_s = (2 -2n_s)/(3-n_s)$, and finally $p = 2-q_s$. In both the closed and open inflation models $0 < q_s < 2$, so $-\infty < n_s < 1$.
In this paper we refer to the power spectrum in this tilted non-flat $\Lambda$CDM as the new $P(q)$, which is shown in Eq.\ \eqref{eq:tilted_nonflat_new_PS}, and following the procedure applied to the other power spectra, $A_s$ gives the amplitude of the new $P(q)$ at the pivot-scale $k_0$. Figure \ref{fig:pinit} compares the initial scalar-type perturbation spectra of the tilted flat, untilted non-flat, and two tilted non-flat models with the Planck $P(q)$ and the new $P(q)$. In this figure we set the values of the cosmological parameters, for all the models, to the mean values of the tilted non-flat $\Lambda$CDM model with Planck $P(q)$ constrained by the P18+lensing data (see Table \ref{tab:para_NL_ns_nonCMB} for the parameters), except in panel (b) for the open models where we change the sign of $\Omega_k$. \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=78mm,trim=0cm 5cm 0cm 5cm]{pinit_closed_fig3a.pdf}} \mbox{\includegraphics[width=78mm,trim=0cm 5cm 0cm 5cm]{pinit_open_fig3b.pdf}} \caption{Initial scalar-type perturbation spectra of the tilted flat, untilted non-flat, and two tilted non-flat $\Lambda$CDM models. For the tilted non-flat closed models, the cosmological parameters of the tilted non-flat $\Lambda$CDM model with Planck $P(q)$ constrained by using P18+lensing data (including $\Omega_k=-0.0103$) are used (see Table \ref{tab:para_NL_ns_nonCMB}). For closed models, the same value of $A_s$ was assumed for all models and the same value of $n_s$ was assumed for all tilted models. The powers at the first 11 large-scale wavenumbers are indicated by the filled (open) circles for the tilted closed model with the new (Planck) $P(q)$. For open non-flat models, $\Omega_k=+0.0103$ was assumed. For the tilted flat model, the generalized wavenumber $q$ is equivalent to $k$. 
} \label{fig:pinit} \end{figure*} In the cases where we include the $A_L$ parameter in the analysis, this increases by one the number of cosmological model parameters to be determined from data, so depending on model we then have either seven or eight cosmological model parameters in these cases. At the background level, the evolution of the scale factor $a$ in all models we study is described by the Hubble function \begin{equation} \begin{split} \label{eq:Hubble_function} H^2(a) = H^2_0[\Omega_\gamma{a^{-4}} &+ (\Omega_b + \Omega_c){a^{-3}} \\ &+ \Omega_k{a^{-2}} + \Omega_\nu(a)+ \Omega_\Lambda]. \end{split} \end{equation} Here $a=1/(1+z)$ is the cosmic scale factor normalized to unity at present, $\Omega_\Lambda$ represents the cosmological constant dark energy density parameter, $\Omega_\gamma$ is the current value of the CMB photon energy density parameter, and $\Omega_\nu(a)$ represents the contribution of the massless and massive neutrinos, for which it is not possible to get an analytical expression. In all cases we study, we determine the contribution of photons, and massless and massive neutrinos by assuming a present CMB temperature $T_0=2.7255$ K, the effective number of neutrino species $N_\textrm{eff}=3.046$, and a single massive neutrino species with neutrino mass $0.06$ eV. During parameter exploration using the MCMC method, we set wide non-zero flat priors on parameters in order that they not affect the parameter estimation; these priors are listed in Table \ref{tab:Priors}. 
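A minimal numerical sketch of the Hubble function in Eq.\ \eqref{eq:Hubble_function} is the following, with illustrative (assumed) density parameters and with the massive-neutrino term $\Omega_\nu(a)$, which lacks an analytic form, simply neglected:

```python
import numpy as np

# Illustrative density parameters (assumptions, not fits); the neutrino
# contribution Omega_nu(a), which has no analytic form, is neglected here.
H0 = 67.3                                   # km/s/Mpc
Omega_g, Omega_b, Omega_c = 5.4e-5, 0.049, 0.266
Omega_k = 0.0
Omega_L = 1.0 - Omega_g - Omega_b - Omega_c - Omega_k

def hubble(a):
    # H(a) from the Friedmann equation above, without the neutrino term;
    # a = 1/(1+z) is normalized to unity today.
    E2 = (Omega_g * a**-4 + (Omega_b + Omega_c) * a**-3
          + Omega_k * a**-2 + Omega_L)
    return H0 * np.sqrt(E2)
```

By construction $\Omega_\Lambda$ closes the budget, so this sketch returns $H(a=1) = H_0$ exactly.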
\begin{table} \caption{Flat priors of the fitting parameters.} \begin{ruledtabular} \begin{tabular}{cccc} $\textrm{Parameters}$ & $\textrm{Our}$ & $\textrm{Handley}$ & $\textrm{Handley}$+$\Omega_k$ \\[+0mm] \hline \\[-2mm] $\Omega_b h^2$ & [0.005,0.1] & [0.019,0.025]& [0.019,0.025] \\[+1mm] \hline \\[-2mm] $\Omega_c h^2$ & [0.001,0.99] & [0.095,0.145] & [0.095,0.145] \\[+1mm] \hline \\[-2mm] 100$\theta_\textrm{MC}$ & [0.5,10] & [1.03,1.05]& [1.03,1.05]\\[+1mm] \hline \\[-2mm] $\tau$ & [0.01,0.8] & [0.01,0.4]& [0.01,0.4]\\[+1mm] \hline \\[-2mm] $\Omega_k$ & [-0.5,0.5] & [-0.1,0.05]& [-0.3,0.15]\\[+1mm] \hline \\[-2mm] $n_s$ & [0.8,1.2] & [0.885,1.04]& [0.885,1.04]\\[+1mm] \hline \\[-2mm] $\ln\left(10^{10}A_s\right)$ & [1.61,3.91] & [2.5,3.7]& [2.5,3.7]\\[+1mm] \hline \\[-2mm] $A_L$ & [0,10] & -& -\\[+1mm] \end{tabular} \\[+1mm] \begin{flushleft} Note: In almost all the computations reported in this paper we use the priors listed in the Our column in this table. A general exception is that in almost all the computations in the tilted non-flat $\Lambda$CDM model with the new $P(q)$ we use a more restrictive prior range for the spectral index, $0.8\le n_s < 1$. In addition to these choices, in all cases, for the derived parameter $H_0$ we restrict its range of variation to $0.2 \le h \le 1$. In Table \ref{tab:para_sigmap} when only lensing data is used, in order to test the impact of different choices of priors, we also provide results for the narrower priors employed in Ref.\ \cite{Handley:2019tkm} (listed in the Handley column above). The Handley+$\Omega_k$ column priors above differ from Handley priors by allowing for a broader prior for the $\Omega_k$ parameter. \end{flushleft} \end{ruledtabular} \label{tab:Priors} \end{table} Due to the lack of constraining power of some of the data sets, when they are considered alone, we have to fix the values of some of the cosmological parameters in the analyses of these data sets. 
In BAO$^\prime$, BAO, (P18) lensing, or non-CMB data alone analyses we set the values of $\tau$ and $n_s$ to those obtained in the P18 data alone analysis for each model. Additionally, in BAO$^\prime$ data alone analyses we also fix the value of $\ln\left(10^{10}A_s\right)$, again, to the corresponding P18 data analysis value. Finally, in Sec.\ \ref{sec:P18+lensing_vs_non-CMB}, when we compare the constraints obtained from P18+lensing data and non-CMB data, in the non-CMB data analyses the values of $\tau$ and $n_s$ are fixed to the ones we get in the P18+lensing data analysis for each model. We use the converged MCMC chains to compute mean values, their confidence limits, and the posterior distributions of the model parameters with the \texttt{GetDist} code \cite{Lewis:2019xzd}. The MCMC chains are considered to converge when the Gelman and Rubin $R$ statistic provided by \texttt{COSMOMC} becomes $R-1<0.01$. In addition to using the various combinations of data sets (see Sec.\ \ref{sec:data}) for constraining cosmological parameters in the models we study, we also want to determine which of these models best fit the data sets we study. For a fair comparison between competing cosmological models with different numbers of free parameters it is necessary to appropriately penalize for extra degrees of freedom. In this work we employ two different statistical criteria, which differently penalize for extra degrees of freedom, to compare the performance of the models. The first one we use is the Akaike information criterion (AIC) \cite{Akaike}, which is defined as \begin{equation} \label{eq:AIC} \textrm{AIC} = \chi^2_{\textrm{min}} + 2n.
\end{equation} Here $n$ is the number of independent cosmological parameters $\theta$ and $\chi^2_{\textrm{min}}\equiv \chi^2(\hat{\theta}) = -2\ln\mathcal{L}(\hat{\theta})$ is the minimum value of $\chi^2(\theta) = -2\ln\mathcal{L}(\theta)$, evaluated at the best-fit cosmological parameter values $\hat{\theta}$, where $\mathcal{L}(\theta)$ is the likelihood function. The expression in Eq.\ \eqref{eq:AIC} is valid only for a large number of data points. According to Ref.\ \cite{Burnham2002}, when the number of data points $N$ obeys $N/n<40$, the expression in Eq.\ \eqref{eq:AIC} should be replaced by \begin{equation}\label{eq:AIC_modified} \textrm{AIC}_{c} = \textrm{AIC} + \frac{2n(n+1)}{N-n-1} = \chi^2_{\textrm{min}} + \frac{2nN}{N-n-1}. \end{equation} Note that when $N$ is large compared to $n$ we have $N/(N-n-1)\simeq 1$ and then $\textrm{AIC}_c\simeq\textrm{AIC}$. This is the case for P18 data and non-CMB data but not for the BAO, BAO$^\prime$, and lensing data sets. In particular for BAO data $N = 16$, for BAO$^\prime$ data $N = 12$, for the lensing data set $N = 9$, and in all three cases $N/n<40$ so $\textrm{AIC}_c\neq \textrm{AIC}$. The second criterion we use is the deviance information criterion (DIC) \cite{DIC}, which is defined as \begin{equation} \label{eq:DIC} \textrm{DIC} = \chi^2(\hat{\theta}) + 2p_D \end{equation} where $p_D= \overline{\chi^2} - \chi^2(\hat{\theta}) $ is the penalization for those models with more degrees of freedom. Here an overbar represents the mean value of the corresponding quantity. Unlike the AIC, the DIC is computed from Monte Carlo posterior samples and uses the effective number of constrained parameters, taking into account whether or not a parameter is unconstrained by data, see Refs.\ \cite{DIC, Liddle:2007fy}. Therefore, we may say that the DIC is more reliable than the AIC.
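The two criteria can be sketched in a few lines of Python; in this sketch the DIC is computed from a chain of $\chi^2 = -2\ln\mathcal{L}$ samples, using the chain minimum as a proxy for the best-fit value (an assumption of this illustration, not of our full analysis):

```python
import numpy as np

def aic_c(chi2_min, n, N):
    # Corrected Akaike information criterion,
    # AIC_c = chi^2_min + 2 n N / (N - n - 1),
    # which reduces to AIC = chi^2_min + 2 n when N >> n.
    return chi2_min + 2.0 * n * N / (N - n - 1.0)

def dic(chi2_samples):
    # Deviance information criterion from posterior samples of
    # chi^2(theta) = -2 ln L(theta): DIC = chi^2(theta_hat) + 2 p_D,
    # with p_D = mean(chi^2) - chi^2(theta_hat).  The chain minimum is
    # used here as a proxy for the best-fit chi^2(theta_hat).
    chi2_hat = np.min(chi2_samples)
    p_D = np.mean(chi2_samples) - chi2_hat
    return chi2_hat + 2.0 * p_D
```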
We mostly use the differences in the AIC$_c$ and DIC values, which are defined as \begin{eqnarray} \label{eq:diff_AIC_BIC} &\Delta\textrm{AIC}_c \equiv &\textrm{AIC}_{c,\textrm{X}} - \textrm{AIC}_{c,\textrm{Y}}\\ &\Delta\textrm{DIC} \equiv &\textrm{DIC}_{\textrm{X}} - \textrm{DIC}_{\textrm{Y}}. \end{eqnarray} Here Y represents the tilted flat $\Lambda$CDM model and X represents the model under study. When $-2 \leq \Delta\textrm{AIC}_c,\Delta\textrm{DIC}<0$ there is {\it weak} evidence in favor of the model under study relative to the tilted flat $\Lambda$CDM model. If $-6 \leq \Delta\textrm{AIC}_c,\Delta\textrm{DIC} < -2$ there is {\it positive} evidence, whereas if $-10 \leq \Delta\textrm{AIC}_c,\Delta\textrm{DIC} < -6$ there is {\it strong} evidence for the model under study. Finally, if $\Delta\textrm{AIC}_c,\Delta\textrm{DIC} < -10$ there is {\it very strong} evidence in favor of the model under study relative to the tilted flat $\Lambda$CDM model. This scale also holds when $\Delta\textrm{AIC}_c$ and $\Delta\textrm{DIC}$ are positive, and then favors the tilted flat $\Lambda$CDM model over the model under study. We also want to determine whether some of the data sets we consider are mutually consistent (or inconsistent) in a specified cosmological model, and also whether or not the data set consistency (inconsistency) is model dependent. We utilize two different statistical estimators for this purpose. The first one makes use of DIC values and is presented in Sec.\ 2.1.7 of Ref.\ \cite{Joudaki:2016mvz}. This estimator is based on \begin{equation} \label{eq:Tension_estimator_1} \mathcal{I}(D_1,D_2) \equiv \textrm{exp}\left(-\frac{\mathcal{G}(D_1,D_2)}{2}\right), \end{equation} where \begin{equation} \mathcal{G}(D_1,D_2) = \textrm{DIC}(D_1\cup D_2) - \textrm{DIC}(D_1) - \textrm{DIC}(D_2).
\end{equation} Here $D_1$ and $D_2$ represent the two data sets under comparison, $\textrm{DIC}(D_1)$ and $\textrm{DIC}(D_2)$ are the DIC values that result when data set $D_1$ and $D_2$, respectively, are individually used to constrain cosmological parameters of the specified cosmological model, and $\textrm{DIC}(D_1\cup D_2)$ is the DIC value that results when data sets $D_1$ and $D_2$ are jointly used to constrain cosmological parameters of the specified model. The intuitive idea behind this estimator is that if two data sets are mutually consistent in a given cosmological model, which means that the cosmological parameter best-fit values determined from each data set are similar, we would have $\chi^2_{\textrm{min}}(D_1\cup D_2)\simeq \chi^2_{\textrm{min}}(D_1) + \chi^2_{\textrm{min}}(D_2)$. This would lead to negative values of $\mathcal{G}(D_1,D_2)$, see Eq.\ \eqref{eq:DIC}, which in turn would lead to $\mathcal{I}(D_1,D_2)>1$. However, if $\chi^2_{\textrm{min}}(D_1\cup D_2) > \chi^2_{\textrm{min}}(D_1) + \chi^2_{\textrm{min}}(D_2)$, and is large enough, then we would find $\mathcal{I}(D_1,D_2)<1$. Therefore $\log_{10}\mathcal{I}>0$ when the two data sets are mutually consistent and $\log_{10}\mathcal{I}<0$ when they are mutually inconsistent, in the cosmological model under study. Applying Jeffreys' scale, the level of consistency or inconsistency between the two data sets is {\it substantial} if $\lvert \log_{10}\mathcal{I} \rvert >0.5$, is {\it strong} if $\lvert \log_{10}\mathcal{I} \rvert >1$, and is {\it decisive} if $\lvert \log_{10}\mathcal{I} \rvert >2$, \cite{Joudaki:2016mvz}. We now summarize the second statistical estimator we utilize to determine whether two data sets are mutually consistent (or inconsistent) in a specified cosmological model. This is described in Refs.\ \cite{Handley:2019pqx,Handley:2019wlz,Handley:2019tkm}, see also references therein.
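In practice, once the three DIC values are in hand, the first, DIC-based estimator reduces to a one-line computation:

```python
import numpy as np

def log10_consistency(dic_joint, dic_1, dic_2):
    # log10 of I(D1, D2) = exp(-G/2), with
    # G = DIC(D1 u D2) - DIC(D1) - DIC(D2).
    # Positive values indicate mutual consistency of the two data sets.
    G = dic_joint - dic_1 - dic_2
    return -G / (2.0 * np.log(10.0))
```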
Given a data set $D$ and a given model $M$, we can express the posterior distribution for the independent model parameters $\theta$ through Bayes' theorem \begin{eqnarray}\label{eq:BayesTheorem} p(\theta|D,M)=\frac{p(D|\theta,M) p(\theta | M)}{p(D|M)}\,. \end{eqnarray} In the above expression $\mathcal{L}_D(\theta)\equiv p(D|\theta,M)$ is the likelihood function, $\pi(\theta) \equiv p(\theta | M) $ are the priors for the model parameters $\theta$, $\mathcal{Z}_D\equiv p(D|M)$ represents the evidence, and $\mathcal{P}_D(\theta)\equiv p(\theta|D,M)$ is the posterior distribution. Taking advantage of the fact that $\mathcal{P}_D(\theta)$ is a probability distribution function in $\theta$, which means that $\int \mathcal{P}_D(\theta)d\theta = 1$, we can express the evidence as \begin{equation} \label{eq:Evidence} \mathcal{Z}_D = \int \mathcal{L}_D(\theta)\pi(\theta)d\theta . \end{equation} We are interested in quantifying the tension between two independent data sets $D_1$ and $D_2$. The total likelihood from a joint analysis of both these data sets is the product of the likelihoods for each data set, $\mathcal{L}_{12}$ = $\mathcal{L}_1\mathcal{L}_2$. Consequently, $\mathcal{Z}_{12}=\int \mathcal{L}_1(\theta)\mathcal{L}_2(\theta)\pi(\theta)d\theta$. Here and in what follows we index quantities with ``1" or ``2" when they have been computed using data set $D_1$ or $D_2$ respectively, and we use index ``12" when the two data sets are jointly used. We define the Bayes ratio as \begin{equation} \label{eq:Bayes_ratio} R_D\equiv \frac{\mathcal{Z}_{12}}{\mathcal{Z}_1\mathcal{Z}_2}. \end{equation} This statistic is constructed in such a way that when $R_D\gg 1$ we can say that data sets $D_1$ and $D_2$ are consistent in the context of the particular model, while if $R_D\ll 1$ the two data sets are inconsistent. 
However, $R_D$ is strongly prior-dependent and to avoid this problem we instead use the suspiciousness $S_D$, \cite{Handley:2019pqx,Handley:2019wlz,Handley:2019tkm}, which we define in the following. To define $S_D$ we will need the Shannon information \cite{Shannon:1948zz} \begin{equation} \mathcal{I}_{S,D}(\theta) = \ln\frac{\mathcal{P}_D(\theta)}{\pi(\theta)}, \end{equation} which is a measure of the amount of information, about the parameters $\theta$, that has been gained when moving from the priors to the posterior. The average value over the posterior of the Shannon information \begin{equation} \label{eq:KL_divergence} \mathcal{D}_D = \int \mathcal{P}_D(\theta)\mathcal{I}_{S,D}(\theta)d\theta \equiv \langle \mathcal{I}_{S,D}\rangle_{\mathcal{P}_D}, \end{equation} is known as the Kullback-Leibler divergence and measures how data compresses from prior to posterior. The suspiciousness $S_D$ is defined in terms of the Bayes ratio $R_D$ and the information ratio $I_D$ \begin{equation} S_D = \frac{R_D}{I_D}, \end{equation} where \begin{equation} \ln(I_D) = \mathcal{D}_1 + \mathcal{D}_2 - \mathcal{D}_{12}. \end{equation} By considering a Gaussian analogy we can turn $\ln(S_D)$ into the tension probability $p$ of two data sets being inconsistent, \cite{Handley:2019pqx,Handley:2019wlz,Handley:2019tkm}, \begin{equation} \label{eq:Tension_estimator_2} p = \int^{\infty}_{d-2\ln(S_D)}\!\!\!\!\!\!\!\!\chi^2_d(x)dx = \int^{\infty}_{d-2\ln(S_D)}\!\!\frac{x^{d/2 -1}e^{-x/2}}{2^{d/2}\Gamma(d/2)}dx, \end{equation} where $d$ is the Bayesian model dimensionality \begin{equation} d = \Tilde{d}_1 + \Tilde{d}_2 - \Tilde{d}_{12}, \qquad \Tilde{d}/2 = \langle \mathcal{I}_{S,D}^2\rangle_{\mathcal{P}_D} - \langle \mathcal{I}_{S,D}\rangle^2_{\mathcal{P}_D} . \end{equation} If $p\lesssim 0.05$ the data sets are in moderate tension whereas if $p\lesssim 0.003$ they are in strong tension. 
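Given $\ln R_D$, $\ln I_D$, and the Bayesian model dimensionality $d$ (obtained, e.g., from nested-sampling runs), the tension probability is a one-line $\chi^2$ survival-function evaluation; a sketch, which also includes the standard conversion of $p$ into a Gaussian ``sigma value'' via the inverse complementary error function, is:

```python
import numpy as np
from scipy.stats import chi2 as chi2_dist
from scipy.special import erfcinv

def tension_probability(ln_R, ln_I, d):
    # Suspiciousness ln S = ln R - ln I; the tension probability is the
    # upper tail of a chi^2 distribution with d degrees of freedom,
    # p = Int_{d - 2 ln S}^{inf} chi^2_d(x) dx, i.e. the survival function.
    ln_S = ln_R - ln_I
    return chi2_dist.sf(d - 2.0 * ln_S, d)

def sigma_value(p):
    # Gaussian "sigma value" equivalent, sigma = sqrt(2) Erfc^{-1}(p).
    return np.sqrt(2.0) * erfcinv(p)
```

For example, $\ln S_D = 0$ with $d = 1$ gives $p \simeq 0.32$, i.e.\ about $1\sigma$; more negative $\ln S_D$ (greater suspiciousness) drives $p$ down toward the moderate- and strong-tension thresholds.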
The value of $p$ can be converted into a ``sigma value'' using the Gaussian formula \begin{equation} \label{eq:Tension_estimator_2_sigma} \sigma = \sqrt{2}\textrm{Erfc}^{-1}(p), \end{equation} where $\textrm{Erfc}^{-1}$ is the inverse complementary error function. In particular $p\lesssim 0.05$ and $p\lesssim 0.003$ correspond to 2$\sigma$ and 3$\sigma$ Gaussian standard deviations, respectively. In Sec.\ \ref{subsec:data_set_tensions} we use both these statistical estimators to examine the consistency of five pairs of data sets, namely: P18 and lensing, P18 and BAO, P18 and BAO$^\prime$, P18 and non-CMB, and P18+lensing and non-CMB, in the context of different cosmological models. We shall see in Sec.\ \ref{subsec:cosmological_parameters} that when $A_L$ is allowed to vary, error bars and two-dimensional cosmological constraint contours determined from each data set broaden (compared to the $A_L = 1$ case) and so are mutually consistent between different data sets (even if they are not mutually consistent when $A_L = 1$). We find, in Sec.\ \ref{subsec:data_set_tensions}, a similar improvement in consistency when $A_L$ is allowed to vary (compared to the $A_L = 1$ case). \section{Results} \label{sec:results} \subsection{Cosmological parameters} \label{subsec:cosmological_parameters} The cosmological parameter mean values and error bars favored by the P18, P18+lensing, and P18+lensing+non-CMB data sets are summarized in Tables \ref{tab:para_FL_nonCMB}--\ref{tab:para_TNL_nonCMB} for the tilted flat $\Lambda$CDM (+$A_L$) models, the untilted non-flat $\Lambda$CDM (+$A_L$) models, the tilted non-flat $\Lambda$CDM (+$A_L$) models with the Planck $P(q)$, and the tilted non-flat $\Lambda$CDM (+$A_L$) models with the new $P(q)$, respectively.
Likelihood distributions of the cosmological parameters of the four models with $A_L=1$ are shown in Figs.\ \ref{fig:like_P18}, \ref{fig:like_P18_lensing}, and \ref{fig:like_P18_lensing_nonCMB} for the P18, P18+lensing, and P18+lensing+non-CMB data sets, respectively. The likelihood results for these four models, but now with $A_L$ allowed to vary, are shown in Figs.\ \ref{fig:like_Alens_P18}, \ref{fig:like_Alens_P18_lensing}, and \ref{fig:like_Alens_P18_lensing_nonCMB}. Figures \ref{fig:like_FL_compar}--\ref{fig:like_TNL_Alens_compar} show, in each of the eight cosmological models we study, the cosmological parameter constraints for P18, P18+lensing, and P18+lensing+non-CMB data, to illustrate how the cosmological parameter constraints change as we include more data. These results are discussed in Secs.\ \ref{subsubsec:P18_data_constraints}--\ref{subsubsec:contour_plots}. In the third paragraph of Sec.\ \ref{subsec:data_set_tensions} we briefly discuss some cosmological parameter constraints from (P18) lensing-only data and in Sec.\ \ref{sec:discussion} we discuss whether P18, P18+lensing, P18+non-CMB, and P18+lensing+non-CMB data cosmological parameter constraints are model-independent or not. Our results may indicate tensions between some of the CMB data sets and some non-CMB low-redshift data in the context of the non-flat models. Tension between P18 data and BAO$^\prime$/BAO data in the tilted non-flat $\Lambda$CDM Planck $P(q)$ model has been noted in Refs.\ \cite{Handley:2019tkm, DiValentino:2019qzk, DiValentino:2020hov} (our updated BAO$^\prime$/BAO data differ from those used in these references, see Sec.\ \ref{sec:data}). Here we want to check whether this tension is observed for our updated BAO$^\prime$/BAO data, whether it is observed in the context of the other models we study, and how this tension is affected when we allow the $A_L$ parameter to vary.
In addition to the P18 vs.\ BAO$^\prime$/BAO comparison, we also compare P18 data and non-CMB data as well as P18+lensing and non-CMB data. These comparisons are discussed in Secs.\ \ref{sec:P18_vs_BAO}-\ref{sec:P18+lensing_vs_non-CMB}. \begin{table*} \caption{Mean and 68.3\% confidence limits of tilted flat $\Lambda\textrm{CDM}$ (+$A_L$) model parameters constrained by TT,TE,EE+lowE (P18), P18+lensing, and P18+lensing+non-CMB data sets. $H_0$ has units of km s$^{-1}$ Mpc$^{-1}$. } \begin{ruledtabular} \begin{tabular}{lccc} \\[-1mm] & \multicolumn{3}{c}{Tilted flat $\Lambda$CDM model} \\[+1mm] \cline{2-4}\\[-1mm] Parameter & P18 & P18+lensing & P18+lensing+non-CMB \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.02236 \pm 0.00015$ & $0.02237 \pm 0.00014$ & $0.02250 \pm 0.00013$ \\[+1mm] $\Omega_c h^2$ & $0.1202 \pm 0.0014$ & $0.1200 \pm 0.0012$ & $0.11838 \pm 0.00083$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.04090 \pm 0.00031$ & $1.04091 \pm 0.00031$ & $1.04110 \pm 0.00029$ \\[+1mm] $\tau$ & $0.0542 \pm 0.0079$ & $0.0543 \pm 0.0073$ & $0.0569 \pm 0.0071$ \\[+1mm] $n_s$ & $0.9649 \pm 0.0043$ & $0.9649 \pm 0.0041$ & $0.9688 \pm 0.0036$ \\[+1mm] $\ln(10^{10} A_s)$ & $3.044 \pm 0.016$ & $3.044 \pm 0.014$ & $3.046 \pm 0.014$ \\[+1mm] \hline \\[-1mm] $H_0$ & $67.28 \pm 0.61$ & $67.34 \pm 0.55$ & $68.09 \pm 0.38$ \\[+1mm] $\Omega_m$ & $0.3165 \pm 0.0084$ & $0.3155 \pm 0.0075$ & $0.3053 \pm 0.0050$ \\[+1mm] $\sigma_8$ & $0.8118 \pm 0.0074$ & $0.8112 \pm 0.0059$ & $0.8072 \pm 0.0058$ \\[+1mm] \hline \hline \\[-1mm] & \multicolumn{3}{c}{Tilted flat $\Lambda$CDM+$A_L$ model} \\[+1mm] \cline{2-4}\\[-1mm] Parameter & P18 & P18+lensing & P18+lensing+non-CMB \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.02259 \pm 0.00017$ & $0.02251 \pm 0.00017$ & $0.02258 \pm 0.00014$ \\[+1mm] $\Omega_c h^2$ & $0.1180 \pm 0.0015$ & $0.1183 \pm 0.0015$ & $0.11747 \pm 0.00091$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.04114 \pm 0.00032$ & $1.04109 \pm 0.00032$ & $1.04118 \pm 0.00029$ \\[+1mm] $\tau$ & $0.0496 
\pm 0.0082$ & $0.0487 \pm 0.0087$ & $0.0476 \pm 0.0085$ \\[+1mm] $n_s$ & $0.9710 \pm 0.0050$ & $0.9695 \pm 0.0048$ & $0.9715 \pm 0.0038$ \\[+1mm] $\ln(10^{10} A_s)$ & $3.030 \pm 0.017$ & $3.028 \pm 0.018$ & $3.023 \pm 0.018$ \\[+1mm] $A_{L}$ & $1.181 \pm 0.067$ & $1.073 \pm 0.041$ & $1.089 \pm 0.035$ \\[+1mm] \hline \\[-1mm] $H_0$ & $68.31 \pm 0.71$ & $68.14 \pm 0.69$ & $68.52 \pm 0.42$ \\[+1mm] $\Omega_m$ & $0.3029 \pm 0.0093$ & $0.3048 \pm 0.0091$ & $0.2998 \pm 0.0053$ \\[+1mm] $\sigma_8$ & $0.7997 \pm 0.0088$ & $0.7996 \pm 0.0089$ & $0.7955 \pm 0.0075$ \\[+1mm] \end{tabular} \\[+1mm] \end{ruledtabular} \label{tab:para_FL_nonCMB} \end{table*} \begin{table*} \caption{Mean and 68.3\% confidence limits of untilted non-flat $\Lambda\textrm{CDM}$ (+$A_L$) model parameters constrained by TT,TE,EE+lowE (P18), P18+lensing, and P18+lensing+non-CMB data sets. $H_0$ has units of km s$^{-1}$ Mpc$^{-1}$. } \begin{ruledtabular} \begin{tabular}{lccc} \\[-1mm] & \multicolumn{3}{c}{Untilted non-flat $\Lambda$CDM model} \\[+1mm] \cline{2-4}\\[-1mm] Parameter & P18 & P18+lensing & P18+lensing+non-CMB \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.02320 \pm 0.00015$ & $0.02307 \pm 0.00014$ & $0.02301 \pm 0.00014$ \\[+1mm] $\Omega_c h^2$ & $0.11098 \pm 0.00088$ & $0.11108 \pm 0.00086$ & $0.11176 \pm 0.00083$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.04204 \pm 0.00030$ & $1.04196 \pm 0.00029$ & $1.04189 \pm 0.00029$ \\[+1mm] $\tau$ & $0.0543 \pm 0.0091$ & $0.0580 \pm 0.0087$ & $0.0799 \pm 0.0089$ \\[+1mm] $\Omega_k$ & $-0.095 \pm 0.024$ & $-0.0322 \pm 0.0075$ & $-0.0065 \pm 0.0014$ \\[+1mm] $\ln(10^{10} A_s)$ & $3.021 \pm 0.019$ & $3.027 \pm 0.018$ & $3.075 \pm 0.018$ \\[+1mm] \hline \\[-1mm] $H_0$ & $47.1 \pm 3.2$ & $58.9 \pm 2.1$ & $67.90 \pm 0.56$ \\[+1mm] $\Omega_m$ & $0.617 \pm 0.082$ & $0.390 \pm 0.027$ & $0.2938 \pm 0.0049$ \\[+1mm] $\sigma_8$ & $0.730 \pm 0.017$ & $0.765 \pm 0.011$ & $0.7997 \pm 0.0076$ \\[+1mm] \hline \hline \\[-1mm] & \multicolumn{3}{c}{Untilted non-flat 
$\Lambda$CDM+$A_L$ model} \\[+1mm] \cline{2-4}\\[-1mm] Parameter & P18 & P18+lensing & P18+lensing+non-CMB \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.02320 \pm 0.00015$ & $0.02312 \pm 0.00014$ & $0.02310 \pm 0.00014$ \\[+1mm] $\Omega_c h^2$ & $0.11097 \pm 0.00087$ & $0.11092 \pm 0.00087$ & $0.11100 \pm 0.00085$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.04202 \pm 0.00030$ & $1.04193 \pm 0.00029$ & $1.04195 \pm 0.00030$ \\[+1mm] $\tau$ & $0.0540 \pm 0.0087$ & $0.0554 \pm 0.0097$ & $0.0566 \pm 0.0083$ \\[+1mm] $\Omega_k$ & $-0.12 \pm 0.12$ & $0.0161 \pm 0.0094$ & $-0.0060 \pm 0.0014$ \\[+1mm] $\ln(10^{10} A_s)$ & $3.020 \pm 0.018$ & $3.021 \pm 0.020$ & $3.024 \pm 0.017$ \\[+1mm] $A_{L}$ & $1.08 \pm 0.27$ & $1.44 \pm 0.15$ & $1.162 \pm 0.036$ \\[+1mm] \hline \\[-1mm] $H_0$ & $52 \pm 18$ & $85.7 \pm 8.5$ & $68.48 \pm 0.58$ \\[+1mm] $\Omega_m$ & $0.70 \pm 0.42$ & $0.190 \pm 0.043$ & $0.2874 \pm 0.0050$ \\[+1mm] $\sigma_8$ & $0.721 \pm 0.053$ & $0.7805 \pm 0.0094$ & $0.7764 \pm 0.0078$ \\[+1mm] \end{tabular} \\[+1mm] \end{ruledtabular} \label{tab:para_NL_nonCMB} \end{table*} \begin{table*} \caption{Mean and 68.3\% confidence limits of Planck-$P(q)$-based tilted non-flat $\Lambda\textrm{CDM}$ ($+A_L$) model parameters constrained by TT,TE,EE+lowE (P18), P18+lensing, and P18+lensing+non-CMB data sets. $H_0$ has units of km s$^{-1}$ Mpc$^{-1}$. 
} \begin{ruledtabular} \begin{tabular}{lccc} \\[-1mm] & \multicolumn{3}{c}{Tilted non-flat $\Lambda$CDM model [Planck $P(q)$]} \\[+1mm] \cline{2-4}\\[-1mm] Parameter & P18 & P18+lensing & P18+lensing+non-CMB \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.02260 \pm 0.00017$ & $0.02249 \pm 0.00016$ & $0.02249 \pm 0.00015$ \\[+1mm] $\Omega_c h^2$ & $0.1181 \pm 0.0015$ & $0.1186 \pm 0.0015$ & $0.1187 \pm 0.0013$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.04116 \pm 0.00032$ & $1.04107 \pm 0.00032$ & $1.04106 \pm 0.00031$ \\[+1mm] $\tau$ & $0.0483 \pm 0.0083$ & $0.0495 \pm 0.0082$ & $0.0563 \pm 0.0073$ \\[+1mm] $\Omega_k$ & $-0.043 \pm 0.017$ & $-0.0103 \pm 0.0066$ & $0.0004 \pm 0.0017$ \\[+1mm] $n_s$ & $0.9706 \pm 0.0047$ & $0.9687 \pm 0.0046$ & $0.9681 \pm 0.0044$ \\[+1mm] $\ln(10^{10} A_s)$ & $3.027 \pm 0.017$ & $3.030 \pm 0.017$ & $3.046 \pm 0.014$ \\[+1mm] \hline \\[-1mm] $H_0$ & $54.5 \pm 3.6$ & $63.7 \pm 2.3$ & $68.17 \pm 0.55$ \\[+1mm] $\Omega_m$ & $0.481 \pm 0.062$ & $0.351 \pm 0.024$ & $0.3051 \pm 0.0053$ \\[+1mm] $\sigma_8$ & $0.775 \pm 0.015$ & $0.796 \pm 0.011$ & $0.8080 \pm 0.0066$ \\[+1mm] \hline \hline \\[-1mm] & \multicolumn{3}{c}{Tilted non-flat $\Lambda$CDM+$A_L$ model [Planck $P(q)$]} \\[+1mm] \cline{2-4}\\[-1mm] Parameter & P18 & P18+lensing & P18+lensing+non-CMB \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.02258 \pm 0.00017$ & $0.02251 \pm 0.00017$ & $0.02259 \pm 0.00016$ \\[+1mm] $\Omega_c h^2$ & $0.1183 \pm 0.0015$ & $0.1183 \pm 0.0015$ & $0.1173 \pm 0.0014$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.04116 \pm 0.00033$ & $1.04110 \pm 0.00032$ & $1.04118 \pm 0.00032$ \\[+1mm] $\tau$ & $0.0478 \pm 0.0081$ & $0.0489 \pm 0.0085$ & $0.0479 \pm 0.0085$ \\[+1mm] $\Omega_k$ & $-0.130 \pm 0.095$ & $-0.005 \pm 0.027$ & $-0.0002 \pm 0.0017$ \\[+1mm] $n_s$ & $0.9704 \pm 0.0048$ & $0.9696 \pm 0.0049$ & $0.9718 \pm 0.0045$ \\[+1mm] $\ln(10^{10} A_s)$ & $3.027 \pm 0.017$ & $3.028 \pm 0.018$ & $3.024 \pm 0.017$ \\[+1mm] $A_{L}$ & $0.88 \pm 0.15$ & $1.09 \pm 0.16$ & $1.090 
\pm 0.036$ \\[+1mm] \hline \\[-1mm] $H_0$ & $45 \pm 11$ & $69 \pm 11$ & $68.49 \pm 0.56$ \\[+1mm] $\Omega_m$ & $0.80 \pm 0.35$ & $0.32 \pm 0.11$ & $0.2998 \pm 0.0055$ \\[+1mm] $\sigma_8$ & $0.733 \pm 0.045$ & $0.796 \pm 0.016$ & $0.7952 \pm 0.0085$ \\[+1mm] \end{tabular} \\[+1mm] \end{ruledtabular} \label{tab:para_NL_ns_nonCMB} \end{table*} \begin{table*} \caption{Mean and 68.3\% confidence limits of new-$P(q)$-based tilted non-flat $\Lambda\textrm{CDM}$ ($+A_L$) model parameters constrained by TT,TE,EE+lowE (P18), P18+lensing, and P18+lensing+non-CMB data sets. $H_0$ has units of km s$^{-1}$ Mpc$^{-1}$. } \begin{ruledtabular} \begin{tabular}{lccc} \\[-1mm] & \multicolumn{3}{c}{Tilted non-flat $\Lambda$CDM model [new $P(q)$]} \\[+1mm] \cline{2-4}\\[-1mm] Parameter & P18 & P18+lensing & P18+lensing+non-CMB \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.02255 \pm 0.00017$ & $0.02248 \pm 0.00016$ & $0.02248 \pm 0.00015$ \\[+1mm] $\Omega_c h^2$ & $0.1188 \pm 0.0015$ & $0.1188 \pm 0.0014$ & $0.1186 \pm 0.0013$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.04109 \pm 0.00032$ & $1.04104 \pm 0.00032$ & $1.04106 \pm 0.00031$ \\[+1mm] $\tau$ & $0.0525 \pm 0.0083$ & $0.0515 \pm 0.0081$ & $0.0566 \pm 0.0074$ \\[+1mm] $\Omega_k$ & $-0.033 \pm 0.014$ & $-0.0086 \pm 0.0057$ & $0.0003 \pm 0.0017$ \\[+1mm] $n_s$ & $0.9654 \pm 0.0045$ & $0.9661 \pm 0.0043$ & $0.9679 \pm 0.0042$ \\[+1mm] $\ln(10^{10} A_s)$ & $3.039 \pm 0.017$ & $3.035 \pm 0.016$ & $3.046 \pm 0.014$ \\[+1mm] \hline \\[-1mm] $H_0$ & $56.9 \pm 3.6$ & $64.2 \pm 2.0$ & $68.13 \pm 0.54$ \\[+1mm] $\Omega_m$ & $0.444 \pm 0.055$ & $0.345 \pm 0.021$ & $0.3054 \pm 0.0051$ \\[+1mm] $\sigma_8$ & $0.786 \pm 0.014$ & $0.799 \pm 0.010$ & $0.8079 \pm 0.0067$ \\[+1mm] \hline \hline \\[-1mm] $ $ & \multicolumn{3}{c}{Tilted non-flat $\Lambda$CDM+$A_L$ model [new $P(q)$]} \\[+1mm] \cline{2-4}\\[-1mm] Parameter & P18 & P18+lensing & P18+lensing+non-CMB \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.02257 \pm 0.00017$ & $0.02252 \pm 0.00017$ & 
$0.02260 \pm 0.00016$ \\[+1mm] $\Omega_c h^2$ & $0.1187 \pm 0.0016$ & $0.1183 \pm 0.0015$ & $0.1174 \pm 0.0013$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.04111 \pm 0.00033$ & $1.04108 \pm 0.00032$ & $1.04118 \pm 0.00032$ \\[+1mm] $\tau$ & $0.0512 \pm 0.0086$ & $0.0495 \pm 0.0093$ & $0.0486 \pm 0.0086$ \\[+1mm] $\Omega_k$ & $-0.10 \pm 0.11$ & $0.003 \pm 0.016$ & $-0.0002 \pm 0.0017$ \\[+1mm] $n_s$ & $0.9654 \pm 0.0057$ & $0.9688 \pm 0.0053$ & $0.9713 \pm 0.0042$ \\[+1mm] $\ln(10^{10} A_s)$ & $3.036 \pm 0.018$ & $3.030 \pm 0.019$ & $3.025 \pm 0.017$ \\[+1mm] $A_{L}$ & $0.94 \pm 0.20$ & $1.13 \pm 0.15$ & $1.088 \pm 0.035$ \\[+1mm] \hline \\[-1mm] $H_0$ & $51 \pm 14$ & $72.0 \pm 9.2$ & $68.48 \pm 0.56$ \\[+1mm] $\Omega_m$ & $0.70 \pm 0.43$ & $0.287 \pm 0.076$ & $0.2999 \pm 0.0055$ \\[+1mm] $\sigma_8$ & $0.752 \pm 0.052$ & $0.801 \pm 0.011$ & $0.7956 \pm 0.0082$ \\[+1mm] \end{tabular} \\[+1mm] \end{ruledtabular} \label{tab:para_TNL_nonCMB} \end{table*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_TNL_P18_fig2.pdf}} \caption{Planck 2018 TT,TE,EE+lowE (P18) data likelihood distributions of parameters of the tilted non-flat $\Lambda$CDM model with the new initial power spectrum [new $P(q)$] (red contours), of the tilted non-flat $\Lambda$CDM model with the Planck team's initial spectrum [Planck $P(q)$] (green), of the untilted non-flat $\Lambda$CDM model (grey), and of the tilted flat $\Lambda$CDM model (blue contours).
} \label{fig:like_P18} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_TNL_P18_lensing_fig3.pdf}} \caption{P18+lensing data likelihood distributions of parameters of the tilted non-flat $\Lambda$CDM model with the new initial power spectrum [new $P(q)$] (red contours), of the tilted non-flat $\Lambda$CDM model with the Planck team's initial spectrum [Planck $P(q)$] (green), of the untilted non-flat $\Lambda$CDM model (grey), and of the tilted flat $\Lambda$CDM model (blue contours). } \label{fig:like_P18_lensing} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_TNL_P18_lensing_nonCMBv2_fig4.pdf}} \caption{P18+lensing+non-CMB data likelihood distributions of parameters of the tilted non-flat $\Lambda$CDM model with the new initial power spectrum [new $P(q)$] (red contours), of the tilted non-flat $\Lambda$CDM model with the Planck team's initial spectrum [Planck $P(q)$] (green), of the untilted non-flat $\Lambda$CDM model (grey), and of the tilted flat $\Lambda$CDM model (blue contours). } \label{fig:like_P18_lensing_nonCMB} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_TNL_Alens_P18_fig5.pdf}} \caption{Planck 2018 TT,TE,EE+lowE (P18) data likelihood distributions of parameters of the tilted non-flat $\Lambda$CDM+$A_L$ model with the new initial power spectrum [new $P(q)$] (red contours), of the tilted non-flat $\Lambda$CDM+$A_L$ model with the Planck team's initial spectrum [Planck $P(q)$] (green), of the untilted non-flat $\Lambda$CDM+$A_L$ model (grey), and of the tilted flat $\Lambda$CDM+$A_L$ model (blue contours).
} \label{fig:like_Alens_P18} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_TNL_Alens_P18_lensing_fig6.pdf}} \caption{P18+lensing data likelihood distributions of parameters of the tilted non-flat $\Lambda$CDM+$A_L$ model with the new initial power spectrum [new $P(q)$] (red contours), of the tilted non-flat $\Lambda$CDM+$A_L$ model with the Planck team's initial spectrum [Planck $P(q)$] (green), of the untilted non-flat $\Lambda$CDM+$A_L$ model (grey), and of the tilted flat $\Lambda$CDM+$A_L$ model (blue contours). } \label{fig:like_Alens_P18_lensing} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_TNL_Alens_P18_lensing_nonCMBv2_fig7.pdf}} \caption{P18+lensing+non-CMB data likelihood distributions of parameters of the tilted non-flat $\Lambda$CDM+$A_L$ model with the new initial power spectrum [new $P(q)$] (red contours), of the tilted non-flat $\Lambda$CDM+$A_L$ model with the Planck team's initial spectrum [Planck $P(q)$] (green), of the untilted non-flat $\Lambda$CDM+$A_L$ model (grey), and of the tilted flat $\Lambda$CDM+$A_L$ model (blue contours). } \label{fig:like_Alens_P18_lensing_nonCMB} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_FL_compar_fig8.pdf}} \caption{Likelihood distributions of tilted flat $\Lambda$CDM model parameters constrained by P18 (gray contours), P18+lensing (red contours), P18+lensing+non-CMB (blue contours) data sets. } \label{fig:like_FL_compar} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_NL_compar_fig9.pdf}} \caption{Likelihood distributions of untilted non-flat $\Lambda$CDM model parameters constrained by P18 (gray contours), P18+lensing (red contours), P18+lensing+non-CMB (blue contours) data sets.
} \label{fig:like_NL_compar} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_NL_ns_compar_fig10.pdf}} \caption{Likelihood distributions of tilted non-flat $\Lambda$CDM model parameters with the Planck team's initial power spectrum [Planck $P(q)$] constrained by P18 (gray contours), P18+lensing (red contours), P18+lensing+non-CMB (blue contours) data sets. } \label{fig:like_NL_ns_compar} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_TNL_ns1_compar_fig11.pdf}} \caption{Likelihood distributions of tilted non-flat $\Lambda$CDM model parameters with the new initial power spectrum [new $P(q)$] constrained by P18 (gray contours), P18+lensing (red contours), P18+lensing+non-CMB (blue contours) data sets.} \label{fig:like_TNL_compar} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_FL_Alens_compar_fig12.pdf}} \caption{Likelihood distributions of tilted flat $\Lambda$CDM+$A_L$ model parameters constrained by P18 (gray contours), P18+lensing (red contours), P18+lensing+non-CMB (blue contours) data sets. } \label{fig:like_FL_Alens_compar} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_NL_Alens_compar_fig13.pdf}} \caption{Likelihood distributions of untilted non-flat $\Lambda$CDM+$A_L$ model parameters constrained by P18 (gray contours), P18+lensing (red contours), P18+lensing+non-CMB (blue contours) data sets. } \label{fig:like_NL_Alens_compar} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_NL_Alens_ns_compar_fig14.pdf}} \caption{Likelihood distributions of tilted non-flat $\Lambda$CDM+$A_L$ model parameters with the Planck team's initial power spectrum [Planck $P(q)$] constrained by P18 (gray contours), P18+lensing (red contours), P18+lensing+non-CMB (blue contours) data sets. 
} \label{fig:like_NL_ns_Alens_compar} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_TNL_Alens_ns1_compar_fig15.pdf}} \caption{Likelihood distributions of tilted non-flat $\Lambda$CDM+$A_L$ model parameters with the new initial power spectrum [new $P(q)$] constrained by P18 (gray contours), P18+lensing (red contours), P18+lensing+non-CMB (blue contours) data sets. } \label{fig:like_TNL_Alens_compar} \end{figure*} We now discuss the results obtained from the different data sets we consider. \subsubsection{P18 data cosmological constraints} \label{subsubsec:P18_data_constraints} In the case of the tilted flat $\Lambda$CDM model, with just six primary (not derived) cosmological parameters, and with $\Omega_k = 0$, from P18 data alone (see Table \ref{tab:para_FL_nonCMB} and Figs.\ \ref{fig:like_P18} and \ref{fig:like_Alens_P18}) we find the derived parameters $\Omega_m = 0.3165\pm 0.0084$ and $H_0 = 67.28\pm 0.61$ km s$^{-1}$ Mpc$^{-1}$, which are consistent with many other measurements of these quantities and which differ from the low-redshift data measurements of Ref.\ \cite{Cao:2022ugh}, $\Omega_m = 0.295\pm 0.017$ and $H_0 = 69.7\pm 1.2$ km s$^{-1}$ Mpc$^{-1}$, by $1.1\sigma$ and $1.8\sigma$, respectively. The improvement in the fit to P18 data when the $A_L$ parameter is allowed to vary, in the tilted flat $\Lambda$CDM$+A_L$ model, is positive, according to the DIC statistical criterion described in Sec.\ \ref{sec:method} (see the results in Sec.\ \ref{subsec:model_selection}). This fact is reflected in the measured (from P18 data) value of this phenomenological parameter, $A_L=1.181\pm 0.067$, which differs from the theoretically expected $A_L = 1$ by $2.7\sigma$.
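Throughout this section, the quoted tension between two independent measurements is the difference in units of the quadrature sum of the two error bars; a minimal Python sketch (the function name is ours, with values taken from the tilted flat $\Lambda$CDM P18 vs.\ Ref.\ \cite{Cao:2022ugh} comparison above):

```python
import math

def tension_sigma(x1, err1, x2, err2):
    """Gaussian tension between two independent measurements:
    |x1 - x2| in units of the quadrature sum of the 1-sigma error bars."""
    return abs(x1 - x2) / math.hypot(err1, err2)

# P18 vs. low-redshift values in the tilted flat LCDM model
print(round(tension_sigma(0.3165, 0.0084, 0.295, 0.017), 1))  # Omega_m: 1.1
print(round(tension_sigma(67.28, 0.61, 69.7, 1.2), 1))        # H_0:     1.8
```

These reproduce the $1.1\sigma$ and $1.8\sigma$ differences quoted in the text.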
The inclusion of the $A_L$ parameter, introduced to deal with the lensing anomaly, does not significantly affect the values of the other six primary parameters, leaving them close to the values found for the six parameter ($\Omega_k = 0$) tilted flat $\Lambda$CDM model with $A_L = 1$ (the largest difference is for $\Omega_c h^2$, where it is 1.1$\sigma$ of the quadrature sum of the two error bars); it does however increase the error bars somewhat, with the largest increase being 16\% for $n_s$. In addition, in the case when $A_L$ is allowed to vary, the derived parameters $\Omega_m$ and $H_0$ (as well as $\sigma_8$) error bars mildly increase, by 11\% and 16\%, resulting in (for P18 data) $\Omega_m = 0.3029\pm 0.0093$ and $H_0 = 68.31\pm 0.71$ km s$^{-1}$ Mpc$^{-1}$, which are consistent with many other measurements of these quantities, now differing only by $0.41\sigma$ and $1.0\sigma$, respectively, from those of Ref.\ \cite{Cao:2022ugh}. These derived $\Omega_m$ and $H_0$ parameter values in the $A_L$-varying case also differ from those in the $A_L = 1$ case by at most 1.1$\sigma$. The addition of the $\Omega_k$ parameter to the six primary cosmological parameters of the tilted flat $\Lambda$CDM model introduces a strong degeneracy between increasing $\Omega_m$ and decreasing $H_0$. The non-flat models also show some degeneracy between $\Omega_m$ and $\Omega_k$ as well as between $H_0$ and $\Omega_k$. These degeneracies can be seen in the corresponding panels in Fig.\ \ref{fig:like_P18}. In the tilted non-flat $\Lambda$CDM models (see Tables \ref{tab:para_NL_ns_nonCMB} and \ref{tab:para_TNL_nonCMB} and Fig.\ \ref{fig:like_P18}) we see that P18 data alone is unable to break the strong geometrical degeneracy between $\Omega_m$-$H_0$-$\Omega_k$.
For the Planck $P(q)$ and the new $P(q)$, the measured values (from P18 data) $\Omega_m = 0.481\pm 0.062$ and $0.444\pm 0.055$, as well as $H_0 = 54.5\pm 3.6$ and $56.9\pm 3.6$ km s$^{-1}$ Mpc$^{-1}$, respectively, are in conflict with most other measurements of these parameters; for example, see the low-redshift data measurements of Ref.\ \cite{Cao:2022ugh} in the paragraph before last. Note that even though the values of, and error bars on, the six primary cosmological parameters in common between the two tilted non-flat models and the tilted flat model are very similar (the largest difference is 1.1$\sigma$ for $\Omega_b h^2$ between the tilted flat and the tilted non-flat Planck $P(q)$ models, and the biggest increase, 13\%, in the error bars is also for $\Omega_b h^2$, in both tilted non-flat models relative to the tilted flat model), the additional primary cosmological parameter $\Omega_k$ in the two tilted non-flat models is relatively poorly constrained, and the derived cosmological parameters $\Omega_m$ and $H_0$ error bars in the two tilted non-flat $\Lambda$CDM models are approximately factors of 7 and 6 larger than those in the tilted flat $\Lambda$CDM model (and $\Omega_m$ and $H_0$ in these tilted non-flat models differ by between 2.3$\sigma$ and 3.5$\sigma$ from the tilted flat model values). The evidence in favor of $\Omega_k < 0$ is significant in both of the tilted non-flat $\Lambda$CDM models. For the Planck $P(q)$ case we find $\Omega_k=-0.043\pm 0.017$ while for the new $P(q)$ case $\Omega_k= -0.033\pm 0.014$, these being 2.5$\sigma$ and 2.4$\sigma$ away from flat spatial hypersurfaces, respectively. In both cases there is a clear preference for closed over open spatial geometry, and we shall see in Sec.\ \ref{subsec:model_selection} that the P18 data DIC statistical criterion strongly favors both tilted non-flat models over the tilted flat $\Lambda$CDM model.
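The quoted distances from spatial flatness are simply $|\Omega_k|$ in units of its own 1$\sigma$ error bar; a one-line sketch (the helper name is ours, values from the text above):

```python
def sigma_from_flat(omega_k, err):
    """Significance of a measured Omega_k away from flatness (Omega_k = 0),
    in units of its own 1-sigma error bar."""
    return abs(omega_k) / err

print(round(sigma_from_flat(-0.043, 0.017), 1))  # Planck P(q): 2.5
print(round(sigma_from_flat(-0.033, 0.014), 1))  # new P(q):    2.4
```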
Allowing the $A_L$ parameter to vary in the non-flat models introduces an additional strong degeneracy between $\Omega_k$, $\Omega_m$, $H_0$, and $A_L$; compare the corresponding panels in Figs.\ \ref{fig:like_P18} and \ref{fig:like_Alens_P18}. In the tilted non-flat $\Lambda$CDM+$A_L$ models with the Planck $P(q)$ and with the new $P(q)$, where the $A_L$ parameter is allowed to vary (see Tables \ref{tab:para_NL_ns_nonCMB} and \ref{tab:para_TNL_nonCMB} and Fig.\ \ref{fig:like_Alens_P18}) P18 data alone is unable to break the strong geometrical degeneracy between $\Omega_m$-$H_0$-$\Omega_k$-$A_L$. (In the tilted non-flat $\Lambda$CDM+$A_L$ new $P(q)$ model some parameters have a somewhat bimodal distribution for P18 data, see the one-dimensional posterior distributions in Fig.\ \ref{fig:like_Alens_P18}. This is not the case for the tilted non-flat $\Lambda$CDM+$A_L$ Planck $P(q)$ model.) Like in the tilted flat $\Lambda$CDM case discussed in the paragraph before last, the extra $A_L$ parameter does not significantly affect any of the (primary, not derived) cosmological parameter constraints, compared to the $A_L = 1$ case, except, because of the additional $\Omega_k$-$A_L$ degeneracy, the $\Omega_k$ constraints which are now $\Omega_k=-0.130\pm 0.095$ for the Planck $P(q)$ case and $\Omega_k= -0.10\pm 0.11$ for the new $P(q)$, being only 1.4$\sigma$ and 0.91$\sigma$, respectively, away from flat spatial hypersurfaces, with the $\Omega_k$ error bars now being factors of 6 and 8, respectively, larger than those in the $A_L = 1$ case. Also, unlike the tilted flat $\Lambda$CDM case of the paragraph before last, we measure, from the P18 data, $A_L=0.88\pm 0.15$ and $0.94\pm0.20$, which differ from the theoretically expected $A_L = 1$ by only 0.80$\sigma$ and 0.30$\sigma$. We will see in Sec. 
\ref{subsec:model_selection} that in both these models the fit to P18 data is weakly or positively better when $A_L = 1$ compared to the case when the $A_L$ parameter is allowed to vary. However, when $A_L$ varies the DIC statistical criterion weakly favors [positively disfavors] the tilted non-flat Planck $P(q)$ [new $P(q)$] model over the tilted flat $\Lambda$CDM$+A_L$ model. In addition, in both these cases when $A_L$ is allowed to vary, the $\Omega_m$ and $H_0$ (as well as $\sigma_8$) error bars significantly increase, resulting in (for P18 data) $\Omega_m = 0.80\pm 0.35$ and $0.70\pm 0.43$, as well as $H_0 = 45\pm 11$ and $51\pm 14$ km s$^{-1}$ Mpc$^{-1}$, which are consistent with many other measurements of these quantities. Again, the error bars on $\Omega_b h^2$, $\Omega_c h^2$, $\theta_{\rm MC}$, $\tau$, $n_s$, and $A_s$ are similar in the two tilted non-flat $\Lambda$CDM$+A_L$ models and the tilted flat $\Lambda$CDM$+A_L$ model; however, the $A_L$ error bars are approximately a factor of 2.5 larger in the tilted non-flat models, with the introduction of the seventh primary cosmological parameter $\Omega_k$ (which is poorly constrained) also resulting in the $\Omega_m$ error bars being a factor of $\sim$42 larger and the $H_0$ error bars being a factor of $\sim$18 larger in the tilted non-flat $\Lambda$CDM$+A_L$ models compared to the tilted flat $\Lambda$CDM$+A_L$ model. The restriction that $n_s=1$ in the untilted non-flat $\Lambda$CDM (+$A_L$) models is an unwelcome feature when fitting the P18 CMB anisotropy spectra, according to the statistical criteria outlined in Sec.\ \ref{subsec:model_selection}. Because of this we will focus less attention on the untilted non-flat $\Lambda$CDM model compared to the two tilted non-flat models.
Despite the poor performance of the untilted non-flat $\Lambda$CDM model in this case (which also affects what happens when additional data are jointly analyzed with P18 data), the model shares some features with the two tilted non-flat $\Lambda$CDM models (see Table \ref{tab:para_NL_nonCMB} and Fig.\ \ref{fig:like_P18}), namely, the evidence in favor of closed spatial geometry, now with $\Omega_k= -0.095\pm 0.024$ (4.0$\sigma$), and the presence of the aforementioned geometrical degeneracy between $\Omega_m$-$H_0$-$\Omega_k$. Also, as in the two tilted non-flat models, in the untilted non-flat case the measured values (from P18 data) $\Omega_m = 0.617\pm 0.082$ and $H_0 = 47.1\pm 3.2$ km s$^{-1}$ Mpc$^{-1}$ are in conflict with most other measurements of these parameters. In the untilted non-flat $\Lambda$CDM+$A_L$ model where the $A_L$ parameter is allowed to vary (see Table \ref{tab:para_NL_nonCMB} and Fig.\ \ref{fig:like_Alens_P18}) P18 data alone is again unable to break the larger geometrical degeneracy between $\Omega_m$-$H_0$-$\Omega_k$-$A_L$. Like in the tilted flat and non-flat $\Lambda$CDM cases discussed earlier, the extra $A_L$ parameter does not significantly affect any of the (primary, not derived) cosmological parameter constraints in the untilted non-flat model, compared to the $A_L = 1$ case, except, because of the additional $\Omega_k$-$A_L$ degeneracy, for the $\Omega_k$ constraint which is now $\Omega_k=-0.12\pm 0.12$ and only 1.0$\sigma$ away from flat spatial hypersurfaces. Also, unlike the tilted flat $\Lambda$CDM case, but like the tilted non-flat cases of the paragraph before last, we measure, from the P18 data, $A_L=1.08\pm 0.27$ which differs from the theoretically expected value of $A_L = 1$ by only 0.30$\sigma$. We will see in Sec.\ \ref{subsec:model_selection} that in this model the fit to P18 data is slightly better when $A_L = 1$ compared to the case when the $A_L$ parameter is allowed to vary. 
Similar to the two tilted non-flat models of the paragraph before last, when $A_L$ is allowed to vary in the untilted non-flat model, the $\Omega_m$ and $H_0$ (as well as $\sigma_8$) error bars significantly increase, resulting in (for P18 data) $\Omega_m = 0.70\pm 0.42$ and $H_0 = 52\pm 18$ km s$^{-1}$ Mpc$^{-1}$, which are consistent with most other measurements of these quantities. In Fig.\ \ref{fig:like_P18} we provide the 2$\sigma$ contour plots for all four of the $A_L = 1$ models. The contours for the untilted non-flat $\Lambda$CDM model overlap minimally or even do not overlap at all with those corresponding to the other models. This is likely due to the lack of the degree of freedom encapsulated in $n_s$ in the untilted non-flat model, which greatly hinders the fit to the CMB anisotropy power spectra and causes the other parameters to shift from the ranges preferred in the context of the other three models. As for the other three cosmological models, there is a significant overlap of contours, except when $\Omega_m$ or $H_0$ (or, less so, $\sigma_8$) is involved, which can even lead to the contours not overlapping at all. This is presumably related to the geometrical degeneracy previously mentioned. The corresponding plots for the four models, now allowing $A_L$ to vary, are in Fig.\ \ref{fig:like_Alens_P18}. Allowing $A_L$ to vary broadens the contours, and for some parameters there are two disconnected 1$\sigma$ regions. While the untilted non-flat model contours still do not overlap in many cases with those of the other three models, in the other three models the contours overlap even when $\Omega_m$ or $H_0$ or $\sigma_8$ is involved.
\subsubsection{P18+lensing data cosmological constraints} \label{subsubsec:P18_lensing_data_constraints} Constraints on primary parameters derived from joint analyses of P18 and lensing data are quite similar to those derived from P18 data alone (see Tables \ref{tab:para_FL_nonCMB}-\ref{tab:para_TNL_nonCMB} and Figs.\ \ref{fig:like_P18_lensing} and \ref{fig:like_Alens_P18_lensing}), except for the $\Omega_k$ and $A_L$ constraints. On the other hand, constraints on the derived parameters $\Omega_m$ and $H_0$ are, in most non-flat cases, greatly affected by lensing data. In this subsubsection we discuss parameter constraints from jointly analyzed P18 and lensing data and compare these to the P18 data alone constraints. Ideally one would like to establish that cosmological parameter constraints derived from P18 data and from lensing data are mutually consistent, prior to using P18+lensing data in joint analyses. While it is not straightforward to derive (P18) lensing data alone cosmological parameter constraints for our wide flat priors of Table \ref{tab:Priors}, we shall see, in Sec.\ \ref{subsec:data_set_tensions} (where we do briefly discuss some of these cosmological constraints), that P18 data and lensing data are not significantly mutually inconsistent. This is also consistent with the results we discuss in this subsubsection. Comparing the six-parameter tilted flat $\Lambda$CDM primary cosmological parameter constraints for P18 data and P18+lensing data, shown in the upper half of Table \ref{tab:para_FL_nonCMB}, we see that there are no significant changes in parameter values (the largest change is that $\Omega_c h^2$ is 0.11$\sigma$ smaller in the P18+lensing case) with all but the $\theta_{\rm MC}$ error bars being smaller in the P18+lensing case (the $\theta_{\rm MC}$ error bar is unchanged and the largest decrease is 14\% for the $\Omega_c h^2$ error bar).
For the derived parameters, the largest change is the 0.089$\sigma$ decrease in $\Omega_m$ relative to the P18 data value, and the 20\% smaller $\sigma_8$ error bar. For P18+lensing data we find $\Omega_m = 0.3155\pm 0.0075$ and $H_0 = 67.34\pm 0.55$ km s$^{-1}$ Mpc$^{-1}$ which are consistent with many other measurements of these quantities and 1.1$\sigma$ larger and 1.8$\sigma$ lower than the low-redshift data measurements of Ref.\ \cite{Cao:2022ugh}. Comparing the seven-parameter tilted flat $\Lambda$CDM$+A_L$ primary cosmological parameter constraints for P18 data and P18+lensing data, shown in the lower half of Table \ref{tab:para_FL_nonCMB}, we see more significant changes in the parameter values (the largest change is that $A_L$ is 1.4$\sigma$ smaller in the P18+lensing case, with the next largest being $\Omega_b h^2$ which is 0.33$\sigma$ smaller) with some of the error bars being larger in the P18+lensing case (the largest increase is 6\% for the $\tau$ and $\ln(10^{10}A_s)$ error bars) and some of the error bars being smaller (the largest decrease is 39\% for $A_L$). The reason the error bars of $\tau$ and $\ln (10^{10} A_s)$ increase, contrary to the common expectation that the error bars of the parameters will become smaller as more data is added, appears to be that the degeneracy between parameters is only partially broken by the lensing data. Interestingly, these characteristics are common to all other $A_L$-varying models (see Tables \ref{tab:para_NL_nonCMB}-\ref{tab:para_TNL_nonCMB}). For the derived parameters, the largest change is the 0.17$\sigma$ decrease in $H_0$ relative to the P18 data value, and the 3\% smaller $H_0$ error bar. 
For P18+lensing data in the varying $A_L$ case we measure $\Omega_m = 0.3048\pm 0.0091$ and $H_0 = 68.14\pm 0.69$ km s$^{-1}$ Mpc$^{-1}$ which are consistent with many other measurements of these quantities and 0.51$\sigma$ larger and 1.1$\sigma$ lower than the low-redshift data measurements of Ref.\ \cite{Cao:2022ugh}. The improvement in the fit to P18+lensing data when the $A_L$ parameter is allowed to vary, in the tilted flat $\Lambda$CDM$+A_L$ model, is only weak, as discussed in Sec.\ \ref{subsec:model_selection}. We now find $A_L = 1.073\pm 0.041$ which is 1.8$\sigma$ away from the theoretically expected $A_L = 1$. While there is still a deviation from the predicted value, the tendency of the lensing data is to push $A_L$ closer to 1, resulting in a smaller deviation than the 2.7$\sigma$ one found for $A_L = 1.181\pm 0.067$ from P18 data in the tilted flat $\Lambda$CDM$+A_L$ model. The inclusion of the $A_L$ parameter does not significantly affect the values of the other six primary parameters, leaving them close to the values found for the six-parameter tilted flat $\Lambda$CDM model with $A_L = 1$ (the largest difference is for $\Omega_c h^2$, where it is 0.88$\sigma$ of the quadrature sum of the two error bars); it does however increase the error bars, more than what happens in the P18 data alone case discussed in Sec. \ref{subsubsec:P18_data_constraints}, with the largest increase being 29\% for $\ln(10^{10}A_s)$. In addition, in the case when $A_L$ is allowed to vary, the derived parameters change somewhat and their error bars increase, with the largest changes associated with $\sigma_8$, where it is now 1.1$\sigma$ smaller with a 51\% larger error bar. In the six-parameter untilted non-flat $\Lambda$CDM model, including lensing data in the mix results in a reduction in the size of the cosmological parameter error bars relative to those from P18 data (see Table \ref{tab:para_NL_nonCMB} and Fig. \ref{fig:like_P18_lensing}). 
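To make the comparison conventions explicit: when two determinations $p_1 \pm \sigma_1$ and $p_2 \pm \sigma_2$ from different analyses are compared (as for the $\Omega_c h^2$ comparison above), we assume the quoted significance is the difference in units of the quadrature sum of the two error bars,
\begin{equation*}
    N_\sigma = \frac{|p_1 - p_2|}{\sqrt{\sigma_1^2 + \sigma_2^2}},
\end{equation*}
while a shift quoted relative to a single determination (e.g.\ ``0.11$\sigma$ smaller'') uses only that determination's error bar in the denominator.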
The most affected parameters are the primary parameter $\Omega_k$, whose error bars decrease by 69\%, and the derived parameters $H_0$, $\Omega_m$ and $\sigma_8$, for which we observe a shrinkage of the error bars by 34\%, 67\%, and 35\%, respectively. As happens in the tilted flat $\Lambda$CDM model, here also there are no significant changes in the values of the primary parameters, with the exception of the curvature parameter $\Omega_k$. This is not true for two of the derived parameters, $H_0$ and $\Omega_m$, which together with the curvature parameter are involved in the $\Omega_m$-$H_0$-$\Omega_k$ geometrical degeneracy. From P18+lensing data we find $\Omega_k=-0.095\pm 0.024$, $H_0=47.1\pm 3.2$ km s$^{-1}$ Mpc$^{-1}$, and $\Omega_m=0.390\pm 0.027 $. These values differ by 2.5$\sigma$, 3.1$\sigma$, and 2.6$\sigma$, respectively, from the corresponding values obtained in the P18 data alone analysis. From the results obtained for the untilted non-flat $\Lambda$CDM+$A_L$ model (see Table \ref{tab:para_NL_nonCMB} and Fig. \ref{fig:like_Alens_P18_lensing}), we observe significant changes in the values of the primary parameters $\Omega_k$ and $A_L$, as well as in the derived parameters $H_0$ and $\Omega_m$. For the P18+lensing data set we get $\Omega_k = 0.0161\pm 0.0094$ (1.7$\sigma$ away from flat) and $A_L = 1.44\pm 0.15$ (2.9$\sigma$ away from $A_L = 1$). These values differ by 1.1$\sigma$ and 1.2$\sigma$, respectively, from the corresponding values obtained in the P18 data alone analysis. For the derived parameters, from P18+lensing data we find $H_0=85.7\pm 8.5$ km s$^{-1}$ Mpc$^{-1}$ and $\Omega_m=0.190\pm 0.043$, which differ by 1.7$\sigma$ and 1.2$\sigma$ from the corresponding P18 data alone values. 
Joint analyses of the P18 and lensing data in the tilted non-flat models result in constraints that differ more from those derived using just P18 data compared to what happens in the tilted flat model case (see Tables \ref{tab:para_NL_ns_nonCMB} and \ref{tab:para_TNL_nonCMB} and Fig.\ \ref{fig:like_P18_lensing}). This is because lensing data partially break the $\Omega_m$-$H_0$-$\Omega_k$ geometrical degeneracy present in the P18 data alone constraints for the tilted non-flat models (compare the corresponding panels in Figs.\ \ref{fig:like_P18} and \ref{fig:like_P18_lensing}). Comparing the seven-parameter tilted non-flat $\Lambda$CDM Planck $P(q)$ and new $P(q)$ primary cosmological parameter constraints for P18 data and P18+lensing data, shown in the upper halves of Tables \ref{tab:para_NL_ns_nonCMB} and \ref{tab:para_TNL_nonCMB}, we see that aside from $\Omega_k$ (discussed next) there are no significant changes in parameter values [the largest change is that $\Omega_b h^2$ is 0.47$\sigma$ (0.30$\sigma$) smaller in the P18+lensing case, for the Planck (new) $P(q)$] with some of the error bars being smaller in the P18+lensing case [leaving aside $\Omega_k$ (discussed next) the largest decrease is 6\% (7\%) for the $\Omega_b h^2$ ($\Omega_c h^2$) error bar, for the Planck (new) $P(q)$]. On the other hand, $\Omega_k$ changes significantly when lensing data are added to the mix, becoming 1.8$\sigma$ (1.6$\sigma$) larger, and closer to flat for the Planck (new) $P(q)$, with 61\% (59\%) smaller error bars, still favoring closed geometry over flat but only by 1.6$\sigma$ (1.5$\sigma$), respectively. For the derived parameters, the largest change is the 2.2$\sigma$ (1.8$\sigma$) increase in $H_0$ relative to the P18 data value for the Planck (new) $P(q)$, with 61\% (62\%) smaller error bars for $\Omega_m$. 
For P18+lensing data we find $\Omega_m = 0.351\pm 0.024$ ($0.345\pm 0.021$) and $H_0 = 63.7\pm 2.3$ ($64.2\pm 2.0$) km s$^{-1}$ Mpc$^{-1}$ for the Planck (new) $P(q)$, which are consistent with many other measurements of these quantities and 1.9$\sigma$ (1.9$\sigma$) larger and 2.3$\sigma$ (2.4$\sigma$) lower, respectively, than the low-redshift data measurements of Ref.\ \cite{Cao:2022ugh}. Comparing the eight-parameter tilted non-flat $\Lambda$CDM$+A_L$ Planck (new) $P(q)$ primary cosmological parameter constraints for P18 data and P18+lensing data, shown in the lower half of Table \ref{tab:para_NL_ns_nonCMB} (\ref{tab:para_TNL_nonCMB}), we see that there are smaller differences compared to the tilted flat $\Lambda$CDM$+A_L$ case. For the Planck $P(q)$ case we mostly find less significant changes (the largest changes are that $\Omega_k$ and $A_L$ are 1.3$\sigma$ and 0.95$\sigma$ larger in the P18+lensing case, with the next largest being $\Omega_b h^2$ which is 0.29$\sigma$ smaller) with some of the error bars being larger in the P18+lensing case (the largest increase is 7\% for the $A_L$ error bar, and this is the only model where the $A_L$ error bar is larger for P18+lensing data than for P18 data) and some of the error bars being smaller (the largest decrease is 72\% for $\Omega_k$). In the new $P(q)$ case we find roughly half the parameters change more significantly (the largest changes again are that $\Omega_k$ and $A_L$ are 0.93$\sigma$ and 0.76$\sigma$ larger in the P18+lensing case, with the next largest being $n_s$ which is 0.44$\sigma$ larger) with some of the error bars being larger in the P18+lensing case (the largest increase is 8\% for the $\tau$ error bar) and some of the error bars being smaller (the largest decrease is 85\% for $\Omega_k$). 
From the P18+lensing analyses, we measure $\Omega_k=-0.005\pm 0.027$ for the Planck $P(q)$ case and $\Omega_k= 0.003\pm 0.016$ for the new $P(q)$, both being only 0.19$\sigma$ away from flat spatial hypersurfaces, very different from the P18 data alone results. For the derived parameters, the largest change is the 1.5$\sigma$ (1.3$\sigma$) increase in $H_0$ relative to the P18 data value for the Planck (new) $P(q)$, with 69\% (82\%) smaller error bars for $\Omega_m$. For P18+lensing data we find $\Omega_m = 0.32\pm 0.11$ ($0.287\pm 0.076$) and $H_0 = 69\pm 11$ ($72.0\pm 9.2$) km s$^{-1}$ Mpc$^{-1}$ for the Planck (new) $P(q)$, which are consistent with many other measurements of these quantities and 0.22$\sigma$ larger (0.10$\sigma$ smaller) and 0.063$\sigma$ lower (0.25$\sigma$ higher), respectively, than the low-redshift data measurements of Ref.\ \cite{Cao:2022ugh}. (Note that the P18+lensing data Planck $P(q)$ $H_0$ error bar is unchanged, $\pm 11$ km s$^{-1}$ Mpc$^{-1}$, from the P18 data value, and this is the only model where this happens.) We will see in Sec.\ \ref{subsec:model_selection} that in both tilted non-flat $\Lambda$CDM$+A_L$ models the fit to P18+lensing data is weakly better when $A_L = 1$ compared to the case when the $A_L$ parameter is allowed to vary; this differs from what happens in the tilted flat $\Lambda$CDM$+A_L$ model. Also, unlike the tilted flat $\Lambda$CDM$+A_L$ P18+lensing case, we measure, from P18+lensing data, $A_L=1.089\pm 0.16$ and $1.13\pm0.15$, for the Planck $P(q)$ and the new $P(q)$, respectively, which differ from the theoretically expected $A_L = 1$ by only 0.56$\sigma$ and 0.87$\sigma$. 
The inclusion of the $A_L$ parameter does not significantly affect the values of the other seven primary parameters, leaving them close to the values found for the seven-parameter tilted non-flat $\Lambda$CDM models with $A_L = 1$ [the largest difference is for $\Omega_k$, where it is 0.19$\sigma$ (0.68$\sigma$) for the Planck (new) $P(q)$]; it does however increase the error bars, but less than what happens in the P18 data alone case discussed in Sec.\ \ref{subsubsec:P18_data_constraints}, with the largest factor being 4 (3) for $\Omega_k$ for the Planck (new) $P(q)$. In addition, in the case when $A_L$ is allowed to vary, the derived parameters change somewhat and their error bars increase, with the largest changes associated with $H_0$, where it is now 0.47$\sigma$ (0.83$\sigma$) larger for the Planck (new) $P(q)$ with a factor of 5 (5) larger error bar. From the discussion above in this subsubsection, the fact that the cosmological constraint contours displayed in Fig.\ \ref{fig:like_P18_lensing} for the three tilted models overlap should not come as a surprise. Unlike in the previous P18 data alone case, the P18+lensing data contours that involve $\Omega_m$, $H_0$, or $\Omega_k$ now overlap for the tilted models, indicating that the geometrical degeneracy is, at least, partially broken. Figure \ref{fig:like_Alens_P18_lensing} shows the results when the $A_L$ parameter is included in the analysis. While the overlap already found in the P18 data alone analysis (see Fig.\ \ref{fig:like_Alens_P18}) remains, the bimodal 1$\sigma$ regions of that plot have now disappeared. \subsubsection{P18+lensing+non-CMB data cosmological constraints} \label{subsubsec:P18_lensing_nonCMB_data_constraints} In this subsubsection we comment on the results obtained from a joint analysis of the P18+lensing+non-CMB data set and how the cosmological constraints change when compared to those obtained using P18+lensing data. 
As outlined in Sec.\ \ref{sec:data}, the non-CMB data we use here are comprised of BAO, $f\sigma_8$, SNIa, and $H(z)$ data, all of which provide useful information on the late-time Universe. Ideally one would like to establish that cosmological parameter constraints derived from P18+lensing data and from non-CMB data are mutually consistent, prior to using P18+lensing+non-CMB data in joint analyses. Given that P18 data dominate the P18+lensing data compilation, it is instructive to also study whether P18 data cosmological constraints are consistent with those from non-CMB data. We shall see in Sec.\ \ref{sec:P18_vs_BAO} that, in some of the models we study here, cosmological constraints from BAO$^\prime$ and BAO data, the dominant part of the non-CMB data compilation, are somewhat inconsistent with those derived using P18 data. This is also consistent with the results we discuss in this subsubsection, as well as with the results presented in Sec.\ \ref{sec:P18_vs_non-CMB}, where we compare the cosmological parameter values obtained using P18 data and using non-CMB data. In Sec.\ \ref{sec:P18+lensing_vs_non-CMB} we compare P18+lensing data cosmological constraints and non-CMB data cosmological constraints, and find similar tensions. In addition, in Sec.\ \ref{subsec:data_set_tensions}, we study tensions between some of the CMB data sets and some of the low-redshift data sets, including the case of P18+lensing data vs.\ non-CMB data, by using the two statistical estimators presented in Sec. \ref{sec:method}. 
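For completeness, the growth-rate measurements enter through the standard combination (we assume the usual convention)
\begin{equation*}
    f\sigma_8(z) \equiv f(z)\,\sigma_8(z), \qquad f(z) = \frac{d\ln\delta(a)}{d\ln a},
\end{equation*}
where $\delta(a)$ is the linear matter density contrast and $\sigma_8(z)$ is the rms linear matter fluctuation amplitude in spheres of radius $8 h^{-1}$ Mpc at the redshift $z$ of the measurement.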
Comparing the six-parameter tilted flat $\Lambda$CDM primary cosmological parameter constraints for P18+lensing data and P18+lensing+non-CMB data, shown in the upper half of Table \ref{tab:para_FL_nonCMB}, we see that there are no significant changes in parameter values (the largest change is that $\Omega_c h^2$ is 1.1$\sigma$ smaller in the P18+lensing+non-CMB case) with all but the $\ln(10^{10}A_s)$ error bars being smaller in the P18+lensing+non-CMB case (the $\ln(10^{10}A_s)$ error bar is unchanged and the largest decrease is 31\% for the $\Omega_c h^2$ error bar). For the derived parameters, the largest changes are the 1.1$\sigma$ decrease in $\Omega_m$ and the 1.1$\sigma$ increase in $H_0$ relative to the P18+lensing data values, and the 33\% (31\%) smaller $\Omega_m$ ($H_0$) error bar. For P18+lensing+non-CMB data we find $\Omega_m = 0.3053\pm 0.0050$ and $H_0 = 68.09\pm 0.38$ km s$^{-1}$ Mpc$^{-1}$ which are consistent with many other measurements of these quantities and 0.58$\sigma$ larger and 1.3$\sigma$ lower than the low-redshift data measurements of Ref.\ \cite{Cao:2022ugh}. Comparing the seven-parameter tilted flat $\Lambda$CDM$+A_L$ primary cosmological parameter constraints for P18+lensing data and P18+lensing+non-CMB data, shown in the lower half of Table \ref{tab:para_FL_nonCMB}, we see smaller changes in the parameter values (the largest change is that $\Omega_c h^2$ is 0.47$\sigma$ smaller in the P18+lensing+non-CMB case, with the next largest being $n_s$ which is 0.33$\sigma$ larger) with all but the $\ln(10^{10}A_s)$ error bars being smaller in the P18+lensing+non-CMB case (the $\ln(10^{10}A_s)$ error bar is unchanged and the largest decrease is 39\% for the $\Omega_c h^2$ error bar). For the derived parameters, the largest changes are the 0.47$\sigma$ increase in $H_0$ and the 0.47$\sigma$ decrease in $\Omega_m$ relative to the P18+lensing data values, and the 42\% smaller $\Omega_m$ error bar. 
For P18+lensing+non-CMB data in the varying $A_L$ case we measure $\Omega_m = 0.2998\pm 0.0053$ and $H_0 = 68.52\pm 0.42$ km s$^{-1}$ Mpc$^{-1}$ which are consistent with many other measurements of these quantities and 0.27$\sigma$ larger and 0.93$\sigma$ lower than the low-redshift data measurements of Ref.\ \cite{Cao:2022ugh}. The improvement in the fit to P18+lensing+non-CMB data when the $A_L$ parameter is allowed to vary, in the tilted flat $\Lambda$CDM$+A_L$ model, is positive, as discussed in Sec.\ \ref{subsec:model_selection}. We now find $A_L = 1.089\pm 0.035$, which is 2.5$\sigma$ away from the theoretically expected $A_L = 1$, larger than the 1.8$\sigma$ deviation for the P18+lensing case of Sec.\ \ref{subsubsec:P18_lensing_data_constraints}; the tendency of the non-CMB data is to push $A_L$ farther away from 1. The inclusion of the $A_L$ parameter does not significantly affect the values of the other six primary parameters, leaving them close to the values found for the six-parameter tilted flat $\Lambda$CDM model with $A_L = 1$ (the largest difference is for $\ln(10^{10}A_s)$, where it is 1.0$\sigma$ lower); it does however increase the error bars, comparable to what happens in the P18+lensing data case discussed in Sec.\ \ref{subsubsec:P18_lensing_data_constraints}, with the largest increase being 29\% for $\ln(10^{10}A_s)$. In addition, in the case when $A_L$ is allowed to vary, the derived parameters change somewhat and their error bars increase, with the largest changes associated with $\sigma_8$, where it is now 1.2$\sigma$ smaller with a 29\% larger error bar. 
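For orientation, the model-selection statements here and in Sec.\ \ref{subsec:model_selection} are based on information criteria of the conventional form (we assume the definitions of Sec.\ \ref{sec:method})
\begin{align*}
    \mathrm{AIC}_c &= \chi^2_{\rm min} + 2n + \frac{2n(n+1)}{N-n-1}, \\
    \mathrm{DIC} &= \chi^2(\bar{\theta}) + 2p_D, \qquad p_D = \overline{\chi^2} - \chi^2(\bar{\theta}),
\end{align*}
where $n$ is the number of free parameters, $N$ is the number of data points, $\bar{\theta}$ is the posterior mean of the parameters, and the overbar denotes a posterior average; negative $\Delta\mathrm{AIC}_c$ and $\Delta\mathrm{DIC}$ values favor the $A_L$-varying model. As a check, for the tilted flat $\Lambda$CDM fit to P18 data in Table \ref{tab:para_FL_BAO}, $\chi^2_{\rm min} = 2765.80$ with $n = 27$ (and a negligible small-sample correction, given the large P18 $N$) gives $\mathrm{AIC}_c \simeq 2765.80 + 54 = 2819.80$, matching the tabulated value.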
Adding non-CMB data to P18+lensing data strongly suppresses P18+lensing data support for non-zero spatial curvature (see Tables \ref{tab:para_NL_ns_nonCMB} and \ref{tab:para_TNL_nonCMB}), except in the case of the untilted non-flat $\Lambda$CDM model, for which $\Omega_k= -0.0065\pm 0.0014$ (4.6$\sigma$ away from flat) and also for the untilted non-flat $\Lambda$CDM$+A_L$ model where $\Omega_k = -0.0060\pm 0.0014$ (4.3$\sigma$ away from flat) (see Table \ref{tab:para_NL_nonCMB}). Comparing the seven-parameter tilted non-flat $\Lambda$CDM Planck $P(q)$ and new $P(q)$ primary cosmological parameter constraints for P18+lensing data and P18+lensing+non-CMB data, shown in the upper halves of Tables \ref{tab:para_NL_ns_nonCMB} and \ref{tab:para_TNL_nonCMB}, we see that aside from $\Omega_k$ (discussed next) there are no significant changes in parameter values [the largest change is that $\ln(10^{10}A_s)$ is 0.73$\sigma$ (0.52$\sigma$) larger in the P18+lensing+non-CMB case, for the Planck (new) $P(q)$] with all of the error bars being smaller in the P18+lensing+non-CMB case [leaving aside $\Omega_k$ (discussed next) the largest decrease is 18\% (13\%) for the $\ln(10^{10}A_s)$ error bar, for the Planck (new) $P(q)$]. On the other hand, $\Omega_k$ changes significantly when non-CMB data are added to the mix, becoming 1.6$\sigma$ (1.5$\sigma$) larger, and closer to flat for the Planck (new) $P(q)$, with 74\% (70\%) smaller error bars, now favoring open geometry over flat but only by 0.24$\sigma$ (0.18$\sigma$), respectively. For the derived parameters, the largest changes are the 1.9$\sigma$ (1.9$\sigma$) increase in $H_0$ and the 1.9$\sigma$ (1.8$\sigma$) decrease in $\Omega_m$ relative to the P18+lensing data values for the Planck (new) $P(q)$, with 78\% (76\%) smaller error bars for $\Omega_m$ and 76\% (73\%) smaller error bars for $H_0$. 
For P18+lensing+non-CMB data we find $\Omega_m = 0.3051\pm 0.0053$ ($0.3054\pm 0.0051$) and $H_0 = 68.17\pm 0.55$ ($68.13\pm 0.54$) km s$^{-1}$ Mpc$^{-1}$ for the Planck (new) $P(q)$, which are consistent with many other measurements of these quantities and 0.57$\sigma$ (0.59$\sigma$) larger and 1.2$\sigma$ (1.2$\sigma$) lower, respectively, than the low-redshift data measurements of Ref.\ \cite{Cao:2022ugh}. Comparing the eight-parameter tilted non-flat $\Lambda$CDM$+A_L$ Planck (new) $P(q)$ primary cosmological parameter constraints for P18+lensing data and P18+lensing+non-CMB data, shown in the lower half of Table \ref{tab:para_NL_ns_nonCMB} (\ref{tab:para_TNL_nonCMB}), we see differences approximately comparable to those in the tilted flat $\Lambda$CDM$+A_L$ case. For the Planck (new) $P(q)$ case the largest change is that $\Omega_c h^2$ is 0.49$\sigma$ (0.45$\sigma$) smaller in the P18+lensing+non-CMB case, with the next largest being $\Omega_b h^2$ ($n_s$) which is 0.34$\sigma$ (0.37$\sigma$) smaller, with most of the error bars being smaller in the P18+lensing+non-CMB case [the largest decreases are 94\% (89\%) for $\Omega_k$ and 77\% (77\%) for $A_L$]. From the P18+lensing+non-CMB analyses, we measure $\Omega_k=-0.0002\pm 0.0017$ for both $P(q)$ cases, both being only 0.12$\sigma$ away from flat spatial hypersurfaces, very different from the P18 data alone results. For the derived parameters, the largest change is the 0.18$\sigma$ (0.39$\sigma$) decrease in $\Omega_m$ ($\sigma_8$) relative to the P18+lensing data value for the Planck (new) $P(q)$, with 95\% (93\%) smaller error bars for $\Omega_m$ and 95\% (94\%) smaller error bars for $H_0$. 
For P18+lensing+non-CMB data we find $\Omega_m = 0.2998\pm 0.0055$ ($0.2999\pm 0.0055$) and $H_0 = 68.49\pm 0.56$ ($68.48\pm 0.56$) km s$^{-1}$ Mpc$^{-1}$ for the Planck (new) $P(q)$, which are consistent with many other measurements of these quantities and 0.27$\sigma$ (0.27$\sigma$) larger and 0.91$\sigma$ (0.92$\sigma$) lower, respectively, than the low-redshift data measurements of Ref.\ \cite{Cao:2022ugh}. We will see in Sec.\ \ref{subsec:model_selection} that in both tilted non-flat $\Lambda$CDM$+A_L$ models the fit to P18+lensing+non-CMB data is positively better when the $A_L$ parameter is allowed to vary compared to the $A_L = 1$ case; this is similar to what happens in the tilted flat $\Lambda$CDM$+A_L$ model. Also, like the tilted flat $\Lambda$CDM$+A_L$ P18+lensing+non-CMB case, we measure, from P18+lensing+non-CMB data, $A_L=1.090\pm 0.036$ and $1.088\pm0.035$, for the Planck $P(q)$ and the new $P(q)$, respectively, which both differ from the theoretically expected $A_L = 1$ by 2.5$\sigma$. The inclusion of the $A_L$ parameter does not significantly affect the values of the other seven primary parameters, leaving them close to the values found for the seven-parameter tilted non-flat $\Lambda$CDM models with $A_L = 1$ [the largest difference is for $\ln(10^{10}A_s)$, where it is 1.0$\sigma$ (0.95$\sigma$) smaller for the Planck (new) $P(q)$]; it does however increase the error bars, but less than what happens in the P18 data alone and P18+lensing data cases discussed in Secs.\ \ref{subsubsec:P18_data_constraints} and \ref{subsubsec:P18_lensing_data_constraints}, with the largest increase being 21\% for $\ln(10^{10}A_s)$ for both $P(q)$ cases. In addition, in the case when $A_L$ is allowed to vary, the derived parameters change somewhat and their error bars increase, with the largest changes associated with $\sigma_8$, where it is now 1.3$\sigma$ (1.2$\sigma$) smaller for the Planck (new) $P(q)$ with a 21\% (22\%) larger error bar. 
When non-CMB data (that include $f \sigma_8$ data) are added to the mix and the $A_L$ parameter is allowed to vary, $A_L > 1$ is favored and there is a decrease in the value of $\sigma_8$ compared to the $A_L = 1$ case, which helps to alleviate the corresponding tension. Since $A_L>1$ helps to resolve the lensing anomaly, there is less or no need to increase the value of $\Omega_m$ to predict more lensing. A lower value of $\Omega_m$ means less structure formation in the past, consequently slightly alleviating the $\sigma_8$ tension. While $\Omega_k$ plays a role at both early and late times, the $A_L$ parameter only has an impact on CMB data. Since, as we shall see in Sec. \ref{sec:P18_vs_non-CMB}, non-CMB data prefer a flatter geometry than do P18 data, it is possible to understand why the evidence in favor of $\Omega_k\neq 0$ subsides, while the evidence for $A_L>1$ does not, when non-CMB data is added to the mix. A fairly large negative value of $\Omega_k$ is required to resolve the P18 data lensing anomaly, thus improving upon the performance shown by the tilted flat $\Lambda$CDM model; however, such a large value of the curvature parameter is not supported by lensing data or by non-CMB data. This fact raises the issue of whether it is consistent to jointly use P18, lensing, and non-CMB data sets in the context of the non-flat models. We try to answer this question, through the use of different statistical criteria, in Sec.\ \ref{subsec:data_set_tensions}. Note that Figs.\ \ref{fig:like_P18_lensing_nonCMB} and \ref{fig:like_Alens_P18_lensing_nonCMB} show that when P18+lensing+non-CMB data is used it is not necessary to consider $A_L\neq 1$ in order to make the three sets of tilted model contours overlap. \subsubsection{Comparing P18, P18+lensing, and P18+lensing+non-CMB data cosmological constraints for each model} \label{subsubsec:contour_plots} Cosmological parameter constraint contour plots allow us to easily see the degree of correlation between pairs of parameters. 
If two parameters are more correlated then the corresponding constraint contours are more line-like; on the other hand, if they are less correlated the contours are broader and enclose larger two-dimensional areas. In this subsubsection we comment on how the constraint contours, for each cosmological model, change depending on whether we consider P18, P18+lensing, or P18+lensing+non-CMB data. Figures \ref{fig:like_FL_compar}-\ref{fig:like_TNL_Alens_compar} show, for each of the eight cosmological models we study, the cosmological parameter constraints for P18, P18+lensing, and P18+lensing+non-CMB data. The constraint contours shrink as more data is included in the analysis used to determine them. From Fig.\ \ref{fig:like_FL_compar} for the six-parameter tilted flat $\Lambda$CDM model we see that there are significant overlaps between the contours obtained in the three data sets considered. Along with the results discussed in Secs.\ \ref{subsubsec:P18_lensing_data_constraints} and \ref{subsubsec:P18_lensing_nonCMB_data_constraints} this is an indication that there is not significant tension between P18, P18+lensing, and P18+lensing+non-CMB data when these data are analyzed in the tilted flat $\Lambda$CDM model. The $\Omega_m$-$H_0$ panel contours indicate that these two parameters are strongly correlated. The inclusion of lensing data and/or non-CMB data, which provide information about the late-time Universe, partially breaks this correlation and induces a shift in the one-dimensional posterior distributions of not only these two parameters but also other parameters. Non-CMB data cause a larger shift. For the six-parameter untilted non-flat $\Lambda$CDM model (see Fig.\ \ref{fig:like_NL_compar}) constraint contours determined from the three different data sets overlap only for some parameters. 
In particular, for constraint contours in panels that involve $\Omega_k$, $\Omega_m$, or $H_0$ there is no overlap between those determined using P18 data and those determined using P18+lensing+non-CMB data (there are larger than 2$\sigma$ differences between these contours when one of these three parameters is involved and the differences are larger when two of these three parameters are involved), and there is only a slight amount of overlap between the P18 data contours and the P18+lensing data contours. The $\Omega_m$-$H_0$-$\Omega_k$ geometrical degeneracy is prominent for P18 data and is clearly seen in the $\Omega_k$-$\Omega_m$, $\Omega_k$-$H_0$, and $\Omega_m$-$H_0$ panels, as these three parameters are strongly correlated. Including lensing data and/or non-CMB data partially breaks this degeneracy, causing significant shifts in the one-dimensional posterior distributions of not only these three parameters but also other parameters. The shifts are bigger here than in the tilted flat $\Lambda$CDM model and indicate significant tension between the data sets, especially between the P18 and P18+lensing+non-CMB data sets, when they are analyzed in the untilted non-flat $\Lambda$CDM model. Non-CMB data again appear to cause the larger shift. As discussed in more detail in Sec.\ \ref{subsec:data_set_tensions}, these shifts mean that P18 and non-CMB data are mutually inconsistent in the untilted non-flat $\Lambda$CDM model and so cannot be jointly used to derive cosmological parameter constraints in this model. Similar, but quantitatively less discrepant, results are obtained for the seven-parameter tilted non-flat $\Lambda$CDM Planck $P(q)$ and the tilted non-flat $\Lambda$CDM new $P(q)$ models (see Figs.\ \ref{fig:like_NL_ns_compar} and \ref{fig:like_TNL_compar}). The differences between the untilted non-flat and tilted non-flat results are likely a consequence of the additional $n_s$ parameter in the tilted non-flat models. 
In the tilted non-flat models, the more-discrepant P18 and P18+lensing+non-CMB data constraint contours overlap in all panels for pairs of the six primary cosmological parameters excluding the $\Omega_k$ parameter as well as the derived $\Omega_m$ and $H_0$ parameters. The differences are larger in the Planck $P(q)$ case than in the new $P(q)$ case, largest for $H_0$, smallest for $\Omega_m$, with $\Omega_k$ being in-between. In the new $P(q)$ case, the 2$\sigma$ contours overlap for $\Omega_m$ and almost overlap for $\Omega_k$. These results may be an indication of the tension found, in the context of the tilted non-flat models, between P18 data and the BAO data set. We shall study this tension in more detail in Sec.\ \ref{subsec:data_set_tensions}. As in the untilted non-flat $\Lambda$CDM model, the geometrical degeneracy between $\Omega_m$-$H_0$-$\Omega_k$ affects the tilted non-flat models. Again, including lensing data and/or non-CMB data partially breaks this degeneracy, causing significant shifts in the one-dimensional posterior distributions of not only these three parameters but also other parameters. The shifts are bigger here than in the tilted flat $\Lambda$CDM model but smaller than in the untilted non-flat $\Lambda$CDM model, and still indicate some tension between the data sets, especially between the P18 and P18+lensing+non-CMB data sets, particularly when they are analyzed in the tilted non-flat $\Lambda$CDM Planck $P(q)$ model. Non-CMB data again appear to cause the larger shift. When the $A_L$ parameter is allowed to vary, the three different sets of constraint contours overlap in all four models (see Figs.\ \ref{fig:like_FL_Alens_compar}--\ref{fig:like_TNL_Alens_compar}). In the non-flat models there is now a bigger degeneracy between the cosmological parameters $\Omega_m$-$H_0$-$\Omega_k$-$A_L$ which causes the constraint contours to expand relative to the $A_L = 1$ case, especially for P18 data. 
For some parameters in the untilted non-flat $\Lambda$CDM model and the tilted non-flat $\Lambda$CDM new $P(q)$ model we observe a bimodal distribution when only P18 data is used, and the same parameters in the tilted non-flat $\Lambda$CDM Planck $P(q)$ model have an almost bimodal distribution for P18 data. These bimodalities are likely a consequence of the above-mentioned geometrical degeneracy. \begin{table*} \caption{Mean and 68.3\% confidence limits of tilted flat $\Lambda\textrm{CDM}$ (+$A_L$) model parameters constrained by TT,TE,EE+lowE (P18), BAO, and BAO$^\prime$ data. $H_0$ has units of km s$^{-1}$ Mpc$^{-1}$. } \begin{ruledtabular} \begin{tabular}{lccccc} \\[-1mm] & \multicolumn{5}{c}{Tilted flat $\Lambda$CDM model} \\[+1mm] \cline{2-6}\\[-1mm] Parameter & P18 & P18+BAO & BAO & P18+BAO$^\prime$ & BAO$^\prime$ \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.02236 \pm 0.00015$ & $0.02243 \pm 0.00013$ & $0.043 \pm 0.016$ & $0.02241 \pm 0.00014$ & $0.043 \pm 0.016$ \\[+1mm] $\Omega_c h^2$ & $0.1202 \pm 0.0014$ & $0.11926 \pm 0.00097$ & $0.163 \pm 0.042$ & $0.11946 \pm 0.00098$ & $0.168 \pm 0.044$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.04090 \pm 0.00031$ & $1.04102 \pm 0.00029$ & $1.054 \pm 0.026$ & $1.04099 \pm 0.00029$ & $1.059 \pm 0.025$ \\[+1mm] $\tau$ & $0.0542 \pm 0.0079$ & $0.0581 \pm 0.0081$ & $0.0542$ & $0.0555 \pm 0.0077$ & $0.0542$ \\[+1mm] $n_s$ & $0.9649 \pm 0.0043$ & $0.9673 \pm 0.0037$ & $0.9649$ & $0.9665 \pm 0.0038$ & $0.9649$ \\[+1mm] $\ln(10^{10} A_s)$ & $3.044 \pm 0.016$ & $3.051 \pm 0.017$ & $3.01 \pm 0.27$ & $3.045 \pm 0.016$ & $3.044$ \\[+1mm] \hline \\[-1mm] $H_0$ & $67.28 \pm 0.61$ & $67.70 \pm 0.43$ & $83 \pm 12$ & $67.60 \pm 0.44$ & $83 \pm 12$ \\[+1mm] $\Omega_m$ & $0.3165 \pm 0.0084$ & $0.3106 \pm 0.0058$ & $0.294 \pm 0.015$ & $0.3119 \pm 0.0059$ & $0.300 \pm 0.016$ \\[+1mm] $\sigma_8$ & $0.8118 \pm 0.0074$ & $0.8119 \pm 0.0073$ & $0.874 \pm 0.037$ & $0.8102 \pm 0.0070$ & $0.92 \pm 0.12$ \\[+1mm] \hline \\[-1mm] 
$\chi_{\textrm{min}}^2$ (Total)& $2765.80$ & $2786.66$ & $15.92$ & $2777.75$ & $10.98$ \\[+1mm] $\chi_{\textrm{min}}^2$ (BAO/BAO$^\prime$) & $\cdots$ & $20.22$ & $15.92$ & $11.61$ & $10.98$ \\[+1mm] $\chi_{\textrm{BAO/BAO}^\prime}^2$ (at P18 B-F) & $\cdots$ & $22.24$ & $22.24$ & $12.58$ & $12.58$ \\[+1mm] $\textrm{DIC}$ & $2817.93$ & $2839.25$ & $21.93$ & $2829.61$ & $14.93$ \\[+1mm] $\textrm{AIC}_c$ & $2819.80$ & $2840.66$ & $27.56$ & $2831.75$ & $19.98$ \\[+1mm] \hline \hline \\[-1mm] & \multicolumn{5}{c}{Tilted flat $\Lambda$CDM+$A_L$ model} \\[+1mm] \cline{2-6}\\[-1mm] Parameter & P18 & P18+BAO & BAO & P18+BAO$^\prime$ & BAO$^\prime$ \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.02259 \pm 0.00017$ & $0.02258 \pm 0.00015$ & $0.043 \pm 0.015$ & $0.02256 \pm 0.00014$ & $0.045 \pm 0.013$ \\[+1mm] $\Omega_c h^2$ & $0.1180 \pm 0.0015$ & $0.1183 \pm 0.0010$ & $0.163 \pm 0.042$ & $0.1185 \pm 0.0010$ & $0.177 \pm 0.042$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.04114 \pm 0.00032$ & $1.04113 \pm 0.00029$ & $1.055 \pm 0.024$ & $1.04109 \pm 0.00030$ & $1.065 \pm 0.018$ \\[+1mm] $\tau$ & $0.0496 \pm 0.0082$ & $0.0522 \pm 0.0080$ & $0.0496$ & $0.0492 \pm 0.0084$ & $0.0496$ \\[+1mm] $n_s$ & $0.9710 \pm 0.0050$ & $0.9705 \pm 0.0038$ & $0.9710$ & $0.9698 \pm 0.0039$ & $0.9710$ \\[+1mm] $\ln(10^{10} A_s)$ & $3.030 \pm 0.017$ & $3.036 \pm 0.017$ & $3.00 \pm 0.27$ & $3.030 \pm 0.018$ & $3.030$ \\[+1mm] $A_{L}$ & $1.181 \pm 0.067$ & $1.170 \pm 0.060$ & $1.181$ & $1.174 \pm 0.061$ & $1.181$ \\[+1mm] \hline \\[-1mm] $H_0$ & $68.31 \pm 0.71$ & $68.21 \pm 0.46$ & $83 \pm 12$ & $68.11 \pm 0.47$ & $85 \pm 10$ \\[+1mm] $\Omega_m$ & $0.3029 \pm 0.0093$ & $0.3042 \pm 0.0060$ & $0.294 \pm 0.015$ & $0.3055 \pm 0.0061$ & $0.302 \pm 0.017$ \\[+1mm] $\sigma_8$ & $0.7997 \pm 0.0088$ & $0.8031 \pm 0.0077$ & $0.875 \pm 0.037$ & $0.8011 \pm 0.0079$ & $0.93 \pm 0.11$ \\[+1mm] \hline \\[-1mm] $\chi_{\textrm{min}}^2$ (Total)& $2756.12$ & $2776.71$ & $15.91$ & $2767.77$ & $10.98$ \\[+1mm] 
$\chi_{\textrm{min}}^2$ (BAO/BAO$^\prime$) & $\cdots$ & $20.47$ & $15.91$ & $11.37$ & $10.98$ \\[+1mm] $\chi_{\textrm{BAO/BAO}^\prime}^2$ (at P18 B-F) & $\cdots$ & $20.78$ & $20.78$ & $11.88$ & $11.88$ \\[+1mm] $\textrm{DIC}$ & $2812.41$ & $2832.92$ & $21.83$ & $2823.77$ & $15.04$ \\[+1mm] $\Delta\textrm{DIC}$ & $-5.52$ & $-6.33$ & $-0.10$ & $-5.90$ & $0.11$ \\[+1mm] $\textrm{AIC}_c$ & $2812.12$ & $2832.71$ & $27.55$ & $2823.77$ & $19.98$ \\[+1mm] $\Delta\textrm{AIC}_c$ & $-7.68$ & $-7.95$ & $-0.01$ & $-7.98$ & $0.00$ \\[+1mm] \end{tabular} \\[+1mm] \begin{flushleft} Note: $\Delta\textrm{DIC}$ ($\Delta\textrm{AIC}_c$) indicates an excess value relative to that of the tilted flat $\Lambda$CDM model constrained with the same data. The number of free parameters of the tilted flat $\Lambda$CDM model is 27 for P18, P18+BAO, and P18+BAO$^\prime$ data sets (including 21 internal calibration parameters), 4 for BAO data, and 3 for BAO$^\prime$ data. \end{flushleft} \end{ruledtabular} \label{tab:para_FL_BAO} \end{table*} \begin{table*} \caption{Mean and 68.3\% confidence limits of untilted non-flat $\Lambda\textrm{CDM}$ (+$A_L$) model parameters constrained by TT,TE,EE+lowE (P18), BAO, and BAO$^\prime$ data. $H_0$ has units of km s$^{-1}$ Mpc$^{-1}$. 
} \begin{ruledtabular} \begin{tabular}{lccccc} \\[-1mm] & \multicolumn{5}{c}{Untilted non-flat $\Lambda$CDM model} \\[+1mm] \cline{2-6}\\[-1mm] Parameter & P18 & P18+BAO & BAO & P18+BAO$^\prime$ & BAO$^\prime$ \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.02320 \pm 0.00015$ & $0.02298 \pm 0.00014$ & $0.040 \pm 0.015$ & $0.02299 \pm 0.00014$ & $0.040 \pm 0.015$ \\[+1mm] $\Omega_c h^2$ & $0.11098 \pm 0.00088$ & $0.11184 \pm 0.00089$ & $0.175 \pm 0.046$ & $0.11171 \pm 0.00089$ & $0.175 \pm 0.047$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.04204 \pm 0.00030$ & $1.04188 \pm 0.00029$ & $1.16 \pm 0.13$ & $1.04189 \pm 0.00030$ & $1.13 \pm 0.12$ \\[+1mm] $\tau$ & $0.0543 \pm 0.0091$ & $0.077 \pm 0.010$ & $0.0543$ & $0.073 \pm 0.010$ & $0.0543$ \\[+1mm] $\Omega_k$ & $-0.095 \pm 0.024$ & $-0.0066 \pm 0.0015$ & $-0.047 \pm 0.059$ & $-0.0074 \pm 0.0016$ & $-0.034 \pm 0.057$ \\[+1mm] $\ln(10^{10} A_s)$ & $3.021 \pm 0.019$ & $3.069 \pm 0.021$ & $2.70 \pm 0.43$ & $3.059 \pm 0.021$ & $3.021$ \\[+1mm] \hline \\[-1mm] $H_0$ & $47.1 \pm 3.2$ & $67.77 \pm 0.60$ & $84 \pm 12$ & $67.46 \pm 0.63$ & $83 \pm 12$ \\[+1mm] $\Omega_m$ & $0.617 \pm 0.082$ & $0.2950 \pm 0.0055$ & $0.303 \pm 0.019$ & $0.2975 \pm 0.0057$ & $0.307 \pm 0.019$ \\[+1mm] $\sigma_8$ & $0.730 \pm 0.017$ & $0.7977 \pm 0.0093$ & $0.850 \pm 0.048$ & $0.7927 \pm 0.0090$ & $1.00 \pm 0.18$ \\[+1mm] \hline \\[-1mm] $\chi_{\textrm{min}}^2$ (Total)& $2789.77$ & $2837.93$ & $15.91$ & $2828.81$ & $10.67$ \\[+1mm] $\chi_{\textrm{min}}^2$ (BAO/BAO$^\prime$) & $\cdots$ & $20.34$ & $15.91$ & $11.68$ & $10.67$ \\[+1mm] $\chi_{\textrm{BAO/BAO}^\prime}^2$ (at P18 B-F) & $\cdots$ & $1987.47$ & $1987.47$ & $1765.08$ & $1765.08$ \\[+1mm] $\textrm{DIC}$ & $2847.14$ & $2895.04$ & $24.31$ & $2884.90$ & $17.55$ \\[+1mm] $\Delta\textrm{DIC}$ & $29.21$ & $55.79$ & $2.38$ & $55.29$ & $2.62$ \\[+1mm] $\textrm{AIC}_c$ & $2843.77$ & $2891.93$ & $31.91$ & $2882.81$ & $24.39$ \\[+1mm] $\Delta\textrm{AIC}_c$ & $23.97$ & $51.27$ & $4.35$ & $51.06$ & $4.41$ 
\\[+1mm] \hline \hline \\[-1mm] & \multicolumn{5}{c}{Untilted non-flat $\Lambda$CDM+$A_L$ model} \\[+1mm] \cline{2-6}\\[-1mm] Parameter & P18 & P18+BAO & BAO & P18+BAO$^\prime$ & BAO$^\prime$ \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.02320 \pm 0.00015$ & $0.02318 \pm 0.00015$ & $0.041 \pm 0.015$ & $0.02320 \pm 0.00015$ & $0.042 \pm 0.014$ \\[+1mm] $\Omega_c h^2$ & $0.11097 \pm 0.00087$ & $0.11117 \pm 0.00086$ & $0.176 \pm 0.045$ & $0.11095 \pm 0.00087$ & $0.180 \pm 0.044$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.04202 \pm 0.00030$ & $1.04198 \pm 0.00030$ & $1.16 \pm 0.13$ & $1.04199 \pm 0.00030$ & $1.14 \pm 0.12$ \\[+1mm] $\tau$ & $0.0540 \pm 0.0087$ & $0.0598 \pm 0.0087$ & $0.0540$ & $0.0557 \pm 0.0089$ & $0.0540$ \\[+1mm] $\Omega_k$ & $-0.12 \pm 0.12$ & $-0.0064 \pm 0.0015$ & $-0.050 \pm 0.060$ & $-0.0073 \pm 0.0015$ & $-0.035 \pm 0.058$ \\[+1mm] $\ln(10^{10} A_s)$ & $3.020 \pm 0.018$ & $3.033 \pm 0.018$ & $2.68 \pm 0.41$ & $3.023 \pm 0.019$ & $3.020$ \\[+1mm] $A_{L}$ & $1.08 \pm 0.27$ & $1.310 \pm 0.062$ & $1.08$ & $1.319 \pm 0.063$ & $1.08$ \\[+1mm] \hline \\[-1mm] $H_0$ & $52 \pm 18$ & $68.27 \pm 0.61$ & $84 \pm 12$ & $67.93 \pm 0.62$ & $84 \pm 11$ \\[+1mm] $\Omega_m$ & $0.70 \pm 0.42$ & $0.2897 \pm 0.0054$ & $0.304 \pm 0.018$ & $0.2921 \pm 0.0055$ & $0.307 \pm 0.020$ \\[+1mm] $\sigma_8$ & $0.721 \pm 0.053$ & $0.7799 \pm 0.0083$ & $0.848 \pm 0.049$ & $0.7750 \pm 0.0085$ & $1.01 \pm 0.18$ \\[+1mm] \hline \\[-1mm] $\chi_{\textrm{min}}^2$ (Total)& $2787.76$ & $2809.82$ & $15.89$ & $2799.18$ & $10.68$ \\[+1mm] $\chi_{\textrm{min}}^2$ (BAO/BAO$^\prime$) & $\cdots$ & $21.96$ & $15.89$ & $11.38$ & $10.68$ \\[+1mm] $\chi_{\textrm{BAO/BAO}^\prime}^2$ (at P18 B-F) & $\cdots$ & $106.63$ & $106.63$ & $80.18$ & $80.18$ \\[+1mm] $\textrm{DIC}$ & $2846.45$ & $2869.28$ & $24.63$ & $2857.90$ & $17.89$ \\[+1mm] $\Delta\textrm{DIC}$ & $28.52$ & $30.03$ & $2.70$ & $28.29$ & $2.96$ \\[+1mm] $\textrm{AIC}_c$ & $2843.76$ & $2865.82$ & $31.89$ & $2855.18$ & $24.39$ \\[+1mm] 
$\Delta\textrm{AIC}_c$ & $23.96$ & $25.16$ & $4.33$ & $23.43$ & $4.41$ \\[+1mm] \end{tabular} \\[+1mm] \begin{flushleft} Note: $\Delta\textrm{DIC}$ ($\Delta\textrm{AIC}_c$) indicates an excess value relative to that of the tilted flat $\Lambda$CDM model constrained with the same data. \end{flushleft} \end{ruledtabular} \label{tab:para_NL_BAO} \end{table*} \begin{table*} \caption{Mean and 68.3\% confidence limits of Planck-$P(q)$-based tilted non-flat $\Lambda\textrm{CDM}$ (+$A_L$) model parameters constrained by TT,TE,EE+lowE (P18), BAO, and BAO$^\prime$ data. $H_0$ has units of km s$^{-1}$ Mpc$^{-1}$. } \begin{ruledtabular} \begin{tabular}{lccccc} \\[-1mm] & \multicolumn{5}{c}{Tilted non-flat $\Lambda$CDM model [Planck $P(q)$]} \\[+1mm] \cline{2-6}\\[-1mm] Parameter & P18 & P18+BAO & BAO & P18+BAO$^\prime$ & BAO$^\prime$ \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.02260 \pm 0.00017$ & $0.02241 \pm 0.00015$ & $0.040 \pm 0.015$ & $0.02241 \pm 0.00015$ & $0.040 \pm 0.016$ \\[+1mm] $\Omega_c h^2$ & $0.1181 \pm 0.0015$ & $0.1195 \pm 0.0014$ & $0.174 \pm 0.047$ & $0.1195 \pm 0.0014$ & $0.172 \pm 0.047$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.04116 \pm 0.00032$ & $1.04099 \pm 0.00032$ & $1.15 \pm 0.13$ & $1.04099 \pm 0.00032$ & $1.13 \pm 0.12$ \\[+1mm] $\tau$ & $0.0483 \pm 0.0083$ & $0.0578 \pm 0.0077$ & $0.0483$ & $0.0550 \pm 0.0078$ & $0.0483$ \\[+1mm] $\Omega_k$ & $-0.043 \pm 0.017$ & $0.0005 \pm 0.0018$ & $-0.046 \pm 0.060$ & $-0.0001 \pm 0.0018$ & $-0.033 \pm 0.055$ \\[+1mm] $n_s$ & $0.9706 \pm 0.0047$ & $0.9667 \pm 0.0045$ & $0.9706$ & $0.9666 \pm 0.0044$ & $0.9706$ \\[+1mm] $\ln(10^{10} A_s)$ & $3.027 \pm 0.017$ & $3.051 \pm 0.016$ & $2.74 \pm 0.43$ & $3.044 \pm 0.016$ & $3.027$ \\[+1mm] \hline \\[-1mm] $H_0$ & $54.5 \pm 3.6$ & $67.83 \pm 0.58$ & $83 \pm 12$ & $67.58 \pm 0.62$ & $83 \pm 12$ \\[+1mm] $\Omega_m$ & $0.481 \pm 0.062$ & $0.3100 \pm 0.0060$ & $0.303 \pm 0.019$ & $0.3122 \pm 0.0063$ & $0.306 \pm 0.019$ \\[+1mm] $\sigma_8$ & $0.775 \pm 0.015$ & $0.8130 \pm 
0.0079$ & $0.850 \pm 0.049$ & $0.8099 \pm 0.0081$ & $0.98 \pm 0.17$ \\[+1mm] \hline \\[-1mm] $\chi_{\textrm{min}}^2$ (Total)& $2754.73$ & $2786.20$ & $15.88$ & $2776.90$ & $10.68$ \\[+1mm] $\chi_{\textrm{min}}^2$ (BAO/BAO$^\prime$) & $\cdots$ & $20.09$ & $15.88$ & $11.71$ & $10.68$ \\[+1mm] $\chi_{\textrm{BAO/BAO}^\prime}^2$ (at P18 B-F) & $\cdots$ & $665.90$ & $665.90$ & $582.59$ & $582.59$ \\[+1mm] $\textrm{DIC}$ & $2810.59$ & $2840.62$ & $24.34$ & $2832.28$ & $17.58$ \\[+1mm] $\Delta\textrm{DIC}$ & $-7.34$ & $1.37$ & $2.41$ & $2.67$ & $2.65$ \\[+1mm] $\textrm{AIC}_c$ & $2810.73$ & $2842.20$ & $31.88$ & $2832.90$ & $24.39$ \\[+1mm] $\Delta\textrm{AIC}_c$ & $-9.07$ & $1.54$ & $4.32$ & $1.15$ & $4.41$ \\[+1mm] \hline \hline \\[-1mm] & \multicolumn{5}{c}{Tilted non-flat $\Lambda$CDM+$A_L$ model [Planck $P(q)$]} \\[+1mm] \cline{2-6}\\[-1mm] Parameter & P18 & P18+BAO & BAO & P18+BAO$^\prime$ & BAO$^\prime$ \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.02258 \pm 0.00017$ & $0.02260 \pm 0.00017$ & $0.041 \pm 0.014$ & $0.02262 \pm 0.00017$ & $0.044 \pm 0.013$ \\[+1mm] $\Omega_c h^2$ & $0.1183 \pm 0.0015$ & $0.1180 \pm 0.0015$ & $0.174 \pm 0.045$ & $0.1178 \pm 0.0015$ & $0.182 \pm 0.043$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.04116 \pm 0.00033$ & $1.04115 \pm 0.00033$ & $1.16 \pm 0.14$ & $1.04118 \pm 0.00032$ & $1.12 \pm 0.11$ \\[+1mm] $\tau$ & $0.0478 \pm 0.0081$ & $0.0522 \pm 0.0081$ & $0.0478$ & $0.0496 \pm 0.0085$ & $0.0478$ \\[+1mm] $\Omega_k$ & $-0.130 \pm 0.095$ & $-0.0004 \pm 0.0018$ & $-0.045 \pm 0.063$ & $-0.0012 \pm 0.0018$ & $-0.026 \pm 0.054$ \\[+1mm] $n_s$ & $0.9704 \pm 0.0048$ & $0.9712 \pm 0.0047$ & $0.9704$ & $0.9716 \pm 0.0047$ & $0.9704$ \\[+1mm] $\ln(10^{10} A_s)$ & $3.027 \pm 0.017$ & $3.035 \pm 0.017$ & $2.74 \pm 0.45$ & $3.029 \pm 0.018$ & $3.027$ \\[+1mm] $A_{L}$ & $0.88 \pm 0.15$ & $1.170 \pm 0.061$ & $0.88$ & $1.178 \pm 0.061$ & $0.88$ \\[+1mm] \hline \\[-1mm] $H_0$ & $45 \pm 11$ & $68.13 \pm 0.60$ & $84 \pm 11$ & $67.85 \pm 0.61$ & $85 \pm 10$ 
\\[+1mm] $\Omega_m$ & $0.80 \pm 0.35$ & $0.3044 \pm 0.0062$ & $0.303 \pm 0.019$ & $0.3064 \pm 0.0063$ & $0.307 \pm 0.019$ \\[+1mm] $\sigma_8$ & $0.733 \pm 0.045$ & $0.8020 \pm 0.0089$ & $0.851 \pm 0.048$ & $0.7983 \pm 0.0091$ & $0.99 \pm 0.16$ \\[+1mm] \hline \\[-1mm] $\chi_{\textrm{min}}^2$ (Total)& $2754.99$ & $2776.32$ & $15.91$ & $2767.04$ & $10.73$ \\[+1mm] $\chi_{\textrm{min}}^2$ (BAO/BAO$^\prime$) & $\cdots$ & $20.38$ & $15.91$ & $11.22$ & $10.73$ \\[+1mm] $\chi_{\textrm{BAO/BAO}^\prime}^2$ (at P18 B-F) & $\cdots$ & $593.77$ & $593.77$ & $518.08$ & $518.08$ \\[+1mm] $\textrm{DIC}$ & $2811.63$ & $2835.10$ & $24.31$ & $2825.27$ & $17.54$ \\[+1mm] $\Delta\textrm{DIC}$ & $-6.30$ & $-4.15$ & $2.38$ & $-4.34$ & $2.61$ \\[+1mm] $\textrm{AIC}_c$ & $2812.99$ & $2834.32$ & $31.91$ & $2825.04$ & $24.45$ \\[+1mm] $\Delta\textrm{AIC}_c$ & $-6.81$ & $-6.34$ & $4.35$ & $-6.71$ & $4.47$ \\[+1mm] \end{tabular} \\[+1mm] \begin{flushleft} Note: $\Delta\textrm{DIC}$ ($\Delta\textrm{AIC}_c$) indicates an excess value relative to that of the tilted flat $\Lambda$CDM model constrained with the same data. \end{flushleft} \end{ruledtabular} \label{tab:para_NL_ns_BAO} \end{table*} \begin{table*} \caption{Mean and 68.3\% confidence limits of new-$P(q)$-based tilted non-flat $\Lambda\textrm{CDM}$ (+$A_L$) model parameters constrained by TT,TE,EE+lowE (P18), BAO, and BAO$^\prime$ data. $H_0$ has units of km s$^{-1}$ Mpc$^{-1}$. 
} \begin{ruledtabular} \begin{tabular}{lccccc} \\[-1mm] & \multicolumn{5}{c}{Tilted non-flat $\Lambda$CDM model [new $P(q)$]} \\[+1mm] \cline{2-6}\\[-1mm] Parameter & P18 & P18+BAO & BAO & P18+BAO$^\prime$ & BAO$^\prime$ \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.02255 \pm 0.00017$ & $0.02242 \pm 0.00015$ & $0.039 \pm 0.015$ & $0.02243 \pm 0.00016$ & $0.041 \pm 0.016$ \\[+1mm] $\Omega_c h^2$ & $0.1188 \pm 0.0015$ & $0.1194 \pm 0.0014$ & $0.173 \pm 0.048$ & $0.1193 \pm 0.0014$ & $0.177 \pm 0.048$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.04109 \pm 0.00032$ & $1.04100 \pm 0.00032$ & $1.16 \pm 0.14$ & $1.04102 \pm 0.00032$ & $1.13 \pm 0.12$ \\[+1mm] $\tau$ & $0.0525 \pm 0.0083$ & $0.0582 \pm 0.0081$ & $0.0525$ & $0.0562 \pm 0.0080$ & $0.0525$ \\[+1mm] $\Omega_k$ & $-0.033 \pm 0.014$ & $0.0003 \pm 0.0018$ & $-0.051 \pm 0.061$ & $-0.0004 \pm 0.0018$ & $-0.032 \pm 0.059$ \\[+1mm] $n_s$ & $0.9654 \pm 0.0045$ & $0.9665 \pm 0.0043$ & $0.9654$ & $0.9665 \pm 0.0043$ & $0.9654$ \\[+1mm] $\ln(10^{10} A_s)$ & $3.039 \pm 0.017$ & $3.051 \pm 0.016$ & $2.72 \pm 0.45$ & $3.046 \pm 0.016$ & $3.039$ \\[+1mm] \hline \\[-1mm] $H_0$ & $56.9 \pm 3.6$ & $67.79 \pm 0.59$ & $83 \pm 12$ & $67.52 \pm 0.61$ & $83 \pm 12$ \\[+1mm] $\Omega_m$ & $0.444 \pm 0.055$ & $0.3102 \pm 0.0060$ & $0.304 \pm 0.019$ & $0.3124 \pm 0.0063$ & $0.307 \pm 0.020$ \\[+1mm] $\sigma_8$ & $0.786 \pm 0.014$ & $0.8128 \pm 0.0079$ & $0.846 \pm 0.048$ & $0.8098 \pm 0.0080$ & $0.99 \pm 0.18$ \\[+1mm] \hline \\[-1mm] $\chi_{\textrm{min}}^2$ (Total)& $2757.38$ & $2786.27$ & $15.90$ & $2777.01$ & $10.67$ \\[+1mm] $\chi_{\textrm{min}}^2$ (BAO/BAO$^\prime$) & $\cdots$ & $20.66$ & $15.90$ & $11.82$ & $10.67$ \\[+1mm] $\chi_{\textrm{BAO/BAO}^\prime}^2$ (at P18 B-F) & $\cdots$ & $278.54$ & $278.54$ & $236.71$ & $236.71$ \\[+1mm] $\textrm{DIC}$ & $2811.54$ & $2840.16$ & $24.57$ & $2831.65$ & $17.69$ \\[+1mm] $\Delta\textrm{DIC}$ & $-6.39$ & $0.91$ & $2.64$ & $2.04$ & $2.76$ \\[+1mm] $\textrm{AIC}_c$ & $2813.38$ & $2842.27$ & $31.90$ & 
$2833.01$ & $24.39$ \\[+1mm] $\Delta\textrm{AIC}_c$ & $-6.42$ & $1.61$ & $4.34$ & $1.26$ & $4.41$ \\[+1mm] \hline \hline \\[-1mm] & \multicolumn{5}{c}{Tilted non-flat $\Lambda$CDM+$A_L$ model [new $P(q)$]} \\[+1mm] \cline{2-6}\\[-1mm] Parameter & P18 & P18+BAO & BAO & P18+BAO$^\prime$ & BAO$^\prime$ \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.02257 \pm 0.00017$ & $0.02260 \pm 0.00017$ & $0.039 \pm 0.015$ & $0.02261 \pm 0.00017$ & $0.042 \pm 0.015$ \\[+1mm] $\Omega_c h^2$ & $0.1187 \pm 0.0016$ & $0.1180 \pm 0.0014$ & $0.174 \pm 0.047$ & $0.1178 \pm 0.0015$ & $0.177 \pm 0.046$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.04111 \pm 0.00033$ & $1.04117 \pm 0.00033$ & $1.17 \pm 0.14$ & $1.04117 \pm 0.00032$ & $1.13 \pm 0.13$ \\[+1mm] $\tau$ & $0.0512 \pm 0.0086$ & $0.0532 \pm 0.0081$ & $0.0512$ & $0.0495 \pm 0.0084$ & $0.0512$ \\[+1mm] $\Omega_k$ & $-0.10 \pm 0.11$ & $-0.0005 \pm 0.0017$ & $-0.055 \pm 0.060$ & $-0.0012 \pm 0.0018$ & $-0.035 \pm 0.059$ \\[+1mm] $n_s$ & $0.9654 \pm 0.0057$ & $0.9707 \pm 0.0044$ & $0.9654$ & $0.9715 \pm 0.0047$ & $0.9654$ \\[+1mm] $\ln(10^{10} A_s)$ & $3.036 \pm 0.018$ & $3.038 \pm 0.017$ & $2.69 \pm 0.43$ & $3.029 \pm 0.018$ & $3.036$ \\[+1mm] $A_{L}$ & $0.94 \pm 0.20$ & $1.168 \pm 0.061$ & $0.94$ & $1.176 \pm 0.062$ & $0.94$ \\[+1mm] \hline \\[-1mm] $H_0$ & $51 \pm 14$ & $68.09 \pm 0.60$ & $83 \pm 12$ & $67.85 \pm 0.63$ & $84 \pm 11$ \\[+1mm] $\Omega_m$ & $0.70 \pm 0.43$ & $0.3048 \pm 0.0062$ & $0.304 \pm 0.019$ & $0.3065 \pm 0.0065$ & $0.306 \pm 0.020$ \\[+1mm] $\sigma_8$ & $0.752 \pm 0.052$ & $0.8026 \pm 0.0086$ & $0.844 \pm 0.048$ & $0.7982 \pm 0.0092$ & $0.99 \pm 0.18$ \\[+1mm] \hline \\[-1mm] $\chi_{\textrm{min}}^2$ (Total)& $2756.33$ & $2776.32$ & $15.90$ & $2767.43$ & $10.68$ \\[+1mm] $\chi_{\textrm{min}}^2$ (BAO/BAO$^\prime$) & $\cdots$ & $20.30$ & $15.90$ & $11.21$ & $10.68$ \\[+1mm] $\chi_{\textrm{BAO/BAO}^\prime}^2$ (at P18 B-F) & $\cdots$ & $194.81$ & $194.81$ & $160.72$ & $160.72$ \\[+1mm] $\textrm{DIC}$ & $2814.83$ & $2834.67$ & 
$24.75$ & $2824.97$ & $17.76$ \\[+1mm] $\Delta\textrm{DIC}$ & $-3.10$ & $-4.58$ & $2.82$ & $-4.64$ & $2.83$ \\[+1mm] $\textrm{AIC}_c$ & $2814.33$ & $2834.32$ & $31.90$ & $2825.43$ & $24.39$ \\[+1mm] $\Delta\textrm{AIC}_c$ & $-5.47$ & $-6.34$ & $4.34$ & $-6.32$ & $4.41$ \\[+1mm] \end{tabular} \\[+1mm] \begin{flushleft} Note: $\Delta\textrm{DIC}$ ($\Delta\textrm{AIC}_c$) indicates an excess value relative to that of the tilted flat $\Lambda$CDM model constrained with the same data. \end{flushleft} \end{ruledtabular} \label{tab:para_TNL_ns_BAO} \end{table*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_FL_fig16.pdf}} \caption{Likelihood distributions constrained by the Planck 2018 TT,TE,EE+lowE (P18), BAO, and BAO$^\prime$ data sets in the tilted flat $\Lambda$CDM model. } \label{fig:like_FL_BAO} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_FL_Alens_fig17.pdf}} \caption{Likelihood distributions constrained by the Planck 2018 TT,TE,EE+lowE (P18), BAO, and BAO$^\prime$ data sets in the tilted flat $\Lambda$CDM$+A_L$ model. } \label{fig:like_FL_Alens_BAO} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_NL_fig18.pdf}} \caption{Likelihood distributions constrained by the Planck 2018 TT,TE,EE+lowE (P18), BAO, and BAO$^\prime$ data sets in the untilted non-flat $\Lambda$CDM model. } \label{fig:like_NL_BAO} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_NL_Alens_fig19.pdf}} \caption{Likelihood distributions constrained by the Planck 2018 TT,TE,EE+lowE (P18), BAO, and BAO$^\prime$ data sets in the untilted non-flat $\Lambda$CDM$+A_L$ model. 
} \label{fig:like_NL_Alens_BAO} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_NL_ns_fig20.pdf}} \caption{Likelihood distributions constrained by the Planck 2018 TT,TE,EE+lowE (P18), BAO, and BAO$^\prime$ data sets in the tilted non-flat $\Lambda$CDM model with the Planck $P(q)$. } \label{fig:like_NL_ns_BAO} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_NL_Alens_ns_fig21.pdf}} \caption{Likelihood distributions constrained by the Planck 2018 TT,TE,EE+lowE (P18), BAO, and BAO$^\prime$ data sets in the tilted non-flat $\Lambda$CDM$+A_L$ model with the Planck $P(q)$. } \label{fig:like_NL_Alens_ns_BAO} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_TNL_ns1_fig22.pdf}} \caption{Likelihood distributions constrained by the Planck 2018 TT,TE,EE+lowE (P18), BAO, and BAO$^\prime$ data sets in the tilted non-flat $\Lambda$CDM model with the new $P(q)$. } \label{fig:like_TNL_ns1_BAO} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_TNL_Alens_ns1_fig23.pdf}} \caption{Likelihood distributions constrained by the Planck 2018 TT,TE,EE+lowE (P18), BAO, and BAO$^\prime$ data sets in the tilted non-flat $\Lambda$CDM$+A_L$ model with the new $P(q)$. } \label{fig:like_TNL_Alens_ns1_BAO} \end{figure*} \subsubsection{Comparing P18 data and BAO/BAO$^{\prime}$ data cosmological constraints}\label{sec:P18_vs_BAO} In this subsubsection we compare BAO and BAO$^\prime$ data cosmological constraints to those obtained from P18 data. Prior to jointly analyzing P18+BAO/BAO$^\prime$ data, we need to determine whether P18 and BAO/BAO$^\prime$ data cosmological constraints are mutually consistent. In Sec.\ \ref{subsec:data_set_tensions} we use two other statistical estimators to examine whether or not P18 and BAO/BAO$^\prime$ data are in tension. 
The cosmological parameter mean values and error bars favored by the P18, BAO, BAO$^\prime$, P18+BAO, and P18+BAO$^\prime$ data sets are summarized in Tables \ref{tab:para_FL_BAO}-\ref{tab:para_TNL_ns_BAO} for the tilted flat $\Lambda$CDM (+$A_L$) models, the untilted non-flat $\Lambda$CDM (+$A_L$) models, the tilted non-flat $\Lambda$CDM (+$A_L$) models with the Planck $P(q)$, and the tilted non-flat $\Lambda$CDM ($+A_L$) models with the new $P(q)$, respectively. Likelihood distributions of cosmological parameters of the four models with $A_L=1$ and $A_L$ varying are shown in Figs.\ \ref{fig:like_FL_BAO}-\ref{fig:like_TNL_Alens_ns1_BAO} for the P18, BAO, BAO$^\prime$, P18+BAO, and P18+BAO$^\prime$ data sets. Since neither BAO$^{\prime}$ nor BAO data have the ability to constrain $\tau$ or $n_s$ or $A_L$, we set their values to those found in the corresponding P18 data analysis. In addition, for the same reason, in the BAO$^\prime$ data analyses, we also set the value of $\ln(10^{10}A_s)$ to that found in the corresponding P18 data analysis. We see from the upper and lower panels of Tables \ref{tab:para_FL_BAO}-\ref{tab:para_TNL_ns_BAO} that the BAO and BAO$^\prime$ data results for the $A_L=1$ and $A_L$-varying cases are similar, even though the fixed $\tau$ and $n_s$ [and $\ln(10^{10}A_s)$] values are slightly different for the $A_L = 1$ and $A_L$-varying cases. From Tables \ref{tab:para_FL_BAO}-\ref{tab:para_TNL_ns_BAO} we see that, in the six non-flat $\Lambda$CDM (+$A_L$) models, the constraints set by BAO$^\prime$/BAO data on $\Omega_m$ are tighter than the ones imposed by P18 data, and that in the three non-flat $\Lambda$CDM+$A_L$ models the constraints set by BAO$^\prime$/BAO data on $\Omega_k$ are tighter than the ones imposed by P18 data. P18 data more restrictively constrain all other parameters in all eight cosmological models. 
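The $\Delta\textrm{DIC}$ and $\Delta\textrm{AIC}_c$ rows in these tables are simple differences with respect to the tilted flat $\Lambda$CDM values obtained from the same data. As an illustrative cross-check (the numbers below are copied from the P18 columns of the tables above; the script is not part of our analysis pipeline), the subtraction can be verified directly:

```python
# Check that the Delta(DIC) and Delta(AIC_c) table entries are simple
# differences relative to the tilted flat LambdaCDM model fit to the
# same data. Values are copied from the P18 columns of the tables above.

# Tilted flat LambdaCDM reference values (P18 data).
DIC_FLAT, AICC_FLAT = 2817.93, 2819.80

# (DIC, AIC_c) for two of the other models, P18 data.
models = {
    "tilted flat + A_L": (2812.41, 2812.12),
    "untilted non-flat": (2847.14, 2843.77),
}

deltas = {name: (round(dic - DIC_FLAT, 2), round(aicc - AICC_FLAT, 2))
          for name, (dic, aicc) in models.items()}
print(deltas)
```

Running this reproduces the $\Delta\textrm{DIC}$ and $\Delta\textrm{AIC}_c$ entries quoted in the P18 columns ($-5.52$ and $-7.68$ for tilted flat $\Lambda$CDM+$A_L$; $29.21$ and $23.97$ for untilted non-flat $\Lambda$CDM).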
As we discuss below, there is a significant level of disagreement in the non-flat models between P18 data cosmological constraints and BAO$^\prime$/BAO data cosmological constraints, in most cases. From Tables \ref{tab:para_NL_BAO}-\ref{tab:para_TNL_ns_BAO} we see that all three data sets, P18, BAO$^\prime$, and BAO, favor negative values of the curvature parameter, with BAO$^\prime$ and BAO data favoring closed geometry only weakly, at 0.48$\sigma$ to 0.96$\sigma$. However, we should take into account the geometrical degeneracy between $H_0$-$\Omega_k$-$\Omega_m$ and note that both BAO$^\prime$ and BAO data favor higher values of $H_0$ and lower values of $\Omega_m$ than do P18 data and this is what causes the P18 and BAO/BAO$^\prime$ cosmological constraint differences. We first discuss BAO$^\prime$ data results (BAO$^\prime$ data do not include $f\sigma_8$ data points, see Sec.\ \ref{sec:data}) and then consider results from BAO data. This will allow us to test the impact of some $f\sigma_8$ data on the cosmological constraints. Comparing the six-parameter and the three-parameter tilted flat $\Lambda$CDM model primary cosmological parameter constraints for P18 and BAO$^\prime$ data, shown in the upper half of Table \ref{tab:para_FL_BAO}, we see that the values of $\Omega_b h^2$ and $\Omega_c h^2$ are in mild disagreement, at 1.3$\sigma$ and 1.1$\sigma$ respectively. We also observe a similar 1.3$\sigma$ level of tension in the derived $H_0$ values, whereas the other two derived parameters, $\Omega_m$ and $\sigma_8$, show a better agreement, disagreeing by only 0.91$\sigma$ and 0.90$\sigma$ respectively. 
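The $N\sigma$ disagreements quoted here and in what follows are computed with the usual estimator for two independent Gaussian-distributed measurements, $N = |\mu_1 - \mu_2|/\sqrt{\sigma_1^2 + \sigma_2^2}$. A minimal sketch (with values copied from the upper half of Table \ref{tab:para_FL_BAO}; the helper name \texttt{n\_sigma} is ours) reproduces the tilted flat $\Lambda$CDM numbers quoted above:

```python
def n_sigma(mu1, sig1, mu2, sig2):
    """Gaussian tension between two independent estimates, in units of sigma."""
    return abs(mu1 - mu2) / (sig1**2 + sig2**2) ** 0.5

# P18 vs BAO' constraints for the tilted flat LambdaCDM model (table above).
obh2 = n_sigma(0.02236, 0.00015, 0.043, 0.016)  # Omega_b h^2 -> ~1.3 sigma
och2 = n_sigma(0.1202, 0.0014, 0.168, 0.044)    # Omega_c h^2 -> ~1.1 sigma
h0 = n_sigma(67.28, 0.61, 83.0, 12.0)           # H_0         -> ~1.3 sigma
print(round(obh2, 1), round(och2, 1), round(h0, 1))
```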
Comparing the seven-parameter and the three-parameter tilted flat $\Lambda$CDM+$A_L$ model primary cosmological parameter constraints for P18 and BAO$^\prime$ data, shown in the lower half of Table \ref{tab:para_FL_BAO}, we see that the values of $\Omega_b h^2$, $\Omega_c h^2$, and $100\theta_{\textrm{MC}}$ are in 1.7$\sigma$, 1.4$\sigma$ and 1.3$\sigma$ tension respectively. As for the derived parameters, we find $H_0$ and $\sigma_8$ values are in 1.7$\sigma$ and 1.2$\sigma$ disagreement while $\Omega_m$ values differ by only 0.046$\sigma$. This means that only for the $\Omega_m$ parameter does the inclusion of a varying $A_L$ reduce the disagreement found in the $A_L=1$ case, while increasing the disagreement for a number of other parameters. P18 and BAO$^\prime$ data results obtained for the six-parameter and the three-parameter untilted non-flat $\Lambda$CDM model, shown in the upper half of Table \ref{tab:para_NL_BAO}, indicate more significant differences than found in the tilted flat $\Lambda$CDM model. The primary cosmological parameters $\Omega_b h^2$ and $\Omega_c h^2$ values disagree at 1.1$\sigma$ and 1.4$\sigma$, while the primary spatial curvature parameter value is $\Omega_k=-0.034\pm 0.057$ for BAO$^\prime$ data, which is 0.60$\sigma$ away from flat and in 0.99$\sigma$ tension with the P18 value $\Omega_k=-0.095\pm 0.024$, which is 4.0$\sigma$ away from flat. Regarding the derived parameters, we find that $\Omega_m$, $H_0$, and $\sigma_8$ values are in 3.7$\sigma$, 2.9$\sigma$, and 1.5$\sigma$ disagreement. According to these results, P18 and BAO$^\prime$ data probably should not be jointly analyzed in the context of the untilted non-flat $\Lambda$CDM model. The results for the seven-parameter and the three-parameter untilted non-flat $\Lambda$CDM+$A_L$ model, obtained considering P18 and BAO$^\prime$ data, are in the lower half of Table \ref{tab:para_NL_BAO}. 
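Similarly, statements such as ``4.0$\sigma$ away from flat'' compare the $\Omega_k$ constraint against the spatially-flat value $\Omega_k = 0$, i.e.\ $|\Omega_k|/\sigma$. A quick check with the untilted non-flat $\Lambda$CDM values quoted above (the function name is ours, for illustration only):

```python
def sigmas_from_flat(omega_k, sigma):
    """Number of standard deviations separating Omega_k from zero (flat)."""
    return abs(omega_k) / sigma

# Untilted non-flat LambdaCDM constraints quoted above.
p18 = sigmas_from_flat(-0.095, 0.024)    # P18:  ~4.0 sigma away from flat
baop = sigmas_from_flat(-0.034, 0.057)   # BAO': ~0.60 sigma away from flat
print(round(p18, 1), round(baop, 2))
```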
While there is a slight increase in the disagreement between the values of the primary parameters $\Omega_b h^2$ (1.3$\sigma$) and $\Omega_c h^2$ (1.6$\sigma$), there are significant decreases for the derived parameters $\Omega_m$ and $H_0$, but not for $\sigma_8$; these now disagree by 0.93$\sigma$, 1.5$\sigma$, and 1.5$\sigma$ respectively. This is caused by the increase in the size of the error bars in the $A_L$-varying P18 case with respect to the corresponding values obtained with $A_L=1$. For the BAO$^\prime$ data primary spatial curvature parameter we find $\Omega_k=-0.035\pm 0.058$, which is 0.60$\sigma$ away from flat hypersurfaces and only in 0.64$\sigma$ tension with the P18 value $\Omega_k=-0.12\pm0.12$, which is now only 1.0$\sigma$ away from flat. According to these results, unlike in the $A_L=1$ case, in the $A_L$-varying case P18 and BAO$^\prime$ data can probably be jointly analyzed in the context of the untilted non-flat $\Lambda$CDM model. Note that in this case a joint analysis of P18+BAO$^\prime$ data favors closed geometry at 4.9$\sigma$, with $\Omega_k=-0.0073\pm0.0015$, although because of the lack of the tilt ($n_s$) degree of freedom this untilted non-flat $\Lambda$CDM+$A_L$ model does not provide a good fit to smaller-angular-scale P18 data, which is reflected in the large $\Delta$DIC and $\Delta$AIC$_c$ values for the P18+BAO$^\prime$ case in the lower half of Table \ref{tab:para_NL_BAO}. Comparing the seven-parameter and the four-parameter tilted non-flat $\Lambda$CDM Planck $P(q)$ model primary cosmological parameter constraints for P18 and BAO$^\prime$ data, we see, in the upper half of Table \ref{tab:para_NL_ns_BAO}, that the values of $\Omega_b h^2$ and $\Omega_c h^2$ are both in 1.1$\sigma$ disagreement. 
The BAO$^\prime$ data primary spatial curvature parameter value $\Omega_k=-0.033\pm 0.055$ is 0.6$\sigma$ away from flat and only in 0.17$\sigma$ tension with the P18 value $\Omega_k=-0.043\pm0.017$, which is 2.5$\sigma$ in favor of closed geometry. The derived parameters $\Omega_m$, $H_0$, and $\sigma_8$ are in 2.7$\sigma$, 2.3$\sigma$, and 1.2$\sigma$ tension. These results reveal that P18 and BAO$^\prime$ data cosmological constraints are somewhat inconsistent in the tilted non-flat $\Lambda$CDM Planck $P(q)$ model and these data probably should not be used jointly to constrain this model. Looking at the lower half of Table \ref{tab:para_NL_ns_BAO} we can compare results obtained for the eight-parameter and the four-parameter tilted non-flat $\Lambda$CDM+$A_L$ Planck $P(q)$ model from P18 and BAO$^\prime$ data respectively. We observe that the primary parameters $\Omega_b h^2$ and $\Omega_c h^2$ are in 1.6$\sigma$ and 1.5$\sigma$ tension. For the BAO$^\prime$ data primary spatial curvature parameter we find $\Omega_k= -0.026\pm 0.054$, which is only 0.48$\sigma$ away from flat and in 0.95$\sigma$ tension with the P18 value $-0.130\pm0.095$, which is 1.4$\sigma$ away from flat. Regarding the derived parameters we find that $\Omega_m$, $H_0$, and $\sigma_8$ are in 1.4$\sigma$, 2.7$\sigma$ and 1.6$\sigma$ disagreement. Compared to the $A_L = 1$ case, in the $A_L$-varying case we find a significant reduction only in the $\Omega_m$ tension, with most of the other parameter disagreements being more significant, which again suggests that P18 and BAO$^\prime$ data should not be jointly analyzed within the tilted non-flat $\Lambda$CDM+$A_L$ Planck $P(q)$ model. 
Comparing the seven-parameter and the four-parameter tilted non-flat $\Lambda$CDM new $P(q)$ model primary cosmological parameter constraints for P18 and BAO$^\prime$ data, from the upper half of Table \ref{tab:para_TNL_ns_BAO} we see that the values of $\Omega_b h^2$ and $\Omega_c h^2$ both disagree at 1.2$\sigma$. The BAO$^\prime$ data primary spatial curvature parameter value is $\Omega_k=-0.032\pm 0.059$, which is only a 0.54$\sigma$ deviation from flat and, similar to the Planck $P(q)$ model, is only in 0.016$\sigma$ disagreement with the P18 value $-0.033\pm 0.014$, which is 2.4$\sigma$ away from flat. Regarding the derived parameters $\Omega_m$, $H_0$, and $\sigma_8$, we find that their values disagree at 2.3$\sigma$, 2.1$\sigma$, and 1.1$\sigma$ respectively. While these disagreements are smaller than the ones found in the tilted non-flat $\Lambda$CDM Planck $P(q)$ model, they are still large enough to require that we test more carefully whether P18 and BAO$^\prime$ data can be jointly used to constrain cosmological parameters in this cosmological model. The results for the eight-parameter and the four-parameter tilted non-flat $\Lambda$CDM+$A_L$ new $P(q)$ model are in the lower half of Table \ref{tab:para_TNL_ns_BAO}, for P18 and BAO$^\prime$ data, respectively. As happens in the Planck $P(q)$ model, when the $A_L$ parameter is allowed to vary the tensions found for the primary parameters $\Omega_b h^2$ and $\Omega_c h^2$ do not decrease (in fact they slightly increase) with respect to the $A_L=1$ case, both now being 1.3$\sigma$. For the BAO$^\prime$ data primary spatial curvature parameter we find $\Omega_k= -0.035\pm 0.059$, which is 0.59$\sigma$ away from flat hypersurfaces and only in 0.52$\sigma$ tension with the P18 value $\Omega_k=-0.10\pm 0.11$, which is 0.91$\sigma$ away from flat. As for the values of the derived parameters $\Omega_m$, $H_0$, and $\sigma_8$, we find disagreements at 0.92$\sigma$, 1.9$\sigma$, and 1.3$\sigma$ respectively. 
The tensions are reduced with respect to the case with $A_L=1$, due to the increase of the error bars, but are possibly still not small enough to allow the joint use of P18+BAO$^\prime$ data for constraining tilted non-flat $\Lambda$CDM+$A_L$ new $P(q)$ model cosmological parameters. We now comment on the consistency between the cosmological constraints obtained using the BAO data set (which contains some $f\sigma_8$ data points) and the P18 data cosmological constraints. Here we also have to deal with the $\sigma_8$ tension, namely the discrepancy between the larger value for $\sigma_8$ obtained when P18 data are considered and the typically smaller values that one gets from low-redshift structure formation data (the $f\sigma_8$ data points we consider) or from weak lensing measurements. Note that since BAO data include some $f\sigma_8$ measurements we allow ln($10^{10}A_s$) to vary in the BAO data only analyses (unlike the BAO$^\prime$ data only analyses, where we fix the value of this parameter). We shall see that the tilted non-flat $\Lambda$CDM new $P(q)$ model is the model that best reconciles these measurements. Comparing the six-parameter and the four-parameter tilted flat $\Lambda$CDM primary cosmological parameter constraints for P18 and BAO data, shown in the upper half of Table \ref{tab:para_FL_BAO}, we see that the values of $\Omega_b h^2$ and $\Omega_c h^2$ are in 1.3$\sigma$ and 1.0$\sigma$ tension, respectively. A similar level of disagreement is found if we look at the values of the derived parameters. In particular, for $\Omega_m$, $H_0$, and $\sigma_8$ we find 1.3$\sigma$, 1.3$\sigma$, and 1.6$\sigma$ disagreement. Here the greatest disagreement is the one affecting $\sigma_8$, which has to do with the $\sigma_8$ tension mentioned above. 
Considering the results presented in the lower half of Table \ref{tab:para_FL_BAO} for the seven-parameter and the four-parameter tilted flat $\Lambda$CDM+$A_L$ model, obtained for P18 and BAO data, respectively, we find that including a varying $A_L$ parameter does not decrease the primary parameter tensions found when $A_L=1$. For $\Omega_b h^2$ and $\Omega_c h^2$ the disagreement is now 1.4$\sigma$ and 1.1$\sigma$. On the other hand for the derived $\Omega_m$, $H_0$, and $\sigma_8$ we find that their corresponding values disagree at 0.50$\sigma$, 1.2$\sigma$, and 2.0$\sigma$. Once again, allowing $A_L$ to vary reduces the $\Omega_m$ disagreement and the largest disagreement is between the $\sigma_8$ values. Comparing the six-parameter and the five-parameter untilted non-flat $\Lambda$CDM model primary cosmological parameter constraints for P18 and BAO data, provided in the upper half of Table \ref{tab:para_NL_BAO}, we observe that the values of $\Omega_b h^2$ and $\Omega_c h^2$ show a disagreement of 1.1$\sigma$ and 1.4$\sigma$, respectively. The BAO data value for the primary spatial curvature parameter is $\Omega_k=-0.047\pm 0.059$, which is 0.80$\sigma$ away from flat hypersurfaces and in 0.75$\sigma$ tension with the P18 value $-0.095\pm 0.024$, which represents a 4.0$\sigma$ deviation from flat. The level of tension is worse for the derived parameters $\Omega_m$, $H_0$, and $\sigma_8$, the disagreements now being 3.7$\sigma$, 3.0$\sigma$, and 2.4$\sigma$. We may say that P18 and BAO data should not be jointly used to constrain cosmological parameters in the untilted non-flat $\Lambda$CDM model. Results for the seven-parameter and the five-parameter untilted non-flat $\Lambda$CDM+$A_L$ model for P18 and BAO data, respectively, can be seen in the lower half of Table \ref{tab:para_NL_BAO}. 
Again we do not observe a reduction in the tension for the primary parameters $\Omega_b h^2$ (1.2$\sigma$) and $\Omega_c h^2$ (1.4$\sigma$) compared with the results found for the $A_L =1$ case. On the other hand, there is an important decrease for the derived parameters $\Omega_m$, $H_0$, and $\sigma_8$, the disagreement now being 0.94$\sigma$, 1.5$\sigma$, and 1.8$\sigma$, respectively. This is probably caused by the increase in the size of the error bars in the $A_L$-varying P18 case, with respect to the corresponding values obtained with $A_L=1$. For the BAO data primary spatial curvature parameter we find $\Omega_k=-0.050\pm 0.060$, which is 0.83$\sigma$ away from flat and in 0.52$\sigma$ tension with the P18 value $\Omega_k = -0.12\pm 0.12$, which is 1.0$\sigma$ in favor of a closed geometry. Comparing the seven-parameter and the five-parameter tilted non-flat $\Lambda$CDM Planck $P(q)$ model primary cosmological parameter constraints for P18 and BAO data, shown in the upper half of Table \ref{tab:para_NL_ns_BAO}, we see that the values of $\Omega_b h^2$ and $\Omega_c h^2$ are both in 1.2$\sigma$ tension. The BAO data primary spatial curvature parameter $\Omega_k=-0.046\pm 0.060$ is 0.77$\sigma$ away from flat hypersurfaces and, as in the BAO$^\prime$ case, in good agreement with the P18 result $-0.043\pm 0.017$ (differing by only 0.048$\sigma$), which is 2.5$\sigma$ away from flat. As for the derived parameters $\Omega_m$, $H_0$, and $\sigma_8$, we observe disagreements of 2.7$\sigma$, 2.3$\sigma$, and 1.5$\sigma$. These results reveal an inconsistency between P18 and BAO cosmological constraints that probably means P18 and BAO data should not be used to jointly constrain cosmological parameters in the tilted non-flat $\Lambda$CDM Planck $P(q)$ model. We provide results for the eight-parameter and the five-parameter tilted non-flat $\Lambda$CDM+$A_L$ Planck $P(q)$ model, from P18 and BAO data, in the lower half of Table \ref{tab:para_NL_ns_BAO}. 
For the primary parameters $\Omega_b h^2$ and $\Omega_c h^2$ we find a tension between the P18 and BAO values of 1.3$\sigma$ and 1.2$\sigma$, respectively. For the BAO data primary spatial curvature parameter we find $\Omega_k= -0.045\pm 0.063$, which represents 0.71$\sigma$ evidence in favor of a closed geometry and is in only 0.75$\sigma$ tension with the P18 value $-0.130\pm0.095$, which represents a 1.4$\sigma$ deviation from flat. Regarding the derived $\Omega_m$, $H_0$, and $\sigma_8$ parameters, the observed disagreements are 1.4$\sigma$, 2.5$\sigma$, and 1.8$\sigma$. The tension for $\Omega_m$ has decreased significantly with respect to the $A_L=1$ case; however, overall the disagreements are still large enough that one should not jointly analyze P18 and BAO data in this cosmological model. Comparing the seven-parameter and the five-parameter tilted non-flat $\Lambda$CDM new $P(q)$ model primary cosmological parameter constraints for P18 and BAO data, shown in the upper half of Table \ref{tab:para_TNL_ns_BAO}, we see that the values of $\Omega_b h^2$ and $\Omega_c h^2$ are both in 1.1$\sigma$ disagreement. The BAO data value of the primary spatial curvature parameter is $\Omega_k=-0.051\pm 0.061$, which represents a 0.84$\sigma$ deviation from a flat geometry and is only in 0.29$\sigma$ disagreement with the P18 value $\Omega_k=-0.033\pm 0.014$, which is 2.4$\sigma$ away from flat. Regarding the derived parameters $\Omega_m$, $H_0$, and $\sigma_8$, we find 2.4$\sigma$, 2.1$\sigma$, and 1.2$\sigma$ disagreements between the corresponding values. It is necessary to further study the possible tension between P18 and BAO data within this model. Results for the eight-parameter and the five-parameter tilted non-flat $\Lambda$CDM+$A_L$ new $P(q)$ model, obtained from P18 and BAO data, can be seen in the lower half of Table \ref{tab:para_TNL_ns_BAO}. 
For the primary parameters $\Omega_b h^2$ and $\Omega_c h^2$ the disagreement is 1.1$\sigma$ and 1.2$\sigma$, respectively. For the BAO data primary spatial curvature parameter we find $\Omega_k= -0.055\pm 0.060$, which represents 0.92$\sigma$ evidence in favor of a closed geometry and is in only 0.36$\sigma$ disagreement with the P18 value $-0.10\pm 0.11$, which represents a 0.91$\sigma$ deviation from flat. Regarding the derived parameters $\Omega_m$, $H_0$, and $\sigma_8$, we find 0.92$\sigma$, 1.7$\sigma$, and 1.3$\sigma$ disagreements. The tensions for $H_0$ and $\Omega_m$ have decreased with respect to the $A_L=1$ case; however, they are still large enough to raise doubts about whether we can jointly analyze P18 and BAO data in the context of this model. In Tables \ref{tab:para_FL_BAO}-\ref{tab:para_TNL_ns_BAO} $\chi^2_{\textrm{min}}$ (BAO/BAO$^{\prime}$) is the value of $\chi^2$ for BAO or BAO$^\prime$ data, respectively, at the best-fit position for BAO or BAO$^\prime$ data, while $\chi^2_{\textrm{BAO/BAO}^\prime}$ (at P18 B-F) is the value of $\chi^2$ for BAO or BAO$^{\prime}$ data evaluated at the best-fit position for P18 data. The values of $\chi_{\textrm{min}}^2$ (BAO/BAO$^\prime$) and $\chi_{\textrm{BAO/BAO}^{\prime}}^2$ (at P18 B-F) give a qualitative indication of the agreement or disagreement in the values of the cosmological parameters obtained by considering P18 data and by considering BAO/BAO$^\prime$ data. If the cosmological parameters agree, one might expect that $\chi_{\textrm{min}}^2$ (BAO/BAO$^\prime$)$\simeq$ $\chi_{\textrm{BAO/BAO}^\prime}^2$ (at P18 B-F). We see that this is the case only for the tilted flat $\Lambda$CDM(+$A_L$) models for the BAO$^\prime$ data, but again emphasize that this is only a qualitative probe. Figures \ref{fig:like_FL_BAO}-\ref{fig:like_TNL_Alens_ns1_BAO} show one-dimensional likelihoods and two-dimensional contours for cosmological parameters obtained using P18, BAO$^\prime$, BAO, P18+BAO$^\prime$, and P18+BAO data. 
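The qualitative $\chi^2$-based probe just described can be phrased as a simple rule. The helper below is an illustrative sketch of ours, not part of the analysis pipeline; the $\chi^2$ values and the tolerance `tol` are hypothetical placeholders:

```python
def chi2_probe(chi2_min, chi2_at_other_bf, tol=2.0):
    # If the two data sets prefer consistent parameters, chi^2 for BAO/BAO'
    # data evaluated at the P18 best fit should be close to the BAO/BAO'
    # minimum chi^2; a large excess hints at tension. The tolerance is an
    # arbitrary illustrative choice - this is only a qualitative indicator.
    excess = chi2_at_other_bf - chi2_min
    return excess, excess <= tol

excess, consistent = chi2_probe(10.0, 11.2)   # hypothetical values
print(round(excess, 1), consistent)           # 1.2 True
excess, consistent = chi2_probe(10.0, 25.0)   # hypothetical values
print(round(excess, 1), consistent)           # 15.0 False
```

This mirrors the criterion $\chi_{\textrm{min}}^2$ (BAO/BAO$^\prime$) $\simeq$ $\chi_{\textrm{BAO/BAO}^\prime}^2$ (at P18 B-F) used above; no sharp threshold is implied by the text.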
As mentioned above, BAO$^\prime$ data constraints (shown in green) and BAO data constraints (shown in grey) are comparatively less restrictive than P18 constraints (shown in dark blue), are unable to put tight constraints on the primary cosmological parameters (except for $\Omega_k$ in the three non-flat $\Lambda$CDM$+A_L$ models), in most cases overlap at 2$\sigma$ with each other, and in many cases also overlap with the P18 data constraints. Since the BAO data set contains more measurements than the BAO$^\prime$ data set, the BAO constraints are typically more restrictive, and BAO data, which include $f\sigma_8$ measurements, are much more effective at constraining $\sigma_8$ than are BAO$^\prime$ data. Figures \ref{fig:like_FL_BAO} and \ref{fig:like_FL_Alens_BAO} are for the tilted flat $\Lambda$CDM (+$A_L$) models. The $\sim 1 \sigma$ disagreements between the BAO$^\prime$/BAO constraints and those obtained with P18 data, discussed above, can be clearly seen in the contour plots. For the tilted flat $\Lambda$CDM model the larger disagreements are in panels for derived cosmological parameters, with the largest for $\sigma_8$. Some of these disagreements decrease when the $A_L$ parameter is allowed to vary. Looking at the contour plots for the untilted non-flat $\Lambda$CDM (+$A_L$) models (see Figs.\ \ref{fig:like_NL_BAO} and \ref{fig:like_NL_Alens_BAO}) we observe non-overlapping contours in those panels that involve the derived parameters $\Omega_m$ and $H_0$. These disagreements are smaller when $A_L$ is allowed to vary. This may indicate that in the context of this cosmological model P18 data can be jointly analyzed with BAO$^\prime$/BAO data only when $A_L$ is allowed to vary. 
Figures \ref{fig:like_NL_ns_BAO} and \ref{fig:like_NL_Alens_ns_BAO} show cosmological parameter constraints for the tilted non-flat $\Lambda$CDM (+$A_L$) Planck $P(q)$ models, while the ones for the tilted non-flat $\Lambda$CDM(+$A_L$) new $P(q)$ models are displayed in Figs.\ \ref{fig:like_TNL_ns1_BAO} and \ref{fig:like_TNL_Alens_ns1_BAO}. As expected, considering the results discussed above in this subsubsection, the contour plots for these tilted non-flat models are quite similar. We see that in the panels involving the primary cosmological parameters there is overlap at 1$\sigma$, not only when $A_L$ is allowed to vary but also when $A_L=1$. When $A_L=1$, for the Planck $P(q)$ model some P18 and BAO$^\prime$/BAO data constraint contours that involve $\Omega_m$ and $H_0$ do not overlap even at 2$\sigma$. This is not true for the new $P(q)$ model with $A_L=1$, where overlap is reached at $< 2 \sigma$. This may indicate that the new $P(q)$ model is better able to reconcile P18 and BAO$^\prime$/BAO data. In view of the results discussed in this subsubsection, further tests are needed to properly quantify the level of disagreement, in the context of non-flat models, between P18 data and BAO$^\prime$/BAO data cosmological constraints. We return to this issue in Sec.\ \ref{subsec:data_set_tensions}. \begin{table*} \caption{Mean and 68.3\% confidence limits of tilted flat $\Lambda\textrm{CDM}$ (+$A_L$) model parameters constrained by non-CMB, P18, and P18+non-CMB data sets. $H_0$ has units of km s$^{-1}$ Mpc$^{-1}$. 
} \begin{ruledtabular} \begin{tabular}{lccccc} \\[-1mm] & \multicolumn{3}{c}{Tilted flat $\Lambda$CDM} & \multicolumn{2}{c}{Tilted flat $\Lambda$CDM$+A_L$} \\[+1mm] \cline{2-4}\cline{5-6}\\[-1mm] Parameter & Non-CMB & P18 & P18+non-CMB & P18 & P18+non-CMB \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.0256 \pm 0.0025$ & $0.02236 \pm 0.00015$ & $0.02250 \pm 0.00012$ & $0.02259 \pm 0.00017$ & $0.02265 \pm 0.00014$ \\[+1mm] $\Omega_c h^2$ & $0.1129 \pm 0.0062$ & $0.1202 \pm 0.0014$ & $0.11825 \pm 0.00087$ & $0.1180 \pm 0.0015$ & $0.11736 \pm 0.00092$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.0323 \pm 0.0082$ & $1.04090 \pm 0.00031$ & $1.04112 \pm 0.00029$ & $1.04114 \pm 0.00032$ & $1.04120 \pm 0.00029$ \\[+1mm] $\tau$ & $0.0542$ & $0.0542 \pm 0.0079$ & $0.0548 \pm 0.0076$ & $0.0496 \pm 0.0082$ & $0.0484 \pm 0.0083$ \\[+1mm] $n_s$ & $0.9649$ & $0.9649 \pm 0.0043$ & $0.9692 \pm 0.0036$ & $0.9710 \pm 0.0050$ & $0.9726 \pm 0.0038$ \\[+1mm] $\ln(10^{10} A_s)$ & $3.10 \pm 0.11$ & $3.044 \pm 0.016$ & $3.041 \pm 0.015$ & $3.030 \pm 0.017$ & $3.026 \pm 0.017$ \\[+1mm] $A_{L}$ & $\cdots$ & $\cdots$ & $\cdots$ & $1.181 \pm 0.067$ & $1.201 \pm 0.061$ \\[+1mm] \hline \\[-1mm] $H_0$ & $69.8 \pm 1.7$ & $67.28 \pm 0.61$ & $68.15 \pm 0.39$ & $68.31 \pm 0.71$ & $68.62 \pm 0.43$ \\[+1mm] $\Omega_m$ & $0.286 \pm 0.011$ & $0.3165 \pm 0.0084$ & $0.3045 \pm 0.0051$ & $0.3029 \pm 0.0093$ & $0.2988 \pm 0.0054$ \\[+1mm] $\sigma_8$ & $0.787 \pm 0.027$ & $0.8118 \pm 0.0074$ & $0.8048 \pm 0.0068$ & $0.7997 \pm 0.0088$ & $0.7961 \pm 0.0074$ \\[+1mm] \hline \\[-1mm] $\chi_{\textrm{min}}^2$ (Total) & $1106.54$ & $2765.80$ & $3879.35$ & $2756.12$ & $3865.90$ \\[+1mm] $\chi_{\textrm{min}}^2$ (Non-CMB) & $1106.54$ & $\cdots$ & $1111.57$ & $\cdots$ & $1109.54$ \\[+1mm] $\textrm{DIC}$ & $1114.45$ & $2817.93$ & $3931.02$ & $2812.41$ & $3922.11$ \\[+1mm] $\Delta\textrm{DIC}$ & $\cdots$ & $\cdots$ & $\cdots$ & $-5.52$ & $-8.91$ \\[+1mm] $\textrm{AIC}_c$ & $1114.54$ & $2819.80$ & $3933.35$ & $2812.1$ & $3921.90$ 
\\[+1mm] $\Delta\textrm{AIC}_c$ & $\cdots$ & $\cdots$ & $\cdots$ & $-7.68$ & $-11.45$ \\[+1mm] \end{tabular} \\[+1mm] \begin{flushleft} Note: $\Delta\textrm{DIC}$ ($\Delta\textrm{AIC}_c$) indicates an excess value relative to that of the tilted flat $\Lambda$CDM model constrained with the same data. \end{flushleft} \end{ruledtabular} \label{tab:para_FL_P18_nonCMB} \end{table*} \begin{table*} \caption{Mean and 68.3\% confidence limits of untilted non-flat $\Lambda\textrm{CDM}$ (+$A_L$) model parameters constrained by non-CMB, P18, and P18+non-CMB data sets. $H_0$ has units of km s$^{-1}$ Mpc$^{-1}$. } \begin{ruledtabular} \begin{tabular}{lccccc} \\[-1mm] & \multicolumn{3}{c}{Untilted non-flat $\Lambda$CDM} & \multicolumn{2}{c}{Untilted non-flat $\Lambda$CDM$+A_L$} \\[+1mm] \cline{2-4}\cline{5-6}\\[-1mm] Parameter & Non-CMB & P18 & P18+non-CMB & P18 & P18+non-CMB \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.0243 \pm 0.0033$ & $0.02320 \pm 0.00015$ & $0.02300 \pm 0.00014$ & $0.02320 \pm 0.00015$ & $0.02320 \pm 0.00015$ \\[+1mm] $\Omega_c h^2$ & $0.120 \pm 0.013$ & $0.11098 \pm 0.00088$ & $0.11161 \pm 0.00086$ & $0.11097 \pm 0.00087$ & $0.11097 \pm 0.00085$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.10 \pm 0.10$ & $1.04204 \pm 0.00030$ & $1.04189 \pm 0.00029$ & $1.04202 \pm 0.00030$ & $1.04199 \pm 0.00030$ \\[+1mm] $\tau$ & $0.0543$ & $0.0543 \pm 0.0091$ & $0.0717 \pm 0.0095$ & $0.0540 \pm 0.0087$ & $0.0562 \pm 0.0086$ \\[+1mm] $\Omega_k$ & $-0.033 \pm 0.050$ & $-0.095 \pm 0.024$ & $-0.0062 \pm 0.0014$ & $-0.12 \pm 0.12$ & $-0.0062 \pm 0.0014$ \\[+1mm] $\ln(10^{10} A_s)$ & $2.87 \pm 0.34$ & $3.021 \pm 0.019$ & $3.057 \pm 0.019$ & $3.020 \pm 0.018$ & $3.024 \pm 0.018$ \\[+1mm] $A_{L}$ & $\cdots$ & $\cdots$ & $\cdots$ & $1.08 \pm 0.27$ & $1.324 \pm 0.063$ \\[+1mm] \hline \\[-1mm] $H_0$ & $70.2 \pm 1.7$ & $47.1 \pm 3.2$ & $68.07 \pm 0.56$ & $52 \pm 18$ & $68.45 \pm 0.58$ \\[+1mm] $\Omega_m$ & $0.294 \pm 0.018$ & $0.617 \pm 0.082$ & $0.2920 \pm 0.0050$ & $0.70 \pm 0.42$ & 
$0.2878 \pm 0.0050$ \\[+1mm] $\sigma_8$ & $0.771 \pm 0.034$ & $0.730 \pm 0.017$ & $0.7921 \pm 0.0085$ & $0.721 \pm 0.053$ & $0.7759 \pm 0.0078$ \\[+1mm] \hline \\[-1mm] $\chi_{\textrm{min}}^2$ (Total)& $1106.53$ & $2789.77$ & $3926.27$ & $2787.76$ & $3895.24$ \\[+1mm] $\chi_{\textrm{min}}^2$ (Non-CMB) & $1106.53$ & $\cdots$ & $1107.71$ & $\cdots$ & $1107.45$ \\[+1mm] $\textrm{DIC}$ & $1116.95$ & $2847.14$ & $3982.38$ & $2846.45$ & $3954.21$ \\[+1mm] $\Delta\textrm{DIC}$ & $2.50$ & $29.21$ & $51.36$ & $28.52$ & $23.19$ \\[+1mm] $\textrm{AIC}_c$ & $1116.53$ & $2843.77$ & $3980.27$ & $2843.76$ & $3951.24$ \\[+1mm] $\Delta\textrm{AIC}_c$ & $1.99$ & $23.97$ & $46.92$ & $23.96$ & $17.89$ \\[+1mm] \end{tabular} \\[+1mm] \begin{flushleft} Note: $\Delta\textrm{DIC}$ ($\Delta\textrm{AIC}_c$) indicates an excess value relative to that of the tilted flat $\Lambda$CDM model constrained with the same data. \end{flushleft} \end{ruledtabular} \label{tab:para_NL_P18_nonCMB} \end{table*} \begin{table*} \caption{Mean and 68.3\% confidence limits of Planck-$P(q)$-based tilted non-flat $\Lambda\textrm{CDM}$ ($+A_L$) model parameters constrained by non-CMB, P18, and P18+non-CMB data sets. $H_0$ has units of km s$^{-1}$ Mpc$^{-1}$. 
} \begin{ruledtabular} \begin{tabular}{lccccc} \\[-1mm] & \multicolumn{3}{c}{Tilted non-flat $\Lambda$CDM Planck $P(q)$} & \multicolumn{2}{c}{Tilted non-flat $\Lambda$CDM$+A_L$ Planck $P(q)$} \\[+1mm] \cline{2-4}\cline{5-6}\\[-1mm] Parameter & Non-CMB & P18 & P18+non-CMB & P18 & P18+non-CMB \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.0242 \pm 0.0033$ & $0.02260 \pm 0.00017$ & $0.02248 \pm 0.00015$ & $0.02258 \pm 0.00017$ & $0.02268 \pm 0.00017$ \\[+1mm] $\Omega_c h^2$ & $0.120 \pm 0.012$ & $0.1181 \pm 0.0015$ & $0.1185 \pm 0.0013$ & $0.1183 \pm 0.0015$ & $0.1170 \pm 0.0014$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.10 \pm 0.11$ & $1.04116 \pm 0.00032$ & $1.04107 \pm 0.00031$ & $1.04116 \pm 0.00033$ & $1.04125 \pm 0.00032$ \\[+1mm] $\tau$ & $0.0483$ & $0.0483 \pm 0.0083$ & $0.0543 \pm 0.0077$ & $0.0478 \pm 0.0081$ & $0.0485 \pm 0.0087$ \\[+1mm] $\Omega_k$ & $-0.032 \pm 0.051$ & $-0.043 \pm 0.017$ & $0.0004 \pm 0.0017$ & $-0.130 \pm 0.095$ & $-0.0006 \pm 0.0017$ \\[+1mm] $n_s$ & $0.9706$ & $0.9706 \pm 0.0047$ & $0.9687 \pm 0.0043$ & $0.9704 \pm 0.0048$ & $0.9735 \pm 0.0046$ \\[+1mm] $\ln(10^{10} A_s)$ & $2.90 \pm 0.34$ & $3.027 \pm 0.017$ & $3.040 \pm 0.016$ & $3.027 \pm 0.017$ & $3.025 \pm 0.018$ \\[+1mm] $A_{L}$ & $\cdots$ & $\cdots$ & $\cdots$ & $0.88 \pm 0.15$ & $1.203 \pm 0.062$ \\[+1mm] \hline \\[-1mm] $H_0$ & $70.1 \pm 1.7$ & $54.5 \pm 3.6$ & $68.25 \pm 0.56$ & $45 \pm 11$ & $68.48 \pm 0.56$ \\[+1mm] $\Omega_m$ & $0.294 \pm 0.018$ & $0.481 \pm 0.062$ & $0.3040 \pm 0.0055$ & $0.80 \pm 0.35$ & $0.2994 \pm 0.0055$ \\[+1mm] $\sigma_8$ & $0.771 \pm 0.035$ & $0.775 \pm 0.015$ & $0.8055 \pm 0.0076$ & $0.733 \pm 0.045$ & $0.7946 \pm 0.0088$ \\[+1mm] \hline \\[-1mm] $\chi_{\textrm{min}}^2$ (Total) & $1106.53$ & $2754.73$ & $3878.77$ & $2754.99$ & $3865.53$ \\[+1mm] $\chi_{\textrm{min}}^2$ (Non-CMB) & $1106.53$ & $\cdots$ & $1111.36$ & $\cdots$ & $1109.27$ \\[+1mm] $\textrm{DIC}$ & $1116.92$ & $2810.59$ & $3933.33$ & $2811.63$ & $3924.07$ \\[+1mm] $\Delta\textrm{DIC}$ & 
$2.47$ & $-7.34$ & $2.31$ & $-6.30$ & $-6.95$ \\[+1mm] $\textrm{AIC}_c$ & $1116.53$ & $2810.73$ & $3934.77$ & $2812.99$ & $3923.53$ \\[+1mm] $\Delta\textrm{AIC}_c$ & $1.99$ & $-9.07$ & $1.42$ & $-6.81$ & $-9.82$ \\[+1mm] \end{tabular} \\[+1mm] \begin{flushleft} Note: $\Delta\textrm{DIC}$ ($\Delta\textrm{AIC}_c$) indicates an excess value relative to that of the tilted flat $\Lambda$CDM model constrained with the same data. \end{flushleft} \end{ruledtabular} \label{tab:para_NL_ns_P18_nonCMB} \end{table*} \begin{table*} \caption{Mean and 68.3\% confidence limits of new-$P(q)$-based tilted non-flat $\Lambda\textrm{CDM}$ ($+A_L$) model parameters constrained by non-CMB, P18, and P18+non-CMB data sets. $H_0$ has units of km s$^{-1}$ Mpc$^{-1}$. } \begin{ruledtabular} \begin{tabular}{lccccc} \\[-1mm] & \multicolumn{3}{c}{Tilted non-flat $\Lambda$CDM new $P(q)$} & \multicolumn{2}{c}{Tilted non-flat $\Lambda$CDM$+A_L$ new $P(q)$} \\[+1mm] \cline{2-4}\cline{5-6}\\[-1mm] Parameter & Non-CMB & P18 & P18+non-CMB & P18 & P18+non-CMB \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.0241 \pm 0.0033$ & $0.02255 \pm 0.00017$ & $0.02249 \pm 0.00015$ & $0.02257 \pm 0.00017$ & $0.02269 \pm 0.00016$ \\[+1mm] $\Omega_c h^2$ & $0.120 \pm 0.013$ & $0.1188 \pm 0.0015$ & $0.1184 \pm 0.0013$ & $0.1187 \pm 0.0016$ & $0.1170 \pm 0.0013$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.11 \pm 0.11$ & $1.04109 \pm 0.00032$ & $1.04108 \pm 0.00031$ & $1.04111 \pm 0.00033$ & $1.04125 \pm 0.00032$\\[+1mm] $\tau$ & $0.0525$ & $0.0525 \pm 0.0083$ & $0.0549 \pm 0.0077$ & $0.0512 \pm 0.0086$ & $0.0490 \pm 0.0086$ \\[+1mm] $\Omega_k$ & $-0.036 \pm 0.051$ & $-0.033 \pm 0.014$ & $0.0003 \pm 0.0017$ & $-0.10 \pm 0.11$ & $-0.0006 \pm 0.0017$\\[+1mm] $n_s$ & $0.9654$ & $0.9654 \pm 0.0045$ & $0.9684 \pm 0.0041$ & $0.9654 \pm 0.0057$ & $0.9730 \pm 0.0043$ \\[+1mm] $\ln(10^{10} A_s)$ & $2.88 \pm 0.34$ & $3.039 \pm 0.017$ & $3.042 \pm 0.016$ & $3.036 \pm 0.018$ & $3.026 \pm 0.018$ \\[+1mm] $A_{L}$ & $\cdots$ & $\cdots$ & 
$\cdots$ & $0.94 \pm 0.20$ & $1.204 \pm 0.061$ \\[+1mm] \hline \\[-1mm] $H_0$ & $70.1 \pm 1.8$ & $56.9 \pm 3.6$ & $68.21 \pm 0.55$ & $51 \pm 14$ & $68.47 \pm 0.56$ \\[+1mm] $\Omega_m$ & $0.295 \pm 0.018$ & $0.444 \pm 0.055$ & $0.3043 \pm 0.0054$ & $0.70 \pm 0.43$ & $0.2994 \pm 0.0056$ \\[+1mm] $\sigma_8$ & $0.770 \pm 0.035$ & $0.786 \pm 0.014$ & $0.8057 \pm 0.0074$ & $0.752 \pm 0.052$ & $0.7948 \pm 0.0083$ \\[+1mm] \hline \\[-1mm] $\chi_{\textrm{min}}^2$ (Total)& $1106.49$ & $2757.38$ & $3878.76$ & $2756.33$ & $3865.41$ \\[+1mm] $\chi_{\textrm{min}}^2$ (Non-CMB) & $1106.49$ & $\cdots$ & $1111.36$ & $\cdots$ & $1109.32$ \\[+1mm] $\textrm{DIC}$ & $1117.31$ & $2811.54$ & $3932.56$ & $2814.83$ & $3923.86$ \\[+1mm] $\Delta\textrm{DIC}$ & $2.86$ & $-6.39$ & $1.54$ & $-3.10$ & $-7.16$ \\[+1mm] $\textrm{AIC}_c$ & $1116.49$ & $2813.38$ & $3934.76$ & $2814.33$ & $3923.41$ \\[+1mm] $\Delta\textrm{AIC}_c$ & $1.95$ & $-6.42$ & $1.41$ & $-5.47$ & $-9.94$ \\[+1mm] \end{tabular} \\[+1mm] \begin{flushleft} Note: $\Delta\textrm{DIC}$ ($\Delta\textrm{AIC}_c$) indicates an excess value relative to that of the tilted flat $\Lambda$CDM model constrained with the same data. \end{flushleft} \end{ruledtabular} \label{tab:para_TNL_P18_nonCMB} \end{table*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_FL_P18_nonCMBv2_fig24.pdf}} \caption{Likelihoods of the tilted flat $\Lambda$CDM model parameters constrained by P18, non-CMB, and P18+non-CMB data sets. } \label{fig:like_FL_P18_nonCMB} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_FL_Alens_P18_nonCMBv2_fig25.pdf}} \caption{Likelihoods of the tilted flat $\Lambda$CDM$+A_L$ model parameters constrained by P18, non-CMB, and P18+non-CMB data sets. The likelihoods for the non-CMB data set, which do not depend on $A_L$, are the same as in Fig.\ \ref{fig:like_FL_P18_nonCMB}. 
} \label{fig:like_FL_Alens_P18_nonCMB} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_NL_P18_nonCMBv2_fig26.pdf}} \caption{Likelihoods of the untilted non-flat $\Lambda$CDM model parameters constrained by P18, non-CMB, and P18+non-CMB data sets. } \label{fig:like_NL_P18_nonCMB} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_NL_Alens_P18_nonCMBv2_fig27.pdf}} \caption{Likelihoods of the untilted non-flat $\Lambda$CDM$+A_L$ model parameters constrained by P18, non-CMB, and P18+non-CMB data sets. The likelihoods for the non-CMB data set, which do not depend on $A_L$, are the same as in Fig.\ \ref{fig:like_NL_P18_nonCMB}. } \label{fig:like_NL_Alens_P18_nonCMB} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_NL_ns_P18_nonCMBv2_fig28.pdf}} \caption{Likelihoods of the tilted non-flat $\Lambda$CDM model [with Planck $P(q)$] parameters constrained by P18, non-CMB, and P18+non-CMB data sets. } \label{fig:like_NL_ns_P18_nonCMB} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_NL_Alens_ns_P18_nonCMBv2_fig29.pdf}} \caption{Likelihoods of the tilted non-flat $\Lambda$CDM$+A_L$ model [with Planck $P(q)$] parameters constrained by P18, non-CMB, and P18+non-CMB data sets. The likelihoods for the non-CMB data set, which do not depend on $A_L$, are the same as in Fig.\ \ref{fig:like_NL_ns_P18_nonCMB}. } \label{fig:like_NL_Alens_ns_P18_nonCMB} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_TNL_ns1_P18_nonCMBv2_fig30.pdf}} \caption{Likelihoods of the tilted non-flat $\Lambda$CDM model [with new $P(q)$] parameters constrained by P18, non-CMB, and P18+non-CMB data sets. 
} \label{fig:like_TNL_P18_nonCMB} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_TNL_Alens_ns1_P18_nonCMBv2_fig31.pdf}} \caption{Likelihoods of the tilted non-flat $\Lambda$CDM$+A_L$ model [with new $P(q)$] parameters constrained by P18, non-CMB, and P18+non-CMB data sets. The likelihoods for the non-CMB data set, which do not depend on $A_L$, are the same as in Fig.\ \ref{fig:like_TNL_P18_nonCMB}. } \label{fig:like_TNL_Alens_P18_nonCMB} \end{figure*} \subsubsection{Comparing P18 data and non-CMB data cosmological constraints}\label{sec:P18_vs_non-CMB} In the previous subsubsection we compared BAO and BAO$^\prime$ data cosmological constraints to those obtained from P18 data. In the non-flat models with $A_L =1$ there is somewhat significant disagreement between the values of the cosmological parameters (especially the derived parameters $\Omega_m$, $H_0$, and $\sigma_8$) determined using P18 data and those determined from BAO or BAO$^\prime$ data. This disagreement motivates additional tests to decide whether P18 data and BAO$^{\prime}$/BAO data can be used together to constrain parameters of the non-flat models. While both P18 data and BAO$^{\prime}$/BAO data favor negative $\Omega_k$ values, BAO$^{\prime}$/BAO data favor higher values of $H_0$ and lower values of $\Omega_m$ relative to the values obtained in the P18 analysis. Allowing for a varying $A_L$ parameter resolves these tensions, which may indicate that we can only jointly analyze P18 data and BAO$^{\prime}$/BAO data in the non-flat models when $A_L$ is allowed to vary. To further examine these inconsistencies, in this subsubsection we compare non-CMB data (which include BAO as well as BAO$^\prime$ data) cosmological constraints to those obtained from P18 data. (Prior to jointly analyzing P18+non-CMB data, we need to determine whether P18 and non-CMB data cosmological constraints are mutually consistent.) 
This allows us to examine how the inclusion of SNIa, $H(z)$, and $f\sigma_8$ data affects the P18 data vs.\ BAO$^\prime$/BAO data conclusions of Sec.\ \ref{sec:P18_vs_BAO} and provides a different, perhaps more expansive, test of the consistency of cosmological parameters obtained from high-redshift data and from low-redshift data. In Sec.\ \ref{subsec:data_set_tensions} we use two other statistical estimators to examine whether or not P18 and non-CMB data are in tension. The cosmological parameter mean values and error bars favored by the P18, non-CMB, and P18+non-CMB data sets are summarized in Tables \ref{tab:para_FL_P18_nonCMB}-\ref{tab:para_TNL_P18_nonCMB} for the tilted flat $\Lambda$CDM (+$A_L$) models, the untilted non-flat $\Lambda$CDM (+$A_L$) models, the tilted non-flat $\Lambda$CDM (+$A_L$) models with the Planck $P(q)$, and the tilted non-flat $\Lambda$CDM ($+A_L$) models with the new $P(q)$, respectively. Likelihood distributions of cosmological parameters of the four models with $A_L=1$ and with $A_L$ varying are shown in Figs.\ \ref{fig:like_FL_P18_nonCMB}-\ref{fig:like_TNL_Alens_P18_nonCMB} for the P18, non-CMB, and P18+non-CMB data sets. Since non-CMB data do not have the ability to constrain $\tau$ or $n_s$, we set their values to those found in the corresponding P18 data analysis. $A_L$ does not affect predictions for the non-CMB measurements we study, so we do not include $A_L$ in the non-CMB data analyses. (We saw, in Sec.\ \ref{sec:P18_vs_BAO}, that BAO$^\prime$/BAO data constraints for $A_L = 1$ and for varying $A_L$ were very similar; see Tables \ref{tab:para_FL_BAO}-\ref{tab:para_TNL_ns_BAO}.) 
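The $\Delta$DIC and $\Delta$AIC$_c$ entries in Tables \ref{tab:para_FL_P18_nonCMB}-\ref{tab:para_TNL_P18_nonCMB} are simply differences with respect to the tilted flat $\Lambda$CDM values for the same data, as the table notes state. A short check using the P18 column of Table \ref{tab:para_FL_P18_nonCMB}; the parameter count $k=27$ in the last line is our inference from the table (via AIC $= \chi^2_{\textrm{min}} + 2k$), not a number stated in the text:

```python
# P18 column of Table \ref{tab:para_FL_P18_nonCMB}
dic_flat, dic_AL = 2817.93, 2812.41     # DIC: tilted flat LCDM vs. +A_L
aicc_flat, aicc_AL = 2819.80, 2812.1    # AIC_c: tilted flat LCDM vs. +A_L
chi2_flat = 2765.80                     # chi^2_min, tilted flat LCDM

print(round(dic_AL - dic_flat, 2))      # -5.52, the Delta DIC listed in the table
print(round(aicc_AL - aicc_flat, 2))    # -7.7 (table lists -7.68 from unrounded values)

# For large samples AIC_c ~ chi^2_min + 2k; k = 27 fitted (cosmological plus
# nuisance) parameters is inferred from the table, not stated in the text.
print(round(chi2_flat + 2 * 27, 2))     # 2819.8, matching the AIC_c entry above
```

Negative $\Delta$DIC and $\Delta$AIC$_c$ values indicate that the $+A_L$ model is favored over tilted flat $\Lambda$CDM by these information criteria.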
From Tables \ref{tab:para_NL_P18_nonCMB}-\ref{tab:para_TNL_P18_nonCMB} we see, in the six non-flat $\Lambda$CDM (+$A_L$) models, that the constraints set by non-CMB data on $H_0$ and $\Omega_m$ are tighter than the ones imposed by P18 data, and, in the three non-flat $\Lambda$CDM+$A_L$ models, that the constraints set by non-CMB data on $\Omega_k$ and $\sigma_8$ are tighter than the ones imposed by P18 data. P18 data more restrictively constrain all other parameters in all eight cosmological models. As we discuss below, there is at least one parameter in the three non-flat models with $A_L = 1$ with a more than 3$\sigma$ level of disagreement between P18 data cosmological constraints and non-CMB data cosmological constraints, and one parameter in the tilted flat $\Lambda$CDM model with $A_L = 1$ and in the tilted non-flat $\Lambda$CDM$+A_L$ model with the Planck $P(q)$ with a more than 2$\sigma$ level of disagreement between P18 data cosmological constraints and non-CMB data cosmological constraints. From Tables \ref{tab:para_NL_P18_nonCMB}-\ref{tab:para_TNL_P18_nonCMB} we see that both P18 data and non-CMB data favor negative values of the curvature parameter, with non-CMB data only weakly favoring closed spatial hypersurfaces, at 0.66$\sigma$ to 0.71$\sigma$. However, we should take into account the geometrical degeneracy between $H_0$-$\Omega_k$-$\Omega_m$ and note that, like both BAO$^\prime$ and BAO data, non-CMB data favor higher values of $H_0$ and lower values of $\Omega_m$ than do P18 data, and this is what causes the P18 and non-CMB cosmological constraint differences. The dominant component of non-CMB data is BAO$^\prime$/BAO data. This is why the cosmological constraints obtained from BAO$^\prime$/BAO data are similar to the ones obtained from the complete non-CMB low-redshift data set. However, there are some differences between these sets of constraints that are worth mentioning. 
As expected, the error bars obtained considering non-CMB data are smaller than the ones from BAO$^\prime$/BAO data. While similar values for $\Omega_m$ are found in both cases, the values of $H_0$ favored by non-CMB data are $\sim 1\sigma$ smaller than those favored by BAO$^\prime$/BAO data. BAO$^\prime$ data favor closed spatial hypersurfaces at 0.48$\sigma$ to 0.60$\sigma$ while BAO data favor them by 0.71$\sigma$ to 0.96$\sigma$, which are on either side of the 0.66$\sigma$ to 0.71$\sigma$ favoring of closed spatial hypersurfaces from non-CMB data. We also find smaller values for the $\sigma_8$ parameter when non-CMB data are considered, with BAO$^\prime$ data favoring 1.1$\sigma$ to 1.3$\sigma$ larger values while BAO data favor $\sim 1.3 \sigma$ larger values in the non-flat models and a 1.9$\sigma$ larger value in the tilted flat $\Lambda$CDM case. This might be because the non-CMB data set contains additional $f\sigma_8$ data points that favor lower values of $\sigma_8$ than those in the BAO data set. Comparing the six-parameter and the four-parameter tilted flat $\Lambda$CDM model primary cosmological parameter constraints for P18 and non-CMB data, shown in the left half of Table \ref{tab:para_FL_P18_nonCMB}, we see that the values of $\Omega_b h^2$, $\Omega_c h^2$, and $\theta_\textrm{MC}$ are in mild disagreement, at 1.3$\sigma$, 1.1$\sigma$, and 1.0$\sigma$, respectively. We also observe a more significant 2.2$\sigma$ level of tension in the derived $\Omega_m$ values, while the derived $H_0$ values differ by 1.4$\sigma$ and the $\sigma_8$ values show better agreement, disagreeing by only 0.89$\sigma$. Comparing the seven-parameter and the four-parameter tilted flat $\Lambda$CDM+$A_L$ model primary cosmological parameter constraints for P18 and non-CMB data, shown in Table \ref{tab:para_FL_P18_nonCMB}, we see that the values of $\Omega_b h^2$ and $\theta_{\textrm{MC}}$ are in 1.2$\sigma$ and 1.1$\sigma$ tension, respectively. 
As for the derived parameters, we find that the $\Omega_m$ values differ by 1.2$\sigma$ while the $H_0$ and $\sigma_8$ values are in only 0.81$\sigma$ and 0.45$\sigma$ disagreement. Thus, unlike in the comparisons of BAO and BAO$^\prime$ data with P18 data in Sec.\ \ref{sec:P18_vs_BAO}, the inclusion of a varying $A_L$ reduces the disagreement for all three derived parameters, although less successfully for $\Omega_m$ here in the non-CMB case than in the BAO/BAO$^\prime$ cases there. P18 and non-CMB data results obtained for the six-parameter and the four-parameter untilted non-flat $\Lambda$CDM model, shown in the left half of Table \ref{tab:para_NL_P18_nonCMB}, indicate mostly less significant differences in primary parameters but more significant differences in derived parameters than found in the tilted flat $\Lambda$CDM model. The primary spatial curvature parameter value is $\Omega_k=-0.033\pm 0.050$ for non-CMB data, which is 0.66$\sigma$ away from flat and in 1.1$\sigma$ tension with the P18 value $\Omega_k=-0.095\pm 0.024$, which is 4.0$\sigma$ away from flat. Regarding the derived parameters, we find that the $H_0$, $\Omega_m$, and $\sigma_8$ values are in 6.4$\sigma$, 3.8$\sigma$, and 1.1$\sigma$ disagreement. These results probably mean that P18 and non-CMB data should not be jointly analyzed in the context of the untilted non-flat $\Lambda$CDM model. The results for the seven-parameter and the four-parameter untilted non-flat $\Lambda$CDM+$A_L$ model, obtained considering P18 and non-CMB data, are in Table \ref{tab:para_NL_P18_nonCMB}. There is a slight increase in the disagreement between the values of the primary spatial curvature parameter $\Omega_k$ (now 0.67$\sigma$) and decreases for the derived parameters $H_0$, $\Omega_m$, and $\sigma_8$, which now disagree by 1.0$\sigma$, 0.97$\sigma$, and 0.79$\sigma$, respectively. This is caused by the larger error bars in the $A_L$-varying P18 case compared to the corresponding values obtained with $A_L=1$. 
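The large derived-parameter disagreements just quoted for the untilted non-flat $\Lambda$CDM model can be reproduced directly from the mean $\pm$ error entries of Table \ref{tab:para_NL_P18_nonCMB} with the same two-experiment Gaussian estimate (`n_sigma` is an illustrative helper name of ours):

```python
import math

def n_sigma(mu1, sig1, mu2, sig2):
    # Two-experiment Gaussian tension estimate
    return abs(mu1 - mu2) / math.hypot(sig1, sig2)

# Untilted non-flat LCDM, P18 vs. non-CMB (Table \ref{tab:para_NL_P18_nonCMB})
print(round(n_sigma(47.1, 3.2, 70.2, 1.7), 1))       # H_0:     6.4 sigma
print(round(n_sigma(0.617, 0.082, 0.294, 0.018), 1)) # Omega_m: 3.8 sigma
print(round(n_sigma(0.730, 0.017, 0.771, 0.034), 1)) # sigma_8: 1.1 sigma
```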
According to these results, unlike in the $A_L=1$ case, in the $A_L$-varying case P18 and non-CMB data can probably be jointly analyzed in the context of the untilted non-flat $\Lambda$CDM model. Note that in this case a joint analysis of P18+non-CMB data favors closed geometry at 4.4$\sigma$, with $\Omega_k=-0.0062\pm0.0014$, although because of the lack of the tilt ($n_s$) degree of freedom this untilted non-flat $\Lambda$CDM+$A_L$ model does not provide a good fit to smaller-angular-scale P18 data, which is reflected in the large $\Delta$DIC and $\Delta$AIC$_c$ values for the P18+non-CMB case in the lower half of Table \ref{tab:para_NL_P18_nonCMB}. Comparing the seven-parameter and the five-parameter tilted non-flat $\Lambda$CDM Planck $P(q)$ model primary cosmological parameter constraints for P18 and non-CMB data, we see, in the left half of Table \ref{tab:para_NL_ns_P18_nonCMB}, that the primary parameter values do not disagree much. The non-CMB data primary spatial curvature parameter value $\Omega_k=-0.032\pm 0.051$ is 0.63$\sigma$ away from flat and only in 0.20$\sigma$ tension with the P18 value $\Omega_k=-0.043\pm0.017$, which is 2.5$\sigma$ in favor of closed geometry. The derived parameters $H_0$, $\Omega_m$, and $\sigma_8$ are in 3.9$\sigma$, 2.9$\sigma$, and 0.11$\sigma$ tension. These results show that P18 and non-CMB data cosmological constraints are inconsistent in the tilted non-flat $\Lambda$CDM Planck $P(q)$ model and these data probably should not be used jointly to constrain this model. Looking at Table \ref{tab:para_NL_ns_P18_nonCMB}, we can compare results obtained for the eight-parameter and the five-parameter tilted non-flat $\Lambda$CDM+$A_L$ Planck $P(q)$ model from P18 and non-CMB data, respectively. Aside from $\Omega_k$, the primary parameter disagreements do not change much compared to the $A_L=1$ case. 
For the non-CMB data primary spatial curvature parameter we have $\Omega_k= -0.032\pm 0.051$, which is 0.63$\sigma$ away from flat and in 0.91$\sigma$ tension with the P18 value $-0.130\pm0.095$, which is 1.4$\sigma$ away from flat. Regarding the derived parameters, we find that $H_0$, $\Omega_m$, and $\sigma_8$ are in 2.3$\sigma$, 1.4$\sigma$, and 0.67$\sigma$ disagreement. Compared to the $A_L = 1$ case, in the $A_L$-varying case we find significant reductions in the $H_0$ and $\Omega_m$ tensions, with both disagreements still being significant, which suggests that P18 and non-CMB data should not be jointly analyzed within the tilted non-flat $\Lambda$CDM+$A_L$ Planck $P(q)$ model. Comparing the seven-parameter and the five-parameter tilted non-flat $\Lambda$CDM new $P(q)$ model primary cosmological parameter constraints for P18 and non-CMB data, from the left half of Table \ref{tab:para_TNL_P18_nonCMB} we see that the primary parameter values do not disagree much. The non-CMB data primary spatial curvature parameter value is $\Omega_k=-0.036\pm 0.051$, which is only a 0.71$\sigma$ deviation from flat and, similar to the Planck $P(q)$ model, is only in 0.057$\sigma$ disagreement with the P18 value $-0.033\pm 0.014$, which is 2.4$\sigma$ away from flat. Regarding the derived parameters $H_0$, $\Omega_m$, and $\sigma_8$, we find that their values disagree at 3.3$\sigma$, 2.6$\sigma$, and 0.42$\sigma$, respectively. While the $H_0$ and $\Omega_m$ disagreements are a little smaller than the ones found in the tilted non-flat $\Lambda$CDM Planck $P(q)$ model, they are still large enough to require that we more carefully test whether P18 and non-CMB data can be jointly used to constrain cosmological parameters in this model. The results for the eight-parameter and the five-parameter tilted non-flat $\Lambda$CDM+$A_L$ new $P(q)$ model are in Table \ref{tab:para_TNL_P18_nonCMB}, for P18 and non-CMB data, respectively. 
As happens in the Planck $P(q)$ model, when the $A_L$ parameter is allowed to vary the mild tensions found for the primary parameters, except for $\Omega_k$, do not change much compared to the $A_L=1$ case. For the non-CMB data primary spatial curvature parameter we have $\Omega_k= -0.036\pm 0.051$, which is 0.71$\sigma$ away from flat hypersurfaces and now in 0.53$\sigma$ tension with the P18 value $\Omega_k=-0.10\pm 0.11$, which is 0.91$\sigma$ away from flat. As for the values of the derived parameters $H_0$, $\Omega_m$, and $\sigma_8$, we find disagreements at 1.4$\sigma$, 0.94$\sigma$, and 0.29$\sigma$, respectively. The tensions are reduced with respect to the case with $A_L=1$, due to the increase in the error bars, but the $H_0$ tension is possibly still not small enough to allow the joint use of P18+non-CMB data for constraining tilted non-flat $\Lambda$CDM+$A_L$ new $P(q)$ model cosmological parameters. Figures \ref{fig:like_FL_P18_nonCMB}-\ref{fig:like_TNL_Alens_P18_nonCMB} show one-dimensional likelihoods and two-dimensional contours for cosmological parameters obtained using P18, non-CMB, and P18+non-CMB data. As mentioned above, non-CMB data constraints (shown with unfilled black lines) are comparatively less restrictive than P18 constraints (shown in grey), are unable to put tight constraints on the primary cosmological parameters (except on $\Omega_k$ in the three non-flat $\Lambda$CDM$+A_L$ models), and in many cases they at least partially overlap with the P18 data constraints. Figures \ref{fig:like_FL_P18_nonCMB} and \ref{fig:like_FL_Alens_P18_nonCMB} are for tilted flat $\Lambda$CDM (+$A_L$) models. The $\sim 1 \sigma$ disagreements between the non-CMB constraints and those obtained with P18 data, discussed above, can be clearly seen in the contour plots. For the tilted flat $\Lambda$CDM model the larger disagreements are in panels for derived cosmological parameters, with the largest for $\Omega_m$ and the next largest for $H_0$. 
These disagreements decrease when the $A_L$ parameter is allowed to vary. Looking at the contour plots for the untilted non-flat $\Lambda$CDM (+$A_L$) models (see Figs.\ \ref{fig:like_NL_P18_nonCMB} and \ref{fig:like_NL_Alens_P18_nonCMB}) we observe non-overlapping contours in those panels that involve the derived parameters $H_0$ and $\Omega_m$ or the primary parameter $\Omega_k$, especially in the $\Omega_k$-$\theta_{\rm MC}$ subpanel of Fig.\ \ref{fig:like_NL_P18_nonCMB}. These disagreements largely disappear when $A_L$ is allowed to vary, except perhaps for the $H_0$ one. This may indicate that in the context of this cosmological model we may jointly analyze P18 data with non-CMB data only when $A_L$ is allowed to vary. Figures \ref{fig:like_NL_ns_P18_nonCMB} and \ref{fig:like_NL_Alens_ns_P18_nonCMB} show cosmological parameter constraints for the tilted non-flat $\Lambda$CDM (+$A_L$) Planck $P(q)$ models, while the ones for the tilted non-flat $\Lambda$CDM(+$A_L$) new $P(q)$ models are displayed in Figs.\ \ref{fig:like_TNL_P18_nonCMB} and \ref{fig:like_TNL_Alens_P18_nonCMB}. As expected, considering the results discussed above in this subsubsection, the contour plots for these tilted non-flat models are quite similar. We see in the panels that involve the primary cosmological parameters that there is overlap at 1$\sigma$, not only when $A_L$ is allowed to vary but also when $A_L=1$. When $A_L=1$, for the Planck $P(q)$ model, P18 and non-CMB data constraint contours that involve $H_0$ and $\Omega_m$ do not overlap even at 2$\sigma$. These disagreements are less severe for the new $P(q)$ model with $A_L=1$, where overlap is reached in most cases at a little over $2 \sigma$. In view of the results discussed in this subsubsection, further tests are needed to properly quantify the level of disagreement, in the context of non-flat models, between P18 data and non-CMB data cosmological constraints. We return to this issue in Sec.\ \ref{subsec:data_set_tensions}. 
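The $\Delta\textrm{DIC}$ and $\Delta\textrm{AIC}_c$ entries in the tables below are quoted relative to the tilted flat $\Lambda$CDM model constrained with the same data; we presume the standard definitions, sketched here (with $k$ the number of free parameters and $N$ the number of data points, both assumptions about the exact convention used):

```latex
% Hedged sketch of the standard information-criterion definitions.
\begin{align*}
  \textrm{AIC}_c &= \chi^2_{\textrm{min}} + 2k + \frac{2k(k+1)}{N-k-1}, \\
  \textrm{DIC}   &= \chi^2(\bar{\theta}) + 2p_D ,
  \qquad p_D = \overline{\chi^2} - \chi^2(\bar{\theta}) ,
\end{align*}
% where overbars denote posterior means and p_D is the effective
% number of constrained parameters. The small-sample AIC correction
% term is negligible when N >> k, and a lower criterion value
% indicates a better balance between fit quality and complexity.
```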
\begin{table*} \caption{Mean and 68.3\% confidence limits of tilted flat $\Lambda\textrm{CDM}$ (+$A_L$) model parameters constrained by non-CMB, P18+lensing, and P18+lensing+non-CMB data sets. The Hubble constant $H_0$ has a unit of km s$^{-1}$ Mpc$^{-1}$. } \begin{ruledtabular} \begin{tabular}{lccccc} \\[-1mm] & \multicolumn{3}{c}{Tilted flat $\Lambda$CDM} & \multicolumn{2}{c}{Tilted flat $\Lambda$CDM$+A_L$} \\[+1mm] \cline{2-4}\cline{5-6}\\[-1mm] Parameter & Non-CMB & P18+lensing & P18+lensing+non-CMB & P18+lensing & P18+lensing+non-CMB \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.0257 \pm 0.0026$ & $0.02237 \pm 0.00014$ & $0.02250 \pm 0.00013$ & $0.02251 \pm 0.00017$ & $0.02258 \pm 0.00014$ \\[+1mm] $\Omega_c h^2$ & $0.1128 \pm 0.0061$ & $0.1200 \pm 0.0012$ & $0.11838 \pm 0.00083$ & $0.1183 \pm 0.0015$ & $0.11747 \pm 0.00091$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.0321 \pm 0.0080$ & $1.04091 \pm 0.00031$ & $1.04110 \pm 0.00029$ & $1.04109 \pm 0.00032$ & $1.04118 \pm 0.00029$ \\[+1mm] $\tau$ & $0.0543$ & $0.0543 \pm 0.0073$ & $0.0569 \pm 0.0071$ & $0.0487 \pm 0.0087$ & $0.0476 \pm 0.0085$ \\[+1mm] $n_s$ & $0.9649$ & $0.9649 \pm 0.0041$ & $0.9688 \pm 0.0036$ & $0.9695 \pm 0.0048$ & $0.9715 \pm 0.0038$ \\[+1mm] $\ln(10^{10} A_s)$ & $3.10 \pm 0.11$ & $3.044 \pm 0.014$ & $3.046 \pm 0.014$ & $3.028 \pm 0.018$ & $3.023 \pm 0.018$ \\[+1mm] $A_{L}$ & $\cdots$ & $\cdots$ & $\cdots$ & $1.073 \pm 0.041$ & $1.089 \pm 0.035$ \\[+1mm] \hline \\[-1mm] $H_0$ & $69.9 \pm 1.7$ & $67.34 \pm 0.55$ & $68.09 \pm 0.38$ & $68.14 \pm 0.69$ & $68.52 \pm 0.42$ \\[+1mm] $\Omega_m$ & $0.285 \pm 0.011$ & $0.3155 \pm 0.0075$ & $0.3053 \pm 0.0050$ & $0.3048 \pm 0.0091$ & $0.2998 \pm 0.0053$ \\[+1mm] $\sigma_8$ & $0.787 \pm 0.026$ & $0.8112 \pm 0.0059$ & $0.8072 \pm 0.0058$ & $0.7996 \pm 0.0089$ & $0.7955 \pm 0.0075$ \\[+1mm] \hline \\[-1mm] $\chi_{\textrm{min}}^2$ (Total) & $1106.55$ & $2774.71$ & $3888.41$ & $2771.24$ & $3881.55$ \\[+1mm] $\chi_{\textrm{min}}^2$ (Non-CMB) & $1106.55$ & $\cdots$ 
& $1112.05$ & $\cdots$ & $1109.64$ \\[+1mm] $\textrm{DIC}$ & $1114.38$ & $2826.45$ & $3940.70$ & $2825.53$ & $3935.15$ \\[+1mm] $\Delta\textrm{DIC}$ & $\cdots$ & $\cdots$ & $\cdots$ & $-0.92$ & $-5.55$ \\[+1mm] $\textrm{AIC}_c$ & $1114.55$ & $2828.71$ & $3942.41$ & $2827.24$ & $3937.55$ \\[+1mm] $\Delta\textrm{AIC}_c$ & $\cdots$ & $\cdots$ & $\cdots$ & $-1.47$ & $-4.86$ \\[+1mm] \end{tabular} \\[+1mm] \begin{flushleft} Note: $\Delta\textrm{DIC}$ ($\Delta\textrm{AIC}_c$) indicates an excess value relative to that of the tilted flat $\Lambda$CDM model constrained with the same data. \end{flushleft} \end{ruledtabular} \label{tab:para_FL_P18_lensing_nonCMB} \end{table*} \begin{table*} \caption{Mean and 68.3\% confidence limits of untilted non-flat $\Lambda\textrm{CDM}$ (+$A_L$) model parameters constrained by non-CMB, P18+lensing, and P18+lensing+non-CMB data sets. The Hubble constant $H_0$ has a unit of km s$^{-1}$ Mpc$^{-1}$. } \begin{ruledtabular} \begin{tabular}{lccccc} \\[-1mm] & \multicolumn{3}{c}{Untilted non-flat $\Lambda$CDM} & \multicolumn{2}{c}{Untilted non-flat $\Lambda$CDM$+A_L$} \\[+1mm] \cline{2-4}\cline{5-6}\\[-1mm] Parameter & Non-CMB & P18+lensing & P18+lensing+non-CMB & P18+lensing & P18+lensing+non-CMB \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.0241 \pm 0.0033$ & $0.02307 \pm 0.00014$ & $0.02301 \pm 0.00014$ & $0.02312 \pm 0.00014$ & $0.02310 \pm 0.00014$ \\[+1mm] $\Omega_c h^2$ & $0.121 \pm 0.013$ & $0.11108 \pm 0.00086$ & $0.11176 \pm 0.00083$ & $0.11092 \pm 0.00087$ & $0.11100 \pm 0.00085$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.11 \pm 0.11$ & $1.04196 \pm 0.00029$ & $1.04189 \pm 0.00029$ & $1.04193 \pm 0.00029$ & $1.04195 \pm 0.00030$ \\[+1mm] $\tau$ & $0.0580$ & $0.0580 \pm 0.0087$ & $0.0799 \pm 0.0089$ & $0.0554 \pm 0.0097$ & $0.0566 \pm 0.0083$ \\[+1mm] $\Omega_k$ & $-0.037 \pm 0.050$ & $-0.0322 \pm 0.0075$ & $-0.0065 \pm 0.0014$ & $0.0161 \pm 0.0094$ & $-0.0060 \pm 0.0014$ \\[+1mm] $\ln(10^{10} A_s)$ & $2.84 \pm 0.34$ & $3.027 \pm 0.018$ & 
$3.075 \pm 0.018$ & $3.021 \pm 0.020$ & $3.024 \pm 0.017$ \\[+1mm] $A_{L}$ & $\cdots$ & $\cdots$ & $\cdots$ & $1.44 \pm 0.15$ & $1.162 \pm 0.036$ \\[+1mm] \hline \\[-1mm] $H_0$ & $70.2 \pm 1.7$ & $58.9 \pm 2.1$ & $67.90 \pm 0.56$ & $85.7 \pm 8.5$ & $68.48 \pm 0.58$ \\[+1mm] $\Omega_m$ & $0.295 \pm 0.018$ & $0.390 \pm 0.027$ & $0.2938 \pm 0.0049$ & $0.190 \pm 0.043$ & $0.2874 \pm 0.0050$ \\[+1mm] $\sigma_8$ & $0.769 \pm 0.035$ & $0.765 \pm 0.011$ & $0.7997 \pm 0.0076$ & $0.7805 \pm 0.0094$ & $0.7764 \pm 0.0078$ \\[+1mm] \hline \\[-1mm] $\chi_{\textrm{min}}^2$ (Total)& $1106.51$ & $2813.13$ & $3938.22$ & $2807.91$ & $3915.05$ \\[+1mm] $\chi_{\textrm{min}}^2$ (Non-CMB) & $1106.51$ & $\cdots$ & $1108.60$ & $\cdots$ & $1107.39$ \\[+1mm] $\textrm{DIC}$ & $1117.24$ & $2869.06$ & $3992.71$ & $2856.10$ & $3973.55$ \\[+1mm] $\Delta\textrm{DIC}$ & $2.86$ & $42.61$ & $52.01$ & $29.65$ & $32.85$ \\[+1mm] $\textrm{AIC}_c$ & $1116.51$ & $2867.13$ & $3992.22$ & $2863.91$ & $3971.05$ \\[+1mm] $\Delta\textrm{AIC}_c$ & $1.96$ & $38.42$ & $49.81$ & $35.20$ & $28.64$ \\[+1mm] \end{tabular} \\[+1mm] \begin{flushleft} Note: $\Delta\textrm{DIC}$ ($\Delta\textrm{AIC}_c$) indicates an excess value relative to that of the tilted flat $\Lambda$CDM model constrained with the same data. \end{flushleft} \end{ruledtabular} \label{tab:para_NL_P18_lensing_nonCMB} \end{table*} \begin{table*} \caption{Mean and 68.3\% confidence limits of Planck-$P(q)$-based tilted nonflat $\Lambda\textrm{CDM}$ ($+A_L$) model parameters constrained by non-CMB, P18+lensing, and P18+lensing+non-CMB data sets. The Hubble constant $H_0$ has a unit of km s$^{-1}$ Mpc$^{-1}$. 
} \begin{ruledtabular} \begin{tabular}{lccccc} \\[-1mm] & \multicolumn{3}{c}{Tilted non-flat $\Lambda$CDM Planck $P(q)$} & \multicolumn{2}{c}{Tilted non-flat $\Lambda$CDM$+A_L$ Planck $P(q)$} \\[+1mm] \cline{2-4}\cline{5-6}\\[-1mm] Parameter & Non-CMB & P18+lensing & P18+lensing+non-CMB & P18+lensing & P18+lensing+non-CMB \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.0242 \pm 0.0033$ & $0.02249 \pm 0.00016$ & $0.02249 \pm 0.00015$ & $0.02251 \pm 0.00017$ & $0.02259 \pm 0.00016$ \\[+1mm] $\Omega_c h^2$ & $0.120 \pm 0.013$ & $0.1186 \pm 0.0015$ & $0.1187 \pm 0.0013$ & $0.1183 \pm 0.0015$ & $0.1173 \pm 0.0014$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.10 \pm 0.10$ & $1.04107 \pm 0.00032$ & $1.04106 \pm 0.00031$ & $1.04110 \pm 0.00032$ & $1.04118 \pm 0.00032$ \\[+1mm] $\tau$ & $0.0495$ & $0.0495 \pm 0.0082$ & $0.0563 \pm 0.0073$ & $0.0489 \pm 0.0085$ & $0.0479 \pm 0.0085$ \\[+1mm] $\Omega_k$ & $-0.033 \pm 0.050$ & $-0.0103 \pm 0.0066$ & $0.0004 \pm 0.0017$ & $-0.005 \pm 0.027$ & $-0.0002 \pm 0.0017$ \\[+1mm] $n_s$ & $0.9687$ & $0.9687 \pm 0.0046$ & $0.9681 \pm 0.0044$ & $0.9696 \pm 0.0049$ & $0.9718 \pm 0.0045$ \\[+1mm] $\ln(10^{10} A_s)$ & $2.90 \pm 0.34$ & $3.030 \pm 0.017$ & $3.046 \pm 0.014$ & $3.028 \pm 0.018$ & $3.024 \pm 0.017$ \\[+1mm] $A_{L}$ & $\cdots$ & $\cdots$ & $\cdots$ & $1.09 \pm 0.16$ & $1.090 \pm 0.036$ \\[+1mm] \hline \\[-1mm] $H_0$ & $70.1 \pm 1.8$ & $63.7 \pm 2.3$ & $68.17 \pm 0.55$ & $69 \pm 11$ & $68.49 \pm 0.56$ \\[+1mm] $\Omega_m$ & $0.294 \pm 0.018$ & $0.351 \pm 0.024$ & $0.3051 \pm 0.0053$ & $0.32 \pm 0.11$ & $0.2998 \pm 0.0055$ \\[+1mm] $\sigma_8$ & $0.771 \pm 0.036$ & $0.796 \pm 0.011$ & $0.8080 \pm 0.0066$ & $0.796 \pm 0.016$ & $0.7952 \pm 0.0085$ \\[+1mm] \hline \\[-1mm] $\chi_{\textrm{min}}^2$ (Total) & $1106.51$ & $2771.53$ & $3887.99$ & $2771.14$ & $3881.37$ \\[+1mm] $\chi_{\textrm{min}}^2$ (Non-CMB) & $1106.51$ & $\cdots$ & $1111.94$ & $\cdots$ & $1110.31$ \\[+1mm] $\textrm{DIC}$ & $1117.27$ & $2826.17$ & $3942.07$ & $2827.14$ & $3936.85$ 
\\[+1mm] $\Delta\textrm{DIC}$ & $2.89$ & $-0.28$ & $1.37$ & $0.69$ & $-3.85$ \\[+1mm] $\textrm{AIC}_c$ & $1116.51$ & $2827.53$ & $3943.99$ & $2829.14$ & $3939.37$ \\[+1mm] $\Delta\textrm{AIC}_c$ & $1.96$ & $-1.18$ & $1.58$ & $0.43$ & $-3.04$ \\[+1mm] \end{tabular} \\[+1mm] \begin{flushleft} Note: $\Delta\textrm{DIC}$ ($\Delta\textrm{AIC}_c$) indicates an excess value relative to that of the tilted flat $\Lambda$CDM model constrained with the same data. \end{flushleft} \end{ruledtabular} \label{tab:para_NL_ns_P18_lensing_nonCMB} \end{table*} \begin{table*} \caption{Mean and 68.3\% confidence limits of new-$P(q)$-based tilted nonflat $\Lambda\textrm{CDM}$ ($+A_L$) model parameters constrained by non-CMB, P18+lensing, and P18+lensing+non-CMB data sets. The Hubble constant $H_0$ has a unit of km s$^{-1}$ Mpc$^{-1}$. } \begin{ruledtabular} \begin{tabular}{lccccc} \\[-1mm] & \multicolumn{3}{c}{Tilted non-flat $\Lambda$CDM new $P(q)$} & \multicolumn{2}{c}{Tilted non-flat $\Lambda$CDM$+A_L$ new $P(q)$} \\[+1mm] \cline{2-4}\cline{5-6}\\[-1mm] Parameter & Non-CMB & P18+lensing & P18+lensing+non-CMB & P18+lensing & P18+lensing+non-CMB \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.0242 \pm 0.0033$ & $0.02248 \pm 0.00016$ & $0.02248 \pm 0.00015$ & $0.02252 \pm 0.00017$ & $0.02260 \pm 0.00016$ \\[+1mm] $\Omega_c h^2$ & $0.120 \pm 0.013$ & $0.1188 \pm 0.0014$ & $0.1186 \pm 0.0013$ & $0.1183 \pm 0.0015$ & $0.1174 \pm 0.0013$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.10 \pm 0.10$ & $1.04104 \pm 0.00032$ & $1.04106 \pm 0.00031$ & $1.04108 \pm 0.00032$ & $1.04118 \pm 0.00032$\\[+1mm] $\tau$ & $0.0515$ & $0.0515 \pm 0.0081$ & $0.0566 \pm 0.0074$ & $0.0495 \pm 0.0093$ & $0.0486 \pm 0.0086$ \\[+1mm] $\Omega_k$ & $-0.033 \pm 0.050$ & $-0.0086 \pm 0.0057$ & $0.0003 \pm 0.0017$ & $0.003 \pm 0.016$ & $-0.0002 \pm 0.0017$\\[+1mm] $n_s$ & $0.9654$ & $0.9661 \pm 0.0043$ & $0.9679 \pm 0.0042$ & $0.9688 \pm 0.0053$ & $0.9713 \pm 0.0042$ \\[+1mm] $\ln(10^{10} A_s)$ & $2.89 \pm 0.34$ & $3.035 \pm 
0.016$ & $3.046 \pm 0.014$ & $3.030 \pm 0.019$ & $3.025 \pm 0.017$ \\[+1mm] $A_{L}$ & $\cdots$ & $\cdots$ & $\cdots$ & $1.13 \pm 0.15$ & $1.088 \pm 0.035$ \\[+1mm] \hline \\[-1mm] $H_0$ & $70.1 \pm 1.8$ & $64.2 \pm 2.0$ & $68.13 \pm 0.54$ & $72.0 \pm 9.2$ & $68.48 \pm 0.56$ \\[+1mm] $\Omega_m$ & $0.295 \pm 0.017$ & $0.345 \pm 0.021$ & $0.3054 \pm 0.0051$ & $0.287 \pm 0.076$ & $0.2999 \pm 0.0055$ \\[+1mm] $\sigma_8$ & $0.771 \pm 0.036$ & $0.799 \pm 0.010$ & $0.8079 \pm 0.0067$ & $0.801 \pm 0.011$ & $0.7956 \pm 0.0082$ \\[+1mm] \hline \\[-1mm] $\chi_{\textrm{min}}^2$ (Total)& $1106.49$ & $2771.75$ & $3887.55$ & $2770.45$ & $3880.69$ \\[+1mm] $\chi_{\textrm{min}}^2$ (Non-CMB) & $1106.49$ & $\cdots$ & $1111.65$ & $\cdots$ & $1109.43$ \\[+1mm] $\textrm{DIC}$ & $1117.14$ & $2825.74$ & $3942.22$ & $2827.29$ & $3937.52$ \\[+1mm] $\Delta\textrm{DIC}$ & $2.76$ & $-0.71$ & $1.52$ & $0.84$ & $-3.18$ \\[+1mm] $\textrm{AIC}_c$ & $1116.49$ & $2827.75$ & $3943.55$ & $2828.45$ & $3938.69$ \\[+1mm] $\Delta\textrm{AIC}_c$ & $1.94$ & $-0.96$ & $1.14$ & $-0.26$ & $-3.72$ \\[+1mm] \end{tabular} \\[+1mm] \begin{flushleft} Note: $\Delta\textrm{DIC}$ ($\Delta\textrm{AIC}_c$) indicates an excess value relative to that of the tilted flat $\Lambda$CDM model constrained with the same data. \end{flushleft} \end{ruledtabular} \label{tab:para_TNL_P18_lensing_nonCMB} \end{table*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_FL_P18_lensing_nonCMBv2_fig32.pdf}} \caption{Likelihoods of the tilted flat $\Lambda$CDM model parameters constrained by P18+lensing, non-CMB, and P18+lensing+non-CMB data sets. } \label{fig:like_FL_P18_lensing_nonCMB} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_FL_Alens_P18_lensing_nonCMBv2_fig33.pdf}} \caption{Likelihoods of the tilted flat $\Lambda$CDM$+A_L$ model parameters constrained by P18+lensing, non-CMB, and P18+lensing+non-CMB data sets. 
The likelihoods for the non-CMB data set, which do not depend on $A_L$, are the same as in Fig.\ \ref{fig:like_FL_P18_lensing_nonCMB}. } \label{fig:like_FL_Alens_P18_lensing_nonCMB} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_NL_P18_lensing_nonCMBv2_fig34.pdf}} \caption{Likelihoods of the untilted non-flat $\Lambda$CDM model parameters constrained by P18+lensing, non-CMB, and P18+lensing+non-CMB data sets. } \label{fig:like_NL_P18_lensing_nonCMB} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_NL_Alens_P18_lensing_nonCMBv2_fig35.pdf}} \caption{Likelihoods of the untilted non-flat $\Lambda$CDM$+A_L$ model parameters constrained by P18+lensing, non-CMB, and P18+lensing+non-CMB data sets. The likelihoods for the non-CMB data set, which do not depend on $A_L$, are the same as in Fig.\ \ref{fig:like_NL_P18_lensing_nonCMB}. } \label{fig:like_NL_Alens_P18_lensing_nonCMB} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_NL_ns_P18_lensing_nonCMBv2_fig36.pdf}} \caption{Likelihoods of the tilted non-flat $\Lambda$CDM model [with Planck $P(q)$] parameters constrained by P18+lensing, non-CMB, and P18+lensing+non-CMB data sets. } \label{fig:like_NL_ns_P18_lensing_nonCMB} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_NL_Alens_ns_P18_lensing_nonCMBv2_fig37.pdf}} \caption{Likelihoods of the tilted non-flat $\Lambda$CDM$+A_L$ model [with Planck $P(q)$] parameters constrained by P18+lensing, non-CMB, and P18+lensing+non-CMB data sets. The likelihoods for the non-CMB data set, which do not depend on $A_L$, are the same as in Fig.\ \ref{fig:like_NL_ns_P18_lensing_nonCMB}. 
} \label{fig:like_NL_Alens_ns_P18_lensing_nonCMB} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_TNL_ns1_P18_lensing_nonCMBv2_fig38.pdf}} \caption{Likelihoods of the tilted non-flat $\Lambda$CDM model [with new $P(q)$] parameters constrained by P18+lensing, non-CMB, and P18+lensing+non-CMB data sets. } \label{fig:like_TNL_P18_lensing_nonCMB} \end{figure*} \begin{figure*}[htbp] \centering \mbox{\includegraphics[width=170mm]{plot_triangle_TNL_Alens_ns1_P18_lensing_nonCMBv2_fig39.pdf}} \caption{Likelihoods of the tilted non-flat $\Lambda$CDM$+A_L$ model [with new $P(q)$] parameters constrained by P18+lensing, non-CMB, and P18+lensing+non-CMB data sets. The likelihoods for the non-CMB data set, which do not depend on $A_L$, are the same as in Fig.\ \ref{fig:like_TNL_P18_lensing_nonCMB}. } \label{fig:like_TNL_Alens_P18_lensing_nonCMB} \end{figure*} \subsubsection{Comparing P18+lensing data and non-CMB data cosmological constraints}\label{sec:P18+lensing_vs_non-CMB} In the previous subsubsection we compared non-CMB data cosmological constraints to those obtained from P18 data. We found significant tensions in the non-flat models with $A_L=1$ for the derived parameters $H_0$ and $\Omega_m$, and a 2.2$\sigma$ tension between the two $\Omega_m$ values in the flat $\Lambda$CDM model with $A_L = 1$. In view of these results, additional tests are needed if one wants to know whether P18 and non-CMB data can be jointly analyzed to determine cosmological constraints. We study this in Sec.\ \ref{subsec:data_set_tensions}. Interestingly, when the $A_L$ parameter is allowed to vary these tensions decrease significantly, with the largest tension being 2.3$\sigma$ between the two $H_0$ values in the tilted non-flat Planck $P(q)$ model, and the remaining tensions not exceeding 1.4$\sigma$, perhaps an indication that P18 and non-CMB data can be used jointly to constrain cosmological parameters when $A_L$ is allowed to vary. 
In Secs.\ \ref{subsubsec:P18_data_constraints} and \ref{subsubsec:P18_lensing_data_constraints} we discussed the cosmological constraints obtained from P18 data and from P18+lensing data. We shall see, in Sec.\ \ref{subsec:data_set_tensions}, that, in the non-flat models, P18 data and lensing data are less mutually inconsistent than P18 data and non-CMB data are; however, it is necessary to perform an additional test to determine whether or not P18, lensing, and non-CMB data can be jointly analyzed to derive cosmological constraints in the non-flat models with $A_L=1$. In this subsubsection we describe the results of this additional test, which compares non-CMB data cosmological constraints to the ones obtained from P18+lensing data. P18+lensing+non-CMB data cannot be jointly used in the context of a given model unless the cosmological constraints obtained with P18+lensing data and with non-CMB data are consistent. While in the previous subsubsection we labeled the study of P18 vs.\ non-CMB as a study of high-redshift data cosmological constraints vs.\ low-redshift data cosmological constraints, we cannot do that in this subsubsection since most of the information in the lensing data comes from low redshift. The cosmological parameter mean values and error bars favored by the P18+lensing, non-CMB, and P18+lensing+non-CMB data sets are summarized in Tables \ref{tab:para_FL_P18_lensing_nonCMB}-\ref{tab:para_TNL_P18_lensing_nonCMB} for the tilted flat $\Lambda$CDM (+$A_L$) model, the untilted non-flat $\Lambda$CDM (+$A_L$) model, the tilted non-flat $\Lambda$CDM (+$A_L$) Planck $P(q)$ model, and the tilted non-flat $\Lambda$CDM (+$A_L$) new $P(q)$ model. Likelihood distributions of cosmological parameters of the four models with $A_L=1$ and with varying $A_L$ are shown in Figs.\ \ref{fig:like_FL_P18_lensing_nonCMB}-\ref{fig:like_TNL_Alens_P18_lensing_nonCMB} for the P18+lensing, non-CMB, and P18+lensing+non-CMB data sets. 
Since non-CMB data do not have the ability to constrain $\tau$ or $n_s$, in this subsubsection we set their values in the non-CMB data only analyses to those found in the corresponding P18+lensing data analyses. Note that in the previous subsubsection, where the case of P18 data versus non-CMB data was studied, the values of $\tau$ and $n_s$ in the non-CMB data only analyses were set to those found in the corresponding P18 data analyses; nevertheless, the cosmological parameter constraints from the two non-CMB data analyses are practically identical. This indicates that non-CMB data are mostly insensitive to changes in the values of $\tau$ and $n_s$, as we have assumed. Again, we do not include the $A_L$ parameter in the analyses when only non-CMB data are considered, since it does not play a role at low redshift. As in the previous subsubsection, where we compared cosmological constraints from P18 data and from non-CMB data, looking at Tables \ref{tab:para_NL_P18_lensing_nonCMB}-\ref{tab:para_TNL_P18_lensing_nonCMB} we observe that for the six non-flat $\Lambda$CDM (+$A_L$) models the constraints imposed by non-CMB data on the $H_0$ and $\Omega_m$ parameters are tighter than the ones from P18+lensing data. P18+lensing data more restrictively constrain all other parameters in all eight cosmological models. Comparing the six-parameter and the four-parameter tilted flat $\Lambda$CDM model primary cosmological parameter constraints for P18+lensing and non-CMB data, shown in the left part of Table \ref{tab:para_FL_P18_lensing_nonCMB}, we observe that the values of $\Omega_b h^2$, $\Omega_c h^2$, and $\theta_{\textrm{MC}}$ are in mild disagreement, at 1.3$\sigma$, 1.2$\sigma$, and 1.1$\sigma$, respectively. We also see tensions in the derived parameters. In particular, for the non-relativistic matter density parameter $\Omega_m$, the level of tension reaches 2.3$\sigma$, whereas the values of $H_0$ disagree by 1.4$\sigma$. 
From Table \ref{tab:para_FL_P18_lensing_nonCMB} we can compare the seven-parameter and the four-parameter tilted flat $\Lambda$CDM+$A_L$ model cosmological parameter constraints for P18+lensing data and for non-CMB data. Regarding the primary cosmological parameters, we see that the values of $\Omega_b h^2$ and $\theta_{\textrm{MC}}$ disagree at 1.2$\sigma$ and 1.1$\sigma$, respectively. The inclusion of the varying $A_L$ parameter significantly reduces the tension found in the $A_L=1$ case for $\Omega_m$ and $H_0$, with them now disagreeing by only 1.4$\sigma$ and 0.96$\sigma$. We do not find any clear evidence that prevents us from jointly analyzing P18+lensing and non-CMB data, in the context of the tilted flat $\Lambda$CDM model, with and without a varying $A_L$ parameter. The results for the six-parameter and the four-parameter untilted non-flat $\Lambda$CDM model obtained from P18+lensing and non-CMB data are in Table \ref{tab:para_NL_P18_lensing_nonCMB}. While for the primary cosmological parameters we do not observe significant tensions, we do for the derived parameters. The primary spatial curvature parameter is $\Omega_k=-0.037\pm 0.050$ for non-CMB data, which is 0.74$\sigma$ away from flat hypersurfaces and in 0.095$\sigma$ tension with the P18+lensing analysis value $\Omega_k=-0.0322\pm 0.0075$, which is 4.3$\sigma$ away from flat. As for the derived parameters, we find that the $H_0$, $\Omega_m$, and $\sigma_8$ values are in 4.2$\sigma$, 2.9$\sigma$, and 0.11$\sigma$ disagreement. The high level of tension reached for some of the parameters may indicate that P18+lensing and non-CMB data should not be jointly analyzed in the context of the untilted non-flat $\Lambda$CDM model. P18+lensing and non-CMB data results obtained for the seven-parameter and the four-parameter untilted non-flat $\Lambda$CDM+$A_L$ model are shown in Table \ref{tab:para_NL_P18_lensing_nonCMB}. 
Regarding the primary cosmological parameters, except for $\Omega_k$ (discussed next), there are no significant tensions, as was also observed in the $A_L=1$ case. The value of the curvature parameter is $\Omega_k=-0.037\pm 0.050$ (0.74$\sigma$ away from flat) for the non-CMB data and $\Omega_k=0.0161\pm 0.0094$ for the P18+lensing data, which indicates 1.7$\sigma$ evidence in favor of an open spatial geometry. The two $\Omega_k$ values disagree at 1.0$\sigma$. The disagreements in the values of the derived parameters $H_0$, $\Omega_m$, and $\sigma_8$ are 1.8$\sigma$, 2.3$\sigma$, and 0.32$\sigma$, respectively, which clearly represents a reduction with respect to the $A_L=1$ case. This is due to the enlargement of the error bars in the varying $A_L$ case compared to the $A_L=1$ case. Given these results, the P18+lensing and the non-CMB data should perhaps not be used together in the context of the untilted non-flat $\Lambda$CDM+$A_L$ model. Note, however, that when we do so, namely in the P18+lensing+non-CMB analysis, the obtained value for the curvature parameter is $\Omega_k=-0.0060\pm0.0014$, which is 4.3$\sigma$ away from flat. Nonetheless, according to the DIC and $\textrm{AIC}_c$ this model is strongly disfavored when it is compared with the tilted models, due to the lack of the degree of freedom contained in the $n_s$ parameter. The results that allow us to compare the seven-parameter and the five-parameter tilted non-flat $\Lambda$CDM Planck $P(q)$ model primary cosmological parameter constraints for P18+lensing data and non-CMB data can be seen in Table \ref{tab:para_NL_ns_P18_lensing_nonCMB}. There are no significant tensions in the values of the primary cosmological parameters. 
The non-CMB data value of the spatial curvature parameter, $\Omega_k=-0.033\pm 0.050$, is 0.66$\sigma$ away from flat and in 0.45$\sigma$ tension with the value found in the P18+lensing analysis, namely $\Omega_k=-0.0103\pm0.0066$, which represents a 1.6$\sigma$ deviation from flat hypersurfaces. As for the values of the derived parameters $H_0$, $\Omega_m$, and $\sigma_8$, the tensions are 2.2$\sigma$, 1.9$\sigma$, and 0.66$\sigma$, respectively. Given these results, further tests are probably necessary in order to decide whether P18+lensing and non-CMB data can be jointly analyzed in the context of the tilted non-flat $\Lambda$CDM Planck $P(q)$ model. P18+lensing and non-CMB data results obtained for the eight-parameter and the five-parameter tilted non-flat $\Lambda$CDM+$A_L$ Planck $P(q)$ model are shown in Table \ref{tab:para_NL_ns_P18_lensing_nonCMB}. Similar to the $A_L=1$ case, we do not find significant disagreements in the values of the primary cosmological parameters. For the non-CMB data the value of the curvature parameter is $\Omega_k=-0.033\pm 0.050$, which is 0.66$\sigma$ away from flat and in 0.49$\sigma$ tension with the P18+lensing value, $\Omega_k=-0.005\pm 0.027$, which in turn is only 0.19$\sigma$ in favor of a closed geometry. An important reduction in the disagreements found in the derived parameters, with respect to the $A_L=1$ case, is observed. In particular, for $H_0$, $\Omega_m$, and $\sigma_8$ the disagreements found are 0.099$\sigma$, 0.23$\sigma$, and 0.63$\sigma$, respectively. We may say that in the context of the tilted non-flat $\Lambda$CDM+$A_L$ Planck $P(q)$ model we are allowed to jointly analyze P18+lensing data and non-CMB data. By doing so, we get for P18+lensing+non-CMB data no evidence in favor of a non-flat geometry, $\Omega_k=-0.0002\pm 0.0017$, but still a clear 2.5$\sigma$ preference for $A_L\neq 1$ since $A_L=1.090\pm 0.036$. 
Comparing the seven-parameter and the five-parameter tilted non-flat $\Lambda$CDM new $P(q)$ model primary cosmological parameter constraints for P18+lensing data and non-CMB data, from the left part of Table \ref{tab:para_TNL_P18_lensing_nonCMB} we see no important differences in the values of the primary parameters. The value of the spatial curvature parameter is $\Omega_k=-0.033\pm 0.050$ for non-CMB data, which represents a 0.66$\sigma$ deviation from flat and is in 0.48$\sigma$ tension with the value obtained in the P18+lensing analysis, $\Omega_k=-0.0086\pm 0.0057$, which is 1.5$\sigma$ away from flat hypersurfaces. Regarding the derived parameters $H_0$, $\Omega_m$, and $\sigma_8$, the disagreements found are 2.2$\sigma$, 1.9$\sigma$, and 0.75$\sigma$, respectively. In light of these results, we deem that more testing is required to decide whether the P18+lensing and non-CMB data can be jointly analyzed in the context of the tilted non-flat $\Lambda$CDM new $P(q)$ model. In Table \ref{tab:para_TNL_P18_lensing_nonCMB} we provide the results for the eight-parameter and the five-parameter tilted non-flat $\Lambda$CDM+$A_L$ new $P(q)$ model when P18+lensing data and non-CMB data are considered. The tensions found for the values of the primary cosmological parameters are not significant, as in the $A_L=1$ case. When non-CMB data are considered we find $\Omega_k=-0.033\pm 0.050$, which provides 0.66$\sigma$ evidence in favor of a closed geometry and is in 0.69$\sigma$ tension with the P18+lensing data value, $\Omega_k=0.003\pm 0.016$, which shows only a 0.19$\sigma$ preference for an open geometry. As for the derived parameters $H_0$, $\Omega_m$, and $\sigma_8$, the level of agreement is very good, with the corresponding values only in 0.20$\sigma$, 0.10$\sigma$, and 0.80$\sigma$ tension, respectively. 
These results seem to indicate that in the context of the tilted non-flat $\Lambda$CDM+$A_L$ new $P(q)$ model P18+lensing data and non-CMB data can be jointly analyzed. In the P18+lensing+non-CMB analysis we find $\Omega_k=-0.0002\pm 0.0017$, so there is no clear preference for an open or a closed geometry. On the other hand, we find $A_L=1.088\pm 0.035$, which is 2.5$\sigma$ away from the predicted value $A_L=1$. In Figs.\ \ref{fig:like_FL_P18_lensing_nonCMB}-\ref{fig:like_TNL_Alens_P18_lensing_nonCMB} we show the one-dimensional likelihoods and the two-dimensional contours for the cosmological parameters obtained using P18+lensing, non-CMB, and P18+lensing+non-CMB data. The constraints coming from non-CMB data (shown with unfilled black lines) are less restrictive than the P18+lensing constraints (shown in grey), except for the $H_0$ and $\Omega_m$ constraints in the six non-flat models. Except for the untilted non-flat model with $A_L=1$, we observe at least partial overlaps between the three sets of contours even when the $A_L$ parameter is not allowed to vary. The contour plots for the tilted flat $\Lambda$CDM (+$A_L$) models are in Figs.\ \ref{fig:like_FL_P18_lensing_nonCMB} and \ref{fig:like_FL_Alens_P18_lensing_nonCMB}. The aforementioned $\sim$1$\sigma$ disagreements (and the $\sim 2\sigma$ $\Omega_m$ disagreement in the $A_L = 1$ case) found when we compared the one-dimensional P18+lensing and non-CMB likelihood results can also be observed here. The largest tensions are seen in the panels containing one of the derived parameters, and the inclusion of the varying $A_L$ parameter in the analysis clearly reduces them. Looking at the contour plots for the untilted non-flat $\Lambda$CDM (+$A_L$) models displayed in Figs.\ \ref{fig:like_NL_P18_lensing_nonCMB} and \ref{fig:like_NL_Alens_P18_lensing_nonCMB}, we observe significantly non-overlapping contours whenever the primary parameter $\Omega_k$ or one of the derived parameters $H_0$ or $\Omega_m$ is involved. 
This reinforces the idea that when $A_L$ is not allowed to vary the P18+lensing and non-CMB data sets cannot be analyzed together in the untilted non-flat $\Lambda$CDM model. Quite different results are found when we do allow $A_L$ to vary: the disagreements observed in the $A_L=1$ case largely disappear. Therefore we may say that in the context of this varying $A_L$ cosmological model we can jointly analyze P18+lensing and non-CMB data. Figures \ref{fig:like_NL_ns_P18_lensing_nonCMB} and \ref{fig:like_NL_Alens_ns_P18_lensing_nonCMB} show cosmological parameter constraints for the tilted non-flat $\Lambda$CDM (+$A_L$) models, while those for the tilted non-flat $\Lambda$CDM (+$A_L$) new $P(q)$ models are displayed in Figs.\ \ref{fig:like_TNL_P18_lensing_nonCMB} and \ref{fig:like_TNL_Alens_P18_lensing_nonCMB}. The contour plots for these tilted non-flat models are very similar, as expected given the results discussed above in this subsubsection. In both cases, whether or not $A_L$ is allowed to vary, we observe overlaps between the primary-parameter panel contours at 1$\sigma$. When $A_L=1$ the overlap improves in the current P18+lensing data vs.\ non-CMB data case, compared to the P18 data vs.\ non-CMB data case of the previous subsubsection: for both the Planck $P(q)$ model and the new $P(q)$ model the contours now overlap at 2$\sigma$. On the other hand, in the varying $A_L$ case we observe overlaps at 1$\sigma$, even in those panels that involve some of the derived parameters. As in the P18 data vs.\ non-CMB data cosmological constraints comparison discussed in the previous subsubsection, further tests are needed to determine whether or not P18+lensing data and non-CMB data can be jointly analyzed in the context of the non-flat models under study. We discuss this issue in detail in Sec.\ \ref{subsec:data_set_tensions}. 
\begin{table*} \caption{Individual and total $\chi^2$ values for the best-fit flat and non-flat $\Lambda\textrm{CDM}$ inflation models. The deviance information criterion (DIC) and the Akaike information criterion (AIC$_c$) are also listed. } {\scriptsize \begin{ruledtabular} \begin{tabular}{lcccccccccccccc} Data sets & $\chi_{\textrm{plik}}^2$ & $\chi_{\textrm{lowl}}^2$ & $\chi_{\textrm{simall}}^2$ & $\chi_{\textrm{lensing}}^2$ & $\chi_{\textrm{prior}}^2$ & $\chi_{\textrm{SN}}^2$ & $\chi_{\textrm{BAO}}^2$ & $\chi_{H(z)}^2$ & $\chi_{f\sigma_8}^2$ & $\chi^2_{\textrm{total}}$ & $\Delta\chi^2$ & DIC & $\Delta\textrm{DIC}$ & $\Delta\textrm{AIC}_c$ \\[+0mm] \hline \\[-2mm] \multicolumn{15}{c}{Tilted flat $\Lambda\textrm{CDM}$ model} \\ \hline \\[-2mm] P18 & 2344.71 & 23.39 & 396.05 & & 1.66 & & & & & 2765.80 & & 2817.93 & & \\[+1mm] P18+lensing & 2344.66 & 23.39 & 396.06 & 8.79 & 1.82 & & & & & 2774.71 & & 2826.45 & & \\[+1mm] P18+lensing+non-CMB & 2346.61 & 22.64 & 396.34 & 8.94 & 1.84 & 1058.99 & 20.10 & 14.76 & 18.20 & 3888.41 & & 3940.70 & & \\[+1mm] \hline\\[-2mm] \multicolumn{15}{c}{Tilted flat $\Lambda$\textrm{CDM}$+A_L$ model} \\ \hline \\[-2mm] P18 & 2337.23 & 21.92 & 395.66 & & 1.31 & & & & & 2756.12 & $-9.68$ & 2812.41 & $-5.52$ & $-7.68$ \\[+1mm] P18+lensing & 2341.62 & 22.29 & 395.68 & 9.94 & 1.71 & & & & & 2771.24 & $-3.47$ & 2825.53 & $-0.92$ & $-1.47$ \\[+1mm] P18+lensing+non-CMB & 2342.43 & 21.99 & 395.68 & 9.74 & 2.06 & 1059.14 & 21.46 & 14.73 & 14.31 & 3881.55 & $-6.86$ & 3935.15 & $-5.55$ & $-4.86$ \\[+1mm] \hline \\[-2mm] \multicolumn{15}{c}{Untilted non-flat $\Lambda\textrm{CDM}$ model} \\ \hline \\[-2mm] P18 & 2369.95 & 22.22 & 395.69 & & 1.92 & & & & & 2789.77 & $23.97$ & 2847.14 & $29.21$ & $23.97$ \\[+1mm] P18+lensing & 2383.06 & 20.88 & 396.13 & 10.63 & 2.43 & & & & & 2813.13 & $38.42$ & 2869.06 & $42.61$ & $38.42$ \\[+1mm] P18+lensing+non-CMB & 2396.21 & 19.89 & 399.59 & 11.65 & 2.28 & 1059.51 & 20.65 & 15.68 & 12.77 & 3938.22 & $49.81$ & 
3992.71 & $52.01$ & $49.81$ \\[+1mm] \hline\\[-2mm] \multicolumn{15}{c}{Untilted non-flat $\Lambda$\textrm{CDM}$+A_L$ model} \\ \hline \\[-2mm] P18 & 2369.32 & 20.34 & 395.87 & & 2.23 & & & & & 2787.76 & $21.96$ & 2846.45 & $28.52$ & $23.96$ \\[+1mm] P18+lensing & 2378.87 & 20.09 & 395.65 & 11.25 & 2.05 & & & & & 2807.91 & $33.20$ & 2856.10 & $29.65$ & $35.20$ \\[+1mm] P18+lensing+non-CMB & 2379.11 & 19.95 & 395.82 & 10.72 & 2.06 & 1060.16 & 22.50 & 15.47 & 9.26 & 3915.05 & $26.64$ & 3973.55 & $32.85$ & $28.64$ \\[+1mm] \hline \\[-2mm] \multicolumn{15}{c}{Tilted non-flat $\Lambda\textrm{CDM}$ model [Planck $P(q)$]} \\ \hline \\[-2mm] P18 & 2336.45 & 21.29 & 395.60 & & 1.38 & & & & & 2754.73 & $-11.07$ & 2810.59 & $-7.34$ & $-9.07$ \\[+1mm] P18+lensing & 2342.29 & 21.86 & 395.66 & 10.09 & 1.63 & & & & & 2771.53 & $-3.18$ & 2826.17 & $-0.28$ & $-1.18$ \\[+1mm] P18+lensing+non-CMB & 2345.82 & 22.90 & 396.53 & 8.92 & 1.88 & 1059.00 & 20.09 & 14.70 & 18.15 & 3887.99 & $-0.42$ & 3942.07 & $1.37$ & $1.58$ \\[+1mm] \hline\\[-2mm] \multicolumn{15}{c}{Tilted non-flat $\Lambda$\textrm{CDM}$+A_L$ model [Planck $P(q)$]} \\ \hline \\[-2mm] P18 & 2336.57 & 21.51 & 395.61 & & 1.29 & & & & & 2754.99 & $-10.81$ & 2811.63 & $-6.30$ & $-6.81$ \\[+1mm] P18+lensing & 2341.32 & 22.55 & 395.71 & 9.44 & 2.12 & & & & & 2771.14 & $-3.57$ & 2827.14 & $0.69$ & $0.43$ \\[+1mm] P18+lensing+non-CMB & 2341.91 & 22.16 & 395.77 & 9.62 & 1.60 & 1059.06 & 20.61 & 14.74 & 15.90 & 3881.37 & $-7.04$ & 3936.85 & $-3.85$ & $-3.04$ \\[+1mm] \hline\\[-2mm] \multicolumn{15}{c}{Tilted non-flat $\Lambda\textrm{CDM}$ model [new $P(q)$]} \\ \hline \\[-2mm] P18 & 2338.26 & 21.42 & 396.28 & & 1.42 & & & & & 2757.38 & $-8.42$ & 2811.54 & $-6.39$ & $-6.42$ \\[+1mm] P18+lensing & 2342.99 & 21.18 & 395.90 & 9.92 & 1.76 & & & & & 2771.75 & $-2.96$ & 2825.74 & $-0.71$ & $-0.96$ \\[+1mm] P18+lensing+non-CMB & 2346.63 & 22.53 & 396.30 & 8.91 & 1.53 & 1058.99 & 20.12 & 14.75 & 17.79 & 3887.55 & $-0.86$ & 3942.22 & $1.52$ & 
$1.14$ \\[+1mm] \hline\\[-2mm] \multicolumn{15}{c}{Tilted non-flat $\Lambda\textrm{CDM}$+$A_L$ model [new $P(q)$]} \\ \hline \\[-2mm] P18 & 2337.56 & 21.31 & 395.93 & & 1.52 & & & & & 2756.33 & $-9.47$ & 2814.83 & $-3.10$ & $-5.47$ \\[+1mm] P18+lensing & 2341.21 & 22.62 & 395.75 & 9.49 & 1.37 & & & & & 2770.45 & $-4.26$ & 2827.29 & $0.84$ & $-0.26$ \\[+1mm] P18+lensing+non-CMB & 2342.85 & 21.35 & 395.81 & 9.72 & 1.53 & 1059.13 & 21.27 & 14.77 & 14.27 & 3880.69 & $-7.72$ & 3937.52 & $-3.18$ & $-3.72$ \\[+1mm] \end{tabular} \\[+1mm] Note: $\Delta\chi^2$, $\Delta\textrm{DIC}$, and $\Delta\textrm{AIC}_c$ indicate the values relative to those of the tilted flat $\Lambda\textrm{CDM}$ model for the same combination of data sets. For the tilted flat $\Lambda$CDM model AIC$_c=2819.8$ (P18), $2828.7$ (P18+lensing), and $3942.4$ (P18+lensing+non-CMB). All $\chi^2$ values are computed at the corresponding model best-fit cosmological parameter values. \end{ruledtabular} } \label{tab:chi2_lcdm} \end{table*} \subsection{Model selection} \label{subsec:model_selection} In Sec.\ \ref{subsec:cosmological_parameters} we determined and discussed the cosmological parameter mean values and error bars in four pairs of cosmological models (with $A_L = 1$ and with varying $A_L$) from P18, P18+lensing, and P18+lensing+non-CMB data, as well as the differences in the values of the cosmological parameters obtained from P18 data and BAO/BAO$^\prime$ data, from P18 data and non-CMB data, and from P18+lensing data and non-CMB data. In this subsection we utilize the DIC, eq.\ \eqref{eq:DIC}, to determine which of these models best fits various combinations of these data sets. For the P18, P18+lensing, and P18+lensing+non-CMB data sets, the values of $\Delta \textrm{AIC}_c$, $\Delta \textrm{DIC}$, and the individual contributions to $\chi^2_{\textrm{total}}$ for each model are listed in Table \ref{tab:chi2_lcdm}. 
Here the Planck CMB data $\chi^2$s are: $\chi^2_{\textrm{plik}}$ from the TT power spectra $30\leq \ell\leq 2508$ multipoles, the TE data $30\leq \ell \leq 1996$ multipoles, and the EE data $30\leq \ell \leq 1996$ multipoles; $\chi^2_{\textrm{lowl}}$ from the TT power spectra $2\leq \ell \leq 29$ multipoles; $\chi^2_{\textrm{simall}}$ from the EE power spectra $2\leq \ell \leq 29$ multipoles; $\chi^2_{\textrm{lensing}}$ from the lensing potential power spectrum; and $\chi^2_{\textrm{prior}}$ from the priors for the Planck calibration and dust foreground emission. The P18+BAO/BAO$^{\prime}$ data values of $\Delta \textrm{AIC}_c$ and $\Delta \textrm{DIC}$ are provided in Tables \ref{tab:para_FL_BAO}-\ref{tab:para_TNL_ns_BAO}, whereas the corresponding P18+non-CMB data results can be found in Tables \ref{tab:para_FL_P18_nonCMB}-\ref{tab:para_TNL_P18_nonCMB}. In this subsection we do not discuss the results obtained for the untilted non-flat $\Lambda$CDM models, without and with a varying $A_L$, since, as seen in the results presented in Tables \ref{tab:chi2_lcdm}, \ref{tab:para_NL_BAO}, and \ref{tab:para_NL_P18_nonCMB}, these models are not able to fit CMB data as well as the other (tilted) models do. According to the statistical criteria we use, the untilted non-flat $\Lambda$CDM model is very strongly disfavored when compared with the rest of the models, which allow for a tilt ($n_s$) degree of freedom. We also do not discuss results obtained when only BAO$^\prime$, BAO, (P18) lensing (but see Table \ref{tab:para_lensing} and the brief related discussion in the third paragraph of Sec.\ \ref{subsec:data_set_tensions}), or non-CMB data are considered, because these data sets do not discriminate much between models. 
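The $\Delta\chi^2$ and $\Delta\textrm{AIC}_c$ columns of Table \ref{tab:chi2_lcdm} can be cross-checked against each other: for models differing from the tilted flat $\Lambda$CDM model by $\Delta k$ extra free parameters, $\Delta\textrm{AIC}_c \approx \Delta\chi^2 + 2\Delta k$, since the small-sample correction term in AIC$_c$ is negligible for the large numbers of data points used here. The sketch below (the $\Delta k$ values are inferred from the parameter counts described in the text: $+1$ for an added $\Omega_k$ or $A_L$, $0$ for the untilted non-flat model, which trades $n_s$ for $\Omega_k$) verifies a few P18 rows:

```python
# Check Delta AIC_c ~= Delta chi^2 + 2*Delta k for several P18 rows of the
# chi^2 table; Delta k is the parameter count relative to tilted flat LCDM.
rows = [
    # (model, delta_chi2, delta_k, delta_AICc as quoted in the table)
    ("tilted flat +A_L",            -9.68,  1, -7.68),
    ("tilted non-flat Planck P(q)", -11.07, 1, -9.07),
    ("tilted non-flat new P(q)",    -8.42,  1, -6.42),
    ("untilted non-flat",           23.97,  0, 23.97),
]
for name, dchi2, dk, daicc in rows:
    assert abs((dchi2 + 2 * dk) - daicc) < 1e-9, name
print("all checked Delta AIC_c entries are consistent")
```

The same relation holds for the other rows of the table to the quoted precision.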
From Tables \ref{tab:para_FL_BAO}-\ref{tab:para_TNL_ns_BAO} and \ref{tab:para_FL_P18_nonCMB}-\ref{tab:para_TNL_P18_nonCMB} one sees that for these three data sets the DIC values for all models, including the untilted non-flat $\Lambda$CDM model, are very similar. In order to find more significant differences among the models under study we must include CMB data. In what follows we summarize the results we find for a number of different combinations of data sets for the three tilted models. For clarity we focus on DIC results, since the DIC is a more reliable indicator \cite{DIC, Liddle:2007fy}. The tables also list the AIC$_c$ values. {\bf P18.} The results for these data are listed in Table \ref{tab:chi2_lcdm}. When $A_L = 1$, the non-flat Planck $P(q)$ and the non-flat new $P(q)$ models are strongly favored over the tilted flat model, while the Planck $P(q)$ model is weakly favored over the new $P(q)$ model. When $A_L$ is allowed to vary, the non-flat Planck $P(q)$ model is weakly favored over the flat model, with both models being positively favored over the non-flat new $P(q)$ model. The flat$+A_L$ model is positively favored over the flat one, the Planck $P(q)$ model is weakly favored over the Planck $P(q)+A_L$ one, and the new $P(q)$ model is positively favored over the new $P(q)+A_L$ one. It is interesting that, in contrast to the varying $A_L$ case, when $A_L=1$ both tilted non-flat models are strongly favored over the tilted flat $\Lambda$CDM model. {\bf P18+lensing.} The results for these data are listed in Table \ref{tab:chi2_lcdm}. These data provide only weak discrimination between models. When $A_L = 1$, the non-flat new $P(q)$ model is weakly favored over the non-flat Planck $P(q)$ model and both are weakly favored over the flat model. When $A_L$ is allowed to vary, the tilted flat model is weakly favored over both non-flat models, while the non-flat Planck $P(q)$ model is weakly favored over the non-flat new $P(q)$ model. 
The flat$+A_L$ model is weakly favored over the flat one, the Planck $P(q)$ model is weakly favored over the Planck $P(q)+A_L$ one, and the new $P(q)$ model is weakly favored over the new $P(q)+A_L$ one. {\bf P18+BAO/P18+BAO$^\prime$.} The results for these data are listed in Tables \ref{tab:para_FL_BAO}, \ref{tab:para_NL_ns_BAO}, and \ref{tab:para_TNL_ns_BAO}. We discuss the P18+BAO data and P18+BAO$^\prime$ data results together since the conclusions are very similar. When $A_L = 1$, the tilted flat model is weakly (positively) favored over the non-flat Planck and new $P(q)$ models, with the non-flat new $P(q)$ model weakly favored over the non-flat Planck $P(q)$ model, for P18+BAO (P18+BAO$^\prime$) data. When $A_L$ is allowed to vary, the tilted flat model is positively (weakly) favored over the non-flat Planck (new) $P(q)$ model, and the non-flat new $P(q)$ model is weakly favored over the non-flat Planck $P(q)$ model, for P18+BAO data, while for P18+BAO$^\prime$ data the tilted flat model is weakly favored over both the non-flat Planck and new $P(q)$ models, and the non-flat new $P(q)$ model is weakly favored over the non-flat Planck $P(q)$ model. The flat$+A_L$ model is strongly (positively) favored over the flat one, the Planck $P(q)+A_L$ model is positively (strongly) favored over the Planck $P(q)$ one, and the new $P(q)+A_L$ model is positively (strongly) favored over the new $P(q)$ one for P18+BAO (P18+BAO$^\prime$) data. {\bf P18+non-CMB.} The results for these data are listed in Tables \ref{tab:para_FL_P18_nonCMB}, \ref{tab:para_NL_ns_P18_nonCMB}, and \ref{tab:para_TNL_P18_nonCMB}. Since the dominant component of non-CMB data is BAO/BAO$^\prime$ data, in the P18+non-CMB case we reach conclusions similar to those presented for the P18+BAO/P18+BAO$^\prime$ cases above. 
When $A_L = 1$, the tilted flat model is positively (weakly) favored over the non-flat Planck (new) $P(q)$ model, with the non-flat new $P(q)$ model weakly favored over the non-flat Planck $P(q)$ model. When $A_L$ is allowed to vary, the tilted flat model is weakly favored over the non-flat Planck $P(q)$ and non-flat new $P(q)$ models, with the non-flat new $P(q)$ model weakly favored over the non-flat Planck $P(q)$ model. The flat$+A_L$ model is strongly favored over the flat one, the Planck $P(q)+A_L$ model is strongly favored over the Planck $P(q)$ one, and the new $P(q)+A_L$ model is strongly favored over the new $P(q)$ one. {\bf P18+lensing+non-CMB.} The results for these data are listed in Table \ref{tab:chi2_lcdm}. When $A_L = 1$, the tilted flat model is weakly favored over the non-flat Planck $P(q)$ and non-flat new $P(q)$ models, with the non-flat Planck $P(q)$ model weakly favored over the non-flat new $P(q)$ model. When $A_L$ is allowed to vary, the tilted flat model is weakly (positively) favored over the non-flat Planck (new) $P(q)$ model, with the non-flat Planck $P(q)$ model weakly favored over the non-flat new $P(q)$ model. The flat$+A_L$ model is positively favored over the flat one, the Planck $P(q)+A_L$ model is positively favored over the Planck $P(q)$ one, and the new $P(q)+A_L$ model is positively favored over the new $P(q)$ one. In summary: P18 data and P18+non-CMB data both strongly disfavor the tilted flat $\Lambda$CDM model with $A_L =1$ relative to some of the tilted $\Omega_k < 0$ or varying $A_L$ options; P18+lensing data are largely agnostic; and P18+lensing+non-CMB data, P18+BAO data, and P18+BAO$^\prime$ data all positively favor the varying $A_L$ options over the $A_L=1$ cases. 
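The two statistical estimators used in Sec.\ \ref{subsec:data_set_tensions} can be evaluated directly from tabulated quantities: the note to Table \ref{tab:para_lensing} defines $\mathcal{I}=\exp(-\mathcal{F}/2)$ with $\mathcal{F}=\textrm{DIC}(A\cup B)-\textrm{DIC}(A)-\textrm{DIC}(B)$, and the tabulated $(\sigma, p)$ pairs are related by the usual two-sided Gaussian conversion $p=\textrm{erfc}(\sigma/\sqrt{2})$ (an inference on our part, but one that reproduces the quoted pairs). A sketch using the tilted flat $\Lambda$CDM, Our-priors DIC values from Tables \ref{tab:chi2_lcdm} and \ref{tab:para_lensing}; the small difference from the quoted $\log_{10}\mathcal{I}=1.240$ reflects rounding of the tabulated DICs:

```python
from math import erfc, log, sqrt

def log10_concordance(dic_a, dic_b, dic_ab):
    """log10 I = -F / (2 ln 10), with F = DIC(A+B) - DIC(A) - DIC(B)."""
    f = dic_ab - dic_a - dic_b
    return -f / (2 * log(10))

def two_sided_p(sigma):
    """Two-sided Gaussian probability of a deviation larger than sigma."""
    return erfc(sigma / sqrt(2))

# P18 vs. lensing, tilted flat LCDM, Our priors:
# DIC(P18) = 2817.93, DIC(lensing) = 14.2, DIC(P18+lensing) = 2826.45
print(round(log10_concordance(2817.93, 14.2, 2826.45), 2))  # ~1.23 (table: 1.240)
print(round(100 * two_sided_p(0.718), 1))  # 47.3, matching the quoted p = 47.3%
```

A positive $\log_{10}\mathcal{I}$ indicates concordance between the two data sets and a negative value discordance, with $|\log_{10}\mathcal{I}|>1/2$ ($>1$) deemed substantial (strong).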
\begin{table*} \caption{$\log_{10} \mathcal{I}$ and tension ($\sigma$ and $p$) parameters for P18 data versus lensing data, P18 data versus BAO (BAO$^\prime$) data, P18 data versus non-CMB data, and P18+lensing data versus non-CMB data in the six tilted flat and non-flat $\Lambda$CDM models. Table \ref{tab:Priors} lists the Our, Handley, and Handley+$\Omega_k$ priors. } {\scriptsize \begin{ruledtabular} \begin{tabular}{lccccccc} \\[-1mm] & \multicolumn{7}{c}{Tilted flat $\Lambda$CDM model} \\[+1mm] \cline{2-8}\\[-1mm] Data: & P18 vs.\ lensing & P18 vs.\ lensing & P18 vs.\ lensing & P18 vs.\ BAO & P18 vs.\ BAO$^\prime$ & P18 vs.\ non-CMB & P18+lensing vs.\ non-CMB \\[+1mm] Prior: & Our & Handley & Handley+$\Omega_k$ & Our & Our & Our & Our \\[+1mm] \hline \\[-1mm] $\log_{10} {\mathcal I}$ & $1.240$ & $1.166$ & $\ldots$ & $0.132$ & $0.707$ & $0.296$ & $0.029$ \\[+1mm] $\sigma$ & $0.718$ & $0.390$ & $\ldots$ & $1.533$ & $0.426$ & $1.749$ & $1.747$ \\[+1mm] $p$ (\%) & $47.3$ & $69.7$ & $\ldots$ & $12.5$ & $67.0$ & $8.03$ & $8.06$ \\[+1mm] \hline \hline \\[-1mm] \\[-1mm] & \multicolumn{7}{c}{Tilted flat $\Lambda$CDM$+A_L$ model} \\[+1mm] \cline{2-8}\\[-1mm] Data: & P18 vs.\ lensing & P18 vs.\ lensing & P18 vs.\ lensing & P18 vs.\ BAO & P18 vs.\ BAO$^\prime$ & P18 vs.\ non-CMB & P18+lensing vs.\ non-CMB \\[+1mm] Prior: & Our & Handley & Handley+$\Omega_k$ & Our & Our & Our & Our \\[+1mm] \hline \\[-1mm] $\log_{10} {\mathcal I}$ & $\ldots$ & $\ldots$ & $\ldots$ & $0.286$ & $0.810$ & $1.033$ & $1.033$ \\[+1mm] $\sigma$ & $\ldots$ & $\ldots$ & $\ldots$ & $1.402$ & $0.371$ & $0.835$ & $0.774$ \\[+1mm] $p$ (\%) & $\ldots$ & $\ldots$ & $\ldots$ & $16.1$ & $71.0$ & $40.4$ & $43.9$ \\[+1mm] \hline \hline \\[-1mm] \\[-1mm] & \multicolumn{7}{c}{Tilted non-flat $\Lambda$CDM model [Planck $P(q)$]} \\[+1mm] \cline{2-8}\\[-1mm] Data: & P18 vs.\ lensing & P18 vs.\ lensing & P18 vs.\ lensing & P18 vs.\ BAO & P18 vs.\ BAO$^\prime$ & P18 vs.\ non-CMB & P18+lensing vs.\ non-CMB \\[+1mm] 
Prior: & Our & Handley & Handley+$\Omega_k$ & Our & Our & Our & Our \\[+1mm] \hline \\[-1mm] $\log_{10} {\mathcal I}$ & $-0.486$ & $-0.316$ & $-0.360$ & $-1.236$ & $-0.891$ & $-1.263$ & $0.297$ \\[+1mm] $\sigma$ & $2.479$ & $2.411$ & $2.403$ & $3.000$ & $2.478$ & $3.005$ & $1.837$\\[+1mm] $p$ (\%) & $1.32$ & $1.59$ & $1.63$ & $0.270$ & $1.32$ & $0.265$ & $6.62$ \\[+1mm] \hline \hline \\[-1mm] \\[-1mm] & \multicolumn{7}{c}{Tilted non-flat $\Lambda$CDM$+A_L$ model [Planck $P(q)$]} \\[+1mm] \cline{2-8}\\[-1mm] Data: & P18 vs.\ lensing & P18 vs.\ lensing & P18 vs.\ lensing & P18 vs.\ BAO & P18 vs.\ BAO$^\prime$ & P18 vs.\ non-CMB & P18+lensing vs.\ non-CMB \\[+1mm] Prior: & Our & Handley & Handley+$\Omega_k$ & Our & Our & Our & Our \\[+1mm] \hline \\[-1mm] $\log_{10} {\mathcal I}$ & $\ldots$ & $\ldots$ & $\ldots$ & $0.182$ & $0.847$ & $0.972$ & $1.641$ \\[+1mm] $\sigma$ & $\ldots$ & $\ldots$ & $\ldots$ & $1.460$ & $0.465$ & $0.793$ & $0.516$ \\[+1mm] $p$ (\%) & $\ldots$ & $\ldots$ & $\ldots$ & $14.4$ & $64.2$ & $42.8$ & $60.6$ \\[+1mm] \hline \hline \\[-1mm] \\[-1mm] & \multicolumn{7}{c}{Tilted non-flat $\Lambda$CDM model [new $P(q)$]} \\[+1mm] \cline{2-8}\\[-1mm] Data: & P18 vs.\ lensing & P18 vs.\ lensing & P18 vs.\ lensing & P18 vs.\ BAO & P18 vs.\ BAO$^\prime$ & P18 vs.\ non-CMB & P18+lensing vs.\ non-CMB \\[+1mm] Prior: & Our & Handley & Handley+$\Omega_k$ & Our & Our & Our & Our \\[+1mm] \hline \\[-1mm] $\log_{10} {\mathcal I}$ & $-0.062$ & $-0.089$ & $-0.057$ & $-0.880$ & $-0.526$ & $-0.806$ & $0.143$ \\[+1mm] $\sigma$ & $2.201$ & $1.887$ & $1.843$ & $2.604$ & $2.108$ & $2.577$ & $1.886$ \\[+1mm] $p$ (\%) & $2.77$ & $5.91$ & $6.54$ & $0.922$ & $3.50$ & $0.996$ & $5.93$ \\[+1mm] \hline \hline \\[-1mm] \\[-1mm] & \multicolumn{7}{c}{Tilted non-flat $\Lambda$CDM$+A_L$ model [new $P(q)$]} \\[+1mm] \cline{2-8}\\[-1mm] Data: & P18 vs.\ lensing & P18 vs.\ lensing & P18 vs.\ lensing & P18 vs.\ BAO & P18 vs.\ BAO$^\prime$ & P18 vs.\ non-CMB & P18+lensing vs.\ non-CMB 
\\[+1mm] Prior: & Our & Handley & Handley+$\Omega_k$ & Our & Our & Our & Our \\[+1mm] \hline \\[-1mm] $\log_{10} {\mathcal I}$ & $\ldots$ & $\ldots$ & $\ldots$ & $1.066$ & $1.655$ & $1.798$ & $1.500$ \\[+1mm] $\sigma$ & $\ldots$ & $\ldots$ & $\ldots$ & $1.052$ & $0.145$ & $0.402$ & $0.573$ \\[+1mm] $p$ (\%) & $\ldots$ & $\ldots$ & $\ldots$ & $29.3$ & $88.4$ & $68.7$ & $56.7$ \\[+1mm] \end{tabular} \\[+1mm] \begin{flushleft} Note: The statistical estimator values in the tilted flat $\Lambda$CDM model for the Handley+$\Omega_k$ priors are the same as for the Handley priors because $\Omega_k = 0$ in the flat model. \end{flushleft} \end{ruledtabular} } \label{tab:para_sigmap} \end{table*} \subsection{Data set tensions} \label{subsec:data_set_tensions} In this subsection we check whether there is concordance (discordance) between pairs of some of the data sets we study (in the context of a given cosmological model), as well as whether or not this concordance (discordance) is model independent. To do this, we use the two Sec.\ \ref{sec:method} statistical estimators, in eq.\ \eqref{eq:Tension_estimator_1} and in eqs.\ \eqref{eq:Tension_estimator_2} and \eqref{eq:Tension_estimator_2_sigma}. The values of these statistical estimators for the six tilted flat and non-flat $\Lambda$CDM ($+A_L$) models are listed in Table \ref{tab:para_sigmap}; we do not compute these estimators in the untilted non-flat $\Lambda$CDM model which does not include the tilt ($n_s$) degree of freedom that is strongly favored by data. As in Sec.\ \ref{subsec:model_selection}, here we only study pairs of data sets in which one of the data sets is or includes the P18 data set. Conclusions based on either of the two statistical estimators qualitatively agree, for the five pairs of data sets we compare in this subsection, as discussed next. \begin{table*} \caption{Mean and 68.3\% confidence limits of tilted flat and non-flat $\Lambda\textrm{CDM}$ model parameters constrained by lensing data alone. 
Table \ref{tab:Priors} lists the Our, Handley, and Handley+$\Omega_k$ priors. The Hubble constant $H_0$ has units of km s$^{-1}$ Mpc$^{-1}$. } {\tiny \begin{ruledtabular} \begin{tabular}{lccc} \\[-1mm] & \multicolumn{3}{c}{Lensing data constraints with Our priors} \\[+1mm] \cline{2-4}\\[-1mm] Parameter & Tilted flat $\Lambda$CDM & Tilted non-flat $\Lambda$CDM [Planck $P(q)$] & Tilted non-flat $\Lambda$CDM [new $P(q)$] \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.049 \pm 0.023$ & $0.052 \pm 0.027$ & $0.048 \pm 0.026$ \\[+1mm] $\Omega_c h^2$ & $0.125 \pm 0.032$ & $0.120 \pm 0.023$ & $0.116 \pm 0.022$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.016 \pm 0.022$ & $1.41 \pm 0.33$ & $1.47 \pm 0.27$ \\[+1mm] $\tau$ & $0.0542$ & $0.0483$ & $0.0525$ \\[+1mm] $\Omega_k$ & $\ldots$ & $-0.26 \pm 0.11$ & $-0.279 \pm 0.095$ \\[+1mm] $n_s$ & $0.9649$ & $0.9706$ & $0.9654$ \\[+1mm] $\ln(10^{10} A_s)$ & $3.23 \pm 0.11$ & $3.10 \pm 0.19$ & $3.13 \pm 0.16$ \\[+1mm] \hline \\[-1mm] $H_0$ & $83 \pm 10$ & $65 \pm 17$ & $66 \pm 16$ \\[+1mm] $\Omega_m$ & $0.255 \pm 0.070$ & $0.54 \pm 0.48$ & $0.48 \pm 0.36$ \\[+1mm] $\sigma_8$ & $0.779 \pm 0.082$ & $0.85 \pm 0.16$ & $0.88 \pm 0.15$ \\[+1mm] \hline \\[-1mm] $\chi_{\textrm{min}}^2$ & $3.67$ & $3.12$ & $3.38$ \\[+1mm] $\textrm{DIC}$ (lensing) & $14.2$ & $13.3$ & $13.9$ \\[+1mm] $\textrm{AIC}$ (lensing) & $13.7$ & $15.1$ & $15.4$ \\[+1mm] \hline \\[-1mm] $\textrm{DIC}$ (P18) & $2817.9$ & $2810.6$ & $2811.5$ \\[+1mm] $\textrm{DIC}$ (P18+lensing) & $2826.5$ & $2826.2$ & $2825.7$ \\[+1mm] $\log_{10} \mathcal{I}$ & $1.240$ & $-0.486$ & $-0.062$ \\[+1mm] \hline \hline \\[-1mm] & \multicolumn{3}{c}{Lensing data constraints with Handley priors} \\[+1mm] \cline{2-4}\\[-1mm] Parameter & Tilted flat $\Lambda$CDM & Tilted non-flat $\Lambda$CDM [Planck $P(q)$] & Tilted non-flat $\Lambda$CDM [new $P(q)$] \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $0.0220 \pm 0.0018$ & $0.0221 \pm 0.0017$ & $0.0220 \pm 0.0017$ \\[+1mm] $\Omega_c h^2$ & $0.1121 \pm 0.0093$ & 
$0.1117 \pm 0.0099$ & $0.1134 \pm 0.0097$ \\[+1mm] $100\theta_\textrm{MC}$ & $1.0397 \pm 0.0058$ & $1.0395 \pm 0.0058$ & $1.0395 \pm 0.0059$ \\[+1mm] $\tau$ & $0.21 \pm 0.11$ & $0.20 \pm 0.11$ & $0.21 \pm 0.11$ \\[+1mm] $\Omega_k$ & $\ldots$ & $-0.032 \pm 0.040$ & $-0.029 \pm 0.040$ \\[+1mm] $n_s$ & $0.957 \pm 0.043$ & $0.954 \pm 0.043$ & $0.939 \pm 0.033$ \\[+1mm] $\ln(10^{10} A_s)$ & $3.26 \pm 0.15$ & $3.20 \pm 0.16$ & $3.21 \pm 0.16$ \\[+1mm] \hline \\[-1mm] $H_0$ & $69.7 \pm 3.9$ & $62 \pm 14$ & $63 \pm 14$ \\[+1mm] $\Omega_m$ & $0.281 \pm 0.050$ & $0.40 \pm 0.15$ & $0.39 \pm 0.15$ \\[+1mm] $\sigma_8$ & $0.869 \pm 0.064$ & $0.826 \pm 0.083$ & $0.836 \pm 0.084$ \\[+1mm] \hline \\[-1mm] $\chi_{\textrm{min}}^2$ & $6.81$ & $6.89$ & $6.79$ \\[+1mm] $\textrm{DIC}$ (lensing) & $13.9$ & $14.1$ & $13.8$ \\[+1mm] $\textrm{AIC}$ (lensing) & $20.8$ & $22.9$ & $22.8$ \\[+1mm] \hline \\[-1mm] $\textrm{DIC}$ (P18) & $2817.9$ & $2810.6$ & $2811.5$ \\[+1mm] $\textrm{DIC}$ (P18+lensing) & $2826.5$ & $2826.2$ & $2825.7$ \\[+1mm] $\log_{10} \mathcal{I}$ & $1.166$ & $-0.316$ & $-0.088$ \\[+1mm] \hline \hline \\[-1mm] & \multicolumn{3}{c}{Lensing data constraints with Handley$+\Omega_k$ priors } \\[+1mm] \cline{2-4}\\[-1mm] Parameter & Tilted flat $\Lambda$CDM & Tilted non-flat $\Lambda$CDM [Planck $P(q)$] & Tilted non-flat $\Lambda$CDM [new $P(q)$] \\[+1mm] \hline \\[-1mm] $\Omega_b h^2$ & $\cdots$ & $0.0221 \pm 0.0017$ & $0.0221 \pm 0.0017$ \\[+1mm] $\Omega_c h^2$ & $\cdots$ & $0.1088 \pm 0.0088$ & $0.1104 \pm 0.0089$ \\[+1mm] $100\theta_\textrm{MC}$ & $\cdots$ & $1.0395 \pm 0.0058$ & $1.0396 \pm 0.0059$ \\[+1mm] $\tau$ & $\cdots$ & $0.20 \pm 0.11$ & $0.20 \pm 0.11$ \\[+1mm] $\Omega_k$ & $\cdots$ & $-0.123 \pm 0.095$ & $-0.122 \pm 0.096$ \\[+1mm] $n_s$ & $\cdots$ & $0.951 \pm 0.041$ & $0.939 \pm 0.032$ \\[+1mm] $\ln(10^{10} A_s)$ & $\cdots$ & $3.11 \pm 0.16$ & $3.11 \pm 0.16$ \\[+1mm] \hline \\[-1mm] $H_0$ & $\cdots$ & $48 \pm 15$ & $48 \pm 15$ \\[+1mm] $\Omega_m$ & $\cdots$ & 
$0.70 \pm 0.33$ & $0.71 \pm 0.33$ \\[+1mm] $\sigma_8$ & $\cdots$ & $0.745 \pm 0.096$ & $0.75 \pm 0.10$ \\[+1mm] \hline \\[-1mm] $\chi_{\textrm{min}}^2$ & $\cdots$ & $6.79$ & $6.77$ \\[+1mm] $\textrm{DIC}$ (lensing) & $\cdots$ & $13.9$ & $13.9$ \\[+1mm] $\textrm{AIC}$ (lensing) & $\cdots$ & $22.8$ & $22.8$ \\[+1mm] \hline \\[-1mm] $\textrm{DIC}$ (P18) & $\cdots$ & $2810.6$ & $2811.5$ \\[+1mm] $\textrm{DIC}$ (P18+lensing) & $\cdots$ & $2826.2$ & $2825.7$ \\[+1mm] $\log_{10} \mathcal{I}$ & $\cdots$ & $-0.360$ & $-0.057$ \\[+1mm] \end{tabular} \\[+1mm] \begin{flushleft} Note: $\mathcal{I}=\exp(-\mathcal{F}/2)$ where $\mathcal{F}=\textrm{DIC(P18+lensing)}-\textrm{DIC(P18)}-\textrm{DIC(lensing)}$. The cosmological parameter values in the tilted flat $\Lambda$CDM model for the Handley+$\Omega_k$ priors are the same as for the Handley priors because $\Omega_k = 0$ in the flat model. \end{flushleft} \end{ruledtabular} } \label{tab:para_lensing} \end{table*} \begin{itemize} \item {\bf P18 vs.\ lensing}. Since, as mentioned earlier, lensing data (see Sec.\ \ref{sec:data}) alone do not place significant constraints on cosmological parameters (even if we fix the values of some of them), the role played by the priors is more important in lensing data alone analyses than in other cases. Therefore, in this case, we use three different sets of priors (see Table \ref{tab:Priors}) in order to determine whether and how the lensing data alone cosmological parameter constraints and statistical estimator values depend on the priors used. In all three cases we report results obtained from converged chains. Due to the weak constraining power of lensing data alone, it is not possible to reach convergence when the $A_L$ parameter is allowed to vary. Consequently, we provide results only for the $A_L=1$ cases. Here we first briefly comment on the lensing data alone cosmological parameter constraints, which do depend on the set of priors used, see Table \ref{tab:para_lensing}. 
For instance, if we look at the value of the curvature parameter $\Omega_k$ (which is most affected by the choice of prior) obtained by employing Our priors, for the tilted non-flat $\Lambda$CDM Planck (new) $P(q)$ model, $\Omega_k= -0.26\pm 0.11$ ($\Omega_k=-0.279\pm 0.095$), we find a 1.9$\sigma$ (2.4$\sigma$) difference with the Handley priors analysis value $\Omega_k=-0.032\pm 0.040$ ($\Omega_k=-0.029\pm 0.040$) and a 0.94$\sigma$ (1.2$\sigma$) difference with the Handley+$\Omega_k$ priors analysis value $\Omega_k=-0.123\pm 0.095$ ($\Omega_k=-0.122\pm 0.096$). Reassuringly, we find that when we broaden the prior for $\Omega_k$, as we do when we move from Handley priors to Handley+$\Omega_k$ priors, the results get closer to those obtained with Our priors, the broadest priors we use. Additionally, our lensing data alone analysis (and cosmological parameter constraints) differ from those of the Planck team (Sec.\ 3.2.1 of Ref.\ \cite{Planck:2018lbu}) in that we fix $n_s$ and vary $\Omega_b h^2$ freely, whereas the Planck team use Gaussian priors for $n_s$ and $\Omega_b h^2$. Also, in our analysis we chose $0.2 < h < 1.0$ as the prior, while the Planck team used $0.4 < h < 1.0$. One notable difference is that when Our priors are used the value we find for $\Omega_b h^2$ is larger than the Gaussian prior value ($\Omega_b h^2 = 0.0222 \pm 0.0005$) adopted by the Planck team. In the tilted flat $\Lambda$CDM model we find $\Omega_b h^2=0.049 \pm 0.023$, and similar results are seen in the tilted non-flat models with the Planck and the new $P(q)$. However, when the Handley priors and the Handley+$\Omega_k$ priors are used, due to the very narrow range of $\Omega_b h^2$ (between $0.019$ and $0.025$) in these priors, such a deviation disappears, and $\Omega_b h^2$ is constrained to very consistent values in the tilted flat and the two tilted non-flat $\Lambda$CDM models that are also consistent with the Gaussian prior value adopted by the Planck team. 
Given the significant prior dependence of the lensing data alone cosmological constraints, it is not possible to compare them with the constraints we have derived from the other data sets. On the other hand, looking at Table \ref{tab:para_sigmap}, we do not see significant differences in the statistical estimator values from the lensing data alone analyses for the three different priors. This being the case, in the following, for the sake of consistency with our other discussions, we discuss only the lensing data alone results obtained using Our priors. For the tilted flat $\Lambda$CDM model we do not find discordance between P18 data and lensing data. We find $\textrm{log}_{10}\mathcal{I}=1.240$, which indicates {\it strong} consistency between the two data sets. A similar conclusion is indicated by the other statistical estimator, with $\sigma=0.718$ and $p=47.3\%$. We conclude that P18 and lensing data can be jointly analyzed in the context of the tilted flat $\Lambda$CDM model. Looking at the results for the tilted non-flat $\Lambda$CDM Planck $P(q)$ model, the first statistical estimator gives $\textrm{log}_{10}\mathcal{I}=-0.486$, which is on the verge of indicating {\it substantial} discordance, while the second one gives $\sigma=2.479$ and $p = 1.32\%$, which indicate a moderate tension. These results, however, may not be significant enough to conclude that P18 and lensing data cannot be used together in an analysis of the tilted non-flat $\Lambda$CDM Planck $P(q)$ model. In the tilted non-flat $\Lambda$CDM new $P(q)$ model, the two statistical estimators considered here point to somewhat different conclusions. While for the first one we get $\textrm{log}_{10}\mathcal{I}=-0.062$, which indicates neither consistency nor inconsistency between the two data sets, the second one, with $\sigma=2.201$ and $p = 2.77\%$, indicates a moderate tension. 
Taken together, these results indicate that there is at most moderate inconsistency between P18 and lensing data within the tilted non-flat new $P(q)$ $\Lambda$CDM model. \item {\bf P18 vs.\ BAO$^\prime$}. In the context of the tilted flat $\Lambda$CDM model there is no sign of discordance between these two data sets. We find $\textrm{log}_{10}\mathcal{I}=0.707$, which indicates a {\it substantial} consistency. The other statistical estimator points to a similar conclusion, with $\sigma=0.426$ and $p=67\%$. Very similar results are found for the tilted flat $\Lambda$CDM+$A_L$ model. The value $\textrm{log}_{10}\mathcal{I}=0.810$, once again, indicates a {\it substantial} consistency between P18 and BAO$^\prime$ data, whereas for the second estimator we find $\sigma=0.371$ and $p=71\%$. The P18 and BAO$^\prime$ data sets are mutually consistent and can be jointly analyzed in the tilted flat $\Lambda$CDM (+$A_L$) models. On the other hand, the opposite is true in the tilted non-flat $\Lambda$CDM models (with $A_L = 1$). The comparison of P18 and BAO$^\prime$ data in the tilted non-flat Planck $P(q)$ model results in $\textrm{log}_{10}\mathcal{I}=-0.891$, which indicates a {\it substantial} disagreement between these two data sets. Reassuringly, the second statistical estimator points to the same conclusion, in particular, $\sigma =2.478$ and $p =1.32\%$. As expected (see Sec.\ \ref{sec:P18_vs_BAO}) inclusion of the varying $A_L$ parameter reduces the tensions with respect to the $A_L=1$ case. For the Planck $P(q)$+$A_L$ model we find $\textrm{log}_{10}\mathcal{I}=0.847$, which indicates a {\it substantial} degree of consistency between the two data sets, and $\sigma=0.465$ and $p=64.2\%$; therefore, there is no tension between P18 data and BAO$^\prime$ data in this model. We noted in Sec.\ \ref{sec:P18_vs_BAO} that the tilted non-flat $\Lambda$CDM new $P(q)$ model better accommodates P18 and BAO$^\prime$ data than does the tilted non-flat $\Lambda$CDM Planck $P(q)$ model. 
In particular, in the tilted non-flat $\Lambda$CDM new $P(q)$ model with $A_L=1$ we find $\textrm{log}_{10}\mathcal{I}=-0.526$, which is just within the range of {\it substantial} inconsistency. According to the values obtained for the other statistical estimator, $\sigma = 2.108$ and $p=3.50\%$, there is a moderate tension between the two data sets. The inclusion of a varying $A_L$ parameter in the analysis completely changes the conclusions with respect to the $A_L=1$ case. For the new $P(q)$+$A_L$ model we find $\textrm{log}_{10}\mathcal{I}=1.655$, indicating {\it strong} agreement. The values $\sigma=0.145$ and $p=88.4\%$ support this conclusion. \item {\bf P18 vs.\ BAO}. We comment now on the results obtained when the tension between P18 data and BAO data is studied in the context of the different cosmological models. We note that the BAO data set includes some $f\sigma_8$ data points which, as we shall see, induce some changes in the results with respect to the P18 data and BAO$^\prime$ data case. Neither statistical estimator indicates significant disagreement between P18 data and BAO data for the tilted flat $\Lambda$CDM model with $A_L=1$. For the first one we have $\textrm{log}_{10}\mathcal{I}= 0.132$, which indicates neither consistency nor inconsistency, and this is supported by the second one, for which we obtain $\sigma=1.533$ and $p=12.5\%$. It is important to note that in this case the statistical estimators are closer to indicating a moderate tension than they are in the P18 data vs.\ BAO$^\prime$ data case. This is related to the previously mentioned $\sigma_8$ tension. We get similar results for the tilted flat $\Lambda$CDM+$A_L$ model, in which case we find $\textrm{log}_{10}\mathcal{I}=0.286$, which again indicates neither agreement nor disagreement, while for the second estimator $\sigma = 1.402$ and $p=16.1\%$, and again no tension is revealed. 
In view of these results we find no evidence that P18 and BAO data cannot be considered together in the analysis of the tilted flat $\Lambda$CDM (+$A_L$) models. Given the P18 data vs.\ BAO$^\prime$ data comparison results in the tilted non-flat $\Lambda$CDM models, it should not come as a surprise that we find tensions when P18 data and BAO data are compared. In the tilted non-flat Planck $P(q)$ $\Lambda$CDM model with $A_L =1$ we find $\textrm{log}_{10}\mathcal{I}=-1.236$ for the first estimator, and $\sigma=3.000$ and $p=0.27\%$ for the second one. Both results indicate a {\it strong} inconsistency between the two data sets. This level of tension fades when the $A_L$ parameter is allowed to vary. For the Planck $P(q)$+$A_L$ model we obtain $\textrm{log}_{10}\mathcal{I}=0.182$, which indicates neither consistency nor inconsistency, and $\sigma=1.460$ and $p=14.4\%$. The P18 and BAO data can be jointly used in the Planck $P(q)$+$A_L$ model. As happens in the case of the P18 data vs.\ BAO$^\prime$ data comparison, the tilted non-flat new $P(q)$ $\Lambda$CDM model performs better than the Planck $P(q)$ model when it comes to accommodating the P18 and BAO data sets. For the $A_L=1$ case we find $\textrm{log}_{10}\mathcal{I}=-0.880$, revealing {\it substantial} disagreement, while for the other estimator $\sigma=2.604$ and $p = 0.922\%$, which indicates a moderate tension. Once again, the tensions observed when $A_L=1$, in the context of non-flat models, disappear when this parameter is allowed to vary. For the tilted non-flat $\Lambda$CDM+$A_L$ new $P(q)$ model, we find $\textrm{log}_{10}\mathcal{I}=1.066$, which points to a {\it strong} consistency between the two data sets, and for the other estimator we obtain $\sigma=1.052$ and $p=29.3\%$. The P18 and BAO data can be jointly used in the new $P(q)$+$A_L$ model. 
In summary, among the tilted non-flat models, in the Planck $P(q)$ model P18 and BAO data should not be jointly analyzed unless the $A_L$ parameter is allowed to vary, while in the new $P(q)$ models these two data sets can be considered together to put constraints on the cosmological parameters even when $A_L =1$. \item {\bf P18 vs.\ non-CMB}. We now discuss whether or not there is tension between P18 data and non-CMB data in the context of the different cosmological models. Results similar to the ones obtained in the P18 data and BAO$^\prime$/BAO data comparisons are expected, since BAO$^\prime$ data and BAO data are dominant components of non-CMB data. For the tilted flat $\Lambda$CDM model with $A_L = 1$ we find $\textrm{log}_{10}\mathcal{I}=0.296$, which indicates neither agreement nor disagreement, and $\sigma=1.749$ together with $p=8.03\%$, with neither of the two estimators pointing to tension between P18 and non-CMB data in this model. Including a varying $A_L$ in the model improves the agreement between the two data sets. For the tilted flat $\Lambda$CDM+$A_L$ model we find $\textrm{log}_{10}\mathcal{I}=1.033$, which points to {\it strong} consistency between the two data sets, and for the other estimator we get $\sigma=0.835$ and $p=40.4\%$, a result consistent with the first. There is no tension that prevents us from jointly analyzing P18 data and non-CMB data in the tilted flat $\Lambda$CDM (+$A_L$) models. In the case of the tilted non-flat Planck $P(q)$ $\Lambda$CDM model with $A_L = 1$, the value $\textrm{log}_{10}\mathcal{I}=-1.263$ indicates a {\it strong} inconsistency between the P18 and non-CMB data sets. The second statistical estimator provides similar results, $\sigma = 3.005$ and $p=0.265\%$. In the light of these results, we conclude that P18 data and non-CMB data should not be jointly analyzed in the context of this tilted non-flat $A_L =1$ model. 
For the Planck $P(q)$+$A_L$ model, we get $\textrm{log}_{10}\mathcal{I}=0.972$, so {\it substantial} agreement is observed between P18 data and non-CMB data in this case. In agreement with the result obtained employing the first statistical estimator, for the second one we find $\sigma=0.793$ and $p=42.8\%$, which again does not indicate any tension. Once again the tilted non-flat $\Lambda$CDM new $P(q)$ model does better in jointly accommodating P18 and non-CMB data than does the tilted non-flat $\Lambda$CDM Planck $P(q)$ model. In the new $P(q)$ case with $A_L =1$, the values obtained for both statistical estimators, $\textrm{log}_{10}\mathcal{I}=-0.806$ and $\sigma=2.577$ and $p=0.996\%$, indicate a {\it substantial} discordance between P18 data and non-CMB data in the context of this model. Allowing $A_L$ to vary reduces the tension found in the $A_L=1$ cases. For the new $P(q)$+$A_L$ model we get $\textrm{log}_{10}\mathcal{I}=1.798$, which points to a {\it strong} agreement between the two data sets, whereas for the second estimator we find $\sigma=0.402$ and $p=68.7\%$ and no tension. Therefore, we may say that in the context of the tilted non-flat $\Lambda$CDM (+$A_L$) new $P(q)$ models, P18 and non-CMB data can be jointly analyzed. \item {\bf P18+lensing vs.\ non-CMB}. In the previous cases we have detected some tensions in the context of the non-flat models. Here we study the possible disagreement between P18+lensing data and non-CMB data. For the tilted flat $\Lambda$CDM model with $A_L =1$, both statistical estimators, with values $\textrm{log}_{10}\mathcal{I}=0.029$ and $\sigma=1.747$ and $p=8.06\%$, shed no light on a possible consistency or inconsistency between P18+lensing data and non-CMB data. For the tilted flat $\Lambda$CDM+$A_L$ model, we find $\textrm{log}_{10}\mathcal{I}=1.033$ which indicates a {\it strong} consistency between the two data sets. 
On the other hand, the second statistical estimator provides $\sigma=0.774$ and $p=43.9\%$, which do not indicate consistency or inconsistency. As we noted at the beginning of this subsection, we do not always expect a perfect match in the conclusions from the two estimators. In the tilted non-flat $\Lambda$CDM Planck $P(q)$ model with $A_L=1$ we find $\textrm{log}_{10}\mathcal{I}=0.297$, which indicates neither consistency nor inconsistency between P18+lensing data and non-CMB data, whereas for the second estimator we find $\sigma=1.837$ and $p=6.62\%$, which does not reveal inconsistency. The consistency between P18+lensing and non-CMB data improves considerably in the context of the Planck $P(q)$+$A_L$ model. We get $\textrm{log}_{10}\mathcal{I}=1.641$, indicating a {\it strong} consistency between the two data sets, while the second estimator gives $\sigma=0.516$ and $p=60.6\%$, in agreement with the conclusion provided by the first estimator. Very similar conclusions are found for the new $P(q)$ (+$A_L$) and the Planck $P(q)$ (+$A_L$) models. When the $A_L$ parameter is not allowed to vary, for the new $P(q)$ $A_L = 1$ model we find $\textrm{log}_{10}\mathcal{I}=0.143$, which reveals neither an inconsistency nor a consistency, with the second estimator giving $\sigma =1.886$ and $p=5.927\%$, and again no tension is revealed. On the other hand, in the context of the new $P(q)$+$A_L$ model, we get $\textrm{log}_{10}\mathcal{I}=1.50$, indicating a {\it strong} consistency between the two data sets, and reassuringly we find similar conclusions from the second statistical estimator, $\sigma=0.573$ and $p=56.7\%$. Unlike in the comparisons of P18 data and BAO$^\prime$/BAO data and the comparisons of P18 data and non-CMB data, we do not find tensions in the context of the non-flat models between P18+lensing data and non-CMB data, even when the $A_L$ parameter is not allowed to vary. 
This may suggest that if we want to jointly analyze P18 data and a low-redshift data set, such as BAO$^\prime$/BAO data or non-CMB data, we should either consider a varying $A_L$ parameter or include (P18) lensing data in the mix. \end{itemize} We have studied the tensions between pairs of data sets, in the context of a given cosmological model, in three different ways based on Bayesian statistics. In Secs.\ \ref{sec:P18_vs_BAO}-\ref{sec:P18+lensing_vs_non-CMB} we quantified the level of tension by comparing the (one- and two-dimensional) cosmological parameter constraints favored by each of the pair of data sets. In the one-dimensional cases we estimated the tension by considering the quadrature sum of the two error bars for each parameter, while in the two-dimensional cases we looked at whether or not the two sets of contours shared a common parameter space area. In this subsection we study tensions between data set pairs by using the two more precise statistical estimators of Sec.\ \ref{sec:method}, see eq.\ \eqref{eq:Tension_estimator_1} and eqs.\ \eqref{eq:Tension_estimator_2} and \eqref{eq:Tension_estimator_2_sigma}. Reassuringly, all three techniques employed result in similar conclusions in most cases. Among all the data set comparisons we study there are two with discordances significant enough to rule out joint analyses: we find in the tilted non-flat $\Lambda$CDM Planck $P(q)$ $A_L = 1$ model that P18 data and BAO data, as well as P18 data and non-CMB data, are not mutually consistent. In the first case, when P18 and BAO data are compared, we observe a 2.7$\sigma$ tension between the derived cosmological parameter values of $\Omega_m$ and of $H_0$, obtained with P18 data and with BAO data. Additionally, in Fig.\ \ref{fig:like_NL_ns_BAO}, contour plot panels that contain one of these derived parameters show non-overlapping regions at more than 2$\sigma$. As for the P18 vs.\ non-CMB case, the tensions are even greater than for P18 vs.\ BAO. 
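The one-dimensional quadrature-sum comparison described above is simple arithmetic. As an illustrative sketch (not code from our analysis pipeline), it can be applied to the lensing-only $\Omega_k$ constraints for the tilted non-flat $\Lambda$CDM Planck $P(q)$ model quoted earlier for the three prior choices:

```python
from math import sqrt

def nsigma(mu1, sig1, mu2, sig2):
    """Tension between two one-dimensional constraints, in units of the
    quadrature sum of their error bars."""
    return abs(mu1 - mu2) / sqrt(sig1**2 + sig2**2)

# Omega_k from lensing data alone (tilted non-flat Planck P(q) model):
our_priors     = (-0.26,  0.11)   # Our priors
handley_priors = (-0.032, 0.040)  # Handley priors
handley_ok     = (-0.123, 0.095)  # Handley + Omega_k priors

print(round(nsigma(*our_priors, *handley_priors), 1))  # 1.9
print(round(nsigma(*our_priors, *handley_ok), 2))      # 0.94
```

These reproduce the 1.9$\sigma$ and 0.94$\sigma$ differences quoted at the start of this subsection.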
Comparing the derived cosmological parameter values of $\Omega_m$ and of $H_0$, obtained with P18 data and non-CMB data, we observe a disagreement at 2.9$\sigma$ and 3.9$\sigma$ respectively. Again the contour plot panels in Fig.\ \ref{fig:like_NL_ns_P18_nonCMB} containing $\Omega_m$ and/or $H_0$ show a non-overlapping region at more than 2$\sigma$. For the two statistical estimators of Sec.\ \ref{sec:method}, if we choose to say two data sets are mutually inconsistent (in a given model) when $\textrm{log}_{10}\mathcal{I}\leq -1$ or $\sigma\geq 3$, then this is true only in the two cases discussed in the previous paragraph. For the tilted non-flat $\Lambda$CDM Planck $P(q)$ $A_L = 1$ model, in the P18 vs.\ BAO case we find $\textrm{log}_{10}\mathcal{I}=-1.236$ (meaning a {\it strong} disagreement between the two data sets) and $\sigma = 3.000$ ($p=0.270\%$), while in the P18 vs.\ non-CMB analysis we find $\textrm{log}_{10}\mathcal{I}=-1.263$ (again a {\it strong} disagreement between the two data sets) and $\sigma = 3.005$ ($p=0.265\%$). These results are qualitatively consistent with those of the previous paragraph. They mean that P18 data and BAO data, as well as P18 data and non-CMB data, cannot be jointly analyzed in this model; alternatively, it means that the tilted non-flat $\Lambda$CDM Planck $P(q)$ $A_L = 1$ model is inconsistent with these data and is ruled out by them at approximately 3$\sigma$. We note that the levels of tension seen in the P18 vs.\ BAO and P18 vs.\ non-CMB comparisons are less severe in the context of the new $P(q)$ model, which does not strongly rule out the joint analyses of P18 data and BAO data, as well as of P18 data and non-CMB data, in the tilted non-flat $\Lambda$CDM new $P(q)$ $A_L = 1$ model. 
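The ($\sigma$, $p$) pairs quoted throughout this subsection behave like a two-sided Gaussian probability-to-exceed, $p = \mathrm{erfc}(\sigma/\sqrt{2})$. Assuming that relation (our inference from the quoted pairs, not a definition stated in this subsection), the numbers can be reproduced:

```python
from math import erfc, sqrt

def p_percent(sigma):
    """Two-sided Gaussian probability-to-exceed, in percent, for a
    tension of `sigma` standard deviations."""
    return 100.0 * erfc(sigma / sqrt(2.0))

# Pairs quoted in this subsection:
print(round(p_percent(0.718), 1))  # 47.3 (P18 vs. lensing, tilted flat LCDM)
print(round(p_percent(2.479), 2))  # 1.32 (P18 vs. lensing, Planck P(q))
print(round(p_percent(3.000), 2))  # 0.27 (P18 vs. BAO, Planck P(q), A_L = 1)
```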
What is more, none of the other combinations studied, namely, P18 data vs.\ lensing data, P18 data vs.\ BAO$^{\prime}$ data, and P18+lensing data vs.\ non-CMB data, are strongly mutually inconsistent in the tilted non-flat $\Lambda$CDM new $P(q)$ model, even when the $A_L$ parameter is not allowed to vary. We now turn to a comparison between some of our results in Table \ref{tab:para_sigmap} and results presented in Refs.\ \cite{Handley:2019tkm} and \cite{DiValentino:2019qzk}. We emphasize that these are only semi-quantitative comparisons, since the data sets used are not identical and the priors used also differ. Reference \cite{Handley:2019tkm} compares P18 data and lensing data, as well as P18 data and BAO data (note that while we refer to both data sets as BAO there are some significant differences between the BAO data points used in Ref.\ \cite{Handley:2019tkm} and the updated BAO data we use here), in the tilted flat $\Lambda$CDM model and in the tilted non-flat $\Lambda$CDM Planck $P(q)$ model. As described in Sec.\ \ref{sec:method} we use the same ($p, \sigma$) statistical estimator as Ref.\ \cite{Handley:2019tkm} does and so these are the results we compare. For the tilted flat $\Lambda$CDM model, from the P18 vs.\ lensing analysis, Fig.\ 2 of Ref.\ \cite{Handley:2019tkm} reports $\sigma \simeq 0.19$ and $p\simeq 85\%$, while we get $\sigma=0.72$ and $p=47\%$ (for Our priors) and $\sigma=0.39$ and $p = 70\%$ (for Handley priors). Some differences are expected due to the different sets of data and priors used, and this is reflected in these results. Reassuringly, when we employ the same priors for the lensing data (but not for P18 data) as used in Ref.\ \cite{Handley:2019tkm} the results get closer. 
From the P18 vs.\ BAO analysis in the tilted flat $\Lambda$CDM model Ref.\ \cite{Handley:2019tkm} finds $\sigma \simeq 0.95$ and $p\simeq 65\%$ while we get $\sigma = 1.5$ and $p = 13\%$, consequently the qualitative conclusions are the same, indicating that no tension is found. As for the tilted non-flat $\Lambda$CDM Planck $P(q)$ model, from the P18 vs.\ lensing analysis, Ref.\ \cite{Handley:2019tkm} reports $\sigma \simeq 2.5$ and $p\simeq 1.2\%$ and we find $\sigma=2.5$ and $p=1.3\%$ (for Our priors) and $\sigma=2.4$ and $p = 1.6\%$ (for Handley priors) so there is very good agreement between the results. Finally, in the tilted non-flat $\Lambda$CDM Planck $P(q)$ model, from a comparison of P18 data and BAO data, Ref.\ \cite{Handley:2019tkm} finds $\sigma \simeq 3.0$ and $p\simeq 0.3\%$ whereas we get $\sigma = 3.0$ and $p=0.3\%$. Considering all results, and the fact that somewhat different BAO data and priors are used in the two analyses, there is good agreement between the results and conclusions of Ref.\ \cite{Handley:2019tkm} and our results and conclusions. Reference \cite{DiValentino:2019qzk} uses $\textrm{log}_{10}\mathcal{I}$ to quantify tensions so here we compare our and their results for this statistical estimator. Reference \cite{DiValentino:2019qzk} compares P18 data and lensing data, as well as P18 and BAO$^{\prime}$ data (note that while we refer to both data sets as BAO$^{\prime}$ there are significant differences between the BAO$^{\prime}$ data used in Ref.\ \cite{DiValentino:2019qzk} and the updated BAO$^{\prime}$ data we use here), in the tilted flat $\Lambda$CDM model and in the tilted non-flat $\Lambda$CDM Planck $P(q)$ model. For the tilted flat $\Lambda$CDM model and the P18 data vs.\ lensing data analysis, Ref.\ \cite{DiValentino:2019qzk} find $\textrm{log}_{10}\mathcal{I}=0.6$ ({\it substantial} concordance) while we get $\textrm{log}_{10}\mathcal{I}= 1.24$ ({\it strong} concordance). 
For the P18 data vs.\ BAO$^{\prime}$ data analysis in the tilted flat $\Lambda$CDM model, Ref.\ \cite{DiValentino:2019qzk} report $\textrm{log}_{10}\mathcal{I}=0.2$ (neither a concordance nor a discordance) and we find $\textrm{log}_{10}\mathcal{I}=0.7$ ({\it substantial} concordance). On the other hand, in the tilted non-flat $\Lambda$CDM Planck $P(q)$ model, for the P18 vs.\ lensing data analysis, Ref.\ \cite{DiValentino:2019qzk} provide $\textrm{log}_{10}\mathcal{I}=-0.84$ ({\it substantial} discordance) while we obtain $\textrm{log}_{10}\mathcal{I}=-0.49$ which is on the verge of also indicating a {\it substantial} discordance between the two data sets. Finally, from the P18 vs.\ BAO$^{\prime}$ analysis in the tilted non-flat $\Lambda$CDM Planck $P(q)$ model, Ref.\ \cite{DiValentino:2019qzk} report $\textrm{log}_{10}\mathcal{I}=-1.8$ ({\it strong} discordance) whereas we get $\textrm{log}_{10}\mathcal{I}=-0.89$ ({\it substantial} discordance). As can be appreciated from the preceding discussion, the agreement between our results and the results presented in Ref.\ \cite{DiValentino:2019qzk} is not as good as the one obtained from a comparison of our results and those of Ref.\ \cite{Handley:2019tkm}. It is important to note that the ($p, \sigma$) statistical estimator of eqs.\ \eqref{eq:Tension_estimator_2} and \eqref{eq:Tension_estimator_2_sigma} is not as dependent on the priors as is the $\textrm{log}_{10}\mathcal{I}$ statistical estimator of eq.\ \eqref{eq:Tension_estimator_1}. This may explain the differences found in the comparisons of our results to those of Refs.\ \cite{Handley:2019tkm} and \cite{DiValentino:2019qzk}. All in all, we consider that there is reasonable, and so reassuring, agreement between our results and results available in the literature. 
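The verbal labels attached to $\textrm{log}_{10}\mathcal{I}$ values above follow a Jeffreys-type scale. The thresholds below are our reading of the usage in this subsection (e.g.\ 0.707 called {\it substantial}, 1.240 called {\it strong}, 0.132 called neither), together with the mutual-inconsistency condition stated earlier, and are offered only as a compact restatement:

```python
def classify(log10I):
    """Jeffreys-type reading of the concordance/discordance estimator;
    thresholds inferred from the labels used in the text."""
    side = "concordance" if log10I > 0 else "discordance"
    if abs(log10I) >= 1.0:
        return "strong " + side
    if abs(log10I) >= 0.5:
        return "substantial " + side
    return "inconclusive"

def ruled_out(log10I, sigma):
    """Mutual-inconsistency condition adopted in the text."""
    return log10I <= -1.0 or sigma >= 3.0

# Tilted non-flat Planck P(q), A_L = 1, P18 vs. BAO:
print(classify(-1.236), ruled_out(-1.236, 3.000))  # strong discordance True
# Tilted non-flat new P(q), A_L = 1, P18 vs. BAO:
print(classify(-0.880), ruled_out(-0.880, 2.604))  # substantial discordance False
```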
\section{Discussion} \label{sec:discussion} We have used P18 data, (P18) lensing data, BAO$^\prime$ data, BAO data, and non-CMB data to constrain cosmological parameters in eight cosmological models, the tilted flat $\Lambda$CDM (+$A_L$) model, the untilted non-flat $\Lambda$CDM (+$A_L$) model, the tilted non-flat $\Lambda$CDM (+$A_L$) Planck $P(q)$ model, and the tilted non-flat $\Lambda$CDM (+$A_L$) new $P(q)$ model, and to determine the goodness-of-fit of these models to the data sets. We have also used the models to examine whether or not pairs of data sets are mutually consistent, studying five cases: P18 data vs.\ lensing data, P18 data vs.\ BAO$^\prime$/BAO data, P18 data vs.\ non-CMB data, and P18+lensing data vs.\ non-CMB data. Assuming these data are correct and that there are no unaccounted systematic errors, three of the eight models we consider may be rejected because they are incompatible with some of these data at levels of significance discussed in Sec.\ \ref{sec:results} and summarized next. These rejected models are the two untilted non-flat $\Lambda$CDM (+$A_L$) models and the tilted non-flat $\Lambda$CDM Planck $P(q)$ model. When P18 data are included in the analyses the untilted non-flat $\Lambda$CDM (+$A_L$) models are, according to the DIC, {\it very strongly} disfavored when compared with the tilted models. This is because the untilted models lack the degree of freedom encapsulated in the power spectrum tilt ($n_s$) parameter that is strongly favored by P18 data and so the untilted models are incompatible with P18 data. When we use the tilted non-flat $\Lambda$CDM Planck $P(q)$ model to compare cosmological parameter values from P18 data and BAO$^{\prime}$/BAO data, as well as from P18 data and non-CMB data, we find disagreements in the one-dimensional values of the $H_0$ and $\Omega_m$ derived parameters of 2.3$\sigma$ and 2.7$\sigma$ (BAO$^\prime$), 2.3$\sigma$ and 2.7$\sigma$ (BAO), and 3.9$\sigma$ and 2.9$\sigma$ (non-CMB). 
In Figs.\ \ref{fig:like_NL_ns_BAO} and \ref{fig:like_NL_ns_P18_nonCMB}, in those panels containing $H_0$ and $\Omega_m$, the two-dimensional contours do not overlap, even at significance levels above 2$\sigma$. Additionally, in the P18 data vs.\ BAO data case we find $\textrm{log}_{10}\mathcal{I}=-1.236$ (meaning a {\it strong} disagreement between the two data sets) and $\sigma = 3.000$ ($p=0.27\%$), while in the P18 data vs.\ non-CMB data analysis we get $\textrm{log}_{10}\mathcal{I}=-1.263$ (again a {\it strong} disagreement between the two data sets) and $\sigma = 3.005$ ($p=0.265\%$). At their levels of significance, these results mean that the tilted non-flat $\Lambda$CDM Planck $P(q)$ model is unable to simultaneously accommodate P18 data and non-CMB data and so is ruled out at 3$\sigma$. Note that non-CMB data include BAO$^{\prime}$/BAO data and Refs.\ \citep{Handley:2019tkm, DiValentino:2019qzk} have previously noted the incompatibility of P18 data and older BAO$^{\prime}$/BAO data in the tilted non-flat $\Lambda$CDM Planck $P(q)$ model. We return to this point below. The six-parameter tilted flat $\Lambda$CDM model is the simplest, (largely, see below) observationally consistent, general-relativistic cosmological model. It assumes the existence of cold dark matter, a non-evolving dark energy density $\Lambda$, flat spatial hypersurfaces ($\Omega_k = 0$), and $A_L = 1$. This is the current standard cosmological model. We have found that this model passes all the consistency tests we use. The largest data set we have used is the P18+lensing+non-CMB data set. These data provide the most restrictive constraints on the parameters of this model, and if the tilted flat $\Lambda$CDM model is a reasonably good approximation of the Universe, the cosmological parameter values measured in this model from these data provide a reasonably good description of the parameters of the Universe. 
From P18+lensing+non-CMB data we find, for the six primary cosmological parameters, $\Omega_b{h^2}=0.02250\pm 0.00013$, $\Omega_c{h^2}=0.11838\pm 0.00083$, $100\theta_{\textrm{MC}}= 1.04110\pm 0.00029$, $\tau=0.0569\pm 0.0071$, $n_s = 0.9688\pm 0.0036$, and $\ln(10^{10}A_s)= 3.046\pm 0.014$. We also provide the values of three derived parameters, $\Omega_m = 0.3053\pm 0.0050$, $H_0=68.09\pm 0.38$ km s$^{-1}$ Mpc$^{-1}$, and $\sigma_8 = 0.8072\pm 0.0058$. The least well-determined parameters are the reionization optical depth $\tau$, measured at 8.0$\sigma$, and the scalar spectral index $n_s$, whose deviation from unity is measured at 8.7$\sigma$. As we discuss below, the values of the cosmological parameters determined using any of the six tilted models we study are relatively independent of the cosmological model used, indicating that the values of the cosmological parameters listed above for the tilted flat $\Lambda$CDM model are relatively model independent. It is interesting that the Hubble constant value measured using P18+lensing+non-CMB data in the tilted flat $\Lambda$CDM model, $H_0=68.09\pm 0.38$ km s$^{-1}$ Mpc$^{-1}$, is consistent with an early estimate from a median statistics analysis of a large compilation of Hubble constant measurements, $H_0=68\pm 2.8$ km s$^{-1}$ Mpc$^{-1}$, see Refs.\ \citep{ChenRatra2011, Gottetal2001, Calabreseetal2012}, as well as with some local measurements, e.g., $H_0=69.8\pm 1.7$ km s$^{-1}$ Mpc$^{-1}$ (quadrature sum of statistical and systematic uncertainties) from Ref.\ \citep{Freedman2021}, but not with some other local measurements, e.g., $H_0=73.04\pm 1.04$ km s$^{-1}$ Mpc$^{-1}$ from Ref.\ \citep{Riessetal2022}. As for the other derived parameter employed to quantify another tension affecting the tilted flat $\Lambda$CDM model, the $\sigma_8$ parameter, there are differences in its value depending on the data set considered. 
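The quoted significances and $H_0$ comparisons can be checked with the same elementary arithmetic used throughout. Our reading is that the 8.0$\sigma$ and 8.7$\sigma$ figures are $\tau/\sigma_\tau$ and $|n_s-1|/\sigma_{n_s}$; the $\approx 4.5\sigma$ figure for the Ref.\ \citep{Riessetal2022} comparison is our own quadrature-sum arithmetic, not a number quoted above:

```python
from math import sqrt

# Detection significances of the least well-determined parameters:
tau, sig_tau = 0.0569, 0.0071
ns,  sig_ns  = 0.9688, 0.0036
print(round(tau / sig_tau, 1))         # 8.0 (tau detected at 8.0 sigma)
print(round(abs(ns - 1) / sig_ns, 1))  # 8.7 (tilt away from n_s = 1)

# H0 consistency checks (quadrature sum of error bars):
h0, sig_h0 = 68.09, 0.38  # P18+lensing+non-CMB, tilted flat LCDM
for mu, sig in [(68.0, 2.8), (69.8, 1.7), (73.04, 1.04)]:
    print(round(abs(h0 - mu) / sqrt(sig_h0**2 + sig**2), 1))
# 0.0, 1.0, 4.5: consistent with the first two, discrepant with the third
```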
In the tilted flat $\Lambda$CDM model, using P18 data, we get $\sigma_8=0.8118\pm 0.0074$ whereas non-CMB data give $\sigma_8=0.787\pm 0.027$, with the two values differing by 0.89$\sigma$. In the P18+lensing+non-CMB data analysis case we obtain $\sigma_8=0.8072\pm 0.0058$, which is between the P18 value and the non-CMB value. The shifts in the cosmological parameter values obtained by jointly analyzing non-CMB data with P18+lensing data, compared to the cosmological parameter values obtained from ``Planck'' P18+lensing data, for the tilted flat $\Lambda$CDM model are: $-0.68\sigma$ ($\Omega_b{h^2}$), 1.1$\sigma$ ($\Omega_c{h^2}$), $-0.45\sigma$ (100$\theta_{\textrm{MC}}$), $-0.26\sigma$ ($\tau$), $-0.71\sigma$ ($n_s$), $-0.10\sigma$ [$\ln(10^{10}A_s)$], $-1.1\sigma$ ($H_0$), 1.1$\sigma$ ($\Omega_m$), and 0.48$\sigma$ ($\sigma_8$), with the largest shifts being 1.1$\sigma$, suggesting again that in this model non-CMB data and P18+lensing data are not inconsistent. As for the reduction in the error bars obtained by jointly analyzing non-CMB data with P18+lensing data, compared to the error bars obtained from ``Planck'' P18+lensing data, we find 7.1$\%$ ($\Omega_b{h^2}$), 31$\%$ ($\Omega_c{h^2}$), 6.5$\%$ (100$\theta_{\textrm{MC}}$), 2.7$\%$ ($\tau$), 12$\%$ ($n_s$), 0$\%$ [$\ln(10^{10}A_s)$], 31$\%$ ($H_0$), 33$\%$ ($\Omega_m$), and 1.7$\%$ ($\sigma_8$), with the biggest reductions being the 33$\%$ $\Omega_m$ one and the 31$\%$ $\Omega_c{h^2}$ and $H_0$ ones; adding non-CMB data to the mix does quite significantly improve the constraints on some cosmological parameters. We mentioned above that P18 data and non-CMB data are incompatible in the seven-parameter tilted non-flat $\Lambda$CDM Planck $P(q)$ model. 
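The 0.89$\sigma$ $\sigma_8$ difference is again the quadrature-sum arithmetic, and the percent reductions are $100\,(1 - \sigma_{\rm new}/\sigma_{\rm old})$. A sketch; the 0.0075 P18+lensing $\Omega_m$ error bar below is a hypothetical back-of-envelope value implied by the quoted 33$\%$ reduction, not a number given above:

```python
from math import sqrt

# sigma_8 in the tilted flat LCDM model from two data sets:
p18, non_cmb = (0.8118, 0.0074), (0.787, 0.027)
tension = abs(p18[0] - non_cmb[0]) / sqrt(p18[1]**2 + non_cmb[1]**2)
print(round(tension, 2))  # 0.89

def reduction_percent(sig_old, sig_new):
    # percent shrinkage of an error bar when a data set is added
    return 100.0 * (1.0 - sig_new / sig_old)

# A hypothetical Omega_m error of 0.0075 (P18+lensing) shrinking to the
# quoted 0.0050 (P18+lensing+non-CMB) is a ~33% reduction:
print(round(reduction_percent(0.0075, 0.0050)))  # 33
```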
When this model is used to analyze P18 data it favors a closed geometry at 2.5$\sigma$ with $\Omega_k = -0.043 \pm 0.017$, when it is used to analyze P18+lensing data it favors a closed geometry at 1.6$\sigma$ with $\Omega_k = -0.0103 \pm 0.0066$, and when it is used to analyze non-CMB data it favors a closed geometry at 0.63$\sigma$ with $\Omega_k = -0.032 \pm 0.051$. However, since P18 data and non-CMB data are incompatible in this model, the model is ruled out at the relevant levels of significance and so cannot be used to measure the geometry of spatial hypersurfaces from P18+lensing+non-CMB data. On the other hand, the seven-parameter tilted non-flat $\Lambda$CDM new $P(q)$ model is not ruled out. According to the statistical estimators presented in Sec.\ \ref{sec:method} (see values in Table \ref{tab:para_sigmap}), in none of the cases studied using the new $P(q)$ model are our conditions for ruling out a model, $\textrm{log}_{10}\mathcal{I}\leq -1$ or $\sigma\geq 3$, fulfilled. For the new $P(q)$ model, in the P18 data analysis, we find $\Omega_k=-0.033\pm 0.014$, which favors closed geometry at 2.4$\sigma$. When the new $P(q)$ model is used to analyze P18+lensing data the results indicate a 1.5$\sigma$ preference for closed geometry with $\Omega_k = -0.0086\pm 0.0057$, and when non-CMB data are analyzed alone we find $\Omega_k = -0.036\pm 0.051$, which is 0.71$\sigma$ in favor of closed geometry. Contrary to what happens in the case of the Planck $P(q)$ model, in the new $P(q)$ model it is reasonable to jointly analyze P18 data, (P18) lensing data, and non-CMB data. In the P18+lensing+non-CMB and P18+non-CMB analysis cases we obtain $\Omega_k = 0.0003\pm 0.0017$, favoring open geometry by only 0.18$\sigma$ in both cases. 
It may come as a surprise that even though each data set individually favors a closed geometry, some even with a somewhat significant level of evidence, the joint consideration of all three (or just two) of them reveals a result consistent with flat spatial hypersurfaces, and also more consistent with open than with closed geometry. This is because of the $H_0$-$\Omega_k$-$\Omega_m$ degeneracy and the fact that, in the non-flat models, non-CMB data favor higher $H_0$ values and lower $\Omega_m$ values than do P18 data and P18+lensing data. We have found that with $A_L = 1$ the six-parameter untilted non-flat and the seven-parameter tilted non-flat $\Lambda$CDM Planck $P(q)$ models are incompatible with some data we consider. If these data are correct, these models are ruled out. On the other hand, we find that the most restrictive data compilation we consider, the P18+lensing+non-CMB data set, indicates that the seven-parameter tilted non-flat $\Lambda$CDM new $P(q)$ model has flat (or very close to flat) spatial hypersurfaces. Yes, P18 data alone favor closed geometry at 2.4$\sigma$, and while it would be valuable to have a much better understanding of this result than is currently available, at this point we feel that the P18+lensing+non-CMB data support for flat geometry should be given more credence. Perhaps more and better future non-CMB data might alter this conclusion; however, current data are consistent with flat spatial hypersurfaces when $A_L = 1$. In the seven-parameter tilted flat $\Lambda$CDM$+A_L$ model $A_L$ is allowed to vary and is constrained by data. In this model P18 data favor $A_L = 1.181 \pm 0.067$, $A_L > 1$ at 2.7$\sigma$; P18+non-CMB data favor $A_L = 1.204 \pm 0.061$, $A_L > 1$ at 3.3$\sigma$; P18+lensing data favor $A_L = 1.073 \pm 0.041$, $A_L > 1$ at 1.8$\sigma$; and, P18+lensing+non-CMB data favor $A_L = 1.089 \pm 0.035$, $A_L > 1$ at 2.5$\sigma$. 
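The quoted $A_L > 1$ significances follow from $(A_L - 1)/\sigma$; a quick check of all four values from the paragraph above, as an illustrative sketch:

```python
# A_L constraints in the tilted flat LCDM+A_L model, with the quoted
# significance of the A_L > 1 preference:
cases = {"P18":                 (1.181, 0.067, 2.7),
         "P18+non-CMB":         (1.204, 0.061, 3.3),
         "P18+lensing":         (1.073, 0.041, 1.8),
         "P18+lensing+non-CMB": (1.089, 0.035, 2.5)}

for label, (mu, sig, quoted) in cases.items():
    assert round((mu - 1.0) / sig, 1) == quoted
    print(label, quoted, "sigma: OK")
```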
With P18+lensing+non-CMB data resulting in $\Delta{\rm DIC} = -5.55$ in favor of $A_L > 1$ over $A_L = 1$, just a little bit below the {\it strongly} favoring threshold of $-6$, the 2.5$\sigma$ $A_L > 1$ value indicates a more serious CMB weak lensing consistency issue than the preference for closed spatial geometry exhibited by some of the data sets. If these data are correct, these results are somewhat uncomfortable for the six-parameter tilted flat $\Lambda$CDM model --- the standard cosmological model. New, and better, data should help to clarify this issue. When $A_L$ is allowed to vary, the eight-parameter tilted non-flat $\Lambda$CDM+$A_L$ Planck $P(q)$ model is not ruled out by data set incompatibilities, unlike what happens in the $A_L = 1$ seven-parameter tilted non-flat $\Lambda$CDM Planck $P(q)$ model. The eight-parameter tilted non-flat $\Lambda$CDM+$A_L$ new $P(q)$ model also does not suffer from data set incompatibilities, similar to the $A_L = 1$ seven-parameter tilted non-flat $\Lambda$CDM new $P(q)$ model case. 
In the eight-parameter tilted non-flat $\Lambda$CDM$+A_L$ Planck (new) $P(q)$ model, P18 data favor $A_L = 0.88 \pm 0.15$ and $A_L < 1$ at 0.8$\sigma$ ($A_L = 0.94 \pm 0.20$ and $A_L < 1$ at 0.3$\sigma$) and $\Omega_k = -0.130 \pm 0.095$ and closed at 1.4$\sigma$ ($\Omega_k = -0.10 \pm 0.11$ and closed at 0.91$\sigma$); P18+non-CMB data favor $A_L = 1.203 \pm 0.062$ and $A_L > 1$ at 3.3$\sigma$ ($A_L = 1.204 \pm 0.061$ and $A_L > 1$ at 3.3$\sigma$) and $\Omega_k = -0.0006 \pm 0.0017$ and closed at 0.35$\sigma$ ($\Omega_k = -0.0006 \pm 0.0017$ and closed at 0.35$\sigma$); P18+lensing data favor $A_L = 1.09 \pm 0.16$ and $A_L > 1$ at 0.56$\sigma$ ($A_L = 1.13 \pm 0.15$ and $A_L > 1$ at 0.87$\sigma$) and $\Omega_k = -0.005 \pm 0.027$ and closed at 0.19$\sigma$ ($\Omega_k = 0.003 \pm 0.016$ and open at 0.19$\sigma$); and, P18+lensing+non-CMB data favor $A_L = 1.090 \pm 0.036$ and $A_L > 1$ at 2.5$\sigma$ ($A_L = 1.088 \pm 0.035$ and $A_L > 1$ at 2.5$\sigma$) and $\Omega_k = -0.0002 \pm 0.0017$ and closed at 0.12$\sigma$ ($\Omega_k = -0.0002 \pm 0.0017$ and closed at 0.12$\sigma$). With P18+lensing+non-CMB data in the eight-parameter tilted non-flat $\Lambda$CDM$+A_L$ Planck (new) $P(q)$ model resulting in $\Delta{\rm DIC} = -5.22\ (-4.70)$, again (as in the seven-parameter tilted flat $\Lambda$CDM$+A_L$ model) {\it positively} favoring $A_L > 1$ over $A_L = 1$, there is a bit more evidence supporting the existence of a CMB weak lensing consistency issue, in all tilted, flat as well as non-flat, models, although the resulting $\Omega_k$ values in both non-flat cases are quite consistent with flat geometry. In the eight-parameter tilted non-flat $\Lambda$CDM$+A_L$ new $P(q)$ model, which unlike the Planck $P(q)$ model is not ruled out, allowing $A_L$ to vary reduces support for closed geometry. 
Compared to the seven-parameter new $P(q)$ model with $A_L = 1$, for P18 data, support for closed spatial hypersurfaces drops from 2.4$\sigma$ to 0.91$\sigma$, while for P18+lensing data the 1.5$\sigma$ support for closed geometry becomes 0.19$\sigma$ support for open geometry. We also note, from comparing P18 data results given in the two previous paragraphs for the seven-parameter tilted flat $\Lambda$CDM$+A_L$ model and for the eight-parameter tilted non-flat $\Lambda$CDM$+A_L$ Planck and new $P(q)$ models, that as one goes from the first to either of the latter models, $A_L$ values become consistent with unity while $\Omega_k$ values deviate from flat by only 1.4$\sigma$ and 0.91$\sigma$. So for P18 data both of the tilted non-flat models cannot be ruled out, while the seven-parameter tilted flat model with $A_L > 1$ at 2.7$\sigma$ and a lower DIC value indicates that the standard six-parameter tilted flat $\Lambda$CDM model with $A_L = 1$ is somewhat uncomfortably observationally squeezed. These and other results from our more comprehensive analyses and updated and more expansive data here support and extend the earlier results of Refs.\ \cite{Planck:2018vyg, Handley:2019tkm, DiValentino:2019qzk} that indicate that P18 data support either a closed geometry with $\Omega_k<0$ or $A_L>1$, both of which make the amount of CMB weak lensing higher than in the tilted flat $\Lambda$CDM model. References \cite{Planck:2018vyg, Handley:2019tkm, DiValentino:2019qzk} have also noted that in the tilted non-flat Planck $P(q)$ model, when P18 data and BAO$^\prime$/BAO data are jointly analyzed, evidence for closed geometry dissipates, as we have found here for updated BAO$^\prime$/BAO data as well as for non-CMB data (even though, as we have found here, P18 data and to a lesser extent BAO$^\prime$/BAO data and non-CMB data are all by themselves not inconsistent with closed geometry). 
References \cite{Handley:2019tkm, DiValentino:2019qzk} have suggested that this might be because of a problem (possibly undetected systematic errors) with BAO$^\prime$/BAO data (and so also with non-CMB data) and so these results (from combinations of these data and P18 data) should not be taken to mean that spatial hypersurfaces are flat. Along these lines, we note that Ref.\ \cite{Glanville:2022xes} presents results from a full-shape analysis (instead of the compressed BAO and $f\sigma_8$ data points analysis here) of the 6dFGS, BOSS, and eBOSS catalogs and finds $\Omega_k = -0.0041^{+0.0026}_{-0.0021}$ (see their Table 6) when P18 data (not exactly the same P18 data used here) are jointly analyzed with the full-shape galaxy sample data, which is still in favor of a closed geometry, contrary to the conclusions we present here. New and better data and improved analysis techniques will help to shed some light on this issue. It is useful to determine which of the data sets we use are able to set model-independent constraints on the cosmological parameter values. Here we only consider the P18, P18+lensing, P18+non-CMB, and P18+lensing+non-CMB data sets, as the other data sets we study have less constraining power. In our analyses here we consider only the six tilted models, flat and non-flat, with $A_L = 1$ and varying $A_L$. In order to determine whether the constraints are model independent, we compute the shifts in the cosmological parameter values between pairs of models and say that the cosmological constraints are model-independent if almost all the shifts are $<1\sigma$. Neither P18 data nor P18+lensing data are able to place model-independent constraints on the cosmological parameter values. In the case of P18 data, when we compare the flat model with the flat+$A_L$ model, we observe disagreements in the values of the derived parameters $H_0$, $\Omega_m$, and $\sigma_8$ at the $\sim 1\sigma$ confidence level. 
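The shift criterion just described can be sketched in a few lines. This is a minimal illustration only: the quadrature combination of the two uncertainties is our assumption (the exact convention is not spelled out above), and the numbers in the example are hypothetical, not values from this analysis.

```python
import math

def shift_sigma(mu1, sig1, mu2, sig2):
    """Shift between two parameter estimates, in units of the
    quadrature-combined 1-sigma uncertainties (assumed convention)."""
    return (mu1 - mu2) / math.sqrt(sig1 ** 2 + sig2 ** 2)

# Hypothetical illustration: two H0 estimates of 68.1 +/- 0.4 and
# 67.1 +/- 0.5 km/s/Mpc differ by about 1.6 sigma, i.e. a shift
# larger than the 1-sigma model-independence threshold.
print(round(shift_sigma(68.1, 0.4, 67.1, 0.5), 2))  # -> 1.56
```

Under this convention, a pair of models whose parameter estimates all differ by less than one combined sigma would be declared mutually model-independent.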
More significant are the discrepancies found when the flat model is compared with the tilted non-flat models. In particular for the Planck (new) $P(q)$ models, we get for $H_0$ a shift of $-3.5\sigma$ ($-2.8\sigma$), for $\Omega_m$ a shift of 2.6$\sigma$ (2.3$\sigma$), and for $\sigma_8$ a shift of $-2.2\sigma$ ($-1.7\sigma$). As expected, when the flat model is compared with the tilted non-flat models with varying $A_L$ the differences are smaller, though still significant. Comparing the flat model cosmological parameter values with the Planck (new) $P(q)$+$A_L$ cosmological parameter values we find for $H_0$ a shift of $-2.0\sigma$ ($-1.2\sigma$), for $\Omega_m$ a shift of 1.4$\sigma$ (0.89$\sigma$), and for $\sigma_8$ a shift of $-1.7\sigma$ ($-1.1\sigma$). Similar results are found when the flat+$A_L$ model is compared with the tilted non-flat models with and without a varying $A_L$ parameter. On the other hand, we do not find significant disagreements when we compare the cosmological parameter values of the four tilted non-flat models, the Planck $P(q)$ (+$A_L$) and the new $P(q)$ (+$A_L$) models, with each other, with the shifts always remaining below 1$\sigma$. The joint consideration of P18 data and (P18) lensing data reduces the disagreements discussed above though it is not possible to claim that P18+lensing data impose model-independent constraints. In this case when the cosmological parameter constraints for the flat and the flat+$A_L$ models are compared, the largest disagreement found is $-1.1\sigma$ for $\sigma_8$. When the flat model is compared with the Planck (new) $P(q)$ model, we get for $H_0$ a shift of $-1.5\sigma$ ($-1.5\sigma$), for $\Omega_m$ a shift of 1.4$\sigma$ (1.3$\sigma$), and for $\sigma_8$ a shift of $-1.2\sigma$ ($-1.1\sigma$), while when the flat model is compared with the Planck (new) $P(q)$+$A_L$ model, all differences remain $<1\sigma$. 
When we compare the cosmological parameter values obtained for the flat+$A_L$ model with those obtained for the Planck (new) $P(q)$ model, we observe disagreements at $-1.9\sigma$ ($-1.9\sigma$) for $H_0$ and 1.8$\sigma$ (1.8$\sigma$) for $\Omega_m$. As happens in the P18 analysis, in the P18+lensing analysis no significant differences are observed when we compare the Planck $P(q)$ (+$A_L$) and new $P(q)$ (+$A_L$) models with each other. It is the inclusion of non-CMB data which results in model-independent constraints. When P18 data are jointly analyzed with non-CMB data we do not find discrepancies $>1\sigma$. The most important differences in this case, in absolute value, are 0.78$\sigma$-0.96$\sigma$ ($\Omega_b{h^2}$) and 0.87$\sigma$-0.98$\sigma$ ($\sigma_8$) that are found when the results for models with a varying $A_L$ parameter are compared with the results obtained when $A_L = 1$. In the P18+lensing+non-CMB data case almost no significant model-to-model discrepancies are found. The largest ones are found when the varying $A_L$ models are compared with those with $A_L=1$. In particular, the two largest shifts are in $\ln(10^{10}A_s)$ (the largest one being in absolute value 1$\sigma$) and in $\sigma_8$ (the largest one being in absolute value 1.3$\sigma$). We note that P18+non-CMB data cosmological parameter constraints are slightly more model-independent than those determined using P18+lensing+non-CMB data. This is partly because (P18) lensing data changes the $A_L$ parameter value, which in turn causes small shifts in some of the other parameter values. Consequently, when (P18) lensing data are included in the mix we observe larger differences between the cosmological parameter values of the varying $A_L$ models and those of the $A_L=1$ models. 
Also, the error bars in the P18+lensing+non-CMB cases are smaller than those found in the P18+non-CMB analyses, and this contributes to increasing the significance of the differences in some of the cosmological parameter values in the P18+lensing+non-CMB cases. We may say that, as long as at least P18+non-CMB data are considered, if we start from the tilted flat $\Lambda$CDM and then vary $A_L$ and/or $\Omega_k$ (which implies the consideration of one of the non-flat $P(q)$s we have used in this work), we obtain model-independent constraints as a result, since the shifts in the cosmological parameter values remain within or just slightly above 1$\sigma$. In light of these results we can conclude that the P18+lensing+non-CMB data set is powerful enough to result in model-independent cosmological parameter constraints and, if these data are correct and include all systematic errors, this data set is able to accurately measure these parameters of the (reasonably accurate tilted flat $\Lambda$CDM approximation of the) real Universe. \section{Conclusion} \label{sec:conclusion} In what follows we summarize our main conclusions. If the data sets we use are correct and free from unknown systematics, three of the eight cosmological models are ruled out due to incompatibilities with some of the data sets employed in the analyses. The untilted non-flat $\Lambda$CDM (+$A_L$) models are unable to properly fit the P18 data while the tilted non-flat $\Lambda$CDM Planck $P(q)$ model is ruled out at 3$\sigma$ because it is not able to simultaneously accommodate P18 data and non-CMB (or some subset of these) data. Interestingly, the new $P(q)$ tilted non-flat inflation $\Lambda$CDM cosmological model, characterized by the primordial power spectrum in Eq.\ (\ref{eq:tilted_nonflat_new_PS}), does better than the Planck $P(q)$ model in being able to simultaneously accommodate P18 data and non-CMB data. 
In Sec.\ \ref{subsec:data_set_tensions} we study the mutual compatibility of pairs of data sets and in none of the cases studied is the level of tension high enough to rule out this model. The same holds true for the flat (+$A_L$) models and the Planck and new $P(q)$+$A_L$ models. P18 data do not break the geometrical $\Omega_m$-$H_0$-$\Omega_k$-$A_L$ degeneracy present in the Planck and the new $P(q)$ (+$A_L$) models. In the tilted non-flat $\Lambda$CDM new $P(q)$ model the P18 data analysis reveals a 2.4$\sigma$ evidence in favor of closed geometry with $\Omega_k=-0.033\pm 0.014$ and this model is {\it strongly} favored over the tilted flat $\Lambda$CDM model. In the tilted non-flat models when the $A_L$ parameter is allowed to vary the evidence in favor of closed geometry subsides yet they are either {\it strongly} favored (Planck $P(q)$+$A_L$) or {\it positively} favored (new $P(q)$+$A_L$) over the tilted flat model. The tilted flat $\Lambda$CDM+$A_L$ model better fits P18 data, compared to the tilted flat $\Lambda$CDM model fit, with an $A_L$ parameter value 2.7$\sigma$ larger than the theoretically expected value of $A_L=1$. These results update and strengthen those presented in Refs.\ \cite{Handley:2019tkm, DiValentino:2019qzk}; both options $\Omega_k<0$ and $A_L>1$ appear more indicative of a CMB weak lensing consistency issue. The joint consideration of P18 data and (P18) lensing data does not result in significant changes in the values of most primary cosmological parameters with respect to those from the P18 data alone analysis, the exceptions being $\Omega_k$ and $A_L$. From P18+lensing data in the seven parameter tilted non-flat new $P(q)$ model we find 1.5$\sigma$ evidence in favor of closed geometry with $\Omega_k=-0.0086\pm 0.0057$, while in the seven-parameter tilted flat $\Lambda$CDM+$A_L$ model we find that $A_L>1$ is favored by 1.8$\sigma$ with $A_L=1.073\pm 0.041$. 
In these single parameter extensions of the tilted flat $\Lambda$CDM model, the addition of (P18) lensing data to P18 data does not favor $\Omega_k<0$ over $A_L>1$ or vice-versa. However, in the eight-parameter tilted non-flat Planck (new) $P(q)$ $\Lambda$CDM+$A_L$ models we find from P18+lensing data that $\Omega_k=-0.005\pm 0.027$, closed at 0.19$\sigma$ ($\Omega_k=0.003\pm 0.016$, open at 0.19$\sigma$), and $A_L=1.09\pm 0.16$ ($A_L=1.13\pm 0.15$), favoring $A_L>1$ at 0.56$\sigma$ (0.87$\sigma$), highlighting, if anything, the CMB weak lensing consistency issue. On the other hand, the values of the derived parameters $\Omega_m$ and $H_0$ are greatly affected by the inclusion of lensing data, and the geometrical degeneracy, when $A_L=1$, is partially broken. According to the DIC values, P18+lensing data do not strongly discriminate between models. The two statistical estimators ($\log_{10} {\mathcal I}$ and $\sigma$) tell us that there are only moderate tensions between P18 data and lensing data in the tilted non-flat models, and even less tension in the tilted flat model. Comparing the constraints from P18 data and non-CMB data allows for a robust test of the consistency of cosmological parameter values determined from high- and low-redshift data, respectively. For these data, the statistical estimators we consider do not show tensions between P18 data and non-CMB data in the tilted flat model and in the varying $A_L$ models. However, in the new $P(q)$ model with $A_L = 1$ we find $\textrm{log}_{10}\mathcal{I}=-0.806$ and $\sigma =2.577$, which indicates a non-negligible tension between P18 data results and non-CMB data results, but this is not high enough to rule out this model. No significant evidence is found in favor of non-flat hypersurfaces within the non-flat models. On the other hand, when the $A_L$ parameter is allowed to vary, the $A_L>1$ option is strongly preferred over the $A_L=1$ one. 
From P18+non-CMB data, for the flat+$A_L$ model we get $A_L=1.201\pm 0.061$ (3.3$\sigma$), for the Planck $P(q)$+$A_L$ model we find $A_L=1.203\pm 0.062$ (3.3$\sigma$), and for the new $P(q)$+$A_L$ model we obtain $A_L=1.204\pm 0.061$ (3.3$\sigma$). Amongst the data sets we consider in this paper, the P18+lensing+non-CMB data set provides the tightest constraints on cosmological parameters, and pins down the cosmological parameter values of the standard tilted flat $\Lambda$CDM model with impressive precision. (We emphasize that in most of the discussion in this paper we assume these data are accurate.) In fact, due to the great constraining power of this data set, almost all cosmological parameter values determined using this data set in the six tilted models considered are compatible at 1$\sigma$ (actually at slightly above 1$\sigma$ for the $\sigma_8$ parameter). Therefore we may say that the cosmological parameter values determined using P18+lensing+non-CMB data are very close to being model independent. From the P18+lensing+non-CMB analysis it is clear that the evidence in favor of $A_L>1$ remains while the evidence in favor of non-flat hypersurfaces subsides. We get $A_L=1.089\pm 0.035$ for the flat+$A_L$ model, $A_L=1.090\pm 0.036$ for the Planck $P(q)$+$A_L$ model, and $A_L=1.088\pm 0.035$ for the new $P(q)$+$A_L$ model, with a 2.5$\sigma$ deviation from $A_L=1$ in all cases. It is interesting that the large (in absolute value) negative $\Omega_k$ values demanded by P18 data in order to deal with the lensing anomaly are not supported by non-CMB data (although the non-CMB data do mildly favor a closed geometry), and the larger $H_0$ and smaller $\Omega_m$ favored by non-CMB data (compared to those favored by P18 data) result in P18+lensing+non-CMB data favoring flat spatial hypersurfaces. 
This is at the heart of the tensions found, in the context of the tilted non-flat models, when comparing P18 data and BAO$^\prime$/BAO data cosmological parameter constraints and P18 data and non-CMB data constraints. It is interesting that the Hubble constant value measured using P18+lensing+non-CMB data in the tilted flat $\Lambda$CDM model, $H_0=68.09\pm 0.38$ km s$^{-1}$ Mpc$^{-1}$, is consistent with that from a median statistics analysis of a large compilation of Hubble constant measurements as well as with some local measurements. More and better cosmological data are needed in order to shed additional light on the issues studied in this paper. In the meantime the P18+lensing+non-CMB data set looks like the most reliable among all those considered and consequently we conclude that current observational data do not favor curved spatial geometry --- consistent with the standard tilted flat $\Lambda$CDM model --- but do favor $A_L>1$ and so somewhat uncomfortably squeeze the standard tilted flat $\Lambda$CDM model. \acknowledgements We thank H\'ector Gil-Mar\'in for useful discussions about BAO data. J.d.C.P.\ was supported by a FPI fellowship associated to the project FPA2016-76005-C2-1-P (MINECO). C.-G.P.\ was supported by National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No.\ 2020R1F1A1069250). B.R.\ was supported by DOE grant DE-SC0011840. \def{and }{{and }}
\section{Introduction} The \IEEEPARstart{I}{EEE} 802.11 \cite{Stand2012} WiFi standard has achieved massive market penetration due to its low cost, easy deployment, and high bandwidth. Also, with the recent emergence of naked-eye 3D mobile devices, such as Amazon's 3D Fire Phone, HTC's EVO 3D, LG's Optimus 3D, and Sharp's Lynx, mobile 3D video services are expected to become increasingly important for video service providers such as Youtube and Netflix. In contrast to traditional stereo single-view 3D video formats, multi-view 3D videos provide users with a choice of viewing angles and thus are expected to stimulate the development of innovative applications in television, movies, education, and advertising \cite{Signal2009}. Previous research on the deployment of 3D videos in wireless networks has mostly focused on improving 3D video quality for single-view 3D videos \cite{ICC2013, ICCF2013}, but multi-view 3D videos, which typically offer 5, 16, or 32 different viewing angles \cite{Meet2008}, have attracted much less attention. Multi-view 3D videos are expected to significantly increase the network load when all views are transmitted. One promising way to remedy the bandwidth issue is to exploit depth-image-based rendering (DIBR) in mobile clients, in which the idea is to synthesize the desired view from one left view and one right view \cite{Signal2009}, because adjacent left and right views with a sufficiently small angle usually share many similar scenes and objects. Several schemes for bit allocation between the texture and depth map \cite{Multimedia2012} and rate control with layered encoding for a multi-view 3D video \cite{ACMMultimedia2012} have been proposed to ensure that the quality of the synthesized view is very close to the original view (i.e., by minimizing total distortion or maximizing quality). Therefore, exploiting DIBR in clients eliminates the need to deliver every view of a video in a network. 
For practical situations, the computation overhead and extra energy consumption incurred by DIBR are small enough to be supported by current mobile devices \cite{ACMMultimedia2012, Processing2012}. For HTTP video streaming (e.g., Youtube and MPEG-DASH) with TCP \cite{Dash20, SIGCOMM2011}, instead of UDP, DIBR can be performed while the views are waiting in the streaming buffer before playback. Equipped with DIBR, only a subset of views needs to be multicasted in a network. However, multi-view 3D video multicast with DIBR brings new challenges in \textit{view selection} for WiFi networks due to view synthesis and wireless erasure. Firstly, the number of skipped views between the left and right views in DIBR needs to be constrained to ensure the quality of the synthesized view \cite{Signal2009}. In other words, since each transmitted view is multicasted to multiple clients, it is crucial to carefully select the transmitted views so that the desired view of each user can be synthesized from a left view and a right view close to each other. DIBR has a quality constraint \cite{Signal2009}, which specifies that the left and right views are allowed to be at most $R$ views away (i.e., $R-1$ views skipped between them) to ensure that every view between the left and right view can be successfully synthesized with good quality. Therefore, each new user cannot arbitrarily choose a left and a right view for synthesis with DIBR. The second challenge is that WiFi networks frequently suffer from wireless erasure, and different clients suffer from different loss probabilities due to varying channel conditions \cite{INFOCOM2007, IWCMC2014, IICSP2007}. In 2D and single-view 3D videos, the \textit{view loss probability} for each user can be easily derived according to the selected bit-rate, channel, and the setting of MIMO (e.g., antennas, spatial streams) in 802.11 networks. 
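As a rough sketch of one such derivation (this packet-level mapping is our assumption, not a formula from the paper): if a view frame is carried in $n$ packets, each erased independently with per-packet probability $e$ for the selected channel and bit-rate, and no recovery is applied, the view is lost whenever any of its packets is lost.

```python
def view_loss_probability(e, n_packets):
    # A view is lost if any of its packets is lost: independent
    # per-packet erasures, no FEC or retransmission assumed.
    return 1.0 - (1.0 - e) ** n_packets

# E.g., a 1% per-packet erasure rate over a 20-packet view frame:
print(round(view_loss_probability(0.01, 20), 4))  # -> 0.1821
```

Any per-packet erasure model can be substituted here; the point is only that the view loss probability $p_{i,c,r}$ is a simple deterministic function of the channel, bit-rate, and frame size.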
For multi-view 3D videos, however, when a video frame is lost for a user $i$ subscribing to a view $k_{i}$, we observe that the left and right views multicasted in the network to other users can natively serve to \textit{protect} view $k_{i}$, since user $i$ can synthesize the desired view from the two views using DIBR. However, the view synthesis will fail if only one left view or one right view is received successfully by the client. Therefore, a new research problem is to derive the \textit{view failure probability}, which is the probability that each user does not successfully receive and synthesize his/her desired view. In this paper, we first analyze the view failure probability and compare it with the traditional view loss probability, which is the probability that a view fails to be sent to a user without DIBR. We then propose the Multi-View Group Management Protocol (MVGMP) for multi-view 3D multicast. When a user joins the video multicast group, it can exploit our analytical result to request the access point (AP) to transmit the most suitable right and left views, so that the view failure probability is guaranteed to stay below a threshold. On the other hand, when a user leaves the video multicast group, the proposed protocol carefully selects and withdraws a set of delivered views to reduce the network load, so that the view failure probability for other users will not exceed the threshold. Bandwidth consumption can be effectively reduced since it is not necessary to deliver all the views subscribed by the clients. The rest of the paper is organized as follows. Section II describes the system model. Section III analyzes the view loss probability and view failure probability. Section IV presents the proposed protocol. Section V shows the simulation results, and Section VI concludes this paper. 
\section{System Model} This paper considers single-cell video multicast in IEEE 802.11 networks, where the views transmitted at different bit-rates and on different channels are associated with different loss probabilities \cite{INFOCOM2007, IWCMC2014, IICSP2007}. Currently, many video services, such as Youtube and Netflix, require reliable transmissions since Flash or MPEG-DASH \cite{Dash20} is exploited for video streaming. Nevertheless, the current IEEE 802.2 LLC protocol for IEEE 802.11 networks does not support reliable multicast transmissions \cite{Stand1998}, and error recovery therefore needs to be handled by Layer-3 reliable multicast standards, such as PGM \cite{PGM2001}. A multi-view 3D video can be encoded by various encoding schemes \cite{Circuits2007, Proceed2011}. Each view in a video consists of a texture image and a depth map of the corresponding viewing angle. The idea of DIBR is to synthesize a view according to its neighbor left view and neighbor right view. Since the angle between the neighbor left and right views is relatively small, it is expected that the video objects in the synthesized view can be warped (i.e., bent) from those in the two neighbor views. Effective techniques in computer vision and image processing have been proposed to ensure the video quality and limit the processing delay \cite{Processing2011}. For example, suppose there are three multicast views, i.e., views 1, 3, and 4, subscribed by all clients. In the original WiFi multicast without DIBR, the AP separately delivers each view in a multicast group to the corresponding clients, and the three views are separately recovered or retransmitted during packet losses. In contrast, our approach enables a subscribed view to be synthesized from the neighbor left and right views with DIBR, while the quality constraint in DIBR states that there are at most $R-1$ views between the neighbor left and right views, and $R$ can be set according to \cite{Signal2009}. 
When $R=3$ in the above example, the loss of view 3 can be recovered from views 1 and 4, since view 3 can be synthesized from them accordingly. In other words, a user can first try to synthesize the view according to the left view and right view when a subscribed view is lost, by joining the multicast groups corresponding to the left and right views. The intuition behind our idea is \textit{traffic protection} from neighbor views. A user can join more multicast groups to protect the desired view without extra bandwidth consumption in the network, because the nearby left view and right view may be originally multicasted to other users that subscribe to those views. However, more unnecessary traffic will be received if the desired view is not lost, and this trade-off will be explored in the next section. \section{Analytical Solution\label{sec: analysis}} In this section, we present the analytical results for multi-view 3D multicast in multi-rate multi-channel IEEE 802.11 networks with DIBR. We first study the scenario of single-view subscription for each user and then extend it to multi-view subscription. Table \uppercase\expandafter{\romannumeral1} summarizes the notations in the analysis. Based on the mathematical analysis, a new protocol is proposed in the next section to dynamically assign the proper views to each user. \subsection{Single View Subscription} In single-view subscription, each user $i$ specifies only one desired view $k_{i}$. Each view can be sent once or multiple times if necessary. Let $p_{i,c,r}$ represent the \textit{view loss probability}, which is the probability that user $i$ does not successfully receive a view under channel $c$ and bit-rate $r$. We define a new probability $P_{\varepsilon }^{(i)}$ for multi-view 3D videos, called the \textit{view failure probability}, which is the probability that user $i$ fails to receive and synthesize the desired view because the view and the nearby left and right views for synthesis are all lost. 
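The recoverability rule above can be sketched in a few lines. This is a minimal illustration with a hypothetical helper name (`can_obtain`), ignoring channels, bit-rates, and retransmissions: a desired view is usable if it is received directly, or if a received left view and a received right view bracket it within the DIBR quality constraint $R$.

```python
def can_obtain(received, k, R):
    # View k is usable if received directly, or if some received left
    # view k-a and right view k+b satisfy the DIBR quality constraint
    # a + b <= R (i.e., at most R-1 views skipped between them).
    if k in received:
        return True
    return any((k - a) in received and (k + b) in received
               for a in range(1, R)
               for b in range(1, R - a + 1))

# The example above: views 1, 3, and 4 are multicast with R = 3.
# If view 3 is lost, it can still be synthesized from views 1 and 4:
print(can_obtain({1, 4}, k=3, R=3))  # -> True
print(can_obtain({1, 4}, k=3, R=2))  # -> False: views 1 and 4 too far apart
```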
In other words, the view loss probability considers only one view, while the view failure probability jointly examines the loss events of multiple views. \begin{thm} In this theorem, we first explore the most generalized case studied in \cite{INFOCOM2005,MobiCom2005,Transac2009}, with each multi-radio client able to operate on multiple channels simultaneously. For single-view subscription, the view failure probability for user $i$ is \begin{align} & P_{\varepsilon }^{(i)}=\prod_{c\in C_{i},r\in D_i}p_{i,c,r}^{n_{k_{i},c,r}}\times \Bigg(\mathbf{1}\{k_{i}=1\}+\mathbf{1}\{k_{i}=M\}\Bigg) \notag \label{formula1} \\ & + \sum_{k=1}^{R-1}\Bigg((1-\prod_{c^{\prime }\in C_{i},r^{\prime }\in D_i}p_{i,c^{\prime },r^{\prime }}^{n_{k_{i}-k,c^{\prime },r^{\prime }}})\prod_{l=1}^{\min (R-k,M-k_{i})}\prod_{\underset{c_{1},c_{2}\in C_{i}}{r_{1},r_{2}\in D_i}} \notag \\ &\prod_{q=0}^{k-1}p_{i,c_{1},r_{1}}^{n_{k_{i}-q,c_{1},r_{1}}}p_{i,c_{2},r_{2}}^{n_{k_{i}+l,c_{2},r_{2}}}\mathbf{1}\{M-1\geq k_{i}\geq k+1\}\Bigg) \notag \\ &+\prod_{q=0}^{\min (R-1,k_{i}-1)}\prod_{c_{3}\in C_{i},r_{3}\in D_i}p_{i,c_{3},r_{3}}^{n_{k_{i}-q,c_{3},r_{3}}}\mathbf{1}\{M-1\geq k_{i}\geq 2\}\notag \end{align} where $\mathbf{1}\{\cdot \}$ denotes the indicator function. \end{thm} \textit{Proof:} The view failure event occurs when both of the following two conditions hold: 1) user $i$ does not successfully receive the desired view, and 2) user $i$ fails to receive any feasible set consisting of a left view and a right view with the view distance at most $R$ to synthesize the desired view. The probability of the first condition is $\prod_{c\in C_{i},r\in D_i}p_{i,c,r}^{n_{k_{i},c,r}}$ when the desired view $k_{i}$ of user $i$ is transmitted $n_{k_{i}}$ times. 
Note that if the desired view of user $i$ is view $1$ or view $M$, i.e., $k_{i}=1$ or $k_{i}=M$, user $i$ is not able to synthesize the desired view with DIBR, and thus the view failure probability can be directly specified by the first condition. For every other user $i$ with $M-1\geq k_{i}\geq 2$, we define a set of non-overlapping events $\{\mathcal{B}_{k}\}_{k=0}^{R-1}$, where $\mathcal{B}_{k}$ with $k>0$ is the event that the nearest left view received by user $i$ is $k_{i}-k$, but user $i$ fails to receive a feasible right view to synthesize the desired view. On the other hand, $\mathcal{B}_{0}$ is the event that user $i$ fails to receive any left view. Therefore, $\bigcup_{k=0}^{R-1}\mathcal{B}_{k}$ jointly describes all events for the second condition. \begin{table}[t] \caption{Notations in Analysis.} \label{table1} \begin{center} \begin{tabular}{|l|l|} \hline \textbf{Notation} & \textbf{Description} \\ \hline $R$ & Quality constraint of DIBR \\ \hline $M$ & Total number of views \\ \hline $k_i$ & The view desired by user $i$ \\ \hline $D_i$ & A set of the available data rates for user $i$ \\ \hline $C_i$ & A set of the available channels for user $i$ \\ \hline $n_{j,c,r}$ & Number of multicast transmissions for view $j$ \\ & transmitted by rate $r$ in the channel $c$ \\ \hline $p_{i,c,r}$ & The view loss probability for user $i$ under \\ & channel $c$ and rate $r$ \\ \hline $P_{\varepsilon}^{(i)}$ & The probability that user $i$ cannot obtain the \\ & desired view either by direct transmission or by \\ & DIBR \\ \hline $p_{c,r}^{\text{AP}}(n)$ & The probability that the AP multicasts a view $n$ times \\ & under the channel $c$ and the rate $r$ \\ \hline $\alpha_i$ & The percentage of the desired views that can be \\ & received or synthesized successfully by user $i$ \\ \hline $p_{\text{select}}$ & The probability that a user selects each view \\ \hline \end{tabular} \end{center} \end{table} For each event $\mathcal{B}_{k}$ with $k>0$, \begin{align} 
& P(\mathcal{B}_{k})=(1-\prod_{c^{\prime }\in C_{i},r^{\prime }\in D_i}p_{i,c^{\prime },r^{\prime }}^{n_{k_{i}-k,c^{\prime },r^{\prime }}})\prod_{l=1}^{\min (R-k,M-k_{i})} \notag \\ &\prod_{\underset{c_{1},c_{2}\in C_{i}}{r_{1},r_{2}\in D_i}}\prod_{q=0}^{k-1}p_{i,c_{1},r_{1}}^{n_{k_{i}-q,c_{1},r_{1}}}p_{i,c_{2},r_{2}}^{n_{k_{i}+l,c_{2},r_{2}}}\mathbf{1}\{M-1\geq k_{i}\geq k+1\} \notag \end{align} The first term $1-\prod_{c^{\prime }\in C_{i},r^{\prime }\in D_i}p_{i,c^{\prime },r^{\prime }}^{n_{k_{i}-k,c^{\prime },r^{\prime }}}$ indicates that user $i$ successfully receives view $k_{i}-k$, and the second term \begin{equation*} \prod_{l=1}^{\min (R-k,M-k_{i})}\prod_{\underset{c_{1},c_{2}\in C_{i}}{r_{1},r_{2}\in D_i}}\prod_{q=0}^{k-1}p_{i,c_{1},r_{1}}^{n_{k_{i}-q,c_{1},r_{1}}}p_{i,c_{2},r_{2}}^{n_{k_{i}+l,c_{2},r_{2}}} \end{equation*} means that user $i$ does not successfully receive any left view between $k_{i}-k$ and $k_{i}$ and any right view from $k_{i}+1$ to $k_{i}+\min (R-k,M-k_{i})$. It is necessary to include an indicator function in the last term since $\mathcal{B}_{k}$ would be a null event if $k_{i}\leq k$, i.e., user $i$ would have to successfully receive a view outside the view boundary. Finally, the event $\mathcal{B}_{0}$ occurs when no left view is successfully received by user $i$. \begin{align} &P(\mathcal{B}_{0})= \notag \\ &\prod_{q=0}^{\min (R-1,k_{i}-1)}\prod_{c_{3}\in C_{i},r_{3}\in D_i}p_{i,c_{3},r_{3}}^{n_{k_{i}-q,c_{3},r_{3}}}\mathbf{1}\{M-1\geq k_{i}\geq 2\} \notag \end{align} The theorem follows after summing over all events. $\blacksquare $ \newline \textbf{Remark:} The advantage of multi-view 3D multicast with DIBR can be clearly seen when comparing the view loss probability and the view failure probability. The latter probability attaches a new term (i.e., the probability of $\bigcup_{k=0}^{R-1}\mathcal{B}_{k}$) to the view loss probability, where a larger $R$ reduces the probability of the second term. 
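To make this comparison concrete, the following Monte Carlo sketch (hypothetical helper names; a simplified setting with a single channel and rate, one transmission per view, and i.i.d. losses, rather than the full multi-channel model of the theorem) estimates both probabilities side by side.

```python
import random

def can_obtain(received, k, R):
    # Received directly, or synthesizable from a received left/right
    # pair at distances a, b with a + b <= R (the DIBR constraint).
    if k in received:
        return True
    return any((k - a) in received and (k + b) in received
               for a in range(1, R)
               for b in range(1, R - a + 1))

def monte_carlo(p, M, k, R, trials=20000, seed=7):
    # Each of the M views is multicast once; the user misses each view
    # independently with probability p (single channel, single rate).
    rng = random.Random(seed)
    lost = failed = 0
    for _ in range(trials):
        received = {j for j in range(1, M + 1) if rng.random() >= p}
        lost += k not in received
        failed += not can_obtain(received, k, R)
    return lost / trials, failed / trials

loss, failure = monte_carlo(p=0.3, M=7, k=4, R=3)
# failure comes out roughly 0.065 here, versus a view loss near 0.3:
# the neighbor views protect the desired view at no extra network cost.
```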
Equipped with DIBR, therefore, the view failure probability is much smaller than the view loss probability; see Section \ref{sec: simulation}. For the case in which each single-radio client can access only one channel and one rate at any time, the theorem reduces to the following one.\footnote{$\displaystyle P_{\varepsilon }^{(i)}=\prod_{r\in D_i}p_{i,c,r}^{n_{k_{i},c,r}}\times \Bigg(\mathbf{1}\{k_{i}=1\}+\mathbf{1}\{k_{i}=M\}\Bigg)\\+\sum_{k=1}^{R-1}\Bigg((1-\prod_{r^{\prime }\in D_i}p_{i,c,r^{\prime }}^{n_{k_{i}-k,c,r^{\prime}}})\prod_{l=1}^{\min (R-k,M-k_{i})}\prod_{r_{1},r_{2}\in D_i}\prod_{q=0}^{k-1}\\p_{i,c,r_{1}}^{n_{k_{i}-q,c,r_{1}}}p_{i,c,r_{2}}^{n_{k_{i}+l,c,r_{2}}}\mathbf{1}\{M-1\geq k_{i}\geq k+1\}\Bigg)\\+\prod_{q=0}^{\min (R-1,k_{i}-1)}\prod_{r_{3}\in D_i}p_{i,c,r_{3}}^{n_{k_{i}-q,c,r_{3}}}\mathbf{1}\{M-1\geq k_{i}\geq 2\}$} \subsection{Multiple View Subscription} In the following, we explore the case of a user subscribing to multiple views. We first study the following two scenarios: 1) every view is multicasted; 2) only one view is delivered for every $\widetilde{R}$ views, $\widetilde{R}\leq R$, and thus it is necessary for a user to synthesize the other views accordingly. We first define $\alpha _{i}$ as the percentage of desired views that can be successfully received or synthesized by user $i$: \begin{equation*} \alpha _{i}=\frac{\sum_{k_{i}\in \mathcal{K}_{i}}\mathbf{1}\{\text{user }i\text{ can obtain view }k_{i}\}}{|\mathcal{K}_{i}|} \end{equation*} where $\mathcal{K}_{i}$ denotes the set of desired views for user $i$. By using Theorem 1, we immediately arrive at the following corollary. \begin{cor} \begin{align} \label{formula2} \mathbb{E}[\alpha_i]=\frac{\sum_{k_i\in\mathcal{K}_i}(1-P_{\varepsilon}^{(i)}(k_i))}{|\mathcal{K}_i|} \end{align} where $P_{\varepsilon}^{(i)}(k_i)$ is given in Theorem 1.
\end{cor} \textit{Proof:} \begin{align} \mathbb{E}[\alpha _{i}]=& \frac{\sum_{k_{i}\in \mathcal{K}_{i}}\mathbb{E}\mathbf{1}\{\text{user }i\text{ can obtain view }k_{i}\}}{|\mathcal{K}_{i}|} \notag \\ =& \frac{\sum_{k_{i}\in \mathcal{K}_{i}}(1-P_{\varepsilon }^{(i)}(k_{i}))}{|\mathcal{K}_{i}|} \notag \end{align} $\blacksquare $ Eq.~(\ref{formula2}) becomes more complicated as $|\mathcal{K}_{i}|$ increases. In the following, therefore, we investigate the asymptotic behavior of $\alpha _{i}$ for a large $|\mathcal{K}_{i}|$ and a large $M$ (i.e., $|\mathcal{K}_{i}|\leq M$). To find a closed-form solution, we first consider a uniform view subscription and assume that user $i$ subscribes to each view $j$ with probability $p_{\text{select}}=\frac{|\mathcal{K}_i|}{M}$, independently across all views, so that the average number of selected views is $|\mathcal{K}_i|$. Assume the AP multicasts view $j$ in channel $c$ at rate $r$ a total of $n$ times with probability $p_{c,r}^{\text{AP}}(n)$, independently across all views, channels, and rates. In the following, we first perform the asymptotic analysis to derive the theoretical closed-form solution, and we then present the insights from the theorem by comparing the results of single-view subscription and multi-view subscription. \begin{thm} In multi-view 3D multicast, \begin{align} \alpha _{i}(\mathcal{K}_i)\overset{a.s.}{\rightarrow }& (1-p_i)\Bigg\{\sum_{k=1}^{R}k(1-p_i)p_i^{k-1}+p_i^{R}\Bigg\} \\ \mathbb{E}[\alpha _{i}(\mathcal{K}_i)]\rightarrow & (1-p_i)\Bigg\{\sum_{k=1}^{R}k(1-p_i)p_i^{k-1}+p_i^{R}\Bigg\} \end{align} as $|\mathcal{K}_i|\rightarrow \infty $, where $p_i=\prod_{c\in C_{i},r\in D_i}\sum_{n}p_{c,r}^{\text{AP}}(n)p_{i,c,r}^{n}$. \end{thm} \textit{Proof:} We first derive the view loss probability for user $i$. Suppose that the AP multicasts a view $n$ times via channel $c$ and rate $r$. The probability that user $i$ cannot successfully receive the view is $p_{i,c,r}^{n}$.
Because the AP will multicast a view $n$ times via channel $c$ and rate $r$ with probability $p_{c,r}^{\text{AP}}(n)$, the probability that user $i$ cannot receive the view via channel $c$ and rate $r$ is $\sum_{n}p_{c,r}^{\text{AP}}(n)p_{i,c,r}^{n}$. Therefore, the view loss probability for user $i$ is the product of the view loss probabilities over all channels and rates, i.e., $\prod_{c\in C_{i},r\in D_i}\sum_{n}p_{c,r}^{\text{AP}}(n)p_{i,c,r}^{n}$. For brevity, we denote by $p_i$ the view loss probability for user $i$ in the remainder of the proof. Since the multicast order of views is not correlated with $\alpha _{i}$, we assume that the AP sequentially multicasts the views from view $1$ to view $M$. The scenario is now similar to a coin-tossing game, where we toss $M$ coins, and a face-up coin represents a view successfully received from the AP. Therefore, the probability that at least one coin is face-up is $1-p_i^{M}$. Now we mark a coin with probability $p_{\text{select}}$ if it is face-up, or if there exist an earlier tossed face-up coin and a later tossed face-up coin whose view distance is at most $R$. Since the above analogy captures the mechanism of direct reception and DIBR synthesis of views, the marked coins indicate the views selected by user $i$ that can be successfully acquired. To derive the closed-form asymptotic result, we exploit the \textit{delayed renewal reward process}, in which a cycle begins when a face-up coin appears and ends when the next face-up coin occurs. The reward is defined as the total number of marked coins. Specifically, let $\{N(t):=\sup \{n:\sum_{i=0}^{n}X_{i}\leq t\},t\geq 0\}$ denote the delayed renewal reward process with inter-arrival times $X_{n}$, where $X_{n}$ with $n\geq 1$ is the time difference between two consecutive face-up coins, and $X_{0}$ is the time when the first face-up coin appears.
Let $R(M)$ denote the total reward earned by time $M$, where $M$ corresponds to the number of views in a multi-view 3D video, and let $R_{n}$ denote the reward earned in cycle $n$. Then \begin{equation*} \frac{R(M)}{M}=\frac{\sum_{n=1}^{N(M)}R_{n}}{M}+o(1)~~~a.s. \end{equation*} where the $o(1)$ term comes from the fact that the difference between the total reward and $\sum_{n=1}^{N(M)}R_{n}$ has a finite mean. Recall that the reward earned in each cycle is the number of marked coins, \begin{numcases}{\mathbb{E}[R_n|X_n]=} p_{\textrm{select}}, & for $X_n > R$\nonumber\\ X_np_{\textrm{select}}, & for $X_n\leq R$ \end{numcases} since when $X_{n}\leq R$, $X_{n}$ coins can be marked (each with probability $p_{\text{select}}$) between two consecutive face-up coins, and thus the expected reward given $X_n$ is $X_np_{\text{select}}$. By contrast, only one coin can be marked with probability $p_{\text{select}}$ when $X_{n}>R$, and the expected reward given $X_n$ is only $p_{\text{select}}$. Since $X_{n}$ is a geometric random variable with parameter $1-p_i$, we have \begin{equation*} \mathbb{E}[X_{n}]=(1-p_i)+2p_i(1-p_i)+3p_i^{2}(1-p_i)+\cdots =\frac{1}{1-p_i} \end{equation*} and \begin{align} \mathbb{E}[R_{n}]=& p_{\text{select}}(1-p_i)+2p_{\text{select}}p_i(1-p_i)+\cdots \notag \\ & +Rp_{\text{select}}p_i^{R-1}(1-p_i)+p_{\text{select}}p_i^{R} \end{align} By Theorem 3.6.1 for renewal reward processes in \cite{ross}, \begin{align} \frac{\sum_{n=1}^{N(M)}R_{n}}{M}& \overset{a.s.}{\rightarrow }\frac{\mathbb{E}R_{n}}{\mathbb{E}X_{n}} \notag \\ & =p_{\text{select}}(1-p_i)\Bigg\{\sum_{k=1}^{R}k(1-p_i)p_i^{k-1}+p_i^{R}\Bigg\} \end{align} Let $U_{M}$ denote the number of views selected by user $i$. Therefore, \begin{equation*} \alpha _{i}=\frac{R(M)}{U_{M}}=\frac{R(M)}{M}\frac{M}{U_{M}} \end{equation*} Since $\frac{U_{M}}{M}\overset{a.s.}{\rightarrow }p_{\text{select}}$ by the strong law of large numbers, combining with Eqs.
(4)--(6), we obtain \begin{equation*} \alpha _{i}\overset{a.s.}{\rightarrow }(1-p_i)\Bigg\{\sum_{k=1}^{R}k(1-p_i)p_i^{k-1}+p_i^{R}\Bigg\} \end{equation*} The proof for convergence in mean is similar; it is only necessary to replace the almost-sure convergence in Eq. (6) by convergence in mean, which can be proven by the same theorem. $\blacksquare $ \textbf{Remark:} Under the above uniform view subscription, it can be observed that $\alpha _{i}$ is independent of $p_{\text{select}}$, implying that different users with different numbers of subscribed views will acquire the same percentage of views. Most importantly, $\alpha _{i}=1-p_i$ for multi-view 3D multicast without DIBR. In contrast, multi-view 3D multicast with DIBR effectively improves $\alpha _{i}$ by the factor $\sum_{k=1}^{R}k(1-p_i)p_i^{k-1}+p_i^{R}$. Since this term is strictly monotonically increasing in $R$, we have $\sum_{k=1}^{R}k(1-p_i)p_i^{k-1}+p_i^{R}> \sum_{k=1}^{1}k(1-p_i)p_i^{k-1}+p_i=1$ for $R>1$, which implies that the percentage of obtained views is strictly larger in statistical terms when the DIBR technique is utilized. In the following, we consider the second case, with only one view delivered for every $\widetilde{R}$ views, where the bandwidth consumption can be effectively reduced. Note that the following corollary is equivalent to Theorem 2 when $\widetilde{R}=1$.
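As a numerical sanity check on Theorem 2, the coin-marking process used in the proof can be simulated directly and compared against the closed form. The sketch below (with illustrative parameters of our own choosing) marks a view as obtainable if it is received, or if its nearest received left and right neighbors lie within view distance $R$:

```python
import random

def theoretical_alpha(p, R):
    # Closed form from Theorem 2: (1-p) * { sum_{k=1}^R k(1-p)p^{k-1} + p^R }
    return (1 - p) * (sum(k * (1 - p) * p ** (k - 1) for k in range(1, R + 1))
                      + p ** R)

def simulated_alpha(p, R, M, p_select, rng):
    # Toss M coins: a face-up coin is a view received from the AP (prob. 1-p).
    received = [rng.random() > p for _ in range(M)]
    left, last = [-1] * M, -1            # nearest received view at or left of j
    for j in range(M):
        if received[j]:
            last = j
        left[j] = last
    right, nxt = [-1] * M, -1            # nearest received view at or right of j
    for j in range(M - 1, -1, -1):
        if received[j]:
            nxt = j
        right[j] = nxt
    ok = total = 0
    for j in range(M):
        if rng.random() < p_select:      # view j is subscribed
            total += 1
            if received[j]:
                ok += 1                  # received directly
            elif left[j] >= 0 and right[j] >= 0 and right[j] - left[j] <= R:
                ok += 1                  # synthesizable by DIBR
    return ok / total if total else 0.0

rng = random.Random(1)
sim = simulated_alpha(p=0.3, R=3, M=200000, p_select=0.8, rng=rng)
theo = theoretical_alpha(p=0.3, R=3)
print(sim, theo)  # the two values agree to about two decimal places
```

For $p_i=0.3$ and $R=3$ the closed form gives $\alpha_i\approx 0.935$, and the empirical fraction of obtainable subscribed views converges to the same value, independently of $p_{\text{select}}$, as the remark above predicts.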
\begin{cor} If the AP transmits only one view with probability $p_{c,r}^{\text{AP}}(n)$ for every $\widetilde{R}$ views, \begin{align} \alpha _{i}(\mathcal{K}_i)\overset{a.s.}{\rightarrow }& \frac{(1-p_i)\Bigg\{\sum_{k=1}^{\lfloor \frac{R}{\widetilde{R}}\rfloor }\widetilde{R}k(1-p_i)p_i^{k-1}+p_i^{\lfloor \frac{R}{\widetilde{R}}\rfloor }\Bigg\}}{\widetilde{R}} \\ \mathbb{E}[\alpha _{i}(\mathcal{K}_i)]\rightarrow & \frac{(1-p_i)\Bigg\{\sum_{k=1}^{\lfloor \frac{R}{\widetilde{R}}\rfloor }\widetilde{R}k(1-p_i)p_i^{k-1}+p_i^{\lfloor \frac{R}{\widetilde{R}}\rfloor }\Bigg\}}{\widetilde{R}} \end{align} as $|\mathcal{K}_i|\rightarrow \infty $, where $p_i=\prod_{c\in C_{i},r\in D_i}\sum_{n}p_{c,r}^{\text{AP}}(n)p_{i,c,r}^{n}$. \end{cor} \section{Protocol Design} For multi-view 3D multicast, each view sent in a channel at a given rate is associated with a multicast group. Based on the analytical results in Section \ref{sec: analysis}, each client subscribes to a set of views by joining a set of multicast groups, in order to satisfy the view failure probability constraint. To support the dynamic join and leave of users and changes of the subscribed views, we present a new protocol, named the Multi-View Group Management Protocol (MVGMP), which exploits the theoretical results in Section III. The MVGMP extends the current IETF Internet standard for multicast group management, the IGMP \cite{IGMPRFC}, by adding a view selection feature to the protocol. The IGMP is a receiver-oriented protocol, where each user periodically and actively updates its joined multicast groups to the designated router (i.e., the AP in this paper). In the MVGMP, the AP maintains a table, named \textit{ViewTable}, for each video.
The table specifies the current multicast views and the corresponding bit-rates and channels for each view\footnote{Note that each view is allowed to be transmitted multiple times in different channels and rates if necessary, as described in Section \ref{sec: analysis}.}, and each multicast view is associated with a multicast address and the set of users that choose to receive the view. ViewTable is periodically broadcasted to all users in the WiFi cell. The MVGMP includes two control messages. The first message is Join, which contains the address of a new user and the corresponding requested view(s), which can be the subscribed views or the left and right views needed to synthesize a subscribed view. An existing user can also exploit this message to update its requested views. The second message is Leave, which includes the address of a leaving user and the views that no longer need to be received. An existing user can also exploit this message to stop receiving a view. Following the design rationale of the IGMP, the MVGMP is also a soft-state protocol, which implies that each user is required to periodically send the Join message to refresh its chosen views, so that unexpected connection drops will not create dangling states in ViewTable. \textbf{Join. }When a new member decides to join a 3D video multicast transmission, it first acquires the current ViewTable from the AP. The user then identifies the views to receive according to Theorem 1. Specifically, the client first examines whether ViewTable includes the subscribed view. If ViewTable does not include the subscribed view, or if the view loss probability for the subscribed view in the corresponding channel and bit-rate exceeds the threshold, the user adds the left view and the right view that lead to the maximal decrease in the view failure probability.
To properly select the views, the user can exhaustively search the view combinations using the theoretical results in Section III, because $R$ is small and thus only a small number of views near the desired view needs to be examined. However, a view cannot be added to a channel without sufficient bandwidth. When a multi-view 3D video starts, the current multicast views in ViewTable are usually not sufficient for a new user. In other words, when the view failure probability still exceeds the threshold after the user selects all transmitted left and right views within the range $R$ in ViewTable, the user needs to add the subscribed view to ViewTable with the most suitable channel and bit-rate to reduce the view failure probability. The left and right views are then chosen again according to the analytical results in Section \ref{sec: analysis} to avoid receiving too many views. After choosing the views to be received, a Join message is sent to the AP. The message contains the views that the user chooses to receive, and the AP adds the user to ViewTable accordingly. Note that, to avoid receiving too many views, a user can restrict the maximum number of left and right views that are allowed to be received and exploited for DIBR. \textbf{Leave and View Re-organization. }On the other hand, when a user decides to stop subscribing to a multi-view 3D video, it multicasts a Leave message to the AP and to any other user that receives at least one identical view $k_{i}$. Different from the Join message, the Leave message is also delivered to the other remaining users in order to minimize the bandwidth consumption, since each remaining user that receives $k_{i}$ will examine whether it can switch $k_{i}$ to another view $\overline{k}_{i}$ that is still transmitted in the network. In this case, the remaining user also sends a Leave message that includes view $k_{i}$, together with a Join message that contains view $\overline{k}_{i}$.
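A minimal sketch of the AP-side bookkeeping for these Join and Leave messages is given below; the data structure and method names are our own illustrative choices rather than part of the protocol specification:

```python
class ViewTable:
    """Per-video table kept by the AP: which views are multicast, on which
    (channel, rate) pairs, and which users subscribe to them (a sketch)."""

    def __init__(self):
        # view id -> {(channel, rate): set of subscriber addresses}
        self.entries = {}

    def join(self, user, view, channel, rate):
        # Handle a Join message for one requested view.
        group = self.entries.setdefault(view, {})
        group.setdefault((channel, rate), set()).add(user)

    def leave(self, user, view):
        # Handle a Leave message: if no subscriber remains for a view,
        # the AP stops delivering it and drops the entry.
        group = self.entries.get(view, {})
        for key in list(group):
            group[key].discard(user)
            if not group[key]:
                del group[key]
        if not group:
            self.entries.pop(view, None)

    def multicast_views(self):
        return sorted(self.entries)

table = ViewTable()
table.join("u1", view=5, channel=1, rate=13.0)
table.join("u2", view=5, channel=1, rate=13.0)
table.join("u2", view=7, channel=2, rate=6.5)   # e.g., a right view for DIBR
table.leave("u1", view=5)                        # u2 still subscribes to view 5
print(table.multicast_views())                   # prints [5, 7]
```

Dropping a view only when its subscriber set becomes empty mirrors the behavior described next: the AP stops delivering any view no longer required by a remaining user.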
If a view is no longer required by any remaining user, the AP stops delivering the view. Therefore, the MVGMP can effectively reduce the number of multicast views. \textbf{Discussion. }Note that the MVGMP supports the scenario of a user changing the desired view, by first sending a Leave message and then a Join message. Similarly, when a user moves and thus changes the channel condition, it sends a Join message to receive additional views if the channel condition deteriorates, or a Leave message to stop receiving some views if the channel condition improves. Moreover, when a user is handed over to a new WiFi cell, it first sends a Leave message to the original AP and then a Join message to the new AP. If the network connection to a user drops suddenly, the AP removes the information corresponding to the user from ViewTable when it does not receive a Join message (see the soft-state update explained earlier in this section) for a period of time. Therefore, the MVGMP also supports the silent leave of a user from a WiFi cell. Moreover, our protocol can be extended to multi-view subscription for each client by replacing Theorem 1 with Theorem 2. The fundamental Join/Leave/Reorganize operations remain the same since each view is maintained by a separate multicast group. \section{Simulation Results\label{sec: simulation}} In the following, we first describe the simulation setting and then compare the MVGMP with the current multicast scheme. \subsection{Simulation Setup} We evaluate the channel time of the MVGMP in a series of scenarios with the NS3 802.11n package. The channel time of a multicast scheme is the average time consumption of a frame in WiFi networks. To the best of the authors' knowledge, there has been no related work on channel time minimization for multi-view 3D video multicast in WiFi networks. For this reason, we compare the MVGMP with the original WiFi multicast scheme, in which all desired views are multicasted to the users.
We adopt the setting of a real multi-view 3D dataset, Book Arrival \cite{Meet2008}, and the existing multi-view 3D videos \cite{ICIP2007} with 16 views, where the texture video quantization step is 6.5, the depth map quantization step is 13, and the PSNR of the synthesized views in DIBR is around 37 dB \cite{Broadcasting2012}. The video rates for the reference texture image and its associated depth map are assigned as $r_t=600$ kbps and $r_d=200$ kbps, respectively, and thus $r=800$ kbps. The DIBR quality constraint is $R=3$. The threshold of each user is uniformly distributed in (0, 0.1]. Each user randomly chooses one preferred view according to one of three preference distributions: the Uniform, Zipf, and Normal distributions. There is no particularly hot view in the Uniform distribution. In contrast, the Zipf distribution, $f(k;s;N)=(\frac{1}{k^{s}})/\sum_{n=1}^{N}(\frac{1}{n^{s}})$, differentiates the desired views, where $k$ is the preference rank of a view, $s$ is the exponent characterizing the distribution, and $N$ is the number of views. The views with smaller ranks are major views and thus more inclined to be requested. We set $s=1$ and $N=|V|$ in this paper. In the Normal distribution, central views are accessed with higher probabilities. The mean is set as $|V|/2$, and the variance is set as 1 throughout this study. We simulate a dynamic environment with $50$ client users located randomly in the range of an AP. After each frame, there is an arrival and a departure of a user with probabilities $\lambda $ and $\mu $, respectively. In addition, a user changes the desired view with probability $\eta $. The default probabilities are $\lambda =0.2$, $\mu =0.3$, and $\eta =0.4$. TABLE II summarizes the simulation setting, consisting of an 802.11n WiFi network with a 20 MHz channel bandwidth and 13 orthogonal channels.
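The Zipf preference model above can be made concrete with a short sketch; the sampler below draws preference ranks from $f(k;s;N)$ with $s=1$ and $N=16$, matching the setting in this section (the function names are ours):

```python
import random

def zipf_pmf(s, N):
    # f(k; s, N) = (1/k^s) / sum_{n=1}^{N} (1/n^s), for ranks k = 1..N
    norm = sum(1.0 / n ** s for n in range(1, N + 1))
    return [(1.0 / k ** s) / norm for k in range(1, N + 1)]

def sample_rank(pmf, rng):
    # Inverse-transform sampling of a preference rank (1-indexed).
    u, acc = rng.random(), 0.0
    for k, p in enumerate(pmf, start=1):
        acc += p
        if u < acc:
            return k
    return len(pmf)

rng = random.Random(0)
pmf = zipf_pmf(s=1, N=16)
draws = [sample_rank(pmf, rng) for _ in range(10000)]
# Rank 1 is the hottest view; its empirical frequency tracks pmf[0] ~ 0.30.
print(pmf[0], draws.count(1) / len(draws))
```

With $s=1$ and $N=16$, roughly 30\% of the users request the top-ranked view, which is why the Zipf (and Normal) preference models concentrate demand on a few hot views in the results below.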
In the following, we first compare the performance of the MVGMP with the current WiFi multicast scheme in different scenarios and then compare the analytical and simulation results. \begin{table}[t] \caption{Simulation Settings.} \label{table2} \begin{center} \begin{tabular}{|l|l|} \hline \textbf{Parameter} & \textbf{Value} \\ \hline Carrier Frequency & 5.0 GHz \\ \hline Unit of Channel Time & 1 ms \\ \hline Channel Bandwidth & 20 MHz \\ \hline AP Tx Power & 20.0 dBm \\ \hline OFDM Data Symbols & 7 \\ \hline Subcarriers & 52 \\ \hline Video Bit-rate (per view) & 800 kbps \\ \hline Number of Orthogonal Channels & 13 \\ \hline Transmission Data Rates & \{6.5, 13, 19.5, 26, 39, 52, 58.5, 65\} \\ & Mbps defined in 802.11n spec. \cite{Stand2012} \\ \hline \end{tabular} \end{center} \vspace{-8pt} \end{table} The relationship between the setting of $R$ and the video quality has been studied in \cite{Signal2009, Processing2011}. Therefore, due to the space constraint, we focus on the channel time and view failure probability with different $R$ here; the corresponding video quality can be derived according to \cite{Signal2009, Processing2011}. \subsection{Scenario: Synthesis Range} Fig. 1 evaluates the MVGMP with different settings of $R$. Compared with the current WiFi multicast, the channel time is effectively reduced in the MVGMP as $R$ increases. Nevertheless, it is not necessary to set a large $R$ because the improvement becomes marginal as $R$ exceeds 3. This finding indicates that a small $R$ (i.e., limited quality degradation) is sufficient to effectively reduce the channel time in WiFi. \subsection{Scenario: Number of Views} Fig. 2 explores the impact of the number of views in a video. The channel time in both schemes increases when the video includes more views, because more views need to be transmitted. This result shows that the MVGMP consistently outperforms the original WiFi multicast scheme for different numbers of views in a video.
\subsection{Scenario: Number of Users in Steady State} Fig. 3 evaluates the channel time with different numbers of users in the steady state. We set $\lambda =\mu =0.25$, so that the expected number of users in the network remains the same. The channel time grows as the number of users increases. Nevertheless, the increment becomes marginal because most views will appear in \textit{ViewTable}, and thus more users subscribe to the same views in the video. \subsection{Scenario: Utilization Factor} Fig. 4 explores the impact of the network load. Here, we change the \textit{loading ratio} $\rho :=\frac{\lambda }{\mu }$, i.e., the ratio between the arrival probability $\lambda $ and the departure probability $\mu $. Initially, new multicast users continuously join the 3D video stream until the network contains 50 users. The results indicate that the channel time increases for both multicast schemes. Nevertheless, the MVGMP reduces the channel time by at least $40\%$ for all three distributions. \begin{figure*}[!t] \captionsetup[subfigure]{labelformat=empty} \centering \subfloat[Fig. 1: Synthesis Range]{\includegraphics[width=2.2in]{r_my.eps}} \hfil \subfloat[Fig. 2: Number of Views in a Video]{\includegraphics[width=2.2in]{view_my.eps}} \hfil \subfloat[Fig. 3: Number of Users]{\includegraphics[width=2.2in]{user_my.eps}} \subfloat[Fig. 4: Network Load]{\includegraphics[width=2.2in]{lr_my.eps}} \hfil \subfloat[Fig. 5: View Failure Probability]{\includegraphics[width=2.2in]{thrm1_my.eps}} \hfil \subfloat[Fig. 6: Ratio of Successfully Received Views]{\includegraphics[width=2.2in]{thrm2_my.eps}} \caption*{} \end{figure*} \subsection{Impact of User Preferences} From Fig. 1 to Fig. 4, the results clearly show that the Uniform distribution requires the most channel time compared with the Zipf and Normal distributions.
This is because in the Zipf and Normal distributions, users prefer a sequence of hot views, and those views thus have a greater chance to be synthesized from nearby views with DIBR. \subsection{Analytical Result} Fig. 5 and Fig. 6 compare the simulation results from NS3 and the analytical results of Theorem 1 and Theorem 2 for the Uniform distribution, where each user subscribes to each view with probability 0.8. The results reveal that the discrepancy between the simulation and analysis is very small. Most importantly, $\alpha $ increases for a larger $R$ since each user can synthesize and acquire a desired view from more candidate right and left views when the desired view is lost during the transmissions. \section{Conclusion} With the emergence of naked-eye 3D mobile devices, this paper proposes to incorporate DIBR for multi-view 3D video multicast in WiFi networks. We first investigated the merits of view protection via DIBR and showed that the view failure probability is much smaller than the view loss probability, while the multi-view subscription for each client was also studied. Thereafter, we proposed the Multi-View Group Management Protocol (MVGMP) to handle dynamic user joins and leaves for a 3D video stream and changes of the desired view for a client. The simulation results demonstrated that our protocol effectively reduces the bandwidth consumption and increases the probability for each client to successfully play back the desired view in a multi-view 3D video. \section{CoRR} To investigate the case where a user subscribes to a consecutive sequence of views, we adopt the following setting. The user subscribes to views according to a Zipf distribution, which means that the $k$-th view is subscribed with probability $\frac{c}{(k\textrm{ mod }m)^s}$, independently of other views. Figure 7 depicts this scenario using $m=5$ as an example. The following theorem serves as a counterpart of Theorem 2 in our main article.
\begin{thm} In the consecutive view subscription scenario described above, the ratio $\widetilde{\alpha}$ of the expected number of views that can be received or synthesized to the total number of subscribed views tends to \begin{align} &p\,\Bigg\{\sum_{j=1}^m\sum_{x=1}^R\Bigg[\Bigg(\sum_{l=1}^{m-j}\frac{c}{(j+l)^s}+\Bigg(\sum_{t=1}^m\frac{c}{t^s}\Bigg) \frac{x-(m-j)}{m} \notag \\ &+\sum_{l=1}^{[x-(m-j)] \textrm{mod }m}\frac{c}{l^s}\Bigg)p(1-p)^{x-1}\Bigg]\Bigg\} \Bigg/{\sum_{l=1}^m\frac{c}{l^s}} \notag \end{align} as $|\mathcal{K}_i|\rightarrow\infty$, where $p=1-\prod_{c\in C_{i},r\in D_i}\sum_{n}p_{c,r}^{\text{AP}}(n)p_{i,c,r}^{n}$. \end{thm} \textbf{Proof:} We follow arguments similar to those in our main article, which derive the theorem by reward theory. This time, however, we use a generalized reward process, the Markov reward process. Let $T_n$ denote the index of the $n$-th successfully received view, and let $G_n$ denote the state of the embedded Markov chain, which represents the ``position'' of the $n$-th renewal cycle. An example of this definition is shown in Fig. 7, in which the states of the first, second, and third cycles are $1,1,4$, respectively. The transition probability of $G_n$ is \begin{numcases}{p_{ij}=} \frac{p(1-p)^{j-i-1}}{1-(1-p)^m}, & $1\leq i<j\leq m$ \nonumber\\ \frac{p(1-p)^{m-i+j-1}}{1-(1-p)^m}, & $1\leq j\leq i \leq m$ \end{numcases} since, for example for $1\leq i<j\leq m$, the position change from $i$ to $j$ occurs if and only if there are $j-i$ plus a multiple of $m$ views between the nearest two successfully received views, which means \begin{small} \begin{equation} p_{ij}=p(1-p)^{j-i-1}+p(1-p)^{j-i-1+m}+p(1-p)^{j-i-1+2m}+\cdots\nonumber \end{equation} \end{small} The process $\{(G_n,T_n),n=1,2,3,\dots\}$ so defined is then a Markov renewal process. \begin{figure}[!t] \captionsetup{labelformat=empty} \includegraphics[width=3in, angle=270]{correx.eps} \vspace{-3cm} \caption{Fig.
7: Example of the consecutive view subscription scenario.} \end{figure} If we define the reward function of the process as $\rho (j,x)=$ \begin{small} \begin{numcases}{} \sum_{l=1}^x\mathbf{1}(\textrm{view in the $l$-th position has been subscribed}), &$x \leq R$ \nonumber\\ 0, &$x> R$ \nonumber \end{numcases} \end{small} then \begin{align} Z_{\rho}=\sum_{n:T_{n+1}<t}\rho(G_n,T_{n+1}-T_n)+\rho(G(t),X(t)) \end{align} is a Markov reward process, where $X(t)$ is the age process and $G(t)$ is the semi-Markov process associated with the Markov renewal process $\{(G_n,T_n),n=1,2,3,\dots\}$ of interest. The process so defined has a direct relation to our desired quantity $\widetilde{\alpha}$, namely \begin{align} \widetilde{\alpha}=\frac{\mathbb{E}Z_{\rho}}{S_t}\nonumber \end{align} where $S_t$ is the number of views subscribed by the user. We now apply Theorem 4.1 in \cite{soltani1998} to the right-hand side of the above equation. In the following, we use the same notations as in the article just mentioned.
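Before evaluating $h(j)$, the transition matrix $p_{ij}$ defined above can be checked numerically: each row sums to one, and the uniform distribution $\pi_i=1/m$ is stationary, as used later in the proof. A short sketch with illustrative parameters of our own choosing:

```python
def transition_matrix(p, m):
    # p_ij from the proof: geometric gaps between successes, wrapped mod m.
    norm = 1.0 - (1.0 - p) ** m
    P = [[0.0] * m for _ in range(m)]
    for i in range(1, m + 1):
        for j in range(1, m + 1):
            if i < j:
                P[i - 1][j - 1] = p * (1 - p) ** (j - i - 1) / norm
            else:  # 1 <= j <= i <= m wraps around the block of m views
                P[i - 1][j - 1] = p * (1 - p) ** (m - i + j - 1) / norm
    return P

p, m = 0.4, 5
P = transition_matrix(p, m)
row_sums = [sum(row) for row in P]
pi = [1.0 / m] * m                      # candidate stationary distribution
pi_next = [sum(pi[i] * P[i][j] for i in range(m)) for j in range(m)]
print(row_sums, pi_next)
```

For each row $i$, the exponents $j-i-1$ (for $j>i$) and $m-i+j-1$ (for $j\leq i$) together cover $0,\dots,m-1$ exactly once, so the rows, and by symmetry the columns, sum to one; the matrix is doubly stochastic, which is why the uniform distribution is stationary.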
\begin{align} h(j)&=\sum_{x=1}^{\infty}\rho(j,x)P(T_{n+1}-T_{n}=x\,|\,G_{n}=j)\nonumber\\ &=\sum_{x=1}^{\infty}\rho(j,x)p(1-p)^{x-1}\nonumber\\ &=\sum_{x=1}^R\Bigg[\Bigg(\sum_{l=1}^{m-j}\frac{c}{(j+l)^s}+\Bigg(\sum_{t=1}^m\frac{c}{t^s}\Bigg)\, \frac{x-(m-j)}{m}\!\nonumber\\ &+\sum_{l=1}^{[x-(m-j)] \textrm{mod }m}\frac{c}{l^s}\Bigg)p(1-p)^{x-1}\Bigg]\nonumber \end{align} Observe that the steady-state distribution of the chain $G_n$ is uniform, i.e., \begin{align} \pi_i=\frac{1}{m} \end{align} Now applying Theorem 4.1 in \cite{soltani1998}, we have \begin{align} \mathbb{E}Z_{\rho}(t)=pt\sum_{j=1}^{m}\pi_jh(j)+o(t)\nonumber \end{align} Hence, \begin{align} &\frac{\mathbb{E}Z_{\rho}(t)}{S_t}\rightarrow mp\frac{\sum_{j=1}^{m}\pi_jh(j)} {\sum_{l=1}^{m}\frac{c}{l^s}}\nonumber\\ &=p\Bigg\{\sum_{j=1}^m\sum_{x=1}^R\Bigg[\Bigg(\sum_{l=1}^{m-j}\frac{c}{(j+l)^s}+\Bigg(\sum_{t=1}^m\frac{c}{t^s}\Bigg) \frac{x-(m-j)}{m}\nonumber\\ &+\sum_{l=1}^{[x-(m-j)] \textrm{mod }m}\frac{c}{l^s}\Bigg)p(1-p)^{x-1}\Bigg]\Bigg\} \Bigg/{\sum_{l=1}^{m}\frac{c}{l^s}}\nonumber \end{align} $\blacksquare$ \bibliographystyle{IEEEtran}
\section{Introduction} Sparked by the ambition to dynamically manipulate microparticles in solution, there have been major advances in the development of experimental methods to control ultrasound acoustic fields at the microscale~\cite{Drinkwater2016, Bruus2011c}, for example, using bulk acoustic waves~\cite{Laurell2007, Augustsson2011, Leibacher2014}, surface acoustic waves~\cite{Ding2012, Tran2012, Riaud2015a}, transducer arrays~\cite{Courtney2014, Marzo2015, Baresch2016}, and 3d-printed transmission holograms~\cite{Melde2016}. The acoustic radiation force acting on particles in acoustic fields is used in these systems to manipulate particles and cells, thereby concentrating~\cite{Antfolk2015}, trapping~\cite{Wiklund2014}, separating~\cite{Lee2015}, and sorting~\cite{Grenvall2015} bioparticles and cells based on their acoustomechanical properties. It would be of considerable interest if these methods could be extended to the manipulation of solute concentration fields in microfluidic systems. Indeed, the ability to pattern and manipulate molecular concentration fields plays essential roles in several lab-on-a-chip applications and in controlled studies of biological processes such as development, inflammation, wound healing, and cancer, for which biomolecule gradients act as cellular signaling mechanisms~\cite{Keenan2008}. The standard approach to precisely generate specified concentration gradients is to use microfluidic networks~\cite{Takayama1999, Dertinger2001}, however, with limited temporal control. Here, we present a theoretical analysis of acoustic tweezing, patterning, and manipulation of solute concentration fields in microfluidics. We predict that acoustics offers a high degree of spatio-temporal control in these dynamical operations. Our study is predominantly motivated by the recent development of iso-acoustophoresis~\cite{Augustsson2016}, a microfluidic analog to density-gradient centrifugation. 
In iso-acoustophoresis cells are differentiated by their phenotype-specific acoustic impedance by observing their equilibrium position in an acoustically stabilized concentration gradient. The physics of this stabilization was only recently understood~\cite{Karlsen2016}, and an increased understanding of the ability of acoustics to shape and manipulate a concentration field is important to further develop the method. In this work, we explore the consequences of our recent theory of the acoustic force density acting on inhomogeneous fluids in acoustic fields~\cite{Karlsen2016}, a theory successfully validated by experiments, which explains the acoustic stabilization and relocation of inhomogeneous fluids observed in microchannels~\cite{Deshmukh2014}. We define an inhomogeneous fluid as a fluid with spatial variations in density and speed of sound caused by a varying concentration of a solute. Consequently, there is a direct correspondence between fluid inhomogeneities and solute concentration. We present a theoretical framework for analyzing acoustic manipulation of such concentration fields, and apply it to the special cases of rectangular-channel eigenmodes and Bessel-function acoustic vortices. In the former system, we present methods to obtain stable horizontal and vertical multi-layer stratification of the concentration field at the end of a flow-through channel starting from typical inlet conditions. In the latter system, we demonstrate acoustic tweezing and spatio-temporal manipulation of a local high-concentration region in a lower-concentration medium. This extends the realm of acoustic tweezing to include concentration fields. \begin{figure*}[!t] \centering \includegraphics[width=2.0\columnwidth]{fig_01.eps} \caption[]{\figlab{fig_01} (Color online) Sketches of the two model systems considered in this work for the controlled ultrasound manipulation of inhomogeneous fluids at the microscale. 
The concentration fields (white, high; black, low) are manipulated by the acoustic field excited in the fluid domain by the attached piezoelectric transducers. (a) Acoustic eigenmodes in the two-dimensional cross-section of a rectangular microchannel of width $W$ and height $H$ in the $yz$-plane. (b) Acoustic Bessel-function vortices in the two-dimensional $xy$-plane generated by a circular 16-element phased transducer array of inner radius $R$. Gravity acts in the negative $z$-direction.} \end{figure*} \section{Model systems} \seclab{model_systems} In~\figref{fig_01}, the two model systems considered in this work are introduced to provide the context necessary to appreciate the ensuing theoretical development. The implementation and design of the numerical models, and how they correspond to experimental conditions, are discussed in more detail in~\secref{numerical_model}. The first model system, shown in~\figref{fig_01}(a), is a long, straight, rectangular glass-silicon microchannel, placed along the $x$-axis, with a piezo-transducer glued underneath. By actuating the transducer at a resonance frequency of the cavity, an acoustic standing-wave field can be established in the channel cross-section in the $yz$-plane, whose width $W$ and height $H$ are typically a few hundred $\upmu\textrm{m}$, leading to fundamental resonance frequencies of order 1--10~MHz. These systems are well-characterized~\cite{Barnkob2010, Augustsson2011, Muller2013, Oever2015, Lamprecht2016} and used in various biomedical applications, for example, the enrichment of circulating tumor cells in blood~\cite{Augustsson2012, Antfolk2015}. The second model system, shown in~\figref{fig_01}(b), consists of a transducer array with 16 elements enclosing a circular fluid chamber. It is inspired by, and closely resembles, the experimental systems in Refs.~\cite{Bernassau2013, Courtney2013, Courtney2014, Riaud2015}. 
The radius of these chambers is typically around 1~mm, and the chambers may have between 8 and 64 transducer elements operating at MHz frequencies. By controlling the amplitude and phase of each transducer, approximate Bessel-function acoustic vortices may be generated by a superposition of waves, and then used to trap and move microparticles~\cite{Courtney2013, Courtney2014}. \section{Theory} \seclab{theory} The recently developed theory for the acoustic force density acting on inhomogeneous fluids in acoustic fields~\cite{Karlsen2016} is based on the separation of time scales between the fast acoustic time scale $t$ and the slow hydrodynamic time scale $\tau$. In general, the large separation of time scales ($\tau\sim10^5t$) allows the acoustic fields, oscillating at the fast time scale $t$, to be solved for while keeping the hydrodynamic degrees of freedom fixed at each instant in time $\tau$ on the slow time scale. Due to the inhomogeneity in the fluid medium, the resulting acoustic field yields a divergence in the time-averaged acoustic momentum-flux-density tensor~\cite{Karlsen2016}, and this is the origin of the acoustic force density $\vec{f}_\mathrm{ac}$, which enters the slow-time-scale hydrodynamics as an external driving force. The inhomogeneity in the fluid medium is caused by the solute concentration field $s(\vec{r},\tau)$. The fluid density $\rho_0$, compressibility $\kappa_0$, and dynamic viscosity $\eta_0$ are all functions of the solute concentration $s$, and thus functions of space and time as the concentration field evolves by advection and diffusion, \beq{inhoms} \rho_0=\rho_0\big(s(\vec{r},\tau)\big) , \, \kappa_0=\kappa_0\big(s(\vec{r},\tau)\big) , \, \eta_0=\eta_0\big(s(\vec{r},\tau)\big) . \end{equation} The specific dependence of $\rho_0$, $\kappa_0$, and $\eta_0$ on the concentration $s$ depends on the solute used to establish the inhomogeneity, e.g. 
iodixanol (OptiPrep), polysucrose (Ficoll), or colloidal nanoparticles (Percoll), as commonly used in density-gradient centrifugation. In this work we consider solutions of iodixanol, for which we have measured the fluid properties as functions of concentration~\cite{Augustsson2016}. \subsection{Slow-time-scale hydrodynamics} The hydrodynamics on the slow time scale $\tau$ is governed by the momentum- and mass-continuity equations for the fluid velocity $\vec{v}(\vec{r},\tau)$ and pressure $p(\vec{r},\tau)$, as well as the advection-diffusion equation for the concentration field $s(\vec{r},\tau)$ of a solute with diffusivity $D$, \begin{subequations} \eqlab{DynamicsSlow} \bal \eqlab{NSSlow} \partial_\tau (\rho_0 \vec{v}) &= \nablabf\cdot \big[ \bm{\sigma} - \rho_0\vec{v}\vvv \big] + \vec{f}_\mathrm{ac} + \rho_0 \vec{g} , \\ \eqlab{ContSlow} \partial_\tau \rho_0 &= - \nablabf\cdot \big( \rho_0 \vec{v} \big) , \\ \eqlab{DiffusionSlow} \partial_\tau s &= - \nablabf\cdot \big[ - D \boldsymbol{\nabla} s + \vec{v} s \big] . \eal \end{subequations} Here, $\vec{g}$ is the acceleration due to gravity, and $\bm{\sigma}$ is the fluid stress tensor, given by \bal \bm{\sigma} = - p \, \mathbf{I} + \eta_0 \Big[ \boldsymbol{\nabla} \vec{v} + (\boldsymbol{\nabla} \vec{v})^\mathrm{T} \Big] + \Big(\eta_0^\mathrm{b} -\frac{2}{3}\eta_0 \Big) (\nablabf\cdot\vec{v}) \, \mathbf{I} , \eal where the superscript $\mathrm{T}$ indicates tensor transposition, and $\eta_0$ and $\eta_0^\mathrm{b}$ are the dynamic and bulk viscosity, respectively. These equations constitute an advection-diffusion flow problem with an external forcing due to the acoustic and gravitational force densities $\vec{f}_\mathrm{ac}$ and $\rho_0 \vec{g}$, both appearing on the right-hand side of the momentum equation~\eqrefnoEq{NSSlow}. 
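To make the conservative flux structure of the transport equation~\eqrefnoEq{DiffusionSlow} and the no-flux boundary condition concrete, the following minimal one-dimensional finite-difference sketch may be helpful. It is our own illustrative construction, not the weak-form finite-element model used in this work: the grid, the time step, and the quiescent velocity field are illustrative assumptions, and the acoustic forcing is omitted.

```python
# Minimal 1D sketch of the conservative advection-diffusion equation for s.
# Illustrative only: grid, time step, and v = 0 are our assumptions here.
import numpy as np

def advect_diffuse_step(s, v, D, dy, dt):
    """One explicit update of  d_tau s = -d/dy( -D ds/dy + v s )  on cell averages."""
    v_face = 0.5 * (v[1:] + v[:-1])                # velocity at interior cell faces
    s_face = 0.5 * (s[1:] + s[:-1])                # concentration at interior faces
    flux = -D * np.diff(s) / dy + v_face * s_face  # diffusive + advective flux
    flux = np.concatenate(([0.0], flux, [0.0]))    # no-flux walls: n . grad(s) = 0
    return s - dt * np.diff(flux) / dy

W, ny, D = 375e-6, 200, 0.9e-10                    # channel width, cells, iodixanol diffusivity
dy = W / ny
s = np.where(np.arange(ny) < ny // 2, 0.30, 0.10)  # 30%/10% iodixanol step profile
v = np.zeros(ny)                                   # quiescent fluid in this sketch
mass0 = s.sum() * dy
for _ in range(1000):                              # 1 s of pure diffusion at dt = 1 ms
    s = advect_diffuse_step(s, v, D, dy, 1e-3)
assert abs(s.sum() * dy / mass0 - 1.0) < 1e-9      # total solute is conserved
```

Because the update subtracts a discrete flux divergence with vanishing wall fluxes, the integrated concentration is conserved to rounding error, mirroring the conservation check applied to the full model in Sec.~IV.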
\subsection{The acoustic force density} The acoustic force density $\vec{f}_\mathrm{ac}$ acting on the fluid on the slow hydrodynamic time scale $\tau$ was derived in Ref.~\cite{Karlsen2016} from a divergence in the time-averaged acoustic momentum-flux-density tensor induced by continuous spatial variations in the fluid parameters of density $\rho_0$ and compressibility $\kappa_0$, \beq{facFinal} \vec{f}_\mathrm{ac} = - \frac14 |p_1|^2 \boldsymbol{\nabla}\kappa_0 - \frac14 |\vvv_1|^2 \boldsymbol{\nabla}\rho_0 . \end{equation} Here, $p_1$ and $\vvv_1$ are the acoustic pressure and velocity field, respectively, assumed to be time-harmonic first-order perturbations of the hydrodynamic degrees of freedom. Because the compressibility $\kappa_0$ is difficult to measure directly, it is often more convenient to work with the fluid density $\rho_0$ and speed of sound $c_0$, both of which are readily measured as functions of concentration. Using that $\kappa_0=1/(\rho_0 c_0^2)$, we find \beq{dkapO} \boldsymbol{\nabla} \kappa_0 = \boldsymbol{\nabla} \Big( \frac{1}{\rho_0 c_0^2} \Big) = - \kappa_0 \frac{\boldsymbol{\nabla} \rho_0}{\rho_0} - 2\kappa_0\frac{\boldsymbol{\nabla} c_0}{c_0} , \end{equation} and the expression~\eqrefnoEq{facFinal} becomes \beq{fac_crho} \vec{f}_\mathrm{ac} = \frac{1}{4} \Big[ \kappa_0 |p_1|^2 - \rho_0 |\vvv_1|^2 \Big] \frac{\boldsymbol{\nabla} \rho_0}{\rho_0} + \frac12 \kappa_0 |p_1|^2 \frac{\boldsymbol{\nabla} c_0}{c_0} . \end{equation} In the weakly inhomogeneous limit where the variations in density $\rho_0$ and speed of sound $c_0$ are small, we introduce the dimensionless relative deviations $\hat{\rho}(\vec{r},\tau)$ and $\hat{c}(\vec{r},\tau)$, and write \begin{subequations} \eqlab{weakInhom} \bal \rho_0(\vec{r},\tau) = \rho_0^{(0)}[1 + \hat{\rho}(\vec{r},\tau)] , \quad |\hat{\rho}(\vec{r},\tau)| \ll 1 , \\ c_0(\vec{r},\tau) = c_0^{(0)}[1 + \hat{c}(\vec{r},\tau)] , \quad |\hat{c}(\vec{r},\tau)| \ll 1 . 
\eal \end{subequations} Here, the superscript $(0)$ indicates zeroth-order in the inhomogeneity $\hat{\rho}$ and $\hat{c}$. To first order in $\hat{\rho}$ and $\hat{c}$ the force density~\eqrefnoEq{fac_crho} then becomes \bal \eqlab{facWeakInhom} \vec{f}_\mathrm{ac}^{(1)} = \frac14 \Big[ \kappa_0^{(0)} |p_1^{(0)}|^2 - \rho_0^{(0)} |\vvv_1^{(0)}|^2 \Big] \boldsymbol{\nabla}\hat{\rho} + \frac12 \kappa_0^{(0)} |p_1^{(0)}|^2 \boldsymbol{\nabla}\hat{c} . \eal In this expression, the acoustic fields $p_1^{(0)}$ and $\vvv_1^{(0)}$ are zeroth order in $\hat{\rho}$ and $\hat{c}$, and consequently, the fields are obtained as solutions of the homogeneous-fluid wave equation. This constitutes a significant simplification in applications of the theory, as will be shown next. Let $p_\mathrm{a}$ denote the acoustic pressure amplitude, $\omega$ the angular acoustic frequency, and $k_0^{(0)}=\omega/c_0^{(0)}$ the homogeneous-fluid wave number. The time-harmonic acoustic fields $p_1^{(0)}$ and $\vvv_1^{(0)}$ may then be written in terms of a non-dimensionalized pressure field $\hat{p}_1^{(0)}(\vec{r},\tau)$, as \beq{fieldsGeneral} p_1^{(0)} = p_\mathrm{a} \hat{p}_1^{(0)} \mathrm{e}^{-\mathrm{i}\omega t}, \quad \vvv_1^{(0)} = \frac{ - \mathrm{i} p_\mathrm{a}}{k_0^{(0)} c_0^{(0)} \rho_0^{(0)}} \boldsymbol{\nabla} \hat{p}_1^{(0)} \mathrm{e}^{-\mathrm{i}\omega t}. 
\end{equation} Inserting this into~\eqref{facWeakInhom} and introducing the homogeneous-fluid oscillation-time-averaged acoustic energy density $E_\mathrm{ac}^{(0)}=\frac{1}{4}\kappa_0^{(0)}p_\mathrm{a}^2$, the acoustic force density $\vec{f}_\mathrm{ac}^{(1)}$ can be rewritten as \begin{subequations} \eqlab{facRC} \bal \eqlab{facRCa} \vec{f}_\mathrm{ac}^{(1)} = E_\mathrm{ac}^{(0)} \Big[ R(\vec{r},\tau)\: \boldsymbol{\nabla} \hat{\rho} + C(\vec{r},\tau)\: \boldsymbol{\nabla} \hat{c} \Big] , \eal where we have introduced the dimensionless field-shape functions $R(\vec{r},\tau)$ and $C(\vec{r},\tau)$, given by \bal R(\vec{r},\tau) &= |\hat{p}_1^{(0)}|^2 - \big(k_0^{(0)}\big)^{-2} |\boldsymbol{\nabla} \hat{p}_1^{(0)}|^2 , \\ C(\vec{r},\tau) &= 2 |\hat{p}_1^{(0)}|^2 . \eal \end{subequations} The field-shape functions $R(\vec{r},\tau)$ and $C(\vec{r},\tau)$ depend on the shape of the homogeneous-fluid acoustic pressure field $\hat{p}_1^{(0)}(\vec{r},\tau)$, often known analytically, and may thus be varied in space and time. Consequently, our theoretical framework suggests that a high level of spatio-temporal control of fluid inhomogeneities can be achieved. \subsection{Eigenmodes in a rectangular microchannel} Consider a long, straight, hard-walled microchannel of width $W$ and height $H$, with the aspect ratio $\alpha=H/W$. The acoustic fields obtained at resonance conditions in the two-dimensional channel cross-section take the form of eigenmode solutions to the Helmholtz wave equation with hard-wall boundary conditions. 
Choosing the fluid domain in the $yz$-plane defined by $0<y<W$ and $0<z<H$, and introducing the normalized coordinates ${\hat{y}{}}=\frac{\pi}{W} y$ and ${\hat{z}{}} = \frac{\pi}{H} z$, the eigenmodes $\hat{p}_1^{(0)}({\hat{y}{}},{\hat{z}{}})$ are \begin{subequations} \eqlab{fieldsRect} \bal \hat{p}_1^{(0)} &= \cos( n {\hat{y}{}}) \cos( m {\hat{z}{}}) , \\ \eqlab{fres_nm} \mathrm{with} \quad f_{nm} &= \frac{\omega_{nm}}{2\pi} = \frac{c}{2} \sqrt{ \Big( \frac{n}{W} \Big)^2 + \Big( \frac{m}{H} \Big)^2}. \eal \end{subequations} Here, $n=0,1,2,...$ and $m=0,1,2,...$ are the mode numbers in the $y$- and $z$-direction, respectively, and $f_{nm}$ is the resonance frequency of the $nm$-mode. Inserting the eigenmode solution~\eqrefnoEq{fieldsRect} into~\eqref{facRC}, one obtains the acoustic force density acting on the fluid in the $nm$-mode. After some algebra, the field-shape functions $R_{nm}({\hat{y}{}},{\hat{z}{}})$ and $C_{nm}({\hat{y}{}},{\hat{z}{}})$ take the form, \begin{subequations} \eqlab{RCeigen} \bal R_{nm}({\hat{y}{}},{\hat{z}{}}) &= \frac12 \Bigg\lbrace \frac{n^2}{n^2 + m^2 \alpha^{-2}} \Big[ \cos(2n{\hat{y}{}}) - \cos(2m{\hat{z}{}}) \Big] \nonumber \\ & \quad\quad + \cos(2n{\hat{y}{}})\cos(2m{\hat{z}{}}) + \cos(2m{\hat{z}{}}) \Bigg\rbrace , \\ C_{nm}({\hat{y}{}},{\hat{z}{}}) &= \frac12 \Big[1 + \cos(2n{\hat{y}{}}) \Big] \Big[1 + \cos(2m{\hat{z}{}}) \Big] . \eal \end{subequations} In the horizontal half-wave resonance $(n,m)=(1,0)$, we obtain $R_{10}=\cos(2{\hat{y}{}})$ and $C_{10}=1+\cos(2{\hat{y}{}})$, in agreement with Ref.~\cite{Karlsen2016}, given an appropriate change of the coordinate system. \subsection{Bessel-function acoustic vortex fields} It has been demonstrated that transducer arrays can be used to generate acoustic vortices in fluid-filled chambers~\cite{Courtney2013, Courtney2014, Riaud2015, Hefner1999}. 
By controlling the amplitude and phase of each transducer in a circular array, one can generate approximate Bessel-function pressure fields of the form~\cite{Courtney2013}, \beq{fieldsBessel} \hat{p}_1^{(0)} = J_l(k_0^{(0)} r) \mathrm{e}^{\mathrm{i} l \theta} . \end{equation} Here, we are using cylindrical polar coordinates $(r,\theta,z)$ with the origin at the center of the Bessel function. $J_l$ is the $l$'th-order Bessel function of the first kind, and $l$ is the number of $2\pi$ phase shifts around the axis of the vortex, often referred to as the topological charge. The acoustic force density acting on an inhomogeneous fluid in the acoustic vortex is obtained by inserting~\eqref{fieldsBessel} into~\eqref{facRC}. Introducing the normalized radial coordinate ${\hat{r}{}}=k_0^{(0)} r$, and making use of the recurrence relations $\frac{2 n}{{\hat{r}{}}} J_n({\hat{r}{}}) = J_{n-1}({\hat{r}{}}) + J_{n+1}({\hat{r}{}})$ and $2 J_n^\prime({\hat{r}{}}) = J_{n-1}({\hat{r}{}})-J_{n+1}({\hat{r}{}})$, the field-shape functions $R_l({\hat{r}{}})$ and $C_l({\hat{r}{}})$ of the $l$'th-order vortex take the form, \begin{subequations} \eqlab{RCbessel} \bal \eqlab{RCbesselA} R_l({\hat{r}{}}) &= [J_l({\hat{r}{}})]^2 - \frac12 [J_{l-1}({\hat{r}{}})]^2 - \frac12 [J_{l+1}({\hat{r}{}})]^2 , \\ C_l({\hat{r}{}}) &= 2 [J_l({\hat{r}{}})]^2 . \eal \end{subequations} \section{Numerical model} \seclab{numerical_model} In this section we present the implementation and design of our numerical models. Emphasis is put on the considerations that went into designing numerical models describing actual experimental conditions, which may be reproduced with the setups introduced in~\secref{model_systems} and sketched in \figref{fig_01}. \subsection{Numerical implementation} In the numerical models of the slow-time-scale hydrodynamics, the coupled field equations~\eqrefnoEq{DynamicsSlow} are implemented and solved in weak form using the finite-element solver COMSOL Multiphysics~\cite{COMSOL52}. 
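As an elementary consistency check of the analytical field-shape functions, separate from the COMSOL model itself, the eigenmode expressions in~\eqref{RCeigen} are simple to tabulate numerically. The short sketch below (assuming NumPy) verifies that the general $nm$-mode expression reduces to the known 10-mode result $R_{10}=\cos(2{\hat{y}{}})$ and $C_{10}=1+\cos(2{\hat{y}{}})$:

```python
# Consistency check of the eigenmode field-shape functions (our own script,
# not part of the COMSOL model); assumes NumPy.
import numpy as np

def R_nm(yh, zh, n, m, alpha):
    """Field-shape function R_nm on normalized coordinates yh, zh in [0, pi]."""
    pre = n**2 / (n**2 + m**2 / alpha**2)      # note: the case n = m = 0 is excluded
    return 0.5 * (pre * (np.cos(2*n*yh) - np.cos(2*m*zh))
                  + np.cos(2*n*yh) * np.cos(2*m*zh) + np.cos(2*m*zh))

def C_nm(yh, zh, n, m):
    """Field-shape function C_nm."""
    return 0.5 * (1 + np.cos(2*n*yh)) * (1 + np.cos(2*m*zh))

Y, Z = np.meshgrid(np.linspace(0, np.pi, 101), np.linspace(0, np.pi, 101))
alpha = 150.0 / 375.0                          # aspect ratio H/W of the modeled channel
# the general nm-expression reduces to the known 10-mode result:
assert np.allclose(R_nm(Y, Z, 1, 0, alpha), np.cos(2*Y))
assert np.allclose(C_nm(Y, Z, 1, 0), 1 + np.cos(2*Y))
```

The same tabulated functions can be used to plot the force-density landscape of any $nm$-mode before running the full simulation.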
We consider the limit of weakly inhomogeneous fluids and use the analytical expression~\eqrefnoEq{facRCa} for the acoustic force density $\vec{f}_\mathrm{ac}^{(1)}$, with the field-shape functions of the rectangular-channel eigenmodes and of the acoustic vortex fields given in~\eqsref{RCeigen}{RCbessel}, respectively. For numerical stability, a logarithmic concentration field $\hat{s}$, with $s=s_0 \exp(\hat{s})$, is used as the independent concentration variable. The boundary conditions imposed on the slow-time-scale velocity and concentration fields $\vec{v}(\vec{r},\tau)$ and $s(\vec{r},\tau)$ at the boundary $\partial\Omega$ of the fluid domain $\Omega$ with normal vector $\vec{n}$ are the standard no-slip and no-flux conditions, \bal \eqlab{BCs} \vec{v}=\vec{0}, \quad \vec{n}\cdot\boldsymbol{\nabla} s = 0 , \quad \mathrm{for} \ \vec{r}\in\partial\Omega . \eal Several convergence tests were carried out to ensure numerical convergence. For example, the integrated concentration was conserved with a maximum relative error of $2\times10^{-3}$ at all times. \subsection{Modeling the fluid inhomogeneity} We model aqueous solutions of iodixanol (OptiPrep), for which the fluid parameters have been measured experimentally as functions of the iodixanol volume-fraction concentration $s$~\cite{Augustsson2016}. OptiPrep is a cell-friendly medium that is used in density-gradient centrifugation and iso-acoustic focusing. In the models, we consider initial concentration fields with iodixanol volume fractions ranging from $s_\mathrm{min}=0.1$ to $s_\mathrm{max}=0.3$, yielding a relative density difference of up to 10\%, while the maximum relative variation in the speed of sound is 0.5\%. Consequently, we neglect variations in $c_0$, which means that only gradients in $\rho_0$ contribute to the acoustic force density. 
The polynomials fitting the measured density $\rho_0(s)$ and dynamic viscosity $\eta_0(s)$, as functions of the iodixanol volume-fraction concentration $s$, are~\cite{Augustsson2016} \bsubal \rho_0 &= \rho_0^{(0)}[1+ a_1 s] , \\ \eta_0 &= \eta_0^{(0)}[1+b_1 s + b_2 s^2 + b_3 s^3] . \esubal Here, $\rho_0^{(0)} = 1005~\mathrm{kg/m}^3$ and $\eta_0^{(0)} = 0.954~\mathrm{mPa\,s}$, and the dimensionless constants are $a_1=0.522$, $b_1=2.05$, $b_2=2.54$, and $b_3=22.8$. The diffusivity of iodixanol was measured to be $D=0.9\times10^{-10}$~$\mathrm{m^2/s}$. For the bulk viscosity we use the value of pure water~\cite{Muller2014}. \begin{figure*}[!t] \centering \includegraphics[width=2.0\columnwidth]{fig_02.eps} \caption[]{\figlab{fig_02} (Color online) Patterning of inhomogeneous iodixanol solutions in rectangular-channel eigenmodes. The top row shows the field-shape functions $R_{nm}$ for each mode $nm$ (min, dark blue; max, light green). Three different initial concentration fields $s(\vec{r},0)$ of the dense (30\% iodixanol, white) and less dense (10\% iodixanol, black) solutions are considered (1st column, $a$, $b$ and $c$). The next columns show the resulting concentration fields $s(\vec{r},\tau)$ after a retention time of $\tau=1.0~\textrm{s}$ in either the 10-mode (2nd column), the 20-mode (3rd column), the 01-mode (4th column), the 02-mode (5th column), or the 21-mode (6th column), starting from the initial condition $a$ (second row), $b$ (third row), or $c$ (bottom row).} \end{figure*} \subsection{Modeling the rectangular microchannel} In this model, we consider a long, straight, rectangular microchannel of width $W=375~\upmu\textrm{m}$ and height $H=150~\upmu\textrm{m}$, as sketched in~\figref{fig_01}(a). In acoustophoresis experiments, acoustic eigenmodes of the two-dimensional channel cross-section transverse to the flow are used extensively to manipulate and focus particles and cells based on their mechanical properties. 
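The iodixanol property fits given above translate directly into code. The short transcription below (the function names are ours) also confirms the roughly 10\% relative density difference between the 10\% and 30\% solutions quoted in the previous subsection:

```python
# The measured iodixanol property fits, transcribed from the text
# (function names are ours; coefficients are the quoted fit constants).
rho0_ref, eta0_ref = 1005.0, 0.954e-3       # kg/m^3 and Pa s at s = 0
a1, b1, b2, b3 = 0.522, 2.05, 2.54, 22.8    # dimensionless fit constants

def rho0(s):
    """Density [kg/m^3] vs. iodixanol volume fraction s."""
    return rho0_ref * (1 + a1 * s)

def eta0(s):
    """Dynamic viscosity [Pa s] vs. iodixanol volume fraction s."""
    return eta0_ref * (1 + b1*s + b2*s**2 + b3*s**3)

# the 10%-to-30% step used in the simulations gives ~10% density contrast:
drho = (rho0(0.30) - rho0(0.10)) / rho0(0.10)
assert abs(drho - 0.10) < 0.01
```

Since the density fit is linear in $s$, the relative deviation $\hat{\rho}$ entering the weak-inhomogeneity force density~\eqrefnoEq{facRCa} is simply proportional to the concentration field.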
Two notable advantages of using acoustic eigenmodes, or bulk acoustic waves, are that the eigenmodes are easily excited by an attached piezoceramic transducer actuated at the resonance frequency, and that high acoustic energy densities can be obtained in the resonant modes. Typical quality factors in glass-silicon microchips are between $10^2$ and $10^3$, and typical measured acoustic energy densities are in the range 1--1000~J/m$^3$~\cite{Barnkob2010, Oever2015}. We use $E_\mathrm{ac}^{(0)}=10~\mathrm{J/m^3}$, approximately an order of magnitude larger than the hydrostatic pressure difference across the channel height, ensuring that gravity plays only a minor role in the fluid relocation~\cite{Karlsen2016}. Referring again to \figref{fig_01}(a), we are modeling a flow-through microchannel system where the flow rate can be controlled, thereby setting the retention time of the fluid in the channel. In our time-dependent model, the time $\tau$ can thus be translated into a downstream length $L$ from the inlet. For example, in the system under consideration a fluid retention time of $\tau_\mathrm{ret}=1.0$~s over a length of $L=5.0$~mm implies a flow rate of $17~\upmu\mathrm{L/min}$, all of which are realistic experimental parameters. Diffusion generally plays an important role in manipulating concentration fields. However, the time scale of diffusion across one third of the channel width is $\tau_\mathrm{diff}=\frac{1}{2D}(\frac{1}{3}W)^2=87$~s, leaving enough time to conduct typical steady-flow experiments at relevant flow rates without diffusion flattening the gradients. \subsection{Modeling the acoustic vortex field} In this model, we consider a circular fluid chamber, as sketched in~\figref{fig_01}(b), in which an acoustic vortex field of the form~\eqrefnoEq{fieldsBessel} is excited by the surrounding transducer array or by swirling surface acoustic waves~\cite{Courtney2014, Riaud2015}. 
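The flow-through estimates quoted above for the rectangular channel can be reproduced in a few lines. In this sketch only $W$, $H$, $D$, the retention time, and the channel length are taken from the text; the unit conversion is standard:

```python
# Sanity check of the quoted flow-through estimates for the rectangular
# channel (all physical inputs are the values quoted in the text).
W, H, D = 375e-6, 150e-6, 0.9e-10     # channel width, height [m]; diffusivity [m^2/s]

tau_ret, L = 1.0, 5.0e-3              # retention time [s], downstream length [m]
Q = (W * H * L / tau_ret) * 6e10      # flow rate in uL/min (1 m^3/s = 6e10 uL/min)
tau_diff = (W / 3)**2 / (2 * D)       # diffusion time across one third of the width

assert abs(Q - 17) < 0.5              # ~17 uL/min, as quoted
assert abs(tau_diff - 87) < 1.0       # ~87 s, as quoted
```

The two-orders-of-magnitude margin between $\tau_\mathrm{ret}$ and $\tau_\mathrm{diff}$ is what permits steady-flow operation without the gradients diffusing away.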
Notice that, in contrast to the rectangular-microchannel acoustic fields, the acoustic vortices are non-resonant fields, and the center of the vortex can be moved relative to the chamber. In our model, we use a chamber of radius $R=250~\upmu\textrm{m}$, an acoustic energy density of $E_\mathrm{ac}^{(0)}=10~\mathrm{J/m^3}$, and a frequency of $f = 7.5~\textrm{MHz}$. \section{Simulation results} We present a selection of simulation results demonstrating acoustics as a means to spatio-temporally control, manipulate, and relocate solute concentration fields in microsystems. Specifically, we demonstrate manipulation of concentration fields in rectangular-channel eigenmodes and in acoustic vortex fields in circular chambers. In the former, we demonstrate the use of sequential eigenmode actuation to obtain horizontal or vertical multi-layering of the fluid inhomogeneities. We further motivate and introduce the simple but useful concept of orthogonal relocation. In the circular chamber, we demonstrate trapping and translation of a fluid inhomogeneity using Bessel-function acoustic tweezers. \subsection{Multi-layering of concentration fields in rectangular-channel eigenmodes} We consider patterning of concentration fields in the $nm$-eigenmodes in the rectangular microchannel using the modes $(n,m)=(1,0)$, $(2,0)$, $(0,1)$, $(0,2)$, and $(2,1)$ as examples. The resonance frequency $f_{nm}$ of these eigenmodes is obtained from~\eqref{fres_nm}, yielding $f_{10} = 2.0~\textrm{MHz}$, $f_{20} = 4.0~\textrm{MHz}$, $f_{01} = 5.0~\textrm{MHz}$, $f_{02} = 10~\textrm{MHz}$, and $f_{21} = 6.4~\textrm{MHz}$. In~\figref{fig_02} we consider three different initial conditions $a$, $b$, and $c$ (first column) on the concentration field $s(\vec{r},0)$. In the following columns are shown the concentration fields $s(\vec{r},\tau)$ in the selected $nm$-modes after a time $\tau=1.0~\textrm{s}$, for each of the three initial configurations $a$, $b$, and $c$. 
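The listed resonance frequencies follow from \eqref{fres_nm}. With an assumed speed of sound of $c=1500~\mathrm{m/s}$ (a typical value for dilute aqueous solutions; the value is our assumption, not stated explicitly above), they are reproduced by:

```python
# Resonance frequencies f_nm of the hard-walled rectangular channel;
# the speed of sound c = 1500 m/s is our assumed value.
import math

W, H, c = 375e-6, 150e-6, 1500.0

def f_nm(n, m):
    """Resonance frequency [Hz] of the nm-eigenmode."""
    return 0.5 * c * math.hypot(n / W, m / H)

for (n, m), f in [((1, 0), 2.0e6), ((2, 0), 4.0e6), ((0, 1), 5.0e6),
                  ((0, 2), 10e6), ((2, 1), 6.4e6)]:
    assert abs(f_nm(n, m) - f) < 0.05e6      # matches the listed values
```

Agreement with all five listed frequencies to within rounding supports the assumed sound speed.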
The resulting configurations are denoted $i$-$nm$, with $i$ indicating the initial configuration ($i=a$, $b$, or $c$), and $nm$ denoting the mode of actuation. The top row shows the field-shape functions $R_{nm}$ of the corresponding modes. In general, the denser high-concentration fluid (30\% iodixanol, white) is relocated into the minima of $R_{nm}$ appearing at pressure nodes, as one might anticipate from the analogy to the acoustic radiation force acting on a particle. It should be emphasized, however, that in contrast to the acoustic radiation force acting on a particle in a standing wave, $\vec{f}_\mathrm{ac}$ is a non-conservative force and cannot in general be written as the gradient of a potential. The acoustic force density $\vec{f}_\mathrm{ac}$ moreover depends on the history of the system, which is also in contrast to the particle force. For a given mode, the concentration fields tend to evolve towards the same quasi-stable equilibrium configuration; however, the different initial conditions generally influence the resulting configurations. Inspecting~\figref{fig_02}, one finds that relocation of the inhomogeneity into vertical layers is obtained for $m=0$, while horizontal layers are obtained for $n=0$. This is to be expected from the geometry of the acoustic field. However, comparing $a$-01, $b$-01, and $c$-01 it is evident that the concentration field after 1~s of actuation in the 01-mode depends strongly on the initial configuration ($a$, $b$, or $c$). Indeed, the configurations $a$ and $c$ have been relocated into much ``cleaner'' 01-mode configurations with a single horizontal layer as compared to the configuration $b$. The reason is that the relocations $a \rightarrow a$-01 and $c \rightarrow c$-01 are orthogonal relocations in the sense that the initial and final stratifications are orthogonal to one another. 
In contrast, the relocation $b \rightarrow b$-01 is a parallel relocation, in which whole fluid layers are to be moved into new parallel positions, which can only proceed by an instability. This is particularly evident in the 02-mode when comparing the orthogonally relocated configurations $a$-02 and $c$-02 to $b$-02; in the latter, the parallel relocation proceeds by a Rayleigh--Taylor-like instability, shooting up three streams that slowly feed the second horizontal layer. \begin{figure}[!t] \centering \includegraphics[width=0.92\columnwidth]{fig_03.eps} \caption[]{\figlab{fig_03} (Color online) Vertical and horizontal layering of iodixanol concentration fields $s(\vec{r},\tau)$ starting from the horizontally-layered initial configuration $b$ with the dense fluid (30\% iodixanol, white) at the bottom of the channel and the less dense fluid (10\% iodixanol, black) at the top (top row). Each arrow (blue) represents an orthogonal relocation obtained by exciting an $nm$-eigenmode in the rectangular channel for 1~s, with the miniature showing the transition 50~ms after the mode shift. Vertical layering is obtained directly by actuation of the 10- or the 20-mode, yielding the configurations $b$-10 and $b$-20, respectively. Horizontal layering involves an intermediate step going through the 10-mode, yielding the $b$-10-01 and $b$-10-02 configurations. } \end{figure} These observations suggest that orthogonal relocation provides the most effective way of relocating and patterning concentration fields. In the event that a desired relocation is parallel, as in the example $b$-01 starting from the configuration $b$, the resulting horizontally layered 01-mode configuration is blurred because it proceeds by an instability. The solution to obtaining sharp horizontally-layered 01- and 02-mode configurations starting from $b$ is to go through a sequence of orthogonal relocations. 
By applying the sequence $b$-10-0$m$, the 10-mode being an intermediate, one can achieve sharp horizontally-layered 0$m$-mode configurations from the initial configuration $b$. This is illustrated in~\figref{fig_03}, where the relocation dynamics is also indicated by showing intermediate configurations. A movie of the dynamics in the sequence $b$-10-01-20 can be found in the Supplemental Material~\footnote{See Supplemental Material at [url] for movies of the time-evolution of the concentration fields.}. \begin{figure*}[!t] \centering \includegraphics[width=2.0\columnwidth]{fig_04.eps} \caption[]{\figlab{fig_04} (Color online) Patterning of inhomogeneous iodixanol solutions in acoustic vortices of topological charge $l$. (a) Initial concentration field $s(\vec{r},0)$ with the dense (30\% iodixanol, white) and less dense (10\% iodixanol, black) solution each occupying half of the circular domain. The radial field-shape function $R_l(r)$ is shown for $l=0$ (blue), $l=1$ (green), and $l=2$ (violet), indicating the initial magnitude and (negative) direction of the acoustic force density acting on the blurred interface. (b)-(d) Resulting concentration fields $s(\vec{r},\tau)$ after $\tau=3.0$~s in the acoustic vortex with $l=0$, $l=1$, and $l=2$, respectively, with a central trapping region for $l>0$. The denser fluid (white) is relocated into the minima of the field-shape functions $R_l$.} \end{figure*} In summary, starting from a single-layer configuration, one can achieve multi-layering of concentration fields on a one-second timescale in the rectangular-channel eigenmodes commonly employed in acoustophoresis. While we have focused on the spatial patterning, the ability to switch between modes provides temporal control of the concentration field at the end of the flow-through channel. This type of acoustic fluid manipulation is best performed by orthogonal relocation, and a parallel relocation can always be substituted by two sequential orthogonal relocations. 
\subsection{Patterning and tweezing of concentration fields in acoustic vortex fields} Next, we demonstrate patterning and spatio-temporal manipulation of concentration fields in Bessel-function acoustic vortex fields in circular fluid chambers. Starting from the initial concentration field $s(\vec{r},0)$, shown in~\figref{fig_04}(a), with the denser fluid (30\% iodixanol, white) occupying half the circular domain, \figref{fig_04}(b)-(d) show the concentration fields $s(\vec{r},\tau)$ after $\tau=3.0~\textrm{s}$ of actuation in an acoustic vortex of order $l=0$, $l=1$, and $l=2$, respectively. Again, it is observed that the denser fluid tends to be relocated into the minima of the field-shape functions $R_l$. The central region of an acoustic vortex is of particular interest because it provides a trapping potential that can be used to trap and manipulate particles. Here, considering inhomogeneous-fluid manipulation, we define the central region of the $l$'th-order vortex by the condition ${\hat{r}{}} < {\hat{r}{}}_l^*$, where ${\hat{r}{}}_l^*$ is the first non-zero root of the field-shape function, $R_l({\hat{r}{}}_l^*)=0$. This yields the approximate values ${\hat{r}{}}_0^*=1.44$, ${\hat{r}{}}_1^*=1.18$, and ${\hat{r}{}}_2^*=2.26$. As demonstrated in \figref{fig_04}, in the vortex with $l=0$ the denser fluid (white) is forced outside of the central region, while in the vortices with $l=1$ and $l=2$ the denser fluid is forced into the central region. Mathematically, this follows directly from \eqref{facRCa} (with $\boldsymbol{\nabla}\hat{c}=\vec{0}$) by inspecting the field-shape functions $R_l(r)$ shown in \figref{fig_04}, because they indicate the initial radial distribution of the acoustic force density acting on the blurred interface. Physically, the acoustic pressure is maximum at the center for $l=0$, while it is zero for $l>0$. Note furthermore that for $l>0$ the central trapping region becomes larger with increasing $l$. 
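The quoted values of ${\hat{r}{}}_l^*$ can be reproduced by root-finding on \eqref{RCbesselA}. The sketch below assumes SciPy, and the search brackets are our own choice:

```python
# First (non-zero) roots rhat_l* of the vortex field-shape function R_l;
# the root brackets are our choice, informed by the shape of R_l.
from scipy.special import jv
from scipy.optimize import brentq

def R_l(r, l):
    """Vortex field-shape function R_l(rhat), Eq. (RCbesselA)."""
    return jv(l, r)**2 - 0.5 * jv(l - 1, r)**2 - 0.5 * jv(l + 1, r)**2

brackets = {0: (1.0, 2.0), 1: (0.5, 2.0), 2: (1.0, 3.0)}
roots = {l: brentq(R_l, a, b, args=(l,)) for l, (a, b) in brackets.items()}

assert abs(roots[0] - 1.44) < 0.01   # central region of the l = 0 vortex
assert abs(roots[1] - 1.18) < 0.01   # l = 1
assert abs(roots[2] - 2.26) < 0.01   # l = 2
```

The ordering ${\hat{r}{}}_1^* < {\hat{r}{}}_0^* < {\hat{r}{}}_2^*$ reflects the widening of the trapping region with increasing $l>0$ noted above.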
These findings for manipulation of inhomogeneous fluids are analogous to those of acoustic tweezing of particles~\cite{Courtney2014}. Acoustic tweezing of a high-concentration region in a lower-concentration medium can thus be realized in the central region of vortices with $l>0$, and this may be used to confine and translate a fluid inhomogeneity as will be demonstrated next using the $l=1$ vortex. We consider an initial concentration field $s(\vec{r},0)$ that has a Gaussian high-concentration region (30\% iodixanol, white) centered at the position $(r,\theta)=(\frac12 R,\frac12 \pi)$, as given in polar coordinates, in the lower-concentration medium (10\% iodixanol), see \figref{fig_05}(a). The width (or standard deviation) of the Gaussian is set to $\sigma=0.5 \, {\hat{r}{}}_1^*$, half the width of the central trapping region. The acoustic vortex is initially centered at the position of the inhomogeneity, and it is then translated in a closed-loop equilateral triangle moving in straight lines from $(\frac12 R, \frac12 \pi)$ to $(\frac12 R,-\frac{1}{6}\pi)$, to $(\frac12 R,-\frac{5}{6}\pi)$, and finally back to the starting position in $(\frac12 R,\frac12 \pi)$. The translation speed $U=0.7~\textrm{mm}/\textrm{s}$ of the center of the vortex was chosen such that it takes 0.3~s to move the distance from one corner of the triangle to the next. The resulting concentration field $s(\vec{r},\tau)$ after $\tau=0.3$~s, 0.6~s, and 0.9~s is shown in~\figref{fig_05}(b), (c), and (d), respectively, with the central region of the vortex indicated by the green circle, and the path of the center of the vortex by the straight green lines. To a good approximation, the high-concentration solution is kept within the central region of the vortex as it is translated in space, leaving only a trailing diffusive residue. Movies showing the manipulation in real time for two different translation speeds are available in the Supplemental Material~\cite{Note1}. 
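The quoted translation speed is fixed by the path geometry: the three corners lie on a circle of radius $R/2$, so each straight segment is a chord subtending $120^\circ$. A short check (our own construction, using only $R$ and the 0.3~s segment time from the text):

```python
# Geometry behind the vortex translation speed; R and the 0.3 s segment
# time are from the text, the chord construction is our own check.
import math

R = 250e-6                                        # chamber radius [m]
side = 2 * (R / 2) * math.sin(math.radians(60))   # corner-to-corner chord [m]
U = side / 0.3                                    # segment traversed in 0.3 s
assert abs(U - 0.7e-3) < 0.05e-3                  # ~0.7 mm/s, as quoted
```

The segment length of roughly $0.2$~mm is an order of magnitude larger than the trap radius, so the trapped inhomogeneity is genuinely transported, not merely displaced within the trap.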
We find that when the translation speed of the vortex is increased by a factor of 3, the inhomogeneity does not remain trapped at the center during the full loop. Conversely, for slower translation speeds, the inhomogeneity stays in the center of the vortex, but the increased loop time leads to a more pronounced diffusion broadening. \begin{figure*}[!t] \centering \includegraphics[width=2.0\columnwidth]{fig_05.eps} \caption[]{\figlab{fig_05} (Color online) Acoustic tweezing and translation of a local high-concentration region using an acoustic vortex with topological charge $l=1$. (a) Initial concentration field $s(\vec{r},0)$ with a Gaussian high-concentration region (30\% iodixanol, white) in a lower-concentration medium (10\% iodixanol, black). Initially, the acoustic vortex is centered at the position of the inhomogeneity, with the green circle indicating the central region of the vortex. At time $\tau>0.0~\textrm{s}$ the vortex is moved at constant speed $U=0.7~\textrm{mm}/\textrm{s}$ along the green path in a closed-loop triangle. The resulting concentration fields $s(\vec{r},\tau)$ after $\tau=0.3$~s, 0.6~s, and 0.9~s are shown in (b), (c), and (d), respectively.} \end{figure*} The results presented in this section provide theoretical evidence that the applicability of acoustic tweezers can be extended beyond particle manipulation to include manipulation of concentration fields -- a phenomenon that has yet to be demonstrated experimentally. \section{Discussion} In this paper, we have explored some consequences of our recent theory of the acoustic force density acting on inhomogeneous fluids~\cite{Karlsen2016}. For this purpose, a useful formulation of the theory was given in terms of the field-shape functions $R$ and $C$ in the experimentally relevant limit of weakly inhomogeneous fluids. 
The theory of the acoustic force density acting on inhomogeneous fluids shows a resemblance to the Gorkov theory of the acoustic radiation force acting on a particle~\cite{Gorkov1962}, for example, in the tendency of dense fluids to be focused at the pressure nodes. However, the two theories have important distinctions. (1) The theory of the acoustic force density acting on inhomogeneous fluids is a field theory, with $\vec{f}_\mathrm{ac}$ generally acting on the fluid at every point in space, in contrast to the Newtonian theory for the radiation force acting on a point particle. (2) The acoustic force density $\vec{f}_\mathrm{ac}$ is a non-conservative force, and in general it cannot be written as the gradient of a potential, as can the radiation force on a particle in a standing wave~\cite{Settnes2012, Karlsen2015}. Instead, one may use the field-shape functions to assess the direction and magnitude of the forces acting on the fluid for a given initial concentration field. For density inhomogeneities, the denser fluid tends to relocate to the minima of the field-shape function $R$. (3) Relatedly, in the theory of the acoustic force density, the force density $\vec{f}_\mathrm{ac}$ depends on the history of the system, and it evolves as the concentration field changes by advection and diffusion. While the acoustic force density can stabilize a fluid inhomogeneity against destabilizing forces, such as gravity in the case of a density gradient, it cannot counteract molecular diffusion. Consequently, an inhomogeneity always has a finite lifetime set by the characteristic diffusion time, and it will broaden due to diffusion. Interestingly, this is an advantage in iso-acoustic focusing, because it allows fine-tuning of the gradient at the end of a steady-flow-through channel by varying the flow rate~\cite{Augustsson2016}. In acoustic tweezing of a high-concentration region, diffusion limits the time that the inhomogeneity can be manipulated in a closed chamber.
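The diffusion-limited lifetime can be quantified with a minimal sketch, assuming a freely diffusing Gaussian profile whose per-axis variance grows as $\sigma^2(t)=\sigma_0^2+2Dt$. The diffusivity $D$ and initial width $\sigma_0$ below are illustrative order-of-magnitude values, not values taken from the text.

```python
import math

def gaussian_width(sigma0, D, t):
    """Per-axis width of a freely diffusing Gaussian concentration profile."""
    return math.sqrt(sigma0**2 + 2 * D * t)

# Illustrative values (assumptions, not from the paper):
D = 1e-10        # solute diffusivity [m^2/s], order of magnitude for a small solute
sigma0 = 60e-6   # initial width of the high-concentration region [m]

# Broadening over a 0.9-s tweezing loop, and the characteristic
# diffusion time over the length scale sigma0:
sigma_loop = gaussian_width(sigma0, D, 0.9)
tau_diff = sigma0**2 / (2 * D)
```

For these assumed values the characteristic diffusion time ($\sim$18~s) is much longer than the 0.9-s loop, consistent with the inhomogeneity surviving the manipulation with only a modest trailing diffusive residue.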
One can obtain longer diffusion times by going to larger scales or by using Ficoll or Percoll solutions with larger solute molecules that diffuse more slowly. Importantly, the ability to manipulate concentration fields requires that the concentration field introduces inhomogeneities ($\gtrsim 1\%$) in the fluid density or speed of sound. This is true for the concentrations of iodixanol (OptiPrep), polysucrose (Ficoll), or colloidal nanoparticles (Percoll) that are used in density-gradient separation. To manipulate a concentration field of a specific biomolecule at low concentration, one can add OptiPrep, Ficoll, or Percoll, so that the solution containing the dilute concentration of biomolecules still introduces a gradient.\\[10mm] \section{Conclusion} Advances in the development of experimental methods to control acoustic fields for microparticle-manipulation purposes, for example, using transducer arrays, surface acoustic waves, and transmission holograms, allow spatio-temporal tailoring of acoustic fields. In this paper, we have demonstrated theoretically that this provides dynamic control of solute concentration fields at the microscale. We can think of this as acoustic ``landscaping'' of concentration fields, because of the ability to dynamically manipulate ``hills'' and ``valleys'' of high and low concentration. Using acoustic landscaping, one may relocate, shape, and pattern concentration fields with the methods already developed for particle handling. We have presented two examples of this. Firstly, in rectangular microchannels, we have described an operational principle for obtaining multi-layer stratification of concentration fields using acoustic eigenmodes. Secondly, we have demonstrated acoustic tweezing and manipulation of a high-concentration fluid region in a lower-concentration fluid medium using a Bessel-function acoustic vortex.
We envision that the insights obtained in this study will find applications in the further development of iso-acoustic focusing and other gradient-based separation methods. Another use may be found in studies of biological processes with active spatio-temporal control of solute gradients. Finally, the ability to pattern fluid inhomogeneities using acoustics might also find applications in drug delivery, tissue engineering, and 3D printing of microstructures.
\section*{Introduction} A nodal surface is a projective surface with only ordinary double points as singularities. A set of nodes of a surface $F$ is said to be even if there is a double cover $S\rightarrow F$ branched exactly in the nodes from that set. In \cite{catanese-tonoli}, Catanese and Tonoli showed that an even set of nodes on a sextic surface has cardinality in $\{24,32,40,56\}$. They also provided a construction of such $56$ nodal surfaces; constructions for the other cases were already known. Their method is based on the paper \cite{casnati-catanese}, where it is shown that even sets of nodes correspond to certain symmetric maps between vector bundles. A careful study of the sheaves involved leads to certain matrices whose entries are homogeneous polynomials on $\Proj^3$. The locus of points in $\Proj^3$ where such a matrix has rank less than $6$ is a sextic surface with an even set of $56$ nodes. In this way, one can find explicit examples of such surfaces, but the equations tend to be rather complicated and it is not easy to understand the geometry of these surfaces. Let $F$ be a $56$ nodal surface as constructed in \cite{catanese-tonoli} and let $f:S\to F$ be the double cover which is branched exactly over the nodes of $F$. The first Betti number of the smooth surface $S$ is equal to $6$, hence $S$ has a $3$-dimensional Albanese variety. With some trial and error, we then found the following construction for $56$ nodal surfaces. A principally polarized abelian threefold $A$ has a theta divisor $\Th$, defining the polarization, which can be taken to be symmetric, so $[-1]\Th=\Th$. The fixed points of $[-1]$ are the two-torsion points. There are exactly $28$ such points on $\Th$ precisely when $(A,\Th)$ is the Jacobian of a non-hyperelliptic genus three curve. Assume that we are in this case. Then $\ol{\Th}=\Th/[-1]$ has $28$ nodes. This singular surface has been studied before, cf.\ \cite[Chapter IX.6, Theorem 4 and Remark 6]{dolgachev-ortland}.
In particular, it has an embedding into $\Proj^6$ where it is a Cartier divisor in a cone over a Veronese surface. This cone is the quotient of $\Proj^3$ by an involution which changes the sign of one of the homogeneous coordinates. The inverse image of $\ol{\Th}$ in $\Proj^3$ is then a sextic surface $F$ with an even set of $56$ nodes, as we show in Section \ref{sec_construction}. By construction, $F$ has an involution with quotient $\ol{\Th}$. This involution lifts to $S$ and, together with the covering involution of the map $S\to F$, generates a subgroup $(\Z/2\Z)^2$ of $\mr{Aut}(S)$. In Section \ref{covers} we study the cohomology of the quotients of $S$. We also show there that our construction and the one from Catanese and Tonoli produce the same surfaces. In the last section we give an explicit example, with a simple equation, of such a surface. \section{Construction of a family of 56 nodal sextic surfaces}\label{sec_construction} Let $C$ be a smooth non-hyperelliptic curve of genus 3 and consider its Jacobian $A=\mr{Jac}(C)$. The abelian variety $A$ admits a principal polarization defined by the theta divisor $\Th$ and we will identify $\Th=S^2C$. We can choose $\Th$ to be a symmetric divisor on $A$, i.e. $[-1]^*\Th=\Th$. The involution $[-1]$ on $\Theta$ corresponds to the involution $D\mapsto K_C-D$ on $S^2C$, where $K_C$ is the canonical divisor on $C$. The linear system $|2\Th|$ is totally symmetric, and defines a morphism $$ \varphi_{2\Th}:A \,\longrightarrow\,\Proj^7 $$ which is the quotient map by the involution $[-1]$. Let $\ol{A}\cong A/[-1]$, the Kummer variety of $A$, be the image of $\varphi_{2\Th}$. The singular locus of $\ol{A}$ consists of 64 nodes; these are the images of the two-torsion points of $A$. Consider the hyperplane $H_{2\Th}$ of $\Proj^7$ corresponding to the divisor $2\Th$. The intersection of $H_{2\Th}$ with $\ol{A}$ is the image $\ol{\Th}\cong\Th/[-1]$ of $\Th$, with multiplicity two.
As $\Th$ contains $28$ of the two-torsion points of $A$, the surface $\ol{\Th}$ has 28 nodes. Equivalently, these are the images of the $28$ odd theta characteristics in $S^2C$. To describe this map $\varphi_{2\Th}|_{\Th} :\Th\to \Proj^6$, notice that the adjunction formula on $A$ shows that the canonical class of $\Th$ is $K_\Th=\Th_{|\Th}$. Thus $\O_\Th(2\Th)\cong \omega_\Th^{\otimes 2}$. Moreover, the cohomology of the restriction sequence $$ 0\,\longrightarrow\,\O_A(\Th)\,\longrightarrow\, \O_A(2\Th)\,\longrightarrow\,\O_\Th(2\Th)\,\longrightarrow\,0 $$ combined with $H^i(A,\O_A(\Th))=0$ for $i>0$ (Kodaira vanishing or Riemann-Roch on $A$), shows that $h^0(\om_\Th^{\otimes 2})=h^0(\O_\Th(2\Th))=7$. Hence, when restricted to $\Th$, the morphism $\varphi_{2\Th}|_\Th=\varphi_{2K_\Th}$ is given by the complete linear system $|2K_\Th|$. To understand this morphism better, we first consider the map $\varphi_{K_\Th}$. From the restriction sequence above, twisted by $\O_A(-\Th)$, one deduces that $H^0(\Th,\omega_\Th)\cong H^1(A,\O_A)$ is three dimensional. The map $\varphi_{K_\Th}:\Th\rightarrow \Proj^2$ is the Gauss map, which is a morphism of degree $(\Th_{|\Th})^2=\Th^3=6$ which factors over $\ol{\Th}$. As $\varphi_{K_\Th}$ is surjective, the natural map $S^2H^0(\Th,\omega_\Th)\rightarrow H^0(\Th,\omega_\Th^{\otimes 2})$ is injective; thus the image has codimension one. Let $t\in H^0(\Th,\omega_\Th^{\otimes 2})$ be a general section in the complement of the image of $S^2H^0(\Th,\omega_\Th)$. Since $|2\Th|$ is basepoint free, we may assume that the divisor $B$ in $\Th$ defined by $t=0$ is smooth and does not pass through any two-torsion points. Since $|2\Th|$ is totally symmetric, any divisor in this linear system is symmetric, that is, $[-1]B=B$. Let $s_0,\ldots,s_2$ be a basis of $H^0(\Th,\omega_\Th)$.
Then we have: $$ \varphi_{2K_\Th}:\;\Th\,\longrightarrow\, \Proj H^0(\Th,\omega_\Th^{\otimes 2})\,\cong\,H_{2\Th}\,\cong\,\Proj^6\,~, \qquad x\,\longmapsto\,(\ldots:s_i(x)s_j(x):\ldots:t(x))_{0\leq i\leq j\leq 2}~. $$ The image $\ol{\Th}$ of $\Th$ thus lies in a cone over the Veronese surface of $\Proj^2$. This cone is the image $Y$ of the weighted projective 3-space $\Proj(1,1,1,2)$, which is embedded into $\Proj^6$ by the (very) ample generator $\O_Y(1)$ of its Picard group: $$ \Proj(1,1,1,2)\,\longrightarrow\,Y\,\subset\,\Proj^6,\qquad (y_0:y_1:y_2:y_3)\,\longmapsto\,(\ldots:y_iy_j:\ldots:y_3)_{0\leq i\leq j\leq 2}~. $$ As $\varphi_{K_\Th}$ has no base points, the surface $\ol{\Th}\subset Y$ does not contain the singular point $v=(0:\ldots:0:1)$ of $Y$, the vertex of the cone over the Veronese surface. Hence, $\ol{\Th}$ is a Cartier divisor on $Y$. The projection of $\ol{\Th}$ from $v$ onto the Veronese surface is the Gauss map $\varphi_{K_\Th}$, which has degree $6/2=3$ on $\ol{\Th}$. This implies that $\ol{\Th}$ lies in the linear system on $Y$ defined by three times the ample generator. Since the map $S^3H^0(Y,\O_Y(1))\rightarrow H^0(Y, \O_Y(3))$ is surjective, we conclude that $\ol{\Th}$ is defined by a weighted homogeneous polynomial $p$ of degree six in $Y=\Proj(1,1,1,2)$: $$ \ol{\Th}\,=\,\{(y_0:y_1:y_2:y_3)\in \Proj(1,1,1,2):\; p(y_0,\ldots,y_3)\,=\,\sum_{i=0}^3 p_{2i}(y_0,y_1,y_2)y_3^{3-i}\,=\,0\,\}~, $$ where each $p_{2i}$ is homogeneous of degree $2i$ in $y_0,y_1,y_2$. Since $v\not\in \ol{\Th}$, we may and will assume that $p_0=1$. The weighted projective space $\Proj(1,1,1,2)$ is also the quotient of $\Proj^3$ by the involution $i_3:(x_0:\ldots:x_3)\mapsto (x_0:x_1:x_2:-x_3)$, the quotient map is explicitly given by: $$ \ol{p}:\,\Proj^3\,\longrightarrow\,\Proj(1,1,1,2),\qquad (x_0:x_1:x_2:x_3)\,\longmapsto\,(\ldots:x_ix_j:\ldots:x_3^2)_{0\leq i\leq j\leq 2}~. 
$$ Now we define a surface $F$ in $\Proj^3$ as $F:=\ol{p}^{-1}(\ol{\Th})$, thus $F$ is defined by the sextic equation $P=0$ where $$ P\,:=\, p_{6}(x_0,x_1,x_2)\,+\,p_4(x_0,x_1,x_2)x_3^2 \,+\,p_2(x_0,x_1,x_2)x_3^4\,+\,x_3^6~. $$ The double cover $\ol{p}:F\to\ol{\Th}$ is branched over the points where $x_3=0$, so the branch locus is the divisor $\ol{B}\subset\ol{\Th}$ defined by $t=0$. Here $\ol{B}=B/[-1]$, which is a smooth curve since by assumption $B$ is smooth and does not pass through the $28$ fixed points of $[-1]$ in ${\Th}$. Hence the singular locus of $F$ consists of $56$ nodes. The 28 nodes of $\ol{\Th}$ form an even set since the double cover $\Th\rightarrow \ol{\Th}$ is branched only over the nodes. Hence the preimage $\De\subset F$ of these nodes is also an even set, cf.\ diagram \ref{cd_main}; in fact, $F$ has a double cover $S$ branched only over the nodes, obtained by pulling back the double cover $\Th\rightarrow \ol{\Th}$ along $\ol{p}:F\rightarrow\ol{\Th}$. We summarize the construction as follows: \begin{thm}\label{ours} There exists a family of 56 nodal sextic surfaces with the nodes forming an even set, which is parametrized by pairs $(C,B)$ where $C$ is a non-hyperelliptic curve of genus $3$ and $B\in |2K_{S^2C}|$ is a general divisor. In particular, we have a $6+6=12$ dimensional family of such surfaces. Moreover, each surface in the family has an automorphism of order two. \end{thm} \section{Coverings of $\ol{\Th}$}\label{covers} First of all, we provide another construction of the double cover $f:S\to F$ branched over the set $\De$ of 56 nodes of $F$. Let $\pi_F:\t F\to F$ be a minimal resolution of singularities and let $N_i\subset\t F$ be the inverse image of the node $p_i\in F$. Since the nodes form an even set, the divisor $\t\De=\sum_{i=1}^{56}N_i$ is even, that is, it is 2-divisible in $\Pic(\t F)$. Thus $\t F$ admits a smooth double cover $\t f:\t S\to\t F$ branched along $\t\De$. Let $E_i=\t f^{-1}(N_i)$, so $\t f^*N_i=2E_i$.
Since $p_i$ are nodes, the exceptional curves $N_i$ are $(-2)$-curves, so $$ E_i\cdot E_i\,=\,{1\ov 4}\t f^*N_i\cdot\t f^*N_i\,=\,{1\ov 2}\t f^*(N_i\cdot N_i) \,=\,-1 $$ and $E_i$ are $(-1)$-curves. The surface $S$ can be obtained by blowing down this set of $(-1)$-curves on the smooth surface $\t S$, so it is also smooth and it is a double cover of $F$, giving a commutative diagram $$ \xymatrix{\t{S}\ar[r]^-{\pi_S}\ar[d]_{\t{f}}&S\ar[d]^f\\\t{F}\ar[r]_-{\pi_F}&F~.} $$ From the definition of $S$ as the base change along $\ol{p}:F\rightarrow\ol{\Th}$ of the double cover $\Th\rightarrow \ol{\Th}$, it follows that the covering $S\rightarrow \ol{\Th}$ is a $(\Z/2\Z)^2$-covering. Let $\iota_1$ and $\iota_2$ be the involutions on $S$ with quotient surfaces $F$ and $\Th$, respectively. Let $\iota_3=\iota_1\iota_2$; then $\iota_3$ is an involution, and we define $T:=S/\iota_3$. This gives a commutative diagram \bea\label{cd_main}\xymatrix{S\ar[rr]^-{p}\ar[dd]_-{f}\ar[rd]&&\Th\ar[dd]^-{\phi}\\ &T\ar[rd]&\\ F\ar[rr]_-{\ol{p}}&&\ol{\Th}}\eea \begin{prop} The double cover $S\to T$ is unramified. In particular, $T$ is smooth and $T\to\ol{\Th}$ is branched along $\ol B$ and the 28 nodes. \end{prop} \begin{proof} The ramification locus of $S\to T$ is the fixed locus $R_3$ of $\iota_3$, which consists precisely of the points $s\in S$ such that $\iota_3\in\mr{Stab}_{(\Z/2\Z)^2}(s)$. The fixed loci of $\iota_1$ and $\iota_2$ are $f^{-1}\De$ and $p^{-1}B$ respectively. Since the branch curve $B$ does not contain any of the $28$ two-torsion points on $\Theta$, the intersection of the fixed loci is $f^{-1}\De\cap p^{-1}B=\emptyset$. Hence, there are no points $s\in S$ such that $\mr{Stab}_{(\Z/2\Z)^2}(s)=(\Z/2\Z)^2$. In particular, $R_3=\{s\in S|\mr{Stab}_{(\Z/2\Z)^2}(s)=\lan\iota_3\ran\}$ is disjoint from $f^{-1}\De\cup p^{-1}B$. Since the ramification locus of $S\to\ol\Th$ is precisely the union of that of $f$ and $p$, we conclude that $R_3=\emptyset$ and $S\to T$ is unramified.
\end{proof} We now consider the Hodge numbers of the surfaces in diagram \ref{cd_main}. \begin{prop} The smooth surfaces $\Th$, $S$, $\t S$, $\t F$ and $T$ have Hodge numbers: $$ \begin{array}{cccc} &h^{1,0}&h^{2,0}&h^{1,1}\\ \Th&3&3&10\\ S&3&10&38\\ \t S&3&10&94\\ \t F&0&10&86\\ T&0&3&16 \end{array} $$ \end{prop} \begin{proof} The cohomologies of $\O_\Th$ are computed using the short exact sequence $$ 0\longrightarrow\O_A(-\Th)\longrightarrow\O_A\longrightarrow\O_\Th\longrightarrow0~. $$ Since $\Th$ is ample on $A$ and $K_A=0$, $h^i(\O_A(-\Th))=0$ for $i<3$ by Kodaira vanishing and, using Serre duality, $h^3(\O_A(-\Th))=h^0(\O_A(\Th))=1$ since $\Th$ is a principal polarization. Moreover, $h^i(\O_A)=({}^3_i)$, hence $h^{1,0}(\Th)=h^1(\O_\Th)=3$ and $h^{2,0}(\Th)=3$. As $$ \chi_{\mr{top}}(\Th)\,=\,2-4h^{1,0}(\Th)+2h^{2,0}(\Th)+h^{1,1}(\Th)\,=\, 2-12+6+h^{1,1}(\Th)\,=\,h^{1,1}(\Th)-4~, $$ we can compute $h^{1,1}(\Th)$ from Noether's formula $$ \chi(\O_\Th)\,=\,{\chi_{\mr{top}}(\Th)+K_\Th^2\ov 12}\;\Rightarrow\; h^{1,1}(\Th)=12\chi(\O_\Th)-K_\Th^2+4\,=\,12-6+4\,=\,10~. $$ For the double cover $p:S\to\Th$, branched over the divisor $B$, there is an isomorphism $$ p_*\O_S=\O_\Th\oplus\mathcal{L}^{-1},\qquad {\mathcal L}\,\cong\,\om_\Th~, $$ so ${\mathcal L}^{\otimes 2}=\O_\Th(B)$. Thus $h^{i,0}(S)=h^{i}(\O_\Th)+h^i({\mathcal L}^{-1})$. As ${\mathcal L}=\om_\Th$ is ample, by Kodaira vanishing we get $h^i({\mathcal L}^{-1})=0$ for $i<2$. Hence, by Riemann-Roch $$ h^2({\mathcal L}^{-1})=\chi(\om_\Th^{-1})=\chi(\O_\Th)+{K_\Th\cdot(K_\Th+K_\Th)\ov2}=7~. $$ On the canonical bundles, we have an isomorphism $\om_S=p^*(\om_\Th\otimes{\mathcal L})$. Thus, $K_S^2=p^*(2K_\Th)^2=8K_\Th^2=48$ and we obtain $h^{1,1}(S)=38$ by Noether's formula. The blowup $\pi_S:\t S\to S$ at 56 points does not change $h^{1,0}$ and $h^{2,0}$, and $h^{1,1}(\t S)=h^{1,1}(S)+56$. Since $\pi_F:\t F\to F$ is a blowup at isolated rational singularities, we have $h^i(\O_{\t F})=h^i(\O_F)$.
The latter can be computed using the short exact sequence $$0\longrightarrow\O_{\Proj^3}(-6)\longrightarrow\O_{\Proj^3}\longrightarrow\O_F\longrightarrow 0~.$$ Since the singularities are canonical, we have $\om_{\t F}=\pi_F^*\om_F=\pi_F^*(\om_{\Proj^3}\otimes\O_{\Proj^3}(F))_{|F}=\pi_F^*\O_{\Proj^3}(2)_{|F}$ by the adjunction formula. Thus, $K_{\t F}^2=(\O(2)_{|F}\cdot\O(2)_{|F})=2\cdot2\cdot6=24$. By Noether's formula, we obtain $h^{1,1}(\t F)=86$. Finally, we use the eigenspace decomposition of the cohomologies on $S$ to compute the Hodge numbers of $T$. Let $G=(\Z/2\Z)^2$. The $G$-action on $S$ induces a decomposition $$H^i(S,\O_S)=\bigoplus_{\chi\in G^*}H^i(S,\O_S)_\chi$$ where $G^*=\{1,\chi_1,\chi_2,\chi_3\}$ is the character group and $\chi_i$ is chosen such that, if $H_i$ is the stabilizer of $\iota_i$, then $G^*_{|H_i}=\{1,\chi_i\}$. Hence, $$h^i(\O_F)=h^i(\O_S)_1+h^i(\O_S)_{\chi_1},\quad h^i(\O_\Th)=h^i(\O_S)_1+h^i(\O_S)_{\chi_2},\quad h^i(\O_T)=h^i(\O_S)_1+h^i(\O_S)_{\chi_3}.$$ From the Hodge numbers $h^{1,0}$ and $h^{2,0}$ of $S$, $\Th$ and $\t F$ (notice $h^{i,0}(F)=h^{i,0}(\t F)$), we obtain that $$ h^{1,0}(S)_\chi=\begin{cases}3&\chi=\chi_2,\\0&\chi\ne\chi_2,\end{cases} \qquad\text{and}\qquad h^{2,0}(S)_\chi=\begin{cases}3&\chi=1,\\7&\chi=\chi_1,\\0&\chi=\chi_2,\chi_3~. \end{cases} $$ Hence, $h^{1,0}(T)=0$ and $h^{2,0}(T)=3$. Since $S\to T$ is an unramified double cover, we have an equality $\chi_{\mr{top}}(S)=2\chi_{\mr{top}}(T)$. This allows us to compute $h^{1,1}(T)=16$. \end{proof} A consequence of the fact that we have a morphism $p:S\rightarrow\Th$ and $h^{1,0}(S)=h^{1,0}(\Th)$ is that the Albanese map of $S$ factors over the Albanese map for $\Th$, which is just the inclusion $\Th\hookrightarrow A$, hence $A=\mr{Alb}(S)$. We now deduce that the $12$ dimensional family of $56$ nodal sextics we constructed coincides with the family constructed by Catanese and Tonoli in \cite[Main Theorem B]{catanese-tonoli}.
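The $h^{1,1}$ entries in the proposition above can be cross-checked numerically: combining Noether's formula $\chi(\O)=(\chi_{\mathrm{top}}+K^2)/12$ with $\chi_{\mathrm{top}}=2-4h^{1,0}+2h^{2,0}+h^{1,1}$ determines $h^{1,1}$ from $(h^{1,0},h^{2,0},K^2)$. A short Python sketch (not part of the paper) redoing this arithmetic:

```python
def h11(h10, h20, K2):
    """Recover h^{1,1} of a smooth projective surface from Noether's formula
    chi(O) = (chi_top + K^2)/12, with chi(O) = 1 - h10 + h20 and
    chi_top = 2 - 4*h10 + 2*h20 + h11."""
    chi_O = 1 - h10 + h20
    chi_top = 12 * chi_O - K2
    return chi_top - 2 + 4 * h10 - 2 * h20

# (h10, h20, K^2) as computed in the proof:
h11_Theta = h11(3, 3, 6)     # K_Theta^2 = Theta^3 = 6        -> 10
h11_S     = h11(3, 10, 48)   # K_S^2 = 8 * K_Theta^2 = 48     -> 38
h11_Ft    = h11(0, 10, 24)   # K_{F~}^2 = (O(2)|_F)^2 = 24    -> 86
h11_St    = h11_S + 56       # blow-up of S at 56 points       -> 94

# For T, use chi_top(S) = 2 * chi_top(T) for the unramified double cover:
chi_top_S = 2 - 4 * 3 + 2 * 10 + h11_S
h11_T = chi_top_S // 2 - 2 + 4 * 0 - 2 * 3                    # -> 16
```

All five values agree with the table in the proposition.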
Notice that they obtain a $27$ dimensional subvariety of the space of sextic surfaces parametrizing $56$ nodal sextics, but modulo the action of $\mr{Aut}(\Proj^3)$ one again finds a $27-15=12$ dimensional family. We were not able to relate their construction to ours. However, when using their Macaulay scripts (which can be found in the eprint arXiv:math/0510499) we noticed that they do produce sextics which are invariant under the involution $x_0\mapsto -x_0$ in $\Proj^3$. \begin{cor} The family of sextics with an even set of $56$ nodes from \cite[Main Theorem B]{catanese-tonoli} coincides with the family constructed in Theorem \ref{ours}. \end{cor} \begin{proof} For a double cover $f:S\rightarrow F$ of a $56$ nodal sextic surface $F$, branched exactly over the nodes of $F$, the `quadratic' sheaf ${\mathcal F}$ on $F$ defined by $f_*\O_S=\O_F\oplus{\mathcal F}$ must satisfy $(\tau,a)=(3,3)$ or $(\tau,a)=(3,4)$, where $2\tau=h^1(F,{\mathcal F}(1))$ and $a=h^1(F,{\mathcal F})$, cf.\ \cite[Theorem 2.5]{catanese-tonoli}. The family constructed in \cite{catanese-tonoli} is the one with invariants $(\tau,a)=(3,3)$. For our surfaces we have $h^{1,0}(S)=h^1(F,f_*\O_S)=h^1(F,\O_F)+h^1(F,{\mathcal F})$, so we get $h^1(F,{\mathcal F})=3$, which shows that they are in the same family. \end{proof} \section{An explicit example} Let $C$ be a non-hyperelliptic genus three curve; we will also denote the canonical model of $C$, a quartic curve in $\Proj^2$, by $C$. Recall that $\Th=S^2C$, the symmetric product of $C$. We show how to find the global sections $H^0(\Th,\omega_\Th^{\otimes 2})$ in terms of the geometry of $C$, following \cite{brivio-verra}. Note that if we map $S^2C\rightarrow \mr{Jac}(C)$ by $p+q\mapsto p+q-t$ where $t\in S^2C$ is an odd theta characteristic (so $2t\equiv K_C$), then the image of $S^2C$ is a symmetric theta divisor.
Let $d=z_1+\ldots+z_4$ be an effective canonical divisor on $C$, $D=\sum (z_i+C)$ be the corresponding divisor on $S^2C$ and $\Delta$ be the diagonal in $S^2C$. Then, $2K_{S^2C}=2D-\Delta$. By \cite[Lemma 4.7]{brivio-verra}, we have the restriction sequence $$ 0\,\longrightarrow\,\O_{S^2C}(2K_{S^2C})\,\longrightarrow\, \O_{S^2C}(2D)\,\longrightarrow\,\O_\Delta(2D)\cong\O_C(4d)\,\longrightarrow\,0~. $$ and $$ H^0(S^2C,\om_{S^2C}^{\otimes 2})\,\cong \, \ker\big(S^2H^0(C,\om_C^{\otimes 2})\,\stackrel{\mu}{\longrightarrow}\,H^0(C,\om_C^{\otimes 4})\big)~. $$ where $\mu$ is the multiplication map. Note that $h^0(C,\om_C^{\otimes 2})=6$, so $\dim S^2H^0(C,\om_C^{\otimes 2})=21$, and $h^0(C,\om_C^{\otimes 4})=14$. By the same lemma, $\mu$ is surjective so indeed $h^0(S^2C,\om_{S^2C}^{\otimes 2})=7$. Let $\sigma_0,\ldots,\sigma_2$ be a basis of $H^0(C,\om_C)$. It induces a basis $\sigma_i\otimes\sigma_j$ of $H^0(C^2,\om_{C^2})=H^0(C,\om_C)^{\otimes 2}$. The sections of $H^0(\Th,\omega_\Th)\cong \wedge^2H^0(C,\om_C)$ define the Gauss map $S^2C\cong\Theta\rightarrow \Proj^2$. Explicitly, the Gauss map is induced by the map $$ C\times C\,\longrightarrow\, \Proj^2,\quad (x,y)\,\longmapsto\,(p_{12}:p_{13}:p_{23}),\quad p_{ij}(x,y):=\sigma_i(x)\sigma_j(y)-\sigma_j(x)\sigma_i(y)~. $$ The six products $p_{ij}p_{kl}$ span a six dimensional subspace of $\ker(\mu)$ which is the image of $S^2H^0(\Th,\omega_\Th)$ in $H^0(\Th,\omega_\Th^{\otimes 2})$. Let $f(z)$ be a homogeneous quartic polynomial in $\C[z_0,z_1,z_2]$ such that $f(\si_0(x),\si_1(x),\si_2(x))=0$ for all $x\in C$, that is, $f$ defines the curve $C\subset\Proj^2$. Choose any polynomial $g(u,v)$ of bidegree $(2,2)$ in $\C[u_0,u_1,u_2,v_0,v_1,v_2]$ such that $g(z,z)=f(z)$ and let $g_s(u,v):=g(u,v)+g(v,u)$, then $\tilde{g}(x,y):=g_s(\si_0(x),\ldots,\si_2(y))\in S^2H^0(C,\om_C^{\otimes 2})$ and lies in $\ker(\mu)$. 
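The dimension count behind $h^0(S^2C,\om_{S^2C}^{\otimes 2})=7$ is elementary Riemann-Roch arithmetic on the genus-3 curve $C$, and can be sanity-checked as follows (a sketch, not part of the paper):

```python
from math import comb

g = 3  # genus of the non-hyperelliptic curve C

def h0_omega_power(k):
    """h^0(C, omega_C^k) for k >= 2: deg omega_C^k = k(2g-2) > 2g-2, so the
    bundle is nonspecial and Riemann-Roch gives deg + 1 - g."""
    return k * (2 * g - 2) + 1 - g

n = h0_omega_power(2)                  # h^0(omega^2) = 6
dim_sym2 = comb(n + 1, 2)              # dim S^2 H^0(omega^2) = 21
dim_quartics = h0_omega_power(4)       # h^0(omega^4) = 14
dim_ker_mu = dim_sym2 - dim_quartics   # = 7, since mu is surjective
```

This recovers the count $21-14=7$ for $\ker(\mu)$ quoted above.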
Thus the choice of $\tilde{g}$ provides the section $t$ used to construct the map $\varphi_{2K_\Th}$, any other choice of $\tilde{g}$ is of the form $\lambda\tilde{g}+\sum \lambda_{ij}p_{ij}$ for complex numbers $\lambda,\lambda_{ij}$ with $\lambda\neq 0$. The map $\varphi_{2K_\Th}:\Th\rightarrow\Proj^6$ is therefore induced by the map $$ C\times C\,\longrightarrow\, \Proj^6,\qquad (x,y)\,\longmapsto\,\big(\cdots:p_{ij}(x,y)p_{kl}(x,y):\cdots:\tilde{g}(x,y)\big)~. $$ A homogeneous polynomial $P$ in seven variables is an equation for the image of this map if \\ $P(\ldots,p_{ij}(u,v)p_{kl}(u,v),\ldots,\tilde{g}(u,v))$ lies in the ideal of $\C[u_0,\ldots,v_2]$ generated by $f(u)$ and $f(v)$. \ An explicit example, worked out using the computer program Magma \cite{magma}, is provided by the choice $f=z_0z_1^3+z_1z_2^3+z_2z_0^3$, which defines the Klein curve in $\Proj^2$. We will take $g=u_0u_1v_1^2+u_1u_2v_2^2+u_2u_0v_0^2$ and the map $\varphi_{2K_\Th}$ is given by: $$ (y_{00}:y_{01}:\ldots:y_{22}:y_g)\,=\, \big(p_{01}^2:p_{01}p_{02}:p_{01}p_{12}:p_{02}^2:p_{02}p_{12}:p_{12}^2: \tilde{g}\big)~. $$ One of the equations for the image is $$ y_{00}^2y_{02} -y_{12}y_{22}^2 -y_{01}y_{11}^2 -5y_{01}^2y_{22}+ (-y_{00}y_{01}+ y_{02}y_{22} -y_{11}y_{12})y_g -y_g^3 $$ (this equation thus defines the image in $\Proj(1,1,1,2)\subset\Proj^6$). Next we pull this equation back to $\Proj^3$ along the map $\ol{p}$ by substituting $y_{ij}=x_ix_j$ and $y_g=x_3^2$, {\it moreover} we change the sign of $x_1$ in order to simplify the equation and we obtain $$ Q\,:=\, x_0^5x_2 + x_0x_1^5 + x_1x_2^5 - 5x_0^2x_1^2x_2^2 \,+\,( x_0^3x_1 + x_0x_2^3 + x_1^3x_2)x_3^2 - x_3^6~. $$ The singular locus of the surface $F$ defined by $Q=0$ consists of $56$ nodes and these are thus an even set of nodes. 
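As a quick sanity check (independent of the Magma computation), one can verify directly that $(1:1:1:1)$ lies on the surface $Q=0$ and is a singular point, i.e.\ that $Q$ and all four partial derivatives vanish there. The partials below are differentiated by hand from the displayed equation; the point $(1:0:0:0)$ is included as an example of a smooth point of the surface.

```python
def Q(x0, x1, x2, x3):
    # The sextic equation from the text.
    return (x0**5*x2 + x0*x1**5 + x1*x2**5 - 5*x0**2*x1**2*x2**2
            + (x0**3*x1 + x0*x2**3 + x1**3*x2)*x3**2 - x3**6)

def gradQ(x0, x1, x2, x3):
    """Partial derivatives of Q, differentiated by hand."""
    dQ0 = 5*x0**4*x2 + x1**5 - 10*x0*x1**2*x2**2 + (3*x0**2*x1 + x2**3)*x3**2
    dQ1 = 5*x0*x1**4 + x2**5 - 10*x0**2*x1*x2**2 + (x0**3 + 3*x1**2*x2)*x3**2
    dQ2 = x0**5 + 5*x1*x2**4 - 10*x0**2*x1**2*x2 + (3*x0*x2**2 + x1**3)*x3**2
    dQ3 = 2*(x0**3*x1 + x0*x2**3 + x1**3*x2)*x3 - 6*x3**5
    return (dQ0, dQ1, dQ2, dQ3)

node = (1, 1, 1, 1)       # a node of F
smooth_pt = (1, 0, 0, 0)  # lies on F, but with nonvanishing gradient
```

Euler's relation $\sum_i x_i\,\partial Q/\partial x_i = 6Q$ for the homogeneous degree-6 polynomial provides an extra consistency check on the hand-computed partials.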
To find all the nodes, we observe that $\mr{Aut}(F)$ contains a subgroup $G_{336}$ of order $336$ with generators $$ g_7\,:=\,\mbox{diag}(\omega,\omega^4,\omega^2,1),\qquad g_2:=\frac{1}{\sqrt{-7}}\left(\begin{array}{cccc} a&c&b&0\\c&b&a&0\\b&a&c&0\\0&0&0&\sqrt{-7}\end{array}\right)~,\qquad \left\{\begin{array}{rcl} a&=&\omega^2-\omega^5,\\ b&=&\omega-\omega^6,\\ c&=&\omega^4-\omega^3~, \end{array}\right. $$ where $\omega$ is a primitive seventh root of unity. One of the nodes is $(1:1:1:1)$ and $G_{336}$ acts transitively on the $56$ nodes; the stabilizer of a node is isomorphic to the symmetric group $S_3$. The covering involution $\mr{diag}(1,1,1,-1)$ generates the center of $G_{336}$ and $G_{336}\cong \{\pm 1\}\times G_{168}$ where $G_{168}\cong SL(3,\F_2)$ is the automorphism group of the Klein curve. The equation of $F$ can be written as $p_6+p_4x_3^2-x_3^6$; the discriminant of the cubic polynomial $p_6+p_4T-T^3$ has degree $12$ in $\C[x_0,x_1,x_2]$ and the curve it defines is the dual of the Klein curve (as expected from the presence of the Gauss map). \bibliographystyle{alpha}
\section{Introduction} Let $p$ be a prime number. Then there exist pairs of ($p$-adic) elliptic modular eigenforms $(f,g)$ of level $\Gamma_0(p)\cap \Gamma_1(N)$ for some $p\nmid N$ such that $f$ and $g$ share the same eigenvalues for the Hecke operators $T_{\ell}$ when $\ell\nmid pN$ (i.e. $f$ and $g$ are associated with the same $p$-adic Galois representation), but have \emph{different} non-zero eigenvalues for the $U_p$-operator. The results on the existence of such \emph{companion forms} for $p$-adic or mod-$p$ modular forms, such as those of Gross in \cite{gross1990tameness}, have many significant applications. For example, Buzzard and Taylor (\cite{buzzard1999companion}, and see \cite{buzzard2003analytic}) use Gross's results to prove the classicality of overconvergent $p$-adic weight one modular forms (hence certain cases of Artin's conjecture), and their methods have been successfully generalized to Hilbert modular forms of parallel weight one, e.g., \cite{pilloni2017formes}, \cite{pilloni2016arithmetique} and \cite{sasaki2019integral}.\par In \cite{hansen2017universal}, Hansen made a conjecture on the existence of all companion forms for finite slope overconvergent $p$-adic automorphic forms of general $\GL_n$ in the language of determining the set of \emph{companion points} on the eigenvariety that are associated with the same $p$-adic Galois representation but with possibly different $U_p$-eigenvalues or weights. Similar to the weight part of Serre's modularity conjecture, the recipes for companion forms are given by the $p$-adic local Galois representations. In fact, the conjecture on companion points is closely related to Breuil's locally analytic socle conjecture in \cite{breuil2016versI}\cite{breuil2015versII} from the point of view of the local-global compatibility in the locally analytic aspect of the $p$-adic local Langlands program.\par We will work, as Breuil does, in the setting of definite unitary groups.
Let $F$ be a totally imaginary quadratic extension of a totally real field $F^+$. Let $S_p$ be the set of places of $F^+$ that divide $p$. We assume that each $v\in S_p$ splits in $F$. Let $G$ be a definite unitary group of rank $n\geq 2$ over $F^+$ that is split over $F$ (so that $G(F^+\otimes_{\Q}\Q_p)\simeq \prod_{v\in S_p}\GL_n(F_v^{+})$). Then an eigenvariety $Y(U^p,\overline{\rho})$ of $G$, of certain tame level $U^p$ and localized at a modular absolutely irreducible $\overline{\rho}:\Gal(\overline{F}/F)\rightarrow \GL_n(\overline{\mathbb{F}}_p)$, is a rigid analytic space parametrizing pairs $(\rho,\underline{\delta})$ where $\rho:\Gal(\overline{F}/F)\rightarrow \GL_n(\overline{\mathbb{Q}}_p)$ is a continuous representation lifting $\overline{\rho}$ and $\underline{\delta}=(\underline{\delta}_v)_{v\in S_p}=(\delta_{v,i})_{v\in S_p,i=1,\cdots,n}: \prod_{v\in S_p}((F^{+}_v)^{\times})^n\rightarrow \overline{\Q}_p^{\times}$ is a continuous character such that $\rho$ is associated with a finite slope overconvergent $p$-adic automorphic form of $G$ which has ``weight'' $\underline{\delta}\mid_{\prod_{v\in S_p}(\cO_{F_v^{+}}^{\times})^n}$ and has ``$U_p$-eigenvalues'' $\prod_{j=1}^i\delta_{v,j}(\varpi_{F_v^+})$ for $v\in S_p$, $i=1,\cdots,n$, where $\varpi_{F_v^{+}}$ denotes a uniformizer of $F_v^{+}$. \par Recall that an algebraic character of $(F_v^{+})^{\times}$ has the form $(F_v^{+})^{\times}\rightarrow \overline{\Q}_p^{\times}:z\mapsto \prod_{\tau:F_v^{+}\hookrightarrow \overline{\Q}_p}\tau(z)^{k_{\tau}}$ for some $k_{\tau}\in \Z$. Now take a point $x=(\rho,\underline{\delta})\in Y(U^p,\overline{\rho})$ and assume that $\rho_v:=\rho\mid_{\Gal(\overline{F_{\widetilde{v}}}/F_{\widetilde{v}})}$ is crystalline for all $v\in S_p$ where $\widetilde{v}\mid v$ is a place of $F$ chosen for each $v\in S_p$. Then $\underline{\delta}$ is locally algebraic, i.e.
$\underline{\delta}=\underline{\delta}_{\mathrm{alg}}\underline{\delta}_{\mathrm{sm}}$ where each $\delta_{\mathrm{alg},v,i}$ is algebraic and $\delta_{\mathrm{sm},v,i}$ is smooth. A companion point $(\rho,\underline{\delta}')$ of $x$ falls into one of the following two types: \begin{enumerate}[label=(\alph*)] \item $\underline{\delta}_{\mathrm{alg}}'\neq \underline{\delta}_{\mathrm{alg}}$ but $\underline{\delta}_{\mathrm{sm}}'=\underline{\delta}_{\mathrm{sm}}$ (different ``weights''); \item $\underline{\delta}_{\mathrm{sm}}'\neq \underline{\delta}_{\mathrm{sm}}$ (different ``$U_p$-eigenvalues up to some normalizations''). \end{enumerate} Our main theorem is the following. \begin{theorem}[Theorem \ref{theoremmaincrystalline}]\label{thm:introduction} Suppose that $x=(\rho,\underline{\delta})\in Y(U^p,\overline{\rho})$ is a point such that $\rho_v$ is generic crystalline (see \S\ref{sec:companionpointsdesciption} for the generic condition) for all $v\in S_p$. Assume that the tame level is sufficiently small and that the usual Taylor-Wiles hypothesis holds (Assumption \ref{ass:taylorwiles}). Then all the companion points of $x$ in the conjecture of Hansen or Breuil appear on $Y(U^p,\overline{\rho})$. \end{theorem} The above theorem was already proved by Breuil-Hellmann-Schraen in \cite{breuil2019local} under the assumption that the Hodge-Tate weights of each $\rho_v$ are regular (i.e. pairwise different). In \cite{wu2021local}, the author removed the regularity assumption on the Hodge-Tate weights, but was only able to prove the existence of all companion points of type (a) above in the non-regular cases. The task of this paper is to find all companion points of type (b) for non-regular points. These are companion points corresponding to different triangulations (refinements) of the trianguline (crystalline) Galois representations.\par The proof of our theorem is motivated by some arguments in ordinary cases and will use the known results in both regular and non-regular cases.
In ordinary cases, modularity lifting theorems were proved for ordinary families of Galois representations that specialize to companion points with possibly non-regular weights. In our finite slope/trianguline cases, for a non-regular point $x$ as in Theorem \ref{thm:introduction}, the naive strategy is to find a sequence of points $x^i$ on $Y(U^p,\overline{\rho})$ with regular Hodge-Tate weights such that $x=\lim_{i\to\infty}x^i$ and certain companion points $(x^i)'$ of $x^i$, which will exist on $Y(U^p,\overline{\rho})$ by \cite{breuil2019local}, satisfy that the $(x^i)'$ converge to a point $x'$ on $Y(U^p,\overline{\rho})$ and that $x'$ is a companion point of $x$ of type (b). \par The actual proof is Galois-theoretical. Using patching methods \cite{caraiani2016patching} and the patched eigenvariety \cite{breuil2017interpretation}, we can reduce the task of finding those nearby regular points $x^i$ to a similar problem on the \emph{trianguline variety} of \cite{breuil2017interpretation}, the local Galois-theoretical eigenvariety. Those $x^i=(\rho^i,\underline{\delta}^i)$ (now $\rho^i$ are representations of local Galois groups) are found by studying some ``crystalline/de Rham'' loci on the moduli space of trianguline $(\varphi,\Gamma)$-modules appearing in the proof of \cite[Thm. 2.6]{breuil2017interpretation}, and $\rho^i$ will be the Galois representations corresponding to certain \'etale trianguline $(\varphi,\Gamma)$-modules of parameter $\underline{\delta}^i$ (the \'etaleness is ensured by the results of Hellmann in \cite{hellmann2016families}). The key example is the case $n=2$. \begin{remark} Our proof of the existence of companion points of type (b) will not directly use the theory of local models of the trianguline variety in \cite{breuil2019local} and \cite{wu2021local}.
However, it is the existence of all companion points of type (a) in \cite{wu2021local}, which used the local models, that allows us to keep working in the smooth locus of the trianguline variety consisting of points $(\rho,\underline{\delta})$ where $\rho$ is trianguline of parameter $\underline{\delta}$. \end{remark} \begin{remark} As a corollary of Theorem \ref{thm:introduction}, we can determine all the companion constituents (certain locally analytic representations of $\prod_{v\in S_p}\GL_n(F_{v}^+)$) in the Hecke-isotypic part of the completed cohomology of $G$ associated with generic crystalline Galois representations in Breuil's locally analytic socle conjecture (Corollary \ref{cor:socle}). Since there are no locally algebraic constituents in the non-regular cases, the existence of all of these companion constituents could serve as a replacement, on the automorphic side, for the de Rhamness on the Galois side of the $p$-adic Langlands correspondence in this particular non-regular situation. \end{remark} \begin{remark} For non-regular weights, the existence of all companion points will not directly lead to a classicality result for Hecke eigensystems, in contrast with \cite{buzzard1999companion}, since classical automorphic representations for definite unitary groups have regular weights. The results of this paper might be adaptable to Hilbert modular forms, with applications to the classicality of $p$-adic Hilbert modular forms with non-regular and possibly non-parallel weights. \end{remark} The paper is organized as follows. In \S\ref{sec:coho} and \S\ref{sec:crystallin}, we collect some (presumably known) results on $(\varphi,\Gamma)$-modules over the Robba rings. In \S\ref{sec:hunting}, we find the companion points on the trianguline variety. In \S\ref{sec:global}, we apply the local results of \S\ref{sec:hunting} in the global setting and prove the main theorem. \subsection{Notation} We will use the notation of \cite[\S1.7]{wu2021local}.
Let $K$ be a finite extension of $\Q_p$ and $L/\Q_p$ be a large enough coefficient field such that $\Sigma:=\{\tau:K\hookrightarrow L\}$ has size $[K:\Q_p]$. Let $C$ be the completion of an algebraic closure of $K$. We have the Robba ring $\cR_{L,K}$ of $K$ over $L$ defined in \cite[Def. 6.2.1]{kedlaya2014cohomology}. Let $t\in \cR_{L,K}$ denote Fontaine's $2\pi i$ and write $t=u\prod_{\tau \in \Sigma}t_{\tau}$ for some $u\in \cR_{L,K}^{\times}$ (see \cite[Not. 6.2.7]{kedlaya2014cohomology} for details). For $\mathbf{k}=(k_{\tau})_{\tau\in\Sigma}\in\Z^{\Sigma}$, write $t^{\mathbf{k}}=\prod_{\tau\in\Sigma}t_{\tau}^{k_{\tau}}$. If $\delta:K^{\times}\rightarrow L^{\times}$ is a continuous character, let $\cR_{L,K}(\delta)$ be the associated rank one $(\varphi,\Gamma_K)$-module over $\cR_{L,K}$ of \cite[Cons. 6.2.4]{kedlaya2014cohomology} where $\Gamma_K=\Gal(K(\mu_{\infty})/K)$. Then $t^{\mathbf{k}}\cR_{L,K}=\cR_{L,K}(z^{\mathbf{k}})$ where $z^{\mathbf{k}}$ denotes the character $z\mapsto \prod_{\tau\in\Sigma}\tau(z)^{k_{\tau}}$. If $a\in L^{\times}$, then denote by $\mathrm{unr}(a)$ the unramified character of $K^{\times}$ sending a uniformizer of $K$ to $a$. Let $\cT_{L}$ be the rigid space over $L$ parametrizing continuous characters of $K^{\times}$ and let $\cT_{0}\subset \cT_{L}$ be the complement of the subset of characters $\delta$ such that $\delta$ or $\epsilon \delta^{-1}$ is algebraic. Here $\epsilon$ is the character $\mathrm{Norm}_{K/\Q_p}|\mathrm{Norm}_{K/\Q_p}|_{\Q_p}$ of $K^{\times}$. We can define the $\tau$-part $\wt_{\tau}(\delta)$ of the weight $\wt(\delta)$ of $\delta$ (see \cite[\S1.7.2]{wu2021local}). Our convention is that the cyclotomic character of $\cG_K:=\Gal(\overline{K}/K)$ has Hodge-Tate weights one. We fix an integer $n\geq 2$. \section{Cohomology of $(\varphi,\Gamma_K)$-modules}\label{sec:coho} We collect some results on the cohomology of $(\varphi,\Gamma_K)$-modules (of character type). We fix $\delta:K^{\times}\rightarrow L^{\times}$ to be a continuous character.
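\par As a worked instance of these conventions (a direct computation from the definitions above): for $z\in K^{\times}$ we have $\mathrm{Norm}_{K/\Q_p}(z)=\prod_{\tau\in\Sigma}\tau(z)$, the norm of a unit has trivial $p$-adic absolute value, and $|\mathrm{Norm}_{K/\Q_p}(\varpi_K)|_{\Q_p}=q^{-1}$ for a uniformizer $\varpi_K$ of $K$, where $q$ is the cardinality of the residue field of $K$. Hence \[\epsilon=z^{\mathbf{1}}\,\mathrm{unr}(q^{-1}),\qquad \wt_{\tau}(\epsilon)=1\ \text{for all }\tau\in\Sigma,\] where $\mathbf{1}=(1,\cdots,1)\in\Z^{\Sigma}$; that is, $\epsilon$ is locally algebraic with algebraic part $z^{\mathbf{1}}$ and smooth unramified part $\mathrm{unr}(q^{-1})$, in accordance with the convention that the cyclotomic character has Hodge-Tate weights one.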
Recall that if $D$ is a $(\varphi,\Gamma_K)$-module over $\cR_{L,K}$, then $H^{i}_{\varphi,\gamma_K}(D[\frac{1}{t}])=\varinjlim_{m\to +\infty}H^{i}_{\varphi,\gamma_K}(t^{-m}D), i=0,1,2$ (\cite[(3.11)]{breuil2019local}). \begin{lemma}\label{lem:dimensionH1} If $\delta\in \cT_0$, then $\dim_LH^{i}_{\varphi,\gamma_K}(\cR_{L,K}(\delta))=0$ for $i=0,2$ and $\dim_LH^{1}_{\varphi,\gamma_K}(\cR_{L,K}(\delta))=[K:\Q_p]$. \end{lemma} \begin{proof} \cite[Prop. 6.2.8]{kedlaya2014cohomology}. \end{proof} Recall that in \cite[\S3.3]{breuil2019local} we have a functor $W_{\mathrm{dR}}$ (resp. $W_{\mathrm{dR}}^+$) sending a $(\varphi,\Gamma_K)$-module over $\cR_{L,K}[\frac{1}{t}]$ (resp. $\cR_{L,K}$) to an $L\otimes_{\Q_p}\mathrm{B}_{\mathrm{dR}}$-representation (resp. $L\otimes_{\Q_p}\mathrm{B}_{\mathrm{dR}}^+$-representation) of $\cG_K$. \begin{lemma}\label{lem:cohomologyderhamphigamma} If $\delta\in \cT_0$ is locally algebraic, then \[H^{1}_{\varphi,\gamma_K}(\cR_{L,K}(\delta)[\frac{1}{t}])\simrightarrow H^1(\cG_K, W_{\mathrm{dR}}(\cR_{L,K}(\delta)[\frac{1}{t}])).\] \end{lemma} \begin{proof} \cite[Lem. 3.4.2]{breuil2019local}. \end{proof} \begin{proposition} For $\mathbf{k}\in \Z_{\geq 0}^{\Sigma}$ and $i=0,1$, \[\dim_L H^{i}_{\varphi,\gamma_K}(t^{-\mathbf{k}}\cR_{L,K}(\delta)/\cR_{L,K}(\delta))=|\{\tau\in\Sigma\mid k_{\tau}\geq 1, \wt_{\tau}(\delta)\in\{1,\cdots,k_{\tau} \} \}|. \] \end{proposition} \begin{proof} This follows from \cite[Appendix A]{kedlaya2009some} and \cite[Lem. 2.16]{nakamura2009classification} (and some other well-known results: \cite[Thm. 4.7, Cor. 4.8]{liu2007cohomology} and the comparison in \cite[Prop. 2.2]{nakamura2009classification}, or a generalized version \cite[Thm. 5.11]{nakamura2014deformations}).
\end{proof} \begin{corollary}\label{cor:dimensionker} For $\mathbf{k}\in \Z_{\geq 0}^{\Sigma}$ and $\delta\in\cT_0$, we have \begin{align*} &\dim_L\mathrm{Ker}\left(H^{1}_{\varphi,\gamma_K}(\cR_{L,K}(\delta))\rightarrow H^{1}_{\varphi,\gamma_K}(t^{-\mathbf{k}}\cR_{L,K}(\delta))\right)\\ =&\dim_L\mathrm{Coker}\left(H^{1}_{\varphi,\gamma_K}(\cR_{L,K}(\delta))\rightarrow H^{1}_{\varphi,\gamma_K}(t^{-\mathbf{k}}\cR_{L,K}(\delta))\right)\\ =&|\{\tau\in\Sigma\mid k_{\tau}\geq 1, \wt_{\tau}(\delta)\in\{1,\cdots,k_{\tau}\} \}|. \end{align*} \end{corollary} \begin{corollary}\label{cor:cohomologyiso} If $\delta\in \cT_0$ is locally algebraic and $\wt_{\tau}(\delta)\leq 0$ for all $\tau\in\Sigma$, then the natural maps $\cR_{L,K}(\delta)\hookrightarrow t^{-\mathbf{k}}\cR_{L,K}(\delta)$ induce isomorphisms $H^{i}_{\varphi,\gamma_K}(\cR_{L,K}(\delta))\simrightarrow H^{i}_{\varphi,\gamma_K}(t^{-\mathbf{k}}\cR_{L,K}(\delta))$ for all $i=0,1,2$ and $\mathbf{k}\in \Z_{\geq 0}^{\Sigma}$. \end{corollary} If $\mathbf{k}\in\Z^{\Sigma}$, define $\mathbf{k}^{\sharp}\in \Z^{\Sigma}$ by $k_{\tau}^{\sharp}=k_{\tau}$ if $k_{\tau}\geq 1$ and $k_{\tau}^{\sharp}=0$ otherwise. \begin{proposition}\label{prop:kernelandderham} Assume that $\delta\in \cT_0$ is locally algebraic with weight $\mathbf{k}\in \Z^{\Sigma}$. Then the image of an element $x\in H^{1}_{\varphi,\gamma_K}(\cR_{L,K}(\delta))$ in $H^1_{\varphi,\gamma_K}(\cR_{L,K}(\delta)[\frac{1}{t}])$ is $0$ if and only if \[x\in\mathrm{Ker}\left(H^{1}_{\varphi,\gamma_K}(\cR_{L,K}(\delta))\rightarrow H^{1}_{\varphi,\gamma_K}(t^{-\mathbf{k}^{\sharp}}\cR_{L,K}(\delta))\right). \] \end{proposition} \begin{proof} We have $\wt_{\tau}(\delta z^{-\mathbf{k}^{\sharp}})\leq 0$ for all $\tau\in \Sigma$. Thus by Corollary \ref{cor:cohomologyiso} applied to $\delta z^{-\mathbf{k}^{\sharp}}$ and passing to the colimit over further twists, $H^{1}_{\varphi,\gamma_K}(t^{-\mathbf{k}^{\sharp}}\cR_{L,K}(\delta))\simeq H^{1}_{\varphi,\gamma_K}(\cR_{L,K}(\delta)[\frac{1}{t}])$.
\end{proof} \section{A crystalline criterion}\label{sec:crystallin} We need a criterion guaranteeing that the points on the trianguline variety that we will find in the next section are crystalline. For the definition of de Rham or crystalline $(\varphi,\Gamma_K)$-modules, see \cite[Def. 2.5]{hellmann2016density}. We say a trianguline $(\varphi,\Gamma_K)$-module of parameter $\underline{\delta}=(\delta_1,\cdots,\delta_n)$ is generic if $\delta_i\delta_j^{-1}\in\cT_0$ for all $i\neq j$ (or $\underline{\delta}\in\cT_0^n$ in the notation of \cite[\S3.2]{wu2021local}; note that $\cT_0^n\neq (\cT_0)^n$!). Recall that a locally algebraic character $\delta:K^{\times}\rightarrow L^{\times}$ is crystalline (or semi-stable) if and only if the smooth part $\delta_{\mathrm{sm}}$ is unramified (see \cite[Exam. 6.2.6]{kedlaya2014cohomology}). \begin{lemma}\label{lem:crystalline} If $D$ is a generic trianguline $(\varphi,\Gamma_K)$-module over $\cR_{L,K}$ of parameter $\underline{\delta}$ such that all $\delta_i$ are crystalline, then $D$ is a crystalline $(\varphi,\Gamma_K)$-module if and only if $D$ is de Rham. \end{lemma} \begin{proof} This follows from the proof of \cite[Cor. 2.7(i)]{hellmann2016density}. Assume $D$ is de Rham. As $D$ is a successive extension of crystalline $(\varphi,\Gamma_K)$-modules, $D$ is semi-stable (by \cite{berger2008equations}; see also the arguments in \cite[\S6.1]{berger2002representations}). By the generic assumption, the monodromy must be trivial. Hence $D$ is crystalline. \end{proof} \begin{lemma} Let $D$ be a trianguline $(\varphi,\Gamma_K)$-module over $\cR_{L,K}$ of rank $n$ with the trianguline filtration $\Fil_{\bullet}D$ such that $\Fil_{i}D/\Fil_{i-1}D\simeq \cR_{L,K}(\delta_i)$ for $i=1,\cdots,n$. Fix $i_0\in\{1,\cdots,n-1\}$ and let $D_0=\Fil_{i_0}D$ and $D_1=D/\Fil_{i_0}D$. Assume that $\underline{\delta}$ is locally algebraic and let $\lambda=(\lambda_{\tau,i})_{\tau\in\Sigma,i=1,\cdots,n}=\wt(\underline{\delta})\in (\Z^{\Sigma})^n$.
Assume that for every $\tau\in\Sigma$, $\lambda_{\tau,i}< \lambda_{\tau,j}$ whenever $i>i_0\geq j$, and that both $D_0$ and $D_1$ are de Rham. Then $D$ is de Rham. \end{lemma} \begin{proof} This is a generalization of \cite[Prop. 2.6]{hellmann2016density}. We need to prove that $\dim_LW_{\mathrm{dR}}(D)^{\cG_K}=n[K:\Q_p]$. For $\tau\in \Sigma$, let $k_{\tau}=\max_{i>i_0}\lambda_{\tau,i}$. Then $\dim_LW_{\mathrm{dR}}^+(t^{-\mathbf{k}}D_0)^{\cG_K}=0$ as the Hodge-Tate weights of $t^{-\mathbf{k}}D_0$ are positive and $t^{-\mathbf{k}}D_0$ is de Rham. We have an exact sequence \[0\rightarrow W_{\mathrm{dR}}^+(t^{-\mathbf{k}}D)^{\cG_K}\rightarrow W_{\mathrm{dR}}^+(t^{-\mathbf{k}}D_1)^{\cG_K}\rightarrow H^1(\cG_K,W_{\mathrm{dR}}^+(t^{-\mathbf{k}}D_0)).\] The Hodge-Tate weights of $t^{-\mathbf{k}}D_0$ are $\geq 1$, hence $H^1(\cG_K,W_{\mathrm{dR}}^+(t^{-\mathbf{k}}D_0))=0$ by \cite[Cor. 5.6]{nakamura2014deformations} (we have that $H^1(\cG_K,C(i))=0$ for $i\neq 0$ by \cite[Prop. 2.15(ii)]{fontaine2004arithmetique}). We get $\dim_LW_{\mathrm{dR}}^+(t^{-\mathbf{k}}D)^{\cG_K}=\dim_LW_{\mathrm{dR}}^+(t^{-\mathbf{k}}D_1)^{\cG_K}=(n-i_0)[K:\Q_p]$ since $t^{-\mathbf{k}}D_1$ is de Rham with non-positive Hodge-Tate weights. As $D_0$ is de Rham, $\dim_LW_{\mathrm{dR}}(D_0)^{\cG_K}=i_0[K:\Q_p]$. Since $W_{\mathrm{dR}}^+(t^{-\mathbf{k}}D_0)[\frac{1}{t}]\cap W_{\mathrm{dR}}^+(t^{-\mathbf{k}}D)=W_{\mathrm{dR}}^+(t^{-\mathbf{k}}D_0)$, we have $W_{\mathrm{dR}}(D_0)^{\cG_K}\cap W_{\mathrm{dR}}^+(t^{-\mathbf{k}}D)^{\cG_K}=W_{\mathrm{dR}}^+(t^{-\mathbf{k}}D_0)^{\cG_K}=\{0\}$. Then $W_{\mathrm{dR}}^+(t^{-\mathbf{k}}D)^{\cG_K}$ and $W_{\mathrm{dR}}(D_0)^{\cG_K}$ span an $n[K:\Q_p]$-dimensional $L$-subspace of $W_{\mathrm{dR}}(D)^{\cG_K}$; since $\dim_LW_{\mathrm{dR}}(D)^{\cG_K}\leq n[K:\Q_p]$ always holds, equality follows and $D$ is de Rham. \end{proof} The above lemma will be used in the following form later.
\begin{proposition}\label{prop:de Rham} Assume that $D$ is a trianguline $(\varphi,\Gamma_K)$-module of rank $n$ over $\cR_{L,K}$ with the trianguline filtration $\Fil_{\bullet}D$ such that $\Fil_{i}D/\Fil_{i-1}D\simeq \cR_{L,K}(\delta_i)$ for $i=1,\cdots,n$. Fix $i_0\in\{1,\cdots,n-1\}$ and let $D_0=\Fil_{i_0-1}D, D_1=\Fil_{i_0+1}D/D_0$ and $D_2=D/\Fil_{i_0+1}D$. Let $\lambda=\wt(\underline{\delta})$ and assume that $\underline{\delta}$ is locally algebraic. Assume that for every $\tau\in\Sigma$, $\lambda_{\tau,i}> \lambda_{\tau,i+1}$ if $i\neq i_0$, $\lambda_{\tau,i}> \lambda_{\tau,i_0},\lambda_{\tau,i_0+1}$ if $i<i_0$, and $\lambda_{\tau,i}<\lambda_{\tau,i_0},\lambda_{\tau,i_0+1}$ if $i>i_0+1$. If $D_1$ is de Rham, then $D$ is de Rham. \end{proposition} \section{Critical points hunting}\label{sec:hunting} Let $\overline{r}:\cG_K\rightarrow \GL_n(k_L)$ be a continuous representation. We first recall some constructions around the trianguline variety $X_{\mathrm{tri}}(\overline{r})$ of \cite[\S2.2]{breuil2017interpretation}. In the first part of this section, we will only need the Zariski open dense subset $U_{\mathrm{tri}}(\overline{r})\subset \trivar$. \subsection{The trianguline variety}\label{subsec:trianguline} Let $\cT_{\mathrm{reg}}^n$ be the Zariski open subset of $\cT_L^n$ consisting of characters $\underline{\delta}=(\delta_i)_{i=1,\cdots,n}$ such that $\delta_i\delta_j^{-1}\neq z^{-\mathbf{k}},\epsilon z^{\mathbf{k}}$ for $i\neq j$ and $\mathbf{k}\in\Z_{\geq 0}^{\Sigma}$. There are rigid spaces $\mathcal{S}_n^{\square}(\overline{r})\rightarrow \mathcal{S}_n$ over $\cT^n_{\mathrm{reg}}$ constructed in the proof of \cite[Thm. 2.6]{breuil2017interpretation} (and in \cite[\S2.2]{hellmann2016density}) which will be used later and which we recall below.
\par The space $\mathcal{S}_n$ represents the functor sending a reduced rigid space $X$ over $L$ to the isomorphism classes of quadruples $(D_X,\Fil_{\bullet}D_X,\nu_X, \underline{\delta}_X)$ where $D_X$ is a $(\varphi,\Gamma_K)$-module of rank $n$ over $\cR_{X,K}$, where $\cR_{X,K}$ denotes the Robba ring of $K$ over $X$, $\Fil_{\bullet}D_X$ is a filtration of sub-$(\varphi,\Gamma_K)$-modules of $D_X$ which are locally direct summands as $\cR_{X,K}$-modules, $\underline{\delta}_X\in \cT_{\mathrm{reg}}^n(X)$ and $\nu_X: \Fil_{i}D_X/\Fil_{i-1}D_X\simeq \cR_{X,K}(\delta_{i})$ (we omit the subscripts of the spaces for the universal characters to simplify the notation). There are obvious morphisms $\mathcal{S}_n\rightarrow \mathcal{S}_{n-1}\times_L\cT_L\rightarrow \cT_{\mathrm{reg}}^{n-1}\times_L \cT_L\subset \cT_L^n$. Let $U\subset \mathcal{S}_{n-1}\times_L\cT_L$ be the preimage of $\cT_{\mathrm{reg}}^n$, which is Zariski open in $\mathcal{S}_{n-1}\times_L \cT_L$, and let $D_U$ be the $(\varphi,\Gamma_K)$-module over $U$ pulled back from the universal one on $\mathcal{S}_{n-1}$. Then $\mathcal{S}_n\simeq \mathrm{Spec}^{\mathrm{an}}(\mathrm{Sym}^{\bullet}(\mathcal{E}xt^1_{\varphi,\gamma_K}(\cR_{U,K}(\delta_n), D_U)^{\vee}))$ is a geometric vector bundle over $U$ where $\mathcal{E}xt^1_{\varphi,\gamma_K}(\cR_{U,K}(\delta_n), D_U)\simeq H^1_{\varphi,\gamma_K}(D_U(\delta_n^{-1}))$ is a locally free sheaf on $U$ of rank $(n-1)[K:\Q_p]$ (\cite[Prop. 2.3]{hellmann2016density}) and the notation $\mathrm{Spec}^{\mathrm{an}}$ is taken from \cite[Thm. 2.2.5]{conrad2006relative}. It follows by induction that the map $\mathcal{S}_n\rightarrow \cT^n_{\mathrm{reg}}\subset \cT_L^n$ is smooth.
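\par As a sanity check on this inductive construction, one can count dimensions (a sketch, using the standard facts that $\dim \cT_L=[K:\Q_p]+1$ and that $\mathcal{S}_1\simeq \cT_{\mathrm{reg}}^1=\cT_L$): each step of the induction adjoins the character $\delta_i$, contributing $\dim \cT_L$, and the fibers of the geometric vector bundle, contributing $(i-1)[K:\Q_p]$, so that \[\dim \mathcal{S}_n=\sum_{i=1}^{n}\left([K:\Q_p]+1+(i-1)[K:\Q_p]\right)=n+[K:\Q_p]\frac{n(n+1)}{2}.\]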
\par Let $\mathcal{S}_n^{\mathrm{adm}}\subset\mathcal{S}_n$ be the open subset (as adic spaces) of the admissible locus, which comes from a rigid space, and let $\mathcal{S}_n^{\square,\mathrm{adm}}\rightarrow \mathcal{S}_n^{\mathrm{adm}}$ be the $\GL_n$-torsor trivializing the universal Galois representation over $\mathcal{S}^{\mathrm{adm}}_n$. Let $\mathcal{S}_n^{\square,\mathrm{adm},+}\subset\mathcal{S}_n^{\square,\mathrm{adm}}$ be the admissible open subset where the universal framed representation $\cG_K\rightarrow \GL_n(\Gamma(\mathcal{S}_n^{\square,\mathrm{adm}},\cO_{\mathcal{S}_n^{\square,\mathrm{adm}}}))$ factors through $\cG_K\rightarrow \GL_n(\Gamma(\mathcal{S}_n^{\square,\mathrm{adm}},\cO_{\mathcal{S}_n^{\square,\mathrm{adm}}}^+))$. We denote by $\mathcal{S}_n^{\square}(\overline{r})$ the admissible open subset of $\mathcal{S}_n^{\square,\mathrm{adm},+}$ where the reduction $\cG_K\rightarrow {\GL_n(\Gamma(\mathcal{S}_n^{\square,\mathrm{adm},+},\cO_{\mathcal{S}_n^{\square,\mathrm{adm},+}}^+/\cO_{\mathcal{S}_n^{\square,\mathrm{adm},+}}^{++}))}$ coincides with $\overline{r}$ (see also the discussion before \cite[Prop. 8.17]{hartl2020universal}). The map $\kappa:\mathcal{S}_n^{\square}(\overline{r})\rightarrow \mathcal{S}_n\rightarrow \cT_L^n$ is also smooth. \par Let $R_{\overline{r}}$ (over $\cO_L$) be the framed deformation ring of $\overline{r}$ and let $\fX_{\overline{r}}:=\Spf(R_{\overline{r}})^{\mathrm{rig}}$ be the rigid analytic generic fiber (we follow the notation in \cite{breuil2019local} and \cite{wu2021local} rather than \cite{breuil2017interpretation}). The image of $\mathcal{S}_n^{\square}(\overline{r})\rightarrow \fX_{\overline{r}}\times\cT_L^n$ is equal to $U_{\mathrm{tri}}(\overline{r})$ and the trianguline variety $X_{\mathrm{tri}}(\overline{r})$ is the Zariski closure of $U_{\mathrm{tri}}(\overline{r})$ in $\fX_{\overline{r}}\times\cT_L^n$ with the reduced induced structure.
The map $\pi_{\overline{r}}:\mathcal{S}_n^{\square}(\overline{r})\rightarrow U_{\mathrm{tri}}(\overline{r})\subset X_{\mathrm{tri}}(\overline{r})$ is smooth. \subsection{Some ``de Rham'' locus} We will define a subspace $\mathcal{S}^{\square}_{n,(i_0,\mathbf{k}_J)}(\overline{r})\subset \mathcal{S}^{\square}_{n}(\overline{r})$ to which the criteria of the last section apply at certain points. \par We fix a datum consisting of $i_0\in\{1,\cdots,n-1\}$, a subset $J\subset \Sigma$ and $\mathbf{k}_J=(k_{\tau})_{\tau\in J}\in \Z_{\geq 1}^J$. We allow $J$ to be $\emptyset$ or $\Sigma$. Let $\mathcal{T}_{(i_0,\mathbf{k}_J)}^n$ be the subset of characters $\underline{\delta}\in\cT_{\mathrm{reg}}^n$ such that $\mathrm{wt}_{\tau}(\delta_{i_0}\delta_{i_0+1}^{-1})=k_{\tau}$ for all $\tau\in J$ and $\delta_{i_0}\delta_{i_0+1}^{-1}\in \cT_0$. Let $\ft$ be the base change to $L$ of the $\Q_p$-Lie algebra of $(K^{\times})^n$ and view its dual $\ft^{*}$ as the affine space of weights; we have a weight map $\wt:\cT_L^n\rightarrow \ft^{*}$. Let $\ft^{*}_{(i_0,\mathbf{k}_J)}$ be the subspace of points $(\lambda_{\tau,i})_{\tau\in\Sigma,i=1,\cdots,n}$ such that $\lambda_{\tau,i_0}-\lambda_{\tau,i_0+1}=k_{\tau}$ for all $\tau\in J$. \begin{lemma} The rigid space $\mathcal{T}_{(i_0,\mathbf{k}_J)}^n$ is smooth, reduced and equidimensional, and is \'etale over $\ft^*_{(i_0,\mathbf{k}_J)}$. \end{lemma} \begin{proof} This follows from \cite[Prop. 6.1.13]{ding2017formes}. \end{proof} Consider the universal $(\varphi,\Gamma_K)$-modules $D_X$ and $\Fil_{\bullet}D_X$ over $X=\mathcal{S}_{n}\times_{\cT_L^n}\cT^n_{(i_0,\mathbf{k}_J)}$ or $ \mathcal{S}_{n}^{\square}(\overline{r})\times_{\cT_L^n}\cT^n_{(i_0,\mathbf{k}_J)}$ pulled back from $\mathcal{S}_n$.
The extension \[0\rightarrow \cR_{X,K}(\delta_{X,i_0})\rightarrow \Fil_{i_0+1}D_X/\Fil_{i_0-1}D_X\rightarrow \cR_{X,K}(\delta_{X,i_0+1})\rightarrow 0 \] together with the trivialization $\nu_X$ defines a section $s_X$ in \[\mathcal{E}xt^1_{\varphi,\gamma_K}(\cR_{X,K}(\delta_{X,i_0+1}),\cR_{X,K}(\delta_{X,i_0}))\simeq H^1_{\varphi,\gamma_K}(\cR_{X,K}(\delta_{X,i_0}\delta_{X,i_0+1}^{-1})).\] By the main result of \cite{kedlaya2014cohomology}, both $H^1_{\varphi,\gamma_K}(\cR_{X,K}(\delta_{i_0}\delta_{i_0+1}^{-1}))$ and $H^1_{\varphi,\gamma_K}(t^{-\mathbf{k}_J}\cR_{X,K}(\delta_{i_0}\delta_{i_0+1}^{-1}))$ are coherent sheaves on $X$. We define the subspace $\mathcal{S}_{n,(i_0,\mathbf{k}_J)}$ or $\mathcal{S}_{n,(i_0,\mathbf{k}_J)}^{\square}(\overline{r})$ to be the vanishing locus on $X=\mathcal{S}_{n}\times_{\cT_L^n}\cT^n_{(i_0,\mathbf{k}_J)}$ or $\mathcal{S}_{n}^{\square}(\overline{r})\times_{\cT_L^n}\cT^n_{(i_0,\mathbf{k}_J)}$ of the image of $s_X$ under the natural map \[H^1_{\varphi,\gamma_K}(\cR_{X,K}(\delta_{i_0}\delta_{i_0+1}^{-1}))\rightarrow H^1_{\varphi,\gamma_K}(t^{-\mathbf{k}_J}\cR_{X,K}(\delta_{i_0}\delta_{i_0+1}^{-1})).\] The vanishing loci are Zariski closed subspaces as $H^1_{\varphi,\gamma_K}(t^{-\mathbf{k}_J}\cR_{X,K}(\delta_{i_0}\delta_{i_0+1}^{-1}))$ is locally free. \begin{lemma}\label{lem:locallyfree} Let $X$ be a reduced rigid space over $L$ and $\delta_X:K^{\times}\rightarrow \Gamma(X,\cO_X)^{\times}$ be a continuous character. Assume that for any $x\in X$, we have $\delta_x\in\mathcal{T}_0$ and $\wt_{\tau}(\delta_x)=k_{\tau}$ for all $\tau\in J$. 
Then the coherent sheaves $H^1_{\varphi,\gamma_K}(\cR_{X,K}(\delta_X))$, $H^1_{\varphi,\gamma_K}(t^{-\mathbf{k}_J}\cR_{X,K}(\delta_X))$, as well as \[\mathrm{Ker}(H^1_{\varphi,\gamma_K}(\cR_{X,K}(\delta_X))\rightarrow H^1_{\varphi,\gamma_K}(t^{-\mathbf{k}_J}\cR_{X,K}(\delta_X)))\] and \[\mathrm{Coker}(H^1_{\varphi,\gamma_K}(\cR_{X,K}(\delta_X))\rightarrow H^1_{\varphi,\gamma_K}(t^{-\mathbf{k}_J}\cR_{X,K}(\delta_X)))\] are finite projective over $X$ of ranks $|\Sigma|$, $|\Sigma|$, $|J|$ and $|J|$ respectively, and their formation commutes with arbitrary base change. \end{lemma} \begin{proof} We write $\mathrm{Ker}(\delta_X)$ or $\mathrm{Coker}(\delta_X)$ for the kernel or the cokernel of the map \[H^1_{\varphi,\gamma_K}(\cR_{X,K}(\delta_X))\rightarrow H^1_{\varphi,\gamma_K}(t^{-\mathbf{k}_J}\cR_{X,K}(\delta_X))\] for simplicity.\par For any $x\in X$, $\dim_{k(x)}H^1_{\varphi,\gamma_K}(\cR_{k(x),K}(\delta_x))=\dim_{k(x)}H^1_{\varphi,\gamma_K}(t^{-\mathbf{k}_J}\cR_{k(x),K}(\delta_x))=|\Sigma|$ by Lemma \ref{lem:dimensionH1} and $\dim_{k(x)}\mathrm{Ker}(\delta_x)=\dim_{k(x)}\mathrm{Coker}(\delta_x)=|J|$ by Corollary \ref{cor:dimensionker} and our assumptions on $\delta_x$. The fact that $H^1_{\varphi,\gamma_K}(\cR_{X,K}(\delta_X))$ and $H^1_{\varphi,\gamma_K}(t^{-\mathbf{k}_J}\cR_{X,K}(\delta_X))$ are locally free and commute with base change of the form $\Sp(k(x))\rightarrow X$ for $x\in X$ follows from \cite[Prop. 2.3]{hellmann2016density}. Thus for any $x\in X$, \begin{align*} &\mathrm{Coker}(\delta_{X})\otimes_{\cO_X}k(x)\\ \simeq&\mathrm{Coker}(H^1_{\varphi,\gamma_K}(\cR_{X,K}(\delta_X))\otimes_{\cO_X}k(x)\rightarrow H^1_{\varphi,\gamma_K}(t^{-\mathbf{k}_J}\cR_{X,K}(\delta_X))\otimes_{\cO_X}k(x))\\ \simeq&\mathrm{Coker}(H^1_{\varphi,\gamma_K}(\cR_{k(x),K}(\delta_x))\rightarrow H^1_{\varphi,\gamma_K}(t^{-\mathbf{k}_J}\cR_{k(x),K}(\delta_x)))\\ \simeq &\mathrm{Coker}(\delta_x).
\end{align*} Thus $\mathrm{Coker}(\delta_{X})$ has constant rank, hence is projective by \cite[Lem. 2.1.8 (1)]{kedlaya2014cohomology}, and commutes with base change of the form $\Sp(k(x))\rightarrow X$ for $x\in X$.\par Let $\mathrm{Im}(\delta_X)$ be the image of the map $H^1_{\varphi,\gamma_K}(\cR_{X,K}(\delta_X))\rightarrow H^1_{\varphi,\gamma_K}(t^{-\mathbf{k}_J}\cR_{X,K}(\delta_X))$. Then we have $\dim_{k(x)}\mathrm{Im}(\delta_x)=|\Sigma|-|J|$ for any $x\in X$. By the exact sequence \[0\rightarrow \mathrm{Im}(\delta_X)\rightarrow H^1_{\varphi,\gamma_K}(t^{-\mathbf{k}_J}\cR_{X,K}(\delta_X))\rightarrow \mathrm{Coker}(\delta_X)\rightarrow 0\] and $\Tor^1_{\cO_X}(k(x),\mathrm{Coker}(\delta_X))=0$, we get \[0\rightarrow \mathrm{Im}(\delta_X)\otimes_{\cO_X}k(x)\rightarrow H^1_{\varphi,\gamma_K}(t^{-\mathbf{k}_J}\cR_{k(x),K}(\delta_x))\rightarrow \mathrm{Coker}(\delta_x)\rightarrow 0\] for any $x\in X$. Hence $\mathrm{Im}(\delta_X)\otimes_{\cO_X}k(x)\simeq \mathrm{Im}(\delta_x)$ for any $x\in X$ and $\mathrm{Im}(\delta_X)$ is finite projective of rank $|\Sigma|-|J|$. Repeating the argument using the exact sequence \[0\rightarrow \mathrm{Ker}(\delta_X)\rightarrow H^1_{\varphi,\gamma_K}(\cR_{X,K}(\delta_X))\rightarrow \mathrm{Im}(\delta_X)\rightarrow 0\] and the fact that $\Tor^1_{\cO_X}(k(x),\mathrm{Im}(\delta_X))=0$, we see that $\mathrm{Ker}(\delta_X)\otimes_{\cO_X}k(x)\simeq\mathrm{Ker}(\delta_x)$ and $\mathrm{Ker}(\delta_X)$ is finite projective of rank $|J|$.\par The statement for general base changes, which we will not essentially need, follows from \cite[Lem. 4.1.5, Thm. 4.4.3 (2)]{kedlaya2014cohomology} and the local freeness of those sheaves over the base $X$.
\end{proof} The image of $\mathcal{S}_{n,(i_0,\mathbf{k}_J)}^{\square}(\overline{r})$ in $U_{\mathrm{tri}}(\overline{r})$ consists of the points $x=(r,\underline{\delta})$ such that $\wt_{\tau}(\delta_{i_0}\delta_{i_0+1}^{-1})=k_{\tau}$ for $\tau\in J$, $\delta_{i_0}\delta_{i_0+1}^{-1}\in\cT_0$ and the extension (this condition is independent of the trivializations of $\cR_{k(x),K}(\delta_{i})$) \[0\rightarrow\cR_{k(x),K}(\delta_{i_0})\rightarrow\Fil_{i_0+1}D_{\mathrm{rig}}(r)/\Fil_{i_0-1}D_{\mathrm{rig}}(r)\rightarrow\cR_{k(x),K}(\delta_{i_0+1})\rightarrow 0\] corresponds to an element in $H^1_{\varphi,\gamma_K}(\cR_{k(x),K}(\delta_{i_0}\delta_{i_0+1}^{-1}))$ which lies in the kernel of \[H^1_{\varphi,\gamma_K}(\cR_{k(x),K}(\delta_{i_0}\delta_{i_0+1}^{-1}))\rightarrow H^1_{\varphi,\gamma_K}(t^{-\mathbf{k}_J}\cR_{k(x),K}(\delta_{i_0}\delta_{i_0+1}^{-1})).\] The following lemma will be important. \begin{lemma}\label{Lem:smooth} The morphism $\kappa:\mathcal{S}^{\square}_{n,(i_0,\mathbf{k}_J)}(\overline{r})\rightarrow \cT_{(i_0,\mathbf{k}_J)}^n$ is smooth. \end{lemma} \begin{proof} The diagram \begin{center} \begin{tikzpicture}[scale=1.3] \node (A) at (0,1) {$\mathcal{S}^{\square}_{n,(i_0,\mathbf{k}_J)}(\overline{r})$}; \node (B) at (2,1) {$\mathcal{S}_{n,(i_0,\mathbf{k}_J)}$}; \node (C) at (0,0) {$\mathcal{S}^{\square}_{n}(\overline{r})$}; \node (D) at (2,0) {$\mathcal{S}_n$}; \path[->,font=\scriptsize,>=angle 90] (A) edge node[above]{} (B) (B) edge node[above]{} (D) (A) edge node[above]{} (C) (C) edge node[above]{} (D) ; \end{tikzpicture} \end{center} is Cartesian. Hence the map $\mathcal{S}^{\square}_{n,(i_0,\mathbf{k}_J)}(\overline{r})\rightarrow \mathcal{S}_{n,(i_0,\mathbf{k}_J)}$ is smooth. The map $\mathcal{S}^{\square}_{n,(i_0,\mathbf{k}_J)}(\overline{r})\rightarrow \cT_{(i_0,\mathbf{k}_J)}^n$ factors through $\mathcal{S}_{n,(i_0,\mathbf{k}_J)}\rightarrow \cT_{(i_0,\mathbf{k}_J)}^n$.
Therefore, we only need to prove that $\mathcal{S}_{n,(i_0,\mathbf{k}_J)}\rightarrow \cT_{(i_0,\mathbf{k}_J)}^n$ is smooth. In \S\ref{subsec:trianguline}, we have maps $\mathcal{S}_i\rightarrow \mathcal{S}_{i-1}\times_L \mathcal{T}_L$. We can define $\mathcal{S}_{i_0+1,(i_0,\mathbf{k}_J)}$ by replacing $n$ by $i_0+1$. We have $\mathcal{T}_{(i_0,\mathbf{k}_J)}^n=\mathcal{T}^{n}_{\mathrm{reg}}\times_{\mathcal{T}^{i_0+1}_L}\mathcal{T}_{(i_0,\mathbf{k}_J)}^{i_0+1}$. The section $s_{\mathcal{S}_n\times_{\cT_{L}^{n}}\cT_{(i_0,\mathbf{k}_J)}^{n}}$ is the pullback of the section $s_{\mathcal{S}_{i_0+1}\times_{\cT_{L}^{i_0+1}}\cT_{(i_0,\mathbf{k}_J)}^{i_0+1}}$ via $\mathcal{S}_n\times_{\cT_{L}^{n}}\cT_{(i_0,\mathbf{k}_J)}^{n}\rightarrow \mathcal{S}_{i_0+1}\times_{\cT_{L}^{i_0+1}}\cT_{(i_0,\mathbf{k}_J)}^{i_0+1}$ since the definition of $s_X$ only involves $\Fil_{i_0+1}D_X$ and $\delta_{i_0},\delta_{i_0+1}$. Thus the diagram \begin{center} \begin{tikzpicture}[scale=1.3] \node (A) at (0,1) {$\mathcal{S}_{n,(i_0,\mathbf{k}_J)}$}; \node (B) at (2,1) {$\mathcal{S}_{i_0+1,(i_0,\mathbf{k}_J)}$}; \node (C) at (0,0) {$\mathcal{S}_n$}; \node (D) at (2,0) {$\mathcal{S}_{i_0+1}$}; \path[->,font=\scriptsize,>=angle 90] (A) edge node[above]{} (B) (B) edge node[above]{} (D) (A) edge node[above]{} (C) (C) edge node[above]{} (D) ; \end{tikzpicture} \end{center} is Cartesian. As each $\mathcal{S}_{i}\rightarrow \mathcal{S}_{i-1}\times_L\mathcal{T}_L$ is smooth (being a geometric vector bundle over a Zariski open subset of the image), we see by base change that so is $\mathcal{S}_{i,(i_0,\mathbf{k}_J)}\rightarrow \mathcal{S}_{i-1,(i_0,\mathbf{k}_J)}\times_L\mathcal{T}_L\rightarrow \mathcal{T}_{(i_0,\mathbf{k}_J)}^i\subset \mathcal{T}_{(i_0,\mathbf{k}_J)}^{i-1}\times_L\mathcal{T}_L$ for $i\geq i_0+2$, provided the result holds for $i_0+1$. Thus, we reduce to the case when $n=i_0+1$.
\par We consider the map \[\mathcal{S}_{i_0+1,(i_0,\mathbf{k}_J)}\rightarrow \mathcal{S}_{i_0+1}\times_{\cT_{L}^{i_0+1}}\cT_{(i_0,\mathbf{k}_J)}^{i_0+1}\rightarrow (\mathcal{S}_{i_0}\times_L\mathcal{T}_L)\times_{\cT_{L}^{i_0+1}}\cT_{(i_0,\mathbf{k}_J)}^{i_0+1}.\] Since the map $\mathcal{S}_{i_0}\rightarrow \mathcal{T}_L^{i_0}$ is smooth, so is $(\mathcal{S}_{i_0}\times_L\mathcal{T}_L)\times_{\cT_{L}^{i_0+1}}\cT_{(i_0,\mathbf{k}_J)}^{i_0+1}\rightarrow \cT_{(i_0,\mathbf{k}_J)}^{i_0+1}$. Write $V$ for $(\mathcal{S}_{i_0}\times_L\mathcal{T}_L)\times_{\cT_{L}^{i_0+1}}\cT_{(i_0,\mathbf{k}_J)}^{i_0+1}$. We only need to prove that $\mathcal{S}_{i_0+1,(i_0,\mathbf{k}_J)}$ is a geometric vector bundle over $V$, which will imply everything we need.\par Recall that $\mathcal{S}_{i_0+1}\times_{\cT_{L}^{i_0+1}}\cT_{(i_0,\mathbf{k}_J)}^{i_0+1}\simeq \mathrm{Spec}^{\mathrm{an}}(\mathrm{Sym}^{\bullet}(H^1_{\varphi,\gamma_K}(\Fil_{i_0}D_V(\delta_{i_0+1}^{-1}))^{\vee}))$ where $\Fil_{i_0}D_V$ is the universal one pulled back from $\mathcal{S}_{i_0}$ and $\delta_{i_0+1}$ is the character pulled back from $\mathcal{T}_{(i_0,\mathbf{k}_J)}^{i_0+1}$. Consider the kernel of the following composite of morphisms of coherent sheaves on $V$ \begin{equation}\label{equa:coherentsheaves} H^1_{\varphi,\gamma_K}(\Fil_{i_0}D_V(\delta_{i_0+1}^{-1}))\rightarrow H^1_{\varphi,\gamma_K}(\cR_{V,K}(\delta_{i_0}\delta_{i_0+1}^{-1}))\rightarrow H^1_{\varphi,\gamma_K}(t^{-\mathbf{k}_J}\cR_{V,K}(\delta_{i_0}\delta_{i_0+1}^{-1})) \end{equation} where $ \cR_{V,K}(\delta_{i_0})=\Fil_{i_0}D_V/\Fil_{i_0-1}D_V$. We denote the kernel (resp. cokernel) of the above composite morphism by $\mathrm{Ker}(\Fil_{i_0}D_V(\delta_{i_0+1}^{-1}))$ (resp. $\mathrm{Coker}(\Fil_{i_0}D_V(\delta_{i_0+1}^{-1}))$).
\par We claim that $\mathrm{Ker}(\Fil_{i_0}D_V(\delta_{i_0+1}^{-1}))$ is locally free of rank $(i_0-1)|\Sigma|+|J|$ and that \[\mathrm{Ker}(\Fil_{i_0}D_V(\delta_{i_0+1}^{-1}))\otimes_{\cO_V}k(x)\simeq \mathrm{Ker}(\Fil_{i_0}D_x(\delta_{x,i_0+1}^{-1}))\] for any $x\in V$. If $i_0=1$, this follows from Lemma \ref{lem:locallyfree}. Now we assume $i_0-1\geq 1$. The sheaves $H^1_{\varphi,\gamma_K}(\Fil_{i_0}D_V(\delta_{i_0+1}^{-1}))$, $H^1_{\varphi,\gamma_K}(\cR_{V,K}(\delta_{i_0}\delta_{i_0+1}^{-1}))$ and $H^1_{\varphi,\gamma_K}(t^{-\mathbf{k}_J}\cR_{V,K}(\delta_{i_0}\delta_{i_0+1}^{-1}))$ are locally free of ranks $i_0|\Sigma|$, $|\Sigma|$ and $|\Sigma|$ respectively and commute with base change. The morphism \[H^1_{\varphi,\gamma_K}(\Fil_{i_0}D_V(\delta_{i_0+1}^{-1}))\rightarrow H^1_{\varphi,\gamma_K}(\cR_{V,K}(\delta_{i_0}\delta_{i_0+1}^{-1}))\] is surjective since for any $x\in V$, $H^2_{\varphi,\gamma_K}(\Fil_{i_0-1}D_x(\delta_{x,i_0+1}^{-1}))=0$ (see \cite[Prop. 2.3]{hellmann2016density}). Hence \[\mathrm{Coker}(\Fil_{i_0}D_V(\delta_{i_0+1}^{-1}))=\mathrm{Coker}(H^1_{\varphi,\gamma_K}(\cR_{V,K}(\delta_{i_0}\delta_{i_0+1}^{-1}))\rightarrow H^1_{\varphi,\gamma_K}(t^{-\mathbf{k}_J}\cR_{V,K}(\delta_{i_0}\delta_{i_0+1}^{-1}))).\] By Lemma \ref{lem:locallyfree}, we get that $\mathrm{Coker}(\Fil_{i_0}D_V(\delta_{i_0+1}^{-1}))$ is locally free of rank $|J|$ and, for any point $x\in V$, $\mathrm{Coker}(\Fil_{i_0}D_V(\delta_{i_0+1}^{-1}))\otimes_{\cO_V}k(x)\simeq \mathrm{Coker}(\Fil_{i_0}D_x(\delta_{x,i_0+1}^{-1}))$. Repeating the last step of the proof of Lemma \ref{lem:locallyfree}, we get the desired claim. \par The injection $\mathrm{Ker}(\Fil_{i_0}D_V(\delta_{i_0+1}^{-1}))\hookrightarrow H^1_{\varphi,\gamma_K}(\Fil_{i_0}D_V(\delta_{i_0+1}^{-1}))$ of projective coherent sheaves induces a surjection \[H^1_{\varphi,\gamma_K}(\Fil_{i_0}D_V(\delta_{i_0+1}^{-1}))^{\vee}\twoheadrightarrow \mathrm{Ker}(\Fil_{i_0}D_V(\delta_{i_0+1}^{-1}))^{\vee}\] which by \cite[Thm.
2.2.5]{conrad2006relative} induces a closed embedding \[\mathrm{Spec}^{\mathrm{an}}(\mathrm{Sym}^{\bullet}(\mathrm{Ker}(\Fil_{i_0}D_V(\delta_{i_0+1}^{-1}))^{\vee}))\hookrightarrow \mathcal{S}_{i_0+1}\times_{\cT_{L}^{i_0+1}}\cT_{(i_0,\mathbf{k}_J)}^{i_0+1}.\] The left-hand side is a geometric vector bundle over $V$ by the previous discussion, and it remains to prove that $\mathrm{Spec}^{\mathrm{an}}(\mathrm{Sym}^{\bullet}(\mathrm{Ker}(\Fil_{i_0}D_V(\delta_{i_0+1}^{-1}))^{\vee}))$ coincides with $\mathcal{S}_{i_0+1,(i_0,\mathbf{k}_J)}$. The statement is local and elementary; we spell out a proof below.\par We may take an affinoid open $W=\Sp(A)\subset V$ and assume that the sheaves in (\ref{equa:coherentsheaves}) are free over $W$. Then since all the modules are projective, we may take a basis $e_1,\cdots,e_{i_0|\Sigma|}$ of $H^1_{\varphi,\gamma_K}(\Fil_{i_0}D_W(\delta_{i_0+1}^{-1}))$ and assume that the surjection \[H^1_{\varphi,\gamma_K}(\Fil_{i_0}D_W(\delta_{i_0+1}^{-1}))\twoheadrightarrow H^1_{\varphi,\gamma_K}(\cR_{W,K}(\delta_{i_0}\delta_{i_0+1}^{-1}))\] corresponds to the projection to the subspace $\langle e_1,\cdots,e_{|\Sigma|}\rangle$ (equivalently, we choose a splitting of the surjection). Let $e_1',\cdots,e_{|\Sigma|}'$ be a basis of $H^1_{\varphi,\gamma_K}(t^{-\mathbf{k}_J}\cR_{W,K}(\delta_{i_0}\delta_{i_0+1}^{-1}))$. As the cokernel and the kernel of the map $H^1_{\varphi,\gamma_K}(\cR_{W,K}(\delta_{i_0}\delta_{i_0+1}^{-1}))\rightarrow H^1_{\varphi,\gamma_K}(t^{-\mathbf{k}_J}\cR_{W,K}(\delta_{i_0}\delta_{i_0+1}^{-1}))$ are locally free, we may, after possibly shrinking $W$, assume that the morphism is given by sending $e_{|J|+1},\cdots,e_{|\Sigma|}$ to $e_{|J|+1}',\cdots, e_{|\Sigma|}'$ and sending $e_{1},\cdots, e_{|J|}$ to $0$. Let $e_1^{\vee},\cdots, e_{i_0|\Sigma|}^{\vee}$ be the dual basis. Then \[\mathrm{Spec}^{\mathrm{an}}(\mathrm{Sym}^{\bullet}(H^1_{\varphi,\gamma_K}(\Fil_{i_0}D_V(\delta_{i_0+1}^{-1}))^{\vee}))\text{ resp. 
}\mathrm{Spec}^{\mathrm{an}}(\mathrm{Sym}^{\bullet}(\mathrm{Ker}(\Fil_{i_0}D_V(\delta_{i_0+1}^{-1}))^{\vee}))\] are covered by \[W_N:=\Sp(A\langle p^Ne_1^{\vee},\cdots, p^Ne_{i_0|\Sigma|}^{\vee}\rangle)\text{ resp. }\Sp(A\langle p^Ne_1^{\vee},\cdots,p^Ne_{|J|}^{\vee},p^Ne_{|\Sigma|+1}^{\vee},\cdots, p^Ne_{i_0|\Sigma|}^{\vee}\rangle)\] where $N\in \N$. The tautological section $s_{W_N}$ of the sheaf \[H^1_{\varphi,\gamma_K}(\cR_{W_N,K}(\delta_{i_0}\delta_{i_0+1}^{-1}))=\cO_{W_N}e_1\oplus\cdots\oplus \cO_{W_N}e_{|\Sigma|}\] is given by $e_1^{\vee}e_1+\cdots+e_{|\Sigma|}^{\vee}e_{|\Sigma|}$. Thus the image of $s_{W_N}$ in \[H^1_{\varphi,\gamma_K}(t^{-\mathbf{k}_J}\cR_{W_N,K}(\delta_{i_0}\delta_{i_0+1}^{-1}))=\cO_{W_N}e_1'\oplus\cdots\oplus \cO_{W_N}e_{|\Sigma|}'\] is given by $e_{|J|+1}^{\vee}e_{|J|+1}'+\cdots+e_{|\Sigma|}^{\vee}e_{|\Sigma|}'$. Hence the vanishing locus is cut out by $e_{|J|+1}^{\vee}=\cdots=e_{|\Sigma|}^{\vee}=0$ and coincides with $\Sp(A\langle p^Ne_1^{\vee},\cdots,p^Ne_{|J|}^{\vee},p^Ne_{|\Sigma|+1}^{\vee},\cdots, p^Ne_{i_0|\Sigma|}^{\vee}\rangle)$. \end{proof} \subsection{Nearby critical crystalline points} Now we fix $x=(r,\underline{\delta})\in U_{\mathrm{tri}}(\overline{r})(L)\subset (\fX_{\overline{r}}\times \cT_L^n)(L)$. We assume that $r$ is crystalline and $\underline{\delta}=z^{\lambda}\mathrm{unr}(\underline{\varphi}):=(z^{\lambda_i}\mathrm{unr}(\varphi_i))_{i=1,\cdots,n}$ for some $\lambda=(\lambda_{\tau,i})_{\tau\in\Sigma,i=1,\cdots,n}\in(\Z^n)^{\Sigma}$ and $\underline{\varphi}\in (L^{\times})^n$. Assume furthermore that $\varphi_i\varphi_j^{-1}\notin \{1, q\}$ for all $i\neq j$ where $q$ is the cardinality of the residue field of $\cO_K$. This means that $r$ is generic in the sense of \cite[\S4.1]{wu2021local} or the beginning of \S\ref{sec:companionpointsdesciption}, and $\underline{\delta}\in \cT_0^n$. \par We continue to fix $i_0\in \{1,\cdots,n-1\}$. 
Let $J:=\{\tau\in \Sigma\mid \lambda_{\tau,i_0}\geq \lambda_{\tau,i_0+1}+1\}$ and let $\mathbf{k}_J=(k_{\tau})_{\tau\in J}:=(\lambda_{\tau,i_0}- \lambda_{\tau,i_0+1})_{\tau\in J}$. We have a filtration $\Fil_{\bullet}D_{\mathrm{rig}}(r)$ such that $\Fil_{i}D_{\mathrm{rig}}(r)/\Fil_{i-1}D_{\mathrm{rig}}(r)\simeq \cR_{L,K}(\delta_i)$. Since $r$ is de Rham, so are $D_{\mathrm{rig}}(r)$ and the subquotient $\Fil_{i_0+1}D_{\mathrm{rig}}(r)/\Fil_{i_0-1}D_{\mathrm{rig}}(r)$. The extension \[0\rightarrow \cR_{L,K}(\delta_{i_0})\rightarrow \Fil_{i_0+1}D_{\mathrm{rig}}(r)/\Fil_{i_0-1}D_{\mathrm{rig}}(r)\rightarrow \cR_{L,K}(\delta_{i_0+1})\rightarrow 0\] defines an element in $H^1_{\varphi,\gamma_K}(\cR_{L,K}(\delta_{i_0}\delta_{i_0+1}^{-1}))$ (up to $L^{\times}$) which lies in the kernel of \[H^{1}_{\varphi,\gamma_K}(\cR_{L,K}(\delta_{i_0}\delta_{i_0+1}^{-1}))\rightarrow H^{1}_{\varphi,\gamma_K}(t^{-\mathbf{k}_J}\cR_{L,K}(\delta_{i_0}\delta_{i_0+1}^{-1}))\] by Lemma \ref{lem:cohomologyderhamphigamma}, Proposition \ref{prop:kernelandderham}, \cite[Lem. 3.3.7, Lemma 3.4.2]{breuil2019local}, the isomorphism \[H^1(\cG_K, L\otimes_{\Q_p}\BdR)\simeq \mathrm{Ext}^1_{\mathrm{Rep}_{L\otimes_{\Q_p}\BdR}(\cG_K)}(L\otimes_{\Q_p}\BdR,L\otimes_{\Q_p}\BdR)\] and the following lemma. \begin{lemma} Let $W$ be an $L\otimes_{\Q_p}\BdR$-representation of $\cG_K$ which is an extension $0\rightarrow L\otimes_{\Q_p}\BdR\rightarrow W\rightarrow L\otimes_{\Q_p}\BdR\rightarrow 0$ as representations of $\cG_K$. Then $W$ is trivial (i.e. $W \simeq (L\otimes_{\Q_p}\BdR)^2$) if and only if the extension splits. \end{lemma} \begin{proof} If the extension splits, then $W$ is trivial. 
Conversely, if $W$ is trivial, then $\dim_LW^{\cG_K}=2[K:\Q_p]$ and we have an exact sequence of $L\otimes_{\Q_p}K$-modules $0\rightarrow L\otimes_{\Q_p}K\rightarrow W^{\cG_K}\rightarrow L\otimes_{\Q_p}K\rightarrow 0.$ The extension splits and we may choose a section $L\otimes_{\Q_p}K\rightarrow W^{\cG_K}$ which induces a section $L\otimes_{\Q_p}\BdR\rightarrow W$ of $L\otimes_{\Q_p}\BdR$-representations. \end{proof} Thus $x$ lies in the image of $\mathcal{S}_{n,(i_0,\mathbf{k}_J)}^{\square}(\overline{r})$. Recall the following diagram. \begin{center} \begin{tikzpicture}[scale=1.3] \node (A) at (2,1) {$\mathcal{S}^{\square}_{n,(i_0,\mathbf{k}_J)}(\overline{r})\subset \mathcal{S}^{\square}_{n}(\overline{r})$}; \node (C) at (0,0) {$x\in U_{\mathrm{tri}}(\overline{r})$}; \node (D) at (4,0) {$\cT_{(i_0,\mathbf{k}_J)}^n\subset \cT_L^n.$}; \path[->,font=\scriptsize,>=angle 90] (A) edge node[above]{$\pi_{\overline{r}}$} (C) (A) edge node[above]{$\kappa$} (D) (C) edge node[above]{$\omega'$} (D) ; \end{tikzpicture} \end{center} \begin{definition} Let $A$ and $B$ be two subsets of a rigid space $X$ over $L$. Then we say that $A$ \emph{quasi-accumulates} at $B$ if for every point $b\in B$ and every affinoid open neighbourhood $Y$ of $b$, $A\cap Y\neq \emptyset$ (compare with \cite[Def. 2.2]{breuil2017interpretation}). \end{definition} \begin{lemma}\label{lem:quasi-accumulateszariskiclosure} If $A$ and $B$ are two subsets of a rigid space $X$, then $A$ quasi-accumulates at $B$ if and only if for any $b\in B$ and any affinoid open neighbourhood $Y$ of $b$, $b$ lies in the Zariski closure of $Y\cap A$ in $Y$. In particular, if $A$ quasi-accumulates at $B$, then $B$ is contained in the Zariski closure of $A$ in $X$. \end{lemma} \begin{proof} We prove by contradiction. Assume that $A$ quasi-accumulates at $B$ and there exists an affinoid neighbourhood $Y$ of $b\in B$ such that $b$ is not in the Zariski closure $\overline{Y\cap A}$ in $Y$. 
Since Zariski open subsets in an affinoid are admissible open (\cite[Cor. 5.1.9]{bosch2014lectures}), there exists an affinoid neighbourhood $Y'\subset Y\setminus \overline{Y\cap A}$ of $b$. Then $Y'\cap A=\emptyset$, which contradicts the assumption. \end{proof} \begin{lemma}\label{lem: quasi-accumulates} Let $Y\hookrightarrow X$ be a closed immersion of rigid analytic spaces over $L$. Let $Z$ be a subset of $Y$ and $y\in Y$ be a point. Then $Z$ quasi-accumulates at $y$ in $X$ if and only if $Z$ quasi-accumulates at $y$ in $Y$. \end{lemma} \begin{proof} The problem is local and we may assume $X=\Sp(A), Y=\Sp(B)$ and $B=A/I$ for an ideal $I$. If $Z$ quasi-accumulates at $y$ in $Y$, then for any affinoid neighbourhood $X'$ of $y$ in $X$, the intersection $X'\cap Y$ is an affinoid neighbourhood of $y$ in $Y$ and hence meets $Z$; thus $Z$ quasi-accumulates at $y$ in $X$. Conversely, assume that $Z$ quasi-accumulates at $y$ in $X$. We only need to prove that for any affinoid neighbourhood $Y'$ of $y$ in $Y$, there exists an affinoid neighbourhood $Y''\subset Y'$ such that $Y''$ has the form $X'\cap Y$ for some affinoid neighbourhood $X'$ of $y$ in $X$. As affinoid subdomains are open in the canonical topology (\cite[Prop. 3.3.19]{bosch2014lectures}) and Weierstrass domains form a basis of the canonical topology (\cite[Lem. 3.3.8]{bosch2014lectures}), we may assume that $Y''$ has the form $\{x\in Y\mid |f_i(x)|\leq 1\}$ for $f_1,\cdots,f_m\in A/I$. We may choose lifts $\widetilde{f}_1,\cdots,\widetilde{f}_m$ for $f_1,\cdots,f_m$ in $A$. Then $\{x\in X\mid |\widetilde{f}_i(x)|\leq 1\}\cap Y=\{x\in Y\mid |f_i(x)|\leq 1\}$. \end{proof} \begin{lemma}\label{lem:characters} Let $C$ be a positive integer. 
Then the set of crystalline characters $\underline{\delta}\in\cT_L^n$ such that, writing $\lambda=\wt(\underline{\delta})$, for all $\tau\in\Sigma$: $\lambda_{\tau,i}- \lambda_{\tau,i+1}>C$ if $i\neq i_0$; $\lambda_{\tau,i}-\lambda_{\tau,i_0}>C$ and $\lambda_{\tau,i}-\lambda_{\tau,i_0+1}>C$ if $i<i_0$; $\lambda_{\tau,i}-\lambda_{\tau,i_0}<-C$ and $\lambda_{\tau,i}-\lambda_{\tau,i_0+1}<-C$ if $i>i_0+1$; $\lambda_{\tau,i_0}=\lambda_{\tau,i_0+1}$ if $\tau\in J$; and $\lambda_{\tau,i_0+1}-\lambda_{\tau,i_0}>C$ if $\tau\notin J$, quasi-accumulates at the trivial character in $\cT_L^n$. \end{lemma} \begin{proof} Let $q=p^{[K_0:\Q_p]}$, where $K_0$ is the maximal unramified subfield of $K$, and $d=|\Sigma|$. Take a uniformizer $\varpi_K$ of $K$. We first prove that for any character $\delta\in\cT_L$ such that $\delta(\varpi_K)=1$, the set $\{\delta^{p^N(q-1)}\mid N\in\N\}$ quasi-accumulates at the trivial character. For $m$ large enough, we have $\cO_K^{\times}=\Z_p^d\times \mu(\cO_K)\times \Z/(q-1)$ where $\Z_p^d/\exp(\varpi_K^m\cO_K)$ is finite and $\mu(\cO_K)$ denotes the group of $p$-power roots of unity in $\cO_K$ (see \cite[Prop. II.5.7]{neukirch2013algebraic}). We only need to consider $\Z_p^d=\Z_pe_1\oplus\cdots\oplus\Z_pe_d$ since the characters $\delta^{p^{N}(q-1)}$ are trivial on the torsion subgroups of $\cO_K^{\times}$ and on $\varpi_K^{\Z}$ when $N$ is large. The space $\widehat{\Z_p^d}=\mathbb{U}^d$ which parametrizes characters of $\Z_p^d$ is the open polydisk in $d$ variables $T_1,\cdots,T_d$ by sending a character $\delta$ to $(\delta(e_1)-1,\cdots,\delta(e_d)-1)$. Then $\delta^{p^N}$ is sent to $(\delta(e_1)^{p^N}-1,\cdots,\delta(e_d)^{p^N}-1)$. For any $x\in \mathbb{C}_p$ such that $|x-1|_p<1$, where $|-|_p$ denotes the $p$-adic absolute value, $\lim_{N\to\infty }|x^{p^N}-1|_p=\lim_{N\to\infty}|\sum_{1\leq i\leq p^N}\binom{p^N}{i}(x-1)^i|_p=0$. Hence for any $\epsilon>0$ and $N$ large enough, we have $(\delta(e_1)^{p^N}-1,\cdots,\delta(e_d)^{p^N}-1)\in\overline{\mathbb{B}}(0,\epsilon)^d:=\{x\in \mathbb{U}^d\mid |T_1(x)|_p\leq\epsilon,\cdots, |T_d(x)|_p\leq\epsilon\}$. 
Any affinoid neighbourhood of $0$ in $\overline{\mathbb{B}}(0,\frac{1}{p})^d$ contains a Weierstrass subdomain of the form $\{x\in \overline{\mathbb{B}}(0,\frac{1}{p})^d \mid |f_1(x)|_p\leq 1,\cdots |f_m(x)|_p\leq 1\}$ for some $f_1,\cdots,f_m\in L\langle p^{-1}T_1,\cdots,p^{-1}T_d\rangle$ by \cite[Lem. 3.3.8, Prop. 3.3.19]{bosch2014lectures}. Since $|f_j(0)|_p\leq 1$ for each $j$, there exists $\epsilon>0$ such that $|f_j(x_1,\cdots,x_d)|_p\leq 1$ for all $j=1,\cdots,m$ whenever $(x_1,\cdots,x_d)\in \mathbb{C}_p^d$ satisfies $|x_i|_p\leq\epsilon$ for all $i=1,\cdots,d$. Hence $\overline{\mathbb{B}}(0,\epsilon)^d \subset \{x\in \overline{\mathbb{B}}(0,\frac{1}{p})^d \mid |f_1(x)|_p\leq 1,\cdots |f_m(x)|_p\leq 1\}$. Therefore the set $\{\delta^{p^N(q-1)}\mid N\in\N\}$ quasi-accumulates at the trivial character.\par We write $x_{\tau}$ for the character that sends $x\in \cO_K^{\times}$ to $\tau(x)$ and $\varpi_{K}$ to $1$. For $i\neq i_0,i_0+1$, let $\delta_i=\prod_{\tau\in\Sigma}x_{\tau}^{-i}$. Let $\delta_{i_0}=\prod_{\tau\in J}x_{\tau}^{-i_0}\prod_{\tau\notin J}x_{\tau}^{-i_0-1}$ and $\delta_{i_0+1}=\prod_{\tau\in J}x_{\tau}^{-i_0}\prod_{\tau\notin J}x_{\tau}^{-i_0}$. Let $\underline{\delta}=(\delta_1,\cdots,\delta_n)$. Then $\underline{\delta}$, as well as its powers, is crystalline. The set $\{\underline{\delta}^{p^N(q-1)}\mid N\in\N \}$ quasi-accumulates at the trivial character by an argument similar to the one above, and $\underline{\delta}^{p^N(q-1)}$ satisfies the requirements on the weights when $N$ is large. \end{proof} Finally we can prove the main local results. Write $z^{\mathbf{k}_J}$ for the character $K^{\times}\rightarrow L^{\times}: z\mapsto \prod_{\tau\in J}\tau(z)^{k_{\tau}}$. \begin{proposition}\label{prop:key} Let $X$ be an affinoid open neighbourhood of $x$ in $U_{\mathrm{tri}}(\overline{r})$. 
\begin{enumerate} \item There exists a subset $Z\subset X$ that quasi-accumulates at $x$ and such that for every $z=(r_z,\underline{\delta}_z)\in Z$, \begin{enumerate} \item $z$ lies in the image of $\mathcal{S}^{\square}_{n,(i_0,\mathbf{k}_J)}(\overline{r})\subset \mathcal{S}^{\square}_{n}$, \item $\underline{\delta}_z\in\cT_0^n$ is crystalline, and \item if we write $\lambda_z$ for $\wt(\underline{\delta}_z)$, then for every $\tau\in\Sigma$, $\lambda_{z,\tau,i}> \lambda_{z,\tau,i+1}$ if $i\neq i_0$, $\lambda_{z,\tau,i}> \lambda_{z,\tau,i_0},\lambda_{z,\tau,i_0+1}$ if $i<i_0$, $\lambda_{z,\tau,i}<\lambda_{z,\tau,i_0},\lambda_{z,\tau,i_0+1}$ if $i>i_0+1$, and $\lambda_{z,\tau,i_0}<\lambda_{z,\tau,i_0+1}$ if $\tau\notin J$. \end{enumerate} \item Every point in $Z$ is generic crystalline and regular (i.e. $\lambda_{z,\tau,i}\neq \lambda_{z,\tau,j}$ for all $i\neq j,\tau\in\Sigma$). \item Let $\zeta$ be the automorphism of $\cT_L^n$ sending $\underline{\delta}'=(\delta_1',\cdots, \delta_n')$ to \[(\delta_1',\cdots,\delta_{i_0-1}', \delta_{i_0+1}'z^{\mathbf{k}_J},\delta_{i_0}'z^{-\mathbf{k}_J},\delta'_{i_0+2},\cdots,\delta'_{n}).\] Use also the notation $\zeta$ to denote the automorphism of $\fX_{\overline{r}}\times \cT_L^n:(r,\underline{\delta}')\mapsto (r,\zeta(\underline{\delta}'))$. Then $\zeta(Z)$ is a subset of $X_{\mathrm{tri}}(\overline{r})$ and quasi-accumulates at $\zeta(x)$ in $X_{\mathrm{tri}}(\overline{r})$. \end{enumerate} \end{proposition} \begin{proof} (1) Since the affinoid open neighbourhood $X\subset U_{\mathrm{tri}}(\overline{r})$ is arbitrary, by the definition of quasi-accumulation we only need to exhibit one point $z\in X$ satisfying conditions (a), (b) and (c). Let $\pi_{\overline{r}}^{-1}(X)$ be the preimage of $X$ in $\mathcal{S}^{\square}_{n,(i_0,\mathbf{k}_J)}(\overline{r})$. Then $\pi_{\overline{r}}^{-1}(X)$ is admissible open. 
We only need to prove that there exists a point $z'\in \pi_{\overline{r}}^{-1}(X)$ such that $\kappa(z')=\omega'(\pi_{\overline{r}}(z'))\in \cT_{(i_0,\mathbf{k}_J)}^n$ satisfies the conditions in (b) and (c). As $\kappa:\mathcal{S}^{\square}_{n,(i_0,\mathbf{k}_J)}(\overline{r})\rightarrow \cT_{(i_0,\mathbf{k}_J)}^n$ is smooth by Lemma \ref{Lem:smooth}, the image $\kappa(\pi_{\overline{r}}^{-1}(X))$ contains an admissible open subset of $\cT_{(i_0,\mathbf{k}_J)}^n$ that contains $\underline{\delta}$ by \cite[Cor. 9.4.2]{bosch2014lectures}. Then the result follows from the fact that the set of points $\underline{\delta}'\in \cT_{(i_0,\mathbf{k}_J)}^n$ that satisfy (b) and (c) quasi-accumulates at $\underline{\delta}$ by Lemma \ref{lem:characters} (since $\underline{\delta}\in \cT_0^n$ and $\cT_L^n\rightarrow\cT_L^n: \underline{\delta}'\mapsto \underline{\delta}\underline{\delta}'$ is an isomorphism).\par (2) Assume $z=(r_z,\underline{\delta}_z)\in Z$ as in (1). By (c), the $\tau$-weights of $\underline{\delta}_z$ are pairwise distinct; thus the Sen weights of $r_z$ are regular. 
Since $(r_z,\underline{\delta}_z)$ lies in the image of $\mathcal{S}^{\square}_{n,(i_0,\mathbf{k}_J)}(\overline{r})$, the extension \[0\rightarrow \cR_{k(z),K}(\delta_{z,i_0})\rightarrow \Fil_{i_0+1}D_{\mathrm{rig}}(r_z)/\Fil_{i_0-1}D_{\mathrm{rig}}(r_z)\rightarrow \cR_{k(z),K}(\delta_{z,i_0+1})\rightarrow 0\] corresponds to an element (up to $L^{\times}$) in the kernel of \[H^{1}_{\varphi,\gamma_K}(\cR_{k(z),K}(\delta_{z,i_0}\delta_{z,i_0+1}^{-1}))\rightarrow H^{1}_{\varphi,\gamma_K}(t^{-\mathbf{k}_J}\cR_{k(z),K}(\delta_{z,i_0}\delta_{z,i_0+1}^{-1}))\] and in particular, in the kernel of \[H^{1}_{\varphi,\gamma_K}(\cR_{k(z),K}(\delta_{z,i_0}\delta_{z,i_0+1}^{-1}))\rightarrow H^{1}_{\varphi,\gamma_K}(\cR_{k(z),K}(\delta_{z,i_0}\delta_{z,i_0+1}^{-1})[\frac{1}{t}]).\] Since $\delta_{z,i_0},\delta_{z,i_0+1}$ are both locally algebraic, we get that $(\Fil_{i_0+1}D_{\mathrm{rig}}(r_z)/\Fil_{i_0-1}D_{\mathrm{rig}}(r_z))[\frac{1}{t}]$ is a direct sum of de Rham $(\varphi,\Gamma_K)$-modules over $\cR_{k(z),K}[\frac{1}{t}]$ by \cite[Lem. 3.3.7]{breuil2019local}. Hence the $(\varphi,\Gamma_K)$-module $\Fil_{i_0+1}D_{\mathrm{rig}}(r_z)/\Fil_{i_0-1}D_{\mathrm{rig}}(r_z)$ over $\cR_{k(z),K}$ is de Rham. By Proposition \ref{prop:de Rham} and the condition on weights in (c), $r_z$ is de Rham. By (b) and Lemma \ref{lem:crystalline}, $r_z$ is generic crystalline.\par (3) Let $z=(\rho_z,\underline{\delta}_z)\in Z\subset U_{\mathrm{tri}}(\overline{r})$. Then $\underline{\delta}_z=z^{\lambda_z}\mathrm{unr}(\underline{\varphi}_z)$ for a refinement $\underline{\varphi}_z=(\varphi_{z,1},\cdots,\varphi_{z,n})$, where $\lambda_z$ is as in (c) (we abuse notation, using $z$ both for the point and for the character variable). Let $\underline{\varphi}_z'$ be the refinement such that $\varphi_{z,i}'=\varphi_{z,i}$ if $i\neq i_0,i_0+1$ and $\varphi_{z,i_0}'=\varphi_{z,i_0+1},\varphi_{z,i_0+1}'=\varphi_{z,i_0}$. 
Let $\lambda^{\mathrm{dom}}_z$ be the weight such that $\lambda_{z,\tau,i}^{\mathrm{dom}}=\lambda_{z,\tau,i}$ if $i\neq i_0,i_0+1$ or if $\tau\in J$ and let $\lambda_{z,\tau,i_0}^{\mathrm{dom}}=\lambda_{z,\tau,i_0+1}, \lambda_{z,\tau,i_0+1}^{\mathrm{dom}}=\lambda_{z,\tau,i_0}$ if $\tau\notin J$. Then $\lambda^{\mathrm{dom}}_z$ is dominant and differs from $\lambda_z$ by permutations. It is easy to verify that $z^{\lambda_{z}^{\mathrm{dom}}}\mathrm{unr}(\underline{\varphi}'_z)=\zeta(z^{\lambda_z}\mathrm{unr}(\underline{\varphi}_z))$. By \cite[Thm. 4.2.3]{breuil2019local}, all the companion points of $z$ exist on $X_{\mathrm{tri}}(\overline{r})$. In particular the dominant point $(r_z,z^{\lambda_{z}^{\mathrm{dom}}}\mathrm{unr}(\underline{\varphi}'_z))$ corresponding to the refinement $\underline{\varphi}'_z$ is on $X_{\mathrm{tri}}(\overline{r})$. Letting $z$ vary, we see that $\zeta(Z)\subset X_{\mathrm{tri}}(\overline{r})$. Since $Z$ quasi-accumulates at $x$ in $U_{\mathrm{tri}}(\overline{r})\subset X_{\mathrm{tri}}(\overline{r})$, $Z$ quasi-accumulates at $x$ in $\fX_{\overline{r}}\times \cT_L^n$ by Lemma \ref{lem: quasi-accumulates}. Since $\zeta$ is an automorphism, $\zeta(Z)$ quasi-accumulates at $\zeta(x)$ in $\fX_{\overline{r}}\times \cT_L^n$. As $\zeta(Z)\subset X_{\mathrm{tri}}(\overline{r})$, we see that $\zeta(Z)$ quasi-accumulates at $\zeta(x)\in X_{\mathrm{tri}}(\overline{r})$ by Lemma \ref{lem:quasi-accumulateszariskiclosure}. \end{proof} \section{Companion points on the eigenvariety}\label{sec:global} We now prove the existence of all companion points for generic crystalline points on the eigenvariety. We recall the definition of the eigenvariety for definite unitary groups in \cite[\S5.1]{breuil2019local} or \cite[\S3.1]{breuil2017smoothness}. \subsection{The eigenvariety} Let $F$ be a totally imaginary quadratic extension of a totally real field $F^+$. Let $S_p$ be the set of places of $F^+$ that divide $p$. 
We assume that each $v\in S_p$ splits in $F$ and for every $v\in S_p$, we choose a place $\widetilde{v}$ of $F$ above $v$. Let $G$ be a definite unitary group of rank $n\geq 2$ over $F^+$ that is split over $F$ so that $G_p=\prod_{v\in S_p}G_v:=G(F^+\otimes_{\Q}\Q_p)\simeq \prod_{v\in S_p}\GL_n(F_{\widetilde{v}})$ (we fix an isomorphism $G\times_{F^+}F\simeq\GL_{n/F}$). Let $B_p=\prod_{v\in S_p}B_v$ be the subgroup of upper triangular matrices in $G_p$ and let $T_p=\prod_{v\in S_p}T_v\subset B_p$ be the diagonal torus. Let $U^p=\prod_{v\nmid p} U_v$ be a sufficiently small (see \cite[(3.9)]{breuil2017smoothness}) open compact subgroup of $G(\mathbf{A}_{F^+}^{p\infty})$. Write $\widehat{S}(U^p,L):=\{f:G(F^+)\setminus G(\mathbf{A}^{\infty}_{F^+})/U^p\rightarrow L,\text{ continuous}\}$, where $L/\Q_p$ is a large enough finite extension with residue field $k_L$. The group $G_p$ acts on $\widehat{S}(U^p,L)$ by right translations. Let $S\supset S_p$ be a finite set of places of $F^+$ that split in $F$ and that contains all split places $v$ such that $U_v$ is not maximal. The space $\widehat{S}(U^p,L)$ is also endowed with the usual action of the Hecke operators away from $S$, and one can talk about the $p$-adic representations of $\cG_F:=\Gal(\overline{F}/F)$ associated with Hecke eigenvalues that appear in $\widehat{S}(U^p,L)$. We fix a modular absolutely irreducible $\overline{\rho}:\cG_F\rightarrow \GL_n(k_L)$ and write $\widehat{S}(U^p,L)_{\overline{\rho}}\neq 0$ for the localization of $\widehat{S}(U^p,L)$ at the non-Eisenstein maximal ideal of the Hecke algebra over $\cO_L$ associated with $\overline{\rho}$ (see \cite[\S2.4]{breuil2017interpretation} for details). We assume the following ``standard Taylor-Wiles hypothesis''. 
\begin{assumption}\label{ass:taylorwiles} \begin{enumerate} \item $p>2$; \item $F$ is an unramified extension of $F^+$; \item $G$ is quasi-split at all finite places of $F^{+}$; \item $U_v$ is hyperspecial at all places $v$ of $F^+$ that are inert in $F$; \item $F$ contains no non-trivial $p$-th root of unity and the image of $\overline{\rho}\mid_{\Gal(\overline{F}/F(\sqrt[p]{1}))}$ is adequate, see \cite[Rem. 1.1]{breuil2019local}. \end{enumerate} \end{assumption} Let $R_{\overline{\rho},S}$ be the deformation ring of polarized deformations of $\overline{\rho}$ that are unramified outside $S$. This is a Noetherian complete local ring over $\cO_L$ with residue field $k_L$. We have an action of $R_{\overline{\rho},S}$ on $\widehat{S}(U^p,L)_{\overline{\rho}}$ which factors through the Hecke actions and commutes with that of $G_p$ (for details, see also \cite[\S2.4]{breuil2017interpretation}). Let $\mathrm{Spf}(R_{\overline{\rho},S})^{\mathrm{rig}}$ denote the rigid analytic generic fiber of the formal scheme $\mathrm{Spf}(R_{\overline{\rho},S})$ in the sense of Berthelot, cf. \cite[\S7]{de1995crystalline}. Let $\widehat{T}_{p}$ be the rigid space over $\Q_p$ parametrizing continuous characters of $T_{p}$, and write $\widehat{T}_{p,L}$ for its base change to $L$. Denote by $\widehat{S}(U^p,L)_{\overline{\rho}}^{\mathrm{an}}$ the subspace of $\widehat{S}(U^p,L)_{\overline{\rho}}$ consisting of $\Q_p$-locally analytic vectors under the action of $G_p$. Then $\widehat{S}(U^p,L)_{\overline{\rho}}^{\mathrm{an}}$ is a locally analytic representation of $G_p$ and if we apply Emerton's Jacquet module functor with respect to $B_p$, $J_{B_p}(\widehat{S}(U^p,L)_{\overline{\rho}}^{\mathrm{an}})$ becomes an essentially admissible locally analytic representation of $T_p$ (\cite[Def. 6.4.9]{emerton2017locally}). 
The dual $J_{B_p}(\widehat{S}(U^p,L)^{\mathrm{an}}_{\overline{\rho}})'$ defines a coherent sheaf on the quasi-Stein space $\mathrm{Spf}(R_{\overline{\rho},S})^{\mathrm{rig}}\times \widehat{T}_{p,L}$. We define the eigenvariety $Y(U^p,\overline{\rho})$ to be the scheme-theoretical support of the coherent sheaf $J_{B_p}(\widehat{S}(U^p,L)_{\overline{\rho}}^{\mathrm{an}})'$ in $\mathrm{Spf}(R_{\overline{\rho},S})^{\mathrm{rig}}\times \widehat{T}_{p,L}$. An $L$-point $(\rho,\underline{\delta})\in \mathrm{Spf}(R_{\overline{\rho},S})^{\mathrm{rig}}\times \widehat{T}_{p,L}$ is in $Y(U^p,\overline{\rho})$ if and only if \[\Hom_{T_p}(\underline{\delta}, J_{B_p}(\widehat{S}(U^p,L)_{\overline{\rho}}[\fm_{\rho}]^{\mathrm{an}}))\neq 0\] where $\fm_{\rho}$ is the maximal ideal of $R_{\overline{\rho},S}[\frac{1}{p}]$ corresponding to $\rho$ and $\widehat{S}(U^p,L)_{\overline{\rho}}[\fm_{\rho}]$ denotes the subspace of elements in $\widehat{S}(U^p,L)_{\overline{\rho}}$ annihilated by $\fm_{\rho}$. \subsection{The companion points}\label{sec:companionpointsdesciption} We now describe all companion points of a generic crystalline point. Suppose $(\rho,\underline{\delta})\in Y(U^p,\overline{\rho})(L)$. Let $\rho_v:=\rho\mid_{\cG_{F_{\widetilde{v}}}}$ for $v\in S_p$. Set $\Sigma_v:=\{\tau: F_{\widetilde{v}}\hookrightarrow L\}$ for $v\in S_p$ and $\Sigma_p:=\cup_{v\in S_p} \Sigma_v$. Assume that for each $v\in S_p$, $\rho_v$ is crystalline. Then we have $\varphi$-modules $D_{\mathrm{cris}}(\rho_v)$ over $L\otimes_{\Q_p}F_{\widetilde{v},0}$, where $F_{\widetilde{v},0}$ is the maximal unramified subfield of $F_{\widetilde{v}}$. Take $\tau_{v,0}\in\Sigma_v$. Then $\varphi^{[F_{\widetilde{v},0}:\Q_p]}$ acts linearly on $D_{\mathrm{cris}}(\rho_v)\otimes_{L\otimes_{\Q_p}F_{\widetilde{v},0}, 1\otimes \tau_{v,0}}L$. 
Let $\{\varphi_{v,1},\cdots,\varphi_{v,n}\}$ be the multiset of eigenvalues of $\varphi^{[F_{\widetilde{v},0}:\Q_p]}$, which is independent of the choice of $\tau_{v,0}$. We say that $\rho$ is \emph{generic crystalline} if $ \varphi_{v,i}\varphi_{v,j}^{-1}\notin \{1, p^{[F_{\widetilde{v},0}:\Q_p]}\}$ for any $i\neq j$ and $v\in S_p$. A \emph{refinement} $\cR=(\cR_v)_{v\in S_p}$ for the generic crystalline representation $\rho$ is a choice of an ordering $\cR_v:\underline{\varphi}_v=(\varphi_{v,1},\cdots,\varphi_{v,n})$ of the $n$ different eigenvalues for all $v\in S_p$. Thus, $\rho$ has $(n!)^{|S_p|}$ different refinements.\par Let $|\cdot|_{F_{\widetilde{v}}}$ be the norm of $F_{\widetilde{v}}$ such that $|p|_{F_{\widetilde{v}}}=p^{-[F_{\widetilde{v}}:\Q_p]}$. Denote by $\delta_{B_v}$ the smooth character $|\cdot|_{F_{\widetilde{v}}}^{n-1}\otimes\cdots\otimes|\cdot|_{F_{\widetilde{v}}}^{n-2i+1}\otimes\cdots \otimes |\cdot|^{1-n}_{F_{\widetilde{v}}}$ of $T_v\simeq (F_{\widetilde{v}}^{\times})^n$ and $\delta_{B_p}=\otimes_{v\in S_p}\delta_{B_v}$ the character of $T_p$. We define an automorphism $\iota=(\iota_{v})_{v\in S_p}$ of $\widehat{T}_{p,L}=\prod_{v\in S_p}\widehat{T}_{v,L}$ given by $\iota_v((\delta_{v,1},\cdots,\delta_{v,n}))=\delta_{B_v}(\delta_{v,1},\cdots,\delta_{v,i}\epsilon^{i-1}, \cdots,\delta_{v,n}\epsilon^{n-1})$ where $\epsilon$ denotes the cyclotomic character.\par Let $\mathbf{h}=(\mathbf{h}_{\tau})_{\tau\in \Sigma_p}=(h_{\tau,1},\cdots,h_{\tau,n})_{\tau\in \Sigma_p}$ where $h_{\tau,1}\leq \cdots\leq h_{\tau,n}$ are the $\tau$-Hodge-Tate weights of $\rho_v$ if $\tau\in \Sigma_v$. Let $\mathcal{S}_n$ be the symmetric group on $n$ letters, acting on $n$-tuples $(h_{\tau,1},\cdots,h_{\tau,n})$ in the standard way. 
For $w=(w_v)_{v\in S_p}=(w_{\tau})_{v\in S_p,\tau\in \Sigma_v}\in (\mathcal{S}_n)^{\Sigma_p}$, define a character $\underline{\delta}_{\cR,w}:=\left(\iota_v\left(z^{w_v(\mathbf{h}_v)}\mathrm{unr}(\underline{\varphi}_v)\right)\right)_{v\in S_p}$ of $T_p$. Let $W_{P_p}=(W_{P_{\tau}})_{v\in S_p,\tau\in\Sigma_v}$ be the subgroup of $(\mathcal{S}_n)^{\Sigma_p}$ consisting of permutations that fix $\mathbf{h}$. Here $P_{\tau}$ denotes the parabolic subgroup of block upper-triangular matrices in $\GL_n$ whose Weyl group (of its Levi subgroup) is identified with $W_{P_{\tau}}$. Set $D_{\mathrm{dR},\tau}(\rho_v):=D_{\mathrm{dR}}(\rho_v)\otimes_{L\otimes_{\Q_p} F_{\widetilde{v}},1\otimes\tau} L$. If we choose a basis $(e_1,\cdots, e_n)$ of $D_{\mathrm{dR},\tau}(\rho_v)$ of eigenvectors of $\varphi^{[F_{\widetilde{v},0}:\Q_p]}$ with eigenvalues $(\varphi_{v,1},\cdots,\varphi_{v,n})$, then the Hodge-Tate filtration on $D_{\mathrm{dR},\tau}(\rho_v)$ corresponds to a point on the flag variety $\GL_n/P_{\tau}$ which lies in some Bruhat cell $B_{\tau}w_{\cR_{\tau}}P_{\tau}/P_{\tau}$ for some $w_{\cR_{\tau}}\in \mathcal{S}_n/W_{P_{\tau}}$, and $w_{\cR_{\tau}}$ is independent of the scaling of the eigenvectors. Here $\cR$ denotes the refinement $\underline{\varphi}$. Let $w_{\cR}=(w_{\cR_{\tau}})_{v\in S_p,\tau\in \Sigma_v}\in (\mathcal{S}_n)^{\Sigma_p}/W_{P_p}$.\par Define a subset of points of $\widehat{T}_{p,L}$ \[W(\rho):=\{\underline{\delta}_{\cR,w} \mid w\in (\mathcal{S}_n)^{\Sigma_p}/W_{P_p}, w\geq w_{\cR}, \cR\text{ is a refinement of }\rho \} \] where $\geq$ denotes the usual Bruhat order on $\mathcal{S}_n$ (or its quotient). 
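To illustrate the definition in the simplest situation (a toy example not taken from the surrounding text: assume $n=2$, a single place $v\in S_p$ with a single embedding, and regular weights $h_{\tau,1}<h_{\tau,2}$, so that $W_{P_p}$ is trivial and $(\mathcal{S}_n)^{\Sigma_p}=\mathcal{S}_2=\{1,s\}$ with Bruhat order $1<s$): for each of the two refinements $\cR$ of $\rho$, \[\{\underline{\delta}_{\cR,w}\mid w\geq w_{\cR}\}=\begin{cases}\{\underline{\delta}_{\cR,1},\ \underline{\delta}_{\cR,s}\}&\text{if }w_{\cR}=1,\\ \{\underline{\delta}_{\cR,s}\}&\text{if }w_{\cR}=s,\end{cases}\] and $W(\rho)$ is the union of these two sets.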
Notice that there is a natural partition $W(\rho)=\coprod_{\cR}W_{\cR}(\rho)$ and $W(\rho)$ depends only on $\rho_v,v\in S_p$.\par By the control of the companion points on the trianguline variety in the generic crystalline cases (\cite[\S4.2]{breuil2019local} and \cite[\S4.1]{wu2021local}), we have an inclusion $\{\underline{\delta}'\mid (\rho,\underline{\delta}')\in Y(U^p,\overline{\rho})\}\subset W(\rho)$. Below is our main theorem. \begin{theorem}\label{theoremmaincrystalline} Let $(\rho,\underline{\delta})\in Y(U^p,\overline{\rho})(L)$ be a generic crystalline point as above and recall that we have assumed the Taylor-Wiles hypothesis (Assumption \ref{ass:taylorwiles}). Then \[W(\rho)\subset \{\underline{\delta}'\mid (\rho,\underline{\delta}')\in Y(U^p,\overline{\rho})\}.\] \end{theorem} \begin{proof} We need the patched eigenvariety in \cite[\S3.2]{breuil2017interpretation}. For $v\in S_p$, let $R_{\overline{\rho}_v}'/\cO_L$ be the maximal reduced $\Z_p$-flat quotient of the framed deformation ring of $\overline{\rho}_{\widetilde{v}}$. We can similarly define $R_{\overline{\rho}_v}'$ for $v\in S\setminus S_p$. Let $K_p=\prod_{v\in S_p}\GL_n(\cO_{F_{\widetilde{v}}})\subset \prod_{v\in S_p}\GL_n(F_{\widetilde{v}})\simeq G_p$. 
Recall that under the Taylor-Wiles assumption, there exist positive integers $g$ and $q$, a patching module $M_{\infty}$ in \cite{caraiani2016patching} over the ring $R_{\infty}=\widehat{\otimes}_{v\in S}R_{\overline{\rho}_v}'[[x_1,\cdots,x_g]]$, an $\cO_L$-morphism $S_{\infty}:=\cO_L[[y_1,\cdots,y_q]]\rightarrow R_{\infty}$ and a surjection $R_{\infty}/\mathfrak{a}\rightarrow R_{\overline{\rho},\mathcal{S}}$ of completed local rings over $\cO_L$ where $\mathfrak{a}=(y_1,\cdots,y_q)$, such that $M_{\infty}$ is a finite projective $S_{\infty}[[K_p]]$-module and $\Pi_{\infty}:=\Hom_{\cO_L}^{\mathrm{cont}}(M_{\infty},L)$ is an $R_{\infty}$-admissible Banach representation of $G_p$ with an isomorphism $\Pi_{\infty}[\mathfrak{a}]\simeq \widehat{S}(U^p,L)_{\overline{\rho}}$ that is compatible with the actions of $R_{\infty}/\mathfrak{a}$ and $R_{\overline{\rho},\mathcal{S}}$ (the action of $R_{\overline{\rho},S}$ factors through the quotient $ R_{\overline{\rho},\mathcal{S}}$). Write $\Pi_{\infty}^{\mathrm{an}}$ for the subspace of locally $R_{\infty}$-analytic vectors in $\Pi_{\infty}$ (\cite[D\'ef. 3.2]{breuil2017interpretation}). The patched eigenvariety $X_p(\overline{\rho})$ is the support of $J_{B_p}(\Pi_{\infty}^{\mathrm{an}})'$ inside $\Spf(R_{\infty})^{\mathrm{rig}}\times\widehat{T}_{p,L}\simeq \Spf(\widehat{\otimes}_{v\in S_p}R_{\overline{\rho}_v}')^{\mathrm{rig}}\times \Spf(\widehat{\otimes}_{v\in S\setminus S_p}R_{\overline{\rho}_v}')^{\mathrm{rig}}\times \Spf(\cO_L[[x_1,\cdots,x_g]])^{\mathrm{rig}}\times\widehat{T}_{p,L}=:\fX_{\overline{\rho}_p}\times \fX_{\overline{\rho}^p}\times \mathbb{U}^g\times\widehat{T}_{p,L}$. By \cite[Thm. 
3.21, \S4.1]{breuil2017interpretation}, we have closed embeddings \[Y(U^p,\overline{\rho})\hookrightarrow X_p(\overline{\rho})\hookrightarrow \iota\left(X_{\mathrm{tri}}(\overline{\rho}_p)\right)\times (\mathfrak{X}_{\overline{\rho}^p}\times\mathbb{U}^g) \subset \fX_{\overline{\rho}_p}\times\widehat{T}_{p,L}\times \fX_{\overline{\rho}^p}\times \mathbb{U}^g\] where $X_{\mathrm{tri}}(\overline{\rho}_p)=\prod_{v\in S_p}X_{\mathrm{tri}}(\overline{\rho}_v)$ and $\iota$ is extended to an automorphism of $\fX_{\overline{\rho}_p}\times \widehat{T}_{p,L}$ by base change. Moreover, $X_p(\overline{\rho})$ is equidimensional and is identified with a union of irreducible components of $\iota\left(X_{\mathrm{tri}}(\overline{\rho}_p)\right)\times (\mathfrak{X}_{\overline{\rho}^p}\times\mathbb{U}^g)$ under the above closed embedding. By the argument in the first steps of the proof of \cite[Thm. 5.3.3]{breuil2019local}, we are reduced to proving the lemma below. \begin{lemma} Assume that a point $\left((\rho_p=(\rho_v)_{v\in S_p},\iota(\underline{\delta})),y\right)\in\iota\left(X_{\mathrm{tri}}(\overline{\rho}_p)\right)\times (\mathfrak{X}_{\overline{\rho}^p}\times\mathbb{U}^g)$ is in $X_p(\overline{\rho})(L)$ where each $\rho_v$, $v\in S_p$, is generic crystalline. Then $\left((\rho_p,\iota(\underline{\delta}_{\cR,w})),y\right)\in X_p(\overline{\rho})(L)$ if and only if $w\geq w_{\cR}$ in $(\mathcal{S}_n)^{\Sigma_p}/W_{P_p}$ where $\cR$ denotes refinements of $\rho_p$. \end{lemma} Now we prove the lemma. By \cite[Thm. 4.10]{wu2021local}, we may assume $\left((\rho_p=(\rho_v)_{v\in S_p},\iota(\underline{\delta})),y\right)\in\iota\left(U_{\mathrm{tri}}(\overline{\rho}_p)\right)\times (\mathfrak{X}_{\overline{\rho}^p}\times\mathbb{U}^g)$. Suppose that $\underline{\delta}$ corresponds to a refinement $\underline{\varphi}=(\underline{\varphi}_v)_{v\in S_p}$. We need to prove that the companion points for other refinements exist on the eigenvariety. 
We only need to prove the existence of companion points for an arbitrary refinement $\underline{\varphi}'$ such that, for some $v_0$, $\underline{\varphi}_v'=\underline{\varphi}_v$ for all $v\neq v_0$ and $\underline{\varphi}_{v_0}'$ is the refinement obtained by permuting $\varphi_{v_0,i_0}$ and $\varphi_{v_0,i_0+1}$. By Proposition \ref{prop:key}, there exists a subset $Z\subset X_{\mathrm{tri}}(\overline{\rho}_{v_0})$ that quasi-accumulates at $(\rho_{v_0},\underline{\delta}_{v_0})$, consisting of generic regular crystalline points, such that the set $\zeta(Z)$ of their local companion points quasi-accumulates at $\zeta((\rho_{v_0},\underline{\delta}_{v_0}))$, which is a local companion point of $(\rho_{v_0},\underline{\delta}_{v_0})$ for the refinement $\underline{\varphi}_{v_0}'$. Since $U_{\mathrm{tri}}(\overline{\rho}_p)$ is smooth at $(\rho_p,\underline{\delta})$, we may assume that every point $(z, (\rho_{v},\underline{\delta}_v)_{v\neq v_0})$, $z\in Z$, is contained in the same irreducible component of $X_{\mathrm{tri}}(\overline{\rho}_p)$ as $((\rho_v,\underline{\delta}_v)_{v\in S_p})$. In particular, for any $z\in Z$, $\left(\iota(z,(\rho_v,\underline{\delta}_v)_{v\neq v_0}),y\right)\in X_p(\overline{\rho})$. By \cite[Thm. 5.5]{breuil2017smoothness}, the classicality (which follows from \cite[Prop. 4.9]{wu2021local}, but essentially \cite[Thm. 3.9]{breuil2017smoothness} is enough for us, and the classicality is only partial for $v_0$), and the discussions in the beginning of \cite[\S5.3]{breuil2019local}, the companion points $\left(\iota(\zeta(z),(\rho_v,\underline{\delta}_v)_{v\neq v_0}),y\right)$ are in $X_p(\overline{\rho})$ and quasi-accumulate at the point $\left(\iota(\zeta((\rho_{v_0},\underline{\delta}_{v_0})),(\rho_v,\underline{\delta}_v)_{v\neq v_0}),y\right)$ in $\fX_{\overline{\rho}_p}\times \widehat{T}_{p,L}\times (\mathfrak{X}_{\overline{\rho}^p}\times\mathbb{U}^g)$.
Hence $\left(\iota(\zeta((\rho_{v_0},\underline{\delta}_{v_0})),(\rho_v,\underline{\delta}_v)_{v\neq v_0}),y\right)\in X_p(\overline{\rho})$ by Lemma \ref{lem: quasi-accumulates} and Lemma \ref{lem:quasi-accumulateszariskiclosure}. \end{proof} \subsection{Locally analytic socle conjecture} Let $(\rho,\underline{\delta})\in Y(U^p,\overline{\rho})(L)$ be generic crystalline as before. We write $\lambda=(\lambda_{\tau})_{\tau\in\Sigma_v,v\in S_p}\in (\Z^n)^{\Sigma_p}$, where $\lambda_{\tau}=(\lambda_{\tau,1},\cdots,\lambda_{\tau,n}):= (h_{\tau,n},\cdots, h_{\tau,i}+n-i,\cdots,h_{\tau,1}+n-1)$. We identify the base change to $L$ of the $\Q_p$-Lie algebra of $G_p$ with $\fg:=\prod_{\tau\in \Sigma_p}\mathfrak{gl}_{n/L}$. Let $\overline{\fb}=\prod_{\tau\in \Sigma_p}\overline{\fb}_{\tau}$ be the Borel subalgebra of $\fg$ of lower triangular matrices and $\ft=\prod_{\tau\in \Sigma_p} \ft_{\tau}$ be the Cartan subalgebra of diagonal matrices. We view $\lambda$ as a weight of $\ft$ and extend it to $\overline{\fb}$. For a weight $\mu$ of $\ft$, let $\overline{L}(\mu)$ be the irreducible $\fg$-module with the highest weight $\mu$ in the BGG category attached to $\overline{\fb}$. For a refinement $\cR$ of $\rho$, we write $\underline{\delta}_{\cR,\mathrm{sm}}$ for the smooth part of $\underline{\delta}_{\cR,w}$, that is, $\underline{\delta}_{\cR,\mathrm{sm}}\underline{\delta}_{\cR,w}^{-1}$ is an algebraic character of $T_p$. Notice that $\underline{\delta}_{\cR,\mathrm{sm}}$ is independent of $w$. Let $\overline{B}_p$ be the opposite Borel subgroup of $B_p$ in $G_p$. Recall that by the theory of Orlik-Strauch \cite{orlik2015jordan}, we have topologically irreducible admissible locally analytic representations ${\cF_{\overline{B}_p}^{G_p}(\overline{L}(-ww_0\cdot \lambda), \underline{\delta}_{\cR,\mathrm{sm}}\delta_{B_p}^{-1})}$, see e.g. \cite[\S4.3]{wu2021local}. Here $w_0$ is the longest element in $\mathcal{S}_n^{\Sigma_p}$ and $ww_0\cdot\lambda$ denotes the usual dot action. By \cite[Prop.
4.9]{wu2021local}, we have the following corollary of Theorem \ref{theoremmaincrystalline} on the locally analytic socle conjecture. \begin{corollary}\label{cor:socle} Under the assumptions and notation of Theorem \ref{theoremmaincrystalline}, there is an injection \begin{equation}\label{equa:socle} \cF_{\overline{B}_p}^{G_p}(\overline{L}(-ww_0\cdot \lambda), \underline{\delta}_{\cR,\mathrm{sm}}\delta_{B_p}^{-1})\hookrightarrow \widehat{S}(U^p,L)_{\overline{\rho}}[\mathfrak{m}_{\rho}]^{\mathrm{an}} \end{equation} of locally analytic representations of $G_p$ for all refinements $\cR$ of $\rho$ and all $w\in \mathcal{S}_n^{\Sigma_p}/W_{P_p}$ with $w\geq w_{\cR}$. \end{corollary} \begin{remark} Assume that, in the situation of Theorem \ref{theoremmaincrystalline} and Corollary \ref{cor:socle}, the Hodge-Tate weights of $\rho_v$ satisfy $h_{\tau,i}\neq h_{\tau,j}$ for all $i\neq j$ and $\tau\in \Sigma_p$. Then there exists a finite length admissible locally analytic representation $\Pi(\rho_p)^{\mathrm{fs}}:=\widehat{\otimes}_{v\in S_p}\Pi(\rho_v)^{\mathrm{fs}}$ of $G_p$ in \cite{breuil2020towards} such that the $G_p$-socle of $\Pi(\rho_p)^{\mathrm{fs}}$ coincides with the finite direct sum of pairwise non-isomorphic irreducible admissible locally analytic representations of $G_p$ that are isomorphic to one of those on the left-hand side of (\ref{equa:socle}), and there exists an injection $\Pi(\rho_p)^{\mathrm{fs}}\hookrightarrow \widehat{S}(U^p,L)_{\overline{\rho}}[\mathfrak{m}_{\rho}]^{\mathrm{an}}$ (\cite[Thm. 1.1]{breuil2020towards}). The representation $\Pi(\rho_p)^{\mathrm{fs}}$ is called the ``finite slope part'' since it is constructed from principal series (and thus its Jordan-Hölder factors are of Orlik-Strauch type). Using Corollary \ref{cor:socle}, a similar result still holds without the regularity assumption on Hodge-Tate weights. One just needs to notice that \cite[Prop.
4.8]{breuil2020towards} is proved without any assumption on the regularity of weights, and we can define $\Pi(\rho_v)^{\mathrm{fs}}$ in non-regular cases in the same way as \cite[Def. 5.7]{breuil2020towards}. Then the proof of \cite[Thm. 5.12]{breuil2020towards} applies with minor modifications. \end{remark}
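For convenience, we recall the standard convention behind the dot action $ww_0\cdot\lambda$ used above (this is a standard Lie-theoretic fact, recalled here rather than taken from the cited references):

```latex
% The dot action of a Weyl group element w on a weight \lambda is the
% \rho-shifted action, where \rho denotes the half sum of positive roots:
\[ w\cdot\lambda \;=\; w(\lambda+\rho)-\rho. \]
% For instance, for \mathfrak{gl}_2 with \rho=(\tfrac{1}{2},-\tfrac{1}{2})
% and the simple reflection s, one gets
% s\cdot(\lambda_1,\lambda_2)=(\lambda_2-1,\lambda_1+1).
```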
\section{Text2Human} Our aim is to generate human images conditioned on texts describing the attributes of clothes (clothes shapes and clothes textures). Given a human pose $P \in \mathbb{R}^{H \times W}$, texts for clothes shapes $T_{shape}$, and texts for clothes textures $T_{texture}$, the output should be the corresponding human image $I \in \mathbb{R}^{H \times W \times 3}$. The whole pipeline of Text2Human is shown in Fig.~\ref{pipeline_illustration}. We decompose the human generation into two stages. Stage I synthesizes a human parsing mask with the given pose and texts for clothes shapes. We transform the text information to attribute embeddings and concatenate them with human pose features to predict the desired human parsing mask. With the human parsing mask obtained from Stage I as the input, the final image is synthesized according to the required clothing textures in Stage II. We set up a hierarchical texture-aware codebook to characterize various types of texture as illustrated in Fig.~\ref{hier_vqvae}, where the final image is synthesized using both coarse-level (top-level) and fine-level (bottom-level) codebooks. To sample the codebook indices at the coarse level, a sampler with mixture-of-experts is proposed, where features are routed to different expert heads to predict the desired indices. To speed up the sampling at the fine level, we propose a feed-forward codebook index prediction network, which further refines the quality of generated images. \subsection{Stage I: Pose to Parsing} Given a human pose $P$ and texts about clothes shapes, we hope to synthesize the human parsing map $S \in \mathbb{R}^{H \times W}$. First, texts are transformed to a set of clothes shape attributes $\{a_1, ..., a_i, ..., a_k\}$, where $a_i \in \{0, 1, ..., C_i\}$ and $C_i$ is the class number of attribute $a_i$. 
The attributes are then fed into the Attribute Embedding Module to obtain a shape attribute embedding $f_{shape} \in \mathbb{R}^C$: \begin{equation} f_{shape} = Fusion([E_1(a_1), E_2(a_2), ..., E_i(a_i), ..., E_k(a_k)]), \end{equation} where $E_i(\cdot)$ is the attribute embedder for $a_i$ and $Fusion(\cdot)$ fuses attribute embeddings from the $k$ attribute embedders. $[\cdot]$ denotes the concatenation operation. Together with $P$, $f_{shape}$ is then fed into the Pose-to-Parsing Module, which is composed of an encoder $Enc$ and a decoder $Dec$. The operation at layer $i$ of $Enc$ is defined as follows: \begin{equation} f_{p_i} = Enc_i([f_{p_{i-1}}, \mathcal{B}(f_{shape})]), \end{equation} where $\mathcal{B}(\cdot)$ is the spatial broadcast operation so that $f_{shape}$ is broadcast to have the same spatial size as $f_{p_{i-1}}$, and $f_{p_0} = P$. The operation of $Dec$ at layer $i$ can be expressed as $f_{p_i}^{\prime} = Dec_i([f_{p_i}, f_{p_{i-1}}^{\prime}])$. The final decoded feature $f_{p}^{\prime}$ is fed into fully convolutional layers to make the final parsing prediction. We use the cross-entropy loss to train the whole Pose-to-Parsing Module. \begin{figure} \begin{center} \includegraphics[width=0.96\linewidth]{figure/hierarchical_vqvae.pdf} \end{center} \caption{\textbf{Illustration of Hierarchical VQVAE and Texture-Aware Codebooks.} The images are reconstructed using two levels of features, \emph{i.e}., the top level for coarse-scale features and the bottom level for fine-scale features. Texture-Aware Codebooks are built for different types of clothing textures.} \label{hier_vqvae} \end{figure} \subsection{Stage II: Parsing to Human} \subsubsection{Preliminaries} \paragraph{VQVAE} The goal of Vector-Quantized Variational AutoEncoder (VQVAE) \cite{oord2017neural} is to learn a discrete codebook that stores discrete neural representations by learning to reconstruct images.
VQVAE consists of an encoder $E$, a decoder $G$ and a learnable codebook $\mathcal{Z} = \{ z_k | z_k \in \mathbb{R}^{c_z}\}_{k=1}^{K}$. We first extract the continuous neural representation $\hat{z}$ by feeding the image $I$ into the encoder, \emph{i.e}., $\hat{z} = E(I) \in \mathbb{R}^{h \times w \times c_z}$. Then the quantizer $Quant$ is adopted to discretize the continuous $\hat{z}$, and the operation is defined as follows: \begin{equation} z_q = Quant(\hat{z}) := \underset{z_k \in \mathcal{Z}}{\mathrm{argmin}} \left\| \hat{z}_{ij} - z_k \right\| \in \mathbb{R}^{h \times w \times c_z}. \label{eq:quant} \end{equation} Then the image is reconstructed from the quantized representation as $\hat{I} = G(z_q)$. The encoder, decoder and codebook are trained end-to-end through the following loss function: \begin{equation} \mathcal{L} = \left\| I - \hat{I} \right\| + \left\| sg(\hat{z}) - z_q \right\|_2^2 + \left\| sg(z_q) - \hat{z} \right\|_2^2, \label{eq:loss_vqvae} \end{equation} where $sg(\cdot)$ denotes the stop-gradient operation. \paragraph{Diffusion-based Transformer.} To sample images from learned codebooks, autoregressive models \cite{salimans2017pixelcnn++, chen2018pixelsnail} are employed to predict the orderings of codebook indices. Autoregressive models predict indices in a fixed unidirectional manner, and the prediction of the next index relies only on the already sampled top-left parts. In VQVAE, PixelCNN \cite{oord2016conditional} is adopted as the autoregressive model. In the recently proposed VQGAN \cite{esser2021taming}, the transformer \cite{vaswani2017attention} is adopted for its capability to capture long-term dependencies among codebook indices (in transformers, codebook indices are referred to as `tokens').
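As a minimal numerical sketch of the nearest-neighbor quantization of Eq.~(\ref{eq:quant}) and the two commitment terms of Eq.~(\ref{eq:loss_vqvae}) (the toy codebook, shapes, and random inputs here are hypothetical, not the paper's actual configuration):

```python
import numpy as np

def quantize(z_hat, codebook):
    """Nearest-neighbor quantization: each spatial vector in z_hat
    (h, w, c) is replaced by its closest code in the codebook (K, c)."""
    h, w, c = z_hat.shape
    flat = z_hat.reshape(-1, c)                       # (h*w, c)
    # squared L2 distances to every code, shape (h*w, K)
    d = ((flat[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d.argmin(axis=1)                            # codebook indices
    return codebook[idx].reshape(h, w, c), idx.reshape(h, w)

rng = np.random.default_rng(0)
codebook = rng.normal(size=(8, 4))                    # K=8 codes, c_z=4
z_hat = rng.normal(size=(2, 2, 4))                    # toy encoder output
z_q, idx = quantize(z_hat, codebook)

# The two codebook/commitment terms of the VQVAE loss; stop-gradient has
# no effect on the forward value, so both equal ||z_hat - z_q||^2 here.
commit = ((z_hat - z_q) ** 2).sum()
print(z_q.shape, idx.shape, commit)
```

In a real implementation the stop-gradient only changes backpropagation (the straight-through estimator), not the forward value computed above.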
Recently, some works \cite{bond2021unleashing, gu2021vector, esser2021imagebart,chang2022maskgit} proposed to use the diffusion model to replace the autoregressive model, motivated by two advantages: 1) Indices are predicted based on global and bidirectional context, resulting in more coherent sampled images; 2) Indices are predicted in parallel, leading to a much faster sampling speed. Specifically, in the diffusion-based transformer, starting from fully-masked indices $k_0$, the final indices $k_T$ are sampled in $T$ steps by transformers. The indices $k_t$ at step $t$ are sampled following the distribution: \begin{equation} k_t \sim q_{\theta}(k_t | k_{t-1}), \end{equation} where $\theta$ denotes the parameters of the transformers. At each time step, the indices are randomly replaced with newly sampled ones. \subsubsection{Hierarchical VQVAE with Texture-Aware Codebook} Considering the complicated nature of clothes textures, representing textures with single-scale features is not enough. For example, as shown in Fig.~\ref{fig:ablation}(a), the reconstruction of a plaid shirt with multi-scale features contains more details. Inspired by this, we propose the hierarchical VQVAE with multi-scale codebooks. Specifically, given an input image $I \in \mathbb{R}^{H \times W \times 3}$, we first train an encoder $E_{top}$ to downsample $I$ to obtain its coarse-level feature $\hat{feat}_{top}$: \begin{equation} \hat{feat}_{top} = E_{top}(I) \in \mathbb{R}^{H/16 \times W/16 \times c_z}. \end{equation} We build a top-level codebook $\mathcal{Z}_{top}$ for $\hat{feat}_{top}$ with codes $\in\mathbb{R}^{1 \times 1 \times c_z}$. The quantization of $\hat{feat}_{top}$ is the same as Eq.~(\ref{eq:quant}). Then the image is reconstructed using the quantized feature $feat_{top}$ through the decoder $D$: $\hat{I} = D(feat_{top})$. Here we view $D$ as two consecutive parts $D = D_{bot} \circ D_{top}$.
The spatial sizes of the inputs to $D_{bot}$ and $D_{top}$ are $H/8 \times W/8$ and $H/16 \times W/16$, respectively. Once the top-level codebook $\mathcal{Z}_{top}$ is trained, we move on to build the bottom-level codebook $\mathcal{Z}_{bot}$. The image features represented by the codes of $\mathcal{Z}_{top}$ already recover the coarse information. Therefore, $\mathcal{Z}_{bot}$ just needs to learn residual information relative to $\mathcal{Z}_{top}$. We introduce a residual encoder $E_{bot}$ to extract the fine-level feature $\hat{feat}_{bot}$, which is quantized into $feat_{bot}$ with $\mathcal{Z}_{bot}$. The image is then reconstructed as follows: \begin{equation} \hat{I} = D_{bot}(D_{top}(feat_{top}) + feat_{bot}). \end{equation} During the training of the bottom-level codebook and $E_{bot}$, $E_{top}$ and $D_{top}$ are fixed. The network is optimized by Eq.~(\ref{eq:loss_vqvae}) combined with the perceptual loss and discriminator loss. To make the codes in $\mathcal{Z}_{bot}$ contain richer texture information as well as keep the well-learned structure information in $\mathcal{Z}_{top}$, the code shape is set to $2 \times 2 \times c_z$ rather than the conventional $1 \times 1 \times c_z$. It is implemented by dividing $feat_{bot}$ into non-overlapping patches with a spatial size of $2 \times 2$. Once the features are divided into patches, the quantization process is the same as Eq.~(\ref{eq:quant}). Our hierarchical VQVAE shares some similarities with VQVAE2 \cite{razavi2019generating} in the hierarchical design, but differs in the following aspects: 1) Codes in our fine-level codebook have a spatial size of $2\times2$, while the codes in the codebooks of VQVAE2 have no spatial size; 2) Our hierarchical design is motivated by representing textures at multiple scales, while VQVAE2 is motivated to learn more powerful priors over the latent codes; 3) VQVAE2 trains the whole network end-to-end, which leads to poor representation ability of coarse-level features.
Our stage-wise training strategy ensures meaningful representations at all levels. Apart from multi-level codebooks, we further design a texture-aware codebook. The motivation behind the texture-awareness of the codebook lies in the fact that textures with different appearances at the original scale may appear similar at downsampled scales, leading to an ambiguity problem if we build a single coarse-level codebook for all textures. Therefore, we build different codebooks for different texture attributes separately. We divide the features extracted by the encoders according to their texture attributes at the image level and feed them into different codebooks to get the quantized features. \subsubsection{Sampler with Mixture-of-Experts} To incorporate texture-aware codebooks, we adapt the diffusion-based transformer into a texture-aware one as well. A straightforward idea is to train multiple samplers for different textures. However, this naive idea has two shortcomings: 1) Contextual information in the whole image is vital for the sampling of codebook indices, while training a sampler for one single texture hides such information from the network; 2) Training multiple samplers is not ideal if we adopt the transformer as the sampler, since multiple transformers are too heavy for modern GPU devices. Therefore, we introduce the idea of mixture-of-experts \cite{shazeer2017outrageously} into the diffusion-based transformer. The inputs to the mixture-of-experts sampler consist of three parts: 1) codebook indices $T_{code}$, 2) tokenized human segmentation masks $T_{seg}$, and 3) tokenized texture masks $T_{tex}$. The texture mask is obtained by filling the texture attribute labels of clothes in the corresponding regions of the segmentation mask.
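As a toy illustration of how such a texture mask can be built (the region ids and texture attribute ids below are made up for the example; the real masks come from the annotated parsing and texture labels):

```python
import numpy as np

def build_texture_mask(seg, region_to_texture, background=0):
    """Fill each segmentation region with the texture attribute label
    of the garment occupying it; unlisted regions keep `background`."""
    tex = np.full_like(seg, background)
    for region_id, texture_label in region_to_texture.items():
        tex[seg == region_id] = texture_label
    return tex

# 4x4 toy parsing mask: 0 = background, 1 = upper clothes, 2 = pants
seg = np.array([[0, 1, 1, 0],
                [0, 1, 1, 0],
                [0, 2, 2, 0],
                [0, 2, 2, 0]])

# hypothetical texture attribute ids: 5 = knitted top, 3 = denim pants
tex_mask = build_texture_mask(seg, {1: 5, 2: 3})
print(tex_mask)
```

The resulting map would then be tokenized into $T_{tex}$ alongside $T_{seg}$ and the codebook indices $T_{code}$.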
The multi-head attention $MHA(\cdot)$ of the transformer is computed among all of the tokens: \begin{equation} f = MHA(Emb_{code}(T_{code}) + Emb_{seg}(T_{seg}) + Emb_{tex}(T_{tex})), \end{equation} where $Emb_{code}$, $Emb_{seg}$ and $Emb_{tex}$ are learnable embeddings. The feature $f$ extracted by the multi-head attention is routed to different expert heads. The router dispatches features for specific textures based on the texture attribute information provided by $T_{texture}$. Each expert head is in charge of predicting the tokens for a single texture. The prediction of tokens is formulated as a classification task, where the class number is the size of the codebook. The final codebook indices are composed of the outputs from all expert heads. During training, the codebook indices $T_{code}$ are the coarse-level codebook indices obtained by the hierarchical VQVAE. When it comes to sampling, $T_{code}$ is initialized with masked tokens and is iteratively filled with newly sampled ones until fully filled. \subsubsection{Feed-forward Codebook Index Prediction} To sample an image from the hierarchical VQVAE, multiple feature maps composed of the hierarchical codebooks need to be fed into the decoders. The traditional paradigm \cite{razavi2019generating} is to sample multiple features at different scales. However, token-wise sampling at larger feature scales is time-consuming. Besides, when sampling at a large feature scale, long-term dependencies are hard to capture, and thus the generated images are of poor quality. Motivated by these observations, we propose a feed-forward codebook index prediction network by harnessing the implicit relationship between codebooks at different levels learned by our proposed hierarchical VQVAE. Specifically, features sampled token-wise at the coarse level are fed into the codebook index prediction network to predict the fine-level codebook indices.
The codebook index prediction network $N$ is defined as: \begin{equation} index_{bot} = N(feat_{top}). \end{equation} An encoder-decoder network is adopted for the index prediction network. It should be noted that the codebook index prediction network is texture-aware as well. Shared features are extracted by the encoder and decoder, but are fed into different classifier heads according to the attributes. The use of the codebook index prediction network and the hierarchical codebooks improves the quality of generated images compared to images generated with only a single-level codebook. Thanks to the feed-forward index prediction network, the sampling process at larger scales under the hierarchical VQVAE design can be achieved within a single forward pass. It speeds up the sampling process compared to the token-wise autoregressive sampling used in \cite{razavi2019generating}. \subsection{Text-driven Synthesis} Our framework is a text-driven one. To transform the texts requested by users into attributes, we have some predefined text descriptions for each attribute. We use the pretrained Sentence-BERT model \cite{reimers-2019-sentence-bert} to extract the word embeddings of our predefined texts and the texts requested by users, and then calculate their cosine similarities. According to the cosine similarities of the word embeddings, we then classify the texts into their corresponding attributes. \subsection{Interactive User Interface} We present an interactive user interface for our Text2Human as shown in Fig.~\ref{fig:teaser}(a). Users can upload a human pose map and then type a text describing the clothing shapes. A human parsing map will be generated accordingly. Then users provide another text describing the clothing textures, and Text2Human generates the corresponding final human image. On the right side of the interface, we provide a parsing palette, which enables users to edit the human parsing.
For example, as shown in Fig.~\ref{fig:ui}, users can draw some holes on jeans and lengthen the right pant leg using the palette to make the generated images more customized. \begin{figure} \begin{center} \includegraphics[width=0.96\linewidth]{figure/ui.pdf} \end{center} \caption{\textbf{User Interface with Parsing Palette.} To generate the human image, users are required to upload a human pose and texts describing the clothing shapes and textures. Users can modify the generated human parsing by using the parsing palette. For example, they can edit the right pant leg from a short one to a long one. Some holes can be added to the right pant leg to make the results more customized.} \label{fig:ui} \end{figure} \section{Conclusions} In this work, we proposed the Text2Human framework for text-driven controllable human generation in two stages: pose-to-parsing and parsing-to-human. The first stage synthesizes the human parsing masks based on the required clothes shapes. In the second stage, we propose a hierarchical VQVAE with texture-aware codebooks to capture the rich multi-scale representations for diverse clothes textures, and then propose a sampler with mixture-of-experts to sample desired human images conditioned on the texts describing the textures. To speed up the sampling process of the hierarchical VQVAE and further refine the sampled images from the coarse level, a feed-forward codebook index prediction network is employed. Our proposed Text2Human is able to generate human images with high diversity and fidelity in clothes textures and shapes. We also contribute a large-scale dataset, named the DeepFashion-MultiModal dataset, for the controllable human image generation task. \section{DeepFashion-MultiModal Dataset} Currently, most human generation methods are developed on the low-resolution version of the DeepFashion dataset, and these datasets lack fine-grained annotations.
Therefore, a publicly available and well-annotated high-quality human image dataset is important for research on the human generation task. Motivated by this, we set up a large-scale high-quality human dataset with rich attribute annotations named the DeepFashion-MultiModal Dataset. In a nutshell, our dataset has the following properties: 1) It contains 11,484 high-quality images at $1024 \times 512$ resolution. 2) For each image, we manually annotate the human parsing labels with 24 classes. 3) Each image is annotated with attributes for both clothes shapes and textures. 4) We provide densepose for each human image. \paragraph{Data Source and Processing.} The DeepFashion dataset is a large-scale clothes database that contains over 800,000 fashion images, ranging from in-shop images to unconstrained photos uploaded by customers on e-commerce websites with varying quality. Since images from the in-shop clothes retrieval benchmark are mostly of high quality with a pure color background, we select the full-body images from this benchmark. There are 11,484 full-body images in total. Similar to the data alignment method used in FFHQ~\cite{karras2019style}, we align the full-body images based on their poses. \paragraph{Annotations.} \textbf{1)} Human Pose Representations: We extract densepose for each image using the off-the-shelf method~\cite{guler2018densepose}. \textbf{2)} Human Parsing Annotations: Human parsing serves as an effective intermediary in pose-to-photo synthesis. For each image, we provide human parsing annotations including 24 semantic labels of body components (face, hair, skin), clothes (top, outer, skirt, dress, pants, rompers) and accessories (headwear, eyeglasses, neckwear, \textit{etc}.). The human parsing is manually annotated from scratch by annotators using Photoshop. \textbf{3)} Clothes Shape Annotations: We manually label the clothes shape attributes for each image.
The annotations include the length of upper clothes and lower clothes, the presence of fashion accessories (\emph{e.g}., hat, glasses, neckwear), and the shapes of the upper clothes' necklines. The length of upper clothes falls into four classes: sleeveless, short-sleeve, medium-sleeve, and long-sleeve. The categories for lower clothes are three-point shorts, shorts, cropped pants, and trousers. The shapes of necklines are roughly divided into V-shape, square-shape, crew neck, turtleneck, and lapel. The presence of fashion accessories has two states, \textit{i.e.}, presence or absence. When we annotate clothes shapes for jumpsuits (\emph{e.g}., dress and rompers), the upper part and the lower part of the garments are treated separately. \textbf{4)} Clothes Texture Annotations: We manually label the clothes textures along two orthogonal dimensions: clothes colors and clothes fabrics. Clothes colors consist of floral, patterned, stripes, solid color, lattice, color blocks, and hybrid colors. Clothes fabrics are divided into denim, cotton, leather, furry, knitted, tulle, and other materials. \section{Related Work} \paragraph{Generative Models.} Generative Adversarial Networks (GANs) have demonstrated their powerful capabilities in generating high-fidelity images. Since \cite{goodfellow2014generative} proposed the first GAN in 2014, different variants of GANs \cite{brock2018large, karras2020analyzing, karras2021alias, karras2019style,chai2022any} have been proposed. In addition to unconditional generation, conditional GANs \cite{mirza2014conditional} were proposed to generate images based on conditions like segmentation masks \cite{isola2017image, wang2018high, park2019semantic} and natural language \cite{xu2018attngan, surya2020restgan}. Our proposed Text2Human is a conditional image generation framework taking human poses and texts as inputs. In parallel to GANs, the VAE \cite{kingma2013auto} is another paradigm for image generation.
It embeds input images into a latent distribution and synthesizes images by sampling vectors from the prior distribution. Several VAE-based works \cite{larsen2016autoencoding, esser2018variational, oord2017neural, esser2021taming} have been proposed to improve the visual quality of the generated images. Our proposed method shares some similarities with existing VAE-based methods but differs in the texture-aware codebook, the sampler with mixture-of-experts, and the feed-forward index prediction network for the hierarchical sampling. \paragraph{Human Image Manipulation and Synthesis.} The goal of pose transfer \cite{ma2017pose, ma2018disentangled, liu2019neural, liu2020neural, balakrishnan2018synthesizing,tao2022structure} is to transfer the appearance of the same person from one pose to another. \cite{albahar2021pose} proposed a pose-conditioned StyleGAN framework. The details of the source image are warped to the target pose and then used to spatially modulate the features for synthesis. \cite{zhou2019text} proposed a method for the text-guided pose transfer task. \cite{men2020controllable} proposed ADGAN for controllable person image synthesis. The person image is synthesized by providing a pose and several example images. All of these tasks require a source person image to synthesize the target person. Recently, TryOnGAN \cite{lewis2021tryongan} and HumanGAN \cite{sarkar2021humangan} were proposed to support human image generation conditioned on the human pose only. TryOnGAN trained a pose-conditioned StyleGAN2 network and can generate human images under the given pose condition. HumanGAN proposed a VAE-based human image generation framework. Human images are generated by sampling from the learned distribution. However, these methods do not offer fine-grained controls on human generation. Our proposed framework allows for controllable human generation by giving texts describing the desired attributes.
\section{Introduction} Recent years have witnessed the rapid progress of image generation since the emergence of Generative Adversarial Networks (GANs) \cite{goodfellow2014generative}. Nowadays, we can easily generate diverse faces of high fidelity using a pretrained StyleGAN \cite{karras2020analyzing}, which further supports several downstream tasks, such as facial attribute editing \cite{abdal2021styleflow, patashnik2021styleclip,jiang2021talk} and face stylization \cite{song2021agilegan, pinkney2020resolution,yang2022Pastiche}. Human full-body images, another type of human-related media, are more diverse, richer, and more fine-grained in content. Furthermore, human image generation \cite{fu2022styleganhuman,Fruehstueck2022InsetGAN,grigorev2021stylepeople} has wide applications, including human pose transfer \cite{albahar2021pose, sarkar2021style}, virtual try-on \cite{lewis2021tryongan, cui2021dressing}, and animations \cite{yoon2021pose,chan2019everybody,hong2022avatarclip}. From the perspective of applications and interactions, apart from generating high-fidelity human images, it is desirable to intuitively control the synthesized human images for non-expert users. For example, they may want to generate a person wearing a floral T-shirt and jeans without expert software knowledge. Human image generation with explicit textual controls makes it possible for users to create 2D avatars more easily.
Despite the great potential, controllable human body image generation with high fidelity and diversity is less explored due to the following challenges: \textbf{1)} Compared to faces, human body images are more complex with multiple factors, including the diversity of human poses, the complicated silhouettes of clothing, and the sundry textures of clothing; \textbf{2)} Existing human body image generation methods \cite{sarkar2021humangan, yildirim2019generating, weng2020misc} fail to generate diverse styles of clothes, since they tend to generate clothes with simple patterns like pure color, let alone offer fine-grained control over the textures of clothes in the generated images; \textbf{3)} The generation of clothes with textual controls relies on additional fine-grained annotations. However, currently, there is a lack of human image generation datasets containing fine-grained labels on clothes shapes and textures \cite{liu2016deepfashion,liu2016fashion,cai2022humman}. To bridge the gap, in this work, we propose the Text2Human framework for text-driven controllable human image generation. As shown in Fig.~\ref{fig:teaser}, given a human pose, users can specify the clothes shapes and textures using solely natural language descriptions. Human images are then synthesized in accordance with the textual requests. Due to the complexity of human body images, it is challenging to handle all the involved factors in a single generative model. We decompose the human generation task into two stages. Stage I generates a human parsing mask with diverse clothes shapes based on the given human pose and user-specified texts describing the clothes shapes. Then Stage II enriches the human parsing mask with diverse textures of clothes based on texts describing the clothes textures. Considering the high diversity of clothes textures, we introduce the concept of the codebook, which is widely used in VQVAE-based methods \cite{oord2017neural, esser2021taming}, into our framework.
The codebook learns discrete neural representations of images. To adaptively characterize textures, we propose a hierarchical VQVAE with texture-aware codebook designs. Specifically, the codebooks are constructed in multiple scales. The codebook at the coarser scale contains more structural information about textures of clothes, while codebooks at finer scales include more detailed textures. Due to the different natures of different textures, we also build codebooks separately for each texture. In order to conditionally generate human images consistent with the texts describing the textures, we need a sampler to select appropriate texture representations (\emph{i.e}., codebook indices) from the codebook, and then re-arrange them in a reasonable order in the spatial domain. In this manner, with rich texture representations stored in codebooks, the human generation task is formulated as sampling an intermediate feature map from the learned codebooks. We adopt the diffusion-model-based transformer \cite{bond2021unleashing, gu2021vector, esser2021imagebart} as the sampler. With the texture-aware codebook design, we incorporate mixture-of-experts \cite{shazeer2017outrageously} into the sampler. The sampler has multiple index prediction expert heads to predict indices for different textures. With the hierarchical codebooks, we need to sample intermediate feature maps from the coarse level to the fine level, \emph{i.e}., sampling indices for both the coarse-level and fine-level codebooks is required for the image synthesis. Thanks to the implicit relationship between codebooks at different levels learned by our proposed hierarchical VQVAE, the indices of the codebook at the coarse level can provide hints for the sampling of the fine-level features. A similar idea is also adopted in VQVAE2 \cite{razavi2019generating}. However, in VQVAE2, the pixel-wise sampling by auto-regressive models is time-consuming. 
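As background, the core quantization step shared by VQVAE-style codebooks (mapping each feature vector to its nearest codebook entry and recording the index) can be sketched as follows. This is a minimal illustration with made-up shapes and names, not the paper's implementation:

```python
import numpy as np

def quantize(features, codebook):
    """Map each feature vector to its nearest codebook entry.

    features: (N, D) array of encoder outputs.
    codebook: (K, D) array of learned code vectors.
    Returns the (N,) integer indices and the (N, D) quantized vectors.
    """
    # Squared Euclidean distance between every feature and every code
    # via broadcasting: result has shape (N, K).
    d2 = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    indices = d2.argmin(axis=1)
    return indices, codebook[indices]

# Toy example: a 4-entry codebook of 2-D codes.
codebook = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
feats = np.array([[0.1, -0.1], [0.9, 0.2]])
idx, quantized = quantize(feats, codebook)
```

Sampling such an index map entry by entry with an autoregressive model, as VQVAE2 does, is precisely what makes its hierarchical sampling slow.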
By comparison, we propose a feed-forward codebook index prediction network, which predicts the desired fine-level codebook indices directly from the coarse-level features. The proposed index prediction network speeds up the sampling process and ensures the generation quality. To facilitate the controllable human generation, we construct a large-scale full-body human image dataset dubbed DeepFashion-MultiModal dataset, which contains rich clothes shape and texture annotations, human parsing masks with diverse fashion attribute classes, and human poses. Both the textual attribute annotations and human parsing masks are manually labeled. The human poses are extracted using \cite{guler2018densepose}. All images are collected from the high-resolution version of DeepFashion dataset. These images are further cleaned and selected to ensure they are full-body and of good quality \footnote{The dataset is available at {\textcolor{myblue}{\url{https://github.com/yumingj/DeepFashion-MultiModal}}}.}. \begin{figure*} \begin{center} \includegraphics[width=1.0\linewidth]{figure/pipeline.pdf} \end{center} \caption{\textbf{Overview of Text2Human.} We decompose the human generation into two stages. Stage I translates the given human pose to the human parsing according to the text describing the clothes shapes. The text for clothes shapes is first transformed to one-hot shape attributes and embedded to a vector $f_{shape}$. The shape vector $f_{shape}$ is then fed into the pose-to-parsing module to spatially modulate the pose features. Stage II generates the human image from the synthesized human parsing by sampling multi-level indices from our learned hierarchical texture-aware codebooks. To sample coarse-level indices, we employ a sampler with mixture-of-experts, where features are routed to different expert heads to predict the indices based on the required textures. 
At the fine level, we propose a feed-forward network to efficiently predict fine-level indices to refine the generated human image.} \label{pipeline_illustration} \end{figure*} To summarize, our main contributions are as follows: \textbf{1)} We propose the Text2Human framework for the task of text-driven controllable human generation. Our proposed framework is able to generate photo-realistic human images from natural language descriptions. \textbf{2)} We build a hierarchical VQVAE with the texture-aware codebook design. We propose a transformer-based sampler with the concept of mixture-of-experts. The features are routed to different expert heads according to the required attributes. The hierarchical design and mixture-of-experts sampler enable the synthesis and control of complicated textures. \textbf{3)} We propose a feed-forward index prediction network to predict codebook indices of fine-level codebook based on the features sampled at the coarse level, which overcomes the limitation of the time-consuming sampling process in classical hierarchical VQVAE methods. \textbf{4)} We contribute a large-scale and high-quality human image dataset with rich clothes shape and texture annotations as well as human parsing masks to facilitate the task of controllable human synthesis. \section{Experiments} \subsection{Implementation Details} We split the dataset into a training set and a testing set. The training set contains $10,335$ images and the testing set contains $1,149$ images. We downsample the images to $512 \times 256$ resolution. The texture attribute labels are the combinations of clothes colors and fabrics annotations. The modules in the whole pipeline are trained stage by stage. All of our models are trained on one NVIDIA Tesla V100 GPU. We adopt the Adam optimizer. The learning rate is set as $1 \times 10^{-4}$. 
For the training of Stage I (\emph{i.e}., Pose to Parsing), we use the (human pose, clothes shape labels) pairs as inputs and the labeled human parsing masks as ground truths. We use the instance channel of densepose (three-channel IUV maps in the original) as the human pose $P$. Each shape attribute $a_i$ is represented as a one-hot embedding. We train the Stage I module for $50$ epochs. The batch size is set as $8$. For the training of the hierarchical VQVAE in Stage II, we first train the top-level codebook, $E_{top}$, and decoder for 110 epochs, and then train the bottom-level codebook, $E_{bot}$, and $D_{bot}$ for 60 epochs with top-level related parameters fixed. The batch size is set as $4$. The sampler with mixture-of-experts in Stage II requires $T_{seg}$ and $T_{tex}$. $T_{seg}$ is obtained by a human parsing tokenizer, which is trained by reconstructing the human parsing maps for $20$ epochs with batch size $4$. $T_{tex}$ is obtained by directly downsampling the texture instance maps to the same size as the codebook index maps using nearest interpolation. The cross-entropy loss is employed for training. The sampler is trained for $90$ epochs with a batch size of $4$. For the feed-forward index prediction network, we use the top-level features and bottom-level codebook indices as the input and ground-truth pairs. The feed-forward index prediction network is optimized using the cross-entropy loss. The index prediction network is trained for $45$ epochs and the batch size is set as $4$. \begin{table}[] \begin{center} \caption{\textbf{Quantitative Comparisons} on human images generated given human parsing maps and clothes textures. 
Our method achieves the lowest FID score and highest attribute prediction accuracy for complicated textures.} \begin{tabular}{l|cccc} \Xhline{1pt} \textbf{Methods} & \textbf{FID} & \textbf{Floral} & \textbf{Stripe} & \textbf{Denim} \\ \Xhline{1pt} Pix2PixHD & 39.80 & 52.94\% & 69.44\% & \textbf{99.03\%} \\ \hline SPADE & 30.13 & 61.76\% & 55.56\% & 98.55\% \\ \hline MISC & 27.97 & 50.00\% & 69.44\% & 97.58\% \\ \hline HumanGAN-parsing & 27.71 & 8.82\% & 5.56\% & 87.17\% \\ \hline Text2Human-parsing & \textbf{22.95} & \textbf{70.59\%} & \textbf{88.89\%} & 95.88\% \\ \Xhline{1pt} \end{tabular} \label{quant:parsing} \end{center} \end{table} \begin{table}[] \begin{center} \caption{\textbf{Quantitative Comparisons} on human images generated given human poses. Our method achieves the lowest FID score and the largest ratio of floral, stripe, and lattice textures on clothes.} \begin{tabular}{l|cccc} \Xhline{1pt} \textbf{Methods} & \textbf{FID} & \textbf{Floral} & \textbf{Stripe} & \textbf{Lattice} \\ \Xhline{1pt} HumanGAN-pose & 32.20 & 2.00\% & 0.61\% & 0.35\% \\ \hline TryOnGAN & 29.00 & 3.05\% & 2.52\% & 0.44\% \\ \hline Text2Human-pose & \textbf{24.54} & \textbf{3.57\%} & \textbf{4.61\%} & \textbf{1.39\%} \\ \Xhline{1pt} \end{tabular} \label{quant:pose} \end{center} \end{table} \subsection{Comparison Methods} \paragraph{Pix2PixHD} \cite{wang2018high} is a conditional GAN for semantic map guided image synthesis. Here, we use the human parsing map and the texture map obtained by filling texture attribute labels in the human parsing map as inputs. \paragraph{SPADE} \cite{park2019semantic} is a conditional GAN for semantic map guided synthesis. It is adapted in a similar way to Pix2PixHD. \paragraph{MISC} \cite{weng2020misc} synthesizes human images based on a human parsing map and some attributes about the clothes. 
\paragraph{HumanGAN} \cite{sarkar2021humangan} is a pose-conditioned VAE-based human generation method, which generates diverse human appearances by sampling from a fixed distribution (\emph{e.g}., Gaussian distribution). \paragraph{TryOnGAN} \cite{lewis2021tryongan} is a pose-conditioned StyleGAN method. The constant noise is replaced with the pose features. We train the model with the same human pose representation as our method for fair comparisons. \paragraph{Taming Transformer} \cite{esser2021taming} is a VQVAE-based method and also shows an application to conditional human image generation. For a fair comparison, we use human parsing as the input condition. \subsection{Evaluation Metrics} \paragraph{FID} For image generation tasks, Fréchet Inception Distance (FID) is a metric evaluating the similarities between generated images and training images. A lower FID indicates a higher quality. \paragraph{Attribute Prediction Accuracy.} We use a pretrained predictor to predict the texture attributes of generated images. The prediction accuracy is reported to measure the realism of the generated texture. We also use the pretrained predictor to calculate the ratios of complicated textures (floral, stripe, lattice) to evaluate the diversity. \paragraph{User Study.} A user study is performed to evaluate the quality of the generated images. Users are presented with 20 groups of results. Each group has five images generated by baselines and our method. A total of 16 users are asked to 1) rank images according to photorealism (rank 5 is the best) and 2) score texture consistency with the given three attribute labels for upper clothes, lower clothes and outer clothes. The full score is 3. If the outer clothing is not required, the score for the outer clothing is 1. \begin{figure} \begin{center} \includegraphics[width=1.0\linewidth]{figure/user_study.pdf} \end{center} \caption{\textbf{User Study Results.} (a) Average rank for the photorealism of generated images. 
A higher rank indicates a better visual quality. (b) Scores of the consistency of the textures of generated images and provided labels.} \label{fig:user_study} \end{figure} \subsection{Quantitative Comparisons} We report the quantitative results under two different settings: human image generation 1) from a human parsing, and 2) from a given human pose. Table~\ref{quant:parsing} shows the comparisons with state-of-the-art conditional image generation methods. A well-annotated human parsing map and labels for clothes texture annotations are provided to synthesize the human images. As shown in Table~\ref{quant:parsing}, our method achieves the lowest FID, which demonstrates the fidelity and diversity of our generated human images. In addition, the best texture attribute prediction accuracy shows that our proposed Text2Human framework can accurately generate human images conditioned on provided textures. In Table~\ref{quant:pose}, we show the quantitative comparisons on pose-guided human image synthesis. Since it is non-trivial to add clothes shape and texture controls for HumanGAN and TryOnGAN, under this setting, we report the ratio of complicated textures among all generated images. The highest ratio demonstrates that our methods can synthesize diverse textures for clothes. The user study results are shown in Fig.~\ref{fig:user_study}. Our proposed Text2Human gets the highest rank in terms of the photorealism of the generated images. As for the clothes textures, the images synthesized by our framework are more consistent with the required texture attributes. The user study results are consistent with other quantitative results. \subsection{Qualitative Comparisons} \begin{figure} \begin{center} \includegraphics[width=1.0\linewidth]{figure/compare1.pdf} \end{center} \caption{\textbf{Qualitative Comparison} on image generation given human parsing maps and clothes textures. 
Our proposed method can generate complicated textures with finer details and high-fidelity faces.} \label{fig:compare1} \end{figure} \begin{figure} \begin{center} \includegraphics[width=1.0\linewidth]{figure/compare2.pdf} \end{center} \caption{\textbf{Qualitative Comparison} on Pose-guided Image Generation. The compared baselines do not offer any controls on clothes shapes and textures, while our method can explicitly control these attributes.} \label{fig:compare2} \end{figure} Figure~\ref{fig:compare1} shows visual comparisons on synthesized human images given human parsing maps and clothes textures. Our proposed method can generate complicated textures with finer details and high-fidelity faces. Figure~\ref{fig:compare2} shows visual comparisons with state-of-the-art pose-guided TryOnGAN~\cite{lewis2021tryongan} and HumanGAN~\cite{sarkar2021humangan}. The compared baselines do not offer any controls on clothes shapes and textures, while our method can explicitly control these attributes. We also compare our proposed Text2Human with another VQVAE-based method, Taming Transformer \cite{esser2021taming}. As shown in Fig.~\ref{fig:compare3}, given the same human parsing map, our method can generate more plausible human images. \subsection{Ablation Study} \paragraph{Hierarchical Design for Texture Reconstruction.} Fig.~\ref{fig:ablation}(a) shows the improvement brought by the proposed hierarchical VQVAE on the recovery of plaid and stripe patterns. With the hierarchical design, the reconstructed images contain more high-frequency details, verifying better texture representations are learned. The hierarchical design reduces the reconstruction loss (\emph{i.e}., $l_{1}$ loss + perceptual loss) from $0.1415$ to $0.1192$ on the whole testing set. \begin{figure} \begin{center} \includegraphics[width=1.0\linewidth]{figure/comp_taming.pdf} \end{center} \caption{\textbf{Qualitative Comparison} with Taming Transformer. 
Given the human parsing as input, our method can generate human images with better fidelity.} \label{fig:compare3} \end{figure} \begin{table}[] \begin{center} \caption{\textbf{Quantitative Results} on the effectiveness of texture-aware codebook and mixture-of-experts. We report the attribute prediction accuracy for models with MoE and without MoE.} \begin{tabular}{l|ccc} \Xhline{1pt} \textbf{Methods} & \textbf{Floral} & \textbf{Stripe} & \textbf{Denim} \\ \Xhline{1pt} Without MoE & 20.59\% & 22.22\% & 92.01\% \\ \hline With MoE & \textbf{70.59\%} & \textbf{88.89\%} & \textbf{95.88\%} \\ \Xhline{1pt} \end{tabular} \label{quant:moe} \end{center} \end{table} \begin{table}[] \begin{center} \caption{\textbf{LPIPS distance and ArcFace cosine distance} between the reconstructed images and original images.} \begin{tabular}{l|cc} \Xhline{1pt} \textbf{Methods} & \textbf{LPIPS $\downarrow$} & \textbf{ArcFace $\uparrow$} \\ \Xhline{1pt} VQVAE2 & 0.0791 & 0.3415 \\ \hline Ours & \textbf{0.0609} & \textbf{0.4869} \\ \Xhline{1pt} \end{tabular} \label{quant:prediction} \end{center} \end{table} \paragraph{Texture-Aware Codebook and Mixture-of-Experts Sampler.} To evaluate the effectiveness of our texture-aware and mixture-of-experts design, we train a diffusion-based sampler with only one codebook for all textures. As shown in Fig.~\ref{fig:ablation}(b), the sampler without mixture-of-experts and texture-aware codebook cannot generate requested floral textures, demonstrating our design makes the sampler better conditioned on the textual inputs. We report attribute prediction accuracies on complicated textures (\emph{i.e}., floral, stripe, and denim). The results are shown in Table~\ref{quant:moe}. Without mixture-of-experts, the attribute prediction accuracy drops by $50.00\%$, $66.67\%$, and $3.87\%$ on floral, stripe, and denim textures, respectively. There are more denim textures (3449 images) than floral (325 images) and stripe (361 images) textures in the training set. 
It is easier for models to capture the patterns of denim textures even without the mixture-of-experts design. As a result, we can observe a smaller performance gap for denim textures, compared to those for floral and stripe textures. It indicates that the mixture-of-experts design is more effective in generating uncommon textures with fewer training samples. \begin{figure*} \begin{center} \includegraphics[width=0.9\linewidth]{figure/ablation.pdf} \end{center} \caption{\textbf{Ablation Study.} (a) The hierarchical design recovers more high-frequency details. (b) Mixture-of-Experts sampler enables the framework to synthesize images conditioned on requested texture inputs. (c) Compared to VQVAE2, our feed-forward network generates clearer images. (d) With the feed-forward network, sampled lattice patterns are refined.} \label{fig:ablation} \end{figure*} \paragraph{Feed-Forward Index Prediction Network.} To overcome the limitations of the hierarchical sampling paradigm of VQVAE2, we propose a feed-forward index prediction network to speed up sampling as well as to refine the textures. In terms of running time, our feed-forward network predicts fine-level codebook indices within \textbf{0.6s} while VQVAE2 takes \textbf{25mins}. In terms of quality, we conduct a comparative experiment with VQVAE2. For a fair comparison, we use the ``ground-truth'' coarse-level codebook indices obtained when reconstructing a given human image as input to predict fine-level indices by the autoregressive model of VQVAE2 or our feed-forward network. As shown in Fig.~\ref{fig:ablation}(c), our method reconstructs clearer and higher-fidelity clothes textures than VQVAE2. We report LPIPS distance \cite{zhang2018perceptual} and ArcFace distance \cite{deng2019arcface} between the reconstructed images and the original images in Table~\ref{quant:prediction}. It further verifies the effectiveness of our proposed feed-forward index prediction network in terms of reconstruction performance. 
Fig.~\ref{fig:ablation}(d) further provides a visualization of the refinement of our feed-forward network. Our network effectively refines the synthesized lattice patterns sampled from the coarse-level codebook. \subsection{Limitations} In this section, we discuss three common limitations of our proposed Text2Human. \begin{figure} \begin{center} \includegraphics[width=1.0\linewidth]{figure/failure_cases.pdf} \end{center} \caption{\textbf{Failure Cases.} (a) Under uncommon poses, implausible artifacts will appear in the generated images. (b) Blurry plaid textures are generated due to imbalanced datasets.} \label{fig:failure} \end{figure} 1) \textbf{Uncommon poses}. The performance would degrade for human poses that are uncommon in the DeepFashion-MultiModal dataset. Two examples of uncommon poses are shown in Fig.~\ref{fig:failure}(a). The first pose has the legs crossed, and artifacts appear in the crossing region. The second person stands facing the side rather than the front. In this case, artifacts appear in the face region, as the model is prone to generating front-facing faces. Thus, the generated image looks unnatural. Our framework is data-driven and can benefit from more diverse human datasets in future work. 2) \textbf{Plaid textures} are blurry, as shown in Fig.~\ref{fig:failure}(b). This is attributed to the imbalanced textures in DeepFashion. Only 162 out of 10335 training images have plaid patterns in upper clothes. This problem is common to all baselines, though our method performs best. In future work, the performance could be boosted by adding more data with such complicated patterns. For newly added data, the labels for clothes attributes could be provided by the attribute predictor trained on our dataset. Some techniques dealing with imbalanced data could also be employed to mitigate the problem. 
3) \textbf{Potential error in word embeddings.} Translating text descriptions to one-hot embeddings inevitably introduces errors. For example, for the length of sleeves, we only define four classes, \emph{i.e}., sleeveless, short sleeves, medium sleeves, and long sleeves. If the user wants to generate a sweater with sleeves covering the elbow but not reaching the wrist, the synthesized human parsing cannot be perfectly aligned with the text inputs as the predefined texts cannot handle sleeves with arbitrary lengths. In future work, continuous word embeddings could be employed to provide richer and more robust information.
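The one-hot discretization just described can be made concrete with a toy encoder. The class names follow the four sleeve-length classes listed above; the function itself is our illustration, not the paper's code:

```python
import numpy as np

# The four predefined sleeve-length classes from the text.
SLEEVE_CLASSES = ["sleeveless", "short", "medium", "long"]

def one_hot(label, classes=SLEEVE_CLASSES):
    """Encode a discrete attribute label as a one-hot vector."""
    vec = np.zeros(len(classes))
    vec[classes.index(label)] = 1.0
    return vec

# Any in-between request must be snapped to one of the four classes, so a
# sleeve that "covers the elbow but not the wrist" collapses to "medium".
encoding = one_hot("medium")
```

Because the encoding has no notion of distance between classes, it cannot represent arbitrary sleeve lengths, which is exactly the limitation continuous word embeddings would address.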
\section{Introduction} Conductive oxides are essential components in composite oxide heterostructures where they are often used as electrode materials in thin film applications.\cite{Marrec_et_al:2002,Takahashi/Tokura_et_al:2005,Gallagher/Parkin_2006} In the perovskite crystal family ($AB$O$_3$ stoichiometry), the itinerant ferromagnetic SrRuO$_3$ (SRO) is a popular choice since it is one of the more conductive metallic oxides with good thermal properties.\cite{Lee_Lowndes_et_al:2004} In thin films, SRO is intensely investigated as a possible route to the realization of novel field-effect devices.\cite{Ahn_et_al:2006, Takahasi_et_al:2006} In addition, it is of particular interest to the spintronic\cite{Zutic/Fabian/Sarma:2005,Awschalom_Flatte:2007} and multiferroic\cite{Spaldin/Fiebig:2005,Ramesh/Spaldin:2007} communities, which have been recently energized by the possible device applications available from engineering interface phenomena.\cite{Ohtomo_et_al:2002,Ohtomo/Hwang:2004,Hwang_science:2006,Huijben_et_al:2006,Yamada_et_al:2004, de_Teresa_et_al:1999,Brinkman_et_al:2007} However, one limitation in the design of thin film oxide devices is the observation of increased resistivity in metal oxides as the film thickness decreases. Such behavior is clearly present in ultra-thin films of SrRuO$_3$, where a metal-to-insulator (MI) transition\cite{Toyota_et_al:2005} occurs at four monolayers. This substantial change in the electrical conductivity presents a serious challenge for device miniaturization. In this work we explore the underlying physics of the thin film MI-transition, which to date remain to be understood. The 3$d$ transition metal oxides (TMOs) are known to possess strong electron-electron correlation effects that can drive a system that should be metallic within a simple band picture into an insulating state. 
Due to the large spatial extent of the 4$d$-orbitals in the ruthenates, correlation effects are anticipated to be less important as stronger hybridization provides more effective screening and a reduced Hubbard $U$ (Coulomb repulsion energy). Many experimental studies have already addressed the degree of electron-electron correlation in SrRuO$_3$, including X-ray and ultraviolet photoemission spectroscopy,\cite{Kim_et_al:2004,Park_et_al:2004, Toyota_et_al:2005,Siemons_et_al:2007} specific heat measurements,\cite{Allen_et_al:1996} infrared and optical conductivity measurements,\cite{Kostic_et_al:1998} and transport experiments.\cite{Cao_et_al:1997} For example, Kim and coworkers\cite{Kim_et_al:2004} use X-ray photoemission spectroscopy (XPS) to identify how such correlations change within the ruthenate family, and Toyota {\it et al.}\cite{Toyota_et_al:2005} use photoemission spectroscopy (PES) to detail the metal-insulator transition in SrRuO$_3$ as a function of film thickness concomitant with the onset of magnetism. In all of these studies, the general consensus is that electron correlation effects {\it do} play a role in determining the electronic structure of this itinerant ferromagnet, but to what degree remains unclear. Furthermore, some theoretical investigations have begun examining covalency,\cite{Maiti:2006} correlation\cite{Maiti/Singh:2005} and orbital ordering\cite{Jeng:2006} effects in bulk SrRuO$_3$. The magnetic properties of SRO under epitaxial strain have also been investigated with first-principles techniques.\cite{Zayak/Rabe:2006} In this work, first-principles density functional theory (DFT) calculations are performed, first to identify the degree of correlation in bulk SRO, and second to investigate the driving force for the metal-insulator transition in ultra-thin films. 
We use two approaches to introduce correlation effects into the conventional band theory (local spin density) approach for treating the Ru 4$d$-orbitals and their hybridization with O 2$p$-orbitals: the local spin density + Hubbard $U$ (LSDA+$U$), and the pseudopotential self-interaction corrected (pseudo-SIC) local spin density methods. In addition we investigate two structural variants -- the ideal cubic perovskite structure and the experimentally observed orthorhombic structure, which includes tiltings and rotations of the RuO$_6$ octahedra. By comparing the results that we obtained for both methods and both structure types, we are able to comment on the nature of the metal-insulator transition in ultra-thin films. \section{\label{sec:structure} Crystal Structure \& Magnetism} The perovskite class of materials is described by a network of corner-sharing $B$O$_6$ octahedra, in which the $A$-site cation is located at the center of a cube defined by eight $B$O$_6$ units. The ideal perovskite is cubic (space group $Pm\bar{3}m$), however several modifications exist owing to the range of cation sizes that can be accommodated in the structure. Deviations from the ideal cubic structure are defined by the Goldschmidt tolerance factor \[ t^\prime = \frac{R_A+R_O}{\sqrt{2}(R_B+R_O)} \quad , \] where $R_i$ is the radius of atom $i$, and can be attributed to the requirement to optimize the anion coordination about the $A$-site cation.\cite{Goldschmidt} Using the Shannon-Prewitt radii for this compound, a predicted tolerance factor of $t^\prime$=0.908 is found, which is far from the ideal case $t^\prime$=1, suggesting that distortions should occur. Indeed, SRO undergoes a series of structural transformations with temperature, from high symmetry cubic ($Pm\bar{3}m$, stable above 950~K) to tetragonal ($I4/mcm$, stable between 820~K and 950~K) to distorted orthorhombic structure ($Pbnm$) at low temperatures. 
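The Goldschmidt criterion above is a one-line formula, sketched below as a helper that takes the ionic radii in any consistent length unit. The numerical check uses deliberately artificial equal radii rather than the Shannon-Prewitt values, which are not reproduced here:

```python
from math import sqrt

def tolerance_factor(r_a, r_b, r_o):
    """Goldschmidt tolerance factor t' = (R_A + R_O) / (sqrt(2) * (R_B + R_O))."""
    return (r_a + r_o) / (sqrt(2) * (r_b + r_o))

# Sanity check with equal (artificial) radii: t' = 2 / (2*sqrt(2)) = 1/sqrt(2),
# i.e. well below the ideal cubic value t' = 1, so distortions are expected.
t = tolerance_factor(1.0, 1.0, 1.0)
```

A larger $A$-site radius pushes $t^\prime$ toward 1, consistent with the trend that large $A$ cations stabilize the cubic structure.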
The orthorhombic distortion from the ideal cubic structure can be described by the tilting of the RuO$_6$ octahedra in alternate directions away from the $c$ axis, and the rotation of the octahedra around the $b$ axis; in both cases adjacent octahedra distort in the opposite sense (Figure \ref{fig:structures}). \begin{figure} \includegraphics[width=0.38\textwidth]{fig1} \caption{\label{fig:structures} (Color online) The orthorhombic ($Pbnm$) crystal structure of SrRuO$_3$. The unit cell contains four formula units (f.u.) of the ideal cubic perovskite ($Pm\bar{3}m$). The structure is stabilized by shortening the Sr--O distance followed by a cooperative distortion of the RuO$_6$ octahedra to reduce the coordination volume of the Sr ions; this results in a smaller Ru--O--Ru bond angle, which in turn decreases the Ru 4$d$ bandwidth and metallicity. The deviation in the structure from the high symmetry cubic state can be quantified using the tilting angle $(180^\circ-\phi)/2$ and the rotation angle $(90^\circ-\theta)/2$ of the oxygen octahedra.} \end{figure} The degree of tilting and rotation (as defined in Fig.\ \ref{fig:structures}) of the octahedra is useful in describing the distortions in the oxygen network from the perfect cubic case. A rotation angle of 7.56$^\circ$ and a tilting angle of 10.47$^\circ$ (corresponding to a Ru-O-Ru angle of 159$^\circ$) are found for $Pbnm$ SrRuO$_3$. The structural changes reduce the hybridization between the Ru 4$d$ states and O 2$p$ states and lead to a narrowing of the bandwidths (see Section \ref{RD}) compared with the ideal cubic case. Consequently the degree of correlation, described by $U/W$ where $W$ is the valence bandwidth, is expected to be enhanced. 
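The angle definitions used above are simple enough to verify numerically; the two helpers below are a trivial sketch (the function names are ours):

```python
def tilt_angle(ru_o_ru_deg):
    """Octahedral tilting angle (180 - phi)/2 from the Ru-O-Ru bond angle phi."""
    return (180.0 - ru_o_ru_deg) / 2.0

def rotation_angle(theta_deg):
    """Octahedral rotation angle (90 - theta)/2."""
    return (90.0 - theta_deg) / 2.0

# A Ru-O-Ru bond angle of 159 deg gives a tilting angle of 10.5 deg; the small
# difference from the quoted 10.47 deg reflects rounding of the bond angle.
tilt = tilt_angle(159.0)
```

Both angles vanish in the undistorted cubic limit ($\phi = 180^\circ$, $\theta = 90^\circ$), as expected.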
Below approximately 160~K, SrRuO$_3$ exhibits strong ferromagnetic behavior, and has a measured Rhodes-Wohlfarth ratio\cite{Fukunagai:1994} ($\mu_{\rm eff} / \mu_{\rm sat}$) of 1.3, suggesting that its magnetism can be well described by a localized $d$-electron model similar to the elemental ferromagnetic metals. Within this model and under an octahedral crystal field, the $4d$-manifold splits into a threefold degenerate $t_{2g}$ subband that is lower in energy than the twofold degenerate $e_g$ band. Neglecting covalency, we would expect a spin-only magnetic moment of 2~$\mu_B$, corresponding to a low-spin state for the Ru$^{4+}$ ions ($d^4: t_{2g\uparrow}^3, t_{2g\downarrow}^1$). Experimentally, however, the moment is measured to be closer to 1.1~$\mu_B$/f.u., although values ranging from 0.9~$\mu_B$/f.u. to 1.6~$\mu_B$/f.u. have also been reported.\cite{Kanbayasi:1976} (The spread in values is attributed to the large magnetocrystalline anisotropy of the material, and the difficulty in making large single-domain samples.) First-principles calculations also report a magnetic moment ranging from 0.9~$\mu_B$/f.u. to 2.0~$\mu_B$/f.u.\cite{Singh_SrRuO3:1996,Allen_et_al:1996,Mazin/Singh:1997,Santi_Jarlborg:1997} The reduced calculated magnetic moment in the solid compared to that in the free ion limit is due in part to the large spatial extent of the Ru 4$d$ orbitals, which results in a significant overlap (hybridization) with the oxygen 2$p$. Furthermore, due to the metallic character of SRO, an overlap of the majority and minority Ru 4$d$ bands occurs at the Fermi level; as a result partial occupation of the minority band also leads to a reduced magnetic moment. In this work, we examine the LSDA and ``beyond-LSDA'' electronic and magnetic properties of both the $Pbnm$ and $Pm\bar{3}m$ crystal variants. 
Metallicity and magnetism are both related to the $d$-bandwidth, which in turn depends on both correlations and structural properties such as tiltings and rotations of the oxygen octahedra. Our goal, therefore, is to identify the relative contributions of electron-electron correlation effects and structural distortions in driving the metal-insulator transition in SrRuO$_3$ thin films. \section{Theoretical Methods} \subsection{LSDA} Our initial electronic band structure calculations were performed within the local spin density approximation\cite{Kohn/Sham:1965} (LSDA) using both the {\sc siesta}\cite{Soler_et_al:2002,Siesta2,Ordejon95} and {\sc vasp}\cite{Kresse/Furthmueller_PRB:1996,Kresse/Joubert:1999} density functional theory (DFT) packages. In each method we used the Perdew-Zunger\cite{Perdew/Zunger:1981} parametrization of the Ceperley-Alder data\cite{Ceperley_Alder} for the exchange and correlation (XC) functional. The core and valence electrons were treated with the projector-augmented-wave (PAW) method\cite{Bloechl:1994} for all calculations performed with {\sc vasp}.\footnote{We used 10 valence electrons for Sr (4$s^2$4$p^6$5$s^2$), 14 for Ru (4$p^6$4$d^7$5$s^1$), and 6 for each oxygen (2$s^2$2$p^4$).} Furthermore a plane-wave energy cutoff of 500~eV was used and found to produce excellent convergence. The three-dimensional Brillouin zone was sampled with a $12 \times 12 \times 12$ $k$-point Monkhorst-Pack mesh\cite{Monkhorst_Pack} for the cubic bulk (5 atom unit cell) and thin film structures and a $12 \times 12 \times 10$ $k$-point sampling mesh for the bulk orthorhombic structure (20 atom unit cell). For the orthorhombic films we used a $12 \times 12 \times 3$ $k$-point sampling. In all cases the tetrahedron method with Bl{\"o}chl corrections\cite{Bloechl/Jepsen/Andersen:1994} was used for the Brillouin zone integrations. 
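For illustration, the fractional coordinates of an unsymmetrized Monkhorst-Pack mesh can be generated as follows. This sketches the standard Monkhorst-Pack prescription only; it does not reflect the internals of either code, and symmetry reduction of the mesh is omitted:

```python
import itertools
import numpy as np

def monkhorst_pack(q1, q2, q3):
    """Fractional k-point coordinates of a Monkhorst-Pack mesh.

    Along a direction sampled with q points, the coordinates are
    u_r = (2r - q - 1) / (2q) for r = 1..q (the original prescription).
    """
    axes = [np.array([(2 * r - q - 1) / (2 * q) for r in range(1, q + 1)])
            for q in (q1, q2, q3)]
    return np.array(list(itertools.product(*axes)))

# The 12 x 12 x 12 mesh used for the cubic cell: 1728 points before
# symmetry reduction; an even mesh does not include the Gamma point.
kpts = monkhorst_pack(12, 12, 12)
```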
In the localized-basis code {\sc siesta}, the core and valence electrons were treated with norm-conserving, fully separable\cite{KleinmanBylander} Troullier-Martins\cite{troullier} pseudopotentials.\footnote{The electronic configurations for each atom are: Sr 4$s^2$4$p^6$4$d^0$4$f^0$ (1.50, 1.50, 2.00, 2.00), Ru 4$s^2$4$p^6$4$d^7$4$f^0$ (1.30, 1.30, 1.40, 1.30), and O 2$s^2$2$p^4$3$d^0$4$f^0$ (1.15, 1.15, 1.15, 1.50), where the cutoff radii for each orbital are given in parentheses.} The localized atomic orbitals for each atom used a single-$\zeta$ basis set for the semicore states and a double-$\zeta$ basis set for the valence states. Total energies were computed on a uniform real space grid with a cutoff of 800~Ry in order to reach accuracy comparable to the planewave code.\footnote{The grid cutoff mentioned is not directly comparable to the plane-wave cutoff used to represent the wavefunctions in a standard plane-wave implementation of DFT; rather, it is used here to represent the density, and is typically four times larger than the wavefunction cutoff.} The Brillouin zone of the cubic structure was sampled with a $26 \times 26 \times 26$ $k$-point Monkhorst-Pack mesh, while the $Pbnm$ structure was sampled with a $15 \times 15 \times 12$ $k$-point mesh. Integrations were performed with a Gaussian broadening of 0.10~eV in all calculations. The equilibrium lattice parameter for the cubic structure was found by fitting the total energy as a function of volume to the Murnaghan equation of state. Excellent agreement was found between the two codes (Table~\ref{tab:Vol_data}), with a slight underestimate of the experimental lattice constant typical for the LSDA. 
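The equation-of-state fit used to obtain the equilibrium lattice constant can be sketched as follows. This is a minimal illustration with synthetic data and hypothetical parameter values (chosen so the bulk modulus lands near the tabulated $\sim$200~GPa); scipy's curve\_fit stands in for whatever least-squares routine was actually used.

```python
import numpy as np
from scipy.optimize import curve_fit

def murnaghan(V, E0, V0, B0, B0p):
    """Murnaghan equation of state E(V); B0 in eV/A^3, volumes in A^3."""
    return (E0
            + B0 * V / B0p * ((V0 / V) ** B0p / (B0p - 1.0) + 1.0)
            - B0 * V0 / (B0p - 1.0))

# Synthetic E(V) points around the minimum, standing in for a DFT
# total-energy scan; the "true" parameters here are hypothetical.
true = dict(E0=-39.0, V0=62.7, B0=1.25, B0p=4.6)
V = np.linspace(0.94, 1.06, 13) * true["V0"]
E = murnaghan(V, **true)

popt, _ = curve_fit(murnaghan, V, E, p0=(E.min(), V.mean(), 1.0, 4.0))
E0, V0, B0, B0p = popt
a0 = V0 ** (1.0 / 3.0)          # cubic perovskite: one formula unit per cell
B0_GPa = B0 * 160.21766         # 1 eV/A^3 = 160.21766 GPa
print(f"a0 = {a0:.4f} A, B = {B0_GPa:.0f} GPa, B' = {B0p:.2f}")
```

With real DFT data one would fit the computed $E(V)$ points in exactly this way and read off $a_0$, $B$, and $B^\prime$ from the fitted parameters.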
\begin{table} \begin{ruledtabular} \begin{tabular}{lcc} & {\sc vasp} & {\sc siesta} \\ \hline $a/a_0$ & 0.98 & 0.98 \\ $B$~(GPa) & 200 & 219 \\ $B^\prime$ & 4.6 & 4.4 \\ Moment ($\mu_B$/f.u.) & 1.09 & 1.26 \\ \end{tabular} \end{ruledtabular} \caption{\label{tab:Vol_data} Results obtained for cubic SrRuO$_3$ within the local spin density approximation (LSDA) from the two codes used in this work. The equilibrium lattice constant relative to the experimental value ($a_0=3.9735$~\AA) determined from high-temperature neutron diffraction data,\cite{Chakoumakos/et_al:1998} the bulk modulus $B$, the pressure derivative $B^\prime$, and the magnetic moment per formula unit are given for each code. Very good agreement is found between the two codes. The notation for the codes is as follows, {\sc siesta}: Spanish Initiative for Electronic Simulations with Thousands of Atoms local orbital code and {\sc vasp}: Vienna Ab-initio Simulation Package planewave code. } \end{table} The cell parameters and atomic positions of the orthorhombic structure (Table \ref{tab:lattice_info}) were optimized by starting from the positions reported in Ref.\ \onlinecite{Zayak/Rabe:2006}, and the ionic coordinates were relaxed until the Hellmann-Feynman forces on the atoms were less than 4~meV~\AA$^{-1}$. \begin{table} \begin{ruledtabular} \begin{tabular}{llccc} Atom & Site & $x$ & $y$ & $z$ \\ \hline Sr & $4c$ & -0.0050 & 0.03039 & 0.25 \\ Ru & $4a$ & 0.5 & 0.0 & 0.0 \\ O(1) & $4c$ & 0.0650 & 0.4942 & 0.25 \\ O(2) & $8d$ & 0.7158 & 0.2834 & 0.0340 \\ \end{tabular} \end{ruledtabular} \caption{\label{tab:lattice_info}Calculated structural parameters for SrRuO$_3$ with the $Pbnm$ symmetry using {\sc vasp}. 
Our calculated orthorhombic lattice constants are $a=5.4924$, $b=5.4887$, and $c=7.7561$~\AA.} \end{table} The electronic structure of cubic SrRuO$_3$ has been investigated previously by several groups\cite{Singh_SrRuO3:1996,Allen_et_al:1996,Zayak/Rabe:2006} within the LSDA; here we briefly summarize their conclusions and remark that our results are consistent with the earlier calculations. A complete comparison of the electronic structure of cubic SrRuO$_3$ calculated with both {\sc vasp} and {\sc siesta} is made in Section \ref{RD}. In all cases, a metallic ferromagnetic ground state is found to be stable, with strong Ru 4$d$ character at the Fermi level. Substantial hybridization occurs between the O 2$p$ states and the Ru 4$d$ states, and no energy gaps are observed in the densities of states. The calculated magnetic moment is also always reduced from the fully ionic limit of 2~$\mu_B$. \subsection{LSDA+$U$} The first ``beyond-LSDA'' method that we use to treat the exchange and correlation (XC) within DFT is the local spin density approximation with Hubbard $U$ (LSDA+$U$).\cite{Anisimov/Aryasetiawan/Liechtenstein:1997} Here we use the spherically averaged form of the rotationally invariant LSDA+$U$ introduced by Dudarev {\it et al.},\cite{Dudarev_et_al:1998} in which only one effective Hubbard parameter, $U_{\rm eff} = U - J$, is used, where $U$ and $J$ are the spherically averaged Hubbard repulsion and intra-atomic exchange for electrons with the angular momentum of interest, in this case the Ru 4$d$ states. 
We treat the double-counting term within the fully-localized limit and note that this should more correctly describe the insulating side of the metal-insulator transition that we study here; an improved description of the metallic side might be achieved using the recently introduced interpolation between the around-mean-field and fully-localized-limit extremes.\cite{Petukov_et_al:2003} Within these approximations, the LSDA+$U$ correction to the LSDA potential is \begin{equation} \Delta V(mm' \sigma) = - (U-J) \left( \rho^{\sigma}_{mm'} - \frac{1}{2} \delta_{mm'} \right) \quad, \label{FLL} \end{equation} where $m$, $m'$ are the orbital indices, $\sigma$ is the spin index, and $\rho^{\sigma}_{mm'}$ is the orbital occupation matrix. The effect of the LSDA+$U$ correction given by Eq.~\ref{FLL} is particularly transparent in the limit of a diagonal $\rho^{\sigma}_{mm'}$ with orbital occupancies 1 or 0: Occupied orbitals experience a potential which is lower in energy by $(U-J)/2$ compared with the LSDA, and the potential for unoccupied orbitals is raised by the same amount. In this study we varied $U_{\rm eff}$ from 0 to 6~eV for the Ru $d$ states (the standard LSDA corresponds to $U_{\rm eff}=0$~eV). Structural optimizations were also performed within the LSDA+$U$ approximation; however, negligible structural changes compared with the LSDA were observed. \subsection{Self Interaction Corrections} Our second approach for extending the treatment of the exchange and correlation is to correct for the spurious self-Coulomb and self-exchange interactions which arise within the LSDA, using the pseudopotential self-interaction corrected (pseudo-SIC) method.\cite{Filippetti/Spaldin:2003, Vogel/Kruger/Pollmann:1996,Pemmaraju/Sanvito:2007} These self-interaction errors are small for materials with delocalized electronic states, but can be significant in systems with localized electrons, where the interaction of an electron with itself is large. 
Since the SIC in a periodic, extended system is not uniquely defined, many different methods have been proposed to remove the self-interaction in DFT calculations for solids (for a review see Ref.~\onlinecite{Stengel_Spaldin:2008}). The procedure followed in the pseudo-SIC method is to: \begin{enumerate} \item Project the occupied Bloch states onto the basis of the pseudo-atomic orbitals. \item Correct the potential for each Bloch state by the SIC for the pseudo-atomic orbital, weighted by the projection and scaled to account for the relaxation energy. \end{enumerate} Note that only the valence bands are corrected, since the empty conduction bands, derived from orbitals whose occupation numbers are close to zero, are not self-interacting. This is in contrast to the LSDA+$U$ method, in which the occupied bands are lowered in energy and the unoccupied bands raised. In principle, however, the two formalisms would yield equivalent results if a suitable $U-J$ (corresponding to the SIC energy) were applied to all orbitals in the LSDA+$U$ calculation. Indeed, whether the deficiencies of the LSDA for strongly correlated systems derive from the absence of the Hubbard $U$, from the self-interaction error, or from both remains an open question. The pseudo-SIC method has some advantages over LSDA+$U$, since it does not require a choice of which orbital to correct, nor of the $U$ or $J$ parameters, and it can be applied readily to both magnetic and non-magnetic systems. We note that this is the first application of pseudo-SIC to an itinerant correlated system, so the comparison with our LSDA$+U$ results provides a test of the pseudo-SIC method. 
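The orbital-dependent shift of Eq.~\ref{FLL}, and its contrast with the pseudo-SIC scheme just described, can be made concrete with a few lines of code. The occupation matrix here is an idealized, hypothetical one (integer fillings), not one taken from our calculations:

```python
import numpy as np

def lsda_u_shift(rho, U_eff):
    """Fully-localized-limit (Dudarev) LSDA+U potential correction,
    dV_{mm'} = -(U-J) * (rho_{mm'} - delta_{mm'}/2),
    for a single spin channel with occupation matrix rho."""
    rho = np.asarray(rho, dtype=float)
    return -U_eff * (rho - 0.5 * np.eye(len(rho)))

# Idealized t2g minority channel: one orbital filled, two empty.
rho = np.diag([1.0, 0.0, 0.0])
dV = lsda_u_shift(rho, U_eff=1.0)
# Occupied orbital is lowered by (U-J)/2, empty orbitals raised by (U-J)/2.
print(np.diag(dV))
```

For $U_{\rm eff}=1$~eV this lowers the occupied orbital by 0.5~eV and raises the empty ones by 0.5~eV, which is exactly the transparent integer-occupancy limit discussed after Eq.~\ref{FLL}; pseudo-SIC, by contrast, corrects only the occupied manifold.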
\section{\label{RD}Results \& Discussions} As we have mentioned, the electronic structure of SrRuO$_3$ has been investigated previously using the LSDA.\cite{Singh_SrRuO3:1996,Allen_et_al:1996,Zayak/Rabe:2006} Here we first revisit the LSDA with our own calculations, with an emphasis on understanding discrepancies between the experimentally measured photoemission results\cite{Kim_et_al:2004,Toyota_et_al:2005} and the calculated LSDA electronic structure of the orthorhombic material. We then extend our study to the two ``beyond LSDA'' approaches to examine correlation effects in {\it bulk} $Pbnm$ and $Pm\bar{3}m$ SrRuO$_3$. Finally, we examine unsupported {\it films} of SrRuO$_3$ in both structures in order to analyze the nature of the experimentally observed metal-insulator phase transformation. {\it Cubic LSDA$\quad$} The total energies were calculated for both the ferromagnetically ordered and non-magnetic states of cubic SrRuO$_3$ using the optimized lattice parameters. With both electronic structure codes the ferromagnetic ground state is always found to be lower in energy ({\sc vasp}: 11.5~meV, {\sc siesta}: 31.6~meV). Both codes yield very similar electronic structures. The density of states obtained using the {\sc vasp} code is shown in Figure \ref{fig:cubic_dos}. The valence band is composed largely of O 2$p$ states hybridized with Ru 4$d$ states, with the oxygen states found predominantly in the lower regions of the valence band and the Ru states dominating at the Fermi energy. The large peak in the DOS near the Fermi level is caused by the fairly flat Ru $t_{2g}$ bands there, while the strongly hybridized $e_g$ orbitals form broader bands at the bottom of the valence and conduction bands. The Sr $4d$ states are found around 5~eV above the Fermi energy. 
\begin{figure} \includegraphics[width=0.45\textwidth]{fig2} \caption{\label{fig:cubic_dos}(Color online) The total (gray line) and partial (shaded) spin-resolved densities of states for cubic SrRuO$_3$ calculated within the LSDA using {\sc vasp}. ({\sc upper}) Sr 4$d$ states, ({\sc middle}) Ru 4$d$ states [the $t_{2g}$ and $e_{g}$ (unshaded, bold line) symmetries are shown], and ({\sc lower}) O 2$p$ states. The dashed line at 0~eV denotes the Fermi level.} \end{figure} The exchange splitting causes an energy shift between the majority spin and minority spin states; at the $\Gamma$-point a splitting of $\approx$0.50~eV is observed in the Ru 4$d$ states, and of $\approx$0.20~eV in the O $2p$ states. The calculated magnetic moments per formula unit are found to be 1.09~$\mu_B$ ({\sc vasp}) and 1.26~$\mu_B$ ({\sc siesta}). With both implementations of DFT, approximately 70\% of the moment is found on the Ru atoms, with the remaining distributed about the oxygen network. The slight enhancement of the magnetic moment calculated with {\sc siesta} compared to {\sc vasp} is due to {\sc siesta}'s slight downward shift in energy of the Ru $t_{2g}$ band relative to the Fermi energy. We note that the consistency between the two DFT flavors is essential to our later discussion of the effect of electron-electron correlations in the electronic structure of SrRuO$_3$, since the LSDA+$U$ approach has been implemented in the {\sc vasp} code, and the pseudo-SIC method in the {\sc siesta} code. {\it Orthorhombic LSDA$\quad$} Using the optimized LSDA lattice parameters for the $Pbnm$ structure, we find that the ferromagnetic ground state is 6.34~meV/f.u.\ lower in energy than the constrained paramagnetic structure, and additionally is 188~meV/f.u.\ ({\sc vasp}) and 150~meV/f.u.\ ({\sc siesta}) lower in energy than the ferromagnetic cubic phase. 
This energy stabilization can be associated with the oxygen octahedral tiltings and rotations, and agrees well with previous first-principles studies\cite{Singh_SrRuO3:1996,Zayak/Rabe:2006} that used experimental lattice parameters\cite{Jones:1989} (the LSDA underestimates the lattice parameters by only about 1\%). It is also consistent with the experimental observation of ferromagnetic SrRuO$_3$ in the distorted GdFeO$_3$ structure.\cite{Jones:1989} \begin{figure} \includegraphics[width=0.45\textwidth]{fig3} \caption{\label{fig:ortho_dos}(Color online) The total (gray line) and partial spin-resolved (shaded) densities of states for orthorhombic SrRuO$_3$ calculated within the LSDA using {\sc vasp}. ({\sc upper}) Sr 4$d$ states, ({\sc middle}) Ru 4$d$ states [the $t_{2g}$ and $e_{g}$ (unshaded, bold line) symmetries are shown], and ({\sc lower}) O 2$p$ states. } \end{figure} The (P)DOS for the orthorhombic structure are shown in Fig.\ \ref{fig:ortho_dos}, and can be seen to be similar to those of the cubic structure discussed earlier (Fig.~\ref{fig:cubic_dos}).\footnote{ Although in this symmetry the RuO$_6$ cages are rotated, we retain the standard spherical harmonics for the 4$d$ orbitals described in terms of an octahedral crystal field for a cubic perovskite. The transformation from the cubic to the orthorhombic $d$-orbital reference frame requires a rotation of $\pi/4$ about the [001]-direction; additionally, the octahedral units retain almost all of their integrity, i.e.\ the apical and equatorial Ru-O bond lengths are identical to within 0.01~\AA. } Consistent with the reduction in Ru 4$d$ -- O 2$p$ overlap resulting from the tiltings and rotations, the bandwidths are slightly narrower in the orthorhombic structure, with the $t_{2g}$ bandwidth reduced by 0.35~eV, the $e_g$ by 1.5~eV, and the O 2$p$ by 0.60~eV. This results in a pseudo-gap opening in the minority $e_g$ bands at $\approx$-2.2~eV and a 0.20~eV gap opening $\approx$0.80~eV above the Fermi level. 
Interestingly, the Ru $4d$ exchange splitting is reduced slightly to 0.30~eV at $\Gamma$. This is accompanied by a reduction in the magnetic moment compared with the cubic structure, to 0.79~$\mu_B$/f.u.\ ({\sc vasp}) or 0.92~$\mu_B$/f.u.\ ({\sc siesta}). As noted, the spontaneous magnetization in the bulk (films) has been reported to be near 1.6~$\mu_B$ (1.4~$\mu_B$); the LSDA underestimate is likely the result of the usual LSDA overbinding leading to enhanced Ru 4$d$ -- O 2$p$ covalency. Note that this underestimate with respect to experiment\cite{Kanbayasi:1976} is significantly larger than that usually found for $3d$ transition metal oxides. It is all the more notable because the orbital angular momentum is expected to be strongly quenched for 4$d$ orbitals by the cubic crystal field. We comment on the effect of including spin-orbit coupling later. To conclude the LSDA discussion, we compare our first-principles LSDA results with recent photoemission spectroscopy (PES) data\cite{Park_et_al:2004,Okamoto_et_al:1999,Fujioka_et_al:1997} with the goal of identifying which features are driven by correlation. In an ideal single-electron picture, the measured PES would consist of narrow peaks corresponding to the energies required to excite non-interacting electrons from the valence band into the continuum. However, the photoemission energies are more accurately interpreted as differences between two many-body $N$-electron states: the ground state, and the excited state with a photoelectron and hole. The effect of the many-body interactions is to broaden the one-electron peaks and shift spectral weight into so-called quasiparticle peaks. The strongest reduction in spectral weight from correlation effects occurs for the so-called coherent peaks near $\epsilon_F$, and is accompanied by a transfer of the spectral weight to higher-energy features (incoherent peaks). 
Redistribution of the incoherent spectral weight into a well-defined satellite structure is indicative of strong correlations, whereas a redistribution into the background spectral distribution with a renormalization of the bandwidth indicates weak correlations. \begin{figure} \includegraphics[width=0.35\textwidth]{fig4} \caption{\label{fig:ortho_u_exp}(Color online) The experimental PES spectra for bulk polycrystalline\cite{Okamoto:private} SrRuO$_3$ (filled circles) and a 100-monolayer SRO film\cite{Toyota:copyright} (triangles) grown on SrTiO$_3$ are compared to the calculated LSDA(+$U$) and pseudo-SIC densities of states. The calculated DOS are broadened with an energy-dependent Lorentzian (FWHM = $0.1\left|\epsilon - \epsilon_F\right|$~eV) and a Gaussian function (0.34~eV FWHM). An energy-dependent parabolic background has also been added.} \end{figure} In Figure \ref{fig:ortho_u_exp}, we show two experimentally measured spectra (see Refs.\ \onlinecite{Toyota_et_al:2005} and \onlinecite{Okamoto:private} for further information on the sample preparation) and our calculated LSDA results. First we comment on the discrepancies between the bulk polycrystalline spectrum and that of the thin film. Comparing the experimental spectra, we see that the 100-monolayer thin film (which is representative of the bulk material) shows stronger coherent peaks than the highly broadened structure of the polycrystalline sample. In both cases, however, the spectra are dominated by three principal features arising from the Ru $t_{2g}$ states near the Fermi level and the O 2$p$ states between -8 and -2~eV. Looking more closely around the Fermi level, the polycrystalline sample has substantially reduced spectral weight, whereas near -1.3~eV its weight is enhanced compared to the film. 
This shift in the spectral weight to the incoherent peak agrees well with previous experimental comparisons\cite{Kim_et_al:2005} made between SrRuO$_3$ films and polycrystals, and is attributed to the creation of near-surface states induced during {\it in situ} scraping and not to intrinsic correlation effects. Additionally, grain boundaries and compositional defects are known to reduce the coherent peak features in polycrystalline samples. For the remainder of this study, we therefore restrict our comparison of the PES data to the 100-monolayer film, since it more accurately describes the intrinsic electronic structure. By comparing to our calculated densities of states, we can assign the peak features in the PES data to the corresponding electronic states. In order to make the comparison, we convolve the calculated DOS with an energy-dependent Lorentzian function [full width at half maximum (FWHM) = $0.1\left|\epsilon - \epsilon_F\right|$~eV] to account for lifetime broadening. A Gaussian function with a FWHM of 0.34~eV is used to account for the instrumental resolution, and an energy-dependent parabolic background is added. We find that the bands between approximately -2.5~eV and -8~eV are due to the O 2$p$ states, with the peak of the non-bonding state centered at -3~eV. In the range from -8~eV up to the Fermi level lie the occupied Ru 4$d$ states, with the $t_{2g}$ state lying across the Fermi energy beginning at -3~eV. Some important discrepancies exist between our LSDA results and the spectroscopic data: The Sr 4$d$ states are positioned approximately 1.5~eV lower in energy than is expected from the experimental BIS and XAS spectra (not shown).\cite{Park_et_al:2004,Okamoto_et_al:1999} The spread of the O 2$p$ states is also underestimated by 2~eV. 
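The broadening procedure described above can be sketched as follows. This is a minimal implementation assuming a uniform energy grid and omitting the parabolic background; all numerical values besides the two FWHM parameters quoted in the text are chosen for illustration only.

```python
import numpy as np

def broaden_dos(E, dos, E_F=0.0, gauss_fwhm=0.34, life_coeff=0.1):
    """Broaden a calculated DOS for comparison with photoemission:
    each state at energy e0 is spread by a Lorentzian of FWHM
    life_coeff*|e0 - E_F| (lifetime broadening), then the result is
    convolved with a fixed-width Gaussian (instrumental resolution)."""
    dE = E[1] - E[0]
    # Energy-dependent Lorentzian: sum one kernel per source energy.
    out = np.zeros_like(dos)
    for e0, w in zip(E, dos):
        # HWHM; small floor avoids a zero-width kernel exactly at E_F
        gamma = max(0.5 * life_coeff * abs(e0 - E_F), 1e-3)
        out += w * dE * gamma / np.pi / ((E - e0) ** 2 + gamma ** 2)
    # Fixed Gaussian broadening via discrete convolution.
    sigma = gauss_fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    kernel = np.exp(-0.5 * ((E - E[len(E) // 2]) / sigma) ** 2)
    kernel /= kernel.sum()
    return np.convolve(out, kernel, mode="same")

# Toy DOS: a sharp peak 1 eV below the Fermi level.
E = np.linspace(-4.0, 2.0, 601)
dos = np.exp(-(((E + 1.0) / 0.02) ** 2))
smooth = broaden_dos(E, dos)
```

The sharp peak comes out lowered and widened while its integrated weight is approximately preserved, which is the qualitative effect visible in the broadened curves of Fig.~\ref{fig:ortho_u_exp}.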
The most drastic difference, and one examined many times in the literature, occurs in the Ru 4$d$ states, where indications of correlations are found. From Fig.\ \ref{fig:ortho_u_exp} it is clear that the $t_{2g}$ states at $\epsilon_{F}$ are overemphasized in the LSDA calculation. Experimentally, the Ru 4$d$ spectrum shows only a weak coherent peak and is 1.5~eV broader than the LSDA predicts. As mentioned, the signature of strong correlations observed in PES data is the strong renormalization (or even absence) of the quasiparticle peak near $\epsilon_F$, or large satellite peaks. The large coherent $t_{2g}$ peak about 0.5~eV below the Fermi level, together with the substantial incoherent feature near 1.3~eV, is good evidence for localized electronic states arising from strong correlation effects. We also note that Santi and Jarlborg\cite{Santi_Jarlborg:1997} suggest that this suppression of the $t_{2g}$ states is possibly due to small matrix elements for the $d \rightarrow f$ and $d \rightarrow p$ transitions. Using our two ``beyond-LSDA'' techniques to introduce correlation effects, we attempt to address whether this incoherent feature, not found in the LSDA calculations, is indeed due to strong electron-electron interactions. More recent ultraviolet photoemission spectroscopy (UPS) data\cite{Siemons_et_al:2007} suggest that the Ru stoichiometry plays a significant role in determining the spectral weight and intensity of the $t_{2g}$ peak at the Fermi level. For stoichiometric SrRuO$_3$ films, the spectral intensity near the Fermi level is in much better agreement with the calculated PDOS, while Ru-deficient samples have a reduced intensity. These facts suggest that previous comparisons may have been made with Ru-deficient samples, a deficiency that may be caused by a high partial pressure of oxygen during growth. 
Although Ru cation deficiency appears to explain the discrepancy with the experimentally measured spectra, it is difficult to fully separate out correlation effects that implicitly result from changes in stoichiometry; e.g.\ the $d$ bandwidth can be varied by changing the volume of the unit cell via the changes in O--Ru--O bond angles which occur upon Ru vacancy formation. For example, Siemons and co-workers found an enhancement of the incoherent peak at approximately 1.5~eV below the Fermi level in ruthenium-poor samples measured with ultraviolet photoemission spectroscopy.\cite{Siemons_et_al:2007} Future first-principles calculations may be able to identify the role of these defects. It is clear that the LSDA poorly describes the electronic and magnetic structure of bulk SrRuO$_3$, suggesting that some underlying physics is missing from the local spin density approximation. We next explore two extensions of that description to include electron-electron correlation effects. \subsection{``Beyond LSDA''} We now explicitly add correlation effects into our electronic structure calculations for SrRuO$_3$ using the two methods outlined above. The pseudo-SIC and LSDA+$U$ methods give very similar results when $U_{\rm eff}=1.0$~eV is used. Therefore we first compare the correlated and LSDA results, and later point out the small differences between the results from the two correlated formalisms. {\it Orthorhombic $\quad$} Figure \ref{fig:asic_v_u1_pbnm} shows the densities of states calculated with LSDA+$U$ with $U_{\rm eff}=1.0$~eV and with the pseudo-SIC method for $Pbnm$ SrRuO$_3$. Compared with the LSDA, the correlated bands are narrower, with energy gaps appearing in both spin channels. 
\begin{figure} \includegraphics[width=0.45\textwidth]{fig5} \caption{\label{fig:asic_v_u1_pbnm}(Color online) The total (gray line) and partial spin-resolved (shaded) density of states for orthorhombic SrRuO$_3$ calculated with $U_{\rm eff}=1$~eV (left) and pseudo-SIC (right) are shown in each panel. ({\sc upper}) Sr 4$d$ states, ({\sc middle}) Ru 4$d$ states and ({\sc lower}) O 2$p$ states.} \end{figure} The inclusion of correlations causes a 70\% drop in the total DOS at the Fermi level compared with the LSDA, with the result that the contribution of the Ru 4$d$ states is almost entirely derived from the minority spin channel, and the material is close to half-metallicity. This significantly enhances the magnetic properties compared to the LSDA, increasing the magnetic moment per formula unit to around 1.0~$\mu_B$, and enhancing the exchange splitting of the Ru $4d$ states at $\Gamma$ to 0.45~eV ($\approx$0.30~eV in the LSDA). In addition, the peak positions of the densities of states with correlations included are in better agreement with the experimental spectra (Fig.~\ref{fig:ortho_u_exp}), although the intensity of the O 2$p$ peak at $\approx$ -7~eV is still too low compared with that of the Ru $t_{2g}$ peak. The main difference between the electronic structures calculated with the $U_{\rm eff}=1$~eV and pseudo-SIC methods is a larger bandwidth of the occupied orbitals, by $\approx$1.7~eV, in the pseudo-SIC calculation. This has the greatest effect on the Ru 4$d$ bands and the oxygen 2$p$ bands near -8.0~eV. As a result, the pseudo-SIC shows better agreement with the PES in the bandwidth of the O 2$p$ states between -8 and -4~eV, and the $t_{2g}$ states at the Fermi level are more accurately suppressed. \begin{figure} \includegraphics[width=0.4\textwidth]{fig6} \caption{\label{fig:ortho_U_W}(Color online) The orbital bandwidth dependence on $U_{\rm eff}$ for orthorhombic SrRuO$_3$. 
The minority spin states are shown by the unshaded symbols and the lines are a guide to the eye. Using the pseudo-SIC method, the majority (minority) bandwidths are slightly larger than with the LSDA+$U$ method, but the relative ratios are consistent: Ru 4$d$ $t_{2g}=3.30$ $(3.50)$~eV, Ru 4$d$ $e_g = 4.75$ $(4.60)$~eV, and O 2$p = 3.25$ $(3.50)$~eV.} \end{figure} In order to understand how the hybridization changes as electron-electron correlation effects are included, we plot in Figure \ref{fig:ortho_U_W} the change in bandwidth of the Ru 4$d$ and O 2$p$ states as a function of $U_{\rm eff}$ for orthorhombic SrRuO$_3$. As the amount of correlation is increased through the $U_{\rm eff}$ term, the majority Ru $t_{2g}$ and O 2$p$ bandwidths are strongly reduced (both by approximately 1.80~eV), and upon narrowing, half-metallic behavior is observed for $U_{\rm eff}>2$~eV. On the other hand, only a weak dependence of the orbital bandwidth is observed for the minority spin states. The valence bandwidth never narrows sufficiently in the bulk material (due mostly to the insensitivity of the minority $t_{2g}$ bands to correlation effects) to open an insulating gap in both spin channels. Since the half-metallic ground state that we find for $U_{\rm eff}>2$~eV is not observed experimentally, and motivated also by the observation of large magnetic anisotropy in Kerr rotation measurements\cite{Herranz_et_al:2005} on SrRuO$_3$, we have repeated our calculations with spin-orbit coupling (SOC) effects included. \begin{figure} \includegraphics[width=0.45\textwidth]{fig7} \caption{\label{fig:ortho_bands}(Color online) LSDA+$U$ band structures along $\Gamma-X$ calculated with {\sc vasp} for ferromagnetic orthorhombic SrRuO$_3$. The horizontal gray line marks the Fermi level. Majority (minority) bands are shown as the bold (dashed) lines. 
The highlighted bands indicate the filling of the majority $t_{2g}$ hole pocket, which gives rise to the observed half-metallicity at large Hubbard $U$ values. For $U_{\rm eff}=2$~eV, the open circles show the results of calculations including spin-orbit coupling.} \end{figure} In Figure \ref{fig:ortho_bands} we plot the band structure along $\Gamma-X$ in the Brillouin zone as a function of increasing $U_{\rm eff}$. We see that the $t_{2g}$ bands move down in energy with increasing $U_{\rm eff}$, forming a small hole pocket which becomes completely filled at $U_{\rm eff}=2$~eV, giving the half-metallic behavior. We note that without careful sampling of the Brillouin zone this hole pocket is often missed, and half-metallic behavior can be prematurely predicted. Furthermore, we superimpose the band structure calculated with $U_{\rm eff}=2$~eV and spin-orbit coupling in Figure \ref{fig:ortho_bands}. We find here that the degeneracy of the $t_{2g}$ bands is completely removed, and the highest occupied majority band is pushed only 0.05~eV higher in energy. Furthermore, this splitting decreases at larger $U_{\rm eff}$ values. These results indicate that the inclusion of spin-orbit coupling does not have a large effect on the band structure. Although spin is not strictly a good quantum number, due to quenching of the orbital angular momentum by the crystal field the total angular momentum is well approximated by the spin-only component, and therefore the calculated proximity to half-metallicity in SrRuO$_3$ is robust to spin-orbit coupling effects. Finally, we investigate the enhancement of the magnetic properties as $U_{\rm eff}$ is increased. \begin{figure} \includegraphics[width=0.45\textwidth]{fig8} \caption{\label{fig:mag_U}(Color online) Calculated spin polarization at the Fermi level ($P_0^{\epsilon_F}$) for orthorhombic and cubic SrRuO$_3$ calculated with {\sc vasp} as a function of $U_{\rm eff}$. 
The pseudo-SIC calculations give a spin polarization of +2.0\% and 8.8\% for each structure, respectively. ({\sc inset}) Calculated magnetic moment per formula unit for each structure type with LSDA+$U$. The pseudo-SIC calculations yield magnetic moments of 1.99 and 1.77~$\mu_B$/f.u.\ for each structure, respectively.} \end{figure} In Figure \ref{fig:mag_U} (inset) we show the magnetic moment per formula unit as a function of increased correlation for both crystal structures of SrRuO$_3$. For example, the magnetic moment per Ru atom is found to be 1.97~$\mu_B$ with $U_{\rm eff}=1.0$~eV and 1.99~$\mu_B$ with the pseudo-SIC. At $U_{\rm eff} \geq 2$~eV the moment saturates at the localized ionic value (2.0~$\mu_B$). {\it Cubic}$\quad$ We complete this discussion by describing the differences in the bulk cubic electronic structure when correlation effects are added, in order to isolate the contribution of the octahedral distortions in the orthorhombic structure to the bandwidth narrowing. This analysis will provide the framework for exploring the MI transition in the SrRuO$_3$ thin films. Overall, the weight and shape of the total density of states with correlations included is consistent with that calculated in the LSDA, with the exception that the large density of states at the Fermi level (majority $t_{2g}$ states) is pushed lower in energy. As was observed in the orthorhombic structure, similar DOS are found with both correlation methods, and in general the occupied orbitals with the pseudo-SIC are broadened by 1.5~eV in energy compared to those calculated with the LSDA+$U$. As in the orthorhombic structure, the minority bandwidths in the cubic case are insensitive to the choice of $U_{\rm eff}$, while the majority O 2$p$ and Ru $t_{2g}$ bandwidths narrow considerably (1.2 and 1.8~eV, respectively). 
Furthermore, the calculated exchange splittings of the various states are overall larger for the cubic structure, consistent with the electronic structures calculated with the LSDA. The calculated magnetic moment for $U_{\rm eff}=1$~eV is 1.64~$\mu_B$, and agrees well with that from the pseudo-SIC method (1.77~$\mu_B$). Again, for values of $U_{\rm eff} > 2$~eV a half-metallic ground state becomes stable, while with the pseudo-SIC a fully metallic ground state is always maintained. To summarize the bulk SrRuO$_3$ results, each of the two ``beyond LSDA'' methods described here improves the description of the electronic and magnetic structure. However, the precise experimental spectra are not fully reproduced, although correct peak assignments can be made. The addition of a small Hubbard term, $U_{\rm eff}=0.6$ (1.2)~eV for the orthorhombic (cubic) structure, or alternatively the correction of the self-interaction error in the LSDA, also improves the Ru $t_{2g}$ bandwidths with respect to experiment.\cite{Maiti/Singh:2005} The total width of the O 2$p$ band structure is also increased to approximately 7~eV, in agreement with the spectral weights. We therefore suggest that SrRuO$_3$ can best be described as {\it weakly strongly-correlated}. Finally, as stated earlier, the intensity at $\epsilon_{F}$ is decreased in comparison to the LSDA, although it is still larger than experimentally observed. \subsection{\label{sec:spin-polarization}Spin Polarization \& Transport Properties} SrRuO$_3$ has been experimentally reported\cite{Worledge/Geballe:2000} to belong to the class of negatively spin-polarized materials, characterized by a greater number of {\it minority} spins at the Fermi surface, which are aligned anti-parallel to the bulk magnetization. 
However, the magnitude of the spin polarization at the Fermi level remains controversial within the experimental community, due in part to the different definitions of the spin polarization (resulting from the different experimental techniques used to probe this quantity), as well as to difficulties in performing the experiments. Furthermore, the theoretical community has also not converged on the magnitude of the spin polarization, due to the sensitivity of the Ru $t_{2g}$ states near $\epsilon_F$ to the choice of exchange-correlation functional. In this section, we perform first-principles transport calculations on orthorhombic and cubic SrRuO$_3$, and compare our results to the available data in the literature. We also describe the various definitions of the spin polarization commonly used in the literature, and relate them to calculated {\it ab initio} quantities. The spin polarization at the Fermi level $P_0^{\epsilon_F}$ can be calculated from the density of states at the Fermi level ($N_{\epsilon_F}$) by the following ratio, \begin{equation} P_0^{\epsilon_F} = \frac{N_{\epsilon_F}^\uparrow-N_{\epsilon_F}^\downarrow}{ N_{\epsilon_F}^\uparrow+N_{\epsilon_F}^\downarrow}\quad. \label{eqn:polarization} \end{equation} Using this definition with the LSDA, the sign of the spin polarization for orthorhombic SrRuO$_3$ is ambiguous: we find that with the planewave code $P_0^{\epsilon_F}=-2.95\%$, while the local orbital code gives a positive spin polarization of 2.00\%. In contrast, for the cubic structure we find a positive spin polarization ($P_0^{\epsilon_F}$) in both first-principles calculations, +1.3\% ({\sc vasp}) and +8.8\% ({\sc siesta}). The reason for this discrepancy is the sensitivity of the exchange splitting of the Ru 4$d$ bands near the Fermi level to the structure. The majority $t_{2g}$ band is positioned very close to the band edge, and its precise location is sensitive to the finer details of the DFT calculation. 
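As an illustrative sketch of Eq.\ \ref{eqn:polarization} (not part of our calculations; the DOS values below are hypothetical), the following shows how a percent-level imbalance in the spin-resolved DOS is enough to flip the sign of $P_0^{\epsilon_F}$:

```python
def spin_polarization(n_up, n_down):
    """Spectroscopic spin polarization P0 at the Fermi level:
    (N_up - N_down) / (N_up + N_down)."""
    return (n_up - n_down) / (n_up + n_down)

# Hypothetical DOS values (states/eV); a percent-level imbalance in
# either direction flips the sign of P0.
print(spin_polarization(1.02, 0.98))  # small positive polarization (+2%)
print(spin_polarization(0.98, 1.02))  # same magnitude, opposite sign (-2%)
```

Because the majority and minority DOS at $\epsilon_F$ are nearly balanced, small numerical differences between codes can change the sign of $P_0^{\epsilon_F}$, which is the discrepancy noted above.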
As a result, the large spread in the calculated spin polarization is not surprising, and since the magnitudes of the spin polarizations are small, changes of a few percent can give a change of sign. When correlations are introduced, $P_0^{\epsilon_F}$ increases in magnitude significantly and is negative in all cases. The spin polarization as a function of $U_{\rm eff}$ for the orthorhombic and cubic structures is plotted in Figure \ref{fig:mag_U}. The spin polarization calculated for the orthorhombic structure with the pseudo-SIC method is -85.7\%, compared with a value of +2.00\% when the SI error is not corrected. This value agrees with that obtained by the LSDA+$U$ when $U_{\rm eff}$=1.4~eV. We previously found that smaller $U_{\rm eff}$ values optimize the agreement between pseudo-SIC and LSDA+$U$ band structures and magnetic properties, suggesting that the pseudo-SIC transport results should be regarded as providing an upper bound on the magnitude of the spin polarization. For $U_{\rm eff}$ exceeding a critical value of 1.6~eV (3.0~eV) the half-metallic ground state becomes the most stable solution for the orthorhombic (cubic) structure, as an energy gap opens in the majority t$_{2g}$ band and $|P_0^{\epsilon_F}|$ reaches 100\%. Despite being the most natural definition of spin polarization at the Fermi level, determining $P_0^{\epsilon_F}$ as defined in Eq.\ \ref{eqn:polarization} is a non-trivial experimental process, since the spectroscopic measurements required typically have poor energy resolution. As knowledge of the degree of spin polarization in a ferromagnet is crucial for its use in spintronics, several different experimental methods have been developed in order to determine this quantity. The {\it transport} spin polarization can be defined as \begin{equation} P = \frac{I^\uparrow - I^\downarrow}{I^\uparrow + I^\downarrow}\quad, \label{eqn:current_polarization} \end{equation} where $I^\sigma$ is the spin dependent current. 
However, $I^\sigma$ is not directly observable and must be determined indirectly. The transport spin polarization then depends on the experiment in question, and in particular on whether the transport is in the ballistic or diffusive regime. In the ballistic limit the current is proportional to $N_{\epsilon_{F}} \nu_{F}$, while for diffusive transport it is proportional to $N_{\epsilon_{F}} \nu_{F}^2$ (assuming both spin species have the same relaxation time), where $\nu_{F}^\sigma$ are the spin dependent Fermi velocities. Therefore, the transport spin polarization at the Fermi level can be redefined as \begin{equation} P_n^{\epsilon_F} = \frac{N_{\epsilon_{F}}^{\uparrow} \nu_{F}^{n \uparrow} - N_{\epsilon_{F}}^{\downarrow} \nu_{F}^{n \downarrow}}{ N_{\epsilon_{F}}^{\uparrow} \nu_{F}^{n \uparrow} + N_{\epsilon_{F}}^{\downarrow} \nu_{F}^{n \downarrow}}\quad \label{eqn:transport_pol} \end{equation} where $n=1$ for ballistic transport or $n=2$ for diffusive transport\cite{Mazin:1998,Coey:2004}. If $n=0$, this definition reduces to that of the spectroscopic polarization, $P_0^{\epsilon_F}$. An additional definition of polarization is used in Meservey-Tedrow style tunneling experiments. Here the spin dependent DOS are weighted by their corresponding tunneling matrix elements. Such an experiment has been performed for SRO and reports a spin polarization of approximately -10\%.\cite{Worledge/Geballe:2000} Inverse tunnel magnetoresistance measurements also agree that SRO is negatively spin polarized.\cite{Takahashi/Tokura_et_al:2005} This is in agreement with the majority of the calculations, which find that SRO is a negatively spin polarized material at the Fermi surface. 
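A minimal numerical sketch of Eq.\ \ref{eqn:transport_pol} (the DOS and velocity values below are hypothetical, chosen only to illustrate how higher powers of the Fermi velocity drive the polarization more negative):

```python
def transport_polarization(n_up, v_up, n_down, v_down, n=0):
    """P_n at the Fermi level: n=0 spectroscopic, n=1 ballistic,
    n=2 diffusive (DOS weighted by the n-th power of the velocity)."""
    w_up = n_up * v_up ** n
    w_down = n_down * v_down ** n
    return (w_up - w_down) / (w_up + w_down)

# Hypothetical DOS (states/eV) and Fermi velocities (arb. units): a slight
# minority-DOS excess combined with faster minority carriers drives P_n
# increasingly negative as n grows from 0 to 2.
for n in (0, 1, 2):
    print(n, transport_polarization(1.0, 0.8, 1.1, 1.2, n))
```

The same function with non-integer $n$ covers the intermediate transport regime discussed below for the PCAR experiments.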
The point-contact Andreev reflection (PCAR) technique, which is based on the process of Andreev reflection,\cite{Andreev:1964} and developed as an experimental method in the work of Soulen {\it et al.}\cite{Soulen_et_al:1998} and Upadhyay {\it et al.},\cite{Upadhyay_et_al:1998} has been used successfully to determine the magnitude of the transport spin polarization, although it is not sensitive to its sign. Experimental results\cite{Raychaudhuri/Beasley:2003,Nadgorny/Eom:2003,Sanders:2005} using this method report values ranging between 51\% and 60\%. It should be noted that in the Andreev experiment the polarization is not uniquely defined, in that it must be extracted from the data through a fitting procedure and involves terms that describe the transmittivity of the interface between the ferromagnet and the superconductor. These parameters are typically difficult to determine precisely and consequently introduce further uncertainty in the experimental spin polarization. Furthermore, it is important to note that in all PCAR experiments, it is necessary to establish whether the transport is in the ballistic, diffusive or intermediate regime (non-integer $n$), which ultimately depends on the transmittivity of the interface. The experimental results for SRO are further complicated by the fact that the transport in the system has been measured in both regimes. \begin{table} \begin{ruledtabular} \begin{tabular}{lcc} & Orthorhombic & Cubic \\ \cline{2-3} & \multicolumn{2}{c}{$P_n^{\epsilon_F}$ \% (LSDA, pseudo-SIC)} \\ \hline $n=0$ & +2.00, -85.7 & +8.80, -16.1 \\ $n=1$ & -1.44, -92.9 & -8.99, -50.9 \\ $n=2$ & -15.1, -98.0 & -32.9, -79.5\\ \end{tabular} \end{ruledtabular} \caption{\label{tab:polarization} Transport spin polarizations calculated with {\sc smeagol}, according to the definition of Eq.\ \ref{eqn:transport_pol} using both the LSDA and pseudo-SIC. 
Results for both the orthorhombic and cubic SrRuO$_3$ structures are included.} \end{table} \begin{figure} \includegraphics[width=0.48\textwidth]{fig9} \caption{\label{fig:transport}(Color online) Spin dependent transport coefficients, $N \nu$ and $N \nu^2$, calculated with both the LSDA (unshaded) and pseudo-SIC (shaded).} \end{figure} To allow for a direct comparison with the PCAR experiments, the transport spin polarization in both the ballistic and diffusive limit was determined using the {\it ab initio} electronic transport code {\sc smeagol}.\cite{rocha:085414} Here we calculated the transport at zero bias through both the orthorhombic and cubic structures,\footnote{ The Brillouin zone was sampled in these cases with a $20\times20\times1$ Monkhorst-Pack mesh for the orthorhombic structure and a $40\times40\times1$ mesh for the cubic structure.} and present the results in Table \ref{tab:polarization} and Figure \ref{fig:transport}. The shortcomings of the LSDA in describing the spin polarization at the Fermi level in SrRuO$_3$ are again apparent. The highest spin polarization obtained with the LSDA for the orthorhombic structure is -15\%, obtained in the diffusive limit. This is notably smaller than the experimental PCAR results measuring the same quantity, $P_2^{\epsilon_F}$. As shown in Fig.\ \ref{fig:transport}, on changing from $P_0^{\epsilon_F}$ to $P_2^{\epsilon_F}$ the polarization increases in magnitude and becomes more negative. Since the group velocity tends to zero at the band edge, and is often maximized at the band center, higher powers of $n$ in $P_n^{\epsilon_F}$ suppress the contribution of the Ru 4$d$ states at the band edge while enhancing those at the band center. From Figure \ref{fig:transport} it is then clear that the large negative polarization is a consequence of the center of the {\it majority} Ru 4$d$ band being positioned approximately 1~eV below the Fermi level, while the {\it minority} Ru 4$d$ band center lies at the Fermi level. 
Further enhancement is seen on introducing correlation; for example, by correcting for the SI error, the spin polarization increases due to the reduction of the number of majority Ru $t_{2g}$ states at the Fermi level. The correlated \emph{ab initio} calculations now give very high spin polarization, ranging between -85.7\% and -98.0\%, whereas the highest value achieved experimentally is just 60\%. Qualitatively similar results are found for the cubic structure, although the SIC in general has a smaller influence on the spin polarization. For example, $P_1^{\epsilon_F}$ goes from -8.99\% (LSDA) to -50.9\% (pseudo-SIC), while $P_2^{\epsilon_F}$ goes from -32.9\% to -79.5\%. \begin{figure} \includegraphics[width=0.48\textwidth]{fig10} \caption{\label{fig:polarization_ef}(Color online) Spin polarization defined according to Eq.\ \ref{eqn:transport_pol} as a function of distance from the Fermi energy (set to $0$~eV) and calculated with the pseudo-SIC for the orthorhombic structure.} \end{figure} It is also useful to note the strong dependence of spin polarization on distance from the Fermi level. In Figure~\ref{fig:polarization_ef} we show for the orthorhombic structure that if the Fermi level is moved just 100~meV into the valence band, $P_1^{\epsilon_F}$ is reduced in magnitude to -81.4\%, while moving $E_F$ by -200~meV reduces it further to -59.8\%, bringing it within the experimental range of values. 
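This trend can be sketched with a toy two-band model (hypothetical Gaussian band profiles and unit Fermi velocities; these parameters are illustrative and are not our computed DOS): with the majority band centered roughly 1~eV below $\epsilon_F$ and the minority band centered at $\epsilon_F$, moving the Fermi level into the valence band shrinks $|P_1|$, mimicking the behavior in Figure~\ref{fig:polarization_ef}.

```python
import math

def ballistic_polarization(n_up, v_up, n_down, v_down):
    """P_1 = (N_up v_up - N_down v_down) / (N_up v_up + N_down v_down)."""
    w_up, w_down = n_up * v_up, n_down * v_down
    return (w_up - w_down) / (w_up + w_down)

def model_band(e, center, width=0.5):
    """Hypothetical Gaussian band profile (arb. units)."""
    return math.exp(-((e - center) ** 2) / width)

# Majority band centered ~1 eV below E_F, minority band at E_F, unit
# velocities: purely illustrative parameters mimicking the band alignment.
for shift in (0.0, -0.1, -0.2):
    p1 = ballistic_polarization(model_band(shift, -1.0), 1.0,
                                model_band(shift, 0.0), 1.0)
    print(f"E_F shift {shift:+.1f} eV: P1 = {p1:+.3f}")
```

The magnitude of $P_1$ shrinks monotonically as the Fermi level moves down, simply because the majority band gains weight faster than the minority band loses it.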
In practice, this shift in the Fermi level can be realized by off-stoichiometric compounds such as those investigated by Siemons {\it et al.}\cite{Siemons_et_al:2007} The discrepancy between the computational and experimental results could then be due to a number of factors: for example, there are several known limitations with PCAR, including spin-flip scattering events, which could drastically reduce the measured value of $P^{\epsilon_F}$, as well as the possible ambiguous fit of PCAR measurements to a multiparameter model.\cite{Taddei_et_al:2001} We also note that spin-orbit coupling, which we did not account for in our transport calculations, could reduce the spin polarization at the Fermi level. Despite these disparities, both \emph{ab initio} calculations and experiment show SrRuO$_3$ with a high negative spin polarization. As expected, LSDA underestimates the spin polarization at the Fermi level, whereas the inclusion of correlation through the correction of the SI error with the pseudo-SIC results in much better agreement between theory and experiment. \subsection{Thin films} The electronic and magnetic structure of epitaxially grown oxide multilayers can be tuned by controlling the film thickness. In particular, it has been demonstrated that metallic SrRuO$_3$ can be transformed into an insulating state by growing films thinner than five monolayers on SrTiO$_3$ substrates.\cite{Toyota_et_al:2005} It was also found that the Curie temperature decreases with reduced film thickness, along with the disappearance of strong ferromagnetic order. Photoemission experiments show a shift in the spectral weight to the incoherent peak features in the spectra, suggesting that these effects are a result of changes in electron-electron correlations. 
With our first-principles techniques, we systematically investigate whether we can reproduce this transition purely from structural confinement, or by also including correlation effects and/or the octahedral tiltings of the orthorhombic structure. For the remainder of this section, we choose to include correlation with the LSDA+$U$ method, rather than the pseudo-SIC method, and note that from the discussion so far, both methods reproduce similar electronic structures. {\it Cubic LSDA slabs$\quad$} To investigate the effects of structural confinement on the metal-insulator transition, we first performed a series of slab calculations (from 1 to 5 unit cells thick) on cubic SrRuO$_3$ constrained to the calculated bulk equilibrium SrTiO$_3$ lattice parameter. This is in part motivated by the fact that good epitaxy is achieved with the substrate surface, and that tilting of the octahedra may be suppressed. Additionally, it is computationally more feasible to systematically investigate these smaller supercell slabs. We discuss later the effect of including the octahedral tiltings in the orthorhombic thin films; we saw earlier that this structural effect is important in fully describing the subtle details of the electronic structure of SRO. In all calculations the slabs were terminated with a SrO surface, to be consistent with that experimentally observed to be the most thermodynamically stable.\cite{Rijnders_CB_EOM_et_al:2004} \begin{figure} \includegraphics[width=0.45\textwidth]{fig11} \caption{\label{fig:slab_moments}(Color online) Magnetic moment dependence on slab thickness for cubic SrRuO$_3$. The bulk LSDA magnetic moment is shown as the dashed line. The total energy differences are calculated for the spin-polarized films with respect to the non-magnetic ground state.} \end{figure} In Figure \ref{fig:slab_moments} we plot the Ru magnetic moment (per f.u.) as a function of increasing slab thickness. 
We find that the LSDA films become non-magnetic below a critical thickness of only two monolayers; this is lower than the experimentally observed loss of the strong ferromagnetic order below six monolayers.\cite{Toyota_et_al:2006} In addition, all of our calculations on the cubic films remain metallic down to one monolayer. Experimentally, the situation is different, and insulating behavior is observed in heteroepitaxial thin films at six monolayers on SrTiO$_3$. At one unit cell, where all atoms are surface-like, the magnetic moment is considerably suppressed from its bulk value, and the non-magnetic structure is actually lower in energy. An enhancement in the magnetic moment is observed at two unit cells in thickness; the moment then decreases toward the bulk value as the film thickness grows. For films larger than two unit cells, the ferromagnetic ground state is always found to be stable. In a mean field theory approach, the energy difference between the ferromagnetic and the paramagnetic ground states $\Delta$ can be expected to be proportional to the Curie temperature $T_c$ according to $k_BT_c=\frac{2}{3}\Delta$. For the four and five unit cell slabs, we find mean field $T_c$'s of 170 and 120~K, respectively; these values are close to the experimental bulk value of 160~K, suggesting that even in these thin films the strong itinerancy remains. This is consistent with temperature-dependent magnetization data recorded on strained and free-standing films\cite{Gan_et_al:1998} as well as on ultra-thin SrRuO$_3$ films.\cite{Toyota_et_al:2006} Additionally, the spin polarization $P_0^{\epsilon_F}$ as a function of slab thickness (not shown) exhibits a large negative polarization at two unit cells, while a small positive spin polarization is found with increasing thickness (consistent with our bulk spin polarization calculations). 
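The mean-field relation $k_BT_c=\frac{2}{3}\Delta$ can be inverted directly. A short sketch (the $\Delta$ value below is hypothetical, chosen only to reproduce the quoted $T_c\approx170$~K; it is not a number taken from our calculations):

```python
K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def mean_field_tc(delta_ev):
    """Mean-field Curie temperature from the FM-PM energy difference:
    k_B * T_c = (2/3) * Delta  =>  T_c = 2 * Delta / (3 * k_B)."""
    return 2.0 * delta_ev / (3.0 * K_B)

# A hypothetical Delta of ~22 meV/f.u. (not a value quoted in the text)
# reproduces the ~170 K mean-field estimate for the four-unit-cell slab.
print(round(mean_field_tc(22.0e-3)))  # -> 170
```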
Most importantly, the insulating state is not found in any of the cubic slab calculations, nor is the non-magnetic ground state generally favored (the one unit cell case is an exception and is due to competing interactions from surface effects). To better understand how the magnetism is distributed in the slabs, we have also calculated the layer-by-layer local density of states (LDOS). As in the bulk case, on average the majority of the spin ($\approx$65\%) is located on the Ru atom, with the remainder found on the oxygen network. Interestingly, the Ru atoms closest to the surface layers experience a suppressed magnetic moment for each slab. This is in contrast to most transition metal (non-oxide) ferromagnets, where enhancement often occurs due to a loss of coordination, weaker interatomic hybridization, and enhancement of the orbital angular momentum. In this oxide, enhanced covalency at the surface layer may be responsible for the reduced magnetism. Before adding correlation effects in the cubic slabs, we first discuss the changes in the electronic structure due to the thin film geometry. The overall shape and weight of the density of states for the cubic slab and the bulk cubic LSDA calculation are very similar, suggesting that confinement effects are minimal. The calculated exchange splittings are also similar, with the exception that the Ru 4$d$ states are split by approximately 0.25~eV. A small gap in the minority $e_g$ states opens at approximately -4.30~eV, and partial occupation of the majority $e_g$ states occurs; these features are not observed in the bulk cubic LSDA calculation. For a free-standing, three unit cell film we find that the structure has a magnetic moment of 1.26~$\mu_B$ and a spin polarization of +13.2\% within the LSDA, both larger than the bulk cubic values of 1.09~$\mu_B$ and +1.3\%, respectively. 
The increased positive spin polarization is a result of the band center of the minority Ru 4$d$ states shifting to higher energy in the thin films. To summarize the results for the cubic SrRuO$_3$ thin films, we do not find a metal-insulator transition as a function of film thickness, although we do find a slightly enhanced magnetization. We therefore are able to rule out the effect of dimensional confinement as the driving force for a metal-insulator transition. {\it Cubic LSDA+$U$ slabs$\quad$} We now examine the effect of adding correlation in the calculations for the cubic thin films in order to determine if electron-electron correlation in these structures is sufficient to obtain a metal-insulator transition. Here we use a $U_{\rm eff}=2$~eV, which, although larger than the value we described earlier as more accurately reproducing the PES spectra, does allow us to verify that in the absence of insulating behavior, the driving force for the MI-transition is not due to intrinsic correlation effects. Although the numbers we discuss here are particular to a three unit cell thin film, we note that the general trends are consistent across the series of cubic thin films. In contrast to the LSDA calculations, when a finite Hubbard $U$ is placed on the Ru 4$d$ states, we find that the majority $e_g$ states are completely unoccupied, and occupation of the majority O 2$p$ states near -2.3~eV is enhanced over the minority O 2$p$ states, nearly opening a gap in the minority spin channel. An enhancement in the exchange splitting for the Ru $d$ orbitals is also observed with $U_{\rm eff}=2$~eV compared to $U_{\rm eff}=0$~eV, while the valence bandwidth is reduced. For the cubic slab with $U_{\rm eff}=2$~eV, the narrowing of the bandwidth nearly stabilizes a half-metallic ground state, as the majority Ru $t_{2g}$ bands become completely filled. 
Despite these small changes in the occupation of the Ru 4$d$ levels, we do not find an insulating ground state in any of the cubic slabs even in the presence of strong correlations ($U_{\rm eff}<6$~eV). Regarding the magnetic moment in these slabs, we find 2.0~$\mu_B$/f.u.\ for $U_{\rm eff}=2$~eV, and a corresponding spin polarization at the Fermi level of -85.9\%. These results are consistent with the effects of adding correlation in bulk cubic SrRuO$_3$, and because an insulating ground state is not achieved, we suggest that neither correlations nor structural confinement (from our previous discussion) are sufficient to induce a metal-insulator transition. {\it Orthorhombic LSDA slabs$\quad$}% We now address films of orthorhombic SrRuO$_3$, which allow the full octahedral distortions found in the bulk experimental structure to occur. Earlier we showed that the effect of these distortions in the bulk is to reduce the $t_{2g}$ valence bandwidth; in this section we examine whether these distortions, combined with the confined geometry of a thin film, can stabilize the experimentally observed insulating SrRuO$_3$ ground state. With the relaxed coordinates for bulk $Pbnm$ SrRuO$_3$, we calculate the electronic ground state for a three unit cell thick slab separated by 10~\AA\ of vacuum on each side and SrO termination layers within both the LSDA and LSDA+$U$ method ($U_{\rm eff}=2$~eV).\footnote{Complete structural relaxation of the three unit cell slab within the orthorhombic symmetry did not produce significant changes in the electronic or magnetic structure.} We now discuss the changes in the electronic structure of the orthorhombic thin film: In Figure \ref{fig:3uc_slab_doses} we show the (P)DOS for the three unit cell slab with and without correlation. 
\begin{figure} \includegraphics[width=0.48\textwidth]{fig12} \caption{\label{fig:3uc_slab_doses}The total and partial spin-resolved density of states for a three unit cell orthorhombic SrRuO$_3$ slab calculated with $U_{\rm eff}=0$~eV (shaded) and $U_{\rm eff}=2$~eV (unshaded, bold) are shown in each panel. ({\sc upper}) Total (grey) and Sr 4$d$-states, ({\sc middle}) Ru 4$d$-states, $t_{2g}$ and $e_{g}$, and ({\sc lower}) O 2$p$-states.} \end{figure} With the LSDA in the thin film system, the exchange splittings are similar to the bulk LSDA orthorhombic calculations, and the character around the Fermi level remains a mixture of majority and minority Ru $t_{2g}$. Energy gaps similar to those found in the bulk are observed in other regions of the electronic structure, with the exception that an additional gap opens in the minority $t_{2g}$ states at -2.5~eV, which is not observed in the bulk calculation. We now compare the magnetic properties of the LSDA slab calculation to the bulk $Pbnm$ LSDA calculation. With the LSDA method, we find a magnetic moment of 1.01~$\mu_B$ per Ru atom and a spin polarization at the Fermi level of $P_0^{\epsilon_F}=-7.96\%$ (compared with the bulk orthorhombic values of 0.79~$\mu_B$ and -2.95\%, respectively). Therefore, we find enhanced magnetic properties in the thin film geometry when the octahedral tiltings are included. However, we still do not find an insulating ground state. {\it Orthorhombic LSDA+$U$ slabs$\quad$}% Finally, we incorporate correlation into the orthorhombic slab calculations and examine the effect on the electronic and magnetic structure. We have already demonstrated that a $U_{\rm eff}=2$~eV is sufficient to establish a half-metallic ground state in the bulk orthorhombic structure; therefore, we use this limit to establish whether correlation can drive the insulating ground state. 
If we do not find a metal-insulator transition even at this large Hubbard $U$ value, we can be certain that the effect is not due to correlation. In general, the shape and weight of the densities of states and exchange splittings for the different states remain similar to the orthorhombic LSDA slab calculation; however, unlike the cubic slabs, the valence bandwidth does not noticeably narrow. With the addition of the Hubbard $U$ term in the calculation, the half-metallic ground state becomes stable, with a 0.70~eV energy gap opening in the majority spin states. This behavior is realized by the minority $t_{2g}$ bands shifting higher in energy while the majority bands become completely occupied. The majority $e_g$ band is also lowered in energy from 1.0~eV in the LSDA slab calculation ($U_{\rm eff}=0$~eV) to 0.50~eV, while the minority spin-states move 0.30~eV higher in energy with $U_{\rm eff}=2$~eV. Similar energy gaps are observed as in the LSDA slab calculation, with the caveat that there is no gap in the majority O 2$p$ states below the Fermi level; this is due to a shift of the O 2$p$ states from the Fermi level to lower energy when correlation is added. With a $U_{\rm eff}=2$~eV we find -100\% spin polarization at the Fermi level and a magnetic moment of 2.0~$\mu_B$ per Ru atom. This effect on the magnetism with increased correlation is consistent with that found in the bulk calculations. In summary, we never find a fully insulating ground state in our thin film calculations, even in the presence of large correlation effects. We have also examined the layer-by-layer DOS for each slab (data not shown) and have not found an insulating surface layer in any of the calculations. However, as a result of the 2D confinement in the slabs, we do observe a narrowing of the minority $t_{2g}$ bandwidth, and a shift of the Fermi level away from the band center. Furthermore, with correlations the Fermi level is also seen to cut across the band edge. 
These two properties together indicate that SrRuO$_3$ thin films are closer to a metal-insulator instability (with respect to the bulk), and consequently disorder is more likely to induce electron localization and form an insulating state. From these results we suggest the following two possibilities regarding the experimentally observed metal-insulator transition: (1) either the transition in SrRuO$_3$ thin films is not an intrinsic property of the system, but rather extrinsic and possibly due to surface roughness or defects from film deposition combined with the band narrowing from confinement and correlation; or (2) that SrRuO$_3$ thin films must be treated with more exotic electronic structure methods. It is worth mentioning that PES experiments\cite{Kim_et_al:2005} of SrRuO$_3$ films grown on SrTiO$_3$ substrates found a strong sensitivity of the $t_{2g}$ spectral intensity and weight early in the deposition process (less than eight monolayers), with the intensity of the Ru 4$d$ states becoming strongly enhanced above 15 monolayers. It was found that the film growth proceeds with a step-terrace mechanism with minor atomic diffusion at less than five monolayers, followed by 3D island growth.\cite{Toyota_et_al:2006} The disordered growth process should also reduce the stability of the ferromagnetic order, and due to poor percolation pathways, could contribute to the observed MI-transition concomitant with ferromagnetic ordering at less than five monolayers. The disorder at the surface has also recently been compared to that at the interface with the substrate (in this case SrTiO$_3$) with {\it in situ} PES techniques, and it was found that the sharp $t_{2g}$ peak at the Fermi level is greatly suppressed at the surface, while it persists at the interface.\cite{Kumigashira_Fujimori_et_al:2008} The decrease in itinerancy due to the suppressed DOS at the Fermi level was also verified with surface and interface conductivity experiments. 
Since our calculations do not include any disordered surface configurations or non-stoichiometry, future first-principles calculations could clarify these competing interactions. We have, however, shown that neither strong correlations nor octahedral distortions, nor their combination, are sufficient to reproduce the experimentally observed ultra-thin film metal-insulator transition. \section{Conclusions} We have examined the effects of structural distortions and correlation effects on the electronic and magnetic properties of SrRuO$_3$ with first-principles calculations. We find that including weak strong correlations, either through an effective Hubbard $U$ of 0.6~eV or through correction of the self-interaction error, gives good agreement between bulk orthorhombic SrRuO$_3$ and the experimental spectroscopic data. The addition of the octahedral distortions leads to a narrowing of the majority spin Ru $t_{2g}$ and O 2$p$ states; however, the exchange splitting is small with respect to these bandwidths and, consequently, a fully insulating ground state is not obtained. A half-metallic ground state was shown to be stable on including moderate electron-electron correlation effects ($U > 2$~eV), which we note has not been observed experimentally. The behavior of thin films was also examined in both cubic and orthorhombic unsupported films within the conventional LSDA approach and with weak correlations included. In neither case was the experimentally observed metal-insulator transition obtained. Since the electronic structures of surfaces are very sensitive to atomic reconstructions, we suggest that the experimentally observed metal-insulator transition could be a consequence of extrinsic defects or an atomically disordered surface configuration. \begin{acknowledgments} We thank A.\ Fujimori for helpful discussions and bringing our attention to correlation characteristics in the spectroscopic data. 
The authors also thank A.\ Zayak, J.\ Neaton and W.\ Siemon for useful discussions and J.\ Okamoto and H.\ Kumigashira for providing us permission and use of the experimental PES data. This work was supported by the NSF under the grant NIRT 0609377 (NAS), by the SFI under the grant 07/IN.1/I945 (NC, SS) and by Seagate. JMR acknowledges support through a NDSEG fellowship sponsored by the DoD. Portions of this work made use of MRL Central Facilities supported by the MRSEC Program of the National Science Foundation under award No.\ DMR05-20415 and the CNSI Computer Facilities at UC Santa Barbara under NSF award No.\ CHE-0321368. Additional computational resources have been provided by the HEA IITAC project managed by the Trinity Center for High Performance Computing and by ICHEC. \end{acknowledgments}
\section{Introduction} The pinning of vortices in superconducting films by arrays of magnetic dipoles placed in the vicinity of the film is a topic that has received a great deal of attention lately. Most of the experimental\cite{rev1} and theoretical\cite{coff,wei,sah,myp,mp,gmc1} work carried out so far deals with arrays of permanent dipoles, that is, dipoles with magnetic moments fixed both in magnitude and direction. A related topic that has received little attention is vortex pinning by arrays of dipoles with magnetic moments free to rotate. The feasibility of fabricating such arrays has been demonstrated recently by Cowburn {\it et al.}\cite{ckaw}. These authors reported on the magnetic properties of arrays of nanomagnets made of Supermalloy, each nanomagnet being a thin circular disk of radius $R$. They found that for $R\sim 50-100$~nm the magnetic state of each nanomagnet is a single domain one with the magnetization parallel to the disk plane, and that the magnetization can be reoriented by small applied fields. One possible source of interest in vortex pinning by freely rotating dipoles is, as demonstrated in this paper, that the critical current may be tuned by an applied field. This paper studies theoretically, in the London limit, the interaction between one vortex in a thin superconducting film and one dipole, located outside the film, in the presence of a magnetic field parallel to the film surfaces. The magnetic dipole moment is assumed to be parallel to the film surfaces, to have constant magnitude, and to be free to rotate. Tuning of the critical current in this model results because the interaction between the vortex and the dipole depends on the dipole orientation which, in turn, depends on the applied field. Besides, in a thin film, a magnetic field parallel to the film surfaces has no effect on the vortex in the absence of the dipole. As shown here, this mechanism allows the pinning potential to be changed by the applied field over a wide range. 
When a transport current is applied to the film, the magnetic field created by it is parallel to the film surfaces and also contributes to the dipole orientation. This makes the pinning potential dependent on the transport current, and has important consequences for the critical current, as shown here. The main new results reported in this paper are: i) the exact analytic calculation of the pinning potential for one vortex interacting with a freely rotating dipole, and its dependence on the applied field and transport current, ii) the numerical calculation of the critical current for one vortex pinned by the dipole, and its dependence on the magnitude and direction of the applied field. This paper argues that these results are relevant for vortex pinning by arrays of nanomagnets, similar to those reported in Ref.\cite{ckaw}, placed on top of superconducting films made of homogeneous materials, like most low-$T_c$ ones. The model is not applicable to layered high-$T_c$ superconducting films. The calculation of the pinning potential proceeds as follows. The superconducting film is assumed to be planar, with surfaces parallel to each other and to the $x-y$ plane, isotropic, characterized by the penetration depth $\lambda$, and of thickness $d\ll \lambda$. A vortex with vorticity $q$ is located at position ${\bf r}$, and the dipole is at ${\bf r}_0 =(0,0,z_0>0)$. The dipole moment ${\bf m}$ has constant magnitude $m$, is oriented parallel to the film surfaces, and is free to rotate in the $x-y$ plane. A uniform magnetic field ${\bf H}$ is applied parallel to the film surfaces. The vortex-dipole system is shown in Fig.\ \ref{fig.fig1}. 
The total energy in the London limit, neglecting pinning by random material defects, can be written as \cite{gmc1} \begin{equation} E_{T} = - {\bf m}\cdot({\bf b^s_{\perp}} + {\bf H}) +mH \label{eq.ett} \end{equation} where ${\bf b^s_{\perp}}$ is the component parallel to the film surfaces of the field generated by the vortex at the dipole position. The energy $E_{T}$ does not include the vortex self-energy nor the interaction energy of the dipole with the field of the screening current generated by it in the film, because both are independent of the vortex position and dipole orientation. The constant $mH$ is added for future convenience. The parallel component of the vortex field is given in the thin film limit ($d\ll \lambda$) by \cite{bsv} \begin{equation} {\bf b^s_{\perp} }=-q\frac{\phi_0 d}{4\pi \lambda^2}\, \frac{{\bf r}}{r^2}(1-\frac{z_0}{\sqrt{r^2+z^2_0}}) \, . \label{eq.bvt} \end{equation} This expression is exact for a thin film provided that $r \ll \Lambda=2\lambda^2/d\;$, which is the region of interest here. \begin{figure}[h] \centerline{\includegraphics[scale=0.15]{V1EPLFIG1.eps}} \vspace{5mm} \caption {Superconducting film with one vortex at ${\bf r}$, a magnetic dipole, ${\bf m}$, at ${\bf r}_{0}=(0,0,z_0)$, and an applied magnetic field, ${\bf H}$, parallel to the film surfaces.} \label{fig.fig1} \end{figure} The total energy, $E_{T}$, depends both on the vortex position ${\bf r}$ and on the dipole orientation. The pinning potential for the vortex at zero temperature, denoted by $U_{vm}$, is the total energy for the equilibrium dipole orientation, that is, for ${\bf m}$ which minimizes $E_{T}$, with the vortex held fixed at ${\bf r}$. Thus, according to Eq.\ (\ref{eq.ett}), the equilibrium ${\bf m}$ is parallel to ${\bf b^s_{\perp}} +{\bf H}$, and the pinning potential is given by \begin{equation} U_{vm} = - m\mid {\bf b^s_{\perp}} +{\bf H}\mid + mH \; . 
\label{eq.ete} \end{equation} Note that, by definition, $U_{vm}$ vanishes in the absence of a vortex. According to Eqs.\ (\ref{eq.ete}) and (\ref{eq.bvt}), the spatial dependence of $U_{vm}$ is anisotropic. It depends both on $r$ and on the angle between ${\bf r}$ and ${\bf H}$. An important consequence of the dipole freedom to rotate is the non-trivial dependence of $U_{vm}$ on ${\bf H}$ obtained in Eq.\ (\ref{eq.ete}). According to it ${\bf H}$ plays the role of a handle that controls the strength and spatial dependence of $U_{vm}$, as will be discussed shortly. The scale for $H$ in Eq.\ (\ref{eq.ete}) is the vortex field, which is bound by $b^s_{\perp}\leq b^s_{max}=0.3d/4\pi z_0\times (\phi_0/\lambda^2)$. It is convenient to use the following natural scales for physical quantities. Energy: $\epsilon_0d$, where $\epsilon_0=(\phi_0/4\pi\lambda)^2\,$ is the basic scale for energy/length of the superconductor. Magnetic moment: $\phi_0z_0$. Magnetic field: $\phi_0/\lambda^2$. \begin{figure}[h] \centerline{\includegraphics[scale=0.3,clip=]{V1EPLFIG2.eps}} \vspace{5mm} \caption{Spatial dependence of the vortex pinning potential, $U_{vm}$, (in units of $\epsilon_0 d$) for $q=1$, $d=z_0= 2\xi\;, \lambda =10\xi$ ($\xi=$vortex core radius), $m=0.5\phi_0z_0$, and external field in the $x$-direction. a) Permanent dipole. b) $H=0.01\phi_0/\lambda^2$. c) $H=0.02\phi_0/\lambda^2$. d) $H=0$. $x$ and $y$ in units of $\xi$.} \label{fig.fig2} \end{figure} For $H \gg b^s_{max}$ the dipole equilibrium orientation is parallel to ${\bf H}$, and $U_{vm}$ reduces to the pinning potential for a vortex interacting with a permanent dipole. Assuming that ${\bf H}$ is along the $x$-direction, $U_{vm}=-mb^s_{x}$, which, according to Eq.\ (\ref{eq.bvt}), coincides with the expression obtained in Refs.\cite{wei,sah,mp,gmc1}. 
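The pinning potential of Eq.\ (\ref{eq.ete}), with the vortex field of Eq.\ (\ref{eq.bvt}), is simple enough to evaluate directly. The sketch below (Python; the reduced units and all function names are ours, not part of the paper) implements both expressions and reproduces the two limits discussed next: an isotropic attractive ring for $H=0$, and the permanent-dipole form $-mb^s_x$ for large $H$.

```python
import numpy as np

# Sketch (ours) of the vortex field and pinning potential above, in reduced
# units: lengths in z0, fields in phi_0*d/(4*pi*lambda^2*z0), moment m in
# phi_0*z0, so U_vm comes out in units of 4*pi*eps_0*d*(m/phi_0*z0).
def b_perp(x, y, q=1):
    """In-plane vortex field at the dipole height; r > 0 assumed."""
    r2 = x**2 + y**2
    f = (1.0 - 1.0/np.sqrt(r2 + 1.0))/r2      # (1 - z0/sqrt(r^2+z0^2))/r^2
    return -q*x*f, -q*y*f

def U_vm(x, y, m=1.0, Hx=0.0, Hy=0.0, q=1):
    """Pinning potential for the freely rotating dipole: -m|b + H| + m H."""
    bx, by = b_perp(x, y, q)
    H = np.hypot(Hx, Hy)
    return -m*np.hypot(bx + Hx, by + Hy) + m*H

# H = 0: isotropic attractive ring, U_vm(1.3, 0) equals U_vm(0, 1.3);
# large H along x: permanent-dipole limit, U_vm -> -m*b_x.
```

For $H=0$ the minimum is degenerate on a circle, while for large $H$ it collapses to the permanent-dipole minimum on the negative $x$-axis, as described in the text.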
In this case $U_{vm}$ is anti-symmetric with respect to both an inversion of the vortex position (${\bf r} \rightarrow -{\bf r}\; \Longrightarrow \;U_{vm}\rightarrow -U_{vm}$) and a change in the sign of the vorticity ($q\rightarrow -q$). For a vortex ($q>0$), $U_{vm}$ has a minimum (maximum) on the $x$-axis at $x=-(+)1.3 z_0$, with minimum (maximum) value $U_{vm}=-(+)0.3\times 4\pi \epsilon_0 d(m/\phi_0 z_0)$, as shown in Fig.\ \ref{fig.fig2}.a. In general, for $H \neq 0$ the minimum of $U_{vm}$ occurs when ${\bf b^s_{\perp}}$ is parallel to ${\bf H}$, that is when the vortex (anti-vortex) is on the negative (positive) $x$-axis. In this case, according to Eq.\ (\ref{eq.ete}), $U_{vm} = - m b^s_x$. As a consequence, the minimum of $U_{vm}$ for $H\neq 0$ is identical to that for a permanent dipole. However, the spatial dependence of $U_{vm}$ is strongly dependent on $H$, as shown in Fig.\ \ref{fig.fig2} for some values of $H<b^s_{max}$ ($b^s_{max}=0.024\phi_0/\lambda^2$ for the parameters in Fig.\ \ref{fig.fig2}). For $H=0$, $U_{vm}$ is given by $U_{vm} = - m\mid {\bf b^s_{\perp}}\mid $. In this case, according to Eq.\ (\ref{eq.bvt}), $U_{vm}$ is the same for vortices and anti-vortices, has circular symmetry, and is attractive with a repulsive core, as shown in Fig.\ \ref{fig.fig2}.d. The minimum of $U_{vm}$ is degenerate on a circle of radius $\,r= 1.3 z_0\,$, and has the same minimum value as a permanent dipole ($U_{vm}=-0.3\times4\pi \epsilon_0 d(m/\phi_0 z_0)$). Now the critical current, $J_c$, for a single vortex with vorticity $q=1$ is considered. 
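The quoted extremum position and depth can be checked numerically. In reduced units the relevant function on the $x$-axis is $(1-1/\sqrt{x^2+1})/x$ (lengths in $z_0$); its minimum in fact sits at $x=-\sqrt{(1+\sqrt{5})/2}\,z_0\approx -1.27\,z_0$ with depth $\approx 0.300$, which the paper rounds to $1.3\,z_0$ and $0.3$. A minimal grid search (Python sketch, ours):

```python
import numpy as np

# Grid search (ours) for the permanent-dipole extremum on the x-axis, in
# reduced units: U(x)/m = (1 - 1/sqrt(x^2+1))/x, lengths in z0.
x = np.linspace(-4.0, -0.2, 40001)
U = (1.0 - 1.0/np.sqrt(x**2 + 1.0))/x
i = np.argmin(U)
print(x[i], U[i])   # approx. -1.272 and -0.300 (quoted as -1.3 and 0.3)
```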
The effect of a transport current density, ${\bf J}$, applied to the film is twofold: it exerts on the vortex a force ${\bf F}_L=q(\phi_0 d/c){\bf J}\times \hat{{\bf z}}$ and creates a field at the dipole position ${\bf H}_J=(2\pi d/c){\bf J}\times \hat{{\bf z}}$, which adds to the external field and modifies the vortex pinning potential, because $U_{vm}$ is now given by Eq.\ (\ref{eq.ete}) with ${\bf H}$ replaced by the total field ${\bf H}_{T}={\bf H}+{\bf H}_J$. The critical current depends on the relative orientation of ${\bf J}$ and ${\bf H}$. Here it is assumed that ${\bf J}$ is fixed in the positive $y$-direction, so that both ${\bf F}_L$ and ${\bf H}_J$ are along the positive $x$-direction, and have magnitudes $H_J= 2\pi dJ /c$ and $F_L=\phi_0dJ/c$, and that ${\bf H}$ points in a direction that makes an angle $\alpha $ with the positive $x$-axis, that is with ${\bf F}_L$. In this paper $J_c$ is obtained by solving numerically the equations of motion for the vortex. It is assumed that for ${\bf J}=0$ the vortex is pinned at the absolute minimum of $U_{vm}$, and that $J$ increases very slowly with time. These assumptions ensure that the vortex follows the position of the minimum of $U_{vm}- F_L\,x$ as $J$ increases, until $J$ reaches a value for which the minimum becomes unstable, and the vortex depins. As $J$ increases further, the vortex velocity also increases. The $J_c$ obtained here corresponds to $J$ for which the vortex velocity reaches a small value chosen for numerical convenience. The obtained $J_c$ is slightly larger than the $J$ for which the minimum becomes unstable. This is analogous to the voltage criterion in $J_c$ measurements. The values of $J$ are, of course, limited to $J<J_d$, where $J_d=c\phi_0/(12\sqrt{3}\pi^2\lambda^2\xi)$ is the depairing current, $\xi$ being the vortex core radius. In the results reported next, regions where $J_c>J_d$ are discussed for the sake of completeness. 
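The numerical procedure just described (adiabatic ramp of the drive, depinning flagged by a velocity criterion) can be illustrated with a one-dimensional toy version for the case in which the vortex moves only along the $x$-axis. The sketch below (Python; the damping, time step, ramp rate and velocity threshold are our own choices, not the paper's) uses overdamped dynamics in the reduced permanent-dipole potential $U(u)=(1-1/\sqrt{1+u^2})/u$:

```python
from math import sqrt

# 1D sketch (ours) of the J_c procedure: overdamped motion du/dt = f - U'(u)
# in the reduced permanent-dipole potential, with the drive f ramped slowly.
def dU(u):
    if abs(u) < 1e-9:
        return 0.5                  # limiting slope of U at the origin
    s = sqrt(1.0 + u*u)
    return -(1.0 - 1.0/s)/u**2 + 1.0/s**3

u, dt, f_c = -1.272, 1e-3, None     # start at the minimum of U
for j in range(1000):               # adiabatic ramp: f = 0.001*j
    f = 1e-3*j
    for _ in range(2000):           # relax at fixed drive
        v = f - dU(u)
        u += v*dt
    if v > 0.05 or u > 3.0:         # velocity ("voltage") criterion
        f_c = f
        break
print(f_c)   # slightly above 0.5, the maximum slope of U on the escape path
```

As in the text, the measured critical drive lies slightly above the force at which the minimum becomes unstable, because of the finite velocity threshold.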
Now there are two scales for $H$ in $U_{vm}$: $b^s_{max}$, as discussed above, and $H_J$. The maximum $H_J$ occurs for $J=J_c$, and can be written as $H_{J_c}=0.031(d/\xi)(J_c/J_d)(\phi_0/\lambda^2)$. For $d\sim z_0\sim\xi$, these two scales are comparable if $J_c\sim J_d$. \begin{figure}[h] \centerline{\includegraphics[scale=0.30]{V1EPLFIG3.eps}} \vspace{5mm} \caption{Single vortex ($q=1$) critical current for $\lambda=10\xi, d=z_0= 2\xi$: a) and b) $J_c$ vs. $\alpha$; c) $J_c$ vs. $H$ for constant $\alpha$, indicated in the boxes; d) discontinuity field $H_d$ vs $m$. Labels: $m$ in units of $\phi_0z_0$, $H$ in units of $\phi_0/\lambda^2$.} \label{fig.fig3} \end{figure} For $H \gg (b^s_{max},\; H_J)$, $U_{vm}$ reduces to that for a permanent dipole oriented parallel to ${\bf H}$, that is, with ${\bf m}$ making an angle $\alpha$ with the $x$-axis. In this case, $U_{vm}$ is independent of $H$ and $J$ and has a spatial dependence like that shown in Fig.\ \ref{fig.fig2}.a rotated by $\alpha$ with respect to the $x$-axis. For $J=0$, the vortex is pinned at the absolute minimum of $U_{vm}$, located at a point in the $x-y$ plane defined in polar coordinates, $(\rho,\theta)$, by $(\rho=1.3z_0,\; \theta=\alpha +\pi)$. The critical current depends on $\alpha$ and $m$, being a linear function of $m$, since $U_{vm}$ is linear in $m$. It is found that $J_c$ depends strongly on $\alpha$, being largest for $\alpha=0^o$, and decreasing smoothly with $\alpha$, as shown in Figs.\ \ref{fig.fig3}.a and \ref{fig.fig3}.b (curves labeled {\it permanent dipole}). This results from the spatial dependence of $U_{vm}$, as can be seen for $\alpha=0^o,\;180^o$, where the critical current can be estimated analytically, because the vortex moves only along the $x$ direction as $J$ increases. The result is $J_c/J_d \simeq 4\, m/\phi_0z_0$ for $\alpha=0^o$, and $J_c/J_d \simeq 0.4\, m/\phi_0z_0$ for $\alpha=180^o$. 
The origin of this tenfold difference can be seen in the plot of $U_{vm}$ shown in Fig.\ \ref{fig.fig2}.a. The driving force is parallel to the $x$-axis in Fig.\ \ref{fig.fig2}.a for $\alpha=0^o$, and antiparallel for $\alpha=180^o$. As can be seen in Fig.\ \ref{fig.fig2}.a, the slope of the potential barrier is much steeper in the positive $x$-direction than in the negative one. For other values of $\alpha$ the depinning process is more complicated, because the vortex motion as $J$ increases is not confined to the direction of drive. For $H$ comparable to or less than $b^s_{max}$ and $H_J$, the equilibrium orientation of ${\bf m}$ is no longer fixed, and $J_c$ depends, besides on $\alpha$ and $m$, also on $H$. Typical results for $\lambda=10.0\xi$ and $ d=z_0= 2.0\xi$ are shown in Fig.\ \ref{fig.fig3}. The $J_c$ vs. $\alpha$ curves are shown in Fig.\ \ref{fig.fig3}.a for $m=0.25\phi_0z_0$, and in Fig.\ \ref{fig.fig3}.b for $m=0.5\phi_0z_0$, for characteristic values of $H$. In both cases the $J_c$ vs. $\alpha$ curves differ considerably from those for a permanent dipole for small $H$, being strongly dependent on $H$, and showing sharp changes in $J_c$ close to $\alpha=180^o$, like those for $m=0.25\phi_0z_0$, $H=0.001\phi_0/\lambda^2$ (Fig.\ \ref{fig.fig3}.a) and $m=0.5\phi_0z_0$, $H=0.01\phi_0/\lambda^2$ (Fig.\ \ref{fig.fig3}.b). The curve labeled $H=0$ in Fig.\ \ref{fig.fig3}.a is the limit of the $J_c$ vs. $\alpha$ curve as $H\rightarrow 0$ with $\alpha$ fixed. The strong dependence of $J_c$ on $H$ is even more evident if $J_c$ is plotted as a function of $H$ for fixed $\alpha$, as shown in Fig.\ \ref{fig.fig3}.c for $m=0.5\phi_0z_0$. In this case it is found that for $\alpha \geq 146.25^o$ the $J_c$ vs. $H$ curves have discontinuities at $H=H_d$, jumping from $J_c >J_d$ for $H<H_d$ to $J_c \sim 0.2J_d $ for $H>H_d$. For $\alpha < 146.25^o$, the dependence of $J_c$ on $H$ is continuous, as illustrated by the curves for $\alpha=135^o$ and $\alpha=90^o$. 
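The tenfold asymmetry between the two drive directions can be checked directly from the slopes of the reduced permanent-dipole potential $U(u)/m=(1-1/\sqrt{1+u^2})/u$ (lengths in $z_0$). A numerical sketch (ours):

```python
import numpy as np

# Compare the steepest slopes (ours) of the reduced permanent-dipole potential
# on the two escape paths from the minimum near u = -1.272.
u = np.linspace(-8.0, 8.0, 160000)           # even count avoids u = 0 exactly
U = (1.0 - 1.0/np.sqrt(u**2 + 1.0))/u
dU = np.gradient(U, u)
up = dU[(u > -1.272) & (u < 1.272)].max()    # barrier toward +x (alpha = 0)
down = (-dU[u < -1.272]).max()               # barrier toward -x (alpha = 180)
print(up, down, up/down)   # approx. 0.50, 0.05, ratio ~10
```

The ratio of the steepest slopes is indeed close to 10, consistent with $J_c/J_d \simeq 4\, m/\phi_0z_0$ versus $0.4\, m/\phi_0z_0$.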
For $\alpha=135^o$, $J_c$ undergoes a rapid change with $H$ around $H_d=0.014\phi_0/\lambda^2$, whereas for $\alpha=90^o$ the change in $J_c$ with $H$ is much slower. It is found that the $J_c$ vs. $H$ curves have no discontinuities if $m$ is smaller than a minimum value which depends on $\alpha$. As shown in Fig.\ \ref{fig.fig3}.d, $H_d$ vanishes at the minimum $m$, and increases above it essentially linearly with $m$. \begin{figure}[h] \centerline{\includegraphics[scale=0.3]{V1EPLFIG4.eps}} \caption{Vortex positions with increasing $J$ for $m=0.5\phi_0z_0$, $\lambda=10.0\xi,\, d=z_0= 2.0\xi$. A: Vortex initial position for $J=0$. C: Positions where the vortex depins. a) and c) vortex trajectories. Dot indicates dipole location. b) and d) $x$ and $y$ coordinates vs. $J$ corresponding to trajectories in a) and c). In d) top curves represent $x/\xi$, bottom curves $y/\xi$. Labels: $H$ in units of $\phi_0/\lambda^2$.} \label{fig.fig4} \end{figure} This complex behavior results from the dependence of $U_{vm}$ on $J$, through ${\bf H}_{T}={\bf H}+{\bf H}_J$, as can be seen by examining how the position of the minimum of $U_{vm}- F_L\,x$, which coincides with the vortex position, changes as $J$ increases (Fig.\ \ref{fig.fig4}). For $\alpha=157.5^o;\;H=0.011\phi_0/\lambda^2<H_d$, and $\alpha=135^o; H=0.011\phi_0/\lambda^2,\, 0.013\phi_0/\lambda^2$, when there are large enhancements in $J_c$ with respect to the permanent dipole value, the position of the minimum undergoes a large displacement, from the initial one on the right side of the dipole (A in Figs.\ \ref{fig.fig4}.a and \ref{fig.fig4}.c) to the final one, where the minimum becomes unstable, on the left side of the dipole (C in Figs.\ \ref{fig.fig4}.a and \ref{fig.fig4}.c). This is accompanied by a flip in the direction of ${\bf H}_{T}$ from near the negative $x$-axis at $J=0$ to one near the positive $x$-axis when the minimum becomes unstable. 
The enhancement in $J_c$ results because the vortex is effectively pinned by a permanent dipole oriented at a small angle with the positive $x$-axis. This can be seen for $\alpha=157.5^o;\; H=0.011\phi_0/\lambda^2$ (Fig.\ \ref{fig.fig4}.b), which shows that most of the vortex displacement from A to C takes place for $0<J<0.5J_d$. In this interval the direction of ${\bf H}_{T}$ rotates from $157.5^o$ to $12^o$ with the $x$-axis. When the vortex depins, at $J=1.35J_d$, ${\bf H}_{T}$ points at $3^o$ with the $x$-axis and has magnitude $H_{T}=0.074\phi_0/\lambda^2$. When there is little or no enhancement in $J_c$ ($\alpha=157.5^o;\;H=0.0115\phi_0/\lambda^2$, and $\alpha=135^o;\; H=0.015\phi_0/\lambda^2$) the position of the minimum undergoes only a small displacement, from A to B in Figs.\ \ref{fig.fig4}.a and \ref{fig.fig4}.c, and ${\bf H}_{T}$ points in a direction away from the $x$-axis. The reason for the discontinuous jumps in $J_c$ is related to the way that the stability of the minimum of $U_{vm}- F_L\,x$ changes as $J$ increases. It is found that for $H>H_d$ the minimum becomes unstable twice, whereas for $H<H_d$ it becomes unstable only once. For $H>H_d$ ($\alpha=157.5^o;\;H=0.0115\phi_0/\lambda^2$ in Fig.\ \ref{fig.fig4}.a) the minimum becomes unstable at B, where $J=0.25J_d$. A stable minimum, not shown in Fig.\ \ref{fig.fig4}.a, appears again at a slightly larger value of $J$, and follows a trajectory close to the A-C curve. However, the vortex depins when the minimum becomes unstable for the first time at point B. For $H=0.011\phi_0/\lambda^2<H_d$ in Fig.\ \ref{fig.fig4}.a, the minimum becomes unstable only once, at point C. The $J_c$ results described above are believed to be representative of low-$T_c$ superconducting films. First, the particular set of parameters used, $d\sim z_0\sim \xi$, are typical ones for superconducting films with magnetic dots placed on top. 
For instance, in the experiments with arrays of magnetic dots with permanent magnetization placed on top of superconducting Nb films, reported in Ref.\cite{pann1}, $d=20nm\sim\xi$. The magnetic dots are separated from the film by a thin protective layer of thickness $\sim 20nm$, so that the distance from the magnetic dipole to the film is $z_0\sim \xi$. Second, since the dependence of $J_c/J_d$ on the model parameters $d$, $z_0$, $m$, $\lambda$, $\xi$, and $H$ is, according to Eqs.\ (\ref{eq.ete}) and (\ref{eq.bvt}), only through the scaled variables $d/z_0$, $m/\phi_0z_0$, and $H\lambda^2/\phi_0$, many superconducting film-dipole systems are equivalent. The London limit is valid for vortices in low-$T_c$ films. However, when a magnetic dipole is placed close to the film, it certainly breaks down if the dipole field destroys superconductivity locally in the film. Roughly speaking, London theory is valid as long as the maximum dipole field at the film is less than the upper critical field, that is, $m/z^3_0 < \phi_0/(2\pi \xi^2)$, or $m/(\phi_0z_0) < (z_0/\xi)^2/2\pi$. For the values used in the above calculations ($z_0=2 \xi$) this gives $m/(\phi_0z_0) < 0.64$, which is larger than the values used in this paper. The London limit would be a better approximation if the present calculations were carried out for larger values of $z_0/\xi$. However, the results for $J_c/J_d$ would be identical to those described above if $m$ and $d$ were scaled by the same factor as $z_0/\xi$. For instance, if $z_0\rightarrow 2z_0$, $J_c/J_d$ would remain the same if $d\rightarrow 2d$ and $m\rightarrow 2m$, but the upper limit of $m/\phi_0z_0$ for the validity of the London approximation would increase by a factor of $4$. The present model also breaks down if $m$ is sufficiently large to create vortices in the film. The threshold value of $m$ for spontaneous vortex creation, estimated as $m\sim 0.7\phi_0z_0$ using the results of Ref.\cite{gmc1}, is larger than the values of $m$ used here. 
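The validity estimates above amount to simple arithmetic; a short check (Python sketch, ours) collects them:

```python
from math import pi

# Arithmetic (ours) behind the London-limit estimates above.
z0_over_xi = 2.0
bound = z0_over_xi**2/(2*pi)      # London bound on m/(phi_0 z_0)
print(round(bound, 2))            # 0.64, above the m values used in the paper
# Doubling z0 (with d and m scaled by the same factor, leaving J_c/J_d fixed)
# raises the bound by a factor of 2^2 = 4:
print(round((2*z0_over_xi)**2/(2*pi)/bound, 6))   # 4.0
```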
The simple model discussed here is relevant to vortex pinning by arrays of magnetic dots, provided that: i) the dots are sufficiently far apart to neglect dipole-dipole interactions between them, ii) the number of vortices per dot is small enough, so that each dot pins at most one vortex, and the vortices are far enough apart to neglect vortex-vortex interactions. Unfortunately, there are no experimental results on vortex pinning by magnetic dots with freely rotating magnetic moments to compare the model predictions with. Instead, consider under which conditions the results described above apply to a system consisting of a typical array of nanomagnets reported in Ref.\cite{ckaw} on top of a thin superconducting film. Assuming that $\xi=20nm$, it follows for $d=z_0=2\xi$ and $\lambda=10\xi$ (as above) that $d=z_0=40nm$, $\lambda=200nm$, and $\phi_0/\lambda^2=500G$. The value $m=0.5\phi_0z_0$ follows if the disk radius and thickness are chosen respectively as $R\sim 50nm$ and $t\sim 10nm$, and the disk magnetization is taken as $M\sim 10^2 \mu_B/(nm)^3$. If the distance between disks in the array is $a\sim 1\mu m$, the dipole-dipole interaction energy, $E_{dd}\sim m^2/a^3$, is small compared with the vortex pinning potential, $U_{vm}\sim -mb^s_{max}$, since $E_{dd}/U_{vm}\sim 10^{-2}$. The values chosen for the disk radius and thickness, for the magnetization, and for the distance between disks are typical of those in Ref.\cite{ckaw}. The results reported above (Fig.\ \ref{fig.fig3}) predict that for $H<b^s_{max}=12G\,$, $J_c$ depends strongly on $H$, like in Fig.\ \ref{fig.fig3}.c, whereas for $H>b^s_{max}=12G$, $J_c$ is that for a permanent dipole, and depends only on $\alpha$. In conclusion then, this paper demonstrates that the critical current for a vortex in a thin superconducting film pinned by a freely rotating dipole can be tuned by a magnetic field applied parallel to the film surfaces. It is found that tuning takes place for a wide range of fields. 
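The physical numbers quoted above follow from $\phi_0\simeq 2.07\times 10^{-7}\,{\rm G\,cm^2}$. An order-of-magnitude sketch (ours; CGS-Gaussian units):

```python
from math import pi

# Order-of-magnitude check (ours) of the estimates above.
phi0 = 2.07e-7                    # flux quantum, G cm^2
xi = 20e-7                        # coherence length, cm (20 nm)
lam, z0, a = 10*xi, 2*xi, 1e-4    # lambda = 200 nm, z0 = 40 nm, spacing 1 um
print(phi0/lam**2)                # ~517 G, quoted as 500 G
b_max = 0.3/(4*pi)*phi0/lam**2    # d/z0 = 1
print(b_max)                      # ~12.4 G, quoted as 12 G
m = 0.5*phi0*z0                   # m = 0.5 phi_0 z_0, in G cm^3
print(m/(a**3*b_max))             # E_dd/U_vm ~ m/(a^3 b_max) ~ 3e-2
```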
For large fields, when the dipole moment is stuck in the field direction, the critical current changes continuously by one order of magnitude when the field is rotated $180^o$, from the direction parallel to the driving force to the direction opposite to it. For fields comparable to the vortex field the critical current is very sensitive to field variations, showing very rapid and even discontinuous changes by as much as one order of magnitude. It is suggested that the results apply to experiments on magnetic dot arrays on top of clean superconducting films. \acknowledgments Research supported in part by the Brazilian agencies CNPq, CAPES, FAPERJ, and FUJB.
\section{Introduction} The equations of perfect hydrodynamics have no internal scale, and hence they describe aspects of the time evolution of systems with vastly different sizes: from galactic clusters and galaxies through stars, planets and human-scale systems, down to the femtometer scale sQGP, created in heavy ion collisions at RHIC~\cite{Adcox:2004mh,Adams:2005dq} and the LHC~\cite{Aamodt:2010jd, Aamodt:2010pa, CMS:2012aa, Chatrchyan:2012ta}. The sQGP is formed in heavy ion collisions after an initial thermalization time of $\mathcal{O}(1\:{\rm fm}/c)$, its evolution lasts $\mathcal{O}(10\:{\rm fm}/c)$, after which hadrons are created in the hadronization. We observe these hadrons, and hydrodynamics may be used to infer the time evolution and the initial state from the hadron final state distributions. Hydrodynamics is based on the local conservation of energy and momentum, expressed through \begin{align} \partial_{\nu}T^{\mu\nu} = 0,\label{e:tmunucons} \end{align} with $T^{\mu\nu}$ being the energy-momentum tensor. In the case of a perfect fluid, this can be written as \begin{align} T^{\mu\nu}=(\epsilon + p) u^{\mu}u^{\nu}-pg^{\mu\nu}, \end{align} where $u^\mu$ is the flow field (subject to the $u_\mu u^\mu =1$ constraint), $\epsilon$ is the energy density, $p$ is the pressure and $g^{\mu\nu}$ is the metric tensor, ${\rm diag}(1,-1,-1,-1)$. The Equation of State (EoS) closes this set of equations: \begin{align} \epsilon = \kappa p \end{align} where $\kappa$ is the EoS parameter, which may depend on the temperature. In this paper we assume a constant value, even though $\kappa(T)$-type solutions of relativistic hydrodynamics are known~\cite{Csanad:2012hr}. 
In the case of the perfect fluid described above, continuity for the entropy density $\sigma=(\epsilon+p)/T$ follows from the above equations, and a similar continuity equation for the density of some conserved charge ($n$) may be prescribed: \begin{align} \partial_{\mu} (\sigma u^\mu) = 0,\\ \partial_{\mu} (n u^\mu) = 0.\label{e:cont0} \end{align} With this, a solution of the equations is a set of fields $(u^\mu,p,n)$ or $(u^\mu,p,\sigma)$, given in terms of coordinates $x^\mu$, where sometimes also the coordinate proper-time is introduced as $\tau=\sqrt{x_\mu x^\mu}$, along with some scaling variable $s(x^\mu)$ that describes the spatial profile of the densities in the solution. The discovery of the fluid nature of the sQGP produced a revival of interest in solutions of hydrodynamics, beyond the well-known Landau-Khalatnikov~\cite{Landau:1953gs,Khalatnikov:1954aa} and Hwa-Bjorken~\cite{Hwa:1974gn,Bjorken:1982qr} solutions. Besides numerical simulations (see e.g. Refs.~\cite{Shen:2014vra,Pang:2016igs,Weller:2017tsr} for recent examples), multiple advanced analytic solutions were found in the last decade~\cite{Csorgo:2003ry,Csorgo:2006ax,Nagy:2007xn,Csanad:2012hr,Borshch:2007uf,Pratt:2008jj,Gubser:2010ze,Csanad:2014dpa}. One important example is the simple, ellipsoidal Hubble-flow described in Ref.~\cite{Csorgo:2003ry}, which describes hadron and photon observables well~\cite{Csanad:2009wc,Csanad:2011jq}. However, this solution lacks acceleration, and while Hubble-flow is natural in the final state, initial pressure gradients may be important in understanding the time evolution of this system. In this paper we attempt to find accelerating perturbations on top of Hubble-flow. 
\section{Perturbative Solutions of Hydrodynamics} The equation for the conservation of energy and momentum, Equation~(\ref{e:tmunucons}), may be projected onto $u^\mu$, producing a Lorentz-parallel and a Lorentz-orthogonal equation, similarly to Refs.~\cite{Csanad:2012hr,Nagy:2007xn}: \begin{align} \kappa u^\mu\partial_\mu p+(\kappa+1)p\partial_\mu u^\mu=0\label{e:energy0}\\ (\kappa+1)pu^\mu\partial_\mu u^\nu=(g^{\mu\nu}-u^\mu u^\nu)\partial_\mu p,\label{e:euler0} \end{align} where the first is called the energy equation, and the second is the Euler equation of relativistic hydrodynamics. If a solution is given in terms of $(u^\mu,p,n)$, then perturbations on top of it may be written as: \begin{align} u^\mu &\rightarrow u^\mu + \delta u^\mu,\label{e:pert:u}\\ p &\rightarrow p + \delta p, \label{e:pert:p}\\ n &\rightarrow n + \delta n,\label{e:pert:n} \end{align} where we restrict ourselves to a conserved charge here, but the continuity equation may be understood for the entropy density just as well. If these perturbations are small, then the equations of hydrodynamics may be expanded to first order. First of all, the perturbations of the flow field must fulfill \begin{align} (u^\mu+\delta u^\mu)(u_\mu+\delta u_\mu)=1 \end{align} which yields the first-order equation \begin{align} u_\mu \delta u^\mu=0.\label{e:orth} \end{align} With this, we may substitute the perturbed fields in Equations~(\ref{e:pert:u})--(\ref{e:pert:n}) into the equations of hydrodynamics, Equations~(\ref{e:cont0})--(\ref{e:euler0}). 
For the continuity equation, we get the following first-order equation: \begin{align} u^\mu\partial_\mu\delta n+\delta n\partial_\mu u^\mu+\delta u^\mu\partial_\mu n + n\partial_\mu\delta u^\mu = 0.\label{e:cont1} \end{align} For the energy equation, we obtain: \begin{align} \kappa\delta u^\mu\partial_\mu p+\kappa u^\mu\partial_\mu\delta p+ (\kappa+1)\delta p\partial_\mu u^\mu+(\kappa+1)p\partial_\mu \delta u^\mu=0.\label{e:energy1} \end{align} And for the Euler equation, the first-order perturbative equation is \begin{align} (\kappa+1)\delta p u^\mu \partial_\mu u^\nu+(\kappa+1) p \delta u^\mu \partial_\mu u^\nu+ (\kappa+1) p u^\mu \partial_\mu \delta u^\nu=(g^{\mu\nu}-u^\mu u^\nu)\partial_\mu \delta p- \delta u^\mu u^\nu \partial_\mu p-u^\mu \delta u^\nu \partial_\mu p.\label{e:euler1} \end{align} To perform a basic consistency check of the above equations, one may investigate the case of the basic solution describing a fluid at rest. The flow and pressure are then \begin{align} u_\mu=(1,0,0,0)\textnormal{ and }p=p_0. \end{align} One may immediately observe that $\partial_\mu u^\mu=0$, $\partial_\mu p=0$, and $u^\mu\partial_\mu=\partial_t$. With this, the linearized energy and Euler equations become \begin{align} \kappa\partial_t\delta p+(\kappa+1)p\partial_\mu\delta u^\mu&=0,\\ (\kappa+1)p\partial_t\delta u^\nu-(u^\mu u^\nu -g^{\mu \nu})\partial_\mu \delta p&=0. \end{align} The time derivative of the energy equation is then \begin{align} \kappa\partial_t^2\delta p+(\kappa+1)p\partial_t\partial_\mu\delta u^\mu=0.\label{e:energyderivative0} \end{align} Let us then introduce the $Q^{\mu\nu}=(u^\mu u^\nu -g^{\mu \nu})$ operator --- which is here nothing else than ${\rm diag}(0,1,1,1)$. 
Then the effect of $Q_{\rho\nu}\partial^\rho$ on the Euler equation is \begin{align} (\kappa+1)p\partial_t\partial_\nu\delta u^\nu +\Delta\delta p=0,\label{e:EulerQd0} \end{align} where we observed that \begin{align} Q_{\rho\nu}\partial^\rho Q^{\mu\nu}\partial^\mu= (\partial_x^2 +\partial_y^2 + \partial_z^2)=\Delta. \end{align} From Equations~(\ref{e:energyderivative0}) and (\ref{e:EulerQd0}), we obtain \begin{align}\label{hul} \partial_t^2\delta p-\frac{1}{\kappa}\Delta\delta p=0, \end{align} which means that, as expected, pressure perturbations behave as waves with a speed of sound of $c_s={1}/{\sqrt{\kappa}}$. \section{Perturbations on Top of Hubble-Flow} As mentioned above, in Ref.~\cite{Csorgo:2003ry} a Hubble-type of self-similar solution is given, with a flow field \begin{align} u^\mu = \frac{x^\mu}{\tau},\label{e:Hubbleflow} \end{align} where again $\tau^2=x^\mu x_\mu$ is the coordinate proper-time. The basic quantity of this solution is the scale variable assuring self-similarity, for which the comoving derivative vanishes: \begin{align} u^\mu \partial_\mu S=0.\label{e:scale} \end{align} Since in this case $u^\mu \partial_\mu=\partial_\tau$, the following simple density and pressure fields can be obtained: \begin{align} n&=n_0\left(\frac{\tau_0}{\tau}\right)^3\mathcal{N}(S),\label{e:Hubblen}\\ p&=p_0 \left(\frac{\tau_0}{\tau}\right)^{3+\frac{3}{\kappa}},\label{e:Hubblep} \end{align} where $\mathcal{N}(S)$ is an arbitrary scale function. This solution can be generalized to describe multipole-type scale variables~\cite{Csanad:2014dpa}, but a standard choice yielding ellipsoidal symmetry is \begin{align} S=\frac{x^2}{X^2}+\frac{y^2}{Y^2}+\frac{z^2}{Z^2} \end{align} with the coordinates given as $x, y, z$, and the axes of the expanding ellipsoid are $X, Y, Z$, all linear in time. 
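Both the sound-speed result and the Hubble solution above lend themselves to a quick symbolic verification. The sketch below (Python with {\tt sympy}; all variable names are ours) checks (i) that a plane wave solves the wave equation for $\delta p$ exactly when $\omega/k=1/\sqrt{\kappa}=c_s$, and (ii) that the fields of Eqs.~(\ref{e:Hubbleflow}) and (\ref{e:Hubblep}) satisfy the energy equation:

```python
import sympy as sp

# Symbolic cross-checks (ours) of two results above.
t, x, y, z, k, w = sp.symbols('t x y z k omega', positive=True)
kappa, p0, tau0 = sp.symbols('kappa p_0 tau_0', positive=True)

# (i) plane-wave dispersion in the fluid-at-rest check
dp = sp.exp(sp.I*(k*x - w*t))
disp = sp.solve(sp.simplify((sp.diff(dp, t, 2) - sp.diff(dp, x, 2)/kappa)/dp), w)
print(disp)                                            # -> [k/sqrt(kappa)]

# (ii) Hubble background: kappa u^mu d_mu p + (kappa+1) p d_mu u^mu = 0
X = (t, x, y, z)
tau = sp.sqrt(t**2 - x**2 - y**2 - z**2)
u = [c/tau for c in X]                                 # u^mu = x^mu/tau
p = p0*(tau0/tau)**(3 + 3/kappa)
div_u = sum(sp.diff(u[i], X[i]) for i in range(4))     # = 3/tau
u_dp = sum(u[i]*sp.diff(p, X[i]) for i in range(4))    # comoving derivative
print(sp.simplify(kappa*u_dp + (kappa + 1)*p*div_u))   # -> 0
```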
We will focus here on the spherical case: \begin{align} S=\frac{r^2}{\dot R_0^2 t^2}, \end{align} where $r$ is the radial coordinate, and $\dot R_0$ describes the expansion velocity of the scale of the solution. This solution yields the following equations for the perturbations of the fields: \begin{align} \delta u^\mu n\frac{\mathcal{N}'}{\mathcal{N}}\partial_\mu S+u^\mu\partial_\mu\delta n +\frac{3\delta n}{\tau}+n\partial_\mu\delta u^\mu&=0\label{e:contH},\\ \kappa u^\mu\partial_\mu \delta p+\frac{3(\kappa+1)}{\tau}\delta p&=-(\kappa+1)p\partial_\mu\delta u^\mu,\label{e:energyH}\\ \frac{\partial_\mu\delta p}{(\kappa+1)p}\left[g^{\mu\nu}-u^\mu u^\nu\right]&=\frac{\kappa-3}{\tau\kappa}\delta u^\nu +u^\mu\partial_\mu\delta u^\nu.\label{e:eulerH} \end{align} A similar setup was investigated in Ref.~\cite{Shi:2014kta}, where the authors found expressions for the ripples propagating on Hubble-flow. Unlike Ref.~\cite{Shi:2014kta}, we will now discuss global perturbations in terms of $\delta u^\mu$, $\delta p$ and $\delta n$. 
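The spherical scale variable above indeed has a vanishing comoving derivative along the Hubble flow, as required by Eq.~(\ref{e:scale}); a short symbolic check (sketch, ours):

```python
import sympy as sp

# Symbolic check (ours): S = r^2/(Rdot0^2 t^2) satisfies u^mu d_mu S = 0
# for the Hubble flow u^mu = x^mu/tau.
t, x, y, z, Rdot0 = sp.symbols('t x y z Rdot_0', positive=True)
X = (t, x, y, z)
tau = sp.sqrt(t**2 - x**2 - y**2 - z**2)
S = (x**2 + y**2 + z**2)/(Rdot0**2*t**2)
u_dS = sum((X[i]/tau)*sp.diff(S, X[i]) for i in range(4))
print(sp.simplify(u_dS))    # -> 0
```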
In this proceedings paper we do not detail the way this solution was obtained, but simply present the result for the flow, pressure and density: \begin{align} \delta u^\mu&=\delta \cdot F(\tau) g(x^\mu) \chi (S)\partial^\mu S,\label{e:du1}\\ \delta p&=\delta\cdot p_0\left(\frac{\tau_0}{\tau}\right)^{3+\frac{3}{\kappa}}\pi (S),\label{e:dp1}\\ \delta n&=\delta \cdot n_0\left(\frac{\tau_0}{\tau}\right)^3 h(x^\mu)\nu (S),\label{e:dn1} \end{align} where $S$ is the scale variable (with vanishing comoving derivative), $\delta$ is the perturbation scale, $c$ is an arbitrary constant, $F, h, g$ are profile functions, while $\pi$, $\chi$, $\nu$ are scale functions subject to the following condition equations: \begin{align}\label{e:chi:s} \frac{\chi'(S)}{\chi(S)}&=-\frac{\partial_\mu\partial^\mu S}{\partial_\mu S\partial^\mu S} -\frac{\partial_\mu S\partial^\mu \ln g(x^\mu)}{\partial_\mu S\partial^\mu S},\\ \label{e:pi:s} \frac{\pi'(S)}{\chi(S)}&=(\kappa+1)\left[F(\tau)\left(u^\mu\partial_\mu g(x^\mu)- \frac{3g(x^\mu)}{\kappa\tau}\right)+F'(\tau)g(x^\mu)\right],\\ \label{e:nu:s} \frac{\nu (S)}{\chi(S)\mathcal{N}'(S)} &=-\frac{F(\tau)g(x^\mu) \partial_\mu S\partial^\mu S}{u^\mu\partial_\mu h(x^\mu)}. \end{align} In simple terms, these equations can be translated to the following conditions: \begin{itemize} \item The scale variable $S$ fulfills $u_\mu \partial^\mu S=0$ with the original flow field. \item The right hand sides of Equations~(\ref{e:chi:s})--(\ref{e:nu:s}) depend only on $S$. \end{itemize} First of all, let us restrict ourselves to the simplest case of $g(x^\mu)=1$ here, in order to describe the way this class of perturbative solutions works. This gives a simple form for $F$ as \begin{align} F(\tau)=\tau+c\tau_0\left(\frac{\tau}{\tau_0}\right)^\frac{3}{\kappa}. 
\end{align} Then let us select an $h$ function that leads to simpler condition equations: \begin{align} h(x^\mu)&=\ln\left(\frac{\tau}{\tau_0}\right)+ \frac{c\kappa}{3-\kappa}\left(\frac{\tau}{\tau_0}\right)^{\frac{3}{\kappa}-1}, (\textnormal{ if } \kappa\neq 3),\label{e:hdef1}\\ h(x^\mu)&=(1+c)\ln\left(\frac{\tau}{\tau_0}\right), (\textnormal{ if }\kappa=3).\label{e:hdef2} \end{align} These choices transform Equations~(\ref{e:chi:s})--(\ref{e:nu:s}) into the simpler equations \begin{align} \frac{\chi'(S)}{\chi(S)}&=-\frac{\partial_\mu\partial^\mu S}{\partial_\mu S\partial^\mu S},\label{e:chi:s2}\\ \frac{\pi'(S)}{\chi(S)}&=\frac{(\kappa+1)(\kappa-3)}{\kappa}, \label{e:pi:s2}\\ \frac{\nu (S)}{\chi(S)\mathcal{N}'(S)}&=-\tau^2\partial_\mu S\partial^\mu S.\label{e:nu:s2} \end{align} While more general solutions can also be found, a broad class of perturbative solutions can already be given if suitable $S$ scale variables and associated $\pi$, $\chi$, $\nu$ and $h$ functions are found. Such suitable scale variables include \begin{align} S=\frac{r^m}{t^m}, \qquad S=\frac{r^m}{\tau^m}, \qquad S=\frac{\tau^m}{t^m}.\label{e:scales} \end{align} In the next section, we will detail one particular sub-class of this class of solutions. \section{A Selected Sub-Class of Perturbative Solutions} If we introduce $h$ as given in Equations~(\ref{e:hdef1})--(\ref{e:hdef2}) and $S$ as $r^m/t^m$, we obtain the following scale functions: \begin{align} \chi(S)&=S^{-\frac{m+1}{m}},\label{e:rntn:chi}\\ \pi(S)&=-\frac{(\kappa+1)(\kappa-3)}{\kappa}m S^{-\frac{1}{m}},\label{e:rntn:pi}\\ \nu (S)&=m^2 S^{\frac{m-1}{m}}\left(S^\frac{2}{m}-1\right)\left(1-S^{-\frac{2}{m}}\right)\mathcal{N}'(S).\label{e:rntn:nu} \end{align} This sub-class of solutions contains an arbitrary parameter $c$, the $\delta$ perturbation scale, the $m$ exponent and the $\mathcal{N}(S)$ scale function (already present in the original Hubble solution). 
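For $g(x^\mu)=1$, the constancy of the right-hand side of Equation~(\ref{e:pi:s2}) can be cross-checked symbolically. The sketch below (our own check) substitutes the above form of $F(\tau)$ into $(\kappa+1)\left[F'(\tau)-3F(\tau)/(\kappa\tau)\right]$, which is what the $g=1$ condition equation reduces to:

```python
# Check that F(tau) = tau + c tau0 (tau/tau0)^(3/kappa) makes
# (kappa+1)[F'(tau) - 3 F(tau)/(kappa tau)] equal to the constant
# (kappa+1)(kappa-3)/kappa, i.e. a pure function of S (here, a constant).
import sympy as sp

tau, tau0, c, kappa = sp.symbols('tau tau0 c kappa', positive=True)
F = tau + c * tau0 * (tau / tau0) ** (3 / kappa)

lhs = (kappa + 1) * (sp.diff(F, tau) - 3 * F / (kappa * tau))
rhs = (kappa + 1) * (kappa - 3) / kappa
assert sp.simplify(lhs - rhs) == 0
```

The $c$-dependent terms cancel exactly, so the arbitrary constant $c$ indeed survives as a free parameter of the sub-class.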
Let us choose $m=-1$; then the scale functions are \begin{align} \chi(S)&=1,\\ \pi(S)&=\frac{(\kappa+1)(\kappa-3)}{\kappa} S,\\ \nu(S)&= \left(1-S^2\right)^2\mathcal{N}'(S). \end{align} Let us furthermore choose a suitable $\mathcal{N}$, leading to a Gaussian profile: \begin{align} \mathcal{N}(S)=e^{-bS^{-2}}=e^{-b\frac{r^2}{t^2}}. \end{align} With these, the perturbed fields (for $\kappa\neq 3$; the special case of $\kappa=3$ is discussed in Equation~(\ref{e:hdef2})) are as follows: \begin{align} \delta u^\mu&= \delta \cdot \left[\tau+c\tau_0\left(\frac{\tau}{\tau_0}\right)^\frac{3}{\kappa}\right] \partial^\mu S,\label{e:durm}\\ \delta p&= \delta\cdot p_0\left(\frac{\tau_0}{\tau}\right)^{3+\frac{3}{\kappa}}\frac{(\kappa+1)(\kappa-3)}{\kappa}S,\label{e:dprm}\\ \delta n&= \delta \cdot n_0\left(\frac{\tau_0}{\tau}\right)^3 \left[\ln\left(\frac{\tau}{\tau_0}\right)+ c\frac{\kappa}{3-\kappa}\left(\frac{\tau}{\tau_0}\right)^{\frac{3}{\kappa}-1}\right] S^{-3}\left(1-S^2\right)^2 2b\mathcal{N}(S).\label{e:dnrm} \end{align} For the visualisation of these fields, let us choose parameter values from Refs.~\cite{Csanad:2009wc,Csanad:2011jq} as $\tau_0 = 7.7\:{\rm fm}/c$, $\kappa=10$ and $b=-0.1$. On the top left panel of Figure~\ref{f:du}, a slice of the $x$ component of the flow field is shown with $\tau=6$ fm/$c$, $c=-3$ and $\delta=0.001$. The perturbation is most important in the center, where it even reverses the direction of the field, but it vanishes at large radial distances. The top right panel indicates the $c$ and $\delta$ dependence of the relative perturbed fields. We observe here that for this particular solution, the relative perturbation increases to very large values for very small distances. The bottom panel of Figure~\ref{f:du} indicates the transverse flow field for various proper-time slices, showing that the direction of the flow is also perturbed at certain distances. Next, let us investigate the pressure perturbation. 
The top panels of Figure~\ref{f:dp} show the pressure field with fixed values of $\delta=0.001$ and $\tau=6$ fm/$c$ (there is no $c$-dependence in $p$). Again, it is clear that the perturbation vanishes for increasing radial distance, and increases for small distances. An important next step is to present a sub-class of perturbative solutions that does not exhibit this feature. One may also note that $\delta$ controls the perturbation magnitude, as also visible in the ratio plots in the top right panel of Figure~\ref{f:dp}. On the bottom panel, the time evolution of the pressure perturbation is given in the transverse plane, showing a vanishing perturbation for large times. Finally, let us investigate the behavior of the density $n$. The left panel of Figure~\ref{f:dn} (with $\tau=6$ fm/$c$, $\delta=0.001$ and $c=-3$) indicates again a vanishing perturbation for large distances. The right panel shows the relative perturbation and its dependence on $\delta$ and $c$. As can also be seen in the figures, the perturbations become larger than the original fields for very small radial distances and large $\delta$ values, i.e., the method breaks down in these cases. This sets a limit to the applicability of the particular investigated perturbative solutions. Note, however, that this is not necessarily a general property of the whole class of solutions. With these fields at hand, and utilizing a freeze-out hypersurface similarly to, e.g., Ref.~\cite{Csanad:2009wc}, one may evaluate observables such as the transverse momentum distribution, flow, and Bose--Einstein correlation radii. We plan to do this in a subsequent publication. 
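The radial behavior described above is easy to reproduce numerically. The sketch below is our own illustrative helper, not the paper's plotting code; it assumes the usual Hubble background pressure $p=p_0(\tau_0/\tau)^{3+3/\kappa}$, so that for $m=-1$ (where $S=t/r$) the ratio reduces to $\delta p/p=\delta\,\frac{(\kappa+1)(\kappa-3)}{\kappa}\,S$:

```python
# Relative pressure perturbation for the m = -1 sub-class, where S = t/r.
# Parameter defaults follow the text: delta = 0.001, kappa = 10.
def relative_pressure_perturbation(t, r, delta=0.001, kappa=10.0):
    S = t / r                                   # scale variable for m = -1
    return delta * (kappa + 1.0) * (kappa - 3.0) / kappa * S

# the ratio grows toward the center and vanishes at large radii (t = 6 fm/c)
near = relative_pressure_perturbation(6.0, 0.5)
far = relative_pressure_perturbation(6.0, 50.0)
assert abs(near) > abs(far)
```

This reproduces the qualitative picture of the ratio plots: the perturbative treatment is reliable at large $r$ but breaks down near the center.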
\begin{figure}[H] \centering \includegraphics[width=0.496\linewidth]{du} \includegraphics[width=0.496\linewidth]{duratio}\\ \hspace{0.02\linewidth}\includegraphics[width=0.975\linewidth]{du2d} \caption{The perturbed flow field component ($u_x+\delta u_x$) is shown in the left plot as a function of $x$, for $\tau=6$ fm/$c$ (the other parameters are given in the text). The right plot indicates the relative change $(u_x+\delta u_x)/u_x$ for various $\delta$ and $c$ values. The bottom plot shows the flow perturbation field $(\delta u_x,\delta u_y)$ in the transverse plane, for various proper-time values.} \label{f:du} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.496\linewidth]{dp} \includegraphics[width=0.496\linewidth]{dpratio}\\ \hspace{0.03\linewidth}\includegraphics[width=0.965\linewidth]{dp2d} \caption{The perturbed pressure $p+\delta p$ is shown in the left plot as a function of $x$, for $\tau=6$ fm/$c$ (the other parameters are given in the text). The right plot indicates the relative change $(p+\delta p)/p$ for various $\delta$ and $c$ values. The bottom plot shows the pressure perturbation $\delta p$ in the transverse plane, for various proper-time values.} \label{f:dp} \end{figure} \begin{figure}[H] \centering \includegraphics[width=0.495\linewidth]{dn} \includegraphics[width=0.495\linewidth]{dnratio} \caption{The perturbed density $n+\delta n$ is shown in the left plot as a function of $x$, for $\tau=6$ fm/$c$ (the other parameters are given in the text). The right plot indicates the relative change $(n+\delta n)/n$ for various $\delta$ and $c$ values. } \label{f:dn} \end{figure} \section{Conclusions} In this paper we presented the method of obtaining perturbative solutions of relativistic hydrodynamics on top of known solutions. A new perturbative class of solutions on top of Hubble flow was discussed, and the modified fields were investigated in detail. 
These fields were scaled by a single perturbation parameter $\delta$, and several scale functions appeared, subject to condition equations. As a subsequent step, we plan to describe more particular sub-classes of solutions. We also plan to calculate the modification of observables and, in the case of realistic geometries, to compare them to measurements. \section*{Acknowledgments} The authors are supported by the New National Excellence program of the Hungarian Ministry of Human Capacities and the NKFIH grant FK-123842. M. Cs. was also supported by the J\'anos Bolyai Research Scholarship of the Hungarian Academy of Sciences. \bibliographystyle{../../../prlsty}
\section{Introduction} In formation shape control, a group of interacting mobile agents are commanded to acquire and maintain a desired geometric pattern in space. A well-known method for solving this problem involves regulating a set of inter-agent distances to values prescribed by the desired shape \citep{de2019formation,krick2009stabilisation}. This method, which is commonly referred to as distance-based formation control \citep{oh2015survey}, has the main advantage of being implementable in a fully decentralized manner. However, this advantage comes with the limitation that the set of inter-agent distances may not uniquely define the formation position and orientation in space. Mathematically, this nonuniqueness is related to the existence of multiple equilibrium points in the multi-agent system distance dynamics. The question then is how to steer the system away from the undesired equilibria and towards the equilibrium corresponding to the desired formation shape (up to translation and rotation). The above question is partially answered by requiring the formation graph to be rigid, which imposes a minimum number of distances to be controlled. This reduces the unwanted equilibrium points to formations that are flipped or reflected versions of the desired shape. Here, the agents' initial conditions determine whether the formation converges to one that is isomorphic to the desired formation or to a flipped/reflected formation. This implies that rigid distance-based controllers are only locally stable. In recent years, some methods have been introduced to address the limitation of distance-based formation controllers. The common feature of these methods is the use of an additional controlled variable (or constraint) that is capable of distinguishing formation ambiguities. In \cite{mou2015target}, an approach called target-point control was introduced to rule out the undesirable equilibria in planar formations. 
The inter-agent distances and the order of agents were used to calculate the desired position of the agents, i.e., target position, in a local coordinate frame. However, the leader and the first follower cannot be collocated at time zero and the leader's motion needs to satisfy certain conditions. A similar method was used in \cite{kang2017distance} with the name ``desired order of neighbors''. In \cite{ferreira2016distance}, inter-agent distance and angular constraints were employed to enlarge the region of attraction to the desired planar formation by a proper choice of control gains. In \cite{anderson2017formation}, the signed area of a triangle was used as the second controlled variable, and convergence analyses were conducted for special cases of 3- and 4-agent planar formations. In \cite{sugie2018hierarchical}, the authors further explored the idea of \cite{anderson2017formation} by applying the area constraints to only a subset of agents and thus extended the method to $n$ agents. However, the result in \cite{sugie2018hierarchical} required the triangulated formation to be composed of equilateral triangles. Recently, in \cite{liu2019further}, we extended the distance/signed area method to directed 2D formations of $n$ agents and introduced the concept of strong congruency. In this result, the triangulations were not restricted to equilateral ones; however, certain triangulation and control gain conditions had to be met to prove the asymptotic stability of the desired formation. In \cite{cao2019almost}, a specific control gain value in the signed area term was found that causes the multi-agent system to have one stable equilibrium point corresponding to the desired formation and some discrete unstable equilibria. A switching control strategy based on the signed area or edge angle was proposed in \cite{liu2020switching} to remove the restrictions on the shape of the desired formation. 
In \cite{liu2020distance}, we designed a non-switching, distance/edge angle-based controller that ensures the almost-global asymptotic stability of the desired formation under certain conditions on the triangulations. Recently, in \cite{jing2020multiagent}, a formation controller based on angles and the ``sign'' of the triangulated framework was shown to guarantee almost-global convergence of the angle errors. For the 3D formation problem, relatively few results exist. For example, \cite{ferreira2016adaptive} extended the method in \cite{ferreira2016distance} to 3D by using distance and volume constraints. However, unless the control gains satisfy a persistency of excitation-type condition, the system under the control of \cite{ferreira2016adaptive} may still converge to an undesired formation shape. In \cite{lan2018adaptive}, volume constraints were applied to a 4-agent system to distinguish the two possible orientations of a tetrahedron under the assumption that three of the agents are at their desired distances. In this paper, we address the problem of using additional feedback variables in the distance-based 3D formation controller. Specifically, we introduce a new method called the \textit{orthogonal basis approach} which decomposes the feedback variables and control inputs into three orthogonal subspaces. By applying this decomposition to directed frameworks formed by tetrahedrons, we are able to guarantee the \textit{global} asymptotic stability of the desired formation. Moreover, we can show the desired formation is locally exponentially stable, which provides robustness to the system. These results are achieved with no limitations on the ``tetrahedralizations'' of the desired formation, control gains, or number of agents. Thus, this work greatly extends the applicability of the dual-feedback-variable approach for avoiding 3D formation ambiguities. 
To the best of our knowledge, it is the first result to show convergence to the desired 3D formation for all initial conditions, including collocated and collinear agents. A preliminary version of our orthogonal basis approach appeared in \cite{liu2020ortho}, where it was applied to planar formations and achieved almost-global asymptotic stability of the desired formation. \section{Background Material \label{sec:back-mat}} Some background material is reviewed in this section. Throughout the paper, we use the following vector notation: $x\in \mathbb{R}^{n}$ or $x=\left[ x_{1},\ldots ,x_{n}\right] $ denotes an $n\times 1$ (column) vector, and $x=\left[ x_{1},\ldots ,x_{n}\right] $ where $x_{i}\in \mathbb{R}^{m}$ denotes the stacked $mn\times 1$ vector. \subsection{Graph Theory \label{Sec:graph-theory}} An undirected graph $G$ is represented by a pair $(\mathcal{V},\mathcal{E}^{u})$, where $\mathcal{V}=\{1,2,\ldots ,N\}$ is the set of vertices and $\mathcal{E}^{u}=\{(i,j)\,|\,i,j\in \mathcal{V},i\neq j\}\subset \mathcal{V}\times \mathcal{V}$ is the set of undirected edges. A directed graph $G$ is a pair $(\mathcal{V},\mathcal{E}^{d})$ where the edge set $\mathcal{E}^{d}$ is directed in the sense that if $(i,j)\in \mathcal{E}^{d}$ then $i$ is the source vertex of the edge and $j$ is the sink vertex. The set of neighbors of vertex $i\in \mathcal{V}$ is defined as $\mathcal{N}_{i}(\mathcal{E}^{d})=\{j\in \mathcal{V}\,|\,(i,j)\in \mathcal{E}^{d}\}$. For $i\in \mathcal{V}$, the out-degree of $i$ (denoted by $\text{out}(i)$) is the number of edges in $\mathcal{E}^{d}$ whose source is vertex $i$ and whose sinks are in $\mathcal{V}-\{i\}$. If $p_{i}\in \mathbb{R}^{3}$ is the coordinate of the $i$th vertex of a 3D graph, then a framework $F$ is defined as the pair $(G,p)$ where $p=\left[ p_{1},\ldots ,p_{N}\right] \in \mathbb{R}^{3N}$. 
Let the map $\mathcal{T}:\mathbb{R}^{3}\rightarrow \mathbb{R}^{3}$ be such that $\mathcal{T}(x)=\mathcal{R}x+d$ where $\mathcal{R}\in SO(3)$ and $d\in \mathbb{R}^{3}$. A framework $F=\left( G,p\right) $ is rigid in $\mathbb{R}^{3}$ if all of its continuous motions satisfy $p_{i}(t)=\mathcal{T}(p_{i})$ for all $i=1,\ldots ,N$ and $\forall t\geq 0$ \citep{asimow1979rigidity,izmestiev2009infinitesimal}. A 3D rigid framework is minimally rigid if and only if $\left\vert \mathcal{E}^{u}\right\vert =3N-6$ \citep{anderson2008rigid}. The edge function of a minimally rigid framework, $\gamma :\mathbb{R}^{3N}\rightarrow \mathbb{R}^{3N-3(3+1)/2}$, is defined as \begin{equation} \gamma (p)=\left[ \ldots ,\left\Vert p_{i}-p_{j}\right\Vert ,\ldots \right] , \, (i,j) \in \mathcal{E}^{u} \label{edge function} \end{equation} such that its $l$th component, $\left\Vert p_{i}-p_{j}\right\Vert $, relates to the $l$th edge of $\mathcal{E}^{u}$ connecting the $i$th and $j$th vertices. Frameworks $(G,p)$ and $(G,\hat{p})$ are equivalent if $\gamma (p)=\gamma (\hat{p})$, and are congruent if $\left\Vert p_{i}-p_{j}\right\Vert =\left\Vert \hat{p}_{i}-\hat{p}_{j}\right\Vert $ for all distinct vertices $i$ and $j$ in $\mathcal{V}$ \citep{jackson2007notes}. If rigid frameworks $(G,p)$ and $(G,\hat{p})$ are equivalent but not congruent, they are flip- or flex-ambiguous \citep{anderson2008rigid}. Frameworks based on directed graphs are required to be constraint consistent and persistent to maintain their shape \citep{yu2007three}. A persistent graph is said to be minimally persistent if no single edge can be removed without losing persistence. A sufficient condition for a directed graph $\left( \mathcal{V},\mathcal{E}^{d}\right) $ in $\mathbb{R}^{3}$ to be constraint consistent is $\text{out}(i)\leq 3$ for all $i\in \mathcal{V}$ (see Lemma 5 of \cite{yu2007three}). 
A necessary condition for a graph in $\mathbb{R}^{3}$ to be minimally persistent is $\text{out}(i)\leq 3$ for all $i\in \mathcal{V}$, while a sufficient condition is minimal rigidity \citep{yu2007three}. A minimally persistent graph can be constructed by the 3D Henneberg insertion of type I\footnote{As shown in \cite{grasegger2018lower}, the 3D Henneberg insertion of type I does not allow edge splitting operations \citep{eren2005information} in the graph construction procedure.} \citep{grasegger2018lower}. This method starts with three vertices with three directed edges, and grows the graph by iteratively adding a vertex with three outgoing edges. Henceforth, we refer to a framework constructed in this manner as a 3D Henneberg framework. \subsection{Strong Congruency \label{Sec: Strong Congr}} The concept of congruency defined above can distinguish between two frameworks that are flip- or flex-ambiguous, but does not capture a third type of ambiguity --- a reflection of the whole framework. In \cite{liu2019further}, we introduced the concept of \textit{strong congruency} to handle this type of ambiguity in 2D. Specifically, Henneberg frameworks $F=(G,p)$ and $\hat{F}=(G,\hat{p})$ are said to be \textit{strongly congruent} if they are congruent and not reflected versions of each other. In \cite{liu2019further} (Lemma 2.2), it was shown that the signed area of a triangular framework, in addition to a set of edge lengths, can be used to ensure strong congruency in 2D. 
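The role of the signed area in 2D can be illustrated with a short numeric sketch (the helper names below are our own, not from the cited works): a mirrored triangle has identical edge lengths, hence is congruent to the original, but its signed area flips sign, so the two frameworks are not strongly congruent.

```python
# Reflection preserves all pairwise distances but negates the signed area.
import math

def signed_area(p1, p2, p3):
    # half the z-component of the 2D cross product (p2-p1) x (p3-p1)
    return 0.5 * ((p2[0]-p1[0])*(p3[1]-p1[1]) - (p2[1]-p1[1])*(p3[0]-p1[0]))

def edge_lengths(p1, p2, p3):
    d = lambda a, b: math.hypot(a[0]-b[0], a[1]-b[1])
    return sorted([d(p1, p2), d(p2, p3), d(p3, p1)])

tri = [(0.0, 0.0), (2.0, 0.0), (0.5, 1.5)]
mirrored = [(x, -y) for x, y in tri]        # reflection about the x-axis

assert edge_lengths(*tri) == edge_lengths(*mirrored)   # congruent
assert signed_area(*tri) == -signed_area(*mirrored)    # not strongly congruent
```

Distance feedback alone therefore cannot separate the two triangles; the extra signed quantity can.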
In order to extend this concept to 3D, we will employ the \textit{signed volume of a tetrahedron}, $V:\mathbb{R}^{12}\rightarrow \mathbb{R}$ \citep{mallison1935use}: \begin{align} V(p)=& \frac{1}{6}\det \left[ \begin{array}{cccc} 1 & 1 & 1 & 1 \\ p_{1} & p_{2} & p_{3} & p_{4} \end{array} \right] \notag \\ =& - \frac{1}{6}\left( p_{1}-p_{4}\right) ^{\intercal }\left[ \left( p_{2}-p_{4}\right) \times \left( p_{3}-p_{4}\right) \right] \label{eq:signed-volume} \end{align} where $p=\left[ p_{1},p_{2},p_{3},p_{4}\right] $. If the order of vertices $1,2,3$ is counterclockwise (resp., clockwise) from an observer located at vertex $4$ facing the $1$-$2$-$3$ plane, then (\ref{eq:signed-volume}) is positive (resp., negative). Moreover, this quantity is zero if any three vertices are collinear or the four vertices are coplanar. As an example of the use of the signed volume, all the frameworks in Figure \ref{fig:3d-scgt} are congruent, but only frameworks (a), (b), and (c) are strongly congruent. Hereafter, we denote the set of all $3$-dimensional frameworks that are strongly congruent to framework $F$ by $\text{SCgt}^{3}(F)$. \begin{figure}[tbph] \centering \adjincludegraphics[scale=0.45, trim={{0.0\width} {0.0\height} {0.0\width} {0.0\height}},clip]{scgt-3D} \caption{All four 3D frameworks are congruent, but only the ones in (a), (b), and (c) are strongly congruent.} \label{fig:3d-scgt} \end{figure} A 3D Henneberg framework can be divided into tetrahedral sub-frameworks. 
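The determinant and triple-product forms in (\ref{eq:signed-volume}), together with the stated sign and degeneracy properties, can be checked numerically. The sketch below uses NumPy and our own helper name:

```python
# Signed volume of a tetrahedron via the 4x4 determinant form.
import numpy as np

def signed_volume(p1, p2, p3, p4):
    m = np.vstack([np.ones(4), np.column_stack([p1, p2, p3, p4])])
    return np.linalg.det(m) / 6.0

p1, p2, p3, p4 = (np.array(v, float) for v in
                  [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)])

# the determinant form agrees with the triple-product form
v_det = signed_volume(p1, p2, p3, p4)
v_triple = -np.dot(p1 - p4, np.cross(p2 - p4, p3 - p4)) / 6.0
assert np.isclose(v_det, v_triple) and np.isclose(v_det, 1/6)

# swapping two vertices (reversing the orientation) flips the sign
assert np.isclose(signed_volume(p2, p1, p3, p4), -v_det)
# coplanar vertices give zero volume
assert np.isclose(signed_volume(p1, p2, p3, np.array([1.0, 1.0, 0.0])), 0.0)
```

The sign flip under an orientation reversal is exactly what lets the signed volume separate reflected tetrahedra that share all edge lengths.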
In such cases, the signed volume of a 3D Henneberg framework with $N$ vertices and directed edge set $\mathcal{E}^{d}$, $\mathbf{V}:\mathbb{R}^{3N}\rightarrow \mathbb{R}^{N-3}$, is defined as \begin{align} & \mathbf{V}(p)=\left[ \ldots ,\frac{1}{6}\det \left[ \begin{array}{cccc} 1 & 1 & 1 & 1 \\ p_{i} & p_{j} & p_{k} & p_{l} \end{array} \right] ,\ldots \right] , \notag \\ & \qquad \forall (l,i),(l,j),(l,k)\in \mathcal{E}^{d}-\{(2,1),(3,1),(3,2)\} \end{align} where $p=\left[p_{1},\ldots ,p_{N}\right]$ and its $n$th component is the signed volume of the $n$th tetrahedron constructed with vertices $i<j<k<l$. For example, the signed volume of the framework in Figure \ref{fig:signed-volume-framework-example} is given by \begin{equation} \mathbf{V}(p)=\left[ \begin{array}{l} - \frac{1}{6}(p_{1}-p_{4})^{\intercal }\left[ (p_{2}-p_{4})\times (p_{3}-p_{4}) \right] \\ - \frac{1}{6}(p_{1}-p_{5})^{\intercal }\left[ (p_{3}-p_{5})\times (p_{4}-p_{5}) \right] \end{array} \right] \label{Psi} \end{equation} where the first element is positive and the second one is negative. Note that if $\hat{F}$ is a reflected version of $F$, then $\mathbf{V}(p)=-\mathbf{V}(\hat{p})$. \begin{figure}[tbph] \centering \includegraphics[width=0.45\linewidth]{signed-volume-framework-example} \caption{Framework with two tetrahedrons.} \label{fig:signed-volume-framework-example} \end{figure} \begin{lemma} \label{lem:scgt-3d}3D Henneberg frameworks $F=(G,p)$ and $\hat{F}=(G,\hat{p})$ are strongly congruent if and only if they are equivalent and $\mathbf{V}(p)=\mathbf{V}(\hat{p})$. (See Appendix \ref{Sec:proof:scgt-3d} for proof.) \end{lemma} \subsection{Stability Results} Here, we recall some results concerning the stability of nonlinear systems. \begin{lemma} \label{lem:global-ISS}\citep{khalil2015nonlinear} Suppose $f(x,u)$ is continuously differentiable and globally Lipschitz in $\left[ x,u\right] $. 
If $\dot{x}=f(x,0)$ has a globally exponentially stable (GES) equilibrium point at the origin, then the system $\dot{x}=f\left( x,u\right) $ is input-to-state stable (ISS). \end{lemma} \begin{lemma} \label{lem:global-interconn}\citep{khalil2015nonlinear} If the systems $\dot{\eta}=f_{1}(\eta ,\xi )$ and $\dot{\xi}=f_{2}(\xi ,u)$ are ISS, then the cascade connection \begin{equation} \dot{\eta}=f_{1}(\eta ,\xi ),\quad \dot{\xi}=f_{2}(\xi ,u) \end{equation} is ISS. Consequently, if $\dot{\eta}=f_{1}(\eta ,\xi )$ is ISS and the origin of $\dot{\xi}=f_{2}(\xi ,0)$ is globally asymptotically stable (GAS), then the origin of the cascade connection \begin{equation} \dot{\eta}=f_{1}(\eta ,\xi ),\quad \dot{\xi}=f_{2}(\xi ,0) \end{equation} is GAS. \end{lemma} \section{Problem Statement \label{Probl Stat}} Consider a system of mobile agents described by directed framework $F(t)=(G,p(t))$ where $G=(\mathcal{V},\mathcal{E})$, $\left\vert \mathcal{V}\right\vert =N$, $\left\vert \mathcal{E}\right\vert =3N-6$, $N\geq 4$, $p=\left[ p_{1},\ldots ,p_{N}\right] $, and $p_{i}\in \mathbb{R}^{3}$ is the position of agent $i$. The directed edge $(j,i)\in \mathcal{E}$ means that agent $j$ can measure its relative position to agent $i$, $p_{ji}:=p_{i}-p_{j}$, but not the opposite. We assume agent $j$ can sense all relative positions $p_{ji}$ where $i\in \mathcal{N}_{j}(\mathcal{E})$. The agents are assumed to be governed by the dynamics \begin{equation} \dot{p}_{i}=u_{i},\quad \forall i\in \mathcal{V} \label{SI model} \end{equation} where $u_{i}\in \mathbb{R}^{3}$ is the control input. 
The desired formation is characterized by a set of desired distances $d_{ji}$, $(j,i)\in \mathcal{E}$ and a set of desired signed volumes $V_{ijkl}^{\ast }=V(p_{i}^{\ast },p_{j}^{\ast },p_{k}^{\ast },p_{l}^{\ast })$, $(l,i),(l,j),(l,k)\in \mathcal{E}$ (see (\ref{eq:signed-volume})) where $p_{i}^{\ast }$ is the desired position of agent $i$.\footnote{Note that $p_{i}^{\ast }$ is not explicitly used by the control since we are not controlling the global position of the agents. This variable is only mentioned so we can formally define the desired formation.\bigskip} This gives the desired framework $F^{\ast }=\left( G,p^{\ast }\right) $ where $p^{\ast }=\left[ p_{1}^{\ast },\ldots ,p_{N}^{\ast }\right] $ and $\left\Vert p_{j}^{\ast }-p_{i}^{\ast }\right\Vert =d_{ji}$. We assume $F^{\ast }$ satisfies the following conditions: i) $F^{\ast }$ is non-degenerate in 3D space, i.e., all tetrahedrons have nonzero volume; ii) out$(i)=i-1$ for $i=1,2,3$ and out$(i)=3$ for $i=4,\ldots ,N$; and iii) if there is an edge between agents $i$ and $j$, the direction must be $i\leftarrow j$ if $i<j$. Given these conditions, we say that agent 1 is the leader, agent 2 is the first follower, agent 3 is the secondary follower, and agents $i\geq 4$ are ordinary followers. Note that this nomenclature is the 3D extension of the leader-first-follower, minimally persistent framework discussed in \cite{summers2011control}, which was called an acyclic minimally structural persistent framework in \cite{lan2018adaptive}. Our control objective is to design $u_{i}$, $\forall i\in \mathcal{V}$ such that \begin{equation} F(t)\rightarrow \text{SCgt}^{3}(F^{\ast })\text{ as }t\rightarrow \infty \label{objective} \end{equation} for the largest set of initial conditions possible. \section{Orthogonal Basis} Ambiguous frameworks in 3D can be discerned by employing the signed volume of the framework in the formation controller. 
However, this variable may introduce new undesired equilibria since the distance and volume constraints will interfere with each other at certain agent positions. In other words, these two variables do not always constitute an orthogonal space. To remedy this situation, we will introduce projection variables that are always orthogonal. \begin{figure}[tbph] \centering \includegraphics[width=0.5\linewidth]{3D-framework} \caption{Tetrahedron framework.} \label{fig:3d-framework} \end{figure} Consider the tetrahedron in Figure \ref{fig:3d-framework}. If $\left\Vert p_{ji}\right\Vert \neq 0$ and if there exists a vector $n_{ijk}$ such that \begin{equation} n_{ijk}^{\intercal }p_{ki}=0\quad \text{and}\quad n_{ijk}^{\intercal }p_{kj}=0, \end{equation} then $p_{ji}^{\intercal }n_{ijk}=0$, $p_{ji}^{\intercal }\left( n_{ijk}\times p_{ji}\right) =0$, $n_{ijk}^{\intercal }\left( n_{ijk}\times p_{ji}\right) =0$, and we say $\{p_{ji},n_{ijk}\times p_{ji},n_{ijk}\}$ is the \textit{orthogonal basis of vertex} $l$. 
With this in mind, the projection variables for vertex $l$ are defined as \begin{subequations} \label{eq:3D-OB:projections} \begin{align} \zeta _{l}=& p_{li}^{\intercal }p_{ji}=\left\Vert p_{li}\right\Vert ^{2}-p_{li}^{\intercal }p_{lj} \label{eq:3D-OB:zeta-l} \\ \varphi _{l}=& p_{li}^{\intercal }(n_{ijk}\times p_{ji})=p_{lj}^{\intercal }(n_{ijk}\times p_{ji}) \label{eq:3D-OB:varphi-l} \\ \vartheta _{l}=& p_{li}^{\intercal }n_{ijk}=p_{lj}^{\intercal }n_{ijk}=p_{lk}^{\intercal }n_{ijk}, \label{eq:3D-OB:vartheta-l} \end{align} \end{subequations} where \begin{equation} n_{ijk}=\left\{ \begin{array}{ll} p_{ki}\times p_{kj}, & \text{ if }\{i,j,k\}\neq \{1,2,3\} \\ & \\ \dfrac{p_{31}\times p_{32}}{\left\Vert p_{31}\times p_{32}\right\Vert }, & \text{ if }\{i,j,k\}=\{1,2,3\} \\ & \text{ and }\left\Vert p_{31}(t)\times p_{32}(t)\right\Vert \neq 0 \\ & \\ n_{123}^{+}, & \text{ if }\{i,j,k\}=\{1,2,3\} \\ & \text{ and }\left\Vert p_{31}(0)\times p_{32}(0)\right\Vert =0 \end{array} \right. \label{eq:n} \end{equation} and $n_{123}^{+}$ is any unit vector satisfying $p_{31}^{\intercal }n_{123}^{+}=0$ and $p_{32}^{\intercal }n_{123}^{+}=0$. This unit vector can be computed using Algorithm \ref{alg:pick-n123} below. Notice that (\ref{eq:3D-OB:zeta-l}) (resp., (\ref{eq:3D-OB:varphi-l}); (\ref{eq:3D-OB:vartheta-l})) is associated with the projection of $p_{li}$ onto the direction of $p_{ji}$ (resp., $n_{ijk}\times p_{ji}$; $n_{ijk}$). The reason for differentiating $\{i,j,k\}=\{1,2,3\}$ from the case $\{i,j,k\}\neq \{1,2,3\}$ in (\ref{eq:n}) is that our framework is composed of multiple tetrahedrons where vertices $\{1,2,3\}$ have different out-degree properties than the others (see Section \ref{Probl Stat}). 
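For the generic branch $n_{ijk}=p_{ki}\times p_{kj}$, the mutual orthogonality of $\{p_{ji},\,n_{ijk}\times p_{ji},\,n_{ijk}\}$, the resulting orthogonal decomposition of $p_{li}$, and the link between $\vartheta_{l}$ and the signed volume can all be checked numerically. The sketch below (our own check, with a randomly generated non-degenerate tetrahedron) illustrates this:

```python
# Numeric check of the orthogonal basis of vertex l, n_ijk = p_ki x p_kj.
import numpy as np

rng = np.random.default_rng(0)
pi, pj, pk, pl = rng.standard_normal((4, 3))

p_ji, p_ki, p_kj, p_li = pi - pj, pi - pk, pj - pk, pi - pl
n = np.cross(p_ki, p_kj)
basis = [p_ji, np.cross(n, p_ji), n]

# the three basis directions are mutually orthogonal
for a in range(3):
    for b in range(a + 1, 3):
        assert abs(np.dot(basis[a], basis[b])) < 1e-9

# the projections of p_li along the basis reconstruct p_li
zeta_l = np.dot(p_li, p_ji)
varphi_l = np.dot(p_li, np.cross(n, p_ji))
vartheta_l = np.dot(p_li, n)
recon = sum(c * e / np.dot(e, e) for c, e in
            zip([zeta_l, varphi_l, vartheta_l], basis))
assert np.allclose(recon, p_li)

# vartheta_l equals -6 times the signed volume V(p_i, p_j, p_k, p_l)
V = -np.dot(pi - pl, np.cross(pj - pl, pk - pl)) / 6.0
assert np.isclose(vartheta_l, -6.0 * V)
```

Since the basis spans $\mathbb{R}^{3}$ whenever the base triangle is non-degenerate, the three projections fully determine the position of vertex $l$ relative to its neighbors.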
\begin{algorithm} \begin{algorithmic}[1] \State Input: $p_{31} = \left[ x_1, y_1, z_1 \right], p_{32} = \left[ x_2, y_2, z_2 \right]$ \State Output: $n_{123}$ \State $\text{eps} \gets 10^{-3}$ \If{$\left\Vert p_{31} \right\Vert > \text{eps} $} \If{$\left\vert z_1 \right\vert > \text{eps} $} \State $n_z \gets - \left( x_1 + y_1 \right)/z_1$ \; \State $n_{t} \gets \left[ 1, 1, n_z\right]$ \; \ElsIf{$\left\vert y_1 \right\vert > \text{eps} $} \State $n_y \gets - \left( x_1 + z_1 \right)/y_1$ \; \State $n_{t} \gets \left[ 1, n_y, 1\right]$ \; \ElsIf{$\left\vert x_1 \right\vert > \text{eps} $} \State $n_x \gets - \left( y_1 + z_1 \right)/x_1$ \; \State $n_{t} \gets \left[ n_x, 1, 1\right]$ \; \EndIf \ElsIf{$\left\Vert p_{32} \right\Vert > \text{eps} $} \If{$\left\vert z_2 \right\vert > \text{eps} $} \State $n_z \gets - \left( x_2 + y_2 \right)/z_2$ \; \State $n_{t} \gets \left[ 1, 1, n_z\right]$ \; \ElsIf{$\left\vert y_2 \right\vert > \text{eps} $} \State $n_y \gets - \left( x_2 + z_2 \right)/y_2$ \; \State $n_{t} \gets \left[ 1, n_y, 1\right]$ \; \ElsIf{$\left\vert x_2 \right\vert > \text{eps} $} \State $n_x \gets - \left( y_2 + z_2 \right)/x_2$ \; \State $n_{t} \gets \left[ n_x, 1, 1\right]$ \; \EndIf \Else \State { $n_t \gets \left[0,0,1\right]$ } \EndIf \State $n_{123} \gets n_t/\left\Vert n_t \right\Vert$ \caption{Selecting $n_{123}$ when $\left\Vert p_{31}(0) \times p_{32}(0) \right\Vert = 0$. (The value $10^{-3}$ below can be replaced with any sufficiently small number.)} \label{alg:pick-n123} \end{algorithmic} \end{algorithm} Generally, the above projection variables are defined for agents $l\geq 4$ since a tetrahedron is composed of four vertices and only agents $l\geq 4$ have out$(l)=3$. Therefore, special definitions for the projection variables are required for agents $2$ and $3$. 
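Algorithm~\ref{alg:pick-n123} translates directly into code. The sketch below is our own Python rendering (with a small fallback added for the corner case where every component of a nonzero vector is below the threshold); it returns a unit vector orthogonal to $p_{31}$, or to $p_{32}$ if $p_{31}$ is numerically zero:

```python
# Select a unit vector n_123 orthogonal to p_31 (or p_32), for the case
# where agents 1, 2, 3 start out collinear, i.e. p_31(0) x p_32(0) = 0.
import math

def pick_n123(p31, p32, eps=1e-3):
    for p in (p31, p32):
        x, y, z = p
        if math.sqrt(x*x + y*y + z*z) > eps:
            # zero out the dot product by solving for one component
            if abs(z) > eps:
                nt = (1.0, 1.0, -(x + y) / z)
            elif abs(y) > eps:
                nt = (1.0, -(x + z) / y, 1.0)
            elif abs(x) > eps:
                nt = (-(y + z) / x, 1.0, 1.0)
            else:
                continue  # fallback: all components below eps, try next p
            break
    else:
        nt = (0.0, 0.0, 1.0)  # both vectors are (numerically) zero
    norm = math.sqrt(sum(c * c for c in nt))
    return tuple(c / norm for c in nt)

n = pick_n123((2.0, 1.0, 3.0), (4.0, 2.0, 6.0))
assert abs(sum(a * b for a, b in zip(n, (2.0, 1.0, 3.0)))) < 1e-9
```

Because the degenerate branch is only invoked when $p_{31}$ and $p_{32}$ are parallel, orthogonality to one of them implies orthogonality to both.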
For agent $2$, we define \begin{equation} \zeta _{2}=p_{21}^{\intercal }n_{2}, \label{eq:zeta-2} \end{equation} where \begin{equation} n_{2}=\left\{ \begin{array}{ll} \dfrac{p_{21}}{\left\Vert p_{21}\right\Vert }, & \text{ if }\left\Vert p_{21}(t)\right\Vert \neq 0 \\ & \\ n_{2}^{+}, & \text{ if }\left\Vert p_{21}(0)\right\Vert =0 \end{array} \right. \label{eq:n2} \end{equation} and $n_{2}^{+}$ is any unit vector. Variables $\varphi _{2}$ and $\vartheta _{2}$ are undefined since out$(2)=1$. For agent $3$, we let $l=k=3$, $i=1$, and $j=2$ in (\ref{eq:3D-OB:zeta-l}) and (\ref{eq:3D-OB:varphi-l}) to obtain \begin{equation} \zeta _{3}=p_{31}^{\intercal }p_{21} = \left\Vert p_{31} \right\Vert^2 - p_{31}^{\intercal} p_{32} \label{eq:zeta-3} \end{equation} and \begin{equation} \varphi _{3}=p_{31}^{\intercal }\left( n_{123}\times p_{21}\right) . \label{eq:varphi-3} \end{equation} Here, variable $\vartheta _{3}$ is undefined since out$(3)=2$. \begin{rmk} One can show that projection variable (\ref{eq:3D-OB:vartheta-l}) is related to the signed volume of a tetrahedron and the area of a triangle. Consider (\ref{eq:3D-OB:vartheta-l}) when $\{i,j,k\}\neq \{1,2,3\}$. It follows from (\ref{eq:n}) that \begin{align} \vartheta _{l}=& p_{li}^{\intercal }n_{ijk}=p_{li}^{\intercal }\left( p_{ki}\times p_{kj}\right) \notag \\ =& p_{li}^{\intercal }\left( p_{ki}\times p_{kj}\right) -p_{ki}^{\intercal }\left( p_{ki}\times p_{kj}\right) =p_{lk}^{\intercal }\left( p_{ki}\times p_{kj}\right) \notag \\ =& p_{lk}^{\intercal }\left[ \left( p_{li}-p_{lk}\right) \times \left( p_{lj}-p_{lk}\right) \right] \notag \\ =& p_{lk}^{\intercal }\left[ p_{li}\times p_{lj}-p_{li}\times p_{lk}-p_{lk}\times p_{lj}+p_{lk}\times p_{lk}\right] \notag \\ =& p_{lk}^{\intercal }\left( p_{li}\times p_{lj}\right) =p_{li}^{\intercal }\left( p_{lj}\times p_{lk}\right) =-6V_{ijkl} \label{eq:3D-OB:vartheta-l-volume-not-123} \end{align} where $V_{ijkl}:=V(p_{i},p_{j},p_{k},p_{l})$ from (\ref{eq:signed-volume}). 
When $\{i,j,k\}=\{1,2,3\}$ and $\left\Vert p_{31}\times p_{32}\right\Vert \neq 0$, we have
\begin{equation}
\vartheta _{l}=p_{l1}^{\intercal }n_{123}=\frac{p_{l1}^{\intercal }\left( p_{31}\times p_{32}\right) }{\left\Vert p_{31}\times p_{32}\right\Vert }=\frac{3V_{123l}}{\breve{S}_{123}}  \label{eq:3D-OB:vartheta-l-volume-123}
\end{equation}
where $\breve{S}_{ijk}$ is the regular (unsigned) area of $\triangle ijk$ (Heron's formula \citep{zwillinger2002crc}):
\begin{align}
\breve{S}_{ijk}=& \dfrac{1}{2}\left\Vert p_{ki}\times p_{kj}\right\Vert  \notag \\
=& \dfrac{1}{4}\left( 2\left\Vert p_{ji}\right\Vert ^{2}\left\Vert p_{ki}\right\Vert ^{2}+2\left\Vert p_{ji}\right\Vert ^{2}\left\Vert p_{kj}\right\Vert ^{2}\right.  \notag \\
& \quad \left. +2\left\Vert p_{ki}\right\Vert ^{2}\left\Vert p_{kj}\right\Vert ^{2}-\left\Vert p_{ji}\right\Vert ^{4}-\left\Vert p_{ki}\right\Vert ^{4}-\left\Vert p_{kj}\right\Vert ^{4}\right) ^{1/2}.  \label{regular area}
\end{align}
\end{rmk}
The projection variables of a tetrahedralized Henneberg framework $\left( G, p \right)$ with $\left\vert \mathcal{V}\right\vert =N$ are defined as
\begin{equation}
\Lambda (p)=\left[ \Lambda _{2},\Lambda _{3},\Lambda _{4}, \ldots , \Lambda_{N}\right]
\end{equation}
where
\begin{equation}
\begin{array}{l}
\Lambda _{2}=\zeta _{2} \\
\Lambda _{3}=\left[ \zeta _{3},\varphi _{3}\right] \\
\Lambda _{4}=\left[ \zeta _{4},\varphi _{4},\vartheta _{4}\right] \\
\multicolumn{1}{c}{\vdots} \\
\Lambda _{N}=\left[ \zeta _{N},\varphi _{N},\vartheta _{N}\right]
\end{array}  \label{eq:Lambda_i}
\end{equation}
\begin{lemma}
\label{lem:scgt-ortho-3d}3D Henneberg frameworks $F=(G,p)$ and $\hat{F}=(G,\hat{p})$ are strongly congruent if and only if $\Lambda (p)=\Lambda (\hat{p})$. (See Appendix \ref{Sec:proof:scgt-ortho-3d} for proof.)
\end{lemma}
Note that by Lemma \ref{lem:scgt-ortho-3d}, the control objective in (\ref{objective}) is equivalent to
\begin{equation}
\Lambda (p(t))\rightarrow \Lambda (p^{\ast })\text{ as }t\rightarrow \infty .  \label{3d-obj}
\end{equation}
\section{Formation Controller \label{Section:OBA:3DForm}}
\subsection{Error Variables}
The control objective will be quantified by the following three \textit{projection error} variables
\begin{subequations}
\label{errors}
\begin{eqnarray}
\tilde{\zeta}_{l} &=&\zeta _{l}-\zeta _{l}^{\ast }  \label{eq:zeta-error-3d} \\
\tilde{\varphi}_{l} &=&\varphi _{l}-\varphi _{l}^{\ast }  \label{eq:varphi-error-3d} \\
\tilde{\vartheta}_{l} &=&\vartheta _{l}-\vartheta _{l}^{\ast }  \label{eq:vartheta-error-3d}
\end{eqnarray}
\end{subequations}
where the asterisk denotes the desired value for the projection. Since $F^{\ast }$ is typically specified in terms of the desired inter-agent distances, the desired projections can be calculated in terms of $d_{ji}$, $(j,i)\in \mathcal{E}$, as shown in Appendix \ref{Sec:desired-projection-variables-3d}. The stacked vector of all the projection errors is represented by $\tilde{\Lambda}=\left[ \tilde{\Lambda}_{2},\tilde{\Lambda}_{3},\ldots ,\tilde{\Lambda}_{N}\right] $ where $\tilde{\Lambda}_{2}=\tilde{\zeta}_{2}$, $\tilde{\Lambda}_{3}=\left[ \tilde{\zeta}_{3},\tilde{\varphi}_{3}\right] $, and $\tilde{\Lambda}_{i}=\left[ \tilde{\zeta}_{i},\tilde{\varphi}_{i},\tilde{\vartheta}_{i}\right] $ for $i=4,\ldots ,N$.
\begin{lemma}
\label{lem:3d-error-variables} For agent $l$, the planes corresponding to $\tilde{\zeta}_{l}=0$, $\tilde{\varphi}_{l}=0$, and $\tilde{\vartheta}_{l}=0$ are mutually orthogonal if $\left\Vert p_{ji}\right\Vert =d_{ji}$, $\left\Vert p_{ki}\right\Vert =d_{ki}$, and $\left\Vert p_{kj}\right\Vert =d_{kj}$, where $i<j<k<l$ and $(l,i),(l,j),(l,k)\in \mathcal{E}$. (See Appendix \ref{Sec:proof:error-variables} for proof.)
\end{lemma}
\subsection{Control Law}
We propose the following formation control
\begin{subequations}
\label{ctrl:3D-OB:SI}
\begin{align}
u_{1}=& \ 0  \label{ctrl:3D-OB:SI-1} \\
u_{2}=& \ \mu _{2}\tilde{\zeta}_{2} n_{2}  \label{ctrl:3D-OB:SI-2} \\
u_{3}=& \ \mu _{3}\tilde{\zeta}_{3}\left( p_{31}-p_{32}\right) + \nu _{3}\tilde{\varphi}_{3}n_{123}\times \left( p_{31}-p_{32}\right)  \label{ctrl:3D-OB:SI-3} \\
u_{l}=& \ \mu _{l}\tilde{\zeta}_{l}\left( p_{li}-p_{lj}\right) + \nu _{l}\tilde{\varphi}_{l}n_{ijk}\times \left( p_{li}-p_{lj}\right)  \notag \\
& + \lambda _{l}\tilde{\vartheta}_{l}n_{ijk},\quad l = 4, \ldots, N  \label{ctrl:3D-OB:SI-l}
\end{align}
\end{subequations}
where $i<j<k<l$, $(l,i),(l,j),(l,k)\in \mathcal{E}$, and $\mu_{l},\nu_{l},\lambda _{l}$ are positive control gains. Note that since $p_{ki}=p_{li}-p_{lk}$ and $p_{kj}=p_{lj}-p_{lk}$ (see Figure \ref{fig:3d-framework}), the term $n_{ijk}$ defined in (\ref{eq:n}) can be expressed as a function of $p_{li}$, $p_{lj}$, and $p_{lk}$ for $(l,i),(l,j),(l,k)\in \mathcal{E}$. Therefore, the above control is decentralized in the sense that it requires each agent to know only its relative position to neighboring agents. This means that control (\ref{ctrl:3D-OB:SI}) can be implemented in each agent's local coordinate frame.
\begin{rmk}
Since a 3D minimally persistent graph has $3N-6$ edges, a 3D formation requires $3N-6$ constraints to maintain its shape. Previous 3D formation control works that utilized distance and volume variables \citep{ferreira2016adaptive} are over-constrained since they require $3N-6$ distance constraints and $N-3$ volume constraints. Although these extra constraints rule out formation ambiguities, they introduce new undesired equilibria. On the other hand, the orthogonal basis method proposed here has $N-1$ $\zeta $-type projection constraints, $N-2$ $\varphi $-type projection constraints, and $N-3$ $\vartheta $-type projection constraints, totalling exactly $3N-6$ constraints.
\end{rmk}
\subsection{Preliminary Analysis}
In preparation for our main results, we present next some preliminary results.
\begin{lemma}
\label{lem:ndot} With control (\ref{ctrl:3D-OB:SI}), $\dot{n}_{2}=0$ and $\dot{n}_{123}=0$.
\end{lemma}
\begin{proof}
If $\left\Vert p_{21}\right\Vert \neq 0$, the time derivative of (\ref{eq:n2}) is given by
\begin{align}
\dot{n}_{2}=& \dfrac{1}{\left\Vert p_{21}\right\Vert }\left( \dot{p}_{1}-\dot{p}_{2}\right) -\dfrac{p_{21}}{\left\Vert p_{21}\right\Vert ^{2}}\dfrac{d}{dt}\left\Vert p_{21}\right\Vert  \notag \\
=& \dfrac{1}{\left\Vert p_{21}\right\Vert }\left( \dot{p}_{1}-\dot{p}_{2}\right) -\dfrac{p_{21}}{\left\Vert p_{21}\right\Vert ^{2}}\dfrac{p_{21}^{\intercal }\left( \dot{p}_{1}-\dot{p}_{2}\right) }{\left\Vert p_{21}\right\Vert }  \notag \\
=& -\dfrac{1}{\left\Vert p_{21}\right\Vert }u_{2}+\dfrac{p_{21}}{\left\Vert p_{21}\right\Vert ^{3}}p_{21}^{\intercal }u_{2}.  \label{eq:n2-dot-1}
\end{align}
After substituting (\ref{ctrl:3D-OB:SI}) into (\ref{eq:n2-dot-1}), we obtain
\begin{align}
\dot{n}_{2}=& -\dfrac{1}{\left\Vert p_{21}\right\Vert }\left( \mu _{2}\tilde{\zeta}_{2}\dfrac{p_{21}}{\left\Vert p_{21}\right\Vert }\right) +\dfrac{p_{21}}{\left\Vert p_{21}\right\Vert ^{3}}p_{21}^{\intercal }\left( \mu _{2}\tilde{\zeta}_{2}\dfrac{p_{21}}{\left\Vert p_{21}\right\Vert }\right)  \notag \\
=& -\mu _{2}\tilde{\zeta}_{2}\dfrac{p_{21}}{\left\Vert p_{21}\right\Vert ^{2}}+\mu _{2}\tilde{\zeta}_{2}\dfrac{p_{21}}{\left\Vert p_{21}\right\Vert ^{2}}  \notag \\
=& \ 0.
\end{align}
When $\left\Vert p_{21}\right\Vert =0$, it is obvious from (\ref{eq:n2}) that $\dot{n}_{2}=0$.
If $\left\Vert p_{31}\times p_{32}\right\Vert \neq 0$, the derivative of (\ref{eq:n}) is given by
\begin{align}
\dot{n}_{123}=& \frac{1}{\left\Vert p_{31}\times p_{32}\right\Vert }\frac{d}{dt}\left( p_{31}\times p_{32}\right)  \notag \\
& -\left( p_{31}\times p_{32}\right) \dfrac{\left( p_{31}\times p_{32}\right) ^{\intercal }}{\left\Vert p_{31}\times p_{32}\right\Vert ^{3}}\dfrac{d}{dt}\left( p_{31}\times p_{32}\right)  \label{eq:3D-OB:n123-dot}
\end{align}
where
\begin{equation}
\dfrac{d}{dt}\left( p_{31}\times p_{32}\right) =\left( u_{1}-u_{3}\right) \times p_{32}+p_{31}\times \left( u_{2}-u_{3}\right) .  \label{eq:3D-OB:p31xp32-dot}
\end{equation}
Substituting (\ref{ctrl:3D-OB:SI}) into (\ref{eq:3D-OB:p31xp32-dot}) yields
\begin{eqnarray}
&&\dfrac{d}{dt}\left( p_{31}\times p_{32}\right)  \notag \\
&=&-\left( p_{31}-p_{32}\right) \times \left( \mu _{3}\tilde{\zeta}_{3}p_{21}+\nu _{3}\tilde{\varphi}_{3}n_{123}\times p_{21}\right)  \notag \\
&&+p_{31}\times \left( \mu _{2}\tilde{\zeta}_{2}n_{2}\right)  \notag \\
&=&-\nu _{3}\tilde{\varphi}_{3}\left\Vert p_{21}\right\Vert ^{2}n_{123}+\mu _{2}\tilde{\zeta}_{2}p_{31}\times n_{2}.  \label{eq:p31xp32}
\end{eqnarray}
Since $\left\Vert p_{31}\times p_{32}\right\Vert \neq 0$, we have
\begin{equation}
\left\Vert p_{31}\times p_{32}\right\Vert =\left\Vert p_{31}\times \left( p_{31}-p_{21}\right) \right\Vert =\left\Vert p_{31}\times p_{21}\right\Vert \neq 0,  \notag
\end{equation}
which implies $\left\Vert p_{21}\right\Vert \neq 0$.
Then, we can substitute the first case of (\ref{eq:n2}) and the second case of (\ref{eq:n}) into (\ref{eq:p31xp32}) to obtain
\begin{align}
& \dfrac{d}{dt}\left( p_{31}\times p_{32}\right)  \notag \\
=& -\nu _{3}\tilde{\varphi}_{3}\left\Vert p_{21}\right\Vert ^{2}\dfrac{p_{31}\times p_{32}}{\left\Vert p_{31}\times p_{32}\right\Vert }+\mu _{2}\tilde{\zeta}_{2}p_{31}\times \dfrac{p_{21}}{\left\Vert p_{21}\right\Vert }  \notag \\
=& -\nu _{3}\tilde{\varphi}_{3}\left\Vert p_{21}\right\Vert ^{2}\dfrac{p_{31}\times p_{32}}{\left\Vert p_{31}\times p_{32}\right\Vert }-\mu _{2}\tilde{\zeta}_{2}\dfrac{p_{31}\times p_{32}}{\left\Vert p_{21}\right\Vert } .  \label{eq:p31xp32-2}
\end{align}
Now, substituting (\ref{eq:p31xp32-2}) into (\ref{eq:3D-OB:n123-dot}) gives
\begin{align}
\dot{n}_{123}=& \dfrac{1}{\left\Vert p_{31}\times p_{32}\right\Vert }  \notag \\
& \cdot \left( -\nu _{3}\tilde{\varphi}_{3}\left\Vert p_{21}\right\Vert ^{2}\dfrac{p_{31}\times p_{32}}{\left\Vert p_{31}\times p_{32}\right\Vert }-\mu _{2}\tilde{\zeta}_{2}\dfrac{p_{31}\times p_{32}}{\left\Vert p_{21}\right\Vert }\right)  \notag \\
& -\left( p_{31}\times p_{32}\right) \cdot \dfrac{\left( p_{31}\times p_{32}\right) ^{\intercal }}{\left\Vert p_{31}\times p_{32}\right\Vert ^{3}}\left( -\nu _{3}\tilde{\varphi}_{3}\left\Vert p_{21}\right\Vert ^{2}\right.  \notag \\
& \left. \cdot \dfrac{p_{31}\times p_{32}}{\left\Vert p_{31}\times p_{32}\right\Vert }-\mu _{2}\tilde{\zeta}_{2}\dfrac{p_{31}\times p_{32}}{\left\Vert p_{21}\right\Vert }\right)  \notag \\
=& \ 0.  \label{eq:3D-OB:n123-dot-equal-0}
\end{align}
For $p_{31}\times p_{32}=0$, it is clear from (\ref{eq:n}) that $\dot{n}_{123}=0$.
\end{proof}
\begin{rmk}
The purpose of $n_{2}^{+}$ in (\ref{eq:n2}) is to force agent $2$ to leave the collocated initial position with agent $1$. Since $\dot{n}_{2}=0$, the two cases of (\ref{eq:n2}) have the same value. For example, assume the two agents are collocated at time zero and let $n_{2}^{+}=\left[ 1,0,0\right] $.
Control (\ref{ctrl:3D-OB:SI-2}) will cause agent $2$ to move away from the collocated position along vector $\left[ 1,0,0\right] $. Since the agents are no longer collocated, then $\dfrac{p_{21}}{\left\Vert p_{21}\right\Vert }=\left[ 1,0,0\right] $. The vector $n_{123}^{+}$ in (\ref{eq:n}) serves a similar purpose, viz., to force agent $3$ to leave the collinear initial position with agents $1$ and $2$. Note that the second and third cases of (\ref{eq:n}) have the same value due to $\dot{n}_{123}=0$. If the three agents are collinear at time zero on the $x$-$y$ plane and $n_{123}^{+}=[0,0,1]$ for example, then the second term in (\ref{ctrl:3D-OB:SI-3}) will cause agent $3$ to move on the $x$-$y$ plane away from the collinear position.
\end{rmk}
\begin{lemma}
\label{lem:3D:2-agent-system} For the two-agent system $\{1,2\}$, (\ref{ctrl:3D-OB:SI-1}) and (\ref{ctrl:3D-OB:SI-2}) render $\tilde{\zeta}_{2}=0$ GES.
\end{lemma}
\begin{proof}
The time derivative of $\tilde{\zeta}_{2}$ is given by
\begin{equation}
\dot{\tilde{\zeta}}_{2}=\left( \dot{p}_{1}-\dot{p}_{2}\right) ^{\intercal }n_{2}+p_{21}^{\intercal }\dot{n}_{2}.  \label{eq:agent2-zeta-error-dot}
\end{equation}
Applying Lemma \ref{lem:ndot} and substituting (\ref{ctrl:3D-OB:SI-1}) and (\ref{ctrl:3D-OB:SI-2}) into (\ref{eq:agent2-zeta-error-dot}), we obtain
\begin{equation}
\dot{\tilde{\zeta}}_{2}=-n_{2}^{\intercal }u_{2}=-\mu _{2}\tilde{\zeta}_{2}\left\Vert n_{2}\right\Vert ^{2}=-\mu _{2}\tilde{\zeta}_{2}  \label{eq:zeta2tilda_dot}
\end{equation}
which indicates that $\tilde{\zeta}_{2}=0$ is GES.
\end{proof}
Now, consider the Lyapunov function candidates
\begin{subequations}
\label{W_3D}
\begin{align}
W_{3}& =\dfrac{1}{2}\tilde{\zeta}_{3}^{2}+\dfrac{1}{2}\tilde{\varphi}_{3}^{2}  \label{W_3} \\
W_{l}& =\dfrac{1}{2}\tilde{\zeta}_{l}^{2}+\dfrac{1}{2}\tilde{\varphi}_{l}^{2}+\dfrac{1}{2}\tilde{\vartheta}_{l}^{2},\quad l=4,\ldots ,N  \label{eq:3D:Wl}
\end{align}
\end{subequations}
where $i<j<k<l$, $(l,i),(l,j),(l,k)\in \mathcal{E}$.
\begin{lemma}
\label{lem:3D:3-agent-system} If $\left\Vert p_{21}\right\Vert =d_{21}$ and $u_{2}=0$, then (\ref{ctrl:3D-OB:SI-1}) and (\ref{ctrl:3D-OB:SI-3}) render $\tilde{\Lambda}_{3}=0$ GES for the three-agent system $\{1,2,3\}$.
\end{lemma}
\begin{proof}
The dynamics of $\tilde{\zeta}_{3}$ is given by
\begin{equation}
\dot{\tilde{\zeta}}_{3}=\dot{p}_{31}^{\intercal }p_{21}+p_{31}^{\intercal }\dot{p}_{21}=(u_{1}-u_{3})^{\intercal }p_{21}+p_{31}^{\intercal }(u_{1}-u_{2})  \label{zeta3_tilda_dot}
\end{equation}
where (\ref{eq:zeta-error-3d}), (\ref{eq:3D-OB:zeta-l}), and (\ref{SI model}) were used. Similarly, the dynamics of $\tilde{\varphi}_{3}$ can be computed as
\begin{eqnarray}
\dot{\tilde{\varphi}}_{3} &=&(u_{1}-u_{3})^{\intercal }(n_{123}\times p_{21})+p_{31}^{\intercal }(\dot{n}_{123}\times p_{21})  \notag \\
&&+p_{31}^{\intercal }(n_{123}\times (u_{1}-u_{2})).  \label{varphi3_tilda_dot}
\end{eqnarray}
Therefore, the time derivative of (\ref{W_3}) is given by
\begin{align}
\dot{W}_{3}=& \tilde{\zeta}_{3}\dot{\tilde{\zeta}}_{3}+\tilde{\varphi}_{3}\dot{\tilde{\varphi}}_{3}  \notag \\
=& \tilde{\zeta}_{3}\left[ p_{21}^{\intercal }(u_{1}-u_{3})+p_{31}^{\intercal }(u_{1}-u_{2})\right]  \notag \\
& +\tilde{\varphi}_{3}\left[ (n_{123}\times p_{21})^{\intercal }\left( u_{1}-u_{3}\right) \right.  \notag \\
& \left. +p_{31}^{\intercal }(\dot{n}_{123}\times p_{21})+p_{31}^{\intercal }(n_{123}\times (u_{1}-u_{2}))\right] .  \label{eq:W3-dot-3D-SI}
\end{align}
Recall from Lemma \ref{lem:ndot} that $\dot{n}_{123}=0$.
Therefore, substituting $u_{2}=0$, $\left\Vert p_{21}\right\Vert =d_{21}$, (\ref{ctrl:3D-OB:SI-1}), and (\ref{ctrl:3D-OB:SI-3}) into (\ref{eq:W3-dot-3D-SI}) gives
\begin{eqnarray}
\dot{W}_{3} &=&-\left[ \tilde{\zeta}_{3}p_{21}^{\intercal }+\tilde{\varphi}_{3}(n_{123}\times p_{21})^{\intercal }\right] u_{3}  \notag \\
&=&-\mu _{3}\tilde{\zeta}_{3}^{2}d_{21}^{2}-\nu _{3}\tilde{\varphi}_{3}^{2}\left\Vert n_{123}\times p_{21}\right\Vert ^{2}  \label{W3 dot}
\end{eqnarray}
where we used the fact that $p_{31}-p_{32}=p_{21}$. Since $\left\Vert n_{123}\right\Vert =1$ and $n_{123}$ is orthogonal to $p_{21}$, then $\left\Vert n_{123}\times p_{21}\right\Vert =d_{21}$ and (\ref{W3 dot}) becomes
\begin{equation}
\dot{W}_{3}=-d_{21}^{2}\left( \mu _{3}\tilde{\zeta}_{3}^{2}+\nu _{3}\tilde{\varphi}_{3}^{2}\right) ,
\end{equation}
which means $\tilde{\Lambda}_{3}=0$ is GES.
\end{proof}
\begin{lemma}
\label{lem:3D:4-agent-system}If $\left\Vert p_{ji}\right\Vert =d_{ji}$, $\left\Vert p_{ki}\right\Vert =d_{ki}$, $\left\Vert p_{kj}\right\Vert =d_{kj}$, and $u_{i}=u_{j}=u_{k}=0$, then (\ref{ctrl:3D-OB:SI-l}) ensures that $\tilde{\Lambda}_{l}=\left[ \tilde{\zeta}_{l},\tilde{\varphi}_{l},\tilde{\vartheta}_{l}\right] =0$ is GES for the tetrahedron formed by agents $\{i,j,k,l\}$.
\end{lemma}
\begin{proof}
The time derivative of (\ref{eq:3D:Wl}) along the dynamics of (\ref{eq:zeta-error-3d}), (\ref{eq:varphi-error-3d}), and (\ref{eq:vartheta-error-3d}) is given by
\begin{align}
\dot{W}_{l}=& \tilde{\zeta}_{l}\left[ p_{ji}^{\intercal} (u_{i}-u_{l})+p_{li}^{\intercal }(u_{i}-u_{j})\right]  \notag \\
& +\tilde{\varphi}_{l}\left[ (u_{i}-u_{l})^{\intercal }(n_{ijk}\times p_{ji})+p_{li}^{\intercal }(\dot{n}_{ijk}\times p_{ji})\right.  \notag \\
& \left.
+p_{li}^{\intercal }(n_{ijk}\times (u_{i}-u_{j}))\right]  \notag \\
& +\tilde{\vartheta}_{l}\left[ n_{ijk}^{\intercal }(u_{i}-u_{l})+p_{li}^{\intercal }\dot{n}_{ijk}\right]  \label{eq:3D-OB:W-l-dot}
\end{align}
where
\begin{equation}
\dot{n}_{ijk}=(u_{i}-u_{k})\times p_{kj}+p_{ki}\times (u_{j}-u_{k})\text{ if }\{i,j,k\} \neq \{1,2,3\},  \notag
\end{equation}
and $\dot{n}_{123}=0$ from Lemma \ref{lem:ndot}. After substituting $\left\Vert p_{ji}\right\Vert =d_{ji}$, $u_{i}=u_{j}=u_{k}=0$, and (\ref{ctrl:3D-OB:SI-l}) into (\ref{eq:3D-OB:W-l-dot}), we obtain
\begin{align}
\dot{W}_{l}=& - \left[ \tilde{\zeta}_{l}p_{ji}^{\intercal }+\tilde{\varphi}_{l}(n_{ijk}\times p_{ji})^{\intercal }+\tilde{\vartheta}_{l}n_{ijk}^{\intercal }\right] u_{l}  \notag \\
=& -\mu _{l}d_{ji}^{2}\tilde{\zeta}_{l}^{2}-\nu _{l}\tilde{\varphi}_{l}^{2}\left\Vert n_{ijk}\times p_{ji}\right\Vert ^{2}-\lambda _{l}\tilde{\vartheta}_{l}^{2}\left\Vert n_{ijk}\right\Vert ^{2}  \label{eq:Wl-dot-3D-SI-2}
\end{align}
where we used the fact that $p_{ji}^{\intercal }n_{ijk}=0$. Given that $\left\Vert p_{ki}\right\Vert =d_{ki}$ and $\left\Vert p_{kj}\right\Vert =d_{kj}$, we know from (\ref{eq:n}) and (\ref{regular area}) that $\left\Vert n_{ijk}\right\Vert $ is constant. Since $n_{ijk}$ is orthogonal to $p_{ji}$, we have $\left\Vert n_{ijk}\times p_{ji}\right\Vert ^{2}=cd_{ji}^{2}$ with $c:=\left\Vert n_{ijk}\right\Vert ^{2}$ a positive constant. Therefore,
\begin{equation}
\dot{W}_{l}=-\mu _{l}d_{ji}^{2}\tilde{\zeta}_{l}^{2}-\nu _{l}cd_{ji}^{2}\tilde{\varphi}_{l}^{2}-\lambda _{l}c\tilde{\vartheta}_{l}^{2} .  \label{Wldot1}
\end{equation}
It then follows from (\ref{eq:3D:Wl}) and (\ref{Wldot1}) that $\left[ \tilde{\zeta}_{l},\tilde{\varphi}_{l},\tilde{\vartheta}_{l}\right] =0$ is GES.
\end{proof}
\subsection{Main Results}
The following two theorems give our main results.
\begin{theorem}
\label{thm:GAS}Control (\ref{ctrl:3D-OB:SI}) ensures $\tilde{\Lambda}=0$ is GAS and $F(t)\rightarrow \text{SCgt}^{3}(F^{\ast })$ as $t\rightarrow \infty $.
\end{theorem}
\begin{proof}
We know from Lemma \ref{lem:3D:2-agent-system} that the subsystem composed of agents 1 and 2 is GES at $\tilde{\zeta}_{2}=0$. If a third agent is added to this subsystem, we get the cascade system
\begin{subequations}
\label{eq:3D-OB:intercon-3}
\begin{align}
\dot{\tilde{\Lambda}}_{3}& =f_{3}(\tilde{\Lambda}_{3},\tilde{\zeta}_{2})  \label{eq:3D-OB:sub-3-xi-dot} \\
\dot{\tilde{\zeta}}_{2}& =g_{2}(\tilde{\zeta}_{2})  \label{eq:3D-OB:sub-3-Xi-dot}
\end{align}
\end{subequations}
where (\ref{eq:3D-OB:sub-3-Xi-dot}) is in fact (\ref{eq:zeta2tilda_dot}). If $\tilde{\zeta}_{2}=0$, then $u_{2}=0$ from (\ref{ctrl:3D-OB:SI-2}) and $\left\Vert p_{21}\right\Vert =d_{21}$ from (\ref{eq:Lambda_i}) and (\ref{eq:zeta-error-3d}). Therefore, (\ref{eq:3D-OB:sub-3-xi-dot}) with $\tilde{\zeta}_{2}=0$ is GES at the origin by Lemma \ref{lem:3D:3-agent-system}. It then follows from Lemma \ref{lem:global-ISS} that (\ref{eq:3D-OB:sub-3-xi-dot}) is ISS with respect to input $\tilde{\zeta}_{2}$. Finally, we can use Lemma \ref{lem:global-interconn} to show that the origin of (\ref{eq:3D-OB:intercon-3}), i.e., $\left[ \tilde{\zeta}_{2},\tilde{\Lambda}_{3}\right] =0$, is GAS. As we grow the graph step-by-step in the analysis by adding a vertex $l$ with three outgoing edges to any distinct vertices $i$, $j$, and $k$ of the previous graph, we obtain the following cascade system at each step:
\begin{subequations}
\label{eq:3D-OB:interconn-l}
\begin{align}
\dot{\tilde{\Lambda}}_{l}=& f_{l}(\tilde{\Lambda}_{l},z_{l-1})  \label{eq:3D-OB:sub-l-xi-dot} \\
\dot{z}_{l-1}=& g_{l-1}(z_{l-1})  \label{eq:3D-OB:sub-l-Xi-dot}
\end{align}
\end{subequations}
where $z_{l-1}=\left[ \tilde{\Lambda}_{2},\ldots ,\tilde{\Lambda}_{l-1}\right] $ and $i<j<k<l$. Note that the GAS of $z_{l-1}=0$ for (\ref{eq:3D-OB:sub-l-Xi-dot}) was established in the previous step.
Therefore, we only need to check if (\ref{eq:3D-OB:sub-l-xi-dot}) is ISS with respect to input $z_{l-1}$. If $z_{l-1}=0$, then $u_{i}=u_{j}=u_{k}=0$ from (\ref{ctrl:3D-OB:SI-l}). Now, consider subframeworks $F_{l-1}=\left( G_{l-1},p\right) $ and $F_{l-1}^{\ast }=\left( G_{l-1},p^{\ast }\right) $ where $G_{l-1}$ is the subgraph of $G$ that contains vertices $\{1,\ldots ,l-1\}$ and the corresponding edges connecting these vertices in the original graph. The condition $z_{l-1}=0$ is equivalent to $\left[ \Lambda _{2},\ldots ,\Lambda _{l-1}\right] =\left[ \Lambda _{2}^{\ast },\ldots ,\Lambda _{l-1}^{\ast }\right] $ where $\Lambda _{i}^{\ast }=\Lambda _{i}(\breve{p}^{\ast })$ and $\breve{p}^{\ast }=\left[ p_{1}^{\ast },\ldots ,p_{l-1}^{\ast }\right] $. This indicates that $F_{l-1}$ and $F_{l-1}^{\ast }$ are strongly congruent from Lemma \ref{lem:scgt-ortho-3d}. Thus, we know $\left\Vert p_{ji}\right\Vert =d_{ji}$, $\left\Vert p_{ki}\right\Vert =d_{ki}$, and $\left\Vert p_{kj}\right\Vert =d_{kj}$ from Lemma \ref{lem:scgt-3d}. We can now use Lemma \ref{lem:3D:4-agent-system} to show that (\ref{eq:3D-OB:sub-l-xi-dot}) with $z_{l-1}=0$ is GES at the origin. As a result, (\ref{eq:3D-OB:sub-l-xi-dot}) is ISS by Lemma \ref{lem:global-ISS}. Finally, we can invoke Lemma \ref{lem:global-interconn} to conclude that $\left[ z_{l-1},\tilde{\Lambda}_{l}\right] =0$ in (\ref{eq:3D-OB:interconn-l}) is GAS. Repeating this process until $l = N$ leads to the conclusion that $\tilde{\Lambda}=0$ is GAS, which implies $\Lambda (p(t)) \rightarrow \Lambda (p^{\ast})$ as $t \rightarrow \infty$. Thus, we know from Lemma \ref{lem:scgt-ortho-3d} that $F(t)\rightarrow \text{SCgt}^{3}(F^{\ast })$ as $t \rightarrow \infty$.
\end{proof}
Next, we show that the proposed control yields \textit{local exponential} convergence to the equilibrium point. This property is important in practice since exponential stability is known to provide some level of robustness to the system \citep{khalil2015nonlinear}.
\begin{theorem}
\label{thm:LES} In the neighborhood of $\tilde{\Lambda}=0$, the equilibrium point is locally exponentially stable (LES).
\end{theorem}
\begin{proof}
The error dynamics for $\tilde{\Lambda}$ can be expressed as
\begin{equation}
\dot{\tilde{\Lambda}}=-A(\tilde{\Lambda})\tilde{\Lambda}  \label{eq:3D:matrix-form}
\end{equation}
where
\begin{equation}
A(\tilde{\Lambda})=\left[
\begin{array}{cccccccccc}
D_{1} & 0 & 0 & 0 &  & \cdots &  & \cdots &  & 0 \\
\star & D_{2} & 0 & 0 &  & \cdots &  & \cdots &  & 0 \\
\star & 0 & D_{3} & 0 &  & \cdots &  & \cdots &  & 0 \\
\star & \star & \star & D_{4} & 0 & 0 & 0 & \cdots &  & 0 \\
\star & \star & \star & 0 & D_{5} & 0 & 0 & \cdots &  & 0 \\
\star & \star & \star & 0 & 0 & D_{6} & 0 & \cdots &  & 0 \\
\vdots &  &  &  &  &  & \ddots &  &  & \vdots \\
\star &  & \cdots &  & \cdots &  & \star & \star & 0 & 0 \\
\star &  & \cdots &  & \cdots &  & \star & 0 & \star & 0 \\
\star &  & \cdots &  & \cdots &  & \star & 0 & 0 & D_{3N-6}
\end{array}
\right]  \label{P}
\end{equation}
is a lower triangular $(3N-6)\times (3N-6)$ matrix whose diagonal elements are $D_{1}=\mu _{2}$, $D_{2}=\mu _{3}\left\Vert p_{21}\right\Vert ^{2}$, $D_{3}=\nu _{3}\left\Vert n_{123}\times p_{21}\right\Vert ^{2}$, $D_{4}=\mu _{4}\left\Vert p_{21}\right\Vert ^{2}$, $D_{5}=\nu _{4}\left\Vert n_{123}\times p_{21}\right\Vert ^{2}$, $D_{6}=\lambda _{4}\left\Vert n_{123}\right\Vert ^{2}$, $\ldots$, $D_{3N-8}=\mu _{N}\left\Vert p_{N_{1}N_{2}}\right\Vert ^{2}$, $D_{3N-7}=\nu_{N}\left\Vert n_{N_{1}N_{2}N_{3}}\times p_{N_{1}N_{2}}\right\Vert ^{2}$, and $D_{3N-6}=\lambda _{N}\left\Vert n_{N_{1}N_{2}N_{3}}\right\Vert ^{2}$ where $N_{1}$, $N_{2}$, and $N_{3}$ ($N_{1}<N_{2}<N_{3}$) are the out-neighbors of agent $N$.
Linearizing (\ref{eq:3D:matrix-form}) at the equilibrium $\tilde{\Lambda}=0$ yields
\begin{equation}
\dot{\tilde{\Lambda}}\approx -A(0)\tilde{\Lambda}
\end{equation}
where $A(0)$ is a constant matrix whose eigenvalues (diagonal elements) are positive and can be made arbitrarily large by adjusting the control gains. Therefore, $\tilde{\Lambda}=0$ is LES.
\end{proof}
\begin{rmk}
It is worth noting that the 2D formation problem can be viewed as a degenerate case of the 3D problem. Specifically, we can consider the coordinates of agent $i$ as $p_{i}=\left[ x_{i},y_{i},0\right] $ and express the control law with the third component equal to zero, i.e., $u_{i}=\left[ u_{ix},u_{iy},0\right] $. The results in Theorems \ref{thm:GAS} and \ref{thm:LES} are also valid in the 2D case. This analysis is omitted here since it is based on similar arguments as above.
\end{rmk}
\section{Conclusion}
This paper introduced a new set of controlled variables for avoiding undesirable equilibria in the 3D distance-based formation control approach. The proposed variables, which are related to the inter-agent distances, the signed volume of the framework, and the areas of the triangular faces, form an orthogonal basis that decomposes the control inputs into orthogonal subspaces. The resulting formation controller guarantees the global asymptotic stability and the local exponential stability of the desired formation shape for any initial conditions. This result is valid for any tetrahedralized-like framework with no conditions on the formation shape or control gains. The proposed approach can also handle the 2D formation problem, unifying the two problems. Specifically, by setting the $z$-coordinate of each agent position to zero, we arrive at the orthogonal basis controller for 2D formations that appeared in our preliminary result in \cite{liu2020ortho}.
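As an illustrative sanity check of Theorem \ref{thm:GAS}, the controller can be exercised in a short numerical sketch for $N=4$ agents. Everything below is an illustrative choice, not a value from the paper: all gains are set to $\mu_l=\nu_l=\lambda_l=1$, the single integrators are integrated with forward Euler, and the initial and desired positions are arbitrary. The paper's convention $p_{ab}=p_b-p_a$ (so $p_{21}=p_1-p_2$) is used.

```python
import math

# -- small 3D vector helpers -------------------------------------------------
def sub(a, b):   return [a[0] - b[0], a[1] - b[1], a[2] - b[2]]
def add(a, b):   return [a[0] + b[0], a[1] + b[1], a[2] + b[2]]
def scale(a, s): return [a[0] * s, a[1] * s, a[2] * s]
def dot(a, b):   return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
def cross(a, b): return [a[1] * b[2] - a[2] * b[1],
                         a[2] * b[0] - a[0] * b[2],
                         a[0] * b[1] - a[1] * b[0]]
def unit(a):
    n = math.sqrt(dot(a, a))
    return [c / n for c in a]

def projections(p):
    """Projection variables [zeta2, zeta3, phi3, zeta4, phi4, vartheta4]
    of a 4-agent framework; p_ab := p_b - p_a, so p21 = p[0] - p[1]."""
    p21 = sub(p[0], p[1]); p31 = sub(p[0], p[2]); p32 = sub(p[1], p[2])
    p41 = sub(p[0], p[3])
    n123 = unit(cross(p31, p32))
    return [math.sqrt(dot(p21, p21)),       # zeta_2 = ||p21||
            dot(p31, p21),                  # zeta_3
            dot(p31, cross(n123, p21)),     # phi_3
            dot(p41, p21),                  # zeta_4 (i=1, j=2, k=3)
            dot(p41, cross(n123, p21)),     # phi_4
            dot(p41, n123)]                 # vartheta_4

def control(p, target):
    """Control law with all gains set to 1 (illustrative choice)."""
    e = [a - b for a, b in zip(projections(p), target)]
    p21 = sub(p[0], p[1])
    n123 = unit(cross(sub(p[0], p[2]), sub(p[1], p[2])))
    u1 = [0.0, 0.0, 0.0]
    u2 = scale(unit(p21), e[0])
    u3 = add(scale(p21, e[1]), scale(cross(n123, p21), e[2]))  # p31 - p32 = p21
    u4 = add(add(scale(p21, e[3]), scale(cross(n123, p21), e[4])),
             scale(n123, e[5]))                                # p41 - p42 = p21
    return [u1, u2, u3, u4]

# desired shape: a regular tetrahedron with unit edges
p_star = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
          [0.5, math.sqrt(3) / 2, 0.0],
          [0.5, math.sqrt(3) / 6, math.sqrt(6) / 3]]
target = projections(p_star)

# perturbed initial positions (non-degenerate, same orientation as p_star)
p = [[0.2, -0.1, 0.1], [1.1, 0.2, -0.1], [0.4, 1.0, 0.2], [0.6, 0.4, 1.0]]
dt = 0.01
for _ in range(6000):  # forward-Euler integration of the single integrators
    u = control(p, target)
    p = [add(p[i], scale(u[i], dt)) for i in range(4)]

final_err = max(abs(a - b) for a, b in zip(projections(p), target))
```

With these (arbitrary) initial conditions all six projection errors decay toward zero, consistent with the global asymptotic stability claim; degenerate starts (collocated or collinear agents) would additionally need the tie-breaking rules of (\ref{eq:n2}), (\ref{eq:n}), and Algorithm \ref{alg:pick-n123}.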
\begin{appendices}
\section{Lemma Proofs}
\subsection{Lemma \protect\ref{lem:scgt-3d}}
\label{Sec:proof:scgt-3d}
Consider Figure \ref{fig:dihedral_angle_directed_height} where $n_{123}=p_{23}\times p_{21}$ and $n_{124}=p_{24}\times p_{21}$ are the vectors normal to planes 1-2-3 and 1-2-4, respectively, and $\alpha $ is the dihedral angle between the two planes. Then, we have that
\begin{eqnarray}
\cos \alpha &=&\frac{n_{123}^{\intercal }n_{124}}{\left\Vert n_{123}\right\Vert \left\Vert n_{124}\right\Vert }  \notag \\
&=&\frac{(p_{23}\times p_{21})^{\intercal }(p_{24}\times p_{21})}{\left\Vert p_{23}\right\Vert \left\Vert p_{21}\right\Vert \sin \theta _{123}\left\Vert p_{24}\right\Vert \left\Vert p_{21}\right\Vert \sin \theta _{124}}  \label{eq:cos-alpha1}
\end{eqnarray}
where $\theta _{ijk}$ is the face angle between edges $(j,i)$ and $(j,k)$. Using the property
\begin{equation}
(a\times b)^{\intercal }(c\times d)=(a^{\intercal }c)(b^{\intercal }d)-(a^{\intercal }d)(b^{\intercal }c),
\end{equation}
we obtain
\begin{eqnarray}
&&(p_{23}\times p_{21})^{\intercal }(p_{24}\times p_{21})  \notag \\
&=&(p_{23}^{\intercal }p_{24})(p_{21}^{\intercal }p_{21})-(p_{23}^{\intercal }p_{21})(p_{21}^{\intercal }p_{24})  \notag \\
&=&\left\Vert p_{23}\right\Vert \left\Vert p_{24}\right\Vert \cos \theta _{423}\left\Vert p_{21}\right\Vert ^{2}  \notag \\
&&-\left\Vert p_{23}\right\Vert \left\Vert p_{21}\right\Vert \cos \theta _{123}\left\Vert p_{21}\right\Vert \left\Vert p_{24}\right\Vert \cos \theta _{124}  \notag \\
&=&\left\Vert p_{21}\right\Vert ^{2}\left\Vert p_{23}\right\Vert \left\Vert p_{24}\right\Vert \left( \cos \theta _{423}-\cos \theta _{123}\cos \theta _{124}\right) .  \label{eq:cos-alpha-den_1}
\end{eqnarray}
Substituting (\ref{eq:cos-alpha-den_1}) into (\ref{eq:cos-alpha1}) gives
\begin{equation}
\cos \alpha =\frac{\cos \theta _{423}-\cos \theta _{123}\cos \theta _{124}}{\sin \theta _{123}\sin \theta _{124}}.
\label{eq:cos-dihedral-angle}
\end{equation}
A dihedral angle $\alpha $ can be associated with the signed volume by defining a \textit{signed dihedral angle}, $\alpha _{s}$. To this end, given the tetrahedron in Figure \ref{fig:dihedral_angle_directed_height}, we can define its \textit{signed height}, $h$, to have the same sign as the signed volume. Then,
\begin{equation}
\sin \alpha _{s}=\frac{h}{b}  \label{eq:sin-dihedral-angle}
\end{equation}
where $b$ is the distance from vertex $4$ to edge $(1,2)$. Combining (\ref{eq:cos-dihedral-angle}) and (\ref{eq:sin-dihedral-angle}), we can calculate the signed dihedral angle as
\begin{equation}
\alpha _{s}=\operatorname{arctan2}(h/b,\cos \alpha ).  \label{alpha_s}
\end{equation}
\begin{figure}[tbph]
\centering
\adjincludegraphics[scale=1.1, trim={{0.0\width} {0.0\height} {0.0\width} {0.00\height}},clip]{dihedral_angle_directed_height}
\caption{Signed dihedral angle and signed height of a tetrahedron.}
\label{fig:dihedral_angle_directed_height}
\end{figure}
Now, we can prove Lemma \ref{lem:scgt-3d} as follows.

\textit{(Proof of $\Rightarrow $)} If $F$ and $\hat{F}$ are strongly congruent, then $\left\Vert p_{i}-p_{j}\right\Vert =\left\Vert \hat{p}_{i} - \hat{p}_{j}\right\Vert $, $\forall i,j\in \mathcal{V}$ and $\mathbf{V}(p) = \mathbf{V}(\hat{p})$ by definition. Therefore, since $\mathcal{E}\subset \mathcal{V}\times \mathcal{V}$, we know $\left\Vert p_{i}-p_{j}\right\Vert = \left\Vert \hat{p}_{i}-\hat{p}_{j}\right\Vert $, $\forall (i,j)\in \mathcal{E}$, i.e., $F$ and $\hat{F}$ are equivalent.

\textit{(Proof of $\Leftarrow$)} If $\left\vert \mathcal{V} \right\vert = 4$, then framework equivalency and congruency are equivalent, so the conditions for strong congruency are trivially satisfied. If a vertex is added such that $\left\vert \mathcal{V}\right\vert =5$, the resulting 3D framework will have three additional edges and one additional tetrahedron.
Consider without loss of generality the framework in Figure \ref{fig:tri_bipyramid}, and denote the signed dihedral angle between planes 1-2-3 and 1-2-4 as $\alpha_{s1}$ and between planes 1-2-3 and 1-2-5 as $\alpha _{s2}$. Then, the signed dihedral angle between planes 1-2-4 and 1-2-5 is $\alpha _{s3}=\alpha _{s1}+\alpha _{s2}$.
\begin{figure}[tbph]
\centering
\adjincludegraphics[scale=0.9, trim={{0.0\width} {0.0\height} {0.0\width} {0.00\height}},clip]{tri_bipyramid}
\caption{Framework with $\left\vert \mathcal{V}\right\vert =5$ containing two tetrahedra.}
\label{fig:tri_bipyramid}
\end{figure}
Since $F$ and $\hat{F}$ are equivalent, $\left\Vert p_{i}-p_{j}\right\Vert =\left\Vert \hat{p}_{i}-\hat{p}_{j}\right\Vert $, $\forall (i,j)\in \mathcal{E}$. This along with $\mathbf{V}(p)=\mathbf{V}(\hat{p})$ indicates that all face angles in $F$ and $\hat{F}$ are equal. From (\ref{alpha_s}), we then know that $\alpha _{s1}=\hat{\alpha}_{s1}$, $\alpha _{s2}=\hat{\alpha}_{s2}$, and $\alpha _{s3}=\hat{\alpha}_{s3}$ where $\hat{\alpha}_{si}$ denotes the $i$th dihedral angle of $\hat{F}$. Using (\ref{eq:cos-dihedral-angle}), we have
\begin{equation}
\cos \alpha _{s3}=\frac{\cos \theta _{425}-\cos \theta _{124}\cos \theta _{125}}{\sin \theta _{124}\sin \theta _{125}}.
\end{equation}
Since $\alpha _{s3}(p)=\alpha _{s3}(\hat{p})$ and all face angles are equal, we obtain
\begin{equation}
\cos \theta _{425}=\cos \hat{\theta}_{425}.  \label{cos425}
\end{equation}
After applying (\ref{cos425}) to the law of cosines, we arrive at $\left\Vert p_{5}-p_{4}\right\Vert =\left\Vert \hat{p}_{5}-\hat{p}_{4}\right\Vert $. This proves that $\left\Vert p_{i}-p_{j}\right\Vert =\left\Vert \hat{p}_{i}-\hat{p}_{j}\right\Vert $, $\forall i,j\in \mathcal{V}$, so $F$ and $\hat{F}$ are strongly congruent for $\left\vert \mathcal{V}\right\vert =5$.\footnote{This analysis is not restricted to convex structures such as Figure \ref{fig:tri_bipyramid}.
A similar analysis can be applied to concave cases.} As more vertices are added, each new vertex will create a new tetrahedron. Thus, the above process can be recursively employed to show that $F$ and $\hat{F}$ are strongly congruent for $\left\vert \mathcal{V}\right\vert =N$.
\subsection{Lemma \protect\ref{lem:scgt-ortho-3d}}
\label{Sec:proof:scgt-ortho-3d}
\textit{(Proof of $\Rightarrow $)} From (\ref{eq:Lambda_i}), (\ref{eq:3D-OB:zeta-l}), and the fact that strong congruency implies $\left\Vert p_{ij}\right\Vert =\left\Vert \hat{p}_{ij}\right\Vert $ $\forall i, j \in \mathcal{V}$, we have
\begin{equation}
\zeta _{2}(p)=\left\Vert p_{21}\right\Vert =\left\Vert \hat{p}_{21}\right\Vert =\zeta _{2}(\hat{p})  \label{eq:3D-OB:zeta2-equal}
\end{equation}
and
\begin{align}
\zeta _{3}(p)=& p_{31}^{\intercal }p_{21}=\left\Vert p_{31}\right\Vert \left\Vert p_{21}\right\Vert \frac{\left\Vert p_{31}\right\Vert ^{2}+\left\Vert p_{21}\right\Vert ^{2}-\left\Vert p_{32}\right\Vert ^{2}}{2\left\Vert p_{31}\right\Vert \left\Vert p_{21}\right\Vert }  \notag \\
=& \frac{\left\Vert p_{31}\right\Vert ^{2}+\left\Vert p_{21}\right\Vert ^{2}-\left\Vert p_{32}\right\Vert ^{2}}{2}  \notag \\
=& \frac{\left\Vert \hat{p}_{31}\right\Vert ^{2}+\left\Vert \hat{p}_{21}\right\Vert ^{2}-\left\Vert \hat{p}_{32}\right\Vert ^{2}}{2}=\zeta _{3}(\hat{p}).  \label{zeta3}
\end{align}
From (\ref{eq:varphi-3}) and (\ref{regular area}), we obtain
\begin{align}
\varphi _{3}(p)& = n_{123}^{\intercal} \left(p_{31} \times p_{32}\right) = \left\Vert p_{31} \times p_{32} \right\Vert = 2\breve{S}_{123}(p)  \notag \\
& =\dfrac{1}{2}\left( 2\left\Vert p_{21}\right\Vert ^{2}\left\Vert p_{32}\right\Vert ^{2}+2\left\Vert p_{31}\right\Vert ^{2}\left\Vert p_{32}\right\Vert ^{2}-\left\Vert p_{21}\right\Vert ^{4}\right.  \notag \\
& \left.
\quad -\left\Vert p_{31}\right\Vert ^{4}-\left\Vert p_{32}\right\Vert ^{4}+2\left\Vert p_{21}\right\Vert ^{2}\left\Vert p_{31}\right\Vert ^{2}\right) ^{1/2} \notag \\ & =\dfrac{1}{2}\left( 2\left\Vert \hat{p}_{21}\right\Vert ^{2}\left\Vert \hat{p}_{32}\right\Vert ^{2}+2\left\Vert \hat{p}_{31}\right\Vert ^{2}\left\Vert \hat{p}_{32}\right\Vert ^{2}-\left\Vert \hat{p}_{21}\right\Vert ^{4}\right. \notag \\ & \left. \quad -\left\Vert \hat{p}_{31}\right\Vert ^{4}-\left\Vert \hat{p}_{32}\right\Vert ^{4}+2\left\Vert \hat{p}_{21}\right\Vert ^{2}\left\Vert \hat{p}_{31}\right\Vert ^{2}\right) ^{1/2} \notag \\ & =2\breve{S}_{123}(\hat{p})=\varphi _{3}(\hat{p}). \label{eq:3D-OB:varphi3-equal} \end{align} The relation $\zeta _{4}(p)=\zeta _{4}(\hat{p})$ can be shown as in (\ref{zeta3}). From (\ref{eq:3D-OB:varphi-l}), (\ref{eq:n}), and (\ref{regular area}), we get \begin{align} \varphi _{4}(p)& =n_{123}^{\intercal }\left( p_{21}\times p_{41}\right) \notag \\ =& \left( \dfrac{p_{31}\times p_{32}}{\left\Vert p_{31}\times p_{32}\right\Vert }\right) ^{\intercal }\left( p_{21}\times p_{41}\right) \notag \\ =& \dfrac{1}{\left\Vert p_{31}\times p_{32}\right\Vert }\left( -p_{32}\times p_{31}\right) ^{\intercal }\left( -p_{41}\times p_{21}\right) \notag \\ =& \dfrac{1}{\left\Vert p_{31}\times p_{32}\right\Vert }\left[ \left( -p_{32}\times p_{32}-p_{32}\times p_{21}\right) ^{\intercal }\left( -p_{42}\times p_{21}\right. \right. \notag \\ & \left. \left. 
-p_{21}\times p_{21}\right) \right] \notag \\ =& \dfrac{1}{\left\Vert p_{31}\times p_{32}\right\Vert }\left( p_{32}\times p_{12}\right) ^{\intercal }\left( p_{42}\times p_{12}\right) \notag \\ =& \dfrac{1}{2\breve{S}_{123}(p)}\left[ (p_{32})^{\intercal }p_{42}(p_{12})^{\intercal }p_{12}-(p_{32})^{\intercal }p_{12}(p_{12})^{\intercal }p_{42}\right] \notag \\ =& \dfrac{1}{2\breve{S}_{123}(p)}\left[ \left\Vert p_{32}\right\Vert \left\Vert p_{42}\right\Vert \dfrac{\left\Vert p_{32}\right\Vert ^{2}+\left\Vert p_{42}\right\Vert ^{2}-\left\Vert p_{43}\right\Vert ^{2}}{2\left\Vert p_{32}\right\Vert \left\Vert p_{42}\right\Vert }\left\Vert p_{12}\right\Vert ^{2}\right. \notag \\ & \quad \left. -\left\Vert p_{32}\right\Vert \left\Vert p_{12}\right\Vert \dfrac{\left\Vert p_{32}\right\Vert ^{2}+\left\Vert p_{12}\right\Vert ^{2}-\left\Vert p_{13}\right\Vert ^{2}}{2\left\Vert p_{32}\right\Vert \left\Vert p_{12}\right\Vert }\right. \notag \\ & \quad \left. \cdot \left\Vert p_{12}\right\Vert \left\Vert p_{42}\right\Vert \dfrac{\left\Vert p_{12}\right\Vert ^{2}+\left\Vert p_{42}\right\Vert ^{2}-\left\Vert p_{41}\right\Vert ^{2}}{2\left\Vert p_{12}\right\Vert \left\Vert p_{42}\right\Vert }\right] . \label{varphi4} \end{align} Since (\ref{varphi4}) is only a function of the inter-agent distances, then $\varphi _{4}(p)=\varphi _{4}(\hat{p})$. 
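As a numerical sanity check (not part of the original derivation; all helper names are ours), the sketch below evaluates $\varphi _{4}=n_{123}^{\intercal }\left( p_{21}\times p_{41}\right)$ for a sample tetrahedron and for a rigidly rotated copy, and confirms that the two values coincide, consistent with $\varphi _{4}$ being a function of the inter-agent distances alone:

```python
import math

def sub(a, b): return [a[i] - b[i] for i in range(3)]
def dot(a, b): return sum(a[i] * b[i] for i in range(3))
def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]
def norm(a): return math.sqrt(dot(a, a))

def varphi4(p1, p2, p3, p4):
    # n_123: unit normal of triangle 1-2-3, taken as p31 x p32 normalized
    n = cross(sub(p3, p1), sub(p3, p2))
    n = [c / norm(n) for c in n]
    # varphi_4 = n_123^T (p21 x p41)
    return dot(n, cross(sub(p2, p1), sub(p4, p1)))

def rotate_z(p, t):
    c, s = math.cos(t), math.sin(t)
    return [c * p[0] - s * p[1], s * p[0] + c * p[1], p[2]]

p = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.3, 1.1, 0.0], [0.2, 0.4, 0.9]]
q = [rotate_z(v, 0.7) for v in p]
print(abs(varphi4(*p) - varphi4(*q)) < 1e-9)  # True: rigid rotation preserves varphi_4
```

A rotation preserves all inter-agent distances, so agreement of the two values is exactly what the distance-only dependence predicts.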
A useful formula for calculating the signed volume $V_{ijkl}$ is given by the Cayley-Menger determinant \citep{sommerville2011introduction}: \begin{equation} V_{ijkl}=\pm \sqrt{\frac{1}{288}\left\vert \begin{array}{ccccc} 0 & 1 & 1 & 1 & 1 \\ 1 & 0 & \left\Vert p_{ji}\right\Vert ^{2} & \left\Vert p_{ki}\right\Vert ^{2} & \left\Vert p_{li}\right\Vert ^{2} \\ 1 & \left\Vert p_{ji}\right\Vert ^{2} & 0 & \left\Vert p_{kj}\right\Vert ^{2} & \left\Vert p_{lj}\right\Vert ^{2} \\ 1 & \left\Vert p_{ki}\right\Vert ^{2} & \left\Vert p_{kj}\right\Vert ^{2} & 0 & \left\Vert p_{lk}\right\Vert ^{2} \\ 1 & \left\Vert p_{li}\right\Vert ^{2} & \left\Vert p_{lj}\right\Vert ^{2} & \left\Vert p_{lk}\right\Vert ^{2} & 0 \end{array}\right\vert }, \label{eq:V*} \end{equation} where the sign convention described in Section \ref{Sec: Strong Congr} determines if the sign is positive or negative. From (\ref{eq:3D-OB:vartheta-l-volume-123}), we have $\vartheta _{4}(p)= - 3V_{1234}(p)/\breve{S}_{123}(p)$. Given (\ref{regular area}) and (\ref{eq:V*}), we can see that $\vartheta _{4}(p)$ is only dependent on the inter-agent distances and the sign of the volume; hence, it is clear that $\vartheta _{4}(p)=\vartheta _{4}(\hat{p})$. Repeating this analysis for $\zeta _{l}$, $\varphi _{l}$, and $\vartheta_{l}$, $l=5,\ldots ,N$ leads to the conclusion that $\Lambda (p)=\Lambda (\hat{p})$. \textit{(Proof of $\Leftarrow $)} If $\Lambda (p)=\Lambda (\hat{p})$, then $\zeta _{l}(p)=\zeta _{l}(\hat{p})$, $l=2,\ldots ,N$, $\varphi _{l}(p)=\varphi _{l}(\hat{p})$, $l=3,\ldots ,N$, and $\vartheta _{l}(p)=\vartheta _{l}(\hat{p})$, $l=4,\ldots ,N$. From $\zeta _{2}(p)=\zeta _{2}(\hat{p})$, we obtain \begin{equation} \left\Vert p_{21}\right\Vert =\left\Vert \hat{p}_{21}\right\Vert \label{eq:3D-OB:p-21-two-frame} \end{equation} where (\ref{eq:Lambda_i}) was used. 
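The unsigned part of the Cayley-Menger formula (\ref{eq:V*}) above is easy to check numerically. The sketch below (illustrative only; function names are ours) builds the $5\times 5$ determinant from the squared distances of a right-corner tetrahedron with unit legs and recovers its volume $1/6$:

```python
import math

def det(m):
    # Laplace expansion along the first row (fine for a 5x5 matrix)
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def cm_volume(d2):
    # d2[i][j]: squared distance between vertices i and j of a tetrahedron
    m = [[0, 1, 1, 1, 1]] + [[1] + [d2[i][j] for j in range(4)] for i in range(4)]
    return math.sqrt(det(m) / 288.0)

# right-corner tetrahedron with unit legs: volume 1/6
pts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
d2 = [[sum((a - b) ** 2 for a, b in zip(p, q)) for q in pts] for p in pts]
print(abs(cm_volume(d2) - 1 / 6) < 1e-12)  # True
```

Only the magnitude is recovered here; as stated above, the sign of $V_{ijkl}$ is fixed separately by the ordering convention.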
From $\zeta _{3}(p)=\zeta _{3}(\hat{p})$ and (\ref{eq:3D-OB:zeta-l}), we have \begin{align} & \left\Vert p_{31}\right\Vert \left\Vert p_{21}\right\Vert \frac{\left\Vert p_{31}\right\Vert ^{2}+\left\Vert p_{21}\right\Vert ^{2}-\left\Vert p_{32}\right\Vert ^{2}}{2\left\Vert p_{31}\right\Vert \left\Vert p_{21}\right\Vert } \notag \\ =& \frac{\left\Vert p_{31}\right\Vert ^{2}+\left\Vert p_{21}\right\Vert ^{2}-\left\Vert p_{32}\right\Vert ^{2}}{2} \notag \\ =& \frac{\left\Vert \hat{p}_{31}\right\Vert ^{2}+\left\Vert \hat{p}_{21}\right\Vert ^{2}-\left\Vert \hat{p}_{32}\right\Vert ^{2}}{2} \label{eq:3D-OB:zeta3-equal-base} \end{align} where the law of cosines was used. This leads to \begin{equation} \left\Vert p_{31}\right\Vert ^{2}+\left\Vert p_{21}\right\Vert ^{2}-\left\Vert p_{32}\right\Vert ^{2}=\left\Vert \hat{p}_{31}\right\Vert ^{2}+\left\Vert \hat{p}_{21}\right\Vert ^{2}-\left\Vert \hat{p}_{32}\right\Vert ^{2}. \label{eq:3D-OB:zeta3-equal-two-frame} \end{equation} From $\varphi _{3}(p)=\varphi _{3}(\hat{p})$ and (\ref{eq:varphi-3}), we get \begin{equation} \breve{S}_{123}(p)=\breve{S}_{123}(\hat{p}). \label{eq:3D-OB:S-equal-two-frame} \end{equation} Applying (\ref{regular area}) to (\ref{eq:3D-OB:S-equal-two-frame}) yields \begin{align} & 2\left\Vert p_{21}\right\Vert ^{2}\left\Vert p_{31}\right\Vert ^{2}+2\left\Vert p_{21}\right\Vert ^{2}\left\Vert p_{32}\right\Vert ^{2}+2\left\Vert p_{31}\right\Vert ^{2}\left\Vert p_{32}\right\Vert ^{2} \notag \\ & \quad -\left\Vert p_{21}\right\Vert ^{4}-\left\Vert p_{31}\right\Vert ^{4}-\left\Vert p_{32}\right\Vert ^{4} \notag \\ & =2\left\Vert \hat{p}_{21}\right\Vert ^{2}\left\Vert \hat{p}_{31}\right\Vert ^{2}+2\left\Vert \hat{p}_{21}\right\Vert ^{2}\left\Vert \hat{p}_{32}\right\Vert ^{2}+2\left\Vert \hat{p}_{31}\right\Vert ^{2}\left\Vert \hat{p}_{32}\right\Vert ^{2} \notag \\ & \quad -\left\Vert \hat{p}_{21}\right\Vert ^{4}-\left\Vert \hat{p}_{31}\right\Vert ^{4}-\left\Vert \hat{p}_{32}\right\Vert ^{4}. 
\label{eq:3D-OB:varphi4-equal-two-frame} \end{align} Combining (\ref{eq:3D-OB:p-21-two-frame}), (\ref{eq:3D-OB:zeta3-equal-two-frame}), and (\ref{eq:3D-OB:varphi4-equal-two-frame}) gives $\left\Vert p_{31}\right\Vert =\left\Vert \hat{p}_{31}\right\Vert $ and $\left\Vert p_{32}\right\Vert =\left\Vert \hat{p}_{32}\right\Vert $. Next, from $\zeta _{4}(p)=\zeta _{4}(\hat{p})$, $\varphi _{4}(p)=\varphi _{4}(\hat{p})$, and $\vartheta _{4}(p)=\vartheta _{4}(\hat{p})$, we get \begin{equation} \left\Vert p_{41}\right\Vert ^{2}+\left\Vert p_{21}\right\Vert ^{2}-\left\Vert p_{42}\right\Vert ^{2}=\left\Vert \hat{p}_{41}\right\Vert ^{2}+\left\Vert \hat{p}_{21}\right\Vert ^{2}-\left\Vert \hat{p}_{42}\right\Vert ^{2}, \label{eq:3D-zeta4-tilde-equal-0-base} \end{equation} \begin{align} & -\left\Vert p_{21}\right\Vert ^{4}+\left\Vert p_{21}\right\Vert ^{2}\left\Vert p_{31}\right\Vert ^{2}+\left\Vert p_{21}\right\Vert ^{2}\left\Vert p_{32}\right\Vert ^{2}+\left\Vert p_{21}\right\Vert ^{2}\left\Vert p_{41}\right\Vert ^{2} \notag \label{eq:3D-varphi4-tilde-equal-0-base} \\ & +\left\Vert p_{21}\right\Vert ^{2}\left\Vert p_{42}\right\Vert ^{2}-2\left\Vert p_{43}\right\Vert ^{2}\left\Vert p_{21}\right\Vert ^{2}-\left\Vert p_{31}\right\Vert ^{2}\left\Vert p_{41}\right\Vert ^{2} \notag \\ & +\left\Vert p_{31}\right\Vert ^{2}\left\Vert p_{42}\right\Vert ^{2}+\left\Vert p_{32}\right\Vert ^{2}\left\Vert p_{41}\right\Vert ^{2}-\left\Vert p_{32}\right\Vert ^{2}\left\Vert p_{42}\right\Vert ^{2} \notag \\ & =-\left\Vert \hat{p}_{21}\right\Vert ^{4}+\left\Vert \hat{p}_{21}\right\Vert ^{2}\left\Vert \hat{p}_{31}\right\Vert ^{2}+\left\Vert \hat{p}_{21}\right\Vert ^{2}\left\Vert \hat{p}_{32}\right\Vert ^{2}+\left\Vert \hat{p}_{21}\right\Vert ^{2}\left\Vert \hat{p}_{41}\right\Vert ^{2} \notag \\ & +\left\Vert \hat{p}_{21}\right\Vert ^{2}\left\Vert \hat{p}_{42}\right\Vert ^{2}-2\left\Vert \hat{p}_{43}\right\Vert ^{2}\left\Vert \hat{p}_{21}\right\Vert ^{2}-\left\Vert \hat{p}_{31}\right\Vert ^{2}\left\Vert 
\hat{p}_{41}\right\Vert ^{2} \notag \\ & +\left\Vert \hat{p}_{31}\right\Vert ^{2}\left\Vert \hat{p}_{42}\right\Vert ^{2}+\left\Vert \hat{p}_{32}\right\Vert ^{2}\left\Vert \hat{p}_{41}\right\Vert ^{2}-\left\Vert \hat{p}_{32}\right\Vert ^{2}\left\Vert \hat{p}_{42}\right\Vert ^{2}, \end{align} and \begin{align} & \left\vert \begin{array}{ccccc} 0 & 1 & 1 & 1 & 1 \\ 1 & 0 & \left\Vert p_{21}\right\Vert ^{2} & \left\Vert p_{31}\right\Vert ^{2} & \left\Vert p_{41}\right\Vert ^{2} \\ 1 & \left\Vert p_{21}\right\Vert ^{2} & 0 & \left\Vert p_{32}\right\Vert ^{2} & \left\Vert p_{42}\right\Vert ^{2} \\ 1 & \left\Vert p_{31}\right\Vert ^{2} & \left\Vert p_{32}\right\Vert ^{2} & 0 & \left\Vert p_{43}\right\Vert ^{2} \\ 1 & \left\Vert p_{41}\right\Vert ^{2} & \left\Vert p_{42}\right\Vert ^{2} & \left\Vert p_{43}\right\Vert ^{2} & 0 \end{array} \right\vert \notag \\ & =\left\vert \begin{array}{ccccc} 0 & 1 & 1 & 1 & 1 \\ 1 & 0 & \left\Vert \hat{p}_{21}\right\Vert ^{2} & \left\Vert \hat{p}_{31}\right\Vert ^{2} & \left\Vert \hat{p}_{41}\right\Vert ^{2} \\ 1 & \left\Vert \hat{p}_{21}\right\Vert ^{2} & 0 & \left\Vert \hat{p}_{32}\right\Vert ^{2} & \left\Vert \hat{p}_{42}\right\Vert ^{2} \\ 1 & \left\Vert \hat{p}_{31}\right\Vert ^{2} & \left\Vert \hat{p}_{32}\right\Vert ^{2} & 0 & \left\Vert \hat{p}_{43}\right\Vert ^{2} \\ 1 & \left\Vert \hat{p}_{41}\right\Vert ^{2} & \left\Vert \hat{p}_{42}\right\Vert ^{2} & \left\Vert \hat{p}_{43}\right\Vert ^{2} & 0 \end{array}\right\vert \label{eq:3D-vartheta4-tilde-equal-0-base} \end{align} Since we know that $\left\Vert p_{21}\right\Vert =\left\Vert \hat{p}_{21}\right\Vert $, $\left\Vert p_{31}\right\Vert =\left\Vert \hat{p}_{31}\right\Vert $, and $\left\Vert p_{32}\right\Vert =\left\Vert \hat{p}_{32}\right\Vert $, we can use (\ref{eq:3D-zeta4-tilde-equal-0-base}), (\ref{eq:3D-varphi4-tilde-equal-0-base}), and (\ref{eq:3D-vartheta4-tilde-equal-0-base}) to show $\left\Vert p_{41}\right\Vert =\left\Vert \hat{p}_{41}\right\Vert $, $\left\Vert 
p_{42}\right\Vert =\left\Vert \hat{p}_{42}\right\Vert $, and $\left\Vert p_{43}\right\Vert =\left\Vert \hat{p}_{43}\right\Vert $. Repeating the same analysis on $\zeta _{l}$, $\varphi _{l}$, and $\vartheta _{l}$, $l=5,\ldots ,N$ gives $\mathbf{V}(p)=\mathbf{V}(\hat{p})$ and $\gamma (p)=\gamma (\hat{p})$. Then, by Lemma \ref{lem:scgt-3d}, we know $F$ and $\hat{F}$ are strongly congruent. \subsection{Lemma \protect\ref{lem:3d-error-variables}} \label{Sec:proof:error-variables} Since agents $i$, $j$, and $k$ are located at their desired inter-agent distances, we can let $p_{i}=\left[ -d_{ji}/2,0,0\right] $, $p_{j}=\left[ d_{ji}/2,0,0\right] $, $p_{k}=\left[ x_{k},y_{k},0\right] $, and $p_{l}=\left[ x_{l},y_{l},z_{l}\right] $ without loss of generality. We also assume $y_{k}>0$ for simplicity. From the above coordinates, we get \begin{equation} \left\Vert p_{li}\right\Vert ^{2}=\left( x_{l}+\frac{d_{ji}}{2}\right) ^{2}+y_{l}^{2}+z_{l}^{2} \label{eq:3D-OB:p-li-square} \end{equation} \begin{equation} \left\Vert p_{lj}\right\Vert ^{2}=\left( x_{l}-\frac{d_{ji}}{2}\right) ^{2}+y_{l}^{2}+z_{l}^{2}. \label{eq:3D-OB:p-lj-square} \end{equation} After solving for $x_{l}$, we arrive at \begin{equation} x_{l}=\frac{\left\Vert p_{li}\right\Vert ^{2}-\left\Vert p_{lj}\right\Vert ^{2}}{2d_{ji}}. \label{eq:3d-sol-x} \end{equation} When $\tilde{\zeta}_{l}=0$, we know from (\ref{eq:3D-OB:zeta-l}) and (\ref{eq:zeta-error-3d}) that $\left\Vert p_{li}\right\Vert ^{2}-\left\Vert p_{lj}\right\Vert ^{2}=d_{li}^{2}-d_{lj}^{2}$. Substituting this into (\ref{eq:3d-sol-x}) gives \begin{equation} x_{l}=\dfrac{d_{li}^{2}-d_{lj}^{2}}{2d_{ji}}. \label{xl} \end{equation} This means that all points satisfying $\tilde{\zeta}_{l}=0$ lie on the plane defined by (\ref{xl}) (blue plane in Figure \ref{fig:ortho-3D-errors}), which is normal to vector $p_{ji}$. 
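Equation (\ref{eq:3d-sol-x}) can be spot-checked numerically: for any placement of agent $l$, the $x$-coordinate recovered from the two squared distances matches the direct coordinate. A minimal sketch (variable names and test values are ours):

```python
d_ji = 2.0
p_i = (-d_ji / 2, 0.0, 0.0)
p_j = (d_ji / 2, 0.0, 0.0)
p_l = (0.37, -1.2, 0.8)  # arbitrary test position for agent l

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# x_l = (||p_li||^2 - ||p_lj||^2) / (2 d_ji)
x_l = (dist2(p_l, p_i) - dist2(p_l, p_j)) / (2 * d_ji)
print(abs(x_l - p_l[0]) < 1e-12)  # True
```

The $y_{l}^{2}+z_{l}^{2}$ terms cancel in the difference of squared distances, which is why only $x_{l}$ survives.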
Now, substituting the known coordinates of $p_{i}$, $p_{j}$, $p_{k}$, and $p_{l}$ into (\ref{eq:3D-OB:varphi-l}) yields \begin{equation} \varphi _{l}=d_{ji}\left\Vert n_{ijk}^{\ast }\right\Vert y_{l} \label{varphi_l1} \end{equation} where $n_{ijk}^{\ast }:=n_{ijk}(p^{\ast })$ and \begin{equation*} \left\Vert n_{ijk}^{\ast }\right\Vert =\left\{ \begin{array}{ll} \left\Vert p_{ki}^{\ast }\times p_{kj}^{\ast }\right\Vert & \text{if }\{i,j,k\}\neq \{1,2,3\} \\ 1 & \text{if }\{i,j,k\}=\{1,2,3\} \end{array}\right. \end{equation*} When $\tilde{\varphi}_{l}=0$, we have from (\ref{varphi_l1}) and (\ref{eq:varphi-error-3d}) that \begin{equation} y_{l}=\frac{\varphi _{l}^{\ast }}{d_{ji}\left\Vert n_{ijk}^{\ast }\right\Vert }. \label{eq:3d-sol-y} \end{equation} This indicates that all the points satisfying $\tilde{\varphi}_{l}=0$ lie on the plane defined by (\ref{eq:3d-sol-y}) (red plane in Figure \ref{fig:ortho-3D-errors}), which is orthogonal to the plane $\tilde{\zeta}_{l}=0$. Finally, we can use (\ref{eq:3D-OB:vartheta-l}) and (\ref{nstar}) to write \begin{equation} \vartheta _{l}=p_{li}^{\intercal }\frac{n_{ijk}}{\left\Vert n_{ijk}\right\Vert }\left\Vert n_{ijk}\right\Vert =p_{li}^{\intercal }\frac{p_{ki}\times p_{kj}}{\left\Vert p_{ki}\times p_{kj}\right\Vert }\left\Vert n_{ijk}\right\Vert . \label{varthetal1} \end{equation} From the known coordinates of $p_{i}$, $p_{j}$, $p_{k}$, and $p_{l}$, we obtain $p_{li}^{\intercal }\left( p_{ki}\times p_{kj}\right) =- z_{l}\left\Vert n_{ijk}\right\Vert $. Since $\left\Vert p_{ji}\right\Vert =d_{ji}$, $\left\Vert p_{ki}\right\Vert =d_{ki}$, and $\left\Vert p_{kj}\right\Vert =d_{kj}$, we get from (\ref{eq:n}) that $\left\Vert n_{ijk}\right\Vert =\left\Vert n_{ijk}^{\ast }\right\Vert $. Therefore, (\ref{varthetal1}) simplifies to \begin{equation} \vartheta _{l} = - \left\Vert n_{ijk}^{\ast }\right\Vert z_{l}. 
\label{varthetal2} \end{equation} When $\tilde{\vartheta}_{l}=0$, we can use (\ref{eq:vartheta-error-3d}) and (\ref{varthetal2}) to get \begin{equation} z_{l} = - \frac{\vartheta _{l}^{\ast }}{\left\Vert n_{ijk}^{\ast }\right\Vert }. \label{eq:3d-sol-z} \end{equation} That is, all the points satisfying $\tilde{\vartheta}_{l}=0$ are on the plane defined by (\ref{eq:3d-sol-z}) (green plane in Figure \ref{fig:ortho-3D-errors}), which is orthogonal to the planes $\tilde{\zeta}_{l}=0$ and $\tilde{\varphi}_{l}=0$. \begin{figure}[tbph] \centering \adjincludegraphics[scale=0.55, trim={{0.0\width} {0.0\height} {0.0\width} {0.00\height}},clip]{ortho-3D-errors} \caption[Graphical representation of projection errors in 3D.]{Graphical representation of projection errors: Points on the blue plane satisfy $\tilde{\protect\zeta}_{l}=0$; points on the red plane satisfy $\tilde{\protect\varphi}_{l}=0$; points on the green plane satisfy $\tilde{\protect\vartheta}_{l}=0$.} \label{fig:ortho-3D-errors} \end{figure} \section{Desired Projection Variables} \label{Sec:desired-projection-variables-3d} In the following, we show how the desired values for the 3D projection variables can be computed in terms of the desired inter-agent distances. \noindent First, we have from (\ref{eq:3D-OB:zeta-l}) that \begin{align} \zeta _{l}^{\ast }=& (p_{li}^{\ast })^{\intercal }p_{ji}^{\ast } \notag \\ =& \left\Vert p_{li}^{\ast }\right\Vert \left\Vert p_{ji}^{\ast }\right\Vert \dfrac{\left\Vert p_{li}^{\ast }\right\Vert ^{2}+\left\Vert p_{ji}^{\ast }\right\Vert ^{2}-\left\Vert p_{lj}^{\ast }\right\Vert ^{2}}{2\left\Vert p_{li}^{\ast }\right\Vert \left\Vert p_{ji}^{\ast }\right\Vert } \notag \\ =& \dfrac{d_{li}^{2}+d_{ji}^{2}-d_{lj}^{2}}{2}. 
\label{zeta*_l} \end{align} Next, from (\ref{eq:3D-OB:varphi-l}): \begin{eqnarray} \varphi _{l}^{\ast } &=&(p_{li}^{\ast })^{\intercal }(n_{ijk}^{\ast }\times p_{ji}^{\ast })=(n_{ijk}^{\ast })^{\intercal }\left( p_{ji}^{\ast }\times p_{li}^{\ast }\right) \notag \\ &=&\left\Vert n_{ijk}^{\ast }\right\Vert \frac{\left( n_{ijk}^{\ast }\right) ^{\intercal }}{\left\Vert n_{ijk}^{\ast }\right\Vert }\left( p_{ji}^{\ast }\times p_{li}^{\ast }\right) . \label{varphi*_l0} \end{eqnarray} Notice from (\ref{eq:n}) that \begin{equation} \frac{n_{ijk}^{\ast }}{\left\Vert n_{ijk}^{\ast }\right\Vert }=\frac{p_{ki}^{\ast }\times p_{kj}^{\ast }}{\left\Vert p_{ki}^{\ast }\times p_{kj}^{\ast }\right\Vert } \label{nstar} \end{equation} for both $\{i,j,k\}\neq \{1,2,3\}$ and $\{i,j,k\}=\{1,2,3\}$. After applying (\ref{nstar}) to (\ref{varphi*_l0}), we obtain \begin{equation} \varphi _{l}^{\ast }=\frac{\left\Vert n_{ijk}^{\ast }\right\Vert }{\left\Vert p_{ki}^{\ast }\times p_{kj}^{\ast }\right\Vert }\left( p_{ki}^{\ast }\times p_{kj}^{\ast }\right) ^{\intercal }\left( p_{ji}^{\ast }\times p_{li}^{\ast }\right) . \label{varphi*_l2} \end{equation} From (\ref{eq:n}), \begin{equation} \frac{\left\Vert n_{ijk}^{\ast }\right\Vert }{\left\Vert p_{ki}^{\ast }\times p_{kj}^{\ast }\right\Vert }=\left\{ \begin{array}{ll} 1 & \text{if }\{i,j,k\}\neq \{1,2,3\} \\ \dfrac{1}{\left\Vert p_{31}^{\ast }\times p_{32}^{\ast }\right\Vert } & \text{if }\{i,j,k\}=\{1,2,3\} \end{array}\right. \label{term1} \end{equation} Given (\ref{regular area}) and the fact that $\left\Vert p_{ji}^{\ast }\right\Vert =d_{ji}$, $\forall (j,i)\in \mathcal{E}$, it is obvious that (\ref{term1}) is only dependent on the desired distances. 
Now, \begin{eqnarray} &&\left( p_{ki}^{\ast }\times p_{kj}^{\ast }\right) ^{\intercal }\left( p_{ji}^{\ast }\times p_{li}^{\ast }\right) \notag \\ &=&\left( -p_{kj}^{\ast }\times p_{ki}^{\ast }\right) ^{\intercal }\left( -p_{li}^{\ast }\times p_{ji}^{\ast }\right) \notag \\ &=&\left( -p_{kj}^{\ast }\times p_{kj}^{\ast }-p_{kj}^{\ast }\times p_{ji}^{\ast }\right) ^{\intercal }\left( -p_{lj}^{\ast }\times p_{ji}^{\ast }-p_{ji}^{\ast }\times p_{ji}^{\ast }\right) \notag \\ &=&\left( p_{kj}^{\ast }\times p_{ij}^{\ast }\right) ^{\intercal }\left( p_{lj}^{\ast }\times p_{ij}^{\ast }\right) \notag \\ &=&\left\Vert p_{kj}^{\ast }\right\Vert \left\Vert p_{lj}^{\ast }\right\Vert \dfrac{\left\Vert p_{kj}^{\ast }\right\Vert ^{2}+\left\Vert p_{lj}^{\ast }\right\Vert ^{2}-\left\Vert p_{lk}^{\ast }\right\Vert ^{2}}{2\left\Vert p_{kj}^{\ast }\right\Vert \left\Vert p_{lj}^{\ast }\right\Vert }\left\Vert p_{ij}^{\ast }\right\Vert ^{2} \notag \\ &&-\left\Vert p_{kj}^{\ast }\right\Vert \left\Vert p_{ij}^{\ast }\right\Vert \dfrac{\left\Vert p_{kj}^{\ast }\right\Vert ^{2}+\left\Vert p_{ij}^{\ast }\right\Vert ^{2}-\left\Vert p_{ik}^{\ast }\right\Vert ^{2}}{2\left\Vert p_{kj}^{\ast }\right\Vert \left\Vert p_{ij}^{\ast }\right\Vert } \notag \\ &\cdot &\left\Vert p_{ij}^{\ast }\right\Vert \left\Vert p_{lj}^{\ast }\right\Vert \dfrac{\left\Vert p_{ij}^{\ast }\right\Vert ^{2}+\left\Vert p_{lj}^{\ast }\right\Vert ^{2}-\left\Vert p_{li}^{\ast }\right\Vert ^{2}}{2\left\Vert p_{ij}^{\ast }\right\Vert \left\Vert p_{lj}^{\ast }\right\Vert }, \label{term2} \end{eqnarray} which is also only dependent on the desired distances. Thus, (\ref{varphi*_l2}) is only a function of $d_{ji}$, $\forall (j,i)\in \mathcal{E}$. Finally, it follows from (\ref{eq:3D-OB:vartheta-l}) that \begin{equation} \vartheta _{l}^{\ast }=(p_{li}^{\ast })^{\intercal }n_{ijk}^{\ast }=(p_{li}^{\ast })^{\intercal }n_{ijk}^{\ast }-(p_{ki}^{\ast })^{\intercal }n_{ijk}^{\ast }=(p_{lk}^{\ast })^{\intercal }n_{ijk}^{\ast }. 
\label{vartheta*_l0} \end{equation} If $\{i,j,k\}\neq \{1,2,3\}$, then from (\ref{eq:n}) \begin{align} \vartheta _{l}^{\ast }=& (p_{lk}^{\ast })^{\intercal }\left( p_{ki}^{\ast }\times p_{kj}^{\ast }\right) \notag \\ =& (p_{lk}^{\ast })^{\intercal }\left[ \left( p_{li}^{\ast }-p_{lk}^{\ast }\right) \times \left( p_{lj}^{\ast }-p_{lk}^{\ast }\right) \right] \notag \\ =& (p_{lk}^{\ast })^{\intercal }\left( p_{li}^{\ast }\times p_{lj}^{\ast }-p_{li}^{\ast }\times p_{lk}^{\ast }-p_{lk}^{\ast }\times p_{lj}^{\ast }+p_{lk}^{\ast }\times p_{lk}^{\ast }\right) \notag \\ =& (p_{lk}^{\ast })^{\intercal }\left( p_{li}^{\ast }\times p_{lj}^{\ast }\right) =-6V_{ijkl}^{\ast } \label{vartheta*_l1} \end{align} where $V_{ijkl}^{\ast }:=V(p_{i}^{\ast },p_{j}^{\ast },p_{k}^{\ast },p_{l}^{\ast })$ from (\ref{eq:signed-volume}). Note that $V_{ijkl}^{\ast }$ can be calculated using (\ref{eq:V*}) with $\left\Vert p_{ji}\right\Vert =d_{ji}$, where the sign is based on the desired ordering of vertices $i,j,k$ per the convention described in Section \ref{Sec: Strong Congr}. If $\{i,j,k\}=\{1,2,3\}$, then from (\ref{eq:n}) and the calculations in (\ref{vartheta*_l1}), we arrive at \begin{equation} \vartheta _{l}^{\ast }=-\frac{6V_{123l}^{\ast }}{\left\Vert p_{31}^{\ast }\times p_{32}^{\ast }\right\Vert } \label{vartheta*_l2} \end{equation} where (\ref{regular area}) is used to calculate the denominator in terms of the desired distances. The desired projections $\zeta _{2}^{\ast }$, $\zeta _{3}^{\ast }$, and $\varphi _{3}^{\ast }$ are special cases of the above variables. From (\ref{eq:zeta-2}), we have that \begin{equation} \zeta _{2}^{\ast }=\left( p_{21}^{\ast }\right) ^{\intercal }\dfrac{p_{21}^{\ast }}{\left\Vert p_{21}^{\ast }\right\Vert }=\left\Vert p_{21}^{\ast }\right\Vert =d_{21}. \end{equation} From (\ref{eq:zeta-3}) and (\ref{zeta*_l}), we obtain \begin{equation*} \zeta _{3}^{\ast }=(p_{31}^{\ast })^{\intercal }p_{21}^{\ast }=\dfrac{d_{31}^{2}+d_{21}^{2}-d_{32}^{2}}{2}. 
\end{equation*} Finally, from (\ref{eq:varphi-3}), we have \begin{align} \varphi _{3}^{\ast }=& \left( p_{31}^{\ast }\right) ^{\intercal }\left( n_{123}^{\ast }\times p_{21}^{\ast }\right) = \left( n_{123}^{\ast }\right) ^{\intercal }\left( p_{31}^{\ast }\times p_{32}^{\ast }\right) \notag \\ =& \left\Vert p_{31}^{\ast }\times p_{32}^{\ast }\right\Vert = 2\breve{S}_{123}^{\ast } \label{eq:varphi-3d} \end{align} where $\breve{S}_{123}^{\ast }:=\breve{S}_{123}(p_{31}^{\ast}, p_{32}^{\ast})$ from (\ref{regular area}) is only dependent on the desired distances. \end{appendices} \bibliographystyle{dcu}
\section{Introduction} Optomechanics explores the interaction between light and mechanical objects, with a strong experimental focus on micro- and nanoscale systems. It has the potential to reveal quantum effects such as entanglement in macroscopic objects and to apply them to quantum information processing. Furthermore, it may provide a new paradigm for quantum metrology, precision measurement and non-linear dynamical systems. Studies of quantum effects in optomechanical systems (OMS) have demonstrated that non-classical states can be generated in an optical cavity \cite{Bose,Mancini}. Moreover, the entanglement between a cavity mode and a mechanical oscillator has been studied both in the steady state \cite{Vitaliprl,Genes} and in the time domain \cite{Mari09,Jie}. In the case of hybrid OMS, the study of bipartite entanglement between an atomic ensemble, cavity modes and a mirror \cite{Genes08} has shown that a strongly coupled system exhibits robust tripartite entanglement, which can be realized in continuous-variable (CV) quantum interfaces \cite{Ian}. Several schemes have been proposed to generate entanglement between a pair of oscillators interacting with a common bath or in a two-cavity OMS \cite{Paz,Liao}. Long-lived entanglement between two membranes inside a cavity has been studied \cite{Michael}, as has the entanglement of two mirrors coupled to a cavity \cite{Ling,Ge13a,Ge13,Mancini02}. Several other works have used atomic coherence to induce entanglement \cite{Mancini,Ge13,Ling}. At this point, we should note that entanglement in microcavities is by no means restricted to Raman lasers. As a matter of fact, the formalism can be extended, e.g., to the intersubband case \cite{Auer}, where a strong interplay between photons and the cavity can play an important role for both fundamental and applied physics in the THz to mid-infrared range \cite{Mauro14,Mauro15,Mauro16,Mauro17}. 
Several quantum features of OMS have been investigated regarding the generation of macroscopic entangled states of cavity fields due to atomic coherence in a two-mode laser \cite{Xiong,Kiffner,Qamar,Qamar1,eyob15}, e.g., the generation of two-mode entangled radiation in a three-level cascade atomic medium \cite{Xiong,eyob15}, a four-level single atom \cite{Kiffner}, and a four-level Raman-driven quantum-beat laser \cite{Qamar}. Entanglement of nanomechanical oscillators and two-mode fields has been achieved via radiation-pressure coupling in a cascade configuration due to microscopic atomic coherence. In Ref. \cite{Ling}, two macroscopic mirrors were entangled via microscopic atomic coherence injected into the cavity, and Ref. \cite{Ge13} proposed a scheme for entangling two-mode fields whose entanglement can be transferred to two movable mirrors through radiation pressure in a controlled-emission laser. There, the two-photon coherence is generated by strong external classical fields that couple the same (dipole-forbidden) levels in the cascade scheme. In this paper, we consider a scheme for entanglement generation between two micromechanical mirrors using four-level atoms in an N configuration, through two-mode fields generated by a correlated laser source in a doubly resonant cavity. All the transitions of interest are dipole allowed. We take the initial state of a four-level atom to be a coherent superposition of one of the two lower levels and one of the upper atomic levels. We show that, in contrast to the usual $S$-shaped bistability observed in single-mode optomechanics \cite{Tredicucci,Dorsel,Aspelmeyer}, in our scheme the optical intensities of the two cavity modes exhibit bistability for all values of detuning, due to the parametric-amplification-type coupling induced by the two-photon coherence. 
We also study the entanglement created between the two movable mirrors in the adiabatic regime and show that our scheme can control the degree of entanglement with an external field driving the gain medium. This paper is organized as follows. The scheme and the Hamiltonian of the system are introduced in Sec. \ref{sec:2}. In Sec. \ref{sec:3} we analyze the bistability and entanglement between the movable mirrors, by means of a master equation for the two-mode laser coupled to thermal reservoirs. In Sec. \ref{sec:4} we derive the linearized quantum Langevin equations for the field-mirror subsystem. In Sec. \ref{sec:5} we study the effect of the coupling induced by the two-photon coherence on the bistability of the mean intracavity photon numbers for both cases (RWA and BRWA). We present a method in Sec. \ref{sec:6} to study the entanglement generation between the two mirrors using a full numerical analysis of the system. Concluding remarks are given in Sec. \ref{sec:7}. \begin{figure}[t] \includegraphics[width=8cm]{fig1.pdf} \caption{(a) Schematics of a two-mode laser coupled to two movable mirrors $M_{1}$ and $M_{2}$. The doubly-resonant cavity is driven by two external lasers of frequency $\omega_{L_{1}}$ and $\omega_{L_{2}}$, and the cavity modes, filtered by a beam splitter (BS), are coupled to their respective movable mirrors by radiation pressure. (b) The gain medium is a single Raman atom. Two external laser drives of frequencies $\omega_{\rm p}$ and $\omega$ are also applied to generate two-photon coherence.}\label{fig1} \end{figure} \section{Model of the system} \label{sec:bigsec2} \subsection{Hamiltonian} \label{sec:2} The OMS we consider consists of a Fabry-P\'erot cavity of length $L$ with two movable mirrors driven by two-mode coherent fields, as shown in Fig. \ref{fig1}(a). The laser system consists of a gain medium of four-level atoms in an N configuration, as shown in Fig. \ref{fig1}(b). 
We take the initial state of a four-level atom to be a coherent superposition of either the states $\ket a$ and $\ket c$ or $\ket d$ and $\ket b$. Moreover, a drive laser with amplitude $\Omega_{p}$ and frequency $\omega_{p}$ couples the levels $\ket d$ and $\ket c$, and another laser with amplitude $\Omega$ and frequency $\omega$ couples the levels $\ket a$ and $\ket b$. The atoms are injected into the doubly resonant cavity at a rate $r_a$ and removed after a time $\tau$, which is longer than the spontaneous emission time. For the purpose of this paper we take the initial superposition to be between $\ket a$ and $\ket c$. The two cavity modes with frequencies $\nu_1$ and $\nu_2$ interact nonresonantly with each atom. We treat the movable mirrors as quantum harmonic oscillators, so the system can be described by the following Hamiltonian ($\hbar =1$): \begin{align} \label{eq:Ham} H&=\sum_{j=a,b,c,d}\omega_{j}|j\rangle\langle j|+\sum_{j=1}^{2}\nu_{j}a_{j}^{\dag}a_{j}\notag\\ &+g_{1}(a_{1}^{\dag}|b\rangle\langle d|+a_{1}|d\rangle \langle b|)+g_{2}(a_{2}^{\dag}|c\rangle \langle a|+a_{2}|a\rangle \langle c|)\notag\\ &+\Omega_{p}(|d\rangle\langle c|e^{-i\omega_{p}t}+\text{H.c.})+\Omega(|a\rangle\langle b|e^{-i\omega t}+\text{H.c.})\notag\\ &+\sum_{j=1}^{2}[\omega_{\rm m_{j}}b_{j}^{\dag}b_{j}+G_{j}a_{j}^{\dag}a_{j}(b_{j}+b_{j}^{\dag})]\notag\\ &+i\sum_{j=1}^{2}(\varepsilon_{j}a_{j}^{\dag}e^{-i\omega_{\rm L_{j}}t}-\text{H.c.}) \end{align} where $\omega_{j}$ is the frequency of the $j$th level, $\nu_{j}$ is the frequency of the $j$th cavity mode, $g_{j}$ is the atom-field coupling, $\Omega_{p}$ and $\Omega$ are the amplitudes of the drive lasers that couple the $|c\rangle\rightarrow |d\rangle$ and $|b\rangle\rightarrow |a\rangle$ transitions, respectively, and $\omega_{p}$, $\omega$ are the frequencies of the drive lasers. 
$\omega_{\rm m_{j}}$ are the frequencies of the movable mirrors, $b_{j}$ ($b^{\dagger}_{j}$) are the annihilation (creation) operators for the mechanical modes, $G_{j}=(\nu_j/L)\sqrt{\hbar/m_{j}\omega_{m_j}}$ is the optomechanical coupling strength, and $\left|\varepsilon_{j}\right|=\sqrt{\kappa_j P_j/\hbar\omega_{\rm L_{j}}}$ is the amplitude of the external pump field that drives the doubly resonant cavity, with $\kappa_j$, $P_j$, and $\omega_{\rm L_{j}}$ being the cavity decay rate associated with the outgoing modes, the external pump field power, and the frequency of the pump laser, respectively. In (\ref{eq:Ham}), the first two terms denote the free energy of the atom and the cavity modes, the third and fourth terms represent the atom-cavity mode interaction, the fifth and sixth terms describe the coupling of the levels $|d\rangle \leftrightarrow |c\rangle$ and $|a\rangle \leftrightarrow |b\rangle$ by the drive lasers, and the last three terms describe the free energy of the mechanical oscillators, the coupling of the mirrors with the cavity modes, and the coupling of the external laser drives with the cavity modes, respectively. 
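For a rough sense of scale, $G_{j}$ and $|\varepsilon_{j}|$ can be evaluated numerically; the parameter values below are generic assumptions chosen for illustration, not values used in this paper:

```python
import math

hbar = 1.054571817e-34           # J*s

# representative values (assumptions for illustration only)
nu = 2 * math.pi * 3.7e14        # cavity mode frequency (rad/s), ~810 nm light
L = 1e-3                         # cavity length (m)
m = 1e-12                        # effective mirror mass (kg)
omega_m = 2 * math.pi * 1e7      # mechanical frequency (rad/s)
kappa = 2 * math.pi * 1e6        # cavity decay rate (rad/s)
P = 1e-3                         # pump power (W)
omega_L = nu                     # pump near cavity resonance

G = (nu / L) * math.sqrt(hbar / (m * omega_m))  # optomechanical coupling (rad/s)
eps = math.sqrt(kappa * P / (hbar * omega_L))   # pump amplitude (s^-1)
print(f"G ~ {G:.2e} rad/s, |eps| ~ {eps:.2e} s^-1")
```

With these numbers the single-photon coupling is of order kHz while the pump amplitude is many orders of magnitude larger, which is the usual regime in which a linearized treatment of the radiation-pressure interaction is appropriate.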
Using the fact that $\sum_{j}|j\rangle\langle j|=1$, the Hamiltonian (\ref{eq:Ham}) can be rewritten (after dropping the unimportant constant $\omega_{b}$) as $H = H_{0}+ H_{I}$, where \begin{align} H_{0}&=\omega|a\rangle\langle a|+(\omega-\nu_{2})|c\rangle\langle c|+\nu_{1}|d\rangle\langle d|\notag\\ &+\nu_{1}a_{1}^{\dag}a_{1}+\nu_{2}a_{2}^{\dag}a_{2}. \end{align} Now applying the transformation $\exp(iH_{0}t)H_{I}\exp(-iH_{0}t)$, we obtain the Hamiltonian in the interaction picture as the sum of the following terms \begin{align} \label{eq:cavity-fields} V_{1}&=\Delta_{c}|a\rangle\langle a|+(\Delta_{c}-\Delta_{2})|c\rangle\langle c|+\Delta_{1}|d\rangle\langle d|\nonumber \\ &+g_{1}(a_{1}^{\dag}|b\rangle\langle d|+a_{1}|d\rangle \langle b|)+g_{2}(a_{2}^{\dag}|c\rangle \langle a|+a_{2}|a\rangle \langle c|)\nonumber \\ &+\Omega_{p}(|d\rangle\langle c|+\text{H.c.})+\Omega(|a\rangle\langle b|+\text{H.c.})\\ V_{2}&=\sum_{j=1}^{2}[\omega_{\rm m_{j}}b_{j}^{\dag}b_{j}+G_{j}a_{j}^{\dag}a_{j}(b_{j}+b_{j}^{\dag})]\nonumber \\ &+i\sum_{j=1}^{2}(\varepsilon_{j}a_{j}^{\dag}e^{i\delta_{j}t}-\text{H.c.})\label{eq:opt1} \end{align} where we have assumed for simplicity $\nu_{1}+\nu_{2}=\omega_{p}+\omega$ and defined $\delta_{j}=\nu_{j}-\omega_{\rm L_{j}}$. The master equation for the laser system can be derived from the part of the Hamiltonian that involves the atomic states, $V_{1}$, following standard laser theory methods \cite{Scu-book97}. In order to obtain the reduced master equation for the cavity modes, it is convenient to trace out the atomic states. \subsection{Master equation} \label{sec:3} In order to obtain the dynamics of the system we require the master equation corresponding to the Hamiltonian (\ref{eq:cavity-fields}). Our model for this system is similar to many earlier treatments of a two-mode three-level laser \cite{Scu-book97,eyob11,eyob15}. 
Following these works, we derive the master equation for our case (the proof can be found in Appendix \ref{app:master}); here we present only the main result. Assuming that the atoms are injected into the cavity at a rate $r_a$, we can write the density matrix for the atomic and cavity system $\rho_{AR}$ at time $t$ as the sum of the density operator of the cavity modes plus a single atom injected at an earlier time, multiplied by the total number of atoms injected during an interval $\Delta t$. Taking the continuum limit, assuming that the atoms are uncorrelated with the electromagnetic modes at the times they are injected into the cavity and removed from it, and tracing out the atomic degrees of freedom, we obtain the following equation for the density matrix of the cavity modes: \begin{align}\label{eq:master_eq} \frac{d}{dt}\rho(t)=&-ig_{1}\left(a_{1}^{\dagger}\rho_{db}+a_{1}\rho_{bd}-\rho_{bd}a_{1}-\rho_{db}a_{1}^{\dagger}\right)\notag\\ &-ig_{2}\left(a_{2}\rho_{ca}+a_{2}^{\dagger}\rho_{ac}-\rho_{ac}a_{2}^{\dagger}-\rho_{ca}a_{2}\right)\notag\\ &+\kappa_1\mathcal{L}[a_1]\rho+\kappa_2\mathcal{L}[a_2]\rho \end{align} The last terms in (\ref{eq:master_eq}) are the Lindblad dissipation terms \cite{walls}, where $\kappa_{j}$ are the cavity damping rates, added to account for the coupling of the cavity modes with thermal Markovian reservoirs. 
The conditional density operators $\rho_{ac}=\bra{a}\rho_{AR}\ket{c}$ and $\rho_{db}=\bra{d}\rho_{AR}\ket{b}$ can be obtained from the master equation of the atomic and cavity system, resulting in \begin{align} \label{eq:rhoac&bd} \frac{d}{dt} \rho_{ac}(t)&=-(\gamma_{ac}+i\Delta_2)\rho_{ac}-ig_{2}(a_2\rho_{cc}-\rho_{aa}a_2)\notag\\ & -i(\Omega\rho_{bc}-\Omega_p\rho_{ad}) \end{align} \begin{align} \label{eq:rhoac&bd1} \frac{d}{dt} \rho_{bd}(t)&=-(\gamma_{bd}-i\Delta_1)\rho_{bd}+ig_{1}(\rho_{bb}a_{1}^{\dagger}-a_{1}^{\dagger}\rho_{dd})\notag\\ & -i(\Omega\rho_{ad}-\Omega_p\rho_{bc}) \end{align} where $\gamma_{ac}$ and $\gamma_{bd}$ are the dephasing rates for the off-diagonal density matrix elements. We make use of the linear approximation by keeping terms only up to second order in the coupling constants $g_{j}$ in the master equation. This is justified because the coupling constants of the two quantum fields are small compared to the other system parameters \cite{kiffner}. After obtaining the zeroth-order dynamical equations for the conditional density operators other than $\rho_{ac}$ and $\rho_{bd}$, we apply the good-cavity limit, in which the cavity damping rate is much smaller than the dephasing and spontaneous emission rates. The cavity variables then vary much more slowly than the atomic ones; since the atomic variables reach the steady state before the cavity ones, we can set the time derivatives of the aforementioned conditional density operators to zero, which allows us to solve the system of equations analytically (see Appendix \ref{app:master}). Here we consider the case in which the atoms are injected into the cavity in a coherent superposition of the levels $|a\rangle$ and $|c\rangle$. 
We take the initial state as $\ket{\Psi_{A}(0)}={C_{a}(0)}{\ket a}+C_{c}(0){\ket c}$, so the initial density operator for a single atom has the form \begin{align}\label{eq:superpos} \rho_{A}(0)=&\ket{\Psi_{A}}\bra{\Psi_{A}}=\rho^{(0)}_{aa}\ket a\bra a+\rho^{(0)}_{cc}\ket c\bra c\notag\\ +&(\rho^{(0)}_{ac}\ket a\bra c+ \text{H.c}), \end{align} where $\rho^{(0)}_{aa}=|C_a|^2$ and $\rho^{(0)}_{cc}=|C_{c}|^2$ are the initial populations and $\rho^{(0)}_{ca}=C_{c}C_{a}^*$ is the initial two-level atomic coherence, which can lead to squeezing and entanglement accompanying light amplification \cite{Xiong,Kiffner,eyob07,eyob07a,eyob08,Ge}. It is convenient to introduce the quantity $\eta\in\left[-1,1\right]$ to parametrize the initial density matrix as $\rho_{aa}^{(0)}=\frac{1-\eta}{2}$ and $\rho_{cc}^{(0)}=\frac{1+\eta}{2}$, and we set the initial coherence to $\rho_{ac}^{(0)}=\frac{1}{2}(1-\eta^2)^{1/2}$. The equations of motion (\ref{eq:rhoac&bd}) and (\ref{eq:rhoac&bd1}) can be solved using the adiabatic approximation, so that the following equations are obtained: \begin{align} \label{eq:coupled1} -ig_{1}\rho_{bd}=\xi_{6}a_{2}\rho-\xi_5\rho a_{2}+\xi_2\rho a_{1}^{\dagger}-\xi_1a_{1}^{\dagger}\rho\\ -ig_{2}\rho_{ac}=-\xi_{3}a_{2}\rho+\xi_4\rho a_{2}-\xi_7\rho a_{1}^{\dagger}+\xi_8a_{1}^{\dagger}\rho \end{align} The explicit expressions for the coefficients $\xi_{i}$ can be found in Appendix \ref{app:master}. 
Finally, the master equation for the cavity modes takes the form \begin{align} \label{eq:master_final} \frac{d}{dt}\rho(t)&=\xi_{1}(a_{1}^{\dagger}\rho a_1-a_1a_{1}^{\dagger}\rho)+\xi_{1}^{*}(a_{1}^{\dagger}\rho a_1-\rho a_1a_{1}^{\dagger})\notag\\ &+\xi_{2}(a_1\rho a_{1}^{\dagger}-\rho a_{1}^{\dagger}a_1)+\xi_{2}^{*}(a_1\rho a_{1}^{\dagger}-a_{1}^{\dagger}a_1\rho)\notag\\ &+\xi_{3}(a_2\rho a_{2}^{\dagger}-a_{2}^{\dagger}a_2\rho)+\xi_{3}^{*}(a_2\rho a_{2}^{\dagger}-\rho a_{2}^{\dagger}a_2 )\notag\\ &+\xi_{4}(a_{2}^{\dagger}\rho a_2-\rho a_2 a_{2}^{\dagger})+\xi_{4}^{*}(a_2^{\dagger}\rho a_{2}-a_{2} a_{2}^{\dagger}\rho)\notag\\ &+\xi_{5}(\rho a_{2} a_{1}-a_{1}\rho a_2)+\xi_{5}^{*}(a_{1}^{\dagger}a_{2}^{\dagger}\rho-a_{2}^{\dagger}\rho a_{1}^{\dagger})\notag\\ &+\xi_{6}(a_{1} a_{2}\rho-a_{2} \rho a_{1})+\xi_{6}^{*}(\rho a^{\dagger}_{2} a_{1}^{\dagger}-a_{1}^{\dagger}\rho a_{2}^{\dagger})\notag\\ &+\xi_{7}(\rho a_{1}^{\dagger} a_{2}^{\dagger}-a_{2}^{\dagger}\rho a_{1}^{\dagger})+\xi_{7}^{*}(a_{2} a_{1}\rho-a_{1}\rho a_{2})\notag\\ &+\xi_{8}(a_{2}^{\dagger} a_{1}^{\dagger}\rho-a_{1}^{\dagger}\rho a_{2}^{\dagger})+\xi_{8}^{*}(\rho a_{1} a_{2}-a_{2}\rho a_{1})\notag\\ &+\frac{1}{2}\sum_{i=1}^{2}\kappa_{i}[(N_{i}+1)(2a_{i}\rho a_{i}^{\dagger}-a_{i}^{\dagger}a_{i}\rho-\rho a_{i}^{\dagger}a_{i})\notag\\ &+N_{i}(2a_{i}^{\dagger} \rho a_{i}-a_{i}a_{i}^{\dagger}\rho-\rho a_{i}a_{i}^{\dagger})] , \end{align} where we have included the damping of the cavity modes by two independent thermal reservoirs with mean photon number $N_{i}$. \subsection{Heisenberg-Langevin formulation} \label{sec:4} In order to study the entanglement between the two movable mirrors, we need the quantum Langevin equations for the cavity modes and the mechanical system. 
Including the creation and annihilation operators for the mechanical system in the Hamiltonian $V_{2}$ and making use of the expression $\left\langle \dot{\mathcal{O}}\right\rangle =\mathrm{Tr}\left(\mathcal{O}\dot{\rho}\right)$, we obtain \begin{align} \dot a_{1}&=-\bigg(\frac{\kappa_1}{2}-\xi_{11}\bigg)a_1+\xi_{12}a^{\dagger}_{2}\notag\\ &-iG_{1}a_1(b_{1}^{\dagger}+b_1)+\varepsilon_{1}e^{i\delta_{1}t}+F_1 \label{eq:stability} \\ \dot a_{2}&=-\bigg(\frac{\kappa_2}{2}+\xi_{22}\bigg)a_2-\xi_{21}a^{\dagger}_{1}\notag\\ &-iG_{2}a_2(b_{2}^{\dagger}+b_2)+\varepsilon_{2}e^{i\delta_{2}t}+F_2 \label{eq:stability1a} \\ \dot b_{j}&=-i\omega_{m_j}b_j-\frac{\gamma_{m_j}}{2}b_j-iG_{j}a_{j}^{\dagger}a_j+\sqrt{\gamma_{m_j}}f_{j} \label{eq:stability1} \end{align} where $\xi_{11}=\xi_{1}^{\ast}-\xi_{2}^{\ast}$, $\xi_{12}=\xi_{5}^{\ast}-\xi_{6}^{\ast}$, $\xi_{21}=\xi_{7}-\xi_{8}$ and $\xi_{22}=\xi_{3}-\xi_{4}$. The quantum noise operators $F_1$ and $F_2$ appear as a result of the coupling of the external vacuum with the cavity modes and through spontaneous emission. The terms $f_j$ are the noise operators associated with the thermal reservoirs coupled to the mechanical oscillators. The quantum noise operators $F_{\mu}$ have zero mean and second-order correlations given by \begin{equation} \langle {F_{\mu}(t)} F_{\nu}(t^{\prime})\rangle=2D_{\mu\nu}\delta(t-t^{\prime}). 
\end{equation} Using the generalized Einstein relations \cite{Cohen,Hald} \begin{equation} 2\langle D_{\mu\nu}\rangle=-\langle A_{\mu}(t)D_{\nu}(t)\rangle-\langle D_{\mu}(t)A_{\nu}(t)\rangle+\frac{d}{dt}\langle A_{\mu}(t)A_{\nu}(t)\rangle \end{equation} we find that the only nonzero noise correlations between $F_1$, $F_2$, $F_{1}^{\dagger}$ and $F_{2}^{\dagger}$ are \begin{align} \langle {F_{1}^{\dagger}(t)} F_{1}(t^{\prime})\rangle&=2[\text{Re} (\xi_{1})+\kappa_1N_{1}]\delta(t-t^{\prime}),\\ \langle {F_{1}(t)} F_{1}^{\dagger}(t^{\prime})\rangle&=2[\text{Re} (\xi_{2})+\kappa_1(N_{1}+1)]\delta(t-t^{\prime}),\\ \langle {F_{2}^{\dagger}(t)} F_{2}(t^{\prime})\rangle&=2[\text{Re} (\xi_{4})+\kappa_2N_{2}]\delta(t-t^{\prime}),\\ \langle {F_{2}(t)} F_{2}^{\dagger}(t^{\prime})\rangle&=2[\text{Re} (\xi_{3})+\kappa_2(N_{2}+1)]\delta(t-t^{\prime}),\\ \langle {F_{2}(t)} F_{1}(t^{\prime})\rangle&=[\xi_{6}^{*}+\xi_8]\delta(t-t^{\prime}). \end{align} Meanwhile, $f_j$ are the zero-mean noise operators associated with the mechanical reservoirs, fully characterized by their correlation functions \begin{align} \langle {f_{j}^{\dagger}(t)} f_{j}(t^{\prime})\rangle&=n_{j}\delta(t-t^{\prime}),\\ \langle {f_{j}(t)} f_{j}^{\dagger}(t^{\prime})\rangle&=(n_{j}+1)\delta(t-t^{\prime}), \end{align} where $n_j=[\exp(\hbar\omega_{m_{j}}/k_{B}T_j)-1]^{-1}$ is the mean thermal occupation number, $k_{B}$ is the Boltzmann constant, and $T_j$ is the temperature of the reservoir of the $j$th mechanical resonator. \section{Bistability of intracavity mean photon numbers}\label{sec:5} \subsection{Mean field expansion} Typically the single-photon coupling is very weak, but the optomechanical interaction can be greatly enhanced by employing a coherently driven cavity. Bistability has been observed in driven Fabry-P\'erot-type optomechanical systems in the optical domain \cite{Dorsel,Jiang}. 
In this section we study the effect of the coupling induced by the two-photon coherence on the bistability of the mean intracavity photon numbers. To understand the bistability from the perspective of the intracavity photon number, we consider the steady-state solutions of (\ref{eq:stability})-(\ref{eq:stability1}). This can be done by transforming the cavity field to its rotating frame, defined by $\tilde{a}_{j}=a_j e^{-i\delta_j t}$, and expanding the operators around their mean values: \begin{align} \tilde{a}_{j}=\langle \tilde{a}_{j}\rangle+\delta\tilde{a}_{j} \nonumber \\ \tilde{b}_{j}=\langle \tilde{b}_{j}\rangle+\delta\tilde{b}_{j}. \end{align} Here, $\langle \tilde{a}_{j}\rangle$ is the average cavity field produced by the laser drive (in the absence of optomechanical coupling), and $\delta\tilde{a}_{j}$ represents the quantum fluctuations around the mean (assumed to be small). We have also neglected the rapidly oscillating terms $\exp[-i(\delta_1+\delta_2)t]$ in the transformed frame, which contain both the fluctuations $\delta\tilde{a}_{j}$ and the classical mean values $\langle \tilde{a}_{j}\rangle$. To obtain the solutions for $\langle \tilde{a}_{j}\rangle$ in the steady state, one must either simplify the equations by making a rotating-wave approximation that neglects the fast oscillating terms, or solve a set of self-consistent equations. We present both approaches in the following subsections. \subsection{Rotating wave approximation} In the rotating-wave approximation (RWA), we neglect the fast oscillating terms in the transformed quantum Langevin equations to determine the evolution of $\langle \tilde{b}_{j}\rangle$ and $\langle \tilde{a}_{j}\rangle$. 
This gives the steady-state solutions according to \begin{align} \langle b^{\dagger}_j+b_j\rangle&=-\frac{2\omega_{m_{j}}G_{j}I_j}{\gamma^{2}_{m_{j}}/4+\omega^2_{m_{j}}},\\ \langle \tilde{a}_{j}\rangle&=\frac{\varepsilon_{j}}{i\delta_j+\kappa_{j}/2+(-1)^j\eta_j}, \label{eq:mean} \end{align} where \begin{align} I_j=|\langle \tilde{a}_{j}\rangle|^2 \end{align} is the steady-state intracavity mean photon number, \begin{align} \delta_{j}=\nu_{j}-\omega_{L_{j}}+G_{j}\langle b^{\dagger}_{j}+b_{j}\rangle \end{align} is the cavity mode detuning, and we have defined the frequency shift due to radiation pressure \begin{align} \delta\nu_j\equiv G_{j}\langle b^{\dagger}_j+b_j\rangle \end{align} for convenience. We have also defined \begin{align} \eta_{1}& =\xi^{*}_{1}-\xi^{*}_{2} \nonumber \\ \eta_2 & = \xi_{3}-\xi_{4} . \end{align} We can then write the equations for the intracavity mean photon numbers in the implicit form \begin{equation} \label{eq:bistability} I_j\bigg|i(\delta_{0j}-\beta_{j} I_{j})+\frac{\kappa_j}{2}+(-1)^j\eta_{j}\bigg|^2=|\varepsilon_{j}|^2, \end{equation} where we have used \begin{align} \delta_{0j}=\nu_{j}-\omega_{L_{j}} \label{delta0jdef} \end{align} and \begin{align} \beta_{j}=(2\omega_{m_{j}}G^2_{j})/(\gamma^2_{m_{j}}/4+\omega^2_{m_j}). \end{align} Eq. (\ref{eq:bistability}) has the form of the standard equation for ${\rm S}$-shaped bistability of the intracavity intensities in an optomechanical system, with effective cavity damping rates $\kappa_j+2(-1)^{j}\eta_j$. We note that, in the RWA, there is no coupling between the intensities of the cavity modes due to the two-photon coherence induced in the system. To demonstrate the bistable behavior of the mean intracavity photon numbers in the doubly resonant cavity, we use a set of parameters from recent experimental setups \cite{Gr,Ar}. 
We consider mirrors of mass $m=145\,{\rm ng}$, cavity lengths $L_1= 112\,{\rm\mu m}$ and $L_2=88.6\,{\rm\mu m}$, pump laser wavelengths $\lambda_1 =810\,{\rm nm}$ and $\lambda_2 =1024\,{\rm nm}$, an atomic injection rate $r_a =1.6\,{\rm MHz}$, mechanical damping rates $\gamma_{m_{1}} =\gamma_{m_{2}} =2\pi\times 60\,{\rm MHz}$, and mechanical frequencies $\omega_{m_{1}} =\omega_{m_{2}} =2\pi\times3\,{\rm MHz}$; without loss of generality, we assume equal dephasing and spontaneous emission rates for the atoms, $\gamma_{ac} =\gamma_{bd} =\gamma_{cd} =\gamma_{ab}=\gamma_{bc} =\gamma_{ad} =\gamma_{a} =\gamma_{b} =\gamma_{c} =\gamma_d =\gamma =3.4\,{\rm MHz}$. For the purposes of this paper, we assume a Gaussian distribution for the atom density and set both the one- and two-photon detunings to zero, $\Delta_{p} =0$ and $\Delta_{c} =0$. We first illustrate the bistability of the steady-state intracavity mean photon number for the first cavity mode. The first example, presented in Fig. \ref{f:bista}, is the steady-state intracavity photon number in the red-detuned ($\delta_{01}>0$) regime. We point out that we have introduced an effective detuning for our system, (\ref{delta0jdef}), for which the red-detuned regime corresponds to all positive values of the effective detuning, opposite to prior conventions \cite{Clerk}. The left panel of Fig. \ref{f:bista} shows that the optical bistability regime persists over a broad range of external pump fields. The right panel shows the {\rm S}-shaped behavior of the bistable intracavity mean photon number $I_{1}$. The strength of the bistability changes as the intensity of the external field and the detuning are increased. For the second cavity mode, we find almost exactly the same bistability behavior of the steady-state intracavity mean photon number. 
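Expanding the modulus squared in Eq. (\ref{eq:bistability}) turns it into a cubic equation in $I_j$, whose real positive roots give the coexisting steady states. A minimal sketch (assuming, for illustration only, a real effective shift $\eta_j$ and dimensionless unit-scale parameters rather than the experimental values above):

```python
import numpy as np

def intracavity_roots(eps2, delta0, beta, kappa, eta):
    """Real, positive roots I of I*[(delta0 - beta*I)^2 + (kappa/2 + eta)^2] = eps2,
    i.e. the expanded bistability equation with a real effective shift eta."""
    c2 = (kappa / 2 + eta) ** 2
    # Cubic: beta^2 I^3 - 2 delta0 beta I^2 + (delta0^2 + c2) I - eps2 = 0
    roots = np.roots([beta**2, -2 * delta0 * beta, delta0**2 + c2, -eps2])
    roots = roots[np.abs(roots.imag) < 1e-9].real
    return np.sort(roots[roots > 0])

# Sufficiently red-detuned (delta0 > 0) drive: three coexisting solutions
print(intracavity_roots(eps2=2.0, delta0=3.0, beta=1.0, kappa=1.0, eta=0.0))
# Weak drive: a single stable branch
print(intracavity_roots(eps2=0.1, delta0=3.0, beta=1.0, kappa=1.0, eta=0.0))
```

Scanning `eps2` (proportional to the drive power) and counting roots reproduces the S-shaped phase diagram of Fig. \ref{f:bista}: the middle root of the three is the unstable branch.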
\begin{figure}[tb] \vspace{0.5cm} \begin{center} \includegraphics[width=0.4955\columnwidth,height=2in]{fig1a_bis.pdf} \includegraphics[width=0.4955\columnwidth,height=2in]{fig1b_bis.pdf} \end{center} \vspace{-0.5cm} \caption{\label{f:bista} Tunable optical bistability of the intracavity field. The left panel shows the phase diagram for the intracavity mean photon number $I_{1}$ for different values of the cavity laser detuning $\delta_{01}$ and the external pump field strength (cavity drive laser) $P$ in the rotating wave approximation. The red, green, and blue curves in the right panel are cross sections of the phase diagram for cavity laser detunings $\delta_{01} =6\pi\,{\rm MHz}$, $4\pi\,{\rm MHz}$, and $0$, respectively. Here $g_{1} =g_{2} =2\pi\times3\,{\rm MHz}$, $\Omega/\gamma =10$, $\kappa_{1} =\kappa_{2} =2\pi \times 215\,{\rm kHz}$, and all atoms are initially in their excited state $|\psi_{A}(0)\rangle=\ket{c}$, corresponding to the parameter $\eta=1$.} \end{figure} \subsection{Beyond the rotating wave approximation} Let us now analyze the bistability behavior of the intracavity mean photon number beyond the rotating wave approximation (NRWA), in which case we are able to see the effect of the two-photon coherence. To study the bistability in this regime, we consider the rotating frame defined by the bare cavity frequencies $\nu_{j}$. This is equivalent to setting the cavity mode detunings $\delta\nu_{j}=0$ in the Hamiltonian (\ref{eq:opt1}); the detuning then remains in the counter-rotating terms of the Langevin equations for $\tilde{a}_{j}$. This approach leads to the condition \begin{align} \delta_{02}=-\delta_{01}\equiv-\delta_{0}. 
\end{align} The expectation values of the cavity mode operators with this choice of detuning are \begin{align} \label{eq:exp1} \langle \tilde{a}_{1}\rangle&=\frac{\varepsilon_{1}\alpha^{*}_2+\varepsilon_{2}(\xi^{*}_{5}-\xi^{*}_{6})}{\alpha_1\alpha^{*}_2+(\xi^{*}_{5}-\xi^{*}_{6})(\xi^{*}_{7}-\xi^{*}_{8})} \\ \langle \tilde{a}_{2}\rangle&=\frac{\varepsilon_{2}\alpha^{*}_1-\varepsilon_{1}(\xi_{8}-\xi_{7})}{\alpha^{*}_{1}\alpha_{2}+(\xi_{5}-\xi_{6})(\xi_{7}-\xi_{8})},\label{eq:exp2} \end{align} where \begin{align} \alpha_1& =i(\delta_{0}-\beta_1I_1)+\kappa_{1}/2-\eta_{1} \nonumber \\ \alpha_2& =-i(\delta_{0}+\beta_{2}I_{2})+\kappa_{2}/2+\eta_{2}. \end{align} As can be seen in (\ref{eq:exp1}) and (\ref{eq:exp2}), the coupling between $\langle \tilde{a}_{1}\rangle$ and $\langle \tilde{a}_{2}\rangle$ is due to the coefficients $\xi_7$ and $\xi_8$, which are proportional to the coherence induced either by the coupling of atomic levels by an external laser, or by injecting the atoms in a coherent superposition of upper and lower levels. Here, we consider a more general expression by introducing a new parameter $\mu$ that relates the cavity drive amplitudes ($P_{2}\sim\mu^{2}P_{1}$): \begin{equation} |\varepsilon_{2}|=\mu|\varepsilon_{1}|\equiv\mu|\varepsilon|. \end{equation} We thus obtain an equivalent relation for the intracavity mean photon numbers \begin{align} \label{eq:exp1a} \frac{|\alpha_{1}(I_1)\alpha^{*}_{2}(I_{2})+(\xi^{*}_{5}-\xi^{*}_{6})(\xi^{*}_{7}-\xi^{*}_{8})|^2}{|\alpha^{*}_{2}(I_{2})+\mu(\xi^{*}_{5}-\xi^{*}_{6})|^2}I_1=|\varepsilon|^2,\\ \frac{|\alpha^{*}_{1}(I_1)\alpha_{2}(I_{2})+(\xi_{5}-\xi_{6})(\xi_{7}-\xi_{8})|^2}{|\mu\alpha^{*}_{1}(I_{1})-(\xi_{7}-\xi_{8})|^2}I_2=|\varepsilon|^2\label{eq:exp2a}. \end{align} The above transformation provides an elegant approach to understanding the effect of the coupling on the bistability behavior of the cavity modes by examining the limits of the parameter $\mu^2$. 
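Eqs. (\ref{eq:exp1a}) and (\ref{eq:exp2a}) are implicit in $I_1$ and $I_2$ and can be solved self-consistently. The sketch below iterates them with under-relaxation; the complex combinations $\xi_5-\xi_6$ and $\xi_7-\xi_8$ are supplied as free inputs (illustrative stand-ins for the atomic coefficients of the appendix), and the effective shifts $\eta_j$ are taken real, so the numbers are purely illustrative.

```python
import numpy as np

def coupled_intensities(eps, mu, delta0, beta1, beta2, kappa1, kappa2,
                        eta1, eta2, x56, x78, iters=2000, relax=0.3):
    """Self-consistent solution of the coupled photon-number equations
    beyond the RWA.  x56 = xi5 - xi6 and x78 = xi7 - xi8 are treated as
    given complex numbers; eta1, eta2 are real effective shifts."""
    I1 = I2 = 0.0
    for _ in range(iters):
        a1 = 1j * (delta0 - beta1 * I1) + kappa1 / 2 - eta1
        a2 = -1j * (delta0 + beta2 * I2) + kappa2 / 2 + eta2
        den = a1 * np.conj(a2) + np.conj(x56) * np.conj(x78)
        # Direct iteration of the two implicit relations
        I1n = eps**2 * abs(np.conj(a2) + mu * np.conj(x56))**2 / abs(den)**2
        I2n = eps**2 * abs(mu * np.conj(a1) - x78)**2 / abs(den)**2
        I1 = (1 - relax) * I1 + relax * I1n
        I2 = (1 - relax) * I2 + relax * I2n
    return I1, I2

# Decoupled check (x56 = x78 = 0, beta = 0): each mode reduces to the
# single-mode result eps^2 / |alpha_j|^2
print(coupled_intensities(1.0, 1.0, 0.0, 0.0, 0.0, 2.0, 2.0, 0.0, 0.0, 0j, 0j))
```

On the bistable branches the fixed-point iteration converges to whichever solution lies in the basin of the chosen initial guess, so sweeping the drive up and down reproduces the hysteresis loop.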
In the limit where $\mu^2\ll1$ ($P_2\ll P_1$), the denominator in (\ref{eq:exp2a}) can be approximated as \begin{align} & |\mu\alpha^{*}_{1}-(\xi_{7}-\xi_{8})|^2 \nonumber \\ & \approx |\mu(-i\delta_0+\kappa_{1}/2-(\xi_{1}-\xi_{2}))-(\xi_{7}-\xi_{8})|^2 \end{align} for $\mu^2\beta_{1}I_{1}/|(\xi_{7}-\xi_{8})|^2\ll 1$. In this case, the ratio of (\ref{eq:exp1a}) to (\ref{eq:exp2a}) yields the relation \begin{align} I_1/I_{2}= \frac{|\alpha^{*}_{2}(I_{2})+\mu(\xi^{*}_{5}-\xi^{*}_{6})|^2}{|\mu(-i\delta_0+\kappa_{1}/2-(\xi^{*}_{1}-\xi^{*}_{2}))-(\xi_{7}-\xi_{8})|^2}. \end{align} Note that this implies that $I_{2}$ exhibits bistability when the intensity of the first cavity mode is varied. \begin{figure}[tb] \vspace{0.5cm} \begin{center} \includegraphics[width=0.45\columnwidth,height=2in]{fig2a_bis.pdf} \includegraphics[width=0.45\columnwidth,height=2in]{fig2a1_bis.pdf}\\ \vspace{0.5cm} \includegraphics[width=0.45\columnwidth,height=2in]{fig2b_bis.pdf} \includegraphics[width=0.45\columnwidth,height=2in]{fig2b1_bis.pdf} \end{center} \vspace{-0.5cm} \caption{\label{f:bista1} Cross sections of the phase diagram at $\delta_{0}/2\pi=6\,{\rm MHz}$, $3\,{\rm MHz}$, $1\,{\rm MHz}$, and $0.25\,{\rm MHz}$. Notice that the bistability appears for positive values of the detuning, in good agreement with the ``red-detuned'' regime that allows bistability in single-mode optomechanics \cite{Tredicucci,Dorsel,Aspelmeyer}. Here we have used $\mu=0.1$ ($P_2=0.08P_1$), and the atoms are initially injected into the cavity in the state $|\psi_{A}(0)\rangle=\ket{c}$, that is, for the value of the parameter $\eta=1$. See text and Fig. \ref{f:bista} for the other parameters.} \end{figure} An exact numerical analysis of Eqs. (\ref{eq:exp1a}) and (\ref{eq:exp2a}) is shown in Fig. \ref{f:bista1}, which indicates that the behavior of the cavity mode mean photon number is very sensitive to the sign of the detuning. 
As in the RWA case, the bistability occurs in the ``red detuned'' regime ($\delta_{0}>0$); good agreement between the two treatments is achieved in their common regime of validity. We also observe that the bistable region widens with increasing detuning and drive laser power. \section{Dynamics of continuous variable entanglement}\label{sec:6} In this section, we investigate the degree of entanglement of the movable mirrors of the doubly resonant cavity in the adiabatic regime. The detection of entanglement in similar contexts has been investigated by many groups recently \cite{Xiong,eyob07,eyob07a,eyob08,Rist,Palo,Wang}. Although there is no entanglement between the cavity fields and the movable mirrors, we will show that the entanglement between the two field modes can be transferred to the movable mirrors of the doubly resonant cavity. Indeed, optimal entanglement transfer from the two-mode cavity field to the mechanical modes is achieved by adiabatically eliminating the dynamics of the field modes, which is valid when $\kappa_j\gg\gamma_{m_{j}}$. 
We introduce the slowly varying fluctuation operators $\delta a_{j}\equiv\delta \tilde{a}_{j}e^{i\delta_{j}t}$ and $\tilde{b}_{j}\equiv b_{j}e^{i\omega_{m_{j}}t}$; using (\ref{eq:stability})-(\ref{eq:stability1}), the corresponding linearized quantum Langevin equations read \begin{align} \delta \dot {a}_1&=-\frac{\kappa^{\prime}_{1}}{2}\delta a_1+\xi_{12}\delta a^{\dagger}_{2}-i G_{1}\langle \tilde{a}_{1}\rangle(\delta\tilde{b}^{\dagger}_{1}e^{i(\delta_1+\omega_{m_1})t}\notag\\ &+\delta \tilde{b}_{1}e^{i(\delta_1-\omega_{m_1})t})+F_{1}\\ \delta \dot {a}_2&=-\frac{\kappa^{\prime}_{2}}{2}\delta a_2-\xi_{21}\delta a^{\dagger}_{1}-i G_{2}\langle\tilde{a}_{2}\rangle(\delta \tilde{b}^{\dagger}_{2}e^{i(\delta_2+\omega_{m_2})t}\notag\\ &+\delta \tilde{b}_{2}e^{i(\delta_2-\omega_{m_2})t})+F_{2}\\ \delta\dot{\tilde{b}}_{j}&=-\frac{\gamma_{m_{j}}}{2}\delta\tilde{b}_{j}-i G_{j}\left\langle \tilde{a}_{j}\right\rangle \delta a_{j}^{\dagger}e^{i\left(\omega_{m_{j}}+\delta_{j}\right)t}\notag\\ &-i G_{j}\left\langle \tilde{a}_{j}^{\dagger}\right\rangle \delta a_{j}e^{i\left(\omega_{m_{j}}-\delta_{j}\right)t}+\sqrt{\gamma_{m_{j}}}f_{j} \end{align} where $\kappa'_{1}=\kappa_{1}-2\xi_{11}$ and $\kappa'_{2}=\kappa_{2}+2\xi_{22}$. Here we have the choice of using the RWA when evaluating $\langle \tilde{a}_{j} \rangle$. In the RWA, the model should not enter the regime where the measurement is capable of resolving the zero-point motion of the oscillator in a time short compared with the mechanical oscillation period. This regime, which requires very strong optomechanical coupling, exhibits interesting behavior, including dynamical mechanical squeezing \cite{Doherty,Warwick}. From the perspective of quantum state transfer, it has been shown in \cite{Pinard,Aspelmeyer} that the optomechanical interaction, and consequently the field-mirror entanglement, are enhanced when the detuning of each cavity-driving field is $\delta_{j}=-\omega_{m_{j}}$. 
To avoid these issues, we explicitly compute the $ \langle \tilde{a}_{j} \rangle $ without using the RWA by using a self-consistent iterative approach. Setting $\delta_{j}=-\omega_{m_{j}}$ and using the adiabatic approximation for the $\delta a_j$ equations we get the expressions for the mirror variables. Moreover, we can choose the phase of the driving laser in such a way that $\left\langle \tilde{a}_{j}\right\rangle =-i\left|\left\langle \tilde{a}_{j}\right\rangle \right|$. Hence, we have the final expressions \begin{align} \delta\dot{\tilde{b}}_{1}&=-\frac{\gamma_{m_{1}}}{2}\delta\tilde{b}_{1}+\alpha_{1}\left(e^{2i\delta_{2}t}-e^{-2i\delta_{1}t}\right)\delta\tilde{b}_{2}\notag\\ &+\alpha_{1}\left(1-e^{-2i\left(\delta_{1}+\delta_{2}\right)t}\right)\delta\tilde{b}_{2}^{\dagger}+\widetilde{F}_{1},\\ \delta\dot{\tilde{b}}_{2}&=-\frac{\gamma_{m_{2}}}{2}\delta\tilde{b}_{2}+\alpha_{2}\left(e^{-2i\delta_{2}t}-e^{2i\delta_{1}t}\right)\delta\tilde{b}_{1}\notag\\ &+\alpha_{2}\left(e^{-2i\left(\delta_{1}+\delta_{2}\right)t}-1\right)\delta\tilde{b}_{1}^{\dagger}+\widetilde{F}_{2} \end{align} where \begin{align} \alpha_{1}&\equiv\frac{4\xi_{12}}{\kappa}G_{1}G_{2}\left|\left\langle \tilde{a}_{1}\right\rangle \right|\left|\left\langle \tilde{a}_{2}\right\rangle \right| \notag\\ \widetilde{F}_{1}&\equiv\frac{2\kappa'_{2}}{\kappa}G_{1}\left|\left\langle \tilde{a}_{1}\right\rangle \right|\left(e^{-2i\delta_{1}t}F_{1}-F_{1}^{\dagger}\right)\notag\\ &+\frac{4\xi_{12}}{\kappa}G_{1}\left|\left\langle \tilde{a}_{1}\right\rangle \right|\left(e^{-2i\delta_{1}t}F_{2}^{\dagger}-F_{2}\right)+\sqrt{\gamma_{m_{1}}}f_{1}\notag\\ \alpha_{2}&\equiv\frac{4\xi_{21}}{\kappa}G_{1}G_{2}\left|\left\langle \tilde{a}_{1}\right\rangle \right|\left|\left\langle \tilde{a}_{2}\right\rangle \right|\notag\\ \widetilde{F}_{2}&\equiv\frac{2\kappa'_{1}}{\kappa}G_{2}\left|\left\langle \tilde{a}_{2}\right\rangle \right|\left(e^{-2i\delta_{2}t}F_{2}-F_{2}^{\dagger}\right)\notag\\ 
&+\frac{4\xi_{21}}{\kappa}G_{2}\left|\left\langle \tilde{a}_{2}\right\rangle \right|\left(F_{1}-e^{-2i\delta_{2}t}F_{1}^{\dagger}\right)+\sqrt{\gamma_{m_{2}}}f_{2} \end{align} with $\kappa'_{1}=\kappa_{1}-2\xi_{11}$, $\kappa'_{2}=\kappa_{2}+2\xi_{22}$ and $\kappa=\kappa'_{1}\kappa'_{2}+4\xi_{12}\xi_{21}$. In order to study the entanglement between the mirrors it is convenient to define the position and momentum operators as $\delta q_{j}=\frac{\delta\tilde{b}_{j}+\delta\tilde{b}_{j}^{\dagger}}{\sqrt{2}}$ and $\delta p_{j}=i\frac{\delta\tilde{b}_{j}^{\dagger}-\delta\tilde{b}_{j}}{\sqrt{2}}$. With these fluctuation operators we can write a matrix equation of the form \begin{equation} \label{eq:time_dep} \dot{\bf u}\left(t\right)={\bf M}\left(t\right){\bf u}\left(t\right)+{\bf n}\left(t\right), \end{equation} where we define \begin{align} {\bf u}=\left(\begin{array}{cccc} \delta q_{1}, & \delta p_{1},& \delta q_{2},& \delta p_{2}\end{array}\right)^{T} . \end{align} Here ${\bf M}$ is a matrix containing the couplings between the fluctuations and the vector ${\bf n}$ contains the noise operators of both the cavity and the mirrors. This inhomogeneous differential equation can be solved numerically. The evolution of the quadrature fluctuations is described by the general solution of (\ref{eq:time_dep}), formally expressed as \cite{Mari,Mari09,Jie} \begin{equation} {\bf u}\left(t\right)=\mathbb{\bf G}\left(t\right){\bf u}\left(0\right)+\mathbb{\bf G}\left(t\right)\int_{0}^{t}\mathbb{\bf G}^{-1}\left(\tau\right){\bf n}\left(\tau\right)d\tau, \end{equation} where \begin{align} \mathbb{\bf G}\left(t\right)=e^{\int_{0}^{t} M\left(s\right)ds} \end{align} and the initial condition satisfies $\mathbb{\bf G}\left(0\right)=\mathbb{I}$, where $\mathbb{I}$ is the identity matrix. One important route to bringing quantum effects to the macroscopic level is the creation of entanglement between optical and mechanical modes. 
If the initial state of the system is Gaussian, the statistics remain Gaussian under continuous linear measurement for all time; the entanglement can therefore be quantified via the logarithmic negativity. The logarithmic negativity is a convenient and commonly used measure of the strength of a given entanglement resource and has the attractive properties of being additive for multiple independent entangled states and of quantifying the maximum distillable entanglement \cite{Vidal}. In particular, it can be obtained from the correlation matrix $\mathbb{\bf V}$, with elements given by \begin{align} V_{i,j}\equiv\frac{1}{2}\left\langle u_{i}u_{j}+u_{j}u_{i}\right\rangle-\left\langle u_{i}\right\rangle \left\langle u_{j}\right\rangle, \end{align} which fully characterizes the mechanical and optical variances (it also includes information on the quantum correlations between the two mechanical and the optical cavity modes) and has the block structure \begin{equation} \mathbb{\bf V}=\left(\begin{array}{cc} \mathbb{\bf A} & \mathbb{\bf C}\\ \mathbb{\bf C}^{T} & \mathbb{\bf B} \end{array}\right). \end{equation} The corresponding logarithmic negativity $E_{\mathcal{N}}$ is given by \cite{Adesso,Ferraro} \begin{equation} E_{\mathcal{N}}=\max \left( 0, -\ln 2\eta^{-}\right), \label{entdef} \end{equation} where \begin{align} \eta^{-} =\frac{1}{\sqrt{2}}\sqrt{\Sigma-\sqrt{\Sigma^{2}-4\det \mathbb{\bf V}}} \end{align} is the smallest symplectic eigenvalue of the partially transposed covariance matrix and \begin{align} \Sigma & =\det \mathbb{\bf A}+\det \mathbb{\bf B}-2 \det \mathbb{\bf C} . \end{align} The quantities of interest in the present model are the quadrature fluctuations of the cavity and the mirrors. Since the fluctuations are time dependent, so are the elements of the correlation matrix. 
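As a concrete check of (\ref{entdef}), the sketch below computes $E_{\mathcal{N}}$ from a $4\times4$ covariance matrix in the vacuum-variance-$1/2$ convention. The two-mode squeezed vacuum, for which $E_{\mathcal{N}}=2r$ is known analytically, serves as a test case; the matrix used is standard and not taken from our model.

```python
import numpy as np

def log_negativity(V):
    """Logarithmic negativity E_N = max(0, -ln 2*eta_minus) of a two-mode
    Gaussian state with 4x4 covariance matrix V = [[A, C], [C^T, B]]
    (vacuum variance 1/2 convention)."""
    A, B, C = V[:2, :2], V[2:, 2:], V[:2, 2:]
    Sigma = np.linalg.det(A) + np.linalg.det(B) - 2 * np.linalg.det(C)
    # Smallest symplectic eigenvalue of the partial transpose; the inner
    # max() only guards against tiny negative round-off under the sqrt
    disc = max(Sigma**2 - 4 * np.linalg.det(V), 0.0)
    eta_minus = np.sqrt((Sigma - np.sqrt(disc)) / 2)
    return max(0.0, -np.log(2 * eta_minus))

# Two-mode squeezed vacuum with squeezing parameter r: E_N = 2r exactly
r = 0.5
A = (np.cosh(2 * r) / 2) * np.eye(2)
C = (np.sinh(2 * r) / 2) * np.diag([1.0, -1.0])
V = np.block([[A, C], [C.T, A]])
print(log_negativity(V))   # close to 1.0 (= 2r)
```

Any separable state gives $2\eta^{-}\geq1$ and hence $E_{\mathcal{N}}=0$, which is how the entanglement death times in the figures below are identified.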
In order to compute its elements, we define a covariance matrix $\mathbb{\bf R}\left( t \right)$ by the elements % \begin{align} \mathbb{\bf R}_{\ell,\ell^{\prime}}\left(t\right)=\left\langle u_{\ell}\left(t\right)u_{\ell^{\prime}}\left(t\right)\right\rangle \end{align} for $\ell,\ell^{\prime}=1, 2,3, 4$. In order to quantify the two-mode entanglement, we need to determine the covariance matrix $\mathbb{\bf R}\left(t\right)$. Taking into account (\ref{eq:time_dep}) and assuming that the correlations between its elements and the noise operators vanish at the initial time, the general expression for the covariance matrix $\mathbb{\bf R}\left(t\right)$ at an arbitrary time takes the form \begin{equation} \mathbb{\bf R}\left(t\right)=\mathbb{\bf G}\left(t\right) \mathbb{\bf R}\left(0\right)\mathbb{\bf G}^{T}\left(t\right)+\mathbb{\bf G}\left(t\right)\mathbb{\bf Z}\left(t\right)\mathbb{\bf G}^{T}\left(t\right), \end{equation} where \begin{align} \mathbb{\bf Z}\left(t\right)=\int_{0}^{t}\int_{0}^{t}\mathbb{\bf G}^{-1}\left(\tau\right)\mathbb{\bf C}\left(\tau,\,\tau'\right)\left[\mathbb{\bf G}^{-1}\left(\tau^{\prime}\right)\right]^{T}d\tau\,d\tau'. \end{align} The elements of the matrix $\mathbb{\bf C}\left(\tau,\,\tau'\right)$ are the correlations between the elements of the vector $\mathbb{\bf n}$, that is, \begin{align} \mathbb{\bf C}_{l,m}\left(\tau,\,\tau'\right)=\left\langle \mathbb{\bf n}_{l}\left(\tau\right)\mathbb{\bf n}_{m}\left(\tau'\right)\right\rangle . \end{align} These elements can easily be calculated by using the generalized Einstein relation for the noise operators. Moreover, since the expectation values of the noise operators are zero, the equation for the mean value of the fluctuations is \begin{align} \left\langle \mathbb{\bf u}\left(t\right)\right\rangle =\mathbb{\bf G}\left(t\right)\left\langle \mathbb{\bf u}\left(0\right)\right\rangle. \end{align} The above are the formal equations for the evolution of the quadrature operators of the mirrors. 
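For delta-correlated noise, $\mathbb{\bf C}(\tau,\tau')=\mathbb{\bf D}\,\delta(\tau-\tau')$, the double integral for $\mathbb{\bf Z}(t)$ is equivalent to integrating the differential form $\dot{\mathbb{\bf R}}=\mathbb{\bf M}\mathbb{\bf R}+\mathbb{\bf R}\mathbb{\bf M}^{T}+\mathbb{\bf D}$, which is usually the most convenient route numerically. A minimal sketch for a constant drift matrix (a single damped mode, not the full four-dimensional $\mathbb{\bf M}(t)$ of the text, which would require a time-ordered propagator):

```python
import numpy as np

def evolve_covariance(M, D, R0, t, steps=20000):
    """Euler integration of dR/dt = M R + R M^T + D, the differential
    form of R(t) = G R(0) G^T + G Z(t) G^T for delta-correlated noise."""
    dt = t / steps
    R = R0.copy()
    for _ in range(steps):
        R = R + dt * (M @ R + R @ M.T + D)
    return R

# Single damped mode: M = -(gamma/2) I and D = gamma (n + 1/2) I, so the
# steady state is R = (n + 1/2) I (thermal equilibrium).
gamma, n = 1.0, 2.0
M = -(gamma / 2) * np.eye(2)
D = gamma * (n + 0.5) * np.eye(2)
R = evolve_covariance(M, D, 0.5 * np.eye(2), t=20.0)
print(R[0, 0])   # relaxes toward n + 1/2 = 2.5
```

Starting from the vacuum, the variance relaxes on the timescale $1/\gamma$ to the thermal value, mirroring how the mirror-mirror correlations in the next figures are obtained from $\mathbb{\bf R}(0)$.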
We now assume that the initial state of the mirrors is separable, that the mechanical baths are, as usual, in thermal states at temperature $T$ with occupancies $n_{i}$, and that the cavity modes are in the vacuum state. The initial density matrix for the $i$th mechanical oscillator is therefore given by \begin{equation} \rho_{m_{i}}=\sum_{j=0}^{\infty}\frac{n_{i}^{j}}{\left(1+n_{i}\right)^{j+1}}\left|j\right\rangle \left\langle j\right|. \end{equation} Under this assumption the $\mathbb{\bf R}$ matrix at the initial time is given by \begin{equation} \mathbb{\bf R}\left(0\right)=\left(\begin{array}{cccc} n_{1}+\frac{1}{2} & \frac{i}{2} & 0 & 0\\ -\frac{i}{2} & n_{1}+\frac{1}{2} & 0 & 0\\ 0 & 0 & n_{2}+\frac{1}{2} & \frac{i}{2}\\ 0 & 0 & -\frac{i}{2} & n_{2}+\frac{1}{2} \end{array}\right). \end{equation} From (\ref{entdef}), the entanglement of the movable mirrors can be computed numerically. In Fig. \ref{f:nega1}, we plot the degree of entanglement of the two movable mirrors as a function of time $t$ for different $\Omega$ at fixed input laser powers $P$, thermal noise $n$, and thermal photon numbers $N$. We consider the standard case of symmetric mechanical damping ($\gamma_1=\gamma_2=\gamma$), symmetric thermal occupation of the mechanical baths ($n_1=n_2=n$), and symmetric thermal photon numbers ($N_1=N_2=N$), which allows the time-dependent second moments to be evaluated quickly. The assumption of equal thermal occupations is reasonable for most experimental situations, and our results turn out to be insensitive to unequal mechanical damping rates provided that both are small. We observe that the amount of entanglement decreases with time, and that it is essentially unchanged as the external driving field $\Omega$ increases, saturating for all values of $\Omega$ considered. 
In turn, this analysis shows that the generated entanglement can be controlled by adjusting experimental conditions, particularly the external driving field $\Omega$. \begin{figure}[tb] \vspace{0.5cm} \begin{center} \includegraphics[width=0.995\columnwidth]{nega1.pdf} \end{center} \vspace{-0.5cm} \caption{\label{f:nega1} Logarithmic negativity $E_{\mathcal{N}}$ of the micromechanical mirrors for thermal phonon numbers $n_1=n_2=50$ and thermal photon numbers $N_1=N_2=1$ as a function of $t$ at constant ${\rm \Omega_{p}/\gamma}=0.018$ with $\Omega=15$ (solid green line), $\Omega=20$ (dotted red line), and $\Omega=30$ (dotdashed black line). Here ${\rm g_{1}}={\rm g_{2}}=2\pi\times4{\rm MHz}$, ${\rm \kappa_{1}}={\rm \kappa_{2}}=2\pi\times215{\rm kHz}$, and it is assumed that all atoms are initially in their excited state $|\psi_{A}(0)\rangle=\ket{c}$, that is, the parameter $\eta=1$.} \end{figure} To see the effect of the cavity-driving laser powers $P$ on the output entanglement, we plot in Fig. \ref{f:nega2} the time dependence of the entanglement for various cavity-driving laser powers when all atoms are initially in their excited state $|\psi_{A}(0)\rangle=\ket{c}$, that is, for the value of the parameter $\eta=1$, with thermal phonon numbers $n_1=n_2=n=5$ and thermal photon numbers $N_1=N_2=N=1$. We observe that the degree of entanglement $E_{\mathcal{N}}$ increases and persists for a longer time when the cavity-driving power $P$ decreases; the two movable mirrors are entangled over a wide range of drive laser powers, with the entanglement saturating for $\text{P}<0.05\,\mu\text{W}$. This is due to the coupling of the cavity-field mode to each mirror. 
\begin{figure}[tb] \vspace{0.5cm} \begin{center} \includegraphics[width=0.995\columnwidth]{nega2.pdf} \end{center} \vspace{-0.5cm} \caption{\label{f:nega2} Logarithmic negativity $E_{\mathcal{N}}$ of the micromechanical mirrors for thermal phonon numbers $n_1=n_2=5$ and thermal photon numbers $N_1=N_2=1$ as a function of $t$ at constant ${\rm \Omega_{p}/\gamma}=0.018$ and ${\rm \Omega/\gamma}=5$ with $\text{P}_1=\text{P}_2=\text{P}=0.5\text{nW}$ (solid red line), $\text{P}_1=\text{P}_2=\text{P}=0.05\mu\text{W}$ (dotted blue line), and $\text{P}_1=\text{P}_2=\text{P}=0.5\text{mW}$ (solid green line). Here ${\rm g_{1}}={\rm g_{2}}=2\pi\times4{\rm MHz}$, ${\rm \kappa_{1}}={\rm \kappa_{2}}=2\pi\times215{\rm kHz}$, and it is assumed that all atoms are initially in their excited state $|\psi_{A}(0)\rangle=\ket{c}$, i.e., the parameter $\eta=1$.} \end{figure} We next examine the effect of the thermal noise on the degree of entanglement. The degree of entanglement of the two movable mirrors, as a function of time with the external driving field held constant, is shown in Fig. \ref{f:nega3}. We observe in the figures that the degree of entanglement behaves similarly to the cavity-drive-laser case for a small input power $P$ and a small thermal noise $n$. We also see that the degree of entanglement of the mirrors is reduced with increasing temperature. The critical time above which the logarithmic negativity $E_{\mathcal{N}}$ disappears increases with decreasing thermal phonon numbers. This is reminiscent of entanglement sudden death, in which the entanglement does not decay exponentially but vanishes at a finite critical time \cite{yu,lin}. 
\begin{figure}[tb] \vspace{0.5cm} \begin{center} \includegraphics[width=0.995\columnwidth]{nega3.pdf} \end{center} \vspace{-0.5cm} \caption{\label{f:nega3} Logarithmic negativity $E_{\mathcal{N}}$ of the micromechanical mirrors for thermal photon numbers $N_1=N_2=1$ as a function of $t$ at constant ${\rm \Omega_{p}/\gamma}=0.018$ and ${\rm \Omega/\gamma}=5$ for fixed cavity drive laser at $\text{P}_1=\text{P}_2=\text{P}=0.02\text{nW}$ with $n_1=n_2=n=100$ (solid green curve), $n_1=n_2=n=50$ (dotdashed magenta line), $n_1=n_2=n=10 $ (blue dotted line) and $n_1=n_2=n=5$ (red dashed line). See text and the above figures for other parameters.} \end{figure} Finally, we address the dependence of the entanglement of the two movable mirrors on the environmental temperature, as shown in Fig. \ref{f:nega4}. We see that at zero thermal phonon temperature and fixed cavity drive power $P_1=P_2=P=0.02\text{nW}$, the entanglement decreases with time irrespective of the number of thermal photons and persists for a longer time. Moreover, we see that the critical time above which the entanglement disappears remains the same as the number of thermal photons is varied. \begin{figure}[tb] \vspace{0.5cm} \begin{center} \includegraphics[width=0.995\columnwidth]{nega4.pdf} \end{center} \vspace{-0.5cm} \caption{\label{f:nega4} Logarithmic negativity $E_{\mathcal{N}}$ of the micromechanical mirrors when the temperature of the thermal phonon bath is zero, $T=0\,{\rm K}$ ($n_1=n_2=n=0$), as a function of $t$ at constant ${\rm \Omega_{p}/\gamma}=0.018$ and ${\rm \Omega/\gamma}=5$ for fixed cavity drive laser at $\text{P}_1=\text{P}_2=\text{P}=0.02\text{nW}$ with $N_1=N_2=N=100$ (solid green line), $N_1=N_2=N=50$ (dotted red line), and $N_1=N_2=N=5$ (dotdashed black line) for the value of the parameter $\eta=-1$. 
See text and the above figures for other parameters.} \end{figure} \section{Conclusion}\label{sec:7} We have presented a study of the optical bistability and entanglement between two mechanical oscillators coupled to the cavity modes of a two-mode laser via optical radiation pressure with realistic parameters. In stark contrast to the usual S-shaped bistability observed in single-mode dispersive optomechanical coupling, we have found that the optical intensities of the two cavity modes exhibit bistabilities for large values of the detuning, due to the parametric-amplification-type coupling induced by the two-photon coherence. We have also investigated the entanglement of the movable mirrors by exploiting the intermode correlation induced by the two-photon coherence. We have here focused on the dynamics of the quantum fluctuations of the mirrors. We have shown that strong mirror-mirror entanglement can be created in the adiabatic regime. The degree of entanglement $E_{\mathcal{N}}$ is significant for low thermal noise $n$ and low cavity-driving laser power $P$. The entanglement is supported by direct numerical calculations for realistic parameters. Our results suggest that for experimentally accessible parameters \cite{Gr,Ar}, macroscopic entanglement of two movable mirrors can be achieved with current technology, with important implications for quantum logic gates based on EIT schemes \cite{Feizpour}. \section*{Acknowledgments} B. T. gratefully acknowledges numerous discussions with Eyob A. Sete. B. T. 
is supported by the Shanghai Research Challenge Fund; New York University Global Seed Grants for Collaborative Research; National Natural Science Foundation of China (61571301); the Thousand Talents Program for Distinguished Young Scholars (D1210036A); and the NSFC Research Fund for International Young Scientists (11650110425); NYU-ECNU Institute of Physics at NYU Shanghai; the Science and Technology Commission of Shanghai Municipality (17ZR1443600); and the China Science and Technology Exchange Center (NGA-16-001) and by Khalifa University Internal Research Fund (8431000004).
\section{Introduction} Van Assche (1987) introduced the notion of a random variable $Z$ uniformly distributed between two independent random variables $X_1$ and $X_2$, which arose in studying the distribution of products of random $2\times2$ matrices for stochastic search of global maxima. By letting $X_1$ and $X_2$ have identical distributions, he derived that: (i) for $X_1$ and $X_2$ on $[-1,1]$, $Z$ is uniform on $[-1,1]$ if and only if $X_1$ and $X_2$ have an Arcsin distribution; and (ii) $Z$ possesses the same distribution as $X_1$ and $X_2$ if and only if $X_1$ and $X_2$ are degenerate or have a Cauchy distribution. Soltani and Homei (2009), following Johnson and Kotz (1990), extended Van Assche's results. They took $X_1,\cdots,X_n$ to be independent and considered $$ S_n = R_{1}X_1 + R_2X_2 + \cdots +R_{n-1}X_{n-1} + R_nX_n,\;\;\;\; n\geq 2\;, $$ where the random proportions are $R_{i}=U_{(i)}-U_{(i-1)},\; i=1,...,n-1$ and $R_n= 1-\sum_{i=1}^{n-1} R_i$, $U_{(1)},...,U_{(n-1)}$ are order statistics from a uniform distribution on $[0,1]$, and $U_{(0)}=0$. These random proportions are uniformly distributed over the unit simplex. They employed the Stieltjes transform and derived that: (i) $S_{n}$ possesses the same distribution as $X_{1}$,...,$X_{n}$ if and only if $X_{1}$,...,$X_{n}$ are degenerate or have a Cauchy distribution; and (ii) Van Assche's (1987) result for the Arcsin distribution holds for $Z$ only. In this paper, we introduce two families of distributions, suggested by an anonymous referee of the article, to whom the author expresses his deepest gratitude. We say that $Z_1$ is a random variable between two independent random variables with power distribution if the conditional distribution of $Z_1$ given $X_1=x_1, X_2=x_2$ is $$ F_{Z_{1}|x_1,x_2}(z)=\left\{ \begin{array}{cc} 1 & z\geq {\rm max}(x_1,x_2), \\ (\frac{z-x_1}{x_2-x_1})^n & x_1<z<x_2, \\ 1-(\frac{z-x_1}{x_2-x_1})^n & x_2<z<x_1, \\ 0 & z\leq {\rm {min}}(x_1,x_2). 
\end{array} \right.\eqno(1.1) $$ The distribution $F_{Z_1|x_1,x_2}$ will be said to follow a conditionally directed power distribution when $n$ is an integer. For $n=1$, the distribution given by (1.1) simplifies to the distribution of the $Z$ that was introduced by Van Assche (1987). For $n=2$, we call $Z_1$ a directed triangular random variable. To generalize Van Assche's results further, we introduce a seemingly more natural conditional power distribution. We call $Z_2$ a two-sided power (TSP) random variable if the conditional distribution of $Z_2$ given $X_1=x_1, X_2=x_2$ is $$ F_{Z_{2}|x_1,x_2}(z)=\left\{ \begin{array}{cc} 1 & z\geq y_2, \\ (\frac{z-y_1}{y_2-y_1})^n & y_1<z<y_2, \\ 0 & z\leq y_1. \end{array} \right.\eqno(1.2) $$ The distribution $F_{Z_2|x_1,x_2}$ will be said to follow a conditionally undirected power distribution, when $y_{1}={\rm min}(x_1,x_2), y_{2}={\rm max}(x_1,x_2)$ and $n$ is an integer. For $n=2$, we call $Z_2$ an undirected triangular random variable.\\ Again, for $n=1$, the distribution given by (1.2) simplifies to the distribution of the $Z$ that was introduced by Van Assche (1987). The main aim of this article is to provide generalizations of the results of Van Assche (1987) to values of $n$ other than $n=1$. This article is organized as follows. We introduce preliminaries and previous works in section 2. In section 3, we give some characterizations for the distribution of $Z_1$ given in (1.1), when $n=2$. In section 4, we find the distribution of $Z_2$ given in (1.2) by a direct method, and give some examples of such distributions. \section{Preliminaries and previous works} In this section, we first review some results of Van Assche (1987) and then modify them a little bit to fit in our framework, to be introduced in the forthcoming sections. 
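To make the two definitions concrete, the conditional CDFs (1.1) and (1.2) can be evaluated directly; the small self-contained check below (with hypothetical helper names, not part of the paper) verifies that they coincide for $n=1$ and differ for $n=2$ when $x_1>x_2$:

```python
def cdf_directed(z, x1, x2, n):
    """Conditional CDF (1.1) of Z1 given X1 = x1, X2 = x2 (x1 != x2)."""
    lo, hi = min(x1, x2), max(x1, x2)
    if z <= lo:
        return 0.0
    if z >= hi:
        return 1.0
    if x1 < x2:
        return ((z - x1) / (x2 - x1)) ** n
    return 1.0 - ((z - x1) / (x2 - x1)) ** n

def cdf_undirected(z, x1, x2, n):
    """Conditional CDF (1.2) of Z2 given X1 = x1, X2 = x2 (x1 != x2)."""
    y1, y2 = min(x1, x2), max(x1, x2)
    if z <= y1:
        return 0.0
    if z >= y2:
        return 1.0
    return ((z - y1) / (y2 - y1)) ** n

# For n = 1 both reduce to Van Assche's uniform Z, whatever the order of x1, x2:
assert cdf_directed(0.5, 1.0, 0.0, 1) == cdf_undirected(0.5, 1.0, 0.0, 1) == 0.5
# For n = 2 they differ when x1 > x2: the directed version "points" from x1 to x2.
assert cdf_directed(0.5, 1.0, 0.0, 2) == 0.75    # 1 - ((0.5-1)/(0-1))^2
assert cdf_undirected(0.5, 1.0, 0.0, 2) == 0.25  # ((0.5-0)/(1-0))^2
```

The asymmetry of (1.1) in $(x_1,x_2)$, visible in the last two checks, is exactly what distinguishes the directed from the undirected construction.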
Using the Heaviside function ($U(x)=0$ for $x<0$ and $U(x)=1$ for $x\geq0$) we conclude that for any given distinct values $x_{1}$ and $x_{2}$, the conditional distribution $F_{Z_1|x_1,x_2}(z)$ in (1.1) is $$F_{Z_1|x_1,x_2}(z)=(\frac{z-x_1}{x_2-x_1})^{n}U(z-x_1)-\sum_{i=1}^{n}{n\choose i}(\frac{z-x_2}{x_2-x_1})^iU(z-x_2). \eqno(2.1)$$ \textbf{Lemma 2.1.} For distinct reals $x_{1},x_{2},z$ and integer $n$, we have $$\frac{-1}{(z-x_1)(x_2-x_1)^n}+\frac{(-1)^n}{(n-1)!}\frac{d^{n-1}}{dx_{2}^{n-1}}\Big(\frac{1}{z-x_2}\cdot\frac{1}{x_1-x_2}\Big)=\frac{1}{(x_1-z)(x_2-z)^n}.$$\\ \textbf{Proof.} It easily follows from the Leibniz formula. \hfill$\Box$ \noindent Another tool for proving our main theorem is the following formula taken from the Schwartz distribution theory, namely, $$\int_{-\infty}^{\infty}\varphi(x)\Lambda^{[n]}(dx)=\frac{(-1)^{n}}{n!} \int_{-\infty}^{\infty}\frac{d^{n}}{dx^{n}}\varphi(x)\Lambda(dx), \eqno(2.2)$$ where $\Lambda$ is a distribution function and $\Lambda^{[n]}$ is the $n$-th distributional derivative of $\Lambda$.\\ The conditional distribution $F_{Z_1|x_{1},x_{2}}(z)$ given by (1.1) leads us to a linear functional on complex-valued functions $f:\mathbb{R}\rightarrow \mathbb{C}$: $$F_{Z_{1}|x_{1},x_{2}}(f)=\frac{f(x_1)}{(x_2-x_1)^n}-\sum_{i=1}^{n}\frac{1}{(n-i)!(x_2-x_1)^{i}}\frac{d^{n-i}}{dz^{n-i}}f(x_2).$$ It easily follows that $$F_{Z_{1}|x_{1},x_{2}}(af+bg)=aF_{Z_{1}|x_{1},x_{2}}(f)+bF_{Z_{1}|x_{1},x_{2}}(g), \eqno(2.3) $$ for any complex-valued functions $f,g$ and complex constants $a,b$. 
We note that $F_{Z_{1}|x_{1},x_{2}}(z)=F_{Z_{1}|x_{1},x_{2}}(f_z)$, whenever $f_{z}(x)=(z-x)^{n}U(z-x)$ and\\ $$F_{Z_{1}|x_{1},x_{2}}(f_z)=\frac{f_{z}(x_1)}{(x_2-x_1)^n}-\sum_{i=1}^{n}\frac{1}{(n-i)!(x_2-x_1)^{i}}\frac{d^{n-i}}{dz^{n-i}}f_{z}(x_2).$$ Also we note that $U(z-x)= \frac{(-1)^{n}}{n!} \frac{d^{n}}{dx^{n}} f_{z}(x).$ Thus $$P(Z_{1}\leq z)=\int_{\mathbb{R}}U(z-x)dF_{Z_{1}}(x)= \int_{\mathbb{R}^{2}}F_{Z_{1}|x_{1},x_{2}}(z)\prod_{i=1}^{2} F_{X_{i}}(dx_{i}),$$ can be viewed as: $$\int_{\mathbb{R}}\frac{(-1)^{n}}{n!} \frac{d^{n}}{dx^{n}}f_{z}(x)dF_{Z_1}(x)=\int_{\mathbb{R}^{2}}F_{Z_{1}|x_{1},x_{2}}(f_z) \prod_{i=1}^{2}F_{X_{i}}(dx_{i}).\eqno(2.4) $$ Therefore by using (2.3) along with (2.4) and a standard argument in the integration theory, we obtain that $$\int_{\mathbb{R}}\frac{(-1)^{n}}{n!} \frac{d^{n}}{dx^{n}}f(x)dF_{Z_1}(x)=\int_{\mathbb{R}^{2}}F_{Z_{1}|x_{1},x_{2}}(f) \prod_{i=1}^{2}F_{X_{i}}(dx_{i}), \eqno(2.5)$$ for any infinitely differentiable function $f$ for which the corresponding integrals are finite. Now (2.5) together with (2.2) lead us to $$\int_{\mathbb{R}}f(x)dF_{Z_1}^{(n)}(x)=\int_{\mathbb{R}^{2}}F_{Z_{1}|x_{1},x_{2}}(f) \prod_{i=1}^{2}F_{X_{i}}(dx_{i}), \eqno(2.6)$$ for the above mentioned functions $f$, where $F_{Z_1}^{(n)}$ is the $n$-th distributional derivative of the distribution of $Z_1$.\\ Let us denote the Stieltjes transform of a distribution $H$ by $${\cal S}(H,z)=\int_{\mathbb{R}}\frac{1}{z-x}H(dx),$$ for every $z$ in the set of complex numbers $\mathbb{C}$ which does not belong to the support of $H$, i.e., $z \in \mathbb{C}\cap (\mbox{supp}H)^{\cal C}$. For more on the Stieltjes transform, see Zayed (1996). The following lemma indicates how the Stieltjes transforms of $Z_1$, $X_1$, and $X_2$ are related. \textbf{Lemma 2.2. } Let $Z_1$ be a random variable that satisfies (1.1). 
Suppose that the random variables $X_1$ and $X_2$ are independent and continuous with distribution functions $F_{X_1}$ and $F_{X_2}$, respectively. Then $$\frac{1}{n}{\cal S}^{(n)}(F_{Z_1},z)=-{\cal S}(F_{X_1},z){\cal S}^{(n-1)}(F_{X_2},z), \;\;\;z\in \mathbb{C}\bigcap_{i=1}^2 (\mbox{supp}F_{X_i})^{\cal C}.$$ \textbf{Proof.} It follows from (2.6) that $${\cal S}(F_{Z_{1}}^{(n)},z)=\int_{\mathbb{R}^{2}} F_{Z_{1}|x_{1},x_{2}}(g_z) \prod_{i=1}^{2}F_{X_{i}}(dx_{i}),$$ and $$\frac{1}{n!}\frac{d^{n}}{dz^{n}}{\cal S}(F_{Z_{1}},z)=\int_{\mathbb{R}^{2}}F_{Z_{1}|x_{1},x_{2}}(g_z)\prod_{i=1}^{2} F_{X_{i}}(dx_{i}),$$ for $g_{z}(x)=\frac{1}{z-x}$. Now, it follows that $$F_{Z_{1}|x_{1},x_{2}}(g_z)=\frac{\frac{1}{z-x_1}}{(x_2-x_1)^n}-\sum_{i=1}^{n}\frac{1}{(n-i)!(x_2-x_1)^i}\frac{d^{n-i}}{dz^{n-i}}\frac{1}{z-x_2},$$ and by using Lemma 2.1, we have \begin{eqnarray*} F_{Z_{1}|x_{1},x_{2}}(g_z)&=&\frac{(-1)^n}{(z-x_1)(z-x_2)^n}. \end{eqnarray*} Therefore, $$\frac{1}{n!}\frac{d^{n}}{dz^{n}}{\cal S}(F_{Z_1},z)=\int_{\mathbb{R}^{2}} \frac{(-1)^n}{(z-x_1)(z-x_2)^n} \prod_{i=1}^{2} F_{X_{i}}(dx_{i}), $$ and $$\frac{1}{n}{\cal S}^{(n)}(F_{Z_1},z )=-{\cal S}(F_{X_{1}},z){\cal S}^{(n-1)}(F_{X_{2}},z), \;\;\;z\in \mathbb{C}\bigcap_{i=1}^2 (\mbox{supp}F_{X_i})^{\cal C}.\eqno(2.7)$$ This finishes the proof. \hfill$\Box$ Note that Van Assche's lemma is the case of $n=1$: $$-{\cal S}^{'}(F_{Z_1},z)={\cal S}(F_{X_{1}},z){\cal S}(F_{X_{2}},z).$$ We also note that the Stieltjes transform of Cauchy distribution, i.e., ${\cal S}(F,z)=\frac{1}{z+c}$, satisfies (2.7). \section{Directed triangular random variable} Let us now review Van Assche's result for directed triangular random variables. \textbf{Theorem 3.1.} If $X_1$ and $X_2$ are independent random variables with a common distribution $F_X$, then the characterizations of $Z_1$ for $n=1$ and $n=2$ are identical. \textbf{Proof.} We note that $X_1$ and $X_2$ have a common distribution function $F_X$. 
By using Lemma 2.2 for $n=2$, we have $$-\frac{1}{2}{\cal S}^{''}(F_{Z_1},z)={\cal S}(F_X,z){\cal S}^{'}(F_X,z),$$ and so $$-{\cal S}^{''}(F_{Z_1},z)=\frac{d}{dz}{\cal S}^{2}(F_X,z),$$ and $$-{\cal S}^{'}(F_{Z_1},z)={\cal S}^{2}(F_X,z). \eqno(3.1)$$ We note that the Stieltjes transform tends to zero when $|z|$ is sufficiently large, so the constant of integration in the differential equation is zero. The equation (3.1) is exactly the equation obtained by Van Assche (1987) when $X_1$ and $X_2$ have a common distribution; so his results hold in our framework as well. \hfill$\Box$ \\ This clever proof is due to the anonymous referee. Now we apply Lemma 2.2 to obtain some characterizations when $X_1$ and $X_2$ are not identically distributed. \textbf{Theorem 3.2.} Let $X_1$ and $X_2$ be independent random variables and $Z_1$ be a directed triangular random variable satisfying $(1.1)$. For $n=2$, we have: \vskip 3mm (a) if $X_1$ has the uniform distribution on $[-1,1]$, then $Z_1$ has the semicircle distribution on $[-1,1]$ if and only if $X_2$ has the Arcsin distribution on $[-1,1]$; (b) if $X_1$ has the uniform distribution on $[-1,1]$, then $Z_1$ has a power semicircle distribution if and only if $X_2$ has the power semicircle distribution with density $$f(z)=\frac{3(1-z^2)}{4}\;,\;\;\;\;-1\leq z\leq 1;$$ (c) if $X_1$ has the Beta$(1,1)$ distribution on $[0,1]$, then $Z_1$ has the Beta$(\frac{3}{2},\frac{3}{2})$ distribution if and only if $X_2$ has the Beta$(\frac{1}{2},\frac{1}{2})$ distribution; (d) if $X_1$ has the uniform distribution on $[0,1]$, then $Z_1$ has the Beta$(2,2)$ distribution if and only if $X_2$ has the Beta$(2,2)$ distribution. 
\textbf{Proof.} (a) For the ``if" part we note that the random variable $X_1$ has the uniform distribution and $X_2$ has the Arcsin distribution on $[-1,1]$; then $${\cal S}(F_{X_1},z)=\frac{1}{2}(\ln|z+1|-\ln|z-1|),$$ and $${\cal S}(F_{X_2},z)=\frac{1}{\sqrt{z^2-1}}.$$ From Lemma 2.2 and substituting the corresponding Stieltjes transforms of the distributions, we get $${ \cal S}^{''}(F_{Z_1},z)=\frac{2}{(z^2-1)^{\frac{3}{2}}}.$$ The solution ${\cal S}(F_{Z_1},z)$ is $${\cal S}(F_{Z_1},z)=2(z-\sqrt{z^2-1}),$$ which is the Stieltjes transform of the semicircle distribution on $[-1,1]$.\\ For the ``only if" part we assume that the random variable $Z_1$ has the semicircle distribution. Then it follows from Lemma 2.2 that $${\cal S}(F_{X_2},z)\; \frac{1}{1-z^2} = \frac{-1}{(z^2-1)^{\frac{3}{2}}}.$$ The proof is completed. (b) By an argument similar to that given in (a) and solving the following differential equations, $$S^{''}(F_{Z},z)=\frac{2}{(z^2-1)}(\frac{3z}{2}+\frac{3}{4}(1-z^2)({\rm ln}|z+1|-{\rm ln}|z-1|)), \ \textrm{(for the ``if" part), and}$$ $$\frac{1}{1-z^2}S(F_{X_2},z)=\frac{3}{4}\frac{2z+(1-z^2)({\rm ln}|z+1|-{\rm ln}|z-1|)}{(1-z^2)}, \ \textrm{(for the ``only if" part)},$$ the proof can be completed. (c) By Lemma 2.2, we have $$-\frac{1}{2}S^{''}(F_{Z},z)=\frac{-1}{z(z-1)}\frac{1}{\sqrt{z(z-1)}}, \ \textrm{(for the ``if" part), and}$$ $$\frac{-1}{z(z-1)}S(F_{X_2},z)=\frac{-1}{z(z-1)\sqrt{z(z-1)}}, \ \textrm{(for the ``only if" part)}.$$ The proof can be completed by solving the above differential equations. (d) By Lemma 2.2, we have $${\cal S}^{''}(F_{Z_1},z)= \frac{-2}{z(z-1)}(6(z^2-z)(\ln|z|-\ln|z-1|)-6z+3), \ \textrm{(for the ``if" part), and}$$ $${\cal S}(F_{X_2},z) = 6(z-z^2)(\ln|z|-\ln|z-1|)+6z-3, \ \textrm{(for the ``only if" part)}.$$ Solving these differential equations completes the proof. 
\hfill$\Box$ \section{TSP random variables} In section 3, we used a powerful method, based on Stieltjes transforms, to obtain the distribution of $Z_1$ given in (1.1). It seems that one cannot use that method to find the distribution of $Z_2$ given in (1.2). So we employ a direct method. Lemma 4.1 below provides a simple way to obtain the distribution of $Z_2$. The work of Soltani and Homei (2009b) leads us to the following lemma. \textbf{Lemma 4.1.} Suppose $W$ has a power distribution with an integer parameter $n\geq 1$, and let $Y_1={\rm{Min}}(X_1,X_2)$, $Y_2={\rm Max}(X_1,X_2),$ where $X_1$ and $X_2$ are independent random variables. Let $$X=Y_1+W(Y_2-Y_1). \eqno(4.1)$$ Then (a) $X$ is a TSP random variable. (b) $X$ can be equivalently defined by $$X=\frac{1}{2}(X_1+X_2)+(W-\frac{1}{2})|X_1-X_2|.$$ \textbf{Proof.} (a)\begin{eqnarray*} F_{X|x_1,x_2}(z)&=& P(Y_1+W(Y_2-Y_1)\leq z|X_1=x_1,X_2=x_2)\\ &=&P(y_1+W(y_2-y_1)\leq z)\\ &=&(\frac{z-y_1}{y_2-y_1})^n. \end{eqnarray*} (b) The proof can be completed by substituting $Y_1={\rm Min}(X_1,X_2)$ and $Y_2={\rm Max}(X_1,X_2)$ into (4.1). \hfill$\Box$ \subsection{Moments of TSP random variables} The following theorem provides equivalent expressions for $\mu_{k}^{'}=EZ_{2}^{k}$. \textbf{Theorem 4.1.1.} Suppose that $Z_2$ is a TSP random variable satisfying (1.2). If $X_1$ and $X_2$ are random variables and $E|X_i|^{k}<\infty$, $i=1,2$, for all integers $k$, then\\ (a) $EZ_{2}^{k}=n\frac{\Gamma(k+1)}{\Gamma(k+n+1)}\sum_{i=0}^{k}\frac{\Gamma(k-i+n)}{\Gamma(k-i+1)}E(Y_{1}^{i}Y_{2}^{k-i})$;\\ (b) $EZ_{2}^{k}=\sum_{i=0}^{k}{k\choose i}(\frac{1}{2})^{k-i}E(W-\frac{1}{2})^{i}E(X_1+X_2)^{k-i}|X_1-X_2|^{i}$;\\ (c) $EZ_{2}^{k}=\sum_{i=0}^{k}{k\choose i}\frac{n}{n+i}E(Y_{1}^{k-i}(Y_2-Y_1)^i)$. 
\textbf{Proof.} (a) By using Lemma 4.1, we obtain that \begin{eqnarray*} EZ_{2}^{k}&=& E(\sum_{i=0}^{k}{k \choose i}(1-W)^{i}Y_{1}^{i}W^{k-i}Y_{2}^{k-i})\\ &=&\sum_{i=0}^{k}{k \choose i}E(W^{k-i}(1-W)^{i})E(Y_{1}^{i}Y_{2}^{k-i})\\ &=&n\frac{\Gamma(k+1)}{\Gamma(k+n+1)}\sum_{i=0}^{k}\frac{\Gamma(k-i+n)}{\Gamma(k-i+1)}EY_{1}^{i}Y_{2}^{k-i}.\\ \end{eqnarray*} (b) This can be easily proved by Lemma 4.1(b). (c) It straightforwardly follows from (4.1). \hfill$\Box$ Let us consider the expectation and variance of $Z_2$. First, we suppose that ${\rm E}Y_1=\mu_1$, ${\rm E}Y_2=\mu_2$, ${\rm Var} Y_1=\sigma_1^{2}$, ${\rm Var} Y_2=\sigma_{2}^{2}$, and ${\rm Cov}(Y_1,Y_2)=\sigma_{12}$. Then $$EZ_2=\frac{\mu_1+n\mu_2}{n+1},$$ and also $$E(Z_2)=EY_{1}+\frac{n}{n+1}(EY_{2}-EY_{1}).$$ If ${\rm E}X_1={\rm E}X_2=0$, then, since $Y_{1}+Y_{2}=X_{1}+X_{2}$, we have $$E(Z_2)=E(Y_1)+\frac{n}{n+1}(-2EY_{1})=\frac{1-n}{1+n}EY_1.\eqno(4.2)$$ It easily follows from (4.2) that the Arcsin result of Van Assche (1987) is true only for $n=1$; one can also see that Theorem 3.2 of section 3 does not hold for the above $Z_2$.\\ As for the variance, we have $$Var Z_2=\frac{n(\mu_{1}-\mu_2)^2+n(n+1)^2 \sigma_{2}^{2}+2(n+1)(\sigma_{1}^{2}+n \sigma_{12})}{(n+1)^2(n+2)}.$$ Having computed the expectation and variance, we now evaluate them for some well-known distributions. 
If $X_1$ and $X_2$ have standard normal distributions, then from Theorem 4.1.1(b) and the fact that $X_1-X_2$ and $X_1+X_2$ are independent, it follows that their first, second and third order moments are equal, respectively, to $$EZ_2=\frac{1}{\sqrt{\pi}}(\frac{n-1}{n+1}),$$ $$EZ_{2}^{2}=\frac{n^2+n+2}{(n+1)(n+2)}, \;\; {\rm and}$$ $$EZ_{2}^{3}=\frac{1}{2\sqrt{\pi}}\frac{5n^3+12n^2+13n-30}{(n+3)(n+2)(n+1)}.$$ Also, in case $X_1$ and $X_2$ have uniform distributions, Theorem 4.1.1(a) implies that $$EZ_{2}^{k}=n\frac{\Gamma(k+1)}{\Gamma(n+k+1)}\sum_{i=0}^{k}\frac{\Gamma(k-i+n)}{\Gamma(k-i+1)}\frac{2}{(k+2)(i+1)},$$ $$EZ_{2}=\frac{2n+1}{3(n+1)}, \;\; {\rm and}$$ $$Var(Z_2)={\frac{1}{18}}\frac{n^3+3n^2+6n+2}{(n+1)^2(n+2)}.$$ Since some distributions do not have any moments, Theorem 4.1.1 is not applicable for investigating Van Assche's results for them; hence we prove the following theorem. \textbf{Theorem 4.1.2.} Suppose that $Z_2$ is a TSP random variable satisfying (4.1). Then\\ (a) $Z_2$ is location invariant;\\ (b) if $X_1$ and $X_2$ have symmetric distributions around $\mu$, then $Z_2$ has a symmetric distribution around $\mu$ only when $n=1$. \textbf{Proof.} (a) This is immediate. (b) We can assume without loss of generality that $\mu=0$. If $Z_2$ has a symmetric distribution around zero, then $$Y_{1}+W(Y_{2}-Y_{1})\stackrel{d}{=}-[Y_{1}+W(Y_2-Y_1)].$$ We note that $$Y_{1}+W(Y_{2}-Y_{1})\stackrel{d}{=}[-Y_{1}+W(-Y_2-(-Y_1))].$$ Since $-{\rm {Min}}(X_1,X_2)={\rm {Max}}(-X_1,-X_2)$, $X_1\stackrel{d}{=}-X_1$ and $X_2\stackrel{d}{=}-X_2$, we have $$Y_{1}+W(Y_{2}-Y_{1})\stackrel{d}{=}Y_{2}+W(Y_1-Y_2).\eqno(4.3)$$ By equating the conditional distributions given $X_1=x_1$ and $X_2=x_2$ in (4.3), we conclude that $n=1$. \hfill$\Box$\\ It also easily follows from Theorem 4.1.2 that the Cauchy result of Van Assche (1987) is true only for $n=1$. 
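The uniform-case formulas above are easy to check by simulation through the representation of Lemma 4.1 (a hedged sketch, not part of the paper; a power-function variable with parameter $n$ is drawn as $U^{1/n}$):

```python
import random

def sample_tsp_uniform(n, rng):
    """One draw of Z2 = Y1 + W (Y2 - Y1) with X1, X2 ~ U[0,1] and
    W a power-function variable with parameter n (W = U^(1/n))."""
    x1, x2 = rng.random(), rng.random()
    y1, y2 = min(x1, x2), max(x1, x2)
    w = rng.random() ** (1.0 / n)
    return y1 + w * (y2 - y1)

rng = random.Random(12345)
n, trials = 2, 100000
draws = [sample_tsp_uniform(n, rng) for _ in range(trials)]
mean = sum(draws) / trials
var = sum((d - mean) ** 2 for d in draws) / trials

# Closed forms from the text: E Z2 = (2n+1)/(3(n+1)) and
# Var Z2 = (n^3 + 3n^2 + 6n + 2) / (18 (n+1)^2 (n+2)).
exact_mean = (2 * n + 1) / (3 * (n + 1))               # = 5/9 for n = 2
exact_var = (n**3 + 3 * n**2 + 6 * n + 2) / (18 * (n + 1)**2 * (n + 2))
# mean and var agree with the closed forms up to Monte Carlo error
```

The same sampler, combined with Theorem 4.1.1, can be used to check the higher moments numerically.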
\subsection{Distributions of TSP random variables} In this subsection, we investigate computing distributions by the direct method. We will give two examples of derivation based on (4.1). This method may be complicated in some cases, but we have chosen some easy to follow examples. \textbf{Example 4.2.1.} Let $X_1,X_2$ and $W$ be independent random variables such that $X_1$ and $X_2$ are uniformly distributed over $[0,1]$, and $W$ has a power function distribution with parameter $n$. We find the density $f_{Z_2}(z;n)$ by means of the conditional density $f_{Z_2|W}(z|w)$: $$ f_{Z_2|W}(z|w)=\left\{ \begin{array}{cc} \frac{2z}{w},\;\;0<z<w,\\ \frac{2(1-z)}{1-w},\;\;w<z<1.\\ \end{array} \right.\eqno(4.4) $$ By using the distribution of $W$, the density function $f_{Z_2}(z;n)$ can be expressed in terms of the Gauss hypergeometric function $F(a,b,c;z)$, which is a well-known special function. Indeed, according to Euler's formula, the Gauss hypergeometric function assumes the integral representation $$F(a,b,c;z)=\frac{\Gamma(c)}{\Gamma(b)\Gamma(c-b)}\int_{0}^{1}t^{b-1}(1-t)^{c-b-1}(1-tz)^{-a}dt,$$ where $a,b,c$ are parameters subject to $-\infty<a<+\infty$, $c>b>0$, whenever they are real, and $z$ is the variable (see Zayed 1996). By using Euler's formula, the density function of $Z_2$ can be expressed as follows: $$f_{Z_2}(z;n)=\frac{2nz}{n-1}(1-z^{n-1})+2(1-z)z^{n}F(1,n,n+1,z),\;0<z<1,\eqno(4.5)$$ where $n>0$ and $n\neq 1$. When $n=1$, similar calculations lead to the following density $$f_{Z_2}(z)=-2(1-z){\rm log}(1-z)-2z {\rm log}(z),\;\;0<z<1.$$ The probability density function $f_{Z_2}(z)$ was introduced by Johnson and Kotz (1990), for the first time, under the title ``uniformly randomly modified tine". So $f_{Z_2}(z;n)$ can be seen as an extension of the above mentioned distribution. We note that, from (4.1) and a simple Monte Carlo procedure using only simulated uniform variables, one can simulate the distribution (4.5). 
\textbf{Example 4.3.1.} Let $X_1$ and $X_2$ be independent random variables with the Beta$(1,2)$ distribution. Then if $W$ has the Beta$(3,1)$ distribution, $Z_2$ has the Beta$(2,3)$ distribution. In the following theorem we compute the Stieltjes transform of $Z_2$ for $n=2$. Let us remark that the complexity of the integral in the theorem indicates that for this case the direct method is preferred. \textbf{Theorem 4.4.1.} Let $Z_2$ be an undirected triangular random variable that satisfies (1.2). Suppose that the random variables $X_1$ and $X_2$ are independent and continuous with the distribution functions $F_{X_1}$ and $F_{X_2}$, respectively. Then $$-\frac{1}{2}{\cal S}^{'''}(F_{Z_2},z)={\cal S}^{'}(F_{X_1},z){\cal S}^{'}(F_{X_2},z)+2{\cal S}(F_{X_1},F_{X_2},z),$$ where $${\cal S}(F_{X_1},F_{X_2},z)=\int_{\mathbb{R}^{2}}\frac{1}{(z-x_1)(z-x_2)(x_2-x_1)^2}\prod_{i=1}^{2} F_{X_i}(dx_i).$$ \textbf{Proof.} By using an argument similar to that given in Section 3, we can conclude that $$\int f(x)dF_{Z_2}^{(2)}(x)=\int_{\mathbb{R}^{2}} F_{Z_{2}|x_{1},x_{2}}(f) \prod_{i=1}^{2}F_{X_{i}}(dx_{i}).$$ So, $$-\frac{1}{2}{\cal S}^{'''}(F_{Z_2},z)=\int_{\mathbb{R}^{2}} F_{Z_{2}|x_{1},x_{2}}(g_z) \prod_{i=1}^{2}F_{X_{i}}(dx_{i}),$$ for $g_{z}(x)=\frac{1}{(z-x)^2}$. From $$F_{Z_2|x_1,x_2}(g_z)=\frac{\frac{1}{(z-x_1)^2}}{(x_2-x_1)^2}+\frac{\frac{1}{(z-x_2)^2}}{(x_1-x_2)^2}$$ and by using partial fractions, we have $$F_{Z_2|x_1,x_2}(g_z)=\frac{1}{(z-x_1)^{2}(z-x_2)^2}+\frac{2}{(x_2-x_1)^2}\frac{1}{(z-x_1)(z-x_2)}.$$ Therefore, $$-\frac{1}{2}{\cal S}^{'''}(F_{Z_2},z)=\int_{\mathbb{R}^{2}}(\frac{1}{(z-x_1)^{2}(z-x_2)^2}+\frac{2}{(x_2-x_1)^{2}(z-x_1)(z-x_2)})\prod_{i=1}^{2} F_{X_{i}}(dx_{i}),$$ and $$-\frac{1}{2}{\cal S}^{'''}(F_{Z_2},z)={\cal S}^{'}(F_{X_1},z){\cal S}^{'}(F_{X_2},z)+2{\cal S}(F_{X_1},F_{X_2},z).$$ This finishes the proof. \hfill$\Box$ It is worth mentioning that the present method yields other extensions too; the following is such an example. 
\textbf{Example 4.3.2.} Suppose that $X_1,X_2,W$ are independent random variables. If $X_1$ and $X_2$ have uniform distributions on $[0,1]$ and $W$ has the Beta$(2,2)$ distribution, then $Z_2$ has the same distribution as $W$. If the product moments of order statistics are known, those of $W$ can be derived from those of $Z_2$ by using Theorem 4.1.1(a). Then the distribution of $W$ is characterized by that of $Z_2$. By an argument similar to the one given in Example 4.2.1, when $W$ has a Beta distribution with parameters $n$ and $m$, we find the density $f_{Z_2}(z;n,m)$ as $$\frac{B(n-1,m)}{B(n,m)}2z(1-I_{z}(n-1,m))+\frac{B(n,m-1)}{B(n,m)}2(1-z)I_{z}(n,m-1),\;\;0<z<1,$$ where $I_{x}(a,b)$ is the incomplete Beta function: $$I_{x}(a,b)=\frac{1}{B(a,b)}\int_{0}^{x}t^{a-1}(1-t)^{b-1}dt,\;\; (a,b>0).$$ \section{Conclusions} We have described how (a) Stieltjes transform methods and (b) direct methods could be used for obtaining the distributions, characterizations, and properties of the random mixtures of variables defined in (1.1) and (1.2). Of course each of the methods (a) and (b) has its own advantages and disadvantages, and neither is uniformly preferable to the other. The TSP random variable with $X_1$ and $X_2$ uniformly distributed led us to a new family of distributions which can be regarded as a generalization of the ``uniformly randomly modified tine". The model underlying the direct method easily leads to generalizations of the distributions, which is not possible with the first method; the first method, on the other hand, allows the characteristics to be computed easily. \section{Acknowledgment} The author is deeply grateful to the anonymous referee for reading the original manuscript very carefully and for making valuable suggestions.
\section{Introduction} In this short note, I wish to describe a family of Schr\"odinger operators on $l^2({\mathbb N})$ whose spectrum is an interval. To set the stage, introduce for a bounded sequence $V: {\mathbb N}\to{\mathbb R}$ and $u \in l^2({\mathbb N})$ the Schr\"odinger operator $H_V$ defined by \begin{align} (H_V u) (n) & = u(n+1) + u(n-1) + V(n) u(n) & n\geq 2\\ \nonumber (H_V u) (1) & = u(2) + V(1) u(1). \end{align} We will denote by $\sigma(V)$ the spectrum of the operator $H_V$. It is well known that if $V(n)$ is a sequence of independent, identically distributed random variables with distribution $\mu$ satisfying $\mathrm{supp}(\mu) = [a,b]$, we have that for almost every $V$ the spectrum $\sigma(V)$ is $[a - 2, b + 2]$. For the Almost--Mathieu Operator with potential \begin{equation} V_{\lambda,\alpha,\theta}(n) = 2 \lambda \cos(2 \pi (n \alpha + \theta)), \end{equation} where $\lambda > 0$, $\alpha \notin \mathbb{Q}$, the set $\sigma(V_{\lambda,\alpha,\theta})$ contains no interval \cite{aj}. Bourgain conjectured in \cite{bbook} that if one considers the potential \begin{equation} V_{\lambda, \alpha, x, y} (n) = \lambda \cos \left(\frac{n (n-1)}{2} \alpha + n x + y \right) \end{equation} with $\lambda > 0$, $\alpha \notin \mathbb{Q}$, the spectrum $\sigma(V_{\lambda, \alpha, x, y})$ is an interval. Denote by $\mathbb{T} = \mathbb{R} / \mathbb{Z}$ the circle. I will prove the following result \begin{theorem}\label{thm:main} For any continuous function $f: \mathbb{T} \to {\mathbb R}$, any $\alpha\neq 0$, $\theta$, and $\rho > 0$ not an integer, introduce the potential \begin{equation}\label{eq:vrho} V(n) = f(\alpha n^\rho + \theta). \end{equation} Then we have that \begin{equation} \sigma(V) = [\min(f) - 2, \max(f) + 2]. \end{equation} \end{theorem} Potentials of the type \eqref{eq:vrho} were already discussed in Bourgain \cite{b}, Griniasty--Fishman \cite{gf}, Last--Simon \cite{ls}, and Stolz \cite{s}. 
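The action of $H_V$ on a finitely supported vector can be written out directly from the definition; the snippet below is an illustrative transcription (0-based lists standing in for the index $n \geq 1$), not part of the mathematical argument:

```python
def apply_schrodinger(V, u):
    """Apply the half-line Schrodinger operator H_V to a finite vector u,
    following (H_V u)(n) = u(n+1) + u(n-1) + V(n) u(n) with the boundary
    condition (H_V u)(1) = u(2) + V(1) u(1).  Lists are 0-based, so index
    i corresponds to site n = i + 1; u is treated as zero beyond its end."""
    N = len(u)
    out = []
    for i in range(N):
        right = u[i + 1] if i + 1 < N else 0.0
        left = u[i - 1] if i - 1 >= 0 else 0.0   # enforces the boundary term
        out.append(right + left + V[i] * u[i])
    return out

# Free Laplacian (V = 0) acting on the first basis vector e_1:
V = [0.0] * 5
e1 = [1.0, 0.0, 0.0, 0.0, 0.0]
print(apply_schrodinger(V, e1))   # [0.0, 1.0, 0.0, 0.0, 0.0]
```

The boundary row has only two terms, matching the second line of the defining display.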
In particular, the case $0 < \rho < 1$ is due to Stolz \cite{s} under an additional regularity assumption on $f$. The proof of this theorem depends essentially on the following lemma on the distribution of $n^\rho$, which is a consequence of a result of Boshernitzan \cite{bosh}. \begin{lemma}\label{lem:main} Let $r \geq 0$ be an integer and $r < \rho < r+1$. Given any $\alpha \neq 0$, $\theta$, $K \geq 1$, $\varepsilon > 0$, $a_0, \dots, a_r$, there exists an integer $n \geq 1$ such that \begin{equation} \sup_{|k| \leq K} \| \alpha (n + k)^\rho + \theta - \sum_{j=0}^r a_j k^j\| \leq \varepsilon, \end{equation} where $\|x\|= \mathrm{dist}(x, {\mathbb Z})$ denotes the distance to the closest integer. \end{lemma} We will prove this lemma in the next section. Note that $\|\cdot\|$ is not a norm, but it obeys the triangle inequality \begin{equation}\label{eq:triangle} \| x + y \| \leq \|x \| + \|y\| \end{equation} for any $x,y \in {\mathbb R}$. In particular, for any integer $N$, we have that $\| N x \| \leq |N| \|x\|$. \begin{proof}[Proof of Theorem~\ref{thm:main}] By Lemma~\ref{lem:main}, we can find for any $x \in [0,1)$ a sequence $n_l$ such that $$ \sup_{|k| \leq l} \| \alpha (n_l + k)^\rho + \theta - x\| \leq 1/l. $$ Hence, the sequence of translated potentials $V_l(n) = V(n + n_l)$ converges pointwise to the constant potential $f(x)$. The claim now follows from a Weyl--sequence argument. \end{proof} Remarkably, combined with the Last--Simon semicontinuity of the absolutely continuous spectrum \cite{ls}, one also obtains the following result. \begin{theorem}\label{thm:ac} For $r \geq 0$ an integer, $r < \rho < r + 1$, and $f$ a continuous function on $\mathbb{T}$, introduce the set $\mathcal{B}_r(f)$ as \begin{equation} \mathcal{B}_r (f) = \bigcap_{a_0, \dots, a_r} \sigma_{\mathrm{ac}} (f( \sum_{j = 0}^{r} a_j n^j)). \end{equation} Then for $\alpha\neq 0$ and any $\theta$ \begin{equation}\label{eq:ac} \sigma_{\mathrm{ac}}( f(\alpha n^\rho + \theta)) \subseteq \mathcal{B}_r(f). 
\end{equation} Here $\sigma_{\mathrm{ac}}(V)$ denotes the absolutely continuous spectrum of $H_V$. \end{theorem} We note that for $r = 0$ the potentials in the intersection are the constants $f(a_0)$, for which $\sigma_{\mathrm{ac}}(f(a_0)) = [f(a_0) - 2, f(a_0) + 2]$, so that \begin{equation} \mathcal{B}_0 (f) = [-2 + \max(f), 2 + \min(f)]. \end{equation} Under additional regularity assumptions on $f$ and for $r=0$, Stolz has shown in \cite{s} that we have equality in \eqref{eq:ac}. Furthermore, note that \begin{equation} \mathcal{B}_{r+1}(f) \subseteq \mathcal{B}_r(f). \end{equation} In \cite{ls}, Last and Simon have stated the following conjecture \begin{equation} \mathcal{B}_{1} (2 \lambda \cos(2 \pi .)) = \emptyset \end{equation} for $\lambda > 0$. They phrased this in poetic terms as ``\textit{Does Hofstadter's Butterfly have wings?}''. The best positive result in this direction, as far as I know, is by Bourgain \cite{b}, showing \begin{equation} |\mathcal{B}_1 (2 \lambda \cos(2 \pi .))| \to 0, \quad \lambda \to 0. \end{equation} It is an interesting question whether Theorem~\ref{thm:main} holds for $\rho \geq 2$ an integer. In the particular case of $f(x) = 2 \lambda \cos(2 \pi x)$, $\rho = 2$, this would follow from Bourgain's conjecture. However, there is also the following negative evidence. Consider the skew-shift $T: \mathbb{T}^2 \to \mathbb{T}^2$ given by \begin{align} T(x,y) &= (x + \alpha, x + y), \\ \nonumber T^n(x,y) &= (x + n \alpha, \frac{n(n-1)}{2} \alpha + n x + y), \end{align} where $\alpha \notin \mathbb{Q}$. Then Avila, Bochi, and Damanik \cite{abd1} have shown that for generic continuous $f:\mathbb{T}^2 \to {\mathbb R}$, the spectrum $\sigma(f(T^n(x,y)))$ contains no interval. So, it is not clear what to expect in this case. As a final remark, let me comment on a slight extension. If one replaces $V$ by the following family of potentials $$ V(n) = f(\alpha n^\rho + \sum_{k=1}^{K} \alpha_k n^{\beta_k}), $$ where $\beta_k < \rho$ and the $\alpha_k$ are any numbers, then Theorems~\ref{thm:main} and \ref{thm:ac} remain valid. 
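To make the statement of Theorem~\ref{thm:main} concrete, here is the cosine special case (an added illustration; it follows immediately from the theorem):

```latex
% Added illustration: the cosine special case of Theorem 1.
% For f(x) = 2*lambda*cos(2*pi*x) one has min(f) = -2*lambda and
% max(f) = 2*lambda, so for every alpha != 0, every theta, and every
% non-integer rho > 0,
\begin{equation*}
  \sigma\bigl( 2 \lambda \cos( 2 \pi ( \alpha n^{\rho} + \theta ) ) \bigr)
  = [\, -2\lambda - 2 ,\; 2\lambda + 2 \,].
\end{equation*}
% This is the same interval as for an i.i.d. random potential taking
% values in [-2*lambda, 2*lambda]; by contrast, for rho = 1 (the
% Almost--Mathieu case) the spectrum contains no interval when alpha
% is irrational.
```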
\section{Proof of Lemma~\ref{lem:main}} In the following, let $r$ be an integer such that $r < \rho < r+1$. By Taylor expansion, we have that \begin{equation}\label{eq:taylor} \alpha (n + k)^\rho = \sum_{j = 0}^{r} x_j(n) k^j + \alpha \frac{\rho \dots (\rho -r)}{(r+1)!} (n + \xi)^{\rho - r - 1} k^{r + 1} \end{equation} for some $|\xi| \leq |k|$ and \begin{equation} x_j(n) = \alpha \frac{\rho \dots (\rho - j + 1)}{j!} n^{\rho - j}. \end{equation} We first note the following lemma. \begin{lemma}\label{lem:taylor} For any $K \geq 1$ and $\varepsilon > 0$, there exists an $N_0(K, \varepsilon)$ such that \begin{equation} |\alpha (n + k)^\rho - \sum_{j = 0}^{r} x_j(n) k^j| \leq \varepsilon \end{equation} for $|k| \leq K$ and $n \geq N_0(K,\varepsilon)$. \end{lemma} \begin{proof} This follows from \eqref{eq:taylor} and the fact that $\rho - r - 1 < 0$. \end{proof} A sequence $x(n)$ is called uniformly distributed in $\mathbb{T}^{r+1}$ if for any $0 \leq a_j < b_j \leq 1$, $j = 0, \dots, r$ we have that \begin{equation} \lim_{n \to \infty} \frac{1}{n} \# \{1 \leq k \leq n:\, x_j(k) \in (a_j, b_j),\quad j=0,\dots,r\} = \prod_{j=0}^{r} (b_j - a_j). \end{equation} If $x(n)$ is a sequence in ${\mathbb R}^{r+1}$, we can view it as a sequence in $\mathbb{T}^{r+1}$ by considering $x(n) \pmod 1$, and call it uniformly distributed if $x(n) \pmod 1$ is. We need the following consequence of Theorem~1.8 in \cite{bosh}. \begin{theorem}[Boshernitzan] Let $(f_1, \dots, f_s)$ be functions ${\mathbb R} \to {\mathbb R}$ of subpolynomial growth, that is, there is an integer $N$ such that \begin{equation} \lim_{x \to \infty} f_j(x) x^{-N} = 0,\quad 1 \leq j \leq s. \end{equation} Then the following two conditions are equivalent: \begin{enumerate} \item The sequence $\{f_1(n), \dots, f_s(n)\}_{n\geq 1}$ is uniformly distributed in $\mathbb{T}^s$. 
\item For any $(m_1, \dots, m_s) \in \mathbb{Z}^s \backslash \{0\}$, and for every polynomial $p(x)$ with rational coefficients, we have that \begin{equation} \lim_{x \to \infty} \frac{ \sum_{j=1}^{s} m_j f_j(x) - p(x)}{\log(x)} = \pm \infty. \end{equation} \end{enumerate} \end{theorem} We will use the following consequence of this theorem \begin{lemma}\label{lem:ud} The sequence \begin{equation} x(n) = \begin{pmatrix} x_0(n) & \dots & x_r(n) \end{pmatrix} \end{equation} is uniformly distributed in $\mathbb{T}^{r+1}$. \end{lemma} \begin{proof} This follows from the fact that for any polynomial $p(n)$ and integer vector $(m_0, \dots, m_r)$ we have that $$ |\sum_{j=0}^r m_j x_j(n) - p(n)| $$ grows at least like $n^{\rho - r}$, which grows faster than $\log(n)$. \end{proof} Now we come to \begin{proof}[Proof of Lemma~\ref{lem:main}] By Lemma~\ref{lem:taylor}, there exists an $N_0$ such that $$ \|\alpha (n + k)^\rho - \sum_{j = 0}^{r} x_j(n) k^j\| \leq \frac{\varepsilon}{2} $$ for any $n \geq N_0$ and $|k| \leq K$. By Lemma~\ref{lem:ud}, we can now find $n \geq N_0$ such that $$ \| x_l(n) - \tilde{a}_l \| \leq \frac{\varepsilon}{2 (r+1) K^l},\quad l=0,\dots,r, $$ where $\tilde{a}_0 = a_0 - \theta$ and $\tilde{a}_l = a_l$, $l\geq 1$. For $|k| \leq K$, we now have that using \eqref{eq:triangle} \begin{align} \nonumber \|\alpha (n + k)^\rho & + \theta - \sum_{j = 0}^{r} a_j k^j\| \\ \nonumber & \leq \|\alpha (n + k)^\rho - \sum_{j = 0}^{r} x_j(n) k^j\| + \|\sum_{j = 0}^{r} x_j(n) k^j - \sum_{j = 0}^{r} a_j k^j + \theta\| \\ \label{eq:defal} & \leq \frac{\varepsilon}{2} + \sum_{j = 0}^{r} \| (x_j(n) - \tilde{a}_j) k^j \| \\ \label{eq:integer} &\leq \frac{\varepsilon}{2} + \sum_{j = 0}^{r} |k^j| \| x_j(n) - \tilde{a}_j \| \\ \nonumber & \leq \frac{\varepsilon}{2} + \sum_{j = 0}^{r} K^j \frac{\varepsilon}{2 (r+1) K^j} = \varepsilon, \end{align} where we used the definition of $\tilde{a}_l$ in \eqref{eq:defal}, and that $k$ is an integer in \eqref{eq:integer}. 
This finishes the proof. \end{proof} \section*{Acknowledgments} I am indebted to J. Chaika, D. Damanik, and A. Metelkina for useful discussions, and the referees for many useful suggestions on how to improve the presentation.
\section*{Introduction} The crucial difference of ``exterior algebra" in the super case from the usual case is that the analog of the ``top exterior power" for a ${\mathbb Z_{2}}$-graded vector space cannot be obtained by tensor operations. This is because the determinant in the super case (the Berezinian) is not a polynomial expression, but a fraction whose numerator and denominator separately are not multiplicative. Thus the space $\Ber V$ (which corresponds to the usual $\det V$) enters independently of the ``naive" generalization of exterior multiplication by the sign rule. A complete theory of ``exterior forms" has to be built upon the Berezinian from the beginning. This fact has far reaching consequences. ``Naive" differential forms on a supermanifold $M^{n|m}$ are, of course, (locally) polynomials in $dx^A$, where $x^A$ are coordinates. Experts know that there are two possible conventions for the parity and commutation relations for the differentials (see~\cite{manin}). According to one of them, $dx^A$ is assigned the same parity as $x^A$ and the differentials anticommute: the flip of $dx^A$ and $dx^B$ results in the factor $-(-1)^{\tilde A \tilde B}$. The other convention assigns to $dx^A$ the parity opposite to that of $x^A$ and the differentials are regarded as commuting variables. We shall refer to them in the sequel as to skew-commutative and commutative conventions, respectively. From the viewpoint of integration, the fatal drawback of such naive forms is that they can't be integrated over $M=M^{n|m}$ (unless $m=0$). Because of that, some remedies were suggested. Bernstein and Leites~\cite{berl:int} defined ``integral forms" as tensor products of multivector fields with Berezin volume forms. This permitted integration over $M^{n|m}$ and an analog of Gauss-Ostrogradsky formula. 
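For the reader's convenience, we recall the standard formula behind this remark (a well-known fact, added here only as background): for an invertible even supermatrix written in block form, with $A$ the even-even and $D$ the odd-odd block,

```latex
\begin{equation*}
  \Ber\begin{pmatrix} A & B \\ C & D \end{pmatrix}
  = \frac{\det\bigl(A - B D^{-1} C\bigr)}{\det D}\,.
\end{equation*}
```

The right-hand side is a genuine fraction in the matrix entries, whose numerator and denominator are separately not multiplicative, although $\Ber$ itself is: $\Ber(gh)=\Ber(g)\Ber(h)$. This is precisely why $\Ber V$ cannot be obtained from $V$ by tensor operations.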
If we are integration-minded, we expect that the correct forms on supermanifolds should be graded by super dimensions $r|s$ (dimensions of surfaces or chains over which a form can be integrated). Thus, integral forms should correspond to ``$r|m$-forms" ($s=m$) and volume forms to ``$n|m$-forms". Naive differential forms from this point of view correspond to ``$r|0$-forms" ($s=0$). What about other values of $r$, $s$? For {\it non-polynomial} functions of $dx^A$ (with the commutative convention) Bernstein and Leites~\cite{berl:pdf} showed that they can also be integrated over $M^{n|m}$ provided they decrease sufficiently rapidly in $d\xi^{\mu}$, where $\xi^{\mu}$ are odd coordinates. Such ``pseudodifferential forms" are very beautiful. However, since they do not have any grading (and, in fact, are good for integration only for a particular type of orientation and not good for others, see~\cite{tv:git}), they do not solve the problem. A crucial step towards the theory of ``$r|s$-forms" was made by A.S.~Schwarz, M.A.~Baranov, A.V.~Gajduk, O.M.~Khudaverdian and A.A.~Rosly in the early 1980s and was motivated by quantum field theory. They based their investigation of the ``objects of integration" on supermanifolds directly on the notion of Berezinian and studied Lagrangians of parameterized surfaces $\Gamma: I^{r|s}\to M^{n|m}$ which induce volume forms on the $r|s$-dimensional space $U^{r|s}$. They are called {\it densities}. The key result was the concept of ``closedness" of a density~\cite{ass:bar,ghs,hov:viniti}: a density is said to be {\it closed} if the corresponding action is identically stationary. (On ordinary manifolds, for densities corresponding to closed forms this follows from Stokes' formula.) As the author discovered, this notion of ``closedness" precisely follows from a certain construction of a differential in terms of variational derivatives. 
Densities, initially defined only for embedded surfaces (hence $0\leq r\leq n$, $0\leq s\leq m$), should be replaced by more general ``covariant Lagrangians", for which $r\geq 0$ can exceed $n$, and a certain system of differential equations with respect to the components of tangent vectors is imposed upon Lagrangians. Roughly speaking, this system (see Eq.~(\ref{eqs}) below) is a nontrivial analog of the multilinearity/skew-symmetry property of the usual exterior forms. (The odd-odd part of the system amazingly coincides with the equations introduced by F.~John~\cite{john} and Gelfand-Shapiro-Gindikin-Graev (see~\cite{gel:ggg}) for the description of the image of Radon-like transforms in integral geometry.) The theory of $r|s$-forms in this sense was developed by the author jointly with A.V.~Zori{\'c}~\cite{tv:compl,tv:pdf,tv:bord, tv:coh} and by the author~\cite{tv:git}. The differential has degree $+1$, so $r|s$-forms are mapped to $r+1|s$-forms. The complex obtained in this way possesses all natural properties of the usual Cartan-de~Rham complex like functoriality in a suitable category, Stokes' formula and homotopy invariance, and also has some similarity with extraordinary cohomology (an analog of the Atiyah-Hirzebruch spectral sequence), see~\cite{tv:git}. For $s=0$, it naturally incorporates the ``naive" generalization of differential forms. For $s=m$ and $r\geq 0$ it also incorporates the integral forms of Bernstein and Leites. However, an {\it ad hoc} augmentation of the complex had to be introduced~\cite{tv:git} to achieve homotopy invariance. The existence of Bernstein-Leites integral forms of negative degree has also hinted at ``hidden" $r|s$-forms with $r<0$. Such objects were indeed discovered in~\cite{tv:dual}. Together with forms considered in~\cite{tv:git} they give the desired de~Rham complex stretching both in positive and negative directions. 
The solution is based on the idea of a {\it dual form}~\cite{tv:dual} (important results were independently obtained in~\cite{hov:bv}). Geometrically, dual forms are Lagrangians of surfaces specified by maps $M^{n|m}\supset U^{n|m}\to\mathbb R^{p|q}$ (copaths) rather than maps $I^{r|s}\to M^{n|m}$ (paths). To define a complex, dual forms are not enough. One has to introduce new independent parameters and to allow their number to increase. An intermediate product is labeled ``mixed form". A whole bunch of isomorphisms enters the stage, and the final picture is the result of a stabilization (see~\cite{tv:dual} and subsection~\ref{algebra:st} below). (Geometrically, one gets a sort of ``virtual surfaces", which can have both negative and positive dimension.) \bigskip In the current paper we develop the algebraic and differential theory of {\it stable forms} (the unified complex). We do not touch integration. The main result of the paper is an analog of Cartan calculus that includes module structures for forms and the relation between the differential, the Lie derivative, and a ``contraction operator" with a vector field (which is defined in this paper). All results are new. They will be used to study the homotopy properties of stable forms and the de~Rham cohomology of supermanifolds. The paper is organized as follows. In Section~\ref{algebra} we define dual and mixed forms on a superspace $V$, the stability isomorphisms and isomorphisms with forms considered in~\cite{tv:git}. Operators $e(\a)$ and $e(u)$ are introduced, where $u\in V$, $\a\in V^*$. We prove that they are stable (commute with the stability isomorphisms) and relate them to the operators on forms of~\cite{tv:git} (Theorem~\ref{stability}). Then we find the relations that they obey. We get a ``skew-commutative" version of a Clifford algebra involving a stability operator $\sigma$ as an additional central element (Theorem~\ref{commut}). 
As a corollary, we obtain module structures over the exterior algebras $\E (V^*)$ and $\E (V)$ (the skew-commutative versions). In Section~\ref{analiz} we consider the complex of stable forms on a supermanifold $M$. We prove the Leibniz identity (=differential module structure) for the multiplication by naive differential forms $\o\in\O^{{\boldsymbol \cdot}}(M)$ (Theorem~\ref{leibniz}). Then we consider the Lie derivative for mixed forms. We prove that the anticommutator of the differential and the operator $e(X)$, where $X$ is a vector field, equals the Lie derivative multiplied by the operator $\sigma$ (Theorem~\ref{cart}). It immediately implies a ``Cartan's homotopy identity" for stable forms. The results are discussed in Section~\ref{discuss}. We mainly follow the notation and terminology of the book~\cite{tv:git}. \smallskip {\bf Acknowledgements:} Questions related to the topic of this paper were discussed at various times with J.N.~Bernstein, O.M.~Khudaverdian and A.~Belopolsky. I am very much grateful to them. \section{Algebraic theory}\label{algebra} \subsection{Construction of forms. Stability isomorphisms} \label{algebra:st} Consider a superspace $V$ over $\mathbb R$ of dimension $\dim V=n|m$. We identify vector superspaces with the corresponding supermanifolds. By $\Vol V:=\Ber V^*$ we denote the space of volume forms on $V$. In the following we consider functions whose arguments are vectors or covectors. Components of vectors are written as rows, components of covectors as columns. Recall the following definition. 
\begin{de}[{\normalfont see~\cite{tv:compl,tv:coh,tv:git}}]\label{straight} A {\it form} on $V$ of degree $r|s$ is a smooth map $L:\underbrace{V\times\dots\times V}_{r}\times \underbrace{\P V\times\dots \times\P V}_{s} \to \mathbb R$ satisfying the following conditions~\eqref{bers} and~\eqref{eqs}: \begin{align} &L(gv)= L(v) \Ber g,\label{bers}\\ \intertext{for all $g\in \GL(r|s)$ and} &\dder{L}{{v_F{}^A}}{{v_G{}^B}} +(-1)^{{\tilde F}{\tilde G}+({\tilde F}+{\tilde G}){\tilde B}}\dder{L}{{v_G{}^A}}{{v_F{}^B}}=0\label{eqs}. \end{align} In our notation the argument of the function $L$ is written as a matrix $v=({v_F{}^A})$ whose rows $v_F$ are vectors (written in components). The condition~\eqref{bers} implies that $L(v)$ is defined only if odd vectors $v_K$, ${\tilde K}=1$, are linearly independent. Hence $0\leq s \leq m$, while $r\geq 0$ can be arbitrary. \end{de} Though this definition provides no efficient description of forms, such a description can be given in special cases (corresponding to naive differential forms and to Bernstein-Leites integral forms) and in other cases various examples can be provided. See~\cite{tv:git}. In particular, if $m>0$, for $s\neq m$ there are nonzero forms with $r>n$. We shall give here an illustrative example of an $r|s$-form. \begin{ex} Let $\a^F\in V^*$ be an array of covectors of suitable parity. Then from the properties of the Berezinian it follows that the function $L(v):=\Ber(\langle v_F, \a^G\rangle)$ satisfies~\eqref{bers} and \eqref{eqs}. So it is a form. If $s>0$, $L$ has a pole at those odd vectors whose linear span is not transverse to the annihilator of the linear span of the odd part of $(\a^G)$. If $s=0$, then $L(v)=\det(\langle v_i, \a^j\rangle)$, where $i,j=1,\dots,r$, so $L$ is nothing else than the exterior product $\a^1\wedge\dots\wedge\a^r$. In general, this form with a singularity should be regarded as a ``nonlinear analog" of the exterior product of an array of even and odd covectors $\a^F$. 
It naturally appears in physical context (e.g.,~\cite{hov:bv},\cite{bel:pco}). \end{ex} As shown in~\cite{tv:dual}, the above construction of forms is not sufficient and must be supplemented in order to obtain $r|s$-forms with $r\in \mathbb Z$ arbitrary, including negative values. This is achieved by the following ``dualization" and the subsequent ``stability argument". When we shall need to distinguish forms in the sense of Definition~\ref{straight}, we shall call them ``straight forms". We shall denote the space of (straight) $r|s$-forms on $V$ by $\E^{r|s}(V)$. \begin{de} A {\it dual form} on $V$ of codegree $p|q$ is a smooth map $\L:\underbrace{V^*\times\dots V^*}_{p}\times \underbrace{V^*\P\times\dots \times V^*\P}_{q} \to \Vol V$ satisfying the conditions \begin{align} &\L(ph)=\L(p) \Ber h,\label{berr}\\ \intertext{for all $h\in \GL(p|q)$ and} &\dder{\L}{{p_A{}^K}}{{p_B{}^L}} +(-1)^{{\tilde A}{\tilde B}+({\tilde A}+{\tilde B}){\tilde L}}\dder{\L}{{p_B{}^K}}{{p_A{}^L}}=0\label{eq}. \end{align} The arguments of $\L$ (covectors) are written as vector-columns, and they are organized in a matrix $p=({p_A{}^K})$. Notice that due to the condition~\eqref{berr}, odd covectors $p^K$, $\tilde K=1$, should be linearly independent, hence $0\leq q\leq m$. \end{de} Fix a dimension $r|s$ and consider $V\oplus \R{r|s}$. 
\begin{de} A {\it mixed form} on $V$ of codegree $p|q$ and additional degree $r|s$ is a smooth map $$\L:\underbrace{(V\oplus \R{r|s})^*\times\dots\times (V\oplus \R{r|s})^*}_{p}\times \underbrace{(V\oplus \R{r|s})^*\P\times\dots\times (V\oplus \R{r|s})^*\P}_{q} \to \Vol V $$ satisfying the following conditions~(\ref{mberr})--(\ref{eq3}): \begin{align} &\L(ph,wh)=\L(p,w) \Ber h,\label{mberr}\\ \intertext{for all $h\in \GL(p|q)$,} &\L(p+aw,gw)=\L(p,w) \Ber g,\label{mberl}\\ \intertext{for all $g\in \GL(r|s)$ and all $a\in\Mat(r|s\times n|m)$, and} &\dder{\L}{{p_A{}^K}}{{p_B{}^L}}+ (-1)^{{\tilde A}{\tilde B}+({\tilde A}+{\tilde B}){\tilde L}}\dder{\L}{{p_B{}^K}}{{p_A{}^L}}=0, \label{eq1}\\ &\dder{\L}{{p_A{}^K}}{{w_F{}^L}}+ (-1)^{{\tilde A}{\tilde F}+({\tilde A}+{\tilde F}){\tilde L}}\dder{\L}{{w_F{}^K}}{{p_A{}^L}}=0, \label{eq2}\\ &\dder{\L}{{w_F{}^K}}{{w_G{}^L}}+ (-1)^{{\tilde F}{\tilde G}+({\tilde F}+{\tilde G}){\tilde L}}\dder{\L}{{w_G{}^K}}{{w_F{}^L}}=0 \label{eq3}, \end{align} where $p=({p_A{}^K})$, $w=({w_F{}^L})$ and for a given $K$ the entries ${p_A{}^K},{w_F{}^K}$ are the components of a covector on $V\oplus \R{r|s}$ (where $K$ is the number of the covector). Matrix notation suggests placing $p$ over $w$ in the argument of $\L$, but for typographic reasons we shall do it only when convenient. Notice that $s\leq q \leq m+s$ because of \eqref{mberr},\eqref{mberl}. \end{de} Examples of dual and mixed forms can be mimicked from the examples of straight forms (since they are defined via similar conditions), and we skip them. Notation: $\E_{p|q}(V)$ and $\E_{p|q}^{r|s}(V)$ for the spaces of dual and mixed forms on $V$, respectively. We shall omit the indication to $V$ when no confusion is possible. 
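For concreteness, here is one such mimicked example (an added illustration, with the verification left to the reader). Let $p|q=n|m$, so that the argument $p=({p_A{}^K})$ is a square matrix of format $n|m\times n|m$, and fix a volume form $\rho\in\Vol V$. Then

```latex
\begin{equation*}
  \L(p) := \Ber(p)\cdot\rho
\end{equation*}
```

is a dual form of codegree $n|m$: condition \eqref{berr} holds by the multiplicativity of the Berezinian, $\Ber(ph)=\Ber(p)\Ber h$, and the system \eqref{eq} follows from the properties of the Berezinian in the same way as for the example of a straight form given above.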
Notice that $\E_{p|q}(V)=\E_{p|q}^{0|0}(V)$. Consider the following homomorphisms: $\sigma=\sigma_{k|l}: \E_{p|q}^{r|s}\to \E_{p+k|q+l}^{r+k|s+l}$ and $\sigma^{-1}=\sigma_{k|l}^{-1}:\E_{p+k|q+l}^{r+k|s+l} \to \E_{p|q}^{r|s}$, \begin{align}\label{stab} &(\sigma\L)\left(\begin{array}{cc} {p_1} & {p_2} \\ {w_{11}} & {w_{12}}\\ {w_{21}} & {w_{22}} \end{array}\right):=\L\left(\begin{array}{c} {p_1}-{p_2}{w_{22}}^{-1}{w_{21}} \\ {w_{11}}-{w_{12}}{w_{22}}^{-1}{w_{21}} \end{array}\right)\cdot\Ber{w_{22}},\\ &(\sigma^{-1}\L^*)\left(\begin{array}{c}p \\w\end{array}\right):=\L^* \left(\begin{array}{cc} p & 0 \\ w & 0\\ 0 & 1 \end{array}\right), \end{align} where $\L\in \E_{p|q}^{r|s}$, $\L^*\in \E_{p+k|q+l}^{r+k|s+l}$. (We write arguments of forms as matrices and subdivide them into blocks corresponding to the ``first" and ``last" rows and columns.) \begin{thm}[\cite{tv:dual}] \label{stabil} The maps $\sigma$ and $\sigma^{-1}$ are well-defined (in particular, $\sigma$ uniquely extends to all admissible arguments of $\L$) and are indeed mutually inverse isomorphisms of the spaces $\E_{p|q}^{r|s}$ and $\E_{p+k|q+l}^{r+k|s+l}$. The equality $\sigma_{k|l}\sigma_{k'|l'}=\sigma_{k+k'|l+l'}$ holds. \end{thm} Define $\EE^{k|l}(V):={\mathop{\varinjlim}\limits_{N,M}} \E_{p+N|q+M}^{r+N|s+M}(V)$, where $k|l=r+n-p|s+m-q$, and call it the space of {\it stable $k|l$-forms} on $V$. Note that $k\in \mathbb Z$ (may be negative), while $l=0,\dots,m$. It is not hard to produce an example of a stable $k|l$-form with negative $k$ (if $l > 0$). Take as a representative a dual form with the number of even arguments greater than $n$ (exactly as in examples of straight $r|s$-forms with $r>n$, cf.~\cite{tv:git}). Similarly, if $l < m$, there are nonzero $k|l$-forms with $k>n$. Obviously, $\EE^{k|l}(V)\cong\E_{p|q}^{r|s}(V)$ if $k=r+n-p$, $l=s+m-q$, for all $r,s,p\geq 0$ and $s\leq q\leq s+m$. \begin{co}$\EE^{k|l}(V)\cong \E_{n-k|m-l}(V)$ for $k\leq n$. 
\end{co} Consider the following homomorphisms: $\t=\t_{r|s}:\E^{r|s}\to \E_{n|m}^{r|s}$ and $\t^{-1}=\t_{r|s}^{-1}:\E_{n|m}^{r|s}\to \E^{r|s}$, \begin{align}\label{iso} (\t L) \left( \begin{array}{c}p \\w \end{array} \right)&:=L(wp^{-1})\cdot\Ber p,\\ (\t^{-1} \L)(v)&:=\L \left( \begin{array}{c}1 \\ v \end{array} \right), \end{align} where $\L\in \E_{n|m}^{r|s}$, $L\in \E^{r|s}$. \begin{thm}[\cite{tv:dual}] \label{isom}Maps $\t$ and $\t^{-1}$ are well-defined (in particular, $\t$ uniquely extends to all admissible arguments of $L$) and are indeed mutually inverse isomorphisms of the spaces $\E_{n|m}^{r|s}$ and $\E^{r|s}$. \end{thm} \begin{co}$\EE^{k|l}(V)\cong \E^{k|l}(V)$ for $k\geq 0$. \end{co} \begin{rem}In view of Theorems~\ref{stabil} and \ref{isom} one may regard it excessive to consider all spaces of mixed forms. Indeed, it is sufficient to consider only $\E^{r|s}$ and $\E_{p|q}$ together with the isomorphism $\E^{r|s}\cong\E_{n-r|m-s}$ defined in the range $0\leq r\leq n$. However, it would be practically restrictive. It is easier to work with various operations in terms of mixed forms. \end{rem} \subsection{Operators $e(\a)$, $e(u)$. Commutation relations and the module structure} Consider a covector $\a\in V^*$. We introduce an operator $e(\a):\E^{r|s}_{p|q}\to \E^{r+1|s}_{p|q}$ by the following formula: \begin{equation}\label{ext} e(\a)\L:= (-1)^r \a_A w_{r+1}^K (-1)^{\tilde \a\tilde A}\der{\L}{{p_A{}^K}}, \end{equation} where $\a=e^A\a_A$. Likewise, consider a vector $u\in V$. Define $e(u):\E^{r|s}_{p|q}\to \E^{r|s}_{p+1|q}$ by the formula \begin{multline}\label{int} e(u)\L:= \\ (-1)^r u^A \left( p_A^{p+1} - (-1)^{{\tilde B}{\tilde K}} {p_A{}^K} p_B^{p+1}\der{}{{p_B{}^K}}-(-1)^{{\tilde F}{\tilde K}}{p_A{}^K} w_F^{p+1}\der{}{{w_F{}^K}} \right)\L, \end{multline} where $u=u^A e_A$. Here $(e_A)$ and $(e^A)$ are dual bases of $V$ and $V^*$. 
\begin{rem} On dual forms, $e(u):\E_{p|q}\to\E_{p+1|q}$, \begin{equation}\label{intdual} e(u)\L= (-1)^r u^A \left(p_A^{p+1} - (-1)^{{\tilde B}{\tilde K}} {p_A{}^K} p_B^{p+1}\der{}{{p_B{}^K}}\right)\L. \end{equation} \end{rem} The proof that $e(\a)$ and $e(u)$ indeed map forms to forms and do not depend on the choice of bases is postponed until Section~\ref{analiz}. The parities of $e(\a)$ and $e(u)$ are the same as the respective parities of $\a$ and $u$; operators $e(\a)$ and $e(u)$ depend on $\a$ and $u$ linearly. \begin{thm} \label{stability} The operators $e(\a)$ and $e(u)$ are stable, i.e., they commute with all isomorphisms $\sigma_{k|l}$. Under the isomorphism~\eqref{iso}, the operator $e(\a)$ corresponds to the operator $e_{\a}: \E^{r|s}\to \E^{r+1|s}$, \begin{align}\label{ext1} e_{\a}&=(-1)^r \left( {\vvone{A}}\a_A -(-1)^{{\tilde \alpha}{\tilde F}+{\tilde B}}{v_F{}^A}\a_A\,{\vvone{B}}\,\der{}{{v_F{}^B}} \right) \intertext{and if $r>0$ the operator $e(u)$ corresponds to the operator $i_u: \E^{r|s}\to \E^{r-1|s}$,} \label{int1} i_u&=(-1)^{r-1}u^A\,\der{}{{v_r{}^A}}, \end{align} the substitution of $u\in V$ into the last even slot of $L\in\E^{r|s}$. Here $L=L(v)$, $v=({v_F{}^A})$. (The operators $e_{\a}$, $i_u$ were introduced in~{\em \cite{tv:git}}.) \end{thm} \begin{proof} Consider $e(u)$. We have to check that $e(u)$ commutes with $\sigma_{1|0}$ and $\sigma_{0|1}$. We shall consider $\sigma_{1|0}$ (the case of $\sigma_{0|1}$ is similar, but simpler). Denote $\sigma:=\sigma_{1|0}$. It is sufficient to give proof for $\L\in\E_{p|q}$, then the general case will follow. Consider the diagram \begin{equation} \begin{CD} \E_{p|q}@>{\sigma}>>\E_{p+1|q}^{1|0}\\ @V{e(u)}VV @VV{e(u)}V\\ \E_{p+1|q}@>>{\sigma}>\E_{p+2|q}^{1|0} \end{CD} \end{equation} Take $\L\in\E_{p|q}$. Apply $\sigma$. We get $\L^*\in\E_{p+1|q}^{1|0}$, where $\L^*\left(\begin{smallmatrix}p & p'\\ w & w' \end{smallmatrix}\right)=\L\left(p-p'{w'}^{-1}w\right)\,w'$. 
Here $p=({p_A{}^K})$, $w=({w_F{}^K})$, $p'=({\ppone{A}})$, $w'=w^{p+1}$. Apply $e(u)$. We obtain \begin{multline}\label{sigmae} (e(u)\L^*)\left( \begin{matrix}p & p' &p''\\ w & w'&w'' \end{matrix}\right)= -u^A\left({\pptwo{A}} -(-1)^{{\tilde B}{\tilde{K^*}}}{p_A{}^{K^*}}{\pptwo{B}}\,\der{}{{p_B{}^{K^*}}}\,- \right. \\ \left. {p_A{}^{K^*}}{w^{p+2}}\,\der{}{{w^{K^*}}} \right)\L^*= -u^A\left( {\pptwo{A}} -(-1)^{{\tilde B}{\tilde K}}{p_A{}^K}{\pptwo{B}}\,\der{}{{p_B{}^K}}- \right. \\ \left. {\ppone{A}}{\pptwo{B}}\,\der{}{{\ppone{A}}}\,- {p_A{}^K}{w^{p+2}}\,\der{}{{w^K}} - {\ppone{A}}{w^{p+2}}\,\der{}{{w^{p+1}}} \right) % \L\left(p-p'{w'}^{-1}w\right)\,{w^{p+1}} \\= -u^A\left({\pptwo{A}}\L {w^{p+1}} - (-1)^{{\tilde B}{\tilde K}}{p_A{}^K}{\pptwo{B}}\,\der{\L}{{p_B{}^K}} {w^{p+1}} + \right.\\ {\ppone{A}}{\pptwo{B}}{w^K}\,\der{\L}{{p_B{}^K}} + {p_A{}^K}{w^{p+2}}{\ppone{A}}\, \der{\L}{{p_B{}^K}} (-1)^{{\tilde B}{\tilde K}}+ {\ppone{A}}{w^{p+2}}{\ppone{A}}{w^K}\,\der{\L}{{p_B{}^K}} \\ \left.\bigl(-\frac{1}{({w^{p+1}})^2}\bigr){w^{p+1}} - {\ppone{A}}{w^{p+2}}\,\L \right), \end{multline} where in the last expression the argument of $\L$ and $\lder{\L}{p}$ is $p-p'{w'}^{-1}w$ and we denote $p'':=({\pptwo{A}})$, $w'':=({w^{p+2}})$. Now let us apply first $e(u)$, then $\sigma$. Calculate: \begin{equation} (e(u)\L)\bigl(\begin{matrix}p &p''\end{matrix}\bigr)= u^A\left({\pptwo{A}}-(-1)^{{\tilde B}{\tilde K}}{p_A{}^K}{\pptwo{B}}\,\der{}{{p_B{}^K}}\right)\L(p); \end{equation} applying $\sigma$ we obtain \begin{multline} (\sigma e(u)\L)\left( \begin{matrix}p & p'' &p'\\ w & w''&w' \end{matrix}\right)=\left(e(u)\L\right) \bigl(p-p'{w'}^{-1}w, p''-p'({w^{p+1}})^{-1}{w^{p+2}}\bigr)\,{w^{p+2}} \\=u^A\Biggl( ({\pptwo{A}} - {\ppone{A}}({w^{p+1}})^{-1}{w^{p+2}}) \, \L - (-1)^{{\tilde B}{\tilde K}} ({p_A{}^K} - {\ppone{A}}({w^{p+1}})^{-1}{w^K}) \Biggr. \\ \Biggl. 
({\pptwo{B}} - {\ppone{A}}({w^{p+1}})^{-1}{w^{p+2}}) \der{\L}{{p_B{}^K}}\Biggr) {w^{p+1}}, \end{multline} where the argument of $\L$ and $\lder{\L}{p}$ in the last expression is $p-p'{w'}^{-1}w$. Multiplying through, we obtain exactly the same terms as in~\eqref{sigmae} with the opposite sign. Notice that $\sigma e(u)\L$ as a form is skew-symmetric in even columns. Thus we can swap $\left(\begin{smallmatrix}p' \\ w'\end{smallmatrix}\right)$ and $\left(\begin{smallmatrix}p'' \\ w''\end{smallmatrix}\right)$, cancelling the minus sign, and obtain \begin{equation} (\sigma e(u)\L)\left( \begin{matrix}p & p' &p''\\ w & w'&w'' \end{matrix}\right)= (e(u)\sigma \L)\left( \begin{matrix}p & p' &p''\\ w & w'&w'' \end{matrix}\right), \end{equation} as desired. Stability of $e(\a)$ is proved in the same way, and we omit the calculation. Let us turn to the relation with the isomorphisms~\eqref{iso}. Consider the following diagram. \begin{diagram} \E_{n|m}^{r|s} & \rTo^{e(u)} & \E_{n+1|m}^{r|s} & \rTo^{\sigma^{-1}} & \E_{n|m}^{r-1|s}\\ \dTo<{\t^{-1}} & & & & \dTo >{\t^{-1}} \\ \E^{r|s} & & \rTo_{i_u} & &\E^{r-1|s} \\ \end{diagram} The claim is that it is commutative. To check this, take $\L\in \E_{n|m}^{r|s}$. We have: \begin{multline*} (i_u\t^{-1}\L)(v)=(-1)^{r-1}{u^A} \,\der{}{{v_r{}^A}}(\t^{-1}\L)(v) =(-1)^{r-1}{u^A}\,\der{}{{v_r{}^A}}\L \begin{pmatrix} 1 \\ v \end{pmatrix}= \\ (-1)^{r-1}{u^A} \,\der{\L}{{w_r{}^A}} \begin{pmatrix} 1 \\ v \end{pmatrix}; \end{multline*} now, \begin{align*} \begin{split} (e(u)\L) \begin{pmatrix} p & {p^{n+1}} \\ w & {w^{n+1}} \end{pmatrix}=(-1)^r {u^A}\left({p_A{}^{n+1}}-(-1)^{{\tilde B}{\tilde K}}{p_A{}^K}{p_B{}^{n+1}}\, \der{}{{p_B{}^K}} - \right. 
\\ \left.(-1)^{{\tilde F}{\tilde K}}{p_A{}^K}{w_F{}^{n+1}}\,\der{}{{w_F{}^K}}\right) \L\begin{pmatrix} p \\ w \end{pmatrix}; \end{split}\\ \begin{split} &(\sigma^{-1}e(u)\L)\begin{pmatrix}p \\w^*\end{pmatrix}= (e(u)\L)\begin{pmatrix}p & {p^{n+1}} \\w & {w^{n+1}} \end{pmatrix}_{\left|\begin{aligned}&\scriptstyle w_r^{n+1}=1\\ &\scriptstyle {w_r{}^K} =0 \quad (K\neq n+1)\\ &\scriptstyle {w_F{}^{n+1}} =0 \quad (F\neq r)\\ &\scriptstyle {p_A{}^{n+1}}=0 \end{aligned} \right.}= \\&(-1)^{r}u^A\left(0-(-1)^{0}{p_A{}^K}\,\der{}{{w_r{}^K}}\right) \L\begin{pmatrix}p\\ w^*\\ 0 \end{pmatrix}= (-1)^{r}u^A\left(-{p_A{}^K}\,\der{\L}{{w_r{}^K}}\begin{pmatrix}p\\ w \end{pmatrix} \right); \end{split}\\ \intertext{hence} &(\t^{-1}\sigma^{-1}e(u)\L)(v)=(-1)^r \left(-u^A\der{\L}{{w_r{}^A}} \begin{pmatrix} 1 \\ v \end{pmatrix} \right)=i_u \t^{-1}\L(v), \end{align*} as desired. (Here $w^*$ stands for $w$ without the row $w_r$.) In a similar way the equality $e(\a)\t=\t e_{\a}:\E^{r|s}\to\E_{n|m}^{r+1|s}$ is checked. \end{proof} \begin{co}For exterior forms on a purely even space $V$ the operator $e(\a)$ corresponds to the usual exterior multiplication $\a\,\wedge\,$. The operator $e(u)$ corresponds to the usual interior multiplication or contraction $i_u=u{\lrcorner}\,$. \end{co} Note that in our mixed description both operators increase respective degrees and thus have appearance of ``exterior" products. \begin{thm} \label{commut}The operators $e(\a)$ and $e(u)$ obey the following relations: \begin{align} e(u) e(v) + (-1)^{{\tilde u}{\tilde v}} e(v) e(u) & =0, \label{commut1}\\ e(\a) e(\b) + (-1)^{\tilde\a\tilde\b}e(\b) e(\a) & =0, \label{commut2}\\ e(u) e(\a) + (-1)^{\tilde\a{\tilde u}}e(\a) e(u) & = \langle u,\a\rangle\,\sigma.\label{cliff} \end{align} Here $u,v\in V$, $\a,\b\in V^*$, and $\sigma=\sigma_{1|0}:\E_{p|q}^{r|s}\to \E_{p+1|q}^{r+1|s}$ is the stability isomorphism~\eqref{stab}. 
\end{thm} \begin{proof} To find relations between $e(u)$ and $e(v)$, for $u,v\in V$, it is sufficient to consider the case $r=s=0$. (The general case is formally reduced to it by considering dual forms on extended space $V\oplus \R{r|s}$ and by setting $u^F=v^F=0$.) Then for $\L\in\E_{p|q}$ we have \begin{multline}\label{vykl1} e(u)\,e(v)\L \\=u^A\left(p_A^{p+2}-(-1)^{{\tilde B}{\tilde K}}{p_A{}^K} p_B^{p+2}\,\der{}{{p_B{}^K}} \right) v^C\left(p_C^{p+1}-(-1)^{{\tilde D}{\tilde L}}{p_C{}^L} p_D^{p+1}\,\der{}{{p_D{}^L}} \right)\L \\=u^A v^C (-1)^{({\tilde v}+{\tilde C}){\tilde A}} \Biggl({\pptwo{A}}{\ppone{C}} - {\ppone{A}}{\pptwo{C}} - (-1)^{{\tilde C}{\tilde D}}{\pptwo{A}}{\ppone{D}}{p_C{}^L}\,\der{}{{p_D{}^L}}-\\ (-1)^{{\tilde B}{\tilde C}+{\tilde A}({\tilde B}+{\tilde C})}{\pptwo{B}}{\ppone{C}}{p_A{}^L}\,\der{}{{p_B{}^L}} + (-1)^{{\tilde A}({\tilde C}+{\tilde D})}{\pptwo{C}}{\ppone{D}}{p_A{}^L}\,\der{}{{p_D{}^L}}+\\ (-1)^{{\tilde C}{\tilde D}}{\ppone{A}}{\pptwo{D}}{p_C{}^L}\,\der{}{{p_D{}^L}}+ (-1)^{a}\, {\pptwo{B}}{\ppone{D}} {p_A{}^K}{p_C{}^L}\,\dder{}{{p_B{}^K}}{{p_D{}^L}}\Biggr)\L, \end{multline} where $a={\tilde B}{\tilde C}+{\tilde B}{\tilde L}+{\tilde B}{\tilde D}+{\tilde C}{\tilde K}+{\tilde K}{\tilde L}+{\tilde A}{\tilde B}+{\tilde A}{\tilde D}+{\tilde C}{\tilde D}$. Notice that the range of $K$ in the first line of~\eqref{vykl1} contains $p+1$. 
Simultaneously interchanging $u$ and $v$ and the indices $A$ and $C$, we obtain \begin{multline} e(v)\,e(u)\L \\= (-1)^{{\tilde u}{\tilde v}}u^A v^C (-1)^{({\tilde v}+{\tilde C}){\tilde A}} \Biggl({\ppone{A}}{\pptwo{C}} - {\pptwo{A}}{\ppone{C}} - (-1)^{{\tilde A}({\tilde C}+{\tilde D})}{\pptwo{C}}{\ppone{D}}{p_A{}^L}\,\der{}{{p_D{}^L}} \\ - (-1)^{{\tilde C}{\tilde D}}{\ppone{A}}{\pptwo{D}}{p_C{}^L}\,\der{}{{p_D{}^L}}+ (-1)^{{\tilde C}{\tilde D}}{\pptwo{A}}{\ppone{D}}{p_C{}^L}\,\der{}{{p_D{}^L}} + \\(-1)^{{\tilde A}{\tilde D}+{\tilde A}{\tilde C}+{\tilde C}{\tilde D}}{\pptwo{D}}{\ppone{C}}{p_A{}^L}\,\der{}{{p_D{}^L}}+ (-1)^b{\pptwo{B}}{\ppone{D}}{p_A{}^K}{p_C{}^L}\,\dder{}{{p_D{}^K}}{{p_B{}^L}} \Biggr)\L, \end{multline} where $b={\tilde C}{\tilde K}+{\tilde A}{\tilde B}+{\tilde K}{\tilde L}+{\tilde B}{\tilde C}+{\tilde C}{\tilde D}+{\tilde A}{\tilde D}+{\tilde L}{\tilde D}$. Now we see that all terms except for the last one in $(-1)^{{\tilde u}{\tilde v}}e(v)e(u)\L$ would cancel the similar terms in $e(u)e(v)\L$. Notice that $a+b={\tilde B}{\tilde D}+({\tilde B}+{\tilde D}){\tilde L}$. It follows that \begin{multline} \left(e(u)e(v)+(-1)^{{\tilde u}{\tilde v}}e(v)e(u)\right)\L=(-1)^a {\pptwo{B}}{\ppone{D}}{p_A{}^K}{p_C{}^L}\, \\ \left( \dder{\L}{{p_B{}^K}}{{p_D{}^L}}+ (-1)^{{\tilde B}{\tilde D}+({\tilde B}+{\tilde D}){\tilde L}}\dder{}{{p_D{}^K}}{{p_B{}^L}}\right), \end{multline} which equals zero by the equation~\eqref{eq}. Consider now $e(\a)$ and $e(\b)$. For $\L\in\E_{p|q}^{r|s}$ we readily have \begin{multline}\label{vykl2} e(\a)e(\b)\L = (-1)^{r+1}\a_A {\wwtwo{K}} \,\der{}{{p_A{}^K}}\, \left( (-1)^r\b_B {\wwtwo{L}}\,\der{\L}{{p_B{}^L}}(-1)^{\tilde\b {\tilde B}} \right)= \\-(-1)^{{\tilde \alpha}{\tilde A}+{\tilde \beta}{\tilde B}}\a_A\b_B{\wwtwo{K}}{\wwone{L}}\,\dder{\L}{{p_A{}^K}}{{p_B{}^L}} (-1)^{({\tilde \beta}+{\tilde B}){\tilde A}+({\tilde A}+{\tilde K}){\tilde L}}. 
\end{multline} Similarly, for $e(\b)e(\a)$ we obtain \begin{multline} e(\b)e(\a)\L = -(-1)^{{\tilde \alpha}{\tilde \beta} +{\tilde \alpha}{\tilde A} +{\tilde \beta}{\tilde B} +({\tilde B}+{\tilde K}){\tilde L} +{\tilde A}{\tilde \beta}} \a_A\b_B{\wwtwo{K}}{\wwone{L}}\,\dder{\L}{{p_B{}^K}}{{p_A{}^L}}= \\ (-1)^{{\tilde \alpha}{\tilde A}+{\tilde \beta}{\tilde B}+{\tilde \alpha}{\tilde \beta}+{\tilde \beta}{\tilde A}+{\tilde K}{\tilde L}+{\tilde A}{\tilde B}+{\tilde A}{\tilde L}}\a_A\b_B{\wwtwo{K}}{\wwone{L}}\, \dder{\L}{{p_A{}^K}}{{p_B{}^L}}= \\-(-1)^{{\tilde \alpha}{\tilde \beta}}e(\a)e(\b)\L, \end{multline} again by the equation~\eqref{eq}. Finally, let us find the relation between the operators $e(u)$ and $e(\a)$. Notice that $e(u)e(\a), \ e(\a)e(u):\E_{p|q}^{r|s}\to\E_{p+1|q}^{r+1|s}$. For $\L\in\E_{p|q}^{r|s}$, by a direct calculation similar to~\eqref{vykl1},\eqref{vykl2} using the equations~\eqref{eq1},\eqref{eq2}, we obtain the equality \begin{multline}\label{cliff1} \left(e(u) e(\a) + (-1)^{\tilde\a{\tilde u}}e(\a) e(u)\right)\L = \\u^A \a_A \left( w_{r+1}^{p+1} -(-1)^{{\tilde B}{\tilde K}}{\wwone{K}}{\ppone{B}}\,\der{}{{p_B{}^K}}-(-1)^{{\tilde F}{\tilde K}}{\wwone{K}} w_{F}^{p+1}\,\der{}{{w_F{}^K}} \right)\L. \end{multline} Apply now the transformation $\sigma^{-1}:\E_{p+1|q}^{r+1|s}\to \E_{p|q}^{r|s}$. That means setting $w_{r+1}^{p+1}:=1$, ${\wwone{K}}:=0$, ${\ppone{A}}:=0$, $w_{F}^{p+1}:=0$. We arrive at \begin{equation} \sigma^{-1}\left(e(u) e(\a) + (-1)^{\tilde\a{\tilde u}}e(\a) e(u)\right)\L=\langle u,\a\rangle\L, \end{equation} from which~\eqref{cliff} follows. Notice that by this calculation we showed that the operator in the r.h.s. of~\eqref{cliff1} gives another expression for the isomorphism $\sigma_{1|0}:\E_{p|q}^{r|s}\to\E_{p+1|q}^{r+1|s}$.
\end{proof} \begin{co} {\em(1)} The space $\E_{{\boldsymbol \cdot}|q}^{{\boldsymbol \cdot}|s}(V)$ is a module over exterior algebras $\E^{{\boldsymbol \cdot}}(V)$ and $\E^{{\boldsymbol \cdot}}(V^*)$ defined by relations $u v=-(-1)^{{\tilde u}{\tilde v}}v u$ and $\a \b=-(-1)^{\tilde \a\tilde \b}\b \a$. {\em(2)} The space of stable forms $\EE^{{\boldsymbol \cdot}|s}(V)$ is a module over a Clifford algebra $\Cliff(V\oplus V^*)$ defined by relations $u v=- (-1)^{{\tilde u}{\tilde v}}v u$, $\a \b=- (-1)^{\tilde \a\tilde \b}\b \a$ and $u\a +(-1)^{{\tilde u}\tilde \a}\a u=\langle u,\a\rangle$. \end{co} \begin{rem} Notice that we arrive at the relations of exterior and Clifford algebras (in ``skew" versions) not as conventions but as actual identities between linear operators. It is also worth noting that the anticommutation relations obtained here for $e(u)$ and $e(\a)$ are not at all obvious. While under the isomorphism with straight or dual forms one of the operators $e(u)$ or $e(\a)$ can be interpreted as a substitution into a suitable even slot (hence the anticommutativity between such operators will become transparent), the other one will remain an ``exterior product" defined by a formula like~\eqref{ext1}, which involves both even and odd slots. By duality $e(u)$ transforms into $e(\a)$ and vice versa. However, this can be exploited only in the common range $0\leq r\leq n$ where dual and straight forms are both good. Hence a certain portion of tedious calculations is unavoidable to get all the relations~{\eqref{commut1}--\eqref{cliff}}. \end{rem} \section{Cartan calculus}\label{analiz} \subsection{Differential} Consider a supermanifold $M=M^{n|m}$. For forms on $M$, i.e., sections of the corresponding vector bundles associated with $TM$, we shall use the notation $\O^{r|s}$, $\O_{p|q}$, $\O^{r|s}_{p|q}$ and $\boldsymbol \O^{r|s}$. 
By $\O^{{\boldsymbol \cdot}}=\oplus\O^{k}$ we shall denote the algebra of ``naive" differential forms with the skew-commutative convention (and the even differential, cf.~\cite{manin}). A differential ${\,\,}{\bar{\smash{\!\!\mathit d}}}:\O^{r|s}_{p|q}\to\O^{r+1|s}_{p|q}$ is defined by the formula \begin{equation}\label{dif} {\,\,}{\bar{\smash{\!\!\mathit d}}}\L:=(-1)^r {\wwone{K}}(-1)^{{\tilde A}{\tilde K}}\,\der{}{{x^A}}\der{\L}{{p_A{}^K}} \end{equation} (see~\cite{tv:dual}). In~\cite{tv:dual} it is proved that the operator ${\,\,}{\bar{\smash{\!\!\mathit d}}}$ is stable, hence we have a complex ${\,\,}{\bar{\smash{\!\!\mathit d}}}:\OO^{{\boldsymbol \cdot}\,|s}\to\OO^{{\boldsymbol \cdot}\,+1|s}$. For ${\boldsymbol \cdot}\geq 0$, this complex is isomorphic to the ``straight" complex $d:\O^{{\boldsymbol \cdot}\,|s}\to\O^{{\boldsymbol \cdot}\,+1|s}$ studied in~\cite{tv:git} and for ${\boldsymbol \cdot}\!\leq n$ to the complex of dual forms ${}{\bar{\smash{\delta}}}:\O_{n-{\boldsymbol \cdot}+1|m-s}\to\O_{n-{\boldsymbol \cdot}|m-s}$ introduced in~\cite{tv:dual}: { \newcommand{\O^{0|s}}{\O^{0|s}} \newcommand{\O^{1|s}}{\O^{1|s}} \newcommand{\O^{n|s}}{\O^{n|s}} \newcommand{\O^{n+1|s}}{\O^{n+1|s}} \newcommand{\OO^{-1|s}}{\OO^{-1|s}} \newcommand{\OO^{0|s}}{\OO^{0|s}} \newcommand{\OO^{1|s}}{\OO^{1|s}} \newcommand{\OO^{n|s}}{\OO^{n|s}} \newcommand{\OO^{n+1|s}}{\OO^{n+1|s}} \newcommand{\O_{n+1|m-s}}{\O_{n+1|m-s}} \newcommand{\O_{n|m-s}}{\O_{n|m-s}} \newcommand{\O_{n-1|m-s}}{\O_{n-1|m-s}} \newcommand{\O_{0|m-s}}{\O_{0|m-s}} \newarrow{Isom}===== \begin{diagram}[width=2em,height=1.5em] & & 0&\rTo &\O^{0|s}&\rTo&\O^{1|s}&\rTo&\dots&\rTo&\O^{n|s}&\rTo&\O^{n+1|s}&\rTo&\dots\\ & & & & \dIsom & & \dIsom & & & & \dIsom & & \dIsom& & \\ \dots&\rTo&\OO^{-1|s}&\rTo&\OO^{0|s}&\rTo&\OO^{1|s}&\rTo&\dots&\rTo&\OO^{n|s}&\rTo&\OO^{n+1|s}&\rTo&\dots\\ & &\dIsom & &\dIsom& &\dIsom& & & & \dIsom & & & & \\ \dots&\rTo&\O_{n+1|m-s}&\rTo&\O_{n|m-s}&\rTo&\O_{n-1|m-s}&\rTo&\dots&\rTo&\O_{0|m-s}& 
\rTo&0 & & \\ \end{diagram} } (vertical lines are isomorphisms). Consider a mixed form $\L$ and a function $f$. Calculate ${\,\,}{\bar{\smash{\!\!\mathit d}}}(f\L)$: \begin{multline} {\,\,}{\bar{\smash{\!\!\mathit d}}}(f\L)=(-1)^r {\wwone{K}}(-1)^{{\tilde A}{\tilde K}}\,\der{}{{x^A}}\der{}{{p_A{}^K}}(f\L) =\\(-1)^r {\wwone{K}}(-1)^{{\tilde A}{\tilde K}}\,\der{}{{x^A}}f\der{\L}{{p_A{}^K}}(-1)^{{\tilde F}({\tilde A}+{\tilde K})} =\\(-1)^r {\wwone{K}}(-1)^{{\tilde A}{\tilde K}}\left((-1)^{{\tilde F}({\tilde A}+{\tilde K})}{\partial_A} f\,\der{\L}{{p_A{}^K}} +(-1)^{{\tilde F}{\tilde K}}f\,\der{}{{x^A}}\der{\L}{{p_A{}^K}}\right) =\\ f\,{\,\,}{\bar{\smash{\!\!\mathit d}}}\L + (-1)^r{\partial_A}\,f{\wwone{K}}\,\der{\L}{{p_A{}^K}}(-1)^{{\tilde F}{\tilde A}} =f\,{\,\,}{\bar{\smash{\!\!\mathit d}}}\L + e(df)\,\L, \end{multline} where $df=dx^A{\partial_A} f$ is considered as an element of $\O^1(M)$. We stress that the algebra with the {\it even} differential is considered. Since ${\,\,}{\bar{\smash{\!\!\mathit d}}}(f\L)$ is a form and $f\,{\,\,}{\bar{\smash{\!\!\mathit d}}}\L$ is a form, it follows that $e(df)\,\L$ is a well-defined form. We can conclude that for arbitrary $1$-form $\a$ the operation $e(\a)$ is also well-defined, i.e., does not depend on the choice of coordinates and maps mixed forms into mixed forms. The formula~\eqref{ext} is extracted from this calculation. Similar calculation gives the formula~\eqref{ext1} for $e_{\a}$ on straight forms; by duality it can be rewritten to produce a formula~\eqref{intdual} for $e(u)$ on dual forms, from which we get our formula~\eqref{int} on mixed forms. Thus it follows that both operators $e(u)$, $e(\a)$ on mixed forms are well-defined, which justifies our consideration in the previous section. It is not easy to give a purely algebraic proof of this fact. \begin{rem} The stability of $e(u)$, $e(\a)$ as well can be deduced from the stability of ${\,\,}{\bar{\smash{\!\!\mathit d}}}$. 
\end{rem} In the previous section we obtained the module structure of mixed forms over $\O^{{\boldsymbol \cdot}}(M)$. \begin{thm} \label{leibniz}The Leibniz formula holds: \begin{equation}\label{leib} {\,\,}{\bar{\smash{\!\!\mathit d}}}(\o\,\L)= d\o\,\L + (-1)^k \o\,{\,\,}{\bar{\smash{\!\!\mathit d}}}\L, \end{equation} for $\o\in\O^k$ and $\L\in\O^{r|s}_{p|q}$. \end{thm} \begin{proof} Since $\O^{{\boldsymbol \cdot}}(M)$ is a differential graded algebra, generated by elements $df$ over $C^{\infty}(M)$ (locally), it is sufficient to check the formula~\eqref{leib} in two cases: $\o=f$ and $\o=df$, where $f$ is a function. The first case was considered above. Consider $\o=df$. Then, by definition, \begin{equation} df\,\L={\,\,}{\bar{\smash{\!\!\mathit d}}}(f\L) - f\,{\,\,}{\bar{\smash{\!\!\mathit d}}}\L. \end{equation} Apply ${\,\,}{\bar{\smash{\!\!\mathit d}}}$. We obtain \begin{multline} {\,\,}{\bar{\smash{\!\!\mathit d}}}(df\,\L)={\,\,}{\bar{\smash{\!\!\mathit d}}}\D(f\L) - {\,\,}{\bar{\smash{\!\!\mathit d}}}(f\,{\,\,}{\bar{\smash{\!\!\mathit d}}}\L) =0-df\,{\,\,}{\bar{\smash{\!\!\mathit d}}}\L - f{\,\,}{\bar{\smash{\!\!\mathit d}}}\D\L=-df\,{\,\,}{\bar{\smash{\!\!\mathit d}}}\L =\\ ddf\,\L+(-1)^1 df\,{\,\,}{\bar{\smash{\!\!\mathit d}}}\L, \end{multline} as desired. \end{proof} Therefore, $\OO^{{\boldsymbol \cdot}\,|s}$ is a graded differential module over $\O^{{\boldsymbol \cdot}}$ for all $s$. \begin{rem} Notice that $\E^{{\boldsymbol \cdot}}\cong\EE^{{\boldsymbol \cdot}|0}$, $\O^{{\boldsymbol \cdot}}\cong\OO^{{\boldsymbol \cdot}|0}$ as modules. \end{rem} \subsection{Homotopy identity} Consider a vector field $X\in \Vect M$ and the corresponding infinitesimal transformation: $x^A\mapsto x^A + \varepsilon X^A(x)$, $\varepsilon^2=0$.
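Before turning to the statement for mixed forms, it may help to recall the purely even picture (this aside is ours, not part of the original argument): on an ordinary manifold, where by the Corollary of the previous section $e(X)$ becomes the contraction $i_X$, the homotopy identity takes the classical form $\mathcal{L}_X = d\,i_X + i_X\,d$, which can be verified directly on a decomposable form:

```latex
% Classical Cartan homotopy identity, checked on \omega = f\,dg
% (f, g smooth functions, X a vector field):
\begin{align*}
d\,i_X(f\,dg) + i_X\,d(f\,dg)
  &= d\bigl(f\,X(g)\bigr) + i_X(df\wedge dg)\\
  &= X(g)\,df + f\,d\bigl(X(g)\bigr) + X(f)\,dg - X(g)\,df\\
  &= f\,d\bigl(X(g)\bigr) + X(f)\,dg
   = \mathcal{L}_X(f\,dg).
\end{align*}
```

For mixed forms the same identity holds only up to the stability isomorphism $\sigma$, which is the new feature of the super setting and disappears after stabilization.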
By a straightforward calculation we obtain the following formula for the Lie derivative on mixed forms: \begin{equation} \d_X\L= X^A\,\der{\L}{x^A}- (-1)^{{\tilde A}{\tilde X}}\der{X^B}{x^A}\ {p_B{}^K}\der{\L}{{p_A{}^K}} + (-1)^{{\tilde A}({\tilde X}+1)} \der{X^A}{x^A}\,\L, \end{equation} where we chose the notation $\d_X$ to avoid overloading the letter {\it `L'}. The Lie derivative $\d_X$ has the same parity as $X$. It preserves all degrees and is obviously a derivation with respect to all the natural multiplications. The operation $\d_X$ commutes with the stability isomorphisms~\eqref{stab} and with the isomorphisms~\eqref{iso}. \begin{thm} \label{cart} For mixed forms on a supermanifold $M$, the following identity holds: \begin{equation}\label{cartan1} {\,\,}{\bar{\smash{\!\!\mathit d}}}\, e(X) + e(X)\,{\,\,}{\bar{\smash{\!\!\mathit d}}} = \d_X\,\sigma, \end{equation} where $\sigma=\sigma_{1|0}:\O^{r|s}_{p|q}\to\O^{r+1|s}_{p+1|q}$ is the stability isomorphism. \end{thm} \begin{proof} Let $\L$ be in $\O_{p|q}^{r|s}$. Consider $\sigma^{-1}:\O_{p+1|q}^{r+1|s}\to\O_{p|q}^{r|s}$. Recall that the action of this operator consists in setting ${\ppone{A}}=0$, $w_{F}^{p+1}=0$, ${\wwone{K}}=0$, $w_{r+1}^{p+1}=1$ in the argument. We shall find $\sigma^{-1}e(X){\,\,}{\bar{\smash{\!\!\mathit d}}}\L$ and $\sigma^{-1}{\,\,}{\bar{\smash{\!\!\mathit d}}} e(X)\L$.
Directly from~\eqref{int}: \begin{multline}\label{ed} \sigma^{-1}e(X){\,\,}{\bar{\smash{\!\!\mathit d}}}\L=(-1)^{r+1}{X^A}\left(-{p_A{}^K}\,\der{}{{\wwone{K}}}\,{\,\,}{\bar{\smash{\!\!\mathit d}}}\L\right) =\\ (-1)^{r}{X^A} {p_A{}^K}\,\der{}{{\wwone{K}}}\, \left((-1)^r{\wwone{L}}(-1)^{{\tilde B}{\tilde L}}\der{}{{x^B}}\der{}{{p_B{}^L}}\L\right) =\\ {X^A}{p_A{}^K}(-1)^{{\tilde B}{\tilde K}}\der{}{{x^B}}\der{\L}{{p_B{}^K}}; \end{multline} now, \begin{multline} \sigma^{-1}{\,\,}{\bar{\smash{\!\!\mathit d}}} e(X)\L=(-1)^r w_{r+1}^{K^*}(-1)^{{\tilde A}\tilde{K^*}}\der{}{{x^A}}\der{}{{p_A{}^{K^*}}} (e(X)\L)_{\left| \begin{aligned} \scriptstyle w_F^{p+1}&\scriptstyle=0,\quad & \scriptstyle{\ppone{A}}&\scriptstyle=0, \\ \scriptstyle{\wwone{K}}&\scriptstyle=0,\quad & \scriptstyle w_{r+1}^{p+1}&\scriptstyle=1 \end{aligned} \right.} =\\ (-1)^r\left(\der{}{{x^A}}\der{}{{\ppone{A}}}(e(X)\L)\right)_{\left| \scriptstyle p^{p+1}=0, \ w^{p+1}=0\right.} =\left(\der{}{{x^B}}\der{}{{\ppone{A}}}\left( {X^A}\Bigl(p_A^{p+1}\L \Bigr.\right.\right. \\ \Bigl.\left.\left. -(-1)^{{\tilde C}{\tilde K}}{p_A{}^K} p_C^{p+1}\,\der{\L}{{p_C{}^K}} -(-1)^{{\tilde F}{\tilde K}}{p_A{}^K} w_F^{p+1}\,\der{\L}{{w_F{}^K}} \Bigr) \right) \right)_{\left| \scriptstyle p^{p+1}=0,\ w^{p+1}=0\right.} =\\ \der{}{{x^B}}\left( {X^A}(-1)^{{\tilde B}({\tilde A}+{\tilde X})}\left({\d_A{}^B}\L-(-1)^{{\tilde A}{\tilde B}} {p_A{}^K}\,\der{\L}{{p_B{}^K}} \right) \right) =\\ (-1)^{{\tilde B}({\tilde X}+1)}\der{{X^B}}{{x^B}}\,\L + {X^B}\,\der{\L}{{x^B}}\, -\der{{X^A}}{{x^B}}(-1)^{{\tilde B}{\tilde X}}{p_A{}^K}\,\der{\L}{{p_B{}^K}}\, \\ -(-1)^{{\tilde B}{\tilde K}}{X^A} {p_A{}^K} \,\der{}{{x^B}}\der{\L}{{p_B{}^K}}. 
\end{multline} Comparing with~\eqref{ed}, we immediately conclude that \begin{multline}\label{sum} \sigma^{-1}\bigl(e(X)\,{\,\,}{\bar{\smash{\!\!\mathit d}}} + {\,\,}{\bar{\smash{\!\!\mathit d}}} \,e(X)\bigr)\L= \\ (-1)^{{\tilde B}({\tilde X}+1)}\der{{X^B}}{{x^B}}\,\L + {X^B}\,\der{\L}{{x^B}}\, -\der{{X^A}}{{x^B}}(-1)^{{\tilde B}{\tilde X}}{p_A{}^K}\,\der{\L}{{p_B{}^K}}=\d_X \L. \end{multline} Applying $\sigma$ to both sides of~\eqref{sum}, we obtain the desired identity~\eqref{cartan1}. (Notice that $\sigma$ and $\d_X$ commute.) \end{proof} \begin{co}\label{ccart} In the complex of stable forms $\OO^{{\boldsymbol \cdot}|s}$ we have the usual form of ``Cartan's homotopy identity": \begin{equation}\label{cartan2} {\,\,}{\bar{\smash{\!\!\mathit d}}} \,e(X) + e(X)\,{\,\,}{\bar{\smash{\!\!\mathit d}}} = \d_X. \end{equation} \end{co} \section{Discussion}\label{discuss} We introduced the operators $e(u)$ and $e(\a)$ on the space of mixed forms, where $u$ is a vector and $\a$ is a covector. They are analogs of the contraction $u{\lrcorner}\,$ and of the exterior product $\a\wedge\,$ for usual forms on a purely even vector space. Though these operations change only the even part of the degrees, their construction involves all (even and odd) arguments. We proved that these operations are stable, hence they induce the corresponding operations on the space of stable forms. We established the anticommutation relations for the operators $e(u)$ and $e(\a)$. They yield the relations of a super Clifford algebra (or, before stabilization, of such an algebra with an additional central element $\sigma$). It is remarkable that a ``skew-commutative" version of the Clifford relations (anticommutators without parity reversion) rather than the more popular choice of commutators and reversed parity naturally appears here. The main incentive for considering these operators was the necessity to straighten out the Cartan calculus for forms on supermanifolds.
The homotopy identity found in~\cite{tv:git} was valid only for $r|s$-forms with $r>0$; the case $r=0$ had to be mended with the help of an {\it ad hoc} augmentation. The existence of Bernstein-Leites integral forms of negative degree has given another hint of a ``hidden" part of the super {Cartan-de~Rham} complex. This hidden part was discovered in~\cite{tv:dual}. The entire complex (incorporating the positive and negative halves) is made up of stable forms, for which mixed forms are representatives. In the current paper we established the relation between the differential and the operator $e(X)$, where $X$ is a vector field. Again, for mixed forms it contains the element $\sigma$, and after stabilization an analog of the usual form of the homotopy identity is recovered. Thus, the introduction of the stable complex indeed solves the problem. What is next? We need to check the functorial behaviour of stable forms and obtain a ``generalized" version of the homotopy identity, which will imply the homotopy invariance of the complex (note that $\d_X$ in~(\ref{cartan1},\ref{cartan2}) corresponds to an infinitesimal diffeomorphism; we need perturbations of arbitrary maps), hence an analog of the Atiyah-Hirzebruch sequence (cf.~\cite{tv:git}). The investigation of the ``point cohomology" of stable forms will require a more detailed analysis of their algebraic properties. Another topic, which we did not touch here at all, is, of course, integration. We hope to consider these subjects elsewhere. In the paper~\cite{tv:lag}, the author showed that the variational differential can be used to make a complex out of arbitrary Lagrangians of paths, not just forms. It would be interesting to combine this fact with the results of~\cite{tv:dual} and of the current paper.
\section{Introduction} We use $|U|$ to denote the cardinality of a set $U$. Let $G=(V_{G},E_{G})$ be a graph with vertex set $V_{G}$ and edge set $E_{G}$. Let $n_{G}:=|V_{G}|$ and $m_{G}:=|E_{G}|$ be the order and size of $G$, respectively. Denote by $N_{G}(u)$ the set of neighbors of a vertex $u$ and by $d_{G}(u):=|N_{G}(u)|$ the degree of $u$ in $G$. We use $\Delta_{G}$ and $\delta_{G}$ to denote the maximum degree and minimum degree of $G$, respectively, and likewise $\Delta_{\mathcal{L}(G)}$ and $\delta_{\mathcal{L}(G)}$ for the maximum degree and minimum degree of the line graph $\mathcal{L}(G)$. We call $G$ a $\delta$-regular graph if $d_{u}=\delta$ for every $u\in V_{G}$. A $(\Delta,\delta)$-biregular graph is a bipartite graph with $d_{u}=\Delta$ and $d_{v}=\delta$ for every $uv\in E_{G}$. For convenience, we sometimes write $d_{G}(u)$ as $d_{u}$ when no confusion can arise. If $E_{G}\neq \emptyset$, we call $G$ a nontrivial graph; we only consider connected nontrivial graphs in this paper. Denote by $C_{n}$, $K_{n}$, $S_{n}$ and $P_{n}$ the cycle, complete graph, star and path of order $n$, respectively. All notation and terminology used but not defined in this paper can be found in Bondy and Murty \cite{bond2008}. The line graph $\mathcal{L}(G)$ is the graph whose vertex set is the edge set of $G$, two vertices of $\mathcal{L}(G)$ being adjacent if the corresponding edges share a common vertex in $G$. The first and second Zagreb indices \cite{gutr1972} are defined as \begin{eqnarray*} M_{1}(G)=\sum\limits_{uv\in E_{G}}(d_{u}+d_{v})=\sum\limits_{u\in V_{G}}d_{u}^{2},\ \ M_{2}(G)=\sum\limits_{uv\in E_{G}}d_{u}d_{v}. \end{eqnarray*} They are often used to study molecular complexity, chirality, and other chemical properties; see \cite{dehy2007,guda2004,gfvp2015,helz2019} and the references therein.
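To make the definitions above concrete, here is a small illustrative sketch (ours, not from the paper; the helper name and the example graph are chosen purely for illustration) that computes $M_{1}(G)$ and $M_{2}(G)$ from an edge list:

```python
# Illustrative sketch: compute the first and second Zagreb indices
#   M1(G) = sum over vertices u of d_u^2,
#   M2(G) = sum over edges uv of d_u * d_v,
# from an edge list.  The function name is ours, not from the paper.
def zagreb_indices(edges):
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    m1 = sum(d * d for d in deg.values())
    m2 = sum(deg[u] * deg[v] for u, v in edges)
    return m1, m2

# Example: the star S_4 (center 0 with leaves 1, 2, 3) has degree
# sequence (3, 1, 1, 1), so M1 = 9 + 3 = 12 and M2 = 3 * (3 * 1) = 9.
print(zagreb_indices([(0, 1), (0, 2), (0, 3)]))  # (12, 9)
```

Such a direct computation is a convenient sanity check for the degree-based bounds discussed in this paper.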
The first and second general Zagreb indices \cite{brsg2014,lizh2005} are defined as \begin{eqnarray*} M_{1}^{\alpha}(G)=\sum\limits_{u\in V_{G}}d_{u}^{\alpha},\ \ M_{2}^{\alpha}(G)=\sum\limits_{uv\in E_{G}}(d_{u}d_{v})^{\alpha}, \end{eqnarray*} with $\alpha\in R$. The general sum-connectivity index \cite{zhtr2010} is defined as \begin{eqnarray*} \chi_{\alpha}(G)=\sum\limits_{uv\in E_{G}}(d_{u}+d_{v})^{\alpha}, \end{eqnarray*} with $\alpha\in R$. The geometric-arithmetic index (GA) \cite{vufr2009} is defined as \begin{eqnarray*} GA(G)=\sum\limits_{uv\in E_{G}}\frac{2\sqrt{d_{u}d_{v}}}{d_{u}+d_{v}}. \end{eqnarray*} In 2010, Vuki\v{c}evi\'{c} and Ga\v{s}perov proposed the symmetric division deg index (SDD) \cite{vuga2010}, which is defined as \begin{eqnarray*} SDD(G)=\sum\limits_{uv\in E_{G}}(\frac{d_{u}}{d_{v}}+\frac{d_{v}}{d_{u}}). \end{eqnarray*} Since then, the SDD index has attracted much attention from researchers. Furtula et al. \cite{fudg2018} showed that the SDD index attains a correlation coefficient comparable to that of the well-known geometric-arithmetic index, so the applicative potential of SDD is comparable to that of already well-established VDB structure descriptors. Vasilyev \cite{vasi2014} determined lower and upper bounds for the symmetric division deg index in some classes of graphs and determined the corresponding extremal graphs. Das et al. \cite{dama2019} obtained some new bounds for the SDD index and presented relations between the SDD index and other topological indices. Pan et al. \cite{pali2019} determined the extremal SDD indices of trees, unicyclic graphs and bicyclic graphs. They also determined the minimum SDD index of tricyclic graphs \cite{lpli2020}. Ali et al. \cite{alem2020} characterized the graphs with the fifth to ninth minimum SDD indices in the class of all molecular trees. One can refer to \cite{dusu2021,gulo2016,glsr2016,pjic2019,gzam2021,rase2020,sgdu2021} for more details about the SDD index. The relations between the GA index (resp.
AG index, general sum-connectivity index, Harmonic index) and line graphs have been considered in \cite{cgpp2020,cpst2020,ligz2022,pest2019}. We take this line of research further by investigating the SDD index. In this paper, we first determine some sharp bounds on the SDD index of graphs, and then determine some sharp bounds on the SDD index of line graphs. Let $\mathcal{G}_{n}$ be the set of connected nontrivial graphs of order $n$, and $\mathcal{G}_{n,m}$ the set of connected nontrivial graphs of order $n$ and size $m$. \section{Preliminaries} Recall that we only consider connected nontrivial graphs in this paper; we simply write ``graphs'' for connected nontrivial graphs when no confusion arises. \begin{lemma}\label{l2-2} Let $f(x,y)=\frac{x}{y}+\frac{y}{x}$, where the real numbers $a,b$ satisfy $0<a\leq x\leq y\leq b$. Then $2\leq f(x,y)\leq \frac{a}{b}+\frac{b}{a}$, with left equality if and only if $x=y$, and right equality if and only if $x=a, y=b$. \end{lemma} \begin{proof} Consider the binary function $f(x,y)=\frac{x}{y}+\frac{y}{x}$ with $0<a\leq x\leq y\leq b$. Let $g(t)=t+\frac{1}{t}$ $(t\geq 1)$ and set $t=\frac{y}{x}$, so that $f(x,y)=g(t)$. Since $g'(t)=1-\frac{1}{t^{2}}\geq 0$, the function $g(t)$ is monotonically increasing for $t\geq 1$. Since $0<a\leq x\leq y\leq b$, we have $1\leq \frac{y}{x}\leq \frac{b}{a}$. Thus $2=g(1)\leq f(x,y)=g(t)\leq g(\frac{b}{a})=\frac{a}{b}+\frac{b}{a}$, with left equality if and only if $x=y$, and right equality if and only if $x=a, y=b$. \end{proof} \begin{lemma}\label{l2-3}\rm(\cite{ross2018}\rm) Let $G$ be a graph with maximum degree $\Delta$ and minimum degree $\delta$, and let $\alpha>0$. Then $$\frac{\delta^{\alpha}}{2}M_{1}^{\alpha+1}(G)\leq M_{2}^{\alpha}(G)\leq \frac{\Delta^{\alpha}}{2}M_{1}^{\alpha+1}(G), $$ with both equalities holding if and only if $G$ is regular. \end{lemma} \begin{lemma}\label{l2-4}\rm(\cite{pest2019}\rm) Let $G$ be a graph with $G\ncong P_{n}$. Then $m_{G}\leq m_{\mathcal{L}(G)}$.
\end{lemma} \begin{lemma}\label{l2-5}\rm(\cite{cgpp2020}\rm) Let $G$ be a graph. Then $m_{\mathcal{L}(G)}= \frac{1}{2}M_{1}(G)-m_{G}$. \end{lemma} We also need the following simple facts in the proofs of our results. \begin{fact}\label{f2-6}\rm(\cite{ligz2022}\rm) $(i)$ If $\mathcal{L}(G)\cong S_{2}$, then $G\cong P_{3}$; $(ii)$ If $\mathcal{L}(G)\cong S_{3}$, then $G\cong P_{4}$; $(iii)$ If $\mathcal{L}(G)\cong S_{n}$ $(n\geq 4)$, then $G=\emptyset$; $(iv)$ If $\mathcal{L}(G)\cong C_{3}$, then $G\cong C_{3}$ or $S_{4}$; $(v)$ If $\mathcal{L}(G)\cong C_{n}$ $(n\geq 4)$, then $G\cong C_{n}$; $(vi)$ If $\mathcal{L}(G)\cong P_{n}$, then $G\cong P_{n+1}$. \end{fact} \begin{fact}\label{f2-7}\rm(\cite{pest2019}\rm) Let $G$ be a connected nontrivial graph. Then $\mathcal{L}(G)$ is regular if and only if $G$ is regular or biregular. \end{fact} \begin{fact}\label{f2-8}\rm(\cite{ligz2022}\rm) Let $G$ be a connected nontrivial graph with maximum degree $\Delta$ and minimum degree $\delta$. If $e=uv\in E_{G}$, then $e\in V_{\mathcal{L}(G)}$, $d_{\mathcal{L}(G)}(e)=d_{G}(u)+d_{G}(v)-2$ and $\max\{2\delta-2,1\}\leq \delta_{\mathcal{L}(G)}\leq \Delta_{\mathcal{L}(G)}\leq 2\Delta-2$, with left equality if and only if $G$ is $\max\{2\delta-2,1\}$-regular, and right equality if and only if $G$ is $2\Delta-2$-regular. \end{fact} \section{Sharp bounds for the SDD index of graphs} Vasilyev \cite{vasi2014} obtained some bounds for the SDD index of graphs, including the lower bound in the following theorem. \begin{theorem}\label{t3-1} Let $G\in \mathcal{G}_{n,m}$. Then $2m\leq SDD(G)\leq m(n-1+\frac{1}{n-1})$, with left equality if and only if $G$ is regular, and right equality if and only if $G\cong S_{n}$. \end{theorem} \begin{proof} By Lemma \ref{l2-2}, one has $$SDD(G)=\sum\limits_{uv\in E_{G}}(\frac{d_{u}}{d_{v}}+\frac{d_{v}}{d_{u}})\geq 2m,$$ with equality if and only if $G$ is regular.
$$ SDD(G)=\sum\limits_{uv\in E_{G}}(\frac{d_{u}}{d_{v}}+\frac{d_{v}}{d_{u}})\leq \sum\limits_{uv\in E_{G}}(n-1+\frac{1}{n-1})=m(n-1+\frac{1}{n-1}),$$ with equality if and only if $G\cong S_{n}$. \end{proof} Since the number of independent cycles $\eta=m-n+1\geq 0$, we have $m\geq n-1$. Combining this with Theorem \ref{t3-1}, we obtain the following corollary. \begin{corollary}\label{c3-2} Let $G\in \mathcal{G}_{n}$. Then $SDD(G)\geq 2(n-1)$, with equality if and only if $G\cong K_{2}$. \end{corollary} The quantity $ID(G)=\sum\limits_{u\in V_{G}}\frac{1}{d_{u}}$ is called the inverse degree index \cite{fajt1987}. \begin{theorem}\label{t3-3} Let $G\in \mathcal{G}_{n}$ with maximum degree $\Delta$ and minimum degree $\delta$. Then $$ \delta^{2}\cdot ID(G)\leq SDD(G)\leq \Delta^{2}\cdot ID(G),$$ with both equalities holding if and only if $G$ is regular. \end{theorem} \begin{proof} By the definition of the SDD index, \begin{eqnarray*} SDD(G) & = & \sum_{uv\in E_{G}}(\frac{d_{u}}{d_{v}}+\frac{d_{v}}{d_{u}}) \\ & = & \sum_{uv\in E_{G}}(\frac{1}{d_{v}^{2}}+\frac{1}{d_{u}^{2}})d_{u}d_{v} \\ & \geq & \delta^{2}\sum_{uv\in E_{G}}(\frac{1}{d_{v}^{2}}+\frac{1}{d_{u}^{2}}) \\ & = & \delta^{2}\cdot ID(G), \end{eqnarray*} with equality if and only if $G$ is regular. The proof of the upper bound is similar; we omit it. \end{proof} \begin{theorem}\label{t3-4} Let $G$ be a graph with $|E_{G}|=m$, maximum degree $\Delta$ and minimum degree $\delta$. Then $$ SDD(G)\leq m(\frac{\Delta}{\delta}+\frac{\delta}{\Delta}),$$ with equality if and only if $G$ is regular or biregular. \end{theorem} \begin{proof} For each edge $uv\in E_{G}$ we may assume that $1\leq \delta \leq d_{v}\leq d_{u}\leq \Delta$. By Lemma \ref{l2-2}, we have \begin{eqnarray*} SDD(G) & = & \sum_{uv\in E_{G}}(\frac{d_{u}}{d_{v}}+\frac{d_{v}}{d_{u}}) \\ & \leq & \sum_{uv\in E_{G}}(\frac{\Delta}{\delta}+\frac{\delta}{\Delta}) \\ & = & m(\frac{\Delta}{\delta}+\frac{\delta}{\Delta}), \end{eqnarray*} with equality if and only if $d_{u}=\Delta$ and $d_{v}=\delta$ for all $uv\in E_{G}$, i.e., $G$ is regular or biregular.
\end{proof} \begin{corollary}\label{c3-5} Let $G\in \mathcal{G}_{n,m}$ with maximum degree $\Delta\leq n-2$. Then $$ SDD(G)< m(n-2+\frac{1}{n-2}).$$ \end{corollary} \begin{proof} Suppose that $1\leq \delta \leq \Delta \leq n-2$. By Theorem \ref{t3-4}, we have $ SDD(G)\leq m(n-2+\frac{1}{n-2})$ with equality if and only if $G$ is $(n-2,1)$-biregular; but the only connected $(n-2,1)$-biregular graph is the star of order $n-1$, which contradicts the fact that $G$ is a connected graph of order $n$. Thus $ SDD(G)< m(n-2+\frac{1}{n-2})$. \end{proof} \begin{theorem}\label{t3-6} Let $G$ be a graph with $|E_{G}|=m$, maximum degree $\Delta$ and minimum degree $\delta$, and let $\alpha>0$. Then $$ SDD(G)\geq \frac{2\delta^{2}m^{\frac{\alpha+1}{\alpha}}}{(M_{2}^{\alpha}(G))^{\frac{1}{\alpha}}},\ \ SDD(G)\geq \frac{\delta^{2}(2m)^{\frac{\alpha+1}{\alpha}}}{\Delta(M_{1}^{\alpha+1}(G))^{\frac{1}{\alpha}}} $$ with both equalities holding if and only if $G$ is regular. \end{theorem} \begin{proof} By the definition of the SDD index and the arithmetic-geometric mean inequality, we have \begin{eqnarray*} \frac{1}{m}SDD(G) & = & \frac{1}{m}\sum_{uv\in E_{G}}(\frac{d_{u}^{2}+d_{v}^{2}}{d_{u}d_{v}}) \\ & \geq & \left(\prod_{uv\in E_{G}}\frac{d_{u}^{2}+d_{v}^{2}}{d_{u}d_{v}} \right)^{\frac{1}{m}}\\ & \geq & \left( 2^{m}\delta^{2m} \prod_{uv\in E_{G}}\frac{1}{d_{u}d_{v}} \right)^{\frac{1}{m}}, \end{eqnarray*} with first equality if and only if $\frac{d_{u}^{2}+d_{v}^{2}}{d_{u}d_{v}}$ is a constant for any $uv\in E_{G}$, and second equality if and only if $d_{u}=d_{v}=\delta$ for all $uv\in E_{G}$. Thus \begin{eqnarray*} (SDD(G))^{\alpha} & \geq & (2m)^{\alpha}\delta^{2\alpha} \left(\prod_{uv\in E_{G}}(\frac{1}{d_{u}d_{v}})^{\alpha} \right)^{\frac{1}{m}}\\ & \geq & (2m)^{\alpha}\delta^{2\alpha}\cdot \frac{m}{\sum\limits_{uv\in E_{G}}(d_{u}d_{v})^{\alpha}}\\ & = & \frac{2^{\alpha}m^{\alpha+1}\delta^{2\alpha}}{M_{2}^{\alpha}(G)}, \end{eqnarray*} with first equality if and only if $G$ is regular, and second equality if and only if $d_{u}d_{v}$ is a constant for any $uv\in E_{G}$.
Thus $SDD(G)\geq\frac{2\delta^{2}m^{\frac{\alpha+1}{\alpha}}}{(M_{2}^{\alpha}(G))^{\frac{1}{\alpha}}}$ with equality if and only if $G$ is regular. By Lemma \ref{l2-3}, $M_{2}^{\alpha}(G)\leq \frac{\Delta^{\alpha}}{2}M_{1}^{\alpha+1}(G)$ with equality if and only if $G$ is regular. Thus $SDD(G)\geq \frac{\delta^{2}(2m)^{\frac{\alpha+1}{\alpha}}}{\Delta(M_{1}^{\alpha+1}(G))^{\frac{1}{\alpha}}}$ with equality if and only if $G$ is regular. \end{proof} The quantity $F(G)=\sum\limits_{uv\in E_{G}}(d_{u}^{2}+d_{v}^{2})$ is called the forgotten index \cite{fugu2015}. \begin{theorem}\label{t3-7} Let $G$ be a graph with $|E_{G}|=m$. Then $$ SDD(G)\geq \frac{2m^{2}}{M_{2}(G)},\ \ SDD(G)\geq \frac{4m^{2}}{F(G)} $$ with both equalities holding if and only if $G\cong K_{2}$. \end{theorem} \begin{proof} By the Cauchy-Schwarz inequality, \begin{eqnarray*} m & = & \sum_{uv\in E_{G}} \left( \frac{d_{u}d_{v}}{d_{u}^{2}+d_{v}^{2}} \right)^{\frac{1}{2}} \left( \frac{d_{u}^{2}+d_{v}^{2}}{d_{u}d_{v}} \right)^{\frac{1}{2}} \\ & \leq & \left( \sum_{uv\in E_{G}}\frac{d_{u}d_{v}}{d_{u}^{2}+d_{v}^{2}} \right)^{\frac{1}{2}} \left( \sum_{uv\in E_{G}}\frac{d_{u}^{2}+d_{v}^{2}}{d_{u}d_{v}} \right)^{\frac{1}{2}}, \end{eqnarray*} with equality if and only if $\frac{d_{u}^{2}+d_{v}^{2}}{d_{u}d_{v}}$ is a constant for any $uv\in E_{G}$. Since $d_{u}\geq 1$ for any $u\in V_{G}$, we have $\sum\limits_{uv\in E_{G}}\frac{d_{u}d_{v}}{d_{u}^{2}+d_{v}^{2}}\leq \frac{1}{2}\sum\limits_{uv\in E_{G}}d_{u}d_{v}=\frac{1}{2}M_{2}(G)$, with equality if and only if $d_{u}=d_{v}=1$ for any $uv\in E_{G}$. Thus $SDD(G)\geq \frac{2m^{2}}{M_{2}(G)}$ with equality if and only if $G\cong K_{2}$. Since $d_{u}\geq 1$ for any $u\in V_{G}$, we have $\frac{d_{u}d_{v}}{d_{u}^{2}+d_{v}^{2}}\leq \frac{d_{u}^{2}+d_{v}^{2}}{4}$, with equality if and only if $d_{u}=d_{v}=1$ for any $uv\in E_{G}$. Then $\sum\limits_{uv\in E_{G}}\frac{d_{u}d_{v}}{d_{u}^{2}+d_{v}^{2}}\leq \frac{1}{4}\sum\limits_{uv\in E_{G}}(d_{u}^{2}+d_{v}^{2})=\frac{1}{4}F(G)$.
Thus $SDD(G)\geq \frac{4m^{2}}{F(G)}$ with equality if and only if $G\cong K_{2}$. \end{proof} In the following, we consider the connected graphs with minimal SDD index. \begin{theorem}\label{t3-8} Let $G\in \mathcal{G}_{n,m}$. Then $(i)$ $SDD(G)\geq 2$, with equality if and only if $G\cong K_{2}$; $(ii)$ There is no such graph with $2<SDD(G)\leq 4$; $(iii)$ If $4<SDD(G)\leq 6$, then $G\in \{S_{3},C_{3}\}$ with $SDD(S_{3})=5$ and $SDD(C_{3})=6$; $(iv)$ If $6<SDD(G)\leq 8$, then $G\in \{P_{4},C_{4}\}$ with $SDD(P_{4})=7$ and $SDD(C_{4})=8$. \end{theorem} \begin{proof} $(i)$ By Theorem \ref{t3-1}, $SDD(G)\geq 2m\geq 2$, with equality if and only if $G\cong K_{2}$. Now suppose that $n\geq 3$. $(ii)$ If $2<SDD(G)\leq 4$, then $4\leq 2(n-1)\leq 2m\leq SDD(G)\leq 4$, so $n=3$ and $m=2$. Thus $G\cong S_{3}$, while $SDD(S_{3})=5>4$, which is a contradiction. $(iii)$ If $4<SDD(G)\leq 6$, then $4\leq 2(n-1)\leq 2m\leq SDD(G)\leq 6$, so $n=3$ or $4$ and $m\leq 3$. Thus $G\in \{S_{3}, S_{4},P_{4},C_{3}\}$, while $SDD(S_{3})=5$, $SDD(S_{4})=10>6$, $SDD(P_{4})=7>6$, $SDD(C_{3})=6$. Thus $G\in \{S_{3},C_{3}\}$. $(iv)$ If $6<SDD(G)\leq 8$, then $4\leq 2(n-1)\leq 2m\leq SDD(G)\leq 8$, so $n\in\{3,4,5\}$ and $m\leq 4$. If $n=3$ and $m\leq 4$, then $G\in \{S_{3},C_{3}\}$, which contradicts $SDD(G)>6$. If $n=4$ and $m\leq 4$, then $G\in \{S_{4},C_{4},P_{4},C_{3}^{*}\}$, where $C_{3}^{*}$ is the graph obtained from $C_{3}$ by adding a pendent vertex to one vertex of $C_{3}$. $SDD(S_{4})=10>8$, $SDD(P_{4})=7$, $SDD(C_{4})=8$, $SDD(C_{3}^{*})=9+\frac{2}{3}>8$. Thus $G\in \{P_{4},C_{4}\}$ in this case. If $n=5$ and $m\leq 4$, then since $m\geq n-1=4$, we have $m=4$ and $G\in \{P_{5},P_{4}^{*},S_{5}\}$, where $P_{4}^{*}$ is the graph obtained from $P_{4}$ by adding a pendent vertex to one vertex with degree two of $P_{4}$. $SDD(P_{5})=9>8$, $SDD(P_{4}^{*})=11+\frac{1}{3}>8$, $SDD(S_{5})=17>8$. Thus there is no such graph in this case. 
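The specific SDD values invoked in this proof can be checked numerically. The following sketch is not part of the paper's development; the function name sdd and the edge-list encoding of the small graphs are our own illustrative choices, and the computation follows the definition $SDD(G)=\sum_{uv\in E_{G}}\frac{d_{u}^{2}+d_{v}^{2}}{d_{u}d_{v}}$ directly.

```python
# SDD(G) = sum over edges uv of (d_u^2 + d_v^2) / (d_u * d_v).
def sdd(edges):
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return sum((deg[u] ** 2 + deg[v] ** 2) / (deg[u] * deg[v]) for u, v in edges)

S3 = [(0, 1), (0, 2)]                      # star on 3 vertices (= P_3)
C3 = [(0, 1), (1, 2), (2, 0)]              # triangle
P4 = [(0, 1), (1, 2), (2, 3)]              # path on 4 vertices
C4 = [(0, 1), (1, 2), (2, 3), (3, 0)]      # 4-cycle
S5 = [(0, i) for i in range(1, 5)]         # star on 5 vertices

print(sdd(S3), sdd(C3), sdd(P4), sdd(C4), sdd(S5))  # 5.0 6.0 7.0 8.0 17.0
```

The printed values match the constants $SDD(S_{3})=5$, $SDD(C_{3})=6$, $SDD(P_{4})=7$, $SDD(C_{4})=8$ and $SDD(S_{5})=17$ used in the case analysis above.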
\end{proof} The inverse problem for the SDD index is also interesting, thus we propose the following problem. \begin{problem}\label{p3-1} Solve the inverse problem for the SDD index of graphs or chemical graphs. \end{problem} An edge $u_{0}v_{0}\in E_{G}$ is called a minimal edge in $G$ if $d_{u_{0}}\leq d_{u}$ for all $u\in N_{G}(u_{0})\setminus \{v_{0}\}$ and $d_{v_{0}}\leq d_{u}$ for all $u\in N_{G}(v_{0})\setminus \{u_{0}\}$. \begin{theorem}\label{t3-9} Let $G$ be a graph with a minimal edge $u_{0}v_{0}$. Let $G^{*}=G-u_{0}v_{0}$. Then $$ SDD(G^{*})>SDD(G)-\frac{(d_{u_{0}})^{2}+(d_{v_{0}})^{2}}{d_{u_{0}}d_{v_{0}}}.$$ \end{theorem} \begin{proof} Since $G^{*}=G-u_{0}v_{0}$, we have $V_{G}=V_{G^{*}}$. Let $d_{u}$ and $d_{u}^{*}$ denote the degree of a vertex $u$ in $G$ and in $G^{*}$, respectively; then $d_{u_{0}}^{*}=d_{u_{0}}-1$, $d_{v_{0}}^{*}=d_{v_{0}}-1$ and $d_{u}^{*}=d_{u}$ for all $u\in V_{G}\setminus \{u_{0},v_{0}\}$. Let $E_{0}=\{uv\in E_{G}\mid u\notin\{u_{0},v_{0}\},v\notin\{u_{0},v_{0}\}\}\subseteq E_{G}$. Then \begin{eqnarray*} & & SDD(G)-SDD(G^{*}) \\ & = & \sum_{uv\in E_{0}} \left(\frac{d_{u}^{2}+d_{v}^{2}}{d_{u}d_{v}}-\frac{(d_{u}^{*})^{2} +(d_{v}^{*})^{2}}{d_{u}^{*}d_{v}^{*}}\right)+ \sum_{u\in N_{G^{*}}(u_{0})} \left(\frac{d_{u_{0}}^{2}+d_{u}^{2}}{d_{u_{0}}d_{u}}-\frac{(d_{u_{0}}^{*})^{2} +(d_{u}^{*})^{2}}{d_{u_{0}}^{*}d_{u}^{*}}\right)\\ & \quad & +\sum_{u\in N_{G^{*}}(v_{0})} \left(\frac{d_{v_{0}}^{2}+d_{u}^{2}}{d_{v_{0}}d_{u}}-\frac{(d_{v_{0}}^{*})^{2} +(d_{u}^{*})^{2}}{d_{v_{0}}^{*}d_{u}^{*}}\right)+ \frac{d_{u_{0}}^{2}+d_{v_{0}}^{2}}{d_{u_{0}}d_{v_{0}}}\\ & = & \sum_{u\in N_{G^{*}}(u_{0})} \left(\frac{d_{u_{0}}^{2}+d_{u}^{2}}{d_{u_{0}}d_{u}}-\frac{(d_{u_{0}}^{*})^{2} +(d_{u}^{*})^{2}}{d_{u_{0}}^{*}d_{u}^{*}} \right)+\sum_{u\in N_{G^{*}}(v_{0})} \left( \frac{d_{v_{0}}^{2}+d_{u}^{2}}{d_{v_{0}}d_{u}}-\frac{(d_{v_{0}}^{*})^{2} +(d_{u}^{*})^{2}}{d_{v_{0}}^{*}d_{u}^{*}} \right)\\ & \quad & + \frac{d_{u_{0}}^{2}+d_{v_{0}}^{2}}{d_{u_{0}}d_{v_{0}}}. 
\end{eqnarray*} Since $u_{0}v_{0}$ is a minimal edge in $G$, we have $1\leq d_{u_{0}}\leq d_{u}$ for all $u\in N_{G}(u_{0})\setminus \{v_{0}\}$. Since $d_{u}(d_{u_{0}}-1)(d_{u_{0}}^{2}+d_{u}^{2})-d_{u}d_{u_{0}}((d_{u_{0}}-1)^{2}+d_{u}^{2}) =d_{u}(d_{u_{0}}^{2}-d_{u}^{2}-d_{u_{0}})<0$, then $\frac{d_{u_{0}}^{2}+d_{u}^{2}}{d_{u_{0}}d_{u}}-\frac{(d_{u_{0}}^{*})^{2} +(d_{u}^{*})^{2}}{d_{u_{0}}^{*}d_{u}^{*}}<0$. Similarly, we also have $\frac{d_{v_{0}}^{2}+d_{u}^{2}}{d_{v_{0}}d_{u}}-\frac{(d_{v_{0}}^{*})^{2} +(d_{u}^{*})^{2}}{d_{v_{0}}^{*}d_{u}^{*}}<0$. Thus we have $SDD(G^{*})>SDD(G)-\frac{(d_{u_{0}})^{2}+(d_{v_{0}})^{2}}{d_{u_{0}}d_{v_{0}}}$. \end{proof} \section{Sharp bounds for the SDD index of line graphs} It is obvious that $SDD(\mathcal{L}(G))=0$ if and only if $G$ is a trivial graph, i.e., $G\cong K_{2}$. Thus in the following, we suppose $G\ncong K_{2}$. \begin{theorem}\label{t4-1} Let $G$ be a graph with $|E_{G}|=m$. Then $(i)$ If $G\ncong P_{m+1}$, then $SDD(\mathcal{L}(G))\geq 2m$, with equality if and only if $G\in \{S_{4},C_{m}\}$; $(ii)$ If $G\ncong K_{2}$, then $SDD(\mathcal{L}(G))\leq (\frac{1}{2}M_{1}(G)-m_{G})(m_{G}-1+\frac{1}{m_{G}-1})$, with equality if and only if $G\in \{P_{3},P_{4}\}$. \end{theorem} \begin{proof} By Lemma \ref{l2-4}, $m_{\mathcal{L}(G)}\geq m_{G}=n_{\mathcal{L}(G)}$ with equality if and only if $\mathcal{L}(G)$ is a unicyclic graph. By Theorem \ref{t3-1}, $SDD(\mathcal{L}(G))\geq 2m_{\mathcal{L}(G)}\geq 2m$, with equality if and only if $\mathcal{L}(G)$ is a regular unicyclic graph, i.e., $\mathcal{L}(G)\cong C_{m}$, and then $G\in \{S_{4},C_{m}\}$. By Fact \ref{f2-6}, Theorem \ref{t3-1} and Lemma \ref{l2-5}, we have that if $G\ncong K_{2}$, then $SDD(\mathcal{L}(G))\leq (\frac{1}{2}M_{1}(G)-m_{G})(m_{G}-1+\frac{1}{m_{G}-1})$, with equality if and only if $G\in \{P_{3},P_{4}\}$. 
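The equality cases in part $(i)$ of Theorem \ref{t4-1} can also be observed numerically. The following sketch is purely illustrative (the helper names sdd and line_graph are our own): it builds $\mathcal{L}(G)$ from an edge list, using the fact that the vertices of $\mathcal{L}(G)$ are the edges of $G$, two of them being adjacent if and only if they share an endpoint, and checks that $SDD(\mathcal{L}(S_{4}))$ and $SDD(\mathcal{L}(C_{5}))$ both equal $2m$.

```python
def sdd(edges):
    # SDD index computed directly from the definition
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    return sum((deg[u] ** 2 + deg[v] ** 2) / (deg[u] * deg[v]) for u, v in edges)

def line_graph(edges):
    # vertices of L(G) are the edges of G; two are adjacent iff they share an endpoint
    es = [frozenset(e) for e in edges]
    return [(i, j) for i in range(len(es)) for j in range(i + 1, len(es))
            if es[i] & es[j]]

S4 = [(0, 1), (0, 2), (0, 3)]                     # star on 4 vertices, m = 3
C5 = [(i, (i + 1) % 5) for i in range(5)]         # 5-cycle, m = 5
print(sdd(line_graph(S4)), sdd(line_graph(C5)))   # 6.0 10.0, i.e. 2m in both cases
```

Here $\mathcal{L}(S_{4})\cong C_{3}$ and $\mathcal{L}(C_{5})\cong C_{5}$, so the bound $SDD(\mathcal{L}(G))\geq 2m$ is attained, as the theorem predicts.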
\end{proof} Combining Theorem \ref{t3-3} and Fact \ref{f2-6}, we have \begin{theorem}\label{t4-2} Let $G$ be a graph with maximum degree $\Delta$ and minimum degree $\delta$. If $G\ncong K_{2}$, then $$\max\{4(\delta-1)^{2},1\}\cdot ID(\mathcal{L}(G))\leq SDD(\mathcal{L}(G))\leq 4(\Delta-1)^{2}\cdot ID(\mathcal{L}(G)),$$ with left equality if and only if $G$ is regular or $G\cong S_{3}$, right equality if and only if $G$ is regular. \end{theorem} \begin{theorem}\label{t4-3} Let $G$ be a graph with $|E_{G}|=m$, maximum degree $\Delta$ and minimum degree $\delta$. If $G\ncong K_{2}$, then $$M_{1}(G)-2m\leq SDD(\mathcal{L}(G))\leq \frac{1}{2}(M_{1}(G)-2m)\left( \frac{2\Delta-2}{\max\{ 2\delta-2,1\}}+ \frac{\max\{ 2\delta-2,1\}}{2\Delta-2} \right),$$ with left equality if and only if $G$ is regular or biregular, right equality if and only if $G\cong P_{4}$ or $G$ is regular. \end{theorem} \begin{proof} By Lemma \ref{l2-5} and Theorem \ref{t3-1}, we have $SDD(\mathcal{L}(G))\geq 2m_{\mathcal{L}(G)}=M_{1}(G)-2m$, with equality if and only if $\mathcal{L}(G)$ is regular, i.e., $G$ is regular or biregular. By Theorem \ref{t3-4}, Lemma \ref{l2-2}, Lemma \ref{l2-5} and Fact \ref{f2-8}, we have \begin{eqnarray*} SDD(\mathcal{L}(G)) & \leq & m_{\mathcal{L}(G)} \left( \frac{\Delta_{\mathcal{L}(G)}}{\delta_{\mathcal{L}(G)}}+\frac{\delta_{\mathcal{L}(G)}} {\Delta_{\mathcal{L}(G)}} \right) \\ & = & \frac{1}{2}(M_{1}(G)-2m) \left( \frac{\Delta_{\mathcal{L}(G)}}{\delta_{\mathcal{L}(G)}}+\frac{\delta_{\mathcal{L}(G)}} {\Delta_{\mathcal{L}(G)}} \right) \\ & \leq & \frac{1}{2}(M_{1}(G)-2m)\left( \frac{2\Delta-2}{\max\{ 2\delta-2,1\}}+ \frac{\max\{ 2\delta-2,1\}}{2\Delta-2} \right), \end{eqnarray*} with first equality if and only if $\mathcal{L}(G)$ is regular or biregular, second equality if and only if $\delta_{\mathcal{L}(G)}=\max\{ 2\delta-2,1\}$ and $\Delta_{\mathcal{L}(G)}=2\Delta-2$. If $G\cong P_{4}$ or $G$ is regular, then the equality holds. 
Indeed, $SDD(\mathcal{L}(P_{4}))=SDD(S_{3})=5=\frac{1}{2}(10-2\times 3)(\frac{2\times2-2}{1}+\frac{1}{2\times2-2})$, and for regular $G$, $SDD(\mathcal{L}(G))=2m_{\mathcal{L}(G)}=M_{1}(G)-2m=\frac{1}{2}(M_{1}(G)-2m)\left( \frac{2\Delta-2}{\max\{ 2\delta-2,1\}}+ \frac{\max\{ 2\delta-2,1\}}{2\Delta-2} \right)$. In the following, we suppose that $\mathcal{L}(G)$ is regular or biregular, $\delta_{\mathcal{L}(G)}=\max\{ 2\delta-2,1\}$ and $\Delta_{\mathcal{L}(G)}=2\Delta-2$. {\bf Case 1}. $\delta=1$. Then $\delta_{\mathcal{L}(G)}=\max\{ 2\delta-2,1\}=1$. Since $G\ncong K_{2}$, then $\Delta\geq 2$ and $\Delta_{\mathcal{L}(G)}=2\Delta-2\geq 2$. Then $\mathcal{L}(G)$ is a $(\Delta_{\mathcal{L}(G)},1)$-biregular graph. Thus $\mathcal{L}(G)\cong S_{\Delta_{\mathcal{L}(G)}+1}$ with $n_{\mathcal{L}(G)}=\Delta_{\mathcal{L}(G)}+1\geq 3$. By Fact \ref{f2-6}, we have $G\cong P_{4}$. {\bf Case 2}. $\delta\geq 2$. In this case, $\mathcal{L}(G)$ is regular or biregular, and $2\delta-2= \delta_{\mathcal{L}(G)}\leq \Delta_{\mathcal{L}(G)}=2\Delta-2$. If $\mathcal{L}(G)$ is biregular, we have $\delta_{\mathcal{L}(G)}< \Delta_{\mathcal{L}(G)}$, hence $\delta<\Delta$, which contradicts the definition of biregular graphs together with $2\delta-2= \delta_{\mathcal{L}(G)}\leq \Delta_{\mathcal{L}(G)}=2\Delta-2$. Then $\mathcal{L}(G)$ is regular, thus $2\delta-2= \delta_{\mathcal{L}(G)}= \Delta_{\mathcal{L}(G)}=2\Delta-2$. Thus $G$ is regular in this case. \end{proof} In the following, we consider the Nordhaus-Gaddum-type results for the SDD index of a graph $G$ and its line graph $\mathcal{L}(G)$. \begin{corollary}\label{c4-4} Let $G$ be a graph with maximum degree $\Delta$ and minimum degree $\delta$. If $G\ncong K_{2}$, then $$M_{1}(G)\leq SDD(G)+SDD(\mathcal{L}(G))\leq \frac{1}{2}M_{1}(G)\left( \frac{2\Delta-2}{\max\{ 2\delta-2,1\}}+ \frac{\max\{ 2\delta-2,1\}}{2\Delta-2} \right),$$ with both equalities if and only if $G$ is regular. 
\end{corollary} \begin{proof} Combining Theorem \ref{t3-1} and Theorem \ref{t4-3}, we have $SDD(G)+SDD(\mathcal{L}(G))\geq M_{1}(G)$, with equality if and only if $G$ is regular. For the upper bound of $SDD(G)+SDD(\mathcal{L}(G))$, we consider the following two cases. {\bf Case 1}. $\delta=1$. Then $G$ is not a regular graph (since $G\ncong K_{2}$), thus $\Delta\geq 2$. By Theorem \ref{t3-4} and Theorem \ref{t4-3}, we have \begin{eqnarray*} & & SDD(G)+SDD(\mathcal{L}(G)) \\ & < & m \left( \frac{\Delta^{2}+1}{\Delta} \right)+ \frac{1}{2}(M_{1}(G)-2m)\left( \frac{4(\Delta-1)^{2}+1}{2(\Delta-1)} \right)\\ & = & \frac{1}{4}M_{1}(G)\left( \frac{4(\Delta-1)^{2}+1}{\Delta-1} \right)+m \left( \frac{\Delta^{2}+1}{\Delta}-\frac{4(\Delta-1)^{2}+1}{2(\Delta-1)} \right) \\ & \leq & \frac{1}{4}M_{1}(G)\left( \frac{4(\Delta-1)^{2}+1}{\Delta-1} \right). \end{eqnarray*} {\bf Case 2}. $\delta\geq 2$. It is easy to prove that $\frac{\Delta^{2}+\delta^{2}}{\Delta\delta}\leq \frac{(\Delta-1)^{2}+(\delta-1)^{2}}{(\Delta-1)(\delta-1)}$. Then \begin{eqnarray*} & & SDD(G)+SDD(\mathcal{L}(G)) \\ & \leq & m \left( \frac{\Delta^{2}+\delta^{2}}{\Delta\delta} \right)+ \frac{1}{2}(M_{1}(G)-2m)\left( \frac{(\Delta-1)^{2}+(\delta-1)^{2}}{(\Delta-1)(\delta-1)} \right)\\ & \leq & m \left( \frac{(\Delta-1)^{2}+(\delta-1)^{2}}{(\Delta-1)(\delta-1)} \right)+ \frac{1}{2}(M_{1}(G)-2m)\left( \frac{(\Delta-1)^{2}+(\delta-1)^{2}}{(\Delta-1)(\delta-1)} \right) \\ & = & \frac{1}{2}M_{1}(G)\left( \frac{(\Delta-1)^{2}+(\delta-1)^{2}}{(\Delta-1)(\delta-1)} \right), \end{eqnarray*} with equality if and only if $G$ is regular. Thus we have $SDD(G)+SDD(\mathcal{L}(G))\leq \frac{1}{2}M_{1}(G)\left( \frac{2\Delta-2}{\max\{ 2\delta-2,1\}}+ \frac{\max\{ 2\delta-2,1\}}{2\Delta-2} \right)$, with equality if and only if $G$ is regular. \end{proof} \begin{theorem}\label{t4-5} Let $G$ be a graph with $|E_{G}|=m$, maximum degree $\Delta$ and minimum degree $\delta$. 
Then $$SDD(\mathcal{L}(G))\geq \frac{\Delta \delta^{2}\chi_{\alpha+1}(G)}{(\Delta-1)^{2}(\chi_{\alpha}(G))^{\frac{1}{\alpha}}},$$ with equality if and only if $G$ is regular. \end{theorem} \begin{proof} It is obvious that the conclusion holds for $\delta=1$. In the following, we consider $\delta\geq 2$. By Fact \ref{f2-8}, Lemma \ref{l2-5} and Theorem \ref{t3-6}, we have $$ SDD(\mathcal{L}(G))\geq \frac{(\delta_{\mathcal{L}(G)})^{2}(2m_{\mathcal{L}(G)})^{\frac{\alpha+1}{\alpha}}} {\Delta_{\mathcal{L}(G)}(M_{1}^{\alpha}(\mathcal{L}(G)))^{\frac{1}{\alpha}}} \geq \frac{2(\delta-1)^{2}(M_{1}(G)-2m)^{\frac{\alpha+1}{\alpha}}} {(\Delta-1)(M_{1}^{\alpha}(\mathcal{L}(G)))^{\frac{1}{\alpha}}}, $$ with equality if and only if $\mathcal{L}(G)$ is $(2\Delta-2)$-regular. By \cite{ligz2022}, $M_{1}^{\alpha}(\mathcal{L}(G))\leq (\frac{\Delta-1}{\Delta})^{\alpha}\chi_{\alpha}(G)$ for $\alpha>0$, with equality if and only if $G$ is $\Delta$-regular. Thus $SDD(\mathcal{L}(G))\geq \frac{\Delta \delta^{2}\chi_{\alpha+1}(G)}{(\Delta-1)^{2}(\chi_{\alpha}(G))^{\frac{1}{\alpha}}}$, with equality if and only if $G$ is regular. \end{proof} \begin{theorem}\label{t4-6} Let $G$ be a graph with $|E_{G}|=m$ and maximum degree $\Delta$. If $G\ncong K_{2}$, then $$SDD(\mathcal{L}(G))>\frac{\Delta^{3}(M_{1}(G)-2m)^{2}}{(\Delta-1)^{3}\chi_{3}(G)}.$$ \end{theorem} \begin{proof} Since $\frac{d_{u}+d_{v}-2}{d_{u}+d_{v}}\leq \frac{\Delta-1}{\Delta}$ for any adjacent vertices $u,v\in V_{G}$, with equality if and only if $d_{u}=d_{v}=\Delta$, we have $d_{\mathcal{L}(G)}(uv)=d_{u}+d_{v}-2\leq (d_{u}+d_{v})\frac{\Delta-1}{\Delta}$. Hence $F(\mathcal{L}(G))=\sum\limits_{uv\in V_{\mathcal{L}(G)}}(d_{\mathcal{L}(G)}(uv))^{3} \leq (\frac{\Delta-1}{\Delta})^{3}\sum\limits_{uv\in E_{G}}(d_{u}+d_{v})^{3}=(\frac{\Delta-1}{\Delta})^{3}\chi_{3}(G)$, with equality if and only if $G$ is $\Delta$-regular. 
Then by Lemma \ref{l2-5} and Theorem \ref{t3-7}, we have $$ SDD(\mathcal{L}(G))\geq \frac{4(m_{\mathcal{L}(G)})^{2}}{F(\mathcal{L}(G))} = \frac{(M_{1}(G)-2m)^{2}}{F(\mathcal{L}(G))}\geq \frac{\Delta^{3}(M_{1}(G)-2m)^{2}}{(\Delta-1)^{3}\chi_{3}(G)}, $$ with first equality if and only if $\mathcal{L}(G)\cong P_{2}$, i.e., $G\cong P_{3}$, and second equality if and only if $G$ is regular. Since these two equality conditions cannot hold simultaneously, the inequality is strict, i.e., $SDD(\mathcal{L}(G))>\frac{\Delta^{3}(M_{1}(G)-2m)^{2}}{(\Delta-1)^{3}\chi_{3}(G)}$. \end{proof} In the following, we consider the line graphs with minimal SDD index. Combining Fact \ref{f2-6} and Theorem \ref{t3-8}, we have the following result. The proof of Theorem \ref{t4-7} is similar to that of Theorem \ref{t3-8}, so we omit it. \begin{theorem}\label{t4-7} Let $G$ be a graph with $G\ncong K_{2}$. Then $(i)$ $SDD(\mathcal{L}(G))\geq 2$, with equality if and only if $G\cong P_{3}$; $(ii)$ There is no such graphs with $2<SDD(\mathcal{L}(G))\leq 4$; $(iii)$ If $4<SDD(\mathcal{L}(G))\leq 6$, then $G\in \{P_{4},C_{3},S_{4}\}$ with $SDD(\mathcal{L}(P_{4}))=5$ and $SDD(\mathcal{L}(C_{3}))=SDD(\mathcal{L}(S_{4}))=6$; $(iv)$ If $6<SDD(\mathcal{L}(G))\leq 8$, then $G\in \{P_{5},C_{4}\}$ with $SDD(\mathcal{L}(P_{5}))=7$ and $SDD(\mathcal{L}(C_{4}))=8$. \end{theorem} The inverse problem for the SDD index of line graphs $\mathcal{L}(G)$ is also interesting, thus we propose the following problem. \begin{problem}\label{p4-1} Solve the inverse problem for the SDD index of line graphs $\mathcal{L}(G)$. \end{problem} \begin{theorem}\label{t4-8} Let $G\in \mathcal{G}_{n}$ with maximum degree $\Delta$ and minimum degree $\delta$, and $G\ncong K_{2}$. Then $$\frac{SDD(\mathcal{L}(G))}{SDD(G)}\leq \frac{\Delta^{2}-\delta}{4\delta} \left( \frac{4(\Delta-1)^{2}+(\max\{ 2\delta-2,1\})^{2}}{(\Delta-1)\cdot \max\{ 2\delta-2,1\}} \right),$$ with equality if and only if $G$ is regular. 
\end{theorem} \begin{proof} We know that $G\ncong K_{2}$, $\Delta_{\mathcal{L}(G)}\leq 2\Delta-2$, $\delta_{\mathcal{L}(G)}\geq \max\{ 2\delta-2,1\}$. By \cite{ligz2022}, $M_{1}(G)\leq \frac{2m\Delta^{2}}{\delta}$, with equality if and only if $G$ is regular. By Theorem \ref{t3-4} and Lemma \ref{l2-2}, we have \begin{eqnarray*} SDD(\mathcal{L}(G)) & \leq & m_{\mathcal{L}(G)} \left( \frac{\Delta_{\mathcal{L}(G)}}{\delta_{\mathcal{L}(G)}}+\frac{\delta_{\mathcal{L}(G)}} {\Delta_{\mathcal{L}(G)}} \right) \\ & = & \frac{1}{2}(M_{1}(G)-2m) \left( \frac{\Delta_{\mathcal{L}(G)}}{\delta_{\mathcal{L}(G)}}+\frac{\delta_{\mathcal{L}(G)}} {\Delta_{\mathcal{L}(G)}} \right) \\ & \leq & \frac{1}{2}(M_{1}(G)-2m)\left( \frac{2\Delta-2}{\max\{ 2\delta-2,1\}}+ \frac{\max\{ 2\delta-2,1\}}{2\Delta-2} \right) \\ & \leq & (\frac{m\Delta^{2}}{\delta}-m)\left( \frac{2\Delta-2}{\max\{ 2\delta-2,1\}}+ \frac{\max\{ 2\delta-2,1\}}{2\Delta-2} \right) \\ & = & 2m(\frac{\Delta^{2}-\delta}{2\delta})\left( \frac{2\Delta-2}{\max\{ 2\delta-2,1\}}+ \frac{\max\{ 2\delta-2,1\}}{2\Delta-2} \right) \\ & \leq & SDD(G)\cdot (\frac{\Delta^{2}-\delta}{2\delta})\left( \frac{2\Delta-2}{\max\{ 2\delta-2,1\}}+ \frac{\max\{ 2\delta-2,1\}}{2\Delta-2} \right) , \end{eqnarray*} with equality if and only if $G$ is regular. \end{proof} \section{Conclusions} In a recent paper \cite{fudg2018}, Furtula et al. showed that the quality of the SDD index exceeds that of some more popular VDB indices, in particular that of the GA index. They also showed a close connection between the SDD index and the earlier well-established GA index. Thus it is meaningful and important to consider the chemical and mathematical properties of the SDD index. Liu et al. \cite{lpli2020} determined the minimum and second minimum SDD index of tricyclic graphs. Moreover, using an approach similar to that of \cite{deeb2018,lhua2022}, one can also determine the minimum and second minimum SDD index of tetracyclic (chemical) graphs. \baselineskip=0.20in
\section{Introduction} Echo State Networks (ESN) are neural networks designed for performing complex non-linear regression or classification tasks, such as non-linear time-series forecasting \cite{jaeger2001short, jaeger2004harnessing}. As an instance of a more general framework called reservoir computing \cite{lukovsevivcius2009survey}, the ESN architecture is based on a randomly connected recurrent neural network, called reservoir, which is driven by a temporal input. The state of the reservoir is a rich representation of the history of the inputs \cite{buonomano1995temporal}, so that a simple linear combination of the reservoir neurons is often a good predictor of the future of the inputs. The computation of the output connections can be done explicitly and corresponds to the minimization of the relative entropy between the network and the inputs dynamics \cite{galtier2014relative}, for which the associated gradient descent may be implemented with biologically plausible learning rules \cite{galtier2013biological}. In this paper, we focus on the input-driven reservoir, which may be governed by a variety of dynamical systems beyond random neural networks \cite{dambre2012information}, provided they produce consistent reservoir dynamics for a given input. This condition is of primary importance since its violation systematically leads to irrelevant results. In the original paper \cite{jaeger2001short}, Jaeger has given a condition, which he names \textit{Echo State Property} (ESP), guaranteeing that the network states are consistent. This definition of the ESP and the equivalent formulations manipulate left-infinite input time-series, assuming that the initial condition occurs at $t=-\infty$. In the following, $n$ denotes the number of neurons in the reservoir, $\mathbf{x}(t) \in \mathbb{R}^n$ the state of the reservoir at time $t\in \mathbb{Z}$, and $u(t) \in \mathbb{R}$ the input to the reservoir at time $t\in \mathbb{Z}$. 
The ESP definition can be summarized as \begin{defi}[ESP \cite{jaeger2001short}] A network has the ESP if the network state $\mathbf{x}(t)$ is uniquely determined by any left-infinite input sequence $\{u(t-s) : s \in \mathbb{N}\}$. \label{def: ESP} \end{defi} In other words, it means that the initial condition of the network (at $t = -\infty$) does not influence the trajectory of the states, which corresponds to the property that the input-driven network has a unique global attractor \cite{cheban2004global}. The ESP seems to be important in practice to design efficient reservoirs. Indeed, a network without ESP would have a poor accuracy in the inevitable presence of perturbations or noise: a small perturbation could bring the network to states it has never seen before, destroying the prediction capabilities of the network. Put differently, the network has to have some fading memory so that the initial conditions and perturbations do not impact the accuracy in the long term. A fundamental result is that a bound on the maximum singular value $\eta$ of the network connectivity matrix $\mathbf{J} \in \mathbb{R}^{n \times n}$ can provide the global ESP for every input. More specifically, if the dynamics of the network is governed by \begin{equation} \mathbf{x}_{i}(t+1) = S\left(\sum_{j=1}^n \mathbf{J}_{ij} \mathbf{x}_j(t) + \mathbf{m}_i u(t)\right):= G_i(\mathbf{x}(t),t) \end{equation} where $\mathbf{m} \in \mathbb{R}^n$ is the input matrix, and $S(.)$ is a sigmoid function with unit slope at the origin, then the following result holds: \begin{thm}[\cite{jaeger2001short}] If $\eta<1$, then the global ESP holds for every input. \label{thm: sing val} \end{thm} It is important to observe that the sufficient condition in Theorem \ref{thm: sing val} holds for the largest singular value $\eta$ and not for the largest eigenvalue modulus $\rho$ (also called spectral radius), which are different for most matrices. 
Indeed, as pointed out in \cite{zhang2012nonlinear}, the theory of random matrices gives a relationship between the maximum singular value $\eta$ and the spectral radius $\rho$ of the random matrix $\mathbf{J}$ when the number of neurons tends to infinity. First, using recent results on the empirical spectral distribution of random matrices \cite{tao2010random}, one can show that large random matrices, whose entries are i.i.d. random variables with mean $0$ and finite variance $\frac{\sigma^2}{n}$, have eigenvalues which tend to cover uniformly the disk of radius $\sigma$ as the number of neurons tends to infinity. For these matrices, the non-scaled standard deviation of the weights $\sigma$ is in fact equal to the spectral radius $\rho$. Second, one can use results concerning the right edge of the Marchenko-Pastur convergence \cite{marvcenko1967distribution, geman1980limit, bai2010spectral} to show that $\eta \to 2 \sigma$ when the number of neurons tends to infinity. From this result, as mentioned in \cite{zhang2012nonlinear}, it is clear that the condition on the singular values translates to \begin{thm} When the number of neurons tends to infinity (and with the appropriate scaling of the weights variance by $\frac{1}{n}$) the ESP holds for all inputs if $\rho = \sigma<1/2$. \end{thm} Interestingly, there is here a clear gap between the theoretical sufficient condition $\eta<1$ (i.e $\sigma<1/2$) and the condition $\rho < 1$ (i.e $\sigma < 1$) which seems to be valid in practice \cite{lukovsevivcius2012practical}. Based on the notion of structured singular value and on concepts from control theory \cite{lohmiller1998contraction}, a tighter sufficient condition has been derived involving the computation of the infimum of the maximal singular values of the connectivity matrix for a variety of underlying norms \cite{buehner2006tighter}. 
Despite its improvement over the classical singular value, this criterion is difficult to compute in practice, remains poorly understood from the point of view of random matrix theory, and does not respond to the problem of finding a criterion which depends on the input, as we will discuss below. It is also interesting to mention the recent work \cite{zhang2012nonlinear}, where the concentration of measure phenomenon \cite{ledoux2005concentration} is used to prove that: \begin{thm}[\cite{zhang2012nonlinear}] If $\rho < 1-\epsilon$ for some $\epsilon>0$, then for any $\mathbf{x},\tilde{\mathbf{x}} \in \mathbb{R}^n$, the probability that $||G(\mathbf{x},t)-G(\tilde{\mathbf{x}},t)||>||\mathbf{x}-\tilde{\mathbf{x}}||$ is exponentially small when the number of neurons is large. \label{thm: sigma < 1} \end{thm} This result may seem sufficient to prove the contraction property with high probability, implying the ESP when $\sigma<1$ with high probability. Actually, one must be careful because this result does not imply that $\mathbb{P}[\exists \mathbf{x},\tilde{\mathbf{x}} \in \mathbb{R}^n, \exists t>0, ||G(\mathbf{x},t)-G(\tilde{\mathbf{x}},t)||>||\mathbf{x}-\tilde{\mathbf{x}}||]$ is small, which is a much stronger statement. However, the authors claim that their result shows why choosing $\sigma$ close to but smaller than one is sufficient in practice. In a sense, they argue that networks which do not satisfy the criterion of Theorem \ref{thm: sing val} can still perform well in applications. On the other side, it is also instructive to look for a necessary condition for the ESP. When the spectral radius $\rho$ is larger than one, the trivial null equilibrium of the system with zero input is linearly unstable, and Jaeger has shown that: \begin{thm}[\cite{jaeger2001short}] When $\rho > 1$, the ESP does not hold for the null input. \end{thm} This result is in fact related to the existence of chaotic attractors as shown in \cite{sompolinsky1988chaos}. 
Therefore, there is no hope for an ESP for all inputs beyond $\rho = 1$. However, in practice \cite{lukovsevivcius2012practical}, it may be important to increase $\rho$ above $1$ to improve the ESN performance (to increase the memory for instance). If we want to go beyond $\rho = 1$, we need to drop the requirement to have the ESP for all inputs. It has recently been argued that one can define an ESP with respect to a particular input (or a set of inputs) \cite{manjunath2013echo}. Intuitively, this means that a network driven by an input will not display excessive irregularity if it has the ESP with respect to that input. In \cite{manjunath2013echo}, a bound for the ESP is also provided: \begin{thm}[\cite{manjunath2013echo}] If $\underset{j \to \infty}{\mbox{lim sup}} \sum_{i=-1}^{-j}\Big(C_i - (1+\ln(2))\Big) I(C_i>2) >\frac{\ln(\|\mathbf{J}\|)}{2}$, where $C_i$ is the smallest absolute component of the vector $\mathbf{m} u(i)$ and $I$ is the indicator function, then the network has the ESP with respect to $u$. \end{thm} Intuitively, this bound plays with the saturation of the sigmoid and will be efficient if the inputs are strong enough to drive the network in the saturating regime. Although this is a loose bound, it has the interesting property that the network may have temporarily non-contracting dynamics and still have the ESP. These ideas are clearly related to the fact that stimulating a chaotic system can result in a synchronized non-chaotic response, as shown in the context of random neural networks in \cite{rajan2010stimulus}. In this paper, we aim at contributing to the debate about the ESP using a mean-field approach applied to non-autonomous random neural networks in the large $n$ limit. This theory derives a self-consistent statistical description of the reservoir dynamics unravelling the transition between regularity and irregularity in the network, based on a Lyapunov stability analysis. 
Although brought very recently into the field of echo-state networks by \cite{massar2013mean}, this theoretical approach has a long history, dating back to early works on spin-glass models \cite{sompolinsky1981dynamic,sompolinsky1982relaxational}, followed by applications to random neural network dynamics as in \cite{sompolinsky1988chaos, cessac1994mean, PhysRevLett.69.3717, faugeras2009constructive}. The rigorous justification of this heuristic approach is non-trivial and has been resolved by \cite{arous1995large,moynot2002large,cabana2013large} using large deviations techniques. These mathematical results actually require adding an (arbitrarily) small white-noise perturbation to the reservoir dynamics, in order to be able to use a change of probability formula (e.g. Girsanov Theorem) which is at the heart of the large deviation proof. The rigorous proof of the mean-field equations when this additional noise is removed remains open to our knowledge, but this is not a real problem in the ESN framework since adding such a noise term is actually used in practice as a form of regularization, shown to be equivalent to the classical Tikhonov regularization \cite{bishop1995training}. The network we consider in this paper is a leaky integrator ESN \cite{jaeger2007optimization} defined over a regular graph with degree $ \alpha n$, proportional to $n$. This means that every neuron in the network is only connected to $\alpha n$ other neurons, which is often used in practice to reduce computational complexity. To apply the mean-field theory, we will assume that $n$ goes to infinity, but consider $\alpha \in (0,1]$ to be a constant. The connections between neurons are weighted: we write $\mathbf{J}_{ij}$ the weight from neuron $j$ to neuron $i$. 
The weights are independent random variables satisfying: \begin{equation*} \mathbb{E}(\mathbf{J}_{ij}) = 0 \quad \mbox{and} \quad \mathbb{E}(\mathbf{J}_{ij}^2)= \frac{\sigma^2}{n} <+\infty \end{equation*} This quenched hypothesis excludes any dynamics on the weights: they are kept constant after having been randomly drawn. Given a one-dimensional input time series $u:\{1\cdots T\} \to \mathbb{R}$, the classical neural network discrete dynamics is \begin{equation} \mathbf{x}_{i}(t+1) = (1-l \tau)\mathbf{x}_i(t) + \tau S\left(\sum_{j\to i} \mathbf{J}_{ij} \mathbf{x}_j(t) + \mathbf{m}_i u(t)\right) \label{eq: system} \end{equation} where $\mathbf{x}(t) \in \mathbb{R}^n$ corresponds to the activity of all the neurons in the network at time $t$. The vector of feedforward connections $\mathbf{m} \in \mathbb{R}^n$ is made of i.i.d. random variables satisfying $\mathbb{E}(\mathbf{m}_i)=0$, $\mathbb{E}(\mathbf{m}_i^2) = m^2$. The numbers $l$ and $\tau$ are in $[0,1]$ and control the timescale of the ESN dynamics. The function $S(.)$ is a typical odd sigmoid with $S(0)=0$, $S'(0)=1$, $S'(x)>0$ and $x S''(x)\leq 0$. Note that these conditions imply that $S$ is a 1-Lipschitz function. Actually, the following computations become explicit when a particular choice is made: $S(x)=\text{erf}(\frac{\sqrt{\pi}}{2} x)$ (which satisfies the requirements above). We write $\displaystyle \sum_{j \to i}$ for the summation over the neurons $j$ that are connected (through the graph) to neuron $i$. The paper is organized as follows: in section \ref{sec: mean field}, we derive a mean field theory of driven leaky integrator recurrent neural networks (RNNs) on a regular graph, and we show how it can be used to find the frontier between order and disorder for the network dynamics. Then, in section \ref{sec: local ESP} we show how this can be used to define a computable condition guaranteeing an operational version of the ESP. 
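As an illustration of the dynamics \eqref{eq: system}, the following minimal simulation (not part of the paper; all parameter values, variable names and the chosen input signal are arbitrary illustrative choices) iterates a leaky reservoir on a random graph with in-degree $\alpha n$, weights of variance $\sigma^2/n$, and $S(x)=\mathrm{erf}(\frac{\sqrt{\pi}}{2}x)$, then checks that the activity stays bounded.

```python
import math
import random

random.seed(0)
n, alpha, sigma, m, l, tau, T = 200, 0.5, 0.8, 1.0, 0.5, 0.5, 50
k = int(alpha * n)                        # in-degree of the regular graph

def S(x):                                 # sigmoid S(x) = erf(sqrt(pi)/2 * x)
    return math.erf(math.sqrt(math.pi) / 2 * x)

# quenched weights: each neuron receives k incoming connections of variance sigma^2 / n
J_in = [random.sample(range(n), k) for _ in range(n)]
J = [[random.gauss(0, sigma / math.sqrt(n)) for _ in range(k)] for _ in range(n)]
mvec = [random.gauss(0, m) for _ in range(n)]
u = [math.sin(0.3 * t) for t in range(T)]  # an arbitrary input signal

x = [0.0] * n
for t in range(T - 1):
    x = [(1 - l * tau) * x[i]
         + tau * S(sum(J[i][c] * x[j] for c, j in enumerate(J_in[i]))
                   + mvec[i] * u[t])
         for i in range(n)]

print(max(abs(v) for v in x))             # bounded: |x_i| <= tau / (l * tau) = 2 here
```

Since $|S|<1$, the iteration $|x_i(t+1)|\leq (1-l\tau)|x_i(t)|+\tau$ keeps the activity inside $[-\tau/(l\tau),\tau/(l\tau)]$, which the final print confirms for these parameters.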
\section{Mean-field theory for leaky ESN on regular graphs}\label{sec: mean field} \subsection{Mean-field equations} From the seminal work \cite{sompolinsky1988chaos}, recently extended to the framework of stimulus driven RNN \cite{rajan2010stimulus, massar2013mean}, one can derive a self-consistent equation describing the statistical properties of the reservoir activity in the large $n$ limit, which is known as the mean-field theory. In this section, we present an extension of \cite{massar2013mean} to leaky RNNs on regular graphs. The key idea is to make the assumption that the variables $\mathbf{x}_i(t)$ are i.i.d. and independent of $\mathbf{J}$ and $\mathbf{m}$. This makes it possible to apply the central limit theorem to $\sum_{j\to i} \mathbf{J}_{ij} \mathbf{x}_j(t)$, which can thus be considered as a Gaussian process. When $k = \alpha n \to + \infty$, all the $\mathbf{a}_i(t)=\sum_{j\to i} \mathbf{J}_{ij} \mathbf{x}_j(t) + \mathbf{m}_i u(t)$ for $i \in \{1..n\}$ tend to behave as centered Gaussian variables with variance \begin{equation*} a^2(t) = \mathbb{E}[\mathbf{a}_i(t)^2] = \alpha \sigma^2 \gamma^2(t) + m^2 u(t)^2 \end{equation*} where $\gamma^2(t)$ denotes the variance of $\mathbf{x}_i(t)$ (independent of $i$). The iteration equation $\mathbf{x}_i(t+1)=(1- l \tau) \mathbf{x}_i(t) + \tau S\big(\mathbf{a}_i(t)\big)$ is going to help us derive the mean-field dynamical system describing the variance of the $\mathbf{x}_i$. However, the independence between $\mathbf{x}_i(t)$ and $S(\mathbf{a}_i(t))$ is not guaranteed, so we cannot simply add their variances. 
Nonetheless, we can compute \begin{equation} \gamma^2(t+1) = (1- l \tau)^2 \gamma^2(t) + \tau^2 F\big(a^2(t)\big) + 2\tau(1-l \tau) R(t,t) \label{eq: consistency variances} \end{equation} with \begin{equation} F(z^2) = (2\pi)^{-1/2}\int_\mathbb{R} S^2(zx)e^{-x^2/2}dx= \frac{2}{\pi}\arcsin\left(\frac{\pi z^2}{2+\pi z^2}\right) \label{eq: F} \end{equation} according to the technical result in the appendix of \cite{williams1998computation}, and \begin{equation} R(s,t) = \mathbb{E}[\mathbf{x}_i(s)S\big(\mathbf{a}_i(t)\big)]= (1 - l \tau) R(s-1,t) + \tau Q(s-1,t) \label{eq: R} \end{equation} where \begin{equation} Q(s,t) = \mathbb{E}\big[S\big(\mathbf{a}_i(s)\big)S\big(\mathbf{a}_i(t)\big)\big] \end{equation} Using again the result in \cite{williams1998computation}, we can show that \begin{multline} Q(s,t) = G\Big(C(s,t), \gamma^2(s), \gamma^2(t) \Big)\\ = \frac{2}{\pi} \sin^{-1}\left(\frac{\pi}{2} \frac{\alpha \sigma^2 C(s,t) + m^2 u(s) u(t)}{\sqrt{\big(1 + \frac{\pi}{2}a^2(s)\big)\big(1 + \frac{\pi}{2} a^2(t)\big)}} \right) \end{multline} where \begin{equation} C(s,t) = \mathbb{E}[\mathbf{x}_i(s) \mathbf{x}_i(t)] = (1-l \tau) C(s,t-1) + \tau R(s,t-1) \label{eq: C} \end{equation} The recursive combination of equations \eqref{eq: consistency variances}, \eqref{eq: R} and \eqref{eq: C} provides a consistent description of the global variance of the neurons. A procedure is provided in Algorithm \ref{alg: lambda(sigma)}. \subsection{Order-disorder transition}\label{sec: order - disorder} The consistency equation \eqref{eq: consistency variances} characterizes the transition between order and disorder in the network as a function of the variance of the connections $\sigma^2$ and the sparsity coefficient $\alpha$. We first illustrate this phenomenon in the autonomous case and then discuss its impact in the input driven case. 
\subsubsection{Without input} In the absence of input, the terms $\mathbf{x}_i(t)$ and $S(\mathbf{a}_i(t))$ are independent, and the third term in \eqref{eq: consistency variances} disappears. Thus, let us study the autonomous dynamical system $\gamma^2(t+1) = (1- l \tau)^2\gamma^2(t) + \tau^2 F\big(\alpha \sigma^2 \gamma^2(t)\big)$. Due to the properties of the sigmoid function $S$, the function $F$ is increasing, concave and satisfies $F(0)=0$ and $F'(0)=1$. Therefore, the function $\Psi:x \mapsto (1- l \tau)^2 x + \tau^2 F(\alpha \sigma^2 x)$ is also increasing and concave, so the slope at $0$, denoted $\mu=\Psi'(0)$, is the effective parameter controlling the phase transition, and is given by \begin{equation} \mu = (1- l \tau)^2 + \tau^2 \alpha \sigma^2 \end{equation} This leads to a simple characterization of the behavior of the system for different values of $\mu$: \begin{itemize} \item $\gamma^2(t)$ converges to $\gamma^2_{\infty}=0$ if $\mu < 1$ \item $\gamma^2(t)$ converges to a limit value $\gamma^2_{\infty}>0$ if $\mu > 1$ \end{itemize} In the first situation, $\sigma<\sigma^*=\sqrt{\frac{l}{\alpha}(\frac{2}{\tau} - l)}$, all neuron variables converge to the quiescent state, whereas the network behavior becomes irregular as soon as $\sigma > \sigma^*$. Note that this generalizes the classical results of \cite{sompolinsky1988chaos,cessac1994mean} dealing with the case $\tau = \alpha = l = 1$, a case which is also treated in \cite{hermans2012recurrent}, where stability criteria are established for dynamical systems defining recurrent kernels for infinite-dimensional ESN. \subsubsection{With inputs, largest Lyapunov exponent} When the system is driven by external inputs, the network will never go to a quiescent state. Indeed, it is clear from equation \eqref{eq: consistency variances} that the situation $\gamma^2(t)=0$ will never happen. 
But one should not conclude that the network is always disordered, because it could be strongly locked to the inputs, which is another way of defining the notion of order in such systems. The network will be said to be in order (resp. disorder) when a small perturbation independent of the inputs vanishes (resp. grows) with time. This corresponds to the notion of Lyapunov stability for the input driven system. The largest Lyapunov exponent is below 1 in the case of robustness of the dynamics to small perturbations (order), and above 1 when the dynamics is significantly impacted by small perturbations, as is the case in chaotic systems (disorder). Formally, the largest Lyapunov exponent can be defined as: \begin{equation} \lambda[u]:=\lim_{t\to \infty,\delta(0)\to 0} \left(\frac{\delta^2(t)}{\delta^2(0)}\right)^{1/t} \label{eq: lyapu def} \end{equation} where $\delta(t)$ is a distance at time $t$ between two trajectories of \eqref{eq: system} starting with different initial conditions separated by $\delta(0)$. More precisely, let us define $\delta(t)$ such that $\mathbf{x}_i(t)-\mathbf{x}'_i(t) \sim \mathcal{N}(0,\delta(t)^2)$, where $\mathbf{x}_i$ and $\mathbf{x}'_i$ are two solutions of \eqref{eq: system} starting from two different initial conditions with $\mathbf{x}_i(0)-\mathbf{x}'_i(0) \sim \mathcal{N}(0,\delta(0)^2)$. 
In the situation where $\delta(t)$ is small, we have the following recurrence equation: \begin{equation*} \begin{array}{rcl} &&\mathbf{x}_i(t+1)-\mathbf{x}'_i(t+1)\\ &=& (1-l\tau) \big(\mathbf{x}_i(t)-\mathbf{x}'_i(t) \big) + \tau \left(S\big(\mathbf{a}_i(t)\big) - S\big(\mathbf{a}_i'(t)\big) \right)\\ &=& (1-l\tau) \big(\mathbf{x}_i(t)-\mathbf{x}'_i(t) \big) \\ &+& \tau S'\big(\mathbf{a}_i(t)\big)\left(\sum_{j\to i}\mathbf{J}_{ij} \big(\mathbf{x}_j(t) - \mathbf{x}'_j(t)\big) \right) + o\big(\delta(t)\big)\\ \end{array} \end{equation*} Therefore, one obtains the following relationship on the variances: \begin{equation} \begin{array}{rcl} \delta^2(t+1) &=& (1-l\tau)^2 \delta^2(t) \\ & + & \tau^2 \alpha \sigma^2 \Phi\left(\alpha\sigma^2\gamma^2(t)+ m^2 u(t)^2 \right)\delta^2(t) \\ &+&o(\delta^2(t)) \end{array} \label{eq: lyapu recurrence} \end{equation} with \begin{equation} \Phi(z^2) := (2\pi)^{-1/2}\int S'^2(z x)e^{-x^2/2}dx = \frac{1}{\sqrt{1 + \pi z^2}}. \label{eq: phi} \end{equation} When $\gamma^2(t)$ is obtained by solving iteratively \eqref{eq: consistency variances}, one can find the local Lyapunov exponent: \begin{equation} \lambda(t):=(1-l\tau)^2 + \tau^2 \alpha \sigma^2 \Phi\left(\alpha\sigma^2\gamma^2(t)+ m^2 u(t)^2 \right) \label{eq:local} \end{equation} When $\lambda(t)<1$, local asymptotic stability is ensured and the reservoir tends to be synchronized by the input, whereas when $\lambda(t)>1$, small perturbations are exponentially amplified and the reservoir is likely to enter a chaotic regime. It is natural that this measure depends on time because, for instance in the case $\sigma>1$, synchronized states will only appear during periods when the input is sufficiently large compared to $\sigma$. 
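The closed forms \eqref{eq: F} and \eqref{eq: phi} rest on the Gaussian-integral identities of \cite{williams1998computation}; for the choice $S(x)=\text{erf}(\frac{\sqrt{\pi}}{2}x)$, $S'(x)=e^{-\pi x^2/4}$, they can be checked numerically. The sketch below (a verification of ours, not part of the original derivation) compares both closed forms against Gauss-Hermite quadrature:

```python
import numpy as np
from math import erf, pi, sqrt, asin

# Gauss-Hermite rule: integral of exp(-y^2) f(y) dy ~ sum w_i f(y_i)
nodes, weights = np.polynomial.hermite.hermgauss(80)

def gauss_expect(f):
    # E[f(X)] for X ~ N(0,1), via the substitution x = sqrt(2) y
    return float(np.sum(weights * f(np.sqrt(2.0) * nodes)) / np.sqrt(np.pi))

S = np.vectorize(lambda x: erf(sqrt(pi) / 2 * x))  # sigmoid of the text
Sp = lambda x: np.exp(-pi * x ** 2 / 4)            # its derivative S'(x)

def F_closed(z2):
    return (2 / pi) * asin(pi * z2 / (2 + pi * z2))  # Eq. (F)

def Phi_closed(z2):
    return 1.0 / sqrt(1 + pi * z2)                   # Eq. (phi)

def F_numeric(z2):
    z = sqrt(z2)
    return gauss_expect(lambda x: S(z * x) ** 2)

def Phi_numeric(z2):
    z = sqrt(z2)
    return gauss_expect(lambda x: Sp(z * x) ** 2)
```

Both integrands are smooth Gaussian expectations, so an 80-point rule already matches the closed forms to high accuracy.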
Combining \eqref{eq: lyapu def} and \eqref{eq: lyapu recurrence}, one can define a global finite horizon largest Lyapunov exponent as: \begin{equation} \lambda_T[u]:=\left(\prod_{t=1}^T \lambda(t) \right)^{\frac{1}{T}} \label{eq: lambda T} \end{equation} where $\lambda(t)$ is defined in \eqref{eq:local}. Furthermore, at this stage, one already obtains an important property, showing that adding external input can only stabilize the system. Indeed, since $\Phi \leq 1$ (due to the fact that $|S'|\leq 1$), we always have the following inequality: \begin{equation} \lambda_T \leq \mu \label{eq:inequality} \end{equation} Therefore, if the system without external input is in the ordered phase, namely when $\mu<1$, then it is also in the ordered phase ($\lambda_T[u]<1$) for all inputs. This result supports the fact that, in practice, $\rho<1$ is a sufficient condition for the ESP. \begin{figure*}[t] \centering \includegraphics[width=0.4\textwidth]{3Dlyap_no_input.pdf} \includegraphics[width=0.43\textwidth]{3Dlyap_sinus_input.pdf} \caption{Numerical estimation of the global largest Lyapunov exponent $\Lambda_T$ using Algorithm \ref{alg: lambda(sigma)} as a function of $\tau$ and $\sigma$. \textbf{Left:} Input $u(t)=0$. \textbf{Right:} Input $u(t)=\sin(\omega t)$ with $\omega=0.25$. Other parameters: $T=1000$, $l=\alpha=m=1$. } \label{fig:lyapunov2} \end{figure*} In figure \ref{fig:lyapunov2}, we have applied Algorithm \ref{alg: lambda(sigma)} to estimate $\Lambda_T$ in the case where $u(t)=0$ (left) and where $u(t) = \sin(\omega t)$ (right) for various values of the parameters $\sigma$ and $\tau$. In this figure, one observes that $\Lambda_T$ is an increasing function of $\sigma$, which is a consequence of the fact that both $\gamma(t)$ and $\lambda(t)$ are increasing functions of $\sigma$, and matches the intuition that increasing the disorder level increases the instability of the dynamics. 
The case of null input (left) with $\tau=1$ corresponds to the classical case \cite{PhysRevLett.69.3717}, and displays a kink at $\sigma=1$, whose consequences in terms of information processing have been discussed in \cite{toyoizumi2011beyond}. However, the impact of the leak rate $\tau$ on the Lyapunov exponent has not been studied so far to our knowledge, and reveals an interesting U-shaped behavior indicating that there exists an optimal intermediate value of $\tau$ which minimizes the instability of the system. Our purpose in the present paper is to evaluate the Lyapunov exponent when the system is driven by an external time-series, which is displayed on the right panel of figure \ref{fig:lyapunov2} with $u(t) = \sin(\omega t)$. This figure shows that the overall behavior is similar to the null-input case, with the expected difference that $\sigma$ must be set much larger than one (around 1.6 when $\tau=1$) to observe an exponent $\Lambda_T>1$. Intuitively, the driven system is more stable because the input acts as a time-dependent bias in the sigmoid transfer function, hence reducing its average slope $|S'|$ along a trajectory, and therefore the norm of the Jacobian matrix which controls the local expansion rate. Notice that the quantity $\Phi$ defined in \eqref{eq: phi} corresponds to the average squared slope of $S$, where the average is taken with respect to the Gaussian distribution with the appropriate time-dependent variance \eqref{eq:local}. \section{Local Echo State Property}\label{sec: local ESP} In this section, we discuss in more detail the connection between the Lyapunov exponent $\Lambda_T$ and the ESP. \subsection{Definition} The intuition behind the ESP is that the network should follow a reproducible and robust attractor. If the attractor is not stable, then the output connectivity matrix would be learned on a trajectory which could be different from the trajectory observed during the prediction or test phase, leading to poor accuracy. 
A key element to quantify the stability of the network trajectory is to measure the impact of small perturbations. If these perturbations are amplified over time, then the dynamics is too irregular for good performance: the network is chaotic. Therefore, we define a local version of the ESP which guarantees the robustness of the dynamics to perturbations: \begin{defi}[Local ESP] A driven dynamical system has the local Echo State Property if a small perturbation $\tilde{\mathbf{x}}(t_0) = \mathbf{x}(t_0) + \delta$ applied at time $t_0$ decreases to $0$ in the large time asymptotic limit, namely $||\mathbf{x}(t)-\tilde{\mathbf{x}}(t)||\to 0$ when $t\to \infty$ for $\delta$ sufficiently small. \end{defi} This definition differs from the traditional ESP Definition \ref{def: ESP} in two aspects: first, it deals with perturbations which do not necessarily occur at time $t=-\infty$. This definition only asks the perturbed solution to converge eventually towards the unperturbed solution, whereas the traditional definition asks that the solutions be identical, based on the fact that the perturbation occurred an infinite number of time steps before. This definition is closer to the practical application of ESN where the initial condition corresponds to $t=0$. Second, this definition only guarantees a local stability of the trajectories, asking them to be robust only to small enough perturbations. On the other hand, the traditional ESP requires that even large perturbations leave the trajectory unchanged. Put differently, the traditional ESP guarantees a unique globally stable attractor, whereas the local ESP guarantees local stability of possibly many attractors (which have the same statistical properties). We claim that the local ESP is sufficient for the good behavior of the network for most applications. 
More precisely, the only danger for systems that satisfy the local ESP, and not the traditional global ESP, is when learning is made on one attractor and prediction / test is made on another. In applications, if prediction / test is made immediately after learning, such that we are sure to stay on the same attractor, then the local ESP is sufficient. On the other hand, if the initialization of the prediction / test phase is done randomly, then the network may converge to a different attractor than that explored during learning. In that case, one would expect the performance to be poor. \subsection{Characterization} Measuring the evolution of small perturbations precisely corresponds to computing the largest Lyapunov exponent. Indeed, if $\lambda < 1$ then a small perturbation will eventually vanish and the perturbed solution will converge to the unperturbed solution. Therefore, by construction we have the following quantitative criterion for the local ESP: \begin{thm} If $\lambda[u] < 1$ then the network has the local ESP for the input $u$. \label{thm: lambda < 1} \end{thm} {Some remarks:} \begin{itemize} \item The local ESP can be valid for systems experiencing temporary growth of perturbations, as long as they are followed by a more important decrease. What matters in the definition of the local ESP is the balance of growth and decrease over a long time. \item From the key inequality \eqref{eq:inequality}, we deduce that the local ESP holds for all inputs whenever $\mu<1$. This further supports the practical criterion that a spectral radius below 1 should work for all inputs. \item There is a unique $\sigma_{L}$ such that $\lambda(\sigma=\sigma_L) = 1$, and the local ESP holds for all $\sigma < \sigma_L$. Indeed, we claim first that for any input $u$, the mapping $\sigma^2 \mapsto \lambda[u]$ is increasing. The proof is as follows. The function $F$ is increasing, concave with $F'(0) = 1$. 
Therefore, equation \eqref{eq: consistency variances} shows that $\gamma(t)$ increases sublinearly with $\sigma^2$. Performing a simple change of variable in equation \eqref{eq: phi}, it is easy to see that $\Phi(z^2)$ decreases more slowly than $1/z$ when $z^2$ increases. Therefore, $\sigma^2 \Phi\left(\alpha\sigma^2\gamma^2(t)+ m^2 u(t)^2 \right)$ increases with $\sigma^2$ and so does $\lambda_T$ according to equation \eqref{eq: lambda T}. Finally, one observes that $\lambda(\sigma=0) \leq 1$ and $\lambda(\sigma=+\infty) = + \infty$. \end{itemize} \subsection{Numerical experiments} We now present an algorithm to compute $\lambda[u]$. A dichotomy algorithm, or any zero search algorithm for non-linear functions, could be implemented to find an approximation of $\sigma_L$, but given the cheap computational cost of computing $\lambda[u]$ for any $\sigma$, we will rather perform a grid search in this paper. The algorithm to compute $\lambda_T[u]$ is stated below, for fixed $\sigma, m, \alpha, l, \tau$ and $u$. \begin{algorithm} \caption{Computing $\lambda_T[u]$} \begin{algorithmic}[1] \State $\lambda \gets 1$ \State $\gamma^2 \gets 0$ \State $R, C, \gamma_{\text{hist}} \gets 0 \in \mathbb{R}^k$ \For{t = 1 : T} \For{s = 1 : k-1} \State $C[s] \gets (1-l\tau) C[s+1] + \tau R[s+1]$ \State $R[s+1] \gets (1-l\tau) R[s] + \tau G(C[s],\gamma^2, \gamma_{\text{hist}}[s])$ \EndFor \State $a \gets \alpha \sigma^2 \gamma^2 + m^2 u(t)^2$ \State $\lambda \gets \lambda \big((1 - l \tau)^2 + \tau^2 \alpha \sigma^2 \Phi(a) \big)^{1/T}$ \State $\gamma^2 \gets (1 - l \tau)^2 \gamma^2 + \tau^2 F(a) + 2\tau (1 - l \tau) R[-1]$ \State $\gamma_{\text{hist}}[:-1] \gets \gamma_{\text{hist}}[1:]$ \State $\gamma_{\text{hist}}[-1], C[k] \gets \gamma^2$ \EndFor \State return $\lambda$ \end{algorithmic} \label{alg: lambda(sigma)} \end{algorithm} Note that this algorithm is computationally cheap, in $O(T)$, especially compared to the simulation of the full network. 
\begin{figure}[t] \centering \includegraphics[width=9 cm]{right.pdf} \caption{\textbf{Top:} prediction accuracy (mean-square error on a testing set) of an ESN as $\sigma$ varies from $0$ to $2$. \textbf{Bottom:} the Lyapunov exponent $\Lambda_T$ as a function of $\sigma$. The dashed lines mark the critical value $\sigma^* \simeq 1.57$, which corresponds both to $\Lambda_T =1$ and to the transition to a regime of poor accuracy for the ESN. For the simulations the parameters were $n = 2000$, $\alpha = l = \tau = 1$, $m = 1$ and $T = 2000$. The time-series to predict is a solution of the Mackey-Glass chaotic dynamical system with $\delta_{MG}=18$.} \label{fig: input} \end{figure} \begin{figure}[t] \centering \includegraphics[height=7 cm]{sigmastar_delta_MG.pdf} \includegraphics[height=7 cm]{mackeyglass_MSEtest_deltaMG_sigma.pdf} \caption{Analysis of ESN performance as a function of the delay $\delta_{MG}$ in the Mackey-Glass prediction task. \textbf{Left:} Using Algorithm 1, we were able to compute the value $\sigma^*$ of the weights variance for which $\Lambda_T$ becomes larger than $1$, corresponding to the edge of chaos. The value of $\sigma^*$ depends on the delay parameter $\delta_{MG}$: it appears that increasing $\delta_{MG}$ leads to a smaller value of $\sigma^*$. \textbf{Right:} Mean-square error (testing set) as a function of the variance $\sigma$ for the Mackey-Glass prediction task, for different values of the delay $\delta_{MG}$. This figure confirms the prediction made in the left panel: as indicated by the \textbf{black arrow}, for higher values of $\delta_{MG}$, the value of $\sigma$ where the performance starts to become poorer appears earlier. 
For the simulations the parameters of the MG system were $a=0.2$, $b=0.1$ with a time-step $\delta t=1$, and the ESN parameters were $n = 100$, $\alpha = l = \tau = 1$, $m = 1$, for time-series of length $T = 2000$.} \label{fig: mackeyglass} \end{figure} To show on a numerical example that the local ESP guarantees good accuracy, we have evaluated ESN performance on a prediction task. More precisely, we consider here the classical task of Mackey-Glass (MG) time-series prediction. The MG dynamical system \cite{mackeyglass} is given by the following delayed differential equation: \begin{equation} \dot{u}(t) = - b u(t) + \frac{a u(t-\delta_{MG})}{1+ u(t-\delta_{MG})^{10}} \end{equation} For each time-series, the task is to predict $u(t+1)$ (one-step ahead) given the past $u(1), ..., u(t-1),u(t)$. Training is done on half of the time-series and predictions are made for the other half. For different variances of the recurrent weights, we have plotted the accuracy of an ESN in figure \ref{fig: input} (top). This accuracy corresponds to the quantity $H = \frac{1}{T} \sum_{t=0}^{T-1} \big(u(t+1) - \mathbf{w}'.\mathbf{x}(t)\big)^2$, where $\mathbf{w} \in \mathbb{R}^n$ was computed with the usual Wiener-Hopf solution: $\mathbf{w} = \Big(\sum_{t=0}^{T-1} \mathbf{x}(t).\mathbf{x}(t)'\Big)^{-1}.\Big(\sum_{t=0}^{T-1} \mathbf{x}(t) u(t+1)\Big)$. We see that even for some $\sigma > 1$ the accuracy is good, although the global ESP for all inputs is no longer satisfied. However, the accuracy becomes significantly poorer after a certain critical value of $\sigma$. In figure \ref{fig: input} (bottom), we have plotted the value of the Lyapunov exponent $\Lambda_T$ computed with the algorithm above. We see that it crosses $1$ quite precisely at the critical value $\sigma^*$ for which the error $H$ moves to a regime of much higher values. 
In order to further investigate the link between the local Lyapunov exponent $\Lambda_T$ and ESN performance, we have generated several discrete time-series corresponding to various values of $\delta_{MG} \in \{10,12,14,16,18,20,22\}$ with parameters $a=0.2$ and $b=0.1$. In figure \ref{fig: mackeyglass} (right), ESN performance is measured by the Mean Square Error on a testing set and is displayed as a function of the variance parameter $\sigma$, for various values of the delay $\delta_{MG}$. Good performance is typically achieved for an intermediate range of values of $\sigma$, and one observes that the upper value of this range is smaller for higher values of $\delta_{MG}$, as indicated by the black arrow. We interpret this loss of performance for high values of $\sigma$ as related to the loss of the ESP. If this is indeed the case, then it should be possible to predict this behavior by using Algorithm 1 to compute $\sigma^*$, the value of $\sigma$ for which the local Lyapunov exponent $\Lambda_T$ becomes larger than $1$. As displayed in figure \ref{fig: mackeyglass} (left), $\sigma^*$ is a decreasing function of $\delta_{MG}$, which is perfectly consistent with the above observation. This numerical example illustrates that the theoretical advance presented in this article helps predict and understand the behavior of the performance curve for Echo-State Networks. However, finding the optimal value of all the hyper-parameters, beyond a systematic cross-validation procedure, remains a challenging theoretical problem. \section{Conclusion} In this paper, we have shown that the mean field theory for ESN developed in \cite{massar2013mean} can be, first, extended to leaky integrator networks on regular graphs; and, second, used to compute accurately a condition for the local ESP corresponding to the edge of chaos. 
We argue that the local ESP with respect to the given input is the useful condition to check in many applications, to ensure that the ESN representation is stable to small perturbations. We do not claim that the edge of chaos is always the best regime, but it has been shown that for some applications, typically those requiring a lot of memory, it is optimal \cite{bertschinger2004real}. We believe that the proposed method to assess the local ESP should be systematically used to make sure the ESN has a regular dynamics leading to good accuracy. However, finding the optimal values of the hyper-parameters (e.g. $\sigma, \tau, \alpha$ etc.) for a given supervised learning task necessitates taking into account both the input and the target, which goes beyond the scope of the present approach: we provide a method to compute a bound on these parameters, given the input time-series, to ensure the ESP. The theory has only been detailed for one dimensional inputs, but the extension of this approach to multidimensional inputs is not difficult (see \cite{massar2013mean}). Extending this method to other types of dynamics should be feasible as long as the computation of $F$ and $\Phi$ can be done numerically or conveniently reformulated. Finally, the mean-field approach only deals with the limit of very large networks $n\to \infty$, whereas in practice the aim might be to perform a given task with the smallest possible reservoir to avoid over-fitting issues. Therefore, a further investigation of the finite-size effects around the mean-field limit would be of interest. For instance, a related question has been studied in \cite{wainrib2013optimal}, where it is shown that networks with a variance parameter $\sigma<1$ have a probability to be unstable which is maximal for a specific size of the reservoir. \section*{References} \bibliographystyle{elsarticle-num}
\section{Introduction} The collective properties of stable Pd isotopes (Z=46) have been the focus of several experimental and theoretical studies in the past decades. They have been considered as ``transitional'' nuclei, displaying a character that varies from vibrational to $\gamma$ unstable. Indeed, detailed analyses (see Ref. \cite{kim,giannatiempo1998}) provided a good description of even-even Pd nuclei as pertaining to a region of transition from the vibrational U(5) limit to the $\gamma$-soft O(6) limit of the IBA-2 model \cite{iachello}. This interpretation has recently been questioned in a systematic study of the even mass isotopes of Mo, Ru, Pd, Cd, and Te \cite{garrett2018}. The authors concluded that the existence of low-energy quadrupole vibrations in some of these nuclei must be questioned and that the study of collective states must involve not only electromagnetic observables such as B(E2) values and quadrupole moments, which by definition only sample the charge and/or current distributions, but also other electromagnetic probes that are sensitive to shape coexistence and configuration mixing, such as the electric monopole (E0) transitions. The question of whether Pd nuclei may actually exhibit a nearly-harmonic quadrupole structure has recently been addressed by two experiments involving neutron inelastic scattering, devoted to the study of the structure of the $^{106}$Pd isotope \cite{prados,peters}. In the first one, a characterization of the low-lying excited states up to $\approx$2.4 MeV for spin $\leq6$ was obtained. The level scheme was organized into rotational bands, each characterized by a definite value of $K$. In the second experiment, on the basis of previously measured internal conversion electron data \cite{colvin} and new lifetime data, the strengths of E0 transitions between 2$^+$ states were determined. 
The authors concluded that the extracted monopole transition strength values provide evidence for shape coexistence between the bands with the same $K$ value. The existing data on conversion electrons for the $^{106}$Pd isotope are rather limited and affected by large uncertainties: for instance, two values differing by a factor of $\approx$ 3 are available for the internal conversion coefficient of the $2_3^+\longrightarrow 2_1^+$ transition, thus preventing a definite conclusion on the amount of mixing between these two levels. The aim of the present work is to provide further information to better understand the structure of low-lying levels in the Pd isotopes with N$\sim60$. The E0 transitions between both $0^+$ and $2^+$ states in $^{106}$Pd have been studied via internal conversion electron spectroscopy, performed by means of the apparatus that we have very recently developed \cite{marchini}. The new data, combined with those obtained in the re-analysis of data previously acquired on $^{104}$Pd by the nuclear spectroscopy group in Florence, help to clarify the properties of the $0^+$ and $2^+$ states up to 2.3 MeV in the $^{104,106}$Pd isotopes. \section{Experiment Details} A dedicated experiment to study the structure of $^{106}$Pd at low excitation energy was performed at the INFN Legnaro National Laboratories (LNL) in Italy. The nucleus of interest was populated in the EC-$\beta^+$ decay of $^{106g}$Ag (T$_{1/2}$=24~min) and $^{106m}$Ag (T$_{1/2}$=8~d), produced via the $(p,n)$ reaction on a 3~mg/cm$^2$ thick self-supporting target of $^{106}$Pd (96$\%$ enriched). The 5.5~MeV proton beam was delivered by the LNL Van de Graaff CN accelerator with an average intensity of 200~nA. In order to favor the fast decay activity of $^{106}$Ag, which populates the $0^+$ levels in $^{106}$Pd, measurements have been performed by alternating bombarding and measuring periods of 35~min. 
A 5~min waiting time was inserted to allow the decay of the short-lived $^{108}$Ag (T$_{1/2}$=3.4~min) produced in the $(p,n)$ reaction on the 1\% $^{108}$Pd isotope present in the target. The $^{108}$Ag beta decays mostly (95\%) to the ground state of $^{108}$Cd ($Q(\beta^-)=1650(7)$ keV), increasing the background in the electron spectra. By inserting the above-mentioned waiting time, this background is reduced by a factor $\approx4$ while only 10\% of the $^{106}$Ag activity is lost. The internal conversion electrons emitted in the de-excitation of the states populated in the decay of $^{106g}$Ag were detected by the SLICES spectrometer \cite{marchini}, used for the first time in the present experiment. The SLICES setup utilizes a 6.8 mm thick segmented lithium-drifted silicon detector coupled to a magnetic transport system to guide the electrons around a central photon shield towards the detector. The efficiency of the spectrometer can be optimized by changing the shape of the magnetic transport system components. For the configuration adopted in this experiment, the maximum of the efficiency curve is about 12$\%$ for a transmitted energy of 1~MeV, as shown in Fig. \ref{effi-mos}. The adopted configuration and the related efficiency curve have been studied in detail in Ref. \cite{marchini}. An HPGe detector with an energy resolution of 2.4~keV (FWHM) at 1.3~MeV was used to detect $\gamma$ rays de-exciting the nuclear states. \begin{figure} \includegraphics[width=\columnwidth]{Immagini/effi-mos.pdf} \caption{GEANT4 simulated absolute efficiency curve of SLICES for the detector–source distance of 117 mm and using four magnet clusters. 
For more details, see Ref. \cite{marchini}.} \label{effi-mos} \end{figure} \subsection{RESULTS} \begin{table} \begin{tabular}{ccccc} \hline $J_i^\pi \longrightarrow J_f^\pi$ & $E_{\gamma}$~[keV] & $\alpha_K^{exp.}\cdot10^3$ & $\alpha_K^{th}(E2)\cdot 10^3$ & $\alpha_{K}^{th}(M1)\cdot 10^3$ \\ \hline $2_2^+ \longrightarrow 2_1^+$ & $616$ & $2.97(11)$ & $2.89$ & $2.97$ \\ $2_2^+ \longrightarrow 0_1^+$ & $1128$ & $0.64(9)$ & $0.68$ & \\ $2_3^+ \longrightarrow 2_1^+$ & $1050$ & $1.06(7)$ & $0.79$ & $0.89$ \\ $0_2^+ \longrightarrow 2_1^+$ & $621$ & $2.6(2)$ & $2.8$ & \\ $0_3^+ \longrightarrow 2_1^+$ & $1195$ & $0.71(13)$ & $0.60$ & \\ $0_4^+ \longrightarrow 2_2^+$ & $873$ & $1.23(8)$ & $1.20$ & \\ \hline \end{tabular} \caption{Experimental K-internal conversion coefficients, $\alpha_K$, for transitions in $^{106}$Pd compared with the calculated values from BRICC \cite{bricc}.} \label{alfa} \end{table} \begin{figure} \includegraphics[width=\columnwidth]{Immagini/elettroni106Pd.pdf} \caption{Section of the SLICES energy spectrum; K-conversion lines are labeled. } \label{Si} \end{figure} Conversion electron measurements have been performed to determine K-internal conversion coefficients, $\alpha_K$, and to evaluate the monopole strength, $\rho^2$(E0), of E0 transitions between states having the same spin and parity. The $\alpha_K$ values obtained for the transitions of interest are summarized in Table \ref{alfa}. The agreement between the experimental and the theoretical values for pure E2 transitions is, on the one hand, a test of the reliability of the SLICES apparatus in performing in-beam measurements and, on the other hand, a check of the correct determination of $\alpha_K(2_3^+ \longrightarrow 2_1^+)$. Two different $\alpha_K$ values for this transition are reported in the literature \cite{colvin,farzin}. The value obtained in the present work is in agreement with the one determined in Ref. \cite{colvin}. 
The experimental $\alpha_K(2_3^+ \longrightarrow 2_1^+)$ value, large with respect to the calculated one, suggests the presence of a strong E0 component in this transition. The section of the electron spectrum in the energy range around 1~MeV is shown in Fig. \ref{Si}. The K-conversion electron peak of the $2_3^+ \longrightarrow 2_1^+$ transition is in a clean region of the spectrum. Measurements of internal conversion electrons can also provide information on the $\rho^2$(E0). For a transition between states with $J^+_i = J^+_f = 0$, it is related to the ratio \begin{equation}q^2_{ifj}={I_K(E0; 0_i^+\rightarrow 0_f^+)/ I_K(E2;0_i^+\rightarrow 2_j^+)}\label{q2}\end{equation} between the intensity of the E0 and E2 K-conversion lines de-exciting a given $0_i^+$ level. The E0 strength can be determined via the expression: \begin{equation} \rho^2(E0; J_i^+\rightarrow J_f^+) = q^2_{ifj}(E0/E2) \times \frac{\alpha_K(E2)}{\Omega_K(E0)} \times W_{\gamma}(E2) \end{equation} where $\Omega_K$ is the electronic factor for the K-conversion of the E0 transition obtained from Ref. \cite{bricc}, $\alpha_K$(E2) is the K-conversion coefficient for the E2 transition and W$_{\gamma}$(E2) is the $\gamma$-ray E2 transition probability. In the case of $J_i^+=J_f^+\neq0$, the E0 and E2 transitions in Eq. (\ref{q2}) connect the same initial and final levels. Since the contributions due to the different multipolarities to the same transition are indistinguishable, $q^2_{ifj}(E0/E2)$ is extracted from the internal conversion coefficient, which in the case of mixed E0, E2 and M1 multipolarities has the expression: \begin{equation} \alpha_K={\alpha_K^{th}(M1)+(1+q_{ifj}^2)\cdot \delta^2 \cdot\alpha_K^{th}(E2) \over(1 + \delta^2)} \end{equation} where $\delta$ is the (E2/M1) mixing-ratio, and $\alpha_K^{th}(M1)$, $\alpha_K^{th}(E2)$ are the theoretical values of the internal conversion coefficient from the Band-Raman Internal Conversion Coefficients (BRICC) database \cite{bricc}. 
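For illustration, the mixing relation above can be inverted to extract $q^2$ from a measured conversion coefficient. The short sketch below is our own arithmetic check, using the $2_3^+ \longrightarrow 2_1^+$ values from Table \ref{alfa} and the mixing ratio $\delta = 0.24$ from Table \ref{rho_exp}:

```python
def q2_from_alpha(alpha_exp, delta, alpha_m1, alpha_e2):
    """Invert alpha_K = [alpha_K(M1) + (1 + q^2) delta^2 alpha_K(E2)] / (1 + delta^2)
    to obtain the E0/E2 K-conversion intensity ratio q^2."""
    return ((1 + delta ** 2) * alpha_exp - alpha_m1) / (delta ** 2 * alpha_e2) - 1.0

# 2_3+ -> 2_1+ transition in 106Pd: experimental alpha_K and BRICC values
q2 = q2_from_alpha(alpha_exp=1.06e-3, delta=0.24,
                   alpha_m1=0.89e-3, alpha_e2=0.79e-3)
```

The result is consistent, within rounding of the inputs, with the value $q^2 = 4.2(18)$ listed in Table \ref{rho_exp}.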
The $q^2$(E0/E2) and $\rho^2$(E0) values extracted in the present work are summarized in Table \ref{rho_exp}. The analysis of the $2_2^+ \longrightarrow 2_1^+$ K-electron line was made difficult by the presence of the predominant 616 keV peak due to the K-conversion electrons of the $0_2^+ \longrightarrow 2_1^+$ transition. As a consequence, the obtained $q^2$($2_2^+ \longrightarrow 2_1^+$) value has a large uncertainty. \begin{table*} \begin{tabular}{ccccccccc} \hline \multicolumn{5}{ c }{ } & \multicolumn{2}{c }{$q^2$(E0/E2)}& \multicolumn{2}{ c }{$\rho^2$(E0)$\cdot10^3$} \\ \hline $J_i^\pi \longrightarrow J_f^\pi$ & $E_{\gamma}$~[keV] & $\tau$~[fs] & $\delta(E2/M1)$ & I$_\gamma$ & Present & Previous & Present & Previous \\ \hline $0_2^+ \longrightarrow 0_1^+$ & $1134$&$8400(1900)$& & & $0.166(15)$ & $0.162(7)\footnote[1]{Reference \cite{smallcomb}.}$ & $17(4)$&$16.4(40)\footnotemark[1]$ \\ $0_3^+ \longrightarrow 0_1^+$ & $1706$ &$4000(700)$& &$ 0.857(34)$& $0.09(15)$ & &$2(4)$& $<3\footnotemark[2]$ \\ $0_4^+ \longrightarrow 0_1^+$ & $2001$ &$>1200$& & & $0.124(18)$ & $ $ & $<19$&$ $ \\ $0_4^+ \longrightarrow 0_2^+$ & $867$ &$>1200$& & & $0.22(6)$ & $ $ & $<90$&$ $ \\ $2_2^+ \longrightarrow 2_1^+$ & $616$&$4500(360)$ &$-8.7^{+17}_{-19}$&$0.647(24)$&$0.027(38)$ & & $5(8)$&$ $ \\ $2_3^+ \longrightarrow 2_1^+$ & $1050$&$1900(190)$& $0.24(1)$& $0.853(34)$ & $4.2(18)$ & $5.8(33)\footnotemark[1]$& $26(11)$ & $34(22)\footnote[2]{Reference \cite{peters}.}$\\ \hline \end{tabular} \caption{A comparison between the E0 transition strengths $\rho^2$(E0) and $q^2$(E0/E2) extracted in the present work and in previous analyses. Transition energies, lifetimes of the parent states, multipole mixing ratios $\delta$(E2/M1) and branching ratios I$_\gamma$ are taken from Ref.
\cite{prados}.} \label{rho_exp} \end{table*} \begin{figure} \begin{center} \includegraphics[width=1.4\columnwidth, angle=-90]{Immagini/gamma106.pdf} \end{center} \caption{Portions of the $\gamma$-ray energy spectrum. Some peaks of interest are labeled with level spin and parity. The insets show the regions around 450 keV and 1500 keV, respectively.} \label{Ge} \end{figure} \begin{figure*} \includegraphics[width=\textwidth]{Immagini/scheme-106pd.pdf} \caption{Low-lying level scheme of $^{106}$Pd. The observed $\gamma$ transitions, with the related branching ratios extracted in this experiment, are reported on the arrows.} \label{branching} \end{figure*} The coupling of SLICES with an HPGe detector allows us not only to extract the internal conversion coefficients but also to study in detail the decay scheme of the levels in $^{106}$Pd. An example of the $\gamma$-ray energy spectrum is reported in Fig. \ref{Ge}, where the transitions relevant to this work are shown. The part of the spectrum below 300 keV is dominated by the Compton edge of the 511 keV annihilation transition, which also covers the 512 keV, $2_1^+ \longrightarrow 0_1^+$ transition of $^{106}$Pd. Fig. \ref{branching} shows the level scheme, up to an energy of $\sim 2.3$ MeV. The decay branching ratios reported for each level were obtained in the present work. The level scheme of $^{106}$Pd has also been studied recently in a ($n,n'\gamma$) reaction \cite{prados}, where a number of new transitions are reported. We cannot confirm the existence of the 347~keV, $2_4^+ \longrightarrow 2_3^+$, 352~keV, $2_4^+ \longrightarrow 3_1^+$ and 782~keV, $2_4^+ \longrightarrow 2_2^+$ transitions, since their intensities are below the sensitivity level of this work. As to the peak at 680~keV, it is too intense to be due only to the well-known $2_5^+ \longrightarrow 2_3^+$ and $5_2^+ \longrightarrow 4_3^+$ transitions.
This hints at a possible contribution to this peak from the newly proposed $2_4^+ \longrightarrow 4_1^+$ transition. A 439 keV transition from the $0_4^+$ state to the $2_3^+$ one, not reported in Ref. \cite{prados}, is visible in the $\gamma$ spectrum (see inset in Fig. \ref{Ge}, upper panel). A small peak at an energy of $\sim 1489$ keV has been assigned to the $0_4^+ \longrightarrow 2_1^+$ transition (see inset in Fig. \ref{Ge}, lower panel). Both these transitions were previously reported in Ref. \cite{nndc}. Some years ago, the Florence spectroscopy group performed measurements of internal conversion electrons to investigate E0 transitions in $^{104}$Pd. The deduced E0 strengths are reported in Table \ref{E0_calc}. In the same experiment, $\gamma-\gamma$ coincidences were also measured \cite{bellizzi}. We have now re-analyzed those data in order to gain a deeper insight into the existence of a $0^+_4$ state, reported in Ref. \cite{nndc} at 2103(2) keV. This level has been seen only in a ($p,p'$) reaction and no information is given on its decay properties. In the $\gamma$ spectrum acquired in coincidence with the 786 keV, $2_2^+\longrightarrow2_1^+$ transition, a small peak is visible at 759.3(5)~keV (see Fig. \ref{gate786}). In the singles spectra, this peak is completely dominated by the more intense 758.8 keV, $4_2^+\longrightarrow4_1^+$ transition. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{Immagini/gate_786_new.pdf} \end{center} \caption{Section of the $\gamma$ spectrum gated on the 786 keV, $2_2^+\longrightarrow2_1^+$ transition, showing the region around 750 keV. A small peak is visible at an energy of 759 keV (indicated by the arrow).} \label{gate786} \end{figure} Assuming that the peak corresponds to the $0_4^+\longrightarrow 2_2^+$ transition, the energy of the initial level would be 2101.0(5) keV.
We looked for possible decays from this level to the $2_1^+$ state and to the $0_2^+$, $0_3^+$ ones in the $\gamma$ and electron spectra, respectively. The energies corresponding to the $\gamma$ and E0 transitions would be 1545.2 keV, 743.0(5) keV, and 283.8(5) keV, respectively. A new 1545.2(3) keV transition was indeed identified in Ref. \cite{bellizzi}, but was assigned to the decay from the 2868.7 keV level. The 743 keV E0 transition in our data would be completely covered by the much more intense K-conversion line of the 768 keV, $4_1^+\longrightarrow 2_1^+$ transition, while a small peak at 284 keV is visible in the electron spectrum of Fig. \ref{ele104pd}. Since in the corresponding $\gamma$ spectrum (Fig. \ref{ele104pd}, lower panel) there is no peak at an energy of $\sim 308$ keV (while the peak corresponding to the 289 keV transition from the $4^+$ level at 3158 keV is clearly visible), we tentatively assign E0 multipolarity to the transition and hence spin-parity $0^+$ to the level at an energy of 2101 keV in $^{104}$Pd. \begin{figure} \begin{center} \includegraphics[width=1.4\columnwidth, angle=-90]{Immagini/figura104Pd.pdf} \end{center} \caption{Upper panel: Section of the electron spectrum in the 230 -- 300 keV energy range. The small peak visible at an energy of 284 keV has been assigned to the $0_4^+\longrightarrow 0_3^+$ transition. Lower panel: Portion of the $\gamma$ spectrum in the 250 -- 350 keV energy range. Only the peak corresponding to the 289 keV transition from the $4^+$ level at 3158 keV is visible.} \label{ele104pd} \end{figure} \section{DISCUSSION} \begin{figure*} \includegraphics[width=\textwidth]{Immagini/levels3.pdf} \caption{Low-lying levels in the even-even $^{104,106}$Pd isotopes. The B(E2) transition strengths, normalized to the B(E2; 2$_1^+ \rightarrow$ 0$_1^+$) value, are reported on the arrows. Data are taken from Ref.
\cite{nndc}.} \label{multiphonon} \end{figure*} The interpretation of the low-lying levels in even-mass Pd isotopes is still controversial, and different models have been employed to describe their properties. In the present work, we consider the contribution that the study of E0 transitions can provide to clarify the structure of the low-lying states in $^{104-106}$Pd. The excitation energy pattern of the low-lying levels in the $^{104-106}$Pd isotopes might suggest a vibrational structure, with a triplet of states with $J^\pi=0^+,\,2^+,\,4^+$ whose energy is approximately twice that of the first $2^+$ state. However, the B(E2) values of the transitions from these states to the $2_1^+$ cast some doubt on their vibrational character. The ${B(E2; I^\pi \rightarrow 2_1^+)}$ values for the decays of the two-phonon states ($0^+$,$2^+$,$4^+$) should be identical and twice the ${B(E2; 2^+_1 \rightarrow 0^+_1)}$ value of the one-phonon decay; instead, they differ considerably and are smaller than expected (see Fig. \ref{multiphonon}). The identification of the three-phonon quintuplet is made difficult by the presence of additional levels with $J^\pi=0^+,\,2^+$. They have been considered as intruder states resulting from proton-pair excitations across the Z = 50 shell. Two signatures are commonly given for the identification of intruder states: i) the characteristic V-shape pattern of their excitation energies versus neutron number; ii) the enhanced cross-section for single- and two-nucleon transfer reactions with respect to those between collective states. Low-lying intruder configurations have been studied in even-even Pd isotopes in Refs. \cite{Lhersonneau,wang}. Based on the energy systematics, the 0$^+_3, 2_4^+$ pair of states is suggested to have intruder character until N=60 and again for N=70, while the $0_2^+, 2_3^+$ pair becomes intruder for N=62,64. Within this hypothesis, the V-shaped pattern of the excitation energy is guaranteed.
The interpretation of the $0_3^+$ states as intruder states in the $^{104,106}$Pd isotopes was also supported in Ref. \cite{giannatiempo1998} by the analysis of their decay properties. The only available datum for the ($^3$He,$n$) transfer reaction is an upper limit for the cross-section to the 0$^+_2$ state in the N=58 isotope reported in Ref. \cite{garrett2016}, which is much smaller than the ground-state to ground-state cross-section in $^{104}$Pd and $^{106}$Pd. A detailed analysis of the excitation energy patterns and electromagnetic properties of positive-parity levels in even $^{100-116}$Pd (Z = 46) was performed some years ago \cite{giannatiempo1998} in the framework of the IBA-2 model, which is particularly suitable to study the evolution of an isotopic chain as a function of the neutron number. In that work, all the excitation energies and electromagnetic properties available at the time for the low-lying levels in the even $^{100-116}$Pd isotopes were investigated, with the exception of E0 transitions. An analogous study had also been performed by the same authors on the even $^{98-114}$Ru isotopes \cite{giannatiempoRu}. The parameters were required to vary smoothly along each isotopic chain and among isotones in neighboring isotopic chains. An overall satisfactory agreement was obtained. The conclusion was that the even palladium isotopes could be considered as lying close to a transitional region between the vibrational U(5) limit and the $\gamma$-soft O(6) limit of the IBA-2 model. More recently, a new IBA-2 work has been published \cite{giannatiempo2018}, which uses the parameters of Ref. \cite{giannatiempo1998} to study the large body of new experimental data that has become available over the years on the even Pd isotopic chain. In this analysis, which is mainly centered on the vibrational-$\gamma$ band structure, all the known quadrupole moments were also taken into account. A comparison between the available experimental values (from Ref.
\cite{nndc}) and the values calculated in Ref. \cite{giannatiempo2018} of $Q$, normalized to $Q(2_1^+)$, for the 2$^+_2$ and 4$^+_1$ levels is reported in Fig. \ref{Q_values}. The agreement is good, and the calculations are able to correctly predict the inversion of sign measured for $Q(2_2^+)$. \begin{figure} \includegraphics[width=\columnwidth]{Immagini/q2.pdf} \caption{Experimental values (crosses) of Q for the $2_2, 4_1$ two-phonon candidates, normalized to Q($2_1$), as a function of mass number. Data for the 4$_1^+$ and 2$_2^+$ levels are reported in green and red, respectively. The values are taken from Ref. \cite{giannatiempo2018}.} \label{Q_values} \end{figure} An interpretation very different from that of Refs. \cite{giannatiempo1998, giannatiempo2018} was given in Ref. \cite{garrett2018}. Here, the authors compare the properties of the low-lying levels of $^{102-110}$Pd with the predictions of the harmonic vibrator, underlining that in none of the considered Pd isotopes do the B(E2) values for the decays to the $2_1^+$ state meet the vibrational requirements. The conclusion is that the harmonic spherical vibrator interpretation breaks down already at the two-phonon levels. In particular, in the $^{106}$Pd isotope the authors assign the $0_2^+$ state as the head of an intruder shape-coexisting band (in agreement with Ref. \cite{prados}), while the $2_5^+$ is suggested to be a member of a $\gamma$ band built on the $0_2^+$ state. In order to further clarify to what extent the interpretation of the Pd isotopes in the framework of the IBA-2 model is valid, we performed an analysis of the available experimental data on the E0 transitions between the low-lying states in $^{104-106}$Pd. The analysis has been performed by using the Hamiltonian and the parameters of Ref. \cite{giannatiempo1998}.
The Hamiltonian has been diagonalized in the U$_{\pi,\nu}(5)$ basis, using the NPBOS code \cite{npbos}, which gives in its output the d-boson number components of each state. Excitation energies and E2 and M1 transitions in the $^{104,106}$Pd isotopes have already been investigated in detail in Ref. \cite{giannatiempo1998}. In the present work, we limited the analysis to the monopole strengths between the low-lying 0$^+$ and 2$^+$ levels, which were not previously considered. In the IBA-2 model the E0 transition operator has the expression \cite{iachello}: \begin{equation}\begin{split} \hat{T}(E0)&=\beta_{0\nu}\hat{T}_\nu(E0)+\beta_{0\pi}\hat{T}_\pi(E0) \\ &= \beta_{0\nu}(d^\dagger_\nu \times \tilde{d}_\nu)^{(0)} + \beta_{0\pi}(d^\dagger_\pi \times \tilde{d}_\pi)^{(0)} \end{split} \end{equation} \begin{equation} \begin{split} \rho^2(E0; J_i^+\rightarrow J_f^+)=\frac{Z^2}{e^2R^4}&[\beta_{0\nu} \langle J_f | \hat{T}_\nu(E0) | J_i \rangle \\ &+\beta_{0\pi} \langle J_f | \hat{T}_\pi(E0) | J_i \rangle]^2 \end{split} \end{equation} where R=1.2A$^{1/3}$~fm, and the parameters $\beta_{0\nu}$ and $\beta_{0\pi}$ are expressed in $e$ fm$^2$. One of the biggest difficulties in the study of E0 transitions is related to the lack of systematics on the values of the E0 effective charges, which prevents the definition of a proper range of values. In the present work, in order to evaluate the effective monopole charges, the experimental $\rho^2$(E0) data have been compared with the corresponding theoretical values by performing a standard $\chi^2$ minimization procedure restricted to the range [-1,+1] $e$ fm$^2$. The $\rho^2$(E0) values used in the comparison are marked with asterisks in Table \ref{E0_calc}. We included the $\rho^2(E0; 0_2^+\longrightarrow0_1^+)$ values measured in the isotonic $^{100,102}$Ru nuclei to further constrain the minimization procedure. In the comparison, we assumed that the $0^+$ intruder state is the $0_3^+$ level in $^{104,106}$Pd. In Fig.
\ref{chi2} the contour plot for the normalized $\chi^2$ is reported. The minimum is centered at $\beta_{0\nu}$=0.194~$e$~fm$^2$ and $\beta_{0\pi}$=0.009~$e$~fm$^2$. By using these values for the effective monopole charges we have calculated the $\rho^2$(E0) reported in Table \ref{E0_calc}. \begin{table} \begin{tabular}{cccccc} \hline Nuclide&$J_i^\pi \longrightarrow J_f^\pi$ & $E_{\gamma}$~[keV] & $\rho^2(E0)_{exp}\cdot10^3$ & & $\rho^2(E0)_{calc}\cdot10^3$ \\ \hline $^{104}$Pd & $0_2^+ \longrightarrow 0_1^+$ & $1334$ &$11(2)\footnote[1]{Reference \cite{bellizzi}.}$&*& $10$ \\ $^{104}$Pd & $2_2^+ \longrightarrow 2_1^+$ & $786$ &$5(4)\footnote[2]{Calculated in the present work from the data of Ref. \cite{bellizzi}.}$&*& $1$ \\ \hline $^{106}$Pd & $0_2^+ \longrightarrow 0_1^+$ & $1134$ &$17(4)\footnote[3]{Present work.}$&*& $16$ \\ $^{106}$Pd & $0_4^+ \longrightarrow 0_1^+$ & $2001$ &$<19\footnotemark[3]$& & $0.3$ \\ $^{106}$Pd & $0_4^+ \longrightarrow 0_2^+$ & $867$ &$<90\footnotemark[3]$& & $4$ \\ $^{106}$Pd & $2_2^+ \longrightarrow 2_1^+$ & $616$ &$5(8)\footnotemark[3]$ & & $1$ \\ $^{106}$Pd & $2_3^+ \longrightarrow 2_1^+$ & $1050$ & $26(11)\footnotemark[3]$&*& $28$ \\ $^{106}$Pd & $2_4^+ \longrightarrow 2_1^+$ & $1398$ &$21^{+10}_{-21}\footnote[4]{Reference \cite{smallcomb}.}$& & $0.1$\\ & & &$18^{+10}_{-18}\footnotemark[4]$& \\ $^{106}$Pd & $2_5^+ \longrightarrow 2_2^+$ & $1115$ &$96^{+43}_{-61}\footnotemark[4]$ & & $18$ \\ \hline $^{100}$Ru & $0_2^+ \longrightarrow 0_1^+$ & $1130$ &$10.3(18)\footnote[5]{Reference \cite{kibedi}}$&*& $11.4$ \\ $^{102}$Ru & $0_2^+ \longrightarrow 0_1^+$ & $944$ &$14(3)\footnotemark[5]$&*& $17$ \\ \hline \end{tabular} \caption{Experimental values of $\rho^2$(E0) in $^{104,106}$Pd and $^{100,102}$Ru compared to theoretical ones evaluated using the Hamiltonian parameters from Ref. \cite{giannatiempo1998} and the E0 effective charges $\beta_{0\nu}$=0.194~$e$~fm$^2$, $\beta_{0\pi}$=0.009~$e$~fm$^2$ deduced in the present work. 
The values marked by an asterisk have been used in the $\chi^2$ minimization procedure.} \label{E0_calc} \end{table} Limiting our considerations to the Pd isotopes, we note the agreement between the experimental and calculated $\rho^2$(E0) values for the transitions de-exciting the $0_2^+$ levels, which supports the interpretation of these states as belonging to the IBA-2 model space. Also for the $2_2^+$ levels, we observe that the IBA-2 calculations of the $\rho^2$(E0) values do not contradict the interpretation of these states as belonging to the $n_d=2$ triplet. As regards the fourth experimental 0$^+$ state in $^{106}$Pd, the calculated $\rho^2$(E0; $0_4^+ \longrightarrow 0_1^+$) value is much smaller than the $\rho^2$(E0; $0_4^+ \longrightarrow 0_2^+$) one, as suggested by the experimental limits. A comparison between the experimental and calculated B(E2) values for this state is reported in Table \ref{E2_calc}. The experimental values for the $0_4^+$ state are calculated using the limit on the lifetime recently reported in Ref. \cite{prados} and the branching ratios from the present analysis. It preferentially decays to the 2$^+_2$ state, as expected for a member of the $n_d=3$ quintuplet. For $^{104}$Pd, hints of the existence of a fourth 0$^+$ state at 2101~keV, which preferentially decays to the 2$^+_2$ state, are presented in this work, but no experimental B(E2) values from this level are known. The agreement found between the calculated and experimental $\rho^2$(E0; $2_3^+ \longrightarrow 2_1^+$) values seems to exclude the interpretation of this state as a member of an intruder band. Since all the other electromagnetic properties of this state were also reasonably reproduced by the calculations in Ref. \cite{giannatiempo1998}, we are led to confirm that it lies within the model space.
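The $\chi^2$ minimization over the effective monopole charges described above amounts to a brute-force grid search. A minimal sketch follows; the matrix elements fed to it below are hypothetical placeholders (in the actual analysis they come from the NPBOS wave functions), and $\rho^2$(E0) only fixes the charges up to a global sign:

```python
def rho2_calc(b_nu, b_pi, m_nu, m_pi, Z, A):
    """rho^2(E0) = (Z^2 / R^4) (beta_0nu M_nu + beta_0pi M_pi)^2,
    with R = 1.2 A^(1/3) fm and the charges in e fm^2 (e = 1)."""
    R = 1.2 * A ** (1.0 / 3.0)
    return (Z ** 2 / R ** 4) * (b_nu * m_nu + b_pi * m_pi) ** 2

def fit_charges(data, step=0.01, lo=-1.0, hi=1.0):
    """Grid search over the charge range [-1, +1] e fm^2 minimizing chi^2.
    `data` holds tuples (M_nu, M_pi, Z, A, rho2_exp, sigma)."""
    best = None
    n = int(round((hi - lo) / step))
    for i in range(n + 1):
        b_nu = lo + i * step
        for j in range(n + 1):
            b_pi = lo + j * step
            chi2 = sum(((rho2_calc(b_nu, b_pi, m_nu, m_pi, Z, A) - r) / s) ** 2
                       for (m_nu, m_pi, Z, A, r, s) in data)
            if best is None or chi2 < best[0]:
                best = (chi2, b_nu, b_pi)
    return best

# synthetic self-test: data generated with charges (0.194, 0.009) e fm^2
# and made-up matrix elements should be recovered by the fit
def make_datum(m_nu, m_pi, Z, A):
    r = rho2_calc(0.194, 0.009, m_nu, m_pi, Z, A)
    return (m_nu, m_pi, Z, A, r, 0.1 * r)

data = [make_datum(0.5, 0.1, 46, 106),
        make_datum(0.2, 0.4, 46, 104),
        make_datum(0.3, 0.05, 44, 100)]
chi2, b_nu, b_pi = fit_charges(data)
print(abs(b_nu), abs(b_pi))
```

With noiseless synthetic input the minimum falls on the grid point nearest to the generating charges (up to the overall sign), which is the same consistency check one would apply before trusting the fit to real data.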
We note that the IBA-2 calculations closely reproduce the experimental value of B(E2; $2_3^+ \longrightarrow 4_1^+$), which is not included in the decay scheme proposed in Refs. \cite{prados,garrett2018}. The scenario is different for the $\rho^2$(E0; $2_4^+ \longrightarrow 2_1^+$) and $\rho^2$(E0; $2_5^+ \longrightarrow 2_1^+$) strengths, which are still known with limited precision. In Table \ref{E2_calc} the comparison between the experimental B(E2) and B(M1) values (from Ref. \cite{prados}) and the calculated ones for the 2$^+_4$ and 2$^+_5$ states is reported. For both states, the electromagnetic transition probabilities are known with large uncertainties, so that the comparison with the calculations is not decisive. As a consequence, no definite conclusion can be drawn on the interpretation of the $2_4^+$ as a member of the intruder band built on the 0$^+_3$ state, as suggested in Ref. \cite{wang}. We also note that no evidence of the $2^+_{4} \rightarrow 0^+_{3}$ transition has been reported so far; no such transition was observed in the present work either. Similarly, no definite conclusion can be drawn on the character of the $2_5^+$ state. However, since this state does not decay to the $4_1^+$, while the $2_4^+$ does, it seems preferable to associate the $2_4^+$ with the $2^+$ member of the $n_d=3$ quintuplet and the $2_5^+$ level with a coexisting configuration, as suggested in Ref. \cite{garrett2018}. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{Immagini/chiminimo.pdf}\end{center} \caption{Contour plot of the reduced $\chi^2$ variable, based on the comparison of theoretical and experimental $\rho^2$(E0) values, as a function of the effective monopole charges $\beta_{0\nu}$ and $\beta_{0\pi}$ (in $e$~fm$^2$).
} \label{chi2} \end{figure} \begin{table*} \begin{tabular}{cccccc} \hline $J_i^\pi \longrightarrow J_f^\pi$ & $E_{\gamma}$~[keV] & $B(E2)_{exp}$ & $B(E2)_{calc}$ & $B(M1)_{exp}$ & $B(M1)_{calc}$ \\ \hline $0_4^+ \longrightarrow 2_3^+$ & $439$ &$<32$& $30$ \\ $0_4^+ \longrightarrow 2_2^+$ & $873$ &$<1600$& $1500$ \\ $0_4^+ \longrightarrow 2_1^+$ & $1489$ &$<2$& $3$ \\ $2_4^+ \longrightarrow 2_3^+$ & $347$ &$3600^{+2400}_{-3600}$ & $14$ \\ $2_4^+ \longrightarrow 4_1^+$ & $680$ &$155^{+9}_{-8}$ & $240$ \\ $2_4^+ \longrightarrow 0_2^+$ & $776$ & $60^{+36}_{-30}$& $515$ \\ $2_4^+ \longrightarrow 2_2^+$ & $782$ & $13^{+12}_{-8}$& $230$& $0.13^{+0.29}_{-0.10}$ & $7 \cdot 10^{-4}$ \\ $2_4^+ \longrightarrow 2_1^+$ & $1397$ &$0.4^{+0.4}_{-0.2}$& $2$& $7.3^{+4.2}_{-3.8}$ & $5.1$\\ & &$36^{+23}_{-20}$& $2$& $2.8^{+2.2}_{-1.6}$ & $5.1$\\ $2_4^+ \longrightarrow 0_1^+$ & $1909$ &$5.7^{+3.3}_{-3.0}$ & $0.4$ \\ $2_5^+ \longrightarrow 2_3^+$ & $680$ &$1400^{+900}_{-700}$ & $680$ & $42^{+43}_{-32}$ & $0.543$ \\ $2_5^+ \longrightarrow 3_1^+$ &$684$ &$1600^{+690}_{-600}$ & $44$ &$3.8^{+3.8}_{-2.0}$&$0.10$\\ $2_5^+ \longrightarrow 0_2^+$ & $1109$ &$160^{+6}_{-7}$ & $1$ \\ $2_5^+ \longrightarrow 2_2^+$ & $1115$ &$100^{+150}_{-70}$ & $1.7$ & $19^{+15}_{-11}$ & $0.87$ \\ $2_5^+ \longrightarrow 2_1^+$ & $1731$ &$5.0^{+25}_{-21}$ & $1.2$ & $0.177^{+22}_{-11}$ & $0.7$ \\ & &$(6^{+155}_{-6}) \cdot 10^{-3}$ & $1.2$ & $(1.20^{+47}_{-42}) \cdot 10^{-3}$ & $(6.97) \cdot 10^{-4}$ \\ $2_5^+ \longrightarrow 0_1^+$ & $2242$ &$1.8^{+7}_{-6}$ & $1.2$ \\ \hline \end{tabular} \caption{The experimental B(E2) values, in $10^{-4}\,e^2b^2$, and B(M1) values, in $10^{-3}\,\mu_N^2$, of the transitions de-exciting the $0^+_4$, $2^+_4$ and $2^+_5$ levels of $^{106}$Pd. The experimental data are taken from Ref. \cite{prados}. The theoretical values have been evaluated using the parameters from Ref. \cite{giannatiempo1998}.
} \label{E2_calc} \end{table*} The presence of a large transition strength is considered a signature of strong mixing between two states with different deformations (Ref. \cite{wood}), and according to Ref. \cite{peters} the $\rho^2(E0)$ values measured in $^{106}$Pd are large enough to provide evidence for shape coexistence in this nucleus. We have therefore compared the experimental value of $\rho^2(0_i^+ \longrightarrow 0_f^+)$ with that evaluated in a simple mixing model, following the procedure described in Refs. \cite{wood,giannatiempo_kr}. The $0^+_2$ and $0^+_1$ states are assumed to be linear combinations of two basic configurations $\vert1\rangle$ and $\vert2\rangle$ of different deformations: \begin{equation} \vert0_1^+\rangle=b\vert1\rangle+a \vert2\rangle\,\,\,\,\,\, \vert0_2^+\rangle=a\vert1\rangle-b \vert2\rangle\,\,\,\,\, (a^2+b^2=1) \end{equation} It is possible to deduce an approximate expression for the monopole operator in terms of the deformation variables in a quadrupole deformation space \cite{davynov}: \begin{equation} \hat{T}(E0)={3Z\over4\pi} \left( \beta^2+{5\sqrt5\over 21\sqrt\pi} \beta^3 \cos 3\gamma \right) \end{equation} In this approximation, and neglecting the non-diagonal term $\langle 2 \vert \hat{T}(E0) \vert 1\rangle$, one obtains for $\rho^2(0_2^+ \longrightarrow 0_1^+)$ the expression: \begin{equation}\begin{split} \rho^2(0_2^+ \to 0_1^+)=&({3Z\over4\pi})^2a^2(1-a^2)[(\beta_1^2 -\beta_2^2) \\ &+{5\sqrt5\over 21\sqrt\pi}(\beta_1^3 \cos 3\gamma_1 -\beta_2^3 \cos 3\gamma_2)]^2 \label{rozeroth} \end{split} \end{equation} The parameters $\beta_1,\,\gamma_1$ and $\beta_2,\,\gamma_2$ refer to the unmixed states $\vert1\rangle$ and $\vert2\rangle$, respectively.
\begin{figure} \includegraphics[width=\columnwidth]{Immagini/gamma.pdf} \caption{Values of $\rho^2$ calculated as a function of the deformation parameter $\cos(3 \gamma_2)$ for different values of the squared mixing amplitude $a^2$, assuming the values of $\gamma_1$, $\beta_1$ and $\beta_2$ reported in Ref. \cite{svensonn}. The horizontal lines indicate the experimental value together with the $\pm \sigma$ statistical uncertainty.} \label{gamma2} \end{figure} As a first step, we have considered only the terms up to second order in $\beta$. The values of the deformation parameters $\beta^2(0_1)=0.050(2)$ and $\beta^2(0_2)=0.069(3)$ have been extracted from the data of a Coulomb excitation experiment performed some years ago \cite{svensonn}. They can be expressed as a function of the unmixed $\beta_1$ and $\beta_2$ ones as: \begin{equation} \begin{split} \beta^2(0_1) = b^2 \beta_1^2 + a^2 \beta_2^2\\ \beta^2(0_2) = a^2 \beta_1^2 + b^2 \beta_2^2 \label{beta_mixed}\end{split} \end{equation} Inserting the experimental values into Eqs. (\ref{rozeroth},\ref{beta_mixed}), the mixing coefficient $a^2$ has been calculated to be $\approx 0.1$. Since this value corresponds to a small mixing between the ground state and the $0^+_2$ state, the assumption was made that the deformations of the mixed $0_1^+$ ($0_2^+$) and unmixed $\vert1\rangle$ ($\vert2\rangle$) states are similar. Under this hypothesis, the value of $\rho^2(0_2^+ \longrightarrow 0_1^+)$ has been calculated keeping all the terms in Eq. (\ref{rozeroth}). The value of the ground-state deformation parameter, $\gamma=20(2)^\circ$, has been taken from Ref. \cite{svensonn}. In the calculations we assume $\sqrt{\beta^2(0_1)}=0.22$, $\gamma_1=20^\circ$ and $\sqrt{\beta^2(0_2)}=0.26$, while the deformation parameter $\gamma_2$ has been varied in a reasonable range for three different sets of values of $a^2$, corresponding to small mixing.
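To make the evaluation of Eq. (\ref{rozeroth}) concrete, the scan over $\gamma_2$ can be sketched numerically as follows. The function name is ours and the input values are those quoted above; the printed numbers are only meant to illustrate the $\gamma_2$ dependence of the kind shown in Fig. \ref{gamma2}, since the absolute scale depends on the adopted conventions:

```python
import math

def rho2_mixing(a2, beta1, gamma1_deg, beta2, gamma2_deg, Z=46):
    """Two-state mixing estimate of rho^2(0_2+ -> 0_1+), keeping the
    cubic beta^3 cos(3 gamma) terms of the monopole operator."""
    c = 5.0 * math.sqrt(5.0) / (21.0 * math.sqrt(math.pi))
    bracket = (beta1 ** 2 - beta2 ** 2
               + c * (beta1 ** 3 * math.cos(math.radians(3 * gamma1_deg))
                      - beta2 ** 3 * math.cos(math.radians(3 * gamma2_deg))))
    return (3.0 * Z / (4.0 * math.pi)) ** 2 * a2 * (1.0 - a2) * bracket ** 2

# scan gamma_2 with the deformations quoted in the text
for g2 in (0, 10, 20, 30, 40, 50, 60):
    print(g2, rho2_mixing(a2=0.1, beta1=0.22, gamma1_deg=20.0,
                          beta2=0.26, gamma2_deg=g2))
```

The sign-sensitive $\cos 3\gamma_2$ term is what makes $\rho^2$ discriminate between prolate-like ($\gamma_2$ near $0^\circ$) and oblate-like ($\gamma_2$ near $60^\circ$) configurations.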
The calculated values of $\rho^2(0_2^+ \longrightarrow 0_1^+)$ are compared to the experimental one in Fig. \ref{gamma2}. The comparison with the experimental value implies $\gamma_2 \approx 50^\circ$ for the $\vert2\rangle$ state, and hence for the $0_2^+$ state. This result would imply the coexistence of different shapes, triaxial for the ground state and oblate for the first excited $0^+$ state, in agreement with the conclusions drawn in Ref. \cite{peters}. \section{CONCLUSIONS} In summary, the E0 transitions in $^{106}$Pd between both $0^+$ and $2^+$ states were investigated by internal conversion electron measurements at the INFN Legnaro National Laboratories. The experiment used the newly installed SLICES setup, together with an HPGe detector. A set of K-internal conversion coefficients and monopole transition strengths was extracted. The obtained data allow us to discriminate between the two discrepant values reported in the literature for the $\alpha_K$ of the $2_3^+ \longrightarrow 2_1^+$ transition. The E0 transitions from the fourth 0$^+$ state were observed for the first time, but only limits on the $\rho^2$(E0) values could be extracted, due to the limit on its lifetime. In the $^{104}$Pd isotope, hints of the existence of a fourth 0$^+$ state at 2101~keV were found by re-analyzing the data of a previously performed experiment. Calculations of the $\rho^2$(E0) values in $^{104,106}$Pd were performed in the framework of the interacting boson model, using the parameters reported in Ref. \cite{giannatiempo1998} and the monopole boson charges extracted in the present work. The agreement between the theoretical results and the measurements is good, once the experimental 0$^+_3$ is considered as an intruder state. For both $^{104,106}$Pd isotopes, predicted states having a structure resembling that of states belonging to the $n_d$=2,3 multiplets of the U(5) limit have been associated with the experimental states.
Further experimental studies, aimed at providing additional information on the excited 0$^+$ and 2$^+$ states in the neighboring palladium isotopes, are necessary to establish their interpretation as lying within the IBA-2 model space. The experimental value of $\rho^2(E0; 0_2^+ \longrightarrow 0_1^+)$ has also been compared to that calculated in a simple two-state mixing model, to obtain further insight into the mixing and deformation of the first two $0^+$ states in this nucleus. \section{ACKNOWLEDGEMENTS} The authors would like to thank the staff of the CN accelerator (LNL) for providing the beams used in this experiment, M.~Loriggiola for producing the targets, and the mechanical workshops of the INFN divisions of Florence and the University of Camerino for their contribution. E. R. G. wishes to acknowledge the Centro E. Fermi for financially supporting his postdoctoral fellowship through the project BESTRUCTURE.
\section{Introduction} A few months after its launch in December 2013, the ESA-Gaia mission (Gaia Collaboration \citeyear{gaia16}) started to regularly produce Alerts on the photometric variability of targets crossing its field of view \footnote{http://gsaweb.ast.cam.ac.uk/alerts}. The variability is derived from a comparison of the actual photometry with values measured during previous passages, without any reference to magnitudes available in external catalogues. This procedure avoids the problem of having to compare magnitudes in various different systems, with all the associated calibration issues, but does not lead to an immediate identification of the variable object. A note is provided with the Alert on possible associations with catalogued objects, but it is based on positional proximity only. Some complementary information is available through the very low dispersion spectra (R $\sim 30$) obtained on board by the Blue and the Red Photometers \citep[BP and RP,][]{gaia16}, but this low resolution does not always allow an unambiguous classification. \cite{blago14} have made simulations showing that about 75$\%$ of the transients should be robustly classified by BP/RP, but in practice this classifier first requires training with secure classifications. After a few years of experience, which helped to improve the internal classification, it appears, for example, that type Ia SNe are rather reliably identified, but other types of objects much less so. As a result, many objects still have the qualification of ``Unknown" in the table of Alerts, particularly at the beginning of the survey. The light curve is another important element for classification, but the irregular sampling with Gaia itself means that a long period of time is necessary before a proper light curve is obtained. It is therefore important to obtain complementary ground-based data to identify the exact nature of each variable source.
\\ We have therefore started to search for associations between objects appearing in the Gaia Alerts and spectra obtained with the LAMOST Survey \citep{Cui2012, Luo2015} and/or with the Sloan Digital Sky Survey \citep[SDSS-DR15;][]{Aguado2019}. In addition to those Gaia Alerts, we have also included in our search the Gaia Nuclear Transients (GNT) detected by a different method by \cite{kostrz18}. This paper presents the spectral information retrieved during our search. In Section 2, we present the search for associations and the basic properties of the surveys used for this purpose. Results are given in Section 3, divided into stars, galaxies and quasars, while the conclusions are given in Section 4. \section{Confronting Alerts with Archive Data} We have selected all Alerts appearing on the Gaia Alerts website from the beginning (the first one, Gaia14aaa, detected on August 30th, 2014) to the end of October 2018 (the last one, Gaia18dge, detected on October 31st, 2018), that is, a total of 6308 candidates. \cite{kostrz18} have investigated the detectability of nuclear transients by Gaia using a method different from the one used for the standard Alerts (AlertPipe) and found 482 candidates in the period ranging from June 2016 to June 2017, only 5 of which were also detected by the standard Gaia AlertPipe system. We have included in our search all candidates from this GNT list. As these authors have, by definition, targeted galaxies only, selected by cross-matching each Gaia source position with the galaxies and quasars catalogued in SDSS-DR12 \citep{Alam2015}, every object from their sample already has one associated SDSS entry, but not necessarily a spectrum for classification (only 142 of the 482 objects have spectral classifications). We therefore extended the search to possibly find other spectra and characteristics of their objects, and to look for long-term variability.
Our sample has been cross-correlated with the LAMOST DR5 database\footnote{http://dr5.lamost.org}, see also \cite{Luo2015}, using a search radius of 3$\arcsec$. LAMOST is a quasi-meridian reflecting Schmidt telescope, with an effective aperture between 3.6 m and 4.9 m (depending on the declination and hour angle of the pointing) and a field of view of 5 degrees in diameter \citep{Cui2012, Wang1996, Su2014}. With a wavelength coverage from 3700 to 9000$\AA$ and a spectral resolution of R $\approx$ 1800, LAMOST can observe up to 4000 objects simultaneously. In July 2017, LAMOST finished its first five-year regular survey, having collected about 9 million spectra released in DR5, including $\sim$8 million stars, $\sim$150 thousand galaxies, $\sim$50 thousand QSOs and $\sim$640 thousand unclassified objects. The second five-year regular survey started in September 2017. The flux calibration of the LAMOST spectroscopic survey is relative only \citep{Luo2015}. A comparison of a sample of targets in common with the SDSS indicates that the accuracy of the LAMOST flux calibration is about 10\% at wavelengths between 4100 and 8900\AA, but decreases to 15\% at both ends due to the rapid decline of the instrument throughput \citep{Du2016, Xiang2015}. The spectral response curves of individual LAMOST spectrographs can differ by up to 30\% on a given night, and sometimes by more from night to night \citep{Xiang2015}, but are calibrated in all cases. The final spectral distribution may also show some minor bumps between 5700 and 5900\AA, in the connecting region between the blue and the red spectra. So when comparing multi-epoch spectra, these uncertainties at the blue end and in the overlap region should be taken into account. The limiting magnitude for the Galactic survey was r$\sim$17.8, going down to 18.5 mag in some limited fields; see \cite{Yuan2015} for more details.
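As an illustration, the positional cross-matching used above (associating each Alert with the nearest catalogue source within a 3$\arcsec$ radius) can be sketched in a few lines of Python. This is our own minimal sketch, not the pipeline actually used; the function names and the example coordinates are illustrative:

```python
import math

def ang_sep_arcsec(ra1, dec1, ra2, dec2):
    """Angular separation of two sky positions; inputs in degrees, output in arcsec."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    # Haversine formula: numerically stable at the small separations relevant here
    a = (math.sin((dec2 - dec1) / 2.0) ** 2
         + math.cos(dec1) * math.cos(dec2) * math.sin((ra2 - ra1) / 2.0) ** 2)
    return math.degrees(2.0 * math.asin(math.sqrt(a))) * 3600.0

def cross_match(alerts, catalogue, radius_arcsec=3.0):
    """Associate each alert (id, ra, dec) with the nearest catalogue entry
    (id, ra, dec) within radius_arcsec; returns (alert_id, cat_id) pairs."""
    pairs = []
    for aid, ra_a, dec_a in alerts:
        best_id, best_sep = None, radius_arcsec
        for cid, ra_c, dec_c in catalogue:
            sep = ang_sep_arcsec(ra_a, dec_a, ra_c, dec_c)
            if sep <= best_sep:
                best_id, best_sep = cid, sep
        if best_id is not None:
            pairs.append((aid, best_id))
    return pairs
```

In practice a survey-scale match would use an indexed search (e.g. a k-d tree on unit vectors) rather than this quadratic loop, but the acceptance criterion is the same.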
For the Extragalactic survey, the limiting magnitude for galaxies (that is, extended objects) was approximately r $\sim$18, see \cite{Shen2016} for example; for point-like objects such as QSOs, the limiting magnitude was $i=20$, see \cite{Ai2016} and \cite{Huo2015} for more details. Since the LAMOST field of view is circular, field overlapping was necessary to achieve a continuous sky coverage. During the survey, some regions of the sky were therefore covered several times and, as a consequence, some of the Gaia objects have also been measured several times (3 times on average). Although the ground-based observations are rarely coincident with the alerting date, this opens the possibility to look for intrinsic variability independently of the Alert from Gaia. To complement this, we have also cross-correlated our sample with the Sloan Digital Sky Survey (SDSS) sample \citep{Aguado2019, york00}, up to DR15, to provide more observations, using the same search radius, and this yielded a larger number of identifications. DR15 is the latest Data Release of the SDSS \citep{Aguado2019}, and the SDSS globally provides spectra with an average resolving power of R $\sim$ 2000, from $\sim$ 1300 at the blue end (3600$\AA$) to $\sim$ 2500 at the red end ($10000\AA$) \citep{smee13}. The spectra are divided into two parts, blue and red, with a dichroic splitter at 6000 $\AA$, which introduces a small uncertainty around this wavelength. The entrance fiber sampled 3$\arcsec$ on the sky at the beginning of the SDSS survey \citep{york00}, from DR1 to DR8, and then switched to a 2$\arcsec$ entrance fiber with the start of the BOSS survey \citep{dawson13}, from DR9 to the present release. The search radius of 3$\arcsec$ used to find counterparts to our Gaia Alerts is thus consistent with the properties of the ground-based surveys. Spectral information for the Gaia Alerts and the GNT sample is given in the Tables.
A few lines are given as examples in the Appendix (with a description of the different columns), while the full Tables are available online only. \section{Results} \subsection {Stars and Cataclysmic Variables} From the Gaia Alerts studied here, sixteen have single-epoch SDSS or LAMOST spectra labelled as ``STAR''. There are none among the GNT list, obviously because those alerts targeted, by definition, only galaxies. Among these sixteen, there are two late-type stars: Gaia14aaq, an M-type star with an SDSS spectrum, was also classified by \cite{jonk15} during one of the first ground-based follow-ups of Gaia Alerts; and Gaia17bsx, an M-type carbon star with a LAMOST spectrum, had already been classified in an earlier survey by \cite{maur04}. Two others look like early-type stars, as indicated by their LAMOST spectra: Gaia14adz has a spectrum labelled as a G6 star, and its Gaia light curve shows no conclusive features; and Gaia17akp was labelled A1V in the LAMOST DR5 Version 1 release, but its spectrum, while showing mainly hydrogen lines and Na in absorption, also displays a small H$\alpha$ emission, with further faint Balmer emissions seen inside the broader, higher-order Balmer absorption lines; it is therefore possibly a CV (revised as such in the DR5 Version 3 release). Two others are white dwarfs according to their SDSS spectra, also confirmed in Simbad: Gaia14aab is an sdA sub-dwarf from \cite{Kepler2016}, while Gaia14adk was confirmed as a white dwarf by \cite{Eisenstein2006}. But as none of those spectra are contemporaneous with the Gaia Alerts, it is difficult to assess what caused the alert, and how it would have been reflected in their spectrum. Two others are mis-labelled stars: Gaia16acx, alerted as a SN candidate by Gaia on Feb. 9th, 2016, has a LAMOST spectrum within a distance of 2.75$\arcsec$, taken on Dec.
25th, 2012, which was mis-labelled as ``STAR'' due to its low SNR, but corresponds in fact to the galaxy UGC 691 at a redshift of 0.04907 \citep{Mahdavi2004}; this is thus consistent with the Gaia Alert indeed being due to a SN in this host galaxy. Gaia17bqd, labelled as ``STAR'' in SDSS, is in fact a BL Lac object at z=0.7269 identified by \cite{Paris2014}, as quoted by Gaia Alerts; but the spectrum is very flat, the emission lines are barely seen, and the redshift is highly uncertain. Here also, the large time interval between the Gaia Alert (June 25th, 2017) and the date of the SDSS spectrum (Jan. 2nd, 2003) does not allow any conclusion on the nature of the observed variation. The eight others are all Cataclysmic Variables (CVs). Five of them were already known, and include: Gaia16apa = CRTS CSS090918 J001538+263657, a known CV from the Catalina Real-time Transient Survey (CRTS; \citealt{Thorstensen2016, Drake2014a, Szkody2014}); Gaia18bvj = V521 Peg, a variable star in \cite{Kholopov1998}; Gaia18bwz = QZ Vir (also named T Leo, \citealt{Kato1997}), a known dwarf nova caught in outburst by Gaia in July 2018; Gaia18crs = QW Ser, a known SU UMa-type dwarf nova with a 5 magnitude brightening outburst caught by Gaia \citep[see][]{Patterson2003, Nogami2004, Szkody2009}; and Gaia18cwe = V493 Ser, a known CV from the CRTS \citep{Drake2014a}. \\ Three CVs are newly identified: Gaia15abi, Gaia17bjn, and Gaia18cln; their SDSS or LAMOST spectra are shown in Figure \ref{fig:star_CV}. All of those CVs, except one, show the classical hydrogen and helium lines in emission, sometimes with the addition of the FeII $\lambda$5169, NIII-CIII $\lambda$4645 or CII $\lambda$4267 lines. The exception is the known CV QW Ser = Gaia18crs, which displays only strong Balmer absorption lines; but the SDSS spectrum is from 2006, thus many years before the Gaia Alert of 2018.
The spectrum would thus indicate that the object was caught in a high-accretion phase at that time, while no spectrum is available during the Gaia Alert. Following this line, it is probable that the stellar spectrum mentioned earlier for Gaia17akp, with Balmer absorptions and only weak Balmer emissions, taken by LAMOST about two years before the Gaia Alert, in fact also indicates a CV, but close to an outburst state. The points of the Gaia light curve start only on March 29th, 2015, that is about 2 months after the LAMOST spectrum was taken, and remain low until the first alert on Feb. 13th, 2017, but several outbursts are then seen over the next two years, confirming its CV nature. In addition to those single-epoch spectra, thirteen objects alerted by Gaia have multi-epoch spectra labelled as ``STAR''. Among them: Gaia14acd is an F-type star classified by LAMOST; the reason for the alert is still unknown (it has not been reobserved since). Gaia14ado is a WD+MD binary with SDSS spectra at 2 epochs \citep{Eisenstein2006}. Gaia16bft is a young stellar object in the Simbad database. Gaia18asf is an M-type star with emission lines seen in the SDSS spectra. And Gaia18bla, which has an SDSS spectrum labelled as a star, is in fact a BL Lac object at redshift z = 0.212 \citep{alb07}, correctly identified by the Gaia Alert due to the coincidence in position. Eight objects are CVs: Gaia16adh, 16ahb, 16ahl, 16bnz, 18cqo, 18crc, 18cry and 18cxq. Gaia16adh is a CV with an M-type donor, confirmed by \cite{Kepler2016} who used the SDSS spectra, which were taken in 2013 and 2015. This object had already been pointed out as a SU UMa-type candidate from its photometric behavior\footnote{http://ooruri.kusastro.kyoto-u.ac.jp/mailarchive/vsnet-alert/1231} but is not in the SIMBAD database. Gaia16ahb is a CRTS variable identified as a CV candidate by \citeauthor{Drake2014a} (2014a, 2014b) and spectroscopically confirmed.
Its light curve shows only one eruption, on March 1st, 2016, during the whole Gaia survey until the end of 2019. Gaia16ahl = IW And is a nova-like star known from photometry, and confirmed as a CV by \cite{Gentile2015} using LAMOST spectra: it was observed by LAMOST four times from 2012 to 2017, and Balmer absorption/emission lines in different states were detected; it is the Z Cam system IW And, as noted by the Gaia Alert. Gaia16bnz is a new CV with LAMOST spectra at 2 epochs, taken in October and December 2015: Balmer absorption/emission lines are present and changed between the two dates, which are however both prior to the Gaia Alert of Oct. 17th, 2016 (Figure \ref{fig:star_CV}). Its Gaia light curve does not display strong variations during the $\sim$ 5 years of survey, the Alert being issued because of a long-term decline, and it was thus not readily identified as a CV. While the first LAMOST spectrum shows essentially broad Balmer absorption lines, the second spectrum, about 2 months later, shows a strengthening of the Balmer emission lines within weakening absorptions, consistent with a progressive return to the low state. Gaia18cqo = TX Tri is a CV identified by CRTS (\citeauthor{Drake2014a}, 2014a). Gaia18crc = TW Tri is a CV with spectra taken at 5 different epochs by LAMOST, none coincident with the Gaia Alert; the object was investigated by \cite{Thorstensen1998} and \cite{Gentile2015}. Gaia18cry is a CV (catalogued in SIMBAD), with spectra at 3 different epochs, one taken by the SDSS and two by LAMOST. And finally Gaia18cxq = EG Lyn is a CV (catalogued by SIMBAD) with SDSS spectra at 2 epochs, see Figure \ref{fig:star_CV}. For none of those are the LAMOST or SDSS spectra contemporaneous with the Gaia Alert. The spectrum of Gaia16adh is remarkable, as the Balmer emission lines are double-peaked: the Full Width at Zero Intensity is measured at 4300 km/s for H$\alpha$ (4380 km/s at H$\beta$), and the FWHM is 2200 km/s.
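For reference, velocity widths such as these follow from the measured line width in wavelength through the usual Doppler relation $\Delta v = c\,\Delta\lambda/\lambda_{\rm rest}$. A minimal sketch (the $\sim$48\AA\ width below is our illustrative value, chosen to reproduce the $\sim$2200 km/s H$\alpha$ FWHM quoted above):

```python
C_KMS = 299792.458  # speed of light in km/s

def velocity_width(delta_lambda_aa, lambda_rest_aa):
    """Convert a line width measured in Angstrom at rest wavelength
    lambda_rest_aa (Angstrom) into a velocity width in km/s."""
    return C_KMS * delta_lambda_aa / lambda_rest_aa

# A FWHM of ~48 Angstrom at H-alpha (6562.8 A) corresponds to ~2200 km/s
fwhm_kms = velocity_width(48.2, 6562.8)
```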
This is quite typical of accretion disks viewed at high inclination and confirms the CV nature. There is no significant difference between the two SDSS spectra of January 18th, 2013 and December 1st, 2015, respectively, but we have no data to estimate whether the object remained in a high state during this interval. The Gaia outburst detected later, on February 11th, 2016, was short but bright, and no further spectrum is available at, or after, this event. The next outburst seen in the light curve is only on Feb. 5th, 2020, that is about 4 years later (compared to 3 years between the two spectra): it is thus quite plausible that the recurrence period of this object is about 3-4 years. A few other spectra of CVs mentioned above (like Gaia16ahb, 18bvj or 18cwe) show a similar, double-peaked emission line profile, the most notable of them being V521 Peg = Gaia18bvj, whose SDSS spectrum shows a FWHM of about 2000 km/s, but was taken about 4 years before the Gaia Alert of July 2018, no other outburst being recorded during the period covered by Gaia. It would obviously be quite desirable to obtain regularly spaced spectra of those objects to monitor their evolution. Discarding the mis-identified spectra of galaxies or quasars, we can conclude from this sample that a large fraction of the Gaia Alerts where the corresponding SDSS or LAMOST spectra were labelled as stellar are in fact CVs. \begin{figure*} \includegraphics[width=3.4in]{zyhuo_fig1a.eps} \includegraphics[width=3.4in]{zyhuo_fig1b.eps} \includegraphics[width=3.4in]{zyhuo_fig1c.eps} \includegraphics[width=3.4in]{zyhuo_fig1d.eps} \includegraphics[width=3.4in]{zyhuo_fig1e.eps} \includegraphics[width=3.4in]{zyhuo_fig1f.eps} \caption{\label{fig:star_CV} Some representative spectra of CVs from the Gaia Alerts. From left to right and top to bottom: Gaia15abi, Gaia17bjn, Gaia18cin, Gaia16bnz, Gaia18cry and Gaia18cxq.
The dates of the Gaia Alerts are given in each panel, together with the dates when the spectra were obtained. Typical lines are labelled. } \end{figure*} \subsection{Galaxies} Among the Gaia Alerts, one hundred and fifty-seven have been found associated with a galaxy for which a single-epoch spectrum was available in either LAMOST or SDSS. From the GNT list of \cite{kostrz18}, forty-one are in the same situation, giving us a total of 198 galaxies with single-epoch spectra\footnote{Note that although the initial GNT sample comprises 482 targets selected as having a SDSS counterpart, only about one third of them have a SDSS spectrum for classification, the rest of the sample having only SDSS photometry.}. Among those galaxies, excluding 4 where the spectra are of very low SNR\footnote{These are Gaia17aqc, a blue source with previous variability in CSS, too faint for LAMOST, with a photometric redshift of 0.925; Gaia17bmd, a UV and radio source with no measured redshift, but a single-epoch LAMOST spectrum shows a single emission line giving z = 0.159 if identified with H$\alpha$, to be confirmed; Gaia17cdu, a candidate SN in a LEDA galaxy with cz = 31028 km/s, alerted at g = 18.9, giving an absolute magnitude of -19.3, compatible with a type Ia SN; and GNT J000121-0011, a NED galaxy with z = 0.462, but no identifiable features in the SDSS spectrum.}, only 22 (Gaia Alerts) + 7 (GNT list) show no emission lines in their spectrum, the remaining 132+33 galaxies all having emission lines of various types and strengths. For most of the galaxies, some star formation is therefore still present, consistent with the idea that the Alert was often caused by the explosion of a SN. However, some of them show a broad H$\alpha$ component underlying the narrow emission line, while no broad component is detected under H$\beta$ (examples are Gaia14adg, GNTJ115000+3503, or GNTJ140030+5653, among others).
The broad H$\alpha$ component is strong in only a few of them, but even then no broad H$\beta$ component is apparent. These objects are therefore all Seyfert galaxies of type 1.9. In those cases, the variability detected by Gaia could well be due to changes in the accretion rate or in the obscuration in front of the AGNs. In favor of this interpretation is the fact that we have proportionally more cases of broad lines in the sample of galaxies detected in the GNT list than in the standard Gaia Alerts ($\sim$ 27\% versus $3\%$). The fact that only Sey 1.9 are seen here could lead to the conclusion that the variations are due to changes in obscuration only: this is however misleading and probably a selection effect. Indeed, variations in Sey 1 or quasars are also seen (see next section), and the above objects appear in the ``Galaxy'' list only because their Seyfert properties were not recognized in the surveys. \\ We have therefore also looked into whether some of the emission line objects could be AGNs, particularly type 2 Seyfert galaxies. Type 2 Seyfert galaxies are easily detectable because they have a strong $[\rm{OIII}] 5007\AA$ line compared to H$\beta$, and a comparatively strong $[\rm{NII}] 6584\AA$ with respect to H$\alpha$, ratios which are quasi-independent of the reddening. To quantify this, we used the well known BPT diagram \citep{BPT1981} in its revised form from \cite{Kewley2006}, derived from a large sample of SDSS spectra. This also allows the detection of ambiguous cases, LINERs or composites (a mix of star formation and AGN). In the end, very few emission line objects clearly appear in the Seyfert part of the diagram: e.g. Gaia17bip and GNT014153+0101 (both classified as broad-line AGN by SDSS, but there is no clear broad-line component in their spectra, so they are rather Sey 2), or Gaia14aak, Gaia18crv, GNT080115+1101, GNT081437+1722 and GNT120346+5100, which are all genuine Seyferts.
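As an illustration of this diagnostic, the [NII]-based BPT classification can be sketched as below. This is our own sketch, not the code used for this work; it adopts the Kauffmann et al. (2003) star-forming and Kewley et al. (2001) maximum-starburst demarcation lines used in the \cite{Kewley2006} scheme, and omits the Seyfert/LINER separation, which requires the [SII] or [OI] diagrams:

```python
import math

def bpt_class(nii_ha, oiii_hb):
    """Classify a narrow-line spectrum on the [NII]6584/Halpha vs
    [OIII]5007/Hbeta BPT plane.  Inputs are linear flux ratios.
    Demarcations: Kauffmann et al. (2003) pure star-forming line and
    Kewley et al. (2001) maximum-starburst line."""
    x = math.log10(nii_ha)   # log [NII]/Halpha
    y = math.log10(oiii_hb)  # log [OIII]/Hbeta
    # Below the Kauffmann line: pure star formation
    if x < 0.05 and y < 0.61 / (x - 0.05) + 1.30:
        return "star-forming"
    # Between the Kauffmann and Kewley lines: composite SF + AGN
    if x < 0.47 and y < 0.61 / (x - 0.47) + 1.19:
        return "composite"
    # Above the Kewley line (or beyond its asymptote): AGN (Seyfert or LINER)
    return "AGN"
```

For instance, a galaxy with [NII]/H$\alpha$ = 1.5 and [OIII]/H$\beta$ = 5 lands well inside the AGN region, while [NII]/H$\alpha$ = 0.3 with [OIII]/H$\beta$ = 0.5 is classified as star-forming.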
A few others fall at the limit between starbursts and AGNs or LINERs, like Gaia18dbt or GNT131839+4630, or have an ambiguous classification because, although their [NII] is strong, the H$\beta$ line is not well detected above the continuum, such as Gaia14aaa, Gaia18cuj or GNT003719+2613. Interestingly, in a few cases, although the galaxy was classified as active, the Alert was nevertheless probably due to a SN, as seen from the light curve (e.g. Gaia17bip, Gaia18crv (spectroscopically classified as SN2018gho), or GNT081437+1722). For the other AGNs, the light curve is more complex and points towards intrinsic variations. \\ For all the other emission line objects, the starburst nature is therefore clear, and in most cases the Gaia light curve is compatible with a SN. We highlight one example, GNTJ105100+6559, which shows a strong emission-line spectrum typical of starburst galaxies, but also a ``blue bump'' around $\lambda 4650\AA$, characteristic of a Wolf-Rayet (WR) contribution. This galaxy is thus forming massive stars, and it is then plausible that Gaia has detected a SN, even if the light curve does not allow its type to be established. The Gaia light curve data (although poorly sampled) are given in \cite{kostrz18} and shown here in Fig. \ref{fig:GNT1051_LightCurve}, together with its spectrum in Fig. \ref{fig:GNTJ105100}. \begin{figure} \includegraphics[width=3.1in, height=2.2in]{zyhuo_fig2.eps} \caption{The light curve of the galaxy GNTJ105100+6559, as given by \cite{kostrz18}, looking like a SN outburst. \label{fig:GNT1051_LightCurve}} \end{figure} \begin{figure} \includegraphics[width=3.3in]{zyhuo_fig3.eps} \caption{The galaxy GNTJ105100+6559, showing strong emission lines and a blue complex around $\lambda 4650 \AA$, characteristic of the presence of WR stars. This is a star-forming galaxy.
\label{fig:GNTJ105100}} \end{figure} But to be able to really identify the source of variation in all those galaxies, spectra close to the detected Gaia variation would be essential, together with a reference spectrum at a different epoch. We have therefore looked with particular attention at those galaxies where several spectra were available in the databases. We have found 63 of those, including 13 from the GNT list. Gaia14aak and Gaia14abw are separated by only 0.58$\arcsec$, and correspond to the same target in the SDSS and LAMOST archives, a type 2 Seyfert AGN with z=0.064\footnote{They probably also correspond to a unique Gaia target, detected at different scanning angles but not recognized as such in the very early phase of the survey.}. So the final sample of galaxies with multi-epoch spectra contains 62 objects. However, in practically all those cases, the available spectra were taken at an epoch far away from the date of the Alert, therefore bringing no clues on the cause of the Alert. It is therefore no surprise that we found no evidence of a SN residual in their spectrum, except in two cases. One is GNTJ170213+2543, where the LAMOST spectrum was obtained on April 21st, 2017, that is one week after the increase was noticed by Gaia (from G=19.5 to G=18.6). We subtracted the reference SDSS spectrum of March 2005 from the LAMOST spectrum to obtain the residual presented in Figure \ref{fig:galaxy-SN}. Fitting the residual with SNID \citep{snid07} shows that it corresponds to a type Ia SN about 15 days after its brightness maximum. The redshift derived from the residual spectrum is z = 0.117, perfectly compatible with the one derived from the galaxy (z = 0.1155); Gaia thus detected the increase in brightness only around 1 week after maximum, as its time coverage is very irregular. No more recent spectrum has been acquired since. The second case is the galaxy Gaia17aal, alerted on Dec 10th, 2016.
One of the LAMOST observations was taken on Dec 18th, 2016, that is only one week later, and the spectrum shows significant bumps compared to the SDSS reference spectrum taken in 2004. The galaxy looks ``quiet'' again in the second LAMOST spectrum, of 2018. The LAMOST spectrum of 2016 displays a significant jump at 5900$\AA$, probably caused by the blue/red band combination described in Section 2, so we used the blue and the red spectra independently. The residual spectrum was obtained by subtracting the SDSS spectrum from the LAMOST 2016 one, and fitted with SNID separately for the blue and the red band. Both fits indicate a typical, normal, type Ia SN near its maximum brightness, and the redshift of 0.05 obtained from the residual spectrum is compatible with the one from the galaxy, see also Figure \ref{fig:galaxy-SN}. \begin{figure*} \includegraphics[width=3.4in]{zyhuo_fig4a.eps} \includegraphics[width=3.4in]{zyhuo_fig4b.eps} \caption{Left: Spectra of GNTJ170213+2543 observed by SDSS on March 17th, 2005 and by LAMOST on April 21st, 2017 (top panel); residual after subtraction of the earliest spectrum from the later one, showing a type Ia SN spectrum, overlaid with the fit from SNID (bottom panel). Right: Spectra of Gaia17aal observed by SDSS on Apr. 25th, 2004 and by LAMOST on Dec. 18th, 2016 and Apr. 16th, 2018 (top panel); residual after subtraction of the earlier SDSS spectrum from the LAMOST spectrum taken in Dec. 2016, overlaid with the fit from SNID (bottom panel, also a type Ia SN). } \label{fig:galaxy-SN} \end{figure*} In most of these 62 galaxies with multi-epoch spectra, we do not see any significant change in continuum or emission lines, thus suggesting that the change in magnitude noticed by Gaia was indeed due to a SN, although not confirmed here (but some of them were confirmed independently at different telescopes, as noted in Astronomer's Telegrams: e.g.
Gaia16aty in ATel 9208, Gaia16avs in ATel 8935, Gaia17bka in ATel 10352, or Gaia18cgj in ATel 11938, etc.). In 6 out of 62 galaxies (Gaia16cav, 17bka, 17bwd, 18afc, 18awi, GNTJ085416.+2903), that is, $10\%$ of them, we do not see any emission line, not even H$\alpha$; those are therefore dominated by an older stellar population. In the Gaia Alerts fraction of them, in only two cases (4\%), Gaia17bcr and 17dbr, do we see a weak, broad H$\alpha$ component (but for these two, the light curve is more indicative of a SN than of nuclear variation); there is therefore no reason to suspect AGN variation as the main cause of the magnitude change in this sample. On the contrary, in the GNT list this proportion is much higher, 5 galaxies out of the 13 with multi-epoch spectra ($\sim$40\%): GNTJ084157+0526, GNTJ084535+3439, GNTJ131101+0003, GNTJ143445+3328 and GNTJ172959+6242 show a broad H$\alpha$ emission line. These are however small-number statistics, further biased by the random availability of spectra in databases. But if we look at the whole GNT sample, among the 146 objects (out of 482) where at least one spectrum is now available, two thirds are classified as ``QSO'', as compared to only $17\%$ in the full Gaia Alerts, supporting the idea that a large fraction of the variations seen in the GNT sample are due to the central AGN itself. This is consistent with the purpose of \cite{kostrz18}, who looked specifically for nuclear variations. There are a few cases where a change of slope is seen in the blue part of the continuum (e.g. Gaia15afx, 16ajl, 16aru, 17bcr, 17bij, 17bwd, 17bwl, 17byx), and the increase is relatively large, of the order of at least $50\%$. It is difficult to say whether all this change is intrinsic or partly due to losses because of atmospheric differential refraction, poor centering in the entrance fiber, or larger uncertainty in the LAMOST relative flux calibration, especially when close to the LAMOST magnitude limits.
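The residual-spectrum technique used above for GNTJ170213+2543 and Gaia17aal (subtracting an archival reference from the transient-epoch spectrum before template fitting with SNID) can be sketched as follows. This is a simplified illustration under assumptions of our own: the red normalization window, the interpolation scheme and the function names are not those of the actual reduction:

```python
from bisect import bisect_left
from statistics import median

def interp(x, xs, ys):
    """Linear interpolation of the points (xs, ys) at x; xs must be increasing."""
    i = min(max(bisect_left(xs, x), 1), len(xs) - 1)
    x0, x1, y0, y1 = xs[i - 1], xs[i], ys[i - 1], ys[i]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

def residual_spectrum(wave_new, flux_new, wave_ref, flux_ref,
                      red_window=(8000.0, 8500.0)):
    """Subtract an archival reference spectrum from a transient-epoch one.
    The reference is first scaled so that the two spectra agree (in median)
    in a red window, least affected by atmospheric losses, then interpolated
    onto the wavelength grid of the new spectrum; the residual can then be
    fed to a template fitter such as SNID."""
    lo, hi = red_window
    level = lambda w, f: median(fi for wi, fi in zip(w, f) if lo <= wi <= hi)
    scale = level(wave_new, flux_new) / level(wave_ref, flux_ref)
    return [fn - scale * interp(wn, wave_ref, flux_ref)
            for wn, fn in zip(wave_new, flux_new)]
```

On a synthetic flat host plus a Gaussian SN bump, the residual recovers the bump; on real data the blue-end flux calibration and the band-junction region discussed in Section 2 dominate the systematics.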
A couple of cases deserve a short comment (see Figure \ref{fig:GNTJ143445}). In Gaia15afx, the 3 available spectra all show clear emission lines, mainly (from red to blue) [SII], [NII], H$\alpha$, [OIII] 5007 and H$\beta$ (emission over an underlying absorption). The LAMOST spectrum taken on Jan 12th, 2014, although of somewhat lower S/N, shows the same line intensities but a much bluer continuum compared to the two SDSS spectra taken in March 2002. It is difficult to say whether this change is related to the Alert, which occurred much later (May 23rd, 2015), and no later spectrum is available either. GNTJ143445+3328, initially classified as a galaxy, is in fact a 2MASX AGN with a redshift of 0.197, and has associated spectra at four different epochs. But one of the SDSS spectra (the one of March 2010) corresponds in fact to a different object at a separation of 2.98$\arcsec$ and with a redshift of 0.246, and therefore belongs to the background galaxy SDSS J143445.33+332823.5. The spectra of the AGN taken by SDSS on May 1st, 2005 and by LAMOST on April 27th, 2014 show significant emission lines, in particular a broad H$\alpha$ component, compared to the later LAMOST spectrum of May 24th, 2017 (red tracing in Fig. \ref{fig:GNTJ143445}), which was taken about half a year after the Gaia alert, and in which the broad H$\alpha$ component and [OIII] have significantly weakened, therefore suggesting a change in the AGN properties (a Changing Look Quasar, see next section). \begin{figure*} \includegraphics[width=3.4in]{zyhuo_fig5a.eps} \includegraphics[width=3.4in]{zyhuo_fig5b.eps} \caption{Left: Spectra of Gaia15afx, where the latest LAMOST spectrum (in blue) seems to indicate a change in the blue part of the continuum. Right: Spectra of GNTJ143445+3328. The lower (pink) spectrum belongs to a different object, but the latest LAMOST spectrum (red) shows a distinct weakening of the broad H$\alpha$ component in the AGN (see text) with respect to earlier spectra.
The earlier LAMOST spectrum (blue), taken on Apr 27th, 2014, seems to correspond to the earlier SDSS spectrum of 2005 (black), but is of lower quality. } \label{fig:GNTJ143445} \end{figure*} \subsection{Active Galactic Nuclei (AGN) and Changing Look Quasars (CLQ)} In addition to the AGNs discussed above, there are 30 Gaia alerts and 68 GNT targets with spectra classified as ``QSO'' in LAMOST or SDSS\footnote{The SDSS class QSO comprises two subclasses, either ``Broadline'' or ``Starburst/Broadline'', the latter corresponding more to lower luminosity objects where the broad-line component is not dominant.}, with single-epoch spectra only. None of them is coincident with, or close to, the date of the Alert, so that no clue can be given on the cause of the Alert (either a SN, or intrinsic variability of the AGN, or a Tidal Disruption Event). But twelve Gaia alerts and twenty-three GNT targets have two or more spectra classified as ``QSO'' in these surveys, and are therefore more interesting to consider in detail. For a given object, we have normalized the spectra in flux to each other at the red end (where atmospheric dispersion is minimal), to reveal possible spectral changes. In many cases, no significant changes were noticed. Taking into account a probable $10\%$ (or sometimes even larger) uncertainty in the blue fluxes of the LAMOST spectra, we consider in this work principally changes in the broad emission lines to qualify for a ``changing look'' quasar, as described in \citet{mcleod16}, while changes in the continuum are considered as an additional, less reliable argument. A dozen targets (out of 35) show some changes in either the emission lines or in the intensity of the continuum in the blue, or both, as illustrated in Figures \ref{fig:Gaia_QSOs} and \ref{fig:GNTlist_QSOs}. All of them are classified as QSOs (except Gaia16abw, which is in fact a blazar).
For instance, the 2 SDSS spectra of Gaia16abw, taken 12 years apart, show a difference of more than a factor of 2 in the continuum, with corresponding changes in the intensity of the emission lines, but an otherwise totally similar general shape: the normalization then reveals a significant decrease in the intensity of the MgII 2800$\AA$ emission line between 2002 and 2014, its EW having changed by more than a factor of two, see Figure \ref{fig:Gaia_QSOs}, and a similar decrease of the CIII] 1909 line. Given the high redshift of this blazar (z = 1.41), it is unlikely that this change is due to the change of entrance fiber, from 3$\arcsec$ to 2$\arcsec$, from DR9 onwards, so it should be considered as real. The Gaia data, between early 2015 and the end of 2019, indeed show its G magnitude oscillating between 19.5 and 15 (the Alert being issued on a high point in Jan. 2016), and the spectra are consistent with the lines responding to a change in the continuum. Gaia16acj, a QSO at redshift 0.17, shows significant changes in the broad H$\beta$ and H$\alpha$ line intensities, and the LAMOST spectrum was taken only two days after the Gaia alert (Feb. 5th, 2016, pointing to a brightening of the QSO): the smaller equivalent width of the emission lines seen in the LAMOST spectrum may thus indeed be due to the brightening of the continuum, before the lines could respond to this change. Gaia17bum, observed at three different epochs, shows a significant increase in the MgII, H$\delta$, H$\gamma$ and H$\beta$ emission intensities between the first spectrum of Nov. 3rd, 2015 and the last one (Jan. 1st, 2016, blue line in Fig. \ref{fig:Gaia_QSOs}), as well as a continuum getting bluer. Although the Gaia Alert was emitted about 1.5 years after this last spectrum, the Gaia light curve showed some previous peaks (not alerted on) closer to the spectrum, for instance on Feb. 8th, 2016. But it apparently remained in the low state between June 2015 and the last low point on Jan.
9th, 2016 (that is, after the last LAMOST spectrum of Jan. 1st, showing significant increases), before getting high on Feb. 8th. There thus seems to be a lag between photometric and spectroscopic changes, the value of which is however impossible to assess here with such irregular data points (no Gaia points are available at dates close to the two other, earlier LAMOST spectra). The interval between significant changes, as seen from its Gaia light curve, seems to be some weeks or months, but not years. \\ Two other Gaia Alerts show some weaker changes: Gaia16ack in the H$\beta$ line, and Gaia17czm in the CIV line, but the odd shape around 5900$\AA$ (due to the LAMOST red/blue band splitting) makes the estimate of the continuum rather uncertain there. \begin{table*} \footnotesize \caption{Changing look AGN and QSOs.} \label{table_cl} \centering \begin{tabular}{l c c c c c c c c} \hline\hline Name & RA & Dec & Pre Gmag & Alert Gmag & Epochs & Redshift & Emission Line & Continuum Shape \\ \hline Gaia16abw & 158.46428 & 60.85202 & & 15.66 & 2 & 1.40871 & decrease & \\ Gaia16acj & 196.21654 & 11.13351 & & 18.01 & 2 & 0.16914 & decrease & \\ Gaia17bum & 18.209680 & 32.13809 & 17.31 & 15.75 & 3 & 0.60384 & increase & bluer \\ \hline \hline Name & RA & Dec & Pre Gmag & Gmag Peak & Epochs & Redshift & Emission Line & Continuum Shape \\ \hline GNTJ143445.35+332820.56 & 218.68896 & 33.47238 & 19.21 & 18.85 & 3 & 0.197561 & decrease & \\ \hline GNTJ092334.70+281526.86 & 140.89458 & 28.25746 & 19.61 & 19.17 & 2 & 0.55305 & decrease & \\ GNTJ100036.45+511652.91 & 150.15186 & 51.28136 & 19.67 & 18.69 & 3 & 0.11668 & decrease & \\ GNTJ100220.18+450927.30 & 150.58410 & 45.15758 & 19.32 & 18.60 & 3 & 0.40078 & decr/incr & redder/bluer\\ GNTJ102707.45+602633.62 & 156.78105 & 60.44267 & 18.92 & 18.42 & 2 & 0.37012 & increase & bluer \\ GNTJ131428.09+054307.34 & 198.61705 & 5.71871 & 19.63 & 18.87 & 2 & 0.16331 & decrease & redder \\ GNTJ131437.60+142503.90 & 198.65667 & 14.41775 & 19.91 & 19.32 & 2 &
0.2506 & decrease & redder \\
GNTJ150019.08+000249.02 & 225.07951 & 0.04695 & 19.08 & 18.39 & 3 & 0.37637 & increase & bluer \\
\hline
\end{tabular}
\end{table*}
\begin{figure*} \includegraphics[width=3.4in]{zyhuo_fig6a.eps} \includegraphics[width=3.4in]{zyhuo_fig6b.eps} \includegraphics[width=3.4in]{zyhuo_fig6c.eps} \includegraphics[width=3.4in]{zyhuo_fig6d.eps} \caption{Three Gaia alerts of QSOs showing changes in the broad emission lines and/or in the continuum, with multi-epoch spectra normalized at the red end. The light curve of Gaia17bum is from Gaia Alerts, with the dates of the three spectra marked by red vertical lines. } \label{fig:Gaia_QSOs} \end{figure*} \begin{figure*} \includegraphics[width=3.4in]{zyhuo_fig7a.eps} \includegraphics[width=3.4in]{zyhuo_fig7b.eps} \includegraphics[width=3.4in]{zyhuo_fig7c.eps} \includegraphics[width=3.4in]{zyhuo_fig7d.eps} \includegraphics[width=3.4in]{zyhuo_fig7e.eps} \includegraphics[width=3.4in]{zyhuo_fig7f.eps} \includegraphics[width=3.4in]{zyhuo_fig7g.eps} \caption{Seven QSOs from the GNT list with multi-epoch spectra (normalized at the red end) showing changes in the broad emission lines and/or in the blue continuum. For GNTJ100036+5116, the spectra are shown without normalization to each other, for clarity. } \label{fig:GNTlist_QSOs} \end{figure*} Among the QSOs from the GNT list with multi-epoch spectra, about ten display changes, seven of which are strong; see Figure \ref{fig:GNTlist_QSOs}. This larger fraction is obviously due to the target selection in this list (``Nuclear Transients"). In GNTJ092334+2815 and GNTJ100036+5116, the broad H$\beta$ line disappeared in the LAMOST spectra compared to the SDSS ones taken about 10 years earlier, while the H$\alpha$ line also looks weaker in GNTJ100036+5116 (but it falls outside the red end of the spectrum of GNTJ092334+2815 and thus cannot be checked for that object).
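The equivalent-width comparisons made in this section follow the standard definition $\mathrm{EW}=\int(1-F_\lambda/F_{\rm cont})\,d\lambda$, negative for emission lines; a minimal numerical sketch on a fully synthetic spectrum (all values and names here are ours, not data from the paper):

```python
import numpy as np

def equivalent_width(wave, flux, cont):
    """EW = integral of (1 - F/F_cont) over wavelength (trapezoidal rule);
    negative values correspond to emission lines."""
    y = 1.0 - flux / cont
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(wave)))

# synthetic Gaussian emission line (amplitude 2, sigma 5 A) on a flat continuum
wave = np.linspace(4800.0, 5200.0, 4001)
flux = 1.0 + 2.0 * np.exp(-((wave - 5000.0) ** 2) / (2.0 * 5.0 ** 2))
ew = equivalent_width(wave, flux, np.ones_like(wave))
```

For a Gaussian line of amplitude $A$ and width $\sigma$ on a unit continuum this returns approximately $-A\sigma\sqrt{2\pi}$, so a factor-of-two EW change corresponds directly to a factor-of-two change in the line-to-continuum contrast.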
The quasar GNTJ100220+4509 was found by Gaia to have increased its G magnitude from 19.32 to 18.64 \citep{kostrz18} on Sept. 17th, 2016, thus after the last SDSS spectrum was taken. The first SDSS spectrum, from Apr. 12th, 2002, shows a classical broad H$\beta$ component with weak [OIII] lines. Only the blue side of the broad H$\alpha$ component is visible at the edge of the spectrum because of its redshift z = 0.401. The second SDSS spectrum, obtained on Jan. 26th, 2014, thus still before the Gaia Alert, shows that the broad H$\beta$ component has disappeared. The blue continuum has also slightly weakened. We obtained a new, ground-based spectrum in Feb. 2019 with the Nordic Optical Telescope (NOT), which shows that the broad H$\beta$ component has reappeared, thus within an interval of 5 years, see Fig. \ref{fig:GNTlist_QSOs}. We are therefore in the presence of a ``changing look quasar", as described in McLeod et al. (\citeyear{mcleod16})\footnote{Note that this object is the only one in common with their sample}, showing both the disappearance and the reappearance of the broad Balmer component. The fact that the observing dates of Gaia and the ground facilities do not coincide does not allow us to decide whether the reappearance of the broad Balmer component is directly linked to the brightness increase detected by Gaia in 2016, nor what the time delay between photometric and spectroscopic changes would be. GNTJ102707+6026 shows a clear increase in the blue continuum from 2002 to 2014, with no changes in the Balmer lines. This continuum increase also better reveals the strength of the FeII multiplets redwards of the MgII line (e.g. \citealt{grandi81} for reference), but the alerting date (Sep. 9th, 2016) is not covered by the spectra. For GNTJ131428+0543, the Balmer emission lines (except H$\alpha$) have disappeared, and the blue continuum dropped, between the SDSS spectrum of March 2002 and the LAMOST spectrum of Feb.
8th, 2016, a few months before Gaia alerted on a change on July 11th of the same year. For GNTJ131437+1425, the situation is almost the same, but the drop in the blue continuum is only marginal. Another case, with opposite changes, is detected in GNTJ150019+0002: a broad H$\beta$ component, weak or absent in the two SDSS spectra of 2000 and 2001, has appeared in the LAMOST spectrum of May 23rd, 2017, that is, a few months after the Gaia detection of Feb. 13th of the same year, while no change is noticed in the H$\alpha$ line (which sits, however, at the red edge of the spectrum). The blue continuum has also increased, and there is evidence that the FeII multiplets redwards of the MgII line have also increased. We thus have here another clear case of a ``Changing Look Quasar", possibly associated with the magnitude change noticed by Gaia. We list the information for these ten CLQs, as well as the one CLAGN of Section 3.2, in Table \ref{table_cl}. We also note three cases with relatively weaker changes: GNTJ121613+5242 shows a weak change in the shape of the H$\delta$ and H$\beta$ lines, GNTJ150149+2830 in H$\beta$ only, while GNTJ232841+2248 displays changes in the shapes of both H$\beta$ and H$\alpha$. From the few cases seen here, it thus seems that an increase in H$\beta$ line intensity comes together with an increase of the blue continuum, and vice-versa. While this small sample is not statistically sufficient, it is in line with the findings of \cite{mcleod16} and almost doubles their sample. All the objects investigated here, like theirs, were first selected because of photometric variability. On the contrary, \cite{Yang18} have recently looked for CLQs by searching directly for repeat spectra in the LAMOST or SDSS databases and found another 21 cases, confirming the trend of bluer when brighter; surprisingly, however, there is no object in common with our sample! The reason for this (selection effects?) needs to be investigated further.
Similarly, while this paper was with the referee, \cite{Graham2020} published a list of 73 CLQs selected by photometric variability in the CRTS survey of known quasars, with confirmed spectroscopic changes: none of them is in common with our list, but a full cross-check of the Gaia Alerts with CRTS remains to be done. What is really needed in the future is closer monitoring in time, to establish the possible link, and the time delay, between the changes in magnitude and the observed spectral changes. \subsection{Various} Eight objects (7 Gaia alerts and 1 GNT) have LAMOST spectra whose (automatic) classification was marked as ``Unknown", essentially because of too low an S/N ratio, but none of the spectra was coincident in time with the Alert. A closer examination of them allows us to extract further information. For GNT154833-0017, alerted on Jan. 16, 2017, a much earlier (2012) LAMOST spectrum exists, but its low S/N did not allow any conclusion about the nature of the galaxy. A new NOT spectrum was obtained by us on Feb. 28, 2019, showing a galaxy at redshift 0.062 with no emission lines (see Fig. \ref{fig:GNT1548}). This WISE galaxy is catalogued at mag g=17.6, without redshift, and is therefore the host of the event, but no clue can be given about the nature of that event. Note that another WISE galaxy is found about 1' NE with z = 0.061 and thus belongs to the same group. \begin{figure} \centering \includegraphics[width=3.3in]{zyhuo_fig8.eps} \caption{Spectrum of the galaxy GNT J154833-0017 obtained at the NOT in Feb. 2019.} \label{fig:GNT1548} \end{figure} Gaia17cdi was alerted as a candidate Ia SN on August 22, 2017, but the LAMOST spectrum dates back to Jan. 29, 2014, and possibly shows H$\alpha$, [NII] and [SII] in emission, indicating in fact a galaxy at a redshift of $\sim 0.073$, which however needs confirmation.
Nothing can be concluded from the LAMOST spectra or from the extra-galactic databases for Gaia17cxa (no catalogued host) or Gaia18bxf (coincident with a 2MASX host, but no redshift available). For Gaia17djf, the much earlier (Oct. 2013) LAMOST spectrum in fact clearly shows H$\alpha$ and [NII] in emission, as well as Na in absorption, thus giving a redshift of 0.048, consistent with the value of 0.049 found in the databases for a 2MASX galaxy (some other, broader features appear further to the red, possibly related to another, background object; this needs confirmation). For Gaia18adz, the 2MASX host has no measured redshift in the databases, but the LAMOST spectrum displays a possible H$\alpha$ in emission, suggesting a redshift of $\sim 0.19$; this, however, needs confirmation (the corresponding absolute magnitude for the SN candidate would be too high). For Gaia18aov, the LAMOST spectrum taken several years earlier (June 2012), although classified as ``Unknown", clearly shows H$\alpha$, [NII] and [SII] in emission, indicating a redshift of 0.0118 and giving a plausible absolute magnitude of $-15.3$ for the SN candidate. This is in line with the redshift of 0.011826 found in the NED database for a very nearby galaxy, CGCG 275-003, which is thus probably the host of the SN. Finally, Gaia18bzv, a possible SN candidate in the galaxy UGC2536, has been identified as a type Ia SN (SN2018eqq) at z = 0.016 (corresponding to the redshift of UGC2536), and the LAMOST spectrum of Nov. 2012, although of low S/N and classified as ``Unknown", shows a strong Na absorption, and weaker MgI or H$\alpha$ absorptions, at this redshift, and has therefore recorded the spectrum of the host galaxy. \section{Conclusions} \begin{enumerate} \item We have compared over 6300 Gaia Alerts, complemented with 482 Gaia Nuclear Transients (GNT), with objects whose spectra had been recorded in the LAMOST spectroscopic survey and/or in the SDSS DR15 survey.
\item We have identified, among the Gaia Alerts, 14 stars, 157 galaxies and 30 QSOs with single-epoch spectra, plus 42 galaxies and 68 QSOs among the GNT candidates. \\ In addition to those, some objects have multi-epoch spectra (without distinction between LAMOST and SDSS): we identify 12 stars (plus a mis-identified one which is in fact a BL-Lac object), 12 QSOs and 49 galaxies in the Gaia alert sample, and 13 galaxies and 23 QSOs in the GNT sample. \\ The relatively small number of identifications (compared to the total number of input objects) is partly due to the limiting magnitude and sampling of the spectral observations, and to the fact that both LAMOST and SDSS cover only the northern part of the sky while Gaia is an all-sky survey. \item Among the Gaia Alerts having LAMOST or SDSS spectra labelled as ``STARS", most (13) are cataclysmic variables, some of which display double-peaked emission lines. \item For most galaxies, the alert was probably caused by a SN explosion, but no confirmation can be given here as the spectra do not coincide in time with the Alert. In only two cases, GNT J170213+2543 and Gaia17aal, are the ground-based spectra contemporaneous with the alerting date, and their spectra reveal residuals from a SN: these two previously unclassified candidates are now confirmed here as type Ia SNe. \item Ten quasars with multi-epoch spectra show significant changes in the broad emission lines, sometimes accompanied by changes in the slope of the blue continuum. A few candidates with weaker changes are also identified. These changes qualify these objects as ``Changing Look Quasars", as described in \cite{mcleod16}, almost doubling their sample, and confirm the trend of the object getting bluer, with a stronger broad Balmer component, when getting brighter.
What is needed to better understand the physics underlying those changes are spectra taken closer to the photometric Alert, and at regular intervals, to quantify the time scales and possible time delays between photometric and spectroscopic changes. \end{enumerate} \acknowledgments This work was initiated during a visit to SWIFAR (in the frame of the Chinese-French LIA ``Origins" program), whose hospitality and support are gratefully acknowledged by MD. ZYH thanks Dr. Wei Zhang, Zhongrui Bai and Dongwei Fan for helpful discussions. ZYH and XWL have been partially supported by the National Key Basic Research Program of China 2014CB845700. TMZ is supported by the NSFC 11633002. We acknowledge the use of data from the ESA Gaia Satellite, DPAC and the Photometric Science Alerts Team (http://gsaweb.ast.cam.ac.uk/alerts). \\ % The Guoshoujing Telescope (the Large Sky Area Multi-Object Fiber Spectroscopic Telescope LAMOST) is a National Major Scientific Project built by the Chinese Academy of Sciences. Funding for the project has been provided by the National Development and Reform Commission. LAMOST is operated and managed by the National Astronomical Observatories, Chinese Academy of Sciences. \\ % We used data from the Sloan Digital Sky Survey (SDSS) public releases. Funding for the SDSS has been provided by the Alfred P. Sloan Foundation, the U.S. Department of Energy Office of Science, and the Participating Institutions (details on the SDSS web site at www.sdss.org). \\ This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France, and of the NASA/IPAC Extragalactic Database (NED), which is funded by the National Aeronautics and Space Administration and operated by the California Institute of Technology. \\ We thank an anonymous referee for valuable comments which helped to improve the manuscript. \\ \nocite{*} \bibliographystyle{spr-mp-nameyear-cnd}
\section{Introduction} The totally asymmetric simple exclusion process (TASEP) is a stochastic interacting particle system which serves as a paradigmatic model for nonequilibrium behaviour \cite{Spoh91,Ligg99,Gunter}. The dynamics of this lattice gas model is characterized by the updating law. In one dimension on the integer lattice ${\mathbb Z}$ the most important cases of discrete-time updates are the backward-sequential, parallel and sublattice parallel updates \cite{Raje98}. For a finite number of particles these dynamics can be defined through a master equation of the form \bel{1-1} P(\mathbf{x},t+1) = \sum_{\mathbf{x}'} p_{\mathbf{x},\mathbf{x}'} P(\mathbf{x}',t) \end{equation} where ${\bf x} = \{x_i\}$ describes the positions of the particles and $p_{\mathbf{x},\mathbf{x}'}$ is the transition probability to go in one time step from a configuration ${\bf x}^{'}$ to a configuration ${\bf x}$. This transition probability is different for the various update schemes. For the backward-sequential update, each particle may take one step to the right with probability $v$ if the target site is vacant at the beginning of the time step or becomes vacant at the end of the time step (due to motion of the particle in front). For the parallel update, the motion to the right is allowed only if the target site is vacant at the beginning of the time step. By iterating (\ref{1-1}) one obtains the solution of the master equation for any given initial configuration $\mathbf{x}^0$, i.e., the conditional probability to find a particle configuration $\mathbf{x}$ at time step $t$, given that the process started from configuration $\mathbf{x}^0$. These stochastic many-body dynamics have a natural interpretation in field-theoretic terms \cite{Matt98,Gunter} where specific realizations of the process correspond to paths in the path integral representation of field theoretic quantities.
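For a small number of particles, the master equation (\ref{1-1}) can be iterated directly on a computer; a minimal sketch for two particles on ${\mathbb Z}$ under the parallel update (all conventions and names here are ours):

```python
from itertools import product

def step_parallel(dist, v):
    """One parallel-update step of the master equation for two particles on Z:
    a particle hops one site to the right with probability v iff its target
    site is empty at the *beginning* of the time step."""
    new = {}
    for (x1, x2), p in dist.items():
        can1 = x2 > x1 + 1              # left particle is blocked if x2 = x1 + 1
        for h1, h2 in product((0, 1), repeat=2):
            if h1 and not can1:
                continue                 # blocked particle cannot hop
            w1 = (v if h1 else 1.0 - v) if can1 else 1.0
            w2 = v if h2 else 1.0 - v    # right particle is never blocked
            y = (x1 + h1, x2 + h2)
            new[y] = new.get(y, 0.0) + p * w1 * w2
    return new

dist = {(0, 1): 1.0}                     # start from two adjacent particles
for _ in range(5):
    dist = step_parallel(dist, 0.3)
```

The iteration conserves total probability and the exclusion constraint $x_1 < x_2$ at every step, which is a useful sanity check for any update rule implemented this way.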
Therefore, in analogy to the corresponding terminology in field theory, we refer to this time-dependent conditional transition probability as the Green function. For the first two cases, backward-sequential and parallel update, the Green functions of transition probabilities have been found by explicit solution of the master equations for the systems defined on an infinite lattice \cite{Shelest,Rako05,parallel}. The Green function has a determinantal representation similar to the one first discovered for the continuous-time definition of the process \cite{Schu97} where particles jump independently after an exponentially distributed random time with fixed rate 1 \cite{Spoh91,Ligg99,Gunter}. This representation allows for a direct derivation of the current distribution \cite{Joha00,Naga04,Rako05} and has inspired a considerable amount of subsequent detailed analysis of dynamical properties of the TASEP and related models, see e.g. \cite{Sasa05, Sasa07,Boro07,Boro08}, and also of the ASEP where particles are allowed to jump in both directions \cite{Tracy1,Tracy2}. The third type of discrete-time update, sublattice parallel, was first considered in some detail in \cite{sublattice,open} and has subsequently been studied for various applications both analytically and numerically \cite{Raje98,anal}. In this paper, we derive the Green function of the TASEP with sublattice parallel update, which is defined as follows. Consider the process on ${\mathbb Z}$, i.e. the one-dimensional infinite chain. Each site, labeled by an integer $i$, is occupied by at most one particle, which can hop only to the right in discrete time. At the first moment of time, we look at all $(2i,2i+1)$ pairs. In each of them, if site $2i+1$ is free and site $2i$ is occupied, the particle at site $2i$ hops to the right with probability $v$ and does not move with probability $1-v$.
If both sites in a pair are occupied, or both empty, or if site $2i$ is empty and site $2i+1$ is occupied, the pair remains unchanged at that time step. At the next time step we apply this hopping rule to all pairs $(2i+1,2i+2)$. Continuing, we apply the updating rule to $(2i,2i+1)$ pairs at each odd moment of time and to $(2i+1,2i+2)$ pairs at each even moment\footnote{We remark that these dynamics can be interpreted as the action of the transfer matrix for the six vertex model on a diagonal lattice \cite{Kand90,sublattice,open,Hone97}.}. \section{The equivalence of the TASEP with sublattice-parallel and backward-sequential updates} As noted above, the conditional probability to find $N$ particles at positions $x_1 < x_2 < \ldots < x_N$ at discrete time $t$ if they were at positions $x_1^0 < x_2^0 < \ldots < x_N^0$ at the initial moment of time is called the Green function of the process. The discrete space-time dynamics can be described by a set of trajectories on a triangular lattice, which is obtained from the square lattice by adding a diagonal bond between the upper left and the lower right corners of each elementary square. When occupied by a trajectory, diagonal bonds carry a statistical weight $v$, while vertical ones carry weight $1$ or $1-v$. It is convenient to draw trajectories of particles on a chessboard (Fig. \ref{Fig1}), where black circles show the initial positions of the particles. We notice that diagonal bonds of trajectories can be located only on white squares. \begin{figure}[tbp] \unitlength=1mm \makebox(60,60)[cc] {\psfig{file=Fig1.eps,width=60mm}} \caption{Space-time trajectories of four particles with appropriate weights on a chessboard.} \label{Fig1} \end{figure} If we select a sublattice which contains the upper left and lower right sites of the white squares of the chessboard, denoted by white circles in Fig. \ref{Fig1}, we can see that particles effectively move on the sublattice of white vertices.
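The sublattice-parallel rule defined above is straightforward to simulate directly; a minimal sketch on a finite chain with free ends (conventions ours; `occ` is the 0/1 occupation list):

```python
import random

def sublattice_step(occ, t, v, rng):
    """One sublattice-parallel update of the occupation list `occ`:
    at odd t the pairs (2i, 2i+1) are updated, at even t the pairs
    (2i+1, 2i+2); a particle hops right with probability v iff the
    right member of its pair is empty."""
    start = 0 if t % 2 == 1 else 1
    for i in range(start, len(occ) - 1, 2):
        if occ[i] == 1 and occ[i + 1] == 0 and rng.random() < v:
            occ[i], occ[i + 1] = 0, 1
    return occ

rng = random.Random(0)
occ = [1, 0, 0, 0, 0, 0]
for t in range(1, 5):      # with v = 1 a lone particle advances one site per step
    sublattice_step(occ, t, 1.0, rng)
```

Note that with $v=1$ a single particle deterministically advances one site per time step, because the alternation of pair sublattices always presents it with an updatable pair.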
Exceptions occur at the start and at the end of the trajectories, so we have to consider four different cases to obtain a generalized determinant formula for the Green function. Consider first the case when the space-time trajectories of $N$ particles start and end on the sublattice of white vertices, Fig. \ref{Fig2}a (the case of arbitrary initial conditions will be considered in the next section). If we choose initial points on the white vertices of the first row, the coordinates of particles $\{x_i^0\}$ at initial times $\{T_i^0=0\}\;(i=1,2,\ldots,N)$ are even. \begin{figure}[tbp] \unitlength=1mm \makebox(150,95)[cc] {\psfig{file=Fig2.eps,width=150mm}} \caption{Space-time trajectories of particles starting and ending on the sublattice of white vertices (a) and the same trajectories rotated by 45 degrees clockwise (b). The time axis on the rotated lattice is directed down.} \label{Fig2} \end{figure} The first transformation we use is a rotation of the set of trajectories by $\pi/4$ clockwise around the initial point $\{x=0,\; T=0\}$. Considering the vertical axis as the new time coordinate, we obtain a new set of trajectories (Fig. \ref{Fig2}b). This set represents a new discrete-time process on the square lattice with the unit time and space intervals corresponding to vertical and horizontal distances between neighboring sites. Starting points change their space-time coordinates as \begin{eqnarray} \nonumber T_i'^0 &=& x_i^0/2,\\ x_i'^0 &=& x_i^0/2. \label{rotation} \end{eqnarray} Now vertical bonds have weights $v$ and diagonal ones have weights $1$ or $1-v$. We want to map them to the space-time paths of particles of the TASEP with backward-sequential update. To this end, we shift the coordinates in each row with respect to the row above by $i\rightarrow i+1$. Due to this second transformation, vertical and diagonal bonds are interchanged, as shown in Fig. \ref{Fig3}.
The transformation of coordinates (in new units) can be written as \begin{equation} (x'',T'')=(x,\frac{x+T}{2}). \label{second_trans} \end{equation} \begin{figure}[tbp] \unitlength=1mm \makebox(60,70)[cc] {\psfig{file=Fig3.eps,width=60mm}} \caption{Final worldlines of the particles after both transformations.} \label{Fig3} \end{figure} From Fig. \ref{Fig3} we see that worldlines on the transformed lattice represent trajectories of particles of the TASEP with backward-sequential update with initial space-time coordinates $\{x_i''^0,T_i''^0\}$ and final coordinates $\{x_i'',T_i''\},\; i=1,2,\ldots,N$. The transition probability from space-time coordinates $\{x_i''^0,T_i''^0\}$ to $\{x_i'',T_i''\}$ is given by the generalized determinant formula \cite{Shelest} \begin{equation} P\left(x_1'',T_1'';x_2'',T_2'';\ldots;x_N'',T_N''|x_1''^0,T_1''^0;x_2''^0,T_2''^0;\ldots;x_N''^0,T_N''^0\right)=\det M^{(N)}, \label{determ} \end{equation} where the matrix elements of the $N\times N$ matrix $M^{(N)}$ are \begin{equation} M_{ij}^{(N)}=F_{i-j}\left(x_i''-x_j''^0,T_i''-T_j''^0\right), \label{matelem} \end{equation} with the function $F_{m}\left(x,T\right)$ introduced in \cite{Shelest} \begin{equation} F_{m}(x,T)=\frac{1}{2\pi i}\int_{|z|=1-0}dz(1-v+\frac{v}{z})^T(1-z)^{-m}z^{x-1}. \label{ffunc} \end{equation} Substituting the transformation (\ref{second_trans}) into the determinant formula (\ref{determ}) we obtain the Green function of the TASEP with sublattice parallel update \begin{equation} P\left(x_1,x_2,\ldots,x_N|x_1^0,x_2^0,\ldots,x_N^0;T\right)=\det M^{(N)} \label{determ2a} \end{equation} with matrix elements \begin{equation} M_{ij}^{(N)}=F_{i-j}\left(x_i-x_j^0,\frac{T+x_i}{2}-\frac{x_j^0}{2}\right). \end{equation} \begin{figure}[h!]
\includegraphics[width=150mm]{Fig4.eps} \caption{\label{Fig4} Space-time trajectories of particles in the case when the $i$-th worldline starts from a non-white vertex and ends on a white one (a), and the same trajectories rotated by 45 degrees clockwise (b).} \end{figure} \section{Other cases of starting and ending points} Consider the case when the $i$-th particle starts its motion from an odd site and ends on an even one (Fig. \ref{Fig4}a). As the weight of the first bond of the $i$-th trajectory is $1$, we can set the beginning of motion at the nearest white (sublattice) site and then rotate the sublattice as in the previous case (Fig. \ref{Fig4}b). Applying the shift transformation, we obtain for the initial coordinates of the $i$-th particle: \begin{equation} (x_i^{''0},T_i^{''0})=(x_i^0,\lceil x_i^0/2\rceil) \label{second_case} \end{equation} where $\lceil x\rceil$ is the ceiling function. Substituting the expressions for all starting and end points into the determinant formula (\ref{determ}), we derive the Green function for this case. The third case is when the $i$-th trajectory starts from a white (even) vertex and ends on a non-white one. We see that if we add an additional vertical bond with weight $1$ to the last node of that trajectory, the total weight of the whole path will not change. Then we can set the endpoint of the $i$-th particle at time $T+1$. Repeating the two transformations, we obtain for the coordinates of the end point: \begin{equation} (x''_i,T''_i)=(x_i,\lceil \frac{T+x_i}{2}\rceil) \label{third_case} \end{equation} The last case, when the $i$-th trajectory starts and ends on non-white vertices, is an obvious combination of the second and third cases.
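The building block $F_m(x,T)$ of (\ref{ffunc}) reduces to a finite sum, since the residue at $z=0$ picks out $\sum_j \binom{T}{j} v^j (1-v)^{T-j}\,[z^{j-x}](1-z)^{-m}$; this makes the determinant (\ref{determ}) easy to evaluate numerically. A pure-Python sketch (function names ours):

```python
from math import comb, factorial

def gbinom(a, k):
    """Generalized binomial coefficient C(a, k) for integer a (possibly negative)."""
    num = 1
    for i in range(k):
        num *= a - i
    return num // factorial(k)

def F(m, x, T, v):
    """F_m(x, T) evaluated at hop probability v: the z^{-1} coefficient of
    (1-v+v/z)^T (1-z)^{-m} z^{x-1}, i.e. the contour-integral definition
    with the residue taken at z = 0."""
    total = 0.0
    for j in range(max(x, 0), T + 1):
        k = j - x                           # power of z needed from (1-z)^{-m}
        total += comb(T, j) * v**j * (1.0 - v)**(T - j) * gbinom(m + k - 1, k)
    return total

def P2(x1, x2, T, v, x10=0, x20=1):
    """Two-particle backward-sequential Green function det M^(2)."""
    a = F(0, x1 - x10, T, v)
    b = F(-1, x1 - x20, T, v)
    c = F(1, x2 - x10, T, v)
    d = F(0, x2 - x20, T, v)
    return a * d - b * c
```

For one particle this reproduces the binomial propagator $F_0(x,T)=\binom{T}{x}v^x(1-v)^{T-x}$; for two particles starting at $(0,1)$ and $T=1$ the determinant gives $(1-v)$, $v(1-v)$ and $v^2$ for the three reachable configurations $(0,1)$, $(0,2)$ and $(1,2)$, as expected from the backward-sequential rule.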
Generalizing all four cases of boundary conditions, we derive the following transformations of the initial and final coordinates for all types of trajectories \begin{eqnarray} \nonumber T_i''^0 &=& \lceil x_i^0/2 \rceil,\\ \nonumber x_i''^0 &=& x_i^0,\\ \nonumber T_i'' &=& \left\lceil \frac{T+x_i}{2}\right\rceil,\\ x_i'' &=& x_i. \label{generaltransform} \end{eqnarray} Substituting the new coordinates (\ref{generaltransform}) into the determinant formula (\ref{determ}) we obtain the Green function of the TASEP with sublattice parallel update \begin{equation} P\left(x_1,x_2,\ldots,x_N|x_1^0,x_2^0,\ldots,x_N^0;T\right)=\det M^{(N)}, \label{determ2} \end{equation} with matrix elements \begin{equation} M_{ij}^{(N)}=F_{i-j}\left(x_i-x_j^0,\left\lceil \frac{T+x_i}{2} \right\rceil-\left\lceil\frac{x_j^0}{2}\right\rceil\right). \label{final} \end{equation} \section{Discussion} Having explicit determinant expressions for the Green function, we can compare their relative advantages and disadvantages for the three basic updates: the backward-sequential, parallel and sublattice-parallel ones. Criteria for the comparison follow from the practical use of the Green function in probabilistic calculations. To find a probability distribution for a selected particle or a correlation function for several particles in the TASEP, the detailed information contained in the function $P\left(x_1,x_2,\ldots,x_N|x_1^0,x_2^0,\ldots,x_N^0;T\right)$ should be reduced by summation over a part of the final coordinates $\{x_i\}$ for fixed initial coordinates $\{x_i^0\}$ (see e.g. \cite{Rako05}). Then, the first criterion for the comparison is the simplicity of the summation procedure in the different cases. The second criterion is the simplicity of the matrix $M^{(N)}_{ij}$ itself, because asymptotic calculations for large $N$ and $T$ require an elaborate analysis of the resulting determinant expressions (see e.g. \cite{Boro07,Boro08}).
The third criterion is the presence or lack of particle-hole symmetry, which is essential for the derivation of single-particle probability distributions in some particular cases \cite{Rako05}. (A) {\it The backward-sequential update.} The form of the matrix elements $M^{(N)}_{ij}$ in this case is especially simple \begin{equation} M^{(N)}_{ij}=F_{i-j}(x_i-x_j^0,T) \label{BSU} \end{equation} where the function $F_m(x,T)$ is given by Eq.~(\ref{ffunc}). The Green function $P\left(x_1,x_2,\ldots,x_N|x_1^0,x_2^0,\ldots,x_N^0;T\right)$ is uniform in the variables $\{x_i\}$, so the summation procedure is straightforward \cite{Rako05}. A shortcoming of this update is the lack of particle-hole symmetry. Indeed, due to the transitions possible within one time step, $x_i \rightarrow x_i+1,x_{i+1}=x_i+1 \rightarrow x_{i+1}+1, \dots, x_{i+k}=x_{i+k-1}+1 \rightarrow x_{i+k}+1$, a hole can move in the opposite direction by jumps of length $k>1$. (B) {\it The parallel update.} The form of the matrix $M^{(N)}_{ij}$ in this case is more complicated \cite{parallel}: \begin{equation} M^{(N)}_{ij}=\tilde{F}_{i-j}(x_i-x_j^0,T) \label{PAR} \end{equation} where \begin{equation} \tilde{F}_{\pm m}(N,T)=\sum_{n=0}^m\sum_{k=-n}^{\infty}(\pm 1)^n\frac{m(m+k+n-1)!}{(k+n)!n!(m-n)!}(\pm \frac{v}{1-v})^n F_0(N\pm k,T) \label{PARffunct} \end{equation} The Green function obeys the particle-hole symmetry, but a drawback lies in the determinant formula \begin{equation} P\left(x_1,x_2,\ldots,x_N|x_1^0,x_2^0,\ldots,x_N^0;T\right)=(1-v)^n \det M^{(N)} \label{PAR-Green} \end{equation} which depends on the number of pairs $n$ of neighboring particles in the final configuration. Therefore, the sum over $\{ x_i\}$ splits into groups by the number of clusters of connected particles. (C) {\it The sublattice parallel update.} The Green function for this update is free of the shortcomings of the two previous cases.
It is uniform in the variables $\{x_i\}$, obeys the particle-hole symmetry and has a relatively simple analytical form (\ref{final}). The price for this advantage is the rather complicated time dependence in (\ref{final}), which involves both the initial and final coordinates and the ceiling function $\lceil x \rceil$. Thus, we may conclude that the proper choice of the discrete-time Green function strongly depends on the peculiarities of the corresponding probabilistic problem. \section*{Acknowledgments} This work was supported by the RFBR grants 07-02-91561-a, 09-01-00271-a and the DFG grant 436 RUS 113/909/0-1(R).
\section{Introduction} \noindent Let $G_n = GL_n, U_n, Sp_{2n}, O_n$ be one of the families of general linear, unitary, symplectic, and orthogonal groups. In this paper we consider the centres of the group algebras of $G_n(\mathbb{F}_q)$. Our main results are certain ``stability'' properties for the multiplication in $Z(\mathbb{Z}G_n(\mathbb{F}_q))$ which allow us to define a ``universal'' algebra $\mathrm{FH}_q^{G}$ interpolating these centres of group algebras in the parameter $n$. \newline \newline \noindent To illustrate the nature of these results, let us first consider the case of symmetric groups. The centre $Z(\mathbb{Z}S_n)$ has a basis consisting of conjugacy-class sums. Suppose that $g \in S_n$ has cycle type $\mu = (\mu_1, \mu_2, \ldots)$. The \emph{reduced cycle type} of $g$ is the partition $(\mu_1-1, \mu_2-1, \ldots)$ obtained by deleting the first column of the Young diagram of $\mu$. For any partition $\mu$, let $X_{\mu}$ denote the sum of all elements of reduced cycle type $\mu$ in $S_{n}$ (which is zero if there are no such elements); the nonzero $X_\mu$ form a basis of $Z(\mathbb{Z}S_n)$. For example, $X_{(1)}$ is the sum of all transpositions. Then the square of the sum of all transpositions in $S_n$ is \[ X_{(1)}^2 = 2 X_{(1,1)} + 3 X_{(2)} + {n \choose 2} X_{\varnothing}, \] which is valid for any $n \geq 0$. The key feature of the above equation is that the coefficients are polynomials in $n$. Farahat and Higman \cite{FarahatHigman} proved in general that $X_\mu X_\nu$ is a linear combination of $X_{\lambda}$ with coefficients in $\mathcal{R}$, the ring of integer-valued polynomials (which has a $\mathbb{Z}$-basis consisting of binomial coefficients). They constructed an $\mathcal{R}$-algebra, $\mathrm{FH}$, with an $\mathcal{R}$-basis given by symbols $K_\mu$, and multiplicative structure constants given by the polynomials we have just described.
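The displayed identity for $X_{(1)}^2$ can be checked by brute force in a small symmetric group; a minimal sketch for $S_4$ (conventions ours: permutations as tuples of images, with reduced types $(1,1)$, $(2)$, $\varnothing$ corresponding to cycle types $(2,2)$, $(3,1)$, $(1^4)$):

```python
from itertools import permutations

def compose(g, h):
    """(g.h)(i) = g(h(i)); permutations are tuples of images of 0..n-1."""
    return tuple(g[h[i]] for i in range(len(g)))

def cycle_type(g):
    """Cycle type of a permutation, as a sorted tuple of cycle lengths."""
    seen, ct = set(), []
    for i in range(len(g)):
        if i not in seen:
            j, c = i, 0
            while j not in seen:
                seen.add(j)
                j = g[j]
                c += 1
            ct.append(c)
    return tuple(sorted(ct, reverse=True))

n = 4
transpositions = [g for g in permutations(range(n)) if cycle_type(g) == (2, 1, 1)]
counts = {}
for g in transpositions:
    for h in transpositions:
        p = compose(g, h)
        counts[p] = counts.get(p, 0) + 1
```

Tallying `counts` by cycle type recovers the coefficients $2$, $3$ and $\binom{4}{2}=6$ in the identity: every element of a given reduced cycle type occurs with the same multiplicity, which is exactly the class-sum statement.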
By construction, there are surjective homomorphisms $\mathrm{FH} \to Z(\mathbb{Z}S_n)$ given by evaluating the polynomials in $\mathcal{R}$ at $n$ and sending $K_\mu$ to $X_\mu$. Farahat and Higman used this to study the modular representation theory of symmetric groups. The algebra $\mathrm{FH}$ turns out to be isomorphic to $\mathcal{R} \otimes \Lambda$, where $\Lambda$ is the ring of symmetric functions, which leads to a theory of ``content evaluation'' character formulae for the symmetric group \cite{CorteelGoupilSchaeffer}. Wang \cite{Wang2002} defined an analogous version of $\mathrm{FH}$ for the wreath products $G \wr S_n$, where $G$ is a fixed finite group, and used an associated graded version to study the cohomology of certain Hilbert schemes of points. We direct the reader to the prequel to the present paper, \cite{Ryba}, for more details regarding the algebra $\mathrm{FH}$. \newline \newline \noindent In this paper we construct an analogous version of $\mathrm{FH}$ for classical groups $G_n$, interpolating the centres $Z(\mathbb{Z}G_n(\mathbb{F}_q))$. For the case $G_n = GL_n$, an analogue of reduced cycle type for the general linear group called \emph{modified type} was defined by Wan and Wang \cite{Wan_Wang}. If $X_{\bm\mu}$ is the sum of all elements of $GL_n(\mathbb{F}_q)$ of modified type $\boldsymbol\mu$, let the structure constants $a_{\bm\mu\bm\nu}^{\bm{\lambda}}(n)$ be defined by \[ X_{\bm\mu} X_{\boldsymbol\nu} = \sum_{\boldsymbol\lambda} a_{\bm\mu\bm\nu}^{\bm{\lambda}}(n) X_{\bm\lambda}. \] Modified types have a notion of size denoted $| \cdot |$, which puts a filtration on $Z(\mathbb{Z}GL_n(\mathbb{F}_q))$. Wan and Wang show that if $\bm\mu,\bm\nu,\bm\lambda$ are modified types, the structure constants $a_{\bm\mu\bm\nu}^{\bm{\lambda}}(n)$ are nonzero only when $| \bm\lambda | \leq | \bm\mu | + | \bm\nu |$, and are independent of $n$ when equality is achieved. 
In particular, this enables them to construct an algebra that specialises to the associated graded algebra of $Z(\mathbb{Z}GL_n(\mathbb{F}_q))$ for each $n$. The equivalent result for $G_n = Sp_{2n}$ was obtained by \"{O}zden in \cite{OZDEN2021263}. We prove that for any $\bm\mu,\bm\nu,\bm\lambda$ the structure constants $a_{\bm\mu\bm\nu}^{\bm{\lambda}}(n)$ are polynomials in $q^n$ (and provide a bound on the degree), thus generalising the results mentioned above. This allows us to define an algebra $\mathrm{FH}_q^{G}$ with a basis $K_{\bm\mu}$ indexed by modified types and whose coefficients lie in the \emph{ring of quantum integer-valued polynomials}, $\mathcal{R}_q$, defined by Harman and Hopkins \cite{HarmanHopkins}. This comes equipped with ``specialisation'' maps $\mathrm{FH}_q^{G} \to Z(\mathbb{Z}G_n(\mathbb{F}_q))$, analogous to the case of $\mathrm{FH}$ for symmetric groups. \newline \newline \noindent In \cite{ivanovKerov2001symmetric}, Ivanov and Kerov construct the algebra $\mathrm{FH}$ in a very elegant way using a construction they call ``partial permutations''. These are pairs $(g, I)$ where $g \in S_\infty = \varinjlim_n S_n$ and $I$ is a finite subset of $\mathbb{Z}_{> 0}$ such that if $i \notin I$, then $i$ is a fixed point of $g$. Thus $I$ controls which symmetric groups $S_n \subseteq S_\infty$ can contain $g$. Inspired by the approach of Ivanov and Kerov, we define \emph{bounding pairs} (\emph{bounding triples} in the $GL_n$ case). These are pairs $(g, V)$ where $g \in G_\infty(\mathbb{F}_q) = \varinjlim_n G_n(\mathbb{F}_q)$ and $V$ is a finite-codimension subspace of the natural representation $\mathbb{F}_q^\infty$ of $G_\infty$ ($\mathbb{F}_{q^2}^\infty$ in the case of unitary groups) on which $g$ acts as the identity. Analogously, $V$ controls which $G_n(\mathbb{F}_q) \subseteq G_\infty(\mathbb{F}_q)$ can contain $g$.
A similar idea is considered for general linear groups in \cite{Meliot_GLnFq}, whose Theorem 3.7\footnote{Issues have been raised with the proof which we do not know how to address; see Remark 3.6 of \cite{Wan_Wang}.} is very similar to our Theorem \ref{gl_structure_constant_theorem}. \newline \newline \noindent The structure of the paper is as follows. In Section 2, we review some basic facts about quantum integer-valued polynomials and conjugacy classes in the general linear group $GL_n(\mathbb{F}_q)$. In Section 3, we recall the definitions of the classical groups and prove some basic facts about them and their conjugacy classes; a reader who is only interested in the $GL_n$ case may skip this section. In Section 4, we consider the general linear group and introduce bounding triples, which serve as an analogue to the partial permutations introduced in \cite{ivanovKerov2001symmetric}. We show that bounding triples form an algebra with an action of $GL_\infty(\mathbb{F}_q)$, and hence obtain an analogue of the construction of Ivanov and Kerov. In Section 5, we show that there are specialisation morphisms from the general linear Ivanov-Kerov algebra that surject onto $Z(\mathbb{Z}GL_n(\mathbb{F}_q))$. In Section 6, we explicitly construct the general linear Farahat-Higman algebra $\mathrm{FH}_q^{GL}$. In Section 7, we introduce bounding pairs, which are adapted for classical groups. Finally, in Section 8, we construct the Farahat-Higman algebras $\mathrm{FH}_q^{G}$ for unitary, symplectic, and orthogonal groups. \newline \newline \noindent Note that this paper is not the paper mentioned in \cite{Ryba} that will address Iwahori-Hecke algebras of type $A$ and connections to $GL_n(\mathbb{F}_q)$. That paper will appear in due course. \newline \newline \noindent \textbf{Acknowledgements.} The first author would like to thank his undergraduate advisor, Weiqiang Wang, for introducing him to this problem.
Both authors would like to thank Pavel Etingof for helpful discussions. This paper is based upon work supported by The National Science Foundation Graduate Research Fellowship Program under Grant No. 1842490 awarded to the first author. \section{Background} \subsection{Quantum Integer-Valued Polynomials} \label{qivp_subsection} Let $q$ be a formal variable, which later on we will often specialize to be a power of a prime number. For any integer $n$, we let $[n]_q$ denote the $q$-integer \[ [n]_q = 1 + q + \cdots + q^{n-1} = \frac{q^n - 1}{q - 1}. \] We can then define the $q$-factorial $[n]_q!$ of a $q$-integer $[n]_q$. Let $[0]_q! = 1$ and define \[ [n]_q! = [n]_q[n-1]_q \cdots [1]_q = \frac{(q^n-1)(q^{n-1} - 1)\cdots (q-1)}{(q-1)^n}. \] Finally, we can define the $q$-binomial coefficient, or Gaussian binomial coefficient, by \[ \qbinom{n}{k}_q = \frac{[n]_q!}{[k]_q![n-k]_q!} \] for $0 \leq k \leq n$, which turns out to be an element of $\mathbb{Z}[q,q^{-1}]$ (in fact of $\mathbb{Z}_{\geq 0}[q]$). Notice that at $q = 1$, we recover the usual definitions of an integer, factorial, and binomial coefficient. These $q$-integers are useful for describing various quantities associated to finite fields. Let us briefly set $q$ to be the size of the finite field $\mathbb{F}_q$. It is well known that \[ |GL_n(\mathbb{F}_q)| = (q^n-1)(q^n-q)\cdots (q^{n} - q^{n-1}) = q^{\frac{n(n-1)}{2}}(q-1)^n[n]_q!. \] Similarly, the number of $k$-dimensional subspaces of $\mathbb{F}_q^n$ is $\qbinom{n}{k}_{q}$. \newline \newline \noindent Returning to the case where $q$ is a formal variable, we now discuss the ring of \emph{quantum integer-valued polynomials} introduced by Harman and Hopkins in \cite{HarmanHopkins}. \begin{definition}[Section 4, \cite{HarmanHopkins}] \label{qivp_definition} Let $\mathcal{R}_q$ be the set of polynomials $f(x) \in \mathbb{Q}(q)[x]$ such that \[ f([n]_q) \in \mathbb{Z}[q,q^{-1}] \] for all $n \in \mathbb{Z}$.
\end{definition} \noindent It is not difficult to see that $\mathcal{R}_q$ is a ring (in fact a $\mathbb{Z}[q,q^{-1}]$-algebra). Much like the usual ring of integer-valued polynomials, this ring admits a concise explicit description. \begin{theorem}[Propositions 1.2, 4.3, \cite{HarmanHopkins}] \label{qivp_structure_theorem} As a $\mathbb{Z}[q,q^{-1}]$-module, $\mathcal{R}_q$ is free with basis \[ \qbinom{x}{k} = \frac{x(x-[1]_q) \cdots(x-[k-1]_q)}{q^{k \choose 2} [k]_q!} \] where $k \in \mathbb{Z}_{\geq 0}$. (Here $\qbinom{x}{0} = 1$.) \end{theorem} \noindent These basis elements obey \[ \qbinom{[n]_q}{k} = \qbinom{n}{k}_q, \] and this is the main reason why $\mathcal{R}_q$ will be important for us. We will require a ring that includes $q^{-1}$ and also $\qbinom{n}{k}_q$ where $n$ is viewed as a variable. We achieve this by working with $\qbinom{x}{k}$ and evaluating at $x = [n]_q$ where necessary. Performing the change of variables $t = 1 + (q-1)x$, we see that $x = [n]_q$ is equivalent to $t = q^n$, and \[ \qbinom{x}{k} = \frac{x(x-[1]_q) \cdots(x-[k-1]_q)}{q^{k \choose 2} [k]_q!} = \frac{(t-1)(t-q) \cdots (t-q^{k-1})}{(q^k -1) (q^k - q) \cdots (q^k - q^{k-1})}. \] If we express $\mathcal{R}_q$ in terms of the variable $t$ instead of $x$, we get the ring of polynomials $f(t) \in \mathbb{Q}(q)[t]$ such that $f(q^n) \in \mathbb{Z}[q,q^{-1}]$. \newline \newline \noindent The original definition of $\mathcal{R}_q$ using $\qbinom{x}{k}$ is better behaved at $q=1$ (our change of variables is singular at $q=1$). Working with the variable $t$ allows us to phrase statements in terms of evaluating at $q^n$, which may be slightly more transparent than evaluating at $[n]_q$. Ultimately we set $q$ to be the size of a finite field (obtaining a subring of $\mathbb{Q}[x]$ or $\mathbb{Q}[t]$), and the reader may choose whichever incarnation of $\mathcal{R}_q$ they prefer. \begin{lemma} \label{shifted_qivp_lemma} Let $d \in \mathbb{Z}$ and $h \in \mathbb{Z}_{\geq 0}$. 
Then there is an element $f(x) \in \mathcal{R}_q$ such that \[ f([m]_q) = \qbinom{m+d}{h}_q. \] \end{lemma} \begin{proof} We have \[ \qbinom{m+d}{h}_q = \qbinom{[m+d]_q}{h} = \qbinom{q^{d}[m]_q + [d]_q}{h}, \] which is an element of $\mathbb{Z}[q,q^{-1}]$, but is also the evaluation of $\qbinom{q^{d}x + [d]_q}{h} \in \mathbb{Q}(q)[x]$ at $x = [m]_q$. So $\qbinom{q^{d}x + [d]_q}{h}$ satisfies Definition \ref{qivp_definition} and is hence an element of $\mathcal{R}_q$. We conclude that $\qbinom{m+d}{h}_q$ is obtained by evaluating an element of $\mathcal{R}_q$ at $[m]_q$. \end{proof} \subsection{Multipartitions and Conjugacy Classes in \texorpdfstring{$GL_n$}{General Linear Groups}} A \emph{partition} $\lambda$ is a non-increasing sequence in $\mathbb{Z}_{\geq 0}$, $\lambda = (\lambda_1, \lambda_2, \ldots)$, of which only finitely many terms are nonzero. We write $\mathcal{P}$ for the set of partitions. The numbers $\lambda_1, \lambda_2, \ldots$ are called the \emph{parts} of $\lambda$. An alternative way of describing a partition is to specify the number of times a given element of $\mathbb{Z}_{>0}$ appears. So if $\lambda$ has $m_i$ parts of size $i$, we write $\lambda = (1^{m_1} 2^{m_2} \cdots )$. Similarly, we write $m_i(\lambda)$ to mean the number of times $i$ appears as a part of $\lambda$. The \emph{length} of $\lambda$, $l(\lambda)$, is the number of nonzero parts: \[ l(\lambda) = \sum_{i \in \mathbb{Z}_{>0}} m_i(\lambda). \] The \emph{size} of $\lambda$, $|\lambda|$, is the sum of the parts: \[ |\lambda| = \sum_{i \geq 1} \lambda_i. \] One statistic of partitions that turns out to be important is the following. \begin{definition} \label{partition_n_definition} If $\lambda \in \mathcal{P}$, let \[ \mathbf{n}(\lambda) = \sum_{i \geq 1} (i-1) \lambda_i. \] \end{definition} \noindent We use boldface to avoid confusion with $n$, which will typically denote the size of a matrix.
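\newline \newline \noindent Before moving on to conjugacy classes, we note that the formulas of Subsection \ref{qivp_subsection} lend themselves to direct numerical verification. The following Python sketch (an illustration only; the helper names are ours) checks, for small prime powers, that evaluating the basis element $\qbinom{x}{k}$ at $x = [n]_q$ gives the Gaussian binomial coefficient, that the product formula for $|GL_n(\mathbb{F}_q)|$ holds, and that $\qbinom{4}{2}_2$ really counts the $2$-dimensional subspaces of $\mathbb{F}_2^4$:

```python
from fractions import Fraction
from itertools import combinations

def q_int(m, q):
    # [m]_q = 1 + q + ... + q^(m-1)
    return (q**m - 1) // (q - 1)

def q_factorial(m, q):
    out = 1
    for i in range(1, m + 1):
        out *= q_int(i, q)
    return out

def gauss_binom(n, k, q):
    return q_factorial(n, q) // (q_factorial(k, q) * q_factorial(n - k, q))

def basis_elt(x, k, q):
    # qbinom(x, k) = x(x - [1]_q)...(x - [k-1]_q) / (q^binom(k,2) [k]_q!)
    num = Fraction(1)
    for i in range(k):
        num *= x - q_int(i, q)
    return num / (q**(k * (k - 1) // 2) * q_factorial(k, q))

# evaluating the basis elements at [n]_q gives Gaussian binomial coefficients
for q in (2, 3, 5):
    for n in range(6):
        for k in range(n + 1):
            assert basis_elt(q_int(n, q), k, q) == gauss_binom(n, k, q)

# |GL_n(F_q)| = (q^n - 1)(q^n - q)...(q^n - q^(n-1)) = q^binom(n,2) (q-1)^n [n]_q!
for q in (2, 3):
    for n in range(1, 5):
        order = 1
        for i in range(n):
            order *= q**n - q**i
        assert order == q**(n * (n - 1) // 2) * (q - 1)**n * q_factorial(n, q)

# 2-dimensional subspaces of F_2^4: vectors as bitmasks, addition is XOR;
# two distinct nonzero vectors over F_2 are automatically independent
planes = set()
for v, w in combinations(range(1, 2**4), 2):
    planes.add(frozenset({0, v, w, v ^ w}))
assert len(planes) == gauss_binom(4, 2, 2)
```

The first loop is exactly the evaluation $\qbinom{[n]_q}{k} = \qbinom{n}{k}_q$ recorded after Theorem \ref{qivp_structure_theorem}, checked over exact rational arithmetic.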
\newline \newline \noindent If $k$ is any field, conjugacy classes in $GL_n(k)$ are determined by primary rational canonical form. To an $n \times n$ matrix $M$, we associate a function $\boldsymbol\mu$, called a \emph{multipartition}, from the set of monic irreducible polynomials over $k$ to $\mathcal{P}$, such that $m_i(\boldsymbol\mu(r))$ is the multiplicity of $r(t)^i$ as an elementary divisor of the $k[t]$-module $k^n$, where $t$ acts by the matrix $M$. We refer to $\boldsymbol\mu$ as the \emph{type} of $M$. Thus two matrices are similar if and only if their corresponding types are equal. \newline \newline \noindent From now on, we will only consider invertible matrices. This amounts to excluding the irreducible polynomial $t$. Let $\Phi_q$ be the set of monic irreducible polynomials in $\mathbb{F}_q[t]$ that are different from $t$. Thus the types of matrices in $GL_n(\mathbb{F}_q)$ are functions $\boldsymbol\mu: \Phi_q \to \mathcal{P}$ such that \[ \sum_{r \in \Phi_q} \deg(r) |\boldsymbol\mu(r)| = n. \] We refer to the above sum as the size of $\boldsymbol\mu$ and write $\boldsymbol\mu \vdash n$ to indicate this. We will sometimes write $\boldsymbol\mu \cup (1^d)_{t-1}$ to indicate a multipartition obtained from $\boldsymbol \mu$ by appending $d$ parts of size 1 to $\boldsymbol\mu(t-1)$. Concretely, if $g \in GL_n(\mathbb{F}_q)$ has type $\boldsymbol\mu$, then $\boldsymbol\mu \cup (1^d)_{t-1}$ is the type of the image of $g$ under the embedding into $GL_{n+d}(\mathbb{F}_q)$ by adding $1$s along the main diagonal.
\begin{example}\label{basic_example} Consider the following matrix $g \in GL_{11}(\mathbb{Q})$: \[ \setcounter{MaxMatrixCols}{11} \begin{bmatrix} 1 & 1 & 0 & & & & & & & & \\ 0 & 1 & 1 & & & & & & & & \\ 0 & 0 & 1 & & & & & & & & \\ & & & 1& & & & & & & \\ & & & & 1& & & & & & \\ & & & & & 0 & 0 & 0 & -1 & & \\ & & & & & 1 & 0 & 0 & 0 & & \\ & & & & & 0 & 1 & 0 & -2 & & \\ & & & & & 0 & 0 & 1 & 0 & &\\ & & & & & & & & & 0& -1 \\ & & & & & & & & & 1 & 0 \end{bmatrix}, \] where empty entries are assumed to be zero. Then, the type $\boldsymbol\mu$ of $g$ is given by $\boldsymbol\mu(t-1) = (3,1,1)$, $\boldsymbol\mu(t^2 + 1) = (2,1)$, and $\boldsymbol\mu(f) = \varnothing$ for all other irreducible polynomials $f$ over $\mathbb{Q}$ (the fourth block in the matrix above is the companion matrix of $(t^2 + 1)^2$). Notice that $\deg(t-1)|(3,1,1)| + \deg(t^2+1)|(2,1)| = (1)(5) + (2)(3) = 11$, which confirms $\boldsymbol\mu \vdash 11$. \end{example} \noindent There is a formula for the sizes of centralisers in $GL_n(\mathbb{F}_q)$. \begin{proposition}[Section 4.2 \cite{Macdonald1995}] \label{gl_centraliser_prop} If $M \in GL_n(\mathbb{F}_q)$ has type $\boldsymbol\mu$, then the size of the centraliser of $M$ is \[ \prod_{r \in \Phi_q} q_r^{|\boldsymbol\mu(r)| + 2\mathbf{n}(\boldsymbol\mu(r))} \prod_{i \geq 1} \varphi_{m_i(\boldsymbol\mu(r))}(q_r^{-1}), \] where $q_r = q^{\deg(r)}$, $\varphi_k(t) = \prod_{i=1}^k (1-t^i)$, and $\mathbf{n}(\boldsymbol\mu(r))$ is the function defined in Definition \ref{partition_n_definition}. \end{proposition} \noindent We will only need this formula for the following result. \begin{corollary} \label{gln_centraliser_corollary} Let $g \in GL_n(\mathbb{F}_q)$. Then the block matrix $\tilde{g} = \bigl( \begin{smallmatrix}g & 0 \\ 0 & \Id_d \end{smallmatrix}\bigr)$ may be viewed as an element of $GL_{n+d}(\mathbb{F}_q)$.
The ratio of the sizes of the centralisers of these elements is \[ \frac{|C_{GL_{n+d}(\mathbb{F}_q)}(\tilde{g})|}{|C_{GL_n(\mathbb{F}_q)}(g)|} = q^{d(2k +d)} \prod_{i=h+1}^{h+d} (1 - q^{-i}), \] where $\boldsymbol\mu$ is the type of $g$, $k=l(\boldsymbol\mu(t-1))$ and $h=m_1(\boldsymbol\mu(t-1))$. \end{corollary} \begin{proof} The type of $\tilde{g}$ is $\boldsymbol\mu \cup (1^d)_{t-1}$. Applying Proposition \ref{gl_centraliser_prop} for both $g$ and $\tilde{g}$, we find that the factors associated to $r \neq t-1$ are equal for both elements and cancel out in the fraction. Similarly $\varphi_{m_i(\boldsymbol\mu(t-1))}$ is unchanged for $i \geq 2$ and cancels out. Appending $d$ parts of size $1$ to $\boldsymbol\mu(t-1)$ increases $\mathbf{n}(\boldsymbol\mu(t-1))$ by $(k) + (k+1) + \cdots + (k+d-1) = kd + d(d-1)/2$. So we are left with \[ q^{d + 2kd + d(d-1)} \frac{\varphi_{h+d}(q^{-1})}{\varphi_h(q^{-1})} = q^{d(2k + d)} \prod_{i=h+1}^{h+d} (1 - q^{-i}). \] \end{proof} \section{Finite Classical Groups}\label{finite_classical_groups_section} \noindent In this section, we recall some basic properties of the finite classical groups, by which we mean the unitary, symplectic, and orthogonal groups over a finite field. The main reference for this section is the paper \cite{Wall}, although Peter Cameron's notes \cite{CameronNotes} are also helpful. The classical groups are each defined as a symmetry group preserving a non-degenerate sesquilinear form. Up to a point, these may be treated simultaneously. However, in characteristic 2, orthogonal groups must be treated using quadratic forms instead, so whenever dealing with orthogonal groups or symmetric bilinear forms, we assume the characteristic is different from 2. \newline \newline \noindent We fix an automorphism $\sigma$ of the ground field $k$. For the symplectic and orthogonal groups, $\sigma$ will be the identity map on $k = \mathbb{F}_q$.
For the unitary groups, we take $k=\mathbb{F}_{q^2}$ and $\sigma$ will be the nontrivial element of $\mathrm{Gal}(\mathbb{F}_{q^2}/\mathbb{F}_q)$, which may be written as $x \mapsto x^q$. A $\sigma$-\emph{sesquilinear form} on a vector space $V$ is a map $B: V \times V \to k$ which is linear in the first argument, and $\sigma$-linear in the second argument, i.e. for $x_1, x_2 \in k$ and $v_1, v_2, v_3 \in V$, \begin{eqnarray*} B(x_1 v_1 + x_2 v_2, v_3) &=& x_1 B(v_1, v_3) + x_2 B(v_2, v_3), \\ B(v_1, x_1 v_2 + x_2 v_3) &=& \sigma(x_1) B(v_1, v_2) + \sigma(x_2) B(v_1, v_3). \end{eqnarray*} In the case where $\sigma$ is the identity map, this is simply a bilinear form. If $B(v,w) = \sigma(B(w,v))$, then $B$ is said to be $\sigma$-\emph{Hermitian}. Applying the $\sigma$-Hermitian condition twice gives $B(v,w) = \sigma^2(B(v,w))$; since a nonzero sesquilinear form takes every value in the ground field, nontrivial cases only arise when $\sigma^2 = 1$. We shall at times refer to a $\sigma$-sesquilinear form as simply a sesquilinear form. \newline \newline \noindent A bilinear form $B$ is said to be \emph{reflexive} provided that $B(v,w) = 0$ if and only if $B(w,v) = 0$. This condition guarantees that if $U$ is a subspace of $V$, then the left and right orthogonal spaces to $U$ coincide. In particular, when $U$ is the trivial subspace, we see that the left and right radicals of $B$ coincide: \[ \{ v \in V | B(v,w) = 0, \hspace{5mm} \forall w \in V\} = \{w \in V | B(v,w) = 0, \hspace{5mm} \forall v \in V\}. \] It is a standard fact that a reflexive bilinear form $B$ must be either symmetric (i.e. $B(v,w) = B(w,v)$) or alternating ($B(v,v) = 0$). Similarly, a reflexive $\sigma$-sesquilinear form is either alternating or a scalar multiple of a $\sigma$-Hermitian form. So we do not lose too much generality by restricting ourselves to symmetric, alternating, or Hermitian forms (each of which is automatically reflexive).
Henceforth, we assume that we are in one of the following three cases: \begin{enumerate} \item $V$ is a vector space over $\mathbb{F}_{q^2}$ and $B$ is a $\sigma$-Hermitian form, where $\sigma$ is the nontrivial element of $\mathrm{Gal}(\mathbb{F}_{q^2}/\mathbb{F}_q)$; \item $V$ is a vector space over $\mathbb{F}_q$ and $B$ is an alternating bilinear form; \item $V$ is a vector space over $\mathbb{F}_q$, which has characteristic different from 2, and $B$ is a symmetric bilinear form. \end{enumerate} \noindent In the latter two cases we will sometimes write $\sigma$ for the identity map in order to treat all three cases simultaneously. So $\sigma$ will be an automorphism of the ground field obeying $\sigma^2 = \Id$. \begin{remark} The ``alternating $\sigma$-Hermitian'' constraint $\sigma(B(w,v)) = -B(v,w)$ is compatible with our setup, but not meaningfully different from the $\sigma$-Hermitian case. This is immediate in characteristic 2, so assume that we are in odd characteristic. Note that there is a solution to $x^{q-1} = -1$ in $\mathbb{F}_{q^2}$ because $\mathbb{F}_{q^2}^\times$ is cyclic of order $q^2-1$, so we may take $x$ to be any element of order $2(q-1)$. Then since $\sigma(x) = x^q = -x$, we have $\sigma(B(w,v)) = -B(v,w)$ if and only if $\sigma(xB(w,v)) = xB(v,w)$. So we may interconvert between ``alternating $\sigma$-Hermitian'' and $\sigma$-Hermitian forms by multiplying the forms by the scalar $x$. \end{remark} \noindent Suppose that the vector space $V$ has two sesquilinear forms $B_1$ and $B_2$. We say that these forms are equivalent if there is $g \in GL(V)$ such that $B_1(gv, gw) = B_2(v,w)$. \begin{definition} Let $B: V \times V \to k$ be a non-degenerate sesquilinear form. The symmetry group of $B$ is \[ G_B(V) = \{g \in GL(V) \mid B(gv,gw) = B(v,w), \hspace{2mm} v,w \in V\}. \] \end{definition} \noindent Concretely, we may choose a basis of $V$ and let $M_B$ be the matrix associated to the form $B$, so that $B(v,w) = v^T M_B \sigma(w)$.
Then the forms $B_1$ and $B_2$ are equivalent when there is a matrix $g$ such that $g^T M_{B_1} \sigma(g) = M_{B_2}$. Similarly, the symmetry group $G_B(V)$ consists of matrices $g$ such that $g^T M_B \sigma(g) = M_B$. The orthogonal, symplectic, and unitary groups are the symmetry groups of non-degenerate symmetric, alternating, and Hermitian forms, respectively. However, in general, two forms $B_1$ and $B_2$ on $V$ of the same kind might not be equivalent. In particular, it might happen that $G_{B_1}(V)$ and $G_{B_2}(V)$ are not isomorphic, so the term orthogonal/symplectic/unitary group could be ambiguous if the form $B$ is not specified. This leads us to the next topic: a classification of such forms over finite fields. \begin{definition} Suppose that $B$ is a sesquilinear form on $V$. A \emph{hyperbolic plane} in $V$ is a two-dimensional subspace of $V$ spanned by vectors $u_1,u_2$ such that $B(u_1,u_1) = B(u_2,u_2)= 0$ and $B(u_1,u_2) = 1$. \end{definition} \noindent Of course, the value of $B(u_2,u_1)$ is determined by whether the form is symmetric, alternating, or Hermitian. The utility of hyperbolic planes rests on the following result. \begin{lemma} \label{hyperbolic_plane_splitting_lemma} Let $B$ be a non-degenerate sesquilinear form on $V$. Suppose that $u_1,u_2$ span a hyperbolic plane $U$ in $V$. Then $V = U \oplus U^\perp$. \end{lemma} \noindent Before we elaborate on how hyperbolic planes allow us to classify sesquilinear forms, we record some technical results. \begin{lemma} \label{trace_value_lemma} Suppose that $B$ is a $\sigma$-Hermitian form on $V$, where $\sigma$ is a field automorphism of $k$ of order $2$. Then for any $v \in V$ there exists $x \in k$ such that $B(v,v) = x + \sigma(x)$. \end{lemma} \begin{proof} Let $k_0$ be the subfield of $k$ fixed pointwise by $\sigma$, and let $T$ denote the set of elements in $k$ of the form $x + \sigma(x)$. Because $\sigma$ is an involution, $T$ is contained in $k_0$.
Now, notice that $T$ is closed under addition, as well as under multiplication by elements of $k_0$. If we view $k_0$ as a one-dimensional vector space over itself, this means that $T$ is a subspace of $k_0$. Therefore, either $T=k_0$ or $T = 0$. If $T = 0$, then $x + \sigma(x) = 0$ for all $x$. But $\sigma(x) = -x$ only defines an automorphism if the characteristic of $k$ is 2, in which case $\sigma$ is the identity map, contradicting the assumption on $\sigma$. Therefore, $T = k_0$. By the $\sigma$-Hermitian property, $\sigma(B(v,v)) = B(v,v)$, so that $B(v,v) \in k_0 = T$. \end{proof} \begin{proposition} \label{hyperbolic_plane_splitting_prop} Let $B$ be a nondegenerate sesquilinear form on a vector space $V$. Suppose that there is $u_1 \in V$ such that $B(u_1, u_1) = 0$. Then there exists $u_2 \in V$ such that $u_1, u_2$ span a hyperbolic plane. \end{proposition} \begin{proof} Since $B$ is non-degenerate, there must exist some $v \in V$ with $B(u_1,v) \neq 0$, which we rescale to assume $B(u_1, v) = 1$. Then \begin{itemize} \item if $B$ is Hermitian, let $x \in k$ be such that $B(v,v) = x + \sigma(x)$ (such $x$ is guaranteed to exist by Lemma \ref{trace_value_lemma}). Then $B(v-xu_1, v-xu_1) = 0$, so we may take $u_2 = v - xu_1$. \item if $B$ is alternating, $B(v,v) = 0$, so we may take $u_2 = v$. \item if $B$ is symmetric, let $x = \frac{1}{2} B(v,v)$. Then $B(v-xu_1, v-xu_1) = 0$ and we may take $u_2 = v-xu_1$. \end{itemize} \end{proof} \begin{lemma} \label{lagrangian_subspace_lemma} Suppose that $V$ is a $2n$-dimensional vector space with a non-degenerate sesquilinear form $B$, and $W$ is an $n$-dimensional subspace of $V$ such that $B$ restricted to $W$ is zero. Then there is an $n$-dimensional subspace $W^\prime$ of $V$ such that $V = W^\prime \oplus W$ and $B$ restricted to $W^\prime$ is zero.
\end{lemma} \begin{proof} Given any nonzero vector $w \in W$, we have $B(w, w) = 0$, so Proposition \ref{hyperbolic_plane_splitting_prop} shows there exists $w' \in V$ such that $w$ and $w'$ span a hyperbolic plane $U$. In particular, this means that $U^\perp$ is $2(n-1)$-dimensional and $w, w' \not\in U^{\perp}$ by Lemma \ref{hyperbolic_plane_splitting_lemma}, and $w' \not \in W$ because $B(w, w') = 1$. Therefore, $W \cap U^{\perp}$ is $(n-1)$-dimensional. The restriction of $B$ to $U^{\perp}$ is nondegenerate, and $B$ vanishes on $W \cap U^\perp$. Proceeding inductively on $U^{\perp}$ instead of $V$ and $W \cap U^{\perp}$ instead of $W$, we deduce that $V$ is a direct sum of $n$ hyperbolic planes spanned by pairs $\{w, w^\prime\}$, where the $w$ form a basis of $W$. We take $W^\prime$ to be spanned by the $w^\prime$. \end{proof} \noindent The following proposition tells us when two subspaces are equivalent under the action of the symmetry group $G_B(V)$. It turns out that it is necessary and sufficient that $B$ should restrict to an equivalent sesquilinear form on the two subspaces. This result is sometimes known as Witt's Lemma. \begin{proposition}[Theorem 1.2.1, \cite{Wall}] \label{witt_lemma_proposition} Suppose that $V$ is equipped with a non-degenerate sesquilinear form $B$ and $W_1, W_2$ are subspaces of $V$ such that there is a bijective linear map $g: W_1 \to W_2$ preserving $B$, i.e. $B(gw, gw^\prime) = B(w,w^\prime)$ for $w,w^\prime \in W_1$. Then there exists $\tilde{g} \in G_B(V)$ such that the restriction of $\tilde{g}$ to $W_1$ is $g$. \end{proposition} \noindent Now we resume our discussion of the classification of sesquilinear forms. \begin{definition} If every nonzero vector $v \in V$ obeys $B(v,v) \neq 0$, we say that $(V, B)$ is \emph{anisotropic}. \end{definition} \noindent Anisotropic spaces are automatically non-degenerate. Note that a zero-dimensional space is considered to be anisotropic.
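\newline \newline \noindent The trace argument in Lemma \ref{trace_value_lemma} can be seen concretely in the smallest Hermitian case, $k = \mathbb{F}_9$ and $k_0 = \mathbb{F}_3$. The Python sketch below (an illustration only; the representation of $\mathbb{F}_9$ is our choice) models $\mathbb{F}_9$ as $\mathbb{F}_3[i]/(i^2+1)$ and checks that the set $\{x + \sigma(x)\}$ is all of $\mathbb{F}_3$:

```python
p = 3
# model F_9 as F_3[i]/(i^2 + 1); the pair (a, b) stands for a + b*i,
# which works because -1 is not a square mod 3, so i^2 + 1 is irreducible
def mul(x, y):
    a, b = x
    c, d = y
    return ((a * c - b * d) % p, (a * d + b * c) % p)

def add(x, y):
    return ((x[0] + y[0]) % p, (x[1] + y[1]) % p)

def sigma(x):
    # the nontrivial automorphism of F_9 over F_3 is the Frobenius x -> x^3
    return mul(mul(x, x), x)

elements = [(a, b) for a in range(p) for b in range(p)]

# sanity checks: sigma is an involution whose fixed field is exactly F_3
assert all(sigma(sigma(x)) == x for x in elements)
assert [x for x in elements if sigma(x) == x] == [(a, 0) for a in range(p)]

# the set {x + sigma(x)} is the whole fixed field F_3, as the lemma predicts
traces = {add(x, sigma(x)) for x in elements}
assert traces == {(a, 0) for a in range(p)}
```

Concretely, $\sigma(a + bi) = a - bi$, so $x + \sigma(x) = 2a$ sweeps out $\mathbb{F}_3$ as $a$ varies, matching the dichotomy $T = k_0$ from the proof.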
\begin{proposition} \label{sesquilinear_structure_prop} If $V$ is a space with a sesquilinear form $B$, we may write $V = \ker(B) \oplus U^{\oplus r} \oplus W$, where $U$ is a hyperbolic plane, $W$ is anisotropic, and each summand is orthogonal to all the others under $B$. \end{proposition} \begin{proof} We pick an arbitrary splitting $V = \ker(B) \oplus M$, which is automatically orthogonal. The restriction of $B$ to $M$ is non-degenerate. If $M$ is not anisotropic, there is a vector $u \in M$ such that $B(u,u) = 0$, and hence there is a hyperbolic plane $U$ contained inside $M$. Then Lemma \ref{hyperbolic_plane_splitting_lemma} shows we may split off this hyperbolic plane to get $M = U \oplus U^\perp$. Repeating this with $U^\perp$ in place of $M$, the dimension decreases until we eventually obtain an anisotropic space (possibly zero). \end{proof} \noindent The decomposition $V = \ker(B) \oplus U^{\oplus r} \oplus W$ in Proposition \ref{sesquilinear_structure_prop} is not unique. However, the multiplicity $r$ and the equivalence class of $W$ (equipped with the restriction of $B$) are determined; $r$ is called the \emph{polar rank} of $V$ and $(W, B)$ is called the \emph{germ} or \emph{core} of $V$. The assertion that these are indeed invariants of $(V, B)$ is sometimes called Witt's Theorem. As a result, the classification of spaces $(V, B)$ is reduced to the classification of anisotropic forms. We merely state the result. \begin{theorem} \label{anisotropic_space_classification_theorem} Up to equivalence, the nonzero anisotropic spaces $(V,B)$ are as follows: \begin{itemize} \item if $B$ is Hermitian, the only nonzero anisotropic space is 1-dimensional (spanned by $v$, say, with form $B(v, v) = 1$). \item if $B$ is alternating, there are no nonzero anisotropic spaces, \item if $B$ is symmetric, there are two 1-dimensional anisotropic spaces, and one 2-dimensional space. 
The 1-dimensional spaces may be presented by the forms $B(x, y) = m x y$, where $m \in \mathbb{F}_q$ either is, or is not, a square. A form representing the two-dimensional anisotropic space is \[ B((x_1,x_2), (y_1, y_2)) = x_1y_1 - m x_2y_2, \] where $m$ is a non-square in $\mathbb{F}_q$. \end{itemize} \end{theorem} \noindent A consequence of this theorem is the following proposition. \begin{proposition} There is a unique Hermitian form (up to equivalence) on $V$ regardless of $\dim(V)$. A non-degenerate alternating form on $V$ can only exist when $\dim(V)$ is even and any two such forms on $V$ are equivalent. There are two equivalence classes of symmetric forms on $V$ in any dimension. \end{proposition} \begin{proof} A non-degenerate space $(V, B)$ is obtained from its germ by adding some number of hyperbolic planes, which means that the dimension of the germ has the same parity as $\dim(V)$. In the Hermitian case, we see that there is only one anisotropic space whose dimension has a given parity, so the decomposition of $V$ into hyperbolic planes and the germ is determined by $\dim(V)$. We also see that any alternating form is a direct sum of hyperbolic planes. On the other hand, in the symmetric case there are two anisotropic symmetric forms whose dimension has a given parity. So adding an appropriate number of hyperbolic planes gives two equivalence classes of forms. \end{proof} \noindent The salient consequence of this classification is that the symmetry groups of non-degenerate Hermitian or alternating forms of a given dimension are all isomorphic, so we may unambiguously refer to their symmetry groups without specifying the sesquilinear form they preserve. Furthermore, we may pick a particular form without loss of generality. The case of a symmetric bilinear form is more subtle. \begin{definition} If $k$ is any field, the \emph{Witt ring} of $k$, denoted $W(k)$, consists of equivalence classes of anisotropic spaces over $k$.
The sum of $(V_1, B_1)$ and $(V_2, B_2)$ is the germ of $(V_1 \oplus V_2, B_1 \oplus B_2)$. (There is also a multiplication given by the tensor product, but we will not need it.) \end{definition} \noindent Let $q$ be an odd prime power, and consider the Witt ring $W(\mathbb{F}_q)$. Let $\mathbf{0}$ be the zero-dimensional anisotropic space, while $\mathbf{1}$ and $\delta$ are the one-dimensional anisotropic spaces corresponding to the square and non-square cases respectively. Finally, let $\omega$ be the two-dimensional anisotropic space. Then $W(\mathbb{F}_q) = \{\mathbf{0}, \mathbf{1}, \delta, \omega\}$. Here $\mathbf{0}$ is the additive identity. We will denote the sum in $W(\mathbb{F}_q)$ with $\oplus$, so for example $\omega \oplus \omega = \mathbf{0}$. (It turns out that $W(\mathbb{F}_q)$ is isomorphic to $\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}$ if $q$ is $1$ modulo $4$, and $\mathbb{Z}/4\mathbb{Z}$ if $q$ is $3$ modulo $4$.) \begin{lemma} Let $V$ be an odd-dimensional space over $\mathbb{F}_q$. Then the symmetry groups of any two non-degenerate symmetric bilinear forms on $V$ are isomorphic. \end{lemma} \begin{proof} Suppose the form $B$ is represented by the matrix $M$ with respect to some choice of basis. For any $g \in GL(V)$, $\det(g^T M g) = \det(M) \det(g)^2$, so we may characterise the two equivalence classes of forms as having matrices whose determinant is or is not a square in $\mathbb{F}_q$. Multiplying the form by a non-square scalar $m$ does not change the symmetry group, but multiplies $\det(M)$ by $m^{\dim(V)}$ (which is a non-square), yielding a form in the other equivalence class. \end{proof} \noindent It turns out that there is no such coincidence in the even-dimensional case; the two equivalence classes of symmetric forms have different symmetry groups when $\dim(V)$ is even. \begin{definition} We write $U_n(\mathbb{F}_q)$ for the symmetry group of a Hermitian form on an $n$-dimensional vector space over $\mathbb{F}_{q^2}$.
Similarly, we write $Sp_{2n}(\mathbb{F}_q)$ for the symmetry group of an alternating bilinear form on a $2n$-dimensional vector space over $\mathbb{F}_q$. It is standard to write $O_{2n+1}(\mathbb{F}_q)$ for the symmetry group of a symmetric form on a $(2n+1)$-dimensional vector space over $\mathbb{F}_q$, as well as $O_{2n}^+(\mathbb{F}_q)$ and $O_{2n}^-(\mathbb{F}_q)$ for the symmetry groups of symmetric forms on a space of dimension $2n$ over $\mathbb{F}_q$, whose germs are $\mathbf{0}$ and $\omega$, respectively. When necessary, we will also write $O_{2n+1}^{+}(\mathbb{F}_q)$ and $O_{2n+1}^{-}(\mathbb{F}_q)$ for the symmetry groups of the symmetric bilinear forms on $\mathbb{F}_q^{2n+1}$ with germs $\mathbf{1}$ and $\delta$ respectively, even though these groups are isomorphic. \end{definition} \noindent In Section \ref{classical_groups_section}, we will need some numerical information about finite classical groups. We recall the sizes of the groups, as well as the sizes of centralisers of (elements of) conjugacy classes. \begin{proposition}[Subsection 2.6, \cite{Wall}] \label{classical_group_sizes_proposition} We have \begin{eqnarray*} |U_n(\mathbb{F}_q)| &=& q^{n \choose 2} \prod_{i=1}^n (q^i - (-1)^i), \\ |Sp_{2n}(\mathbb{F}_q)| &=& q^{n^2} \prod_{i=1}^n (q^{2i}-1). \end{eqnarray*} If $q$ is odd, then \begin{eqnarray*} |O_{2n+1}(\mathbb{F}_q)| &=& 2 q^{n^2} \prod_{i=1}^n (q^{2i}-1), \\ |O_{2n}^{\pm}(\mathbb{F}_q)| &=& 2 q^{n^2-n}(q^n \mp 1) \prod_{i=1}^{n-1} (q^{2i}-1). \end{eqnarray*} \end{proposition} \noindent Now we state some facts about the ratios of the sizes of certain centralisers in classical groups. These sizes were first calculated by Wall in Subsection 2.6 of \cite{Wall}, but the results are restated more transparently in \cite{Fulman_cycle_indices}. We defer the proofs of these facts to the appendix due to technical details that we will not need later in the paper.
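\newline \newline \noindent The orders in Proposition \ref{classical_group_sizes_proposition} can be confirmed by exhaustive search in the smallest cases. The Python sketch below (illustrative only; the particular Gram matrices over $\mathbb{F}_3$ are our choice of representatives) counts the $2 \times 2$ matrices over $\mathbb{F}_3$ preserving an alternating form and each equivalence class of symmetric form:

```python
from itertools import product

q = 3

def mat_mul(A, B):
    # 2x2 matrix product over F_q
    return tuple(
        tuple(sum(A[i][k] * B[k][j] for k in range(2)) % q for j in range(2))
        for i in range(2)
    )

def symmetry_group_order(M):
    # count g with g^T M g = M; any such g is automatically invertible
    # because M is non-degenerate (det(g)^2 det(M) = det(M) forces det(g) != 0)
    count = 0
    for a, b, c, d in product(range(q), repeat=4):
        g = ((a, b), (c, d))
        gT = ((a, c), (b, d))
        if mat_mul(mat_mul(gT, M), g) == M:
            count += 1
    return count

J = ((0, 1), (q - 1, 0))  # alternating form: Sp_2(F_3)
H = ((0, 1), (1, 0))      # hyperbolic symmetric form (germ 0): O_2^+(F_3)
I = ((1, 0), (0, 1))      # x^2 + y^2, anisotropic over F_3 (germ omega): O_2^-(F_3)

assert symmetry_group_order(J) == q * (q**2 - 1)  # |Sp_2(F_3)| = 24
assert symmetry_group_order(H) == 2 * (q - 1)     # |O_2^+(F_3)| = 4
assert symmetry_group_order(I) == 2 * (q + 1)     # |O_2^-(F_3)| = 8
```

These are exactly the $n=1$ cases of the proposition: $|Sp_2(\mathbb{F}_q)| = q(q^2-1)$ and $|O_2^{\pm}(\mathbb{F}_q)| = 2(q \mp 1)$; the identity Gram matrix gives the minus type here because $-1$ is a non-square modulo $3$, so $x^2 + y^2$ is anisotropic.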
In the statements below, when we refer to the type of an element in a classical group, we mean the type of that element when viewed as an element of the ambient general linear group. \par \begin{toappendix} \noindent Let us retain the notation from Section \ref{finite_classical_groups_section}. \begin{definition} If $r(t) = \sum_i a_i t^i \in \Phi_{q^2}$ is a monic irreducible polynomial with coefficients in $\mathbb{F}_{q^2}$ other than $t$, define \[ r^*(t) = \frac{t^{\deg(r)}}{\sigma(r(0))} \sum_i \sigma(a_i)t^{-i} \] (where $\sigma(x) = x^q$), noting that $r^*(t) \in \Phi_{q^2}$ also. \end{definition} \noindent The only example that will be relevant is that this operation sends $t-1$ to itself. It turns out that the conjugacy class of an element of $U_n(\mathbb{F}_q)$ is determined by its conjugacy class as an element of $GL_n(\mathbb{F}_{q^2})$, i.e. its type. However, only types $\boldsymbol\mu$ with $\boldsymbol\mu(r) = \boldsymbol\mu(r^*)$ correspond to elements of the unitary group. \begin{definition} We define two functions, $A$ and $B$, that implicitly depend upon $g \in U_n(\mathbb{F}_q)$. Let $\boldsymbol\mu$ be the type of $g$. Let $A$ be the following function defined on $\Phi_{q^2} \times \mathbb{Z}_{>0}$. \[ A(r, i) = \left\{ \begin{array}{ll} |U_{m}(\mathbb{F}_Q)| & \quad \mbox{if } r = r^* \\ |GL_{m}(\mathbb{F}_{Q^2})|^{\frac{1}{2}} & \quad \mbox{if } r \neq r^* \end{array} \right. \] where $m = m_i(\boldsymbol\mu(r))$ and $Q = q^{\deg(r)}$. Now define \[ B(r) = Q^{\sum_{i < j} 2i m_i(\boldsymbol\mu(r))m_j(\boldsymbol\mu(r)) + \sum_{i \geq 1} (i-1) m_i(\boldsymbol\mu(r))^2} \prod_{i \geq 1} A(r, i). \] \end{definition} \begin{proposition}[Section 2.6, \cite{Wall}] The centraliser of an element $g \in U_n(\mathbb{F}_q)$ has size \[ \prod_{r \in \Phi_{q^2}} B(r).
\] \end{proposition} \noindent Although $A$ and $B$ are complicated, ultimately we will only care about what happens when $g$ is extended to a larger matrix by taking a block sum with the identity matrix of some size $d$. \end{toappendix} \begin{corollaryrep} \label{unitary_centraliser_ratio_corollary} Suppose that $g \in U_n(\mathbb{F}_q)$. Then the block matrix $\tilde{g} = \bigl( \begin{smallmatrix}g & 0 \\ 0 & \Id_d \end{smallmatrix}\bigr)$ may be viewed as an element of $U_{n+d}(\mathbb{F}_q)$. The ratio of the sizes of the centralisers of these elements is \[ \frac{C_{U_{n+d}(\mathbb{F}_q)}(\tilde{g})}{C_{U_n(\mathbb{F}_q)}(g)} = q^{2d(k - h)} \frac{|U_{h + d}(\mathbb{F}_q)|}{|U_{h}(\mathbb{F}_q)|}, \] where $\boldsymbol\mu$ is the type of $g$, $k=l(\boldsymbol\mu(t-1))$ and $h=m_1(\boldsymbol\mu(t-1))$. \end{corollaryrep} \begin{toappendix} \begin{proof} The type of $\tilde{g}$ is $\boldsymbol\mu \cup (1^{d})_{t-1}$. Computing the size of the centraliser, the factors $B(r)$ for $r \neq t-1$ are identical for $g$ and $\tilde{g}$, and so they cancel out in the fraction. As for $B(t-1)$, we have $Q = q$ and incrementing $m_1(\boldsymbol\mu(t-1))$ by $d$ increases the exponent of $Q$ by $\sum_{1 < j} 2d m_j(\boldsymbol\mu(t-1)) = 2d(l(\boldsymbol\mu(t-1)) - m_1(\boldsymbol\mu(t-1)))$. Further, $A(t-1, i)$ remains unchanged for $i \geq 2$. So in conclusion, the new value of $B(t-1)$ is larger than the old one by a factor \[ q^{2d(k - h)} \frac{|U_{h + d}(\mathbb{F}_q)|}{|U_{h}(\mathbb{F}_q)|}. \] \end{proof} \end{toappendix} \noindent Now, for the remainder of this section, where we deal with the symplectic and orthogonal groups, suppose that $q$ is the power of an odd prime. \begin{toappendix} \noindent Now, assume $q$ is an odd prime power. In order to describe the sizes of the centralisers of symplectic and orthogonal groups, Wall defines \emph{Hermitian invariants} from elementary divisors of $g$.
The Hermitian invariants are bilinear forms whose symmetry groups appear in describing the conjugacy class and centraliser of $g$. Unlike the case of the unitary group, the conjugacy class of $g$ in the general linear group does not uniquely define a conjugacy class in the symplectic or orthogonal group. The Hermitian invariants of $g$ are precisely the additional information that is needed to determine the conjugacy class of $g$. We will not need to use them, so we omit their definition. \begin{definition} If $r(t) = \sum_i a_i t^i \in \Phi_{q}$ is a monic irreducible polynomial with coefficients in $\mathbb{F}_{q}$ other than $t$, define \[ r^*(t) = \frac{t^{\deg(r)}}{r(0)} \sum_i a_i t^{-i}, \] which is also a monic irreducible polynomial. \end{definition} \noindent A \emph{symplectic signed partition} is a partition $\lambda$ such that \begin{itemize} \item for odd $i$, $m_i(\lambda)$ is even, \item a choice of sign $\epsilon_i \in\{ +,-\}$ for each even $i$ such that $m_i(\lambda) > 0$ is given. \end{itemize} These signs record the Hermitian invariants of an element $g \in Sp_{2n}(\mathbb{F}_q)$. Conjugacy classes in $Sp_{2n}(\mathbb{F}_q)$ are indexed by multipartitions $\boldsymbol\mu$ of size $2n$ such that $\boldsymbol\mu(r) = \boldsymbol\mu(r^*)$, except that $\boldsymbol\mu(t \pm 1)$ must be symplectic signed partitions. The type of $g$ (i.e. conjugacy class in $GL_{2n}(\mathbb{F}_q)$) is recovered by forgetting the signs of $\boldsymbol\mu(t \pm 1)$ to recover ordinary partitions. \begin{definition} Let $\boldsymbol\mu$ label the conjugacy class of $g \in Sp_{2n}(\mathbb{F}_q)$. Let $A$ be the following function defined on $\Phi_{q} \times \mathbb{Z}_{>0}$. First of all, if $r \neq t \pm 1$, \[ A(r, i) = \left\{ \begin{array}{ll} |U_{m}(\mathbb{F}_{Q^{\frac{1}{2}}})| & \quad \mbox{if } r = r^* \\ |GL_{m}(\mathbb{F}_{Q})|^{\frac{1}{2}} & \quad \mbox{if } r \neq r^* \end{array} \right. \] where $m = m_i(\boldsymbol\mu(r))$ and $Q = q^{\deg(r)}$.
For $r = t \pm 1$, we instead have \[ A(r, i) = \left\{ \begin{array}{ll} |Sp_{m}(\mathbb{F}_{q})| & \quad \mbox{if $i$ is odd} \\ q^{\frac{m}{2}}|O_{m}^{\epsilon_i}(\mathbb{F}_{q})| & \quad \mbox{if $i$ is even} \end{array} \right. \] with $m = m_i(\boldsymbol\mu(r))$ as before. Note that if $m$ is odd, $O_{m}^{+}(\mathbb{F}_{q})$ and $O_{m}^{-}(\mathbb{F}_{q})$ are isomorphic. Now define \[ B(r) = Q^{\sum_{i < j} i m_i(\boldsymbol\mu(r))m_j(\boldsymbol\mu(r)) + \frac{1}{2}\sum_{i \geq 1} (i-1) m_i(\boldsymbol\mu(r))^2} \prod_{i \geq 1} A(r, i). \] \end{definition} \begin{proposition} The centraliser of $g \in Sp_{2n}(\mathbb{F}_q)$ has size \[ \prod_{r \in \Phi_q} B(r). \] \end{proposition} \noindent As in the unitary case, we will only need the following consequence of this result. \end{toappendix} \begin{corollaryrep} \label{symplectic_centraliser_ratio_corollary} Suppose that $g \in Sp_{2n}(\mathbb{F}_q)$. Let $d$ be a positive even integer. Then $\tilde{g} = \bigl( \begin{smallmatrix}g & 0 \\ 0 & \Id_d \end{smallmatrix}\bigr)$ may be viewed as an element of $Sp_{2n+d}(\mathbb{F}_q)$. The ratio of the sizes of the centralisers of these elements is \[ \frac{C_{Sp_{2n+d}(\mathbb{F}_q)}(\tilde{g})}{C_{Sp_{2n}(\mathbb{F}_q)}(g)} = q^{d(k-h)}\frac{|Sp_{h + d}(\mathbb{F}_q)|}{|Sp_{h}(\mathbb{F}_q)|}, \] where $\boldsymbol\mu$ is the type of $g$, $k=l(\boldsymbol\mu(t-1))$ and $h=m_1(\boldsymbol\mu(t-1))$. \end{corollaryrep} \begin{toappendix} \begin{proof} Under the operation of extending $g$ by a size $d$ identity matrix, we add $(1^{d})$ to the signed symplectic partition $\boldsymbol\mu(t-1)$ without changing any of the signs. The rest of the proof is identical to the unitary case.
\end{proof} \begin{remark} \label{symplectic_char_2_remark} Corollary \ref{symplectic_centraliser_ratio_corollary} holds even if $q$ is a power of 2, however in this case the description of conjugacy classes of $Sp_{2n}(\mathbb{F}_q)$ is more intricate, and requires applying the more complicated Theorem 3.7.4 of \cite{Wall}. Nevertheless, the same kind of cancellation takes place, and the factor $2^k$ appearing in the formula also cancels because $k$ is defined in terms of the multiplier of $g$, which is determined by the subspace $im(\Id - g)$, which coincides with $im(\Id - \tilde{g})$ upon identifying $\mathbb{F}_q^{2n}$ with a subspace of $\mathbb{F}_q^{2n+d}$. \end{remark} \noindent An \emph{orthogonal signed partition} is a partition $\lambda$ such that \begin{itemize} \item for even $i$, $m_i(\lambda)$ is even, \item a choice of sign $\epsilon_i \in \{+, -\}$ for each odd $i$ such that $m_i(\lambda)>0$ is given. \end{itemize} These signs record the Hermitian invariants of an element $g$ of a finite orthogonal group. Conjugacy classes of orthogonal groups $O_n^{\pm}(\mathbb{F}_q)$ are indexed by multipartitions $\boldsymbol\mu$ of size $n$ such that $\boldsymbol\mu(r) = \boldsymbol\mu(r^*)$, except that $\boldsymbol\mu(t \pm 1)$ must be orthogonal signed partitions. Such a $\boldsymbol\mu$ describes a conjugacy class in exactly one of $O_n^{+}(\mathbb{F}_q)$ or $O_n^{-}(\mathbb{F}_q)$, in a way that can be determined from $\boldsymbol\mu$ but which we omit here. If $n$ is odd, $O_n^{+}(\mathbb{F}_{q})$ and $O_n^{-}(\mathbb{F}_{q})$ are isomorphic, so we obtain two parametrisations of the conjugacy classes of $O_n(\mathbb{F}_q)$. \begin{definition} Let $\boldsymbol\mu$ label the conjugacy class of $g$, either an element of $O_{n}^{+}(\mathbb{F}_q)$ or $O_{n}^{-}(\mathbb{F}_q)$. Let $A$ be the following function defined on $\Phi_{q} \times \mathbb{Z}_{>0}$.
First of all, if $r \neq t \pm 1$, \[ A(r, i) = \left\{ \begin{array}{ll} |U_{m}(\mathbb{F}_{Q^{\frac{1}{2}}})| & \quad \mbox{if } r = r^* \\ |GL_{m}(\mathbb{F}_{Q})|^{\frac{1}{2}} & \quad \mbox{if } r \neq r^* \end{array} \right. \] where $m = m_i(\boldsymbol\mu(r))$ and $Q = q^{\deg(r)}$. For $r = t \pm 1$, we instead have \[ A(r, i) = \left\{ \begin{array}{ll} |O_{m}^{\epsilon_i}(\mathbb{F}_{q})| & \quad \mbox{if $i$ is odd} \\ q^{-\frac{m}{2}}|Sp_{m}(\mathbb{F}_{Q})| & \quad \mbox{if $i$ is even} \end{array} \right. \] with $m = m_i(\boldsymbol\mu(r))$ as before. Now define \[ B(r) = Q^{\sum_{i < j} i m_i(\boldsymbol\mu(r))m_j(\boldsymbol\mu(r)) + \frac{1}{2}\sum_{i \geq 1} (i-1) m_i(\boldsymbol\mu(r))^2} \prod_{i \geq 1} A(r, i). \] \end{definition} \end{toappendix} \noindent This result actually remains true in characteristic two, although the proof is slightly more complicated; see Remark \ref{symplectic_char_2_remark}. Finally, we state the orthogonal version of the previous corollaries. \begin{corollaryrep} For $\eta \in W(\mathbb{F}_q)$, let $O_n^\eta(\mathbb{F}_q)$ be the symmetry group of the form with germ $\eta$ (note that the germ determines the parity of $n$). Suppose that $g \in O_n^{\tau}(\mathbb{F}_q)$. Let $d$ be a positive integer, and let $\rho$ be the germ of a nondegenerate symmetric bilinear form on $\mathbb{F}_q^d$. Then $\tilde{g} = \bigl( \begin{smallmatrix}g & 0 \\ 0 & \Id_d \end{smallmatrix}\bigr)$ may be viewed as an element of $O_{n+d}^{\tau \oplus \rho}(\mathbb{F}_q)$. 
The ratio of the sizes of the centralisers of these elements is \[ \frac{C_{O_{n+d}^{\tau \oplus \rho}(\mathbb{F}_q)}(\tilde{g})}{C_{O_{n}^{\tau}(\mathbb{F}_q)}(g)} = q^{d(k-h)}\frac{|O_{h + d}^{\epsilon_1 \oplus \rho}(\mathbb{F}_q)|}{|O_{h}^{\epsilon_1}(\mathbb{F}_q)|}, \] where $\boldsymbol\mu$ is the type of $g$, $k=l(\boldsymbol\mu(t-1))$ and $h=m_1(\boldsymbol\mu(t-1))$.\end{corollaryrep} \begin{proof} Under the operation of extending $g$ by a size $d$ identity matrix, we add $(1^{d})$ to the signed orthogonal partition $\boldsymbol\mu(t-1)$ and add $\rho$ to the sign $\epsilon_1$ (addition takes place in the Witt ring). (If $\boldsymbol\mu(t-1)$ was empty, the new sign corresponds to the germ $\rho$.) The rest of the proof is identical to the unitary case. \end{proof} \begin{toappendix} \noindent This final case is a little more complicated than the others because the resulting quantity depends on the sign $\epsilon_1$ which is not determined by the germ $\tau$ of the ambient orthogonal group. \end{toappendix} \section{General Linear Groups} \label{GL_section} \noindent Let $\mathbb{F}_q^{\infty}$ be a countably infinite dimensional vector space over $\mathbb{F}_q$ with basis $\{e_1, e_2, \ldots \}$. We may write \[ \mathbb{F}_q^\infty = \varinjlim \mathbb{F}_q^n, \] where $\mathbb{F}_q^n$ has basis $\{e_1, e_2, \ldots, e_n\}$, and the maps in the directed limit are the obvious inclusions. \begin{definition} Say that a subspace $V$ of $\mathbb{F}_q^\infty$ is \emph{smooth} if it contains $e_i$ for all sufficiently large $i$. \end{definition} \noindent It is immediate that the intersection of two smooth subspaces of $\mathbb{F}_q^\infty$ is again smooth. Note that while smooth subspaces have finite codimension, not every finite codimension subspace is smooth, for example if $L$ is the linear map $L:\mathbb{F}_q^\infty \to \mathbb{F}_q$ defined by $L(e_i) = 1$ for all $i$, then $\ker(L)$ has codimension $1$, but does not contain any $e_i$.
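\noindent For intuition, the $\ker(L)$ example can be probed in a finite truncation; the following Python sketch (a toy computation of our own, over $\mathbb{F}_2$ with $n = 4$) checks that the truncated kernel has codimension $1$ yet contains no standard basis vector.

```python
# Toy check of the ker(L) example in a finite truncation: over F_2 with n = 4,
# the subspace {v : sum of coordinates = 0} has codimension 1 but contains
# no standard basis vector e_i (each e_i has coordinate sum 1).
q, n = 2, 4
vectors = [tuple((x >> i) & 1 for i in range(n)) for x in range(q ** n)]
kerL = {v for v in vectors if sum(v) % q == 0}
basis = [tuple(int(i == j) for i in range(n)) for j in range(n)]

assert len(kerL) == q ** (n - 1)            # codimension 1 in F_2^4
assert all(e not in kerL for e in basis)    # no e_i lies in ker(L)
print(len(kerL))
```

Of course, in the infinite-dimensional setting the point is stronger: $\ker(L)$ fails to contain \emph{any} $e_i$, so it is not smooth despite having finite codimension.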
\newline \newline \noindent Let us view an element of the general linear group $GL_n(\mathbb{F}_q)$ as a matrix with respect to the basis $\{e_1, \ldots, e_n\}$. Then for $n < m$ we have (injective) homomorphisms $\rho_{n,m}: GL_n(\mathbb{F}_q) \to GL_m(\mathbb{F}_q)$ defined by \[ \rho_{n,m}(g)(e_i) = \begin{cases} g(e_i) & i \leq n \\ e_i & i > n \end{cases} \] which may be viewed as extending an $n \times n$ matrix to an $m \times m$ matrix by taking the block sum with the identity matrix, i.e. $\rho_{n,m}(g) = \bigl( \begin{smallmatrix}g & 0 \\ 0 & \Id_{m-n} \end{smallmatrix}\bigr)$. \begin{definition} Let $GL_\infty(\mathbb{F}_q)$ be \[ \varinjlim GL_n(\mathbb{F}_q), \] where the maps in the directed system are the inclusions $\rho_{n,m}$. In other words, $GL_\infty(\mathbb{F}_q)$ consists of invertible matrices (whose entries are indexed by $\mathbb{Z}_{>0} \times \mathbb{Z}_{>0}$) that differ from the identity matrix in finitely many entries. We frequently view $GL_n(\mathbb{F}_q)$ as a subgroup of $GL_\infty(\mathbb{F}_q)$. \end{definition} \noindent It is immediate that if $V$ is a smooth subspace of $\mathbb{F}_q^\infty$ and $g \in GL_\infty(\mathbb{F}_q)$, then $gV$ is also a smooth subspace. Note that since we have chosen a basis to work with, there is a notion of transpose. We write $g^T$ for the transpose of $g \in GL_\infty(\mathbb{F}_q)$. \begin{definition} A \emph{bounding triple} is a tuple $(W, g, V)$ where $W$ and $V$ are smooth subspaces of $\mathbb{F}_q^\infty$ and $g \in GL_\infty(\mathbb{F}_q)$ is such that $g$ acts as the identity on $V$ and $g^T$ acts as the identity on $W$. Additionally, we let $BT_n$ be the set of bounding triples $(W,g,V)$ with $\{e_{n+1}, e_{n+2}, \ldots\} \subseteq V, W$. \end{definition} \noindent Of course, $BT_1 \subseteq BT_2 \subseteq \cdots$, and each bounding triple belongs to some $BT_n$, so the $BT_n$ ``filter'' the set of bounding triples.
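\noindent The embedding $\rho_{n,m}$ is simply a block sum with an identity matrix; a minimal Python sketch of this operation (the function name is ours, and matrices are represented as lists of rows):

```python
# Sketch of rho_{n,m}: extend an n x n matrix to an m x m matrix by taking
# the block sum with the identity matrix Id_{m-n} (function name is ours).
def rho(g, m):
    n = len(g)
    return [[g[i][j] if i < n and j < n else int(i == j) for j in range(m)]
            for i in range(m)]

g = [[0, 1], [1, 0]]   # a transposition matrix, viewed in GL_2(F_q)
print(rho(g, 4))       # g extended to GL_4(F_q) by a 2 x 2 identity block
```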
As the next lemma shows, the two spaces $W$, $V$ in a bounding triple $(W,g,V)$ constrain the entries where $g$ can differ from the identity matrix. Thus $W$ and $V$ serve to ``bound'' the behaviour of $g$. \begin{lemma} \label{Bn_lemma} Suppose that $(W,g,V) \in BT_n$. Then $g \in GL_n(\mathbb{F}_q)$, i.e. $g$ agrees with the identity matrix starting at the $(n+1)$-th row and $(n+1)$-th column. \end{lemma} \begin{proof} The condition that $\{e_{n+1}, e_{n+2}, \ldots\} \subseteq V$ implies that $g(e_i) = e_i$ for $i > n$, which agrees with the identity matrix. Similarly the condition on $W$ shows $g^T(e_i) = e_i$ for $i > n$, which agrees with the identity matrix. So our $g$ may be viewed as an element of $GL_n(\mathbb{F}_q)$. \end{proof} \begin{definition} Let us say that a bounding triple $(W,g,V)$ is \emph{tight} if $V = \ker(g-1)$ and $W = \ker(g^T - 1)$. \end{definition} \noindent In a bounding triple, $g-1$ and $g^T-1$ act by zero on $V$ and $W$ respectively. So in a tight bounding triple, $V$ and $W$ are as large as possible. It is clear that each $g \in GL_\infty(\mathbb{F}_q)$ is contained in a unique tight bounding triple. \begin{proposition} \label{triple_conj_prop} There is an action of $GL_\infty(\mathbb{F}_q)$ on the set of all bounding triples via \[ x \cdot (W, g, V) = (x^{-T}W, xgx^{-1}, xV), \] where $x^{-T} = (x^{-1})^T = (x^T)^{-1}$. We refer to this action as ``conjugation''. \end{proposition} \begin{proof} First we observe that $xgx^{-1}$ acts as the identity on $xV$, and similarly $(xgx^{-1})^T = x^{-T} g^T x^T$ acts as the identity on $x^{-T}W$, so $x \cdot (W,g,V)$ is a bounding triple. Clearly $\Id \cdot (W, g, V) = (W, g, V)$. All that remains to be checked is associativity: \begin{eqnarray*} x_1 \cdot ( x_2 \cdot (W, g, V) ) &=& x_1 \cdot (x_2^{-T}W, x_2 g x_2^{-1}, x_2 V) \\ &=& (x_1^{-T}x_2^{-T}W, x_1 x_2 g x_2^{-1} x_1^{-1}, x_1 x_2 V) \\ &=& ((x_1x_2)^{-T}W, x_1x_2 g (x_1x_2)^{-1} , x_1 x_2 V). 
\end{eqnarray*} \end{proof} \subsection{Conjugacy of Bounding Triples} \noindent In order to further understand bounding triples $(W,g,V)$, we introduce a bilinear pairing $\mathbb{F}_q^\infty \times \mathbb{F}_q^\infty \to \mathbb{F}_q$. Here we think of $W$ as being a subspace of the first factor, and $V$ as a subspace of the second factor. In order to be compatible with the conjugacy action, we consider the following action of $GL_\infty(\mathbb{F}_q)$ on $\mathbb{F}_q^\infty \times \mathbb{F}_q^\infty$: \[ x \cdot (w \otimes v) = (x^{-T}w) \otimes (xv), \] so the two factors of $\mathbb{F}_q^\infty$ transform dually to each other. \begin{definition} Let $\langle -,- \rangle$ be the bilinear pairing on $\mathbb{F}_q^\infty \times \mathbb{F}_q^\infty$ defined by $\langle e_i, e_j \rangle = \delta_{i,j}$. \end{definition} \noindent By construction, $\langle -,- \rangle$ is $GL_\infty(\mathbb{F}_q)$-invariant. In particular, for smooth subspaces $W, V \subseteq \mathbb{F}_q^\infty$, we have a (possibly degenerate) pairing $W \times V \to \mathbb{F}_q$ by restricting $\langle -,- \rangle$. Although the action of $x \in GL_\infty(\mathbb{F}_q)$ may not preserve $W$ or $V$, we have $\dim(W \cap V^\perp) = \dim((x^{-T}W) \cap (xV)^\perp)$ and $\dim(W^\perp \cap V) = \dim((x^{-T}W)^\perp \cap (xV))$. \begin{lemma} \label{first_decomposition_lemma} Suppose that $(W,g,V) \in BT_n$. There is a vector space decomposition $\mathbb{F}_q^\infty = U_1 \oplus U_2 \oplus U_3$ such that: \begin{itemize} \item $U_1, U_2 \subseteq \mathbb{F}_q^n$ and $U_3$ contains $e_{n+1}, e_{n+2}, \ldots$, \item $U_2 \oplus U_3 = V$, \item $U_2 = V \cap W^\perp$ and the restricted pairing $W \times U_3 \to \mathbb{F}_q$ is non-degenerate on the right. \end{itemize} \end{lemma} \begin{proof} Since $W$ contains $e_{n+1}, e_{n+2}, \ldots$, we have $W^\perp \subseteq \mathbb{F}_q^n$. So if we take $U_2 = V \cap W^\perp$, we get $U_2 \subseteq \mathbb{F}_q^n$. 
Then we take $U_3$ to be any complement to $U_2$ in $V$ that contains $e_{n+1}, e_{n+2}, \ldots$. The pairing $W \times U_3$ is non-degenerate on the right because $U_3 \cap W^\perp \subseteq V \cap W^\perp = U_2$ and $U_2$ intersects $U_3$ trivially. Finally, we pick $U_1$ to be any complement to $V$ in $\mathbb{F}_q^\infty$ that is contained inside $\mathbb{F}_q^n$ (which exists because $V$ contains $e_{n+1}, e_{n+2}, \ldots$). \end{proof} \begin{proposition} \label{standard_shape_prop} Suppose that $(W,g,V) \in BT_n$. Let $a = \dim(W \cap V^\perp)$, $b = \dim(V \cap W^\perp)$, and let $c$ be the rank of the pairing $\langle -,- \rangle$ restricted to $(W \cap \mathbb{F}_q^n) \times (V \cap \mathbb{F}_q^n)$. Then there is $x \in GL_n(\mathbb{F}_q)$ and a decomposition $\mathbb{F}_q^\infty = E_1 \oplus E_2 \oplus E_3 \oplus E_4$ such that $x \cdot (W, g, V) = (W^\prime , g^\prime, V^\prime)$, where: \begin{itemize} \item $E_1 = \mathbb{F}_q\{e_1, \ldots, e_a\}$, \item $E_2 = \mathbb{F}_q\{e_{a+1}, \ldots, e_{n-b-c}\}$, \item $E_3 = \mathbb{F}_q\{e_{n-b-c+1}, \ldots, e_{n-c}\}$, \item $E_4 = \mathbb{F}_q\{e_{n-c+1}, e_{n-c+2}, \ldots\}$, \item $V^\prime = E_3 \oplus E_4$, \item $W^\prime = E_1 \oplus E_4$. \end{itemize} Moreover, when viewed as an element of $GL_n(\mathbb{F}_q)$, $g^\prime$ has the block matrix form \[ \begin{bmatrix} \Id & 0 & 0 & 0\\ C & A & 0 & 0 \\ D & B & \Id & 0 \\ 0 & 0 & 0 & \Id \end{bmatrix} \] where the sizes of the blocks are $a, n-a-b-c, b, c$, and $A,B,C,D$ are appropriately sized matrices. \end{proposition} \begin{proof} We first perform a preliminary conjugation to pin down $V^\prime$ before we conjugate a second time to obtain the required form of $W^\prime$. Consider the decomposition $\mathbb{F}_q^\infty = U_1 \oplus U_2 \oplus U_3$ provided by Lemma \ref{first_decomposition_lemma}, noting that $V = U_2 \oplus U_3$. 
We may construct a matrix $x^{\prime \prime}$ that describes a change of basis between the standard basis of $\mathbb{F}_q^\infty$ and a basis obtained by concatenating bases of $U_1, U_2, U_3$. In particular, we may take the basis of $U_3$ to consist of $c$ vectors in $\mathbb{F}_q^n$ followed by $e_{n+1}, e_{n+2}, \ldots$. Together with the fact that $U_1, U_2 \subseteq \mathbb{F}_q^n$ this guarantees that $x^{\prime \prime} \in GL_n(\mathbb{F}_q)$. Now we consider $(W^{\prime \prime}, g^{\prime \prime}, V^{\prime \prime}) = x^{\prime \prime} \cdot (W, g, V)$. \newline \newline \noindent By construction, $V^{\prime \prime} = \mathbb{F}_q\{e_{r+1}, e_{r+2} \ldots\}$, where $r = \dim(U_1) = \codim(V) = n - b - c$. So in particular, $V^{\prime \prime} = E_3 \oplus E_4$. This implies that $(V^{\prime \prime})^\perp = \mathbb{F}_q\{e_1, \ldots, e_r\} = E_1 \oplus E_2$. With respect to the decomposition $\mathbb{F}_q^n = \mathbb{F}_q^r \oplus (V^{\prime \prime} \cap \mathbb{F}_q^n)$, $g^{\prime \prime}$ has the block form \[ \begin{bmatrix} M_1 & 0 \\ M_2 & \Id \end{bmatrix}, \] because $g^{\prime \prime}$ fixes $V^{\prime \prime}$ pointwise. Now we will conjugate by a second element $x^\prime$ to get $x^\prime \cdot (W^{\prime \prime}, g^{\prime \prime}, V^{\prime \prime}) = (W^\prime, g^\prime, V^\prime)$. We take $x^\prime \in GL_n(\mathbb{F}_q)$ to have block form \[ \begin{bmatrix} P_1 & 0 \\ P_2 & P_3 \end{bmatrix}, \] so that $V^{\prime} = x^\prime V^{\prime \prime} = V^{\prime \prime}$. This allows $(x^{\prime})^{-T}$ to be an arbitrary invertible block-upper-triangular matrix. We use this freedom to control $W^\prime = (x^\prime)^{-T}W^{\prime \prime}$. The fact that $(x^\prime)^{-T}$ is block upper-triangular means we must preserve $\mathbb{F}_q^r = (V^{\prime \prime})^\perp$, but there are no other restrictions on the change of basis. 
So we choose to move $W^{\prime \prime} \cap \mathbb{F}_q^r = W^{\prime \prime} \cap (V^{\prime \prime})^\perp$ to $\mathbb{F}_q \{e_1, \ldots, e_a\} = E_1$ (which remains contained in $\mathbb{F}_q^r$). We move the rest of $W \cap \mathbb{F}_q^n$ to $\mathbb{F}_q \{e_{n-c+1}, \ldots, e_n\} = E_4 \cap \mathbb{F}_q^n$. \newline \newline \noindent Putting this all together, if $x = x^\prime x^{\prime \prime}$, then $x \cdot (W, g, V) = (W^\prime, g^\prime, V^\prime)$ where we have $V^\prime = E_3 \oplus E_4$ and $W^\prime = E_1 \oplus E_4$. Now, with respect to the decomposition $\mathbb{F}_q^n = E_1 \oplus E_2 \oplus E_3 \oplus E_4$, $g^\prime$ has the block matrix form \[ \begin{bmatrix} \Id & 0 & 0 & 0\\ C & A & 0 & 0 \\ D & B & \Id & 0 \\ 0 & 0 & 0 & \Id \end{bmatrix} \] because $g^\prime$ acts by the identity on $V^\prime = E_3 \oplus E_4$, and because $(g^\prime)^{T}$ acts by the identity on $W^\prime = E_1 \oplus E_4$ (since $(g^\prime)^{-T}$ does). \end{proof} \begin{definition} We say that a bounding triple $(W^\prime,g^\prime,V^\prime) \in BT_n$ in the form described in Proposition \ref{standard_shape_prop} has \emph{standard shape}. \end{definition} \begin{proposition} \label{gl_inf_gl_n_conjugacy_prop} Two bounding triples in $BT_n$ that are conjugate by an element of $GL_\infty(\mathbb{F}_q)$ are conjugate by an element of $GL_n(\mathbb{F}_q)$. \end{proposition} \begin{proof} Suppose we are given bounding triples $(W_1, g_1, V_1), (W_2, g_2, V_2) \in BT_n$ that are conjugate by an element of $GL_\infty(\mathbb{F}_q)$. By Proposition \ref{standard_shape_prop}, they are $GL_n(\mathbb{F}_q)$-conjugate to triples in standard shape. So to prove our bounding triples are $GL_n(\mathbb{F}_q)$-conjugate, we may assume that $(W_1, g_1, V_1)$ and $(W_2, g_2, V_2)$ are in standard shape. 
\newline \newline \noindent Since $(W_1, g_1, V_1)$ and $(W_2, g_2, V_2)$ are $GL_\infty(\mathbb{F}_q)$-conjugate, they are conjugate by an element of $GL_{m}(\mathbb{F}_q)$ for some $m$. We may take $m \geq n$, which guarantees $(W_1, g_1, V_1), (W_2, g_2, V_2) \in BT_m$. Note that if $(W,g,V) \in BT_n$ has block sizes $a,n-a-b-c,b,c$ in its standard shape, then viewing it as an element of $BT_m$ instead gives a standard shape with block sizes $a, n-a-b-c, b, m-n+c$. This is because $a = \dim(W \cap V^\perp)$ and $b = \dim(V \cap W^\perp)$ are unchanged, while the rank of the bilinear form on $(W \cap \mathbb{F}_q^m) \otimes (V \cap \mathbb{F}_q^m)$ is $m-n$ larger than the rank of the bilinear form on $(W \cap \mathbb{F}_q^n) \otimes (V \cap \mathbb{F}_q^n)$ since both $V$ and $W$ contain $e_{n+1}, \ldots, e_m$. \newline \newline \noindent We deduce from this that $(W_1, g_1, V_1)$ and $(W_2, g_2, V_2)$ have blocks of the same size when put in standard shape (as elements of $BT_m$). In particular, this implies that $W_1 = W_2$ and $V_1 = V_2$ since these spaces are spanned by certain basis vectors determined by the block sizes. We recall the block structure of our matrices: \[ g_1 = \begin{bmatrix} \Id & 0 & 0 & 0\\ C_1 & A_1 & 0 & 0 \\ D_1 & B_1 & \Id & 0 \\ 0 & 0 & 0 & \Id \end{bmatrix}, \hspace{10mm} g_2 = \begin{bmatrix} \Id & 0 & 0 & 0\\ C_2 & A_2 & 0 & 0 \\ D_2 & B_2 & \Id & 0 \\ 0 & 0 & 0 & \Id \end{bmatrix}, \] where the block sizes are $a, n-a-b-c, b, m-n+c$. Now we consider $x \in GL_{m}(\mathbb{F}_q)$ which conjugates one bounding triple to the other. In terms of the same block structure, in order for $x^{-T}$ to preserve $W=W_1=W_2$, $x$ must have the form \[ \begin{bmatrix} * & 0 & 0 & *\\ * & * & * & * \\ * & * & * & * \\ * & 0 & 0 & * \end{bmatrix}, \] where $*$ indicates an unconstrained entry. 
Similarly, for $x$ to preserve $V=V_1=V_2$, it must take the form \[ \begin{bmatrix} * & * & 0 & 0\\ * & * & 0 & 0 \\ * & * & * & * \\ * & * & * & * \end{bmatrix}. \] Combining these conditions, we may take \[ x = \begin{bmatrix} x_{11} & 0 & 0 & 0 \\ x_{21} & x_{22} & 0 & 0 \\ x_{31} & x_{32} & x_{33} & x_{34} \\ x_{41} & 0 & 0 & x_{44} \end{bmatrix}, \] and we note that such $x$ has determinant $\det(x) = \det(x_{11}) \det(x_{22}) \det(x_{33}) \det(x_{44}) \neq 0$. The condition $xg_1x^{-1} = g_2$ is equivalent to $x (g_1 - \Id) = (g_2 - \Id) x$, which we now explicitly work out: \begin{eqnarray*} x(g_1-\Id) &=& \begin{bmatrix} 0 & 0 & 0 & 0\\ x_{22} C_1 & x_{22}(A_1 - \Id) & 0 & 0 \\ x_{32} C_1 + x_{33} D_1 & x_{32}(A_1 - \Id) + x_{33} B_1 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}, \\ (g_2 - \Id) x &=& \begin{bmatrix} 0 & 0 & 0 & 0\\ C_2 x_{11} + (A_2 - \Id) x_{21} & (A_2 - \Id) x_{22} & 0 & 0 \\ D_2 x_{11} + B_2 x_{21} & B_2 x_{22} & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}. \end{eqnarray*} The equality of these two matrices imposes no restrictions on the entries $x_{41}$, $x_{34}$, $x_{44}$ (or $x_{31}$), so we may replace $x_{41}$ and $x_{34}$ with zero matrices, and set $x_{44}$ to be the identity matrix. This modified matrix $x_{new}$ has block form \[ x_{new} = \begin{bmatrix} x_{11} & 0 & 0 & 0 \\ x_{21} & x_{22} & 0 & 0 \\ x_{31} & x_{32} & x_{33} & 0 \\ 0 & 0 & 0 & \Id \end{bmatrix}, \] and is invertible because $\det(x_{11})\det(x_{22})\det(x_{33}) \neq 0$. Therefore $x_{new}$ may be viewed as an element of $GL_{n-c}(\mathbb{F}_q) \subseteq GL_n(\mathbb{F}_q)$. Additionally, $x_{new}g_1 = g_2 x_{new}$ and $x_{new}V = V$, $x_{new}^{-T}W = W$, as needed. \end{proof} \subsection{Products of Bounding Triples} \label{bounding_triples_subsection} \begin{definition} We define a multiplication on bounding triples via the formula \[ (W_1, g_1, V_1) \times (W_2, g_2, V_2) = (W_1 \cap W_2, g_1 g_2, V_1 \cap V_2). 
\] \end{definition} \noindent Since $g_1$ and $g_2$ both act on $V_1 \cap V_2$ as the identity (and similarly for $g_1^T$ and $g_2^T$ on $W_1 \cap W_2$), the product of two bounding triples is again a bounding triple. \begin{proposition} Multiplication of bounding triples is associative and also $GL_\infty(\mathbb{F}_q)$-equivariant. \end{proposition} \begin{proof} Associativity follows from associativity of intersections and of multiplication in $GL_\infty(\mathbb{F}_q)$. Let $x \in GL_\infty(\mathbb{F}_q)$. Then \begin{eqnarray*} (x \cdot (W_1, g_1, V_1)) \times (x \cdot (W_2, g_2, V_2)) &=& (x^{-T}W_1, xg_1x^{-1}, xV_1) \times (x^{-T}W_2, xg_2x^{-1}, xV_2) \\ &=& (x^{-T}W_1 \cap x^{-T}W_2, xg_1g_2x^{-1}, xV_1 \cap x V_2) \\ &=& (x^{-T}(W_1 \cap W_2), x g_1 g_2 x^{-1}, x (V_1 \cap V_2)) \\ &=& x \cdot ((W_1, g_1, V_1) \times (W_2, g_2, V_2)). \end{eqnarray*} \noindent Here we have used the fact that $xV_1 \cap x V_2= x (V_1 \cap V_2)$ (and similarly for $W_1$ and $W_2$), which follows from the fact that $x$ is a bijection. \end{proof} \noindent Hence, the set of all bounding triples forms a semigroup, and each $BT_n$ is also a semigroup. \begin{proposition} \label{finite_product_prop} There are only finitely many ways in which a given bounding triple can be expressed as a product of two bounding triples. \end{proposition} \begin{proof} Suppose $(W_3, g_3, V_3)$ is given, and we would like to find solutions to $(W_1, g_1, V_1) \times (W_2, g_2, V_2) = (W_3, g_3, V_3)$, in other words, \[ (W_1 \cap W_2, g_1 g_2, V_1 \cap V_2) = (W_3, g_3, V_3). \] Let $n$ be such that $W_3$ and $V_3$ both contain $e_{n+1}, e_{n+2}, \ldots$. Then $W_1, W_2, V_1, V_2$ must also contain these elements. There are only finitely many subspaces containing $e_{n+1}, e_{n+2}, \ldots$ (such subspaces are in bijection with subspaces of $\mathbb{F}_q^{\infty} / \mathbb{F}_q \{e_{n+1}, e_{n+2}, \ldots\}$ which is finite dimensional). 
By Lemma \ref{Bn_lemma}, it also follows that $g_1$ and $g_2$ correspond to elements of the finite group $GL_n(\mathbb{F}_q)$. \end{proof} \begin{definition} Let $\mathcal{A}$ be the set of functions from the set of all bounding triples to $\mathbb{Z}$. It is an abelian group with pointwise addition. We equip $\mathcal{A}$ with the following convolution product. If $f_1, f_2 \in \mathcal{A}$, then \[ (f_1 * f_2) (W_3, g_3, V_3) = \sum_{(W_1, g_1, V_1) \times (W_2, g_2, V_2) = (W_3, g_3, V_3)} f_1(W_1, g_1, V_1) f_2(W_2, g_2, V_2). \] \end{definition} \noindent By Proposition \ref{finite_product_prop}, the sum is finite, and therefore well defined. \begin{example} If $f_1$ and $f_2$ are the indicator functions of $(W_1, g_1, V_1)$ and $(W_2, g_2, V_2)$ respectively, then $f_1 * f_2$ is the indicator function of $(W_1, g_1, V_1) \times (W_2, g_2, V_2)$. As a result, we may view $\mathcal{A}$ as a completed version of the monoid algebra (over $\mathbb{Z}$) of bounding triples. \end{example} \noindent There is an action of $GL_\infty(\mathbb{F}_q)$ on $\mathcal{A}$ via \[ (x \cdot f) (W, g, V) = f( x^{-1} \cdot (W, g, V)). \] Because the multiplication is equivariant for the action of $GL_\infty(\mathbb{F}_q)$, the product of two $GL_\infty(\mathbb{F}_q)$-invariant elements is again invariant. \begin{definition} Let $\mathcal{A}^{GL_\infty}$ be the subspace of $\mathcal{A}$ consisting of elements that are invariant for the action of $GL_\infty(\mathbb{F}_q)$ and are supported on finitely many $GL_\infty(\mathbb{F}_q)$-orbits of bounding triples. We call $\mathcal{A}^{GL_\infty}$ the \emph{general linear Ivanov-Kerov algebra}. \end{definition} \begin{proposition} \label{Ivanov_Kerov_subalg_proposition} We have that $\mathcal{A}^{GL_\infty}$ is a subalgebra of $\mathcal{A}$. 
\end{proposition} \begin{proof} All we need to check is that the product of the indicator functions of two $GL_\infty(\mathbb{F}_q)$-orbits of bounding triples is supported on finitely many $GL_\infty(\mathbb{F}_q)$-orbits. To see this, we consider the product of two elements in the orbits of fixed bounding triples: \begin{eqnarray*} (W_3, g_3, V_3) &=& (x_1 \cdot (W_1, g_1, V_1)) \times (x_2 \cdot (W_2, g_2, V_2)) \\ &=& (x_1^{-T}W_1 \cap x_2^{-T}W_2, x_1g_1x_1^{-1}x_2g_2x_2^{-1}, x_1V_1 \cap x_2 V_2). \end{eqnarray*} This bounding triple is contained in $BT_n$ for some sufficiently large $n$. By Proposition \ref{standard_shape_prop}, we may conjugate it to lie in $BT_{n-c}$, where $c$ is the rank of the pairing $(W_3 \cap \mathbb{F}_q^n) \times (V_3 \cap \mathbb{F}_q^n) \to \mathbb{F}_q$. But the rank of this pairing is bounded below by $n - \codim(W_3) - \codim(V_3) \geq n - \codim(W_1) - \codim(W_2) - \codim(V_1) - \codim(V_2)$. We conclude that any bounding triple $(W_3, g_3, V_3)$ arising in this way is conjugate to an element of $BT_m$, where $m = \codim(W_1) + \codim(W_2) + \codim(V_1) + \codim(V_2)$ is independent of $x_1$ and $x_2$. Since $BT_m$ contains finitely many elements, it intersects finitely many orbits. \end{proof} \section{Specialisation Homomorphisms} \label{specitalisation_section} \begin{proposition} For any $n \in \mathbb{Z}_{\geq 0}$, there is a surjective homomorphism $\Psi_n: \mathcal{A} \to \mathbb{Z}GL_n(\mathbb{F}_q)$ defined by \[ \Psi_n(f) = \sum_{(W, g, V) \in BT_n} f(W,g,V)g. \] \end{proposition} \begin{proof} As argued in Proposition \ref{finite_product_prop}, there are only finitely many $(W, g, V)$ appearing in the sum, so the formula is well defined. It is clear that the formula respects addition.
For multiplication we find \begin{eqnarray*} \Psi_n(f_1 * f_2) &=& \sum_{(W, g, V) \in BT_n} \sum_{(W_1, g_1, V_1) \times (W_2, g_2, V_2) = (W, g, V)} f_1(W_1, g_1, V_1) f_2(W_2, g_2, V_2) g \\ &=& \left(\sum_{(W_1, g_1, V_1) \in BT_n} f_1(W_1, g_1, V_1) g_1 \right) \left( \sum_{(W_2, g_2, V_2) \in BT_n} f_2(W_2, g_2, V_2) g_2 \right) \\ &=& \Psi_n(f_1) \Psi_n(f_2). \end{eqnarray*} Here we used the fact that $(W_1, g_1, V_1) \times (W_2, g_2, V_2) \in BT_n$ if and only if $(W_1, g_1, V_1) \in BT_n$ and $(W_2, g_2, V_2) \in BT_n$. Now consider the indicator function $f$ of $(W,g,V)$ where $g \in GL_n(\mathbb{F}_q)$ is arbitrary while $V = W = \mathbb{F}_q\{ e_{n+1}, e_{n+2}, \ldots \}$. By definition $\Psi_n(f) = g$, and surjectivity follows by linearity. \end{proof} \begin{proposition} The image of $\mathcal{A}^{GL_\infty}$ under $\Psi_n$ is precisely the centre $Z(\mathbb{Z}GL_n(\mathbb{F}_q))$. \end{proposition} \begin{proof} Firstly we check that the map $\Psi_n$ is $GL_n(\mathbb{F}_q)$-equivariant, where the action on $\mathbb{Z}GL_n(\mathbb{F}_q)$ is conjugation. Let $x \in GL_n(\mathbb{F}_q)$. Then \begin{eqnarray*} \Psi_n( x \cdot f) &=& \sum_{(W, g, V) \in BT_n} f(x^{T}W, x^{-1}gx, x^{-1}V) g \\ &=& \sum_{(W, g, V) \in BT_n} f(W, g, V) xgx^{-1} \\ &=& x \Psi_n(f) x^{-1}. \end{eqnarray*} In order to reindex the sum we used the fact that $x^{-T}W$ and $xV$ contain $\{e_{n+1}, e_{n+2}, \ldots\}$ if and only if $W$ and $V$ do, and also that $g \in GL_n(\mathbb{F}_q)$ if and only if $xgx^{-1} \in GL_n(\mathbb{F}_q)$. \newline \newline \noindent Since a $GL_\infty(\mathbb{F}_q)$-orbit is a union of $GL_n(\mathbb{F}_q)$-orbits, a function $f \in \mathcal{A}^{GL_\infty}$ is constant on $GL_n(\mathbb{F}_q)$-orbits. It follows that $\Psi_n(f)$ is fixed under conjugation by $GL_n(\mathbb{F}_q)$, so it is in the centre of the group algebra. To see that this map surjects onto the centre, it is enough to show that conjugacy-class sums are in the image. 
For $x \in GL_n(\mathbb{F}_q)$, we take $f$ to be the indicator function of tight bounding triples $(W, g, V)$ where $g$ is conjugate to $x$ in $GL_\infty(\mathbb{F}_q)$ (recall that for tight bounding triples, $g$ determines $W$ and $V$, so there is only one for each $g$). Proposition \ref{gl_inf_gl_n_conjugacy_prop} shows that the only such bounding triples $(W, g, V)$ in $BT_n$ are those where $g$ and $x$ are conjugate by elements of $GL_n(\mathbb{F}_q)$. Then $\Psi_n(f)$ is precisely the sum of all $g$ that are conjugate to $x$ and contained in $GL_n(\mathbb{F}_q)$. \end{proof} \begin{proposition} \label{specialisation_function_gl_proposition} Let $f$ be the indicator function of the $GL_\infty(\mathbb{F}_q)$-orbit of $(W, g, V)$, where $(W, g, V) \in BT_n$. Let $a,b,c$ have the same meanings as in Proposition \ref{standard_shape_prop}, which shows that up to conjugation we may assume $(W,g,V) \in BT_{n-c}$. Let $\boldsymbol\mu$ be the type of $g$ viewed as an element of $GL_{n-c}(\mathbb{F}_q)$. Suppose that $m \geq n-c$, and let $Cl(g)$ be the sum of all elements in $GL_{m}(\mathbb{F}_q)$ that are conjugate to $g$. Then \[ \Psi_m(f) = K \qbinom{m-n+c+h}{h}_q q^{(m-n+c)(2k-h-a-b)} Cl(g), \] where $k = l(\boldsymbol\mu(t-1))$ and $h = m_1(\boldsymbol\mu(t-1))$, and $K$ is an integer independent of $m$. \end{proposition} \begin{proof} It is immediate that $\Psi_m(f) = P \cdot Cl(g)$ for some integer $P$. The coefficient $P$ is equal to the number of different pairs of subspaces $(W^\prime, V^\prime)$ such that $(W^\prime, g, V^\prime)$ and $(W, g, V)$ are $GL_{m}(\mathbb{F}_q)$-conjugate. Explicitly, for some $x \in GL_{m}(\mathbb{F}_q)$, \begin{eqnarray*} V^\prime &=& xV \\ g &=& xgx^{-1} \\ W^\prime &=& x^T W. \end{eqnarray*} The condition $g = xgx^{-1}$ is saying that $x$ is in the centraliser $C_{GL_{m}(\mathbb{F}_q)}(g)$. So $P$ is the size of the orbit of the pair $(W,V)$ under the action of $C_{GL_{m}(\mathbb{F}_q)}(g)$. 
Then, the orbit-stabiliser relation implies \[ P = \frac{|C_{GL_{m}(\mathbb{F}_q)}(g)|}{|C_{GL_{m}(\mathbb{F}_q)}(g) \cap \mathrm{Stab}_m(W,V)|}, \] where $\mathrm{Stab}_m(W,V)$ is the subgroup of $GL_{m}(\mathbb{F}_q)$ stabilising the pair $(W,V)$ under the action $x \cdot (W,V) = (x^{-T}W, xV)$. By the same computation as in Proposition \ref{gl_inf_gl_n_conjugacy_prop}, taking $g$ to be in standard shape \[ g= \begin{bmatrix} \Id & 0 & 0 & 0\\ C & A & 0 & 0 \\ D & B & \Id & 0 \\ 0 & 0 & 0 & \Id \end{bmatrix}, \] we find $\mathrm{Stab}_m(W,V)$ consists of block matrices of the form \[ x = \begin{bmatrix} x_{11} & 0 & 0 & 0\\ x_{21} & x_{22} & 0 & 0 \\ x_{31} & x_{32} & x_{33} & x_{34} \\ x_{41} & 0 & 0 & x_{44} \end{bmatrix}, \] where the block sizes are $a, n-a-b-c, b, m-n+c$. To compute the size of the intersection of $C_{GL_{m}(\mathbb{F}_q)}(g)$ and $\mathrm{Stab}_m(W,V)$, we use the fact that the condition of commuting with $g$ does not depend on the blocks $x_{41}$, $x_{34}$, $x_{44}$. This means that any element $x \in C_{GL_{m}(\mathbb{F}_q)}(g) \cap \mathrm{Stab}_m(W,V)$ is obtained by extending some element \[ x_{small} = \begin{bmatrix} x_{11} & 0 & 0\\ x_{21} & x_{22} & 0\\ x_{31} & x_{32} & x_{33} \end{bmatrix} \in GL_{n-c}(\mathbb{F}_q) \cap C_{GL_{m}(\mathbb{F}_q)}(g) \cap \mathrm{Stab}_m(W,V) \] with arbitrary choices of $x_{41} \in \mathbb{F}_q^{a (m-n+c)}$, $x_{34} \in \mathbb{F}_q^{(m-n+c)b}$, $x_{44} \in GL_{m-n+c}(\mathbb{F}_q)$. So if we let \[ N = |GL_{n-c}(\mathbb{F}_q) \cap C_{GL_{m}(\mathbb{F}_q)}(g) \cap \mathrm{Stab}_m(W,V)| \] be the number of possible choices of $x_{small}$, then \[ |C_{GL_{m}(\mathbb{F}_q)}(g) \cap \mathrm{Stab}_m(W,V)| = N q^{(a+b) (m-n+c)} \prod_{i=1}^{m-n+c}(q^{m-n+c}-q^{m-n+c-i}), \] where $N$ does not depend on $m$. \newline \newline \noindent The conjugacy class of $g$ in $GL_{m}(\mathbb{F}_q)$ is labelled by the multipartition $\boldsymbol \mu \cup (1^{m-n+c})_{t-1}$. 
By Corollary \ref{gln_centraliser_corollary}, \[ \frac{|C_{GL_{m}(\mathbb{F}_q)}(g)|}{|C_{GL_{n-c}(\mathbb{F}_q)}(g)|} = q^{(m-n+c)(2k + m-n+c)} \prod_{i=h+1}^{h+m-n+c} (1 - q^{-i}). \] Since $N$ is the order of a subgroup of $C_{GL_{n-c}(\mathbb{F}_q)}(g)$, it follows that $K = |C_{GL_{n-c}(\mathbb{F}_q)}(g)|/N$ is an integer independent of $m$. Finally, we have \begin{eqnarray*} P &=& \frac{|C_{GL_{m}(\mathbb{F}_q)}(g)|}{|C_{GL_{m}(\mathbb{F}_q)}(g) \cap \mathrm{Stab}_m(W,V)|} \\ &=& \frac{|C_{GL_{n-c}(\mathbb{F}_q)}(g)|}{N q^{(a+b) (m-n+c)} \prod_{i=1}^{m-n+c}(q^{m-n+c}-q^{m-n+c-i})}\frac{|C_{GL_{m}(\mathbb{F}_q)}(g)|}{|C_{GL_{n-c}(\mathbb{F}_q)}(g)|} \\ &=& K \frac{q^{(m-n+c)(2k + m-n+c)} \prod_{i=h+1}^{h+m-n+c} (1 - q^{-i})}{q^{(a+b) (m-n+c)} \prod_{i=1}^{m-n+c}(q^{m-n+c}-q^{m-n+c-i})} \\ &=& K q^{(m-n+c)(2k-a-b-h)} \frac{\prod_{i=h+1}^{m-n+c+h} (q^i - 1)}{\prod_{i=1}^{m-n+c} (q^i - 1)} \\ &=& K \qbinom{m-n+c+h}{h}_q q^{(m-n+c)(2k-a-b-h)}. \end{eqnarray*} \end{proof} \begin{lemma} \label{lower_case_specialisation_correctness_lemma} The formula in Proposition \ref{specialisation_function_gl_proposition} is correct not just for $m \geq n-c$, but for all integers $m \geq 0$ (we interpret $Cl(g)$ to be zero if there are no elements of $GL_{m}(\mathbb{F}_q)$ conjugate to $g$). \end{lemma} \begin{proof} If $m < n-c$, then $\Psi_m(f) = 0$ for the following reason. Assume that $(W, g, V) \in BT_{n-c}$ so that the rank of the pairing on $(W \cap \mathbb{F}_q^{n-c}) \otimes (V \cap \mathbb{F}_q^{n-c})$ is zero. Then if $(W,g,V) \in BT_m$ with $m < n-c$, let the rank of the pairing on $(W \cap \mathbb{F}_q^m) \otimes (V \cap \mathbb{F}_q^m)$ be $c^\prime \geq 0$. Thus the rank of the pairing on $(W \cap \mathbb{F}_q^{n-c}) \otimes (V \cap \mathbb{F}_q^{n-c})$ is $c^\prime - (m-n+c) > 0$, a contradiction. 
On the other hand, the formula in Proposition \ref{specialisation_function_gl_proposition} involves the terms $\qbinom{m-n+c+h}{h}_q$ and $Cl(g)$, and we now argue that one of these must vanish if $m < n-c$. \newline \newline \noindent For $Cl(g)$ to be nonzero in $\mathbb{Z}GL_{m}(\mathbb{F}_q)$, we must have that $|\boldsymbol\mu| - m_1(\boldsymbol\mu(t-1)) \leq m$. But $|\boldsymbol\mu| = n-c$ and $m_1(\boldsymbol\mu(t-1)) = h$, so we get $m-n+c+h \geq 0$. Since $m < n-c$, we in fact have $ 0 \leq m-n+c+h < h$, from which it follows that $\qbinom{m-n+c+h}{h}_q = 0$. \end{proof} \begin{proposition} \label{exponent_nonnegativity_proposition} The quantity $2k-a-b-h$ appearing in Proposition \ref{specialisation_function_gl_proposition} is non-negative. \end{proposition} \begin{proof} Viewing $g$ as an element of $GL_{n-c}(\mathbb{F}_q)$, we have the block matrix form \[ g = \begin{bmatrix} \Id & 0 & 0\\ C & A & 0 \\ D & B & \Id \end{bmatrix} \] with blocks of size $a$, $n-a-b-c$, $b$. We need to determine the conjugacy classes in $GL_{n-c}(\mathbb{F}_q)$ that have an element of the above form. Problems of this nature are addressed in Chapter 4, Section 3 of \cite{Macdonald1995}, where it is shown that this reduces to multiplication of Hall-Littlewood polynomials whose indexing partitions come from the types of the diagonal blocks (in this case $\Id, A, \Id$). In more detail, let $P$ be the parabolic subgroup consisting of invertible matrices of the above block lower-triangular shape. Then inflating the class function which is the indicator function of the conjugacy class of $\mathrm{diag}(\Id, A, \Id) \in GL_{a}(\mathbb{F}_q) \times GL_{n-a-b-c}(\mathbb{F}_q) \times GL_{b}(\mathbb{F}_q)$ to $P$, and then inducing up to $GL_{n-c}(\mathbb{F}_q)$, we obtain a class function supported on elements of $GL_{n-c}(\mathbb{F}_q)$ conjugate to a matrix of the above form. It turns out that this parabolic induction operation is described by Hall polynomials $g_{\mu, \nu}^\lambda$. 
If $\pi_1, \pi_2$ are indicator functions of conjugacy classes of types $\boldsymbol\mu^{(1)}, \boldsymbol\mu^{(2)}$ respectively, then the parabolic induction of $\pi_1$ and $\pi_2$ contains the indicator function of $\boldsymbol\mu$ with coefficient \[ \prod_{r \in \Phi_q} g_{\boldsymbol\mu^{(1)}(r), \boldsymbol\mu^{(2)}(r)}^{\boldsymbol\mu(r)} (q^{\deg(r)}), \] by Equation 3.3 of Chapter 4, Section 3 of \cite{Macdonald1995}. In our situation, we have a three-fold product, where two of the factors (corresponding to identity matrices) have types vanishing away from $r(t) = t-1$, where they take the values $(1^a)$ and $(1^b)$. So the above product reduces just to the case where $r(t)=t-1$, and by Equation 4.6 of Chapter 2, Section 4 of \cite{Macdonald1995}, $g_{\mu, (1^d)}^\lambda$ is zero unless $\lambda/\mu$ is a vertical strip of size $d$. \newline \newline \noindent If $\boldsymbol\nu$ is the type of the matrix $A$ and $\boldsymbol\mu$ is the type of $g$, then $\boldsymbol \mu(r) = \boldsymbol \nu(r)$ for all irreducible polynomials $r$ different from $t-1$. In terms of Young diagrams, $\boldsymbol \mu(t-1)$ is obtained from $\boldsymbol \nu (t-1)$ by adding $a$ boxes, no two in the same row, and then adding $b$ boxes, no two in the same row. For each nonzero row (i.e. part) of $\boldsymbol\mu(t-1)$, let us consider the cases where either 2, 1, or 0 boxes were added to it in this procedure (rows of $\boldsymbol\nu(t-1)$ of size zero may receive boxes and thereby become rows of $\boldsymbol\mu(t-1)$). \newline \newline \noindent To each such row we associate a number computed as follows. We begin with 2, and subtract the number of boxes that were added to the row in the above procedure. If the row has length 1 in $\boldsymbol\mu(t-1)$, we also subtract 1. Summing over the $k$ rows of $\boldsymbol\mu(t-1)$, each row contributes 2, each of the $a+b$ added boxes is subtracted once, and each of the $h$ rows of length 1 is subtracted once, so the sum of all these numbers is exactly $2k - a - b - h$. The only way one of these numbers could be negative is if two boxes were added and the resulting length was 1. This is impossible, so each individual number is non-negative. In particular, their sum, $2k - a - b - h$, is non-negative. 
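\newline \newline \noindent To make the bookkeeping concrete, here is a small worked instance (the partitions are chosen purely for illustration). Suppose $\boldsymbol\nu(t-1) = (1)$ and $a = b = 1$, with one box added to the first row and one box added to a new second row, so that $\boldsymbol\mu(t-1) = (2,1)$. Then $k = 2$ and $h = 1$, so $2k - a - b - h = 1$. Row by row: the first row received one box and has length $2$, contributing $2 - 1 = 1$; the second row received one box and has length $1$, contributing $2 - 1 - 1 = 0$. The total is indeed $1 \geq 0$.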
\end{proof} \begin{remark} In fact, $2k - a - b - h$ is zero if the original bounding triple was tight because this forces two boxes to be added to each part of $\boldsymbol\nu(t-1)$, so the number associated to each row is zero. \end{remark} \section{Stable Centres for General Linear Groups}\label{stable_centres_for_general_linear_groups} \noindent We now use the theory we have developed to explicitly address stability properties of centres of group algebras of $GL_n(\mathbb{F}_q)$ as $n$ varies. First of all, we need a way to discuss conjugacy classes of these groups for all $n$ at once. \begin{definition}[Subsection 2.3 \cite{Wan_Wang}]\label{modified_type_definition} If $g \in GL_n(\mathbb{F}_q)$, we define the \emph{modified type} of $g$ to be the multipartition $\boldsymbol \nu$ obtained from the type, $\boldsymbol \mu$, of $g$ in the following way. For each irreducible polynomial $r$ other than $t-1$, $\boldsymbol\nu(r) = \boldsymbol\mu(r)$. On the other hand, the partition $\boldsymbol\nu(t-1)$ is obtained from $\boldsymbol\mu(t-1)$ by subtracting $1$ from each part (i.e. we delete the first column in the Young diagram representation of $\boldsymbol\mu(t-1)$). \end{definition} \begin{example} \label{modified_type_example} The identity element of $GL_n(\mathbb{F}_q)$ has type $\boldsymbol\mu$ where $\boldsymbol\mu(t-1)=(1^n)$ and $\boldsymbol\mu(r)$ is the empty partition for $r \neq t-1$, and so its modified type is the empty multipartition. \end{example} \begin{example} \label{modified_type_example_2} Consider the matrix $g$ of type $\boldsymbol\mu$ in Example \ref{basic_example}. The modified type $\boldsymbol\nu$ of $g$ is $\boldsymbol\nu(t-1) = (2)$ and $\boldsymbol\nu(f) = \boldsymbol\mu(f)$ for all other polynomials $f(t) \neq t-1$ irreducible over $\mathbb{F}_q$. \end{example} \noindent The reason for defining the modified type is that it is invariant under the embedding $GL_n(\mathbb{F}_q) \rightarrow GL_{n+1}(\mathbb{F}_q)$. 
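For instance (a purely illustrative case), let $g \in GL_2(\mathbb{F}_q)$ be a transvection, i.e. a unipotent element with $\mathrm{rank}(g-1) = 1$, so that its type satisfies $\boldsymbol\mu(t-1) = (2)$. Under the embeddings $GL_2(\mathbb{F}_q) \rightarrow GL_n(\mathbb{F}_q)$ the type at $t-1$ becomes $(2, 1^{n-2})$, but deleting the first column of the Young diagram yields the modified type $(1)$ at $t-1$ for every $n$.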
We may recover the type $\boldsymbol\mu$ from the modified type $\boldsymbol\nu$ provided $n$ is known: we must add $1$ to each part of $\boldsymbol\nu(t-1)$ (possibly including some parts of size zero), so that the total size increases to $n$. This means that $n - |\boldsymbol\nu|$ parts of $\boldsymbol\nu(t-1)$ are incremented, and so this is possible precisely when the number of parts of $\boldsymbol\nu(t-1)$ is less than or equal to $n - |\boldsymbol\nu|$. (If this condition does not hold, then $GL_n(\mathbb{F}_q)$ does not contain any elements of modified type $\boldsymbol \nu$.) \begin{definition} Let $X_{\boldsymbol\nu, n}$ denote the sum of all elements of modified type $\boldsymbol\nu$ in $GL_n(\mathbb{F}_q)$, viewed as an element of $Z(\mathbb{Z}GL_n(\mathbb{F}_q))$. \end{definition} \noindent Note that $X_{\boldsymbol\nu, n}$ is either the sum of elements in a conjugacy class, or zero, according to whether $GL_n(\mathbb{F}_q)$ does, or does not, have elements of modified type $\boldsymbol\nu$. In either case, it is a central element of the group algebra. \begin{theorem} \label{gl_structure_constant_theorem} There is a family of elements $r_{\boldsymbol\mu, \boldsymbol\nu}^{\boldsymbol\lambda} \in \mathcal{R}_q$ interpolating the structure constants of $Z(\mathbb{Z}GL_{m}(\mathbb{F}_q))$ in the following way: \[ X_{\boldsymbol\mu,m} X_{\boldsymbol\nu, m} = \sum_{\boldsymbol\lambda} r_{\boldsymbol\mu, \boldsymbol\nu}^{\boldsymbol\lambda}([m]_q) X_{\boldsymbol\lambda, m}. \] Here, $\boldsymbol\mu, \boldsymbol\nu$ are arbitrary multipartitions and the sum ranges over all multipartitions. \end{theorem} \begin{proof} Let us work in $\mathcal{A}^{GL_\infty}$. Let $f_{\boldsymbol\mu}$ be the indicator function of the $GL_\infty(\mathbb{F}_q)$-orbit of a tight bounding triple for an element of modified type $\boldsymbol\mu$, and similarly define $f_{\boldsymbol\nu}$. Using the results of Section \ref{specitalisation_section} we argue as follows. 
Firstly, $\Psi_m(f_{\boldsymbol\mu}) = X_{\boldsymbol\mu, m}$ and $\Psi_m(f_{\boldsymbol\nu}) = X_{\boldsymbol\nu, m}$, so \[ \Psi_m(f_{\boldsymbol\mu}f_{\boldsymbol\nu}) = X_{\boldsymbol\mu, m}X_{\boldsymbol\nu, m}. \] Now, $f_{\boldsymbol\mu}f_{\boldsymbol\nu}$ is a linear combination (with integer coefficients) of indicator functions of $GL_\infty(\mathbb{F}_q)$-orbits of some bounding triples $(W,g,V)$. But by Proposition \ref{specialisation_function_gl_proposition} such an indicator function specialises to \[ K \qbinom{m-n+c+h}{h}_q q^{(m-n+c)(2k-h-a-b)} X_{\boldsymbol\lambda, m}, \] where $\boldsymbol\lambda$ is the modified type of $g$, $K$ is an integer independent of $m$, and $a,b,c,h,k,n$ are the usual quantities associated to the bounding triple $(W,g,V)$. Now, \[ q^{(m-n+c)(2k-a-b-h)} = (1+(q-1)[m]_q)^{2k-a-b-h} q^{-(n-c)(2k-a-b-h)} \] is a polynomial in $[m]_q$ with coefficients in $\mathbb{Z}[q,q^{-1}]$ (and so it is the evaluation of an element of $\mathcal{R}_q$ at $x=[m]_q$). By Lemma \ref{shifted_qivp_lemma}, $\qbinom{m-n+c+h}{h}_q$ is the evaluation of some element of $\mathcal{R}_q$ at $x=[m]_q$. Combining these, we conclude that $K \qbinom{m-n+c+h}{h}_q q^{(m-n+c)(2k-h-a-b)}$ is obtained by evaluating an element of $\mathcal{R}_q$ at $[m]_q$. Finally, summing over all orbits where $g$ has modified type $\boldsymbol\lambda$, we obtain the required element $r_{\boldsymbol\mu, \boldsymbol\nu}^{\boldsymbol\lambda}$. \end{proof} \noindent Some comments are in order. First of all, following the discussion in Subsection \ref{qivp_subsection}, we may instead view $r_{\boldsymbol\mu, \boldsymbol\nu}^{\boldsymbol\lambda}$ as a polynomial in $q^m$ (rather than $[m]_q$) if we wish. Secondly, suppose that we wished to consider $Z(\mathbb{F}_q GL_n(\mathbb{F}_q))$, for example, to study the representation theory of $GL_n(\mathbb{F}_q)$ in defining characteristic. 
We would want to evaluate the quantities \[ K \qbinom{m-n+c+h}{h}_q q^{(m-n+c)(2k-h-a-b)} X_{\boldsymbol\lambda, m}\] modulo $q$. Note that $2k-a-b-h \geq 0$ (Proposition \ref{exponent_nonnegativity_proposition}) and $m-n+c \geq 0$ whenever the above quantity is nonzero (Lemma \ref{lower_case_specialisation_correctness_lemma}). This means that only non-negative powers of $q$ arise, so there is no issue in passing to coefficients in $\mathbb{F}_q$. However, the polynomials $r_{\boldsymbol\mu, \boldsymbol\nu}^{\boldsymbol\lambda}$ will have coefficients in $\mathbb{Z}[q,q^{-1}]$, and hence might not be defined over $\mathbb{F}_q$. This is a facet of how the representation theory of the general linear group in defining characteristic is very different from non-defining characteristic. \begin{definition} Let $\mathrm{FH}_q^{GL}$ be the free $\mathcal{R}_q$-module with basis given by symbols $K_{\boldsymbol\mu}$ for multipartitions $\boldsymbol\mu$. We equip $\mathrm{FH}_q^{GL}$ with a bilinear multiplication defined on basis elements via \[ K_{\boldsymbol\mu} K_{\boldsymbol\nu} = \sum_{\boldsymbol\lambda} r_{\boldsymbol\mu, \boldsymbol\nu}^{\boldsymbol\lambda} K_{\boldsymbol\lambda}, \] where $r_{\boldsymbol\mu, \boldsymbol\nu}^{\boldsymbol\lambda} \in \mathcal{R}_q$ are the elements from Theorem \ref{gl_structure_constant_theorem}. We call $\mathrm{FH}_q^{GL}$ the \emph{general linear Farahat-Higman algebra}. \end{definition} \begin{corollary}\label{gl_special_hom_cor} There is a ``specialisation'' homomorphism $ \Theta_n: \mathrm{FH}_q^{GL} \to Z(\mathbb{Z}GL_n(\mathbb{F}_q))$ defined by $\Theta_n(K_{\boldsymbol\mu}) = X_{\boldsymbol\mu,n}$ and by evaluating the coefficients (elements of $\mathcal{R}_q$) at $[n]_q$. \end{corollary} \begin{proof} This is the content of Theorem \ref{gl_structure_constant_theorem}. \end{proof} \begin{proposition}\label{glfh_is_nice_prop} We have that $\mathrm{FH}_q^{GL}$ is an associative, commutative, unital $\mathcal{R}_q$-algebra. 
\end{proposition} \begin{proof} To check these properties, we appeal to the maps $\Theta_n$. Since $Z(\mathbb{Z}GL_n(\mathbb{F}_q))$ is commutative, we have $X_{\boldsymbol\mu, n}X_{\boldsymbol\nu, n} = X_{\boldsymbol\nu, n}X_{\boldsymbol\mu, n}$. This means that \[ \sum_{\boldsymbol\lambda} r_{\boldsymbol\mu, \boldsymbol\nu}^{\boldsymbol\lambda}([n]_q) X_{\boldsymbol\lambda, n} = \sum_{\boldsymbol\lambda} r_{\boldsymbol\nu, \boldsymbol\mu}^{\boldsymbol\lambda}([n]_q) X_{\boldsymbol\lambda, n}. \] For $n$ large enough that $X_{\boldsymbol\lambda, n}$ is nonzero, we get $r_{\boldsymbol\mu, \boldsymbol\nu}^{\boldsymbol\lambda}([n]_q) = r_{\boldsymbol\nu, \boldsymbol\mu}^{\boldsymbol\lambda}([n]_q)$. But the set of $[n]_q$ for $n$ sufficiently large is a Zariski-dense subset of $\mathbb{Z}$, so in fact we have an equality of polynomials $r_{\boldsymbol\mu, \boldsymbol\nu}^{\boldsymbol\lambda} = r_{\boldsymbol\nu, \boldsymbol\mu}^{\boldsymbol\lambda}$, which proves commutativity. Similarly considering \[ \Theta_n((K_{\boldsymbol\mu}K_{\boldsymbol\nu})K_{\boldsymbol\lambda}) = \Theta_n(K_{\boldsymbol\mu}(K_{\boldsymbol\nu}K_{\boldsymbol\lambda})) \] which follows from associativity in $Z(\mathbb{Z}GL_n(\mathbb{F}_q))$, we deduce associativity in $\mathrm{FH}_q^{GL}$. Similarly again, we see that $K_{\varnothing}$ is the identity element, because $\Theta_n(K_{\varnothing})$ is the identity element of $Z(\mathbb{Z}GL_n(\mathbb{F}_q))$ (see Example \ref{modified_type_example}). \end{proof} \begin{lemma} \label{degree_bound_lemma} The degree of the polynomial $r_{\boldsymbol\mu, \boldsymbol\nu}^{\boldsymbol\lambda}$ is at most $2(|\boldsymbol\mu| + |\boldsymbol\nu| - |\boldsymbol\lambda|)$. \end{lemma} \begin{proof} In Theorem \ref{gl_structure_constant_theorem}, $r_{\boldsymbol\mu, \boldsymbol\nu}^{\boldsymbol\lambda}$ is calculated by considering the product of tight bounding triples corresponding to modified types $\boldsymbol\mu$ and $\boldsymbol\nu$. 
We seek to understand what happens when the result has modified type $\boldsymbol\lambda$. Let $(W_1, g_1, V_1)$ be a tight bounding triple where the modified type of $g_1$ is $\boldsymbol\mu$ and let $(W_2, g_2, V_2)$ be a tight bounding triple where the modified type of $g_2$ is $\boldsymbol\nu$. Note that we have $\codim(W_1) = \codim(V_1) = |\boldsymbol\mu|$ and $\codim(W_2) = \codim(V_2) = |\boldsymbol\nu|$. So, for \[ (W_1, g_1, V_1) \times (W_2, g_2, V_2) = (W_1 \cap W_2, g_1g_2, V_1 \cap V_2) \] we have $\codim(W_1 \cap W_2) \leq |\boldsymbol\mu| + |\boldsymbol\nu|$ and $\codim(V_1 \cap V_2) \leq |\boldsymbol\mu| + |\boldsymbol\nu|$. Now let us view this product of bounding triples in standard shape: \[ g_1g_2 = \begin{bmatrix} \Id & 0 & 0\\ C & A & 0 \\ D & B & \Id \end{bmatrix} \] where, as usual, the block sizes are $a, n-a-b-c, b$. The codimension of $W = W_1 \cap W_2$ is $n-a-c$ and the codimension of $V = V_1 \cap V_2$ is $n-b-c$. Let $\boldsymbol\rho$ be the (unmodified) type of this $(n-c) \times (n-c)$ matrix. Hence $|\boldsymbol\rho| = n-c$ and if $\boldsymbol\lambda$ is the modified type of the above matrix, we have $|\boldsymbol\lambda| = |\boldsymbol\rho| - l(\boldsymbol\rho(t-1))$. \newline \newline \noindent The contribution of the indicator function of the orbit of $(W_1 \cap W_2, g_1g_2, V_1 \cap V_2)$ to $r_{\boldsymbol\mu, \boldsymbol\nu}^{\boldsymbol\lambda}([m]_q)$ is \[ K \qbinom{m-n+c+h}{h}_q q^{(m-n+c)(2k-h-a-b)} \] which (viewed as a polynomial in $[m]_q$, or equivalently, $q^m$) has degree $2k-a-b$. Recall that $k = l(\boldsymbol\rho(t-1))$. We must show that $2k-a-b \leq 2 (|\boldsymbol\mu| + |\boldsymbol\nu| - |\boldsymbol\lambda|)$. Right away we point out that \[ 2(|\boldsymbol\mu| + |\boldsymbol\nu|) \geq \codim(W_1 \cap W_2) + \codim(V_1 \cap V_2) = 2n - a - b - 2c = 2(n-c-k) + 2k-a-b. \] Upon identifying $n-c-k = |\boldsymbol\rho| - l(\boldsymbol\rho(t-1)) = |\boldsymbol\lambda|$, we obtain the required inequality. 
\end{proof} \begin{proposition}[Theorem 3.4, \cite{Wan_Wang}]\label{gl_assoc_graded} The algebra $\mathrm{FH}_q^{GL}$ is filtered, where $K_{\boldsymbol\mu}$ is in filtration degree $|\boldsymbol\mu|$. Moreover the structure constants of the associated graded algebra are integers (rather than arbitrary elements of $\mathcal{R}_q$). \end{proposition} \begin{proof} This follows from Lemma \ref{degree_bound_lemma}, noting that in the associated graded algebra, the structure constants are $r_{\boldsymbol\mu,\boldsymbol\nu}^{\boldsymbol\lambda}$ where $|\boldsymbol\mu| + |\boldsymbol\nu| = |\boldsymbol\lambda|$ and hence $r_{\boldsymbol\mu,\boldsymbol\nu}^{\boldsymbol\lambda}$ is a degree zero polynomial. \end{proof} \begin{remark} There is a canonical algebra map $\sigma: \mathcal{A}^{GL_\infty} \to \mathrm{FH}_q^{GL}$ such that \[ \Theta_n \circ \sigma = \Psi_n. \] It sends the indicator function of the orbit of $(W,g,V)$ to $p(x) K_{\boldsymbol\mu}$ where $\boldsymbol\mu$ is the modified type of $g$ and $p(x) \in \mathcal{R}_q$ obeys \[ p([m]_q) = K \qbinom{m-n+c+h}{h}_q q^{(m-n+c)(2k-h-a-b)} \] as in Proposition \ref{specialisation_function_gl_proposition}. However, it is not injective as we now show. \newline \newline \noindent Consider $(U_1,g,U_2)$ where $g$ is the identity, and $U_1 = \mathbb{F}_q^\infty$, while $U_2 = \mathbb{F}_q\{e_2, e_3, \ldots\}$. If $f_1$ is the indicator function of the orbit of this triple, then \[ \Psi_n(f_1) = |Gr(n-1,n)| \mathrm{Id}, \] where $Gr(n-1,n)$ is the Grassmannian of $(n-1)$-dimensional subspaces inside $\mathbb{F}_q^n$; each such space corresponds to a choice of $U_2^\prime$ such that $(U_1,g,U_2^\prime) \in BT_n$ and $(U_1,g,U_2^\prime)$ is conjugate to $(U_1,g,U_2)$. (Incidentally, $|Gr(n-1,n)| = [n]_q$, so $p(x)=x$.) However, we could also let $f_2$ be the indicator function of the orbit of $(U_2, g, U_1)$. 
The same argument shows \[ \Psi_n(f_2) = |Gr(n-1,n)| \mathrm{Id}, \] and it follows that $\sigma(f_1) = \sigma(f_2)$. But $f_1 \neq f_2$ in $\mathcal{A}^{GL_\infty}$. So unlike the symmetric group case, the Ivanov-Kerov algebra $\mathcal{A}^{GL_\infty}$ and the general linear Farahat-Higman algebra $\mathrm{FH}_q^{GL}$ are not interchangeable. \end{remark} \section{Classical Groups} \label{classical_groups_section} \noindent We now make the appropriate modifications to the constructions and arguments from Section \ref{GL_section} to obtain analogous results for classical groups. First of all, we make particular choices of sesquilinear forms. \begin{definition} \label{standard_form_definition} Let $n$ be a positive integer, and let $x_i$ and $y_i$ denote the components of vectors $x$ and $y$. The \emph{standard Hermitian form} on $V_n = \mathbb{F}_{q^2}^{n}$ is defined by \[ B_n(x,y) = \sum_{i=1}^n x_i \sigma(y_i), \] where $\sigma(z) = z^q$. The \emph{standard alternating form} on $V_n = \mathbb{F}_q^{2n}$ is defined by \[ B_n(x,y) = \sum_{i=1}^{n} x_{2i-1} y_{2i} - x_{2i}y_{2i-1}. \] The \emph{standard positive-symmetric form} on $V_n = \mathbb{F}_q^{2n}$ is defined by \[ B_n(x,y) = \sum_{i=1}^n x_{2i-1} y_{2i} + x_{2i}y_{2i-1}. \] The \emph{standard negative-symmetric form} on $V_n = \mathbb{F}_q^{2n}$ is defined by \[ B_n(x,y) = x_1 y_1 - m x_2y_2 + \sum_{i=2}^n x_{2i-1} y_{2i} + x_{2i}y_{2i-1}, \] where $m$ is a fixed non-square in $\mathbb{F}_q$. (This is chosen to have non-zero germ.) \newline \noindent The \emph{standard odd-symmetric form} on $V_n = \mathbb{F}_q^{2n+1}$ is defined by \[ B_n(x,y) = x_1 y_1 + \sum_{i=1}^n x_{2i} y_{2i+1} + x_{2i+1}y_{2i}. \] When we wish to refer to all five cases at once, we will say that $B_n$ is a \emph{standard sesquilinear form} and let $V_n$ be the corresponding vector space as defined above. 
\end{definition} \noindent The names positive, negative, and odd are not standard, but we need some way to refer to these cases separately in order to study the groups $O_{2n}^{+}(\mathbb{F}_q)$, $O_{2n}^-(\mathbb{F}_q)$ and $O_{2n+1}(\mathbb{F}_q)$. We make these particular choices of symmetric forms for technical reasons; what is really essential is that in each case passing from $n$ to $n+1$ amounts to taking the direct sum of $V_n$ with a hyperbolic plane to get $V_{n+1}$. As a result we obtain inclusions of certain classical groups. Let $B_n$ be a standard sesquilinear form. Then we see that the inclusion of $V_n$ into $V_{n+1}$ preserves the form, i.e. if $v,w \in V_n$, then $B_{n}(v,w) = B_{n+1}(v,w)$. In each case, $V_{n+1} = V_n \oplus V_{n}^\perp$ and we get that $G_{B_n}(V_n) \subseteq G_{B_{n+1}}(V_{n+1})$ is the subgroup fixing $V_n^\perp$ pointwise. (If $g \in G_{B_{n+1}}(V_{n+1})$ satisfies $g V_n^\perp = V_n^\perp$, then also $g V_n = V_n$ because $g$ preserves the sesquilinear form and $V_n^{\perp \perp} = V_n$.) \begin{definition} We define \[ V_\infty = \varinjlim V_n, \] which depends on which kind of form we are considering (because $V_n$ does). There is a sesquilinear form $B_\infty$ defined on $V_\infty$ as follows. If $v,w \in V_n$, then \[ B_\infty(v,w) = B_m(v,w) \] for any $m \geq n$ (this is independent of $m$ by the discussion above). We also say that a subspace of $V_\infty$ is \emph{smooth} if it contains $V_n^\perp$ for some $n$. \end{definition} \noindent Implicitly we have equipped $V_\infty$ with a basis coming from the standard bases of the $V_n$. As in the general linear case, a smooth subspace has finite codimension in $V_\infty$, but not every subspace with finite codimension is smooth. \begin{lemma} We have $V_\infty = V_n \oplus V_n^\perp$ and inside $V_\infty$, $V_n^{\perp \perp} = V_n$ for any $n$. 
\end{lemma} \begin{proof} This is straightforward to check in each case; $V_n$ has a basis consisting of the first $n$ (or $2n$, or $2n+1$ depending on the case) basis vectors of $V_\infty$, while the remaining basis vectors form a basis of $V_n^\perp$. \end{proof} \begin{definition} We define $U_\infty(\mathbb{F}_q)$, $Sp_\infty(\mathbb{F}_q)$, $O_\infty^+(\mathbb{F}_q)$, $O_\infty^-(\mathbb{F}_q)$, $O_\infty(\mathbb{F}_q)$ to be $\varinjlim G_{B_n}(V_n)$ for each of the five respective cases in Definition \ref{standard_form_definition}. The maps are the inclusions arising from the inclusions $V_n \hookrightarrow V_{n+1}$. In order to treat the five cases uniformly, we use the notation $G_\infty(\mathbb{F}_q)$ to refer to any of the symmetry groups of the sesquilinear forms. \end{definition} \noindent In each case we may view these infinite groups as consisting of matrices that differ from the identity matrix in finitely many entries and preserve a suitable sesquilinear form on $V_\infty$. \begin{definition} Let us fix one of the five different choices of $G_\infty(\mathbb{F}_q)$. A \emph{sesquilinear bounding pair} is a pair $(g, V)$, where $g \in G_\infty(\mathbb{F}_q)$, and $V$ is a smooth subspace of $V_\infty$ that is fixed pointwise by $g$. \end{definition} \noindent \begin{definition} Let us say that a bounding pair $(g,V)$ is \emph{tight} if $V = \ker(g-1)$. \end{definition} \noindent As in the general linear case, in a tight bounding pair, $V$ is as large as possible. So each $g$ is contained in a unique tight bounding pair. The following proposition is directly analogous to Proposition \ref{triple_conj_prop}, so we omit the proof. \begin{proposition} There is an action of $G_\infty(\mathbb{F}_q)$ on the set of bounding pairs, defined by \[ x \cdot (g, V) = (xgx^{-1}, xV). \] \end{proposition} \begin{definition} Let $BP_n$ be the set of bounding pairs $(g, V)$ such that $V \supseteq V_n^\perp$. 
\end{definition} \noindent Now we will see why the fact that $g$ preserves a sesquilinear form allows us to work with pairs rather than triples. \begin{lemma} If $(g, V) \in BP_n$, then we may view $g$ as an element of $G_{B_n}(V_n)$, i.e. $g$ fixes $V_n^\perp$ pointwise and $g V_n \subseteq V_n$. \end{lemma} \begin{proof} By the definition of $BP_n$, it follows that $g$ fixes $V_n^\perp \subseteq V$ pointwise. Now, suppose that $v \in V_n$. We have $B(v,w) = 0$ for all $w$ in $V_n^\perp$. So also $B(gv, gw) = B(gv, w) = 0$ because $g$ preserves the sesquilinear form and $gw = w$. We conclude that $gv \in V_n^{\perp\perp} = V_n$. So $g$ preserves the decomposition $V_\infty = V_n \oplus V_n^\perp$ and acts by the identity on the second summand. \end{proof} \begin{definition} We say $(g,V) \in BP_n$ has \emph{standard shape} with respect to a decomposition $V_n = W_1 \oplus W_2 \oplus W_3 \oplus W_4$ if the following conditions hold: \begin{itemize} \item $W_3 = V \cap V^\perp$ is the radical of $B_n$ when restricted to $V$, \item $V \cap V_n = W_3 \oplus W_4$, where $B_n$ restricts to a non-degenerate form on $W_4$, \item $V^\perp = W_2 \oplus W_3$, where $B_n$ is non-degenerate when restricted to $W_2$, \item $W_1$ is orthogonal to $W_1 \oplus W_2 \oplus W_4$ and $B_n$ induces a non-degenerate pairing between $W_1$ and $W_3$. \end{itemize} \end{definition} \begin{lemma} For any bounding pair $(g, V) \in BP_n$, we may find a decomposition $V_n = W_1 \oplus W_2 \oplus W_3 \oplus W_4$ putting $(g, V)$ in standard shape. \end{lemma} \begin{proof} Note that $W_3 = V \cap V^\perp$ is automatically determined; since $V_n^\perp \subseteq V$, we have $V^\perp \subseteq V_n$ which means $W_3 \subseteq V_n$. We may then pick $W_4$ to be a complement of $W_3$ in $V \cap V_n$ (similarly to Proposition \ref{sesquilinear_structure_prop}, this forces $B_n$ to be non-degenerate on $W_4$). Also we may take $W_2$ to be a complement to $W_3$ in $V^\perp$. 
Since the orthogonal space to $V^\perp$ in $V_n$ is $V^{\perp \perp} \cap V_n = V \cap V_n$, the radical of $B_n$ when restricted to $V^\perp$ is $V^\perp \cap V \cap V_n = W_3 \cap V_n = W_3$. This implies that $B_n$ is non-degenerate when restricted to $W_2$. Now we concern ourselves with $W_1$. First of all, $B_n$ is non-degenerate on $W_2 \oplus W_4$ since it is non-degenerate on each space, and the two spaces are orthogonal. This means that $(W_2 \oplus W_4)^\perp \oplus (W_2 \oplus W_4) = V_n$. \newline \newline \noindent Let $d_1 = \dim(V \cap V_n)$ and $d_2 = \dim(V \cap V^\perp) = \dim(W_3)$. Then $\dim(W_4) = d_1 - d_2$; also $\dim(V^\perp) = \dim(V_n)-d_1$, and hence $\dim(W_2) = \dim(V_n)-d_1-d_2$. We conclude \[ \dim((W_2 \oplus W_4)^\perp \cap V_n) = \dim(V_n) - \dim(W_2) - \dim(W_4) = 2 d_2 = 2 \dim(W_3). \] Note that $W_3$ is contained inside $(W_2 \oplus W_4)^\perp$, so we may apply Lemma \ref{lagrangian_subspace_lemma} to find a complement $W_1$ to $W_3$ in $(W_2 \oplus W_4)^\perp$ such that $B_n$ restricts to zero on $W_1$. \end{proof} \begin{lemma} \label{random_sesquilinear_realisation_lemma} Suppose some $(g, V) \in BP_n$ has standard shape with respect to the decomposition $V_n = W_1 \oplus W_2 \oplus W_3 \oplus W_4$. Then for any sesquilinear form $B^\prime$ on $W_1$, there exists a linear map $J: W_1 \to W_3$ such that \[ B^\prime (u,v) = B_n(u, Jv) + B_n(Ju, v), \] for any $u,v \in W_1$. \end{lemma} \begin{proof} Fix a basis $u_i$ of $W_1$ and take $W_3$ to have the dual basis $u_i^\prime$ under $B_n$, so that $B_n(u_i, u_j^\prime) = \delta_{ij}$ and $B_n(u_j^\prime, u_i) = \varepsilon \delta_{ij}$, where $\varepsilon = 1$ if $B_n$ is Hermitian or symmetric, and $\varepsilon = -1$ if $B_n$ is alternating. An element of $W_1$ may be viewed as a column vector whose $i$-th entry is the coefficient of $u_i$.
Let us assume that $J = D J^\prime$, where $D: W_1 \to W_3$ maps $u_i$ to $u_i^\prime$ and $J^\prime : W_1 \to W_1$ is a matrix to be determined. Then for $u,v \in W_1$ the sesquilinear form \[ B_n(u, Jv) + B_n(Ju, v) = u^T \sigma(J^\prime) \sigma(v) + \varepsilon u^T (J^\prime)^T \sigma(v) = u^T(\sigma(J^\prime) + \varepsilon (J^\prime)^T)\sigma(v) \] has matrix $\sigma(J^\prime) + \varepsilon (J^\prime)^T$. Finally, we show that the matrix of an arbitrary sesquilinear form $B^\prime$ on $W_1$ may be expressed in this way. \newline \newline \noindent The matrix $M$ of a Hermitian form obeys $\sigma(M) = M^T$. We take $J^\prime$ to be the upper-triangular matrix whose entries above the diagonal agree with those of $M^T$. The diagonal entries $M_{ii}$ obey $M_{ii} = \sigma(M_{ii})$, and hence by Lemma \ref{trace_value_lemma} we may choose the diagonal entry $J_{ii}^\prime$ to be some $x \in \mathbb{F}_{q^2}$ such that $x + \sigma(x) = M_{ii}$. \newline \newline \noindent The matrix $M$ of an alternating form obeys $M^T = -M$, and the diagonal entries are zero (in characteristic 2 this follows from $B_n(v,v) = 0$). So we may simply take $J^\prime$ to be the upper-triangular matrix whose diagonal entries are zero, and above-diagonal entries agree with $M$. \newline \newline \noindent The matrix $M$ of a symmetric form obeys $M^T = M$. When working with symmetric bilinear forms, we have assumed that the characteristic is odd. So in this case we may take $J^\prime = \frac{1}{2}M$. \end{proof} \begin{remark} \label{simplification_remark} Suppose that $V_n = W_1 \oplus W_2 \oplus W_3 \oplus W_4$ puts some element $g$ in standard shape, and that $u, v \in V_n$ are two vectors with $u = u_1 + u_2 + u_3 + u_4$ and $v = v_1 + v_2 + v_3 + v_4$, where $u_i, v_i \in W_i$ for $i=1,2,3,4$. Then we have \[ B_n(u,v) = B_n(u_1, v_3) + B_n(u_2, v_2) + B_n(u_3, v_1) + B_n(u_4, v_4) \] by taking into account which $W_i$ and $W_j$ are orthogonal.
\end{remark} \begin{lemma} \label{sesquilinear_shape_lemma} Suppose that $(g, V) \in BP_n$ is in standard shape with respect to $W_1 \oplus W_2 \oplus W_3 \oplus W_4$. We may write $g$ as a block matrix with respect to this decomposition, in which case it takes the form \[ g = \begin{pmatrix} \Id & 0 & 0 & 0\\ g_{21} & g_{22} & 0 & 0\\ g_{31} & g_{32} & \Id & 0 \\ 0 & 0 & 0 & \Id \end{pmatrix}. \] \end{lemma} \begin{proof} Since $g$ acts by the identity on $V \cap V_n = W_3 \oplus W_4$, the last two columns must take the stated form. Additionally, since $W_2$ is orthogonal to $V$, $gW_2$ is orthogonal to $gV = V$. It follows that $gW_2 \subset V^\perp = W_2 \oplus W_3$, which explains the structure of the second column. Suppose that $u \in W_1$ and $v \in W_3$. Then \[ B_n(u,v) = B_n(gu, gv) = B_n(gu, v). \] Thus $B_n(gu-u, v) = 0$ for all $v \in W_3$ and hence $gu-u \in W_3^\perp = W_2 \oplus W_3 \oplus W_4$. In particular the component of $gu$ in $W_1$ equals the component of $u$ in $W_1$, so the top-left entry of the matrix is the identity. Finally, the bottom-left entry must be zero because $g$ fixes $W_4$, so $g$ must preserve $W_4^\perp = W_1 \oplus W_2 \oplus W_3$. \end{proof} \begin{lemma} \label{sesquilinear_extension_lemma} Suppose that $(g, V) \in BP_n$ is in standard shape with respect to $V_n = W_1 \oplus W_2 \oplus W_3 \oplus W_4$. Any $x \in G_{B_n}(V_n)$ such that $xV = V$ may be written in the block matrix form \[ x = \begin{pmatrix} x_{11} & 0 & 0 & 0\\ x_{21} & x_{22} & 0 & 0\\ x_{31} & x_{32} & x_{33} & x_{34} \\ x_{41} & 0 & 0 & x_{44} \end{pmatrix}. \] Now suppose that $x$ is any matrix of the above form, not necessarily in $G_{B_n}(V_n)$. 
For any values of the entries $x_{11} \in \hom(W_1, W_1)$ and $x_{41} \in \hom(W_1, W_4)$, there exists $J \in \hom(W_1, W_3)$ such that $x$ is an element of $G_{B_n}(V_n)$ if and only if the following three conditions hold: \begin{itemize} \item Firstly, the following matrix is an element of $G_{B_n}(V_n)$: \[ x_{new} = \begin{pmatrix} x_{11} & 0 & 0 & 0\\ x_{21} & x_{22} & 0 & 0\\ x_{31} + J & x_{32} & x_{33} & 0 \\ 0 & 0 & 0 & \Id \end{pmatrix}. \] \item Secondly, $x_{44} \in G_{B_n}(W_4)$. \item Thirdly, $x_{34} \in \hom(W_4, W_3)$ has the value uniquely determined by the equation \[ B_n(x_{11}u_1, x_{34}v_4) + B_n(x_{41}u_1, x_{44}v_4) = 0 \] for $u_1 \in W_1$ and $v_4 \in W_4$. \end{itemize} \end{lemma} \begin{proof} The condition $xV = V$ becomes $x(W_3 \oplus W_4) = W_3 \oplus W_4$. Since $x$ is an isometry, we also have $xV^\perp = V^\perp$, which becomes $x(W_2 \oplus W_3) = W_2 \oplus W_3$. Similarly to the proof of Proposition \ref{gl_inf_gl_n_conjugacy_prop}, this implies that $x$ has the stated form. \newline \newline \noindent Let us compare the actions of $x$ and $x_{new}$ on the spaces $W_i$ to determine when they preserve $B_n$. Both matrices act equally on $W_2 \oplus W_3$, so one preserves $B_n$ on this space if and only if the other does. We have that $x_{new}$ trivially preserves the form on $W_4$, where it acts as the identity, while if $u_4, v_4 \in W_4$, then by Remark \ref{simplification_remark}, \[ B_n(x u_4, x v_4) = B_n(x_{44} u_4, x_{44} v_4). \] This shows that $x$ preserves $B_n$ on $W_4$ if and only if $x_{44} \in G_{B_n}(W_4)$. Note that $(W_2 \oplus W_3)^\perp = W_3 \oplus W_4$, so the shapes of $x$ and $x_{new}$ force them both to preserve orthogonality between $W_2 \oplus W_3$ and $W_4$. \newline \newline \noindent Finally, let us consider $W_1$. Suppose that $u_i \in W_i$ ($i=1,2,3,4$).
Applying Remark \ref{simplification_remark} to $B_n(xu, xv)$ and $B_n(x_{new}u, x_{new}v)$ for suitable $u$ and $v$: \begin{eqnarray*} B_n(x u_1, x u_2) &=& B_n(x_{11} u_1, x_{32} u_2) + B_n(x_{21} u_1, x_{22}u_2) = B_n(x_{new} u_1, x_{new} u_2), \\ B_n(x u_1, x u_3) &=& B_n(x_{11} u_1, x_{33} u_3) = B_n(x_{new} u_1, x_{new} u_3), \end{eqnarray*} showing that one of $x$ and $x_{new}$ preserves the pairing between $W_1$ and $W_2 \oplus W_3$ if and only if the other does. The isometry condition \[ B_n(x_{11} u_1, x_{33} u_3) = B_n(u_1, u_3) \] also shows that $x_{11}$ must be invertible. Now, $x$ preserves the orthogonality of $W_1$ and $W_4$ if and only if \[ 0 = B_n(x u_1, x u_4) = B_n(x_{11} u_1, x_{34}u_4) + B_n(x_{41}u_1, x_{44}u_4). \] We observe that the change of variables $u_1 \to x_{11}^{-1}u_1$ gives \[ B_n(u_1, x_{34} u_4) = - B_n(x_{41}x_{11}^{-1}u_1, x_{44} u_4), \] which in turn uniquely determines $x_{34}$ since $B_n$ gives a non-degenerate pairing between $W_1$ and $W_3$. On the other hand, $x_{new}$ automatically preserves the orthogonality of $W_1$ and $W_4$ since $x_{new}W_1 \subseteq W_1 \oplus W_2 \oplus W_3 = W_4^\perp$ and $x_{new} W_4 = W_4$. It remains to determine the conditions under which $x$ and $x_{new}$ preserve $B_n$ on $W_1$. Suppose $u_1, v_1 \in W_1$. Then, \begin{eqnarray*} B_n(x u_1, x v_1) &=& B_n(x_{11} u_1, x_{31} v_1) + B_n(x_{21} u_1, x_{21} v_1) + B_n(x_{31} u_1, x_{11} v_1) + B_n(x_{41}u_1, x_{41} v_1), \\ B_n(x_{new}u_1, x_{new}v_1) &=& B_n(x_{11} u_1, (x_{31}+J) v_1) + B_n(x_{21} u_1, x_{21} v_1) + B_n((x_{31}+J) u_1, x_{11} v_1). \end{eqnarray*} These two expressions will be equal if we can find $J$ such that \[ B_n(x_{11}u_1, Jv_1) + B_n(Ju_1, x_{11}v_1) = B_n(x_{41}u_1, x_{41}v_1), \] which in turn will guarantee that $B_n(x u_1, x v_1) = B_n(u_1, v_1)$ if and only if $B_n(x_{new}u_1, x_{new}v_1) = B_n(u_1,v_1)$.
\newline \newline \noindent Since $x_{41}$ could be arbitrary, all we know about $B_n(x_{41}u_1, x_{41}v_1)$ is that it is some possibly degenerate sesquilinear form on $W_1$. Performing the change of coordinates $u_1 \to x_{11}^{-1}u_1$, $v_1 \to x_{11}^{-1}v_1$, $J \to Jx_{11}$, we must find $J \in \hom(W_1, W_3)$ such that \[ B_n(u_1, Jv_1) + B_n(Ju_1, v_1) = B_n(x_{41}x_{11}^{-1}u_1, x_{41}x_{11}^{-1}v_1). \] This can be achieved by Lemma \ref{random_sesquilinear_realisation_lemma} (and this $J$ may be viewed as a function of $x_{41}$ and $x_{11}$). \end{proof} \begin{theorem} \label{sesquilinear_conjugacy_theorem} Two bounding pairs in $BP_n$, $(g, V)$ and $(g^\prime, V^\prime)$ are conjugate by $G_\infty(\mathbb{F}_q)$ if and only if they are conjugate by $G_{B_n}(V_n)$. \end{theorem} \begin{proof} If we have $x \in G_\infty(\mathbb{F}_q)$ such that $g^\prime = xgx^{-1}$ and $V^\prime = xV$, then $x$ must be a member of some finite $G_{B_m}(V_m)$ (we may assume $m \geq n$). In particular, it follows that $V \cap V_m$ is isometric to $V^\prime \cap V_m$ (both spaces being equipped with the restriction of $B_m$). As in Proposition \ref{sesquilinear_structure_prop}, these spaces decompose as $\ker(B_m|_{V}) \oplus U^{\oplus r} \oplus W$, where $U$ is a hyperbolic plane, and $W$ is the germ. \newline \newline \noindent First we show that there exists $y \in G_{B_n}(V_n)$ such that $V^\prime = yV$. By Proposition \ref{witt_lemma_proposition}, this is equivalent to $V \cap V_n$ and $V^\prime \cap V_n$ being isometric. But passing from $V \cap V_n$ and $V^\prime \cap V_n$ to $V \cap V_m$ and $V^\prime \cap V_m$ respectively amounts to adding $m-n$ hyperbolic planes. This implies that each of $V \cap V_n$ and $V^\prime \cap V_n$ is isometric to $\ker(B_m|_{V}) \oplus U^{\oplus (r-(m-n))} \oplus W$, and therefore they are isometric to each other.
The upshot of this is that when considering whether $(g, V)$ and $(g^\prime, V^\prime)$ are conjugate by an element of $G_{B_n}(V_n)$, we may assume $V = V^\prime$, and therefore we may put both in standard shape with the same decomposition $V_m = W_1 \oplus W_2 \oplus W_3 \oplus W_4$. By construction, $V_n^\perp \cap V_m \subseteq W_4$ and $W_1 \oplus W_2 \oplus W_3$ is contained in $V_n$. \newline \newline \noindent By Lemma \ref{sesquilinear_extension_lemma}, $x$ must have the following block form with respect to the decomposition $V_m = W_1 \oplus W_2 \oplus W_3 \oplus W_4$: \[ x = \begin{pmatrix} x_{11} & 0 & 0 & 0\\ x_{21} & x_{22} & 0 & 0\\ x_{31} & x_{32} & x_{33} & x_{34} \\ x_{41} & 0 & 0 & x_{44} \end{pmatrix}. \] The condition $xgx^{-1} = g^\prime$ may be rewritten as $x(g-\Id) = (g^\prime-\Id) x$ (subject to $x \in G_{B_m}(V_m)$). By Lemma \ref{sesquilinear_shape_lemma}, the equation $x(g-\Id) = (g^\prime-\Id) x$ expressed in terms of block matrices is the same as was computed in the proof of Proposition \ref{gl_inf_gl_n_conjugacy_prop}. In particular, the matrix blocks $x_{31}, x_{34}, x_{41}, x_{44}$ do not appear in this equation. This means we may replace $x_{34}$ and $x_{41}$ with zero, $x_{44}$ with the identity matrix, and $x_{31}$ with $x_{31}+J$ for some matrix $J$ (determined by $x_{11}$ and $x_{41}$ as in Lemma \ref{sesquilinear_extension_lemma}), and the resulting matrix \[ x_{new} = \begin{pmatrix} x_{11} & 0 & 0 & 0\\ x_{21} & x_{22} & 0 & 0\\ x_{31} + J & x_{32} & x_{33} & 0 \\ 0 & 0 & 0 & \Id \end{pmatrix} \] still satisfies $x_{new}gx_{new}^{-1} = g^\prime$. Moreover, because $W_4$ contains $V_n^\perp \cap V_m$, and the complement $W_1 \oplus W_2 \oplus W_3$ is contained in $V_n$, $x_{new}$ may be viewed as an element of $G_{B_n}(V_n)$ which obeys $x_{new} \cdot (g, V) = (g^\prime, V^\prime)$.
\end{proof} \begin{proposition} There is an associative multiplication on bounding pairs of the same type (i.e. for a fixed choice of $G_\infty(\mathbb{F}_q)$), defined by \[ (g, V) \times (g^\prime, V^\prime) = (g g^\prime, V \cap V^\prime). \] This multiplication is $G_{\infty}(\mathbb{F}_q)$-equivariant, meaning that if $x \in G_\infty(\mathbb{F}_q)$, then \[ (x \cdot(g, V)) \times (x \cdot (g^\prime, V^\prime)) = x \cdot ((g, V) \times (g^\prime, V^\prime)). \] Finally, for any bounding pair $(h, U)$ there are only finitely many pairs $(g, V), (g^\prime, V^\prime)$ such that \[ (g, V) \times (g^\prime, V^\prime) = (h, U). \] \end{proposition} \begin{proof} The arguments are the same as in Subsection \ref{bounding_triples_subsection}. \end{proof} \begin{definition} Let $\mathcal{A}^\prime$ be the set of functions from the set of all bounding pairs to $\mathbb{Z}$. It is an abelian group with pointwise addition. We equip $\mathcal{A}^\prime$ with the following convolution product. If $f_1, f_2 \in \mathcal{A}^\prime$, then \[ (f_1 * f_2) (g^{\prime \prime}, V^{\prime \prime}) = \sum_{(g, V) \times (g^\prime, V^\prime) = (g^{\prime \prime}, V^{\prime \prime})} f_1(g, V) f_2(g^\prime, V^\prime). \] \end{definition} \noindent As in the case of general linear groups, this is well defined because the sum is finite. There is an action of $G_\infty(\mathbb{F}_q)$ on $\mathcal{A}^\prime$ via \[ (x \cdot f)(g, V) = f(x^{-1} \cdot (g, V)). \] Because the multiplication is equivariant for the action of $G_\infty(\mathbb{F}_q)$, the product of two $G_\infty(\mathbb{F}_q)$-invariant functions is again invariant. \begin{definition} Let $\mathcal{A}^{G_\infty}$ be the subspace of $\mathcal{A}^\prime$ consisting of elements that are invariant for the action of $G_\infty(\mathbb{F}_q)$ and are supported on finitely many $G_\infty(\mathbb{F}_q)$-orbits of bounding pairs. \end{definition} \begin{proposition} We have that $\mathcal{A}^{G_\infty}$ is a subalgebra of $\mathcal{A}^\prime$.
\end{proposition} \begin{proof} We modify the argument from Proposition \ref{Ivanov_Kerov_subalg_proposition}. Suppose that $f_1$ and $f_2$ are the indicator functions of two $G_\infty(\mathbb{F}_q)$-orbits containing $(g, V)$ and $(g^\prime, V^\prime)$ respectively. Then $(g, V) \times (g^\prime, V^\prime) = (gg^\prime, V \cap V^\prime)$ is an element of $BP_m$ for some $m$. We write $V_m = W_1 \oplus W_2 \oplus W_3 \oplus W_4$ putting $(gg^\prime, V \cap V^\prime)$ in standard shape. Note that $\codim(V \cap V^\prime) = \dim(W_1) + \dim(W_2)$. Since $\dim(W_1) = \dim(W_3)$, \[ \dim(V_m) - \dim(W_4) = \dim(W_1) + \dim(W_2) + \dim(W_3) \leq 2 \codim(V \cap V^\prime). \] Hence $\dim(W_4) \geq \dim(V_m) - 2 \codim(V \cap V^\prime)$. \newline \newline \noindent The form $B_m$ is non-degenerate when restricted to $W_4$, so $W_4 = W \oplus U^{\oplus r}$ where $W$ is the germ of $W_4$ and $U$ is a hyperbolic plane. By the classification of anisotropic spaces (Theorem \ref{anisotropic_space_classification_theorem}), $\dim(W) \leq 2$. By counting dimensions and substituting the previous inequality, \[ 2r \geq \dim(W_4)-2 \geq \dim(V_m) - 2 - 2\codim(V \cap V^\prime). \] This means that $V \cap V^\prime$ contains $V_m^\perp$ in addition to at least $\frac{\dim(V_m) - 2 - 2\codim(V \cap V^\prime)}{2}$ hyperbolic planes. By Proposition \ref{witt_lemma_proposition}, we may apply an element of $G_\infty(\mathbb{F}_q)$ taking the sum of $V_m^\perp$ and these hyperbolic planes to $V_{d}^\perp$, where $d$ is as follows. We have \[ \dim(V_d) = \dim(V_m) - 2r \leq \dim(V_m) - (\dim(V_m) - 2 - 2\codim(V \cap V^\prime)) = 2 + 2\codim(V \cap V^\prime). \] Since $\dim(V_d)$ is either $d$, $2d$, or $2d+1$ (depending on the nature of the sesquilinear form we are working with), in any case we have \[ d \leq 2 + 2 \codim(V) + 2 \codim(V^\prime).
\] We conclude that the $G_\infty(\mathbb{F}_q)$-orbit of $(gg^\prime, V \cap V^\prime)$ intersects $BP_{2 + 2 \codim(V) + 2 \codim(V^\prime)}$, and there are finitely many $G_\infty(\mathbb{F}_q)$-orbits in any $BP_d$, so $f_1 * f_2$ is supported on finitely many orbits. \end{proof} \begin{proposition}\label{surj_hom_classical_groups} For any $n \in \mathbb{Z}_{\geq 0}$ there is a surjective homomorphism $\Psi_n: \mathcal{A}^\prime \to \mathbb{Z}G_{B_n}(V_n)$ defined by \[ \Psi_n(f) = \sum_{(g, V) \in BP_n} f(g,V)g. \] Moreover, the image of $\mathcal{A}^{G_\infty}$ is precisely the centre $Z(\mathbb{Z}G_{B_n}(V_n))$. \end{proposition} \begin{proof} The arguments are the same as in Section \ref{specitalisation_section}. \end{proof} \begin{proposition} \label{specialisation_partial_computation_proposition} Let $f$ be the indicator function of the $G_\infty(\mathbb{F}_q)$-orbit of $(g,V)$, where $(g, V) \in BP_n$. Let $m \geq n$ and let $Cl(g)$ be the sum of all elements in $G_{B_m}(V_m)$ that are conjugate to $g$. Then \[ \Psi_m(f) = K \frac{|C_{G_{B_m}(V_m)}(g)|}{|C_{G_{B_n}(W_1 \oplus W_2 \oplus W_3)}(g)| |\hom(W_1,W_4)| |G_{B_m}(W_4)|} Cl(g), \] where $V_m = W_1 \oplus W_2 \oplus W_3 \oplus W_4$ puts $g$ in standard shape, and $K$ is a positive integer independent of $m$. \end{proposition} \begin{proof} We mimic the proof of Proposition \ref{specialisation_function_gl_proposition}. Any $(g^\prime, V^\prime) \in BP_m$ that is in the orbit of $(g, V)$ is conjugate to $(g, V)$ by an element of $G_{B_m}(V_m)$ (by Theorem \ref{sesquilinear_conjugacy_theorem}), so $g^\prime$ is in the same $G_{B_m}(V_m)$-conjugacy class as $g$. Since $\Psi_m(f)$ is in the centre of the group algebra, we have $\Psi_m(f) = P \cdot Cl(g)$, and it remains to determine the scalar $P$ as a function of $m$. The multiplicity $P$ is equal to the number of bounding pairs $(g, V^\prime) \in BP_m$ that are conjugate to $(g, V)$. Explicitly, for some $x \in G_{B_m}(V_m)$, \begin{eqnarray*} V^\prime &=& xV \\ g &=& xgx^{-1}.
\end{eqnarray*} So $P$ is the size of the orbit of $V$ under the action of the centraliser $C_{G_{B_m}(V_m)}(g)$. By the orbit-stabiliser relation, \[ P = \frac{|C_{G_{B_m}(V_m)}(g)|}{|C_{G_{B_m}(V_m)}(g) \cap \mathrm{Stab}_m(V)|}, \] where $\mathrm{Stab}_m(V)$ is the subgroup of $G_{B_m}(V_m)$ stabilising the subspace $V$ of $V_\infty$. We choose a decomposition $V_m = W_1 \oplus W_2 \oplus W_3 \oplus W_4$ putting $(g,V)$ in standard shape so that the elements of $\mathrm{Stab}_m(V)$ necessarily have the block matrix form \[ x = \begin{pmatrix} x_{11} & 0 & 0 & 0\\ x_{21} & x_{22} & 0 & 0\\ x_{31} & x_{32} & x_{33} & x_{34} \\ x_{41} & 0 & 0 & x_{44} \end{pmatrix}, \] where the block sizes are $a$, $\codim(V)-a$, $a$, $\dim(V_m)-\codim(V)-a$ (where $a = \dim(V\cap V^\perp) = \dim(W_3)$). \newline \newline \noindent Now we reverse the argument in the proof of Theorem \ref{sesquilinear_conjugacy_theorem} to go from an element \[ x_{small} = \begin{pmatrix} x_{11} & 0 & 0 & 0\\ x_{21} & x_{22} & 0 & 0\\ x_{31} & x_{32} & x_{33} & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \in G_{B_n}(W_1 \oplus W_2 \oplus W_3) \subseteq G_{B_m}(V_m) \] that fixes $V$ and commutes with $g$ to an element \[ x = \begin{pmatrix} x_{11} & 0 & 0 & 0\\ x_{21} & x_{22} & 0 & 0\\ x_{31} - J & x_{32} & x_{33} & x_{34} \\ x_{41} & 0 & 0 & x_{44} \end{pmatrix} \in G_{B_m}(V_m) \] that also fixes $V$ and commutes with $g$. Using Lemma \ref{sesquilinear_extension_lemma}, we see that each $x_{small}$ gives rise to $|\hom(W_1, W_4)||G_{B_m}(W_4)|$ matrices $x$ via this construction (corresponding to choices of the entries $x_{41}$ and $x_{44}$). Moreover, since $J$ may be taken to be a function of $x_{11}$ and $x_{41}$, $x_{small}$ is determined by $x$. \newline \newline \noindent Let $N$ be the number of matrices $x_{small}$ that satisfy $x_{small}V = V$ and commute with $g$. Then \[ |C_{G_{B_m}(V_m)}(g) \cap \mathrm{Stab}_m(V)| = N |\hom(W_1, W_4)| |G_{B_m}(W_4)|.
\] So we finally get \[ P = \frac{|C_{G_{B_n}(W_1 \oplus W_2 \oplus W_3)}(g)|}{N} \frac{|C_{G_{B_m}(V_m)}(g)|}{|C_{G_{B_n}(W_1 \oplus W_2 \oplus W_3)}(g)| |\hom(W_1,W_4)| |G_{B_m}(W_4)|}. \] Here we let $K = \frac{|C_{G_{B_n}(W_1 \oplus W_2 \oplus W_3)}(g)|}{N}$, which is an integer because the set of matrices $x_{small}$ forms a subgroup of $C_{G_{B_n}(W_1 \oplus W_2 \oplus W_3)}(g)$. \end{proof} \noindent Now we are in a position to compute $\Psi_m(f)$, with separate cases for the unitary, symplectic, and orthogonal groups. In the notation of Proposition \ref{specialisation_partial_computation_proposition}, we let $a = \dim(W_1) = \dim(W_3)$, so that $\dim(W_2) = \codim(V) - \dim(W_1)$ and $\dim(W_4) = \dim(V_m) - \dim(W_1) - \dim(W_2) - \dim(W_3) = \dim(V_m) - \codim(V) - a$. Suppose that $g \in G_{B_n}(W_1 \oplus W_2 \oplus W_3)$ has type $\boldsymbol\mu$. \begin{proposition} \label{unitary_specialisation_proposition} In the setting of Proposition \ref{specialisation_partial_computation_proposition}, suppose we are working with Hermitian forms and unitary groups. Let $r = \dim(W_1 \oplus W_2 \oplus W_3) = \codim(V) + a$, and let $\boldsymbol\mu$ be the (unmodified) type of $g$ viewed as an element of $U_r(\mathbb{F}_q)$. Then \[ \Psi_m(f) = K q^{2(m - r)(k-\frac{h}{2}-a)} \qbinom{m-r+h}{h}_{-q} Cl(g), \] where $k = l(\boldsymbol\mu(t-1))$ and $h = m_1(\boldsymbol\mu(t-1))$. \end{proposition} \begin{proof} When viewed as an element of $G_{B_m}(V_m)$, the type of $g$ is $\boldsymbol\mu \cup (1^{m-r})_{t-1}$. By Corollary \ref{unitary_centraliser_ratio_corollary}, we have \begin{eqnarray*} \Psi_m(f) &=& K \frac{|C_{G_{B_m}(V_m)}(g)|}{|C_{G_{B_n}(W_1 \oplus W_2 \oplus W_3)}(g)| |\hom(W_1,W_4)| |G_{B_m}(W_4)|} Cl(g) \\ &=& K q^{2(m-r)(k-h)} \frac{|U_{m-r+h}(\mathbb{F}_q)|}{|U_h(\mathbb{F}_q)|}\frac{Cl(g)}{|\hom(W_1,W_4)| |U_{m-r}(\mathbb{F}_q)|}. \end{eqnarray*} Now we use the fact that the ground field is $\mathbb{F}_{q^2}$, so $|\hom(W_1, W_4)| = q^{2\dim(W_1)\dim(W_4)} = q^{2a(m-r)}$.
This gives us \begin{eqnarray*} \Psi_m(f) &=& K q^{2(m - r)(k-h-a)} \frac{ q^{m-r+h \choose 2} \prod_{i=1}^{m-r+h} (q^i - (-1)^i) }{ q^{m-r \choose 2} \prod_{i=1}^{m-r} (q^i - (-1)^i) \cdot q^{h \choose 2} \prod_{i=1}^{h} (q^i - (-1)^i) } Cl(g) \\ &=& K q^{2(m - r)(k-\frac{h}{2}-a)} \frac{\prod_{i=m-r+1}^{m-r+h} ((-q)^i - 1)}{\prod_{i=1}^{h} ((-q)^i - 1)} Cl(g) \\ &=& K q^{2(m - r)(k-\frac{h}{2}-a)} \qbinom{m-r+h}{h}_{-q} Cl(g). \end{eqnarray*} \end{proof} \begin{remark} \label{exponent_nonnegativity_remark} Similarly to Lemma \ref{lower_case_specialisation_correctness_lemma}, the proposition remains correct for any $m \geq 0$ (not just $m \geq n$). Proposition \ref{exponent_nonnegativity_proposition} applies to $g$ (where $b = \dim(W_3) = \dim(W_1) = a$), showing that $2k - 2a - h \geq 0$, and hence $k - \frac{h}{2} - a \geq 0$. As a result, the quantity appearing in Proposition \ref{unitary_specialisation_proposition} may be viewed as a polynomial in $(-q)^m$. Although the minus sign may seem peculiar, it may be viewed as a facet of \emph{Ennola duality}, which asserts that a wide range of quantities associated to $U_n(\mathbb{F}_q)$ can be obtained from those for $GL_n(\mathbb{F}_q)$ under the substitution $q \to -q$. See \cite{FRST} for more details on Gaussian binomial coefficients at $-q$, and \cite{ThiemVinroot} for a detailed discussion of Ennola duality. \end{remark} \begin{proposition} \label{symplectic_specialisation_proposition} In the setting of Proposition \ref{specialisation_partial_computation_proposition}, suppose we are working with alternating forms and symplectic groups and $q$ is odd. Let $r = \dim(W_1 \oplus W_2 \oplus W_3) = \codim(V) + a$, and let $\boldsymbol\mu$ be the (unmodified) type of $g$ viewed as an element of $Sp_r(\mathbb{F}_q)$. Then \[ \Psi_m(f) = K q^{(2m-r)(k-\frac{h}{2}-a)} \qbinom{m - \frac{r}{2} + \frac{h}{2}}{\frac{h}{2}}_{q^2} Cl(g), \] where $k = l(\boldsymbol\mu(t-1))$ and $h = m_1(\boldsymbol\mu(t-1))$.
\end{proposition} \begin{proof} When viewed as an element of $G_{B_m}(V_m)$, the type of $g$ is $\boldsymbol\mu \cup (1^{2m-r})_{t-1}$. By Corollary \ref{symplectic_centraliser_ratio_corollary}, \begin{eqnarray*} \Psi_m(f) &=& K \frac{|C_{G_{B_m}(V_m)}(g)|}{|C_{G_{B_n}(W_1 \oplus W_2 \oplus W_3)}(g)| |\hom(W_1,W_4)| |G_{B_m}(W_4)|} Cl(g) \\ &=& K q^{(2m-r)(k-h)} \frac{|Sp_{2m-r+h}(\mathbb{F}_q)|}{|Sp_h(\mathbb{F}_q)|}\frac{Cl(g)}{|\hom(W_1,W_4)| |Sp_{2m-r}(\mathbb{F}_q)|}. \end{eqnarray*} Now we use the fact that the ground field is $\mathbb{F}_q$, so $|\hom(W_1, W_4)| = q^{a(2m-r)}$. This gives us \begin{eqnarray*} \Psi_m(f) &=& K q^{(2m-r)(k-h-a)} \frac{q^{(\frac{2m-r+h}{2})^2}\prod_{i=1}^{\frac{2m-r+h}{2}}(q^{2i}-1)}{q^{(\frac{2m-r}{2})^2}\prod_{i=1}^{\frac{2m-r}{2}}(q^{2i}-1) \cdot q^{(\frac{h}{2})^2}\prod_{i=1}^{\frac{h}{2}}(q^{2i}-1)} Cl(g) \\ &=& K q^{(2m-r)(k-\frac{h}{2}-a)} \frac{\prod_{i=\frac{2m-r}{2}+1}^{\frac{2m-r+h}{2}}(q^{2i}-1)}{\prod_{i=1}^{\frac{h}{2}}(q^{2i}-1)} Cl(g) \\ &=& K q^{(2m-r)(k-\frac{h}{2}-a)} \qbinom{m - \frac{r}{2} + \frac{h}{2}}{\frac{h}{2}}_{q^2} Cl(g). \end{eqnarray*} \end{proof} \begin{remark} Similarly to Lemma \ref{lower_case_specialisation_correctness_lemma}, the proposition remains correct for any $m \geq 0$ (not just $m \geq n$). As in Remark \ref{exponent_nonnegativity_remark}, we have $k - \frac{h}{2} - a \geq 0$. So the quantity appearing in Proposition \ref{symplectic_specialisation_proposition} may be viewed as a polynomial in $q^{2m}$. Since $r$ is necessarily even (as it is the dimension of a space with a non-degenerate symplectic form), we may take the coefficients to be in $\mathbb{Z}[q^2, q^{-2}]$. \end{remark} \begin{remark} \label{symplectic_char_2_remark_2} By Remark \ref{symplectic_char_2_remark}, Corollary \ref{symplectic_centraliser_ratio_corollary} still applies in characteristic 2. So the calculation in Proposition \ref{symplectic_specialisation_proposition} also goes through in characteristic 2.
\end{remark} \begin{proposition} \label{orthogonal_specialisation_proposition} In the setting of Proposition \ref{specialisation_partial_computation_proposition}, suppose we are working with symmetric forms and orthogonal groups and $q$ is odd. Let $r = \dim(W_1 \oplus W_2 \oplus W_3) = \codim(V) + a$. Then for a certain $P(t) \in \mathcal{R}_q[\tfrac{1}{2}]$, we have $\Psi_m(f) = P(q^m) Cl(g)$. \end{proposition} \begin{proof} Let $M = \dim(V_m)$, which is either $2m$ or $2m+1$ according to whether the ambient group is $O_{2m}^{\pm}(\mathbb{F}_q)$ or $O_{2m+1}(\mathbb{F}_q)$. Also let $\boldsymbol\mu$ be the (unmodified) type of $g$ viewed as an element of $O_r^{\pm}(\mathbb{F}_q)$, $k = l(\boldsymbol\mu(t-1))$, and $h = m_1(\boldsymbol\mu(t-1))$ as before. We have \[ \Psi_m(f) = K q^{(M-r)(k-h-a)} \frac{|O_{h+M-r}^{\epsilon_1 \oplus \epsilon_2}|}{|O_h^{\epsilon_1}||O_{M-r}^{\epsilon_2}|} Cl(g), \] and there are in principle 16 cases according to the four possible values of each of the two germs $\epsilon_1, \epsilon_2$ (which determine a third germ $\epsilon_3 = \epsilon_1 \oplus \epsilon_2$). For simplicity we group these cases according to the parities of $h$ and $M-r$ and omit the intermediate calculations (which are similar to the unitary and symplectic cases). Instead of writing $+$ or $-$ for the germs $\epsilon_i$, we write $+1$ or $-1$, so that the formula for $|O_{2n}^\epsilon|$ (given in Proposition \ref{classical_group_sizes_proposition}) contains a factor of $(q^n - \epsilon)$. \newline \newline \noindent Case 1: $h$ and $M-r$ both odd. \[ \Psi_m(f) = \frac{K}{2} q^{\frac{M-r}{2}(2k-h-2a) - \frac{1}{2}} \qbinom{\frac{M-r-1}{2} + \frac{h-1}{2}}{\frac{h-1}{2}}_{q^2} (q^{\frac{M-r+h}{2}} - \epsilon_3) Cl(g). \] Note that in this case we have a denominator of $2$. \newline \newline \noindent Case 2: $h$ odd and $M-r$ even. \[ \Psi_m(f) = \frac{K}{2} q^{\frac{M-r}{2}(2k-h-2a)} \qbinom{\frac{M-r}{2} + \frac{h-1}{2}}{\frac{h-1}{2}}_{q^2} (q^{\frac{M-r}{2}} + \epsilon_2) Cl(g).
\] There is also a denominator of $2$ in this case. \newline \newline \noindent Case 3: $h$ even and $M-r$ odd. \[ \Psi_m(f) = \frac{K}{2} q^{\frac{M-r}{2}(2k-h-2a)} \qbinom{\frac{M-r-1}{2} + \frac{h}{2}}{\frac{h}{2}}_{q^2} (q^{\frac{h}{2}} + \epsilon_1) Cl(g). \] In this case the denominator $2$ divides $q^{\frac{h}{2}}+\epsilon_1$, which is a constant independent of $m$. \newline \newline \noindent Case 4: $h$ and $M-r$ both even. In this case we can write $\epsilon_3 = \epsilon_1 \epsilon_2$ (a multiplicative way of expressing $\mathbf{0} \oplus \omega = \omega$, $\omega \oplus \omega = \mathbf{0}$, etc. in the Witt ring). \[ \Psi_m(f) = \frac{K}{2} q^{(\frac{M-r}{2})(2k-h-2a)}\qbinom{\frac{M-r}{2} + \frac{h}{2}-1}{\frac{h}{2}-1}_{q^2} \frac{(q^{\frac{M-r}{2}} + \epsilon_2)(q^{\frac{M-r+h}{2}}-\epsilon_3)}{(q^{\frac{h}{2}}-\epsilon_1)} Cl(g). \] In this case, the denominator $(q^{\frac{h}{2}}-\epsilon_1)$ is constant with respect to $m$. \newline \newline \noindent Note that $\qbinom{m}{k}_{q^2}$ is a polynomial in $q^{2m}$, and therefore also a polynomial in $q^m$ (of twice the degree). Remark \ref{exponent_nonnegativity_remark} guarantees $(2k-h-2a) \geq 0$, so in any of the above cases, the result is a polynomial in the variable $q^m$. \newline \newline \noindent In Case 4 we have a polynomial in $q^m$ with rational coefficients. For notational convenience, let $u = \frac{M-r}{2}$ and $v = \frac{h}{2}$. To confirm that we actually have an element of $\mathcal{R}_q[\tfrac{1}{2}]$, we show that for any $u$ and $v$, \[ \qbinom{u+v-1}{v-1}_{q^2} \frac{(q^{u} + \epsilon_2)(q^{u+v}-\epsilon_3)}{(q^{v}-\epsilon_1)} \in \mathbb{Z}[q,q^{-1}]. \] First we point out that \[ \frac{(q^{u} + \epsilon_2) (q^{u+v}-\epsilon_1\epsilon_2)}{(q^{v}-\epsilon_1)} = \epsilon_2 q^{u} + 1 + q^{v} \left( \frac{q^{2u}-1}{q^{v} - \epsilon_1}\right). \] So it is enough to show that the quantity $\qbinom{u+v-1}{v-1}_{q^2} \frac{q^{2u}-1}{q^{v} - \epsilon_1}$ is an element of $\mathbb{Z}[q,q^{-1}]$.
To do this, we factor the expression in terms of cyclotomic polynomials $\Phi_d(q)$ which obey $q^n - 1 = \prod_{d|n} \Phi_d(q)$ and are themselves elements of $\mathbb{Z}[q]$. The multiplicity $\mathrm{Mult}_d$ of $\Phi_d(q)$ in \[ \qbinom{u+v-1}{v-1}_{q^2} = \frac{\prod_{i=1}^{u+v-1} (q^{2i}-1)}{\prod_{j=1}^{u} (q^{2j}-1) \prod_{k=1}^{v-1} (q^{2k}-1)} \] is \[ \mathrm{Mult}_d = \left\{ \begin{array}{ll} \lfloor \frac{u+v-1}{d} \rfloor - \lfloor \frac{u}{d} \rfloor - \lfloor \frac{v-1}{d} \rfloor & \quad d \mbox{ odd} \\ \lfloor \frac{u+v-1}{d/2} \rfloor - \lfloor \frac{u}{d/2} \rfloor - \lfloor \frac{v-1}{d/2} \rfloor & \quad d \mbox{ even} \end{array} \right. \] Subcase 4.1: $\epsilon_1 = -1$. The denominator $q^v+1$ is the product of $\Phi_d(q)$ over $d$ dividing $2v$ but not $v$. Such $d$ are necessarily even, and $d/2$ divides $v$. Accordingly, $\mathrm{Mult}_d$ is $0$ if $d/2$ divides $u$, and $1$ otherwise. In the former case, $d$ divides $2u$ and hence $\Phi_d(q)$ is a factor of the numerator $q^{2u}-1$, and in the latter case, we may take the factor of $\Phi_d(q)$ from $\qbinom{u+v-1}{v-1}_{q^2}$. \newline \newline \noindent Subcase 4.2: $\epsilon_1 = 1$. The denominator $q^v-1$ is the product of $\Phi_d(q)$ over $d$ dividing $v$. If $d$ is odd and $d$ does not divide $2u$, then also $d$ does not divide $u$, and $\mathrm{Mult}_d = 1$. If $d$ is even and $d$ does not divide $2u$, $\mathrm{Mult}_d = 1$. So regardless of parity, if $d$ does not divide $2u$, we are done. If $d | 2u$, we may take the factor $\Phi_d(q)$ from the numerator $q^{2u}-1$. \end{proof} \section{Stable Centres for the Classical Groups} \noindent In this section, we establish stability properties of the centres of the group algebras of the classical groups, analogous to those obtained in Section \ref{stable_centres_for_general_linear_groups}. We will handle each case separately. Let us retain the notation from Section \ref{classical_groups_section}.
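As an aside, the divisibility established in Case 4 above is easy to check by machine for small parameters. The following self-contained Python sketch (our own illustrative code, not part of the argument; all function names are ours) performs exact integer polynomial arithmetic: it builds the Gaussian binomial $\qbinom{u+v-1}{v-1}_{q^2}$ via the standard recurrence, multiplies by $q^{2u}-1$, and verifies that division by $q^v - \epsilon_1$ leaves no remainder. It also cross-checks the rewriting identity used before the cyclotomic argument at sample integer values.

```python
# Polynomials in q are coefficient lists: p[i] is the coefficient of q^i.

def poly_add(a, b):
    n = max(len(a), len(b))
    return [(a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)
            for i in range(n)]

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def gauss_binom(n, k):
    # Gaussian binomial [n choose k]_t via [n,k] = [n-1,k-1] + t^k [n-1,k].
    if k < 0 or k > n:
        return [0]
    if k == 0 or k == n:
        return [1]
    return poly_add(gauss_binom(n - 1, k - 1),
                    [0] * k + gauss_binom(n - 1, k))

def subst_q_squared(p):
    # Substitute t -> q^2: spread coefficients onto even degrees.
    out = [0] * (2 * len(p) - 1)
    for i, c in enumerate(p):
        out[2 * i] = c
    return out

def poly_divmod(num, den):
    # Long division of integer polynomials; den has leading coefficient 1 here.
    num = num[:]
    quot = [0] * max(1, len(num) - len(den) + 1)
    for i in range(len(num) - 1, len(den) - 2, -1):
        c = num[i] // den[-1]
        quot[i - len(den) + 1] = c
        for j, y in enumerate(den):
            num[i - len(den) + 1 + j] -= c * y
    return quot, num  # num now holds the remainder

def case4_quotient(u, v, eps):
    # qbinom(u+v-1, v-1)_{q^2} * (q^{2u} - 1) divided exactly by (q^v - eps).
    gb = subst_q_squared(gauss_binom(u + v - 1, v - 1))
    numer = poly_mul(gb, [-1] + [0] * (2 * u - 1) + [1])  # times q^{2u} - 1
    den = [-eps] + [0] * (v - 1) + [1]                    # q^v - eps
    quot, rem = poly_divmod(numer, den)
    assert all(x == 0 for x in rem), "division is not exact"
    return quot

# The division is exact for every small (u, v, eps) we try:
for u in range(1, 5):
    for v in range(1, 5):
        for eps in (1, -1):
            case4_quotient(u, v, eps)

# Cross-check the rewriting identity at sample values (q = 3, u = 2, v = 3),
# as exact integer arithmetic with cross-multiplied denominators:
q, u, v = 3, 2, 3
for e1 in (1, -1):
    for e2 in (1, -1):
        lhs = (q**u + e2) * (q**(u + v) - e1 * e2)
        rhs = (e2 * q**u + 1) * (q**v - e1) + q**v * (q**(2 * u) - 1)
        assert lhs == rhs
```

Of course, the checks above only cover finitely many parameters; the cyclotomic argument in the proof is what establishes the claim in general.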
\subsection{Stable Centres for the Unitary Groups} As discussed in the Appendix, for any $g \in U_n(\mathbb{F}_q)$, its type as an element of $GL_n(\mathbb{F}_{q^2})$ determines its conjugacy class. Additionally, there is an involution $r \mapsto r^*$ on $\Phi_{q^2}$ such that the type $\boldsymbol\mu$ of $g \in U_n(\mathbb{F}_q)$ obeys $\boldsymbol\mu(r) = \boldsymbol\mu(r^*)$. Let us say that multipartitions satisfying this condition are \textit{invariant under $*$}. As with the general linear group, we use the notion of a modified type (c.f. Definition \ref{modified_type_definition} and subsequent discussion) in order to discuss conjugacy classes for all $n$ at once. In particular, if $g \in U_n(\mathbb{F}_q)$ has modified type $\boldsymbol\mu$, then so does $g$ when viewed as an element of $U_{n+1}(\mathbb{F}_q)$ under the prescribed inclusion. Recall that the type of $g$ can be recovered from the modified type if we are given $n$. \begin{definition} Let $X_{\boldsymbol\mu, n}$ denote the sum of all elements of modified type $\boldsymbol\mu$ in $U_n(\mathbb{F}_q)$, viewed as an element of $Z(\mathbb{Z}U_n(\mathbb{F}_q))$. \end{definition} \noindent Once again, this is either the sum over a conjugacy class in $U_n(\mathbb{F}_q)$, or it is zero. We now state the analogue of Theorem \ref{gl_structure_constant_theorem} for the unitary groups. \begin{theorem} \label{u_structure_constant_theorem} There is a family of elements $r_{\boldsymbol\mu, \boldsymbol\nu}^{\boldsymbol\lambda} \in \mathcal{R}_{-q}$ interpolating the structure constants of $Z(\mathbb{Z}U_n(\mathbb{F}_q))$ in the following way: \[ X_{\boldsymbol\mu,n} X_{\boldsymbol\nu,n} = \sum_{\boldsymbol\lambda} r_{\boldsymbol\mu, \boldsymbol\nu}^{\boldsymbol\lambda}([n]_{-q}) X_{\boldsymbol\lambda, n}. \] Here, $\boldsymbol\mu, \boldsymbol\nu$ are arbitrary multipartitions invariant under $*$ and the sum ranges over all multipartitions invariant under $*$.
\end{theorem} \begin{proof} The proof is almost identical to that of Theorem \ref{gl_structure_constant_theorem}, except instead of appealing to results in Section \ref{specitalisation_section} for the general linear group, we use the analogous results for the unitary group from Section \ref{classical_groups_section}. \end{proof} \noindent As before, we can construct an algebra that interpolates the centres of the group algebras of the unitary groups: \begin{definition} Let $\mathrm{FH}_q^{U}$ be the free $\mathcal{R}_{-q}$-module with basis given by symbols $K_{\boldsymbol\mu}$ for multipartitions $\boldsymbol\mu$ on $\Phi_{q^2}$ invariant under $*$. We equip $\mathrm{FH}_q^{U}$ with a bilinear multiplication defined on basis elements via \[ K_{\boldsymbol\mu} K_{\boldsymbol\nu} = \sum_{\boldsymbol\lambda} r_{\boldsymbol\mu, \boldsymbol\nu}^{\boldsymbol\lambda} K_{\boldsymbol\lambda}, \] where $r_{\boldsymbol\mu, \boldsymbol\nu}^{\boldsymbol\lambda} \in \mathcal{R}_{-q}$ are the elements from Theorem \ref{u_structure_constant_theorem}. We call $\mathrm{FH}_q^{U}$ the \emph{unitary Farahat-Higman algebra}. \end{definition} \noindent We have the following analogue of Corollary \ref{gl_special_hom_cor}: \begin{corollary}\label{u_special_hom_cor} There is a ``specialisation'' homomorphism $ \Theta_n: \mathrm{FH}_q^{U} \to Z(\mathbb{Z}U_n(\mathbb{F}_q))$ defined by $\Theta_n(K_{\boldsymbol\mu}) = X_{\boldsymbol\mu,n}$ and by evaluating the coefficients (elements of $\mathcal{R}_{-q}$) at $[n]_{-q}$, where $\boldsymbol\mu$ is any multipartition invariant under $*$. \end{corollary} \begin{proof} This is a consequence of Theorem \ref{u_structure_constant_theorem}. \end{proof} \begin{proposition}\label{ufh_is_nice_prop} We have that $\mathrm{FH}_q^{U}$ is an associative, commutative, unital $\mathcal{R}_{-q}$-algebra. \end{proposition} \begin{proof} The proof of this proposition is analogous to the proof of Proposition \ref{glfh_is_nice_prop}.
\end{proof} \begin{lemma} \label{u_degree_bound_lemma} For any multipartitions $\boldsymbol\mu, \boldsymbol\nu, \boldsymbol\lambda$ invariant under $*$, the degree of the polynomial $r_{\boldsymbol\mu, \boldsymbol\nu}^{\boldsymbol\lambda}$ is at most $2(|\boldsymbol\mu| + |\boldsymbol\nu| - |\boldsymbol\lambda|)$. \end{lemma} \begin{proof} The proof is very similar to that of Lemma \ref{degree_bound_lemma}, but there are some modifications that need to be made to account for the different standard form. Let $g, g' \in U_\infty(\mathbb{F}_q)$ be any elements of modified type $\boldsymbol\mu, \boldsymbol\nu$, respectively, and let $V, V'$ be their fixed point spaces, respectively. Let $f_{\boldsymbol\mu}$ and $f_{\boldsymbol\nu}$ be the indicator functions on the $U_\infty(\mathbb{F}_q)$ orbits of the tight bounding pairs $(g, V)$ and $(g', V')$, respectively. Then, recalling the surjective homomorphism $\Psi_m: \mathcal{A}^{U_\infty} \rightarrow Z(\mathbb{Z}U_m(\mathbb{F}_q))$ from Proposition \ref{surj_hom_classical_groups}, we have the following by the proof of Theorem \ref{u_structure_constant_theorem}: \[ \Psi_m(f_{\boldsymbol\mu}f_{\boldsymbol\nu}) = \Psi_m(f_{\boldsymbol\mu})\Psi_m(f_{\boldsymbol\nu}) = X_{\boldsymbol\mu, m}X_{\boldsymbol\nu, m} = \sum_{\boldsymbol\lambda} r^{\boldsymbol\lambda}_{\boldsymbol\mu,\boldsymbol\nu}([m]_{-q}) X_{\boldsymbol\lambda,m}.\] The argument of $\Psi_m$ in the leftmost term is an integral linear combination of various indicator functions on $U_\infty(\mathbb{F}_q)$ orbits of elements of the form $(s'', W'')$, where $(s'', W'') = (s, W)\times (s',W')$ and $(s,W), (s',W')$ are $U_\infty(\mathbb{F}_q)$-conjugate to $(g,V), (g',V')$, respectively. In particular, $(s,W)$ and $(s',W')$ will necessarily be tight and have modified type $\boldsymbol\mu$ and $\boldsymbol\nu$, respectively. Let us pick one such $(s'', W'')$, and suppose it has modified type $\boldsymbol\lambda$.
Writing it in standard form with respect to $V_n$ for some suitable $V_n$, the image of the corresponding indicator function under $\Psi_m$ is \[P((-q)^m)X_{\boldsymbol\lambda, m} = K q^{2(m - r)(k-\frac{h}{2}-a)}\qbinom{m-r+h}{h}_{-q} X_{\boldsymbol\lambda,m}\] by Proposition \ref{unitary_specialisation_proposition} (using the notation from that proposition). This means that $P$ is a polynomial in $(-q)^m$ with coefficients in $\mathbb{Z}[q, q^{-1}]$ of degree $h + 2(k - h/2 - a) = 2k - 2a$. On the other hand, by tightness, we have $\codim(W) = |\boldsymbol\mu|$ and $\codim(W') = |\boldsymbol\nu|$, so that $\codim(W'') \leq |\boldsymbol\mu| + |\boldsymbol\nu|$. But $\codim(W'') = r - a$ (see Proposition \ref{unitary_specialisation_proposition}). By the definition of $r$ and the fact that $\boldsymbol\lambda$ is the modified type of $s''$, we have $|\boldsymbol\lambda| = r - k$. Finally, putting this together, we have $2(|\boldsymbol\mu| + |\boldsymbol\nu|) \geq 2(r-a) = 2(r-k) + 2(k-a) = 2|\boldsymbol\lambda| + \deg P$, which gives the desired inequality for the polynomial $P$. Now, in order to get the contribution to $r_{\boldsymbol\mu, \boldsymbol\nu}^{\boldsymbol\lambda}$, we need to sum such polynomials $P$ over all possible choices of $(s, W)$ and $(s',W')$ such that $(s'', W'')$ is of modified type $\boldsymbol\lambda$. However, the bound only depends on $\boldsymbol\mu, \boldsymbol\nu, \boldsymbol\lambda$, so we deduce that the degree of $r_{\boldsymbol\mu, \boldsymbol\nu}^{\boldsymbol\lambda}$ also satisfies the desired inequality. \end{proof} \noindent This gives rise to the analogue of Proposition \ref{gl_assoc_graded} for the unitary groups. \begin{theorem} The algebra $\mathrm{FH}_q^{U}$ is filtered, where $K_{\boldsymbol\mu}$ is in filtration degree $|\boldsymbol\mu|$. Moreover the structure constants of the associated graded algebra are integers.
\end{theorem} \begin{proof} This follows from Lemma \ref{u_degree_bound_lemma} (see the proof of Proposition \ref{gl_assoc_graded}). \end{proof} \noindent This addresses one of the further directions recommended in \cite{Wan_Wang}, and also in \cite{Meliot_Hecke}. \subsection{Stable Centres for the Symplectic Groups} For the symplectic group $Sp_{2n}(\mathbb{F}_q)$, two elements are conjugate only if their types match. However, this is not a sufficient condition; as we note in the appendix, conjugacy classes in $Sp_{2n}(\mathbb{F}_q)$ are indexed by certain multipartitions $\boldsymbol\mu$ of size $2n$ with some additional signs making $\boldsymbol\mu(t \pm 1)$ into symplectic signed partitions. \newline \newline \noindent Let $g \in Sp_{2n}(\mathbb{F}_q)$ be an element of the conjugacy class labelled by the signed multipartition $\boldsymbol\mu$. We define the \emph{symplectic modified type} of $g$ to be the signed multipartition obtained by subtracting $1$ from each part of $\boldsymbol\mu(t-1)$. Note that this turns $\boldsymbol\mu(t-1)$ from a symplectic signed partition into an orthogonal signed partition. The symplectic modified type of $g$ is the same when $g$ is viewed as an element of $Sp_{2(n+1)}(\mathbb{F}_q)$. The conjugacy class of $g$ can be recovered from the symplectic modified type of $g$ if $n$ is known. \begin{definition} Let $X_{\boldsymbol\mu, n}$ denote the sum of all elements of symplectic modified type $\boldsymbol\mu$ in $Sp_{2n}(\mathbb{F}_q)$, viewed as an element of $Z(\mathbb{Z}Sp_{2n}(\mathbb{F}_q))$. \end{definition} \noindent This is either the sum over a conjugacy class in $Sp_{2n}(\mathbb{F}_q)$, or it is zero. We now state the analogue of Theorem \ref{gl_structure_constant_theorem} for the symplectic groups.
\begin{theorem} \label{sp_structure_constant_theorem} There is a family of elements $r_{\boldsymbol\mu, \boldsymbol\nu}^{\boldsymbol\lambda} \in \mathcal{R}_{q^2}$ interpolating the structure constants of $Z(\mathbb{Z}Sp_{2n}(\mathbb{F}_q))$ in the following way: \[ X_{\boldsymbol\mu,n} X_{\boldsymbol\nu,n} = \sum_{\boldsymbol\lambda} r_{\boldsymbol\mu, \boldsymbol\nu}^{\boldsymbol\lambda}([n]_{q^2}) X_{\boldsymbol\lambda, n}. \] Here, $\boldsymbol\mu, \boldsymbol\nu$ are arbitrary signed multipartitions corresponding to a symplectic modified type and the sum ranges over all signed multipartitions corresponding to a symplectic modified type. \end{theorem} \begin{proof} The proof is almost identical to that of Theorem \ref{gl_structure_constant_theorem}, except instead of appealing to results in Section \ref{specitalisation_section} for the general linear group, we use the analogous results for the symplectic group from Section \ref{classical_groups_section}. \end{proof} \noindent As before, we can construct an algebra that interpolates the centres of the group algebras of the symplectic groups: \begin{definition} Let $\mathrm{FH}_q^{Sp}$ be the free $\mathcal{R}_{q^2}$-module with basis given by symbols $K_{\boldsymbol\mu}$ for signed multipartitions $\boldsymbol\mu$ on $\Phi_q$ that correspond to a symplectic modified type. We equip $\mathrm{FH}_q^{Sp}$ with a bilinear multiplication defined on basis elements via \[ K_{\boldsymbol\mu} K_{\boldsymbol\nu} = \sum_{\boldsymbol\lambda} r_{\boldsymbol\mu, \boldsymbol\nu}^{\boldsymbol\lambda} K_{\boldsymbol\lambda}, \] where $r_{\boldsymbol\mu, \boldsymbol\nu}^{\boldsymbol\lambda} \in \mathcal{R}_{q^2}$ are the elements from Theorem \ref{sp_structure_constant_theorem}. We call $\mathrm{FH}_q^{Sp}$ the \emph{symplectic Farahat-Higman algebra}.
\end{definition} \noindent We have the following analogue of Corollary \ref{gl_special_hom_cor}: \begin{corollary}\label{sp_special_hom_cor} There is a ``specialisation'' homomorphism $ \Theta_n: \mathrm{FH}_q^{Sp} \to Z(\mathbb{Z}Sp_{2n}(\mathbb{F}_q))$ defined by $\Theta_n(K_{\boldsymbol\mu}) = X_{\boldsymbol\mu,n}$ and by evaluating the coefficients (elements of $\mathcal{R}_{q^2}$) at $[n]_{q^2}$, where $\boldsymbol\mu$ is any signed multipartition corresponding to a symplectic modified type. \end{corollary} \begin{proof} This is a consequence of Theorem \ref{sp_structure_constant_theorem}. \end{proof} \begin{proposition}\label{spfh_is_nice_prop} We have that $\mathrm{FH}_q^{Sp}$ is an associative, commutative, unital $\mathcal{R}_{q^2}$-algebra. \end{proposition} \begin{proof} The proof of this proposition is analogous to the proof of Proposition \ref{glfh_is_nice_prop}. \end{proof} \begin{lemma} \label{sp_degree_bound_lemma} For any signed multipartitions $\boldsymbol\mu, \boldsymbol\nu, \boldsymbol\lambda$ corresponding to a symplectic modified type, the degree of the polynomial $r_{\boldsymbol\mu, \boldsymbol\nu}^{\boldsymbol\lambda}$ is at most $|\boldsymbol\mu| + |\boldsymbol\nu| - |\boldsymbol\lambda|$. \end{lemma} \begin{proof} The proof is almost identical to that of Lemma \ref{u_degree_bound_lemma}, except we lose a factor of $2$ because the polynomials lie in $\mathcal{R}_{q^2}$ and not $\mathcal{R}_{-q}$. \end{proof} \noindent This gives rise to the analogue of Proposition \ref{gl_assoc_graded} for the symplectic groups. \begin{proposition}[Theorem 4.30, \cite{OZDEN2021263}] The algebra $\mathrm{FH}_q^{Sp}$ is filtered, where $K_{\boldsymbol\mu}$ is in filtration degree $|\boldsymbol\mu|$. Moreover the structure constants of the associated graded algebra are integers. \end{proposition} \begin{proof} This follows from Lemma \ref{sp_degree_bound_lemma} (see the proof of Proposition \ref{gl_assoc_graded}).
\end{proof} \begin{remark} By Remark \ref{symplectic_char_2_remark_2}, it is also possible to construct a version of $\mathrm{FH}_q^{Sp}$ when $q$ is a power of 2. However, the indexing set of conjugacy classes is somewhat more complicated, so we forgo the details. \end{remark} \subsection{Stable Centres for the Orthogonal Groups} Let $G_n(\mathbb{F}_q)$ be one of the three families of groups $O_{2n+1}(\mathbb{F}_q)$, $O_{2n}^+(\mathbb{F}_q)$, $O_{2n}^-(\mathbb{F}_q)$, so that passing from $n$ to $n+1$ does not change the germ of the natural representation of the group. \newline \newline \noindent Similarly to the symplectic case, conjugacy classes in orthogonal groups are indexed by certain multipartitions $\boldsymbol\mu$ with additional signs making $\boldsymbol\mu(t \pm 1)$ orthogonal signed partitions. If $\boldsymbol\mu$ is the signed multipartition describing the conjugacy class of an element $g$ of an orthogonal group, we define the \emph{orthogonal modified type} of $g$ to be the signed multipartition obtained by subtracting 1 from each part of $\boldsymbol\mu(t-1)$. This turns $\boldsymbol\mu(t-1)$ from an orthogonal signed partition into a symplectic signed partition, with one complication. If $1$ appears as a part in $\boldsymbol\mu(t-1)$, it has an associated sign, and subtracting 1 results in this sign being associated to parts of size zero (which is not part of the data associated to a symplectic signed partition). Nevertheless, the orthogonal modified type of $g$ is unchanged by the prescribed inclusions of orthogonal groups (provided that in the absence of a sign associated to parts of size $0$ in $\boldsymbol\mu(t-1)$, we take the sign to be $+$; this is because the germ of a zero dimensional space is zero). Again, the conjugacy class of $g$ can be recovered from the orthogonal modified type if $n$ is known.
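As a toy illustration of this bookkeeping, the passage to the orthogonal modified type can be coded up as follows (the encoding of a signed partition as a list of (part, sign) pairs is our own convention, not the paper's):

```python
def orthogonal_modified_type(signed_parts):
    """Subtract 1 from each part of mu(t-1), recording the sign that a
    part of size 1 donates to the resulting parts of size 0.

    `signed_parts` is a list of (part, sign) pairs with sign in {'+', '-'};
    returns (new_signed_parts, zero_sign).  The zero-sign defaults to '+'
    when no part of size 1 occurs, since the germ of a zero-dimensional
    space is zero.
    """
    new_parts = []
    zero_sign = '+'
    for part, sign in signed_parts:
        if part == 1:
            zero_sign = sign  # the sign migrates to the parts of size 0
        else:
            new_parts.append((part - 1, sign))
    return new_parts, zero_sign

# Example: mu(t-1) = (3^+, 2^-, 1^-) yields (2^+, 1^-) with zero-sign '-'.
print(orthogonal_modified_type([(3, '+'), (2, '-'), (1, '-')]))
```

Since this operation never consults $n$, the sketch makes visible why the orthogonal modified type is stable under the prescribed inclusions.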
\begin{definition} Let $X_{\boldsymbol\mu, n}$ denote the sum of all elements of orthogonal modified type $\boldsymbol\mu$ in $G_{n}(\mathbb{F}_q)$, viewed as an element of $Z(\mathbb{Z}G_{n}(\mathbb{F}_q))$. \end{definition} \noindent Once again, this is either the sum over a conjugacy class in $G_{n}(\mathbb{F}_q)$, or it is zero. We now state the analogue of Theorem \ref{gl_structure_constant_theorem} for the orthogonal groups. \begin{theorem} \label{o_structure_constant_theorem} There is a family of elements $r_{\boldsymbol\mu, \boldsymbol\nu}^{\boldsymbol\lambda} \in \mathcal{R}_{q}[\frac{1}{2}]$ interpolating the structure constants of $Z(\mathbb{Z}G_{n}(\mathbb{F}_q))$ in the following way: \[ X_{\boldsymbol\mu,n} X_{\boldsymbol\nu, n} = \sum_{\boldsymbol\lambda} r_{\boldsymbol\mu, \boldsymbol\nu}^{\boldsymbol\lambda}([n]_{q}) X_{\boldsymbol\lambda, n}. \] Here, $\boldsymbol\mu, \boldsymbol\nu$ are arbitrary multipartitions corresponding to orthogonal modified types and the sum ranges over all multipartitions corresponding to orthogonal modified types. \end{theorem} \begin{proof} The proof is almost identical to that of Theorem \ref{gl_structure_constant_theorem}, except instead of appealing to results in Section \ref{specitalisation_section} for the general linear group, we use the analogous results for the orthogonal group from Section \ref{classical_groups_section}. \end{proof} \noindent As before, we can construct an algebra that interpolates the centres of the group algebras of the orthogonal groups: \begin{definition} Let $\mathrm{FH}_q^{O,+}, \mathrm{FH}_q^{O,-}, \mathrm{FH}_q^{O,odd}$ be the free $\mathcal{R}_{q}[\frac{1}{2}]$-modules with basis given by symbols $K_{\boldsymbol\mu}$ indexed by orthogonal modified types $\boldsymbol\mu$ coming from groups of the form $O_{2n}^+(\mathbb{F}_q), O_{2n}^-(\mathbb{F}_q), O_{2n+1}(\mathbb{F}_q)$, respectively.
We equip each $\mathrm{FH}_q^{O,*}$ with a bilinear multiplication defined on basis elements via \[ K_{\boldsymbol\mu} K_{\boldsymbol\nu} = \sum_{\boldsymbol\lambda} r_{\boldsymbol\mu, \boldsymbol\nu}^{\boldsymbol\lambda} K_{\boldsymbol\lambda}, \] where $r_{\boldsymbol\mu, \boldsymbol\nu}^{\boldsymbol\lambda} \in \mathcal{R}_{q}[\frac{1}{2}]$ are the elements from Theorem \ref{o_structure_constant_theorem}. We call $\mathrm{FH}_q^{O,+},\mathrm{FH}_q^{O,-},\mathrm{FH}_q^{O,odd}$ the \emph{positive orthogonal Farahat-Higman algebra}, the \emph{negative orthogonal Farahat-Higman algebra}, and the \emph{odd orthogonal Farahat-Higman algebra}, respectively. \end{definition} \noindent We have the following analogue of Corollary \ref{gl_special_hom_cor}: \begin{corollary}\label{o_special_hom_cor} There are ``specialisation'' homomorphisms $ \Theta_n: \mathrm{FH}_q^{O,*} \to Z(\mathbb{Z}G_{n}(\mathbb{F}_q))$ (here $* = +,-,odd$ according to whether $G_{n} = O_{2n}^+, O_{2n}^-, O_{2n+1}$) defined by $\Theta_n(K_{\boldsymbol\mu}) = X_{\boldsymbol\mu,n}$ and by evaluating the coefficients (elements of $\mathcal{R}_{q}[\frac{1}{2}]$) at $[n]_{q}$. \end{corollary} \begin{proof} This is a consequence of Theorem \ref{o_structure_constant_theorem}. \end{proof} \begin{proposition}\label{ofh_is_nice_prop} We have that each $\mathrm{FH}_q^{O,*}$ is an associative, commutative, unital $\mathcal{R}_{q}[\frac{1}{2}]$-algebra. \end{proposition} \begin{proof} The proof of this proposition is analogous to the proof of Proposition \ref{glfh_is_nice_prop}. \end{proof} \begin{lemma} \label{o_degree_bound_lemma} For any multipartitions $\boldsymbol\mu, \boldsymbol\nu, \boldsymbol\lambda$ corresponding to orthogonal modified types, the degree of the polynomial $r_{\boldsymbol\mu, \boldsymbol\nu}^{\boldsymbol\lambda}$ is at most $2(|\boldsymbol\mu| + |\boldsymbol\nu| - |\boldsymbol\lambda|)$. \end{lemma} \begin{proof} The proof is analogous to that of Lemma \ref{u_degree_bound_lemma}, although there are four cases as in the proof of Proposition \ref{orthogonal_specialisation_proposition}.
\end{proof} \begin{theorem} The algebras $\mathrm{FH}_q^{O, *}$ are filtered, where $K_{\boldsymbol\mu}$ is in filtration degree $|\boldsymbol\mu|$. Moreover the structure constants of the associated graded algebra are integers. \end{theorem} \begin{proof} In the versions of this statement for the other classical groups, the proof was similar to that of Proposition \ref{gl_assoc_graded}. Applying the same logic here yields the first part of the statement, but only shows that the structure constants of the associated graded algebra are rational (because the polynomials $r_{\boldsymbol\mu,\boldsymbol\nu}^{\boldsymbol\lambda}$ lie in $\mathcal{R}_{q}[\frac{1}{2}]$). However, these structure constants are manifestly integral, because they count the multiplicity of a conjugacy class sum appearing in the expansion of the product of two other conjugacy class sums. \end{proof} \noindent This addresses one of the further directions recommended in \cite{Wan_Wang}. \printbibliography \end{document}
\section{Applications} \label{sec:Apps} In this section we illustrate the computational power of the slack ideal in answering three types of questions that one can ask about realizations of polytopes. We anticipate further applications. \subsection{Abstract polytope with no realizations} Checking if an abstract polytopal complex is the boundary of an actual polytope is the classical {\em Steinitz problem}, and an important ingredient in cataloging polytopes with few vertices. In \cite{alts85}, Altshuler and Steinberg enumerated all $4$-polytopes and $3$-spheres with $8$~vertices. The first non-polytopal $3$-sphere in \cite[Table 2]{alts85} has simplices and square pyramids as facets, and these facets have the following vertex sets $$12345, 12346, 12578, 12678, 14568, 34578, 2357, 2367, 3467, 4678.$$ If there was a polytope $P$ with these facets, its symbolic slack matrix would be $$S_P(\mathbf x)=\begin{bmatrix} 0 & 0 & 0 & 0 & 0 & x_1 & x_2 & x_3 & x_4 & x_5 \\ 0 & 0 & 0 & 0 & x_6 & x_7 & 0 & 0 & x_8 & x_9 \\ 0 & 0 & x_{10} & x_{11} & x_{12} & 0 & 0 & 0 & 0 & x_{13} \\ 0 & 0 & x_{14} & x_{15} & 0 & 0 & x_{16} & x_{17} & 0 & 0 \\ 0 & x_{18} & 0 & x_{19} & 0 & 0 & 0 & x_{20} & x_{21} & x_{22} \\ x_{23} & 0 & x_{24} & 0 & 0 & x_{25} & x_{26} & 0 & 0 & 0 \\ x_{27} & x_{28} & 0 & 0 & x_{29} & 0 & 0 & 0 & 0 & 0 \\ x_{30} & x_{31} & 0 & 0 & 0 & 0 & x_{32} & x_{33} & x_{34} & 0 \\ \end{bmatrix}.$$ One can compute that the would-be slack ideal $I_P$ in this case is trivial, meaning that there is no rank five matrix with the support of $S_P(\mathbf x)$. In particular, there is no polytope with the given facial structure. In fact, there is not even a hyperplane-point arrangement in $\mathbb{R}^4$ or $\mathbb{C}^4$ with the given incidence structure. In some other cases, one can obtain non-empty slack varieties that have no positive part. 
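For concreteness, the generators of the would-be slack ideal above are obtained from the $6$-minors of $S_P(\mathbf x)$ (a slack matrix of a $4$-polytope has rank $5$), suitably saturated. The following SymPy sketch builds the symbolic matrix and computes one such minor; the particular choice of submatrix is ours, purely for illustration:

```python
import sympy as sp

x = sp.symbols('x1:35')  # x[0] is x_1, ..., x[33] is x_34

# Symbolic slack matrix of the putative polytope: rows are vertices,
# columns are facets, with the support pattern displayed above.
S = sp.Matrix([
    [0,     0,     0,     0,     0,     x[0],  x[1],  x[2],  x[3],  x[4]],
    [0,     0,     0,     0,     x[5],  x[6],  0,     0,     x[7],  x[8]],
    [0,     0,     x[9],  x[10], x[11], 0,     0,     0,     0,     x[12]],
    [0,     0,     x[13], x[14], 0,     0,     x[15], x[16], 0,     0],
    [0,     x[17], 0,     x[18], 0,     0,     0,     x[19], x[20], x[21]],
    [x[22], 0,     x[23], 0,     0,     x[24], x[25], 0,     0,     0],
    [x[26], x[27], 0,     0,     x[28], 0,     0,     0,     0,     0],
    [x[29], x[30], 0,     0,     0,     0,     x[31], x[32], x[33], 0],
])

# A rank-5 matrix has all of its 6x6 minors equal to zero, so every
# nonzero 6-minor contributes a generator (before saturation).
minor = S[[0, 1, 2, 3, 4, 5], [0, 1, 2, 3, 4, 5]].det()
print(sp.factor(minor))  # equals x1*x6*x18*x23*(x10*x15 - x11*x14)
```

Triviality of the ideal is then the statement that these minors, after saturating out the variables, generate the unit ideal.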
A simple example of that behaviour can be seen in the {\em tetrahemihexahedron}, a polyhedralization of the real projective plane with $6$ vertices, and facets with vertex sets $235$, $346$, $145$, $126$, $2456$, $1356$, $1234$. Its slack matrix is therefore $$S_P(\mathbf x)=\begin{bmatrix} x_{1}& x_{2}& 0& 0& x_{3}& 0& 0\\ 0& x_{4}& x_{5}& 0& 0& x_{6}& 0\\ 0& 0& x_{7}& x_{8}& x_{9}& 0& 0\\ x_{10}& 0& 0& x_{11}& 0& x_{12}& 0\\ 0& x_{13}& 0& x_{14}& 0& 0& x_{15}\\ x_{16}& 0& x_{17}& 0& 0& 0& x_{18} \end{bmatrix}.$$ Computing the slack ideal from the 5-minors of $S_P(\mathbf x)$ we find that $I_P$ is generated by the binomials \begin{small} \begin{center} \begin{tabular}{ccc} $x_{8}x_{15}x_{17} + x_{7}x_{14}x_{18}$ & $x_{4}x_{15}x_{17} + x_{5}x_{13}x_{18}$& $x_{11}x_{15}x_{16} + x_{10}x_{14}x_{18}$\\ $x_{2}x_{15}x_{16} + x_{1}x_{13}x_{18}$& $x_{5}x_{12}x_{16} + x_{6}x_{10}x_{17}$& $x_{7}x_{11}x_{16} - x_{8}x_{10}x_{17}$\\ $x_{3}x_{7}x_{16} + x_{1}x_{9}x_{17}$& $x_{2}x_{5}x_{16} - x_{1}x_{4}x_{17}$& $x_{6}x_{11}x_{13} + x_{4}x_{12}x_{14}$\\ $x_{1}x_{11}x_{13} - x_{2}x_{10}x_{14}$& $x_{5}x_{8}x_{13} - x_{4}x_{7}x_{14}$& $x_{3}x_{8}x_{13} + x_{2}x_{9}x_{14}$\\ $x_{6}x_{7}x_{11} + x_{5}x_{8}x_{12}$& $x_{3}x_{8}x_{10} + x_{1}x_{9}x_{11}$& $x_{2}x_{6}x_{10} + x_{1}x_{4}x_{12}$ \end{tabular} \begin{tabular}{cc} $x_{6}x_{11}x_{15}x_{17} - x_{5}x_{12}x_{14}x_{18}$& $x_{2}x_{9}x_{15}x_{17} - x_{3}x_{7}x_{13}x_{18}$\\ $x_{4}x_{12}x_{15}x_{16} - x_{6}x_{10}x_{13}x_{18}$& $x_{3}x_{8}x_{15}x_{16} - x_{1}x_{9}x_{14}x_{18}$\\ $x_{2}x_{7}x_{14}x_{16} - x_{1}x_{8}x_{13}x_{17}$& $x_{5}x_{11}x_{13}x_{16} - x_{4}x_{10}x_{14}x_{17}$\\ $x_{2}x_{6}x_{9}x_{11} - x_{3}x_{4}x_{8}x_{12}$& $x_{2}x_{5}x_{8}x_{10} - x_{1}x_{4}x_{7}x_{11}$\\ $x_{3}x_{6}x_{7}x_{10} - x_{1}x_{5}x_{9}x_{12}$& $x_{3}x_{4}x_{7} + x_{2}x_{5}x_{9}$\\ \end{tabular} \end{center} \end{small} Since the slack ideal contains binomials whose coefficients are both positive, it has no positive zeros. 
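This obstruction is elementary: a binomial whose two coefficients are positive cannot vanish at a point with strictly positive coordinates. SymPy's assumption system confirms this for, say, the last generator listed above:

```python
import sympy as sp

# Variables of a slack matrix are strictly positive in any realization.
x2, x3, x4, x5, x7, x9 = sp.symbols('x2 x3 x4 x5 x7 x9', positive=True)

g = x3 * x4 * x7 + x2 * x5 * x9  # a generator with two positive terms

# A sum of products of positive quantities is itself positive, so g
# cannot vanish on the positive part of the slack variety.
print(g.is_positive)  # True
```

The same one-line argument applies to every generator above whose two terms carry the same sign.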
In fact, after fixing some coordinates to one, the ideal has a unique zero up to row and column scalings, in which all entries are either $1$ or $-1$. \subsection{Non-prescribable faces of polytopes} Another classical question about polytopes is whether a face can be freely prescribed in a realization of a polytope with given combinatorics. We begin by observing that there is a natural relationship between the slack matrix/ideal of a polytope and those of each of its faces. For instance, if $F$ is a facet of a $d$-polytope $P$, a symbolic slack matrix $S_F(\mathbf x)$ of $F$ is the submatrix of $S_P(\mathbf x)$ indexed by the vertices of $F$ and the facets of $P$ that intersect $F$ in its $(d-2)$-dimensional faces. Let $\mathbf x_F$ denote the vector of variables in that submatrix. All $(d+1)$-minors of $S_F(\mathbf x)$ belong to the slack ideal $I_P$. To see this, consider a $(d+2)$-submatrix of $S_P(\mathbf x)$ obtained by enlarging the given $(d+1)$-submatrix of $S_F(\mathbf x)$ by a row indexed by a vertex $\mb{p} \not \in F$ and the column indexed by $F$. The column of $F$ in this bigger submatrix has all zero entries except in position $(\mb{p},F)$. The minor of this $(d+2)$-submatrix of $S_P(\mathbf x)$, after saturating out the variable in position $(\mb{p},F)$, is the $(d+1)$-minor of $S_F(\mathbf x)$ that we started with. Therefore, $$I_F \subseteq I_P \cap \mathbb{C}[\mathbf x_F].$$ By induction on the dimension, this containment is true for all faces $F$ of $P$. A face $F$ of a polytope $P$ is \textit{prescribable} if, given any realization of $F$, we can complete it to a realization of $P$. In our language, a face $F$ is prescribable in $P$ if and only if $$\mathcal{V}_+(I_F)=\mathcal{V}_+(I_P \cap \mathbb{C}[\mathbf x_F]).$$ Consider the four-dimensional prism over a square pyramid, for which it was shown in \cite{Barn87} that its only cube facet $F$ is non-prescribable.
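The determinantal step in the containment argument above — bordering a $(d+1)$-submatrix by the row of $\mb{p}$ and the column of $F$, which is zero except in position $(\mb{p},F)$, multiplies the minor by that single entry — can be checked symbolically. A SymPy sketch with a generic $3\times 3$ block (the size is arbitrary):

```python
import sympy as sp

t = sp.Symbol('t')                       # the entry in position (p, F)
M = sp.Matrix(3, 3, sp.symbols('m0:9'))  # generic (d+1) x (d+1) submatrix
row = sp.Matrix([sp.symbols('r0:3')])    # slacks of the extra vertex p

# Border M by the row of p and the column of F (zero except at (p, F)).
B = sp.Matrix(sp.BlockMatrix([[M, sp.zeros(3, 1)],
                              [row, sp.Matrix([[t]])]]))

# Laplace expansion along the last column gives det(B) = t * det(M),
# so saturating out t leaves exactly the (d+1)-minor det(M).
assert sp.expand(B.det() - t * M.det()) == 0
print("det(B) = t * det(M)")
```
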
This polytope $P$ has $10$ vertices and $7$ facets and its symbolic slack matrix is $$S_P(\mathbf x)=\begin{bmatrix} \bf{\textcolor{red}{x_1}}& \bf{\textcolor{red}{0 }} & 0 & \bf{\textcolor{red}{0 }} & \bf{\textcolor{red}{x_{2}}} & \bf{\textcolor{red}{x_{3}}} & \bf{\textcolor{red}{ 0}}\\ \bf{\textcolor{red}{x_{4}}}& \bf{\textcolor{red}{0 }} & 0 & \bf{\textcolor{red}{0 }} & \bf{\textcolor{red}{ 0}} & \bf{\textcolor{red}{x_{5}}} &\bf{\textcolor{red}{x_{6}}}\\ \bf{\textcolor{red}{x_{7}}}& \bf{\textcolor{red}{0 }} & 0 & \bf{\textcolor{red}{x_{8}}}& \bf{\textcolor{red}{ 0}} & \bf{\textcolor{red}{ 0}} &\bf{\textcolor{red}{x_{9}}}\\ \bf{\textcolor{red}{x_{10}}}& \bf{\textcolor{red}{0 }} & 0 & \bf{\textcolor{red}{x_{11}}}& \bf{\textcolor{red}{x_{12}}} & \bf{\textcolor{red}{ 0}} & \bf{\textcolor{red}{ 0}}\\ x_{13} & 0 & x_{14}& 0 & 0 & 0 & 0\\ \bf{\textcolor{red}{0}} & \bf{\textcolor{red}{x_{15}}} & 0 & \bf{\textcolor{red}{0}} & \bf{\textcolor{red}{x_{16}}} & \bf{\textcolor{red}{x_{17}}} & \bf{\textcolor{red}{ 0}}\\ \bf{\textcolor{red}{0}} & \bf{\textcolor{red}{x_{18}}} & 0 & \bf{\textcolor{red}{0}} & \bf{\textcolor{red}{ 0}} & \bf{\textcolor{red}{x_{19}}} &\bf{\textcolor{red}{x_{20}}}\\ \bf{\textcolor{red}{0}} & \bf{\textcolor{red}{x_{21}}} & 0 & \bf{\textcolor{red}{x_{22}}}& \bf{\textcolor{red}{ 0}} & \bf{\textcolor{red}{ 0}} &\bf{\textcolor{red}{x_{23}}}\\ \bf{\textcolor{red}{0}} & \bf{\textcolor{red}{x_{24}} } & 0 & \bf{\textcolor{red}{x_{25}}}&\bf{\textcolor{red}{ x_{26}}} & \bf{\textcolor{red}{ 0}} & \bf{\textcolor{red}{ 0}}\\ 0 & x_{27} &x_{28} & 0 & 0 & 0 & 0 \end{bmatrix}.$$ In \textcolor{red}{bold} we mark $S_F(\mathbf x)$ sitting inside $S_P(\mathbf x)$. Computing $I_P$ and intersecting with $\mathbb{C}[\mathbf x_F]$, we obtain an ideal of dimension $15$. 
On the other hand, the slack ideal of a cube has dimension $16$, suggesting an extra degree of freedom for the realizations of a cube, and the possibility that the cubical facet $F$ cannot be arbitrarily prescribed in a realization of $P$. However, we need more of an argument to conclude this, since $I_F \neq I_P \cap \mathbb{C}[\mathbf x_F]$ does not immediately mean that $\mathcal{V}_+(I_F) \neq \mathcal{V}_+(I_P \cap \mathbb{C}[\mathbf x_F])$. We need to compute further to get Barnette's result. We first note that one can scale the rows and columns of $S_F(\mathbf x)$ to set $13$ of its $24$ variables to one, say $x_1,x_2,x_3,x_4,x_6,x_7,x_8,x_{10},x_{15},x_{16}, x_{18},x_{21}, x_{24}.$ Guided by the resulting slack ideal we further set $x_{20}=1, x_{11}=\frac{1}{2}, x_{17} = 2$ and $x_{25}=1$. Now solving for the remaining variables from the equations of the slack ideal, we get the following true slack matrix of a cube: $$\begin{bmatrix} 1&0&0&1&1&0\\ 1&0&0&0&2&1\\ 1&0&1&0&0&1\\ 1&0&1/2&1&0&0\\ 0&1&0&1&2&0\\ 0&1&0&0&3&1\\ 0&1&3/2&0&0&1\\ 0&1&1&1&0&0 \end{bmatrix}.$$ However, making the above-mentioned substitutions for $$x_1,x_2,x_3,x_4,x_6,x_7,x_8,x_{10},x_{11},x_{15},x_{16}, x_{17},x_{18}, x_{20},x_{21}, x_{24},x_{25}$$ in $S_P(\mathbf x)$ and eliminating $x_{13},x_{14},x_{27}$ and $x_{28}$ from the slack ideal results in the trivial ideal, showing that this realization of the cube does not extend to a realization of $P$; the cube on its own thus admits realizations that are not possible as a face of $P$. \subsection{Non-rational polytopes} A combinatorial polytope is said to be \textit{rational} if it has a realization in which all vertices have rational entries. This has a very simple interpretation in terms of slack varieties. \begin{lemma} \label{lem:rational} A polytope $P$ is rational if and only if $\mathcal{V}_+(I_P)$ has a rational point. \end{lemma} The proof is trivial since any rational realization gives rise to a rational slack matrix and any rational slack matrix is itself a rational realization of the polytope $P$.
Recall that any point in $\mathcal{V}_+(I_P)$ can be row scaled to be a true slack matrix by dividing each row by the sum of its entries, so a rational point in $\mathcal{V}_+(I_P)$ will provide a true rational slack matrix of $P$. \begin{figure} \begin{tikzpicture}[scale=0.5,line cap=round,line join=round,line width=.7pt] \clip(-4.1,-0.4) rectangle (6.2,5.5); {\tikzstyle{every node}=[circle,draw=red,fill=red,inner sep=0pt,minimum width=2.5pt] \draw (0.,0.)-- (2.,0.); \draw (2.,0.)-- (2.618033988749895,1.9021130325903064); \draw (2.618033988749895,1.9021130325903064)-- (1.,3.077683537175253); \draw (1.,3.077683537175253)-- (-0.6180339887498947,1.9021130325903073); \draw (-0.6180339887498947,1.9021130325903073)-- (0.,0.); \draw (-1.6180339887498945,4.979796569765561)-- (2.,0.); \draw (3.6180339887498953,4.9797965697655595)-- (0.,0.); \draw (5.236067977499787,0.)-- (-0.6180339887498947,1.9021130325903073); \draw (-3.2360679774997907,0.)-- (2.618033988749895,1.9021130325903064); \draw (-1.6180339887498945,4.979796569765561)-- (-0.6180339887498947,1.9021130325903073); \draw (-1.6180339887498945,4.979796569765561)-- (1.,3.077683537175253); \draw (1.,3.077683537175253)-- (3.6180339887498953,4.9797965697655595); \draw (3.6180339887498953,4.9797965697655595)-- (2.618033988749895,1.9021130325903064); \draw (2.,0.)-- (5.236067977499787,0.); \draw (2.618033988749895,1.9021130325903064)-- (5.236067977499787,0.); \draw (-3.2360679774997907,0.)-- (0.,0.); \draw (-3.2360679774997907,0.)-- (-0.6180339887498947,1.9021130325903073); \node at (0.,0.) {}; \node at (2.,0.) {}; \node at (2.618033988749895,1.9021130325903064) {}; \node at (-0.6180339887498947,1.9021130325903073) {}; \node at (-1.6180339887498945,4.979796569765561) {}; \node at (3.6180339887498953,4.9797965697655595) {}; \node at (5.236067977499787,0.) {}; \node at (-3.2360679774997907,0.) 
{}; \node at (1,1.38) {};} \end{tikzpicture} \caption{Non-rational line-point configuration} \label{fig:pointconfig} \end{figure} Unfortunately, the usual examples of non-rational polytopes tend to be too large for direct computations, so we illustrate our point on the non-rational point-line arrangement in the plane shown in Figure~\ref{fig:pointconfig} from \cite[Figure~5.5.1]{Grunbaum}. A true non-rational polytope can be obtained from this point-line arrangement by Lawrence lifting. We will show the non-rationality of this configuration by computing its slack ideal as if it were a $2$-polytope. Its symbolic slack matrix is the $9\times 9$ matrix $$S(\mathbf x)=\begin{bmatrix} x_{1}& {0}& x_{2}& {0}& x_{3}& x_{4}& x_{5}& x_{6}& {0} \\ x_{7}& x_{8}& x_{9}& {0}&x_{10}& {0}& {0}&x_{11}&x_{12}\\ x_{13}&x_{14}& {0}&x_{15}&x_{16}&x_{17}&x_{18}& {0}& {0} \\ x_{19}&x_{20}& {0}&x_{21}& {0}& {0}&x_{22}&x_{23}&x_{24}\\ x_{25}& {0}&x_{26}&x_{27}& {0}&x_{28}& {0}& {0}&x_{29}\\ {0}& {0}&x_{30}&x_{31}&x_{32}& {0}&x_{33}&x_{34}&x_{35}\\ {0}&x_{36}& {0}&x_{37}&x_{38}&x_{39}& {0}&x_{40}&x_{41}\\ {0}&x_{42}&x_{43}& {0}&x_{44}&x_{45}&x_{46}& {0}&x_{47}\\ {0}&x_{48}&x_{49}&x_{50}& {0}&x_{51}&x_{52}&x_{53}& {0} \end{bmatrix}.$$ One can simplify the computations by scaling rows and columns to fix $x_i=1$ for $i=1,2,8,14,20,26,30,36,42,44,47,48,50,51,52,53$, as this does not affect rationality. Then one sees that the polynomial $x_{46}^2 + x_{46} - 1$ is in the slack ideal, so $x_{46}=\frac{-1\pm\sqrt{5}}{2}$, and there are no rational realizations of this configuration. \begin{remark} We note that as illustrated by the above example, the slack matrix and slack ideal constructions are not limited to the setting of polytopes, but in fact, are applicable to the more general setting of any point/hyperplane configuration. 
\end{remark} \section{Background: Slack Matrices and Ideals of Polytopes} \label{sec:bg} In this section we first present several known results about slack matrices of polytopes needed in this paper. Many of these results come from \cite{slackmatrixpaper}. We then recall the slack ideal of a polytope from \cite{GPRT17} which will be our main computational engine. While much of this section is background, we also present new objects and results that play an important role in later sections. Suppose we are given a polytope $P \subset \mathbb{R}^d$ with $v$ {labelled} vertices and $f$ {labelled} facet inequalities. Assume that $P$ is a $d$-polytope, meaning that $\dim(P)=d$. Recall that $P$ has two usual representations: a $\mathcal{V}$-representation $P = \textup{conv}\{\mb{p}_1,\ldots, \mb{p}_v\}$ as the convex hull of vertices, and an $\mathcal{H}$-representation $P = \{\mathbf x\in\mathbb{R}^d : W\mathbf x \leq \mb{w}\}$ as the common intersection of the half spaces defined by the facet inequalities $W_j \mathbf x \leq {w}_j$, $j=1,\ldots, f$, where $W_j$ denotes the $j$th row of $W \in \mathbb{R}^{f \times d}$. Let $V \in \mathbb{R}^{v \times d}$ be the matrix with rows ${\mb{p}_1}^\top,\ldots, {\mb{p}_v}^\top$, and let $\mathbbm{1}$ denote a vector (of appropriate size) with all entries equal to $1$. Then the combined data of the two representations yields a {\em slack matrix} of $P$, defined as \begin{equation} \label{EQ:slackdef} S_P := \left[\begin{array}{cc} \mathbbm{1} & V\\ \end{array}\right] \left[\begin{array}{c} \mb{w}^\top \\ -W^\top \end{array}\right] \in\mathbb{R}^{v\times f}. \end{equation} The name comes from the fact that the $(i,j)$-entry of $S_P$ is ${w}_j - W_j\mb{p}_i$ which is the {\em slack} of the $i$th vertex $\mb{p}_i$ of $P$ with respect to the $j$th facet inequality $W_j \mathbf x \leq w_j$ of $P$. 
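To make \eqref{EQ:slackdef} concrete, the following Python sketch (an illustration of ours, not part of the text; exact rational arithmetic via the standard \texttt{fractions} module, and the helper \texttt{rank} is our own) assembles the slack matrix of the unit square entrywise as $w_j - W_j\mb{p}_i$ and checks that its rank is $d+1=3$:

```python
from fractions import Fraction as F

# V-representation (vertices) and H-representation (W x <= w) of the
# unit square: facets -y <= 0, x <= 1, y <= 1, -x <= 0.
V = [[F(0), F(0)], [F(1), F(0)], [F(1), F(1)], [F(0), F(1)]]
W = [[F(0), F(-1)], [F(1), F(0)], [F(0), F(1)], [F(-1), F(0)]]
w = [F(0), F(1), F(1), F(0)]

# Slack matrix: the (i, j)-entry is the slack w_j - W_j . p_i.
S = [[w[j] - sum(W[j][k] * V[i][k] for k in range(2))
      for j in range(len(W))] for i in range(len(V))]

def rank(M):
    """Exact rank over Q via Gaussian elimination on Fractions."""
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

print([[int(x) for x in row] for row in S])
# [[0, 1, 1, 0], [0, 0, 1, 1], [1, 0, 0, 1], [1, 1, 0, 0]]
print(rank(S))  # 3 = d + 1
```

The printed matrix is exactly $S_{P_1}$ of Example~\ref{EG:quadslack} below, and its rank is $d+1$, as holds for any slack matrix of a $2$-polytope.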
Since $P$ is a $d$-polytope, $\textup{rank}(\left[\begin{array}{cc} \mathbbm{1} & V\\ \end{array}\right]) = d+1$, and hence, $\textup{rank}\,(S_P) = d+1$. Also, $\mathbbm{1}$ is in the column span of $S_P$. While the $\mathcal{V}$-representation of $P$ is unique, the $\mathcal{H}$-representation is not, as each facet inequality $W_j\mathbf x \leq {w}_j$ is equivalent to the scaled inequality $\lambda W_j\mathbf x \leq \lambda {w}_j \text{ for }\lambda >0$, and hence $P$ has infinitely many slack matrices obtained by positive scalings of the columns of $S_P$. Let $D_t$ denote a diagonal matrix of size $t \times t$ with all positive diagonal entries. Then all slack matrices of $P$ are of the form $S_P D_f$ for some $D_f$. A polytope $Q$ is {\em affinely equivalent} to $P$ if there exists an invertible affine transformation $\psi$ such that $Q = \psi(P)$. If $Q$ is affinely equivalent to $P$, then $S_P$ is a slack matrix of $Q$ and thus $P$ and $Q$ have the same slack matrices (see Example~\ref{EG:quadslack}). In fact, a slack matrix of $P$ offers a representation of the affine equivalence class of $P$ by the following result. \begin{lemma}[{\cite[Theorem~14]{slackmatrixpaper}}] If $S$ is any slack matrix of $P$, then the polytope $Q = \textup{conv}(\textup{rows}(S)),$ is affinely equivalent to $P$. \label{LEM:rowrealiz} \end{lemma} By the above discussion, we may translate $P$ so that $0\in\text{int}(P)$ without changing its slack matrices. Subsequently, we may scale facet inequalities to set $\mb{w} = \mathbbm{1}$. Then the affine equivalence class of $P$ can be associated to the slack matrix \begin{equation} \label{EQ:slackrepdef} S^1_P = [\mathbbm{1}\; V]\left[\begin{array}{c}\mathbbm{1}\\ -W^\top\end{array}\right] \end{equation} which has the special feature that the all-ones vector of the appropriate size is present in both its row space and column space. Again this matrix is not unique as it depends on the position of $0 \in \textup{int}(P)$. 
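Lemma~\ref{LEM:rowrealiz} can be checked on a small example: affine equivalence preserves affine dependencies, so the rows of a slack matrix of the unit square, viewed as points of $\mathbb{R}^4$, must satisfy the same affine dependency $\mb{p}_1-\mb{p}_2+\mb{p}_3-\mb{p}_4=0$ as the vertices themselves. A minimal sketch of ours in Python:

```python
# Slack matrix of the unit square (cf. Example EG:quadslack) and its vertices.
S = [[0, 1, 1, 0], [0, 0, 1, 1], [1, 0, 0, 1], [1, 1, 0, 0]]
P = [[0, 0], [1, 0], [1, 1], [0, 1]]

# Affine dependency p1 - p2 + p3 - p4 = 0 (coefficients sum to zero).
# By Lemma LEM:rowrealiz, conv(rows(S)) is affinely equivalent to the
# square, so the rows of S must satisfy the same dependency in R^4.
coeffs = [1, -1, 1, -1]
dep_P = [sum(c * p[k] for c, p in zip(coeffs, P)) for k in range(2)]
dep_S = [sum(c * r[k] for c, r in zip(coeffs, S)) for k in range(4)]
print(dep_P, dep_S)  # [0, 0] [0, 0, 0, 0]
```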
Recall that the polar of $P$ is $P^\circ = \{ \mathbf y \in (\mathbb{R}^d)^\ast \,:\, \langle \mathbf x, \mathbf y \rangle \leq 1 \,\,\,\forall \,\, \mathbf x \in P \}$. Under the assumption that $0\in\text{int}(P)$ and that $\mb{w} = \mathbbm{1}$, $P^\circ$ is again a polytope with $0$ in its interior and representations \cite[Theorem 2.11]{Ziegler}: $$P^\circ = \textup{conv}\{ W_1^\top, \ldots, W_f^\top\} = \{ \mathbf y \in (\mathbb{R}^d)^\ast \,:\, V \mathbf y \leq \mathbbm{1} \}.$$ This implies that $(S_P^1)^\top$ is a slack matrix of $P^\circ$ and all slack matrices of $P^\circ$ are of the form $(D_v S_P^1)^\top$. We now pass from the fixed polytope $P$ to its combinatorial class. Note that the zero-pattern in a slack matrix of $P$, or equivalently, the support of $S_P$, encodes the vertex-facet incidence structure of $P$, and hence the entire combinatorics (face lattice) of $P$ \cite{Joswig}. A labelled polytope $Q$ is {\em combinatorially equivalent} to $P$ if $P$ and $Q$ have {the same face lattice under the identification of vertex $\mb{p}_i$ in $P$ with vertex $\mb{q}_i$ in $Q$ and the identification of facet inequality $f_j$ in $P$ with facet inequality $g_j$ in $Q$. } The {\em combinatorial class} of $P$ is the set of all labelled polytopes that are combinatorially equivalent to $P$. A {\em realization} of $P$ is a polytope $Q$, embedded in some $\mathbb{R}^k$, that is combinatorially equivalent to $P$. By our labelling assumptions, all realizations of $P$ have slack matrices with the same support as $S_P$. Further, since each realization $Q$ of $P$ is again a $d$-polytope, all its slack matrices have rank $d+1$ and contain $\mathbbm{1}$ in their column span. Interestingly, the converse is also true and is a consequence of \cite[Theorem 22]{slackmatrixpaper}. 
\begin{theorem} A nonnegative matrix $S$ is a slack matrix of some realization of the {labelled} $d$-polytope $P$ if and only if all of the following hold: \begin{enumerate} \item $\textup{supp}(S) = \textup{supp}(S_P)$ \label{EQ:support} \item $\textup{rank}\,(S) = \textup{rank}\,(S_P) = d+1$ \label{EQ:rank} \item $\mathbbm{1}$ lies in the column span of $S$. \label{EQ:colspan} \end{enumerate} \label{THM:slackconditions}\end{theorem} This theorem will play a central role in this paper. It allows us to identify the combinatorial class of $P$ with the set of nonnegative matrices having the three listed properties. A polytope $Q$ is {\em projectively equivalent} to $P$ if there exists a projective transformation $\phi$ such that $Q = \phi(P)$. Recall that a projective transformation is a map $$ \phi: \mathbb{R}^d \to \mathbb{R}^d, \,\,\, \mathbf x \mapsto \frac{B\mathbf x+\mathbf{b}}{\mathbf{c}^\top \mathbf x + \gamma}$$ for some $B\in \mathbb{R}^{d \times d}$, $\mathbf{b,c}\in \mathbb{R}^d$, $\gamma\in \mathbb{R}$ such that \begin{equation} \det\left[\begin{array}{cc} B & \mathbf{b} \\ \mathbf{c}^\top & \gamma \end{array}\right] \neq 0. \end{equation} The polytopes $P$ and $Q = \phi(P)$ are combinatorially equivalent. Projective equivalence within a combinatorial class can be characterized in terms of slack matrices. \begin{lemma}[{\cite[Corollary 1.5]{GPRT17}}] Two polytopes $P$ and $Q$ are projectively equivalent if and only if $D_vS_PD_f$ is a slack matrix of $Q$ for some positive diagonal matrices $D_v,D_f$. \label{lem:PEscaling} \end{lemma} Notice that Lemma~\ref{lem:PEscaling} does not say that {\em every} positive scaling of rows and columns of $S_P$ is a slack matrix of a polytope projectively equivalent to $P$, but rather that there is {\em some} scaling of rows and columns of $S_P$ that produces a slack matrix of $Q$. 
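Lemma~\ref{lem:PEscaling} is easy to trace through computationally. Anticipating Example~\ref{EG:quadslack}: the unit square and the quadrilateral with vertices $(0,0),(1,0),(2,1),(0,1)$ are projectively equivalent, so some positive row and column scaling takes one slack matrix to the other. A sketch of ours, with the particular diagonal matrices read off from that example:

```python
from fractions import Fraction as F

# Slack matrices of the unit square (S_P1) and of the quadrilateral with
# vertices (0,0), (1,0), (2,1), (0,1) (both appear in Example EG:quadslack).
S_P1 = [[0, 1, 1, 0], [0, 0, 1, 1], [1, 0, 0, 1], [1, 1, 0, 0]]
S    = [[0, 1, 1, 0], [0, 0, 1, 1], [1, 0, 0, 2], [1, 2, 0, 0]]

# Lemma lem:PEscaling: some scaling D_v S D_f must recover S_P1.
Dv = [F(1), F(1), F(1, 2), F(1, 2)]   # halve the last two rows
Df = [F(2), F(1), F(1), F(1)]         # double the first column
scaled = [[Dv[i] * S[i][j] * Df[j] for j in range(4)] for i in range(4)]
print(scaled == S_P1)  # True
```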
In particular, condition~\eqref{EQ:colspan} of Theorem~\ref{THM:slackconditions} requires $\mathbbm{1}$ to be in the column span of the scaled matrix. Not all row scalings will preserve $\mathbbm{1}$ in the column span. Regardless, we will be interested in all row and column scalings of slack matrices. \begin{definition} A {\em generalized slack matrix} of $P$ is any matrix of the form $D_v S_Q D_f$, where $Q$ is a polytope that is combinatorially equivalent to $P$ and $D_v, D_f$ are diagonal matrices with positive entries on the diagonal. Let $\mathfrak{S}_P$ denote the set of all generalized slack matrices of $P$. \end{definition} \begin{theorem} \label{THM:generalized slack matrices satisfy (1) and (2)} The set $\mathfrak{S}_P$ of generalized slack matrices of $P$ consists precisely of the nonnegative matrices that satisfy conditions~\eqref{EQ:support} and~\eqref{EQ:rank} of Theorem~\ref{THM:slackconditions}. \end{theorem} \begin{proof} By construction, every matrix in $\mathfrak{S}_P$ satisfies conditions~\eqref{EQ:support} and~\eqref{EQ:rank} of Theorem~\ref{THM:slackconditions}. To see the converse, we need to argue that if $S$ is a nonnegative matrix that satisfies conditions~\eqref{EQ:support} and~\eqref{EQ:rank} of Theorem~\ref{THM:slackconditions}, then there exists some $D_v,D_f$ such that $S = D_v S_Q D_f$ for some polytope $Q$ that is combinatorially equivalent to $P$, or equivalently, that there is some row scaling of $S$ that turns it into a slack matrix of a polytope combinatorially equivalent to $P$. By Theorem~\ref{THM:slackconditions}, this is equivalent to showing that $\mathbbm{1}$ lies in the column span of $D_v^{-1} S$. Choose the diagonal matrix $D_v^{-1}$ so that multiplication by $D_v^{-1}$ divides each row of $S$ by the sum of the entries in that row. This operation is well-defined: a row of all zeros would correspond to a vertex that lies on every facet, which is impossible.
Then the sum of the columns of $D_v^{-1} S$ is $\mathbbm{1}$ making $D_v^{-1} S$ satisfy all three conditions of Theorem~\ref{THM:slackconditions}. Therefore, by the theorem, $D_v^{-1} S = S_Q$ for some polytope $Q$ in the combinatorial class of $P$. \end{proof} We illustrate the above results on a simple example. \begin{example} \label{EG:quadslack} Consider two realizations of a quadrilateral in $\mathbb{R}^2$, \begin{align*} P_1 & = \textup{conv}\{(0,0),(1,0),(1,1),(0,1)\}, \textup{ and } \\ P_2 & = \textup{conv}\{(1,-2),(1,2),(-1,2),(-1,-2)\}, \end{align*} where $P_2 = \psi(P_1)$ for the affine transformation $\renewcommand{\arraystretch}{0.7}\psi(\mathbf x) = \begin{bmatrix} 0 & -2 \\ 4 & 0 \end{bmatrix} \mathbf x + \begin{bmatrix}1\\-2\end{bmatrix}$. The most obvious choice of facet representation for $P_1$ yields the slack matrix \begin{align*} S_{P_1} = \begin{bmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 1 & 1 & 1 \\ 1 & 0 & 1 \end{bmatrix} \begin{bmatrix} 0 & 1 & 1 & 0 \\ 0 & -1 & 0 & 1 \\ 1 & 0 & -1 & 0 \end{bmatrix} & = \begin{bmatrix} 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 1 \\ 1 & 0 & 0 & 1 \\ 1 & 1 & 0 & 0 \end{bmatrix}, \end{align*} which, by calculating the effect of $\psi$ on the facets of $P_1$, one finds is the same as the slack matrix for $P_2$, \begin{align*} S_{P_2} & = \begin{bmatrix} 1 & 1 & -2 \\ 1 & 1 & 2 \\ 1 & -1 & 2 \\ 1 & -1 & -2 \end{bmatrix} \begin{bmatrix} \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} \\ -\frac{1}{2} & 0 & \frac{1}{2} & 0 \\ 0 & -\frac{1}{4} & 0 & \frac{1}{4} \end{bmatrix}. \end{align*} Since $P_2$ also contains the origin in its interior, we can scale each column of its $\mathcal{H}$-representation from above by 2 to obtain a slack matrix of the form $S_{P_2}^1$. 
Finally, consider the following nonnegative matrix $$S = \begin{bmatrix} 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 1 \\ 1 & 0 & 0 & 2 \\ 1 & 2 & 0 & 0 \end{bmatrix}.$$ Since $S$ satisfies all three conditions of Theorem~\ref{THM:slackconditions}, it must be the slack matrix of some realization of a quadrilateral. In fact, it is easy to check that $S$ is the slack matrix of the quadrilateral with vertices $\{(0,0),(1,0),(2,1),(0,1)\}$. Since all quadrilaterals are projectively equivalent, by Lemma~\ref{lem:PEscaling} we must be able to obtain $S_{P_1}$ by scaling the columns and rows of $S$; indeed, multiplying its first column by $2$ and its last two rows by $1/2$, we recover $S_{P_1}$. \end{example} We now recall the {\em symbolic slack matrix} and {\em slack ideal} of $P$ which were defined in \cite{GPRT17}. Given a $d$-polytope $P$, its {\em symbolic slack matrix} $S_P(\mathbf x)$ is the sparse generic matrix obtained by replacing each nonzero entry of $S_P$ by a distinct variable. Suppose there are $t$ variables in $S_P(\mathbf x)$. The {\em slack ideal} of $P$ is the saturation of the ideal generated by the $(d+2)$-minors of $S_P(\mathbf x)$, namely \begin{equation} I_P := \langle (d+2)\text{-minors of }S_P(\mathbf x) \rangle :\left(\prod_{i=1}^t x_i\right)^\infty \subset \mathbb{C}[\mathbf x] := \mathbb{C}[x_1, \ldots, x_t]. \label{EQ:slackidealdef} \end{equation} The {\em slack variety} of $P$ is the complex variety $\mathcal{V}(I_P) \subset \mathbb{C}^t$. The saturation of $I_P$ by the product of all variables guarantees that there are no components in $\mathcal{V}(I_P)$ that live entirely in coordinate hyperplanes. If $\mathbf s \in \mathbb{C}^t$ is a zero of $I_P$, then we identify it with the matrix $S_P(\mathbf s)$. \begin{lemma} \label{lem:generalized slack matrices are in the slack variety} The set $\mathfrak{S}_P$ of generalized slack matrices is contained in the real part of the slack variety $\mathcal{V}(I_P)$.
\end{lemma} \begin{proof} By Theorem~\ref{THM:generalized slack matrices satisfy (1) and (2)}, all matrices in $\mathfrak{S}_P$ have real entries, support equal to $\textup{supp}(S_P)$, and rank $d+1$. In particular, every $(d+2)$-minor of $S_P(\mathbf x)$ vanishes at such a matrix, and since all of its entries indexed by $\textup{supp}(S_P)$ are nonzero, it is not removed by the saturation in \eqref{EQ:slackidealdef}. Therefore, $\mathfrak{S}_P$ is contained in the real part of $\mathcal{V}(I_P)$. \end{proof} To focus on ``true slack matrices'' of polytopes in the combinatorial class of $P$, meaning matrices that satisfy all conditions of Theorem~\ref{THM:slackconditions}, we define the {\em affine slack ideal} \begin{equation} \widetilde{I}_P = \langle (d+2)\text{-minors of }[S_P(\mathbf x)\;\mathbbm{1}] \rangle :\left(\prod_{i=1}^t x_i\right)^\infty \subset \mathbb{C}[\mathbf x],\label{EQ:trueslackidealdef} \end{equation} where $[S_P(\mathbf x)\;\mathbbm{1}]$ is the symbolic slack matrix with a column of ones appended. By construction, $\mathcal{V}(\widetilde{I}_P)$ is a subvariety of $\mathcal{V}(I_P)$. \begin{definition} Let $\widetilde{\mathfrak{S}}_P$ denote the set of true slack matrices of polytopes in the combinatorial class of $P$, or equivalently, the set of all nonnegative matrices that satisfy the three conditions of Theorem~\ref{THM:slackconditions}. \end{definition} \begin{lemma} \label{lem:true slack matrices are in the true slack variety} The set $\widetilde{\mathfrak{S}}_P$ of true slack matrices is contained in the real part of $\mathcal{V}(\widetilde{I}_P)$. \end{lemma} \begin{proof} By definition, all elements $S\in\widetilde{\mathfrak{S}}_P$ have real entries and $\textup{supp}(S)= \textup{supp}(S_P)$. It remains to show that $\textup{rank}\,([S \; \mathbbm{1}])\leq d+1$. This follows immediately from the fact that $S$ satisfies properties \eqref{EQ:rank} and \eqref{EQ:colspan} of Theorem~\ref{THM:slackconditions}: since $\mathbbm{1}$ already lies in the column span of $S$, appending it does not increase the rank beyond $d+1$.
\end{proof} \begin{example} \label{EG:quadideal} For our quadrilateral $P_1$ from Example~\ref{EG:quadslack} and in fact any quadrilateral $P$ labeled in the same way as $P_1$, we have $$S_{P}(\mathbf x) = \begin{bmatrix} 0 & x_1 & x_2 & 0 \\ 0 & 0 & x_3 & x_4 \\ x_5 & 0 & 0 & x_6 \\ x_7 & x_8 & 0 & 0 \end{bmatrix}.$$ Its slack ideal is \begin{align*} {I}_P & = \langle 4\text{-minors of }S_{P}(\mathbf x)\rangle : \left(\prod_{i=1}^8 x_i\right)^\infty = \langle x_2x_4x_5x_8 - x_1x_3x_6x_7\rangle \subset \mathbb{C}[x_1,\ldots,x_8]. \end{align*} The affine slack ideal of $P$ is \begin{align*} \widetilde{I}_P = \langle 4\text{-minors of }[S_{P}(\mathbf x)\;\mathbbm{1}]\rangle : \left(\prod_{i=1}^8 x_i\right)^\infty & = \langle x_1x_3x_6-x_2x_4x_8+x_2x_6x_8-x_3x_6x_8, \\[-7pt] &\phantom{= \langle}x_2x_4x_5-x_2x_4x_7+x_2x_6x_7-x_3x_6x_7, \\ &\phantom{= \langle}x_1x_4x_5-x_1x_4x_7+x_1x_6x_7-x_4x_5x_8, \\ &\phantom{= \langle}x_1x_3x_5-x_1x_3x_7+x_2x_5x_8-x_3x_5x_8\rangle. \end{align*} Notice, for example, that the generalized slack matrix which corresponds to $\mathbf s = (2,2,2,1,8,2,2,1)$ is a zero of $I_P$ but not of $\widetilde{I}_P$ and indeed $\mathbbm{1}$ is not in the column span of $S_P(\mathbf s)$. \qed \end{example} \section{Introduction} \label{sec:introduction} An important focus in the study of polytopes is the investigation of their realization spaces. Given a $d$-polytope $P \subset \mathbb{R}^d$, its face lattice determines its combinatorial type. A realization space of $P$ is, roughly speaking, the set of all geometric realizations of the combinatorial type of $P$. This set, usually defined by fixing an affinely independent set of vertices in every realization of $P$, is a primary basic semialgebraic set, meaning that it is defined by a finite set of polynomial equations and strict inequalities. 
Foundational questions about polytopes such as whether there is a polytope with rational vertices in the combinatorial class of $P$, whether a combinatorial type has any realization at all as a convex polytope, or whether faces of a polytope can be freely prescribed, are all questions about realization spaces. In general, many of these questions are hard to settle and there is no straightforward way to answer them by working directly with realization spaces. Each instance of such a question often requires a clever new strategy; indeed, the polytope literature contains many ingenious methods to find the desired answers. In this paper, we introduce {a model for the realization space of a polytope in a given combinatorial class} modulo projective transformations. This space arises from the positive part of an algebraic variety called the {\em slack variety} of the polytope. {An explicit model for the realization space of }the projective equivalence classes of a polytope does not exist in the literature, although several authors have implicitly worked modulo projective transformations \cite{AP17,APT15,R96}. Using a related idea, we also construct a {model for the} realization space for a polytope that is rationally equivalent to the classical {model for the} realization space of the polytope. The ideal giving rise to the slack variety is called the {\em slack ideal} of the polytope and was introduced in \cite{GPRT17}. The slack ideal in turn was inspired by the {\em slack matrix} of a polytope. This is a nonnegative real matrix with rows (and columns) indexed by the vertices (and facets) of the polytope and with $(i,j)$-entry equal to the slack of the $i$th vertex in the $j$th facet inequality. Each vertex/facet representation of a $d$-polytope $P$ gives rise to a slack matrix $S_P$ of rank $d+1$. 
Slack matrices have found remarkable use in the theory of extended formulations of polytopes (see for example, \cite{Yannakakis}, \cite{FPTdW}, \cite{Rothvoss}, \cite{GPT2}, \cite{LeeSDP}). Their utility in creating a realization space model for polytopes was also observed in \cite{D14}. \subsection{Our contribution} By passing to a symbolic version of the slack matrix $S_P$, wherein we replace every positive entry by a distinct variable in the vector of variables $\mathbf x$, one gets a symbolic matrix $S_P(\mathbf x)$. The slack ideal $I_P$ is the ideal obtained by saturating the ideal of $(d+2)$-minors of $S_P(\mathbf x)$ with respect to all variables. The complex variety of $I_P$, $\mathcal{V}(I_P)$, is the slack variety of $P$. {We prove that modulo a group action, the positive part of $\mathcal{V}(I_P)$ is a realization space for the projective equivalence classes of polytopes that are combinatorially equivalent to $P$. This is the slack realization space of $P$ and it provides a new model for the realizations of a polytope modulo projective transformations. Working with a slightly modified ideal called the {\em affine slack ideal} of $P$, we also obtain a realization space for $P$ that is rationally equivalent to the classical realization space of $P$. We call this the {\em affine slack realization space} of $P$.} By the positive part of a complex variety we mean the intersection of the variety with the positive real orthant of the ambient space. The slack realization space has several nice features. The inequalities in its description are simply nonnegativities of variables in place of the determinantal inequalities in the classical model. By forgetting these inequalities one can study the entire slack variety, which is a natural algebraic relaxation of the realization space. The slack realization space naturally mods out affine equivalence among polytopes and, unlike in the classical construction, does not depend on a choice of affine basis. 
The construction leads to a natural way to study polytopes up to projective equivalence. Further, it serves as a realization space for both the polytope it was constructed from as well as the polar of the polytope. Additionally, the slack ideal provides a computational engine for establishing several types of results one can ask about the combinatorial class of a polytope. We exhibit three concrete applications of this machinery to determine non-rationality, non-prescribability of faces, and non-realizability of polytopes. We expect that further applications and questions on the important and difficult topic of realization spaces will be amenable to our algebraic geometry based approach. \subsection{Organization of the paper} In Section~\ref{sec:bg}, we summarize the results on slack matrices needed in this paper. We also define the slack ideal and affine slack ideal of a polytope. In Section~\ref{sec:RealSp}, we construct the slack and affine slack realization spaces of a polytope. We show that the affine slack realization space is rationally equivalent to the classical realization space of the polytope. In Section~\ref{sec:Apps} we illustrate how the slack ideal provides a computational framework for many classical questions about polytopes such as convex realizability of combinatorial polytopes, rationality, and prescribability of faces. \subsection{Acknowledgements} We thank Arnau Padrol and G\"unter Ziegler for helpful pointers to the literature and valuable comments on the first draft of this paper. The \texttt{SageMath} and \texttt{Macaulay2} software systems were invaluable in the development of the results below. All computations described in this paper were done with one of these two systems \cite{SageMath}, \cite{M2}. \section{Realization spaces from Slack Varieties} \label{sec:RealSp} Recall that a realization of a $d$-polytope $P \subset \mathbb{R}^d$ is a polytope $Q$ that is combinatorially equivalent to $P$. 
A {\em realization space} of $P$ is, essentially, the set of all polytopes $Q$ which are realizations of $P$, or equivalently, the set of all ``geometrically distinct'' polytopes which are combinatorially equivalent to $P$. We say ``essentially'' since it is typical to mod out by affine equivalence within the combinatorial class. The standard construction of a realization space of $P = \textup{conv}\{\mb{p}_1,\ldots, \mb{p}_v\}$ is as follows (see \cite{RG96}). Fix an {\em affine basis} of $P$, that is, $d+1$ vertex labels $B = \{b_0,\ldots, b_d\}$ such that the vertices $\{\mb{p}_{b}\}_{b\in B}$ are necessarily affinely independent in every realization of $P$. Then the {\em realization space of $P$ with respect to $B$} is $$\mathcal{R}(P,B) = \{\text{realizations $Q = \textup{conv}\{\mb{q}_1,\ldots,\mb{q}_v\}$ of $P$ with $\mb{q}_i=\mb{p}_i$ for all $i\in B$}\}.$$ Fixing an affine basis ensures that just one $Q$ from each affine equivalence class in the combinatorial class of $P$ occurs in $\mathcal{R}(P,B)$. Realization spaces of polytopes are {\em primary basic semialgebraic sets}, that is, they are defined by finitely many polynomial equations and strict inequalities. Recording each realization $Q$ by its vertices, we can think of $\mathcal{R}(P,B)$ as lying in $\mathbb{R}^{d \cdot v}$. Two primary basic semialgebraic sets $X\subseteq\mathbb{R}^m$ and $Y\subseteq\mathbb{R}^{m+n}$ are {\em rationally equivalent} if there exists a homeomorphism $f:X\to Y$ such that both $f$ and $f^{-1}$ are rational functions. The important result for us is that if $B_1,B_2$ are two affine bases of a polytope $P$, then $\mathcal{R}(P,B_1)$ and $\mathcal{R}(P,B_2)$ are rationally equivalent \cite[Lemma 2.5.4]{RG96}. Thus one can call $\mathcal{R}(P,B) \subset \mathbb{R}^{d \cdot v}$, {\em the} realization space of $P$. 
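Whether a candidate polygon is a realization of $P$ can be tested from slack data alone: by our labelling conventions, it suffices that its slack matrix have the same support as $S_P$. The sketch below is an illustration of ours; for a counterclockwise polygon, the cross product $\det(\mb{q}_{j+1}-\mb{q}_j,\,\mb{q}_i-\mb{q}_j)$ is a positive column scaling of the true slacks, so it has the same support. It checks that the quadrilateral with vertices $(0,0),(1,0),(2,1),(0,1)$ from Example~\ref{EG:quadslack} is a realization of the square:

```python
def slack_support(verts):
    """Support pattern of a slack matrix of the polygon conv(verts),
    with vertices in counterclockwise order; facet j is the edge from
    vertex j to vertex j+1 (a labelling convention of ours)."""
    n = len(verts)
    supp = []
    for r in verts:
        row = []
        for j in range(n):
            p, q = verts[j], verts[(j + 1) % n]
            # cross(q - p, r - p): a positive multiple of the slack of r.
            s = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
            assert s >= 0, "vertices must be in convex ccw position"
            row.append(s != 0)
        supp.append(row)
    return supp

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
quad   = [(0, 0), (1, 0), (2, 1), (0, 1)]
print(slack_support(square) == slack_support(quad))  # True
```

Each row of the support pattern has exactly two zeros, reflecting that every vertex of a quadrilateral lies on exactly two facets.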
The main goal of this section is to construct models of realization spaces for $P$ from the slack variety $\mathcal{V}(I_P) \subset \mathbb{C}^t$ and affine slack variety $\mathcal{V}(\widetilde{I}_P)$ defined in Section~\ref{sec:bg}. Recall that we identify an element $\mathbf s$ in either variety with the matrix $S_P(\mathbf s)$. Then by Lemma~\ref{lem:generalized slack matrices are in the slack variety}, $\mathfrak{S}_P$, the set of all generalized slack matrices of all polytopes in the combinatorial class of $P$, is contained in $\mathcal{V}(I_P)$. Similarly, by Lemma~\ref{lem:true slack matrices are in the true slack variety}, $\widetilde{\mathfrak{S}}_P$, the set of all true slack matrices of polytopes in the combinatorial class of $P$, is contained in $\mathcal{V}(\widetilde{I}_P)$. In fact, $\mathfrak{S}_P$ is contained in the positive part of $\mathcal{V}(I_P)$, defined as \begin{equation} \label{EQ:slackmodel} \mathcal{V}_+(I_P) := \mathcal{V}(I_P)\cap \mathbb{R}^t_{>0}\end{equation} and $\widetilde{\mathfrak{S}}_P$ is contained in the positive part of $\mathcal{V}(\widetilde{I}_P)$ defined as \begin{equation} \label{EQ:affineslackmodel} \mathcal{V}_+(\widetilde{I}_P) := \mathcal{V}(\widetilde{I}_P)\cap \mathbb{R}^t_{>0}.\end{equation} These positive spaces will lead to realization spaces of $P$. In order to get there, we first describe these sets more explicitly. We start with a well-known lemma, whose proof we include for later reference. \begin{lemma} \label{lem:matrices have correct rank} Let $S$ be a matrix with the same support as $S_P$. Then $\textup{rank}\,(S) \geq d+1$. \end{lemma} \begin{proof} Consider a flag of $P$, i.e., a maximal chain of faces in the face lattice of $P$.
Choose a sequence of facets $F_0,F_1, \ldots, F_d$ so that the flag is $$\emptyset \;= \; F_0\cap\cdots\cap F_d \;\subset\; F_1\cap\cdots\cap F_d \;\subset\; \cdots \;\subset\; F_{d-1}\cap F_d \;\subset\; F_d \;\subset\; P.$$ Next choose a sequence of vertices so that $v_0 = F_1\cap\cdots\cap F_d$ is the $0$-face in the flag, making $v_0 \not \in F_0$. Then choose $v_1 \in F_2 \cap \dots \cap F_d$ but $v_1 \not \in F_1$, $v_2 \in F_3 \cap \dots \cap F_d$ but $v_2 \not \in F_2$ and so on, until $v_{d-1} \in F_d$ but not in $F_{d-1}$. Finally, choose $v_d$ so that $v_d \not \in F_d$. Then the $(d+1)\times(d+1)$ submatrix of $S_P$ indexed by the chosen vertices and facets is lower triangular with a nonzero diagonal, hence has rank $d+1$. Now if $S$ is a matrix with $\textup{supp}(S) = \textup{supp}(S_P)$, $S$ will also have this lower triangular submatrix in it, thus $\textup{rank}\,(S) \geq d+1$. \end{proof} We remark that the vertices chosen from the flag in the above proof form a suitable affine basis to fix in the construction of $\mathcal{R}(P,B)$. \begin{theorem} \label{THM:description of positive slack variety} The positive part of the slack variety, $\mathcal{V}_+(I_P)$, coincides with $\mathfrak{S}_P$, the set of generalized slack matrices of $P$. Similarly, $\mathcal{V}_+(\widetilde{I}_P)$ coincides with $\widetilde{\mathfrak{S}}_P$, the set of true slack matrices of $P$. \end{theorem} \begin{proof} We saw that $\mathfrak{S}_P \subseteq \mathcal{V}_+(I_P)$ and by Theorem~\ref{THM:generalized slack matrices satisfy (1) and (2)}, $\mathfrak{S}_P$ is precisely the set of nonnegative matrices with the same support as $S_P$ and rank $d+1$. On the other hand, if $\mathbf s \in \mathcal{V}_+(I_P)$, then $S_P(\mathbf s)$ is nonnegative and $\textup{supp}(S_P(\mathbf s)) = \textup{supp}(S_P)$. Therefore, by Lemma~\ref{lem:matrices have correct rank}, $\textup{rank}\,(S_P(\mathbf s)) = d+1$. Thus, $\mathcal{V}_+(I_P) = \mathfrak{S}_P$. 
We saw that $\widetilde{\mathfrak{S}}_P \subseteq \mathcal{V}_+(\widetilde{I}_P)$. Also recall that $\mathcal{V}_+(\widetilde{I}_P)$ is contained in $\mathcal{V}_+(I_P)$. Therefore, by the first statement of the theorem, if $\mathbf s \in \mathcal{V}_+(\widetilde{I}_P)$, then $S_P(\mathbf s)$ is nonnegative, $\textup{supp}(S_P(\mathbf s)) = \textup{supp}(S_P)$ and $\textup{rank}\,(S_P(\mathbf s)) = d+1$. From the definition of $\widetilde{I}_P$, we have $\textup{rank}\,([S_P(\mathbf s)\ \mathbbm{1}]) \leq d+1$, so it follows that $\textup{rank}\,([S_P(\mathbf s)\ \mathbbm{1}]) = d+1$, or equivalently, $\mathbbm{1}$ lies in the column span of $S_P(\mathbf s)$. Therefore, the matrices in $\mathcal{V}_+(\widetilde{I}_P)$ satisfy all three conditions of Theorem~\ref{THM:slackconditions}, hence $\mathcal{V}_+(\widetilde{I}_P) = \widetilde{\mathfrak{S}}_P$. \end{proof} Since positive row and column scalings of a generalized slack matrix of $P$ give another generalized slack matrix of $P$, we immediately get that $\mathcal{V}_+(I_P)$ is closed under row and column scalings. Similarly, $\mathcal{V}_+(\widetilde{I}_P)$ is closed under column scalings. \begin{corollary} \label{cor:scale}\ \begin{enumerate} \item If $\mathbf s\in\mathcal{V}_+(I_P)$, then $D_v\mathbf s D_f\in \mathcal{V}_+(I_P)$, for all positive diagonal matrices $D_v,D_f$. \item Similarly, if $\mathbf s\in\mathcal{V}_+(\widetilde{I}_P)$, then $\mathbf s D_f\in \mathcal{V}_+(\widetilde{I}_P)$, for all positive diagonal matrices $D_f$. \end{enumerate} \end{corollary} Corollary~\ref{cor:scale} tells us that the groups $\mathbb{R}_{>0}^v\times\mathbb{R}_{>0}^f$ and $\mathbb{R}^f_{>0}$ act on $\mathcal{V}_+(I_P)$ and $\mathcal{V}_+(\widetilde{I}_P)$, respectively, via multiplication by positive diagonal matrices. 
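Corollary~\ref{cor:scale} and the row normalization used in the proof of Theorem~\ref{THM:generalized slack matrices satisfy (1) and (2)} can be traced through numerically. In the Python sketch below (exact rationals; the particular scaling factors are arbitrary choices of ours), an arbitrarily scaled slack matrix of the square keeps rank $d+1$, and dividing each row by its sum puts $\mathbbm{1}$ back in the column span:

```python
from fractions import Fraction as F

def rank(M):
    """Exact rank over Q via Gaussian elimination on Fractions."""
    M = [[F(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

S_P1 = [[0, 1, 1, 0], [0, 0, 1, 1], [1, 0, 0, 1], [1, 1, 0, 0]]
Dv, Df = [2, 3, 5, 7], [1, 4, 2, 3]           # arbitrary positive scalings
S = [[F(Dv[i] * S_P1[i][j] * Df[j]) for j in range(4)] for i in range(4)]

assert rank(S) == 3       # same support and rank: S lies in V_+(I_P)
T = [[x / sum(row) for x in row] for row in S]  # row normalization
assert all(sum(row) == 1 for row in T)  # columns of T sum to the 1-vector
print(rank(T), rank([row + [F(1)] for row in T]))  # 3 3: 1 in column span
```

Appending the all-ones column does not raise the rank of $T$, so $T$ satisfies all three conditions of Theorem~\ref{THM:slackconditions} and is a true slack matrix.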
Modding out these actions is the same as setting some choice of variables in the symbolic slack matrix to $1$, which means that we may choose a representative of each equivalence class (affine or projective) with ones in some prescribed positions. \begin{corollary} \label{THM:realizespmodel}\ \begin{enumerate} \item Given a polytope $P$, there is a bijection between the elements of\break $\mathcal{V}_+(I_P)/(\mathbb{R}^v_{>0}\times\mathbb{R}^f_{>0})$ and the classes of projectively equivalent polytopes of the same combinatorial type as $P$. In particular, each class contains a true slack matrix. \item Given a polytope $P$, there is a bijection between the elements of $\mathcal{V}_+(\widetilde{I}_P)/\mathbb{R}^f_{>0}$ and the classes of affinely equivalent polytopes of the same combinatorial type as $P$. \end{enumerate} \end{corollary} The last statement in Corollary~\ref{THM:realizespmodel} (1) follows from the fact that every generalized slack matrix admits a row scaling that makes it satisfy all three conditions of Theorem~\ref{THM:slackconditions}, thereby making it a true slack matrix. An explicit example of such a scaling can be seen in the proof of Theorem~\ref{THM:generalized slack matrices satisfy (1) and (2)}. By the above results we have that $\mathcal{V}_+(I_P)/(\mathbb{R}^v_{>0}\times\mathbb{R}^f_{>0})$ and $\mathcal{V}_+(\widetilde{I}_P)/\mathbb{R}^f_{>0}$ are parameter spaces for the projective (respectively, affine) equivalence classes of polytopes in the combinatorial class of $P$. Thus they can be thought of as realization spaces of $P$. \begin{definition} \label{def:realization spaces} Call $\mathcal{V}_+(I_P)/(\mathbb{R}^v_{>0}\times\mathbb{R}^f_{>0})$ the {\em slack realization space} of the polytope $P$, and $\mathcal{V}_+(\widetilde{I}_P)/\mathbb{R}^f_{>0}$ the {\em affine slack realization space} of the polytope $P$. 
\end{definition} We will see below that the affine slack realization space $\mathcal{V}_+(\widetilde{I}_P)/\mathbb{R}^f_{>0}$ is rationally equivalent to the classical model of the realization space $\mathcal{R}(P,B)$ of the polytope $P$. On the other hand, our main object, the slack realization space $\mathcal{V}_+(I_P)/(\mathbb{R}^v_{>0}\times\mathbb{R}^f_{>0})$, does not have an analog in the polytope literature. This is partly because fixing a projective basis in a realization of $P$ does not guarantee that the remaining vertices of the realization stay off the hyperplane at infinity. The slack realization space is a natural model for the realization space of projective equivalence classes of polytopes. {We note that in \cite{GPS17} the authors investigate the projective realization space of combinatorial hypersimplices and find an upper bound for its dimension. However, they do not present an explicit model for it.} \begin{theorem} \label{THM:truerealizequiv} The affine slack realization space $\mathcal{V}_+(\widetilde{I}_P)/\mathbb{R}^f_{>0}$ is rationally equivalent to the classical realization space $\mathcal{R}(P,B)$ of the polytope $P$. \end{theorem} \begin{proof} We will show that $\mathcal{V}_+(\widetilde{I}_P)/\mathbb{R}^f_{>0}$ is rationally equivalent to $\mathcal{R}(P,B)$ for a particular choice of $B$. By \cite[Lemma 2.5.4]{RG96}, this is sufficient to show rational equivalence for any choice of basis. We have already shown that realizations of $P$ modulo affine transformations are in bijective correspondence with the elements of both $\mathcal{V}_+(\widetilde{I}_P)/\mathbb{R}^f_{>0}$ and $\mathcal{R}(P,B)$. So we just have to prove that this bijection induces a rational equivalence between these spaces, i.e., that both the map and its inverse are rational. We will start by showing that the map sending a polytope in $\mathcal{R}(P,B)$ to its slack matrix is rational. Fix a flag in $P$, as in the proof of Lemma~\ref{lem:matrices have correct rank}.
Suppose the sequence of vertices and facets chosen from the flag in the proof are indexed by the sets $I$ and $J$ respectively. The vertices $\{\mb{p}_i\}_{i\in I}$ are affinely independent, so that $B = I$ is an affine basis of $P$. Moreover, by applying an affine transformation to $P$, we may assume that $0$ is in the convex hull of $\{\mb{p}_i\}_{i\in I}$, hence is in the interior of every element of $\mathcal{R}(P,B)$. Consider the map \[ g:\mathcal{R}(P,B) \to \mathcal{V}_+(\widetilde{I}_P), \,\,\,\,\, Q \mapsto S_Q^1. \] The polytope $Q$ is recorded in $\mathcal{R}(P,B)$ by its list of vertices, which in turn are the rows of the matrix $V$. Also, recall that $S^1_Q = [\mathbbm{1}\; V]\begin{bmatrix} \mathbbm{1} \\ -W^\top \end{bmatrix}$. To prove that $g$ is a rational map, we need to show that the matrix of facet normals $W$ is a rational function of $V$. Since we know the combinatorial type of $P$, we know the set of vertices that lie on each facet. For facet $j$, let $V(j)$ be the submatrix of $V$ whose rows are the vertices on this facet. Then the normal of facet $j$, or equivalently $W_j$, is obtained by solving the linear system $V(j) \cdot \mathbf x = \mathbbm{1}$ which proves that $W_j$ is a rational function of $V$. Then $\widetilde{g} = \pi\circ g$ is the desired rational map from $\mathcal{R}(P,B)$ to $\mathcal{V}_+(\widetilde{I}_P)/\mathbb{R}^f_{>0}$, where $\pi$ is the standard quotient map $\pi: \mathcal{V}_+(\widetilde{I}_P)\to\mathcal{V}_+(\widetilde{I}_P)/\mathbb{R}^f_{>0}$. It sends the representative in $\mathcal{R}(P,B)$ of an affine equivalence class of polytopes in the combinatorial class of $P$ to the representative of that class in $\mathcal{V}_+(\widetilde{I}_P)/\mathbb{R}^f_{>0}$. For the reverse map, we have to send a slack matrix $S_Q$ of a realization $Q$ of $P$ to the representative of its affine equivalence class in $\mathcal{R}(P,B)$. 
We saw in Lemma~\ref{LEM:rowrealiz} that the rows of $S_Q$ are the vertices of a realization $Q'$ of $P$ that is affinely equivalent to $Q$. So we just have to show that $Q'$ can be rationally mapped to the representative of $Q$ in $\mathcal{R}(P,B)$. To do that, denote by $\widehat{S}_Q$ the $(d+1) \times (d+1)$ lower triangular submatrix of $S_Q$ from our flag, with rows indexed by $I$ and columns indexed by $J$. Then the entries of $(\widehat{S}_Q)^{-1}$ are rational functions in the entries of $\widehat{S}_Q$. Let $\mathcal{B}$ be the $(d+1)\times d$ matrix whose rows are the vertices of $P$ indexed by $B$. Recall that these vertices are common to all elements of $\mathcal{R}(P,B)$, and in particular, they form an affine basis for the representative of $Q$ in $\mathcal{R}(P,B)$. Then the linear map $$\psi_{S_Q}:\mathbb{R}^{f}\to \mathbb{R}^d, \,\,\,\, \mathbf x \mapsto \mathbf x_J^\top\widehat{S}_Q^{-1}\mathcal{B},$$ where $\mathbf x_J$ is the restriction of $\mathbf x \in \mathbb{R}^f$ to the coordinates indexed by $J$, is defined rationally in terms of the entries of $S_Q$, and maps row $i$ of $S_Q$ to the affine basis vertex $\mb{p}_i$, for all $i \in I$. Now since $\psi_{S_Q}$ is a linear map, $\psi_{S_Q}(Q')$ is affinely equivalent to $Q'$, which is itself affinely equivalent to $Q$. Furthermore, $\psi_{S_Q}$ sends an affine basis of $Q'$ to the corresponding affine basis in $Q$, so in fact it must be a bijection between the two polytopes. Hence, $\psi_{S_Q}(\textup{rows of }S_Q)$ equals the representative of $Q$ in $\mathcal{R}(P,B)$, completing our proof. \end{proof} The slack realization space is especially elegant in the context of polarity. Let $P^\circ$ be the polar polytope of $P$. It is not immediately obvious from the standard model of a realization space how $\mathcal{R}(P,B_1)$ and $\mathcal{R}(P^\circ,B_2)$ are related.
In \cite{RG96}, it is shown that the realization spaces of $P$ and $P^\circ$ are {\em stably equivalent}, a coarser notion of equivalence than rational equivalence (see \cite[Definition 2.5.1]{RG96}); however, the proof of this fact in Theorem 2.6.3 is non-trivial. Now consider the slack model. Recall that one slack matrix of $P^\circ$ is $(S_P^1)^\top$, so that $S_{P^\circ}(\mathbf x) = S_P(\mathbf x)^\top$. In particular, this means that $I_{P^\circ} = I_P$, so that the slack varieties and realization spaces of $P$ and $P^\circ$ are actually the same when considered as subsets of $\mathbb{R}^t$. We simply need to interpret $\mathbf s\in\mathcal{V}_+(I_P) = \mathcal{V}_+(I_{P^\circ})$ as a realization of $P$ or $P^\circ$ by assigning its coordinates to $S_P(\mathbf x)$ along rows or columns. \begin{example} \label{EG:quadrealize} Let us return to the realization space of the unit square $P_1$ from Example~\ref{EG:quadslack}. Suppose we fix the affine basis $B = \{1,2,4\}$, where we had $\mb{p}_1 = (0,0), \mb{p}_2 = (1,0)$ and $\mb{p}_4=(0,1)$. Then the classical realization space $\mathcal{R}(P_1,B)$ consists of all quadrilaterals $Q = \textup{conv}\{\mb{p}_1,\mb{p}_2,(a,b), \mb{p}_4\}$, where $a,b\in\mathbb{R}$ must satisfy $a,b>0$ and $a+b>1$ in order for $Q$ to be convex. In the slack realization spaces, modding out by row and column scalings is equivalent to fixing some variables in $S_P(\mathbf x)$ to $1$.
So for example, we could start with the following scaled symbolic slack and affine slack matrices $$S_P(\mathbf x) = \begin{bmatrix} 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 1 \\ 1 & 0 & 0 & 1 \\ 1 & x_8 & 0 & 0 \end{bmatrix}, \hspace{10pt} [S_P(\mathbf x)\;\mathbbm{1}] = \begin{bmatrix} 0 & 1 & 1 & 0 & 1 \\ 0 & 0 & x_3 & x_4 & 1\\ 1 & 0 & 0 & 1 & 1 \\ x_7 & x_8 & 0 & 0 & 1 \end{bmatrix}.$$ Computing the $4$-minors of these scaled symbolic slack matrices and saturating with all variables produces the scaled slack ideals \begin{align*} I_P^{\textup{scaled}}& = \langle x_8 - 1\rangle, \textup{ and } \\ \widetilde{I}_P^{\textup{scaled}} & = \langle x_3x_8 + x_4x_8 - x_3 - x_8, \,x_4x_7 + x_4x_8 - x_4 - x_7, \,x_3x_7 - x_4x_8\rangle. \end{align*} Therefore the slack realization space, $\mathcal{V}_+(I_P)/ ({\mathbb{R}^4_{>0} \times \mathbb{R}^4_{>0}})$, has the unique element $(1,1,1,1,1,1,1,1)$, and indeed, all convex quadrilaterals are projectively equivalent to $P_1$. From the generators of $\widetilde{I}_P^{\textup{scaled}}$ one sees that the affine slack realization space, $\mathcal{V}_+(\widetilde{I}_P)/ \mathbb{R}^4_{>0}$, is two-dimensional and parametrized by $x_3,x_4$ with $$ x_7 = \frac{x_4}{x_3 + x_4 -1} \,\,\textup{ and } \,\,x_8 = \frac{x_3}{x_3 + x_4 -1}.$$ Since all four variables have to take on positive values in a (scaled) slack matrix of a quadrilateral, we get that the realization space $\mathcal{V}_+(\widetilde{I}_P)/\mathbb{R}^4_{>0}$ is cut out by the inequalities $ x_3 > 0, \,\, x_4 > 0, \,\, x_3 + x_4 > 1$. This description coincides exactly with that of $\mathcal{R}(P_1,B)$ that we saw earlier. \label{EX:square} \end{example} \begin{example} \label{ex:vertex split of vertex sum} Consider the $5$-polytope $P$ with vertices $\mb{p}_1,\ldots, \mb{p}_8$ given by \begin{align*} e_1,e_2,e_3,e_4,-e_1-2e_2-e_3,-2e_1-e_2-e_4,-2e_1-2e_2+e_5,-2e_1-2e_2-e_5 \end{align*} where $e_1,\ldots, e_5$ are the standard basis vectors in $\mathbb{R}^5$.
It can be obtained by {splitting} the distinguished vertex $v$ of the vertex sum of two squares, $(\Box,v)\oplus(\Box,v)$ in the notation of \cite{McMull}. This polytope has 8 vertices and 12 facets and its symbolic slack matrix has the zero-pattern below $$\begin{bmatrix} 0&*&0&0&0&0&*&0&0&0&0&0 \\ 0&0&0&*&*&0&0&0&0&0&0&0 \\ 0&0&0&0&0&*&*&*&0&0&*&* \\ *&0&0&*&0&*&0&0&*&0&*&0 \\ *&*&*&0&0&0&0&0&*&*&0&0 \\ 0&0&*&0&*&0&0&*&0&*&0&* \\ *&0&*&0&0&*&0&*&0&0&0&0 \\ 0&0&0&0&0&0&0&0&*&*&*&* \end{bmatrix}.$$ By \cite[Theorem 5.3]{McMull} $P$ is not projectively unique, meaning its slack realization space $\mathcal{V}_+(I_P)/(\mathbb{R}_{>0}^v\times\mathbb{R}^f_{>0})$ will consist of more than a single point. Indeed, by fixing ones in the maximum number of positions, {marked in \textcolor{red}{bold} face below}, we find that $\mathcal{V}_+(I_P)/(\mathbb{R}_{>0}^v\times\mathbb{R}^f_{>0})$ is a one-dimensional space of projectively inequivalent realizations parametrized by slack matrices of the following form $$S_P(a) = \begin{bmatrix} 0&\bf{\textcolor{red}{1}}&0&0&0&0&\bf{\textcolor{red}{1}}&0&0&0&0&0 \\ 0&0&0&\bf{\textcolor{red}{1}}&\bf{\textcolor{red}{1}}&0&0&0&0&0&0&0 \\ 0&0&0&0&0&\bf{\textcolor{red}{1}}&\bf{\textcolor{red}{1}}&1&0&0&1&\bf{\textcolor{red}{1}} \\ \bf{\textcolor{red}{1}}&0&0&\bf{\textcolor{red}{1}}&0&\bf{\textcolor{red}{1}}&0&0&a&0&a&0 \\ \bf{\textcolor{red}{1}}&1&\bf{\textcolor{red}{1}}&0&0&0&0&0&{1}&1&0&0 \\ 0&0&1&0&\bf{\textcolor{red}{1}}&0&0&\bf{\textcolor{red}{1}}&0&a&0&a \\ 1&0&1&0&0&1&0&\bf{\textcolor{red}{1}}&0&0&0&0 \\ 0&0&0&0&0&0&0&0&\bf{\textcolor{red}{1}}&\bf{\textcolor{red}{1}}&\bf{\textcolor{red}{1}}&\bf{\textcolor{red}{1}} \end{bmatrix}.$$ If we wish to look at a representative of each equivalence class which is a true slack matrix, then we can scale the above to guarantee that $\mathbbm{1}$ is in the column space. 
\label{EQ:Amys5d} \end{example} \begin{remark} We have shown that $\mathcal{V}_+(I_P)$ is a natural model for the realization space of $P$, but it could be that $I_P$ is not the largest ideal vanishing on the Zariski closure of $\mathcal{V}_+(I_P)$. In other words, we have not proved that $\mathcal{V}_+(I_P)$ is Zariski dense in the slack variety $\mathcal{V}(I_P)$. Determining the vanishing ideal of $\mathcal{V}_+(I_P)$ would allow one to transfer invariants from the variety of this ideal to the realization space. For instance, whether one can compute the dimension of a realization space is an important and largely open question, and having the correct ideal would provide an algebraic tool for answering this question. \end{remark}
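The scaled slack ideal computations in Example~\ref{EX:square} lend themselves to a quick machine check. The sketch below (an illustration in sympy, not part of the original text) verifies that the unique $4$-minor of the scaled symbolic slack matrix is $x_8-1$, and that the parametrization $x_7=x_4/(x_3+x_4-1)$, $x_8=x_3/(x_3+x_4-1)$ annihilates every $4$-minor of the scaled affine slack matrix, as the description of the affine slack realization space requires.

```python
import sympy as sp
from itertools import combinations

x3, x4, x7, x8 = sp.symbols('x3 x4 x7 x8')

# Scaled symbolic slack matrix of the quadrilateral (Example EX:square).
# Rank <= d+1 = 3 forces its single 4-minor (the determinant) to vanish.
S = sp.Matrix([[0, 1, 1, 0],
               [0, 0, 1, 1],
               [1, 0, 0, 1],
               [1, x8, 0, 0]])
print(sp.factor(S.det()))                 # the generator of I_P^scaled

# Scaled affine slack matrix [S_P(x) 1]; again rank <= 3, so all 4-minors vanish
Sa = sp.Matrix([[0, 1, 1, 0, 1],
                [0, 0, x3, x4, 1],
                [1, 0, 0, 1, 1],
                [x7, x8, 0, 0, 1]])

# Substitute the parametrization of the affine slack realization space
sub = {x7: x4 / (x3 + x4 - 1), x8: x3 / (x3 + x4 - 1)}
minors = [Sa[:, list(cols)].det().subs(sub)
          for cols in combinations(range(5), 4)]
print(all(sp.simplify(m) == 0 for m in minors))
```

Here the saturation step is skipped: the minors themselves already lie in the (pre-saturation) ideal, so checking that they vanish under the parametrization is enough for this sanity check.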
\section{Introduction} Throughout the paper, we always assume that $H$ is a complex Hilbert space with inner product $\langle\cdot,\cdot\rangle$, $B(H)$ is the $C^*$--algebra of all bounded linear operators on $H$ and $\aa$ is a $C^*$--algebra with the unit $1$. Let $\aa_+$ denote the set of all positive elements in $\aa$. It is well--known that $\aa$ has a faithful representation $(\psi ,H_\psi )$ with $\psi (1)=I$ (cf. \cite[Theorem 1.6.17]{HL} or \cite[Theorem 1.5.36]{Xue}). For $T\in B(H)$, let $\Ran(T)$ (resp. $\Ker(T)$) denote the range (resp. kernel) of $T$. Let $V_1,V_2$ be closed subspaces in $H$ such that $H=V_1\dotplus V_2=V_1^\perp\dotplus V_2$, that is, $V_1$ and $V_2$ are in generic position (cf. \cite{Ha}). Let $P_i$ be the projection of $H$ onto $V_i$, $i=1,2$. Then $H=\Ran(P_1)\dotplus\Ran(P_2)=\Ran(I-P_1)\dotplus\Ran(P_2)$. In this case, Halmos gave very useful matrix representations of $P_1$ and $P_2$ in \cite{Ha}. Following Halmos' work, Sunder investigated in \cite{Su} the $n$--tuples of closed subspaces $(V_1,\cdots,V_n)$ in $H$ satisfying the condition $H=V_1\dotplus\cdots\dotplus V_n$. If we let $P_i$ be the projection of $H$ onto $V_i$, $i=1,\cdots,n$, then the condition $H=V_1\dotplus\cdots\dotplus V_n$ is equivalent to $H=\Ran(P_1)\dotplus\cdots\dotplus\Ran(P_n)$. This raises the question: when does the relation $H=\Ran(P_1)\dotplus\cdots\dotplus\Ran(P_n)$ hold for an $n$--tuple of projections $(P_1,\cdots,P_n)$? When $n=2$, Buckholdtz proved in \cite{BU} that $\Ran(P_1)\dotplus \Ran(P_2)=H$ iff $P_1-P_2$ is invertible in $B(H)$, iff $I-P_1P_2$ is invertible in $B(H)$, and iff $P_1+P_2-P_1P_2$ is invertible in $B(H)$. More information about two projections can be found in \cite{BS}. Koliha and Rako\v{c}evi\'{c} generalized Buckholdtz's work to the setting of $C^*$--algebras and rings.
They gave some equivalent conditions for the decompositions $\mathfrak R=P\,\mathfrak R\dotplus Q\,\mathfrak R$ or $\mathfrak R=\mathfrak R\,P\dotplus \mathfrak R\,Q$ in \cite {KO} and \cite{KO2} for idempotent elements $P$ and $Q$ in a unital ring $\mathfrak R$. They also characterized the Fredholmness of the difference of projections on $H$ in \cite{KO3}. For $n\ge 3$, the question remains open. But there are some works concerning this problem. For example, an estimate of the spectrum of a finite sum of projections on $H$ is given in \cite{BM}, and the $C^*$--algebra generated by certain projections is investigated in \cite{Sh} and \cite{V}. Let $\P_n(\aa)$ denote the set of $n$--tuples ($n\ge 2$) of non--trivial projections in $\aa$ and put $$ \PC_n(\aa)=\{(P_1,\cdots,P_n)\in\P_n(\aa)\,\vert\,P_1\aa\dotplus\cdots\dotplus P_n\aa=\aa\}. $$ It is worth noting that if $\aa=B(H)$ and $(P_1,\cdots,P_n)\in\P_n(B(H))$, then $(P_1,\cdots,P_n)$ $\in\PC_n(B(H))$ if and only if $\Ran(P_1)\dotplus\cdots\dotplus\Ran(P_n)=H$ (see Theorem \ref{th1} below). In this paper, we will investigate the set $\PC_n(\aa)$ for $n\ge 3$. The paper consists of four sections. In Section 1, we give some necessary and sufficient conditions for $(P_1,\cdots,P_n)\in\P_n(\aa)$ to be in $\PC_n(\aa)$. In Section 2, using some equivalent conditions for $(P_1,\cdots,P_n)\in\PC_n(\aa)$ obtained in \S 1, we obtain an explicit expression of $P_{i_1}\vee\cdots\vee P_{i_k}$ for $\{i_1,\cdots,i_k\}\subset\{1,\cdots,n\}$. We discuss the perturbation problems for $(P_1,\cdots,P_n)\in\PC_n(\aa)$ in Section 3. There we find an interesting result: if $\pn\in\P_n(\aa)$ with $A=\sum\limits_{i=1}^nP_i$ invertible in $\aa$, then $\|P_iA^{-1}P_j\|<\big[(n-1)\|A^{-1}\|\|A\|^2\big]^{-1}$, $i\not=j$, implies $P_iA^{-1}P_j=0$, $i\not=j$, $i,j=1,\cdots,n$.
We also show in that section that for given $\ep\in (0,1)$, if $\pn\in\P_n(\aa)$ satisfies the condition $\|P_iP_j\|<\ep$, $i\not=j$, then there exists an $n$--tuple of mutually orthogonal projections $(P'_1,\cdots,P'_n)\in\P_n(\aa)$ such that $\|P_i-P'_i\|<2(n-1)\ep$, $\inn,$ which improves the conventional estimate $\|P_i-P'_i\|<(12)^{n-1}n!\ep$, $i=1,\cdots,n$ (cf. \cite{HL}). In the final section, we will study the topological properties and equivalence relations on $\PC_n(\aa)$. \section{Equivalent conditions for complete $n$--tuples of projections in $C^*$--algebras} Let $GL(\aa)$ (resp. $U(\aa)$) denote the group of all invertible (resp. unitary) elements in $\aa$. Let $\m_k(\aa)$ denote the algebra of all $k\times k$ matrices over $\aa$. For any $a\in\aa$, we set $a\,\aa=\{ax\vert\,x\in\aa\} \subset\aa$. \begin{definition}\label{1da} An $n$--tuple of projections $(P_1,\cdots,P_n)$ in $\aa$ is called complete in $\aa$ if $(P_1,\cdots,P_n)\in\PC_n(\aa)$. \end{definition} \begin{theorem}\label{th1} Let $(P_1,\cdots,P_n)\in\P_n(\aa)$. Then the following statements are equivalent: \begin{enumerate} \item[$(1)$] $(P_1,\cdots,P_n)$ is complete in $\aa$. \item[$(2)$] $H_\psi =\Ran(\psi (P_1))\dotplus\cdots\dotplus\Ran(\psi (P_n))$ for any faithful representation $(\psi , H_\psi )$ of $\aa$ with $\psi (1)=I$. \item[$(3)$] $H_\psi =\Ran(\psi (P_1))\dotplus\cdots\dotplus\Ran(\psi (P_n))$ for some faithful representation $(\psi , H_\psi )$ of $\aa$ with $\psi (1)=I$. \item[$(4)$] $\sum\limits_{j\not=i}P_j+\lambda P_i\in GL(\aa)$, $i=1,2,\cdots,n$ and $\forall\,\lambda\in [1-n,0)$. \item[$(5)$] $\lambda\big(\sum\limits_{j\not=i}P_j\big)+ P_i\in GL(\aa)$ for $1\le i\le n$ and all $\lambda\in\mathbb C\backslash\{0\}$. \item[$(6)$] $A=\sum\limits_{i=1}^nP_i\in GL(\aa)$ and $P_iA^{-1}P_i=P_i$, $i=1,\cdots,n$. \item[$(7)$] $A=\sum\limits_{i=1}^nP_i\in GL(\aa)$ and $P_iA^{-1}P_j=0$, $i\not=j$, $i,j=1,\cdots,n$.
\item[$(8)$] there is an $n$-tuple of idempotent elements ${(E_1,\cdots,E_n)}$ in $\aa$ such that $E_iP_i=P_i,\,P_iE_i=E_i$, $i=1,\cdots,n$ and $E_iE_j=0,\ i\not=j,\ i,j=1,\cdots,n,\ \sum\limits_{i=1}^nE_i=1.$ \end{enumerate} \end{theorem} In order to show Theorem \ref{th1}, we need the following lemmas. \begin{lemma}\label{1L1} Let $B,\,C\in\aa_+\backslash\{0\}$ and suppose that $\lambda B+C$ is invertible in $\aa$ for every $\lambda\in \mathbb R\backslash\{0\}$. Then there is a non--trivial orthogonal projection $P\in\aa$ such that $$ B=(B+C)^{1/2}P(B+C)^{1/2},\quad C=(B+C)^{1/2}(1-P)(B+C)^{1/2}. $$ \end{lemma} \begin{proof}Put $D=B+C$ and $D_\lambda=\lambda B+C$, $\forall\,\lambda\in\mathbb R\backslash\{0\}$. Then $D\ge 0$, and $D$ and $D_\lambda$ are invertible in $\aa$, $\forall\,\lambda\in\mathbb R\backslash\{0\}$. Put $B_1=D^{-1/2}BD^{-1/2}$, $C_1=D^{-1/2}CD^{-1/2}$. Then $B_1+C_1=1$ and $$ D^{-1/2}D_\lambda D^{-1/2}=\lambda B_1+C_1=\lambda +(1-\lambda)C_1=(1-\lambda)(\lambda(1-\lambda)^{-1}+C_1) $$ is invertible in $\aa$ for any $\lambda\in\mathbb R\backslash\{0,1\}$. Since $\lambda\mapsto\dfrac{\lambda}{1-\lambda}$ is a homeomorphism from $\mathbb R\backslash\{0,1\}$ onto $\mathbb R\backslash\{-1,0\}$, it follows that $\sigma(C_1) \subset\{0,1\}$. Note that $B_1$ and $C_1$ are both non--zero. So $\sigma(C_1)=\{0,1\}=\sigma(B_1)$ and hence $P=B_1$ is a non--zero projection in $\aa$ and $B=D^{1/2}PD^{1/2}$, $C=D^{1/2}(1-P)D^{1/2}$. \end{proof} \begin{lemma}\label{1L2} Let $B,\,C\in\aa_+\backslash\{0\}$. Then the following statements are equivalent: \begin{enumerate} \item[$(1)$] for any non--zero real number $\lambda$, $\lambda B+C$ is invertible in $\aa$. \item[$(2)$] $B+C$ is invertible in $\aa$ and $B(B+C)^{-1}B=B$. \item[$(3)$] $B+C$ is invertible in $\aa$ and $B(B+C)^{-1}C=0$. \item[$(4)$] $B+C$ is invertible in $\aa$ and for any $B',\,C'\in\aa_+$ with $B'\le B$ and $C'\le C$, $B'(B+C)^{-1}C'=0$.
\end{enumerate} \end{lemma} \begin{proof} (1)$\Rightarrow$(2) By Lemma \ref{1L1}, there is a non--zero projection $P$ in $\aa$ such that $B=D^{1/2}PD^{1/2}$, $C=D^{1/2}(1-P)D^{1/2}$, where $D=B+C\in GL(\aa)$. So $$ B(B+C)^{-1}B=D^{1/2}PD^{1/2}D^{-1}D^{1/2}PD^{1/2}=B. $$ The assertion (2) $\Leftrightarrow$ (3) follows from $$ B(B+C)^{-1}B=B(B+C)^{-1}(B+C-C)=B-B(B+C)^{-1}C. $$ (3) $\Rightarrow$ (4) For any $C'$ with $0\le C'\le C$, since $$ 0\le B(B+C)^{-1}C'(B+C)^{-1}B\le B(B+C)^{-1}C(B+C)^{-1}B=0, $$ we have $B(B+C)^{-1}C'=B(B+C)^{-1}C'^{1/2}C'^{1/2}=0.$ This implies $C'(B+C)^{-1}B=0$. In the same way, we get that for any $B'$ with $0\le B'\le B$, $C'(B+C)^{-1}B'=0$. (4)$\Rightarrow$(3) is obvious. (2)$\Rightarrow$(1) Set $X=(B+C)^{-1/2}B$ and $Y=(B+C)^{-1/2}C$. Then $X,\,Y\in\aa$, $X+Y=(B+C)^{1/2}$ and, by (2), $X^*X=B(B+C)^{-1}B=B$. Thus, for any $\lambda\in\mathbb R\backslash\{0\}$, \begin{align*} X+\lambda Y&=(B+C)^{-1/2}(B+\lambda C)\\ (X+\lambda Y)^*(X+\lambda Y)&=((1-\lambda)X+\lambda(B+C)^{1/2})^*((1-\lambda)X+\lambda(B+C)^{1/2})\\ &=(1-\lambda)^2 B+2\lambda(1-\lambda)B+\lambda^2(B+C)\\ &=B+\lambda^2 C \end{align*} and consequently, $(X+\lambda Y)^*(X+\lambda Y)\ge B+C$ if $|\lambda|>1$ and $(X+\lambda Y)^*(X+\lambda Y)\ge\lambda^2(B+C)$ when $0<|\lambda|<1$. This indicates that $(X+\lambda Y)^*(X+\lambda Y)$ is invertible in $\aa$. Noting that $B+C\ge\|(B+C)^{-1}\|^{-1}\cdot 1$, we have, for any $\lambda\in\mathbb R\backslash\{0\}$, \begin{align*} (B+\lambda C)^2&=(X+\lambda Y)^*(B+C)(X+\lambda Y)\\ &\ge\|(B+C)^{-1}\|^{-1}(X+\lambda Y)^*(X+\lambda Y). \end{align*} Therefore, $B+\lambda C$ is invertible in $\aa$, $\forall\,\lambda\in\mathbb R\backslash\{0\}$. \end{proof} Now we begin to prove Theorem \ref{th1}. (1)$\Rightarrow$(6) Statement (1) implies that there are $b_1,\cdots,b_n\in\aa$ such that $1=\sum\limits^n_{i=1}P_ib_i$.
Put $\hat I=\begin{bmatrix}1\\ \ &0\\ \ &\ &\ddots\\ \ &\ &\ \ &0\end{bmatrix}$, $X=\begin{bmatrix}P_1&\cdots& P_n\\ 0&\cdots &0\\ \vdots &\ddots&\vdots\\ 0&\cdots &0\end{bmatrix}$ and $Y=\begin{bmatrix}b_1&0&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ b_n&0&\cdots&0\end{bmatrix}$. Then $$ \hat I=XY=XYY^*X^*\le\|Y\|^2XX^*=\|Y\|^2\begin{bmatrix}\sum\limits^n_{i=1}P_i\\ \ &0\\ \ &\ &\ddots\\ \ &\ &\ \ &0\end{bmatrix} $$ so that $A=\sum\limits^n_{i=1}P_i$ is invertible in $\aa$. Therefore, from $\aa=P_1\aa\dotplus\cdots\dotplus P_n\aa$ and $$ P_i=P_1A^{-1}P_i+\cdots+P_iA^{-1}P_i+\cdots+P_nA^{-1}P_i=\underbrace{0+\cdots+0}_{i-1}+P_i+\underbrace{0+\cdots+0}_{n-i}, $$ $i=1,\cdots,n$, we get that $P_i=P_iA^{-1}P_i$, $i=1,\cdots,n$. (2)$\Rightarrow$(3) is obvious. (3)$\Rightarrow$(4) Set $Q_i=\psi (P_i)$, $i=1,\cdots,n$. From $H_\psi =\Ran(Q_1)\dotplus\cdots\dotplus \Ran(Q_n)$, we obtain idempotent operators $F_1,\cdots,F_n$ in $B(H_\psi )$ such that $\sum\limits^n_{i=1}F_i=I$, $F_iF_j=0$, $i\not=j$ and $F_iH_\psi =Q_iH_\psi $, $i,j=1,\cdots,n$. So $F_iQ_i=Q_i$, $Q_iF_i=F_i$ and $F_jQ_i=0$, $i\not=j$, $1\le i,j\le n$. Using these relations, it is easy to check that \begin{align*} \big(\sum_{i=1}^n\lambda_iQ_i\big)\big(\sum_{i=1}^n\lambda_i^{-1}F_i^*F_i\big)&=\sum_{i=1}^nF_i=I,\\ \big(\sum_{i=1}^n\lambda_i^{-1}F_i^*F_i\big)\big(\sum_{i=1}^n\lambda_iQ_i\big)&=\sum_{i=1}^nF_i^*=I, \end{align*} for any non--zero complex numbers $\lambda_i$, $i=1,\cdots,n$. Particularly, for any $\lambda\in [1-n,0)$, $$ \big(\lambda\big(\sum\limits_{j\not=i}Q_j\big)+ Q_i\big)^{-1}=\lambda^{-1}\sum_{j\not=i}F_j^*F_j+F_i^*F_i $$ in $B(H_\psi )$. Thus, $\lambda\big(\sum\limits_{j\not=i}Q_j\big)+Q_i$ is invertible in $\psi (\aa)$, $1\le i\le n$ by \cite[Corollary 1.5.8]{Xue}, so that $\lambda\big(\sum\limits_{j\not=i}P_j\big)+P_i\in GL(\aa)$ since $\psi $ is faithful and $\psi (1)=I$.
(4)$\Rightarrow$(5) Put $A_i(\lambda)=\sum\limits_{j\not=i}P_j+\lambda P_i$, $i=1,\cdots,n$, $\lambda\in\mathbb R\backslash\{0\}$, then \begin{align*} (A_i(\lambda))^2&\le 2\big(\sum\limits_{j\not=i}P_j\big)^2+2\lambda^2 P_i\le 2(n-1)\sum\limits_{j\not=i}P_j +2\lambda^2 P_i\\ &\le 2\max\{n-1,\lambda^2\}(P_1+\cdots+P_n). \end{align*} So the invertibility of $A_i(\lambda)$ in $\aa$ for all $\lambda\in [1-n,0)$ implies that $A=P_1+\cdots+P_n$ is invertible in $\aa$. Consequently, $A_i(\lambda)$ is invertible in $\aa$ when $\lambda>0$, $\forall\,1\le i\le n$. Now we show that $A_i(\lambda)$ is invertible in $\aa$ for $i=1,\cdots,n$ and $\lambda <1-n$. Put $$ A_{1i}=P_iAP_i,\ A_{2i}=P_iA(1-P_i),\ A_{4i}=(1-P_i)A(1-P_i),\ i=1,\cdots,n. $$ Express $A_i(\lambda)$ in the form $A_i(\lambda)=\begin{bmatrix}A_{1i}+(\lambda-1)P_i&A_{2i}\\ A_{2i}^*&A_{4i} \end{bmatrix}$, $i=1,\cdots,n$. Noting that $A_{4i}$ is invertible in $(1-P_i)\aa(1-P_i)$ ($A\ge\|A^{-1}\|^{-1}\cdot 1$, so $A_{4i}\ge\|A^{-1}\|^{-1}(1-P_i)$) and $$ A_i(\lambda)\begin{bmatrix}P_i&0\\ -A_{4i}^{-1}A_{2i}^*&1-P_i\end{bmatrix} =\begin{bmatrix}A_{1i}-A_{2i}A_{4i}^{-1}A_{2i}^*+(\lambda-1)P_i&A_{2i}\\ 0&A_{4i}\end{bmatrix}, $$ we get that $A_i(\lambda)$ is invertible iff $A_{1i}-A_{2i}A_{4i}^{-1}A_{2i}^*+(\lambda-1)P_i$ is invertible in $P_i\aa P_i$, $i=1,\cdots,n$. Since $A_{1i}\le nP_i$, it follows that $$ -A_{1i}+A_{2i}A_{4i}^{-1}A_{2i}^*-(\lambda-1)P_i\ge (1-n-\lambda)P_i+A_{2i}A_{4i}^{-1}A_{2i}^*\ge(1-n-\lambda)P_i $$ when $\lambda<1-n$, $i=1,\cdots,n$. Therefore, $A_i(\lambda)$ is invertible in $\aa$ for $\lambda<1-n$ and $i=1,\cdots,n$. Applying Lemma \ref{1L2} to $\sum\limits_{j\not=i}P_j$ and $P_i$, $1\le i\le n$, we can get the implications (5)$\Rightarrow$(6) and (6)$\Rightarrow$ (7) easily. (7)$\Rightarrow$(8) Set $E_i=P_iA^{-1}$, $i=1,\cdots,n$. Then $E_i$ is an idempotent element in $\aa$ and $E_iE_j=0$, $i\not=j$, $i,j=1,\cdots,n$.
It is obvious that $\sum\limits_{i=1}^n E_i=1$ and $P_iE_i=E_i$, $E_iP_i=P_i$, $i=1,\cdots,n$. (8)$\Rightarrow$(1) Let $E_1,\cdots,E_n$ be idempotent elements in $\aa$ such that $E_iE_j=\delta_{ij}E_i$, $\sum\limits^n_{i=1}E_i=1$ and $E_iP_i=P_i$, $P_iE_i=E_i$, $i,j=1,\cdots,n$. Then $E_i\aa=P_i\aa$, $i=1,\cdots,n$ and $\aa=E_1\aa\dotplus\cdots\dotplus E_n\aa=P_1\aa\dotplus\cdots\dotplus P_n\aa$. (8)$\Rightarrow$(2) Let $E_1,\cdots,E_n$ be idempotent elements in $\aa$ such that $E_iE_j=\delta_{ij}E_i$, $\sum\limits^n_{i=1}E_i=1$ and $E_iP_i=P_i$, $P_iE_i=E_i$, $i,j=1,\cdots,n$. Let $(\psi ,H_\psi )$ be any faithful representation of $\aa$ with $\psi (1)=I$. Put $E_i'=\psi (E_i)$ and $Q_i=\psi (P_i)$, $i=1,\cdots,n$. Then $E_i'E_j'=\delta_{ij}E_i'$, $\sum\limits^n_{i=1}E_i'=I$ and $\Ran(E_i')=\Ran(Q_i)$, $i,j=1,\cdots,n$. Consequently, $H_\psi =\Ran(Q_1)\dotplus\cdots\dotplus\Ran(Q_n)$. \qed \begin{remark}\label{rem1a} {\rm (1) Statement (4) in Theorem \ref{th1} can not be replaced by ``for any $1\le i\le n$, $P_i-\sum\limits_{j\not=i}P_j$ is invertible". For example, let $H^{(4)}=\bigoplus\limits^4_{i=1}H$ and put $\aa=B(H^{(4)})$, $$ P_1=\begin{bmatrix}I\\ \ & I\\ \ &\ &0\\ \ &\ &\ & 0\end{bmatrix},\quad P_2=\begin{bmatrix}I\\ \ & 0\\ \ &\ &I\\ \ &\ &\ &0\end{bmatrix},\quad P_3=\begin{bmatrix}I\\ \ &0\\ \ &\ &0\\ \ &\ &\ & I\end{bmatrix}. $$ Clearly, $P_i-\sum\limits_{j\not=i}{P_j}$ is invertible, $1\le i\le 3$, but $P_2+P_3-2P_1$ is not invertible, that is, $(P_1,P_2,P_3)$ is not complete in $\aa$. (2) According to the proof of (4)$\Rightarrow$(5) of Theorem \ref{th1}, we see that for $(P_1,\cdots,P_n)$ $\in\P_n(\aa)$, if $\sum\limits^n_{i=1}P_i\in GL(\aa)$, then $\sum\limits_{j\not=i}P_j-\lambda P_i\in GL(\aa)$, $\forall\,1\le i\le n$ and $\lambda>n-1$. } \end{remark} \begin{corollary}[{\cite[Theorem 1]{BU}}] Let $P_1,P_2$ be non--trivial projections in $B(H)$. Then $H=\Ran(P_1)\dotplus\Ran(P_2)$ iff $P_1-P_2$ is invertible in $B(H)$.
\end{corollary} \begin{proof} By Theorem \ref{th1}, $H=\Ran(P_1)\dotplus\Ran(P_2)$ implies that $P_1-P_2\in GL(B(H))$. Conversely, if $P_1-P_2\in GL(B(H))$, then from $2(P_1+P_2)\ge(P_1-P_2)^2$, we get that $P_1+P_2\in GL(B(H))$ and so that $P_1-\lambda P_2, P_2-\lambda P_1\in GL(B(H))$, $\forall\,\lambda>1$ by Remark \ref{rem1a} (2). Thus, for any $\lambda\in (0,1]$, $P_1-\lambda P_2$ and $P_2-\lambda P_1$ are invertible in $B(H)$. Consequently, $H=\Ran(P_1)\dotplus\Ran(P_2)$ by Theorem \ref{th1}. \end{proof} \section{Some representations concerning the complete $n$--tuple of projections} We first state two lemmas that are frequently used in this section and the later sections. \begin{lemma}\label{lem3a} Let $B\in\aa_+$ be such that $0\in\sigma(B)$ is an isolated point. Then there is a unique element $B^\dag\in\aa_+$ such that $$ BB^\dag B=B,\ B^\dag BB^\dag=B^\dag,\ BB^\dag=B^\dag B. $$ \end{lemma} \begin{proof} The assertion follows from Proposition 3.5.8, Proposition 3.5.3 and Lemma 3.5.1 of \cite{Xue}. \end{proof} \begin{remark} {\rm The element $B^\dag$ in Lemma \ref{lem3a} is called the Moore--Penrose inverse of $B$. When $0\not\in\sigma(B)$, $B^\dag\triangleq B^{-1}$. More details can be found in \cite{Xue}. } \end{remark} The following lemma comes from \cite[Lemma 3.5.5]{Xue} and \cite[Lemma 1]{CX}: \begin{lemma}\label{lem3b} Let $P\in\aa$ be an idempotent element. Then \begin{enumerate} \item[$(1)$] $P+P^*-1\in GL(\aa)$. \item[$(2)$] $R=P(P+P^*-1)^{-1}$ is a projection in $\aa$ satisfying $PR=R$ and $RP=P$. \end{enumerate} Moreover, if $R'\in\aa$ is a projection such that $PR'=R'$ and $R'P=P$, then $R'=R$. \end{lemma} Let $(P_1,\cdots,P_n)\in\PC_n(\aa)$ and put $A=\sum\limits^n_{i=1}P_i$. By Theorem \ref{th1}, $A\in GL(\aa)$ and $E_i=P_iA^{-1}$, $1\le i\le n$ are idempotent elements satisfying the conditions $$ E_iE_j=0,\ i\not=j,\ E_iP_i=P_i,\ P_iE_i=E_i,\ i=1,\cdots,n,\ \text{and}\ \sum\limits^n_{i=1}E_i=1.
$$ By Lemma \ref{lem3b}, $P_i=E_i(E_i^*+E_i-1)^{-1}$, $1\le i\le n$. So the $C^*$--algebra $C^*(P_1,\cdots,P_n)$ generated by $P_1,\cdots,P_n$ is equal to the $C^*$--algebra $C^*(E_1,\cdots,E_n)$ generated by $E_1,\cdots,E_n$. Put $Q_i=A^{-1/2}P_iA^{-1/2}$, $i=1,\cdots,n$. Then $Q_iQ_j=\delta_{ij}Q_i$ by Theorem \ref{th1}, $i,j=1,\cdots,n$ and $\sum\limits_{i=1}^nQ_i=1$. Thus, \begin{equation}\label{3eqa} P_i=A^{1/2}Q_iA^{1/2}\ \text{and}\ E_i=P_iA^{-1}=A^{1/2}Q_iA^{-1/2},\ i=1,\cdots,n. \end{equation} \begin{proposition}\label{prop3a} Let $(P_1,\cdots,P_n)\in\PC_n(\aa)$ with $A=\sum\limits^n_{i=1}P_i$. Then for any $\lambda_i\not=0$, $i=1,\cdots,n$, $\big(\sum\limits_{i=1}^n\lambda_iP_i\big)^{-1}=A^{-1}\big(\sum\limits_{i=1}^n\lambda_i^{-1}P_i\big)A^{-1}.$ \end{proposition} {\it\noindent Proof.}\ \,Keeping the symbols as above. We have $\sum\limits_{i=1}^n\lambda_iP_i=A^{1/2}\big(\sum\limits^n_{i=1}\lambda_i Q_i\big)A^{1/2}$. Thus, $$ \hspace{1.7cm}\big(\sum\limits_{i=1}^n\lambda_iP_i\big)^{-1}=A^{-1/2}\big(\sum\limits^n_{i=1}\lambda_i^{-1} Q_i\big)A^{-1/2} =A^{-1}\big(\sum_{i=1}^n\lambda_i^{-1}P_i\big)A^{-1}.\hspace{1cm}\qed $$ Now for $i_1,i_2,\cdots,i_k\in\{1,2,\cdots,n\}$ with $i_1<i_2<\cdots<i_k$, put $A_0=\sum\limits^k_{r=1}P_{i_r}$ and $Q_0=\sum\limits^k_{r=1}Q_{i_r}$. Then $A_0,Q_0\in\aa$ and $Q_0$ is a projection. From (\ref{3eqa}), $A_0=A^{1/2}Q_0A^{1/2}$. Thus, $\sigma(A_0)\backslash\{0\} =\sigma(Q_0AQ_0)\backslash\{0\}$ (cf. \cite[Proposition 1.4.14]{Xue}). Since $Q_0AQ_0$ is invertible in $Q_0\aa Q_0$, it follows that $0\in\sigma(Q_0AQ_0)$ is an isolated point and so that $0\in\sigma(A_0)$ is also an isolated point. So we can define $P_{i_1}\vee\cdots\vee P_{i_k}$ to be the projection $A_0^\dag A_0\in\aa$ by Lemma \ref{lem3a}. 
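In finite dimensions the projection $A_0^\dag A_0$ is easy to experiment with numerically. The following sketch (a toy illustration with plain numpy on $\mathbb{R}^2$, not from the text) checks condition (7) of Theorem~\ref{th1} for a complete pair of projections and computes the join $P_1\vee P_2=A_0^\dag A_0$ in two simple cases.

```python
import numpy as np

# Two projections on R^2 with complementary (but non-orthogonal) ranges
P1 = np.array([[1.0, 0.0],
               [0.0, 0.0]])            # orthogonal projection onto span{e_1}
v = np.array([1.0, 1.0]) / np.sqrt(2.0)
P2 = np.outer(v, v)                    # orthogonal projection onto span{(1,1)}

A = P1 + P2                            # invertible, so (P1, P2) is complete
Ainv = np.linalg.inv(A)

# Condition (7) of Theorem th1: P_i A^{-1} P_j = 0 for i != j
print(np.allclose(P1 @ Ainv @ P2, 0.0))

# The join P_1 v P_2 = A^dag A; here A is invertible, so the join is the identity
join = np.linalg.pinv(A) @ A
print(np.allclose(join, np.eye(2)))

# A degenerate case: A_0 = P_1 + P_1 gives the join P_1 v P_1 = P_1
A0 = P1 + P1
print(np.allclose(np.linalg.pinv(A0) @ A0, P1))
```

The Moore--Penrose inverse used here is the finite-dimensional counterpart of $B^\dag$ from Lemma~\ref{lem3a}; the three printed checks all come out true.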
This definition is reasonable: if $P\in\aa$ is a projection such that $P\ge P_{i_r}$, $r=1,\cdots,k$, then $PA_0=A_0$ and hence $PA_0A_0^\dag=A_0A_0^\dag$, i.e., $P\ge P_{i_1}\vee\cdots\vee P_{i_k}$. On the other hand, since $A_0\ge P_{i_r}$, we have $$ 0=(1-A_0^\dag A_0)A_0(1-A_0^\dag A_0)\ge(1-A_0^\dag A_0)P_{i_r}(1-A_0^\dag A_0) $$ and consequently, $P_{i_r}(1-A_0^\dag A_0)=0$, that is, $P_{i_r}\le P_{i_1}\vee\cdots\vee P_{i_k}$, $r=1,\cdots,k$. \begin{proposition} \label{prop3b} Let $(P_1,\cdots,P_n)\in\PC_n(\aa)$ with $A=\sum\limits^n_{i=1}P_i$. Let $i_1,\cdots,i_k$ be as above and $\{j_1,\cdots,j_l\}=\{1,\cdots,n\}\backslash\{i_1,\cdots,i_k\}$ with $j_1<\cdots<j_l$. Then \begin{align} \label{3eqb} P_{i_1}\vee\cdots \vee P_{i_k}&= A^{1/2}\big[\big(\sum\limits_{r=1}^k Q_{i_r}\big)A\big(\sum\limits_{r=1}^k Q_{i_r}\big)\big]^{-1}A^{1/2}\\ \label{3eqc} &=\big(\sum_{r=1}^kP_{i_r}\big)\big[\big(\sum_{r=1}^kP_{i_r}\big)^2+\sum_{t=1}^lP_{j_t}\big]^{-1} \big(\sum_{r=1}^kP_{i_r}\big). \end{align} \end{proposition} \begin{proof} We use the symbols $P_i, Q_i, E_i$ as above. According to (\ref{3eqa}), $$ \sum\limits_{r=1}^k P_{i_r}=A^{1/2}\big(\sum\limits_{r=1}^k Q_{i_r}\big)A^{1/2},\ \sum\limits_{r=1}^k E_{i_r}=A^{1/2}\big(\sum\limits_{r=1}^k Q_{i_r}\big)A^{-1/2}. $$ Thus $\big(\sum\limits_{r=1}^k E_{i_r}\big)\big(\sum\limits_{r=1}^k P_{i_r}\big)=\sum\limits_{r=1}^k P_{i_r}$ and $\sum\limits_{r=1}^k E_{i_r}=\big(\sum\limits_{r=1}^k P_{i_r}\big)A^{-1}$. Then we have $$ \big(\sum\limits_{r=1}^k E_{i_r}\big)P_{i_1}\vee\cdots \vee P_{i_k}=P_{i_1}\vee\cdots \vee P_{i_k},\quad P_{i_1}\vee\cdots \vee P_{i_k}\big(\sum\limits_{r=1}^k E_{i_r}\big)=\sum\limits_{r=1}^k E_{i_r}, $$ according to the definition of $P_{i_1}\vee\cdots \vee P_{i_k}$.
Since $\sum\limits_{r=1}^k E_{i_r}$ is an idempotent element in $\aa$, it follows from Lemma \ref{lem3b} that \begin{equation}\label{3eqd} P_{i_1}\vee\cdots \vee P_{i_k}=\big(\sum\limits_{r=1}^k E_{i_r}\big) \big[\sum\limits_{r=1}^k (E_{i_r}^*+E_{i_r})-1\big]^{-1}\in\aa. \end{equation} Noting that $\big(\sum\limits_{r=1}^k Q_{i_r}\big)A\big(\sum\limits_{r=1}^k Q_{i_r}\big)\in GL\big(\big(\sum\limits_{r=1}^k Q_{i_r}\big)\aa\big(\sum\limits_{r=1}^k Q_{i_r}\big)\big)$; $\big(\sum\limits_{t=1}^l Q_{j_t}\big)A\big(\sum\limits_{t=1}^l Q_{j_t}\big)$ is invertible in $\big(\sum\limits_{t=1}^l Q_{j_t}\big)\aa\big(\sum\limits_{t=1}^l Q_{j_t}\big)$ and \begin{align*} \sum\limits_{r=1}^k (E_{i_r}^*+E_{i_r})-1&=A^{-1/2}\big[\big(\sum\limits_{r=1}^k Q_{i_r}\big)A+ A\big(\sum\limits_{r=1}^k Q_{i_r}\big)-A\big]A^{-1/2}\\ &=A^{-1/2}\big[\big(\sum\limits_{r=1}^k Q_{i_r}\big)A\big(\sum\limits_{r=1}^k Q_{i_r}\big)\!-\! \big(\sum\limits_{t=1}^l Q_{j_t}\big)A\big(\sum\limits_{t=1}^l Q_{j_t}\big)\big]A^{-1/2}, \end{align*} we obtain that \begin{align*} \big[\sum\limits_{r=1}^k (E_{i_r}^*&+E_{i_r})-1\big]^{-1}\\ &=A^{1/2}\big[\big[\big(\sum\limits_{r=1}^k Q_{i_r}\big)A\big(\sum\limits_{r=1}^k Q_{i_r}\big)\big]^{-1}\!-\! \big[\big(\sum\limits_{t=1}^l Q_{j_t}\big)A\big(\sum\limits_{t=1}^l Q_{j_t}\big)\big]^{-1}\big]A^{1/2}. \end{align*} Combining this with (\ref{3eqd}), we can get (\ref{3eqb}). Note that $\sum\limits_{r=1}^kP_{i_r}=A^{1/2}\big(\sum\limits_{r=1}^kQ_{i_r}\big)A^{1/2}$, $\sum\limits_{t=1}^lP_{j_t}=A^{1/2}\big(\sum\limits_{t=1}^lQ_{j_t}\big)A^{1/2}$ and $ \big(\sum\limits_{r=1}^kP_{i_r}\big)^2$ $=A^{1/2}\big(\sum\limits_{r=1}^kQ_{i_r}\big)A\big(\sum\limits_{r=1}^kQ_{i_r}\big) A^{1/2}.
$ Therefore, \begin{align*} \big(&\sum_{r=1}^kP_{i_r}\big)\big[\big(\sum_{r=1}^kP_{i_r}\big)^2+\sum_{t=1}^lP_{j_t}\big]^{-1} \big(\sum_{r=1}^kP_{i_r}\big)\\ &=A^{1/2}\big(\sum\limits_{r=1}^kQ_{i_r}\big)\big(\big[\big(\sum\limits_{r=1}^kQ_{i_r}\big)A \big(\sum\limits_{r=1}^kQ_{i_r}\big)\big]^{-1}+\sum\limits_{t=1}^lQ_{j_t}\big)\big(\sum\limits_{r=1}^kQ_{i_r}\big)A^{1/2}\\ &=P_{i_1}\vee\cdots \vee P_{i_k} \end{align*} by (\ref{3eqb}). \end{proof} \section{Perturbations of a complete $n$--tuple of projections} \setcounter{equation}{0} Recall that for any non--zero operator $C\in B(H)$, the reduced minimum modulus $\gamma(C)$ is given by $\gamma(C)=\inf\{\|Cx\|\,\vert\,x\in(\Ker(C))^\perp,\,\|x\|=1\}$ (cf. \cite[Remark 1.2.10]{Xue}). We list some properties of the reduced minimum modulus in the following lemma. \begin{lemma}[\rm cf. \cite{Xue}]\label{lem4b} Let $C\in B(H)\backslash\{0\}$. Then \begin{enumerate} \item[$(1)$] $\|Cx\|\ge\gamma(C)\|x\|$, $\forall\,x\in(\Ker(C))^\perp$. \item[$(2)$] $\gamma(C)=\inf\{\lambda\,\vert\,\lambda\in\sigma(|C|)\backslash\{0\}\}$, where $|C|=(C^*C)^{1/2}$. \item[$(3)$] $\gamma(C)>0$ iff $\Ran(C)$ is closed iff $0$ is an isolated point of $\sigma(|C|)$ whenever $0\in\sigma(|C|)$. \item[$(4)$] $\gamma(C)=\|C^{-1}\|^{-1}$ when $C$ is invertible. \item[$(5)$] $\gamma(C)\ge\|B\|^{-1}$ when $CBC=C$ for $B\in B(H)\backslash\{0\}$. \end{enumerate} \end{lemma} For $a\in\aa_+$, put $\beta(a)=\inf\{\lambda\vert\,\lambda\in\sigma(a)\backslash\{0\}\}$. Combining Lemma \ref{lem4b} with a faithful representation of $\aa$, we obtain \begin{corollary}\label{coro4a0} Let $a\in\aa_+$. Then \begin{enumerate} \item[$(1)$] $\beta(a)>0$ if and only if $0\in\sigma(a)$ is isolated when $a\not\in GL(\aa)$. \item[$(2)$] $\beta(c)\ge \|c\|^{-1}$ when $aca=a$ for some $c\in\aa_+\backslash\{0\}$. \end{enumerate} \end{corollary} Let $\mathcal E$ be a $C^*$--subalgebra of $B(H)$ with the unit $I$. 
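In a finite-dimensional faithful representation, item (2) of Lemma \ref{lem4b} identifies $\gamma(C)$ with the smallest nonzero singular value of the matrix $C$, so the quantity is easy to compute numerically. The sketch below is an illustration only (it is not used in any proof); the test matrix is an arbitrary example with prescribed singular values:

```python
import numpy as np

def reduced_min_modulus(C, tol=1e-10):
    # gamma(C) = inf( sigma(|C|) \ {0} ): the smallest nonzero singular value of C.
    s = np.linalg.svd(C, compute_uv=False)
    return s[s > tol].min()

# A 3x3 matrix with singular values {3, 2, 0}, built as C = U diag(3, 2, 0) V^T.
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.normal(size=(3, 3)))
V, _ = np.linalg.qr(rng.normal(size=(3, 3)))
C = U @ np.diag([3.0, 2.0, 0.0]) @ V.T

gamma = reduced_min_modulus(C)  # the smallest *nonzero* singular value, ~ 2.0

# Property (1) of the lemma: ||Cx|| >= gamma(C) ||x|| for x in (Ker C)^perp,
# which here is spanned by the first two columns of V.
x = V[:, 0] + V[:, 1]
assert np.linalg.norm(C @ x) >= gamma * np.linalg.norm(x) - 1e-9
```

In particular, for a positive semidefinite matrix $a$ the quantity $\beta(a)$ of Corollary \ref{coro4a0} is its smallest nonzero eigenvalue.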
Let $(T_1,\cdots,T_n)$ be an $n$--tuple of positive operators in $\mathcal E$ with $\Ran(T_i)$ closed, $i=1,\cdots,n$. Put $H_0=\bigoplus\limits^n_{i=1}\Ran(T_i)\subset\bigoplus\limits^n_{i=1}H\triangleq\hat H$ and $H_1=\bigoplus\limits^n_{i=1}\Ker(T_i)\subset\hat H$. Since $H=\Ran(T_i)\oplus\Ker(T_i)$, $i=1,\cdots,n$, it follows that $H_0\oplus H_1=\hat H$. Put $T_{ij}=T_iT_j\big\vert_{\Ran(T_j)}$, $i,j=1,\cdots,n$ and set \begin{equation}\label{4eqa} T=\begin{bmatrix}T_1^2&T_1T_2&\cdots&T_1T_n\\ T_2T_1&T_2^2&\cdots&T_2T_n\\ \cdots&\cdots&\cdots&\cdots\\ T_nT_1&T_nT_2&\cdots&T_n^2\end{bmatrix}\in\m_n(\mathcal E),\ \hat T=\begin{bmatrix}T_{11}&T_{12}&\cdots&T_{1n}\\ T_{21}&T_{22}& \cdots & T_{2n}\\ \cdots&\cdots&\cdots&\cdots\\ T_{n1}& T_{n2}&\cdots& T_{nn}\end{bmatrix}\in B(H_0). \end{equation} Clearly, $H_1\subset\Ker(T)$ and it is easy to check that $\Ker(T)=H_1$ when $\Ker(\hat T)=\{0\}$. Thus, in this case, $T$ can be expressed as $T=\begin{bmatrix}\hat T&0\\ 0&0\end{bmatrix}$ with respect to the orthogonal decomposition $\hat H=H_0\oplus H_1$ and consequently, $\sigma(T)=\sigma(\hat T)\cup\{0\}$. \begin{lemma}\label{lem4a} Let $(T_1,\cdots,T_n)$ be an $n$--tuple of positive operators in $\mathcal E$ with $\Ran(T_i)$ closed, $i=1,\cdots,n$. Let $H_0,H_1,\hat H$ be as above and $T,\,\hat T$ be given in (\ref{4eqa}). Suppose that $\hat T$ is invertible in $B(H_0)$. Then \begin{enumerate} \item[$(1)$] $\sigma(\hat T)=\sigma\big(\sum\limits^n_{i=1}T_i^2\big)\backslash\{0\}$. \item[$(2)$] $0$ is an isolated point in $\sigma\big(\sum\limits^n_{i=1}T_i\big)$ if $0\in\sigma\big(\sum\limits^n_{i=1}T_i\big)$. \item[$(3)$] $\{T_1a_1,\cdots,T_na_n\}$ is linearly independent for any $a_1,\cdots,a_n\in\mathcal E$ with $T_ia_i\not=0$, $i=1,\cdots,n$. \end{enumerate} \end{lemma} \begin{proof} (1) Put $Z=\begin{bmatrix}T_1&\cdots&T_n\\ 0&\cdots&0\\ \vdots&\ddots&\vdots\\ 0&\cdots&0\end{bmatrix}\in\m_n(\mathcal E)$. 
Then $ZZ^*=\begin{bmatrix}\sum\limits^n_{i=1}T_i^2&\\ \ &0\\ \ &\ &\ddots\\ \ &\ &\ &0\end{bmatrix}$ and $Z^*Z=T$. Thus, $\sigma\big(\sum\limits^n_{i=1}T_i^2\big)\backslash\{0\}=\sigma(T)\backslash\{0\}=\sigma(\hat T)$. (2) According to (1), $0$ is an isolated point of $\sigma\big(\sum\limits^n_{i=1}T_i^2\big)$ if $\sum\limits^n_{i=1}T_i^2$ is not invertible in $\mathcal E$. So by Lemma \ref{lem3a}, there is $G\in\mathcal E_+$ such that $$ \big(\sum\limits^n_{i=1}T_i^2\big)G\big(\sum\limits^n_{i=1}T_i^2\big)=\sum\limits^n_{i=1}T_i^2,\ G\big(\sum\limits^n_{i=1}T_i^2\big)G=G,\ \big(\sum\limits^n_{i=1}T_i^2\big)G=G\big(\sum\limits^n_{i=1}T_i^2\big). $$ Put $P_0=I-\big(\sum\limits^n_{i=1}T_i^2\big)G\in\mathcal E$. Then $P_0$ is a projection with $\Ran(P_0)=\Ker\big(\sum\limits^n_{i=1}T_i^2\big)$. Noting that $\Ker\big(\sum\limits^n_{i=1}T_i^2\big)= \Ker\big(\sum\limits^n_{i=1}T_i\big)=\bigcap\limits^n_{i=1}\Ker(T_i)$, $\sum\limits^n_{i=1}T_i^2\in GL((I-P_0)\mathcal E(I-P_0))$ with the inverse $G$ and $\sum\limits^n_{i=1}T^2_i\le(\max\limits_{1\le i\le n}\|T_i\|) \sum\limits^n_{i=1}T_i$, we get that $\sum\limits^n_{i=1}T_i$ is invertible in $(I-P_0)\mathcal E(I-P_0)$. Thus, $0$ is an isolated point of $\sigma\big(\sum\limits^n_{i=1}T_i\big)$ when $0\in\sigma\big(\sum\limits^n_{i=1}T_i\big)$. (3) By Lemma \ref{lem4b} (3) and Lemma \ref{lem3a}, there is $T_i^\dag\in\mathcal E_+$ such that $T_iT^\dag_iT_i=T_i$, $T^\dag_iT_iT^\dag_i=T_i^\dag$, $T^\dag_iT_i=T_iT_i^\dag$, $i=1,\cdots,n$. Thus, $\Ran(T_i)=\Ran(T_iT_i^\dag)$, $i=1, \cdots, n$. Let $a_1,\cdots,a_n\in\mathcal E$ with $T_ia_i\not=0$, $i=1,\cdots,n$ such that $\sum\limits^n_{i=1}\lambda_i T_ia_i=0$ for some $\lambda_1,\cdots,\lambda_n\in\mathbb C$. For any $\xi\in H$, put $x=\bigoplus\limits^n_{i=1}\lambda_i T_iT_i^\dag a_i\xi\in H_0$. Then $\hat Tx=0$ and $x=0$ since $\hat T$ is invertible. Thus, $\lambda_iT_iT_i^\dag a_i\xi =0$, $\forall\,\xi\in H$ and hence $\lambda_i=0$, $i=1,\cdots,n$. 
\end{proof} The following result, due to L\'evy and Desplanques, is very useful in matrix theory: \begin{lemma}[cf. \cite{RR}]\label{lem4c} Suppose that the complex self--adjoint matrix $C=[c_{ij}]_{n\times n}$ is strictly diagonally dominant, that is, $\sum\limits_{j\not=i}|c_{ij}|<c_{ii}$, $i=1,\cdots,n$. Then $C$ is invertible and positive. \end{lemma} \begin{proposition}\label{prop4a} Let $T_1,\cdots,T_n\in\aa_+$. Assume that \begin{enumerate} \item[$(1)$] $\gamma=\min\{\beta(T_1),\cdots,\beta(T_n)\}>0$ and \item[$(2)$] there exists $\rho\in (0,\gamma]$ such that $\eta=\max\{\|T_iT_j\|\,\vert\,i\not=j,\ i,j=1,\cdots,n\}<(n-1)^{-1}\rho^2$. \end{enumerate} Then for any $\de\in [\eta,(n-1)^{-1}\rho^2)$, we have \begin{enumerate} \item[$(1)$] $\sigma\big(\sum\limits^n_{i=1}T_i^2\big)\backslash\{0\}\subset[\rho^2-(n-1)\de,\rho^2+(n-1)\de]$. \item[$(2)$] $0$ is an isolated point of $\sigma\big(\sum\limits^n_{i=1}T_i\big)$ if $0\in\sigma\big(\sum\limits^n_{i=1}T_i\big)$. \item[$(3)$] $\big(\sum\limits^n_{i=1}T_i\big)\aa=T_1\aa\dotplus\cdots\dotplus T_n\aa$. \end{enumerate} \end{proposition} \begin{proof} (1) Let $(\psi ,H_\psi )$ be a faithful representation of $\aa$ with $\psi (1)=I$. We may assume that $H=H_\psi $ and $\mathcal E=\psi (\aa)$. Put $S_i=\psi (T_i)$, $S_{ij}=S_iS_j\big\vert_{\Ran(S_j)}$, $i,j=1,\cdots,n$. Then $\max\{\|S_iS_j\|\,\vert\,1\le i\not=j\le n\}=\eta$ and $\gamma(S_i)=\beta(T_i)$ by Lemma \ref{lem4b}, $1\le i\le n$. 
Set $H_0=\bigoplus\limits^n_{i=1}\Ran(S_i)$ and $$ \hat S=\begin{bmatrix}S_{11}&S_{12}&\cdots&S_{1n}\\ S_{21}&S_{22}& \cdots & S_{2n}\\ \cdots&\cdots&\cdots&\cdots\\ S_{n1}& S_{n2}&\cdots& S_{nn}\end{bmatrix}\in B(H_0),\quad S_0=\begin{bmatrix}\rho^2-\lambda&-\|S_{12}\|&\cdots&-\|S_{1n}\|\\ -\|S_{21}\|&\rho^2-\lambda& \cdots &-\|S_{2n}\|\\ \cdots&\cdots&\cdots&\cdots\\ -\|S_{n1}\|& -\|S_{n2}\|&\cdots&\rho^2-\lambda\end{bmatrix}, $$ where $\lambda\in\mathbb R$. Then for any $\lambda<\rho^2-(n-1)\de$, we have $\sum\limits_{j\not=i}\|S_{ij}\|\le(n-1)\eta<\rho^2-\lambda.$ It follows from Lemma \ref{lem4c} that $S_0$ is positive and invertible. Therefore the quadratic form $$ f(x_1,x_2,\cdots,x_n)=\sum_{i=1}^n(\rho^2-\lambda)x_i^2-2\sum_{1\le i<j\le n}\|S_{ij}\|x_ix_j $$ is positive definite and hence there exists $\alpha>0$ such that for any $(x_1,\cdots,x_n)\in{\mathbb R^n}$, $$ f(x_1,\cdots,x_n)\ge \alpha(x_1^2+\cdots+x_n^2). $$ So for any $\xi=\bigoplus\limits^n_{i=1}\xi_i\in H_0$, $\|S_i\xi_i\|\ge\gamma(S_i)\|\xi_i\|\ge\rho\|\xi_i\|$, $\xi_i\in\Ran(S_i)=(\Ker(S_i))^\perp$, $i=1,\cdots,n$, by Lemma \ref{lem4b} and \begin{align*} <(\hat S-\lambda I)\xi,\xi>& =\sum_{i=1}^n\|S_i\xi_i\|^2-\lambda\sum_{i=1}^n\|\xi_i\|^2+\sum_{1\le i<j\le n}(<S_{ij}\xi_j,\xi_i>+<S_{ij}^*\xi_i,\xi_j>)\\ &\ge\sum_{i=1}^n(\rho^2-\lambda)\|\xi_i\|^2-2\sum_{1\le i<j\le n}\|S_{ij}\|\|\xi_i\|\|\xi_j\|\\ &=f(\|\xi_1\|,\cdots,\|\xi_n\|)\ge\alpha\sum_{i=1}^n\|\xi_i\|^2. \end{align*} Therefore, $\hat S-\lambda I$ is invertible. Similarly, for any $\lambda>\rho^2+(n-1)\de$, we can obtain that $\lambda I-\hat S$ is invertible. So $\sigma(\hat S)\subset[\rho^2-(n-1)\de,\rho^2+(n-1)\de]\subset (0,\rho^2+(n-1)\de]$ and consequently, $\sigma\big(\sum\limits^n_{i=1}T_i^2\big)\backslash\{0\}=\sigma\big(\sum\limits^n_{i=1}S_i^2\big)\backslash\{0\} \subset[\rho^2-(n-1)\de,\rho^2+(n-1)\de]$ by Lemma \ref{lem4a}. 
(2) Since $\sigma\big(\sum\limits^n_{i=1}T_i\big)=\sigma\big(\sum\limits^n_{i=1}S_i\big)$, the assertion follows from Lemma \ref{lem4a} (2). (3) By (2) and Lemma \ref{lem3a}, $\big(\sum\limits^n_{i=1}T_i\big)^\dag\in\aa$ exists. Set $E=\big(\sum\limits^n_{i=1}T_i\big)\big(\sum\limits^n_{i=1}T_i\big)^\dag$. Obviously, $E\aa=\big(\sum\limits^n_{i=1}T_i\big)\aa\subset T_1\aa+\cdots+T_n\aa$ since $E\big(\sum\limits^n_{i=1}T_i\big)= \sum\limits^n_{i=1}T_i$. From $T_i\le\sum\limits^n_{j=1}T_j$, we get that $(1-E)T_i(1-E)\le (1-E)\big(\sum\limits^n_{j=1}T_j\big)(1-E)=0$, i.e., $T_i=ET_i$, $i=1,\cdots,n$. So $T_i\aa\subset E\aa$, $i=1,\cdots,n$ and hence $$ T_1\aa+\cdots+T_n\aa\subset E\aa=\big(\sum\limits^n_{i=1}T_i\big)\aa\subset T_1\aa+\cdots+T_n\aa. $$ Since $\{S_1\psi(a_1),\cdots,S_n\psi(a_n)\}$ is linearly independent in $\mathcal E$ for any $a_1,\cdots,a_n\in\aa$ with $T_ia_i\not=0$, $i=1,\cdots,n$, it follows that $\{T_1a_1,\cdots,T_na_n\}$ is linearly independent in $\aa$. Therefore, $\big(\sum\limits^n_{i=1}T_i\big)\aa=E\aa=T_1\aa\dotplus\cdots\dotplus T_n\aa$. \end{proof} Let $P_1,P_2$ be projections on $H$. Buckholtz showed in \cite{BU} that $\Ran(P_1)\dotplus \Ran(P_2)=H$ iff $\|P_1+P_2-I\|<1$. For $(P_1,\cdots,P_n)\in\P_n(\aa)$, we have \begin{corollary}\label{coro4a} Let $(P_1,\cdots,P_n)\in\P_n(\aa)$ with $\big\|\sum\limits^n_{i=1}P_i-1\big\|<(n-1)^{-2}$. Then $\pn$ is complete in $\aa$. \end{corollary} \begin{proof}For any $i\not= j$, \begin{align*} \|P_iP_j\|^2&=\|P_iP_jP_i\|\le\big\|P_i\big(\sum_{k\not=i}P_k\big)P_i\big\|\\ &=\big\|P_i\big(\sum_{k=1}^nP_k-1\big)P_i\big\|\le\big\|\sum_{k=1}^nP_k-1\big\|<\frac{1}{(n-1)^2}. \end{align*} Thus $\|P_iP_j\|<(n-1)^{-1}$. Noting that $$ \rho=\min\{\beta(P_1),\cdots,\beta(P_n)\}=1,\ \eta=\max\{\|P_iP_j\|\,\vert\,1\le i<j\le n\}<\frac{1}{n-1}, $$ we have $\big(\sum\limits^n_{i=1}P_i\big)\aa=P_1\aa\dotplus\cdots\dotplus P_n\aa$ by Proposition \ref{prop4a}. 
From $\big\|\sum\limits^n_{i=1}P_i-1\big\|<(n-1)^{-2}$, we get that $\sum\limits_{i=1}^nP_i$ is invertible in $\aa$, so that $\aa=P_1\aa\dotplus\cdots\dotplus P_n\aa$. Thus, $\pn$ is complete in $\aa$. \end{proof} Combining Corollary \ref{coro4a} with Theorem \ref{th1} (3), we have \begin{corollary} Let $P_1,\cdots,P_n$ be non--trivial projections in $B(H)$ with $\big\|\sum\limits^n_{i=1}P_i-I\big\|<(n-1)^{-2}$. Then $H=\Ran(P_1)\dotplus\cdots\dotplus\Ran(P_n)$. \end{corollary} Let $(P_1,\cdots,P_n)\in\P_n(\aa)$. A well--known statement says: ``for any $\ep>0$, there is $\de>0$ such that if $\|P_iP_j\|<\de$, $i\not=j$, $i,j=1,\cdots,n$, then there are mutually orthogonal projections $P'_1,\cdots,P'_n \in\aa$ with $\|P_i-P_i'\|<\ep$, $i=1,\cdots,n$". It may have first appeared in Glimm's paper \cite{Gl}, where it was proved by induction on $n$; however, the dependence of $\de$ on $\ep$ was not given there. Lemma 2.5.6 of \cite{HL} states this result and gives a slightly different proof. We can find from the proof of \cite[Lemma 2.5.6]{HL} that the relation between $\de$ and $\ep$ is $\de\le\dfrac{\ep}{(12)^{(n-1)}n!}$. The next corollary gives a new proof of this statement with the relation $\delta=\dfrac{\epsilon}{2(n-1)}$ for $\ep\in (0,1)$. \begin{corollary}\label{coro4c} Let $(P_1,\cdots,P_n)\in\P_n(\aa)$. Given $\ep\in (0,1)$. If $P_1,\cdots,P_n$ satisfy the condition $\|P_iP_j\|<\de =\dfrac{\epsilon}{2(n-1)}$, $1\le i<j\le n$, then there are mutually orthogonal projections $P'_1,\cdots,P'_n\in\aa$ such that $\|P_i-P_i'\|<\ep$, $i=1,\cdots,n$. \end{corollary} \begin{proof} Set $A=\sum\limits^n_{i=1}P_i$. Noting that $\gamma=\min\{\beta(P_1),\cdots,\beta(P_n)\}=1$, $\|P_iP_j\|<\dfrac{1}{n-1}$, $1\le i<j\le n$ and taking $\rho=1$, we have $\sigma(A)\backslash\{0\}\subset [1-(n-1)\delta,1+(n-1)\delta]$ by Proposition \ref{prop4a} (1). So the positive element $A^\dag$ exists by Lemma \ref{lem3a}. Set $P=A^\dag A=AA^\dag\in\aa$. 
From $AA^\dag A=A$ and $A^\dag AA^\dag=A^\dag$, we get that $P_i\le P$, $i=1,\cdots,n$ and $AP=PA=A$, $A^\dag P=PA^\dag=A^\dag$. So $A\in GL(P\aa P)$ with the inverse $A^\dag\in P\aa P$. Let $\sigma_{P\aa P}(A^\dag)$ stand for the spectrum of $A^\dag$ in $P\aa P$. Then \begin{align} \notag\sigma_{P\aa P}(A^\dag)&=\sigma(A^\dag)\backslash\{0\}=\{\lambda^{-1}\vert\,\lambda\in\sigma(A)\backslash\{0\}\}\\ \label{4eqb}&\subset [(1+(n-1)\delta)^{-1},(1-(n-1)\delta)^{-1}]. \end{align} Now, by Proposition \ref{prop4a}, $P\aa=A\aa=P_1\aa\dotplus\cdots\dotplus P_n\aa$. Thus, by using $P_i\le P$, $i=1,\cdots,n$, we have $P\aa P=P_1(P\aa P)\dotplus\cdots\dotplus P_n(P\aa P)$ and then $P_iA^\dag P_j=\delta_{ij}P_i$, $i,j=1,\cdots,n$ by Theorem \ref{th1}. Put $P_i'=(A^\dag)^{1/2}P_i(A^\dag)^{1/2}\in\aa$, $i=1,\cdots,n$. Then $P_1',\cdots,P_n'$ are mutually orthogonal projections and moreover, for $1\le i\le n$, \begin{align} \notag\|P'_i-P_i\|&\le\|(A^\dag)^{1/2}P_i(A^\dag)^{1/2}-P_i(A^\dag)^{1/2}\|+\|P_i(A^\dag)^{1/2}-P_i\|\\ \label{4eqc}&\le(\|(A^\dag)^{1/2}\|+1)\|(A^\dag)^{1/2}-P\|. \end{align} Note that $0<(n-1)\de< 1/2$. Applying the Spectral Mapping Theorem to (\ref{4eqb}), we get that $\|(A^\dag)^{1/2}\|\le(1-(n-1)\delta)^{-1/2}<\sqrt{2}$ and $$ \|P-(A^\dag)^{1/2}\|\le (1-(n-1)\delta)^{-1/2}-1<\frac{2}{1+\sqrt{2}}\,(n-1)\de. $$ Thus $\|P'_i-P_i\|<2(n-1)\de=\ep$ by (\ref{4eqc}). \end{proof} Applying Theorem \ref{th1} and Corollary \ref{coro4c} to an $n$--tuple of linearly independent unit vectors, we have: \begin{corollary}\label{coro4d} Let $(\alpha_1,\cdots,\alpha_n)$ be an $n$--tuple of linearly independent unit vectors in Hilbert space $H$. \begin{enumerate} \item[$(1)$] There is an invertible, positive operator $K$ in $B(H)$ and an $n$--tuple of mutually orthogonal unit vectors $(\gamma_1,\cdots,\gamma_n)$ in $H$ such that $\gamma_i=K\alpha_i$, $i=1,\cdots,n$. \item[$(2)$] Given $\ep\in (0,1)$. 
If $|<\alpha_i,\alpha_j>|<\dfrac{\epsilon}{2(n-1)}$, $1\le i<j\le n$, then there exists an $n$--tuple of mutually orthogonal unit vectors $(\beta_1,\cdots,\beta_n)$ in $H$ such that $\|\alpha_i-\beta_i\|<2\ep$, $i=1,\cdots,n.$ \end{enumerate} \end{corollary} \begin{proof} Set $H_1=\mathrm{span}\{\alpha_1,\cdots,\alpha_n\}$ and $P_i\xi=<\xi,\alpha_i>\alpha_i$, $\forall\,\xi\in H_1$, $i=1,\cdots,n$. Then $(P_1,\cdots,P_n)\in\P_n(B(H_1))$ and $\Ran(P_1)\dotplus\cdots\dotplus\Ran(P_n)=H_1$. By Theorem \ref{th1}, $A_0=\sum\limits^n_{i=1}P_i$ is invertible in $B(H_1)$ and $P_iA_0^{-1}P_j=\delta_{ij}P_i$, $i,j=1,\cdots,n$. Put $K=A_0^{-1/2}+P_0$ and $\gamma_i=A_0^{-1/2}\alpha_i$, $i=1,\cdots,n$, where $P_0$ is the projection of $H$ onto $H_1^\perp$. It is easy to check that $K$ is invertible and positive in $B(H)$ with $\gamma_i=K\alpha_i$, $i=1,\cdots,n$ and $(\gamma_1,\cdots,\gamma_n)$ is an $n$--tuple of mutually orthogonal unit vectors. This proves (1). (2) Note that $\|P_iP_j\|=|<\alpha_i,\alpha_j>|<\dfrac{\ep}{2(n-1)}$, $1\le i<j\le n$. Thus, by Corollary \ref{coro4c}, there are mutually orthogonal projections $P'_1,\cdots,P'_n\in B(H_1)$ such that $\|P_i-P_i'\|<\ep$, $i=1,\cdots,n$. Put $\beta_i'=P'_i\alpha_i$, $i=1,\cdots,n$. Then $\beta_1',\cdots,\beta_n'$ are mutually orthogonal and $\|\alpha_i-\beta'_i\|<\epsilon$, $i=1,\cdots,n$. Set $\beta_i=\|\beta_i'\|^{-1}\beta'_i$, $i=1,\cdots,n$. Then $<\beta_i,\beta_j>=\delta_{ij}$, $i,j=1,\cdots,n$ and $$ \|\alpha_i-\beta_i\|\le\|\alpha_i-\beta'_i\|+|1-\|\beta'_i\||<2\ep, $$ for $i=1,\cdots,n$. \end{proof} Now we give a simple characterization of the completeness of a given $n$--tuple of projections in the $C^*$--algebra $\aa$ as follows. \begin{theorem}\label{th2} Let $P_1,\cdots,P_n$ be projections in $\aa$. Then $(P_1,\cdots,P_n)$ is complete if and only if $A=\sum\limits_{i=1}^nP_i$ is invertible in $\aa$ and $\|P_iA^{-1}P_j\|<\big[(n-1)\|A^{-1}\|\|A\|^2\big]^{-1}$, $\forall\,i\not=j$, $i,j=1,\cdots,n$. 
\end{theorem} \begin{proof} If $\pn$ is complete, then by Theorem \ref{th1}, $A$ is invertible in $\aa$ and $P_iA^{-1}P_j=0$, $\forall\,i\not=j$, $i,j=1,\cdots,n$. Now we prove the converse. Put $T_i=A^{-1/2}P_iA^{-1/2}$, $1\le i\le n$. Then $\sum\limits^n_{i=1}T_i=1$. Since $T_i(A^{1/2}P_iA^{1/2})T_i=T_i$, we have $\beta(T_i)\ge\|A^{1/2}P_iA^{1/2}\|^{-1}\ge\|A\|^{-1}$, $i=1,\cdots,n$ by Corollary \ref{coro4a0}. Put $\rho=\|A\|^{-1}$. Then $$ \|T_iT_j\|\le\|A^{-1}\|\|P_iA^{-1}P_j\|<\big[(n-1)\|A\|^2\big]^{-1}=\frac{\rho^2}{n-1},\ i\not=j,\ i,j=1,\cdots,n. $$ Thus by Proposition \ref{prop4a} (3), $\aa=T_1\aa\dotplus\cdots\dotplus T_n\aa$. Note that $T_i\aa=A^{-1/2}(P_i\aa)$, $i=1,\cdots,n$. So $P_1\aa\dotplus\cdots\dotplus P_n\aa=A^{1/2}\aa=\aa$, i.e., $\pn\in\PC_n(\aa)$. \end{proof} \begin{corollary}\label{coro4b} Let $(P_1,\cdots,P_n)\in\PC_n(\aa)$ and let $(P'_1,\cdots,P'_n)\in\P_n(\aa)$. If $\|P_i-P'_i\|<\big[4n^2(n-1)\|A^{-1}\|^2(n\|A^{-1}\|+1)\big]^{-1}$, $i=1,\cdots,n$, where $A=\sum\limits^n_{i=1}P_i$, then $(P'_1,\cdots,P'_n)\in\PC_n(\aa)$. \end{corollary} \begin{proof} Set $B=\sum\limits^n_{i=1}P'_i$. Since $n\|A^{-1}\|\ge\|A\|\|A^{-1}\|\ge 1$, it follows that $\|A-B\|<\dfrac{1}{2\|A^{-1}\|}$. Thus $B$ is invertible in $\aa$ with $$ \|B^{-1}\|\le\dfrac{\|A^{-1}\|}{1-\|A^{-1}\|\|A-B\|}<2\|A^{-1}\|,\ \|B^{-1}-A^{-1}\|<2\|A^{-1}\|^2\|A-B\|. $$ Noting that $P_iA^{-1}P_j=0$, $i\not=j$, $i,j=1,\cdots,n$, we have \begin{align*} \|P'_iB^{-1}P'_j\|&\le\|P'_i(B^{-1}-A^{-1})P'_j\|+\|(P'_i-P_i)A^{-1}P'_j\|+\|P_iA^{-1}(P_j-P'_j)\|\\ &\le 2\|A^{-1}\|^2\|A-B\|+\|A^{-1}\|\|P_i-P'_i\|+\|A^{-1}\|\|P_j-P'_j\|\\ &<\frac{1}{2n^2(n-1)\|A^{-1}\|}<\frac{1}{(n-1)\|B^{-1}\|\|B\|^2}. \end{align*} So $(P_1',\cdots,P_n')$ is complete in $\aa$ by Theorem \ref{th2}. \end{proof} \section{Some equivalent relations and topological properties on $\PC_n(\aa)$} \setcounter{equation}{0} Let $\aa$ be a $C^*$--algebra with the unit $1$ and let $GL_0(\aa)$ (resp. 
$U_0(\aa)$) be the connected component of $1$ in $GL(\aa)$ (resp. in $U(\aa)$). Set \begin{align*} \PI_n(\aa)&=\big\{(P_1,\cdots,P_n)\in\P_n(\aa)\,\vert\,\sum\limits^n_{i=1}P_i\in GL(\aa)\big\}\\ \PO_n(\aa)&=\big\{(P_1,\cdots,P_n)\in\P_n(\aa)\,\vert\,\sum\limits^n_{i=1}P_i=1,\ P_iP_j=0,\ i\not=j,\ i,j=1,\cdots,n\big\}. \end{align*} \begin{definition}\label{def5a} Let $(P_1,\cdots,P_n)$ and $(P_1',\cdots,P_n')$ be in $\PC_n(\aa)$. \begin{enumerate} \item[$(1)$] We say $(P_1,\cdots,P_n)$ is equivalent to $(P_1',\cdots,P_n')$, denoted by $(P_1,\cdots,P_n)\sim(P_1',\cdots,P_n')$, if there are $U_1,\cdots,U_n\in\aa$ such that $P_i=U_i^*U_i$, $P_i'=U_iU_i^*$, $i=1,\cdots,n$. \item[$(2)$] $(P_1,\cdots,P_n)$ and $(P_1',\cdots,P_n')$ are said to be unitarily equivalent, denoted by $(P_1,\cdots,P_n)\sim_u(P_1',\cdots,P_n')$, if there is $U\in U(\aa)$ such that $UP_iU^*=P_i'$, $i=1,\cdots,n$. \item[$(3)$] $(P_1,\cdots,P_n)$ and $(P_1',\cdots,P_n')$ are said to be homotopically equivalent, denoted by $(P_1,\cdots,P_n)\sim_h(P_1',\cdots,P_n')$, if there exists a continuous mapping $F\colon [0,1]\rightarrow\PC_n(\aa)$ such that $F(0)=\pn$ and $F(1)=(P_1',\cdots,P_n')$. \end{enumerate} \end{definition} It is well--known that $$ \pn\sim_h(P_1',\cdots,P_n')\Rightarrow \pn\sim(P_1',\cdots,P_n') $$ and if $U(\aa)$ is path--connected, $$ \pn\sim_u(P_1',\cdots,P_n')\Rightarrow \pn\sim_h(P_1',\cdots,P_n'). $$ \begin{lemma}\label{lem5b} Let $(P_1,\cdots,P_n)$ be in $\PC_n(\aa)$ and $C$ be a positive and invertible element in $\aa$ with $P_iC^2P_i=P_i$, $\inn$. Then $(CP_1C,\cdots,CP_nC)\in\PC_n(\aa)$ and $\pn\sim_h(CP_1C,\cdots,CP_nC)$ in $\PC_n(\aa)$. \end{lemma} \begin{proof} From $(CP_iC)^2=CP_iC^2P_iC=CP_iC$, $1\le i\le n$, we have $(CP_1C,\cdots,CP_nC)$ $\in\P_n(\aa)$. $(P_1,\cdots,P_n)\in\PC_n(\aa)$ implies that $A=\sum\limits^n_{i=1}P_i\in GL(\aa)$ and $P_iA^{-1}P_i=P_i$, $1\le i\le n$ by Theorem \ref{th1}. 
So $(CP_iC)\Big(\sum\limits_{j=1}^n(CP_jC)\Big)^{-1}(CP_iC)=CP_iA^{-1}P_iC=CP_iC$ and hence $(CP_1C,\cdots,CP_nC)\in\PC_n(\aa)$ by Theorem \ref{th1}. Put $A_i(t)=C^tP_iC^t$ and $B_i(t)=C^{-t}P_iC^{-t}$, $\forall\,t\in [0,1]$, $i=1,\cdots,n$. Then $Q_i(t)\triangleq A_i(t)B_i(t)=C^tP_iC^{-t}$ is idempotent and $A_i(t)B_i(t)A_i(t)=A_i(t)$, $\forall\,t\in [0,1]$, $i=1,\cdots,n$. Thus $A_i(t)\aa=Q_i(t)\aa$, $\forall\,t\in [0,1]$, $i=1,\cdots,n$. By Lemma \ref{lem3b}, $P_i(t)\triangleq Q_i(t)(Q_i(t)+(Q_i(t))^*-1)^{-1}$ is a projection in $\aa$ satisfying $Q_i(t)P_i(t)=P_i(t)$ and $P_i(t)Q_i(t)=Q_i(t)$, $\forall\,t\in [0,1]$, $i=1,\cdots,n$. Clearly, $A_i(t)\aa=Q_i(t)\aa=P_i(t)\aa$, $\forall\, t\in [0,1]$ and $t\mapsto P_i(t)$ is a continuous mapping from $[0,1]$ into $\aa$, $i=1,\cdots,n$. Thus, from $$ (C^tP_1C^t)\aa\dotplus\cdots\dotplus (C^tP_nC^t)\aa=\aa,\quad \forall\,t\in [0,1], $$ we get that $F(t)=(P_1(t),\cdots,P_n(t))\in\PC_n(\aa)$, $\forall\,t\in [0,1]$. Note that $F\colon [0,1]\rightarrow\PC_n(\aa)$ is continuous with $F(0)=\pn$. Moreover, $A_i(1)=CP_iC$ is a projection with $A_i(1)Q_i(1)=CP_iCCP_iC^{-1}=Q_i(1)$ and $Q_i(1)A_i(1)=A_i(1)$, $i=1,\cdots,n$. So $P_i(1)=A_i(1)$, $i=1,\cdots,n$ and $F(1)=(CP_1C,\cdots,CP_nC)$. The assertion follows. \end{proof} For $(P_1,\cdots,P_n)\in\PC_n(\aa)$, $A=\sum\limits^n_{i=1}P_i\in GL(\aa)$ and $Q_i=A^{-1/2}P_iA^{-1/2}$ is a projection with $Q_iQ_j=0$, $i\not=j$, $i,j=1,\cdots,n$ (see Theorem \ref{th1}), that is, $(Q_1,\cdots,Q_n)\in\PO_n(\aa)$. Since $C=A^{-1/2}$ satisfies the condition given in Lemma \ref{lem5b}, we have the following: \begin{corollary}\label{coro5a} Let $(P_1,\cdots,P_n)\in\PC_n(\aa)$ and let $(Q_1,\cdots,Q_n)$ be as above. Then $(P_1,\cdots,P_n)\sim_h(Q_1,\cdots,Q_n)$ in $\PC_n(\aa)$. \end{corollary} \begin{theorem}\label{th3} Let $\pn$ and $(P_1',\cdots,P_n')\in\PC_n(\aa)$. Then the following statements are equivalent: \begin{enumerate} \item[$(1)$] $\pn\sim(P_1',\cdots,P_n')$. 
\item[$(2)$] there is $D\in GL(\aa)$ such that for $1\le i\le n$, $P_iDD^*P_i=P_i$ and $P'_i=D^*P_iD$. \item[$(3)$] there is $(S_1,\cdots,S_n)\in\PC_n(\aa)$ such that $$ \pn\sim_u(S_1,\cdots,S_n)\sim_h(P_1',\cdots,P_n'). $$ \end{enumerate} \end{theorem} \begin{proof} The implication (3)$\Rightarrow$(1) is obvious. We now prove the implications (1)$\Rightarrow$(2) and (2)$\Rightarrow$(3) as follows. (1)$\Rightarrow$ (2) Let $U_i\in\aa$ be partial isometries such that $U_i^*U_i=P_i$, $U_iU^*_i=P'_i$, $\inn$. Put $A=\sum\limits^n_{i=1}P_i$, $A'=\sum\limits^n_{i=1}P_i'$ and $W=A^{-1/2}\big(\sum\limits_{i=1}^nP_iU^*_iP'_i\big)A'^{-1/2}$. Then \begin{align*} W^*W&=A'^{-1/2}\big(\sum_{i=1}^nP'_iU_iP_i\big)A^{-1}\big(\sum_{i=1}^nP_iU^*_iP'_i\big)A'^{-1/2}\\ &=A'^{-1/2}\big(\sum_{i=1}^nP'_iU_iP_iU^*_iP'_i\big)A'^{-1/2} =A'^{-1/2}\big(\sum_{i=1}^nP'_i\big)A'^{-1/2}=1. \end{align*} Similarly, $WW^*=1$. Thus, $W\in U(\aa)$. Set $D=A^{-1/2}WA'^{1/2}\in GL(\aa)$. Then, for $1\le i\le n$, $$ D^*P_iD=\big(\sum_{j=1}^nP'_jU_jP_j\big)A^{-1}P_iA^{-1}\big(\sum_{j=1}^nP_jU^*_jP'_j\big) =P'_iU_iP_iU^*_iP'_i=P'_i $$ and $P_iDD^*P_i=P_i$ follows from $(D^*P_iD)^2=D^*P_iD$. (2)$\Rightarrow$(3) Put $U=(DD^*)^{-1/2}D$. Then $U\in U(\aa)$. Set $C=U^*(DD^*)^{1/2}U$ and $S_i=U^*P_iU$, $1\le i\le n$. Then $(S_1,\cdots,S_n)\in\PC_n(\aa)$ with $(S_1,\cdots,S_n)\sim_u(P_1,\cdots,P_n)$ and $(P'_1,\cdots,P'_n)=(CS_1C,\cdots,CS_nC)$. Since $S_iC^2S_i=U^*P_iDD^*P_iU=S_i$, $i=1,\cdots,n$, it follows from Lemma \ref{lem5b} that $(P'_1,\cdots,P'_n) \sim_h(S_1,\cdots,S_n)$ in $\PC_n(\aa)$. \end{proof} \begin{proposition}\label{prop5a} For $\P_n(\aa)$, $\PC_n(\aa)$, $\PI_n(\aa)$ and $\PO_n(\aa)$, we have \begin{enumerate} \item[$(1)$] $\PI_n(\aa)$ is open in $\P_n(\aa)$. \item[$(2)$] $\PC_n(\aa)$ is a clopen subset of $\PI_n(\aa)$. \item[$(3)$] $\PO_n(\aa)$ is a strong deformation retract of $\PC_n(\aa)$. \item[$(4)$] $\PC_n(\aa)$ is locally connected. 
Thus every connected component of $\PC_n(\aa)$ is path--connected. \item[$(5)$] $(P_1,\cdots,P_n),\, (P_1',\cdots,P_n')\in\PC_n(\aa)$ are in the same connected component iff there is $D\in GL_0(\aa)$ such that $P_i=D^*P_i'D$, $i=1,\cdots,n$. \end{enumerate} \end{proposition} \begin{proof} (1) Since $h(P_1,\cdots,P_n)=\sum\limits^n_{i=1}P_i$ is a continuous mapping from $\P_n(\aa)$ into $\aa$ and $GL(\aa)$ is open in $\aa$, it follows that $\PI_n(\aa)=h^{-1}(GL(\aa))$ is open in $\P_n(\aa)$. (2) Define $F\colon\PI_n(\aa)\rightarrow\mathbb R$ by $$ F(P_1,\cdots,P_n)=\sum_{1\le i<j\le n}(n-1)\big\|\sum^n_{k=1}P_k\big\|^2 \big\|\big(\sum^n_{k=1}P_k\big)^{-1}\big\|\big\|P_i\big(\sum^n_{k=1}P_k\big)^{-1}P_j\big\|. $$ Clearly, $F$ is continuous on $\PI_n(\aa)$. By means of Theorem \ref{th2}, we get that $\PC_n(\aa)=F^{-1}((-1,1))$ is open in $\PI_n(\aa)$ and, since $P_iA^{-1}P_j=0$ for complete tuples by Theorem \ref{th1}, $\PC_n(\aa)=F^{-1}(\{0\})$ is closed in $\PI_n(\aa)$. (3) Define the continuous mapping $r\colon\PC_n(\aa)\rightarrow\PO_n(\aa)$ by $$ r\pn=\big(A^{-1/2}P_1A^{-1/2},\cdots,A^{-1/2}P_nA^{-1/2}\big),\quad A=\sum^n_{i=1}P_i, $$ which is well defined by Theorem \ref{th1}. Clearly, $r\pn=\pn$ when $\pn\in\PO_n(\aa)$. This means that $\PO_n(\aa)$ is a retract of $\PC_n(\aa)$. For any $t\in [0,1]$ and $i=1,\cdots,n$, put $$ H_i(P_1,\cdots,P_n,t)=A^{-t/2}P_iA^{t/2}(A^{-t/2}P_iA^{t/2}+A^{t/2}P_iA^{-t/2}-1)^{-1}. $$ Similarly to the proof of Lemma \ref{lem5b}, we find that $$ H(P_1,\cdots,P_n,t)=(H_1(P_1,\cdots,P_n,t),\cdots,H_n(P_1,\cdots,P_n,t)) $$ is a continuous mapping from $\PC_n(\aa)\times [0,1]$ to $\PC_n(\aa)$ with $H(P_1,\cdots,P_n,0)=(P_1,\cdots,P_n)$ and $H(P_1,\cdots,P_n,1)=r(P_1,\cdots,P_n)$. Furthermore, when $(P_1,\cdots,P_n)$ $\in\PO_n(\aa)$, $A=1$. In this case, $H(P_1,\cdots,P_n,t)=(P_1,\cdots,P_n)$, $\forall\,t\in [0,1]$. Therefore, $\PO_n(\aa)$ is a strong deformation retract of $\PC_n(\aa)$. (4) Let $\pn\in\PC_n(\aa)$. 
Then by Corollary \ref{coro4b}, there is $\de\in (0,1/2)$ such that for any $(R_1,\cdots,R_n)\in \P_n(\aa)$ with $\|P_i-R_i\|<\delta$, $1\le i\le n$, we have $(R_1,\cdots,R_n)\in\PC_n(\aa)$. Let $(R_1,\cdots,R_n)\in\PC_n(\aa)$ with $\|P_j-R_j\|<\delta$, $j=1,\cdots,n$. Put $P_i(t)=P_i$, $R_i(t)=R_i$ and $a_i(t)=(1-t)P_i+tR_i$, $\forall\,t\in [0,1]$, $i=1,\cdots,n$. Then $P_i,R_i,a_i$ are self--adjoint elements in $C([0,1],\aa)=\mathcal B$ and $\|P_i-a_i\|=\max\limits_{t\in [0,1]}\|P_i-a_i(t)\|<\delta$, $i=1,\cdots,n$. It follows from \cite[Lemma 6.5.9 (1)]{Xue} that there exists a projection $f_i\in C^*(a_i)$ (the $C^*$--subalgebra of $\mathcal B$ generated by $a_i$) such that $\|P_i-f_i\|\le\|P_i-a_i\|<\delta$, $i=1,\cdots,n$. Thus, $\|P_i-f_i(t)\| <\delta$, $i=1,\cdots,n$ and consequently, $F(t)=(f_1(t),\cdots,f_n(t))$ is a continuous mapping of $[0,1]$ into $\PC_n(\aa)$. Since $a_i(0)=P_i$, $a_i(1)=R_i$ and $f_i(t)\in C^*(a_i(t))$, $\forall\,t\in [0,1]$, we have $F(0)=(P_1,\cdots,P_n)$ and $F(1)=(R_1,\cdots,R_n)$. This means that $\PC_n(\aa)$ is locally path--connected. (5) Suppose first that $(P_1,\cdots,P_n)$ and $(P_1',\cdots,P_n')$ are in the same connected component. Then there is a continuous path $P(t)=(P_1(t),\cdots,P_n(t))\in\PC_n(\aa)$, $\forall\,t\in [0,1]$ such that $P(0)= (P_1,\cdots,P_n)$ and $P(1)=(P_1',\cdots,P_n')$. By \cite[Corollary 5.2.9]{O}, there is a continuous mapping $t\mapsto U_i(t)$ of $[0,1]$ into $U(\aa)$ with $U_i(0)=1$ such that $P_i(t)=U_i(t)P_iU_i^*(t)$, $\forall\,t\in [0,1]$ and $i=1,\cdots,n$. Set $$ W(t)=\Big(\sum\limits^n_{i=1}P_i\Big)^{-1/2}\Big(\sum\limits^n_{i=1}P_iU_i^*(t)P_i(t)\Big) \Big(\sum\limits^n_{i=1}U_i(t)P_iU_i^*(t)\Big)^{-1/2} $$ and $D(t)=\Big(\sum\limits^n_{i=1}P_i\Big)^{-1/2}W(t)\Big(\sum\limits^n_{i=1}U_i(t)P_iU_i^*(t)\Big)^{1/2}$, $\forall\, t\in [0,1]$. Then $W(t)\in U(\aa)$ with $W(0)=1$, $D(t)\in GL(\aa)$ with $D(0)=1$ and $W(t)$, $D(t)$ are all continuous on $[0,1]$ with $D^*(t)P_iD(t)=P_i(t)$ (see the proof of (1)$\Rightarrow$(2) in Theorem \ref{th3}), $\forall\,t\in [0,1]$ and $i=1,\cdots,n$. 
Put $D=D(1)$. Then $D\in GL_0(\aa)$ and $D^*P_iD=P_i'$, $i=1,\cdots,n$. Conversely, suppose that there is $D\in GL_0(\aa)$ such that $D^*P_iD=P_i'$, $i=1,\cdots,n$. Then $U=(DD^*)^{-1/2}D\in U_0(\aa)$ and $P_iDD^*P_i=P_i$, $UP_i'U^*=(DD^*)^{1/2}P_i(DD^*)^{1/2}$, $i=1,\cdots,n$. Thus, $(P_1',\cdots,P_n')\sim_h(UP_1'U^*,\cdots,UP_n'U^*)$ and $$ ((DD^*)^{1/2}P_1(DD^*)^{1/2},\cdots,(DD^*)^{1/2}P_n(DD^*)^{1/2})\sim_h(P_1,\cdots,P_n) $$ by Lemma \ref{lem5b}. Consequently, $(P_1',\cdots,P_n')\sim_h(P_1,\cdots,P_n)$. \end{proof} To end this section, we consider the following examples: \begin{example} {\rm Let $\aa=\m_k(\mathbb C)$, $k\ge 2$. Define a mapping $\rho\colon\PC_n(\aa)\rightarrow\mathbb N^{n-1}$ by $\rho(P_1,\cdots,P_n)=(\Tr(P_1),\cdots,\Tr(P_{n-1}))$, where $2\le n\le k$ and $\Tr(\cdot)$ is the canonical trace on $\aa$. By Theorem \ref{th1}, $(P_1,\cdots,P_n)\in\PC_n(\aa)$ means that $A=\sum\limits^n_{i=1}P_i\in GL(\aa)$ and $(A^{-1/2}P_1A^{-1/2},\cdots,A^{-1/2}P_nA^{-1/2})\in\PO_n(\aa)$. Put $Q_i=A^{-1/2}P_iA^{-1/2}$, $i=1,\cdots,n$. Since $(P_1,\cdots,P_n)\sim_h(Q_1,\cdots,Q_n)$ by Corollary \ref{coro5a}, it follows that $\Tr(P_i)=\Tr(Q_i)$, $i=1,\cdots,n$ and $\Tr(A)=k$. Thus $\Tr(P_n)=k-\sum\limits^{n-1}_{i=1}\Tr(P_i)$. Note that $U(\aa)$ is path--connected. So, for $(P_1,\cdots,P_n),\,(P_1',\cdots,P_n')\in\PC_n(\aa)$, $(P_1,\cdots,P_n)$ and $(P_1',\cdots,P_n')$ are in the same connected component if and only if $\rho(P_1,\cdots,P_n)=\rho(P_1',\cdots,P_n')$. The above shows that $\PC_k(\aa)$ is connected and $\PC_n(\aa)$ is not connected when $k\ge 3$ and $2\le n\le k-1$. } \end{example} \begin{example}\label{exa4a} {\rm Let $H$ be a separable complex Hilbert space and $\mathcal K(H)$ be the $C^*$--algebra of all compact operators in $B(H)$. Let $\aa=B(H)/\mathcal K(H)$ be the Calkin algebra and $\pi\colon B(H)\rightarrow\aa$ be the quotient mapping. Then $\PC_n(\aa)$ is path--connected. 
In fact, if $(P_1,\cdots,P_n), (P_1',\cdots,P_n')\in\PC_n(\aa)$, then we can find $(Q_1,\cdots,Q_n)$, $(Q_1',\cdots,Q_n') \in\PO_n(\aa)$ such that $(P_1,\cdots,P_n)\sim_h(Q_1,\cdots,Q_n)$ and $(P'_1,\cdots,P'_n)\sim_h(Q'_1,\cdots,Q'_n)$ by Corollary \ref{coro5a}. Since $B(H)$ is of real rank zero, it follows from \cite[Corollary B.2.2]{Xue} or \cite[Lemma 3.2]{X} that there are projections $R_1,\cdots,R_n$ and $R_1',\cdots,R_n'$ in $B(H)$ such that $\pi(R_i)=Q_i$, $\pi(R_i')=Q_i'$, $i=1,\cdots,n$ and $$ R_iR_j=\delta_{ij}R_i,\ R_i'R_j'=\delta_{ij}R_i',\ i,j=1,\cdots,n,\ \sum\limits^n_{i=1}R_i=\sum\limits^n_{i=1}R_i'=I. $$ Note that $R_1,\cdots,R_n,R_1',\cdots,R_n'\not\in\mathcal K(H)$. So there are partial isometries $V_1,\cdots,V_n$ in $B(H)$ such that $V_i^*V_i=R_i$, $V_iV_i^*=R_i'$, $i=1,\cdots,n$. Put $V=\sum\limits^n_{i=1}V_i$. Then $V\in U(B(H))$ and $VR_iV^*=R_i'$, $i=1,\cdots,n$. Put $U=\pi(V)\in U(\aa)$. Then $(UQ_1U^*,\cdots,UQ_nU^*)=(Q_1',\cdots,Q_n')$ in $\PO_n(\aa)$. Since $U(B(H))$ is path--connected, we have $(Q_1,\cdots,Q_n)\sim_h(Q_1',\cdots,Q_n')$ in $\PC_n(\aa)$. Finally, $(P_1,\cdots,P_n)\sim_h(P_1',\cdots,P_n')$. This means that $\PC_n(\aa)$ is path--connected. } \end{example}
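The orthogonalization procedure behind Corollary \ref{coro4c} is completely constructive and can be checked numerically in a matrix algebra: given almost mutually orthogonal projections $P_1,\cdots,P_n$ with $A=\sum_iP_i$, the elements $P_i'=(A^\dag)^{1/2}P_i(A^\dag)^{1/2}$ are mutually orthogonal projections close to the $P_i$. The sketch below is an illustration only (it is not part of the text's arguments); the rank-one projections are an arbitrary almost-orthogonal family, and $(A^\dag)^{1/2}$ is formed from an eigendecomposition, with $A^\dag$ the Moore--Penrose inverse of $A$:

```python
import numpy as np

def psd_pinv_sqrt(A, tol=1e-10):
    # (A^dag)^{1/2} for a positive semidefinite matrix A, where A^dag is the
    # Moore-Penrose inverse: invert the square roots of the nonzero eigenvalues.
    w, V = np.linalg.eigh(A)
    inv_sqrt = np.array([1.0 / np.sqrt(x) if x > tol else 0.0 for x in w])
    return V @ np.diag(inv_sqrt) @ V.T

def opnorm(M):
    return np.linalg.norm(M, 2)  # operator (spectral) norm

# Three rank-one projections on C^5 whose pairwise products are small but nonzero.
e = np.eye(5)
v = [e[0], e[1] + 0.05 * e[0], e[2] + 0.05 * e[1]]
v = [x / np.linalg.norm(x) for x in v]
P = [np.outer(x, x) for x in v]

A = P[0] + P[1] + P[2]
S = psd_pinv_sqrt(A)                  # S = (A^dag)^{1/2}
Pp = [S @ Pi @ S for Pi in P]         # P_i' = (A^dag)^{1/2} P_i (A^dag)^{1/2}

delta = max(opnorm(P[i] @ P[j]) for i in range(3) for j in range(3) if i != j)
for i in range(3):
    assert np.allclose(Pp[i] @ Pp[i], Pp[i])            # each P_i' is a projection
    assert opnorm(P[i] - Pp[i]) < 2 * (3 - 1) * delta   # ||P_i - P_i'|| < 2(n-1)delta
    for j in range(i + 1, 3):
        assert np.allclose(Pp[i] @ Pp[j], 0.0)          # mutual orthogonality
```

For rank-one projections $P_i$ onto the spans of unit vectors $v_i$, this reproduces the symmetric orthogonalization of Corollary \ref{coro4d}: the vectors $(A^\dag)^{1/2}v_i$ are exactly orthonormal, since their Gram matrix reduces to the identity.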
\section{Introduction} The rapid development of spin-orbital physics and quantum information in recent years motivates the search for the realizations of intrinsically frustrated orbital (or pseudospin) interactions. Such interactions lead to radically different behavior from Heisenberg SU(2) isotropic exchange, and have been the focus of very active research in recent years. It was realized that the quantum nature of orbital degrees of freedom, which may be released by emerging spin-orbital coupling and spin-orbital entanglement, is interdisciplinary and plays a crucial role in the fields of strongly correlated electrons \cite{Feiner,Tok00,Ole05,Kha05,Ole12,Kim09,Jac09,Cha10,Woh11,Brz15} and cold atoms \cite{Zhao08,Wu08,Sun12,Gal13,Zhou15}. The strong frustration of spin-orbital interactions can be best understood by considering generic orbital models, in which the bond-directional interactions provide the building blocks. Among them, the two-dimensional (2D) compass model defined on a square lattice \cite{Nus15} and the Kitaev model on a honeycomb lattice \cite{Kitaev} can be treated as two quintessential pseudospin models, where the effective moments cannot simultaneously align to satisfy interactions with all neighbors as they favor the quantum states with distinct quantization axes. In fact, the latter model is a rare example of an interacting 2D spin model that can be rigorously solved, and was found to support gapped and gapless quantum spin liquids with emergent fractional excitations obeying non-Abelian statistics. Otherwise, exact solutions for 2D models with frustrated exchange exist only for classical Ising interactions where a phase transition at finite temperature is found \cite{Lon80}. Recent studies show that also for the 2D compass model a phase transition to nematic order occurs at finite but much lower temperature \cite{Wen10}. 
In low-dimensional magnetic systems collective quantum phenomena are particularly strong, since the reduced dimensionality amplifies the consequences of frustrated interactions between individual spins. To probe the exotic phases resulting from bond-directional interactions, we introduced a one-dimensional (1D) generalized compass model (GCM) with antiferromagnetic exchange alternating between even and odd bonds \cite{You1}. Such a model may be realized in layered structures of transition metal oxides, with alternating exchange interactions along the bonds parallel to the $a$ and $b$ axes along a zigzag chain in an $(a,b)$ plane \cite{Xiao}, in optical lattices \cite{Simon,Str11}, trapped ions \cite{Por04,Kim10}, and coupled photonic cavities \cite{Har08,Chen10}. On the other hand, most studies focus on two-body interactions, as they contribute to superexchange and are readily accessible experimentally. However, the range of the hybridization of the electron wave function will be finite in some realistic bonding geometries, and the effect of such longer-range interactions must be addressed. Recently, three-site interactions have received considerable attention in rather diverse contexts \cite{Got99,Tit03,Lou04,Kro08,Cheng10,Der11,Li11,Liu12,Top12,Zhang15,Lei15,Men15,Ste15,Lah15}. They also occur in an effective spin model in a magnetic field obtained from a 1D plaquette orbital model by an exact transformation, with spin dimers that replace plaquettes. Indeed, the dimers are coupled along the chain by three-spin interactions in the Hilbert space reduced by a factor of 2 per plaquette \cite{Brz14}. Such complex interactions between three subsequent sites substantially enrich the ground state phase diagram of the spin model and open new opportunities for the underlying physics. Experimentally, they can be realized in NMR quantum simulators \cite{Tseng99,Peng09} or optical lattices \cite{Pachos04}. 
Three-site spin interactions have been shown to play a role in multiferroics \cite{Suzuki71} and in the magnetoelectric effect \cite{Top12,Men15}. The purpose of this paper is to focus on a 1D GCM with three-site interactions. We show that this model is exactly solvable and explore the consequences of the three-site interactions. By investigating spin correlations we identify two chiral phases and demonstrate the existence of a nontrivial magnetoelectric effect. The organization of the paper is as follows. In Sec. \ref{sec:ham} we introduce the Hamiltonian of the 1D GCM with three-site interactions (Sec. \ref{model}) and then present the procedure to solve it exactly by employing the Jordan-Wigner transformation (Sec. \ref{exact}); the ground state and the energy gap are derived. In Sec. \ref{sec:cor} we use spin correlations to characterize each phase and the quantum phase transitions (QPTs). The model in the magnetic field is analyzed in Sec. \ref{sec:field}, and the complete phase diagram is obtained when the three-site interactions and magnetic fields are varied. The obtained exact solution allows us to present the thermodynamic properties, including the Wilson ratio, in Sec. \ref{sec:T}. We also point out that the three-site interactions play a role in the magnetoelectric effect in Sec. \ref{sec:mee}. A final discussion and conclusions are given in Sec. \ref{sec:summa}. \section{Generalized 1D compass model} \label{sec:ham} \subsection{The model with three-site exchange} \label{model} We consider below a 1D chain of $N$ sites with periodic boundary conditions, with GCM interactions given by \begin{equation} H_{\rm GCM}(\theta)= \sum_{i=1}^{N'} J_{o}\tilde{\sigma}_{2i-1}(\theta)\tilde{\sigma}_{2i}(\theta) +J_{e}\tilde{\sigma}_{2i}(-\theta)\tilde{\sigma}_{2i+1}(-\theta). 
\nonumber \\ \label{Hamiltonian1} \end{equation} Here $N'=N/2$ is the number of two-site unit cells, while $J_o$ and $J_e$ denote the coupling strengths on odd and even bonds, respectively (below we take $J_o$ as the unit of exchange interaction). The operator $\tilde{\sigma}_i(\theta)$ (with a tilde) is defined as a linear combination of the $\{\sigma_{i}^x,\sigma_{i}^y\}$ pseudospin components (Pauli matrices), \begin{eqnarray} \tilde{\sigma}_{i}(\theta)&\equiv& \cos(\theta/2)\,\sigma_{i}^x +\sin(\theta/2)\,\sigma_{i}^y. \label{tilde} \end{eqnarray} These linear combinations imply that the Ising-like interactions on odd/even bonds in Eq. (\ref{Hamiltonian1}) are characterized by preferential easy axes selected by an arbitrary angle $\pm\theta/2$. With increasing angle $\theta$, frustration gradually increases as the model Eq. (\ref{Hamiltonian1}) interpolates between the Ising model at $\theta=0$ and the quantum compass model (QCM) at $\theta=\pi/2$, in analogy to the 2D compass model \cite{Cin10}. The model was solved exactly and its ground state was found to have order along the easy axis as long as $\theta\neq \pi/2$, whereas it becomes a highly disordered spin-liquid ground state at $\theta=\pi/2$ \cite{Brz07,You2}. In addition, here we introduce three-site interactions of the XZY$-$YZX type, \begin{equation} H_{\rm 3-site} =J^*\sum_{i=1}^{N} (\sigma^x_{i-1}\sigma^z_{i}\sigma^y_{i+1} -\sigma^y_{i-1}\sigma^z_{i}\sigma^x_{i+1}), \label{Hamiltonian2} \end{equation} where $J^*$ is its strength. Such interactions between three adjacent sites emerge as an energy current of a compass chain in the nonequilibrium steady states, as discussed in the Appendix. The complete Hamiltonian of the 1D GCM with the three-site XZY$-$YZX interaction is \begin{eqnarray} {\cal H} =H_{\rm GCM}+H_{\rm 3-site}. 
\label{fullH} \end{eqnarray} \subsection{Exact solution} \label{exact} We employ the Jordan-Wigner transformation, which maps explicitly between quasispin operators and spinless fermion operators through the following relations \cite{Bar70}: \begin{eqnarray} \sigma _{j}^{z}& =&1-2c_{j}^{\dagger }c_{j}, \quad \sigma _{j}^{y}=i\sigma _{j}^{x}\sigma _{j}^{z}, \notag \\ \sigma _{j}^{x}& =& \prod_{i<j}\,(1-2c_{i}^{\dagger }c_{i}^{}) (c_{j}^{}+c_{j}^{\dagger}), \label{JW} \end{eqnarray} where $c_{j}$ and $c_{j}^{\dagger }$ are annihilation and creation operators of spinless fermions at site $j$ which obey the standard anticommutation relations, $\{c_{i},c_{j}\}=0$ and $\{c_{i}^{\dagger},c_{j}\}=\delta_{ij}$. By substituting Eqs. (\ref{JW}) into Eq. (\ref{fullH}), we arrive at a simple bilinear form of the Hamiltonian \eqref{fullH} in terms of spinless fermions: \begin{eqnarray} \cal{H}&=& \sum_{i=1}^{N'} \Big[J_{o} e^{i\theta} c_{2i-1}^{\dagger} c_{2i}^{\dagger} + J_{o} c_{2i-1}^{\dagger} c_{2i}^{} \nonumber \\ & &\hskip .5cm + J_{e}e^{-i\theta} c_{2i}^{\dagger} c_{2i+1}^{\dagger} + J_{e} c_{2i}^{\dagger} c_{2i+1}^{} \nonumber \\ & &\hskip .5cm - 2iJ^*(c_{2i-1}^{\dagger} c_{2i+1}+ c_{2i}^{\dagger}c_{2i+2}^{})+{\rm H.c.}\Big]. \end{eqnarray} Next, a discrete Fourier transformation for the odd and even sublattices is introduced, \begin{eqnarray} c_{2j-1}\!=\frac{1}{\sqrt{N'}}\sum_{k}e^{-ik j}a_{k},\text{ \ \ } c_{2j}\!=\frac{1}{\sqrt{N'}}\sum_{k}e^{-ik j}b_{k}, \end{eqnarray} with discrete momenta \begin{eqnarray} k=\frac{n\pi}{ N^\prime}, \quad n= -(N^\prime\!-1), -(N^\prime\!-3),\ldots, (N^\prime\! -1). \end{eqnarray} The Hamiltonian takes the following form, which is suitable to introduce the Bogoliubov transformation: \begin{eqnarray} \cal{H}&=& \sum_{k} \left[ B_k^{} a_{k}^{\dagger} b_{-k}^{\dagger}+ A_k^{} a_{k}^{\dagger} b_{k}^{} - A_k^* a_{k}^{}b_{k}^{\dagger}-B_k^* a_{k}^{}b_{-k}^{} \right. 
\nonumber \\ & & \left.\hskip .5cm - 4J^* \sin k (a_{k}^{\dagger} a_{k}^{} + b_{k}^{\dagger} b_{k}^{})\right], \label{Hamiltonian5} \end{eqnarray} where \begin{eqnarray} A_k&=& J_{o} + J_{e} e^{ik}, \nonumber\\ B_k&=& J_o e^{i\theta}-J_e e^{i(k-\theta)}. \end{eqnarray} To diagonalize the Hamiltonian Eq. (\ref{Hamiltonian5}), we rewrite it in the Bogoliubov-de Gennes form, \begin{eqnarray} {\cal H} &=& \sum_{k}\, \Gamma_k^{\dagger}\,\hat{M}_k^{}\,\Gamma_k^{}, \label{FT2} \end{eqnarray} where \begin{eqnarray} \hat{M}_k\!=\frac{1}{2}\!\left(\! \begin{array}{cccc} -G_k & 0 & S_k & P_k+Q_k \\ 0 & -G_k & P_k- Q_k & -S_k \\ S_k^* & P_k^*-Q_k^* & -G_k & 0 \\ P_k^*+Q_k^* & -S_k^* & 0 &-G_k \end{array}\!\right), \label{MainHamMatrix} \end{eqnarray} and $\Gamma_k^{\dagger}=(a_k^{\dagger},a_{-k}^{},b_k^{\dagger},b_{-k}^{})$. In Eq. (\ref{MainHamMatrix}) the compact notation is introduced: \begin{eqnarray} P_k&=&-i (J_e e^{ik}+J_o)\sin\theta, \nonumber \\ Q_k&=& (J_e e^{ik}-J_o)\cos\theta, \nonumber \\ S_k &=&J_o+J_e e^{ik}, \nonumber \\ G_k&=& 2J^* \sin k. \label{compactnotations} \end{eqnarray} The diagonalization of the matrix (\ref{MainHamMatrix}) is achieved by a four-dimensional Bogoliubov transformation which connects the operators $\{a_k^{\dagger},a_{-k}^{},b_k^{\dagger},b_{-k}^{}\}$ with four kinds of quasiparticles, $\{\gamma_{k,1}^{\dagger},\gamma_{k,2}^{\dagger}, \gamma_{k,3}^{\dagger},\gamma_{k,4}^{\dagger}\}$, \begin{eqnarray} \left( \begin{array}{c} \gamma_{k,1}^{\dagger} \\ \gamma_{k,2}^{\dagger} \\ \gamma_{k,3}^{ \dagger}\\ \gamma_{k,4}^{\dagger} \end{array} \right)=\hat{U}_{k} \left( \begin{array}{c} a_k^{\dagger} \\ a_{-k} \\ b_k^{\dagger} \\ b_{-k} \end{array}\right), \label{eq:2DXXZ_RDM} \end{eqnarray} where the rows of $\hat{U}_{k}$ are eigenvectors of the Bogoliubov-de Gennes equations. 
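As a numerical cross-check of this construction (not part of the original derivation), one can build $\hat{M}_k$ of Eq.~(\ref{MainHamMatrix}) explicitly and compare its eigenvalues with the closed-form spectra $\varepsilon_{k,j}$ quoted in Eq.~(\ref{excitationspectrum}) below. A minimal Python sketch, with our own (illustrative) function names:

```python
import numpy as np

def compact(k, Jo, Je, theta, Jstar):
    """Compact notation P_k, Q_k, S_k, G_k of the text."""
    P = -1j * (Je * np.exp(1j * k) + Jo) * np.sin(theta)
    Q = (Je * np.exp(1j * k) - Jo) * np.cos(theta)
    S = Jo + Je * np.exp(1j * k)
    G = 2.0 * Jstar * np.sin(k)
    return P, Q, S, G

def M_hat(k, Jo, Je, theta, Jstar):
    """Bogoliubov-de Gennes matrix M_k in the basis (a_k^+, a_{-k}, b_k^+, b_{-k})."""
    P, Q, S, G = compact(k, Jo, Je, theta, Jstar)
    M = np.array([
        [-G,              0.0,             S,     P + Q],
        [0.0,             -G,              P - Q, -S],
        [np.conj(S),      np.conj(P - Q),  -G,    0.0],
        [np.conj(P + Q),  -np.conj(S),     0.0,   -G],
    ])
    return 0.5 * M

def closed_form(k, Jo, Je, theta, Jstar):
    """Branches eps_{k,1..4} built from xi_k and tau_k."""
    P, Q, S, G = compact(k, Jo, Je, theta, Jstar)
    xi = abs(P)**2 + abs(Q)**2 + abs(S)**2
    tau = abs(P**2 - Q**2 + S**2)
    r = np.sqrt(xi**2 - tau**2)
    return np.array([-0.5 * (np.sqrt(xi + r) + G), -0.5 * (np.sqrt(xi - r) + G),
                      0.5 * (np.sqrt(xi - r) - G),  0.5 * (np.sqrt(xi + r) - G)])
```

Sorting both sets of eigenvalues, `np.linalg.eigvalsh(M_hat(...))` agrees with `closed_form(...)` to machine precision, and the particle-hole relations $\varepsilon_{k,4}=-\varepsilon_{-k,1}$, $\varepsilon_{k,3}=-\varepsilon_{-k,2}$ can be checked the same way.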
The diagonalization is readily performed and yields the eigenspectra $\varepsilon_{k,j}$ ($j=1,\cdots,4$): \begin{eqnarray} \varepsilon_{k,1(2)}=-\frac{1}{2}\left( \sqrt{\xi_k \pm \sqrt{\xi_k^2-\tau_k^2}}+G_k\right), \nonumber \\ \varepsilon_{k,3(4)}=\frac{1}{2}\left( \sqrt{\xi_k \mp \sqrt{\xi_k^2-\tau_k^2}}-G_k\right), \label{excitationspectrum} \end{eqnarray} where \begin{eqnarray} \xi_k&=&\vert P_k \vert^2 + \vert Q_k \vert^2 + \vert S_k \vert^2 , \nonumber \\ \tau_k&=&\vert P_k^2 - Q_k^2 + S_k^2 \vert. \end{eqnarray} The eigenenergies for various $J^*$ are labeled sequentially from the bottom to the top as $\varepsilon_{k,1},\cdots,\varepsilon_{k,4}$ in Fig. \ref{Fig1:spec}. One finds that finite $J^*$ removes the symmetry of the spectra with respect to the $\varepsilon=0$ energy, and the spectra are no longer invariant under the $k\to -k$ transformation, in contrast to the case of the GCM with $J^*=0$ shown in Fig. \ref{Fig1:spec}(a). The three-site interactions break both parity (P) symmetry and time reversal (T) symmetry. Note that the modes $k= 0,\pm \pi$ are time reversal invariant and their excitations are independent of $J^*$ as a consequence of vanishing $G_{k}$. We thus obtain the diagonal form of the Hamiltonian, \begin{eqnarray} {\cal H}=\sum_{k }\sum_{j=1}^{4} \varepsilon_{k,j}\, \gamma_{k,j}^{\dagger}\gamma_{k,j}^{} . \label{diagonalform} \end{eqnarray} The most important properties of the 1D quantum system can be explored in the ground state. The ground state of any fermion system follows the Fermi-Dirac statistics, and the lowest energy is obtained when all the quasiparticle states with negative energies are filled by fermions. More precisely, in the thermodynamic limit ($N\to\infty$) the ground state of the system, $|\Phi_0\rangle$, corresponds to the configuration with chemical potential $\mu=0$, where all the states with $\varepsilon_{k,j}<0$ are occupied and the ones with $\varepsilon_{k,j}\ge 0$ are empty. 
This state is realized by means of the corresponding occupation numbers, \begin{equation} n_{k,j}=\langle\Phi_0\vert\gamma_{k,j}^{\dagger}\gamma_{k,j}^{} \vert\Phi_0\rangle = \left\{ \begin{array}{l l} 0 & \quad {\rm for}\;\varepsilon_{k,j} \ge 0,\\ 1 & \quad{\rm for}\;\varepsilon_{k,j}<0. \end{array} \right. \end{equation} One recognizes that the Bogoliubov-de Gennes Hamiltonian (\ref{MainHamMatrix}) actually acts in an artificially enlarged Nambu-spinor space and respects an emergent particle-hole symmetry (PHS) ${\cal C}$, i.e., ${\cal C}\hat{M}_k{\cal C}=-\hat{M}_{-k}$, with ${\cal C}^2=1$. In this particle-hole space, the extra $\mathbb{C}^{2}$ degree of freedom leads to two copies of the actual excitation spectrum: a particle and a hole copy emerge simultaneously. The PHS implies here that $\gamma_{k,4}^{\dagger}$=$\gamma_{-k,1}^{}$, $\gamma_{k,3}^{\dagger}$=$\gamma_{-k,2}^{}$, as is evidenced in Fig. \ref{Fig1:spec}. The bands with positive energies correspond to the electron excitations, while the negative ones are the corresponding hole excitations. When all quasiparticle states above the Fermi level are empty, the ground state energy may be expressed as: \begin{eqnarray} E_0 = -\frac{1}{2} \sum_{k} \sum_{j=1}^4 \left\vert \varepsilon_{k,j} \right\vert. \label{E0expression} \end{eqnarray} Accordingly, the gap is determined by the absolute value of the difference between the second and third energy branches, \begin{equation} \Delta=\min_{k} \left\vert\varepsilon_{k,2}-\varepsilon_{-k,3}\right\vert. \end{equation} \begin{figure}[t!] \includegraphics[width=\columnwidth]{fig1.eps} \caption{ The energy spectra $\varepsilon_{k,j}$ ($j=1,\cdots,4$) for increasing $J^*$: (a) $J^*$ = 0, (b) $J^*$ = 0.239, (c) $J^*$ = 2, and (d) $J^*$ = 5. Parameters are as follows: $J_o = 1$, $J_e = 4$, $\theta=\pi/3$. 
} \label{Fig1:spec} \end{figure} One finds that with the increase of $J^*$, the minimum of $\varepsilon_{k,3}$ bends down until it touches $\varepsilon=0$ when $J^*$ reaches a threshold value $J_{c,1}^*$, i.e., $\Delta$ = 0; cf. Fig. \ref{Fig1:spec}(b). A gapless mode shows up at some incommensurate momentum $k_{ic}$ and the spectrum vanishes quadratically there. Further increase of $J^*$ leads to a band inversion between portions of $\varepsilon_{k,2}$ and $\varepsilon_{k,3}$. There is a negative-energy region of $\varepsilon_{k,3}$ in $k$ space shown in Fig. \ref{Fig1:spec}(c), and there are two Fermi points across the Fermi surface. When $J^*$ exceeds another threshold value $J_{c,2}^*$ the energy spectrum of spinless fermions may also have two additional Fermi points \cite{Lou04}, as observed in Fig. \ref{Fig1:spec}(d). A Lifshitz transition occurs following the topological change of the Fermi surface in the Brillouin zone. \section{Correlations and quantum phase transitions} \label{sec:cor} In order to characterize the QPTs, we studied the nearest neighbor spin correlation functions defined by \begin{eqnarray} C^{\alpha}_{l}&=&-\frac{2}{N}\sum_{i=1}^{N'}\langle \sigma_{i}^\alpha \sigma_{i+l}^\alpha \rangle, \end{eqnarray} where $l=1$ $(-1)$ and the superscript $\alpha=x,y,z$ denotes the Cartesian component, and the $z$ component of the scalar chirality operator \cite{Wen89}, \begin{eqnarray} {\cal \chi}^{z} = -\frac{1}{N}\sum_{i=1}^{N} \langle {\sigma}_{i}^z \vec{z}\cdot [\vec{\sigma}_{i-1}\times\vec{\sigma}_{i+1}]\rangle. \label{chir} \end{eqnarray} The scalar chirality operator can act as a local order parameter for states without PT symmetry. As shown in Fig. \ref{Fig:CF}, the ground state has finite nearest neighbor correlation functions for $J^*=0$, among which the $x$ components $\{C_l^x\}$ dominate for $\theta=\pi/3$, implying that the adjacent spins are antiparallel and aligned with a canted angle with respect to the $x$ axis. 
Indeed, the ground state of the GCM is a canted N\'eel (CN) phase for $\theta<\pi/2$. \begin{figure}[t!] \begin{center} \includegraphics[width=\columnwidth]{fig2a.eps} \end{center} \caption{ The nearest neighbor correlations $C^\alpha$ on even bonds and the chirality $\chi^\alpha$ for increasing $J^*$ at $h=0$. Parameters are as follows: $J_o = 1$, $J_e = 4$, $\theta=\pi/3$. } \label{Fig:CF} \end{figure} With the increase of $J^*$, the nearest neighbor correlation functions initially remain constant. After $J^*$ surpasses $J_{c,1}^*$, the system enters a gapless chiral-I phase, characterized by a nonzero ${\cal \chi}^{z}$. In this chiral-I phase, the $x$ components $C_l^x$ decrease while $C_l^y$ and $C_l^z $ grow as $J^*$ increases, but they saturate quickly. When $J^*>J_{c,2}^*$, the system enters the chiral-II phase, where ${\cal \chi}^{z}$ grows rapidly and $\{C_l^\alpha\}$ ($\alpha=x$, $y$, and $z$) decrease simultaneously. In the fermionic picture different phases correspond to different Fermi-surface topologies (different numbers of Fermi points) for fermions. In particular, the state with two Fermi points (chiral-I phase) is distinct from the state with four Fermi points (chiral-II phase) \cite{Lou04}. Both spin-liquid phases have gapless excitations; however, the appearance of new Fermi points $k_F$ when the control parameter crosses a critical value is accompanied by discontinuities in the correlation functions, which is a general feature. We remark that the number of gapless modes determines the effective central charge and the coefficients of the area-law violating term of the bipartite entanglement entropy \cite{Eisert,Eloy12}. Notably, the chiral-II phase is a distinctive phase of the critical XX model with three-site XZY$-$YZX interactions added \cite{Lou04,Liu12,Der11,Kro08,Tit03,Top12}, while this phase is absent for the anisotropic XY model \cite{Lei15}. 
Here we observe that the three-site XZY$-$YZX interactions in the GCM surprisingly trigger both chiral phases for arbitrary $\theta$, and the two different Tomonaga-Luttinger liquids reflect the importance of the Fermi surface topology. \begin{figure}[b!] \includegraphics[width=7cm]{fig3.eps} \caption{ The critical value of $J^*$ as a function of $\theta$. Parameters are as follows: $J_o$=1, $J_e$=4. } \label{Lc} \end{figure} The critical values $J_{c,1}^*$, $J_{c,2}^*$ and the corresponding incommensurate momentum $k_{ic}$ are determined by \begin{eqnarray} \varepsilon_{k_{ic},3(4)}=0 , \quad \partial \varepsilon_{k_{ic},3(4)}/\partial k=0. \label{eqcrit} \end{eqnarray} This leads to the following quartic equation for $x_{ic}=\cos k_{ic}$: \begin{eqnarray} x_{ic}^4 + c_3 x_{ic}^3 + c_2 x_{ic}^2 +c_0 =0, \end{eqnarray} where \begin{eqnarray} c_3&=&4(J_o^2 + J_e^2)/(3J_o J_e \sin^2\theta), \nonumber \\ c_2&=&(J_o^2+J_e^2)^2/(3J_o^2 J_e^2 \sin^4\theta)-4\cot^4\theta/3+2/3, \nonumber \\ c_0&=&-1/3. \nonumber \end{eqnarray} This quartic equation can be solved analytically, but the resulting expressions are rather unwieldy. We plot the critical lines with respect to $\theta$ in Fig. \ref{Lc}. One finds that in the Ising limit, i.e., for $\theta \to 0$, it yields \begin{eqnarray} J^*_{c,1} \to \textrm{min} (J_o, J_e) \quad {\rm and} \quad J^*_{c,2} \to \textrm{max} (J_o, J_e). \end{eqnarray} In the compass limit, i.e., for $\theta \to \pi/2$, we have \begin{eqnarray} J^*_{c,1} \to 0 \quad {\rm and} \quad J^*_{c,2} \to \textrm{max} (J_o, J_e). \end{eqnarray} In other words, for $\theta=\pi/2$ the system has an emergent $\mathbb{Z}_2$ symmetry and the ground state cannot be ordered; any infinitesimal $J^*$ then drives the system into the gapless chiral-I phase. For the parameters mostly used in this paper, i.e., $J_o=1$, $J_e=4$, $\theta=\pi/3$, one finds $J^*_{c,1}=0.239$ and $J^*_{c,2}=4.048$. 
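These critical couplings can also be cross-checked numerically without solving the quartic: for each $k$ the branch $\varepsilon_{k,3}$ ($\varepsilon_{k,4}$) touches zero when $2J^*\sin k$ equals $\sqrt{\xi_k-\sqrt{\xi_k^2-\tau_k^2}}$ ($\sqrt{\xi_k+\sqrt{\xi_k^2-\tau_k^2}}$), so minimizing these ratios over the Brillouin zone yields $J^*_{c,1}$ and $J^*_{c,2}$. A short Python sketch (our own naming, not from the original analysis):

```python
import numpy as np

def critical_couplings(Jo, Je, theta, nk=400001):
    """Scan the gap-closing conditions eps_{k,3}=0 and eps_{k,4}=0
    over 0 < k < pi to locate the thresholds J*_{c,1} and J*_{c,2}."""
    k = np.linspace(1e-6, np.pi - 1e-6, nk)
    P = -1j * (Je * np.exp(1j * k) + Jo) * np.sin(theta)
    Q = (Je * np.exp(1j * k) - Jo) * np.cos(theta)
    S = Jo + Je * np.exp(1j * k)
    xi = np.abs(P)**2 + np.abs(Q)**2 + np.abs(S)**2
    tau = np.abs(P**2 - Q**2 + S**2)
    r = np.sqrt(np.maximum(xi**2 - tau**2, 0.0))
    # eps_{k,3} = 0  <=>  2 J* sin k = sqrt(xi - r)   (lower positive branch)
    # eps_{k,4} = 0  <=>  2 J* sin k = sqrt(xi + r)   (upper positive branch)
    Jc1 = np.min(np.sqrt(np.maximum(xi - r, 0.0)) / (2.0 * np.sin(k)))
    Jc2 = np.min(np.sqrt(xi + r) / (2.0 * np.sin(k)))
    return Jc1, Jc2
```

For $J_o=1$, $J_e=4$, $\theta=\pi/3$ this scan reproduces $J^*_{c,1}\simeq 0.239$ and $J^*_{c,2}\simeq 4.048$, and the Ising-limit values $\min(J_o,J_e)$ and $\max(J_o,J_e)$ as $\theta\to 0$.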
\section{Effect of transverse field} \label{sec:field} We now consider the case where the magnetic field $h$ is perpendicular to the easy plane of the spins, i.e., $\vec{h}=h\hat{z}$. In this case, the Zeeman term is given by \begin{eqnarray} {\cal H}_h= h\hat{z}\cdot\sum_{i=1}^{N'}(\vec{\sigma}_{2i-1}+\vec{\sigma}_{2i}), \end{eqnarray} where $h$ is the magnitude of the transverse external field. Subsequently, in the Nambu representation, the Hamiltonian matrix $\hat{M}_k$ (\ref{MainHamMatrix}) is modified in the following way: \begin{eqnarray} \hat{M}_k \to \hat{M}_k^{'}=\hat{M}_k -h \mathbb{I}_2 \otimes \sigma^z, \label{Mk_h} \end{eqnarray} where $\mathbb{I}_2$ is a ($2\times 2$) unit matrix. It is obvious that the external magnetic field plays the role of a chemical potential for the spinless fermions. \begin{figure}[t!] \includegraphics[width=\columnwidth]{fig4.eps} \caption{ The energy spectra $\varepsilon_{k,j}$ ($j=1,\cdots,4$) for increasing magnetic field $h$: (a) $h$ = 1, (b) $h$ = 2, and (c) $h$ = 3. The inset in (b) is an enlargement of the level crossing at the Fermi energy marked by the dashed circle below. Parameters are as follows: $J_o$=1, $J_e$=4, $\theta=\pi/3$, and $J^*=0.1$. } \label{Fig2:spec} \end{figure} After diagonalization, the four branches of energies $\varepsilon_{k,j}$, with $j=1,\cdots,4$, are given by the following expressions: \begin{eqnarray} \varepsilon_{k,1(2)}= -\frac{1}{2}\left(\sqrt{\zeta_k \pm \sqrt{\eta_k }}+G_k\right), \nonumber \\ \varepsilon_{k,3(4)}= \frac{1}{2}\left(\sqrt{\zeta_k \mp \sqrt{\eta_k }}-G_k\right), \label{excitationspectrum2} \end{eqnarray} where \begin{eqnarray} \zeta_k&=&\vert P_k \vert^2 + \vert Q_k \vert^2 + \vert S_k \vert^2+ h^2, \nonumber \\ \eta_k&=& (S_k^* Q_k + S_k Q_k^*)^2 -(S_k^* P_k - S_k P_k^*)^2 \nonumber \\ &+&(P_k^* Q_k + P_k Q_k^*)^2 + 4 \vert S_k \vert^2 h^2. \end{eqnarray} The magnetic field further breaks the T symmetry and polarizes the spins along the $z$ direction. 
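The closed-form branches above are easy to tabulate numerically. The sketch below (our own naming; $k_B=1$) implements Eq.~(\ref{excitationspectrum2}), with the $G_k$ shift entering $\varepsilon_{k,1(2)}$ with the same sign as in the zero-field spectra, as required by the particle-hole relation $\gamma_{k,3}^{\dagger}=\gamma_{-k,2}$; one can verify that the $h\to 0$ limit reproduces Eq.~(\ref{excitationspectrum}) and that the $k=0$ gap closes at $h_c=2\sqrt{J_oJ_e}\cos\theta$.

```python
import numpy as np

def field_spectra(k, h, Jo, Je, theta, Jstar):
    """Branches eps_{k,1..4} in a transverse field h (zeta_k, eta_k form)."""
    P = -1j * (Je * np.exp(1j * k) + Jo) * np.sin(theta)
    Q = (Je * np.exp(1j * k) - Jo) * np.cos(theta)
    S = Jo + Je * np.exp(1j * k)
    G = 2.0 * Jstar * np.sin(k)
    zeta = abs(P)**2 + abs(Q)**2 + abs(S)**2 + h**2
    # eta rewritten with explicitly real building blocks:
    # (S*Q + S Q*)^2 = 4 Re(conj(S)Q)^2,  -(S*P - S P*)^2 = 4 Im(conj(S)P)^2, etc.
    eta = ((2.0 * (np.conj(S) * Q).real)**2
           + (2.0 * (np.conj(S) * P).imag)**2
           + (2.0 * (np.conj(P) * Q).real)**2
           + 4.0 * abs(S)**2 * h**2)
    sq = np.sqrt(eta)
    lo = np.sqrt(max(zeta - sq, 0.0))   # clamp against float round-off
    hi = np.sqrt(zeta + sq)
    return np.array([-0.5 * (hi + G), -0.5 * (lo + G),
                      0.5 * (lo - G),  0.5 * (hi - G)])
```

For $J_o=1$, $J_e=4$, $\theta=\pi/3$ the third branch at $k=0$ vanishes at $h_c=2$, while for $h<h_c$ a finite gap remains.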
The analytical solution for $J^*$ = 0 has been scrutinized recently. One finds that an increasing transverse field induces a finite transverse polarization $\langle \sigma_i^z\rangle$ and drives the system into a saturated polarized phase above the critical field \cite{You1}. The field-induced QPT is of second order for arbitrary angle $\theta$ and occurs at the critical value, \begin{eqnarray} h_c = 2 \sqrt{J_o J_e}\cos\theta. \label{hc} \end{eqnarray} \begin{figure}[t!] \begin{center} \includegraphics[width=\columnwidth]{newfig5.eps} \end{center} \caption{ The nearest neighbor correlations $C^\alpha$ on even bonds and the chirality $\chi^\alpha$ for increasing $J^*$ at $h = 3$. Parameters are as follows: $J_o = 1$, $J_e = 4$, $\theta=\pi/3$. } \label{Fig:CF2} \end{figure} The field-induced criticality is situated at momentum $k= 0$, where $G_k$ does not play a role, see Eq. (\ref{compactnotations}). Figure \ref{Fig2:spec} shows the energy spectra obtained for three typical values of $h$ and fixed weak $J^*=0.1$. We find that a finite gap separates occupied from empty bands except when $h=h_c$, see Eq. (\ref{hc}). A small value of $J^*$ does not modify the critical field, and the gap vanishes linearly for $\theta\neq\pi/2$, see the inset in Fig. \ref{Fig2:spec}(b). For $h>h_c$ the gap reopens and grows with increasing $(h-h_c)$, see Fig. \ref{Fig2:spec}(c). \begin{figure}[t!] \includegraphics[width=\columnwidth]{fig5.eps} \caption{ The gap $\Delta$ as a function of $h$ and $J^*$. Parameters are as follows: $J_o$=1, $J_e$=4, $\theta=\pi/3$. } \label{Fig1:gap} \end{figure} The nearest neighbor correlation functions $\{C_l^\alpha\}$ ($\alpha$=$x$, $y$, and $z$) and the $z$ component of the scalar chirality operator ${\cal \chi}^z$ for increasing $J^*$ at $h=3$ are shown in Fig. \ref{Fig:CF2}. A finite magnetic field expands the range of the CN phase and increases both $J_{c,1}^*$ and $J_{c,2}^*$, see Fig. \ref{Fig:CF2}. 
The $z$ components $\{C_l^z\}$ dominate over the $x$ components $\{C_l^x\}$ for small $J^*$ and $\theta=\pi/3$, suggesting that the spins are aligned along the $z$ axis according to the sign of $\{C_l^z\}$. The correlation functions are found to be almost independent of $J^*$ as long as the system is within the polarized state, but they change in a discontinuous way at the phase transitions. As $J^*$ rises above the critical value $J_{c,1}^*$, a nonzero chirality ${\cal \chi}^{z}$ starts to grow and saturates. One finds that $C_l^y$ and $C_l^z$ decrease and change sign from negative to positive values upon increasing $J^*$, which is in contrast to the trend observed for $C_l^x$. A sharp upturn of ${\cal \chi}^{z}$ occurs for $J^*>J_{c,2}^*$, and it continues to increase with $J^*$. Simultaneously, all the correlation functions $\{C_l^\alpha\}$ ($\alpha=x,y,z$) decrease strongly towards zero when the system enters the chiral-II phase. To present a three-dimensional panorama of the excitation gap, we display $\Delta$ for varying $h$ and $J^*$ in Fig. \ref{Fig1:gap}. The gap $\Delta$ diminishes for large values of $J^*$. \begin{figure}[t!] \includegraphics[width=\columnwidth]{fig6.eps} \caption{ Magnetic phase diagram of the 1D GCM as a function of the transverse field $h$ and the three-site XZY$-$YZX interaction $J^*$. Parameters are as follows: $J_o$=1, $J_e$=4, $\theta=\pi/3$. } \label{phd2} \end{figure} Similarly, we can determine the critical lines $J^*_{c,1(2)}$ and the zero-gap modes $k_{ic}$ using the relations in Eq. (\ref{eqcrit}). The phase diagram is shown in Fig. \ref{phd2}. The phase diagram at finite three-site XZY$-$YZX interaction and magnetic field consists of four phases: (i) canted antiferromagnetic, (ii) polarized, (iii) chiral-I, and (iv) chiral-II. A tricritical point is determined by the intersection of both critical lines and can be obtained analytically: \begin{eqnarray} h_c=2\sqrt{J_o J_e}\cos \theta, \quad J_c^*= J_o J_e \cos^2\theta/(J_o+J_e). 
\end{eqnarray} In the special case of $\theta=\pi/2$, the CN phase is never stable. \section{THERMODYNAMIC PROPERTIES} \label{sec:T} Since the exact solution of the GCM with three-site interaction and the external field is at hand, it is straightforward to obtain its complete thermodynamic properties at finite temperature. All quantum phase transitions of the present 1D GCM are of second order. Among many thermodynamic quantities, the specific heat and the magnetic susceptibility are easy to measure, and both of them are proportional to the electronic density of states at the Fermi energy. For the particle-hole excitation spectrum (\ref{excitationspectrum2}), the free energy of the quantum spin chain at temperature $T$ reads \begin{eqnarray} {\cal F}= - k_B T \sum_k\sum_{j=1}^4 \ln\left(2\cosh\frac{\varepsilon_{k,j}}{2k_B T}\right). \end{eqnarray} The low-temperature heat capacity is \begin{eqnarray} \label{cv} C_V(T)&=&-T\left(\frac{\partial^2{\cal F}}{\partial T^2}\right)_h \nonumber \\ &=& k_B \sum_k \sum_{j=1}^{4} \frac{(\varepsilon_{k,j}/2k_B T)^2} { \cosh^2 (\varepsilon_{k,j}/2k_BT)}. \end{eqnarray} The magnetic susceptibility is given by \begin{eqnarray} \label{chif} \chi(T)&=&-\left(\frac{\partial^2{\cal F}}{\partial h^2}\right)_T = \frac{1}{2 }\sum_k\sum_{j=1}^{4}\left\{ \frac{\partial^2\varepsilon_{k,j}}{\partial h^2} \tanh\left( \frac{\varepsilon_{k,j}}{2k_BT}\right) \right. \nonumber \\ &+& \left.\left(\frac{\partial\varepsilon_{k,j}} {\partial h}\right)^2\left[2k_BT\cosh^2\left(\frac{ \varepsilon_{k,j}}{2k_BT}\right)\right]^{-1}\right\}. \end{eqnarray} \begin{figure}[t!] \includegraphics[width=8.4cm]{fig7.eps} \caption{ The thermodynamic properties for two values of the field, $h=1$ and $h=3$, at fixed temperature $T=0.01$: (a) the specific heat $C_V$, (b) the magnetic susceptibility $\chi$. The inset shows the Wilson ratio $R_W$ (\ref{wira}) as a function of the three-site XZY$-$YZX interaction $J^*$ for $h=1$ and $h=3$. 
Parameters are as follows: $J_o$=1, $J_e$=4, $\theta=\pi/3$. } \label{RW} \end{figure} At low temperatures the specific heat of a metal depends linearly on $T$ due to the contribution from the electrons within the energy interval $k_BT$ near the Fermi surface, while the magnetic susceptibility is independent of temperature owing to the fact that only the electrons within the energy $\mu_B gH$ near the Fermi surface contribute to the magnetization. The Sommerfeld-Wilson ratio (Wilson ratio in short) is a parameter which characterizes strongly correlated Fermi liquids. It is defined as a dimensionless ratio of the zero-temperature magnetic susceptibility $\chi$ and the coefficient of the linear term $\propto T$ in the electronic specific heat $C_V(T)$ \cite{Wilson}, \begin{eqnarray} R_{\rm W}=\frac{1}{3}\left(\frac{2\pi k_B} {\mu_B g_{\rm Lande}} \right)^2\frac{T\chi(T)}{C_V(T)}, \label{wira} \end{eqnarray} where $k_B$ is Boltzmann's constant, $\mu_B \equiv e\hbar/(2mc) $ is the Bohr magneton, and $g_{\rm Lande}\simeq 2$ is the Land\'e factor. This quantity measures the strength of magnetic fluctuations versus thermal fluctuations. Figure \ref{RW} shows the specific heat $C_V(T)$ and the magnetic susceptibility $\chi(T)$ for increasing $J^*$, in a range which covers all phases. In a 1D antiferromagnet, the zero-temperature magnetic susceptibility exhibits a square-root divergence across the critical fields. The Wilson ratio (\ref{wira}) undergoes an increase due to sudden changes in the density of states near the critical fields \cite{Gaun13}. $R_{\rm W}=1$ in the free-electron limit when $J^*\to\infty$. However, we notice that $R_{\rm W}$ deviates from 1 in the chiral-I phase. In particular, $R_{\rm W}$ is larger there than in the chiral-II phase. Furthermore, $R_{\rm W}$ is enhanced by increasing magnetic field, see the inset in Fig. \ref{RW}. 
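The thermodynamic formulas above are straightforward to evaluate and to self-check numerically: the analytic $C_V$ of Eq.~(\ref{cv}) must agree with a centered finite-difference second temperature derivative of ${\cal F}$, and likewise $\chi$ of Eq.~(\ref{chif}) with $-\partial^2{\cal F}/\partial h^2$. A minimal Python sketch (our own naming; $k_B=1$; spectra of Eq.~(\ref{excitationspectrum2}) on a discrete $k$ grid):

```python
import numpy as np

def spectra_grid(h, Jo=1.0, Je=4.0, theta=np.pi/3, Jstar=0.5, nk=2001):
    """All four branches eps_{k,j} on a uniform k grid, transverse field h."""
    k = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    P = -1j * (Je * np.exp(1j * k) + Jo) * np.sin(theta)
    Q = (Je * np.exp(1j * k) - Jo) * np.cos(theta)
    S = Jo + Je * np.exp(1j * k)
    G = 2.0 * Jstar * np.sin(k)
    zeta = np.abs(P)**2 + np.abs(Q)**2 + np.abs(S)**2 + h**2
    eta = ((2.0 * (np.conj(S) * Q).real)**2
           + (2.0 * (np.conj(S) * P).imag)**2
           + (2.0 * (np.conj(P) * Q).real)**2
           + 4.0 * np.abs(S)**2 * h**2)
    sq = np.sqrt(eta)
    lo = np.sqrt(np.maximum(zeta - sq, 0.0))
    hi = np.sqrt(zeta + sq)
    return np.array([-0.5 * (hi + G), -0.5 * (lo + G),
                      0.5 * (lo - G), 0.5 * (hi - G)])

def free_energy(T, h):
    """F = -T * sum_{k,j} ln[2 cosh(eps_{k,j} / 2T)]  (k_B = 1)."""
    eps = spectra_grid(h)
    return -T * np.sum(np.log(2.0 * np.cosh(eps / (2.0 * T))))

def specific_heat(T, h):
    """Analytic C_V = sum (eps/2T)^2 / cosh^2(eps/2T)."""
    x = spectra_grid(h) / (2.0 * T)
    return float(np.sum(x**2 / np.cosh(x)**2))
```

A centered difference $-T[{\cal F}(T+\delta)-2{\cal F}(T)+{\cal F}(T-\delta)]/\delta^2$ then matches `specific_heat` at the sub-percent level, confirming the internal consistency of the two expressions.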
The Wilson ratio can be measured experimentally, as, for instance, in a recent experiment on a gapped spin-1/2 Heisenberg ladder compound (C$_7$H$_{10}$N)$_2$CuBr$_2$~\cite{Nin12}. \section{Magnetoelectric effect } \label{sec:mee} Next we consider the magnetoelectric effect (MEE), where the roles of magnetization and polarization can be interchanged. A key quantity characterizing the MEE is the linear magnetoelectric susceptibility, which describes the dependence of the magnetization on the electric field, or of the polarization on the magnetic field. \begin{figure}[t!] \includegraphics[width=\columnwidth]{fig8.eps} \caption{ Electric polarizations (see legend) as functions of the external field $h$ for: (a) $J^*=0$, (b) $J^*=0.5$, and (c) $J^*=4.5$. Parameters are as follows: $J_o$=1, $J_e$=4, $\theta=\pi/3$. } \label{OP_h} \end{figure} Due to its particular form, the three-spin interaction was claimed to contribute naturally to the ferroelectricity via the Katsura-Nagaosa-Balatsky (KNB) formula \cite{Kat05}, in which the local spins (magnetic moments) and the local polarization are coupled, \begin{eqnarray} \vec{P} = \gamma \hat{e}_{ij} \times (\vec{\sigma}_i \times \vec{\sigma}_j), \label{polarization} \end{eqnarray} where $\hat{e}_{ij}$ is the unit vector connecting the neighboring spins $\vec{\sigma}_i$ and $\vec{\sigma}_j$, with a material-dependent coupling coefficient $\gamma$. Here we place the chain along the $x$ direction in real space, i.e., $\hat{e}_{ij}=(1,0,0)$. Considering a particular component ($z$ here, to be specific) of the spin current, \begin{eqnarray} \frac{d \sigma_l^z}{dt}= i[{\cal H}, \sigma_l^z]=- {\rm div} j_l^z, \end{eqnarray} which defines the current $j_l^z$ and the corresponding $P_l^y$ by Eq. (\ref{polarization}). The {\it electric polarization} has two sources \cite{Men15}. 
The first term originates from the spin-current model, given by \begin{equation} P_1^y \propto \langle \sigma_l^x\sigma_{l+1}^y-\sigma_l^y\sigma_{l+1}^x\rangle, \label{p1} \end{equation} which couples to the $y$ component of the electric field $\vec{E}$ induced by the Dzyaloshinskii-Moriya interaction. Through the relation $\vec{P_1}=(\partial{\cal H}/\partial\vec{E})$, the absence of the external electric field $\vec{E}$ in the Hamiltonian $\cal{H}$ suggests that it has little contribution to the electric polarization $P_1^y$. However, as shown in Fig. \ref{OP_h}, $P_1^y$ is induced in the presence of the magnetic field $h$ as long as the phase is chiral, and it is larger in the chiral-II phase than in the chiral-I phase. \begin{figure}[t!] \includegraphics[width=\columnwidth]{fig9.eps} \caption{ The evolution of the electric polarization contributions $P_n^y$ with increasing $h$ at different temperatures $T$ for: (a) $P_1^y$ (\ref{p1}), and (b) $P_2^y$ (\ref{p2}). Parameters are as follows: $J_o$=1, $J_e$=4, $\theta=\pi/3$, $J^*=0.5$. } \label{ChangeT} \end{figure} \begin{figure}[b!] \includegraphics[width=\columnwidth]{fig10.eps} \caption{The evolution of $P_1^y$ and $P_2^y$ upon reversing the magnetic field $h$. Parameters are as follows: $J_o$=1, $J_e$=4, $\theta=\pi/3$, $J^*=0.5$, $T=0.01$. } \label{ReversedH} \end{figure} Another contribution to the electric polarization may come from the spin current triggered by the three-site interactions in the following way \cite{Men15}: \begin{equation} P_2^y\propto - \langle \sigma_l^x\sigma_{l+1}^z\sigma_{l+2}^x +\sigma_l^y\sigma_{l+1}^z \sigma_{l+2}^y\rangle. \label{p2} \end{equation} The general form of the current operator is given in the Appendix. The form of $P_2^y$ is the well-known XZX$+$YZY type of three-site interaction and remains solvable within the Jordan-Wigner fermionization \cite{Tit03,Der11}. 
A little algebra shows that the three-site XZX$+$YZY interaction acts here as a momentum-dependent renormalization of the magnetic field $h$ in the Hamiltonian Eq. (\ref{Mk_h}). Manipulating $h$ thus affects a finite $P_2^y$ in an indirect way, as displayed in Fig. \ref{OP_h}. We find that $P_2^y$ is also induced by $h$, regardless of the phase. It has an opposite sign to $P_1^y$ and almost compensates its increase. Both $P_1^y$ and $P_2^y$ scale linearly with small $h$, indicating that they are triggered by the external magnetic field. This is in contrast to some models with two-spin interactions only, where the electric polarization can emerge only for a finite electric field. The compass model with three-site interactions verifies the proposal in Ref. \cite{Bro13}, and indeed exhibits a nontrivial magnetism-driven ferroelectricity. We observe in Fig. \ref{ChangeT} that the ferroelectric behavior is quite stable at moderate temperatures. An essential feature of the ferroelectric behavior is that the electric polarization can be reversed by the reversal of the magnetic field, as is verified in Fig. \ref{ReversedH}. \section{Summary and Conclusions} \label{sec:summa} In this paper we have considered the 1D generalized compass model Eq. (\ref{Hamiltonian1}), which interpolates between the Ising model (at $\theta=0$) and the maximally frustrated quantum compass model (at $\theta=\pi/2$) and includes three-site XZY$-$YZX interactions. We also investigated this model in the presence of an external magnetic field. Although the system is quantum and highly frustrated, we have shown that exact solutions of the corresponding model may be obtained through the Jordan-Wigner transformation. The XZY$-$YZX type of three-site interactions break both parity symmetry and time-reversal symmetry, and drastically modify the energy spectra, leading to two kinds of Tomonaga-Luttinger liquids.
We find that moderate three-site XZY$-$YZX interactions lead to a chiral-I state with two Fermi points in the representation of spinless fermions, while large three-site XZY$-$YZX interactions transform the system into a chiral-II state with four Fermi points. Accordingly, this modification of the Fermi-surface topology is accompanied by noticeable changes in the central charge, and affects ground-state properties such as nearest-neighbor correlation functions. We find that the $z$ component of the scalar chirality operator distinguishes the gapped and gapless phases, and also signals the abrupt change from the chiral-I to the chiral-II phase. In both spin-liquid phases, not only is the magnetization influenced by the magnetic field, but the polarization emerges even for $\vec{E}=0$ and is also affected by the magnetic field. To conclude, we emphasize that the advantage of the model considered here is its exact solvability, which in particular allows an accurate calculation of various dynamic quantities. The reported results may serve to test other approximate techniques used to study more realistic models. \acknowledgments W.-L.Y. acknowledges support by the Natural Science Foundation of Jiangsu Province of China under Grant No. BK20141190 and the NSFC under Grant No. 11474211. A.M.O. kindly acknowledges support by Narodowe Centrum Nauki (NCN, National Science Center) Project No. 2012/04/A/ST3/00331.
\section{Introduction} The idea of the integer lattice gas developed by Blommel \textit{et al.}~\cite{blommel2018integer} extends traditional Boolean lattice gases \cite{FHP,doolen1991lattice}, which allow only one particle per lattice link, by instead allowing an integer number of particles. This cures problems with the local density-dependent advection pre-factor that breaks Galilean invariance in those standard lattice gas models. It also includes ideal gas fluctuations in a consistent manner, and can therefore represent fluctuations at low densities. Historically, lattice Boltzmann methods were derived from lattice gas models~\cite{FHP,wolf2004lattice}. The exclusion principle that allowed for only one particle per lattice velocity caused the equilibrium distribution to be of the Fermi-Dirac rather than the Boltzmann form. This equilibrium distribution was the reason that the hydrodynamic limit of these lattice gases contained non-Galilean-invariant terms. The original lattice Boltzmann methods were exactly equivalent to these lattice gas methods~\cite{higuera1989boltzmann}. Qian and d'Humi\`eres modified the lattice Boltzmann collision operator to break the link to the underlying lattice gas model~\cite{qian1992lattice}, which enabled them to recover Galilean invariance in the hydrodynamic limit. Blommel and Wagner~\cite{blommel2018integer} were able to show that one can define a set of integer lattice gas models which have a corresponding entropic lattice Boltzmann method as their Boltzmann-averaged limit. This equivalence showed that the integer lattice gases share the improved level of Galilean invariance with their lattice Boltzmann counterparts, a significant improvement over Boolean lattice gases.
The recovery of a Galilean-invariant lattice gas (in the lattice Boltzmann sense, since no lattice-based method can be fully invariant) was striking, but the advance was mostly of theoretical interest, since the fundamental two-particle lattice gas collision operator was not computationally competitive with the corresponding lattice Boltzmann method. In this paper we develop a sampling collision operator for the lattice gas method, and we show that with such a collision operator the lattice gas can be competitive with the corresponding lattice Boltzmann method. Such a sampling approach was previously suggested by Boghosian~\cite{boghosian1997integer}, but at the time it was considered impractical to construct such a sampling collision operator. We show here that by focusing on a diffusive system, corresponding to a diffusive lattice Boltzmann method~\cite{wolf1995lattice}, it is indeed possible to construct an efficient sampling method for our integer lattice gas. In section II we introduce the lattice gas, and in section III we derive its Boltzmann average. Section IV is dedicated to validating the fluctuating and dynamic properties of the method for a few test cases. Section V is dedicated to the analysis of the improved computational efficiency of the method. \section{The Monte Carlo Lattice Gas algorithm} A lattice gas consists of an underlying lattice and a set of lattice velocities $v_i$, as well as occupation numbers $n_i(x,t)$ indicating the number of particles at lattice point $x$ at time $t$ associated with lattice velocity $v_i$. The distance between nearest-neighbor lattice sites constitutes the lattice spacing $\Delta x$. Time advances in discrete time steps $\Delta t$. The velocities $v_i$ are defined such that the lattice displacements $v_i\Delta t$ are lattice vectors, i.e. they connect lattice sites with each other. In particular, if $x$ is a lattice site, so is $x+v_i\Delta t$.
Typically the lattice velocities are the same for each lattice site and restricted in number such that the lattice displacements are confined to a (small) neighborhood. In general we will refer to the number of lattice velocities as $V$. When we say that for each lattice velocity $v_i$ there is an associated integer occupation number $n_i(x,t)$, we mean that $n_i(x,t)$ is the number of particles at lattice site $x$ at time $t$ that came from lattice site $x-v_i\Delta t$ in the previous time step. This allows us to define a local number of particles at each lattice site \begin{equation} N(x,t)=\sum_i n_i(x,t). \label{eqn:N} \end{equation} The time-evolution of the lattice occupation numbers $n_i$ can be written as \begin{equation} n_i(x+v_i\Delta t,t+\Delta t)=n_i(x,t)+\Xi_i, \end{equation} where $\Xi_i$ is referred to as the collision operator. In general this collision operator is stochastic and obeys the local conservation laws. In our case the local number of particles $N(x,t)$ will be left unchanged by the collision. In the integer lattice gas developed by Blommel \textit{et al.}~\cite{blommel2018integer} this collision operator was constructed as the net effect of many two-particle collisions that conserved mass and momentum. This allowed for an implementation that recovered independent Poisson-distributed fluctuations of the occupation numbers as well as an equilibrium distribution derivable from a maximum entropy consideration. The relaxation towards equilibrium was derived analytically, even including a non-linear relaxation regime. Despite these attractive features, the method did not constitute a viable replacement for fluctuating lattice Boltzmann methods, as for larger numbers of collisions the collision operator required an execution time significantly larger than the equivalent lattice Boltzmann execution time.
This prevented the integer lattice gas from becoming the method of choice for most practical applications. To alleviate this drawback we consider here a collision operator that performs all collisions in one step by directly sampling from the local equilibrium distribution. We expect that this will increase the performance of the integer lattice gas method. While we believe that this is possible in the general case, this paper focuses on the simpler case that only has mass (i.e. no momentum) conservation. The hydrodynamic limit of such a method recovers the diffusion equation. We leave the extension to hydrodynamic lattice gases to a future research project. We therefore aim to derive the lattice gas that is equivalent to the fluctuating lattice Boltzmann method for the diffusion equation~\cite{wagner2016fluctuating}. As in the case of the integer gas of Blommel \textit{et al.}~\cite{blommel2018integer} we impose the weights $w_i$ for the occupation numbers in local equilibrium. The collisions of Blommel \textit{et al.} were two-particle collisions such that the total number of particles as well as the momentum (although not the energy) was conserved in each collision. In the case of our diffusive lattice gas momentum is not conserved. This represents a significant simplification since only the number of particles needs to be conserved, and we can now consider single-particle collisions, which can be thought of physically as collisions of the particles with a matrix or a solvent that is not explicitly modeled. Collision rules equivalent to those of Blommel's integer lattice gas consist of selecting a particle at random. Since particles are not individually labeled in a lattice gas, this corresponds to picking a particle associated with velocity $v_{s}$ with probability $p_{s}=n_{s}/\sum_j n_j$. We impose a local equilibrium distribution for a number of particles $N$ at the lattice site, defined as a local ensemble average.
This is an average over many collisions, but with a fixed number of particles, i.e. without exchanging particles with neighbouring cells. This local equilibrium is then given by \begin{equation} f_i^{0}(N)=\langle n_i\rangle^0 = N w_i, \label{eqn:f0} \end{equation} where $\langle \cdots \rangle^0$ implies an equilibrium average over occupation numbers $n_i$ under local collisions, but without an exchange of particles with neighbouring sites. The $w_i$ are weights with \begin{equation} 1 = \sum_{i=1}^V w_i, \end{equation} equivalent to the weights used in lattice Boltzmann~\cite{qian1992lattice}, which are derived as a discretization of a Maxwell-Boltzmann distribution onto the lattice velocities $v_i$. In local equilibrium we require that detailed balance is obeyed, \textit{i.e.} in equilibrium the transition from state A to state B is just as likely as the reverse transition. This can be formally written as a constraint on the general transition probabilities $P_{i\rightarrow j}$: \begin{equation} N w_i P_{i\rightarrow j} = N w_j P_{j\rightarrow i}. \label{semidetailed} \end{equation} This tells us that the transition probabilities $P_{i\rightarrow j}$ for $i\neq j$ obey \begin{equation} P_{i\rightarrow j} \propto \min\left(1,\frac{w_j}{w_i}\right). \label{prob} \end{equation} This only defines the transition probabilities up to a pre-factor, so we define \begin{equation} P_{i\rightarrow j} = \left\{ \begin{array}{cc} \lambda_{ij} \min\left(1,\frac{w_j}{w_i}\right) & \mbox{ for }i\neq j \\ 1-\sum_{j (j\neq i)} P_{i\rightarrow j} &\mbox{ for }i=j \end{array}\right. \end{equation} where $\lambda_{ij}=\lambda_{ji}$ according to Eq.~(\ref{semidetailed}). For the most efficient simulations $\lambda_{ij}$ should be as large as possible to ensure the best possible acceptance rate for the collision.
This becomes particularly simple, and maximally efficient, if we choose \begin{equation} \lambda_{ij}=\max(w_i,w_j), \end{equation} and we get \begin{equation} P_{i\rightarrow j}=w_j, \end{equation} which clearly fulfills the detailed balance condition Eq.~(\ref{semidetailed}). Practically this amounts to selecting a particle at random with velocity $v_s$ and reassigning a new velocity $v_t$ with probability $w_t$. The effect of this collision on the occupation number $n_i$ can be written as a random variable \begin{equation} \vartheta^c_j = \delta_{t,j}-\delta_{s,j}, \end{equation} where the Kronecker delta $\delta_{i,j}$ is one for $i=j$ and zero otherwise. The collision operator can then be written as a sum of $C$ such simple collision operators \begin{equation} \Xi_i^C=\sum_{c=1}^C \vartheta^c_i. \label{eqn:XiC} \end{equation} This fully defines an integer lattice gas for the fluctuating diffusion equation, analogous to the integer lattice gas for hydrodynamic systems introduced by Blommel~\cite{blommel2018integer}. The disadvantage of these methods is that the time required for the collision operator scales linearly with the number of collisions. The number of collisions scales with the number of particles, $C\propto N$, and particularly for higher densities this approach will become slow. The key idea of this paper is to replace the collision operator of Eq. (\ref{eqn:XiC}) with one that simply samples the post-collision distribution from an appropriately chosen distribution and thereby performs all collisions in a single step. In general, for a system with $Q$ discrete velocities and $K$ conserved quantities this relates to the daunting problem of sampling from a distribution that lives on a $(Q-K)$-dimensional manifold in a $Q$-dimensional space~\cite{boghosian1997integer}. For this comparatively simple system, however, the problem becomes tractable. For many collisions we can obtain a unique local equilibrium probability for the set of occupation numbers $\{n_i\}$.
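As an illustration, the elementary collision just described (select a particle with probability $n_s/N$, reassign it to velocity $v_t$ with probability $w_t$) can be sketched in a few lines of Python. This is a minimal sketch using D2Q9 weights, not the implementation used for the benchmarks in this paper:

```python
import numpy as np

def collide_once(n, w, rng):
    """One elementary collision: pick a particle at random (its velocity s
    is chosen with probability n_s / N) and reassign it to a new velocity t
    with probability w_t, realizing P_{s->t} = w_t."""
    N = n.sum()
    s = rng.choice(len(n), p=n / N)   # velocity of the selected particle
    t = rng.choice(len(n), p=w)       # new velocity, drawn with weight w_t
    n[s] -= 1
    n[t] += 1

rng = np.random.default_rng(1)
w = np.array([4, 1, 1, 1, 1, 1/4, 1/4, 1/4, 1/4]) / 9.0  # D2Q9 weights
n = rng.multinomial(100, w)           # some initial occupation numbers
N0 = n.sum()
for _ in range(1000):                 # C = 1000 elementary collisions
    collide_once(n, w, rng)
assert n.sum() == N0                  # particle number is conserved
```

Each elementary collision costs a fixed amount of work, and $C$ grows with $N$; this linear cost per site is precisely what the sampling collision operator avoids.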
Since $N=\sum_i n_i$ of Eq.~(\ref{eqn:N}) is the number of particles at the lattice site (which is fixed for the local collisions), the mean of these local equilibrium distributions is given by $N w_i$. The probability for a set of occupation numbers $\{n_i\}$ is given by the multinomial distribution \begin{equation} P(\{n_i\}) = \left\{ \begin{array}{ll} N! \prod_{i=1}^V \frac{w_i^{n_i}}{n_i!} &\mbox{ if } \sum_i n_i=N,\\ \mbox{ }\\ 0 & \mbox{ otherwise.} \end{array}\right. . \label{Equil} \end{equation} This can be seen by realizing that this is entirely equivalent to the standard combinatorial problem of the cumulative outcome of $N$ trials with $V$ possible results with respective probabilities $w_i$. In this textbook example the probability of having $\{n_i\}$ occurrences of events $i$ is then given by Eq. (\ref{Equil}). This is often used as the example when the multinomial distribution is introduced~\cite{Guichard}. First we focus on the case of $\Xi^C_i$ in the limit $C\rightarrow \infty$, i.e. the case where we sample from the equilibrium distribution of Eq.~(\ref{Equil}). Direct sampling from the multinomial distribution is implemented in packages such as the GNU Scientific Library (GSL)~\cite{GNU}, which allows us to sample a set of $V$ random numbers with the probability given by Eq. (\ref{Equil}). We can then sample the redistribution of $N$ particles onto $V$ boxes with probabilities $w_i$ through \begin{equation} \left(\begin{array}{c}\hat{n}_0\\\vdots\\\hat{n}_V\end{array}\right) =\left(\begin{array}{c} (X_{w_0,\cdots,w_V}^N)_0\\\vdots\\(X_{w_0,\cdots,w_V}^N)_V\end{array}\right) \label{eqn:Xmulti} \end{equation} where $(X_{w_0,\cdots,w_V}^N)_i$ is the $i^{th}$ component of a multinomial sample. Note that this ensures that \begin{equation} \sum_i \hat{n}_i = N, \label{eqn:Ncons} \end{equation} \textit{i.e.} that the collision does conserve the number of particles.
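For instance, the post-collision state of Eq. (\ref{Equil}) can be drawn in one shot with any multinomial sampler; the following sketch uses NumPy in place of the GSL call used in the paper:

```python
import numpy as np

rng = np.random.default_rng(7)
w = np.array([4, 1, 1, 1, 1, 1/4, 1/4, 1/4, 1/4]) / 9.0  # D2Q9 weights
N = 500                                # particles at this lattice site

# one multinomial draw replaces the net effect of many local collisions
n_hat = rng.multinomial(N, w)
assert n_hat.sum() == N                # conservation is built in

# over many draws the sample mean approaches the equilibrium N * w_i
draws = rng.multinomial(N, w, size=20000)
assert np.allclose(draws.mean(axis=0), N * w, rtol=0.02)
```

A single draw costs roughly the same regardless of $N$, which is the source of the speedup over performing $C \propto N$ elementary collisions.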
Alternatively we can do this sequentially by picking new occupation numbers from a binomial distribution as follows: let $X_p^N$ be a binomially distributed random number with \begin{equation} P\left(X_p^N=n\right) = \left(\begin{array}{c} N\\n\end{array}\right) p^n (1-p)^{N-n}. \end{equation} We will need such binomially distributed numbers later even if we use multinomially distributed random numbers from Eq. (\ref{eqn:Xmulti}). To numerically obtain these binomially distributed random numbers one can use libraries like the GNU Scientific Library~\cite{GNU}. Since we expect the drawing of these random numbers to be a key factor in the overall performance of our algorithm, we also developed an algorithm in-house. This algorithm is described in appendix \ref{code}, and we found that it could be significantly faster than the GSL algorithm for a range of about $N\in[10,1000]$ average particles per lattice site. The collision operator is then defined by the following binomial sampling algorithm. Out of the available $N$ particles we pick the occupation number associated with $v_0$ such that each particle has a probability $w_0$ to be assigned to $n_0$, given by \begin{equation} \hat{n}_0 = X_{w_0}^N, \label{eqn:X0} \end{equation} leaving us with $\tilde{N}_1=N-\hat{n}_0$ particles. We now pick $\hat{n}_1$ particles out of the remaining $\tilde{N}_1$ particles. Because there are now fewer states available, the probability for a particle to be assigned to this occupation number has increased to $\tilde{w}_1=w_1/(1-w_0)$. We then pick \begin{equation} \hat{n}_1 = X_{\tilde{w}_1}^{\tilde{N}_1}. \label{eqn:X1} \end{equation} For the remaining occupation numbers we can define the remaining particles as \begin{equation} \tilde{N}_i=\tilde{N}_{i-1} -\hat{n}_{i-1} \label{eqn:Xi} \end{equation} and the normalized probability as \begin{equation} \tilde{w}_i= \frac{w_i}{1-\sum_{j=0}^{i-1} w_j}.
\end{equation} We sample the remaining occupation numbers as \begin{equation} \hat{n}_i = X_{\tilde{w}_i}^{\tilde{N}_{i}}. \label{eqnni} \end{equation} Note that because $\sum_i w_i=1$, the probability associated with the last weight is $\tilde{w}_V=1$, \textit{i.e.} the remaining particles will be assigned to the last occupation number $n_V$ with probability 1. Therefore, this algorithm ensures local conservation of particles and Eq. (\ref{eqn:Ncons}) is again fulfilled. This is of course to be expected, since this implementation is just a specific recipe for generating a multinomially distributed random number and is therefore mathematically equivalent to Eq. (\ref{eqn:Xmulti}). It will be seen below, however, that either algorithm can be more efficient, depending on the context in which they are used. In this case for $C\rightarrow \infty$ the sampling collision operator is then given by \begin{equation} \Xi_i^{C\to\infty} = \hat{n}_i-n_i. \label{fullsampling} \end{equation} For a finite number of collisions $C$ we need to consider the fraction of uncollided particles. The original algorithm of Blommel et al. \cite{blommel2018integer} used a fixed number of collisions, but there is no reason that there should be a fixed number of collisions at each lattice site at each time-step. Instead, we envisage here a fixed collision probability for each particle during the time-step $\Delta t$. This is both more realistic on physical grounds and easier to implement as a sampling algorithm. If each particle has the probability $\omega$ to be collided during the current time-step, and there are $N$ particles at a lattice site, then on average $N\omega$ collisions will occur. We can go a step further and find the probability distribution for the number of collided particles as \begin{equation} P(N^c) =\left( \begin{array}{c}N\\N^c\end{array}\right) \omega^{N^c} (1-\omega)^{N-N^c}.
\label{eqn:Nselect} \end{equation} To reproduce a finite number of collisions with the sampling collision operator, we take the number of particles $N^c$ that will undergo a collision and remove them at random from the occupation numbers; that is, we remove a binomially distributed random number of particles from each of the occupation numbers. The number of particles associated with velocity $v_i$ that undergo a collision is then selected through \begin{equation} n^\omega_i = X^{n_i}_\omega \label{eqn:Xomega} \end{equation} and we perform the sampling algorithm as above, except that we only redistribute $N^c=\sum_i n^\omega_i$ particles. If we denote the redistributed particles from the equivalent equation to Eq.~(\ref{eqnni}) as $\hat{n}^\omega_i$, then the collision operator becomes \begin{equation} \Xi_i^\omega = \hat{n}^\omega_i-n^\omega_i \label{eqn:Xiomega} \end{equation} and for $\omega=1$ we recover the full sampling collision operator of Eq.~(\ref{fullsampling}). This then defines an integer lattice gas for an arbitrary collision probability $\omega$. \section{Boltzmann average of the lattice gas} To compare the lattice gas results to the lattice Boltzmann method we will now examine a non-equilibrium ensemble average of the lattice gas evolution equation. We define the particle probability densities as \begin{equation} f_i(x,t) = \langle n_i(x,t) \rangle^{neq} \end{equation} where $\langle \cdots\rangle^{neq}$ implies a non-equilibrium average over an ensemble of microscopic realizations leading to the same macroscopic state. We define the density as \begin{equation} \rho(x,t) = \langle N(x,t)\rangle^{neq}. \end{equation} The evolution equation for the average particle densities is then \begin{equation} f_i(x+v_i,t+1) = f_i(x,t)+\Omega_i.
\label{MCLB} \end{equation} The Boltzmann collision operator can be obtained as an averaged lattice gas collision operator \begin{align} \Omega_i^\omega &= \langle\; \Xi_i^\omega\rangle^{neq}\nonumber\\ &= \langle \hat{n}^\omega_i - n^\omega_i \rangle^{neq}\nonumber\\ &= \omega (f^0_i - f_i), \label{eqn:LBcoll} \end{align} where $f^0$ is the local equilibrium distribution function defined in Eq. (\ref{eqn:f0}). This is exactly the form of the standard BGK lattice Boltzmann collision operator \cite{wagner2016fluctuating}. Note that because of the particular simplicity of the diffusive case this is a pure BGK collision operator, in contrast to the additional non-linear contributions that Blommel \textit{et al.}~\cite{blommel2018integer} found for the hydrodynamic case. In the hydrodynamic limit, the density obeys a diffusion equation \begin{equation} \partial_t \rho = \nabla [D \nabla (\rho \theta)], \label{diffeq} \end{equation} with \begin{align} D &= \left(\frac{1}{\omega}-\frac{1}{2}\right) \label{Ddef},\\ \theta & = \frac{1}{d}\sum_i w_i v_i^2, \end{align} where $d$ is the number of spatial dimensions. \section{Verification of the method} So far we have focused on the Boltzmann average of our lattice gas. The original reason we were interested in the lattice gas was its fluctuating properties. The fluctuations in an ideal gas have been discussed by Landau \cite[\S 114]{landau1969statistical}, where it is shown that for a classical Boltzmann gas the number of particles in a sub-volume is Poisson distributed. The argument is trivially extended to lattice gases, showing that (for large lattices) each occupation number $n_i$ should be Poisson distributed: \begin{equation} P(n_i)=\frac{\exp(-f_i^{eq})(f_i^{eq})^{n_i}}{n_i!}, \label{poisson} \end{equation} where \begin{equation} f_i^{eq} = \langle n_i \rangle \end{equation} is the global equilibrium distribution, corresponding to a full equilibrium average of the system.
In our case this is given by $w_i$ times the average number of particles per lattice cell. \begin{figure} \centering \includegraphics[width=\columnwidth]{AdjustedDistributionsLowFixed.eps} \caption{The $n_i$ for the integer lattice gas are Poisson distributed. This example is for an average density of $N=10$.} \label{fig:Poisson} \end{figure} For explicit examples of the verification of the method we consider a square lattice with lattice displacements consisting of all displacements of $0$ or $\pm 1$ lattice sites in each dimension. We use a two-dimensional system, often referred to as a D2Q9 model, where D2 refers to the fact that the system has 2 dimensions and Q9 indicates the number of lattice velocities associated with each lattice site, \textit{i.e.}~$V=9$ in our notation above. We show in Figure \ref{fig:Poisson} that we indeed recover Poisson distributions for the $n_i$ as well as for $\rho$ in the integer lattice gas method. In order to do this, we run a simulation initialized with a sine wave with an average of 10 particles per lattice site on a 32x32 lattice grid. We run the simulation for 10000 timesteps to allow it to relax to approximate equilibrium, then record the numbers of particles in each velocity at each lattice site for 1000 timesteps, average this data, and normalize it. This measured Poisson distribution is free of the inaccuracies that resulted from the continuous densities of the fluctuating lattice Boltzmann method presented in \cite{wagner2016fluctuating}. \begin{figure} \centering \includegraphics[width=\columnwidth]{AmplitudesFinal.eps} \caption{Amplitudes of decaying sine wave density profiles for different relaxation times, compared to theoretical predictions, using a D2Q9 system of 32 by 32 lattice points with an average of 1000 particles per lattice site.} \label{fig:amplitudes} \end{figure} To verify that the lattice gas does indeed have Eq. (\ref{diffeq}) as its hydrodynamic limit we look at an example problem that has an analytical solution.
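The Poisson statistics can be checked on a small system. The following is an independent sketch (smaller lattice and shorter runs than in the text, full collisions with $\omega=1$), not the simulation code used for the figure:

```python
import numpy as np

rng = np.random.default_rng(3)
L, Nav = 16, 10                       # small lattice, 10 particles per site
w = np.array([4, 1, 1, 1, 1, 1/4, 1/4, 1/4, 1/4]) / 9.0   # D2Q9 weights
c = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1),
     (1, 1), (-1, -1), (1, -1), (-1, 1)]                   # velocities

# occupation numbers n[i, x, y], initialized Poisson around w_i * Nav
n = rng.poisson(Nav * w[:, None, None], size=(9, L, L))
M0 = int(n.sum())                     # total particle number

def step(n):
    # full collision (omega = 1): resample every site from the multinomial
    N = n.sum(axis=0)
    for x in range(L):
        for y in range(L):
            n[:, x, y] = rng.multinomial(N[x, y], w)
    # streaming: shift each population along its lattice velocity
    for i, (cx, cy) in enumerate(c):
        n[i] = np.roll(n[i], shift=(cx, cy), axis=(0, 1))
    return n

for _ in range(300):                  # relax toward global equilibrium
    n = step(n)
samples = []
for _ in range(100):                  # then collect rest-particle counts
    n = step(n)
    samples.append(n[0].ravel().copy())
s = np.concatenate(samples)

# Poisson statistics: mean and variance agree, both close to w_0 * Nav
assert n.sum() == M0                  # global conservation
assert abs(s.mean() - w[0] * Nav) < 0.5
assert abs(s.var() - s.mean()) < 0.5
```

The equality of mean and variance of the sampled occupation numbers is the fingerprint of the Poisson distribution of Eq. (\ref{poisson}).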
We initialize the simulation with a sinusoidal density profile: \begin{equation} \rho(x,y,0)=N^{av}\left[1+\sin\left(\frac{2\pi x}{L_x}\right)\right] \end{equation} where $N^{av}$ is the average number of particles. The evolution of Eq.~(\ref{diffeq}) with this initial condition has the analytical solution \begin{align} \rho(x,y,t)=& N^{av}\left[1+\sin\left(\frac{2\pi x}{L_x}\right)\exp\left(-\frac{4\pi^2 D t}{L_x^2}\right)\right]\\ =& N^{av} + A^{th}(t) \sin\left(\frac{2\pi x}{L_x}\right). \end{align} This defines the decay amplitude \begin{equation} A^\mathrm{th}(t) = N^{av}\exp\left(-\frac{4\pi^2 D t}{L_x^2}\right). \label{Ath} \end{equation} The mathematics are essentially identical to those presented in Blommel \textit{et al.} for a decaying shear wave. To implement an initial sine-wave profile in a lattice gas we face the difficulty that the density is not an integer, and a lattice gas needs to include fluctuations that are already averaged away in Eq. (\ref{diffeq}). We therefore need to start with an initial density profile whose occupation numbers are Poisson distributed around a sinusoidal mean. In particular we need \begin{equation} P(\rho) = N^{av} \left[1+\sin\left(\frac{2\pi x}{L_x}\right)\right]. \label{eqn:Prho} \end{equation} We achieve this by picking the occupation numbers $n_i(x,0)$ as Poisson-distributed random numbers with expectation value $w_i P(\rho)$. This procedure allows us to initialize non-integer-valued initial distributions and also implements the full equilibrium fluctuations in the initial configuration of the simulation. We then run our simulation and record the densities at each timestep. We utilize the method presented in Blommel \textit{et al.} to extract the amplitude from the data: \begin{equation} A^\mathrm{LG}(t) = \frac{\sum_{x=1}^{L_x}\sum_{y=1}^{L_y} \sin\left(\frac{2\pi x}{L_x}\right) N(x,y,t)}{ L_y\sum_x \sin^2\left(\frac{2\pi x}{L_x}\right)}.
\end{equation} We then compare this measured amplitude to the theoretical result of Eq. (\ref{Ath}) in Figure \ref{fig:amplitudes} and find excellent agreement. The simulation was performed on a 32x32 D2Q9 lattice system for various values of the relaxation time $\tau=1/\omega$, with each lattice point having an average of 1000 particles. The measured decays match the theory until the amplitude becomes so small that the averaging is insufficient and the fluctuations take over. This result validates our prediction that we can tune the diffusion constant of Eq. (\ref{Ddef}) by allowing for only partial collisions. An open, and rather interesting, question is whether lattice gases can also implement over-relaxation, as is often done in lattice Boltzmann methods. Preliminary results suggest that this is possible, at least for larger numbers of particles per cell. A detailed discussion of this subject, however, is outside the scope of this paper. \section{Computational efficiency} In the previous section we have shown that the novel sampling collision operator gives results that are essentially equivalent to the direct single-particle collision approach to integer lattice gases introduced by Blommel \textit{et al.}\cite{blommel2018integer}. A key ingredient in the algorithm is the sampling procedure that allows us to obtain binomially distributed random numbers in Eqs. (\ref{eqn:X0})--(\ref{eqn:Xi}) as well as Eq. (\ref{eqn:Xomega}). There exist open-source sampling algorithms, and we implemented both a sequential multinomial sampling algorithm and a direct multinomial sampling algorithm using the GSL library \cite{gough2009gnu}. We further improved the performance of our algorithm within a certain range by writing our own sampling algorithm, which we refer to as the Lookup Table method below. Details of this algorithm are given in Appendix \ref{code}.
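The sequential construction of Eqs. (\ref{eqn:X0})--(\ref{eqnni}) is easy to express in any language with a binomial sampler. The following is a hedged NumPy sketch (the implementations benchmarked in this paper use the GSL and a custom lookup table instead):

```python
import numpy as np

def multinomial_sequential(N, w, rng):
    """Draw (n_0, ..., n_{V-1}) ~ Multinomial(N, w) one component at a
    time via conditional binomials, following the sequential algorithm."""
    n = np.zeros(len(w), dtype=int)
    remaining_N = N                    # particles not yet assigned
    remaining_w = 1.0                  # 1 minus the weights used so far
    for i in range(len(w) - 1):
        p = min(1.0, w[i] / remaining_w)   # renormalized weight
        n[i] = rng.binomial(remaining_N, p)
        remaining_N -= n[i]
        remaining_w -= w[i]
    n[-1] = remaining_N                # last renormalized weight is 1
    return n

rng = np.random.default_rng(11)
w = np.array([4, 1, 1, 1, 1, 1/4, 1/4, 1/4, 1/4]) / 9.0
draws = np.array([multinomial_sequential(1000, w, rng) for _ in range(5000)])
assert (draws.sum(axis=1) == 1000).all()       # exact particle conservation
assert np.allclose(draws.mean(axis=0), 1000 * w, rtol=0.05)
```

Because the last component simply takes whatever particles remain, conservation is exact by construction, mirroring Eq. (\ref{eqn:Ncons}).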
Since we derived the sampling collision operator to improve the execution speed of the integer lattice gas method, we performed benchmarking tests of different versions of our algorithm: the base comparison is to the Collision method, \textit{i.e.} the algorithm given by Eq. (\ref{eqn:XiC}). We compare the timing of this baseline method to the algorithm using the sampling collision operator, either with full collisions of Eq. (\ref{fullsampling}) or with partial collisions of Eq. (\ref{eqn:Xiomega}). Both algorithms employ either the GSL or the Lookup Table method to generate the binomially distributed random numbers. In addition, we compare the results to the fluctuating lattice Boltzmann method developed by Wagner \textit{et al.} \cite{wagner2016fluctuating}. There are two versions of this executable: one compiled with standard flags, and a second one using the -O3 optimization flag, which significantly improved the algorithm's runtime. Optimization flags had significantly less effect on the lattice gas implementation. We summarize the different methods in Table \ref{tableMethod}. The source code for these methods is available on Github at \cite{SeekinsDiffMCLG2021Git}. \begin{table}[] \begin{tabular}{l|l|l} Method & Collision Operator & Comment\\ \hline Collision & Eq. (\ref{eqn:XiC}) & Random Collisions\\ GSL & Eq. (\ref{eqn:Xiomega}) & GSL Sampling\\ GSL Multinomial & Eq. (\ref{eqn:Xiomega}) & GSL Mult. Sampling\\ Lookup Table & Eq. (\ref{eqn:Xiomega}) & Lookup Table Sampling\\ LB & Eq. (\ref{eqn:LBcoll})+noise & Fluctuating LB\\ Optimized LB & Eq.
(\ref{eqn:LBcoll}) & LB with the -O3 flag\\ \end{tabular} \caption{Summary of the different algorithms compared in the timing benchmarks.} \label{tableMethod} \end{table} \begin{figure} \centering \includegraphics[width=\columnwidth]{TimingT1.eps} \caption{Execution time of the different versions of the sampling algorithm compared to the original integer lattice gas approach and two lattice Boltzmann runs, graphed on a log-log scale. Each method was tested using a 32 x 32 lattice grid, and all methods, with the exception of the non-optimized lattice Boltzmann method, are run with the -O3 flag active. We ran each system for 1000 iterations, and initialized each system by utilizing the same methods as in Sec. IV.} \label{fig:timing} \end{figure} To compare the performance of the different algorithms we used the simulations of a decaying sine wave presented in Sec. IV for varying average numbers of particles. The setup consists of a 32x32 lattice with an initial density distribution given by Eq. (\ref{eqn:Prho}) with an initial amplitude of $A=N^{av}$, as indicated in Eq. (\ref{Ath}). Since the lookup table method generates sample points as the simulation runs, we first run the simulation for 1000 iterations to ensure that the sampling algorithm has fully initialized and then measure the runtime of the next 1000 iterations. The results for an inverse relaxation time of $\omega=1$ are shown in Figure \ref{fig:timing}. For the Collision method we cannot guarantee that all particles will have collided, and therefore we chose a number of collisions that ensures that on average 99.9\% of particles will have collided, as explained in Appendix \ref{AppendixB}. As expected, the number of collisions scales linearly with the number of particles, as opposed to quadratically with the number of particles in the case of binary collisions in the hydrodynamic lattice gas \cite{blommel2018integer}.
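As a rough consistency check on the 99.9\% criterion (the derivation of Appendix \ref{AppendixB} is not reproduced here, so the uniform-selection model below is an assumption): if each of the $C$ elementary collisions selects one of the $N$ particles uniformly at random, a given particle remains uncollided with probability $(1-1/N)^C \approx e^{-C/N}$, so reaching a fraction $1-\epsilon$ of collided particles requires $C \approx N\ln(1/\epsilon)$, i.e. about $6.9\,N$ collisions for $\epsilon = 10^{-3}$:

```python
import math

def collisions_needed(N, coverage=0.999):
    """Estimate of the number of single-particle collisions needed so
    that, on average, a fraction `coverage` of the N particles at a site
    has collided at least once (uniform-selection assumption)."""
    return math.ceil(N * math.log(1.0 / (1.0 - coverage)))

# roughly 6.9 collisions per particle for 99.9% coverage
assert collisions_needed(1000) == 6908
```

This back-of-envelope scaling is consistent with the linear growth of the collision count with $N$ noted above.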
The lattice gas implementation with the sampling collision operator performs significantly better than the collision algorithm. It even outperforms the non-optimized version of the lattice Boltzmann code, although it remains slower than the optimized lattice Boltzmann code by about a factor of five. We believe that there is still significant potential for optimization, not least in the sampling algorithm. The implementation of the lookup method for sampling binomial numbers already outperforms the GSL algorithm in the range between 20 and 2000 particles per lattice site on average. It is remarkable that the GSL sampling algorithms speed up for average particle numbers larger than 2000, which will allow the algorithm to scale well for large numbers of particles. \begin{figure} \centering \subfloat[]{ \includegraphics[width=0.9\columnwidth]{TimingT15.eps}}\\ \subfloat[]{ \includegraphics[width=0.9\columnwidth]{TimingT18.eps}}\\ \subfloat[]{ \includegraphics[width=0.9\columnwidth]{TimingT2.eps}} \caption{Execution time of the different versions of the sampling algorithm compared to the original integer lattice gas approach and two lattice Boltzmann implementations, with the relaxation time $1/\omega$ being 1.5 in (a), 1.8 in (b), and 2.0 in (c), graphed on a log-log scale. Each method was run utilizing the same parameters as the previously shown timing graph, including the grid size, number of iterations, and initialization method. The extension of the collision method is an approximate line of best fit for the measured collision data, which shows how the data would progress at higher numbers of particles.} \label{fig:timingtau} \end{figure} As we decrease the inverse relaxation time $\omega$ we have to perform fewer collisions, which benefits the collision method.
The effect on the sampling method is harder to predict: on the one hand a second sampling step is required to select the particles to be collided, as indicated in Eq. (\ref{eqn:Nselect}); on the other hand the number of particles to be collided is smaller, which generally leads to faster sampling times. The net effect of this is shown in Figure \ref{fig:timingtau}. As expected the collision method becomes faster with decreasing $\omega$. The timing of the sampling methods changes only slightly. The most notable change is for the GSL methods, which show a slight increase in the execution time for the sequential multinomial sampling algorithm, and a significant increase in the execution time for the direct multinomial sampling algorithm, to the point that, while the direct sampling algorithm is faster for $\omega=1$, it is slower for $\omega<1$. We are not sure of the reason for this unexpected behavior of the library calls. However, even for $\omega=0.5$ the sampling collision operators continue to outperform the direct collision approach when there are more than 20 particles per lattice site on average. Thus, for most practical applications the Monte Carlo lattice gas is superior to the collision method, and has the potential to become competitive with the fluctuating lattice Boltzmann method. \section{Conclusions} We developed a novel integer lattice gas method for the fluctuating diffusion equation. This method remedies some deficiencies of the equivalent fluctuating lattice Boltzmann method, particularly for small densities. The fundamental approach based on single particle collisions, equivalent to the approach pioneered by Blommel for hydrodynamic integer lattice gases \cite{blommel2018integer}, becomes slow for larger numbers of particles per lattice site. The computational cost for this approach scales linearly with the number of particles.
To remedy this difficulty we also developed a sampling collision operator that picks new particle distributions directly from a local equilibrium distribution. The runtime of this algorithm now scales much better, and, in the case of the GSL sampling algorithm, approximately recovers the flat scaling of lattice Boltzmann approaches at higher densities. We believe that this first algorithm of a sampling collision operator shows the potential for lattice gas approaches to become competitive with lattice Boltzmann approaches again. These results open the door to develop sampling collision operators for hydrodynamic integer lattice gases that we hope will be computationally competitive with fluctuating lattice Boltzmann methods and have correct statistics for small numbers of particles per lattice site.
\section*{Introduction} Medical treatments often comprise a sequence of intervention decisions that are adapted to the time-varying clinical status and conditions of a patient, a concept coined as \emph{Dynamic Treatment Regimes} (DTRs \cite{Lavori2000}). ``How can we optimize the sequence of specific treatments for specific patients?'' is a central question of \emph{precision medicine}. More specifically, the scientific question our paper focuses on is the determination of the optimal DTRs that maximize the long-term clinical outcome. When straightforward rule-based treatment guidelines are difficult to establish, statistical learning methods provide a data-driven tool to explore and examine the best strategies. These data-driven approaches leverage technological advances that allow increasingly abundant medical data (e.g., clinical assessments, genomic data, electronic health records) to be collected from each individual patient, meeting the promise of individualized treatment and health care. The problem of identifying the optimal DTRs that maximize the long-term clinical outcome using \emph{reinforcement learning} \cite{sutton1998reinforcement} has received much attention in the statistics community \cite{moodie2007,lavori2004,murphy2003,robins2004,robust2012,zhao2009,murphy2007,zhao2014,liu2016robust}. The existing DTR methods are designed for data from the \emph{Sequential Multiple Assignment Randomized Trial} (SMART) \cite{Murphy2005}, in which DTR optimization is limited to clearly defined homogeneous decision stages and low-dimensional action spaces. They are difficult to implement using observational data (such as electronic medical records or registry data), which exhibit a much higher degree of heterogeneity in decision stages among patients, and in which the treatment options (i.e., the action space) are often high-dimensional. The existing methods can only analyze a particular simplification of the stage and action spaces among the enormous number of possible ones.
Simplification by human experts might not lead to the optimal DTRs, and in many cases there is no clear way to simplify. In addition, the simplification process requires substantial domain knowledge and labor-intensive data mining and feature engineering. There is a call for methods that expand DTR methodology from its limited application in SMART studies to broader, flexible, and practical applications using registry and cohort data. To make reinforcement learning accessible for more general DTR problems using observational datasets, we need a new framework which (i) automatically extracts and organizes the discriminative information from the data, and (ii) can explore high-dimensional action and state spaces and make personalized treatment recommendations. \emph{Deep learning} is a promising new technique that uses \emph{representation learning} to save labor-intensive feature engineering. The effective combination of deep learning (deep neural networks) and reinforcement learning, named \emph{Deep Reinforcement Learning} (DRL), was initially invented for intelligent game playing and later emerged as an effective method to solve complicated control problems with large-scale, high-dimensional state and action spaces \cite{mnih2013playing,mnih2015human,silver2016mastering,wang2017dac,wang2017icdcs,wang2017icc}. Deep learning and DRL methods are promising tools to automatically extract discriminative information among decision stages, patient features, and treatment options. In this work we incorporate state-of-the-art deep reinforcement learning into the DTR methodology and propose the first (to the best of our knowledge) data-driven framework that is scalable and adaptable to optimizing DTRs with high-dimensional treatment options and heterogeneous decision stages.
To demonstrate the effectiveness of the proposed framework, we implemented it using a concrete example: Graft Versus Host Disease (GVHD) prevention and treatment for leukemia patients who have undergone allogeneic hematopoietic cell transplantation (AHCT). The long-term longitudinal follow-up of almost all US patients and some international patients who have undergone AHCT makes the Center for International Blood and Marrow Transplant Research (CIBMTR) registry database an ideal existing data set to explore the capacity of artificial intelligence in medical decision making. Reference \cite{ruutu2014prophylaxis} points out that GVHD is a major complication of AHCT. Once established, GVHD is difficult to treat. It can be prevented by selected methods, but often at the expense of an increased risk of relapse, rejection, or delayed immune reconstitution \cite{bacigalupo1991increased, patterson1986graft}. Hence, no optimal or even satisfactory prevention and treatment methods have been defined. Reference \cite{ruutu2014prophylaxis} concluded that the difficulty in composing a standard practice guideline is the lack of solid scientific support for a large portion of the procedures used in GVHD prevention and treatment, which calls for further systematic studies to compare different strategies. Such clinical needs for methodological innovations in finding the optimal personalized strategies can be largely addressed in the proposed study. More specifically, in this paper we develop a data-driven deep reinforcement learning (DRL) framework for the optimal DTR, comprising the prevention and treatment of both acute and chronic GVHD, as well as the initial conditioning (chemotherapy) after the transplantation. The DRL framework, which deals with heterogeneous decision stages (states) and a high-dimensional action space, consists of two steps at each decision stage. The first step is to build a deep neural network to predict experts' treatment with a high-dimensional action space.
The second step is to estimate the \emph{value function} of DTRs for strategies composed of the top expert actions with the highest probabilities from the first step. The state and action spaces as well as the reward function are carefully selected, and effective dimensionality reduction techniques such as the \emph{low variance filter} are utilized to mitigate the shortcoming of limited data in the database. Similar states have similar encoded representations. In the experimental results, we demonstrate promising accuracy in predicting human experts' decisions, as well as a high expected reward in the DRL-based dynamic treatment regimes. \section*{Results} In this section, we present results on the deep neural networks' accuracy in predicting expert treatments as well as the performance of the deep reinforcement learning for optimizing the sequence of treatments. Experiments are conducted based on the CIBMTR registry with data of 6,021 patients. The initial conditioning (to prevent relapse) and GVHD prophylaxis (to prevent GVHD) were administered right before the transplant, and thus are considered actions at time $t=0$; the treatment of acute GVHD takes place at 100 days and 6 months (180 days); the treatment of chronic GVHD takes place at 6 months, 1 year (365 days), 2 years (730 days), and 4 years. We test DTR within 4 years after transplantation because a large portion of patients' data will be missing after that time, and surviving patients without relapse can be considered cured of the disease. In the following, we demonstrate the data-driven experimental results on the first step, i.e., building a deep neural network to predict experts' treatment, and then the second step, i.e., the DRL-based framework of value function estimation and making recommendations among treatment options. We adopt separate DNNs for predicting experts' treatment in the first step, and separate DRLs for the treatment of acute and chronic GVHDs in the second step.
This is because the data size is too limited to train an overall large DNN or DRL model. Details of the proposed procedure are discussed in the next section. \subsection*{Results on Predicting Experts' Treatment} First, we demonstrate in Figure 1 the prediction accuracies of the initial conditioning and the initial GVHD prevention (prophylaxis). We use 80\% of the data set as training data and the remaining 20\% as testing data, which is common practice in deep learning. Please note that we utilize the top-$N$ prediction accuracy, i.e., the prediction is valid as long as the actual treatment action from human experts is among the top $N$ choices suggested by the deep neural network. This top-$N$ accuracy is widely utilized in image recognition, such as the ImageNet contest \cite{deng2009imagenet}, and other deep learning tasks. We can observe that (i) the top-$N$ accuracy is in general between 75\% and 90\%, which shows the effectiveness of the proposed method; and (ii) the top-$N$ accuracy increases with the value of $N$. \begin{figure}[H] \centering \begin{subfigure}{.52\textwidth} \centering \includegraphics[width=.8\linewidth]{inital_bsconditioning.jpg} \caption{Initial conditioning} \label{fig:inital_bsconditioninge} \end{subfigure}% \begin{subfigure}{.52\textwidth} \centering \includegraphics[width=.8\linewidth]{inital_bsgvhd.jpg} \caption{Initial GVHD prophylaxis} \label{fig:inital_bsgvhd} \end{subfigure} \caption{Accuracies on predicting experts' treatment for initial conditioning and GVHD prophylaxis.} \label{fig:initial_medical} \end{figure} Furthermore, Figure 2 illustrates the top-$N$ prediction accuracy results for acute GVHD treatments at (a) 100 days and (b) 6 months. Figure 3 illustrates the (a) top-7 and (b) top-10 prediction accuracy results for chronic GVHD treatments, at 100 days, 6 months, 1 year, and 2 years. Again we use 80\% of the data set as training data and the remaining 20\% as testing data.
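The top-$N$ accuracy used in these figures can be computed as in the following sketch; the predicted class probabilities would come from the trained network, and the function name here is ours.

```python
import numpy as np

def top_n_accuracy(probs, labels, n):
    """Fraction of samples whose true label is among the n classes with
    the highest predicted probability.

    probs  : array of shape (samples, classes) of predicted probabilities
    labels : array of shape (samples,) of true class indices
    """
    probs = np.asarray(probs)
    top_n = np.argsort(probs, axis=1)[:, -n:]   # n most probable classes per sample
    hits = (top_n == np.asarray(labels)[:, None]).any(axis=1)
    return float(hits.mean())
```

With $n=1$ this reduces to ordinary classification accuracy.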
From these two figures we can derive the following observations. First, the prediction accuracies are in general higher than those for the initial conditioning and GVHD prophylaxis, because the medication for GVHD treatment appears to be more regular than that for the initial treatments. These prediction accuracies are high, marking a first step toward the ultimate goal of DTR using machine intelligence. Next, for chronic GVHD treatment, the prediction accuracy increases as time elapses, i.e., the prediction accuracy at 180 days is higher than that at 100 days, and the accuracy at 1 year is even higher. The reason is that patients become more stable and easier to treat when chronic GVHD occurs or persists at a later time. \begin{figure}[H] \centering \begin{subfigure}{.52\textwidth} \centering \includegraphics[width=.8\linewidth]{DRL_Agvhd_6monthacc.jpg} \caption{Results at 100 days} \label{fig:DRL_Agvhd_6monthacc} \end{subfigure}% \begin{subfigure}{.52\textwidth} \centering \includegraphics[width=.8\linewidth]{DRL_Agvhd_1yearacc.jpg} \caption{Results at 6 months} \label{fig:DRL_Agvhd_1yearacc} \end{subfigure} \caption{Accuracies on predicting experts' treatment for acute GVHD.} \label{fig:DRL_Agvhd} \end{figure} \begin{figure}[H] \centering \begin{subfigure}{.52\textwidth} \centering \includegraphics[width=.8\linewidth]{DRL_cgvhd_top7acc.jpg} \caption{Top-7 accuracy results} \label{fig:DRL_cgvhd_top7acc} \end{subfigure}% \begin{subfigure}{.52\textwidth} \centering \includegraphics[width=.8\linewidth]{DRL_cgvhd_top10acc.jpg} \caption{Top-10 accuracy results} \label{fig:DRL_cgvhd_top10acc} \end{subfigure} \caption{Accuracy results on predicting experts' treatment for chronic GVHD at 100 days, 6 months, 1 year, and 2 years.} \label{fig:DRL_Cgvhd} \end{figure} \subsection*{Results on DRL-based Value Function Estimation and Making Recommendations} In this section, we provide experimental results on the effectiveness of the DRL-based
DTR framework for acute and chronic GVHD treatments, i.e., using DRL for value function estimation and making recommendations. Because the data size is too limited to train an overall large DRL model, we build separate DRL models for the treatment of acute and chronic GVHD. Details about the DRL models are provided in the next section. Again we use 80\% of the data set as training data and the remaining 20\% as testing data. A reader will be most interested in whether the DRL-based recommendations would improve the cumulative outcome, i.e., the disease-free survival, of the patients. We therefore compare the proposed DRL-based approach with a random action selection baseline in terms of the value function. The details of the value function are described in the next section (we use the highest value 1 for relapse-free and GVHD-free survival, the lowest value 0 for death, and values in between for other terminal states such as relapse and GVHD). More specifically, the baseline uses the average DRL values of all available actions (excluding the one with the highest expected reward) to mimic a random action selection policy. Using the actual values from the observational data set yields similar baseline performance and is not shown in this paper. Figure 4 illustrates the comparison between the proposed DRL method and the baseline for acute GVHD treatment, while Figure 5 shows the comparison for chronic GVHD treatment. Despite the limited data, we can still observe that the proposed DRL method outperforms the baseline for both acute and chronic GVHD treatments, which illustrates the effectiveness of using the DRL method for making recommendations in DTR. Also, we observe that the value function (cumulative reward) increases as time elapses.
This observation can also be explained as follows: the expected outcome (e.g., the final relapse-free survival rate) becomes higher for a patient if he/she has survived without relapse over a period of time (say 1 year or 2 years). \begin{figure}[H] \centering \includegraphics[width=.65\linewidth]{DRL_Agvhd_reward.jpg} \caption{Comparison results between the proposed DRL method and the baseline (details in the text) for acute GVHD treatment.} \label{fig:DRL_Agvhd_reward} \end{figure} \begin{figure}[H] \centering \includegraphics[width=.65\linewidth]{DRL_cgvhd_reward.jpg} \caption{Comparison results between the proposed DRL method and the baseline (details in the text) for chronic GVHD treatment.} \label{fig:DRL_cgvhd_reward} \end{figure} \section*{Discussion} In this work, we present a machine learning strategy on an observational dataset to address the decision making problem of GVHD prevention and treatment. It is of significant interest to incorporate this machine-learned rule to facilitate treatment decision making and to update the decision rules in an online fashion as new data are collected. There are current trends in the mobile health field that combine randomized clinical trials with online reinforcement learning through micro-randomized trials \cite{klasnja2015microrandomized}, where the randomization probability can be adapted in an online manner, in analogy to the exploration techniques in reinforcement learning. Applications can be seen in smoking cessation, eating disorder management, and blood glucose management for diabetes patients. However, compared with our motivating example in bone marrow transplantation, these existing interventions are easier to randomize due to the much smaller number of actions, less profound consequences, fewer treatment options, and less complicated feature variables. Nevertheless, in clinical fields like our motivating example, there are pressing sequential decision making questions.
For example, in the leukemia field, another question is to decide whether transplant is a beneficial strategy compared to non-transplant, under what conditions or at what time transplant becomes the better option, and how to adapt these decisions to personal features. Given the constraints on conducting sequential randomized clinical trials on these questions, it is more practical to start from analyzing the observational data at this point. With the improvement of data collection and machine learning techniques in this field, a data-driven decision support system can provide treatment recommendations for doctors based on supervised learning and reinforcement learning. Furthermore, one can adopt the exploration policy in reinforcement learning for adaptive treatment recommendations, while the decision is made through doctors and the patient's preference. Q-learning is guaranteed to converge to the optimal policy only under the assumptions of a Markov decision process. Deep Q-learning, however, has no theoretical guarantee of convergence to the optimal policy even under a Markov decision process, because of the sub-optimality of deep neural network training. The disease progression process does not strictly follow a Markov process, and the four state variables we are considering may not fully capture the patients' status. However, Q-learning and DQN have demonstrated good performance in many applications where the Markov (memoryless) property does not hold \cite{liu2017hierarchical,zhu2017target}. In future work we will address this problem with models that do not require the Markov assumption (e.g., RNNs), taking history information into account. \section*{Methods} In this section we discuss in detail the proposed DRL framework for the optimal DTR, comprising the prevention and treatment of both acute and chronic GVHD, as well as the initial conditioning after transplantation.
We first provide a general framework of DRL which can deal with complicated control problems with high-dimensional state spaces, and then describe the cohort retrieval and data pre-processing, problem formulation, state and action spaces, reward function, and optimization techniques of the proposed DRL framework for precision medicine. \subsection*{The General DRL Framework for Complicated Control Problems} The general DRL framework, which can be utilized to solve complicated control problems, consists of two phases: an offline deep neural network (DNN) construction phase and an online deep Q-learning phase \cite{mnih2013playing,mnih2015human,silver2016mastering}. In the offline phase, a DNN is utilized to derive the correlation between each state-action pair $(s,a)$ of the system under control and the corresponding value function $Q(s,a)$. $Q(s,a)$ represents the expected cumulative and discounted reward when the system starts from state $s$, follows action $a$, and follows a certain policy thereafter. $Q(s,a)$ for a discrete-time system is given by: \begin{equation} Q(s,a) = \mathbf{E} \Big[\sum_{k=0}^{\infty }\gamma^kr(k) \Big| s_{0}=s,a_{0}=a\Big] \end{equation} where $r(k)$ is the reward at step $k$ and $\gamma$ is the discount factor in a discrete-time system. In order to construct a DNN with good accuracy, the offline phase needs to accumulate enough samples of $Q(s,a)$ value estimates and the corresponding state-action pairs $(s,a)$. These can come from a model-based procedure or from actual measurement data \cite{silver2016mastering}, where the latter is the case for optimal DTRs in precision medicine. This procedure includes simulating the control process and obtaining the state transition profile and $Q(s,a)$ value estimates, using an arbitrary but gradually refined policy. The state transition profile is stored in an experience memory $D$ with capacity $N_{D}$.
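The discounted return in Eq. (1) can be evaluated for a single observed reward sequence with a backward pass, and a capacity-bounded experience memory can be sketched with a deque; both are minimal stand-ins for the quantities defined above, with names of our choosing.

```python
from collections import deque

def discounted_return(rewards, gamma):
    """Cumulative discounted reward sum_k gamma^k * r(k) for one
    observed reward sequence, accumulated backward in time."""
    total = 0.0
    for r in reversed(rewards):
        total = r + gamma * total
    return total

# Experience memory D with capacity N_D: appending beyond the capacity
# silently drops the oldest stored transition.
N_D = 1000
memory = deque(maxlen=N_D)
memory.append(("s0", "a0", 1.0, "s1"))   # (state, action, reward, next state)
```

For example, rewards of 1 at three consecutive steps with $\gamma=0.5$ give a return of $1 + 0.5 + 0.25 = 1.75$.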
According to the inventors of DRL \cite{mnih2015human}, the use of experience memory can smooth out learning and avoid oscillations or divergence in the parameters. Based on the stored state transition profile and $Q(s,a)$ value estimates, the DNN is constructed with weight set $\theta$ trained using standard training algorithms such as backpropagation-based stochastic gradient descent. The overall procedure is shown in the first part of Algorithm \ref{Alg:Sch}. \begin{algorithm}[h] \caption{Illustration of the General DRL Framework} \label{Alg:Sch} \begin{algorithmic}[1] \ENSURE {This is the \textbf{offline} part} \STATE Extract real data profiles using certain control policies and obtain the corresponding state transition profiles and $Q(s,a)$ value estimates; \STATE Store the state transition profiles and $Q(s,a)$ value estimates in experience memory $\mathcal{D}$ with capacity $N_\mathcal{D}$; \STATE Iterations may be needed in the above procedure; \STATE \textbf{Offline}: Pre-train a DNN with features $(s,a)$ and outcome $Q(s,a)$; \REQUIRE{This is the \textbf{online} part} \FOR{each execution sequence} \FOR{each decision epoch $t_k$} \STATE With probability $\epsilon$ select a random action, otherwise $a_k = \arg\max_{a} Q(s_k, a)$, in which $Q(s_k, a)$ is derived (estimated) from the DNN; \STATE Perform system control using the chosen action; \STATE Observe the state transition at the next decision epoch $t_{k+1}$ with new state $s_{k+1}$, and receive reward $r_k(s_k,a_k)$ during the time period $[t_k,t_{k+1})$; \STATE Store transition $\left(s_k, a_k, r_k, s_{k + 1}\right)$ in $\mathcal{D}$; \STATE Update $Q(s_k,a_k)$ based on $r_k(s_k,a_k)$ and $\max_{a'}Q(s_{k+1},a')$ following the Q-learning update rule; \ENDFOR \STATE Update DNN parameters $\theta$ using the new Q-value estimates; \ENDFOR \end{algorithmic} \end{algorithm} For the online phase, the deep Q-learning technique is utilized based on the offline-trained DNN to select actions and update Q-value estimates.
More specifically, at each decision epoch $t_k$ of an execution sequence, suppose the system under control is in state $s_k$. The DRL agent performs inference using the DNN to obtain the $Q(s_k,a)$ value estimate for each state-action pair $(s_k,a)$. Then, according to the $\epsilon$-greedy policy, the action with the maximum $Q(s_k,a)$ value estimate is selected with probability $1-\epsilon$ and a random action is selected with probability $\epsilon$. After choosing an action denoted by $a_k$, the DRL agent receives the total reward $r_k(s_k,a_k)$ during $[t_k,t_{k+1})$ before the next decision epoch $t_{k+1}$, and this leads to Q-value updates. Prior work proposed utilizing a duplicate DNN $\hat{Q}$ for updating the Q-value estimates, in order to mitigate potential oscillations of the DNN's inference results \cite{lillicrap2015continuous}. At the end of the execution sequence, the DNN is updated by the DRL agent using the recently observed Q-value estimates in a mini-batch manner, and is then employed in the next execution sequence. The overall procedure is shown in the second part of Algorithm 1. As can be observed from the above procedure, the DRL framework is highly scalable for problems with a large state space, which distinguishes it from traditional reinforcement learning techniques. On the other hand, the DRL framework requires an enumerable action space, because at each decision epoch the DRL agent needs to enumerate all possible actions at the current state and perform inference using the DNN to derive the optimal $Q(s,a)$ value estimate (and the corresponding optimal action). This implies that the action space in the general DRL framework, or for the specific optimal DTR problem, needs to be effectively reduced.
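The $\epsilon$-greedy selection and the Q-value update of the online phase can be sketched as follows, with a plain dictionary standing in for the DNN's Q estimates; this is a simplification for illustration, since the actual framework queries the trained network rather than a table.

```python
import random

def epsilon_greedy(q, state, actions, epsilon, rng=random):
    """With probability epsilon pick a random action, otherwise the
    action maximizing the current Q estimate."""
    if rng.random() < epsilon:
        return rng.choice(actions)
    return max(actions, key=lambda a: q.get((state, a), 0.0))

def q_update(q, s, a, r, s_next, actions, alpha, gamma):
    """One Q-learning update: move Q(s,a) toward the bootstrapped
    target r + gamma * max_a' Q(s_next, a')."""
    target = r + gamma * max(q.get((s_next, a2), 0.0) for a2 in actions)
    q[(s, a)] = q.get((s, a), 0.0) + alpha * (target - q.get((s, a), 0.0))
    return q[(s, a)]
```

The duplicate network $\hat{Q}$ mentioned above would correspond to evaluating the bootstrapped target against a periodically frozen copy of `q` instead of the live estimates.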
\subsection*{Developing a DRL Framework to Derive the Optimal DTR} In this section, we present the developed DRL framework with a motivating DTR application using the database from the CIBMTR registry on the prevention and treatment of GVHD. There are two forms of GVHD: acute GVHD typically occurs within the first 6 months after the transplant and lasts for a short term if successfully treated; chronic GVHD may occur from shortly after the transplantation to a few years later, and often requires long-term treatment that can lead to long-term complications/morbidity. Throughout this paper, we denote the time index $t=0$ for the time of transplantation, $t=1$ for 100 days, $t=2$ for 6 months, $t=3$ for 1 year, $t=4$ for 2 years, and $t=5$ for 4 years. We consider DTR within 4 years after the transplantation. In this paper, we adopt the DRL technique for three tasks of DTR: the initial treatment before the transplantation, including the initial conditioning (chemotherapy to prevent relapse) and GVHD prophylaxis (to prevent GVHD); the treatment of acute GVHD; and the treatment of chronic GVHD. The initial preventive treatments take place at the time of transplantation $t=0$; the treatment of acute GVHD takes place at times $t=1$ (100 days) and $t=2$ (6 months); the treatment of chronic GVHD takes place at times $t=2$ (6 months) through $t=5$ (4 years). As can be observed in Figure \ref{fig:figure1}, the proposed DRL framework for the optimal DTR comprises two steps at each decision epoch/stage. The first step is to build a supervised learning network to predict the distribution of human experts' decisions on treatment actions. The second step is to estimate the value functions for treatment decisions with high probabilities (the actual implementation is also compatible with estimating value functions for all treatment options).
In this way the proposed framework can provide both human experts' opinions and data-driven comparisons of different strategies, together with a recommendation for the optimal strategy, at relatively minor computational cost. The proposed DRL framework is data-driven and scalable to the heterogeneous decision stages and the high dimensionality in patient features and treatment options. The DRL framework is adaptive, in that the models in both steps will be updated when new data arrive corresponding to new patients or treatment outcomes. \subsubsection*{Retrieving the Target Cohort and Pre-Processing Data} The cohort of patients used for this analysis consists of 6,021 patients diagnosed with Acute Myeloid Leukemia (AML) who have undergone HCT between 1995 and 2007. Due to the discrete data collection scheme, we have higher quality data on the onsets of GVHD conditions and the subsequent treatment decisions in a discrete-time frame indicating the occurrence between two follow-up times. The exact dates and sequence of treatment decisions between two follow-up times are largely missing or unrecorded. In this work, the state and action are considered to be the state and action taken at the time each form was recorded. We consider relapse and death as terminal states and occurrences of acute or chronic GVHD as transient states. We consider baseline features of patients and donors that have been shown to affect GVHD and survival rates in clinical studies, including the patient's age, gender, and co-morbidity information (including diabetes, seizure, hypertension, etc.). The features also include the donor's relationship to the patient and the donor-patient matching information, as well as the donor's gender. This cohort includes both pediatric and adult patients. We include the histogram of patient ages in Figure \ref{age}, and the Human Leukocyte Antigen (HLA) matching results of patients are presented in Table \ref{match}.
\begin{table} \caption{Matching Information of Patients and Donors in the Data Set of Interest} \label{match} \begin{tabular}{cccccc} \hline Identical Sibling & Other Relative & URD Well Matched & URD Partially Matched & URD Mismatched & Other\\ \hline 3877 & 451 & 686 & 433 & 173 & 401\\ \hline \end{tabular} \end{table} \begin{figure} \caption{Histogram of Patient Ages in the Data Set of Interest} \label{age} \includegraphics[scale=0.5]{histage} \end{figure} \subsection*{Building a Deep Neural Network to Predict Expert Treatments} \begin{figure}[t] \centering \includegraphics[width=0.8\columnwidth]{figure1.jpg} \caption{The proposed DRL framework for prevention and treatment of GVHD, as well as initial conditioning.} \label{fig:figure1} \end{figure} As shown in Figure \ref{fig:figure1}, the first step at each decision epoch is to build a supervised learning network to predict the distribution of human experts' decisions on treatment actions. For the initial treatment before the transplantation, the input features (the state space) include the union of the basic information of patients (e.g., age, gender, and comorbidities) and the HLA matching information between the patient and the donor. The output label (action) is the combination of medicines to be utilized for the initial treatment, which includes the initial conditioning to avoid disease relapse and the GVHD prophylaxis to prevent GVHD. For the treatment of acute GVHD at time stamps $t=1$ and $t=2$, the input features (the state space) include both the basic information of patients and the pairing conditions, as well as whether the patient has acute GVHD at that specific time stamp. The output label (action) is the combination of medicines to be utilized for the treatment of acute GVHD. Similar input features and actions also apply to the treatment of chronic GVHD from $t=2$ through $t=5$.
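As a shape check on the prediction network (the layer sizes of 9 inputs, hidden layers of 16 and 32 neurons, and 145 output classes for initial conditioning follow the architecture detailed later in this subsection), the forward pass can be sketched in plain NumPy. The ReLU activation and random placeholder weights are our assumptions, and training (done with the Adam optimizer in the actual framework) is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(dims=(9, 16, 32, 145)):
    """Random placeholder weights for a fully connected network with
    9 input features, two hidden layers of 16 and 32 neurons, and
    145 output classes (the initial-conditioning head)."""
    return [(0.1 * rng.standard_normal((m, n)), np.zeros(n))
            for m, n in zip(dims[:-1], dims[1:])]

def predict_proba(params, x):
    """Forward pass: ReLU on hidden layers (an assumption; the
    activation is not specified in the text) and a softmax over the
    medicine combinations observed in the registry."""
    h = np.asarray(x, dtype=float)
    for i, (W, b) in enumerate(params):
        h = h @ W + b
        if i < len(params) - 1:
            h = np.maximum(h, 0.0)
    e = np.exp(h - h.max())
    return e / e.sum()
```

The resulting probability vector over output classes is what the top-$N$ selection in the Results section ranks.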
To reduce the high dimensionality of the action space, we encode the actions using only the medicine combinations that have actually been used by doctors. We also adopt an effective encoding scheme for the state space that reduces it substantially, thereby accelerating convergence and mitigating potential overfitting. To enhance accuracy, separate multi-layer deep neural networks (instead of a single integrated network) are trained offline for the initial conditioning, the prevention of GVHD, and the treatment of acute and chronic GVHD. In this step, we adopt a multi-layer, fully-connected neural network. The network architecture consists of four layers: the input layer, two hidden layers and the output layer. The dimension of the input layer is 9, and the two hidden layers have 16 and 32 neurons, respectively. The output dimension is 145 for initial conditioning and 127 for GVHD prophylaxis. The output dimensions for treating acute and chronic GVHD are 283 and 271, respectively. We use the Adam optimizer to train the network, and the learning rate $\eta$ is set to $10^{-4}$ \cite{DBLP:journals/corr/KingmaB14}. \subsubsection*{Estimating Value Function for Top Expert Choices and Making Recommendations} As shown in Figure \ref{fig:figure1}, the second step is to estimate the value function for the expert actions with the highest probabilities and to make recommendations among these treatment options. Our recommender evaluates the value function only for the actions with the highest probabilities, since rarely chosen actions have too few samples in the observational medical dataset to support a general conclusion; this restriction also reduces the computational complexity. The reward/outcome of major interest is the \emph{relapse-free survival time} after the transplantation, denoted as $T_i$. 
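As a concrete illustration, the following minimal sketch (not the authors' code) shows a forward pass through a fully-connected network with the layer sizes quoted above: 9 input features, hidden layers of 16 and 32 neurons, and a softmax over the 145 encoded initial-conditioning actions. Randomly initialized weights stand in for trained parameters, and all function names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    # Numerically stable softmax over the last axis.
    z = np.exp(x - x.max(axis=-1, keepdims=True))
    return z / z.sum(axis=-1, keepdims=True)

# Random weights stand in for the trained parameters.
W1, b1 = 0.1 * rng.normal(size=(9, 16)), np.zeros(16)
W2, b2 = 0.1 * rng.normal(size=(16, 32)), np.zeros(32)
W3, b3 = 0.1 * rng.normal(size=(32, 145)), np.zeros(145)

def predict_action_distribution(state):
    """Map an encoded 9-dimensional patient state to a probability
    distribution over the 145 encoded treatment combinations."""
    h = relu(state @ W1 + b1)
    h = relu(h @ W2 + b2)
    return softmax(h @ W3 + b3)

probs = predict_action_distribution(rng.normal(size=(1, 9)))
```

The output is a proper probability distribution over the encoded actions, from which the top expert choices can be read off.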
Let $\vec{a}$ denote the vector of actions at all stages and $\vec{\pi}$ denote the rules of decision sequences (i.e., policies), which map the currently observed state to an action at each stage. The value function of a policy $\vec{\pi}$ is $V(\vec{\pi})=E(T_i| \vec{a}\in \vec{\pi} (s) )$. The objective is to maximize $V(\vec{\pi})$; the so-called Q-function is the expected reward if a subject (patient) is assigned the optimal treatment in all future stages, and it can be estimated through \emph{Dynamic Programming} following the ideas of Q-learning \cite{watkins1992q}. The learning algorithm is tailored to the specific \emph{Censoring Scheme} of the data set. Denote $M_i$ as the indicator that the terminal event of patient $i$ is observed ($M_i=1$ if death or relapse is observed, $M_i=0$ if the patient is censored), and $C_i$ as the last observation time of patient $i$. Denote $D_{t,i}$ as the indicator that death or relapse is observed within the time period $t-1$ to $t$. For time $t$ and patient $i$, denote the indicator of an observed terminal event at time $t$ as $M_{t,i}= \mathbb{I}(D_{1,i}=0,\dots,D_{t-1,i}=0,D_{t,i}=1)$, where $\mathbb{I}(\cdot)$ is the indicator function. General Q-learning uses a backward induction procedure across time stamps (decision stages). At stage $t$, each valid training sample (patient) $i$ must satisfy $C_i>t$ and have the action $a_t$ observed. For patients with $M_{t+1,i}=1$, we use the observed $T_i$ as the outcome. For patients with $C_i > t+1$, or $C_i=t+1, M_i=0$, we use the estimated Q-function for the future stage as the outcome. In other words, we impute the outcome of patients who have survived beyond time stamp $t+1$ using their estimated optimal future value, regardless of censoring. Besides the value function of relapse-free survival time, we also propose an alternative discretized value function, described in the following. 
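The backward-induction targets under this censoring scheme can be sketched in a few lines. This is an illustrative reconstruction (all names are ours, not the authors' code): patients whose terminal event is observed in $(t, t+1]$ contribute their observed $T_i$, while patients known to survive past $t+1$ are imputed with their estimated optimal future value $\max_a \hat{Q}_{t+1}(s,a)$.

```python
def stage_target(T_i, C_i, M_i, t, q_next):
    """Regression target for one patient at stage t.

    T_i    observed relapse-free survival time
    C_i    last observation time
    M_i    1 if death/relapse was observed, 0 if censored
    q_next dict mapping candidate actions to estimated stage-(t+1) Q-values
    """
    if M_i == 1 and C_i == t + 1:
        # Terminal event observed within (t, t+1]: use the observed outcome.
        return T_i
    if C_i > t + 1 or (C_i == t + 1 and M_i == 0):
        # Survived beyond t+1 (or censored at t+1): impute the optimal
        # future value, regardless of censoring.
        return max(q_next.values())
    return None  # not a valid training sample at stage t

# Patient whose death/relapse is observed between t=1 and t=2:
y1 = stage_target(T_i=1.7, C_i=2, M_i=1, t=1, q_next={"a": 3.0, "b": 2.5})
# Patient still under observation beyond t+1: imputed future value.
y2 = stage_target(T_i=4.0, C_i=4, M_i=0, t=1, q_next={"a": 3.0, "b": 2.5})
```

The `None` branch corresponds to patients excluded from the stage-$t$ regression because neither the outcome nor a future value estimate is available.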
For each patient $i$, let $t_i$ denote the time when he/she enters a terminal state (death, relapse, or relapse-free survival after 4 years) or when his/her data are lost to follow-up. The delayed reward (outcome) of patient $i$ at time $t_i$ falls into one of the following categories: \begin{enumerate} \item Relapse-free and GVHD-free survival. \item Survival with acute or chronic GVHD. \item Relapse of the leukemia disease. \item Death. \item Data loss. \end{enumerate} We assign different delayed rewards/outcomes to these five cases. Relapse-free and GVHD-free survival over 4 years achieves the highest reward (1). Survival with acute or chronic GVHD receives a slightly degraded reward (0.8). Relapsed patients receive a significantly degraded reward (0.2), and death receives zero. This reward can be viewed as a heuristic 4-year survival probability adjusted for quality of life. The missing-data problem caused by loss of follow-up is handled by the imputation method discussed above. To accommodate the high dimensionality of the state and action spaces, the recent DRL literature implements Q-learning with deep neural networks that approximate the Q-function, known as a \emph{Deep Q-Network}. In this problem, three separate deep Q-networks are developed for the DTRs of initial conditioning (chemotherapy and prevention of GVHD) and the treatment of acute and chronic GVHD. For the inputs of the deep Q-networks at time $t$, the corresponding input states described in the previous section (predicting human experts' decisions) serve as states, and the predicted human experts' decisions serve as actions. An effective encoding scheme is utilized to reduce the input state space. The output prediction is the expected value/return when starting at this state and taking the corresponding action. 
Multi-layer deep neural networks are constructed to achieve this goal, and only those patients whose data are available at each time $t$ are used to train the deep Q-networks. In the deep Q-network, we use a replay buffer to store the dataset \cite{lillicrap2015continuous}. The replay buffer is a finite-sized cache that stores the sampled transition tuples $(s_t,a_t,r_t,s_{t+1})$ and discards the oldest samples when it is full. The replay buffer allows the algorithm to benefit from learning across a set of uncorrelated transitions. Direct implementation of deep Q-learning may cause the network to be unstable during training. We therefore adopt the target network introduced in reference \cite{lillicrap2015continuous}. The target network is a copy of the Q-value network and is used to perform inference of $Q(s_{k+1},a')$. The weights of the target network are updated by slowly tracking the parameters of the Q-value network: $\theta' \gets \tau\theta+(1-\tau)\theta'$ with $\tau \ll 1$. This constraint significantly improves the stability of learning. As in the first step, a four-layer fully-connected neural network architecture is adopted in the DRL network for acute and chronic GVHD treatments. It consists of the input layer, two hidden layers and the output layer. The input dimension for treating both acute and chronic GVHD is 8. The output dimensions for treating acute and chronic GVHD are 283 and 271, respectively. The numbers of neurons in the two hidden layers are 32 and 64 for both acute and chronic GVHD treatments. The learning rate $\eta$ is set to $10^{-3}$, the target-network updating parameter $\tau$ is set to 0.01, and the discount rate of reward $\gamma$ is set to 0.99. The size of the replay buffer is 20,000.
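The two stabilization devices described above, a finite replay buffer that discards its oldest transitions and the soft target-network update $\theta' \gets \tau\theta+(1-\tau)\theta'$, can be sketched as follows (a minimal illustration with the hyperparameters quoted in the text; the stand-in weight vectors are ours).

```python
import numpy as np
from collections import deque

BUFFER_SIZE = 20000
# deque with maxlen drops the oldest transition once the buffer is full.
replay_buffer = deque(maxlen=BUFFER_SIZE)

def store(s_t, a_t, r_t, s_next):
    """Append one sampled transition tuple to the replay buffer."""
    replay_buffer.append((s_t, a_t, r_t, s_next))

def soft_update(theta_target, theta, tau=0.01):
    """Slowly track the Q-network weights: theta' <- tau*theta + (1-tau)*theta'."""
    return tau * theta + (1.0 - tau) * theta_target

theta = np.ones(4)          # stand-in Q-value network weights
theta_target = np.zeros(4)  # stand-in target network weights
theta_target = soft_update(theta_target, theta, tau=0.01)
```

With $\tau=0.01$, the target network moves only one percent of the way toward the Q-value network per update, which is what keeps the bootstrapped target $Q(s_{k+1},a')$ slowly varying.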
\section{Introduction} It was nearly 50 years ago that Askaryan proposed to detect high energy particles through the coherent pulse they emit as they interact in a dense medium~\cite{Askaryan62}. As secondary electrons, positrons and gamma rays are produced, they develop electromagnetic showers in the medium which acquire an excess negative charge, which Askaryan estimated to be of order $10\%$ of the total number of electrons and positrons. This is so in spite of the interactions being completely charge symmetric, because matter in the medium contains only electrons. M\o ller, Bhabha and Compton scattering of matter electrons accelerates them into the shower, while electron-positron annihilation and Bhabha scattering decelerate the shower positrons, thus also contributing to the excess charge, a mechanism referred to as the Askaryan effect. A more accurate calculation of the Askaryan effect indicated that the excess charge is actually $\sim 25\%$ of the total number of electrons and positrons~\cite{ZHS92}. Such an excess charge develops a coherent electromagnetic pulse as it travels through a non-absorptive dielectric medium. The coherent part of the pulse is mainly due to the wavelength components that are large compared to the shower width. The energy radiated in the coherent pulse scales with the square of the excess charge and hence with the square of the shower energy. This scaling naturally makes the detection of coherent radio pulses an attractive and promising technique for the detection of ultra high energy particles, such as cosmic rays. Radio detection of air showers was extensively studied in the 60's and 70's~\cite{allan71}. The drive to detect high energy neutrinos in the late 80's turned attention back to radio pulses produced by them in dense media such as natural ice~\cite{frichter96} or the regolith beneath the Moon's surface~\cite{zheleznykh}. 
The first full simulations of the Askaryan effect and the coherent pulses created in dense media were obtained in the early 90's~\cite{ZHS92,ZHS91}, allowing more quantitative calculations, and experimental programs were soon started to search for neutrinos with arrays of antennas at Antarctica~\cite{RICE98} or with radio telescopes from Earth~\cite{Parkes96}. The Askaryan effect was measured for the first time firing photon bunches into sand at SLAC in 2000~\cite{Saltzberg_SLAC_sand} - and later in other dielectric media including ice \cite{Gorham_SLAC_salt,Miocinovic_SLAC_sand,Gorham_SLAC_ice} - and since then the field has received an enormous boost, strengthening previous initiatives using antennas buried in ice~\cite{RICE03,RICElimits} and radiotelescopes~\cite{Parkes07}, and developing new ones such as a balloon-borne antenna array~\cite{ANITAlite,ANITA_2009_limits,ANITAlong}, new radiotelescope searches~\cite{GLUElimits,Kalyazin,NuMoon,LUNASKA,RESUN} and new radio measurements of air showers \cite{ARENA08}. The first calculation of the radio emission from electromagnetic showers used a specifically designed Monte Carlo simulation code - the ZHS code - to calculate coherent radio pulses in ice~\cite{ZHS91,ZHS92}. The code has been extended to include the LPM effect~\cite{alz97}, to calculate in an approximate manner hadronic showers~\cite{alz98} and neutrino-induced showers \cite{alvz99}, to treat other dielectric media~\cite{alvz06}, and to perform an optimal statistical thinning that allows the simulation of pulses from ultrahigh energy showers~\cite{aljpz09}, and it remains a reference in the field. The code was designed to calculate the Fourier components of the electric field in the frequency domain. Alternative simulations using other codes such as GEANT3 \cite{almvz03,razzaque04}, GEANT4~\cite{almvz03,razzaque04,McKay_radio} and the AIRES+TIERRAS~\cite{TIERRAS,alctz10} code have yielded results compatible to within $\sim 5 \%$. 
Semi-analytical calculations have also been performed \cite{buniy02}. All of these use the same technique to calculate the radio pulse in the frequency domain, but to our knowledge no full calculation exists in the time domain yet. All experimental arrangements measure the electric field as a function of time, so a full understanding of the properties of the pulse as a function of time is also very important. Although the conversion from the frequency to the time domain is in principle straightforward and the ZHS algorithm computes all the information required to obtain it, there have been a number of doubts concerning the unconventional choice of Fourier transform used in the code~\cite{ZHS92}, as well as the sign, phase and causality properties of the pulse \cite{buniy02}, that have complicated the analysis and interpretation of data. In this article we develop a formalism to calculate the pulse directly in the time domain. We simultaneously calculate the pulse of the same electromagnetic shower in both the time and frequency domains. An exhaustive comparison yields fully compatible results, makes clear the relative advantages of each approach, and sheds new light on the properties of the radio pulse in the time domain, which can be related to those of the shower and can be of great practical importance in interpreting actual data. Some of these properties are discussed in more detail, suggesting possible applications. Although the method developed in~\cite{ZHS92}, and extended here to the time domain, has been obtained in the framework of \v Cerenkov radiation, it derives directly from Maxwell's equations and addresses classical radiation from charges in a rather general fashion. 
Simple extensions of this work can be used, for instance, to calculate transition radiation as particles cross interfaces between dielectric media, or to calculate the complete radiation patterns from charges moving in magnetic fields including \v Cerenkov radiation, which has long been known to be important for ultra high energy air showers. This paper is structured as follows. In Section \ref{theory} we rederive the expression for the electric field in both the time and frequency domain in a form that can be easily used for practical applications and make the connection to the expression derived in the original ZHS paper \cite{ZHS92}. We also discuss some simple current density models and relate them to the results of a full electromagnetic shower simulation. In Section \ref{results} we perform a consistency check by Fourier-transforming the pulse in time and comparing it to the frequency spectrum obtained in the simulations. The summary and outlook constitute the last section. \section{Theory and Monte Carlo implementation} \label{theory} \subsection{Theory} We start from Maxwell's equations for linear, isotropic, homogeneous and non-dispersive media. In the International System of units: \begin{align} \mathbf{\nabla} \cdot \mathbf{E} &= \frac{\rho}{\epsilon} & \mathbf{\nabla} \times \mathbf{E} &= - \frac{\partial \mathbf{B}}{\partial t} \\ \mathbf{\nabla} \cdot \mathbf{B} &= 0 & \mathbf{\nabla} \times \mathbf{B} &= \mu \mathbf{J} + \mu \epsilon \frac{\partial \mathbf{E}}{\partial t} \end{align} where $\rho$ is the charge density of the source, and $\epsilon=\epsilon_{\rm r} \epsilon_0$ and $\mu=\mu_{\rm r} \mu_0$ are the total permittivity and permeability, expressed in terms of the relative ($\epsilon_{\rm r}$ and $\mu_{\rm r}$) and free space ($\epsilon_0$ and $\mu_0$) permittivities and permeabilities. 
All effects of induced currents and electric polarization are automatically accounted for by the displacement field $\mathbf{D}=\epsilon \mathbf{E}$ proportional to the electric field, $\mathbf{E}$ and the magnetic field strength $\mathbf{H}=(\mu)^{-1} \mathbf{B}$, proportional to the magnetic field, $\mathbf{B}$. We recall the formal solution introducing the vector and scalar potentials ($\mathbf{A}$ and $\phi$): \begin{align} \mathbf{B} &= \mathbf{\nabla} \times \mathbf{A} \\ \mathbf{E} &= -\frac{\partial \mathbf{A}}{\partial t} - \mathbf{\nabla} \phi \label{Efield} \end{align} that naturally satisfy $\mathbf{\nabla} \cdot \mathbf{B} = 0$, and the equation involving the $\mathbf{\nabla} \times \mathbf{E}$ term. Choosing the transverse gauge, in which $ \mathbf{\nabla} \cdot \mathbf{A} = 0$, the two remaining equations imply: \begin{align} \nabla^2 \phi &= - \frac{\rho}{\epsilon} \\ \nabla^2 \mathbf{A} - \mu \epsilon \frac{\partial^2 \mathbf{A}}{\partial^2 t} &= - \mu \mathbf{J}_\perp \end{align} where $\mathbf{J}_\perp$ is the transverse current, a divergenceless component of the current density, which in the limit of observation at large distances from the source can be shown to correspond to the projection of the current density perpendicular to the direction of observation (of unit vector $\hat{\mathbf{u}}$), i.e., ${\mathbf J}_{\perp}=-\hat{\mathbf{u}}\times(\hat{\mathbf{u}}\times \mathbf{J})$. 
Both equations can be formally solved using Green's functions: \begin{align} \phi &= \frac{1}{4 \pi \epsilon} \int \frac{\rho(\mathbf{x'},t)} {\vert \mathbf{x} -\mathbf{x'} \vert} d^3\mathbf{x'} \label{phisol}\\ \mathbf{A} &= \frac{\mu}{4 \pi} \int \frac{\mathbf{J}_\perp(\mathbf{x'},t')} {\vert \mathbf{x} -\mathbf{x'} \vert} \delta \left(\sqrt{\mu \epsilon} \vert \mathbf{x} -\mathbf{x'} \vert- (t-t') \right) d^3\mathbf{x'}dt' \label{Asol} \end{align} The first is the familiar solution from electrostatics for the potential produced at the position $\mathbf{x}$ by a source with instantaneous charge density $\rho(\mathbf{x'},t)$. The second is the solution of the wave equation with wave velocity $(\epsilon_0 \mu_0 \epsilon_{\rm r} \mu_{\rm r}) ^{-{1 \over 2}}$, smaller than the velocity of light in vacuum, $c=(\epsilon_0 \mu_0)^{-{1 \over 2}}$, by a factor $n=(\epsilon_{\rm r} \mu_{\rm r})^{1 \over 2}$, the index of refraction. The Green's function for the wave equation involves a delta function that gives the familiar retarded time, $t'$, earlier than the observation time $t$. To evaluate the field at time $t$ at a given position $\mathbf{x}$, the current is to be evaluated at a time retarded by the time taken by light to reach the observation point from point $\mathbf{x'}$, i.e. $\vert \mathbf{x}- \mathbf{x'} \vert n/c$. \subsection {Radiation from charges traveling in straight lines} \label{singletrack} We consider the shower as a superposition of finite particle tracks of constant velocity. Each track is completely defined by two limiting times $t_1$ and $t_2$, its velocity $\mathbf{v}$ and the position vector of an arbitrary point of the track, $\mathbf{x_0}$, which we have chosen to correspond to the time $t=0$. 
The transverse current density entering in Eq.~(\ref{Asol}) for a point charge moving with constant velocity, $\mathbf{v}$, between the two end points simply reads: \begin{equation} \mathbf{J}_\perp(\mathbf{x'},t')=e \mathbf{v}_\perp \delta^3 \left(\mathbf{x'} - \mathbf{x_0} - \mathbf{v}t' \right) \left[ \Theta(t'-t_1) -\Theta(t'-t_2) \right] \end{equation} where $-e$ is the charge of an electron, $\mathbf{v}_\perp$ is the projection of the velocity onto a plane perpendicular to the direction of observation (recall that we consider large distances so that this direction is uniquely defined), and $\Theta(x)$ is the Heaviside step function. We can now substitute the transverse current into Eq.(\ref{Asol}), integrate the three-dimensional delta function by substituting $\mathbf{x'}=\mathbf{x_0}+\mathbf{v}t'$, and approximate the distance between $\mathbf{x}$ and $\mathbf{x'}$ by $\vert \mathbf{x} -\mathbf{x_0} -\mathbf{v}t' \vert \simeq R - \mathbf{v} \cdot \hat{\mathbf{u}} t'$, where we define $R=\vert \mathbf{x} - \mathbf{x_0} \vert$. In the limit of large distances of observation the denominator $\vert \mathbf{x} -\mathbf{x'} \vert$ can be simply approximated by $R$. However, we must use the above approximation in the argument of the retarding delta function to account for interference effects. This corresponds to the Fraunhofer approximation, in which the path difference between light pulses emitted at points $\mathbf{x_0}$ and $\mathbf{x'}=\mathbf{x_0}+\mathbf{v} t'$ is simply the distance between them projected onto the direction of observation. 
As a result the delta function reads $\delta\left(t'(1-n\beta\cos\theta)- \left(t-{nR\over c}\right) \right)$, with $\mathbf{v}= \mbox{\boldmath{$\beta$}} c$, which can be cast into: \begin{equation} {1 \over \vert 1-n \beta\cos\theta \vert} \delta\left(t'-{t-{nR\over c} \over 1-n\beta\cos\theta} \right) \label{delta} \end{equation} We note that the recurring factor $(1-n\beta \cos \theta)$, with $\theta$ the angle between $\mathbf{v}$ and $\hat{\mathbf{u}}$, gives zero for the \v Cerenkov angle $\theta_C$. Moreover the factor changes sign from positive to negative as the observation angle changes from being larger to smaller than the \v Cerenkov angle. Now we can perform the integration in $t'$ in Eq.~(\ref{Asol}) which simply implies replacing $t'$ in the step functions by ${t-{nR\over c} \over 1-n\beta\cos\theta}$. We now make use of the fact that: \begin{equation} \Theta(ax)=\begin{cases} \Theta(x)~ \text{ if $a>0$},\\ 1 - \Theta(x)~ \text{ if $a<0$} \end{cases} \label{thetacases} \end{equation} In this equation we can take $a=(1-n\beta\cos\theta)^{-1}$ and $x=t-nR/c-(1-n\beta\cos\theta)t_{1,2}$ which allows us to rewrite Eq.(\ref{Asol}) as: \begin{equation} \begin{split} &\mathbf{A} = {\mu e \over 4 \pi R} \mathbf{v}_\perp \\ & { \Theta(t-{nR \over c} - (1-n\beta\cos\theta)t_1) -\Theta(t-{nR \over c} - (1-n\beta\cos\theta)t_2) \over (1-n\beta\cos\theta) } \label{Atime} \end{split} \end{equation} Note that the modulus in the denominator of Eq.(\ref{delta}) is removed because of an effective $\mbox{sgn}(1-n\beta\cos\theta)$ that appears when changing the argument in the two step functions (according to Eq.~(\ref{thetacases})). This expression is easy to implement in a shower simulation by splitting particle tracks in portions that can be approximated by uniform motion. As $\theta$ approaches the \v Cerenkov angle $\theta_C$ the numerator and denominator of Eq.(\ref{Atime}) approach zero. 
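The boxcar structure of Eq.~(\ref{Atime}) is easy to check numerically. The sketch below (arbitrary units; an ice-like $n$, $\beta=1$, and the values of $t_1$, $t_2$ and $\omega$ are illustrative choices, and all prefactors common to both domains are dropped) evaluates the bracketed factor on either side of the \v Cerenkov angle, and also Fourier-transforms it with the factor-2 convention used by the ZHS code to compare against the analytic transform of the boxcar.

```python
import numpy as np

n, beta = 1.78, 1.0                        # ice-like index, beta ~ 1
t1, t2 = 0.0, 1.0                          # track start and end times
theta_C = np.arccos(1.0 / (n * beta))      # Cherenkov angle
t_ref = 0.0                                # take nR/c = 0 as the time origin

def boxcar_A(t, theta):
    """Bracket of Eq. (Atime): [Theta(..t1) - Theta(..t2)] / (1 - n*b*cos)."""
    a = 1.0 - n * beta * np.cos(theta)
    H = lambda x: np.heaviside(x, 0.5)
    return (H(t - t_ref - a * t1) - H(t - t_ref - a * t2)) / a

t = np.linspace(-2.0, 2.0, 400001)
A_out = boxcar_A(t, theta_C + 0.3)         # observer outside the cone
A_in = boxcar_A(t, theta_C - 0.3)          # observer inside the cone

# Fourier transform with the convention f(w) = 2 * Int f(t) e^{iwt} dt,
# compared with the closed-form transform of the boxcar.
omega = 3.0
dt = t[1] - t[0]
a_out = 1.0 - n * beta * np.cos(theta_C + 0.3)
numeric = 2.0 * np.sum(A_out * np.exp(1j * omega * t)) * dt
closed = 2.0 * (np.exp(1j * omega * a_out * t2)
                - np.exp(1j * omega * a_out * t1)) / (1j * omega * a_out)
```

The boxcar has the same sign on both sides of the cone, arrives after the reference time $nR/c$ outside the cone and before it inside, and its numerical transform reproduces the closed form.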
To obtain a formal limit at the \v Cerenkov angle we multiply and divide by $\delta t$ to obtain: \begin{equation} \begin{split} &R\mathbf{A}(t,\theta)=\frac{e\mu_{r}}{4\pi\epsilon_0 c^2} {\mathbf{v}}_\perp \delta t \\ & \frac{\Theta\left(t-{nR\over c}-(1-n\beta\cos\theta)t_1\right)- \Theta\left(t-{nR\over c}-(1-n\beta\cos\theta)t_2\right)} {(1-n\beta\cos\theta)\delta t} \label{Atime2} \end{split} \end{equation} The limit $\theta\to\theta_{C}$ is equivalent to $(1-n\beta\cos\theta)\delta t \to 0$, which can be shown to give the first derivative of the step function, i.e. the delta function $\delta(t)$. The limit is then: \begin{equation} R\mathbf{A}(t,\theta_C)=\left[\frac{e\mu_{r}}{4\pi\epsilon_0 c^2} \right] \delta\left(t-{nR\over c}\right) {\mathbf{v}}_\perp \delta t \label{AtimeCerLimit} \end{equation} We note that the vector potential in this limit is simply proportional (and parallel) to ${\mathbf{v}}_\perp \delta t$, i.e. to the projection of the displacement vector onto a plane perpendicular to the observation direction. This expression can also be implemented in a shower simulation provided a suitable approximation is made for the delta function. The expression for the electric field is given by Eq.~(\ref{Efield}), and only the term with the time derivative of the vector potential contributes to the radiation field, so that: \begin{equation} \begin{split} &R\mathbf{E}(t,\theta)=- \frac{e\mu_{r}}{4\pi\epsilon_0 c^2} {\mathbf{v}}_\perp \\ & \frac{\delta\left(t-{nR\over c}-(1-n\beta\cos\theta)t_1\right) -\delta\left(t-{nR\over c}-(1-n\beta\cos\theta)t_2\right)} {(1-n\beta\cos\theta)} \label{Etime} \end{split} \end{equation} \begin{figure}[tbp] \centering \includegraphics[width=9.0cm]{single_track_fields.eps} \caption{Schematic representation of the radiation fields in the time domain induced by a single particle with positive charge $e>0$ traveling at constant velocity $\beta$ between times $t_1$ and $t_2$. Top panel: vector potential (see Eq.(\ref{Atime})). 
Bottom panel: electric field (see Eq.(\ref{Etime})). See the text for more details.} \label{fig:track_field} \end{figure} The radiation field due to a single particle track with $e>0$ is shown schematically in Fig.~\ref{fig:track_field}. Such a particle produces radiation when the track starts and when it ends. The two pulses ``as seen'' by the observer (placed at an angle $\theta$ w.r.t. the particle track) are separated by a time interval associated with the difference in propagation time, $(1-n\beta\cos\theta)\delta t$. Let us first consider an angle exceeding the \v Cerenkov angle, so that $(1-n\beta\cos\theta)$ is positive. The electric field of the first pulse corresponds to the start point of the track ($t_1$) and is anti-parallel to $\mathbf{v}_\perp$ according to Eq.(\ref{Etime}), while the second pulse, which corresponds to the end point ($t_2$), is parallel to it. The sign of the electric field pulse is opposite to the sign of the particle acceleration in both cases. The zero of the shown arrival time is arbitrary and corresponds to $t={nR/c}$, i.e. it is a reference time associated with the arrival of a signal from the reference position $\mathbf{x_0}$. The two pulses associated with the track arrive later than this reference time. As the angle decreases and becomes smaller than the \v Cerenkov angle, the situation is reversed: the first pulse corresponds to the end point of the track ($t_2$), while the second corresponds to the start point ($t_1$). Moreover, not only is the arrival order of the pulses as seen by the observer inverted, but both arrive before the reference time. This apparently acausal behavior is due to the fact that the particle travels at a speed greater than that of light in the medium. Although the terms responsible for the first and second pulses are interchanged, and there is a sign change associated with this interchange, it is compensated by the denominator of Eq.(\ref{Etime}), which also reverses its sign. 
As a result there is no change in the sign of the electric field of the first and second pulses as the \v Cerenkov angle is crossed, and the double peak structure at any given time has the same qualitative behavior as the observation angle changes. This seems physically sound since there can be no discontinuity of the electric field across the \v Cerenkov cone boundary. For observation at the \v Cerenkov angle both signals arrive simultaneously. In this limiting case the electric field can be formally obtained taking minus the derivative of the delta function given by Eq.(\ref{AtimeCerLimit}). This again corresponds to a double pulse first antiparallel and then parallel to $\mathbf{v}_\perp$. \subsection{Equations in the Frequency Domain} The expression for the electric field in the frequency domain used in the ZHS simulation code (Eq.(12) in~\cite{ZHS92}) reads: \begin{equation} \mathbf{E}(\omega,\mathbf{x})= {e \mu_{\rm r}\over 2 \pi \epsilon_0 {\rm c}^2}~ i\omega~{{\rm e}^{i k R } \over R} ~ {\rm e}^{i(\omega - \mathbf{k} \cdot \mathbf{v}) {\rm t}_1}~ \mathbf{v}_{\perp }~ \left[{{\rm e}^{i(\omega - \mathbf{k} \cdot \mathbf{v}) \delta {\rm t}} - 1 \over i (\omega -\mathbf{ k} \cdot \mathbf{v})} \right] \label{Efreq} \end{equation} We recall that this equation has been obtained with the following convention for the Fourier transform of the electric field: \begin{equation} \tilde f(\omega)=2\int_{-\infty}^{\infty}{f}(t)~e^{i\omega t}dt \label{FT} \end{equation} where the factor 2 corresponds to an unusual convention (this factor is usually either 1 or $(2\pi)^{-{1\over2}}$). 
Applying this Fourier transform definition to Eq.(\ref{Etime}), giving the electric field in the time domain, we obtain: \begin{equation} \begin{split} \mathbf{E}(\omega,\mathbf{x})= & -{e \mu_{\rm r} \over 2 \pi \epsilon_0 {\rm c}^2}~ {1 \over R} ~ \mathbf{v}_{\perp} \\ &{{\rm e}^{i \omega \left[n R/c + (1-n\beta\cos\theta){\rm t}_1\right]}- {\rm e}^{i \omega \left[n R/c + (1-n\beta\cos\theta){\rm t}_2\right]} \over (1-n\beta\cos\theta)} \end{split} \end{equation} which can be easily rearranged to give exactly Eq.~(\ref{Efreq}), noting that $k={n \omega \over c}$. Moreover, if we apply the Fourier transform to Eq.(\ref{AtimeCerLimit}), which applies in the limit $\theta\rightarrow\theta_C$, we get: \begin{equation} R \mathbf{A}(\omega,\mathbf{x})= {e \mu_{\rm r} \over 2 \pi \epsilon_0 {\rm c}^2}~ \mathbf{v}_{\perp} \delta t ~ {\rm e}^{i(\omega {\rm t_1} -\mathbf{k} \cdot \mathbf{r}_1)} ~{\rm e}^{ikR} \end{equation} The electric field is obtained by taking minus the time derivative, which in Fourier space is just a factor $i \omega$, giving again the same result as Eq.(13) in \cite{ZHS92} for the electric field in the frequency domain at the \v Cerenkov angle. These calculations show the consistency of Eq.(\ref{Etime}), obtained in the time domain, with Eq.(\ref{Efreq}), which gives the field in the frequency domain: they are simply Fourier transforms of each other, as expected. \subsection{Pulses for Simple Charge Distributions} \label{simplemodels} Before performing a Monte Carlo simulation of electromagnetic showers, it is interesting to apply the calculations to simple models of the shower. These models allow us to obtain relations between the shape of the pulse in the time domain and the time and spatial distribution of the charge. A simple yet interesting model consists of a charge $Q(z')$ that rises and falls along the shower direction $z'$ and spreads laterally in $x'$ and $y'$. 
Assuming cylindrical symmetry, we can write the current associated to this charge distribution as: \begin{equation} \mathbf{J}(\mathbf{x'},t')=\mathbf{v} f(z',\mathbf{r'})Q(z')\delta(z'-vt') \label{linecurrent} \end{equation} Here $\mathbf{r'}$ is a two dimensional vector in the $(x',y')$ plane transverse to $z'$, and the function $f(z',\mathbf{r'})$ gives the charge distribution in such a plane as a function of shower depth, with the normalization chosen so that $Q$ indeed gives the excess charge: \begin{equation} \int d^2\mathbf{r} f(z',\mathbf{r})= \int_0^{2\pi} d\phi' \int^\infty_0 r'\,dr' f(z',r',\phi')=1 \label{lateralNorm} \end{equation} with $\phi'$ the azimuthal angle in cylindrical coordinates. The simplest case is that of a line current along the $z'$ direction without lateral extension, in which $f(z',\mathbf{r'})$ is replaced by the two dimensional delta function $\delta(x')\delta(y')$. This approximation was also discussed in \cite{alvz99} in the frequency domain, where it was referred to as the one-dimensional approximation. When such a line current is substituted into Eq.(\ref{Asol}) and integrated in $x'$, $y'$ and $t'$ making the Fraunhofer approximation, a relatively simple expression is obtained that relates the vector potential in the time domain to the excess charge $Q(z')$: \begin{equation} \begin{split} R\mathbf{A} =& \frac{\mu}{4 \pi} \mathbf{v}_\perp \\ & \int_{-\infty}^{\infty} dz' Q(z') \delta \left[z'(1-n \beta \cos \theta) - v\left(t-{n R \over c}\right)\right] \end{split} \label{Alinecurrent} \end{equation} The delta function relates the depth in the shower development $z'$ to the observation time $t$ through a linear function: \begin{equation} z'=\zeta(t)=\beta{ct-nR \over 1-n\beta \cos \theta} \label{depthtotime} \end{equation} As the observation angle approaches the \v Cerenkov angle, the time interval corresponding to the depth spanned by the shower, i.e. the pulse width, becomes smaller. 
We thus recover a familiar result, already discussed in \cite{ZHS92} although in the frequency domain. Performing the integration in Eq.(\ref{Alinecurrent}) yields \begin{equation} R\mathbf{A}=\frac{\mu}{4\pi} \frac{\mathbf{v}_{\perp}} {\vert 1-n\beta\cos\theta\vert} Q(\zeta(t)) \end{equation} where the delta function in Eq.(\ref{Alinecurrent}) introduces the factor $\vert 1-n\beta \cos \theta \vert^{-1}$. The electric field is obtained by taking minus the derivative of the vector potential with respect to time: \begin{equation} R\mathbf{E} = - \frac{\mu c \beta}{4 \pi} {\mathbf{v}_\perp \over (1-n \beta \cos \theta)\vert 1-n \beta \cos \theta \vert} \left.{dQ(\zeta) \over d\zeta}\right \vert_{\zeta=\beta{ct-nR \over 1-n\beta \cos \theta}} \label{Elinecurrent} \end{equation} The factor $c\beta\,(1-n\beta\cos\theta)^{-1}$ arises from applying the chain rule to the derivative of $Q[\zeta(t)]$. As a result the pulse in the time domain can be regarded as the derivative of the development of the charge excess along the shower, scaled by the \v Cerenkov factors $(1-n \beta \cos \theta)^{-1}$ and $\vert1-n\beta\cos\theta\vert^{-1}$, and converted from depth into time through Eq.(\ref{depthtotime}); the pulse is first positive and then negative with respect to $\mathbf{v}_\perp$ since in a real shower $Q(z')$ corresponds to an excess of negative charge. A number of interesting results can be read off Eq.(\ref{Elinecurrent}) directly. If the development curve of the excess charge $Q(z')$ is not symmetric, as happens in real showers, the asymmetry in its derivative is directly reflected in an asymmetry between the negative and positive parts of the pulse. It is also interesting to note that when the angle of observation is below the \v Cerenkov angle, the pulse shape is inverted in time, because the early part of the pulse corresponds to the end of the shower while the beginning of the shower corresponds to the end part of the pulse, as explained above. 
Still, the polarity of the leading and trailing parts of the pulse remains the same because, although the slopes before and after shower maximum change sign, there is an extra sign change induced by the factor $(1-n \beta \cos \theta)^{-1}$. This is in complete analogy with what was discussed for a single track. In the case of observation in the \v Cerenkov direction, the $z'$ dependence of the delta function in Eq.(\ref{Alinecurrent}) disappears and the delta function can be factored out of the integral, giving a pulse of amplitude directly proportional to the integrated excess track length of the shower. The delta function term reflects the fact that all parts of the line current are observed simultaneously at the \v Cerenkov angle. These are two familiar results already emphasized in \cite{ZHS92}. Simulations have shown that the model without a lateral distribution breaks down for $\vert\theta-\theta_C \vert \lesssim 2.5^\circ$. This result is consistent with that found in \cite{alvz99}, where the one-dimensional model was studied in the frequency domain. It is instructive to extend the line current model to a more realistic three-dimensional current with cylindrical symmetry, with lateral distribution $f(z',r')$ and current given by Eq.(\ref{linecurrent}).
In that case the expression for the vector potential with two delta functions can be integrated in $t'$ and $\phi'$, and the resulting expression involves a double integral over the cylindrical coordinate $r'$ and the shower depth $z'$: \begin{equation} \begin{split} &R\mathbf{A}=\mathbf{v_{\perp}}\frac{\mu}{2\pi} \int_{0}^{\infty} r'dr' \int_{-\infty}^{\infty}dz' f(z',r')Q(z') \\ &\frac{\Theta(n\beta r'\sin\theta-|z'(1-n\beta\cos\theta)-(vt-n\beta R)|)} {\sqrt{\left[n\beta r'\sin\theta\right]^2-\left[z'\left(1-n\beta\cos\theta\right)-\left(vt-n\beta R\right)\right]^2}} \end{split} \label{Alateral} \end{equation} Although more cumbersome than Eq.(\ref{Alinecurrent}), this expression, if solved analytically for realistic lateral distribution functions, could give insight into useful parametrizations of the pulse in the time domain. In any case it can be used for numerical simulations. In the \v Cerenkov limit Eq.(\ref{Alateral}) becomes \begin{equation} \begin{split} & R\mathbf{A}=\frac{\mu}{2\pi} \frac{\mathbf{v_{\perp}}}{n\beta \sin\theta_C} \int_{-\infty}^{\infty}dz' Q(z') \\ & \int_{\frac {\vert vt-n\beta R \vert}{n\beta\sin\theta_C}}^{\infty} \frac{r'dr' f(z',r')} {\sqrt{r'^2-\left[\frac{vt-n\beta R}{n\beta\sin\theta_C}\right]^2}} \end{split} \label{AlateralCerenkov} \end{equation} This equation shows that the non-zero width of the electromagnetic pulse at the \v Cerenkov angle is the result of the lateral distribution of the shower. Although the integral is rather complicated to evaluate for realistic lateral shower profiles, it can be shown that for distributions of the form $f(r')=(r')^{-p}$ with integer $p>2$ the electric field $\mathbf{E}\propto \mathbf{v}_{\perp}\,\mbox{sgn}(t-nR/c)\,\vert vt-n\beta R\vert^{-p}$, which is a fast bi-polar pulse of non-zero width. This model still has some limitations. Note that Eq.(\ref{AlateralCerenkov}) predicts a pulse that is symmetric in time, while simulations have shown that the pulse at the \v Cerenkov angle is asymmetric.
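The power-law scaling quoted above can be checked numerically. The sketch below evaluates the inner $r'$ integral of Eq.(\ref{AlateralCerenkov}) for $f\propto (r')^{-p}$, i.e. $I(a)=\int_a^\infty (r')^{1-p}\,dr'/\sqrt{r'^2-a^2}$ with $a=\vert vt-n\beta R\vert/(n\beta\sin\theta_C)$, using the substitution $s=\sqrt{r'^2-a^2}$ to remove the integrable endpoint singularity (the choice $p=3$ below is an arbitrary test case).

```python
import numpy as np
from scipy.integrate import quad

def inner_integral(a, p):
    """I(a) = int_a^inf r^(1-p) / sqrt(r^2 - a^2) dr, computed after the
    substitution s = sqrt(r^2 - a^2), which turns the integrand into the
    smooth function (a^2 + s^2)^(-p/2) on [0, inf)."""
    val, _ = quad(lambda s: (a * a + s * s) ** (-p / 2.0), 0.0, np.inf)
    return val
```

For $p=3$ one finds $I(a)=1/a^2$, so $I(a)\propto a^{1-p}$; after the time derivative this gives $\mathbf{E}\propto a^{-p}\propto\vert vt-n\beta R\vert^{-p}$, consistent with the scaling quoted above.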
This is due in part to the radial distribution of velocities in the shower, which is not included in the model. The development of a current density vector model that can accurately reproduce the features of \v Cerenkov radiation is work in progress. \subsection{Implementation in the ZHS Monte Carlo} The ZHS Monte Carlo \cite{ZHS92} allows the simulation of electromagnetic showers and their associated coherent radio emission up to EeV energies \cite{aljpz09}. Originally developed for ice \cite{ZHS91}, it has been extended so that electromagnetic showers in other homogeneous dielectric media can be simulated \cite{alvz06,aljpz09}. The code accounts for bremsstrahlung, pair production, and the four interactions responsible for the development of the excess charge, namely M\o ller, Bhabha and Compton scattering and electron-positron annihilation. In addition, multiple elastic scattering (according to Moli\`ere's theory) and continuous ionization losses are also implemented. The electron/positron tracks between consecutive interactions are split into subtracks so that no subtrack exceeds a maximum depth fixed at 0.1 radiation lengths. For low energy particles these subdivisions are further reduced to ensure that no subtrack is comparable to the particle range, and they become the step used to evaluate ionization losses and multiple elastic scattering. Convergence of the results as the step is reduced has been carefully checked \cite{alvz00}. In order to account for interference effects between the radiation emitted by the particles responsible for the excess negative charge, the ZHS code was designed to follow all electrons and positrons down to a kinetic energy threshold of 100 keV, as well as to carefully account for time by considering deviations with respect to a plane front moving at the speed of light, injected in phase with the primary particle.
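The subtrack-splitting rule described above can be sketched as follows. The 0.1 radiation-length cap is the one quoted in the text, but the way the step is reduced for low-energy particles is a schematic choice of ours, not the actual ZHS prescription.

```python
import numpy as np

def split_track(z_start, z_end, particle_range, max_step=0.1):
    """Split a straight track [z_start, z_end] (in radiation lengths) into
    subtracks no longer than max_step; for low-energy particles the step is
    further reduced so that no subtrack is comparable to the particle range
    (the factor 0.1*particle_range is an illustrative choice)."""
    step = min(max_step, 0.1 * particle_range)
    n_sub = max(1, int(np.ceil((z_end - z_start) / step)))
    edges = np.linspace(z_start, z_end, n_sub + 1)
    return list(zip(edges[:-1], edges[1:]))
```

The resulting subtracks cover the original track exactly and become the steps over which ionization losses and multiple elastic scattering are evaluated.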
In addition to the delays associated with the propagation geometry, those due to particles travelling at velocities smaller than the velocity of light are accounted for assuming that the energy loss is uniform across the step. An approximate account is also made of the time delay associated with the multiple elastic scattering processes along the step. As a result the tracks of all charged particles in a shower are divided into multiple subtracks which are assumed to be straight and to have constant velocity. The positions of the end points of these subtracks, as well as the corresponding times, are readily available by design, and they can be used to compute the frequency components of the electric field making extensive use of Eq.(\ref{Efreq}), taking into account the relative phase shift between different tracks due to their different starting point positions and time delays. In this work we have extended the Monte Carlo to also calculate the pulse in the time domain. A routine has been developed to account for the contribution of each of these particle subtracks to the vector potential, making extensive use of Eq.(\ref{Atime}). Each subtrack contributes a unit ``rectangle'' to the vector potential, which varies in height, ``duration'' and sign (see Fig.~\ref{fig:track_field}) depending on the velocity, the relative orientation of the track with respect to the direction of observation, and the charge of the particle. When the observation direction is very close to the \v Cerenkov angle the delta function in Eq.(\ref{AtimeCerLimit}) is replaced by a rectangle corresponding to a nascent delta function~\cite{Kelly}. If the sampling time bin width is set to $\Delta T$ then a natural choice of nascent delta function is given by \begin{equation} \eta_{\Delta T}(t) = \left\{ \begin{array}{l l} \frac{1}{\Delta T}, & \quad -\frac{\Delta T}{2} < t \leq \frac{\Delta T}{2}\\ 0, & \quad \mbox{otherwise}\\ \end{array} \right.
\label{eq:discrete_delta_function} \end{equation} In this case the base of the rectangle is fixed by the intrinsic ``time resolution'' $\Delta T$ of the simulation and the pulse height depends on $\Delta T$. In practice, the time domain radio signal can be reconstructed with an antenna receiver system and digital sampling electronics. The time resolution of a single waveform is determined by the digital sampling bin width and the high frequency cutoff of the receiver system. Once the vector potential induced by each subtrack is defined, the contributions of all charged subtracks in the shower are summed and the vector potential is differentiated with respect to time to obtain the electric field in the time domain. In the next Section we show several examples of the results of this procedure. \section{Results} \label{results} In Fig.~\ref{fig:efield} we show the electric field as a function of the arrival time of the signal obtained with the ZHS code for a single 1 PeV electron-induced shower in ice at different observation angles. The zero of the arrival time corresponds to a signal emitted as the primary particle initiating the shower is injected into the medium. \begin{figure}[tbp] \centering \includegraphics[width=9.0cm]{efield_time_1PeV.ps} \caption{Electric field as a function of time as obtained in ZHS simulations of a single 1 PeV electron-induced shower in ice for different observation angles. Top panel: observation at the \v Cerenkov angle. Bottom panel: observation at $\theta_C-5^\circ$ (long green dashes) and at $\theta_C+5^\circ$ (short blue dashes). In the bottom panel the red solid histograms represent the electric field obtained applying Eq.(\ref{Elinecurrent}) to the simulated excess negative charge $Q(z)$.} \label{fig:efield} \end{figure} The electric field is parallel to the projection of the velocity onto a plane perpendicular to the direction of observation at early times and anti-parallel later on.
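For reference, the nascent delta function of Eq.(\ref{eq:discrete_delta_function}) and the per-subtrack ``rectangle'' contribution to the vector potential can be sketched as follows. The function names and the schematic height argument are ours; in the actual code the height and duration follow from the track velocity, orientation and charge.

```python
import numpy as np

def nascent_delta(t, dT):
    """Unit-area rectangle of width dT, Eq. (eq:discrete_delta_function)."""
    return np.where((t > -dT / 2.0) & (t <= dT / 2.0), 1.0 / dT, 0.0)

def subtrack_A(t, t_start, t_stop, height):
    """Schematic 'rectangle' contribution of one subtrack to the vector
    potential: constant between the retarded times of its end points."""
    return np.where((t >= t_start) & (t < t_stop), height, 0.0)
```

Summing `subtrack_A` over all subtracks and differentiating the result with respect to time gives the time-domain electric field, as described above.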
This is expected after the discussion in Section~\ref{singletrack} of the electric field emitted by a single positively charged particle, with the important difference that in a shower the electric field is produced by an excess of negative charge and the polarity of the field is reversed with respect to that shown in Fig.~\ref{fig:track_field}. Also, as in the case of a single track, there is no change in the polarity of the pulse when observing inside ($\theta<\theta_c$) or outside ($\theta>\theta_c$) the \v Cerenkov cone. The pulse always starts positive (parallel to $\mathbf v_\perp$) and ends negative (antiparallel to $\mathbf v_\perp$) regardless of the observation angle. This feature can be used as a discriminator against background events in neutrino searches. It can also be clearly seen that the pulse is broader in time away from the \v Cerenkov cone than close to it, with an apparent duration proportional to $\Delta z \vert1-n\beta\cos\theta\vert/c$, where $\Delta z$ is the spread along the shower axis of the excess charge (see Eq.(\ref{depthtotime})). For observation at the \v Cerenkov angle the apparent duration of the pulse is not zero, despite the fact that the \v Cerenkov factor $\vert1-n\beta\cos\theta_c\vert\rightarrow 0$, because the shower also spreads out in the lateral dimensions ($x$ and $y$ directions). Also, due to our definition of $t=0$ and to the presence of the \v Cerenkov factor in the $\delta-$functions in Eq.(\ref{Etime}), the pulse occurs at $t>0$ outside the \v Cerenkov cone and at $t<0$ inside it. \begin{figure}[tbp] \centering \includegraphics[width=9.0cm]{efield_time_lpm.ps} \caption{Top panel: Longitudinal development of the excess negative charge as obtained in ZHS simulations of 1 PeV (long green dashes) and 100 PeV (short blue dashes) electron-induced showers in ice.
Bottom panel: Electric field as a function of time generated in the showers shown in the top panel (dashed histograms), for observation angle $\theta_C+10^\circ$. The solid histograms represent the electric field obtained applying Eq.(\ref{Elinecurrent}) to the simulated excess charge $Q(z)$ shown in the top panel.} \label{fig:efield_lpm} \end{figure} According to the simple model developed in Section \ref{simplemodels}, the field away from the \v Cerenkov angle is proportional to the derivative of the excess charge distribution $Q(z)$ with respect to $t$ - Eq.(\ref{Elinecurrent}) - or equivalently to the derivative with respect to $z$, since there is a linear relation between $t$ and $z$ - Eq.(\ref{depthtotime}). The ZHS code also gives the longitudinal profile of the excess charge, and we have applied Eq.(\ref{Elinecurrent}) to the simulated $Q(z)$ and compared the result to the electric field obtained directly in the Monte Carlo. This comparison is also shown in Fig.~\ref{fig:efield}. The agreement between the electric field obtained directly in the Monte Carlo simulation (dashed histograms) and that predicted by Eq.(\ref{Elinecurrent}) (solid histograms) is remarkable. The electric field follows the variation of the excess charge in $z$ or, equivalently, in $t$. This explains why for a fixed observation angle the pulse changes sign from early to late times (for a typical shower $Q(z)$ grows relatively fast, reaches a maximum, and then decreases more slowly with depth), and why it is asymmetric with respect to the time axis ($Q(z)$ is not a symmetric function around its maximum). Also, when the direction of observation is inside the \v Cerenkov cone, the observer sees the derivative of the beginning of the excess charge distribution first and the corresponding derivative of the end of $Q(z)$ at later times, while the opposite is true for observations outside the \v Cerenkov cone.
As a consequence the pulse at $\theta<\theta_c$ looks like an antisymmetric copy, with respect to $t=0$, of the pulse at $\theta>\theta_c$, as can be clearly seen in Fig.~\ref{fig:efield}. An accurate reconstruction of the time domain electric field could in principle determine on which side of the \v Cerenkov cone the event was observed. Conversely, the shape of the pulse can be used to infer the depth development of the shower. Eq.(\ref{Elinecurrent}) stresses the fact that the features of the excess charge distribution are ``mapped'' onto the time structure of the pulse. In particular it is well known that electromagnetic showers with energies above the scale at which the LPM effect \cite{LPM} starts to be effective ($\sim$ PeV in ice \cite{Stanev_LPM}) are ``stretched'' in the longitudinal dimension and often show peaks in their profile \cite{RalstonLPM,Konishi,alz97,klein}. These two features should translate into the duration of the pulse and into its time structure, which should also exhibit multiple peaks. This is shown in Fig.~\ref{fig:efield_lpm}, in which, due to the LPM effect, the longitudinal profile of a 100 PeV electron-induced shower exhibits two peaks which appear as two positive and two negative peaks in the time structure of the pulse. For comparison a 1 PeV electron-induced shower not affected by the LPM effect and its corresponding electric field are also shown. The linear relation between the time domain structure of the electric field and the shower profile suggests that the longitudinal profile of the shower could be reconstructed from an observation off the \v Cerenkov angle. The extended ZHS code is able to calculate both the electric field as a function of time and its Fourier transform from first principles. Moreover, the two calculations can be made simultaneously for the same shower.
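The time-to-frequency consistency check described here can be mimicked end-to-end with a toy bi-polar pulse. The Gaussian-derivative pulse and the sampling parameters below are illustrative, and numpy's discrete FFT, scaled by $\Delta T$, stands in for the convention of Eq.(\ref{FT}).

```python
import numpy as np

dT = 0.1                            # sampling bin width [ns]
t = np.arange(-50.0, 50.0, dT)      # time grid [ns]
sigma = 2.0                         # toy pulse width [ns]

# Toy bi-polar pulse: derivative of a Gaussian, E(t) = d/dt exp(-t^2/2sigma^2)
E_t = -(t / sigma**2) * np.exp(-t**2 / (2.0 * sigma**2))

# Discrete approximation to the Fourier transform of the time-domain field
E_f = np.fft.rfft(E_t) * dT
freq = np.fft.rfftfreq(len(t), d=dT)       # [GHz] since t is in ns
w = 2.0 * np.pi * freq                     # angular frequency [rad/ns]

# Analytic |FT| of the same pulse, for comparison below the Nyquist frequency
E_f_mag_analytic = w * sigma * np.sqrt(2.0 * np.pi) * np.exp(-(w * sigma)**2 / 2.0)
```

Below the Nyquist frequency $1/(2\Delta T)$ the discrete and analytic spectra agree, and refining $\Delta T$ extends the agreement to higher frequencies, as in Fig.~\ref{fig:efieldFT}.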
Both calculations can easily be compared by performing the Fourier transform of the pulse calculated in the time domain, following the convention in Eq.(\ref{FT}). This provides a further check of the two methods, as well as a test of the accuracy of the numerical procedures involved in the calculation of the radio emission in both domains. An example is shown in Fig.~\ref{fig:efieldFT}, where the electric field as a function of frequency as obtained in ZHS simulations of a single 1 PeV electron-induced shower is plotted along with the (Fast) Fourier Transform (FFT) of the electric field in the time domain obtained in simultaneous ZHS simulations of the same shower. The agreement between the two spectra is very good for frequencies below $\omega_{\Delta T} \sim 2\pi/\Delta T$, with $\Delta T$ the time resolution chosen for the ZHS simulations in the time domain. We do not expect to be able to reproduce the frequency spectrum at frequencies above $\omega_{\Delta T}$, which is proportional to the Nyquist frequency of the system. To illustrate this point, in Fig.~\ref{fig:efieldFT} we also show the Fourier transformed spectrum (at the \v Cerenkov angle) of time domain calculations performed with two different time resolutions, $\Delta T=0.1$ ns and 0.5 ns. One can see that the agreement between the frequency spectrum obtained in ZHS and the Fourier transformed time domain electric field improves as $\Delta T$ decreases, as expected. Calculations in the frequency domain are thus more advisable near the \v Cerenkov angle. \begin{figure}[tbp] \centering \includegraphics[width=9.0cm]{time_to_freq_FT_1PeV.ps} \caption{Electric field frequency spectrum obtained in ZHS simulations of a single 1 PeV electron-induced shower in ice for different observation angles (green dashed lines).
Also shown is the Fast Fourier Transform (FFT) of the electric field in the time domain obtained in simultaneous ZHS simulations of the same shower for two different time resolutions, $\Delta T=0.1$ ns (red solid lines) and $\Delta T=0.5$ ns (magenta dotted line - only shown at the \v Cerenkov angle for clarity).} \label{fig:efieldFT} \end{figure} \section{Summary and outlook} In this work we have developed an algorithm to obtain the \v Cerenkov radio pulse produced by a single charged particle track in a dielectric medium. We have implemented this algorithm in the ZHS Monte Carlo, with which we can predict the coherent \v Cerenkov radio emission of electromagnetic showers in dense dielectric media in both the time and frequency domains. An observer in the Fraunhofer region, far from the axis of the electromagnetic shower at an angle $\theta$, sees a bi-polar pulse due to the excess of negative charge in the shower. The apparent time duration of the pulse is proportional to $\Delta z~\vert 1-n\beta\cos\theta\vert/c$, with $\Delta z$ the spread of the shower in the longitudinal direction. At the \v Cerenkov angle $(1-n\beta\cos\theta_C)\rightarrow 0$ and the duration of the pulse is mainly determined by the lateral extent of the shower. At angles $\theta>\theta_C$ the observer sees first the electric field produced by the early stages of the shower and later the field due to the end of the shower, while the time sequence is reversed for observation at $\theta<\theta_C$. Regardless of the observation angle, the bulk of the electric field due to the excess negative charge is directed along ${\mathbf v}_\perp$ - the projection of the particle velocity onto a plane perpendicular to the shower axis - at early times and in the opposite direction later on. The shape of the pulse maps the variation with depth of the excess charge in the shower. This information can be of great practical importance for interpreting actual data.
A consistency check performed by Fourier-transforming the pulse in time and comparing it to the frequency spectrum obtained directly in the simulations yields, as expected, fully consistent results. Our results, besides testing algorithms used for reference calculations in the frequency domain, shed new light on the properties of the radio pulse in the time domain. In the future we plan to implement the algorithm for time-domain calculations of electric field pulses in Monte Carlo simulations of hadronic and neutrino-induced showers, of great importance for neutrino detectors using the radio \v Cerenkov technique. We will also explore how actual experiments can exploit the richness of information contained in the time structure of the radio pulse to obtain information on the shower development. This could be of great help in reconstructing the parameters of neutrino-induced showers and in discriminating against background events. \section{Acknowledgments} J.A-M and E.Z. thank Xunta de Galicia (INCITE09 206 336 PR) and Conseller\'\i a de Educaci\'on (Grupos de Referencia Competitivos -- Consolider Xunta de Galicia 2006/51); Ministerio de Ciencia e Innovaci\'on (FPA 2007-65114 and Consolider CPAN) and Feder Funds, Spain. We thank CESGA (Centro de SuperComputaci\'on de Galicia) for computing resources and assistance. A. R-W thanks NASA (NESSF Grant NNX07AO05H). We thank J. Bray and C.W. James for many helpful discussions.
\section{Introduction} Let $K$ be a positive integer. A symmetric subset $A$ of a group $G$ is a $K$-approximate subgroup if there is a finite subset $E\subseteq G$ such that $|E|\leq K$ and $AA\subseteq EA$. The formal definition of an approximate subgroup was introduced by Tao in \cite{tao}. Since then many important results on approximate subgroups have been established. In particular, Breuillard, Green and Tao essentially described the structure of finite approximate subgroups \cite{bgt}. The reader is referred to the recent book \cite{tointon}, or the surveys \cite{breu1,breu2}, for detailed information on these developments. An interesting principle was stated in \cite{breu2}:\bigskip {\it Group-theoretical arguments can often be successfully transferred to approximate subgroups}. \bigskip In the present article we check this principle against certain variations of B. H. Neumann's theorem that a BFC-group has finite commutator subgroup. Given a group $G$ and an element $x\in G$, we write $x^G$ for the conjugacy class containing $x$. More generally, if $X,Y\subseteq G$, we write $X^Y$ for the set of all $x^y$, where $x\in X$ and $y\in Y$. Of course, if the number of elements in $x^G$ is finite, we have $|x^G|=[G:C_G(x)]$. A group is called a BFC-group if its conjugacy classes are finite and have bounded size. In 1954 B. H. Neumann discovered that the commutator subgroup $G'$ of a BFC-group $G$ is finite \cite{bhn}. It follows that if $|x^G|\leq n$ for each $x\in G$, then $G'$ has finite $n$-bounded order. Throughout the article we use the expression ``$(a,b,\dots)$-bounded" to mean that a quantity is finite and bounded by a certain number depending only on the parameters $a,b,\dots$. A first explicit bound for the order of $G'$ was found by J. Wiegold \cite{wie}, and the best known was obtained in \cite{gumaroti} (see also \cite{neuvoe} and \cite{sesha}). 
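To fix ideas, here is a standard example of an approximate subgroup that is far from being a subgroup (this illustration is ours, written additively since the ambient group is abelian): in $G=\mathbb{Z}$ the symmetric interval $A=\{-N,\dots,N\}$ satisfies

```latex
A+A=\{-2N,\dots,2N\}=\bigl(\{-N\}+A\bigr)\cup\bigl(\{N\}+A\bigr)\subseteq E+A,
\qquad E=\{-N,N\},
```

so $A$ is a $2$-approximate subgroup for every $N$, while the subgroup generated by $A$ is all of $\mathbb{Z}$.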
The article \cite{dieshu} deals with groups $G$ in which conjugacy classes containing commutators are bounded. It is shown that if $|x^G|\leq n$ for any commutator $x$, then the second commutator subgroup $G''$ has finite $n$-bounded order. Later this was extended in \cite{dms} to higher commutator subgroups. A related result for groups in which the conjugacy classes containing squares have finite bounded sizes was obtained in \cite{squares}. A stronger version of the Neumann theorem was recently established in \cite{ashu}: \bigskip \noindent {\it Let $n$ be a positive integer and $G$ a group having a subgroup $A$ such that $|a^G|\leq n$ for each $a\in A$. Then the commutator subgroup of $\langle A^G\rangle$ has finite $n$-bounded order.} \bigskip Here, as usual, $\langle X\rangle$ denotes the subgroup generated by the set $X$ and so $\langle A^G\rangle$ denotes the minimal normal subgroup containing $A$. In the present paper we extend the above result as follows. \begin{theorem}\label{main1} Let $K,n$ be positive integers and $G$ a group having a $K$-approximate subgroup $A$ such that $|a^G|\leq n$ for each $a\in A$. Then the commutator subgroup of $\langle A^G\rangle$ has finite $(K,n)$-bounded order. \end{theorem} For a subset $X$ of a group $G$ we write $[G,X]$ to denote the subgroup generated by all commutators $[g,x]$, where $g\in G$ and $x\in X$. It is well-known that $[G,X]$ is a normal subgroup of $G$. Moreover, $[G,X]=[G,\langle X\rangle]$. We examine approximate subgroups $A\subseteq G$ such that the conjugacy classes of commutators $[g,a]$ have bounded sizes whenever $g\in G$ and $a\in A$. \begin{theorem}\label{main2} Let $K,n$ be positive integers and $G$ a group having a $K$-approximate subgroup $A$ such that $|[g,a]^G|\leq n$ for all $g\in G$ and $a\in A$. Then the commutator subgroup of $[G,A]$ has finite $(K,n)$-bounded order. 
\end{theorem} It is worthwhile to mention that Theorem \ref{main2} was unknown even in the case where $A$ is a subgroup of $G$. It can be regarded as an extension of the aforementioned result in \cite{dieshu} that if $|x^G|\leq n$ for any commutator $x$, then the second commutator subgroup $G''$ has finite $n$-bounded order. \section{Preliminaries} Let $G$ be a group generated by a set $X$ such that $X = X^{-1}$. Given an element $g\in G$, we write $l_X(g)$ for the minimal number $l$ with the property that $g$ can be written as a product of $l$ elements of $X$. A proof of the following result can be found in \cite[Lemma 2.1]{dieshu}. \begin{lemma}\label{21} Let $G$ be a group generated by a set $X=X^{-1}$ and let $L$ be a subgroup of finite index $m$ in $G$. Then each coset $Lb$ contains an element $g$ such that $l_X(g)\leq m-1$. \end{lemma} The next lemma is almost obvious. \begin{lemma}\label{22} Let $k,n,s\geq1$, and let $G$ be a group containing a set $X=X^{-1}$ such that $|x^G|\leq n$ for any $x\in X$. Let $g_1,\dots,g_s\in\langle X\rangle$ and assume that $l_X(g_i)\leq k$. Then $C_G(g_1,\dots,g_s)$ has finite $(k,n,s)$-bounded index in $G$. \end{lemma} \begin{proof} Since $l_X(g_i)\leq k$, we can write $g_i=x_{i1}\ldots x_{ik}$, where $x_{ij}\in X$ and $i=1,\ldots,s$. By the hypothesis the index of $C_G(x_{ij})$ in $G$ is at most $n$ for any such element $x_{ij}$. Set $U=\cap_{i,j}C_G(x_{ij})$. We have $[G:U]\leq n^{ks}$. Since $U\leq C_G(g_1,\dots,g_s)$, the lemma follows. \end{proof} The next observation will play a crucial role in the proof of Theorems \ref{main1} and \ref{main2}. \begin{lemma}\label{23} Let $X$ be a normal subset of a group $G$ such that $|x^G|\leq n$ for any $x\in X$, and let $H=\langle X\rangle$. Then the subgroup $\langle[H,x]^G\rangle$ has finite $n$-bounded order. \end{lemma} \begin{proof} Without loss of generality we can assume that $X=X^{-1}$. Let $m$ be the maximum of indices of $C_H(x)$ in $H$ for $x\in X$. Of course, $m\leq n$. 
Take $x\in X$. Since the index of $C_H(x)$ in $H$ is at most $m$, by Lemma \ref{21} we can choose elements $y_1,\ldots,y_m$ in $H$ such that $l_X(y_i)\leq m-1$ and the subgroup $[H,x]$ is generated by the commutators $[y_i,x]$, for $i=1,\ldots,m$. For any such $i$ write $y_i=y_{i1}\ldots y_{i(m-1)}$, with $y_{ij}\in X$. By using standard commutator identities we can rewrite $[y_i,x]$ as a product of conjugates in $H$ of the commutators $[y_{ij},x]$. Let $\{h_1,\ldots,h_s\}$ be the conjugates in $H$ of all elements from the set $\{x,y_{ij} \mid 1\leq i\leq m,\ 1\leq j\leq m-1\}.$ Note that the number $s$ here is $m$-bounded. This follows from the fact that $C_H(x)$ has index at most $m$ in $H$ for each $x\in X$. Put $T=\langle h_1,\ldots,h_s \rangle$. Observe that the centre $Z(T)$ has index at most $m^s$ in $T$, since the index of $C_H(h_i)$ in $H$ is at most $m$ for any $i=1,\ldots,s$. Thus, by Schur's theorem \cite[10.1.4]{Rob}, we conclude that the commutator subgroup $T'$ has finite $m$-bounded order. Since $[H,x]$ is contained in $T'$, we deduce that the order of $[H,x]$ is $m$-bounded. Further, the subgroup $[H,x]$ is normal in $H$ and there are at most $n$ conjugates of $[H,x]$ in $G$. Therefore $\langle[H,x]^G\rangle$ is a product of at most $n$ normal subgroups, each of which has $n$-bounded order. Hence the result. \end{proof} \begin{lemma} \label{uxx} Let $G$ be a group and $x,y\in G$. Assume that $|x^G|=m$ and $|(yx)^G|\leq m$. Suppose that there are $b_1,\dots,b_m\in G$ such that $x^G=\{x^{b_1},\dots,x^{b_m}\}$ and $y\in C_G(b_1,\dots,b_m)$. Then $[G,y]\leq[G,x]$. \end{lemma} \begin{proof} First we note that $(yx)^G=\{yx^{b_1},\dots,yx^{b_m}\}$. Indeed, the elements $yx^{b_1},\dots,yx^{b_m}$ are pairwise distinct and all belong to $(yx)^G$, and since by the hypothesis $(yx)^G$ contains at most $m$ elements, the class $(yx)^G$ must coincide with $\{yx^{b_1},\dots,yx^{b_m}\}$. Therefore for any $g\in G$ there is $b_i\in\{b_1,\dots,b_m\}$ such that $(yx)^g=yx^{b_i}$.
So $y^gx^g=yx^{b_i}$ and $[y,g]=x^{b_i}x^{-g}\in[G,x]$. The lemma follows. \end{proof} \section{Proof of Theorem \ref{main1}} Recall that $A$ is a $K$-approximate subgroup of a group $G$ such that $|a^G|\leq n$ for any $a\in A$. We wish to show that $H=\langle A^G\rangle$ has finite commutator subgroup of $n$-bounded order. Let $E=\{e_1,\dots,e_K\}$ be a set of size $K$ such that $AA\subseteq EA$. It will be assumed that $K$ is chosen as small as possible and so for each $i=1,\dots,K$ there are $x_{i1},x_{i2},x_{i3}\in A$ such that $x_{i1}x_{i2}=e_ix_{i3}$. By Lemma \ref{23} each of the subgroups $\langle[H,x_{ij}]^G\rangle$ has $n$-bounded order. Hence, also their product has finite $(K,n)$-bounded order. Now we can pass to the quotient over the product of all $\langle[H,x_{ij}]^G\rangle$ and without loss of generality assume that the set $E$ is contained in the centre of $H$. Denote by $X$ the set $A^G$. Let $m$ be the maximum of indices of $C_H(x)$ in $H$ for $x\in X$. Of course, $m\leq n$. Select $a\in A$ such that $|a^H|=m$. Choose $b_1,\ldots,b_m$ in $H$ such that $l_X(b_i)\leq m-1$ and $a^H=\{a^{b_i};i=1,\ldots,m\}$. The existence of the elements $b_i$ is guaranteed by Lemma \ref{21}. Set $U=C_G(\langle b_1,\ldots,b_m \rangle)$. In view of Lemma \ref{22} note that the index of $U$ in $G$ is $n$-bounded. Let $r$ be the minimal number for which there are elements $d_1,\dots,d_r\in A$ such that $A$ is contained in the union of the left cosets $d_iU$. We fix the elements $d_i$ and denote by $S$ the product of the subgroups $\langle[H,d_i]^G\rangle$ for $i=1,\dots,r$. By Lemma \ref{23} $S$ is a product of at most $r$ normal subgroups of finite $n$-bounded order. Taking into account that $r$ is $n$-bounded conclude that $S$ has finite $n$-bounded order. Choose any $u\in A\cap U$. Since $AA\subseteq EA$ write $ua=ex$ for suitable $e\in E$ and $x\in A$. Since $e\in Z(H)$ and $|x^H|\leq m$, it follows that $|(ua)^H|\leq m$. 
Recall that $U=C_G(\langle b_1,\ldots,b_m \rangle)$. Lemma \ref{uxx} implies that $[H,u]\leq[H,a]$. This happens for every choice of $u\in A\cap U$ and so $[H,(A\cap U)]\leq[H,a]$. Let $T=\langle[H,a]^G\rangle$ and observe that by virtue of Lemma \ref{23} $T$ has finite $n$-bounded order. Choose an arbitrary element $d\in A$. There is an index $j$ such that $d\in d_jU$ and ${d_j}^{-1}d\in U$. Since ${d_j}^{-1}d\in AA$, write ${d_j}^{-1}d=ey$ for suitable $e\in E$ and $y\in A$. Observe that $e\in U$, whence $y\in U$. Taking into account that $[H,d_j]\leq S$, $[H,e]=1$, and $[H,y]\leq T$ deduce $$[H,d]=[H,d_jey]\leq[H,d_j][H,e][H,y]\leq ST.$$ Since $d$ was chosen in $A$ arbitrarily, conclude that $[H,A]\leq ST$. Recall that $H=\langle A^G\rangle$. We therefore conclude that $H'\leq ST$. Now the theorem follows from the fact that the order of $ST$ is $n$-bounded. This completes the proof of the theorem. \section{Proof of Theorem \ref{main2}} Throughout this section $A\subseteq G$ is a $K$-approximate subgroup of a group $G$ such that $|[g,a]^G|\leq n$ for each $g\in G$ and $a\in A$. We need to prove that $[G,A]$ has finite commutator subgroup of $(K,n)$-bounded order. Let $X$ be the set of all conjugates of commutators $[g,a]$, where $g\in G$ and $a\in A$. Note that the set $X$ is symmetric. Put $H=\langle X\rangle$. By Lemma \ref{23} the subgroup $\langle[H,x]^G\rangle$ has finite $n$-bounded order whenever $x\in X$. \begin{lemma} \label{one} For any $a\in A$ the subgroup $[H,[G,a]]$ has finite $n$-bounded order. \end{lemma} \begin{proof} Choose $a\in A$. Let $m_0$ be the maximum of indices of $C_H(x)$ in $H$, where $x$ ranges through the set of commutators $[g,a]$ with $g\in G$. Select $g_0\in G$ such that $|[g_0,a]^H|=m_0$. Choose $b_1,\ldots,b_{m_0}$ in $H$ such that $l_X(b_i)\leq m_0-1$ and $[g_0,a]^H=\{[g_0,a]^{b_i}; i=1,\ldots,m_0\}$. (The existence of the elements $b_i$ is guaranteed by Lemma \ref{21}.) Set $U=C_G(\langle b_1,\ldots,b_{m_0} \rangle)$. 
Note that by Lemma \ref{22} the index of $U$ in $G$ is $n$-bounded. Let $U_0=\cap_{g\in G} U^g$ be the maximal normal subgroup of $G$ contained in $U$. Obviously, the index of $U_0$ in $G$ is $n$-bounded as well. For any $g\in G$ observe that $[gg_0,a]=[g,a]^{g_0}[g_0,a]$. Choose $g\in U_0$ and set $[g,a]^{g_0}=u$. Lemma \ref{uxx} shows that $[H,u]\leq[H,[g_0,a]]$. Let $c_1,\dots,c_k$ be a transversal of $U_0$ in $G$. For $i=1,\dots,k$ let $T_i$ denote the subgroup $\langle[H,[c_i,a]]^G\rangle$. In view of Lemma \ref{23} each subgroup $T_i$ has finite $n$-bounded order. Further, let $T_0$ denote the subgroup $\langle[H,[g_0,a]]^G\rangle$. Likewise, $T_0$ has finite $n$-bounded order. Let $N$ be the product of all the $T_i$ for $i=0,1,\dots,k$. Any element $g\in G$ can be written as a product $g=xc_j$ for suitable $x\in U_0$ and $j\leq k$. Then we have $[g,a]=[xc_j,a]=[x,a]^{c_j}[c_j,a]$. We now know that the images in $G/N$ of both $[x,a]$ and $[c_j,a]$ are central in $H/N$. It follows that the image of $[G,a]$ is also central in $H/N$. Since $N$ has finite $n$-bounded order, the lemma follows. \end{proof} Let $E=\{e_1,\dots,e_K\}$ be a set of size $K$ such that $AA\subseteq EA$. It will be assumed that $K$ is chosen as small as possible, and so for each $i=1,\dots,K$ there are $x_{i1},x_{i2},x_{i3}\in A$ such that $x_{i1}x_{i2}=e_ix_{i3}$. By Lemma \ref{one} for each $x_{ij}$ the subgroup $N_{ij}=[H,[G,x_{ij}]]$ has finite $n$-bounded order. Let $N$ be the product of all these subgroups $N_{ij}$ and observe that $N$ has finite $(K,n)$-bounded order. Pass to the quotient $G/N$ and assume that $[H,[G,x_{ij}]]=1$ for all $i,j$. Then of course $[H,[G,e_i]]=1$ for all $i=1,\dots,K$. Therefore in what follows, without loss of generality, we will assume that $$[G,E]\leq Z(H).$$ Let $m$ be the maximum of the indices of $C_H(x)$ in $H$, where $x$ ranges through the set $X$. \begin{lemma}\label{three} For any $b\in\langle A\rangle$ and $g\in G$ we have $|[g,b]^H|\leq m$.
\end{lemma}

\begin{proof} Indeed, since $b\in\langle A\rangle$, we can write $b=ea$ for suitable $a\in A$ and $e\in\langle E\rangle$. Then $[g,b]=[g,ea]\in[G,E]X$. Now use that $[G,E]\leq Z(H)$ and $|x^H|\leq m$ for any $x\in X$ and deduce the lemma. \end{proof}

Now fix $h_0\in G$ and $a_0\in A$ such that $|[h_0,a_0]^H|=m$. Choose $h_1,\ldots,h_{m}$ in $H$ such that $l_X(h_i)\leq m-1$ and $$[h_0,a_0]^H=\{[h_0,a_0]^{h_i};i=1,\ldots,m\}.$$ The existence of the elements $h_i$ follows from Lemma \ref{21}. Set $V=C_G(\langle h_1,\ldots,h_m\rangle)$. Note that by Lemma \ref{22} the index of $V$ in $G$ is $n$-bounded. Let $V_0=\cap_{g\in G}V^g$ be the maximal normal subgroup of $G$ contained in $V$ and note that the index of $V_0$ in $G$ is $n$-bounded as well. By Lemma \ref{one} the subgroup $S=[H,[G,a_0]]$ has finite $n$-bounded order.

\begin{lemma}\label{two} $[H,[G,V_0\cap\langle A\rangle]]\leq S$. \end{lemma}

\begin{proof} Let $b\in V_0\cap\langle A\rangle$. Lemma \ref{three} tells us that $|[g,a_0b]^H|\leq m$ for any $g\in G$. Moreover, observe that $[g,a_0b]=[g,b][g,a_0]^b$ while $[g,b]\in V_0$. Lemma \ref{uxx} shows that $[H,[g,b]]\leq [H,[g,a_0]]\leq S$. This happens for every $g\in G$, so the lemma follows. \end{proof}

Let $r$ be the minimal number for which there are elements $d_1,\dots,d_r\in A$ such that $A$ is contained in the union of the left cosets $d_1V_0,\dots,d_rV_0$. We fix the elements $d_i$ and for each $i=1,\dots,r$ put $M_i=[H,[G,d_i]]$. By Lemma \ref{one} the product of all $M_i$ has finite $n$-bounded order. Pass to the quotient $G/\prod_iM_i$ and, without loss of generality, assume that $[G,d_i]\leq Z(H)$ for each $i$. For an arbitrary element $a\in A$ there is $i\leq r$ such that $a\in d_iV_0$. We have $[G,a]\leq[G,d_i][G,{d_i}^{-1}a]$. By assumption, $[H,[G,d_i]]=1$. Taking into account that ${d_i}^{-1}a\in V_0\cap\langle A\rangle$ and using Lemma \ref{two}, deduce that $[H,[G,{d_i}^{-1}a]]\leq S$. Thus, $[H,[G,a]]\leq S$ whenever $a\in A$.
Since $H=\prod_{a\in A}[G,a]$, it follows that $H'\leq S$. This completes the proof of the theorem.
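As a purely illustrative aside (not part of the argument), the commutator identity $[xy,a]=[x,a]^y\,[y,a]$ used repeatedly above, for instance in the form $[gg_0,a]=[g,a]^{g_0}[g_0,a]$, holds in every group and is easy to sanity-check computationally. The following Python sketch verifies it on random elements of the symmetric group $S_6$:

```python
import itertools
import random

def mul(p, q):
    """Compose permutations given as tuples: apply p first, then q."""
    return tuple(q[p[i]] for i in range(len(p)))

def inv(p):
    r = [0] * len(p)
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

def conj(x, g):
    """x^g = g^{-1} x g."""
    return mul(mul(inv(g), x), g)

def comm(x, a):
    """[x, a] = x^{-1} a^{-1} x a."""
    return mul(mul(mul(inv(x), inv(a)), x), a)

random.seed(0)
perms = list(itertools.permutations(range(6)))   # the group S_6
for _ in range(1000):
    x, y, a = (random.choice(perms) for _ in range(3))
    # [xy, a] = [x, a]^y [y, a]
    assert comm(mul(x, y), a) == mul(conj(comm(x, a), y), comm(y, a))
```

Since the identity is a formal consequence of the group axioms, the choice of composition convention in \texttt{mul} does not affect the check.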
\section{Proofs}\label{ec:sec:Proofs}

We define the constant $\ensuremath{\Gamma} \coloneqq {(1+\gamma)}/{(1-\gamma)}$, which we will use in various proofs.

\subsection{Additional Details of Assumption \ref{asm:VFcontinuity}}\label{subsec:SumAssump}

Assumptions \ref{asm:VFcontinuity} and \ref{asm:random basis function} are assumed to hold for all proofs in the electronic companions. In particular, Assumption \ref{asm:VFcontinuity} ensures the existence of an optimal policy solving program \eqref{eq:minCostMDP}. There are known conditions in the literature that guarantee such existence. For the purposes of our proofs, we formalize Assumption \ref{asm:VFcontinuity} as follows.

\begin{assumption} \label{ec:asm:MDP-Kernel-cost} It holds that (i) the MDP cost function is bounded over $\ensuremath{\sSpace\times\mathcal{A}_s}$ and the function $c(s,\cdot):\ensuremath{{\mathcal{A}}_{s}}\mapsto\mathbb{R}$ is lower semicontinuous for all $s\in\ensuremath{\mathcal{S}}$; (ii) for every bounded and measurable function $V:\ensuremath{\mathcal{S}}\mapsto\mathbb{R}$, the mapping $(s,a) \mapsto \int_{\ensuremath{\mathcal{S}}} V(s^\prime)P({\diff}s^\prime|s,a)$ is bounded and continuous over $\ensuremath{\sSpace\times\mathcal{A}_s}$; and (iii) there exists a finite-cost policy $\pi \in\Pi$ such that $\ensuremath{\mathrm{PC}}(s,\pi)<\infty$ for all $s\in\ensuremath{\mathcal{S}}$. \end{assumption}

Assumption \ref{ec:asm:MDP-Kernel-cost} is adopted from assumptions 4.2.1 and 4.2.2 in \citealt[henceforth abbreviated as \citetalias{hernandez1996discrete}]{hernandez1996discrete}.
Specifically, in Part (a) of Assumption 4.2.1 in \citetalias{hernandez1996discrete}, the cost function $c(s,\cdot)$ is assumed to be lower semicontinuous, non-negative, and inf-compact (defined in Condition 3.3.3 in \citetalias{hernandez1996discrete}) whereas, in our setting, non-negativity is replaced by boundedness and inf-compactness is guaranteed by virtue of $c(s,\cdot)$ being lower semicontinuous and its domain $\ensuremath{{\mathcal{A}}_{s}}$ being compact (please see the first paragraph of \S\ref{section:Optimality Equation and an Exact Linear Program}). Part (b) of Assumption 4.2.1 and Assumption 4.2.2 in \citetalias{hernandez1996discrete} are equivalent to parts (ii) and (iii) of Assumption \ref{ec:asm:MDP-Kernel-cost}, respectively. Under the aforementioned technical conditions, Part (b) of Theorem 4.2.3 in \citetalias{hernandez1996discrete} guarantees the existence of a deterministic and stationary policy $\pi^*\in\Pi$ that is ``$\gamma$-discount optimal''. In other words, $\pi^*\in\Pi$ solves \eqref{eq:minCostMDP} in our setting.

\subsection{Proofs of Statements in \S \ref{sec:Exact Linear Programs for MDPs}}

\subsubsection*{Proof of Proposition \ref{prop:ELP-RKHS-gap}.} \underline{Part (i).} Since the optimal value function $V^*\in\mathcal{C}$ is continuous (by Assumption \ref{asm:VFcontinuity}) and the class of random basis functions $\varphi$ is universal (by Assumption \ref{asm:random basis function}), there is a finite constant $C\ge0$ and $\bar V\in\ensuremath{\mathcal{R}_C(\varphi,\rho)}$ such that $\sNorm{V^* - \bar V}_\infty \le \varepsilon$. Since $\bar V$ belongs to $\ensuremath{\mathcal{R}_C(\varphi,\rho)}$, it can be written as $\bar V(s)=\bar b_{0} +\inprod{\bar \ensuremath{\boldsymbol{b}}}{\varphi(s)}$ for some $(\bar b_{0},\bar \ensuremath{\boldsymbol{b}})$ with $\sNorm{\bar \ensuremath{\boldsymbol{b}}}_{\infty,\rho}\leq C$. Recall that $\ensuremath{\Gamma} = {(1+\gamma)}/{(1-\gamma)}$.
We now show that $(b_{0}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}},\ensuremath{{\coefInf}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}}) = \big(\bar b_{0}-\ensuremath{\Gamma}\varepsilon, \bar \ensuremath{\boldsymbol{b}}\big)$ is the desired feasible FELP solution. This is because $\sNorm{\ensuremath{{\coefInf}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}}}_{\infty,\rho} = \sNorm{\bar \ensuremath{\boldsymbol{b}}}_{\infty,\rho}\le C$ and for any $(s,a)\in\ensuremath{\sSpace\times\mathcal{A}_s}$, we have \begin{equation*}\resizebox{.98\textwidth}{!}{$ \begin{aligned} (1-\gamma)b_{0}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}+\inprod{\ensuremath{{\coefInf}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}}}{\varphi(s) - \gamma \ensuremath{\mathbb{E}}[{\varphi(s^\prime)}\vert s,a]} & \ = \ (1-\gamma)\big(\bar b_{0}- \ensuremath{\Gamma}\varepsilon\big)+\inprod{\bar \ensuremath{\boldsymbol{b}}}{\varphi(s) - \gamma \ensuremath{\mathbb{E}}[{\varphi(s^\prime)}\vert s,a]} \nonumber \\ & \ = \ -(1+\gamma)\varepsilon + \bar V(s) -\gamma \ensuremath{\mathbb{E}}[ \bar V(s^\prime) \vert s,a] \nonumber \\ & \ \le \ -(1+\gamma)\varepsilon+ V^*(s)+\varepsilon - \gamma \ensuremath{\mathbb{E}}[V^*(s^\prime) -\varepsilon \vert s,a] \nonumber \\ & \ = \ V^*(s) - \gamma \ensuremath{\mathbb{E}}[V^*(s^\prime) \vert s,a] \nonumber \\ & \ \le \ c(s,a), \nonumber \end{aligned} $} \end{equation*} where the first inequality is valid since $\sNorm{V^*- \bar V}_\infty\le\varepsilon$, which ensures $\bar V(s) \le V^*(s)+\varepsilon$ and $-\bar V(s)\le -V^*(s)+\varepsilon$ for all $s\in\ensuremath{\mathcal{S}}$. Thus, $(b_{0}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}},\ensuremath{{\coefInf}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}})$ is feasible to FELP. 
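As an illustrative numerical aside (using a hypothetical random finite MDP, not the continuous setting of the paper), the mechanism of this construction, shifting an $\varepsilon$-uniform approximation of $V^*$ down by $\ensuremath{\Gamma}\varepsilon$ to restore feasibility of the Bellman constraints, can be checked directly in Python:

```python
import numpy as np

# Hypothetical random finite MDP (5 states, 3 actions); all numbers are
# made up for illustration only.
rng = np.random.default_rng(0)
gamma, eps = 0.9, 0.05
Gamma = (1 + gamma) / (1 - gamma)            # the constant from the proof
c = rng.uniform(0.0, 1.0, size=(5, 3))       # costs c(s, a)
P = rng.dirichlet(np.ones(5), size=(5, 3))   # P[s, a] is a distribution over s'

# Compute V* by value iteration.
V = np.zeros(5)
for _ in range(3000):
    V = (c + gamma * P @ V).min(axis=1)

# Any eps-uniform approximation of V*, shifted down by Gamma * eps,
# satisfies the constraints V(s) <= c(s, a) + gamma * E[V(s') | s, a].
V_bar = V + rng.uniform(-eps, eps, size=5)
V_eps = V_bar - Gamma * eps
slack = c + gamma * P @ V_eps - V_eps[:, None]
assert (slack >= -1e-9).all()
```

The assertion mirrors the chain of inequalities above: the Bellman slack of $V^*$ is nonnegative, and the intercept shift absorbs the $(1+\gamma)\varepsilon$ worst-case error of the approximation.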
In addition, the value function $\ensuremath{V}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}(\cdot) \coloneqq \bar V(\cdot) -\ensuremath{\Gamma}\varepsilon$ associated with the FELP feasible solution $(b_{0}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}},\ensuremath{{\coefInf}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}})$ belongs to $\ensuremath{\mathcal{R}_C(\varphi,\rho)}$ and $\sNorm{V^*- \ensuremath{V}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}}_\infty\le\sNorm{V^*- \bar V}_\infty +\ensuremath{\Gamma}\varepsilon\le \varepsilon+ \ensuremath{\Gamma}\varepsilon= {2\varepsilon}/{(1-\gamma)}$, which completes the proof. \\[0.5em] \underline{Part (ii).} Consider an optimal solution $(\coefRKHS,\ensuremath{{\ensuremath{\boldsymbol{b}}}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}})$ to FELP and let $\ensuremath{V}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}(\cdot)=\coefRKHS+\inprod{\ensuremath{{\ensuremath{\boldsymbol{b}}}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}}}{\varphi(\cdot)}$. Using the function $\ensuremath{V}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}$ defined in Part (i) of this proposition, we have $\sNorm{V^*-\ensuremath{V}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}}_{1,\nu} \le \sNorm{V^*-\ensuremath{V}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}}_{1,\nu} \le \sNorm{V^*-\ensuremath{V}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}}_{\infty}\le {2\varepsilon}/{(1-\gamma)}$ where the first inequality follows from \eqref{eqn:RegFormOfALP} (which is based on Lemma 1 in \citealp{farias2003ALP}) since $(\coefRKHS,\ensuremath{{\ensuremath{\boldsymbol{b}}}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}})$ and $(b_{0}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}},\ensuremath{{\coefInf}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}})$ are optimal and feasible solutions, respectively, to FELP. 
\hfill \Halmos \endproof

\subsection{Proofs of Statements in \S \ref{sec:Approximate Linear Programs with Random Basis Functions}}

\begin{lemma}\label{ec:lem:optV-properties} Any continuous function $V:\ensuremath{\mathcal{S}}\mapsto \mathbb{R}$ that is feasible to constraints \eqref{constr:ELP} satisfies $V(s) \le V^*(s)$ for all $s\in\ensuremath{\mathcal{S}}$. \end{lemma}

\proof{Proof.} The proof follows from Part (b) of Lemma 4.2.7 in \citetalias{hernandez1996discrete}, which requires four assumptions to hold. We now show that these assumptions are true in our setting. First, since $V$ is continuous, it is measurable. Second, the Bellman operator $\mathrm{T}V(s) \coloneqq \min_{a\in\ensuremath{{\mathcal{A}}_{s}}}\{c(s,a) + \gamma\ensuremath{\mathbb{E}}[V(s^\prime)|s,a]\}$ is well defined since the minimum in its definition is attained via the compactness of $\ensuremath{{\mathcal{A}}_{s}}$ and the finiteness of the expectation $\ensuremath{\mathbb{E}}[V(s^\prime)|s,a]=\int_{\ensuremath{\mathcal{S}}} V( s^\prime) P(\diff s^\prime | s,a)$, which holds by Assumption \ref{ec:asm:MDP-Kernel-cost}. Third, since $V$ is feasible to constraints \eqref{constr:ELP}, we have \[V(s) \ \le \ \min_{a\in\ensuremath{{\mathcal{A}}_{s}}}\{ c(s,a) + \gamma \ensuremath{\mathbb{E}}[V(s^\prime) \vert s,a]\} \ = \ \mathrm{T}V(s), \qquad \forall s\in \ensuremath{\mathcal{S}}.\] Fourth, the continuity of $V$ and the compactness of $\ensuremath{\mathcal{S}}$ imply $\sNorm{V}_\infty <\infty$ and \[ \lim_{n\rightarrow\infty}\gamma^n \ensuremath{\mathbb{E}}\Bigg[\sum_{t=0}^{n} V(s^\pi_t) \Big\vert s_0 = s\Bigg] \ \le \ \sNorm{V}_\infty \lim_{n\rightarrow\infty}(n+1)\gamma^n \ = \ 0, \qquad \forall s\in\ensuremath{\mathcal{S}}, \pi \in\Pi, \] where the expectation $\ensuremath{\mathbb{E}}$ and the notation $s_t^\pi$ retain their definitions from \S \ref{section:Optimality Equation and an Exact Linear Program}.
Hence, the function $V$ fulfills the four assumptions of Part (b) of Lemma 4.2.7 in \citetalias{hernandez1996discrete} and thus $V(\cdot)\le V^*(\cdot)$. \hfill\Halmos \endproof \begin{definition}\label{ec:def:high-prob-feas} Fix an optimal solution $(\coefRKHS,\ensuremath{{\ensuremath{\boldsymbol{b}}}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}})$ to FELP. For $N$ independent and identical sampled parameters $\{\theta_1,\theta_2,\dots,\theta_N\}$ from $\rho$, we define the coordinates of $\ensuremath{\boldsymbol{\beta}^{\scaleto{\mathrm{FEAS}}{3.5pt}}} \in\mathbb{R}^{N+1}$ as follows \begin{equation*} \coefFeas{i} \coloneqq\begin{cases} \coefRKHS &\quad \text{if} \quad i=0; \\[6pt] \dfrac{\ensuremath{{\ensuremath{\boldsymbol{b}}}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}}(\theta_{i})}{N\rho(\theta_{i})} &\quad \text{if} \quad i=1,2,\dots,N. \end{cases} \end{equation*} \end{definition} \begin{lemma}\label{ec:lem:high-prob-feas-soln} Given $\varepsilon>0$ and $\delta\in(0,1]$, let \begin{equation}\label{ec:eq:N_epsilon} N_\varepsilon \coloneqq\Big\lceil \varepsilon^{-2}\tallNorm{\ensuremath{{\ensuremath{\boldsymbol{b}}}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}}}_{\infty,\rho}^2(\ensuremath{{\Omega}}+\ensuremath{{{\Delta}}_{{\delta}}})^2 \Big \rceil. \end{equation} \begin{itemize} \item[(i)] If $N \ge N_\varepsilon $, it holds that $\tallNorm{\ensuremath{V}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}} - V(\ensuremath{\boldsymbol{\beta}^{\scaleto{\mathrm{FEAS}}{3.5pt}}})}_{\infty} \le\varepsilon$ with a probability of at least $1-\delta$. 
\item[(ii)] If $N \ge N_\varepsilon$, with a probability of at least $1-\delta$, the vector $(\coefFeas{0} - \Gamma\varepsilon,\coefFeas{1},\dots,\coefFeas{N})$ is feasible to FALP$\programIndex{N}$ and \[ \big\lVert{\ensuremath{V}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}} -\big(V(\ensuremath{\boldsymbol{\beta}^{\scaleto{\mathrm{FEAS}}{3.5pt}}})- \Gamma\varepsilon\big)\big\rVert}_\infty \le \frac{2\varepsilon}{(1-\gamma)}. \] \end{itemize} \end{lemma} \proof{Proof.} \underline{Part (i).} Since $\coefFeas{0}=\coefRKHS $, we have for $N\ge N_\varepsilon$ \begin{align} \tallNorm{\ensuremath{V}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}-V(\ensuremath{\boldsymbol{\beta}^{\scaleto{\mathrm{FEAS}}{3.5pt}}})}_\infty & = \bigg\lVert \coefRKHS+\inprod{\ensuremath{{\ensuremath{\boldsymbol{b}}}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}}}{\varphi(s)}-\Big(\coefRKHS+\sum_{i=1}^{N}\coefFeas{i} \varphi(s;\theta_i)\Big)\bigg\rVert_{\infty}\nonumber\\ &= \bigg\lVert\inprod{\ensuremath{{\ensuremath{\boldsymbol{b}}}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}}}{\varphi(s)}-\sum_{i=1}^{N}\coefFeas{i} \varphi(s;\theta_i)\bigg\rVert_{\infty}\nonumber\\ &\le \frac{\tallNorm{\ensuremath{{\ensuremath{\boldsymbol{b}}}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}}}_{\infty,\rho}}{\sqrt{N}} \Bigg( \sqrt{2\ln \Big(\frac{1}{\delta}\Big)} \ + \ 4(\diamSState+1)\ensuremath{\mathrm{L}_\varphi} \sqrt{\ensuremath{\mathbb{E}}_\rho\Big[\sNorm{\theta}_2^2\Big]}\Bigg)\nonumber\\ &\le \frac{\tallNorm{\ensuremath{{\ensuremath{\boldsymbol{b}}}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}}}_{\infty,\rho}}{\sqrt{N_\varepsilon}} \Bigg( \sqrt{2\ln \Big(\frac{1}{\delta}\Big)} \ + \ 4(\diamSState+1)\ensuremath{\mathrm{L}_\varphi} \sqrt{\ensuremath{\mathbb{E}}_\rho\Big[\sNorm{\theta}_2^2\Big]}\Bigg),\label{eqn1:ec:lem:high-prob-feas-soln} \end{align} where the first inequality holds with a probability of at least $1-\delta$ by Theorem 3.2 of \cite{rahimi2008uniform} after adjusting our 
notation to theirs. To help the reader, we discuss the notational differences in Remark \ref{rem:RahimiNotation} immediately following this proof. We can now use the definitions of $\ensuremath{{\Omega}}$ and $\ensuremath{{{\Delta}}_{{\delta}}}$ (see \S \ref{sec:Random Approximate Linear Program}) in $N_\varepsilon$ to simplify the right hand side of \eqref{eqn1:ec:lem:high-prob-feas-soln} to $\varepsilon$ and get \[ \tallNorm{\ensuremath{V}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}-V(\ensuremath{\boldsymbol{\beta}^{\scaleto{\mathrm{FEAS}}{3.5pt}}})}_\infty \le \varepsilon \] with a probability of at least $1-\delta$.\\[0.5em] \underline{Part (ii).} If $N\ge N_\varepsilon$, the vector $(\coefFeas{0} - \Gamma\varepsilon,\coefFeas{1},\dots,\coefFeas{N})$ is feasible to FALP$\programIndex{N}$ with a probability of at least $1-\delta$ since \begin{equation}\label{ec:eq:non-positive-F} \begin{aligned} &(1-\gamma)\big(\coefFeas{0}- \ensuremath{\Gamma}\varepsilon \big)+\sum_{i=1}^{N}\coefFeas{i} \big(\varphi(s;\theta_i) - \gamma \ensuremath{\mathbb{E}}\big[\varphi({s^\prime};\theta_i) \big\vert s,a\big]\big) \\ &\hspace{1cm}= V(s;\ensuremath{\boldsymbol{\beta}^{\scaleto{\mathrm{FEAS}}{3.5pt}}}) - { \varepsilon} - \gamma \ensuremath{\mathbb{E}}\big [V(s^\prime;\ensuremath{\boldsymbol{\beta}^{\scaleto{\mathrm{FEAS}}{3.5pt}}}) + {\varepsilon} \big \vert s,a\big]\\ &\hspace{1cm}\le\ensuremath{V}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}(s) - \gamma \ensuremath{\mathbb{E}}[\ensuremath{V}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}(s^\prime) \vert s,a] \\ &\hspace{1cm}=(1-\gamma)\coefRKHS+ \inprod{\ensuremath{{\ensuremath{\boldsymbol{b}}}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}}}{\varphi(s) - \gamma \ensuremath{\mathbb{E}}[\varphi(s^\prime) \vert s,a]} \\ &\hspace{1cm}\le c(s,a), \end{aligned} \end{equation} where the first equality comes from the definitions of $V(s;\ensuremath{\boldsymbol{\beta}^{\scaleto{\mathrm{FEAS}}{3.5pt}}})$ and 
$\ensuremath{\Gamma}$; the first inequality holds because $\lvert{\ensuremath{V}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}(s)-V(s;\ensuremath{\boldsymbol{\beta}^{\scaleto{\mathrm{FEAS}}{3.5pt}}})}\rvert\le\lVert\ensuremath{V}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}-V(\ensuremath{\boldsymbol{\beta}^{\scaleto{\mathrm{FEAS}}{3.5pt}}})\rVert_{\infty}\le { \varepsilon}$ for all $s\in\ensuremath{\mathcal{S}}$ with a probability of at least $1-\delta$ by Part (i) of this lemma; the second equality results from using the definition of $\ensuremath{V}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}$; and the second inequality holds because $(\coefRKHS,\ensuremath{{\ensuremath{\boldsymbol{b}}}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}})$ is an optimal (hence feasible) solution of FELP. Moreover, if $N\ge N_\varepsilon$, by Part (i) of this lemma and the definition of $\ensuremath{\Gamma}$, we get \[\tallNorm{\ensuremath{V}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}} -\big(V(\ensuremath{\boldsymbol{\beta}^{\scaleto{\mathrm{FEAS}}{3.5pt}}})-\ensuremath{\Gamma}\varepsilon\big)}_\infty \le \tallNorm{\ensuremath{V}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}} -V(\ensuremath{\boldsymbol{\beta}^{\scaleto{\mathrm{FEAS}}{3.5pt}}})}_\infty + \ensuremath{\Gamma}\varepsilon \leq \varepsilon + \ensuremath{\Gamma}\varepsilon = \dfrac{2 \varepsilon}{(1-\gamma)}\] with a probability of at least $1-\delta$. \hfill \Halmos \endproof \begin{remark}\label{rem:RahimiNotation} We use the notations $(1, s)$, $\varphi$, $\ensuremath{\boldsymbol{b}}$, $\rho$, $N$, $\ensuremath{\mathrm{L}_\varphi}$, and $\diamSState$ +1 in this paper instead of $x$, $\phi$, $\boldsymbol{\alpha}$, $p$, $K$, $L$, and $B$, respectively, in \cite{rahimi2008uniform}. The additional $1$ in the term $\diamSState+1$ is due to the notational differences between $x$ and $(1,s)$ used in \cite{rahimi2008uniform} and this paper, respectively. 
Moreover, the function class $\mathscr{F}$ defined in \S III of \cite{rahimi2008uniform} is the same as $\mathcal{R}_\infty(\varphi, \rho)$, and the functions $\inprod{\ensuremath{{\ensuremath{\boldsymbol{b}}}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}}}{\varphi(s)} \in \mathcal{R}_\infty(\varphi, \rho)$ with $\sNorm{\ensuremath{{\ensuremath{\boldsymbol{b}}}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}}}_{\infty,\rho} < \infty$ and $\sum_{i=1}^{N}\coefFeas{i} \varphi(s;\theta_i)$ satisfy the conditions of Theorem 3.2 in \cite{rahimi2008uniform}. \looseness=-1 \end{remark}

\subsubsection*{Proof of Theorem \ref{prop:ALP}.} \underline{Part (i).} The function $V(\cdot;\ensuremath{\boldsymbol{\beta}^{\raisemath{2pt}{\scaleto{\mathrm{FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}})$ is continuous due to the continuity of the class of basis functions $\varphi$ by Assumption \ref{asm:random basis function}, and is feasible to constraints \eqref{constr:ELP} because $\ensuremath{\boldsymbol{\beta}^{\raisemath{2pt}{\scaleto{\mathrm{FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}}$ is a feasible solution of FALP$\programIndex{N}$. Thus, Lemma \ref{ec:lem:optV-properties} guarantees $V(s;\ensuremath{\boldsymbol{\beta}^{\raisemath{2pt}{\scaleto{\mathrm{FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}}) \le V^*(s)$ for all $s\in \ensuremath{\mathcal{S}}$.\\[0.5em] \underline{Part (ii).} Given an optimal solution $\coefALPVecN{N}$ to FALP$\programIndex{N}$, we define $\hat\ensuremath{\boldsymbol{\beta}} \coloneqq (\coefALPVecN{N}, 0,0, \dots,0)\in \mathbb{R}^{N^\prime}$, which is obtained by appending $N^\prime - N$ zeros to $\coefALPVecN{N}$. This vector is trivially a feasible solution to FALP$\programIndex{N^\prime}$ and has the same $(1,\nu)$-norm deviation from $V^*$ as $\coefALPVecN{N}$. Moreover, by Lemma 1 in \citealp{farias2003ALP}, FALP is equivalent to \eqref{eqn:RegFormOfALP}, that is, FALP minimizes the $(1,\nu)$-norm distance between its VFA and $V^*$.
Thus, it follows that $\tallNorm{V^* - V(\coefALPVecN{N^\prime})}_{1,\nu} \leq \tallNorm{V^* - V(\hat\ensuremath{\boldsymbol{\beta}} )}_{1,\nu} = \tallNorm{V^* - V(\coefALPVecN{N})}_{1,\nu}$.\\[0.5em] \underline{Part (iii).} Consider Part (ii) of Lemma \ref{ec:lem:high-prob-feas-soln}, which ensures that $(\coefFeas{0}- \ensuremath{\Gamma}\varepsilon,\coefFeas{1},\dots,\coefFeas{N})$ is a feasible solution to FALP$\programIndex{N}$ with a probability of at least $1-\delta$ if $N\ge N_\varepsilon$. Let $\{\theta_1,\dots,\theta_N\}$ be any $N$ independent and identically distributed samples from $\rho$ defining the function $V(\ensuremath{\boldsymbol{\beta}^{\scaleto{\mathrm{FEAS}}{3.5pt}}}) - \ensuremath{\Gamma}\varepsilon$ corresponding to the vector $(\coefFeas{0}- \ensuremath{\Gamma}\varepsilon,\coefFeas{1},\dots,\coefFeas{N})$. The mapping $L:\Theta^N\mapsto\mathbb{R}$ defined as \begin{equation}\label{ec:eq:macDiramidFunction} L(\theta_1,\dots,\theta_N) \ \coloneqq \ \ensuremath{\mathbb{E}}_\nu \big[ \ensuremath{V}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}} - (V(\ensuremath{\boldsymbol{\beta}^{\scaleto{\mathrm{FEAS}}{3.5pt}}})-\ensuremath{\Gamma}\varepsilon) \big] \ = \ \ensuremath{\Gamma}\varepsilon + \ensuremath{\mathbb{E}}_\nu\bigg[\inprod{\ensuremath{{\ensuremath{\boldsymbol{b}}}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}}}{\varphi(s)} - \sum_{i=1}^{N}{\coefFeas{i} }{\varphi(s;\theta_i)}\bigg], \end{equation} has two important properties.
First, it satisfies \begin{align} \ensuremath{\mathbb{E}}_{\rho}[L(\theta_1,\dots,\theta_N)] &= \ensuremath{\Gamma}\varepsilon + \ensuremath{\mathbb{E}}_\nu\bigg[\inprod{\ensuremath{{\ensuremath{\boldsymbol{b}}}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}}}{\varphi(s)} - \ensuremath{\mathbb{E}}_{\rho}\bigg[\sum_{i=1}^{N} {\coefFeas{i} }{\varphi(s;\theta_i)}\bigg] \bigg] \nonumber \\ & = \ensuremath{\Gamma}\varepsilon + \ensuremath{\mathbb{E}}_\nu\bigg[\inprod{\ensuremath{{\ensuremath{\boldsymbol{b}}}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}}}{\varphi(s)} - \sum_{i=1}^{N} \ensuremath{\mathbb{E}}_{\rho}\bigg[{\frac{\ensuremath{{\ensuremath{\boldsymbol{b}}}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}}(\theta_{i})}{N\rho(\theta_{i})} }{\varphi(s;\theta_i)}\bigg] \bigg] \nonumber \\ & = \ensuremath{\Gamma}\varepsilon + \ensuremath{\mathbb{E}}_\nu\bigg[\inprod{\ensuremath{{\ensuremath{\boldsymbol{b}}}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}}}{\varphi(s)} - \frac{1}{N} \sum_{i=1}^{N} \int_{\Theta} \ {\frac{\ensuremath{{\ensuremath{\boldsymbol{b}}}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}}(\theta)}{\rho(\theta)} }{\varphi(s;\theta)} \rho(\theta) \diff \theta \bigg] \nonumber \\ & = \ensuremath{\Gamma}\varepsilon, \nonumber \end{align} where the second equality is obtained using the definition of $\ensuremath{\boldsymbol{\beta}^{\scaleto{\mathrm{FEAS}}{3.5pt}}}$, the third one holds since the $\theta_i$'s are independent and identically distributed samples, and the last one follows from the definition of $\inprod{\ensuremath{{\ensuremath{\boldsymbol{b}}}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}}}{\varphi(s)}$.
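As a brief aside, this first property is the standard unbiasedness of importance sampling underlying Definition \ref{ec:def:high-prob-feas}; it can be checked numerically on a hypothetical finite parameter set $\Theta$ (all quantities below are invented for the illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
K = 8
rho = np.full(K, 1.0 / K)        # sampling density on a finite Theta (uniform)
b = rng.normal(size=K)           # coefficient function b(theta)
phi = rng.normal(size=K)         # phi(s; theta) at one fixed state s

# Target: <b, phi(s)> = sum over theta of b(theta) * phi(s; theta).
target = float(np.dot(b, phi))

# Importance-weighted estimator: beta_i = b(theta_i) / (N * rho(theta_i))
# for i.i.d. theta_i ~ rho, as in the definition of beta^FEAS.
N = 200_000
idx = rng.choice(K, size=N, p=rho)
estimate = float(np.sum(b[idx] / (N * rho[idx]) * phi[idx]))

# Unbiased; Monte Carlo error is O(1/sqrt(N)).
assert abs(estimate - target) < 0.2
```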
Second, for any $\ell\in\{1,2,\dots,N\}$ and parameter $\hat{\theta}_\ell\in\Theta$, the function $L(\cdot)$ has the following property: \begin{align} \sup_{\theta_1,\dots,\theta_N,\hat\theta_\ell} \left\vert L(\theta_1,\dots,\theta_\ell,\dots,\theta_N)- L(\theta_1,\dots,\hat\theta_\ell,\dots,\theta_N)\right\vert &= \sup_{\theta_\ell,\hat\theta_\ell} \left\vert{\coefFeas{\ell} }\ensuremath{\mathbb{E}}_{\nu}[\varphi(s;\theta_\ell)] - {\coefFeas{\ell} }\ensuremath{\mathbb{E}}_{\nu}[\varphi(s;\hat\theta_\ell)]\right\vert \nonumber \\ &\le 2 \sup_{\theta_\ell} \left\vert\coefFeas{\ell} \right\vert \nonumber \\ &= 2 \sup_{\theta} \left\vert\frac{\ensuremath{{\ensuremath{\boldsymbol{b}}}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}}(\theta)}{N\rho(\theta)}\right\vert \nonumber \\ & \le \frac{2}{N}\tallNorm{\ensuremath{{\ensuremath{\boldsymbol{b}}}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}}}_{\infty,\rho}. \nonumber \end{align} The first equality follows from the fact that the two points $(\theta_1,\dots,\theta_\ell,\dots,\theta_N)$ and $(\theta_1,\dots,\hat\theta_\ell,\dots,\theta_N)$ only differ in their $\ell^{\mathrm{th}}$ components. The first inequality is obtained using $\sNorm{\bar\varphi}_\infty \le 1$ (please see Assumption \ref{asm:random basis function}), the second equality follows from the definition of $\coefFeas{\ell}$ in Definition \ref{ec:def:high-prob-feas}, and the last inequality is based on the definition of $\tallNorm{\ensuremath{{\ensuremath{\boldsymbol{b}}}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}}}_{\infty,\rho}$. 
Given $\bar\varepsilon > 0$, these two properties of $L(\cdot)$ and an application of McDiarmid's inequality (see, e.g., Theorem D.3 in \citealt{mohri2012foundations}) to function $L(\cdot)$ give us \begin{align} \prob\big( L(\theta_1,\dots,\theta_N) -\ensuremath{\mathbb{E}}_{\rho}[L(\theta_1,\dots,\theta_N)] \ge \bar\varepsilon \big) &= \prob\bigg( \ensuremath{\Gamma}\varepsilon + \ensuremath{\mathbb{E}}_\nu\bigg[\inprod{\ensuremath{{\ensuremath{\boldsymbol{b}}}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}}}{\varphi(s)} - \sum_{i=1}^{N}{\coefFeas{i} }{\varphi(s;\theta_i)}\bigg] -\ensuremath{\Gamma}\varepsilon \ge \bar\varepsilon \bigg) \nonumber \\ & = \prob\bigg( \ensuremath{\mathbb{E}}_\nu\bigg[\inprod{\ensuremath{{\ensuremath{\boldsymbol{b}}}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}}}{\varphi(s)} - \sum_{i=1}^{N}{\coefFeas{i} }{\varphi(s;\theta_i)}\bigg] \ge \bar\varepsilon \bigg) \nonumber\\ & \le \exp\Bigg(\frac{-N\bar\varepsilon^2}{ 2\tallNorm{\ensuremath{{\ensuremath{\boldsymbol{b}}}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}}}_{\infty,\rho}^2} \Bigg), \nonumber \end{align} where $\prob(\cdot)$ denotes the probability over the samples $(\theta_{1},\dots,\theta_N)$ drawn from $\rho$. 
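For completeness (an added remark), the exponent above is McDiarmid's bound $\exp\big(-2\bar\varepsilon^2/\sum_{i=1}^{N} c_i^2\big)$ evaluated at the identical bounded-difference constants $c_i = \frac{2}{N}\tallNorm{\ensuremath{{\ensuremath{\boldsymbol{b}}}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}}}_{\infty,\rho}$ obtained in the second property:

```latex
\[
\exp\Bigg(\frac{-2\bar\varepsilon^2}{\sum_{i=1}^{N} c_i^2}\Bigg)
\ = \ \exp\Bigg(\frac{-2\bar\varepsilon^2}{N\cdot\frac{4}{N^2}\tallNorm{\ensuremath{{\ensuremath{\boldsymbol{b}}}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}}}_{\infty,\rho}^2}\Bigg)
\ = \ \exp\Bigg(\frac{-N\bar\varepsilon^2}{2\tallNorm{\ensuremath{{\ensuremath{\boldsymbol{b}}}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}}}_{\infty,\rho}^2}\Bigg),
\]
```

which matches the right-hand side of the displayed probability bound.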
If we set the right-hand side of the above inequality to $\delta$ and solve for $\bar\varepsilon$, then with a probability of at least $1-\delta$, it holds that \begin{equation}\label{ec:eq:L-bound} \ensuremath{\mathbb{E}}_\nu\bigg[\inprod{\ensuremath{{\ensuremath{\boldsymbol{b}}}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}}}{\varphi(s)} - \sum_{i=1}^{N}{\coefFeas{i} }{\varphi(s;\theta_i)}\bigg] \ \ \le \ \ \tallNorm{\ensuremath{{\ensuremath{\boldsymbol{b}}}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}}}_{\infty,\rho}\sqrt{2\ln\Big(\frac{1}{\delta}\Big)} \Big/ \sqrt{N} \ \ \le \ \ \ensuremath{{{\Delta}}_{{\delta}}} \varepsilon \big / \big(\ensuremath{{\Omega}}+\ensuremath{{{\Delta}}_{{\delta}}}\big), \end{equation} where the second inequality follows from our choice of $N\ge N_\varepsilon$ and the definition of $\ensuremath{{{\Delta}}_{{\delta}}}$. Using \eqref{ec:eq:macDiramidFunction}, \eqref{ec:eq:L-bound}, and the definition of $\ensuremath{\Gamma}$, we obtain \begin{align} \label{eq:ALP-approx-gap-3} \ensuremath{\mathbb{E}}_\nu \big[ \ensuremath{V}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}} - (V(\ensuremath{\boldsymbol{\beta}^{\scaleto{\mathrm{FEAS}}{3.5pt}}})-\ensuremath{\Gamma}\varepsilon) \big] &= \ensuremath{\Gamma}\varepsilon + \ensuremath{\mathbb{E}}_\nu\bigg[\inprod{\ensuremath{{\ensuremath{\boldsymbol{b}}}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}}}{\varphi(s)} - \sum_{i=1}^{N}{\coefFeas{i} }{\varphi(s;\theta_i)}\bigg] \nonumber \\ & \le \ensuremath{\Gamma}\varepsilon + \ensuremath{{{\Delta}}_{{\delta}}} \varepsilon / \left(\ensuremath{{\Omega}}+\ensuremath{{{\Delta}}_{{\delta}}}\right) \\ & \le \frac{2\varepsilon}{(1-\gamma)} \cdot\Bigg( \frac{(1+\gamma)\ensuremath{{\Omega}} + 2\ensuremath{{{\Delta}}_{{\delta}}}}{2(\ensuremath{{\Omega}} + \ensuremath{{{\Delta}}_{{\delta}}})}\Bigg), \nonumber \end{align} which holds with a probability of at least $1-\delta$.
Let $\lambda \coloneqq {\big((1+\gamma)\ensuremath{{\Omega}} + 2\ensuremath{{{\Delta}}_{{\delta}}}\big)}\big/{\big(2(\ensuremath{{\Omega}} + \ensuremath{{{\Delta}}_{{\delta}}})\big)}$. Then choosing $\varepsilon$ as $\frac{\varepsilon}{\lambda}$ in Part (ii) of Lemma \ref{ec:lem:high-prob-feas-soln} indicates that for any \[ N \geq N_{{\varepsilon}/{\lambda}} = \Bigg\lceil \varepsilon^{-2} \ \tallNorm{\ensuremath{{\ensuremath{\boldsymbol{b}}}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}}}_{\infty,\rho}^2 \ \left(\frac{(1+\gamma)}{2}\ensuremath{{\Omega}} \ + \ \ensuremath{{{\Delta}}_{{\delta}}} \right)^2 \Bigg\rceil, \] the vector $(\coefFeas{0}- \frac{\ensuremath{\Gamma}\varepsilon}{\lambda},\coefFeas{1},\dots,\coefFeas{N})$ is feasible to FALP$\programIndex{N}$ with a probability of at least $1-\delta$. In addition, following the exact same steps as in \eqref{eq:ALP-approx-gap-3}, we get $ \ensuremath{\mathbb{E}}_\nu \big[ \ensuremath{V}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}} - (V(\ensuremath{\boldsymbol{\beta}^{\scaleto{\mathrm{FEAS}}{3.5pt}}})-\frac{\ensuremath{\Gamma}\varepsilon}{\lambda}) \big] \le \frac{2\varepsilon}{(1-\gamma)} $.
Hence, \begin{equation}\label{ec:whyBoundTight} \begin{aligned} \big\lVert V^* - \ensuremath{V}(\ensuremath{\boldsymbol{\beta}^{\raisemath{2pt}{\scaleto{\mathrm{FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}}) \big\rVert_{1,\nu} \ & = \ensuremath{\mathbb{E}}_{\nu}\big[V^* + \ensuremath{V}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}} - \ensuremath{V}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}} - \ensuremath{V}(\ensuremath{\boldsymbol{\beta}^{\raisemath{2pt}{\scaleto{\mathrm{FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}}) \big] \\ & = \ensuremath{\mathbb{E}}_{\nu}\big[V^* - \ensuremath{V}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}] + \ensuremath{\mathbb{E}}_{\nu}\big[\ensuremath{V}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}} - \ensuremath{V}(\ensuremath{\boldsymbol{\beta}^{\raisemath{2pt}{\scaleto{\mathrm{FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}})] \\ & \le \frac{2\varepsilon}{(1-\gamma)} + \ensuremath{\mathbb{E}}_{\nu}\bigg[\ensuremath{V}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}} - \Big(V(\ensuremath{\boldsymbol{\beta}^{\scaleto{\mathrm{FEAS}}{3.5pt}}})-\frac{\ensuremath{\Gamma}\varepsilon}{\lambda} \Big)\bigg] \\ & \le \frac{4\varepsilon}{(1-\gamma)}, \end{aligned} \end{equation} with a probability of at least $1-\delta$, where we used Lemma \ref{ec:lem:optV-properties} to derive the first equality, and Part (ii) of Proposition \ref{prop:ELP-RKHS-gap} together with the optimality of $\ensuremath{\boldsymbol{\beta}^{\raisemath{2pt}{\scaleto{\mathrm{FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}}$ for FALP$\programIndex{N}$ to derive the first inequality. \hfill \Halmos \endproof

\subsubsection*{Proof of Proposition \ref{prop:Algo1Convergence}.} Given $\tau > 0$, to prove this proposition, we show that $\tau^*$ becomes smaller than $\tau$ in a finite number of iterations with high probability.
To do so, we bound the terms $\mathrm{LB}(\ensuremath{\boldsymbol{\beta}}^{\scaleto{\mathrm{LB}}{4pt}})$ and $\ensuremath{\mathrm{PC}}(\ensuremath{\boldsymbol{\beta}}^{\scaleto{\mathrm{UB}}{4pt}})$ used in the definition of $\tau^*$ from below and above, respectively, and show that the ratio $ {\mathrm{LB}(\ensuremath{\boldsymbol{\beta}}^{\scaleto{\mathrm{LB}}{4pt}})}/{\mathrm{PC}(\ensuremath{\boldsymbol{\beta}}^{\scaleto{\mathrm{UB}}{4pt}})}$ can get arbitrarily close to one when $N$ is sufficiently large. \emph{Finding a lower bound on $\mathrm{LB}(\ensuremath{\boldsymbol{\beta}}^{\scaleto{\mathrm{LB}}{4pt}}):$} Since we set $\mathcal{M}\programIndex{N}$ to FALP$\programIndex{N}$ in Algorithm \ref{alg:sampledBasesALP}, the vector $\ensuremath{\boldsymbol{\beta}}\vectorIndex{N}$ in this algorithm equals $\ensuremath{\boldsymbol{\beta}^{\raisemath{2pt}{\scaleto{\mathrm{FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}}$. For a given $N$, define $\ensuremath{\mathcal{S}}^\prime\coloneqq\{s\in\ensuremath{\mathcal{S}}: \nu(s) = 0\}\subseteq \ensuremath{\mathcal{S}}$, $\ensuremath{\mathcal{S}}^{\prime\prime}\coloneqq\{s\in\ensuremath{\mathcal{S}}: |\mu_{\chi}(s;\ensuremath{\boldsymbol{\beta}}\vectorIndex{N})| = \infty\}\subseteq \ensuremath{\mathcal{S}}$, $\ensuremath{\mathcal{S}}^{0} \coloneqq \ensuremath{\mathcal{S}}^\prime \cup \ensuremath{\mathcal{S}}^{\prime \prime}$, $W_1 \coloneqq \sup_{s\in\ensuremath{\mathcal{S}}\backslash\ensuremath{\mathcal{S}}^{0}}\big\{{\mu_{\chi}(s;\ensuremath{\boldsymbol{\beta}}\vectorIndex{N})}/{\nu(s)}\big\} \in (0,\infty)$, and $W_2 \coloneqq \sup_{s\in\ensuremath{\mathcal{S}}\backslash\ensuremath{\mathcal{S}}^{\prime}} \big\{{\chi(s)}/{\nu(s)}\big\} \in (0,\infty)$. By our assumptions on $\nu$ and $\mu_{\chi}(\ensuremath{\boldsymbol{\beta}}\vectorIndex{N})$, the sets $\ensuremath{\mathcal{S}}^\prime$, $\ensuremath{\mathcal{S}}^{\prime\prime}$, and $\ensuremath{\mathcal{S}}^{0}$ have zero measure.
Then, when $N\geq \Bigg\lceil \varepsilon^{-2} \ \tallNorm{\ensuremath{{\ensuremath{\boldsymbol{b}}}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}}}_{\infty,\rho}^2 \ \left(\frac{(1+\gamma)}{2}\ensuremath{{\Omega}} \ + \ \ensuremath{{{\Delta}}_{{\delta}}} \right)^2 \Bigg\rceil$, we can write \begin{align*} \ensuremath{\mathbb{E}}_{\chi}[V^*] - \ensuremath{\mathbb{E}}_{\chi}\big[\ensuremath{V}(\ensuremath{\boldsymbol{\beta}^{\raisemath{2pt}{\scaleto{\mathrm{FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}})\big] & = \int_{\ensuremath{\mathcal{S}}} \big( V^*(s) - V(s;\ensuremath{\boldsymbol{\beta}^{\raisemath{2pt}{\scaleto{\mathrm{FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}})\big) \chi(\diff s) \\ & = \int_{\ensuremath{\mathcal{S}}\backslash\ensuremath{\mathcal{S}}^\prime} \big( V^*(s) - V(s;\ensuremath{\boldsymbol{\beta}^{\raisemath{2pt}{\scaleto{\mathrm{FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}})\big) \frac{\chi(s)}{\nu(s)} \nu(\diff s) \\ & \le W_2 \tallNorm{V^*- \ensuremath{V}(\ensuremath{\boldsymbol{\beta}^{\raisemath{2pt}{\scaleto{\mathrm{FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}})}_{1,\nu}\\ & \le \frac{4W_2\varepsilon}{(1-\gamma)}, \end{align*} where the second equality is valid since $\ensuremath{\mathcal{S}}^\prime$ is a zero-measure set; the first inequality holds since $V^*$ is a pointwise upper bound on $\ensuremath{V}(\ensuremath{\boldsymbol{\beta}^{\raisemath{2pt}{\scaleto{\mathrm{FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}})$ by Lemma \ref{ec:lem:optV-properties} given $\ensuremath{V}(\ensuremath{\boldsymbol{\beta}^{\raisemath{2pt}{\scaleto{\mathrm{FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}})$ is continuous and feasible to constraints \eqref{constr:ELP}; and the second inequality holds for the choice of $N$ with a probability of at least $1-\delta$ by Part (iii) of Theorem \ref{prop:ALP}. 
Using the above inequalities, with the same probability, it holds that \begin{equation}\label{ec:eq:limitingResult_lowerBound} \mathrm{LB}(\ensuremath{\boldsymbol{\beta}}^{\scaleto{\mathrm{LB}}{4pt}}) \ge \mathrm{LB}(\ensuremath{\boldsymbol{\beta}}\vectorIndex{N}) = \mathrm{LB}(\ensuremath{\boldsymbol{\beta}^{\raisemath{2pt}{\scaleto{\mathrm{FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}}) = \ensuremath{\mathbb{E}}_{\chi}\big[\ensuremath{V}(\ensuremath{\boldsymbol{\beta}^{\raisemath{2pt}{\scaleto{\mathrm{FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}})\big] \ge \ensuremath{\mathbb{E}}_{\chi}[V^*] - \frac{4W_2\varepsilon}{(1-\gamma)}, \end{equation} where the first inequality follows from Step (iv) of Algorithm \ref{alg:sampledBasesALP}, which indicates that $\mathrm{LB}(\ensuremath{\boldsymbol{\beta}}^{\scaleto{\mathrm{LB}}{4pt}})$ is always the largest lower bound. \emph{Finding an upper bound on $\ensuremath{\mathrm{PC}}(\ensuremath{\boldsymbol{\beta}}^{\scaleto{\mathrm{UB}}{4pt}})$}: We can write for $N\geq \Bigg\lceil \varepsilon^{-2} \ \tallNorm{\ensuremath{{\ensuremath{\boldsymbol{b}}}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}}}_{\infty,\rho}^2 \ \left(\frac{(1+\gamma)}{2}\ensuremath{{\Omega}} \ + \ \ensuremath{{{\Delta}}_{{\delta}}} \right)^2 \Bigg\rceil$ that \begin{align*} \tallNorm{V^* - \ensuremath{V}(\ensuremath{\boldsymbol{\beta}^{\raisemath{2pt}{\scaleto{\mathrm{FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}})}_{1,\mu_{\chi}(\ensuremath{\boldsymbol{\beta}^{\raisemath{2pt}{\scaleto{\mathrm{FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}})} & = \int_{\ensuremath{\mathcal{S}}} \Big\lvert V^*(s) - V(s;\ensuremath{\boldsymbol{\beta}^{\raisemath{2pt}{\scaleto{\mathrm{FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}})\Big\rvert {\mu_{\chi}(s;\ensuremath{\boldsymbol{\beta}^{\raisemath{2pt}{\scaleto{\mathrm{FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}})} \diff s \\ & = \int_{\ensuremath{\mathcal{S}}\backslash\ensuremath{\mathcal{S}}^{0}}\Big\lvert V^*(s) - 
V(s;\ensuremath{\boldsymbol{\beta}^{\raisemath{2pt}{\scaleto{\mathrm{FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}})\Big\rvert \frac{{\mu_{\chi}(s;\ensuremath{\boldsymbol{\beta}^{\raisemath{2pt}{\scaleto{\mathrm{FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}})} }{\nu(s)} \nu(\diff s) \\ & \le \sup_{s\in\ensuremath{\mathcal{S}}\backslash\ensuremath{\mathcal{S}}^{0}} \bigg\{\frac{{\mu_{\chi}(s;\ensuremath{\boldsymbol{\beta}^{\raisemath{2pt}{\scaleto{\mathrm{FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}})} }{\nu(s)} \bigg\} \ \tallNorm{V^* - \ensuremath{V}(\ensuremath{\boldsymbol{\beta}^{\raisemath{2pt}{\scaleto{\mathrm{FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}})}_{1,\nu}\\ & \le \frac{4W_1\varepsilon}{(1-\gamma)}, \end{align*} where the second equality is valid since $\ensuremath{\mathcal{S}}^{0}$ is a zero-measure set and the second inequality holds for the chosen $N$ with a probability of at least $1-\delta$ by Part (iii) of Theorem \ref{prop:ALP}. Utilizing Proposition \ref{prop:worst case policy performance} in \S\ref{sec:Random Approximate Linear Program} and the definition of $\ensuremath{\mathrm{PC}}(\cdot)$ in \S\ref{section:Optimality Equation and an Exact Linear Program}, with the same probability, we obtain \[ \ensuremath{\mathrm{PC}}(\ensuremath{\boldsymbol{\beta}^{\raisemath{2pt}{\scaleto{\mathrm{FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}}) - \ensuremath{\mathbb{E}}_{\chi}[V^*] = \tallNorm{\ensuremath{\mathrm{PC}}(\cdot;\pi_g(\ensuremath{\boldsymbol{\beta}^{\raisemath{2pt}{\scaleto{\mathrm{FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}})) - V^*(\cdot)}_{1,\chi} \ \le \ \frac{\sNorm{V(\ensuremath{\boldsymbol{\beta}^{\raisemath{2pt}{\scaleto{\mathrm{FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}}) -V^*}_{1, \mu_\chi(\ensuremath{\boldsymbol{\beta}^{\raisemath{2pt}{\scaleto{\mathrm{FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}})}}{1-\gamma} \le \frac{4W_1\varepsilon}{(1-\gamma)^2}. 
\] Therefore, \begin{equation}\label{ec:eq:limitingResult_upperBound} \ensuremath{\mathrm{PC}}(\ensuremath{\boldsymbol{\beta}}^{\scaleto{\mathrm{UB}}{4pt}}) \le \ensuremath{\mathrm{PC}}(\ensuremath{\boldsymbol{\beta}}\vectorIndex{N}) = \ensuremath{\mathrm{PC}}(\ensuremath{\boldsymbol{\beta}^{\raisemath{2pt}{\scaleto{\mathrm{FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}}) \le \ensuremath{\mathbb{E}}_\chi\big[V^*\big] +\frac{4W_1\varepsilon}{(1-\gamma)^2}, \end{equation} where the first inequality follows from Step (v) in Algorithm \ref{alg:sampledBasesALP}, which guarantees that $\ensuremath{\mathrm{PC}}(\ensuremath{\boldsymbol{\beta}}^{\scaleto{\mathrm{UB}}{4pt}})$ is always the smallest upper bound. Using \eqref{ec:eq:limitingResult_lowerBound} and \eqref{ec:eq:limitingResult_upperBound}, we obtain \[ \tau^*= 1 - \dfrac{\mathrm{LB}(\ensuremath{\boldsymbol{\beta}}^{\scaleto{\mathrm{LB}}{4pt}})}{\mathrm{PC}(\ensuremath{\boldsymbol{\beta}}^{\scaleto{\mathrm{UB}}{4pt}})} \le 1- \left({\ensuremath{\mathbb{E}}_\chi\left[V^*\right] - \frac{4W_2\varepsilon}{(1-\gamma)}}\right) \bigg/ \left({\ensuremath{\mathbb{E}}_\chi\left[V^*\right] +\frac{4W_1\varepsilon}{(1-\gamma)^2}}\right) \le \frac{4(W_1 + (1-\gamma)W_2)}{(1-\gamma)^2 \ensuremath{\mathbb{E}}_\chi[V^*]}\varepsilon, \] which holds with a probability of at least $1-\delta$. For $W_3 \coloneqq {(1-\gamma)^2 \ensuremath{\mathbb{E}}_\chi[V^*]}/{4(W_1 + (1-\gamma)W_2)}$, if we choose $\varepsilon < W_3 \tau$, then for \[ N \ > \ N_\tau = \Bigg\lceil \tau^{-2} W_3^{-2} \ \tallNorm{\ensuremath{{\ensuremath{\boldsymbol{b}}}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}}}_{\infty,\rho}^2 \ \bigg(\frac{(1+\gamma)}{2}\ensuremath{{\Omega}} \ + \ \ensuremath{{{\Delta}}_{{\delta}}} \bigg)^2 \Bigg\rceil, \] we have $\tau^* < \tau$ and Algorithm \ref{alg:sampledBasesALP} terminates with a probability of at least $1-\delta$ in $\lceil(N_\tau+1)/B\rceil$ iterations. 
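The algebra behind the final chain of bounds on $\tau^*$ can be sanity-checked numerically. The Python sketch below verifies that $1 - (A - a)/(A + b) \le (a + b)/A$ for $A = \mathbb{E}_\chi[V^*]$, $a = 4W_2\varepsilon/(1-\gamma)$, and $b = 4W_1\varepsilon/(1-\gamma)^2$; all constants are purely illustrative placeholders, not values taken from the analysis:

```python
# Sanity check of the last inequality in the bound on tau*:
#   tau* <= 1 - (A - a)/(A + b) <= (a + b)/A,
# with A = E_chi[V*], a = 4*W2*eps/(1 - gamma), b = 4*W1*eps/(1 - gamma)^2.
# The numeric inputs below are illustrative placeholders only.

def tau_star_bounds(A, W1, W2, gamma, eps):
    a = 4.0 * W2 * eps / (1.0 - gamma)
    b = 4.0 * W1 * eps / (1.0 - gamma) ** 2
    middle = 1.0 - (A - a) / (A + b)             # middle expression in the display
    closed = 4.0 * (W1 + (1.0 - gamma) * W2) * eps / ((1.0 - gamma) ** 2 * A)
    return middle, closed

middle, closed = tau_star_bounds(A=10.0, W1=2.0, W2=1.5, gamma=0.9, eps=0.01)
assert 0.0 < middle <= closed
```

The check holds for any positive choice of the constants because $1 - (A-a)/(A+b) = (a+b)/(A+b) \le (a+b)/A$.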
Notice that we used Part (iii) of Theorem \ref{prop:ALP} to obtain $N_\tau$ by replacing $\varepsilon$ with $ W_3 \tau$.\looseness = -1 \hfill \Halmos \endproof \subsection{Proofs of Statements in \S \ref{sec:Self-guided Approximate Linear Programs}} \subsubsection*{Proof of Proposition \ref{prop:SG-ALP-basic}.} Any VFA in the set $\{V(\cdot;\coefSGVecK{mB}): m=1,2,\dots,n\}$ is a continuous function because of the Lipschitz continuity of $\bar{\varphi}$ in Assumption \ref{asm:random basis function}. Moreover, each function $V(\cdot;\coefSGVecK{mB})$ is feasible to the constraints \eqref{constr:ELP} since each vector $\coefSGVecK{mB}$ is feasible to the constraints \eqref{FALPConst1} of FGLP$\programIndex{mB}$. As a result of these two observations, Lemma \ref{ec:lem:optV-properties} guarantees $V(s;\coefSGVecK{mB})\le V^*(s)$ for $m=1,2,\dots,n$ and $s\in\ensuremath{\mathcal{S}}$. In addition, the constraints \eqref{FALPConst2} in FGLP indicate that $V(\cdot;\coefSGVecK{mB}) \le V(\cdot;\coefSGVecK{(m+1)B})$ for $m=1,2,\ldots,n-1$. \hfill \Halmos \endproof The rest of this section is devoted to our projection-based sampling bound for FGLP in Theorem \ref{thm:SG-ALP sampling bound}. Our analysis is based on a known orthogonal projection in the space of (2,$\rho$) integrable functions, which is formalized in Lemma \ref{ec:lem:PrepDecompose}. 
\begin{lemma}[Example 4.5 in \citealt{rudin1987real}]\label{ec:lem:PrepDecompose} Define the space of (2,$\rho$) integrable functions $\mathcal{B}$ with its associated inner product ${\inprodLTWO{\cdot}{\cdot}}$ and norm $\sNorm{\cdot}_{2,\rho}^2\coloneqq {\inprodLTWO{\cdot}{\cdot}}$ as \begin{align*} \mathcal{B} \coloneqq \big\{\ensuremath{\boldsymbol{b}}: \Theta \to \mathbb{R} \big | \sNorm{\ensuremath{\boldsymbol{b}}}_{2,\rho} < \infty \big\} \quad \text{and} \quad \inprodLTWO{\ensuremath{\boldsymbol{b}}}{\ensuremath{\boldsymbol{b}}^{\prime}} \coloneqq \int_\Theta \frac{\ensuremath{\boldsymbol{b}}(\theta) \ \ensuremath{\boldsymbol{b}}^{\prime}(\theta)}{\rho(\theta)} \diff\theta \ \mbox{for} \ \ensuremath{\boldsymbol{b}},\ensuremath{\boldsymbol{b}}^{\prime}\in\mathcal{B}. \end{align*} Then the space $\mathcal{B}$ equipped with the inner product ${\inprodLTWO{\cdot}{\cdot}}$ forms a Hilbert space. \end{lemma} Definition \ref{ec:def:classOfWeightFunctions} below models coefficients of functions (excluding the intercept) in the space $\mathcal{W}_\alpha(\Phi_{N})$ and connects such a space to the Hilbert space $\mathcal{B}$. Lemma \ref{ec:lem:prep-decompose} uses this definition to set up our orthogonal projection. \begin{definition}\label{ec:def:classOfWeightFunctions} Given $N$ samples $\{\theta_1,\dots,\theta_N\}$ and $\alpha\in\big(0,\min_{i\ne j}\lVert \theta_{i}-\theta_{j}{\rVert}_2\big)$, we define \[ \mathcal{B}_{\alpha,N} \equiv \mathcal{B}_\alpha(\Phi_{N}) \coloneqq \bigg\{\ensuremath{\boldsymbol{b}}\in \mathcal{B} \ \Big | \ \exists(\beta_1,\dots, \beta_N) \mbox{ such that } \ensuremath{\boldsymbol{b}}(\theta)=\sum_{i=1}^{N}\beta_i \phi_{i,\alpha}(\theta) \bigg\}, \] and let $\overline{\mathcal{B}}_{\alpha,N}$ and $\ensuremath\overline{\mathcal{B}}^{\raisemath{2pt}{\bot}}_{\alpha,N}$ be the closure of ${\mathcal{B}}_{\alpha,N}$ and the orthogonal complement of $\overline{\mathcal{B}}_{\alpha,N}$, respectively. 
In particular, \[ \ensuremath\overline{\mathcal{B}}^{\raisemath{2pt}{\bot}}_{\alpha,N} \coloneqq \Big \{ \ensuremath{\boldsymbol{b}} \in\mathcal{B} \Big\vert \inprodLTWO{\ensuremath{\boldsymbol{b}}}{\ensuremath{\boldsymbol{b}}^\prime} = 0, \ \forall \ensuremath{\boldsymbol{b}}^\prime \in \overline{\mathcal{B}}_{\alpha,N}\Big \}. \] \end{definition} \begin{lemma}\label{ec:lem:prep-decompose} Let $\alpha\in\big( 0, \min_{i\ne j} \lVert \theta_{i}-\theta_{j}\rVert_2\big)$. For a given $\varepsilon > 0$, fix a feasible solution $(b_{0}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}},\ensuremath{{\coefInf}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}})$ to FELP satisfying $\tallNorm{V^*- \ensuremath{V}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}}_\infty \le {2\varepsilon}/{(1-\gamma)}$, where $\ensuremath{V}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}(\cdot) = b_{0}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}} + \inprod{\ensuremath{{\coefInf}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}}}{\varphi(\cdot)} \in\ensuremath{\mathcal{R}_C(\varphi,\rho)}$. 
Then there exist $\ensuremath{\coefInf_\alpha^{\raisemath{2pt}{\scaleto{\varepsilon}{3.5pt}}}} \in \overline{\mathcal{B}}_{\alpha,N}$ and $\ensuremath{\prep{\coefInf}_\alpha^{\raisemath{3pt}{\scaleto{\varepsilon}{3.5pt}}}} \in \ensuremath\overline{\mathcal{B}}^{\raisemath{2pt}{\bot}}_{\alpha,N}$ such that (i) the function $\ensuremath{{\coefInf}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}}$ admits the decomposition $\ensuremath{{\coefInf}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}} = \ensuremath{\coefInf_\alpha^{\raisemath{2pt}{\scaleto{\varepsilon}{3.5pt}}}} + \ensuremath{\prep{\coefInf}_\alpha^{\raisemath{3pt}{\scaleto{\varepsilon}{3.5pt}}}}$; (ii) the Pythagorean identity $\tallNorm{\ensuremath{{\coefInf}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}}}_{2,\rho}^2 = \tallNorm{\ensuremath{\coefInf_\alpha^{\raisemath{2pt}{\scaleto{\varepsilon}{3.5pt}}}}}_{2,\rho}^2 + \tallNorm{\ensuremath{\prep{\coefInf}_\alpha^{\raisemath{3pt}{\scaleto{\varepsilon}{3.5pt}}}}}_{2,\rho}^2$ holds; (iii) the norms $\tallNorm{\ensuremath{\coefInf_\alpha^{\raisemath{2pt}{\scaleto{\varepsilon}{3.5pt}}}}}_{\infty,\rho}$ and $\tallNorm{\ensuremath{\prep{\coefInf}_\alpha^{\raisemath{3pt}{\scaleto{\varepsilon}{3.5pt}}}}}_{\infty,\rho}$ are finite; and (iv) $\ensuremath{V}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}} = V_\alpha^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}} + \prep{V}_\alpha^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}$ where \[ V_\alpha^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}(\cdot) \coloneqq b_{0}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}} + \inprod{\ensuremath{\coefInf_\alpha^{\raisemath{2pt}{\scaleto{\varepsilon}{3.5pt}}}}}{\varphi(\cdot)} \quad \text{and} \quad \prep{V}_\alpha^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}(\cdot) \coloneqq \inprod{\ensuremath{\prep{\coefInf}_\alpha^{\raisemath{3pt}{\scaleto{\varepsilon}{3.5pt}}}}}{\varphi(\cdot)}. 
\] \end{lemma} \proof{Proof.} The required FELP feasible solution $(b_{0}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}},\ensuremath{{\coefInf}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}})$ exists by Part (i) of Proposition \ref{prop:ELP-RKHS-gap}. Clearly $\big\lVert\ensuremath{{\coefInf}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}}\big\rVert_{\infty,\rho} \le C$ since $\ensuremath{{\coefInf}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}}$ is feasible to FELP. Thus, from the inequalities, \[ \tallNorm{\ensuremath{{\coefInf}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}}}_{2,\rho}^2 \ = \ \int_\Theta \frac{\ensuremath{{\coefInf}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}}(\theta)^2}{\rho(\theta)} \diff \theta \ = \ \int_\Theta \bigg(\frac{\ensuremath{{\coefInf}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}}(\theta)}{\rho(\theta)}\bigg)^2 \rho(\diff\theta) \ \le \ \int_\Theta \bigg(\sup_\theta\bigg\lvert\frac{\ensuremath{{\coefInf}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}}(\theta)}{\rho(\theta)}\bigg\rvert\bigg)^2 \rho(\diff\theta) \ \le \ \tallNorm{\ensuremath{{\coefInf}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}}}_{\infty,\rho}^2, \] we have $\tallNorm{\ensuremath{{\coefInf}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}}}_{2,\rho}\le\tallNorm{\ensuremath{{\coefInf}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}}}_{\infty,\rho}<\infty$ which shows $\ensuremath{{\coefInf}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}}\in \mathcal{B}$. It is straightforward to see that $\overline{\mathcal{B}}_{\alpha,N}$ is a closed linear subspace of $\mathcal{B}$. Leveraging the orthogonal projection in Hilbert spaces (see, e.g., Theorem 5.24 in \citealt{folland1999real}), we can decompose Hilbert space $\mathcal{B}$ into elements $\overline{\mathcal{B}}_{\alpha,N}$ and $\ensuremath\overline{\mathcal{B}}^{\raisemath{2pt}{\bot}}_{\alpha,N}$. 
Therefore, given $\ensuremath{{\coefInf}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}}\in\mathcal{B}$, there exist $\ensuremath{\boldsymbol{b}}_{1} \in \overline{\mathcal{B}}_{\alpha,N} $ and $\ensuremath{\boldsymbol{b}}_{2}\in \ensuremath\overline{\mathcal{B}}^{\raisemath{2pt}{\bot}}_{\alpha,N}$ such that $\ensuremath{{\coefInf}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}} = \ensuremath{\boldsymbol{b}}_{1} + \ensuremath{\boldsymbol{b}}_{2}$. Since the two components $\ensuremath{\boldsymbol{b}}_{1}$ and $\ensuremath{\boldsymbol{b}}_{2}$ are orthogonal in the Hilbert space $\mathcal{B}$ (see Lemma \ref{ec:lem:PrepDecompose}), they satisfy the Pythagorean identity $\tallNorm{\ensuremath{{\coefInf}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}}}_{2,\rho}^2 = \tallNorm{\ensuremath{\boldsymbol{b}}_{1}}_{2,\rho}^2 + \tallNorm{\ensuremath{\boldsymbol{b}}_{2}}_{2,\rho}^2$ (see Theorem 5.23 in \citealt{folland1999real}). We next show that weighting functions $\ensuremath{\coefInf_\alpha^{\raisemath{2pt}{\scaleto{\varepsilon}{3.5pt}}}}$ and $\ensuremath{\prep{\coefInf}_\alpha^{\raisemath{3pt}{\scaleto{\varepsilon}{3.5pt}}}}$ with finite $(\infty,\rho)$-norms can be constructed using $\ensuremath{\boldsymbol{b}}_{1}$ and $\ensuremath{\boldsymbol{b}}_{2}$, respectively, such that $\ensuremath{{\coefInf}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}} = \ensuremath{\coefInf_\alpha^{\raisemath{2pt}{\scaleto{\varepsilon}{3.5pt}}}}+\ensuremath{\prep{\coefInf}_\alpha^{\raisemath{3pt}{\scaleto{\varepsilon}{3.5pt}}}}$. Let $\Theta_1 \coloneqq \big\{\theta\in \Theta: |\ensuremath{\boldsymbol{b}}_{1}(\theta)| = \infty\big\}$ and $\Theta_2 \coloneqq\big\{\theta\in \Theta: |\ensuremath{\boldsymbol{b}}_{2}(\theta)| = \infty\big\}$. 
If these sets are empty, then we can set $\ensuremath{\coefInf_\alpha^{\raisemath{2pt}{\scaleto{\varepsilon}{3.5pt}}}} = \ensuremath{\boldsymbol{b}}_1$ and $\ensuremath{\prep{\coefInf}_\alpha^{\raisemath{3pt}{\scaleto{\varepsilon}{3.5pt}}}} = \ensuremath{\boldsymbol{b}}_2$. Otherwise, we show they are both zero-measure sets. By contradiction, assume that at least one of them is not a zero-measure set. Then, \begin{align} \sNorm{\ensuremath{\boldsymbol{b}}_{1}}_{2,\rho}^2 + \sNorm{\ensuremath{\boldsymbol{b}}_{2}}_{2,\rho}^2 &= \ \int_{\Theta_1} \frac{{(\ensuremath{\boldsymbol{b}}_{1}(\theta))}^2}{\rho(\theta)}\diff\theta + \int_{\Theta \backslash \Theta_1} \frac{{(\ensuremath{\boldsymbol{b}}_{1}(\theta))}^2}{\rho(\theta)}\diff\theta + \int_{\Theta_2} \frac{{(\ensuremath{\boldsymbol{b}}_{2}(\theta))}^2}{\rho(\theta)}\diff\theta + \int_{\Theta \backslash \Theta_2} \frac{{(\ensuremath{\boldsymbol{b}}_{2}(\theta))}^2}{\rho(\theta)}\diff\theta \nonumber\\[4pt] & \ge \ \int_{\Theta_1} \frac{{(\ensuremath{\boldsymbol{b}}_{1}(\theta))}^2}{\rho(\theta)}\diff\theta + \int_{\Theta_2} \frac{{(\ensuremath{\boldsymbol{b}}_{2}(\theta))}^2}{\rho(\theta)}\diff\theta = \infty \label{ec:zero-measure-ineq} \end{align} which contradicts $\sNorm{\ensuremath{\boldsymbol{b}}_{1}}_{2,\rho}^2 + \sNorm{\ensuremath{\boldsymbol{b}}_{2}}_{2,\rho}^2 = \sNorm{\ensuremath{{\coefInf}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}}}_{2,\rho}^2 \le \sNorm{\ensuremath{{\coefInf}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}}}_{\infty,\rho}^2 \le (C {U_{\rho}})^2 < \infty$. 
Further, $\Theta_1$ must equal $\Theta_2$ since otherwise for any $\hat{\theta} \in \Theta_1\backslash \Theta_2$ (or $\hat{\theta} \in \Theta_2\backslash \Theta_1$), we get $\big|\ensuremath{{\coefInf}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}}(\hat\theta)\big| = \big|\ensuremath{\boldsymbol{b}}_{1}(\hat\theta) + \ensuremath{\boldsymbol{b}}_{2}(\hat\theta)\big| = \infty$, which contradicts $\big|\ensuremath{{\coefInf}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}}(\hat \theta)\big| \le C {U_{\rho}}$. To guarantee $\ensuremath{\coefInf_\alpha^{\raisemath{2pt}{\scaleto{\varepsilon}{3.5pt}}}}$ and $\ensuremath{\prep{\coefInf}_\alpha^{\raisemath{3pt}{\scaleto{\varepsilon}{3.5pt}}}}$ have finite $(\infty,\rho)$-norms, we construct them as follows: \begin{equation}\label{ec:eq:max-norm-corrected} \ensuremath{\coefInf_\alpha^{\raisemath{2pt}{\scaleto{\varepsilon}{3.5pt}}}}(\theta) \coloneqq \begin{cases} \ensuremath{\boldsymbol{b}}_1(\theta) & \text{ if } \theta \in \Theta\backslash\Theta_1; \\[4pt] {\ensuremath{{\coefInf}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}}(\theta)}/{2} & \text{ if } \theta \in \Theta_1; \\ \end{cases} \quad \text{and} \quad \ensuremath{\prep{\coefInf}_\alpha^{\raisemath{3pt}{\scaleto{\varepsilon}{3.5pt}}}}(\theta) \coloneqq \begin{cases} \ensuremath{\boldsymbol{b}}_2(\theta) & \text{ if } \theta \in \Theta\backslash\Theta_1; \\[4pt] {\ensuremath{{\coefInf}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}}(\theta)}/{2} & \text{ if } \theta \in \Theta_1. \\ \end{cases} \end{equation} Since $\|\ensuremath{{\coefInf}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}}\|_{\infty,\rho}$ is finite, both $\tallNorm{\ensuremath{\coefInf_\alpha^{\raisemath{2pt}{\scaleto{\varepsilon}{3.5pt}}}}}_{\infty,\rho}$ and $\tallNorm{\ensuremath{\prep{\coefInf}_\alpha^{\raisemath{3pt}{\scaleto{\varepsilon}{3.5pt}}}}}_{\infty,\rho}$ are finite by definition. 
In addition, it can be easily verified that the Pythagorean identity $\sNorm{\ensuremath{\coefInf_\alpha^{\raisemath{2pt}{\scaleto{\varepsilon}{3.5pt}}}}}_{2,\rho}^2 + \sNorm{\ensuremath{\prep{\coefInf}_\alpha^{\raisemath{3pt}{\scaleto{\varepsilon}{3.5pt}}}}}_{2,\rho}^2 = \sNorm{\ensuremath{{\coefInf}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}}}_{2,\rho}^2$ and the equation $\ensuremath{{\coefInf}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}}(\theta) = \ensuremath{\coefInf_\alpha^{\raisemath{2pt}{\scaleto{\varepsilon}{3.5pt}}}}(\theta) + \ensuremath{\prep{\coefInf}_\alpha^{\raisemath{3pt}{\scaleto{\varepsilon}{3.5pt}}}}(\theta)$ hold. Replacing $\ensuremath{{\coefInf}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}}$ with $ \ensuremath{\coefInf_\alpha^{\raisemath{2pt}{\scaleto{\varepsilon}{3.5pt}}}} + \ensuremath{\prep{\coefInf}_\alpha^{\raisemath{3pt}{\scaleto{\varepsilon}{3.5pt}}}}$ in the definition of $\ensuremath{V}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}=b_{0}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}} + \inprod{\ensuremath{{\coefInf}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}}}{\varphi(\cdot)}$, we obtain the decomposition $\ensuremath{V}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}} = V_\alpha^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}} + \prep{V}_\alpha^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}$. \hfill \Halmos \endproof Lemmas \ref{ec:lem:approxProjection} and \ref{ec:lem:extenedFGLPBound} show that the orthogonal functions $V_\alpha^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}$ and $\prep{V}_\alpha^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}$ can be approximated using finitely many random samples $\theta$ drawn from $\rho$. 
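The decomposition in Lemma \ref{ec:lem:prep-decompose} is a standard orthogonal projection, and its two key properties (orthogonality of the residual and the Pythagorean identity on squared norms) can be illustrated on a discretized stand-in for the weighted space $\mathcal{B}$. In the Python sketch below, the grid over $\Theta$, the density $\rho$, and the disjoint indicator bumps are all illustrative choices rather than the paper's exact construction; the inner product is the discrete analogue of $\int_\Theta \ensuremath{\boldsymbol{b}}(\theta)\ensuremath{\boldsymbol{b}}^{\prime}(\theta)/\rho(\theta)\,\diff\theta$:

```python
import numpy as np

# Discretized illustration of the orthogonal decomposition b = b1 + b2:
# project a vector b (a function on a grid over Theta) onto the span of
# disjoint indicator "bumps" with respect to the weighted inner product
# <u, v> = sum_i u_i * v_i / rho_i, and check that the residual b2 is
# orthogonal to the span and that the squared Pythagorean identity holds.
# Grid, density, and bump widths below are illustrative stand-ins.

rng = np.random.default_rng(0)
G = 200
theta = np.linspace(0.0, 1.0, G)           # grid discretizing Theta
rho = np.ones(G) / G                       # uniform density weights

# Three disjoint indicator bumps around sampled centers (alpha-ball analogue)
centers = np.array([0.2, 0.5, 0.8])
Phi = np.stack([(np.abs(theta - c) <= 0.05).astype(float) for c in centers],
               axis=1)                     # shape (G, 3)

b = rng.normal(size=G)                     # an arbitrary element of the space
W = np.diag(1.0 / rho)                     # weight matrix of the inner product

# Orthogonal projection of b onto span(Phi) w.r.t. <u, v> = u^T W v
M = Phi.T @ W @ Phi                        # Gram matrix (diagonal: disjoint bumps)
b1 = Phi @ np.linalg.solve(M, Phi.T @ W @ b)
b2 = b - b1                                # residual component

assert np.allclose(b2 @ W @ Phi, 0.0)                    # b2 orthogonal to span
assert np.isclose(b @ W @ b, b1 @ W @ b1 + b2 @ W @ b2)  # squared Pythagorean identity
```

Because the bumps are disjoint, the Gram matrix is diagonal and the projection reduces to a weighted average of $b$ over each bump; the same assertions hold for any strictly positive density on the grid.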
\begin{lemma}\label{ec:lem:approxProjection} Given $\xi>0$ and $\alpha\in\big(0,\min_{i\ne j}\lVert{\theta_{i}-\theta_{j}\rVert}_2\big)$, there is a function $V_{N} \in {\mathcal{W}}(\Phi_N)$ such that \begin{equation}\label{eq:boundingGenericProjection} \tallNorm{V_\alpha^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}- V_N}_\infty \ \le \ \sqrt{U_{\rho}} \xi + \alpha\sqrt{NU_{\rho}} \ensuremath{\mathrm{L}_\varphi}\left(\diamSState+1\right) (\sNorm{\ensuremath{{\coefInf}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}}}_{2,\rho} + \xi) \end{equation} where $V_\alpha^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}(\cdot) =b_{0}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}} + \inprod{\ensuremath{\coefInf_\alpha^{\raisemath{2pt}{\scaleto{\varepsilon}{3.5pt}}}}}{\varphi(\cdot)}$ is defined in Lemma \ref{ec:lem:prep-decompose}. \end{lemma} \proof{Proof.} Since $\ensuremath{\coefInf_\alpha^{\raisemath{2pt}{\scaleto{\varepsilon}{3.5pt}}}} \in\overline{\mathcal{B}}_{\alpha,N}$, there is a weighting function $\ensuremath{\boldsymbol{b}}_\alpha \in {\mathcal{B}}_{\alpha,N}$ such that $\sNorm{\ensuremath{\coefInf_\alpha^{\raisemath{2pt}{\scaleto{\varepsilon}{3.5pt}}}} - \ensuremath{\boldsymbol{b}}_\alpha}_{2,\rho} \le \xi$. Let $V_\alpha(\cdot) \coloneqq b_{0}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}} + \inprod{{\ensuremath{\boldsymbol{b}}}_\alpha}{\varphi(\cdot)}$. 
For all $s\in\ensuremath{\mathcal{S}}$, we have \begin{align} \Big(V_\alpha (s) - V_\alpha^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}(s) \Big)^2 &= \bigg( \int_\Theta \big(\ensuremath{\boldsymbol{b}}_\alpha(\theta) - \ensuremath{\coefInf_\alpha^{\raisemath{2pt}{\scaleto{\varepsilon}{3.5pt}}}}(\theta)\big)\varphi(s;\theta)\diff \theta \bigg)^2 \nonumber \\ & \le \int_\Theta \big(\ensuremath{\boldsymbol{b}}_\alpha(\theta) - \ensuremath{\coefInf_\alpha^{\raisemath{2pt}{\scaleto{\varepsilon}{3.5pt}}}}(\theta)\big)^2({\varphi(s;\theta)})^2\diff \theta \nonumber \\ & \le \int_\Theta \frac{U_{\rho}}{\rho(\theta)}\big(\ensuremath{\boldsymbol{b}}_\alpha(\theta) - \ensuremath{\coefInf_\alpha^{\raisemath{2pt}{\scaleto{\varepsilon}{3.5pt}}}}(\theta)\big)^2 \diff \theta \nonumber\\ & = U_{\rho} \sNorm{\ensuremath{\coefInf_\alpha^{\raisemath{2pt}{\scaleto{\varepsilon}{3.5pt}}}} - \ensuremath{\boldsymbol{b}}_\alpha}_{2,\rho}^2, \nonumber \end{align} where the first inequality holds by Jensen's inequality and the second one follows from $\rho(\theta) \le U_{\rho}$ for all $\theta\in \Theta$ and the fact that $\sNorm{\bar \varphi}_\infty \le 1$ by Assumption \ref{asm:random basis function}. Hence, we have \begin{equation}\label{ec:eq:gapInOriginalSpace} \tallNorm{V_\alpha- V_\alpha^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}}_{\infty} \ \le \ \sqrt{U_{\rho}}\sNorm{\ensuremath{\coefInf_\alpha^{\raisemath{2pt}{\scaleto{\varepsilon}{3.5pt}}}} - \ensuremath{\boldsymbol{b}}_\alpha}_{2,\rho} \ \le \ \sqrt{U_{\rho}}\xi. 
\end{equation} Since $\ensuremath{\boldsymbol{b}}_\alpha \in \mathcal{B}_{\alpha,N}$, it can be written as ${\ensuremath{\boldsymbol{b}}}_\alpha(\theta) = \sum_{i=1}^{N}\beta_i\phi_{i,\alpha}(\theta)$ for some real-valued coefficients $\beta_{1},\ldots,\beta_{N}$, and the function $V_\alpha(\cdot) \coloneqq b_{0}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}} + \inprod{{\ensuremath{\boldsymbol{b}}}_\alpha}{\varphi(\cdot)}$ belongs to $\mathcal{W}_\alpha(\Phi_N)$. Define $V_N(\cdot) \coloneqq b_{0}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}} + \sum_{i=1}^{N}\beta_i\varphi(\cdot;\theta_i)$. We next show that $\sNorm{V_N-V_\alpha}_\infty$ is bounded. Consider the inequalities \begin{equation*}\label{ec:eq:gapOfBoundary} \begin{aligned} \lvert\varphi(s;\theta)-\varphi(s;\theta_{i})\rvert & =\big\lvert\bar\varphi\big(\inprod{(1,s)}{\theta}\big)-\bar\varphi\big(\inprod{(1,s)}{\theta_{i}}\big)\big\rvert\\ & \le \ensuremath{\mathrm{L}_\varphi} {\lVert (1,s)\rVert}_2 \sNorm{\theta-\theta_{i}}_2 \\ & \le \ensuremath{\mathrm{L}_\varphi}(\diamSState+1)\sNorm{\theta-\theta_{i}}_2, \end{aligned} \end{equation*} where the equality follows from the definition of $\varphi$ in Assumption \ref{asm:random basis function} and the first inequality is obtained by the Lipschitz continuity of $\bar\varphi$ and H\"older's inequality. Then, for all $i=1,2,\dots,N$, $\theta\in\Theta$, and $s\in\ensuremath{\mathcal{S}}$ we have \begin{align*} \frac{\beta_i}{z^i_{\alpha}}\rho(\theta) \indicator{\lVert{\theta-\theta_{i}\rVert}_2\le\alpha} \varphi(s;\theta) & \ \le \ \frac{\beta_i}{z_{\alpha}^i}\rho(\theta) \indicator{\lVert{\theta-\theta_{i}\rVert}_2\le\alpha} \varphi(s;\theta_{i}) \\ & \ \qquad + \bigg\lvert\frac{\beta_i}{z_{\alpha}^i}\bigg\rvert \rho(\theta) \indicator{\lVert{\theta-\theta_{i}\rVert}_2\le\alpha}\ensuremath{\mathrm{L}_\varphi}(\diamSState+1)\sNorm{\theta-\theta_{i}}_2. 
\end{align*} Summing the above inequality over samples $\theta_i$, integrating over $\theta$, and adding the intercept $b_{0}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}$ leads to\looseness = -1 \begin{align*} V_\alpha(s) & = b_{0}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}} + \int_\Theta \sum_{i=1}^{N}\frac{\beta_i}{z_{\alpha}^i} \rho(\theta) \indicator{\lVert{\theta-\theta_{i}\rVert}_2\le\alpha} \varphi(s;\theta) \diff \theta \\ & \le b_{0}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}+ \sum_{i=1}^{N} \frac{\beta_i}{z_{\alpha}^i}\varphi(s;\theta_{i}) \int_\Theta\rho(\theta)\indicator{\lVert{\theta-\theta_{i}\rVert}_2\le \alpha}\diff \theta \ + \\ &\quad \ensuremath{\mathrm{L}_\varphi}(\diamSState+1)\left(\sum_{i=1}^{N} \bigg \lvert\frac{\beta_i}{z_{\alpha}^i} \bigg \rvert \int_\Theta \rho(\theta)\indicator{\lVert{\theta-\theta_{i}\rVert}_2\le\alpha} \lVert{\theta-\theta_{i}\rVert}_2 \diff \theta\right) \\ & \le b_{0}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}} + \sum_{i=1}^{N} \beta_i\varphi(s;\theta_{i}) + \ensuremath{\mathrm{L}_\varphi}(\diamSState+1) \alpha \sum_{i=1}^{N} \bigg \lvert\frac{\beta_i}{z^i_{\alpha}} \bigg \rvert \int_\Theta \rho(\theta)\indicator{\lVert{\theta-\theta_{i}\rVert}_2\le\alpha} \diff \theta \\ & = V_N(s) + \ensuremath{\mathrm{L}_\varphi}(\diamSState+1)\alpha \sum_{i=1}^{N}|\beta_i|, \end{align*} where the second inequality is derived using the inequality $\lVert{\theta-\theta_{i}\rVert}_2\le\alpha$ that holds for all $\theta$ in the $\alpha$-ball $\mathcal{U}_i(\alpha) \coloneqq \{\theta\in\Theta: \lVert{\theta-\theta_{i}\rVert}_2\le\alpha\}$ around sample $\theta_i$ with $i\in\{1,2,\dots,N\}$, and the second equality follows from the definition of $z_{\alpha}^i$ in \S\ref{sec:Convergence of Self-guided ALPs via an Orthogonal Decomposition}. Similarly, one can bound $V_\alpha$ from below as $V_\alpha(s) \ge V_N(s) - \ensuremath{\mathrm{L}_\varphi}(\diamSState+1)\alpha\sum_{i=1}^{N}|\beta_i| $ for all $s\in\ensuremath{\mathcal{S}}$. 
This indicates that \begin{equation}\label{ec:eq:bound-1} \tallNorm{V_\alpha - V_N}_\infty \ \le \ \ensuremath{\mathrm{L}_\varphi}(\diamSState+1)\alpha \sum_{i=1}^{N}\left|\beta_i\right|. \end{equation} We next find an $\alpha$-independent upper bound on the term $\sum_{i=1}^{N}|\beta_i|$. Notice that for $\theta \in \mathcal{U}\coloneqq\bigcup_{i=1}^{N} \mathcal{U}_i(\alpha)$, there is exactly one index $i\in \{1,2,\dots,N \}$ such that $\indicator{\lVert{\theta-\theta_{i}\rVert}_2\le \alpha}=1$, and for all $\theta \in \Theta\backslash \mathcal{U}$, it holds that $\indicator{\lVert{\theta-\theta_{i}\rVert}_2\le \alpha}=0$ for all $i\in \{1,2,\dots,N \}$. Thus, we can write \begin{align}\label{eq:ball-shirinkage} \tallNorm{{\ensuremath{\boldsymbol{b}}}_\alpha}_{2,\rho}^2 &=\int_\Theta \frac{1}{\rho(\theta)} \bigg(\sum_{i=1}^{N}\frac{\beta_i}{z_{\alpha}^i} \rho(\theta) \indicator{\lVert{\theta-\theta_{i}\rVert}_2\le\alpha}\bigg)^2 \diff\theta \nonumber\\ & \ge \frac{1}{U_{\rho}}\int_{\Theta} \Big(\sum_{i=1}^{N} \frac{\beta_i}{z^i_{\alpha}} \rho(\theta) \indicator{\lVert{\theta-\theta_{i}\rVert}_2\le\alpha}\Big)^2\diff\theta \nonumber\\ & = \frac{1}{U_{\rho}}\int_{\mathcal{U}} \Big(\sum_{i=1}^{N} \frac{\beta_i}{z^i_{\alpha}} \rho(\theta) \indicator{\lVert{\theta-\theta_{i}\rVert}_2\le\alpha}\Big)^2\diff\theta \nonumber\\ & = \frac{1}{U_{\rho}}\sum_{k=1}^{N} \int_{\mathcal{U}_k (\alpha)} \bigg(\sum_{i=1}^{N} \frac{\beta_i}{z^i_{\alpha}} \rho(\theta) \indicator{\lVert{\theta-\theta_{i}\rVert}_2\le\alpha} \bigg)^2 \diff\theta \\ & = \frac{1}{U_{\rho}}\sum_{k=1}^{N} \bigg(\frac{\beta_k}{z^k_{\alpha}} \bigg)^2 \int_{\mathcal{U}_k (\alpha)} \Big( \indicator{\lVert{\theta-\theta_{k}\rVert}_2\le\alpha}\rho(\theta) \Big)^2 \diff\theta \nonumber\\ & \ge \frac{1}{U_{\rho}}\sum_{k=1}^{N} \bigg(\frac{\beta_k}{z^k_{\alpha}} \bigg)^2 \bigg( \int_{\mathcal{U}_k (\alpha)} \indicator{\lVert{\theta-\theta_{k}\rVert}_2\le\alpha}\rho(\diff\theta) \bigg)^2 \nonumber\\ & = 
\frac{1}{U_{\rho}}\sum_{k=1}^{N} \bigg(\frac{\beta_k}{z^k_{\alpha}} \bigg)^2 \bigg( \int_{\Theta} \indicator{\lVert{\theta-\theta_{k}\rVert}_2\le\alpha}\rho(\diff\theta) \bigg)^2\nonumber \\ & = \frac{1}{U_{\rho}} \sum_{k=1}^{N}(\beta_k)^2, \nonumber \end{align} where the first inequality above is valid since $\rho(\theta) \le U_{\rho}$ for all $\theta\in\Theta$ by Assumption \ref{asm:random basis function}; the second equality is valid since all indicator functions take zero value for $\theta\in \Theta \backslash \mathcal{U}$; the third equality follows from the definition of $\mathcal{U}$ and the fact that the sets $\mathcal{U}_k$ are mutually disjoint; the fourth equality follows from $\indicator{\lVert{\theta-\theta_{k}\rVert}_2\le\alpha} = 0$, which holds for all $\theta\in\Theta\backslash \mathcal{U}_k$; the second inequality is obtained by Jensen's inequality; and the last equality is valid by the definition of $z^k_{\alpha}$. Using $\sNorm{\ensuremath{\coefInf_\alpha^{\raisemath{2pt}{\scaleto{\varepsilon}{3.5pt}}}} - \ensuremath{\boldsymbol{b}}_\alpha}_{2,\rho} \le \xi$, \eqref{eq:ball-shirinkage}, and $\sNorm{\ensuremath{\boldsymbol{\beta}}}_1 \le \sqrt{N}\sNorm{\ensuremath{\boldsymbol{\beta}}}_2$ for $\ensuremath{\boldsymbol{\beta}}\in\mathbb{R}^N$, we have \begin{equation}\label{ec:eq:bound-2} \small \sum_{i=1}^{N} \big\lvert \beta_i \big\rvert \le \sqrt{N} \Bigg[{\sum_{i=1}^{N}(\beta_i)^2}\Bigg]^{\nicefrac{1}{2}} \le \sqrt{N U_{\rho}}\tallNorm{{\ensuremath{\boldsymbol{b}}}_\alpha}_{2,\rho} \le\sqrt{N U_{\rho}}\big(\tallNorm{\ensuremath{\coefInf_\alpha^{\raisemath{2pt}{\scaleto{\varepsilon}{3.5pt}}}}}_{2,\rho} + \xi\big) \le\sqrt{N{{U_{\rho}}}}\big(\tallNorm{\ensuremath{{\coefInf}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}}}_{2,\rho} + \xi\big), \end{equation} where the last inequality follows from the Pythagorean identity in Lemma \ref{ec:lem:prep-decompose}. 
Using the inequalities \eqref{ec:eq:gapInOriginalSpace}, \eqref{ec:eq:bound-1} and \eqref{ec:eq:bound-2}, we obtain $ \sNorm{V_\alpha^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}- V_N}_\infty \le \tallNorm{ V_\alpha^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}} - V_\alpha}_{\infty} +\sNorm{V_\alpha- V_N}_\infty \le \sqrt{U_{\rho}} \xi + \alpha\sqrt{N\color{black}{U_{\rho}}} \ensuremath{\mathrm{L}_\varphi}(\diamSState+1) (\sNorm{\ensuremath{{\coefInf}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}}}_{2,\rho} + \xi)$. \hfill \Halmos \endproof \begin{lemma}\label{ec:lem:extenedFGLPBound} Given $\varepsilon>0$ and $\alpha\in\big(0,\min_{i\ne j}\lVert{\theta_{i}-\theta_{j}\rVert}_2\big)$, for any $H \ge H_\varepsilon\coloneqq \big\lceil \varepsilon^{-2} \big\lVert \ensuremath{{\coefInf}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}} -\ensuremath{\coefInf_\alpha^{\raisemath{2pt}{\scaleto{\varepsilon}{3.5pt}}}} \big\rVert_{\infty,\rho} ^2(\ensuremath{{\Omega}}+\ensuremath{{{\Delta}}_{{\delta}}})^2 \big\rceil$, there exists a function ${V}_H\in\mathcal{W}(\Phi_H)$ such that \begin{equation*} {\tallNorm{\prep{V}_\alpha^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}} \ - \ {V}_H}_{\infty} \leq \varepsilon}, \end{equation*} with a probability of at least $1-\delta$, where $\prep{V}_\alpha^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}(\cdot) = \inprod{\ensuremath{\prep{\coefInf}_\alpha^{\raisemath{3pt}{\scaleto{\varepsilon}{3.5pt}}}}}{\varphi(\cdot)}$ is defined in Lemma \ref{ec:lem:prep-decompose}. \end{lemma} \proof{Proof.} The proof of this lemma is similar to the proof of Lemma \ref{ec:lem:high-prob-feas-soln}. 
In particular, consider $\prep{V}_\alpha^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}(\cdot) = \inprod{\ensuremath{\prep{\coefInf}_\alpha^{\raisemath{3pt}{\scaleto{\varepsilon}{3.5pt}}}}}{\varphi(\cdot)}$ defined in Lemma \ref{ec:lem:prep-decompose}, where $\tallNorm{\ensuremath{\prep{\coefInf}_\alpha^{\raisemath{3pt}{\scaleto{\varepsilon}{3.5pt}}}}}_{\infty,\rho} < \infty$. Theorem 3.2 in \cite{rahimi2008uniform} ensures that there exists a function ${V}_H\in\mathcal{W}(\Phi_H)$ such that \[ \tallNorm{\prep{V}_\alpha^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}} - {V}_H}_{\infty} \le \frac{\tallNorm{\ensuremath{\prep{\coefInf}_\alpha^{\raisemath{3pt}{\scaleto{\varepsilon}{3.5pt}}}}}_{\infty,\rho}}{\sqrt{H}}\left(\ensuremath{{{\Delta}}_{{\delta}}} + \ensuremath{{\Omega}} \right), \] where $\ensuremath{{{\Delta}}_{{\delta}}}$ and $\ensuremath{{\Omega}}$ are defined in \S \ref{sec:Random Approximate Linear Program}. When $ H \ge \Big\lceil \varepsilon^{-2} \ \tallNorm{\ensuremath{\prep{\coefInf}_\alpha^{\raisemath{3pt}{\scaleto{\varepsilon}{3.5pt}}}}}_{\infty,\rho}^2 \ \big(\ensuremath{{{\Delta}}_{{\delta}}} + \ensuremath{{\Omega}} \big)^2 \Big\rceil $, we can then guarantee $\tallNorm{\prep{V}_\alpha^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}} - V_H}_{\infty} \le \varepsilon$ with a probability of at least $1-\delta$. This inequality also holds for $H\geq H_\varepsilon$ since $\ensuremath{\prep{\coefInf}_\alpha^{\raisemath{3pt}{\scaleto{\varepsilon}{3.5pt}}}} = \ensuremath{{\coefInf}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}} - \ensuremath{\coefInf_\alpha^{\raisemath{2pt}{\scaleto{\varepsilon}{3.5pt}}}}$.\looseness = -1 \hfill \Halmos \endproof \begin{remark} Given $\alpha\in\big(0,\min_{i\ne j}\lVert{\theta_{i}-\theta_{j}\rVert}_2\big)$, the norm $\tallNorm{\ensuremath{{\coefInf}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}} -\ensuremath{\coefInf_\alpha^{\raisemath{2pt}{\scaleto{\varepsilon}{3.5pt}}}}}_{\infty,\rho}$ used in $H_\varepsilon$ is finite.
\end{remark} Now, we integrate our results in Lemmas \ref{ec:lem:approxProjection} and \ref{ec:lem:extenedFGLPBound} to prove Theorem \ref{thm:SG-ALP sampling bound}. \subsubsection*{Proof of Theorem \ref{thm:SG-ALP sampling bound}.} Consider $(b_{0}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}},\ensuremath{{\coefInf}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}})$ given in Part (i) of Proposition \ref{prop:ELP-RKHS-gap} and its corresponding function $\ensuremath{V}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}$. Using Lemma \ref{ec:lem:prep-decompose}, we can decompose $\ensuremath{V}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}$ as $\ensuremath{V}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}} = V_\alpha^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}} + \prep{V}_\alpha^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}$. Let $V = V_N + {V}_H$ where $V_N\in {\mathcal{W}}(\Phi_N)$ and ${V}_H \in {\mathcal{W}}(\Phi_H)$ are defined in Lemmas \ref{ec:lem:approxProjection} and \ref{ec:lem:extenedFGLPBound}, respectively. In addition, assume $\ensuremath{\boldsymbol{\beta}}=(\beta_{0},\beta_{1},\ldots,\beta_{N+H})$ are the coefficients defining $V$. We observe that $V \in {\mathcal{W}}(\Phi_N\cup \Phi_H)$. 
When $H\ge H_{\nicefrac{\varepsilon}{3}}$ (we use $\nicefrac{\varepsilon}{3}$ in lieu of $\varepsilon$ in Lemma \ref{ec:lem:extenedFGLPBound}), with a probability of at least $1-\delta$, we have \begin{equation}\label{ec:eq:gapInThm2} \begin{aligned} \tallNorm{\ensuremath{V}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}} - V}_{\infty} \ & = \ \tallNorm{V_\alpha^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}} + \prep{V}_\alpha^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}} - (V_N + V_H)}_{\infty} \\ \ & \le \ \tallNorm{V_\alpha^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}} - V_N}_{\infty} + \tallNorm{ \prep{V}_\alpha^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}} - {V}_H}_{\infty} \\ \ & \le \ \sqrt{\color{black}{U_{\rho}}}\xi + \alpha\sqrt{N\color{black}{U_{\rho}}} \ensuremath{\mathrm{L}_\varphi}{\color{black}(\diamSState+1)} \big(\sNorm{\ensuremath{\coefInf_\alpha^{\raisemath{2pt}{\scaleto{\varepsilon}{3.5pt}}}}}_{2,\rho} + \xi\big) + \frac{\varepsilon}{3} \\ \ & \le \ \xi\sqrt{\color{black}{U_{\rho}}}\Big(1+ \frac{\varepsilon}{\ensuremath{\Omega^\prime}} \ensuremath{\mathrm{L}_\varphi}{\color{black}(\diamSState+1)}\Big) + \varepsilon \Big(\frac{1}{3} + \frac{\sqrt{\color{black}{U_{\rho}}}}{\ensuremath{\Omega^\prime}} \ensuremath{\mathrm{L}_\varphi}{\color{black}(\diamSState+1)} \sNorm{\ensuremath{\coefInf_\alpha^{\raisemath{2pt}{\scaleto{\varepsilon}{3.5pt}}}}}_{2,\rho} \Big)\\ \ & \le \varepsilon \end{aligned} \end{equation} where the third inequality follows from the choice of $\alpha$ in Theorem \ref{thm:SG-ALP sampling bound} and the last one from the definition of $\ensuremath{\Omega^\prime}$ in \S\ref{sec:Convergence of Self-guided ALPs via an Orthogonal Decomposition} and choosing $ \xi \in \bigg(0, \ \frac{\varepsilon}{3\sqrt{\color{black}{U_{\rho}}}(1+ {\varepsilon} \ensuremath{\mathrm{L}_\varphi}{\color{black}(\diamSState+1)}/{\ensuremath{\Omega^\prime}})}\bigg]. 
$ Consider the vector $\ensuremath{\boldsymbol{\beta}}^\prime = (\beta_0-\ensuremath{\Gamma}\varepsilon,\beta_1,\dots,\beta_{N+H})$. This vector is feasible to constraints \eqref{FALPConst1} of FGLP$\programIndex{N+H}$ with a probability of at least $1-\delta$ since it holds that \begin{align*} (1-\gamma)\big(\beta_0 - \ensuremath{\Gamma}\varepsilon\big) + \sum^{N+H}_{i=1} \beta_i \big( \varphi(s;\theta_i) - \gamma \ensuremath{\mathbb{E}}[\varphi(s^\prime;\theta_i) \vert s,a] \big) & = V(s) - \varepsilon - \gamma\ensuremath{\mathbb{E}}\big[V(s^\prime) + \varepsilon \big\vert s,a\big] \\ &\le \ensuremath{V}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}(s) - \gamma\ensuremath{\mathbb{E}}[\ensuremath{V}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}(s^\prime) \vert s,a] \\ & \le c(s,a), \end{align*} for all $(s,a)\in \ensuremath{\sSpace\times\mathcal{A}_s}$, where the first inequality follows from \eqref{ec:eq:gapInThm2} and the last one from the feasibility of $(\coefRKHS,\ensuremath{{\ensuremath{\boldsymbol{b}}}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}})$ to FELP. Also, with a probability of at least $1-\delta$, the vector $\ensuremath{\boldsymbol{\beta}}^\prime $ violates constraints \eqref{FALPConst2} of FGLP$\programIndex{N+H}$ by at most $\frac{4\varepsilon}{(1-\gamma)}$. In particular, with a probability of at least $1-\delta$, Part (i) of Proposition \ref{prop:ELP-RKHS-gap} and Lemma \ref{ec:lem:optV-properties} guarantee that \begin{equation}\label{ec:eq:FGLP-const-violation} V(s;\coefSGVecK{N}) \le V^*(s) \le \ensuremath{V}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}(s) +\frac{2\varepsilon}{1-\gamma}\le (V(s) -\ensuremath{\Gamma}\varepsilon) + \varepsilon(1+\ensuremath{\Gamma}) +\frac{2\varepsilon}{(1-\gamma)} = V(s;\ensuremath{\boldsymbol{\beta}}^\prime) + \frac{4\varepsilon}{(1-\gamma)}.
\end{equation} In \eqref{ec:eq:FGLP-const-violation}, we used inequality \eqref{ec:eq:gapInThm2} to derive $\lVert \ensuremath{V}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}} - (V-\ensuremath{\Gamma}\varepsilon)\rVert_\infty\le\sNorm{\ensuremath{V}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}} - V}_\infty+\ensuremath{\Gamma}\varepsilon \le (1+\ensuremath{\Gamma})\varepsilon$ and $ V(s;\ensuremath{\boldsymbol{\beta}}^\prime) = V(s) -\ensuremath{\Gamma}\varepsilon$. In addition, \eqref{ec:eq:FGLP-const-violation}, together with the fact that $V^*$ is a pointwise upper bound on $V(\ensuremath{\boldsymbol{\beta}}^\prime)$ (by Lemma \ref{ec:lem:optV-properties}), ensures that $\tallNorm{V^* - V(\ensuremath{\boldsymbol{\beta}}^\prime)}_{\infty} \le \frac{4\varepsilon}{(1-\gamma)}$, which holds with a probability of at least $1-\delta$ when $ H \ge H_{\nicefrac{\varepsilon}{3}}=\big\lceil 9\varepsilon^{-2} \tallNorm{\ensuremath{{\coefInf}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}} -\ensuremath{\coefInf_\alpha^{\raisemath{2pt}{\scaleto{\varepsilon}{3.5pt}}}}}_{\infty,\rho} ^2 \ (\ensuremath{{\Omega}}+\ensuremath{{{\Delta}}_{{\delta}}})^2 \big\rceil. $ \hfill \Halmos \endproof \section{Addendum to \S \ref{sec:Random Approximate Linear Program}: Constant factor in FALP sampling bound}\label{ec:sec:Analyzing an FALP Sampling Bound} In this section, we derive an FALP sampling bound without using the property that its VFA is a pointwise lower bound on $V^*$; this makes apparent the sharper constant that we obtain in Part (iii) of Theorem \ref{prop:ALP} by exploiting this property of the FALP VFA. Proposition \ref{ec:prop:naiveSamplingBound} leverages Part (ii) of Lemma \ref{ec:lem:high-prob-feas-soln} alone to establish a sampling bound for FALP. \begin{proposition} \label{ec:prop:naiveSamplingBound} Fix $\varepsilon>0$ and $\delta\in(0,1]$.
Then for any $N\ge N_\varepsilon$, where $N_\varepsilon$ is defined in \eqref{ec:eq:N_epsilon}, and any optimal solution $\ensuremath{\boldsymbol{\beta}^{\raisemath{2pt}{\scaleto{\mathrm{FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}}$ to FALP$\programIndex{N}$, it holds that \[\tallNorm{V^* - \ensuremath{V}(\ensuremath{\boldsymbol{\beta}^{\raisemath{2pt}{\scaleto{\mathrm{FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}})}_{1,\nu} \ \leq \ \frac{4\varepsilon}{(1-\gamma)}\] with a probability of at least $1-\delta$. \end{proposition} \proof{Proof.} Part (ii) of Lemma \ref{ec:lem:high-prob-feas-soln} ensures that, for $N \ge N_\varepsilon$, the vector $(\coefFeas{0} - \ensuremath{\Gamma}\varepsilon,\coefFeas{1},\dots,\coefFeas{N})$ is feasible to FALP$\programIndex{N}$ with a probability of at least $1-\delta$ and that $ \big\lVert\ensuremath{V}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}} -\big(V(\ensuremath{\boldsymbol{\beta}^{\scaleto{\mathrm{FEAS}}{3.5pt}}})- \ensuremath{\Gamma}\varepsilon\big)\big\rVert_\infty \le \nicefrac{2\varepsilon}{(1-\gamma)} $ holds with the same probability.
Using Part (ii) of Proposition \ref{prop:ELP-RKHS-gap}, it holds that \begin{align}\label{ec:whyBoundLoose} \tallNorm{V^* - \ensuremath{V}(\ensuremath{\boldsymbol{\beta}^{\raisemath{2pt}{\scaleto{\mathrm{FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}})}_{1,\nu} & \le \tallNorm{V^* - \ensuremath{V}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}}_{1,\nu} + \tallNorm{\ensuremath{V}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}} - \ensuremath{V}(\ensuremath{\boldsymbol{\beta}^{\raisemath{2pt}{\scaleto{\mathrm{FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}})}_{1,\nu} \nonumber\\ & \le \frac{2\varepsilon}{(1-\gamma)} + \tallNorm{\ensuremath{V}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}} - \big(V(\ensuremath{\boldsymbol{\beta}^{\scaleto{\mathrm{FEAS}}{3.5pt}}})- \ensuremath{\Gamma}\varepsilon\big)}_{1,\nu}\\ & \le \frac{4\varepsilon}{(1-\gamma)} \nonumber \end{align} with a probability of at least $1-\delta$, where the second inequality is valid since $(\coefFeas{0} - \ensuremath{\Gamma}\varepsilon,\coefFeas{1},\dots,\coefFeas{N})$ and $\ensuremath{\boldsymbol{\beta}^{\raisemath{2pt}{\scaleto{\mathrm{FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}}$ are feasible and optimal to FALP$\programIndex{N}$, respectively, and the last inequality follows from $\tallNorm{\ensuremath{V}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}} - \big(V(\ensuremath{\boldsymbol{\beta}^{\scaleto{\mathrm{FEAS}}{3.5pt}}})- \ensuremath{\Gamma}\varepsilon\big)}_{1,\nu}\le \big\lVert{\ensuremath{V}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}} -\big(V(\ensuremath{\boldsymbol{\beta}^{\scaleto{\mathrm{FEAS}}{3.5pt}}})- \ensuremath{\Gamma}\varepsilon)\big\rVert}_\infty \le {2\varepsilon}/{(1-\gamma)}$. 
\hfill \Halmos \endproof The sampling bound in Proposition \ref{ec:prop:naiveSamplingBound} minus the sampling bound in Part (iii) of Theorem \ref{prop:ALP} equals \[ \varepsilon^{-2} \ \tallNorm{\ensuremath{{\ensuremath{\boldsymbol{b}}}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}}}^2_{\infty,\rho} \ \frac{(1-\gamma)\ensuremath{{\Omega}}}{2} \ \bigg(\frac{3+\gamma}{2}\ensuremath{{\Omega}} + 2\ensuremath{{{\Delta}}_{{\delta}}}\bigg). \] In other words, the latter bound is tighter than the former one. This tightening is because we leveraged that $V^*$ is a state-wise upper bound on any continuous function satisfying constraints \eqref{constr:ELP} when obtaining the inequalities \eqref{ec:whyBoundTight} in the proof of Part (iii) of Theorem \ref{prop:ALP}. In contrast, this property is not used in the analogous inequalities \eqref{ec:whyBoundLoose} in Proposition \ref{ec:prop:naiveSamplingBound}, where we directly employ the results in Part (ii) of Lemma \ref{ec:lem:high-prob-feas-soln}. \section{Addendum to \S\ref{sec:Convergence of Self-guided ALPs via an Orthogonal Decomposition}: Applying FALP sampling analysis to FGLP}\label{ec:sec:Analyzing an FALP-based Sampling Bound for FGLP} In this section, we show that the direct application of the FALP sampling bound analysis to FGLP leads to a sampling bound that is weak and does not account for the quality of the self-guiding constraints in an insightful manner. To directly apply the analysis used for FALP to FGLP, we require that $V(s;\coefSGVecK{N})$ be at least $\kappa\vectorIndex{N}$ away from $V^*(s)$, that is, $\min_{s \in \ensuremath{\mathcal{S}}}|V^*(s) - V(s;\coefSGVecK{N})| \geq \kappa\vectorIndex{N} > 0$. The positivity of $\kappa\vectorIndex{N}$ may fail when $V^*( \hat s) = V( \hat s;\coefSGVecK{N})$ for a state $\hat s\in\ensuremath{\mathcal{S}}$. This is thus a restrictive assumption.
Proposition \ref{ec:prop:SG-ALP-coservative-bound} states a bound on the number of samples $M$ that follows directly from Proposition \ref{ec:prop:naiveSamplingBound} and is analogous to the number of samples $N+H$ in \S\ref{sec:Convergence of Self-guided ALPs via an Orthogonal Decomposition}. \begin{proposition}\label{ec:prop:SG-ALP-coservative-bound} Suppose we have an optimal solution $\coefSGVecK{N}$ to FGLP$_{\programIndex{N}}$ such that $\kappa\vectorIndex{N}>0$. Given $\varepsilon>0$ and $\delta\in(0,1]$, if \[ M \ge \bigg\lceil \min\{\varepsilon,\kappa\vectorIndex{N}\}^{-2} \ \Big(\frac{4}{1-\gamma}\Big)^2 \ \tallNorm{\ensuremath{{\ensuremath{\boldsymbol{b}}}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}}}_{\infty,\rho}^2(\ensuremath{{\Omega}}+\ensuremath{{{\Delta}}_{{\delta}}})^2 \bigg \rceil, \] then any optimal solution $\coefSGVecK{M}$ to FGLP$_{\programIndex{M}}$ satisfies \[ \tallNorm{ V^* - V(\coefSGVecK{M})}_{1,\nu} \leq \min\{\varepsilon,\kappa\vectorIndex{N}\},\] with a probability of at least $1-\delta$. \end{proposition} \proof{Proof.} Let $\varepsilon^\prime = {(1-\gamma)\min\{\varepsilon,\kappa\vectorIndex{N}\}}/{4}$. 
Using Part (ii) of Lemma \ref{ec:lem:high-prob-feas-soln} with the choice of $\varepsilon$ set to $\varepsilon^\prime$, we have that for any \[ M \ge N_{\varepsilon^\prime} =\bigg\lceil \min\{\varepsilon,\kappa\vectorIndex{N}\}^{-2} \ \Big(\frac{4}{1-\gamma}\Big)^2 \ \tallNorm{\ensuremath{{\ensuremath{\boldsymbol{b}}}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}}}_{\infty,\rho}^2(\ensuremath{{\Omega}}+\ensuremath{{{\Delta}}_{{\delta}}})^2 \bigg \rceil, \] vector $(\coefFeas{0} - \ensuremath{\Gamma}\varepsilon^\prime,\coefFeas{1},\dots,\coefFeas{M})$ is feasible to FALP$_{\programIndex{M}}$ and satisfies $\big\lVert\ensuremath{V}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}} -\big(V(\ensuremath{\boldsymbol{\beta}^{\scaleto{\mathrm{FEAS}}{3.5pt}}})-\ensuremath{\Gamma}\varepsilon^\prime \big) \big\rVert_\infty \le\nicefrac{2\varepsilon^\prime }{(1-\gamma)}$ with a probability of at least $1-\delta$. Next, by leveraging Part (ii) of Proposition \ref{prop:ELP-RKHS-gap} with $\varepsilon$ chosen as $\varepsilon^\prime $, we can write \[ \big\lVert V^* -\big(V(\ensuremath{\boldsymbol{\beta}^{\scaleto{\mathrm{FEAS}}{3.5pt}}})-\ensuremath{\Gamma}\varepsilon^\prime \big) \big \rVert_\infty \le \frac{2\varepsilon^\prime}{(1-\gamma)} + \big\lVert\ensuremath{V}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}} -\big(V(\ensuremath{\boldsymbol{\beta}^{\scaleto{\mathrm{FEAS}}{3.5pt}}})-\ensuremath{\Gamma}\varepsilon^\prime\big) \big\rVert_\infty \le \frac{4\varepsilon^\prime}{(1-\gamma)} = \min\{\varepsilon,\kappa\vectorIndex{N}\} \le\kappa\vectorIndex{N}, \] which holds with a probability of at least $1-\delta$. Thus, for all $s\in \ensuremath{\mathcal{S}}$, we obtain $V(s;\ensuremath{\boldsymbol{\beta}^{\scaleto{\mathrm{FEAS}}{3.5pt}}})-\ensuremath{\Gamma}\varepsilon^\prime\ge V^*(s) - \kappa\vectorIndex{N}$, and from the definition of $\kappa\vectorIndex{N}$, we have $V^*(s) -\kappa\vectorIndex{N} \ge V(s;\coefSGVecK{N})$. 
Hence, for all $s\in \ensuremath{\mathcal{S}}$, it holds that \begin{equation}\label{ec:eq:kappaFEasbile} V(s;\ensuremath{\boldsymbol{\beta}^{\scaleto{\mathrm{FEAS}}{3.5pt}}})-\ensuremath{\Gamma}\varepsilon^\prime \ \ge \ V^*(s) - \kappa\vectorIndex{N} \ \ge \ V(s;\coefSGVecK{N}), \end{equation} with a probability of at least $1-\delta$. This shows that for $M \ge N_{\varepsilon^\prime}$, the vector $(\coefFeas{0} - \ensuremath{\Gamma}\varepsilon^\prime,\coefFeas{1},\dots,\coefFeas{M})$ is feasible to constraints \eqref{FALPConst2} and \eqref{FALPConst1} of FGLP$_{\programIndex{M}}$ and it satisfies $\big\lVert V^* -\big(V(\ensuremath{\boldsymbol{\beta}^{\scaleto{\mathrm{FEAS}}{3.5pt}}})-\ensuremath{\Gamma}\varepsilon^\prime \big) \big\rVert_{\infty} \le \min\{\varepsilon,\kappa\vectorIndex{N}\}$, where these statements hold with probability at least $1-\delta$. Therefore, with the same probability, an optimal FGLP solution $\coefSGVecK{M}$ has a smaller $(1,\nu)$-norm deviation from $V^*$; that is, we have \[ \tallNorm{ V^* - V(\coefSGVecK{M})}_{1,\nu} \le \tallNorm{ V^* - (V(\ensuremath{\boldsymbol{\beta}^{\scaleto{\mathrm{FEAS}}{3.5pt}}})-\ensuremath{\Gamma}\varepsilon^\prime)}_{1,\nu}\le \tallNorm{ V^* - (V(\ensuremath{\boldsymbol{\beta}^{\scaleto{\mathrm{FEAS}}{3.5pt}}})-\ensuremath{\Gamma}\varepsilon^\prime)}_{\infty}\le \min\{\kappa\vectorIndex{N},\varepsilon\}. \] \hfill \Halmos \endproof The sampling bound in Proposition \ref{ec:prop:SG-ALP-coservative-bound} is similar to the FALP sampling bound but differs in two key ways: (i) it has an additional constant $(\nicefrac{4}{(1-\gamma)})^2$ and (ii) $\varepsilon$ is replaced by $\min\{\kappa\vectorIndex{N},\varepsilon\}$. The additional constant $(\nicefrac{4}{(1-\gamma)})^2$ stems from constructing, in inequality \eqref{ec:eq:kappaFEasbile}, a feasible solution to the self-guiding constraints. The intuition behind the replacement of $\varepsilon$ by $\min\{\kappa\vectorIndex{N},\varepsilon\}$ is as follows.
We assumed that $\min_{s \in \ensuremath{\mathcal{S}}}|V^*(s) - V(s;\coefSGVecK{N})| \geq \kappa\vectorIndex{N}$, that is, $V(s;\coefSGVecK{N})$ is below $V^*$ by at least $\kappa\vectorIndex{N}$ at all states. Therefore, a conservative approach to satisfy the self-guiding constraints is to sample sufficiently many random basis functions such that $V(s;\coefSGVecK{M})$ is within $\min\{\kappa\vectorIndex{N},\varepsilon\}$ of $V^*(s)$ at all states. \looseness=-1 \section{Deterministic and Average-Cost Semi-MDPs}\label{ec:sec:FGLP Sampling Bound for Deterministic and Average-Cost Semi-MDPs} In this section, we discuss how our results for discounted-cost MDPs can be adapted to deterministic and average-cost semi-MDPs. Consider the average-cost linear program \eqref{AvgELPObj}-\eqref{AvgELPConst} and our bias function approximation in equation \eqref{eq:biasApproximation}. Suppose $B$ denotes the sampling batch size. For a number of sampled random basis functions $N \geq 2B$, the average-cost analogue of FGLP$\programIndex{N}$ is \begin{align*} \sup_{\eta',\ensuremath{\boldsymbol{\beta}}} \ \ & \eta' \nonumber \\[-9pt] & \eta' T(s,a) +\sum_{j=1}^{J}\beta_{1,j}(s^\prime_j - s_j) + \sum_{i=1}^{N} \beta_{2,i}\big(\varphi(s^\prime;\theta_i) - \varphi(s;\theta_i) \big) \ \le \ c(s,a), && \forall (s,a) \in\ensuremath{\sSpace\times\mathcal{A}_s}, \nonumber\\ &\beta_0 -\sum_{j=1}^{J}\beta_{1,j}s_j -\sum_{i=1}^{N} \beta_{2,i}\varphi(s;\theta_{i}) \ \ge \ u(s;\coefACFGN{N-B}), \qquad \forall s \in\ensuremath{\mathcal{S}},\nonumber \end{align*} where the vector $(\eta^\AC{FG\text{-}AC},\coefACFGN{N-B})$ is an optimal solution to FGLP$\programIndex{N-B}$ and $u(\cdot;\coefACFGN{N-B}) = -\infty$ for $N=B$. The analogue of FALP$\programIndex{N}$ can be obtained by removing the second set of constraints, namely the self-guiding constraints, from the above linear program.
The average-cost FALP$\programIndex{N}$ is thus \begin{align*} \sup_{\eta',\ensuremath{\boldsymbol{\beta}}} \ \ & \eta' \nonumber \\[-9pt] & \eta' T(s,a) +\sum_{j=1}^{J}\beta_{1,j}(s^\prime_j - s_j) + \sum_{i=1}^{N} \beta_{2,i}\big(\varphi(s^\prime;\theta_i) - \varphi(s;\theta_i) \big) \ \le \ c(s,a), && \forall (s,a) \in\ensuremath{\sSpace\times\mathcal{A}_s}. \nonumber \end{align*} For the theoretical results in this section, we require the following assumptions to hold for the exact linear program and the average-cost FALP. Such assumptions are standard in the literature (see, e.g., Lemma 4.1 of \citealt{klabjan2007RidgeBasisFunctionALP}). \begin{assumption}\label{ec:asm:slater} There is a feasible solution $(\eta^\AC{S},u^\AC{S})$ to the linear program \eqref{AvgELPObj}-\eqref{AvgELPConst} such that \[ \zeta \coloneqq \inf_{(s,a)\in\ensuremath{\sSpace\times\mathcal{A}_s}}\{ c(s,a) - \eta^\AC{S} T(s,a) - u^\AC{S}(s) + u^\AC{S}(s^\prime) \}, \] is strictly positive. Moreover, solution $u^\AC{S}$ can be obtained in set $\mathcal{R}_{\infty}(\varphi,\rho)$. \end{assumption} \begin{assumption}\label{ec:asm:slaterFALP} There is a feasible solution $(\eta^\AC{FA\text{-}S},\ensuremath{\boldsymbol{\beta}}^\AC{FA\text{-}S})$ to average-cost FALP such that \[ \zeta^\AC{FA} \coloneqq \inf_{(s,a)\in\ensuremath{\sSpace\times\mathcal{A}_s}} \bigg\{ c(s,a) - \eta^\AC{FA\text{-}S} T(s,a) - \sum_{j=1}^{J}\beta^\AC{FA\text{-}S}_{1,j}(s^\prime_j - s_j) - \sum_{i=1}^{N} \beta^\AC{FA\text{-}S}_{2,i}\big(\varphi(s^\prime;\theta_i) - \varphi(s;\theta_i) \big) \bigg\}, \] is strictly positive. Moreover, Assumption \ref{asm:random basis function} holds. \end{assumption} Assumption \ref{ec:asm:slater} ensures that there is a Slater feasible point $(\eta^\AC{S},u^\AC{S})$ to the average-cost exact linear program \eqref{AvgELPObj}-\eqref{AvgELPConst} such that all of its constraints are strictly satisfied. In addition, $u^\AC{S}$ belongs to set $\mathcal{R}_{\infty}(\varphi,\rho)$. 
Assumption \ref{ec:asm:slaterFALP} imposes a similar Slater condition for the average-cost FALP as well as the standard assumptions on the random basis functions from the main text. Lemma \ref{ec:lem:Slatter} utilizes these Slater points to construct a feasible solution to program \eqref{AvgELPObj}-\eqref{AvgELPConst} starting from an $\varepsilon$-feasible solution. \begin{lemma}\label{ec:lem:Slatter} Given $\varepsilon>0$, if solution $(\eta,u)$ is $\varepsilon$-feasible to linear program \eqref{AvgELPObj}-\eqref{AvgELPConst}, then there is a feasible solution, denoted $(\hat \eta,\hat u)$, to this linear program such that \begin{equation}\label{ec:eq:slaterGap} |\eta-\hat \eta|\le \frac{\varepsilon}{(\zeta + \varepsilon)}|\eta -\eta^\AC{S}| \ \text{ and } \ \sNorm{u - \hat{u}}_\infty \le \frac{\varepsilon}{(\zeta + \varepsilon)} \sNorm{u - u^\AC{S}}_\infty. \end{equation} \end{lemma} \proof{Proof.} Let $R \coloneqq \nicefrac{\varepsilon}{\zeta + \varepsilon} \in(0,1)$ and define $R^\prime\coloneqq 1- R = \nicefrac{\zeta}{\zeta + \varepsilon} $. Since $(\eta,u)$ is $\varepsilon$-feasible to program \eqref{AvgELPObj}-\eqref{AvgELPConst}, we have $ \eta T(s,a) + u(s) - u(s^\prime) \le c(s,a) + \varepsilon$ for all $(s,a)\in\ensuremath{\sSpace\times\mathcal{A}_s}$. Also, from the definition of $\zeta$, we have for all $(s,a)\in\ensuremath{\sSpace\times\mathcal{A}_s}$ that $\eta^\AC{S} T(s,a) + u^\AC{S}(s) - u^\AC{S}(s^\prime) \le c(s,a) - \zeta$. The convex combination $(\hat \eta,\hat u)\coloneqq\big(R\eta^\AC{S} + R^\prime\eta ,Ru^\AC{S} + R^\prime u \big)$ satisfies \begin{align*} \big (R\eta^\AC{S} + R^\prime\eta\big)T(s,a) + \big(R u^\AC{S}(s) + R^\prime u(s)\big) - \big(R u^\AC{S}(s^\prime) + R^\prime u(s^\prime)\big) & \le R\big(c(s,a) - \zeta\big) + R^\prime\big(c(s,a) + \varepsilon\big) \\ &= (R+R^\prime )c(s,a) -R\zeta + R^\prime\varepsilon\\ &= c(s,a) \end{align*} for all $(s,a)\in\ensuremath{\sSpace\times\mathcal{A}_s}$.
Thus, $(\hat \eta,\hat u)$ is feasible to optimization \eqref{AvgELPObj}-\eqref{AvgELPConst} and satisfies \eqref{ec:eq:slaterGap}. \hfill \Halmos \endproof Proposition \ref{ec:prop:AC} leverages Lemma \ref{ec:lem:Slatter} to tailor Part (i) of Proposition \ref{prop:ELP-RKHS-gap} to the deterministic and average-cost semi-MDPs. Let $(\eta^\AC{AC},u^\AC{AC})$ be an optimal solution to the linear program \eqref{AvgELPObj}-\eqref{AvgELPConst}. \begin{proposition}\label{ec:prop:AC} Given $\varepsilon>0$, there is an intercept $\eta^\AC{FE\text{-}AC}\in\mathbb{R}$ and a function $u^\AC{FE\text{-}AC}(\cdot)= b_0^\AC{FE\text{-}AC} + \inprod{\ensuremath{\boldsymbol{b}}^\AC{FE\text{-}AC}}{\varphi(\cdot)} \in\mathcal{R}_{\infty}(\varphi,\rho)$ such that $(\eta^\AC{FE\text{-}AC}, u^\AC{FE\text{-}AC})$ is feasible to optimization \eqref{AvgELPObj}-\eqref{AvgELPConst} and \[ |\eta^\AC{AC} - \eta^\AC{FE\text{-}AC}| \le \frac{\varepsilon}{\varepsilon+\zeta}|\eta^\AC{FE\text{-}AC} - \eta^\AC{S}| \quad \ \text{ and } \ \quad \sNorm{u^\AC{AC} - u^\AC{FE\text{-}AC}}_\infty \le \frac{\varepsilon}{\varepsilon+\zeta}\Big( \varepsilon + \sNorm{u^\AC{S} - u^\AC{AC}}_\infty \Big). \] \end{proposition} \proof{Proof.} Since $u^\AC{AC}$ is a continuous function and the random basis function $\varphi$ is universal by Assumption \ref{asm:random basis function}, Definition \ref{defn:randBasisFns} guarantees the existence of a finite $C\ge 0$ and a function $\hat u\in\ensuremath{\mathcal{R}_C(\varphi,\rho)}$ such that $\sNorm{u^\AC{AC} - \hat u}_\infty \le \varepsilon$. Using the feasibility (optimality) of $u^\AC{AC}$ to linear program \eqref{AvgELPObj}-\eqref{AvgELPConst}, for all $(s,a)\in\ensuremath{\sSpace\times\mathcal{A}_s}$, we have \begin{align*} c(s,a) \ \ge \ \eta^\AC{AC} T(s,a) + u^\AC{AC}(s) - u^\AC{AC}(s^\prime) \ \ge \ \eta^\AC{AC} T(s,a) + \hat u(s) - \hat u(s^\prime) -2\varepsilon.
\end{align*} Thus, $(\eta^\AC{AC},\hat u)$ is $2\varepsilon$-feasible to program \eqref{AvgELPObj}-\eqref{AvgELPConst}. By applying Lemma \ref{ec:lem:Slatter} to this $2\varepsilon$-feasible solution, we obtain a feasible solution, denoted $(\eta^\AC{FE\text{-}AC},u^\AC{FE\text{-}AC})$, to program \eqref{AvgELPObj}-\eqref{AvgELPConst} that satisfies \[ |\eta^\AC{AC} - \eta^\AC{FE\text{-}AC}| \le \frac{\varepsilon}{\varepsilon+\zeta}|\eta^\AC{AC} - \eta^\AC{S}|, \] as well as \[ \sNorm{\hat u - u^\AC{FE\text{-}AC}}_\infty \le \frac{\varepsilon}{\varepsilon+\zeta}\sNorm{\hat u - u^\AC{S}}_\infty \le \frac{\varepsilon}{\varepsilon+\zeta}\Big( \varepsilon + \sNorm{u^\AC{S} - u^\AC{AC}}_\infty \Big), \] where the last inequality is obtained using the triangle inequality and $\sNorm{u^\AC{AC} - \hat u}_\infty \le \varepsilon$. Moreover, it is straightforward to verify that $u^\AC{FE\text{-}AC}\in\mathcal{R}_{\infty}(\varphi,\rho)$ given that $u^\AC{S}\in\mathcal{R}_{\infty}(\varphi,\rho)$ and $\hat u\in\ensuremath{\mathcal{R}_C(\varphi,\rho)}$. \hfill \Halmos \endproof Proposition \ref{ec:prop:ac-FALP} establishes a sampling bound for the average-cost FALP and extends our FALP sampling bound for the discounted-cost MDPs reported in Part (iii) of Theorem \ref{prop:ALP}. \begin{proposition}\label{ec:prop:ac-FALP} Given $\varepsilon>0$, suppose $u^\AC{FE\text{-}AC}$ and $\ensuremath{\boldsymbol{b}}^\AC{FE\text{-}AC}$ follow their definitions in Proposition \ref{ec:prop:AC} and satisfy the statement of that proposition.
Then for $\delta\in(0,1]$ and $N \ge N^\AC{AC}_\varepsilon \coloneqq \big\lceil \varepsilon^{-2}\tallNorm{\ensuremath{\boldsymbol{b}}^\AC{FE\text{-}AC}}_{\infty,\rho}^2(\ensuremath{{\Omega}}+\ensuremath{{{\Delta}}_{{\delta}}})^2 \big \rceil $, there is a feasible solution $(\eta^\AC{FA\text{-}AC},\ensuremath{\boldsymbol{\beta}}^\AC{FA\text{-}AC}\vectorIndex{N})$ to the average-cost FALP$\programIndex{N}$ such that \[\small |\eta^\AC{FE\text{-}AC} - \eta^\AC{FA\text{-}AC}|\le \frac{\varepsilon}{(\zeta^\AC{FA} + \varepsilon)}|\eta^\AC{FE\text{-}AC} -\eta^\AC{FA\text{-}S}|\] \text{and} \[\big\lVert u^\AC{FE\text{-}AC} - u(\ensuremath{\boldsymbol{\beta}}^\AC{FA\text{-}AC}) \big\rVert_\infty \le \varepsilon+ \frac{\varepsilon}{(\zeta^\AC{FA} + \varepsilon)}\Big( \varepsilon + \sNorm{u^\AC{FE\text{-}AC} - u(\ensuremath{\boldsymbol{\beta}}^\AC{FA\text{-}S})}_\infty\Big) \] with a probability of at least $1-\delta$. \end{proposition} \proof{Proof.} Similar to Part (i) of Lemma \ref{ec:lem:high-prob-feas-soln}, we employ Theorem 3.2 of \cite{rahimi2008uniform} to approximate the inner product $\inprod{\ensuremath{\boldsymbol{b}}^\AC{FE\text{-}AC}}{\varphi(\cdot)}$ by sampling $N$ random basis functions. This theorem ensures that there is a function of the form $\sum_{i=1}^{N} \beta^{\prime\prime}_i \varphi(s;\theta_i)$ (e.g., equation (5) in \citealp{rahimi2008uniform}) such that \[ \bigg\lVert\inprod{\ensuremath{\boldsymbol{b}}^\AC{FE\text{-}AC}}{\varphi(s)} -\sum_{i=1}^{N} \beta^{\prime\prime}_i \varphi(s;\theta_i) \bigg\rVert_\infty \ \le \ \frac{\sNorm{\ensuremath{\boldsymbol{b}}^\AC{FE\text{-}AC}}_{\infty,\rho}}{\sqrt{N}}(\ensuremath{{\Omega}}+\ensuremath{{{\Delta}}_{{\delta}}}) \] with a probability of at least $1-\delta$.
Equivalently, for $N \ge N^\AC{AC}_\varepsilon$, if we let $u^{\prime\prime}(s) = b_0^\AC{FE\text{-}AC} + \sum_{i=1}^{N} \beta^{\prime\prime}_i \varphi(s;\theta_i)$, then it holds that $\big\lVert u^\AC{FE\text{-}AC} -u^{\prime\prime} \big\rVert_\infty \le \varepsilon$ with a probability of at least $1-\delta$. With the same probability, since $(\eta^\AC{FE\text{-}AC}, u^\AC{FE\text{-}AC})$ is feasible to optimization \eqref{AvgELPObj}-\eqref{AvgELPConst}, we can write \begin{align*} c(s,a) \ \ge \ \eta^\AC{FE\text{-}AC} T(s,a) + u^\AC{FE\text{-}AC}(s) - u^\AC{FE\text{-}AC}(s^\prime) \ge \ \eta^\AC{FE\text{-}AC} T(s,a) + u^{\prime\prime} (s) - u^{\prime\prime} (s^\prime) -2\varepsilon, \end{align*} which shows that $(\eta^\AC{FE\text{-}AC}, u^{\prime\prime})$ is $2\varepsilon$-feasible to optimization \eqref{AvgELPObj}-\eqref{AvgELPConst}. In addition, this shows that the vector $(\eta^\AC{FE\text{-}AC},\beta^{\prime\prime}_1,\dots, \beta^{\prime\prime}_N)$ is $2\varepsilon$-feasible to FALP$\programIndex{N}$ with a probability of at least $1-\delta$.
Since we assume there is a Slater point $(\eta^\AC{FA\text{-}S},\ensuremath{\boldsymbol{\beta}}^\AC{FA\text{-}S})$ for FALP$\programIndex{N}$, Lemma \ref{ec:lem:Slatter} can be adapted to find a feasible solution $(\eta^\AC{FA\text{-}AC}, \ensuremath{\boldsymbol{\beta}}^\AC{FA\text{-}AC})$ to FALP$\programIndex{N}$ (as a convex combination of the feasible solutions $(\eta^\AC{FA\text{-}S},\ensuremath{\boldsymbol{\beta}}^\AC{FA\text{-}S})$ and $(\eta^\AC{FE\text{-}AC},\beta^{\prime\prime}_1,\dots, \beta^{\prime\prime}_N)$) such that \[ |\eta^\AC{FE\text{-}AC} - \eta^\AC{FA\text{-}AC}|\le \frac{\varepsilon}{(\zeta^\AC{FA} + \varepsilon)}|\eta^\AC{FE\text{-}AC} -\eta^\AC{FA\text{-}S}|, \] and \[ \sNorm{u^{\prime\prime} - u(\ensuremath{\boldsymbol{\beta}}^\AC{FA\text{-}AC})}_\infty \le \frac{\varepsilon}{(\zeta^\AC{FA} + \varepsilon)} \sNorm{u^{\prime\prime} - u(\ensuremath{\boldsymbol{\beta}}^\AC{FA\text{-}S})}_\infty \le \frac{\varepsilon}{(\zeta^\AC{FA} + \varepsilon)}\Big( \varepsilon + \sNorm{u^\AC{FE\text{-}AC} - u(\ensuremath{\boldsymbol{\beta}}^\AC{FA\text{-}S})}_\infty\Big), \] where the above approximation bound holds with a probability of at least $1-\delta$. Thus, with the same probability, if we use the error bound $\big\lVert u^\AC{FE\text{-}AC} -u^{\prime\prime} \big\rVert_\infty \le \varepsilon$ and the above approximation gap, we obtain \begin{align*}\resizebox{.98\textwidth}{!}{$ \big\lVert u^\AC{FE\text{-}AC} - u(\ensuremath{\boldsymbol{\beta}}^\AC{FA\text{-}AC}) \big\rVert_\infty \le \big\lVert u^\AC{FE\text{-}AC} - u^{\prime\prime} \big\rVert_\infty+\big\lVert u^{\prime\prime} - u(\ensuremath{\boldsymbol{\beta}}^\AC{FA\text{-}AC}) \big\rVert_\infty \le \varepsilon+ \frac{\varepsilon}{(\zeta^\AC{FA} + \varepsilon)}\Big( \varepsilon + \sNorm{u^\AC{FE\text{-}AC} - u(\ensuremath{\boldsymbol{\beta}}^\AC{FA\text{-}S})}_\infty\Big). 
$}\end{align*} \hfill \Halmos \endproof It is possible to combine our results in Proposition \ref{ec:prop:ac-FALP} with the orthogonal projection ideas in \S\ref{sec:Self-guided Approximate Linear Programs} to construct an $\varepsilon$-feasible and $\varepsilon$-optimal solution to the average-cost FGLP. We omit a formal statement and proof of these results for brevity. \section{Addendum to Computational Study}\label{ec:sec:Further Discussion on Numerical Studie} In \S\ref{ec:sec:A Lower Bound Estimator for Constraint-sampled ALPs}, we elaborate on how we estimate valid lower bounds using a VFA from FALP or FGLP with sampled constraints in the discounted-cost MDP setting. We followed this procedure to obtain the bounds reported for the perishable inventory control application in \S\ref{sec:PIC-Observations}. In \S\ref{ec:sec:FALP and FGLP Setup for Generalized Joint Replenishment}, we discuss the implementation details of our constraint generation and greedy policy optimization to solve the average-cost versions of FALP and FGLP on the GJR application. We also provide a table with detailed results to supplement the figure in \S\ref{sec:GJR-Observations}. \subsection{A Valid Lower Bound Estimate for Constraint-sampled ALPs} \label{ec:sec:A Lower Bound Estimator for Constraint-sampled ALPs} This material discusses how ideas in \cite{lin2017ContViolLearning} can be leveraged to estimate a valid lower bound while using a constraint-sampled version of a generic ALP, and in particular, the FALP and FGLP models in this paper. 
For any VFA $V(\ensuremath{\boldsymbol{\beta}})$, we define the function \[ y(s,a;\ensuremath{\boldsymbol{\beta}})\coloneqq \ \ensuremath{\mathbb{E}}_{\chi}[V(\ensuremath{\boldsymbol{\beta}}) ] + \frac{1}{1-\gamma} \Big( c(s,a) + \gamma \ensuremath{\mathbb{E}}\big[V(s^\prime;\ensuremath{\boldsymbol{\beta}}) \ | \ s,a\big] - V(s;\ensuremath{\boldsymbol{\beta}}) \Big), \] which encodes the violation of the ALP constraints for a given $\ensuremath{\boldsymbol{\beta}}$ at a state-action pair $(s,a)$. Suppose $\ensuremath{\boldsymbol{\beta}^{\raisemath{0pt}{\scaleto{\mathrm{CS\text{-}FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}}$ is an optimal solution of a constraint-sampled FALP$\programIndex{N}$. Minimizing the function $y(s,a;\ensuremath{\boldsymbol{\beta}^{\raisemath{0pt}{\scaleto{\mathrm{CS\text{-}FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}})$ over state-action pairs corresponds to finding the most violated constraint of FALP$\programIndex{N}$ at the optimal solution $\ensuremath{\boldsymbol{\beta}^{\raisemath{0pt}{\scaleto{\mathrm{CS\text{-}FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}}$, since the term $\ensuremath{\mathbb{E}}_{\chi}[V(\ensuremath{\boldsymbol{\beta}}) ]$ is independent of the state and action and the term ${( c(s,a) + \gamma \ensuremath{\mathbb{E}}[V(s^\prime;\ensuremath{\boldsymbol{\beta}}) | s,a] - V(s;\ensuremath{\boldsymbol{\beta}}) )}/{(1-\gamma)}$ is the constraint slack.
Thus, if the minimum value of function $y(s,a;\ensuremath{\boldsymbol{\beta}^{\raisemath{0pt}{\scaleto{\mathrm{CS\text{-}FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}})$ over state-action pairs is strictly less than $\ensuremath{\mathbb{E}}_{\chi}[V(\ensuremath{\boldsymbol{\beta}^{\raisemath{0pt}{\scaleto{\mathrm{CS\text{-}FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}})]$, then $\ensuremath{\boldsymbol{\beta}^{\raisemath{0pt}{\scaleto{\mathrm{CS\text{-}FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}}$ violates a constraint of FALP$\programIndex{N}$. Otherwise, $\ensuremath{\boldsymbol{\beta}^{\raisemath{0pt}{\scaleto{\mathrm{CS\text{-}FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}}$ is feasible to FALP$\programIndex{N}$. Under mild conditions, function $y$ is Lipschitz with constant $\mathrm{L}_y>0$. Lemma \ref{lem:Lin_et_al_lower_bound} is directly based on Lemma EC.3 in \citealt{lin2017ContViolLearning} and provides a lower bound on the optimal cost. For a given VFA $V(\ensuremath{\boldsymbol{\beta}})$ and $\lambda\in(0,1]$, we define a measure $Y$ on $\ensuremath{\sSpace\times\mathcal{A}_s}$ as $Y(s,a;\ensuremath{\boldsymbol{\beta}},\lambda) \coloneqq \exp(\nicefrac{-y(s,a;\ensuremath{\boldsymbol{\beta}})}{\lambda})$. 
\begin{lemma}[Lemma EC.3, \citealt{lin2017ContViolLearning}] \label{lem:Lin_et_al_lower_bound} For all $\lambda \in (0,1]$ and $\ensuremath{\boldsymbol{\beta}}$, we have $ \mathrm{PC}(\pi^*) \ge \ensuremath{\mathbb{E}}_{Y}\big[y(s,a;\ensuremath{\boldsymbol{\beta}})\big] + \lambda( \Lambda + {d_{\scaleto{\saSpace}{4.5pt}}} \ln(\lambda)) $ where \[ \Lambda \coloneqq -\ln\bigg[ \Gamma\bigg(1+\frac{{d_{\scaleto{\saSpace}{4.5pt}}}}{2}\bigg) \ \Big(R_{\ensuremath{\sSpace\times\mathcal{A}_s}} \sqrt{\uppi}\Big)^{-{d_{\scaleto{\saSpace}{4.5pt}}}} \ \int_{\ensuremath{\sSpace\times\mathcal{A}}} \diff (s,a)\bigg] - \mathrm{L}_y(R_{\ensuremath{\sSpace\times\mathcal{A}_s}}+\diamSSaspace), \] and ${d_{\scaleto{\saSpace}{4.5pt}}}$ is ${d_{\scaleto{\sSpace}{3.5pt}}}+{d_{\scaleto{\mathcal{A}}{3.5pt}}}$. Function $\Gamma$ is the standard gamma function, $\uppi$ is the Archimedes constant, $R_{\ensuremath{\sSpace\times\mathcal{A}_s}}>0$ is the radius of the largest ball contained in $\ensuremath{\sSpace\times\mathcal{A}}$, and $\diamSSaspace$ is the diameter of $\ensuremath{\sSpace\times\mathcal{A}}$. \end{lemma} Given a solution $\ensuremath{\boldsymbol{\beta}^{\raisemath{0pt}{\scaleto{\mathrm{CS\text{-}FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}}$, Lemma \ref{lem:Lin_et_al_lower_bound} suggests that a valid lower bound can be computed by estimating the expected value $\ensuremath{\mathbb{E}}_{Y}\big[y(s,a;\ensuremath{\boldsymbol{\beta}})\big]$ and a constant term. For our numerical experiments in \S\ref{sec:Perishable Inventory Control}, we estimate $\ensuremath{\mathbb{E}}_{Y}\big[y(s,a;\ensuremath{\boldsymbol{\beta}^{\raisemath{0pt}{\scaleto{\mathrm{CS\text{-}FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}})\big]$ using the Metropolis-Hastings method with $4000$ samples, generating $8$ Markov chains, each of length $1500$, where we burn the first $1000$ samples and use the last $500$.
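As an illustrative sketch, the Metropolis-Hastings estimation described above can be mocked up as follows; the constraint-violation function \texttt{y}, the unit-box state-action domain, the proposal step size, and the temperature $\lambda$ are hypothetical placeholders rather than the exact implementation used in our experiments.

```python
import numpy as np

def mh_estimate(y, dim, lam=0.5, n_chains=8, length=1500, burn=1000,
                step=0.1, seed=0):
    """Estimate E_Y[y] for the measure Y(s,a) proportional to
    exp(-y(s,a)/lam), via random-walk Metropolis over state-action
    vectors in the unit box [0, 1]^dim (a placeholder domain)."""
    rng = np.random.default_rng(seed)
    samples = []
    for _ in range(n_chains):
        x = rng.uniform(size=dim)  # initial state-action point
        yx = y(x)
        for t in range(length):
            # symmetric Gaussian proposal, clipped to the box
            prop = np.clip(x + step * rng.normal(size=dim), 0.0, 1.0)
            yp = y(prop)
            # accept with probability min(1, exp((yx - yp) / lam))
            if rng.uniform() < np.exp(min(0.0, (yx - yp) / lam)):
                x, yx = prop, yp
            if t >= burn:  # discard the first `burn` iterates per chain
                samples.append(yx)
    return float(np.mean(samples))
```

With the default settings this retains $8 \times 500 = 4000$ samples, matching the counts reported above; a lower bound then follows from Lemma \ref{lem:Lin_et_al_lower_bound} by adding the constant $\lambda(\Lambda + {d_{\scaleto{\saSpace}{4.5pt}}}\ln\lambda)$.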
Parameter $\Lambda$ can be easily evaluated for the instances in Table \ref{table:PIC Instances} since the perishable inventory control cost function is Lipschitz with constant $\mathrm{L}_c >0$. In fact, it is easy to verify that $\mathrm{L}_c = 2(\gamma^Lc_o\bar{a} +c_h\bar{a} + c_b\underline{s} +c_d\bar{a}+c_l\bar{a})$ and consequently, $\mathrm{L}_y = \nicefrac{(4\sNorm{\ensuremath{\boldsymbol{\beta}}}_1+\mathrm{L}_c)}{1-\gamma}$. We choose the other parameters defining $\Lambda$ as follows: ${d_{\scaleto{\saSpace}{4.5pt}}}=4, R_{\ensuremath{\sSpace\times\mathcal{A}_s}}=\nicefrac{\bar{a}}{2},$ and $\diamSSaspace=3\bar{a}^2 + (\underline{s}-\bar{a})^2$. We set $\lambda = \nicefrac{1}{(\Lambda + {d_{\scaleto{\saSpace}{4.5pt}}}) }$ but one can cross-validate this parameter to possibly obtain tighter bounds. \subsection{Constraint Generation and Greedy Policy Optimization for the Generalized Joint Replenishment Experiments} \label{ec:sec:FALP and FGLP Setup for Generalized Joint Replenishment} We employed constraint generation to solve the average-cost variants of FALP and FGLP with random stump basis functions. Consider the FALP formulation in \S\ref{ec:sec:FGLP Sampling Bound for Deterministic and Average-Cost Semi-MDPs} and recall the decomposition $\hat{\eta} + \sum_{j =1}^{J}\beta_{1,j}\lambda_j$ for the long-run optimal average cost $\eta(\ensuremath{\boldsymbol{\lambda}})$. Let $(\ensuremath{\hat{\eta}^{\scaleto{\mathrm{INI}}{3pt}}\vectorIndex{N}},\ensuremath{\coefACN{N}^{\scaleto{\mathrm{INI}}{3pt}}})$ be a solution to a version of FALP$\programIndex{N}$ with constraints enforced for state-action pairs $(s,a) \in\hat\ensuremath{\mathcal{S}}\times\hat\ensuremath{{\mathcal{A}}_{s}}$ alone, where $\hat\ensuremath{\mathcal{S}}\times\hat\ensuremath{{\mathcal{A}}_{s}}$ is a sampled subset of $\ensuremath{\sSpace\times\mathcal{A}_s}$. 
Given solution $(\ensuremath{\hat{\eta}^{\scaleto{\mathrm{INI}}{3pt}}\vectorIndex{N}},\ensuremath{\coefACN{N}^{\scaleto{\mathrm{INI}}{3pt}}})$, the following separation problem can be solved to find a state-action pair, if any, that violates the FALP constraints corresponding to $\ensuremath{\sSpace\times\mathcal{A}_s}$: \[ \Psi(\ensuremath{\hat{\eta}^{\scaleto{\mathrm{INI}}{3pt}}\vectorIndex{N}},\ensuremath{\coefACN{N}^{\scaleto{\mathrm{INI}}{3pt}}}) \coloneqq \min_{(s,a)\in\ensuremath{\sSpace\times\mathcal{A}_s}} \bigg\{ c(s,a) - \ensuremath{\hat{\eta}^{\scaleto{\mathrm{INI}}{3pt}}\vectorIndex{N}} \ T(s,a) - \sum_{j=1}^{J}\beta^{\scaleto{\mathrm{INI}}{3pt}}_{1,j}a_j - \sum_{i=1}^{N} \beta^{\scaleto{\mathrm{INI}}{3pt}}_{2,i}\big(\varphi(s^\prime;\theta_i) - \varphi(s;\theta_i) \big) \bigg\}, \] where we use the definition of GJR transition function, that is, $s^\prime = s+a -\lambda T(s,a)$, to derive this program. The above separation problem is based on the average-cost FALP constraints shown in \S\ref{ec:sec:FGLP Sampling Bound for Deterministic and Average-Cost Semi-MDPs} and can also be found in (11) of \cite{adelman2012GJR}. Motivated by the mixed integer linear programming reformulation of the separation problem (with no holding cost) in \S 3.1 of \cite{adelman2012GJR}, we discuss the analogous mixed integer linear programming formulation $\Psi(\ensuremath{\hat{\eta}^{\scaleto{\mathrm{INI}}{3pt}}\vectorIndex{N}},\ensuremath{\coefACN{N}^{\scaleto{\mathrm{INI}}{3pt}}}) $ for the FALP separation problem when using random stump basis functions. 
Recalling the transition time $T(s,a) = \min_j \{\nicefrac{s_j+a_j}{\lambda_j}\}$ and the bias function approximation \eqref{eq:biasApproximation}, this formulation is \begin{align*} \Psi(\ensuremath{\hat{\eta}^{\scaleto{\mathrm{INI}}{3pt}}\vectorIndex{N}},\ensuremath{\coefACN{N}^{\scaleto{\mathrm{INI}}{3pt}}}) \equiv\\ \min_{(G,Q,Q',s,a,t,s',Z,Z')} & \ \bigg(c^\prime + \sum_{j=1}^{J} c^{\prime\prime}_jG_j \bigg) - \bigg( \ensuremath{\hat{\eta}^{\scaleto{\mathrm{INI}}{3pt}}\vectorIndex{N}}t + \sum_{j=1}^{J}\beta^{\scaleto{\mathrm{INI}}{3pt}}_{1,j}a_j && \hspace{-.8cm}+ \sum_{i=1}^{N} \beta^{\scaleto{\mathrm{INI}}{3pt}}_{2,i}\big(Z_i^\prime - Z_i \big) \bigg) \\ & \sum_{j=1}^{J} G_j \ge 1; && a_j \le \bar{s}_j G_j, &&& j=1,2,\ldots,J; \\ & s^\prime_j = s_j +a_j -\lambda_jt, \qquad j=1,2,\ldots,J; && s_j + a_j \le \bar{s}_j, &&& j=1,2,\ldots,J; \\ & \sum_{j=1}^{J} a_j \le \bar{a}; && s_j \le \bar{s}_j(1-Q_j), &&& j=1,2,\ldots,J;\\ & \sum_{j=1}^{J} Q_j \ge 1; && s^\prime_j \le \bar{s}_j(1-Q^\prime_j), &&& j=1,2,\ldots,J;\\ & \sum_{j=1}^{J} Q^\prime_j \ge 1; \qquad && Q_j \le G_j, &&& j=1,2,\ldots,J;\\ & Z_i = \operatornamewithlimits{sgn}(s_{q_i} - \omega_i), \qquad i=1,\dots,N; \hspace{.5cm} && Z^\prime_i = \operatornamewithlimits{sgn}(s^\prime_{q_i} - \omega_i), &&& i=1,\dots,N; \\ & G, Q , Q^\prime \text{ binary;} && Z,Z^\prime \text{ integer;} \\ &s,a,t,s^\prime \text{ nonnegative}. \end{align*} In the above mixed integer linear program, the variable $G_j$ equals one if item $j$ is replenished and zero otherwise. The constraint $\sum_{j=1}^{J} G_j \ge 1$ ensures that at least one item is replenished. If $G_j=1$ for some $j\in\{1,2,\ldots,J\}$, then the constraint $a_j \le \bar{s}_jG_j$ allows the replenishment decision $a_j$ to take any feasible value, and if $G_j=0$, it forces $a_j =0$. The constraints $s^\prime_j = s_j +a_j -\lambda_jt$ model the MDP transition function. The constraints $s_j + a_j \le \bar{s}_j$ and $\sum_{j=1}^{J} a_j \le \bar{a}$ ensure that the state-action pair $(s,a)$ respects the inventory and replenishment capacities, respectively.
For $j\in\{1,2,\ldots,J\}$, if the binary variable $Q_j$ is one, then item $j$ is stocked out at the current decision time, i.e., $s_j = 0$, and if $Q^\prime_j$ is one, then this item will be stocked out at the next decision epoch, i.e., $s^\prime_j = 0$. The constraints $\sum_{j=1}^{J} Q_j \ge 1$ and $\sum_{j=1}^{J} Q^\prime_j \ge 1$ ensure that at least one item is stocked out at the current and at the next decision epoch, respectively. If $G_j=0$ for some item $j$, then this item cannot be stocked out, and thus $Q_j =0$ via the constraint $Q_j \le G_j$; otherwise, $Q_j \in\{0,1\}$, that is, we can replenish either a stocked-out item or an item with a non-zero inventory level. The integer variables $Z_i\in\{-1,0,1\}$ and $Z_i^\prime\in\{-1,0,1\}$ model the values of the random basis functions $\varphi(s;\theta_i)$ and $\varphi(s^\prime;\theta_i)$, respectively. A sign function can be implemented in a solver as a piecewise constant function using a big-M formulation or approximately as a piecewise linear function. After encountering numerical issues with the first option, we used the function \texttt{setPWLObj} in Gurobi (\citealt{gurobi}) to model sign functions. Specifically, we implemented the following piecewise linear approximation: \[ \operatornamewithlimits{sgn}(x) \approx \begin{cases} 1 & x\ge \epsilon; \\ \frac{x}{\epsilon} & x\in[-\epsilon,\epsilon];\\ -1 & x\le -\epsilon. \end{cases} \] To be consistent, we also use the above approximation when constructing our VFAs. The separation problem is key to constraint generation.
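For concreteness, the piecewise-linear surrogate of the sign function above can be written as a small standalone helper (a hypothetical illustration; the actual model uses Gurobi's \texttt{setPWLObj} encoding inside the solver):

```python
def sgn_approx(x, eps=0.01):
    """Piecewise-linear approximation of sgn(x): saturates at +/-1
    outside [-eps, eps] and is linear in between."""
    if x >= eps:
        return 1.0
    if x <= -eps:
        return -1.0
    return x / eps
```

Shrinking \texttt{eps} tightens the approximation to the exact sign function at the cost of numerical conditioning in the solver.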
Given a solution $(\ensuremath{\hat{\eta}^{\scaleto{\mathrm{INI}}{3pt}}\vectorIndex{N}},\ensuremath{\coefACN{N}^{\scaleto{\mathrm{INI}}{3pt}}})$ obtained by solving FALP$\programIndex{N}$ with the constraints in $\hat\ensuremath{\mathcal{S}}\times\hat\ensuremath{{\mathcal{A}}_{s}}$, if the optimal objective value $\Psi(\ensuremath{\hat{\eta}^{\scaleto{\mathrm{INI}}{3pt}}\vectorIndex{N}},\ensuremath{\coefACN{N}^{\scaleto{\mathrm{INI}}{3pt}}})$ is nonnegative, the current solution $(\ensuremath{\hat{\eta}^{\scaleto{\mathrm{INI}}{3pt}}\vectorIndex{N}},\ensuremath{\coefACN{N}^{\scaleto{\mathrm{INI}}{3pt}}})$ is feasible to the continuum of FALP$\programIndex{N}$ constraints. Otherwise, the state-action component $(s^{\scaleto{\mathrm{SEP}}{3pt}},a^{\scaleto{\mathrm{SEP}}{3pt}})$ of an optimal solution to $\Psi(\ensuremath{\hat{\eta}^{\scaleto{\mathrm{INI}}{3pt}}\vectorIndex{N}},\ensuremath{\coefACN{N}^{\scaleto{\mathrm{INI}}{3pt}}})$ corresponds to an FALP$\programIndex{N}$ constraint violated by $(\ensuremath{\hat{\eta}^{\scaleto{\mathrm{INI}}{3pt}}\vectorIndex{N}},\ensuremath{\coefACN{N}^{\scaleto{\mathrm{INI}}{3pt}}})$. In this case, we update $\hat\ensuremath{\mathcal{S}}\times\hat\ensuremath{{\mathcal{A}}_{s}}$ to $\hat\ensuremath{\mathcal{S}}\times\hat\ensuremath{{\mathcal{A}}_{s}} \cup\{(s^{\scaleto{\mathrm{SEP}}{3pt}},a^{\scaleto{\mathrm{SEP}}{3pt}})\}$ and re-solve FALP$\programIndex{N}$ with the new set of constraints. We repeat this procedure until the violation becomes negligible. The optimal value of FALP$\programIndex{N}$ in the last iteration of this process is a lower bound on the optimal cost. To estimate the policy cost associated with a bias function approximation, we follow Algorithm 1 in \cite{adelman2012GJR}. The core of this algorithm is to solve the greedy policy optimization \eqref{eqn:GreedyOpt} via a mixed-integer linear program similar to the separation problem. We thus solve a modification of greedy policy optimization known as $K$-step greedy policy optimization.
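The constraint-generation loop just described can be sketched generically as follows, where \texttt{solve\_falp} and \texttt{separate} are hypothetical callables standing in for the restricted FALP solve and the separation MILP, respectively:

```python
def constraint_generation(solve_falp, separate, initial_pairs,
                          tol=1e-6, max_iters=100):
    """Alternate between solving FALP restricted to the sampled
    state-action pairs and adding the most violated constraint."""
    pairs = list(initial_pairs)
    solution = None
    for _ in range(max_iters):
        solution = solve_falp(pairs)          # restricted FALP solve
        violation, pair = separate(solution)  # min slack over all (s, a)
        if violation >= -tol:                 # negligible violation: stop
            return solution, pairs
        pairs.append(pair)                    # add violated pair, re-solve
    return solution, pairs
```

When the loop terminates with no significant violation, the objective value of the final restricted solve is a valid lower bound, as in the procedure above.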
Given the bias function approximation $u(s;\ensuremath{\boldsymbol{\beta}}) = \beta_0 - \sum_{j=1}^{J} \beta_{1,j} s_j - \sum_{i=1}^{N}\beta_{2,i}\varphi(s;\theta_{i})$ and $\eta = \hat{\eta} + \sum_{j=1}^{J}\beta_{1,j}\lambda_j$ computed by FALP$\programIndex{N}$ in \S\ref{ec:sec:FGLP Sampling Bound for Deterministic and Average-Cost Semi-MDPs}, the action taken by the $K$-step greedy policy $\pi_{g,K}(s_t;\hat{\eta} ,\ensuremath{\boldsymbol{\beta}})$ at the current stage $t$ and state $\hat s_t$ is defined by the $a_t$ component of an optimal solution to \begin{align*}\small \min_{(a_t,s_{t},\dots,a_{t+K-1},s_{t+K-1},s_{t+K} )} \ & \ \sum_{t^\prime = t}^{t+K-1} \big(c(s_{t^\prime},a_{{t^\prime}}) - \eta T(s_{t^\prime},a_{{t^\prime}})\big) \ +\ u(s_{t+K};\ensuremath{\boldsymbol{\beta}}) \\ \text{s.t.} & \hspace{1.2cm} s_{t} = \hat{s}_t,\\ & \hspace{1.2cm} s_{t^{\prime}+1}= s_{t^{\prime}} +a_{t^{\prime}} -\lambda T(s_{t^{\prime}},a_{t^{\prime}}), && \forall t^{\prime} = t,\dots, t+K-1, \\ &\hspace{1.2cm} a_{t'}\in\mathcal{A}_{s_{t'}}, && \forall t^{\prime} = t,\dots, t+K-1. \end{align*} Due to our choice of random stump bases, we can efficiently solve the $K$-step greedy optimization by casting it as a mixed integer linear program similar to the optimization problem (PD) in \cite{adelman2012GJR}. We do not repeat this program here as it is analogous to the math program $\Psi(\ensuremath{\hat{\eta}^{\scaleto{\mathrm{INI}}{3pt}}\vectorIndex{N}},\ensuremath{\coefACN{N}^{\scaleto{\mathrm{INI}}{3pt}}})$. For our implementation, we use $K=4$ and set the number of stages for simulating the policy in Algorithm 1 of \cite{adelman2012GJR} (denoted $N$ in that paper) to 4000. We use $\epsilon = 0.01$ in the approximation to the sign function. We follow a similar constraint separation strategy for FGLP applied only to constraints \eqref{FALPConst1}; that is, we do not separate the self-guiding constraints \eqref{FALPConst2} as their exact feasibility is not needed to obtain a valid lower bound.
Instead, we enforce constraints \eqref{FALPConst2} only on a set of sampled states, which includes 5,000 initially sampled states plus those encountered during the constraint separation process applied to constraints \eqref{FALPConst1}. \begin{table}[t] \centering \caption{Benchmarking performance of FGLP against AF and RLP on the GJR instances.} \begin{footnotesize} \adjustbox{max width=\textwidth}{ \scriptsize \begin{threeparttable}\renewcommand{\arraystretch}{1.5} \begin{tabular}{c|lr@{\hskip 8pt}rrrr@{\hskip 8pt}rrrrc@{\hskip 18pt}lr@{\hskip 8pt}rrrr@{\hskip 8pt}rrrr} \cline{1-22} \multirow{2}{*}{{VFA}} & \multirow{2}{*}{\tiny\rotatebox[origin=c]{90}{Instance}} & \multirow{2}{*}{\tiny\rotatebox[origin=c]{90}{$\#$ bases}} & \multicolumn{3}{c}{Bound and gap} & & \multicolumn{3}{c}{Improvement} & \multirow{2}{*}{\tiny\rotatebox[origin=c]{90}{Runtime}} & & \multirow{2}{*}{\tiny\rotatebox[origin=c]{90}{Instance}} & \multirow{2}{*}{\tiny\rotatebox[origin=c]{90}{$\#$ bases}} & \multicolumn{3}{c}{Bound and gap} & & \multicolumn{3}{c}{Improvement} & \multirow{2}{*}{\tiny\rotatebox[origin=c]{90}{Runtime}} \\ \cline{4-6} \cline{8-10} \cline{15-17} \cline{19-21} & & & \multicolumn{1}{c}{OBJ} & \multicolumn{1}{c}{PC} & \multicolumn{1}{c}{$\tau$} & & \multicolumn{1}{c}{OBJ} & \multicolumn{1}{c}{PC} & \multicolumn{1}{c}{$\tau$} & & & & & \multicolumn{1}{c}{OBJ} & \multicolumn{1}{c}{PC} & \multicolumn{1}{c}{$\tau$} & & \multicolumn{1}{c}{OBJ} & \multicolumn{1}{c}{PC} & \multicolumn{1}{c}{$\tau$} & \\ \cline{1-11} \cline{13-22} {AF} & \multirow{3}{*}{2} & {4} & {83.8} & {86.1} & {2.4} & {} & {$-$} & {$-$} & {$-$} & {$-$} & & \multirow{3}{*}{6} & {4} & {35.5} & {36.5} & {2.8} & {} & {$-$} & {$-$} & {$-$} & {$-$} \\[-.15cm] {RLP} & & {18} & {85.5} & {86.1} & {0.7} & {} & {1.8} & {0.0} & {1.7} & {1.0} & & %
 & {3} & {36.0} & {36.5} & {1.4} & {} & {1.4} & {0.0} & {1.3} & {0.1} \\[-.15cm] {FGLP} & & {14} & {85.0} & {86.1} & {1.2} & {} & {1.2} & {0.0} & {1.2} & {1.0} & & %
 & {6} & {35.9} &
{36.5} & {1.5} & {} & {1.3} & {0.0} & {1.2} & {0.2} \\ {AF} & \multirow{3}{*}{9} & {6} & {83.6} & {88.2} & {5.9} & {} & {$-$} & {$-$} & {$-$} & {$-$} & & \multirow{3}{*}{14} & {6} & {31.5} & {34.3} & {8.3} & {} & {$-$} & {$-$} & {$-$} & {$-$} \\[-.15cm] {RLP} & & {47} & {86.6} & {88.2} & {2.1} & {} & {3.9} & {0.0} & {3.8} & {26.6} & & % & {122} & {33.1} & {34.3} & {3.8} & {} & {4.8} & {0.0} & {4.5} & {120} \\[-.15cm] {FGLP} & & {32} & {86.5} & {88.2} & {1.8} & {} & {3.9} & {0.0} & {3.8} & {26.7} & & % & {53} & {33.1} & {34.3} & {4.0} & {} & {4.5} & {0.0} & {4.3} & {98.0} \\ {AF} & \multirow{3}{*}{15} & {6} & {30.7} & {33.4} & {8.1} & {} & {$-$} & {$-$} & {$-$} & {$-$} & & \multirow{3}{*}{18} & {8} & {93.3} & {99.8} & {6.4} & {} & {$-$} & {$-$} & {$-$} & {$-$} \\[-.15cm] {RLP} & & {77} & {32.4} & {33.9} & {3.9} & {} & {5.6} & {-1.3} & {4.3} & {72.7} & & % & {95} & {96.1} & {99.8} & {3.4} & {} & {3.1} & {0.0} & {3.0} & {180} \\[-.15cm] {FGLP} & & {52} & {32.6} & {33.4} & {2.4} & {} & {5.9} & {0.0} & {5.7} & {52.8} & & % & {28} & {97.4} & {99.8} & {2.3} & {} & {4.2} & {0.0} & {4.1} & {73.8} \\ {AF} & \multirow{3}{*}{19} & {8} & {111.7} & {117.4} & {6.1} & {} & {$-$} & {$-$} & {$-$} & {$-$} & & \multirow{3}{*}{22} & {8} & {90.3} & {102.6} & {11.6} & {} & {$-$} & {$-$} & {$-$} & {$-$} \\[-.15cm] {RLP} & & {46} & {115.3} & {117.4} & {2.1} & {} & {4.1} & {0.0} & {4.0} & {84.4} & & % & {92} & {98.6} & {102.6} & {3.4} & {} & {8.5} & {0.0} & {8.2} & {146.6} \\[-.15cm] {FGLP} & & {28} & {115.1} & {117.4} & {2.3} & {} & {3.9} & {0.0} & {3.8} & {100.0} & & % & {64} & {97.7} & {102.6} & {4.4} & {} & {7.6} & {0.0} & {7.2} & {145.8} \\ {AF} & \multirow{3}{*}{23} & {8} & {92.9} & {103.6} & {9.9} & {} & {$-$} & {$-$} & {$-$} & {$-$} & & \multirow{3}{*}{25} & {8} & {33.4} & {35.0} & {4.8} & {} & {$-$} & {$-$} & {$-$} & {$-$} \\[-.15cm] {RLP} & & {54} & {101.3} & {103.6} & {2.0} & {} & {8.0} & {0.0} & {7.8} & {43.0} & & % & {81} & {33.9} & {35.0} & {3.3} & {} & {1.4} & {0.1} & {1.5} & 
{109.7} \\[-.15cm] {FGLP} & & {57} & {99.4} & {103.8} & {3.6} & {} & {6.6} & {0.0} & {6.3} & {75.8} & & % & {17} & {33.8} & {35.0} & {3.7} & {} & {1.1} & {0.0} & {1.0} & {109.6} \\ {AF} & \multirow{3}{*}{26} & {8} & {22.8} & {25.5} & {10.4} & {} & {$-$} & {$-$} & {$-$} & {$-$} & & \multirow{3}{*}{27} & {8} & {33.7} & {33.7} & {5.5} & {} & {$-$} & {$-$} & {$-$} & {$-$} \\[-.15cm] {RLP} & & {51} & {25.0} & {25.5} & {1.8} & {} & {8.7} & {0.0} & {8.5} & {83.8} & & % & {21} & {33.0} & {33.7} & {2.0} & {} & {3.6} & {0.0} & {3.6} & {40.2} \\[-.15cm] {FGLP} & & {35} & {25.0} & {25.5} & {1.8} & {} & {8.7} & {0.0} & {8.6} & {79.5} & & % & {15} & {29.7} & {30.2} & {1.3} & {} & {4.3} & {0.0} & {4.2} & {5.9} \\ {AF} & \multirow{3}{*}{32} & {10} & {122.4} & {131.3} & {6.9} & {} & {$-$} & {$-$} & {$-$} & {$-$} & & \multirow{3}{*}{35} & {10} & {70.8} & {80.1} & {10.9} & {} & {$-$} & {$-$} & {$-$} & {$-$} \\[-.15cm] {RLP} & & {77} & {126.7} & {131.3} & {3.5} & {} & {3.5} & {0.0} & {3.4} & {201.5} & & % & {59} & {75.7} & {80.1} & {4.9} & {} & {6.2} & {0.0} & {5.9} & {195.5} \\[-.15cm] {FGLP} & & {44} & {125.3} & {131.3} & {4.6} & {} & {2.4} & {0.0} & {4.6} & {240} & & % & {59} & {75.9} & {80.1} & {4.7} & {} & {6.5} & {0.0} & {6.1} & {195.7} \\ {AF} & \multirow{3}{*}{36} & {10} & {70.4} & {79.5} & {10.7} & {} & {$-$} & {$-$} & {$-$} & {$-$} & & \multirow{3}{*}{37} & {10} & {70.4} & {79.4} & {10.7} & {} & {$-$} & {$-$} & {$-$} & {$-$} \\[-.15cm] {RLP} & & {59} & {76.2} & {79.5} & {3.6} & {} & {7.4} & {0.0} & {7.0} & {141.8} & & % & {73} & {75.8} & {79.4} & {4.1} & {} & {6.8} & {0.0} & {6.4} & {162.1} \\[-.15cm] {FGLP} & & {56} & {75.9} & {79.5} & {4.1} & {} & {7.0} & {0.0} & {6.7} & {195.4} & & % & {67} & {74.9} & {79.5} & {5.0} & {} & {6.0} & {0.0} & {5.7} & {195.5} \\ {AF} & \multirow{3}{*}{41} & {10} & {26.3} & {28.5} & {7.5} & {} & {$-$} & {$-$} & {$-$} & {$-$} & & \multirow{3}{*}{42} & {10} & {26.3} & {28.5} & {7.5} & {} & {$-$} & {$-$} & {$-$} & {$-$} \\[-.15cm] {RLP} & & {62} & 
{27.5} & {28.5} & {2.9} & {} & {4.7} & {0.0} & {4.6} & {178.1} & & %
 & {60} & {27.4} & {28.5} & {3.4} & {} & {4.2} & {0.0} & {4.1} & {240.0} \\[-.15cm] {FGLP} & & {17} & {27.7} & {28.5} & {2.5} & {} & {5.2} & {0.0} & {5.0} & {185.5} & & %
 & {23} & {27.6} & {28.5} & {2.9} & {} & {4.8} & {0.0} & {4.6} & {216.0} \\ \cline{1-22} \end{tabular} \end{threeparttable} }\end{footnotesize} \label{tab:SG-ALPs-on-GJR} \end{table} Table \ref{tab:SG-ALPs-on-GJR} summarizes the detailed results of AF, RLP, and FGLP on the GJR instances described in Table \ref{table:GJR Instances}. Figure \ref{fig:gjrcomparison} is generated using this table. The label ``AF'' refers to using an affine bias function in an ALP, i.e., $N=0$ in equation \eqref{eq:biasApproximation}, ``RLP'' is the bias function approximation generated by Algorithm 2 in \cite{adelman2012GJR}, and ``FGLP'' refers to our bias function approximation. Column ``$\#$ bases'' reports the average (over 5 trials) number of basis functions. For AF, the number of bases equals $J$, while for RLP and FGLP it depends on the dynamically generated basis functions. We report, in Table \ref{tab:SG-ALPs-on-GJR}, the average lower bound, policy cost, and optimality gap for each of these models. To assess the value of basis function generation on the lower bound and the policy cost, we average the following percentages: \[ \frac{\mathrm{OBJ}(\coefSGVecK{N}) - \mathrm{OBJ}(\ensuremath{\boldsymbol{\beta}}^{\mathrm{AF}})}{\mathrm{OBJ}(\ensuremath{\boldsymbol{\beta}}^{\mathrm{AF}})}\times 100\%, \ \ \text{ and } \ \ \frac{ \mathrm{PC}(\ensuremath{\boldsymbol{\beta}}^{\mathrm{AF}}) - \mathrm{PC}(\coefSGVecK{N})}{\mathrm{PC}(\ensuremath{\boldsymbol{\beta}}^{\mathrm{AF}})}\times 100\%, \] where $\ensuremath{\boldsymbol{\beta}}^{\mathrm{AF}}$ refers to an optimal solution to the FALP model with an affine bias function. These percentages are also computed and averaged for the method in \cite{adelman2012GJR}.
We also report, in Table \ref{tab:SG-ALPs-on-GJR}, the difference in the optimality gaps between RLP and AF as well as between FGLP and AF; these percentages are labeled ``improvement'' in the table. We finally report the average runtime (in minutes) of each algorithm. RLP and FGLP compute a near-optimal policy with at most a $5\%$ average optimality gap for all eighteen instances. Both methods improve the lower bounds (on average) by up to $8.7\%$ (see instance 26), while the policy costs are almost never improved. This observation is consistent with \cite{adelman2012GJR}. For instance ${15}$, RLP worsens the average upper bound by $1.3\%$, while this is not the case for FGLP. As the number of items increases, both algorithms require more time to compute near-optimal policies. There is no clear pattern showing that a particular method converges faster. For instances $2,6,9,22,25,$ and $35$, the average runtimes of both methods are almost identical. The average runtime of FGLP on instances $14,15,18,26,27,$ and $42$ is smaller than that of RLP, while RLP is faster than FGLP on instances $19,23,32,36,37,$ and $41$. \section{Introduction}\label{intro} Computing high-quality control policies in sequential decision making problems is an important task across several application domains. Markov decision processes (MDPs; \citealt{puterman1994MDP}) provide a powerful framework to find optimal policies in such problems but are often intractable to solve exactly due to their large state and action spaces or the presence of high-dimensional expectations (see \S 1.2 and \S 4.1 of \citealp{powell2007ADP}). Therefore, a class of approximate dynamic programming (ADP) approaches instead approximate the value function of MDPs and use the resulting approximation to obtain control policies in simulation (e.g., see \citealt{bertsekas1996neuro}).
Approximate linear programming \citep{schweitzer1985ALP,farias2003ALP} is a math-programming-based ADP approach for computing value function approximations (VFAs) that has been applied to a wide variety of domains, including operations research, reinforcement learning, and artificial intelligence \citep{adelman2003price,guestrin2003efficient,forsell2006approximate,desai2009smoothed,adelman2013dynamic,tong2013approximate,nadarajah2015relaxALP,mladenov2017approximate,balseiro2019multiagent,blado2019relaxation}. VFAs in an approximate linear program (ALP) are represented as a linear combination of functions, referred to as basis functions, defined on the MDP state space. Solving an ALP thus provides the weights of the linear combination of basis functions defining a VFA, which can be used to compute a control policy, as in other VFA-based ADP methods. In addition, an appealing property of the ALP VFA is that it provides a lower bound on the optimal policy cost, which can be used to compute an optimality gap for the ALP policy as well as for other heuristic policies.
\looseness=-1 \begin{figure} \caption{ALP implementation strategies.} \begin{minipage}{.45\textwidth} \resizebox{1\textwidth}{!}{ \begin{tikzpicture}[auto] \node [null] (null) {\footnotesize{(a) Standard}}; \node [block, below of=null] (INI) {\footnotesize{\textbf{(i)} Select basis functions using domain knowledge}}; \node [block, below of=INI,node distance=1.5cm] (ALP) {\footnotesize{\textbf{(ii)} Solve ALP}}; \node [block, below left=1.5cm and 3cm of ALP,node distance=2cm] (GAP) {\footnotesize{\textbf{(iii)} Compute optimality gap}}; \node [block, below right=1.5cm and 3cm of ALP,node distance=2cm] (BF) {\footnotesize{\textbf{(iv)} Modify the basis functions heuristically}}; \node [init, below of=GAP, node distance=1.5cm](Terminal) {\footnotesize{Stop and return VFA}}; \path [line,dashed] (INI) -- (ALP); \path [line,dashed] (ALP) -- (GAP); \path [line,dashed] (GAP) -- node {\footnotesize{If gap is small}}(Terminal); \path [line,dashed] (GAP) -- node {\footnotesize{If gap is large}}(BF); \path [line,dashed] (BF) -- (ALP); \end{tikzpicture} } \end{minipage} \hfill \begin{minipage}{.45\textwidth} \resizebox{1\textwidth}{!}{ \begin{tikzpicture}[auto] \node [null] (null) {\footnotesize{(b) Proposal}}; \node [block,below of=null] (INI) {\footnotesize{\textbf{(i)} Sample random basis functions}}; \node [block, below of=INI,node distance=1.5cm] (ALP) {\footnotesize{\textbf{(ii)} Solve FALP or FGLP}}; \node [block, below left=1.5cm and 3cm of ALP,node distance=2cm] (GAP) {\footnotesize{\textbf{(iii)} Compute optimality gap}}; \node [block, below right=1.5cm and 3cm of ALP,node distance=2cm] (BF) {\footnotesize{\textbf{(iv)} Sample additional random basis functions}}; \node [init, below of=GAP, node distance=1.5cm](Terminal) {\footnotesize{Stop and return VFA}}; \path [line,dashed] (INI) -- (ALP); \path [line,dashed] (ALP) -- (GAP); \path [line,dashed] (GAP) -- node {\footnotesize{If gap is small}}(Terminal); \path [line,dashed] (GAP) -- node {\footnotesize{If gap is 
large}}(BF); \path [line,dashed] (BF) -- (ALP); \end{tikzpicture} } \end{minipage} \label{fig:how-to-use-ALP} \end{figure} The steps involved in a standard implementation of ALP are summarized in Figure \ref{fig:how-to-use-ALP}(a). Step (i) selects basis functions using domain knowledge. Step (ii) solves the ALP formulated using these basis functions. Step (iii) evaluates the value of the ALP control policy in simulation and computes its optimality gap using the ALP lower bound. Step (iv) modifies the basis functions and repeats the process from Step (ii) if the optimality gap is large; otherwise, the process terminates and the incumbent ALP VFA is returned. Solving the ALP in Step (ii) for a fixed set of basis functions is challenging since this program has a large number of constraints, and it has been a topic of active research. It can be approached, for example, using techniques such as constraint generation, constraint sampling, and constraint-violation learning (see \citealp{lin2017ContViolLearning} for a recent overview of ALP solution techniques). The initial selection and potential modification of basis functions in Steps (i) and (iv), respectively, are implementation bottlenecks when using ALP, but this issue has received limited attention in the literature (\citealp{klabjan2007RidgeBasisFunctionALP}, \citealp{adelman2012GJR}, and \citealp{bhat2012NonParaALP}). In this paper, we focus on side-stepping the need for basis function engineering in ALP for infinite horizon discounted-cost MDPs with continuous value functions as well as state and action spaces, which covers a broad class of applications. We also provide extensions to average-cost semi-MDPs. Our starting point is a novel linear programming reformulation of a discounted-cost MDP, which we refer to as the feature-based exact linear program (FELP).
The MDP value function at each state in FELP is represented with arbitrary accuracy as an integral of the product between a weighting function and basis functions (i.e., features) parametrized by a continuous vector. This integral can be viewed as an infinite weighted linear combination of a continuum of basis functions, referred to as random basis functions (or random features in machine learning). Examples of random bases include Fourier and random stump functions, which are defined using cosine and sign functions, respectively \citep{rahimi2008large}. The weight associated with each random basis function in FELP is a variable, which makes this linear program contain an infinite number of variables. The variable space of FELP can be approximated by replacing the integral over random basis functions by a sample average approximation based on a known distribution associated with these functions. The resulting model, dubbed the feature-based approximate linear program (FALP), is an ALP with a VFA that is a linear combination of randomly sampled basis functions, where each linear combination weight is a variable. Constructing FALP with Fourier and random stump basis functions involves sampling from uniform or normal distributions. The randomized nature of FALP suggests the modified ALP implementation process illustrated in Figure \ref{fig:how-to-use-ALP}(b). In this scheme, basis function selection and modification in steps (i) and (iv) of the standard implementation approach (Figure \ref{fig:how-to-use-ALP}(a)) have been replaced by inexpensive sampling. We establish high-probability bounds on the number of samples required for the FALP VFA to be close to the exact value function, where closeness is measured (as done in the ALP literature) using a weighted one-norm involving a state-relevance distribution over the state space that appears in the definition of the FALP objective function. 
In addition, we show under mild conditions that the sequences of FALP lower bounds and policy costs converge to the optimal policy cost as the number of sampled random basis functions tends to infinity. Despite this asymptotic property, neither the FALP lower bound nor its policy cost may improve monotonically as more basis functions are sampled. While the non-monotonicity of the FALP lower bound can be handled easily, the undesirable behavior of policy cost fluctuation is harder to tackle. We relate this behavior to a potential inconsistency between two frequencies defined over the state space: the first is specified by the visit frequency of the FALP policy and the second is associated with the state-relevance distribution appearing in the FALP objective function.\looseness=-1 We propose a mechanism for the FALP sequence to self-guide its VFAs in a manner that addresses the non-monotonic behavior of bounds. Specifically, we enforce ``self-guiding'' constraints that require the VFA being computed at a given iteration to be greater than or equal to the VFA available from the immediately preceding iteration. We refer to FALP with these additional constraints as the feature-based guided linear program (FGLP) and embed it in lieu of FALP in the iterative process of Figure {\ref{fig:how-to-use-ALP}(b)}. The sequence of VFAs associated with FGLP provides monotonically non-decreasing lower bounds, as well as policy costs with a monotonically non-increasing worst-case bound. The latter property mitigates policy cost fluctuation, and neither property is satisfied by the analogous sequence of FALP VFAs. The ``price'' of these desirable FGLP features is the larger number of constraints in this model compared to FALP. This price is reflected in the sampling bound that we derive for FGLP, which has an extra term that is absent in our FALP sampling bound. Nevertheless, existing techniques for solving FALP can be easily applied to tackle FGLP without significant computational overhead. 
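To make the self-guiding idea concrete, the sketch below encodes the constraints $V(s;\beta) \ge V(s;\beta^{\mathrm{prev}})$ at a finite sample of states as additional rows of a standard-form linear program, $A_{\mathrm{ub}}\,\beta \le b_{\mathrm{ub}}$. This is an illustrative Python fragment; the function and variable names are hypothetical and not taken from our implementation.

```python
import numpy as np

def self_guiding_rows(Phi, beta_prev):
    """Encode V(s; beta) >= V(s; beta_prev) at m sampled states as
    linear-program rows A_ub @ beta <= b_ub (illustrative sketch).

    Phi       : (m, N+1) feature matrix at m sampled states; column 0
                is the intercept, columns 1..N are random basis values.
    beta_prev : (K+1,) weights of the preceding iteration's VFA, with
                K <= N (the first K basis functions are shared).
    """
    K = beta_prev.shape[0]
    v_prev = Phi[:, :K] @ beta_prev   # previous VFA at each sampled state
    A_ub = -Phi                       # -V(s; beta) <= -V(s; beta_prev)
    b_ub = -v_prev
    return A_ub, b_ub
```

Because these rows are linear in $\beta$, appending them to the FALP constraint matrix of a given iteration yields the corresponding FGLP, and any LP-based FALP solution technique applies unchanged.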
\looseness=-1 We validate the numerical performance of FALP and FGLP on two applications. In particular, we show how FALP and FGLP combined with the appropriate classes of random basis functions can be solved using known techniques in the literature. We evaluate FGLP and FALP in terms of their policy cost fluctuation, lower bound quality, and policy performance. \looseness=-1 The first application is a variant of the perishable inventory control problem considered in \citet{karaesmen2011PerishInv}, which gives rise to a challenging discounted-cost infinite-horizon MDP. Our experiments employ the sixteen instances from \citet{lin2017ContViolLearning} and consider as a benchmark the ALP from that paper, which embeds basis functions tailored to the application. We find that the FALP policy cost fluctuates significantly, that is, the policy cost can worsen as the iterations of the procedure shown in Figure \ref{fig:how-to-use-ALP}(b) progress. In contrast, the FGLP policy cost fluctuations are marginal on most instances. This finding supports our addition of self-guiding constraints. Moreover, the FGLP policy optimality gap is at most 5\% across these instances and improves the previously known gaps from the ALP in \citet{lin2017ContViolLearning} by up to 8\%. The second application we consider relates to generalized joint replenishment. \citet{adelman2012GJR} model it as an average-cost infinite-horizon semi-MDP and approximately solve the model using an ALP with basis functions generated in a dynamic manner exploiting problem-specific structure. We consider eighteen instances from this paper for our experiments. \citet{adelman2012GJR} show that a near-optimal policy can be obtained on these instances using a linear VFA, but the dynamic generation of additional basis functions is necessary to improve the lower bound. 
In other words, policy cost fluctuation is not a concern here, thus providing a suitable set of instances for us to assess whether FGLP (with randomly sampled basis functions) can deliver lower bounds competitive with an existing adaptive basis function generation approach. We find that this is indeed the case, which is encouraging since FGLP does not exploit application structure.\looseness=-1 Our results show that random basis functions provide an effective way to overcome the implementation burden of basis function engineering when using ALP on the two applications that we consider. Our research also has broader relevance to solving large-scale MDPs in other applications, in particular, providing a mechanism to obtain application-agnostic policies and lower bounds. Such an application-agnostic mechanism has two important benefits. First, it makes the use of ALP more accessible to non-experts in an application domain. Second, it provides a benchmark to assess the value of procedures that exploit application-specific structure. To facilitate such benchmarking, we have made Python code implementing the approaches developed in this paper as well as related benchmarks publicly available. \subsection{Novelty and Contributions} Research on ALPs predominantly assumes a fixed set of basis functions. Work that relaxes this assumption, as we do, is limited. \citet{klabjan2007RidgeBasisFunctionALP} develop a convergent algorithm to generate basis functions for semi-Markov decision processes that requires the solution of a challenging nonlinear program. Building on this work, \citet{adelman2012GJR} consider an innovative approximation algorithm for basis function generation in a generalized joint replenishment problem. Their algorithm leverages structure and numerical experience on this application. Our approach differs from this work because it uses low-cost sampling to generate basis functions and is also application-agnostic. 
\looseness=-1 \citet{bhat2012NonParaALP} side-step basis function selection when computing a VFA by applying the kernel trick (see, e.g., Chapter 5 of \citealp{mohri2012foundations}) to replace inner products of such functions in the dual of a regularized ALP relaxation. Guarantees on the approximation quality of their VFA depend on the kernel and an idealized sampling distribution that assumes knowledge of an optimal policy. Our approach instead works directly on the primal ALP formulation and samples over the parameters of a class of basis functions as opposed to state-action pairs. Moreover, the sampling distribution is readily available in our framework and the approximation guarantees that we develop for FALP and FGLP are not linked to the knowledge of an optimal policy. \looseness=-1 Overall, the exact representation of an MDP based on random bases, that is, FELP, and its approximations FALP and FGLP are novel models that avoid pre-specifying basis functions. Further, we are not aware of any prior efforts in the ALP literature to develop bounds on the number of sampled basis functions, as we do, to obtain a good VFA. The end-to-end procedure in this paper to obtain policies and bounds from the proposed models is also intended to ease the use of ALP in several ways. First, as discussed before, it provides an application-agnostic basis function generation approach that makes ALP accessible to users who may not have the domain knowledge to hand-engineer good basis functions and subsequently modify them. Second, we show that the combination of self-guiding constraints and the iterative addition of random basis functions can be used to obtain monotonically improving lower bounds as well as mitigate the effect that the state-relevance distribution used in the ALP objective has on policy performance. The latter aspect is a known performance issue in ALP (\citealp{farias2003ALP}) but one that cannot be tackled in the manner we do when using fixed basis functions. 
Third, we showcase how specific classes of random basis functions can be embedded in FALP and FGLP so that these linear programs can be solved using a constraint-generation technique and an extended version of a constraint sampling method. The latter method combines the popular constraint sampling approach in \citet{farias2004constraintSampling} and a recent lower bounding technique from \citet{lin2017ContViolLearning}, and would be of independent interest for solving ALPs with fixed basis functions. Finally, we have made publicly available Python code implementing FALP and FGLP as well as benchmark methods. Our work builds on the seminal research on random bases by \citet{rahimi2008uniform} (see also \citealp{rahimi2008large} and \citealp{rahimi2009RKS}). There is extant literature applying this idea to data mining and machine learning applications (\citealp{lu2013faster}, \citealp{mcwilliams2013correlated}, \citealp{beevi2016detection}, and \citealp{wu2018scalable}) and to a value iteration algorithm of \citet{haskell2017empiricalDPwithRBF}. These papers embed random bases in what amounts to an unconstrained regression setting, whereas we show that such bases can be effectively used in FELP, FALP, and FGLP, all of which are constrained models. In addition, the investigation of the variance in the performance of policies obtained from VFAs constructed using random basis functions and the subsequent addition of self-guiding constraints to mitigate this issue are both novel. We also add to this literature in terms of theory. Our approximation guarantees for FALP apply the arguments in \citet{rahimi2008uniform} to a constrained setting. Similar analysis of FGLP, unfortunately, does not lead to insightful bounds. We develop sampling bounds for FGLP based on functional projections, which is new to this literature and potentially of independent interest. 
\looseness=-1 \subsection{Organization of Paper}\label{subsec:organization} In \S\ref{sec:Exact Linear Programs for MDPs}, we provide background on the standard linear programming approach to solve MDPs and then introduce FELP as a reformulation. In \S\ref{sec:Approximate Linear Programs with Random Basis Functions}, we develop an approximation to this linear program, that is, FALP, and analyze it. In \S\ref{sec:Self-guided Approximate Linear Programs}, we introduce the FGLP model with self-guiding constraints and analyze it. We numerically evaluate our models on perishable inventory control and generalized joint replenishment problems in \S\ref{sec:Perishable Inventory Control} and \S\ref{sec:Generalized Joint Replenishment}, respectively. We conclude in \S\ref{sec:Concluding Remarks}. All proofs can be found in an electronic companion to this paper. We focus on discounted-cost MDPs in the main paper and relegate results for average-cost semi-MDPs to the electronic companion. Python code accompanying this paper can be found at \url{https://github.com/Self-guided-Approximate-Linear-Programs}. \section{Exact Linear Programs for MDPs}\label{sec:Exact Linear Programs for MDPs} In \S \ref{section:Optimality Equation and an Exact Linear Program}, we provide background on infinite-horizon discounted-cost MDPs and their known linear programming reformulation. In \S \ref{sec:Reformulating Exact Linear Program via RKHS}, we propose an alternative linear programming reformulation for MDPs based on random basis functions, which plays a central role in the approximations we consider in later sections. \subsection{Background}\label{section:Optimality Equation and an Exact Linear Program} Consider a decision maker controlling a system over an infinite horizon. 
Let $\ensuremath{\mathcal{S}}$ denote the MDP state space and $\ensuremath{{\mathcal{A}}_{s}}$ the feasible action space at state $s\in\ensuremath{\mathcal{S}}$. We assume $\ensuremath{\mathcal{S}}$ and $\ensuremath{{\mathcal{A}}_{s}}$ for all $s \in \ensuremath{\mathcal{S}}$ are continuous and compact real-valued sets. An action $a\in\ensuremath{{\mathcal{A}}_{s}}$ taken at state $s \in \ensuremath{\mathcal{S}}$ results in an immediate cost of $c(s,a)$ and the transition of the system to state $s^\prime\in\ensuremath{\mathcal{S}}$ with probability $P(s'|s,a)$. A (stationary and deterministic) policy $\pi:\ensuremath{\mathcal{S}}\mapsto\ensuremath{{\mathcal{A}}_{s}}$ assigns an action $\pi(s)\in\ensuremath{{\mathcal{A}}_{s}}$ to each state $s\in\ensuremath{\mathcal{S}}$. The decision maker's objective is to find an optimal policy that minimizes long-run discounted expected costs. Starting from an initial state $s_0 = s \in\ensuremath{\mathcal{S}}$, the long-run discounted expected cost of a policy $\pi$ is \begin{equation*} \ensuremath{\mathrm{PC}}(s,\pi) \coloneqq \ensuremath{\mathbb{E}} \Bigg[ \sum_{t=0}^{\infty} \gamma^t c(s^{\pi}_t,\pi(s^{\pi}_t)) \ \bigg | \ s_0 = s\Bigg], \end{equation*} where $\gamma \in (0,1)$ denotes the discount factor, the expectation $\ensuremath{\mathbb{E}}$ is with respect to the state-action probability distribution induced by the transition probabilities $P(\cdot|s,a)$ and the policy $\pi$, and $s^{\pi}_t$ is the state reached at stage $t$ when following this policy. The quality of a given policy is evaluated with respect to a distribution $\chi(s)$ for the initial state. Specifically, we define the cost of policy $\pi$ as $\mathrm{PC}(\pi) \coloneqq \mathbb{E}_{\chi}[\ensuremath{\mathrm{PC}}(s,\pi)]$. 
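In practice, $\ensuremath{\mathrm{PC}}(s,\pi)$ is estimated by simulation with a truncated horizon. The following minimal Monte Carlo sketch (illustrative Python; the function and argument names are hypothetical, not from our implementation) makes this concrete:

```python
import numpy as np

def policy_cost(s0, policy, step, cost, gamma=0.9, horizon=200, n_paths=1, rng=None):
    """Truncated Monte Carlo estimate of PC(s0, pi): simulate `policy`
    from s0 and accumulate discounted costs. `step(s, a, rng)` samples
    the next state and `cost(s, a)` is the immediate cost; all names
    here are illustrative."""
    rng = rng if rng is not None else np.random.default_rng(0)
    total = 0.0
    for _ in range(n_paths):
        s, disc = s0, 1.0
        for _ in range(horizon):
            a = policy(s)
            total += disc * cost(s, a)
            s = step(s, a, rng)
            disc *= gamma
    return total / n_paths

# Sanity check on a trivial MDP (unit cost, self-loop transition): the
# truncated cost equals sum_{t < T} gamma^t = (1 - gamma^T) / (1 - gamma).
est = policy_cost(0.0, policy=lambda s: 0.0, step=lambda s, a, rng: s,
                  cost=lambda s, a: 1.0, gamma=0.9, horizon=200)
```

With $\gamma=0.9$ and horizon $T=200$, the truncation error is of order $\gamma^{T}/(1-\gamma)$, which is negligible here.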
Denoting by $\Pi$ the set of feasible policies, an optimal policy $\pi^*$ solves the following optimization problem: \begin{equation}\label{eq:minCostMDP} \pi^* \in \underset{\pi \in \Pi}{\mathrm{arginf}} \ \mathrm{PC}(\pi). \end{equation} We define the MDP value function as $V^*(s) = \ensuremath{\mathrm{PC}}(s,\pi^*)$. \begin{assumption}\label{asm:VFcontinuity} An optimal policy $\pi^* \in \Pi$ that solves \eqref{eq:minCostMDP} exists. Moreover, $V^*(\cdot)$ is a continuous function. \end{assumption} \noindent The existence of an optimal policy holds under well-known conditions and thus the $\inf$ in \eqref{eq:minCostMDP} can be replaced by a $\min$ (see \ref{subsec:SumAssump} and pages 46-47 in \citealp{hernandez1996discrete}). The continuity of the value function is important for our theoretical results to hold, although we note that these results extend to the more general case of measurable functions since they are nearly continuous (see, e.g., Theorem 7.10 of \citealp{folland1999real}). The computation of the value function can be conceptually approached via the exact linear program (ELP; see, e.g., pages 131-143 in \citealp{hernandez1996discrete}) \begin{align} \setlength{\jot}{10pt} \max_{V' \in \mathcal{C}} \quad & \ensuremath{\mathbb{E}}_{\nu} \big[V'(s) \big ]\nonumber\\ \text{s.t.} \quad & \hspace{18pt} V'(s) \ - \ \gamma \ensuremath{\mathbb{E}}[V'(s^\prime) \ | \ s,a] \ \le \ c(s,a), \quad \forall (s,a)\in \ensuremath{\sSpace\times\mathcal{A}_s},\label{constr:ELP} \end{align} where $\mathcal{C}$ is the class of continuous functions and $\nu$ is a state-relevance distribution that specifies the relative importance of each state in the state space. ELP is a doubly infinite linear program: it has continua of decision variables and constraints, one for each state and state-action pair, respectively. ELP is thus intractable to solve directly. 
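Over a finite state and action space, in contrast, ELP is an ordinary linear program whose optimal solution recovers $V^*$. The following self-contained sketch (an illustrative toy MDP, not taken from our experiments) solves it with `scipy.optimize.linprog`:

```python
import numpy as np
from scipy.optimize import linprog

# Tiny illustrative MDP: 2 states, 2 actions, and action a moves the
# system to state a deterministically; gamma = 0.9, nu uniform.
gamma = 0.9
c = np.array([[1.0, 2.0],    # c(s=0, a)
              [0.0, 3.0]])   # c(s=1, a)
nu = np.array([0.5, 0.5])    # state-relevance distribution

# ELP: max E_nu[V]  s.t.  V(s) - gamma * E[V(s') | s, a] <= c(s, a).
# linprog minimizes, so negate the objective; variables are unbounded.
A_ub, b_ub = [], []
for s in range(2):
    for a in range(2):
        row = np.zeros(2)
        row[s] += 1.0
        row[a] -= gamma      # deterministic transition s' = a
        A_ub.append(row)
        b_ub.append(c[s, a])
res = linprog(-nu, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * 2)
V = res.x                    # for this finite MDP, V equals V* = (10, 9)
```

For this instance the binding constraints give $V^*(0) = 1/(1-\gamma) = 10$ and $V^*(1) = \gamma V^*(0) = 9$, which is exactly what the LP returns; the continuous-space case is intractable precisely because this constraint set becomes a continuum.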
\subsection{Feature-based Exact Linear Program}\label{sec:Reformulating Exact Linear Program via RKHS} To be able to approximate ELP, we present a reformulation of it below that relies on a class of random basis functions defined by a parameter vector $\theta \in \Theta$ and an associated sampling density $\rho(\theta)$. For a given $\theta$, the corresponding function in this class, denoted by $\varphi(\cdot;\theta): \ensuremath{\mathcal{S}} \mapsto \mathbb{R}$, maps states in $\ensuremath{\mathcal{S}}$ to the real line. A popular example is the class of random Fourier bases, which has the representation $\varphi(s;\theta) = \cos(q+\sum_{i=1}^{{d_{\scaleto{\sSpace}{3.5pt}}}}\omega_is_i)$, where $\theta=(q,\omega_1,\dots,\omega_{d_{\scaleto{\sSpace}{3.5pt}}}) \in \mathbb{R}^{{d_{\scaleto{\sSpace}{3.5pt}}} + 1}$. The intercept $q$ is sampled from the uniform distribution over the interval $[-\uppi,\uppi]$ and the vector $(\omega_1,\dots,\omega_{d_{\scaleto{\sSpace}{3.5pt}}})$ from the multi-dimensional normal $\mathcal{N}(0,\sigma^{-2}\ensuremath{{\mathrm{I}}})$, where $\uppi$ is the Archimedes constant, $\ensuremath{{\mathrm{I}}}$ denotes a ${d_{\scaleto{\sSpace}{3.5pt}}} \times {d_{\scaleto{\sSpace}{3.5pt}}}$ identity matrix, and $\sigma$ is a bandwidth parameter that needs to be chosen. The reformulation of ELP relies on using random bases with known ``universal'' approximation power, that is, they can approximate continuous functions with arbitrary accuracy. 
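Sampling and evaluating random Fourier basis functions takes only a few lines; the sketch below is illustrative (all names are hypothetical, and the bandwidth choice is arbitrary):

```python
import numpy as np

def sample_fourier_basis(N, d, sigma, rng):
    """Draw parameters theta_i = (q_i, omega_i) of N random Fourier
    basis functions phi(s; theta) = cos(q + omega . s), with
    q ~ Uniform[-pi, pi] and omega ~ Normal(0, I / sigma^2)."""
    q = rng.uniform(-np.pi, np.pi, size=N)
    omega = rng.normal(0.0, 1.0 / sigma, size=(N, d))
    return q, omega

def features(S, q, omega):
    """Evaluate the N sampled basis functions at each row (state) of S."""
    return np.cos(S @ omega.T + q)   # shape: (number of states, N)

rng = np.random.default_rng(0)
q, omega = sample_fourier_basis(N=50, d=2, sigma=1.0, rng=rng)
Phi = features(np.array([[0.1, 0.2], [0.5, 0.5]]), q, omega)
```

Each column of `Phi` is one random basis function evaluated on the given states, with values in $[-1,1]$ since the outer map is a cosine.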
Formally, given $\varphi$ and its associated $\rho$, consider the following class of continuous functions with compact domain defined using an intercept $b_0\in\mathbb{R}$, a weighting function $\ensuremath{\boldsymbol{b}} : \Theta \mapsto \mathbb{R}$, a constant $C \in [0,\infty)$, and the inner product $\inprod{\ensuremath{\boldsymbol{b}}}{\varphi(s)} \coloneqq \int_\Theta \ensuremath{\boldsymbol{b}}(\theta)\varphi(s;\theta)\diff \theta$: $$ \ensuremath{\mathcal{R}_C(\varphi,\rho)} \coloneqq \Big\{V: \ensuremath{\mathcal{S}} \mapsto \mathbb{R} \ \Big | \ \exists (b_0,\ensuremath{\boldsymbol{b}}) \mbox{ with } V(s) = b_0 +\inprod{\ensuremath{\boldsymbol{b}}}{\varphi(s)} \mbox{ and } \sNorm{\ensuremath{\boldsymbol{b}}}_{\infty,\rho}\leq C \Big\}, $$ where $\sNorm{\ensuremath{\boldsymbol{b}}}_{\infty,\rho} \coloneqq \sup_{\theta\in\Theta}|\nicefrac{\ensuremath{\boldsymbol{b}}(\theta)}{\rho(\theta)}|$ is referred to as the $(\infty,\rho)$-norm. The notion of universality is formalized in the following definition, where $\sNorm{V}_\infty := \sup_{s \in \ensuremath{\mathcal{S}}}|V(s)|$. \begin{definition}\label{defn:randBasisFns} A class of random basis functions $\varphi$ with sampling distribution $\rho$ is called universal if for any continuous function $V$ and $\varepsilon>0$, there exist a finite constant $C\ge0$ and a function $\bar V \in\ensuremath{\mathcal{R}_C(\varphi,\rho)}$ such that $\sNorm{V - \bar V}_\infty<\varepsilon$. \end{definition} The class of random Fourier basis functions discussed above is universal (for other examples, please see \S V in \citealp{rahimi2008uniform}). We will make the following standard assumption for the theoretical results presented in this paper (see, e.g., Theorem 3.2 of \citealp{rahimi2008uniform}). Random Fourier basis functions satisfy Assumption \ref{asm:random basis function}. 
\begin{assumption}\label{asm:random basis function} The class of random basis functions $\varphi$ is universal, and its sampling distribution $\rho$ has a finite second moment and satisfies $\rho(\theta) \in (0,U_{\rho}]$ for all $\theta\in\Theta$ for a finite positive constant $U_{\rho}$. Moreover, $\varphi(s;\theta) = \bar\varphi\big(q + \sum_{i=1}^{{d_{\scaleto{\sSpace}{3.5pt}}}}\omega_is_i\big)$, where $\theta=(q,\omega_1,\dots,\omega_{d_{\scaleto{\sSpace}{3.5pt}}})$ and $\bar{\varphi}:\mathbb{R} \mapsto \mathbb{R}$ is a mapping with finite Lipschitz constant $\ensuremath{\mathrm{L}_\varphi}$ that satisfies $\sNorm{\bar\varphi}_\infty \le 1$ and $\bar{\varphi}(0) = 0$. \end{assumption} Since the MDP value function $V^*$ is continuous (Assumption \ref{asm:VFcontinuity}) and $\varphi$ is universal (Assumption \ref{asm:random basis function}), replacing the ELP variables $V'(s)$ modeling this value function by the inner product $b_0 + \inprod{\ensuremath{\boldsymbol{b}}}{\varphi(s)}$ should intuitively not result in any significant error. Performing this replacement and requiring the weighting function to have a finite norm as in the definition of $\ensuremath{\mathcal{R}_C(\varphi,\rho)}$ gives the following linear program: \begin{equation} \begin{aligned} \sup_{b_0,\ensuremath{\boldsymbol{b}}} \quad & b_0 + \inprod{\ensuremath{\boldsymbol{b}}}{\ensuremath{\mathbb{E}}_{\nu}[\varphi(s)]} && \nonumber\\ \text{s.t.} \quad & (1-\gamma)b_0 + \inprod{\ensuremath{\boldsymbol{b}}}{\varphi(s) \ - \ \gamma \ensuremath{\mathbb{E}}[{\varphi({s^\prime})} \ | \ s,a]} && \le c(s,a), &&& \forall (s,a)\in \ensuremath{\sSpace\times\mathcal{A}_s} \\ & \sNorm{\ensuremath{\boldsymbol{b}}}_{\infty,\rho} &&\leq C. &&& \nonumber \end{aligned} \end{equation} Unlike ELP, which directly optimizes a value function, the above linear program optimizes the weights associated with a feature based representation of the value function. 
Hence, we refer to it as the feature-based exact linear program (FELP). Let $(\coefRKHS,\ensuremath{{\ensuremath{\boldsymbol{b}}}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}})$ denote an FELP optimal solution and define the function $\ensuremath{V}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}(s)\coloneqq \coefRKHS + \inprod{\ensuremath{{\ensuremath{\boldsymbol{b}}}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}}}{\varphi(s)}$. Proposition \ref{prop:ELP-RKHS-gap} shows that $\ensuremath{V}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}$ approximates $V^*$ with arbitrary accuracy. For a function $V:\ensuremath{\mathcal{S}}\mapsto\mathbb{R}$, we represent its $(1,\nu)$-norm by $\sNorm{V}_{1,\nu} = \ensuremath{\mathbb{E}}_{\nu}[|V|]$. \begin{proposition}\label{prop:ELP-RKHS-gap} Given $\varepsilon>0$, there exists a finite constant $C\ge0$ such that \begin{itemize} \item[(i)] there is a feasible solution $(b_{0}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}},\ensuremath{{\coefInf}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}})$ to FELP with \[\ensuremath{V}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}(\cdot) = b_{0}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}} + \inprod{\ensuremath{{\coefInf}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}}}{\varphi(\cdot)} \in\ensuremath{\mathcal{R}_C(\varphi,\rho)} \quad \mbox{ and } \quad \tallNorm{V^*- \ensuremath{V}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}}_\infty \le \frac{2\varepsilon}{1-\gamma};\] \item [(ii)] for any optimal solution $(\coefRKHS,\ensuremath{{\ensuremath{\boldsymbol{b}}}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}})$ to FELP, we have \[\tallNorm{V^*-\ensuremath{V}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}}_{1,\nu} \le \frac{2\varepsilon}{1-\gamma}.\] \end{itemize} \end{proposition} Part (i) of this proposition shows that the universality of random basis functions can be used to construct a feasible FELP solution that is arbitrarily close to $V^*$ under the infinity-norm. 
Part (ii) establishes that the function $\ensuremath{V}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}$ defined by an optimal FELP solution approximates $V^*$ arbitrarily closely with respect to the $(1,\nu)$-norm, which at a high level is the result of Part (i) and the FELP objective employing such a norm (please see the proof for details). As we will see shortly, the representation of FELP will facilitate sampled approximations that side-step basis function selection. \looseness=-1 \section{Approximate Linear Programming with Random Bases} \label{sec:Approximate Linear Programs with Random Basis Functions} In \S \ref{sec:Random Approximate Linear Program}, we introduce and analyze FALP, which approximates FELP using sampled basis functions. In \S\ref{sec:An Example of Policy Cost Fluctuation in R-ALPs}, we illustrate the behavior of FALP using a simple example and highlight an issue associated with policy cost fluctuation. \subsection{Feature-based Approximate Linear Programs}\label{sec:Random Approximate Linear Program} In the literature, an ALP is derived by substituting $V'(s)$ in ELP by a VFA that is a linear combination of pre-specified basis functions. Instead, we obtain an ALP by replacing the inner product $b_0 + \inprod{\ensuremath{\boldsymbol{b}}}{\varphi(s)}$ in FELP by a sampled VFA \[V(s;\ensuremath{\boldsymbol{\beta}}) := \beta_{0} + \sum_{i=1}^{N}\beta_i\varphi(s;\theta_{i}),\] where $\theta_1,\theta_2,\ldots,\theta_N$ are independent and identically distributed samples of the basis function parameter vector $\theta$ from the sampling distribution $\rho$; and $\ensuremath{\boldsymbol{\beta}}$ is the weight vector $(\beta_0,\beta_1,\ldots,\beta_N)$. The weight $\beta_0$ represents an intercept and $\beta_i$ the weight associated with the $i$-th random basis function. The term random basis function is associated with $\varphi(\cdot;\theta_{i})$ because it is defined using a sampled $\theta_i$. 
The ALP constructed using these $N$ samples, which we refer to as the feature-based approximate linear program, or FALP$\programIndex{N}$, is \looseness=-1 \begin{equation} \begin{aligned} \max_{\ensuremath{\boldsymbol{\beta}}} \quad& \beta_0 + \sum_{i=1}^{N}\beta_i\ensuremath{\mathbb{E}}_{\nu} \big[\varphi(s;\theta_{i}) \big] && &&& \nonumber\\ \text{s.t.} \ \quad& (1-\gamma)\beta_0 + \sum_{i=1}^{N}\beta_i \left(\varphi(s;\theta_{i}) - \gamma \ensuremath{\mathbb{E}} \big[\varphi(s';\theta_{i}) \ | \ s,a\big]\right) &&\le c(s,a), &&& \forall (s,a)\in \ensuremath{\sSpace\times\mathcal{A}_s}. \end{aligned} \end{equation} The model FALP$\programIndex{N}$ is a semi-infinite linear program with $N + 1$ variables and an infinite number of constraints. We let $\ensuremath{\boldsymbol{\beta}^{\raisemath{2pt}{\scaleto{\mathrm{FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}} := (\coefALP{N,0},\ldots,\coefALP{N,N})$ represent an optimal solution to FALP$\programIndex{N}$. Theorem \ref{prop:ALP} establishes key properties of FALP$\programIndex{N}$ and relies on the following constants for a fixed $\delta\in(0,1]$: \[ \ensuremath{{\Omega}} \coloneqq 4(\diamSState+1) \ensuremath{\mathrm{L}_\varphi} \sqrt{\mathbb{E}_\rho \left[\lVert\theta\rVert_2^2\right]}, \quad \ensuremath{{{\Delta}}_{{\delta}}} \coloneqq \sqrt{2\ln\Big(\frac{1}{\delta}\Big)}, \] where $\sNorm{\cdot}_2$ denotes the two-norm, $\mathbb{E}_\rho$ denotes expectation under the distribution $\rho$, and $\diamSState \coloneqq \max_{s \in \ensuremath{\mathcal{S}}} \lVert s\rVert_2$ is the diameter of the state space. Let $V(\cdot;\ensuremath{\boldsymbol{\beta}}) := \beta_0 + \sum_{i=1}^{N}\beta_{i} \varphi(\cdot;\theta_{i})$ be the VFA associated with $\ensuremath{\boldsymbol{\beta}}$. 
To ease exposition, we use the shorthand $V(\ensuremath{\boldsymbol{\beta}}) \equiv V(\cdot;\ensuremath{\boldsymbol{\beta}})$ and define the lower bound $\mathrm{LB}(\ensuremath{\boldsymbol{\beta}}): = \ensuremath{\mathbb{E}}_{\chi} \big[V(s;\ensuremath{\boldsymbol{\beta}})\big]$ on the optimal policy cost $\mathrm{PC}(\pi^*) \equiv \ensuremath{\mathbb{E}}_{\chi} \big[V^*(s)\big]$ for a function $V(\ensuremath{\boldsymbol{\beta}}) \leq V^*$. \begin{theorem}\label{prop:ALP}The following hold: \begin{itemize} \item[(i)] For a given $N$, we have $V(s;\ensuremath{\boldsymbol{\beta}^{\raisemath{2pt}{\scaleto{\mathrm{FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}}) \leq V^*(s)$ for all $s\in\ensuremath{\mathcal{S}}$. \item[(ii)] For $N^\prime > N$, suppose FALP$\programIndex{N^\prime}$ contains the same random basis functions as FALP$\programIndex{N}$ as well as $N^\prime - N$ additional independently sampled basis functions. Then \[\tallNorm{V^* - V(\coefALPVecN{N^\prime})}_{1,\nu} \leq \tallNorm{V^* - V(\coefALPVecN{N})}_{1,\nu}.\] \item[(iii)] Given $\varepsilon>0$, $\delta\in(0,1]$, and \[ N \ \ge \ \Bigg\lceil \varepsilon^{-2} \ \tallNorm{\ensuremath{{\ensuremath{\boldsymbol{b}}}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}}}_{\infty,\rho}^2 \ \bigg(\frac{(1+\gamma)}{2}\ensuremath{{\Omega}} \ + \ \ensuremath{{{\Delta}}_{{\delta}}} \bigg)^2 \Bigg\rceil, \] any FALP$\programIndex{N}$ optimal solution $\ensuremath{\boldsymbol{\beta}^{\raisemath{2pt}{\scaleto{\mathrm{FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}}$ satisfies \[\tallNorm{V^* - \ensuremath{V}(\ensuremath{\boldsymbol{\beta}^{\raisemath{2pt}{\scaleto{\mathrm{FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}})}_{1,\nu} \ \leq \ \frac{4\varepsilon}{(1-\gamma)},\] with a probability of at least $1-\delta$. \end{itemize} \end{theorem} \noindent Parts (i) and (ii) of this theorem adapt known results in approximate linear programming (see, e.g., \S2 in \citealp{farias2003ALP}). 
Specifically, Part (i) shows that the VFA $\ensuremath{V}(\ensuremath{\boldsymbol{\beta}^{\raisemath{2pt}{\scaleto{\mathrm{FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}})$ defines a point-wise lower bound on $V^*$. Part (ii) establishes that this VFA gets closer to $V^*$ with respect to the $(1,\nu)$-norm as more random basis functions are sampled. This result holds because FALP$\programIndex{N}$ is equivalent to (see Lemma 1 in \citealp{farias2003ALP}) \begin{equation}\label{eqn:RegFormOfALP} \begin{aligned} \min_{\ensuremath{\boldsymbol{\beta}}} & \quad \sNorm{V(\ensuremath{\boldsymbol{\beta}}) - V^*}_{1,\nu}\\ \mbox{ s.t. } & \quad \ V(s;\ensuremath{\boldsymbol{\beta}}) -\gamma\ensuremath{\mathbb{E}}\big[V(s^\prime;\ensuremath{\boldsymbol{\beta}}) \ | \ s,a\big] \ \leq \ c(s,a), \qquad \forall (s,a)\in\ensuremath{\sSpace\times\mathcal{A}_s}. \end{aligned} \end{equation} Part (iii) of Theorem \ref{prop:ALP} provides FALP's sampling complexity. The bound on the number of samples is based on concentration arguments analogous to those of \citet{rahimi2008uniform}, but augmented to a constrained setting by leveraging the structure of FALP$\programIndex{N}$ in several ways. First, a given infeasible solution to this linear program can be made feasible by appropriately scaling the intercept $\beta_{0}$ of the VFA. Second, a guarantee on the $(1,\nu)$-norm distance between $\ensuremath{V}(\ensuremath{\boldsymbol{\beta}^{\raisemath{2pt}{\scaleto{\mathrm{FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}})$ and $V^*$ is intuitively possible without knowledge of $V^*$ because FALP$\programIndex{N}$ is equivalent to \eqref{eqn:RegFormOfALP}. Third, the fact that $\ensuremath{V}(\ensuremath{\boldsymbol{\beta}^{\raisemath{2pt}{\scaleto{\mathrm{FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}}) \leq V^*$ allows us to sharpen the constant in the original bound of \citet{rahimi2008uniform}, which we elaborate on further in Online Supplement \S\ref{ec:sec:Analyzing an FALP Sampling Bound}. 
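On a small finite MDP, FALP$\programIndex{N}$ can be assembled and solved directly, with one constraint per state-action pair. The self-contained sketch below (illustrative Python using `scipy.optimize.linprog`; the toy MDP and all names are hypothetical, not from our experiments) builds the constraint matrix from $N$ sampled random Fourier basis functions:

```python
import numpy as np
from scipy.optimize import linprog

# FALP_(N) on a tiny illustrative MDP: 2 states {0, 1}, 2 actions, and
# action a moves the system to state a deterministically; gamma = 0.9.
gamma, nu = 0.9, np.array([0.5, 0.5])
c = np.array([[1.0, 2.0],    # c(s=0, a)
              [0.0, 3.0]])   # c(s=1, a)

rng = np.random.default_rng(0)
N = 3                                  # number of sampled basis functions
q = rng.uniform(-np.pi, np.pi, N)      # theta_i = (q_i, w_i) drawn from rho
w = rng.normal(0.0, 1.0, N)
phi = lambda s: np.cos(q + w * s)      # random Fourier basis phi(s; theta_i)

# Variables beta = (beta_0, ..., beta_N); linprog minimizes, so negate.
obj = -np.concatenate(([1.0], nu @ np.array([phi(0.0), phi(1.0)])))
A_ub, b_ub = [], []
for s in (0, 1):
    for a in (0, 1):                   # one constraint per (s, a) pair
        A_ub.append(np.concatenate(([1.0 - gamma],
                                    phi(float(s)) - gamma * phi(float(a)))))
        b_ub.append(c[s, a])
res = linprog(obj, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(None, None)] * (N + 1))
beta = res.x                           # FALP weights; -res.fun is LB(beta)
```

Here the intercept plus three sampled features already span VFAs over both states, so the optimal objective $-\mathtt{res.fun}$ equals $\mathbb{E}_\nu[V^*]$; over a continuous state space, constraint sampling or generation would replace the exhaustive enumeration of state-action pairs.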
Given the worst-case nature of the bound on $N$ in Theorem \ref{prop:ALP}, it is likely too large to be used in the implementation of FALP$\programIndex{N}$. Therefore, we evaluate whether a particular $N$ is large enough by solving FALP$\programIndex{N}$ and computing an associated optimality gap with respect to lower and upper bounds on the optimal policy cost. The lower bound is $\mathrm{LB}(\ensuremath{\boldsymbol{\beta}^{\raisemath{2pt}{\scaleto{\mathrm{FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}})$. The upper bound is defined with respect to the so-called greedy policy $\pi_g(\ensuremath{\boldsymbol{\beta}^{\raisemath{2pt}{\scaleto{\mathrm{FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}})$ associated with $V(\ensuremath{\boldsymbol{\beta}^{\raisemath{2pt}{\scaleto{\mathrm{FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}})$ (see, e.g., \citealp{powell2007ADP}). The action taken by this policy at state $s \in \ensuremath{\mathcal{S}}$ solves \begin{equation}\label{eqn:GreedyOpt}\min_{a\in\ensuremath{{\mathcal{A}}_{s}}} \Big \{ c(s,a) + \gamma\ensuremath{\mathbb{E}} \big[V(s^\prime;\ensuremath{\boldsymbol{\beta}^{\raisemath{2pt}{\scaleto{\mathrm{FA}}{3.5pt}}}_{{\scaleto{{\mathrm{N}}}{4pt}}}}) \ | \ s,a \big] \Big \}. \end{equation} The cost of the greedy (feasible) policy $\mathrm{PC}(\pi_g(\ensuremath{\boldsymbol{\beta}}))$, which we abbreviate by $\mathrm{PC}(\ensuremath{\boldsymbol{\beta}})$, is an upper bound on the optimal policy cost. \setlength{\algomargin}{9pt} \begin{algorithm}[t] \DontPrintSemicolon \SetAlgoLined \KwIn{ state-relevance distribution $\nu$, random basis $\varphi$ and associated sampling distribution $\rho$, math program $\mathcal{M}\programIndex{N}$ parametrized by distribution $\nu$ and $N$ random basis functions from class $\varphi$, optimality tolerance $\tau$, and sampling batch size $B$. 
} \KwInit{% the number of random basis functions $N$ to $0$, the set $\vartheta$ of sampled basis function parameters to $\{\}$, the vectors $\ensuremath{\boldsymbol{\beta}}^{\scaleto{\mathrm{UB}}{4pt}}$ and $\ensuremath{\boldsymbol{\beta}}^{\scaleto{\mathrm{LB}}{4pt}}$ to $\boldsymbol{0} \in \mathbb{R}^B$, and the optimality gap $\tau^*$ to $1$.\; } \While{$\tau^* > \tau$}{% (i) Update $N = N + B$.\; % (ii) Draw $B$ independent samples $\{\theta_1,\ldots,\theta_B\}$ from $\rho(\theta)$ and set $\vartheta = \vartheta \cup\{\theta_1,\ldots,\theta_B\}$.\; % (iii) Solve $\mathcal{M}\programIndex{N}$ to obtain coefficients $\ensuremath{\boldsymbol{\beta}}_{\scaleto{\mathrm{N}}{4pt}} \in\mathbb{R}^{N+1}$ and compute $\mathrm{PC}(\ensuremath{\boldsymbol{\beta}}_{\scaleto{\mathrm{N}}{4pt}})$ and $\mathrm{LB}(\ensuremath{\boldsymbol{\beta}}_{\scaleto{\mathrm{N}}{4pt}})$.\; % (iv) {\bf if} $\mathrm{LB}(\ensuremath{\boldsymbol{\beta}}_{\scaleto{\mathrm{N}}{4pt}}) \ge \mathrm{LB}(\ensuremath{\boldsymbol{\beta}}^{\scaleto{\mathrm{LB}}{4pt}})$ {\bf do} redefine $\ensuremath{\boldsymbol{\beta}}^{\scaleto{\mathrm{LB}}{4pt}}$ as $\ensuremath{\boldsymbol{\beta}}_{\scaleto{\mathrm{N}}{4pt}}$.\; % (v) {\bf if} $\mathrm{PC}(\ensuremath{\boldsymbol{\beta}}_{\scaleto{\mathrm{N}}{4pt}}) \le \mathrm{PC}(\ensuremath{\boldsymbol{\beta}}^{\scaleto{\mathrm{UB}}{4pt}})$ {\bf do} redefine $\ensuremath{\boldsymbol{\beta}}^{\scaleto{\mathrm{UB}}{4pt}}$ as $\ensuremath{\boldsymbol{\beta}}_{\scaleto{\mathrm{N}}{4pt}}$.\; % (vi) Compute \[\tau^* = 1 - \dfrac{\mathrm{LB}(\ensuremath{\boldsymbol{\beta}}^{\scaleto{\mathrm{LB}}{4pt}})}{\mathrm{PC}(\ensuremath{\boldsymbol{\beta}}^{\scaleto{\mathrm{UB}}{4pt}})}.\] } \KwOut{% coefficients $\ensuremath{\boldsymbol{\beta}}^{\scaleto{\mathrm{LB}}{4pt}}$ and $\ensuremath{\boldsymbol{\beta}}^{\scaleto{\mathrm{UB}}{4pt}}$. 
} \caption{\normalfont{Random Basis Function Generation for VFA Computation using Math Programs}} \label{alg:sampledBasesALP} \end{algorithm} Algorithm \ref{alg:sampledBasesALP} leverages FALP$\programIndex{N}$ and the bounds just described in an iterative procedure reflecting the scheme in Figure \ref{fig:how-to-use-ALP}(b). The inputs to this algorithm are the state-relevance distribution $\nu$ over the state space; a class of random basis functions $\varphi$ and its associated sampling distribution $\rho$; a math program $\mathcal{M}\programIndex{N}$ parameterized by this state-relevance distribution and the number of samples $N$, which we assume is FALP$\programIndex{N}$ in this section; an optimality tolerance $\tau$; and a sampling batch size $B$. At initialization, Algorithm \ref{alg:sampledBasesALP} assigns the number of sampled random basis functions $N$ to zero, the set of sampled basis function parameters to the empty set, the VFA weight vectors $\ensuremath{\boldsymbol{\beta}}^{\scaleto{\mathrm{LB}}{4pt}}$ and $\ensuremath{\boldsymbol{\beta}}^{\scaleto{\mathrm{UB}}{4pt}}$ to the vector of $B$ zeros, and the optimality gap $\tau^*$ to $1$. It then executes the following six steps until the optimality gap $\tau^*$ is less than or equal to the optimality tolerance $\tau$. In Step (i), the number of sampled basis functions is incremented by $B$. In Step (ii), $B$ independent basis function parameters are sampled from $\rho(\theta)$ and appended to $\vartheta$. In Step (iii), the math program $\mathcal{M}\programIndex{N}$ (= FALP$\programIndex{N}$ in this section) embedding the random basis functions of set $\vartheta$ is solved and the resulting VFA coefficient vector $\ensuremath{\boldsymbol{\beta}}_{\scaleto{\mathrm{N}}{4pt}}$ is used to compute the greedy policy cost $\mathrm{PC}(\ensuremath{\boldsymbol{\beta}}_{\scaleto{\mathrm{N}}{4pt}})$ and lower bound $\mathrm{LB}(\ensuremath{\boldsymbol{\beta}}_{\scaleto{\mathrm{N}}{4pt}})$. 
Steps (iv) and (v) update $\ensuremath{\boldsymbol{\beta}}^{\scaleto{\mathrm{LB}}{4pt}}$ and $\ensuremath{\boldsymbol{\beta}}^{\scaleto{\mathrm{UB}}{4pt}}$ if there is improvement in the lower bound and policy cost, respectively. The optimality gap percentage $\tau^*$ is updated in Step (vi) using $\mathrm{LB}(\ensuremath{\boldsymbol{\beta}}^{\scaleto{\mathrm{LB}}{4pt}})$ and $\mathrm{PC}(\ensuremath{\boldsymbol{\beta}}^{\scaleto{\mathrm{UB}}{4pt}})$. If $\tau^* \leq \tau$, Algorithm \ref{alg:sampledBasesALP} terminates and returns the VFA vectors $\ensuremath{\boldsymbol{\beta}}^{\scaleto{\mathrm{LB}}{4pt}}$ and $\ensuremath{\boldsymbol{\beta}}^{\scaleto{\mathrm{UB}}{4pt}}$ corresponding to the tightest lower bound and best policy, respectively. \looseness=-1 Proposition \ref{prop:Algo1Convergence} shows that Algorithm \ref{alg:sampledBasesALP} with $\mathcal{M}\programIndex{N}=\mbox{FALP}\programIndex{N}$ terminates when $\nu$ is positive almost everywhere (e.g., if it is chosen to be uniform) and under an assumption that the state-visit frequency of the greedy policy is bounded. 
Specifically, for a given greedy policy $\pi_g(\ensuremath{\boldsymbol{\beta}})$, its state-visit frequency $\visitFreq{\chi}(\ensuremath{\boldsymbol{\beta}})$ defines the following probability of visiting a subset of states $\ensuremath{\mathcal{S}}_1\subseteq\ensuremath{\mathcal{S}}$ (see, e.g., pages 132--133 in \citealt{hernandez1996discrete}): \begin{equation}\label{eqn:greedyPolicyVisitFrequency} \visitFreq{\chi}(\ensuremath{\mathcal{S}}_1;\ensuremath{\boldsymbol{\beta}}) \coloneqq \chi(\ensuremath{\mathcal{S}}_1) \ + \ \sum_{t=0}^{\infty}\gamma^{t+1} \ensuremath{\mathbb{E}}\Big[ P\big(s^{\pi_g(\ensuremath{\boldsymbol{\beta}})}_{t+1}\in\ensuremath{\mathcal{S}}_1 \ | \ s_t,\pi_g(s_t;\ensuremath{\boldsymbol{\beta}})\big) \Big] \end{equation} where state $s^{\pi_g(\ensuremath{\boldsymbol{\beta}})}_{t+1}$ and transition probability $P$ retain their definitions from \S\ref{section:Optimality Equation and an Exact Linear Program}, and $\chi(\ensuremath{\mathcal{S}}_1)$ is the probability of the initial state belonging to $\ensuremath{\mathcal{S}}_1$. The expectation $\ensuremath{\mathbb{E}}$ is taken with respect to the control policy $\pi_g(\ensuremath{\boldsymbol{\beta}})$ and the initial-state distribution $\chi$ over the initial state $s_0$. An analogous assumption that the state-visit frequency is bounded is also needed to prove the convergence of other approximate dynamic programming algorithms (see, e.g., \citealp{munos2003error}). \looseness=-1 \begin{proposition} \label{prop:Algo1Convergence}Suppose that $\mathcal{M}\programIndex{N}=\mbox{FALP}\programIndex{N}$ in Algorithm \ref{alg:sampledBasesALP}, the state-relevance distribution $\nu$ assigns positive mass to all non-zero measure subsets of the state space, and the state-visit frequency $\visitFreq{\chi}(\ensuremath{\boldsymbol{\beta}}_{\scaleto{\mathrm{N}}{4pt}})$ is bounded above by a constant for all $N$. 
Then, for a given $\delta\in(0,1]$ and $\tau\in(0,1]$, Algorithm \ref{alg:sampledBasesALP} terminates after a finite number of iterations with a probability of at least $1-\delta$. \end{proposition} In other words, Proposition \ref{prop:Algo1Convergence} establishes that the lower bounds and policy costs generated by Algorithm \ref{alg:sampledBasesALP} with $\mathcal{M}\programIndex{N}=\mbox{FALP}\programIndex{N}$ converge towards each other as the number of samples tends to infinity. Despite this asymptotic property, $\mathrm{LB}(\ensuremath{\boldsymbol{\beta}}_{\scaleto{\mathrm{N}}{4pt}})$ and $\mathrm{PC}(\ensuremath{\boldsymbol{\beta}}_{\scaleto{\mathrm{N}}{4pt}})$ may not monotonically improve with $N$. Monotonicity of the sequence of lower bounds can be achieved by choosing $\nu$ equal to $\chi$ because $\mathrm{LB}(\ensuremath{\boldsymbol{\beta}}_{\scaleto{\mathrm{N}}{4pt}}) \equiv \ensuremath{\mathbb{E}}_{\chi} \big[V(s;\ensuremath{\boldsymbol{\beta}}_{\scaleto{\mathrm{N}}{4pt}})\big]$ and the objective function $\ensuremath{\mathbb{E}}_{\nu} \big[V(s;\ensuremath{\boldsymbol{\beta}}_{\scaleto{\mathrm{N}}{4pt}})\big]$ of $\mbox{FALP}\programIndex{N}$ coincide in this case. However, even under such a choice for $\nu$, $\mathrm{PC}(\coefALPVecN{N})$ may worsen as more random basis functions are added, which is undesirable since additional computational effort may not translate into better policies. We refer to this behavior as policy cost fluctuation. In \S\ref{sec:An Example of Policy Cost Fluctuation in R-ALPs}, we illustrate Algorithm \ref{alg:sampledBasesALP} with FALP$\programIndex{N}$ and policy cost fluctuation on a simple example, also elaborating on the cause of such fluctuation. 
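Before turning to the example, the loop just described can be summarized in code. The following is a minimal Python sketch of Algorithm \ref{alg:sampledBasesALP}; the callables \texttt{sample\_theta}, \texttt{solve\_M}, \texttt{policy\_cost}, and \texttt{lower\_bound} are hypothetical placeholders (not objects defined in this paper) for sampling $\theta$ from $\rho$, solving $\mathcal{M}\programIndex{N}$, and evaluating $\mathrm{PC}(\cdot)$ and $\mathrm{LB}(\cdot)$, and the sentinel initializations stand in for the zero vectors used in the pseudocode:

```python
def random_basis_vfa(sample_theta, solve_M, policy_cost, lower_bound,
                     tau, B, max_iter=1000):
    """Sketch of Algorithm 1: grow the random basis until the optimality
    gap drops to the tolerance tau.

    sample_theta() draws one basis-function parameter from rho;
    solve_M(thetas) solves the math program M_(N) and returns its solution;
    policy_cost(beta) and lower_bound(beta) return PC(beta) and LB(beta).
    """
    thetas = []                                # sampled parameters (the set vartheta)
    beta_lb = beta_ub = None                   # incumbent solutions
    best_lb, best_pc = float("-inf"), float("inf")
    gap = 1.0                                  # initialization of tau*
    for _ in range(max_iter):                  # Prop. 2 gives finite termination w.h.p.
        if gap <= tau:
            break
        thetas += [sample_theta() for _ in range(B)]  # steps (i)-(ii): N <- N + B
        beta = solve_M(thetas)                        # step (iii)
        lb, pc = lower_bound(beta), policy_cost(beta)
        if lb >= best_lb:                             # step (iv): better lower bound
            best_lb, beta_lb = lb, beta
        if pc <= best_pc:                             # step (v): better policy
            best_pc, beta_ub = pc, beta
        gap = 1.0 - best_lb / best_pc                 # step (vi)
    return beta_lb, beta_ub, gap


if __name__ == "__main__":
    # Toy stubs: the "solution" is just N, and LB and PC tighten as N grows.
    out = random_basis_vfa(lambda: 0.0, lambda th: len(th),
                           lambda n: 1 + 1 / (n + 1), lambda n: 1 - 1 / (n + 1),
                           tau=0.3, B=1)
    print(out)
```

In practice the dominant cost is inside \texttt{solve\_M} and \texttt{policy\_cost}, which require solving a linear program with sampled constraints and simulating the greedy policy, respectively.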
\looseness=-1 \subsection{Illustrative Example and Policy Cost Fluctuation}\label{sec:An Example of Policy Cost Fluctuation in R-ALPs} Consider a simple version of MDP \eqref{eq:minCostMDP} with both the state space $\ensuremath{\mathcal{S}}$ and each action space $\ensuremath{{\mathcal{A}}_{s}}$ for $s \in \ensuremath{\mathcal{S}}$ equal to the interval $[0,1]$. State transitions are governed by the discrete conditional distribution $P(s^\prime=s|s,a) = 0.1$, $P(s^\prime=a|s,a) = 0.9$, and $P(s^\prime \not \in \{s,a\}|s,a) = 0$. The immediate cost function is $c(s,a) = |s-0.5|$ for all $(s,a) \in \ensuremath{\sSpace\times\mathcal{A}_s}$ and future costs are discounted using a discount factor of $\gamma =0.9$. The initial state distribution $\chi$ is chosen to be uniform over $\ensuremath{\mathcal{S}}$. The action $\pi^*(s)$ taken by the MDP optimal policy for all $s \in \ensuremath{\mathcal{S}}$ equals $0.5$ since $c(s,a)$ equals $0$ if $s = 0.5$ and is strictly positive otherwise. Therefore, the MDP value function is $V^*(s) = c(s,0.5)/(1-0.1\gamma) \approx 1.1|s-0.5|$. The optimal policy cost $\mathrm{PC}(\pi^*)$ is 0.27. Figure \ref{fig:cost-fluctuation} displays this information using thick (purple) solid lines. The independence of $c(s,a)$ from the action and the definition of the state transition probabilities together simplify the optimization \eqref{eqn:GreedyOpt} for determining the greedy policy $\pi_g(\ensuremath{\boldsymbol{\beta}})$ to $\min_{a \in [0,1]} {V}(a;\ensuremath{\boldsymbol{\beta}}) \equiv \min_{s \in [0,1]} {V}(s;\ensuremath{\boldsymbol{\beta}})$. 
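Since any greedy policy in this example plays a constant action, its cost can be evaluated in closed form, which is convenient for checking the bounds reported in this subsection. The following Python sketch does so; the function name \texttt{constant\_policy\_cost} and the derivation in its docstring are ours, obtained by solving the policy-evaluation equation under the stated dynamics:

```python
GAMMA = 0.9   # discount factor of the example MDP
P_STAY = 0.1  # probability that the state stays put
P_JUMP = 0.9  # probability of jumping to the chosen action

def constant_policy_cost(a_bar: float) -> float:
    """Exact cost PC of the policy that always plays a_bar, for the cost
    c(s,a) = |s - 0.5| and a uniform initial-state distribution chi on [0,1].

    Policy evaluation gives
        V(s) = |s - 0.5| + GAMMA * (P_STAY * V(s) + P_JUMP * V(a_bar)),
    so V(a_bar) = |a_bar - 0.5| / (1 - GAMMA) and, integrating over
    s ~ U[0,1] (where E|s - 0.5| = 0.25),
        PC = (0.25 + GAMMA * P_JUMP * |a_bar - 0.5| / (1 - GAMMA))
             / (1 - GAMMA * P_STAY).
    """
    v_abar = abs(a_bar - 0.5) / (1.0 - GAMMA)
    return (0.25 + GAMMA * P_JUMP * v_abar) / (1.0 - GAMMA * P_STAY)

# The optimal policy plays 0.5 at every state.
print(round(constant_policy_cost(0.5), 2))  # prints 0.27
```

Evaluating the constant greedy actions $0.513$, $0.507$, and $0.598$ that arise below reproduces, up to rounding, the reported policy costs of $0.39$, $0.34$, and $1.14$.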
In other words, $\pi_g(s,\ensuremath{\boldsymbol{\beta}})$ can be defined as the constant action $\bar s$ for all $s \in \ensuremath{\mathcal{S}}$, where $\bar s$ is a minimizer of ${V}(\cdot;\ensuremath{\boldsymbol{\beta}})$ over the interval $[0,1]$.\looseness=-1 \begin{figure} \centering \caption{Results from executing Algorithm \ref{alg:sampledBasesALP} with FALP on the illustrative example.} \includegraphics[width=1\linewidth]{figs/FALP_costFluctuation} \label{fig:cost-fluctuation} \vspace{-25pt} \end{figure} Next, we analyze the VFAs and greedy policies resulting from the application of Algorithm \ref{alg:sampledBasesALP} for three consecutive iterations with the parameter $B$ set to $1$. Specifically, we compare iterations two and three, which correspond to FALP with two and three random basis functions, respectively. We assume that the state-relevance distribution $\nu$ equals the initial-state distribution $\chi$ so that the lower bound $\mathrm{LB}(\ensuremath{\boldsymbol{\beta}}_{\scaleto{\mathrm{N}}{4pt}})$ is non-decreasing in $N$ and we can focus only on the fluctuation of the policy cost $\mathrm{PC}(\ensuremath{\boldsymbol{\beta}}_{\scaleto{\mathrm{N}}{4pt}})$. The FALP VFAs are specified using the class of Fourier random basis functions $\varphi(s;\theta)=\cos(\theta s)$, where $\theta\in\mathbb{R}$. At the end of iteration two, suppose the random basis functions correspond to the sampled parameters in set $\vartheta = \{\theta_{1} = 2,\theta_{2}=-5\}$ and $\ensuremath{\boldsymbol{\beta}}^{\scaleto{\mathrm{LB}}{4pt}} = \ensuremath{\boldsymbol{\beta}}^{\scaleto{\mathrm{UB}}{4pt}} = \coefALPVecN{2}$. Figure \ref{fig:cost-fluctuation} plots $V(s;\coefALPVecN{2}) \equiv \coefALP{2,0} + \coefALP{2,1}\cos(2s) + \coefALP{2,2}\cos(-5s)$ and $\pi_g(\coefALPVecN{2})$ in (green) dashed-dotted lines and displays the associated bounds using circular markers. The minimum of $V(\cdot; \coefALPVecN{2})$ is attained at $0.513$ and $\mathrm{LB}(\coefALPVecN{2})$ equals 0.15. 
Moreover, $\pi_g(s,\coefALPVecN{2})$ equals $0.513$ for all $s \in \ensuremath{\mathcal{S}}$ and the greedy policy cost $\mathrm{PC}(\coefALPVecN{2})$ equals 0.39. The corresponding optimality gap $\tau^*$ is 60.9\%. At iteration three, we consider two scenarios for the sample $\theta_3$ associated with the third random basis function. \looseness=-1 \begin{itemize} \item Scenario 1 ($\theta_3 = 3$): We have $\vartheta = \{2,-5,3\}$ and the VFA associated with FALP$_{(\scaleto{\mathrm{3}}{4pt})}$ is $V(s;\coefALPVecN{3}) = \coefALP{3,0} + \coefALP{3,1}\cos(2s) + \coefALP{3,2}\cos(-5s) + \coefALP{3,3}\cos(3s)$. This VFA, its greedy policy, and optimality gap are shown in Figure \ref{fig:cost-fluctuation} using (red) dashed lines and diamond markers. The function $V(\cdot;\coefALPVecN{3})$ attains its minimum over $s$ at 0.507 and $\pi_g(\coefALPVecN{3})$ equals this value at all states. In addition, $\mathrm{LB}(\coefALPVecN{3})$ and $\mathrm{PC}(\coefALPVecN{3})$ are 0.23 and 0.34, respectively, with both bounds improving over their respective iteration 2 values. As a result, $\ensuremath{\boldsymbol{\beta}}^{\scaleto{\mathrm{LB}}{4pt}} = \ensuremath{\boldsymbol{\beta}}^{\scaleto{\mathrm{UB}}{4pt}} = \coefALPVecN{3}$. In particular, $\pi_g(\coefALPVecN{3})$ becomes the best policy computed thus far. The optimality gap of this policy, 30.2\%, is significantly lower than the gap in iteration two because of the improvements in both the lower bound and policy cost.\looseness=-1 % \item Scenario 2 ($\theta_3 = 40$): We have $\vartheta = \{2,-5,40\}$ and $V(s;\coefALPVecN{3}) = \coefALP{3,0} + \coefALP{3,1}\cos(2s) + \coefALP{3,2}\cos(-5s) + \coefALP{3,3}\cos(40s)$. The information displayed by (dark blue) dotted lines and triangular markers in Figure \ref{fig:cost-fluctuation} corresponds to this case. Both the minimum of $V(\cdot;\coefALPVecN{3})$ and $\pi_g(\coefALPVecN{3})$ equal $0.598$. 
The lower bound $\mathrm{LB}(\coefALPVecN{3})$ is 0.18 and improves on $\mathrm{LB}(\coefALPVecN{2})$ as expected because $\nu$ equals $\chi$. Thus, $\ensuremath{\boldsymbol{\beta}}^{\scaleto{\mathrm{LB}}{4pt}} = \coefALPVecN{3}$. In contrast, the upper bound $\mathrm{PC}(\coefALPVecN{3})$ is $1.14$, which is worse than $\mathrm{PC}(\coefALPVecN{2})$, and $\ensuremath{\boldsymbol{\beta}}^{\scaleto{\mathrm{UB}}{4pt}}$ thus remains $\coefALPVecN{2}$. In other words, we do not find an improved greedy policy. The optimality gap computed by Algorithm \ref{alg:sampledBasesALP} equals $53.5\%$ and is based on $\mathrm{PC}(\coefALPVecN{2})$ and $\mathrm{LB}(\coefALPVecN{3})$. This gap is smaller than that of iteration 2 due to the stronger lower bound. If one instead computed the optimality gap of $\pi_g(\coefALPVecN{3})$ with respect to $\mathrm{LB}(\coefALPVecN{3})$, it would be 84.2\% (i.e., $1 - \nicefrac{0.18}{1.14}$), which highlights the significant worsening of the greedy policy in iteration 3.\looseness=-1 \end{itemize} Scenario 2 of iteration 3 makes concrete the notion of policy cost fluctuation introduced in \S\ref{sec:Self-guided Approximate Linear Programs}. We now provide some insight into this behavior. Recall from Theorem \ref{prop:ALP} that $\sNorm{V(\coefALPVecN{N'})- V^*}_{1,\nu} \leq \sNorm{V(\coefALPVecN{N})- V^*}_{1,\nu}$ for two iterations of Algorithm \ref{alg:sampledBasesALP} with $N' > N$. 
Improving the VFA with respect to the $(1,\nu)$-norm does not imply improvement in the greedy policy performance, that is, the inequality $\mathrm{PC}(\coefALPVecN{N'}) \equiv \mathbb{E}_{\chi}\big[\ensuremath{\mathrm{PC}}\big(s,{\pi_g\big(\coefALPVecN{N'}\big)}\big)\big] \leq \mathbb{E}_{\chi}\big[\ensuremath{\mathrm{PC}}\big(s,{\pi_g\big(\coefALPVecN{N}\big)}\big)\big] \equiv \mathrm{PC}(\coefALPVecN{N})$ may not hold, because the greedy policy visits the state space using a frequency $\mu_\chi(\ensuremath{\boldsymbol{\beta}})$ defined in \eqref{eqn:greedyPolicyVisitFrequency} that is potentially different from $\nu$. The link between the performance of policies and the aforementioned distributions is more formally apparent in the following known worst case result (see, e.g., Theorem 1 in \citealt{farias2003ALP}). \begin{proposition}\label{prop:worst case policy performance} For a VFA $V(\ensuremath{\boldsymbol{\beta}})$ such that $V(\ensuremath{\boldsymbol{\beta}}) \le V^*$, we have \[\ensuremath{\mathrm{PC}}(\ensuremath{\boldsymbol{\beta}}) - \ensuremath{\mathrm{PC}}(\pi^*) \ \le \ \frac{\sNorm{V(\ensuremath{\boldsymbol{\beta}}) -V^*}_{1, \mu_\chi(\ensuremath{\boldsymbol{\beta}})}}{1-\gamma}.\] \end{proposition} Proposition \ref{prop:worst case policy performance} shows that for a VFA $V(\ensuremath{\boldsymbol{\beta}})$ that lower bounds $V^*$ (e.g., the FALP VFA), the additional cost incurred by using the greedy policy $\pi_g(\ensuremath{\boldsymbol{\beta}})$ instead of the optimal policy $\pi^*$ is bounded above by the $(1,\visitFreq{\chi}(\ensuremath{\boldsymbol{\beta}}))$-norm difference between the VFA $V(\ensuremath{\boldsymbol{\beta}})$ and the MDP value function $V^*$. If $\nu$ and $\mu_\chi(\ensuremath{\boldsymbol{\beta}})$ are identical, this result implies that an FALP VFA with a small $(1,\nu)$-norm error also guarantees good greedy policy performance. 
However, such a policy performance guarantee does not hold when the aforementioned frequencies differ and the deviation of a VFA from $V^*$ with respect to the $(1,\nu)$-norm is minimized, as done by FALP. \begin{figure} \centering \caption{State-visit frequencies of greedy policies and the uniform state-relevance distribution on the illustrative example.} \includegraphics[width=.7\linewidth]{figs/StateVisitFrequency} \label{fig:statevisitfrequency}\vspace{-15pt} \end{figure} Figure \ref{fig:statevisitfrequency} displays the state-visit frequency $\visitFreq{\chi}(\coefALPVecN{2})$, the versions of the frequency $\visitFreq{\chi}(\coefALPVecN{3})$ for scenarios 1 and 2 (i.e., for $\vartheta=\{2,-5,3\}$ and $\vartheta=\{2,-5,40\}$), and the uniform state-relevance distribution $\nu$ on our example. It is apparent that the state-visit frequencies $\visitFreq{\chi}(\coefALPVecN{2})$, $\visitFreq{\chi}(\coefALPVecN{3})$ for scenario 1, and $\visitFreq{\chi}(\coefALPVecN{3})$ for scenario 2 assign more than $99\%$ of their visit probabilities to states $\bar{s} = 0.513$, $\bar{s}=0.507$, and $\bar{s}=0.598$, respectively, and the negligible remaining probability to other states. In stark contrast, the uniform state-relevance distribution $\nu$, depicted via a (black) dotted line, assigns a probability of $0.0249$ to all states. This discrepancy causes the worst-case bound $\sNorm{V(\coefALPVecN{3}) -V^*}_{1, \mu_\chi(\coefALPVecN{3})}/(1-\gamma)$ on policy performance in iteration 3 to take the value 9.98 in scenario 2, which is substantially higher than its value of 0.99 in scenario 1. Moreover, as already discussed, the greedy policy in scenario 2 is much worse than the one in scenario 1. Therefore, it may be possible to mitigate policy cost fluctuation (at least in the worst case sense) by improving the VFA at each iteration of Algorithm \ref{alg:sampledBasesALP} with respect to the $(1,\visitFreq{\chi}(\ensuremath{\boldsymbol{\beta}}))$-norm in addition to the $(1,\nu)$-norm. 
We explore this idea further in \S\ref{sec:Self-guided Approximate Linear Programs}. The impact of $\nu$ on approximation quality and policy performance is acknowledged in the ALP literature but not yet addressed to the best of our knowledge (see, e.g., \citealt{farias2003ALP}). Compared to an ALP with fixed basis functions, the impact of $\nu$ in Algorithm \ref{alg:sampledBasesALP} with FALP is relatively less severe in theory because one could choose $\nu$ to be uniform over the state space and use a sufficiently large number of sampled basis functions to obtain a near optimal policy, which follows from Proposition \ref{prop:Algo1Convergence}. However, the number of samples required to obtain a good approximation at all states may be large and there may be several iterations for which the policy performance worsens despite the additional computational effort incurred. Thus, policy cost fluctuation is still a practical concern that needs to be handled. \section{Self-guided Approximate Linear Programs}\label{sec:Self-guided Approximate Linear Programs} In \S \ref{sec:Basic Properties of Self-guided ALPs}, we modify FALP to mitigate policy cost fluctuation. We analyze this modified linear program in \S\ref{sec:Convergence of Self-guided ALPs via an Orthogonal Decomposition}. \looseness=-1 \subsection{Formulation and Basic Properties} \label{sec:Basic Properties of Self-guided ALPs} Motivated by Proposition \ref{prop:worst case policy performance} and the related discussion in \S\ref{sec:An Example of Policy Cost Fluctuation in R-ALPs}, we explore the strategy of mitigating policy cost fluctuation by improving the term $\sNorm{V(\ensuremath{\boldsymbol{\beta}}) -V^*}_{1, \mu_\chi(\ensuremath{\boldsymbol{\beta}})}/(1-\gamma)$, which is a worst-case bound on greedy policy performance. 
We begin by presenting a modification of FALP$\programIndex{N}$ to be used in conjunction with Algorithm \ref{alg:sampledBasesALP}, which we dub feature-based guided linear program and abbreviate FGLP$\programIndex{N}$. We then describe how this linear program improves the aforementioned bound. Denoting by $\coefSGVecK{N-B}$ an optimal solution to FGLP$\programIndex{N - B}$, the model FGLP$\programIndex{N}$ is \begin{minipage}{0.9\linewidth} \begin{align} \max_{\ensuremath{\boldsymbol{\beta}}} \ \ & \beta_0 + \sum_{i=1}^{N}\beta_i\ensuremath{\mathbb{E}}_{\nu} \big[\varphi(s;\theta_{i}) \big] && \nonumber\\ \text{s.t.} \quad & (1-\gamma)\beta_0 + \sum_{i=1}^{N}\beta_i \left(\varphi(s;\theta_{i}) - \gamma \ensuremath{\mathbb{E}} \big[\varphi(s';\theta_{i}) | \ s,a\big]\right) \ \le \ c(s,a), && \forall (s,a)\in \ensuremath{\sSpace\times\mathcal{A}_s}, \label{FALPConst1}\\ & \beta_0 + \sum_{i=1}^{N}\beta_i\varphi(s;\theta_{i}) \ \ge \ V\big(s;\coefSGVecK{N-B}\big), && \forall s\in\ensuremath{\mathcal{S}}.\label{FALPConst2} \end{align} \end{minipage} \begin{minipage}{0.05\linewidth} \end{minipage}\hfill\vspace{9pt} \noindent Both FGLP$\programIndex{N}$ and FALP$\programIndex{N}$ have the same objective function. The former linear program includes all the constraints of the latter linear program as well as additional ``self-guiding'' constraints \eqref{FALPConst2} that require its VFA to be a state-wise upper bound on the VFA $V\big(s;\coefSGVecK{N-B}\big)$ computed in the previous iteration by solving FGLP$\programIndex{N-B}$. We assume $V\big(s;\coefSGVecK{N-B}\big) = -\infty$ for all $s \in \ensuremath{\mathcal{S}}$ when $N = B$ which implies that the constraints \eqref{FALPConst2} are redundant in the first iteration of Algorithm \ref{alg:sampledBasesALP} with $\mathcal{M}\programIndex{N}$ = FGLP$\programIndex{N}$.\looseness=-1 Proposition \ref{prop:SG-ALP-basic} establishes a key property of FGLP. 
\begin{proposition}\label{prop:SG-ALP-basic} Suppose $\mathcal{M}\programIndex{N} = \mbox{FGLP}\programIndex{N}$ in Algorithm \ref{alg:sampledBasesALP}. Then, for any given $n > 1$, the sequence of VFAs generated by this algorithm up to iteration $n$ satisfies \begin{equation}\label{FGLPVFAOrdering} V(s; \coefALPVecN{B}) \ = \ V(s; \coefSGVecK{B}) \ \le \ V(s; \coefSGVecK{2B}) \ \le \ \cdots \ \le \ V(s; \coefSGVecK{nB}) \ \le \ V^*(s), \quad \forall s\in\ensuremath{\mathcal{S}}. \end{equation} \end{proposition} The equality in \eqref{FGLPVFAOrdering} follows from our assumption that $V\big(\cdot;\coefSGVecK{0}\big) = -\infty$. The relationship $V(s; \coefSGVecK{\bar{n}B}) \leq V^*(s)$ holds $\forall s \in \ensuremath{\mathcal{S}}$ and $\bar n \in \{1,\ldots,n\}$ by Part (i) of Theorem \ref{prop:ALP} because $\coefSGVecK{\bar{n}B}$ is feasible to $\mbox{FGLP}\programIndex{\bar{n}B}$ and thus also feasible to $\mbox{FALP}\programIndex{\bar{n}B}$. The inequalities of the type $V(s; \coefSGVecK{N-B}) \ \le \ V(s; \coefSGVecK{N})$ are directly implied by the self-guiding constraints \eqref{FALPConst2}. An important consequence of Proposition \ref{prop:SG-ALP-basic} is that Algorithm \ref{alg:sampledBasesALP} with FGLP generates a sequence of VFAs that gets (weakly) closer to $V^*$ at all states. As a result, the lower bound $\mathrm{LB}(\ensuremath{\boldsymbol{\beta}}_{\scaleto{\mathrm{N}}{4pt}})$ is non-decreasing with $N$ even when the state-relevance distribution $\nu$ is not equal to the initial-state distribution $\chi$, which is a property that cannot be guaranteed for the VFAs produced when solving FALP embedded in Algorithm \ref{alg:sampledBasesALP}. 
In addition, the VFAs of FGLP generated in two consecutive iterations with $N - B$ and $N$ random basis functions satisfy \[\sNorm{V(\coefSGVecK{N}) -V^*}_{1, \mu} \leq \sNorm{V(\coefSGVecK{N-B}) -V^*}_{1, \mu},\] for any proper distribution $\mu$ defined over the state space, and in particular, when $\mu$ is the state-visit distribution $\mu_{\chi}(\coefSGVecK{N})$ associated with the greedy policy ${\pi_g(\coefSGVecK{N})}$. Thus, for any fixed iteration index $\bar{n}$ and its corresponding greedy-policy-visit distribution $\mu_\chi(\coefSGVecK{\bar{n} B})$, it follows that the sequence of VFAs $V(\coefSGVecK{B}), V(\coefSGVecK{2B}), \ldots, V(\coefSGVecK{nB}),\ldots$ generated by Algorithm \ref{alg:sampledBasesALP} improves the worst-case performance bound of Proposition \ref{prop:worst case policy performance}, that is, $\sNorm{V(\coefSGVecK{n B}) -V^*}_{1, \mu_\chi(\coefSGVecK{\bar{n} B})}$ is non-increasing in $n$. This property tackles policy cost fluctuation but does not hold when using FALP. We analyzed the results from running Algorithm \ref{alg:sampledBasesALP} with FGLP on the simple example considered in \S\ref{sec:An Example of Policy Cost Fluctuation in R-ALPs}. As a result of the self-guiding constraints in FGLP, the worst-case bound on policy performance was roughly equal to 0.08 for both scenarios 1 and 2 of iteration 3, which is a significant improvement over the respective worst-case performance bounds of 0.99 and 9.98 when using FALP (see discussion in \S\ref{sec:An Example of Policy Cost Fluctuation in R-ALPs}). Interestingly, accompanying this worst-case bound improvement, the policy cost improved in both scenarios of iteration 3, unlike in Figure \ref{fig:cost-fluctuation}, which bodes well for the use of self-guiding constraints. We now discuss the implementation aspects of FGLP. 
First, it contains the continuum of self-guiding constraints \eqref{FALPConst2} in addition to the infinitely many constraints \eqref{FALPConst1} found in FALP (i.e., constraints that are standard in all ALP models). Finding a solution that is feasible to the latter constraints is a well-studied challenge (see, e.g., \citealp{lin2017ContViolLearning} and references therein) but needed to obtain a lower bound on the optimal policy cost using approximate linear programming. In contrast, the self-guiding constraints can be approximately satisfied, for example via constraint sampling, as their violation does not affect lower bound validity. In other words, the self-guiding constraints are computationally easier to handle than the standard constraints found in an ALP. As a result, we can employ known ALP solution techniques to tackle FGLP. Second, the self-guiding constraints in FGLP mitigate policy cost fluctuation using information from previous VFAs, in particular without requiring knowledge of the visit distribution of greedy policies or information regarding an optimal policy. \subsection{A Sampling Bound for FGLP}\label{sec:Convergence of Self-guided ALPs via an Orthogonal Decomposition} Studying the quality of the sequence of FGLP VFAs generated by Algorithm \ref{alg:sampledBasesALP} is challenging because consecutive VFAs in this sequence are coupled by the self-guiding ALP constraints \eqref{FALPConst2}. Given a VFA $V\big(s;\coefSGVecK{N}\big)$ generated by solving FGLP$\programIndex{N}$, we analyze the number of additional samples $H$ that would be needed to compute a VFA $V\big(s;\coefSGVecK{N + H}\big)$ that is ``close'' to $V^*(s)$, is feasible to constraints \eqref{FALPConst1}, and is near-feasible to constraints \eqref{FALPConst2}. 
The techniques used to obtain a sampling bound for FALP$\programIndex{N}$ in Theorem \ref{prop:ALP} (understandably) do not factor in the effect of $V\big(s;\coefSGVecK{N}\big)$ (please see Online Supplement \S\ref{ec:sec:Analyzing an FALP-based Sampling Bound for FGLP} for details) and thus do not provide a useful lower bound on $H$ for FGLP of the type described above. We therefore develop a new projection-based analysis to bound $H$. Before delving into details, we provide an informal description of the key idea underlying our analysis, which may be of independent interest in machine learning and optimization. Consider the set of functions spanned by an intercept plus a linear combination of $N$ random basis functions in set $\Phi_N := \{\varphi(\cdot;\theta_1), \varphi(\cdot;\theta_2),\ldots, \varphi(\cdot;\theta_N)\}$: \[ \mathcal{W}(\Phi_{N}) \coloneqq \bigg\{V(s;\ensuremath{\boldsymbol{\beta}}) = \beta_0 + \sum_{i=1}^{N} \beta_i \varphi(s;\theta_i) \ \Big | \ \ensuremath{\boldsymbol{\beta}} \in\mathbb{R}^{N+1} \bigg\}. \] A strategy to account for the impact of $V(s;\coefSGVecK{N})$ on $H$ is to ask if $V^*(s)$ is a part of the functional space $\mathcal{W}(\Phi_{N})$ containing $V(s;\coefSGVecK{N})$. If $V^* \in \mathcal{W}(\Phi_{N})$, then it would not be possible to improve the incumbent VFA $V(s;\coefSGVecK{N})$ via additional sampling. If $V^* \not \in \mathcal{W}(\Phi_{N})$, then $V^*$ intuitively has a (projected) component in the functional space $\mathcal{W}(\Phi_{N})$ as well as a nonzero (projected) component in the orthogonal complement of this space. We then bound the number of samples $H$ of random basis functions needed to approximate well the latter orthogonal component of $V^*$ using random basis functions. After the required $H$ basis functions have been sampled, we add the set $\Phi_{H}$ containing these samples to the existing set of samples $\Phi_{N}$. 
We then show that there exists a VFA in $\mathcal{W}(\Phi_{N} \cup \Phi_H)$ that is both near-feasible to FGLP$\programIndex{N+H}$ and approximates $V^*$ well. \looseness=-1 Formally carrying out the analysis described above requires decomposing $V^*(\cdot)$ into a component that belongs to $\mathcal{W}(\Phi_{N})$ and a residual in an orthogonal complement space. Such a decomposition is possible if we work in the functional space $\ensuremath{\mathcal{R}_C(\varphi,\rho)}$ because it is a closed subspace of a Hilbert space where an orthogonal decomposition is well defined (see Theorem 5.24 in \citealt{folland1999real}). Unfortunately, neither $V^*(\cdot)$ nor $\mathcal{W}(\Phi_{N})$ has the same representation as $\ensuremath{\mathcal{R}_C(\varphi,\rho)}$. This issue can be easily addressed for $V^*$ by instead analyzing the function $\ensuremath{V}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}$, which belongs to $\ensuremath{\mathcal{R}_C(\varphi,\rho)}$ and is $\varepsilon$-close to $V^*$. Such a function exists by virtue of Proposition \ref{prop:ELP-RKHS-gap}. Addressing the analogous concern for $\mathcal{W}(\Phi_{N})$ requires constructing an extension of this set as follows. Given a random basis function parameter vector $\theta_i$, define an active set that is a ball of radius $\alpha$ as \[\phi_{i,\alpha}(\theta) \coloneqq \frac{\rho(\theta)}{z_{\alpha}^i}\indicator{\lVert{\theta-\theta_{i}\rVert}_2\le\alpha},\] where $\indicator{\cdot}$ is an indicator function that evaluates to one if its argument is true and is zero otherwise, and $z_{\alpha}^i \coloneqq \int\indicator{\sNorm{\theta-\theta_{i}}_2\le\alpha} \rho(\diff \theta)$ is a normalizing constant that equals the probability mass assigned by $\rho$ to the $\alpha$-ball around $\theta_i$. 
The extension of $\mathcal{W}(\Phi_{N})$ is \begin{equation}\label{eq:orthogonal-sapce} \mathcal{W}_{\alpha}(\Phi_{N}) \coloneqq \bigg\{ V(s) = b_0 + \inprod{\ensuremath{\boldsymbol{b}}}{\varphi(s)} \ \Big | \ (b_0, \beta_1,\dots,\beta_N)\in\mathbb{R}^{N+1} \mbox{ and } \ensuremath{\boldsymbol{b}}(\theta) = \sum\limits_{i=1}^{N}{\beta_i}\phi_{i,\alpha}(\theta) \bigg\}, \end{equation} which we can show to be a subspace of a Hilbert space. Note that any function in $\mathcal{W}(\Phi_{N})$ defined by a finite vector $\ensuremath{\boldsymbol{\beta}} \in\mathbb{R}^{N+1}$ has a corresponding extension in ${\mathcal{W}}_{\alpha}(\Phi_{N})$ defined by the infinite-dimensional pair $(b_0(\ensuremath{\boldsymbol{\beta}}), \ensuremath{\boldsymbol{b}}(\ensuremath{\boldsymbol{\beta}}))$. This extension provides a bridge to test the richness of $\mathcal{W}(\Phi_{N})$ with respect to $\ensuremath{V}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}$, which belongs to $\ensuremath{\mathcal{R}_C(\varphi,\rho)}$ and has an associated $(b_0,\ensuremath{\boldsymbol{b}})$. Specifically, we can decompose $\ensuremath{V}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}$ into $V_\alpha^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}$ and $\prep{V}_\alpha^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}$, that is, $\ensuremath{V}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}} = V_\alpha^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}} + \prep{V}_\alpha^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}$, where $V_\alpha^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}$ and $\prep{V}_\alpha^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}$ are projections of $\ensuremath{V}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}$ onto ${\mathcal{W}}_{\alpha}(\Phi_{N})$ and its orthogonal complement (to be precise, projections are performed onto the closures of these sets). Based on this construction, Theorem \ref{thm:SG-ALP sampling bound} establishes our sampling bound related to FGLP. 
Let $(b_{0,\alpha}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}},\ensuremath{\coefInf_\alpha^{\raisemath{2pt}{\scaleto{\varepsilon}{3.5pt}}}})$ be such that $V_\alpha^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}} = b_{0,\alpha}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}} + \inprod{\ensuremath{\coefInf_\alpha^{\raisemath{2pt}{\scaleto{\varepsilon}{3.5pt}}}}}{\varphi(s)}$. Moreover, define the constant $\ensuremath{\Omega^\prime} \coloneqq 3\ensuremath{\mathrm{L}_\varphi} \sqrt{U_{\rho}} (\diamSState+1) \tallNorm{\ensuremath{\coefInf_\alpha^{\raisemath{2pt}{\scaleto{\varepsilon}{3.5pt}}}}}_{2,\rho} $, where $\ensuremath{\mathrm{L}_\varphi}$ is the Lipschitz constant associated with the class of random basis functions $\varphi$ and $U_{\rho}$ is the constant upper bound on the sampling distribution $\rho$ (see Assumption \ref{asm:random basis function}). For a given $\varepsilon> 0$, we define an $\varepsilon$-feasible solution to the self-guiding constraints as a vector $\ensuremath{\boldsymbol{\beta}}$ that violates constraints \eqref{FALPConst2} by at most $\varepsilon$. \begin{theorem} \label{thm:SG-ALP sampling bound} Given $\varepsilon>0$ and $\delta\in(0,1]$, suppose \[ \alpha \coloneqq \min\bigg\{ \frac{ \varepsilon}{\ensuremath{\Omega^\prime} \sqrt{N}}, \ \min_{i\ne j}\lVert{\theta_{i}-\theta_{j}\rVert}_2\bigg \}\] and \[ H \ge \Big\lceil 9\varepsilon^{-2} \tallNorm{\ensuremath{{\ensuremath{\boldsymbol{b}}}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}} -\ensuremath{\coefInf_\alpha^{\raisemath{2pt}{\scaleto{\varepsilon}{3.5pt}}}}}_{\infty,\rho} ^2 \ (\ensuremath{{\Omega}}+\ensuremath{{{\Delta}}_{{\delta}}})^2 \Big\rceil. 
\] Then with probability at least $1-\delta$, there exists a function in $\mathcal{W}(\Phi_{N} \cup \Phi_H)$ with associated vector $\ensuremath{\boldsymbol{\beta}} \in \mathbb{R}^{N+ H +1}$ that (i) is feasible to constraints \eqref{FALPConst1} of FGLP$\programIndex{N+H}$, (ii) is a ${4\varepsilon}/{(1-\gamma)}$-feasible solution to constraints \eqref{FALPConst2}, and (iii) satisfies \[ \tallNorm{V(\ensuremath{\boldsymbol{\beta}})- V^*}_{\infty} \ \leq \ \frac{4\varepsilon}{(1-\gamma)}. \] \end{theorem} This theorem establishes that after we add a number of random basis functions equal to the stated lower bound, the set $\mathcal{W}(\Phi_{N} \cup \Phi_H)$ will contain with high probability a VFA $V(\ensuremath{\boldsymbol{\beta}})$, where $\ensuremath{\boldsymbol{\beta}} \in \mathbb{R}^{N+ H +1}$, that is feasible to the standard ALP constraints, $4\varepsilon/(1-\gamma)$-feasible to the self-guiding constraints, and at most $4\varepsilon/(1-\gamma)$ away from $V^*$ in terms of the infinity-norm. It is easy to verify that this VFA is in fact an optimal solution to a version of FGLP$\programIndex{N+H}$ with only the self-guiding constraints \eqref{FALPConst2} relaxed by an amount $4\varepsilon/{(1-\gamma)}$. As discussed at the end of \S\ref{sec:Basic Properties of Self-guided ALPs}, computationally tackling the self-guiding constraints typically necessitates such a relaxation. Moreover, we can show that Algorithm \ref{alg:sampledBasesALP} terminates when solving such a relaxation under the same conditions used to establish the termination of this algorithm with FALP in Proposition \ref{prop:Algo1Convergence}. We now discuss properties of the lower bound on $H$. 
The quality of the VFA $V(s;\coefSGVecK{N})$ used to lower bound the FGLP$\programIndex{N+H}$ VFA in the self-guiding constraints \eqref{FALPConst2} is captured by the term $r(\alpha) := \tallNorm{\ensuremath{{\ensuremath{\boldsymbol{b}}}^{\raisemath{1pt}{\scaleto{\mathrm{FE}}{3.5pt}}}}-\ensuremath{\coefInf_\alpha^{\raisemath{2pt}{\scaleto{\varepsilon}{3.5pt}}}}}_{\infty,\rho}$ appearing in this bound. Intuitively, a small $r(\alpha)$ implies that the functional space $\mathcal{W}(\Phi_{N})$ containing $V(s;\coefSGVecK{N})$ is rich enough to closely approximate $\ensuremath{V}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}$ and thus $V^*$. In this case, the number of additional samples $H$ needed to obtain a good approximation of $V^*$ is smaller than when $V(s;\coefSGVecK{N})$ is a poorer approximation of $\ensuremath{V}^{\raisemath{1pt}{\scaleto{\varepsilon}{3.5pt}}}$. The dependence of the sampling lower bound on $\varepsilon$ is governed by the value of $\alpha$ used to create the extended function space $\mathcal{W}_{\alpha}(\Phi_{N})$. Suppose $\alpha = \min_{i\ne j}\lVert{\theta_{i}-\theta_{j}\rVert}_2$, which signals that the set of sampled random basis function parameters is densely packed. Then the term $r(\alpha)$ is independent of $\varepsilon$ and the lower bound on $H$ changes at the rate $\varepsilon^{-2}$, similar to the analogous bound for FALP in Theorem \ref{prop:ALP}. On the other hand, consider the case $ \alpha = { \varepsilon}/{\ensuremath{\Omega^\prime} \sqrt{N}} $. Here the lower bound on $H$ depends on $\varepsilon$ via both the term $\varepsilon^{-2}$ and the term $r(\alpha)$. The latter term can increase as $\varepsilon$ decreases and cause the lower bound to grow at a rate faster than $\varepsilon^{-2}$, highlighting that the dependence of the FGLP sampling bound on $\varepsilon$ is greater than the analogous dependence of the FALP sampling bound in this case. 
\section{Perishable Inventory Control} \label{sec:Perishable Inventory Control} In this section, we assess the performance of our methods and a benchmark on the perishable inventory control problem studied in \citet[henceforth abbreviated LNS]{lin2017ContViolLearning}. We present its infinite-horizon discounted cost MDP formulation and instances in \S\ref{sec:PIC-MDP}. We describe our experimental setup in \S\ref{sec:PIC-Computing-Lower-Bounds} and discuss numerical findings in \S\ref{sec:PIC-Observations}. \looseness=-1 \subsection{Discounted-cost MDP Formulation and Instances}\label{sec:PIC-MDP} Managing the inventory of a perishable commodity is a fundamental and challenging problem in operations management (\citealp{karaesmen2011PerishInv}, \citealp{chen2014pricingStrategiesForPerishableProducts}, \citealp{sun2014quadratic}, and LNS). We study a variant of this problem with partial backlogging and lead time from \S7.3 in LNS. Consider a perishable commodity with $l\ge0$ periods of lifetime and $L\ge0$ periods of ordering lead time. Ordering decisions are made over an infinite planning horizon. At each decision epoch, the on-hand and in-transit inventory levels are represented by the state vector $ s = (\onHandState{0},\onHandState{1},\dots,\onHandState{l-1},\pipeState{1},\pipeState{2},\dots,\pipeState{L-1}) $. The on-hand inventory $\onHandState{i}$ for $i = 0,1,\dots,l-1$ is the amount of available commodity with $i$ remaining periods of life. The in-transit inventory $\pipeState{i}$ for $i = 1,2,\dots,L-1$ is the previously ordered quantity that will be received $i$ periods from now. Inventories $\onHandState{i}$ and $\pipeState{j}$ take values in the interval $[0,\bar{a}]$ for all $i=1,\dots,l-1$ and $j=1,2,\dots,L-1$, respectively, where $\bar{a}\ge 0$ denotes the maximum ordering level. To understand the state $\onHandState{0}$, it is useful to consider the total on-hand inventory $\onHandState{0} + \sum_{i = 1}^{l-1}\onHandState{i}$. 
If $\onHandState{0} \in [-\sum_{i = 1}^{l-1}\onHandState{i}, \bar{a}]$, then the on-hand inventory is non-negative. Instead, if $\onHandState{0} < -\sum_{i = 1}^{l-1}\onHandState{i}$, then the on-hand inventory $\onHandState{0} + \sum_{i = 1}^{l-1}\onHandState{i}$ is negative and represents the amount of backlogged orders. Demand for the commodity is governed by a random variable $\tilde D$. In each period, we assume that demand realizes before order arrival and is satisfied in a first-in-first-out manner. Given a demand realization $D$, taking an ordering decision (i.e., action) $a$ from a state $s$ results in the system transitioning to a new state \looseness=-1 \[ s^\prime \coloneqq \bigg(\max\bigg\{\onHandState{1} - (D-\onHandState{0})_+ ,\ \underline{s}-\sum_{i=2}^{l-1}\onHandState{i}\bigg\}, \onHandState{2},\dots,\onHandState{l-1},\pipeState{1},\pipeState{2},\dots, \pipeState{L-1},a \bigg), \] where $(\cdot)_+ := \max\{\cdot,0\}$ and $\underline{s}\le 0$ is a maximum limit on the amount of backlogged orders, beyond which we treat unsatisfied orders as lost sales. The first element of $s^\prime$ can be understood as follows: If there was no backlogging limit, then the on-hand inventory after demand realization and before order arrival would be $\onHandState{1} - (D-\onHandState{0})_+ + \sum_{i=2}^{l-1}\onHandState{i}$; instead, in the presence of the maximum backlog limit $\underline{s}$, this total on-hand inventory of $\onHandState{1} - (D-\onHandState{0})_+ + \sum_{i=2}^{l-1}\onHandState{i}$ is greater than or equal to $\underline{s}$ if and only if $\onHandState{1} - (D-\onHandState{0})_+ \geq \underline{s} -\sum_{i=2}^{l-1}\onHandState{i}$. The remaining elements of $s^\prime$ are shifted elements of $s$, with the last element accounting for the latest order $a$. 
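The transition above can be transcribed directly into code. The sketch below (function and variable names are ours) handles the general $(l, L)$ case, with `s_min` standing for the backlog limit $\underline{s}$:

```python
import numpy as np

def next_state(s, a, D, l, L, s_min):
    """One-period perishable-inventory transition: state s, order a,
    demand realization D, lifetime l, lead time L, backlog limit s_min."""
    s = np.asarray(s, dtype=float)
    on_hand = s[:l]              # (q_0, ..., q_{l-1}); q_0 may be negative
    in_transit = s[l:l + L - 1]  # (p_1, ..., p_{L-1})
    unmet = max(D - on_hand[0], 0.0)  # demand not covered by oldest stock
    # Oldest surviving inventory after demand, floored by the backlog limit
    first = max(on_hand[1] - unmet, s_min - on_hand[2:l].sum())
    return np.array([first, *on_hand[2:l], *in_transit, a])

# Example with l = L = 2 (as in the instances below): s = (q_0, q_1, p_1)
print(next_state([5.0, 5.0, 5.0], a=4.0, D=7.0, l=2, L=2, s_min=-10.0))
```

For instance, with demand $D = 7$ the oldest stock of 5 leaves 2 units unmet, so the surviving inventory 5 is reduced to 3, the in-transit quantity shifts forward, and the new order 4 enters the pipeline.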
\looseness=-1 The immediate cost associated with a transition from a state-action pair $(s,a)$ is \begin{equation*} \resizebox{\hsize}{!}{% $ c(s,a) \coloneqq \gamma^L c_o a + \ensuremath{\mathbb{E}}_{D}\Bigg[c_h\bigg[ \sum\limits_{i=1}^{l-1}\onHandState{i} - (D-\onHandState{0})_+\bigg]_+ + c_d(\onHandState{0}-D)_+ + c_b\bigg[D- \sum\limits_{i=0}^{l-1} \onHandState{i}\bigg]_+ + c_l\bigg[\underline{s} +D -\sum\limits_{i=0}^{l-1} \onHandState{i}\bigg]_+\Bigg], $ } \end{equation*} where the expectation $\ensuremath{\mathbb{E}}_{D}$ is with respect to the demand distribution. The per-unit ordering cost $c_o\ge 0$ is discounted by $\gamma^L$ because we assume payments for orders are made only upon receipt. Holding cost $c_h\ge0$ penalizes leftover inventory $\big( \sum_{i=1}^{l-1}\onHandState{i} - (D-\onHandState{0})_+\big)_+$, while per unit disposal and backlogging costs $c_d\ge0$ and $c_b\ge0$ factor in, respectively, costs associated with disposing $(\onHandState{0}-D)_+$ units and backlogging $\big(D- \sum_{i=0}^{l-1} \onHandState{i}\big)_+$ units. Finally, each unit of lost sales $\big(\underline{s} +D -\sum_{i=0}^{l-1} \onHandState{i}\big)_+$ is charged $c_l\ge0$. 
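The expectation in $c(s,a)$ can be estimated by a sample average. The following sketch evaluates each cost term from the formula above; the parameter names mirror the text, and the single deterministic demand sample is purely for illustration:

```python
import numpy as np

def expected_cost(s, a, demand_samples, l, gamma, L,
                  c_o, c_h, c_d, c_b, c_l, s_min):
    """Sample-average approximation of the one-period cost c(s, a)."""
    D = np.asarray(demand_samples, dtype=float)
    pos = lambda x: np.maximum(x, 0.0)
    on_hand = np.asarray(s[:l], dtype=float)
    leftover = pos(on_hand[1:].sum() - pos(D - on_hand[0]))  # held units
    disposal = pos(on_hand[0] - D)                           # expired units
    backlog = pos(D - on_hand.sum())                         # backlogged units
    lost = pos(s_min + D - on_hand.sum())                    # lost sales
    per_sample = c_h * leftover + c_d * disposal + c_b * backlog + c_l * lost
    return gamma**L * c_o * a + per_sample.mean()

# Instance-1-style parameters with a single demand sample D = 7
c = expected_cost((5.0, 5.0, 5.0), a=4.0, demand_samples=[7.0], l=2,
                  gamma=0.95, L=2, c_o=20.0, c_h=2.0, c_d=5.0, c_b=10.0,
                  c_l=100.0, s_min=-10.0)
print(round(c, 3))
```

With this state and demand, only the ordering and holding terms are active: the discounted ordering cost is $0.95^2 \cdot 20 \cdot 4 = 72.2$ and the holding cost is $2 \cdot 3 = 6$.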
\looseness=-1 \begin{table}[t] \centering \caption{Parameters of the perishable inventory control instances.} \begin{scriptsize} \adjustbox{width=\textwidth}{ \begin{tabular}{ccc@{\hskip 18pt}ccccccccc@{\hskip 18pt}ccccccc} \hline {} & & {Instance} & {$c_o$} & {$c_h$} & {$c_d$} & {$c_b$} & {$\overline a$} & {$\underline s $} & {$\gamma$} & {\hspace{.05cm}} & % {Instance} & {$c_o$} & {$c_h$} & {$c_d$} & {$c_b$} & {$\overline a$} & {$\underline s $} & {$\gamma$} \\ \cline{3-10} \cline{12-19} \multirow{3}{*}{Small}& & {1} & {$20$}& {$2$} & {$5$} & {$10$} & {$10$} & {$-10$} & {$.95$} & {\hspace{0.05cm}} & {2} & {$20$}& {$2$} & {$5$} & {$10$} & {$10$} & {$-10$} & {$.99$} \\[-.1cm] {}& & {3} & {$20$}& {$5$} & {$10$} & {$8$} & {$10$} & {$-10$} & {$.95$} & {\hspace{0.05cm}} & {4} & {$20$}& {$5$} & {$10$} & {$8$} & {$10$} & {$-10$} & {$.99$} \\[-.1cm] {}& & {5} & {$20$}& {$2$} & {$10$} & {$10$} & {$10$} & {$-10$} & {$.95$} & {\hspace{0.05cm}} & {6} & {$20$}& {$2$} & {$10$} & {$10$}& {$10$} & {$-10$} & {$.99$} \\ % \multirow{2}{*}{Medium}& & {7} & {$20$}& {$2$} & {$10$} & {$10$} & {$30$} & {$-30$} & {$.95$} & {\hspace{0.05cm}} & {8} & {$20$}& {$2$} & {$10$} & {$10$} & {$30$} & {$-30$} & {$.99$} \\[-.1cm] {}& & {9} & {$16$}& {$5$} & {$8$} & {$8$} & {$30$} & {$-30$} & {$.95$} & {\hspace{0.05cm}} & {10} & {$16$}& {$5$} & {$8$} & {$8$} & {$30$} & {$-30$} & {$.99$} \\ % \multirow{3}{*}{Large}& & {11} & {$20$}& {$5$} & {$10$} & {$8$} & {$50$} & {$-50$} & {$.95$} & {\hspace{0.05cm}} & {12} & {$20$}& {$5$} & {$10$} & {$8$} & {$50$} & {$-50$} & {$.99$} \\[-.1cm] {}& & {13} & {$20$}& {$2$} & {$5$} & {$10$} & {$50$} & {$-50$} & {$.95$} & {\hspace{0.05cm}} & {14} & {$20$}& {$2$} & {$5$} & {$10$} & {$50$} & {$-50$} & {$.99$} \\[-.1cm] {}& & {15} & {$20$}& {$2$} & {$12$} & {$6$} & {$50$} & {$-50$} & {$.95$} & {\hspace{0.05cm}} & {16} & {$20$}& {$2$} & {$12$} & {$6$} & {$50$} & {$-50$} & {$.99$} \\ \hline \end{tabular}} \end{scriptsize} \label{table:PIC Instances} \end{table} We consider 
sixteen instances of the above MDP for our numerical experiments. Across all instances, we set $l=L=2$; fix the demand distribution to the truncated normal distribution with range $[0,10]$, a mean of $5$, and a standard deviation of $2$; and set the lost-sales cost $c_l$ to $100$. The remaining parameter values of each instance are summarized in Table \ref{table:PIC Instances}. We categorize instances with $\bar{a}$ equal to 10, 30, and 50 as small, medium, and large because the size of the state space increases with $\bar{a}$. Instances numbered $1$ through $8$ and $11$ through $14$ are identical to those considered in LNS, while we created the instances numbered 9, 10, 15, and 16 to add more cases with medium and large state spaces. \looseness=-1 \subsection{Computational Setup} \label{sec:PIC-Computing-Lower-Bounds} On each instance, we tested two versions of Algorithm \ref{alg:sampledBasesALP} by varying the input math program $\mathcal{M}\programIndex{N}$ to be either FALP$\programIndex{N}$ or FGLP$\programIndex{N}$. We choose the initial state distribution $\chi$ and the state-relevance distribution $\nu$ to both be degenerate with all their mass at $s=(5,5,5)$, consistent with LNS. We use Fourier random basis functions (see \S\ref{sec:Reformulating Exact Linear Program via RKHS} for a definition) with parameter $\sigma$ uniformly randomized over the interval $[10^2,10^3]$ (see \S 3 in \citealt{sarra2009random} for the use of such a strategy). We set the batch size $B$ equal to $10$ and ensure that the same sampled random basis functions are used in both FALP and FGLP at a given iteration. We terminate Algorithm \ref{alg:sampledBasesALP} using an optimality gap of $5\%$ (i.e., $\tau=0.05$) or if we have already sampled $200$ basis functions. \looseness=-1 Solving FALP and FGLP requires computing expectations that do not have an analytical form as well as dealing with infinitely many constraints. 
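The instances above fix demand to a normal distribution truncated to $[0,10]$. One minimal, numpy-only way to draw such demand samples is rejection sampling, as in the sketch below (this is an illustrative assumption; any truncated-normal sampler would do):

```python
import numpy as np

def truncated_normal(rng, mean, sd, low, high, size):
    """Draw samples from normal(mean, sd) truncated to [low, high]
    via rejection sampling (adequate when truncation is mild)."""
    out = np.empty(0)
    while out.size < size:
        draw = rng.normal(mean, sd, size=2 * size)
        out = np.concatenate([out, draw[(draw >= low) & (draw <= high)]])
    return out[:size]

rng = np.random.default_rng(7)
D = truncated_normal(rng, mean=5.0, sd=2.0, low=0.0, high=10.0, size=5000)
print(D.min() >= 0.0, D.max() <= 10.0)
```

Because the truncation interval is symmetric around the mean here, the sample mean stays close to 5.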
We approximate all expected values in these models using sample average approximations with $5\times{10}^3$ independent and identically distributed demand samples. We use constraint sampling (\citealp{farias2004constraintSampling}) to replace the continuum of constraints by a finite collection. Specifically, we sample $5\times{10}^4$, $8\times{10}^4$, and $10^5$ state-action pairs from a uniform distribution over the state-action space for the small, medium, and large instances, respectively. We only include FALP constraints corresponding to these sampled state-action pairs. We construct FGLP constraints \eqref{FALPConst1} similarly and consider the self-guiding constraints \eqref{FALPConst2} for all states that are part of the sampled state-action pairs. Although constraint sampling is easy to implement, the optimal objective function values of the sampled approximations of FALP and FGLP may not yield valid lower bounds on the optimal policy cost. This is a known issue when using constraint sampling with ALP, as also discussed in LNS, where they notice that a sampled version of ALP does not yield valid lower bounds for the perishable inventory control application. To circumvent this problem, we embed the VFA from FALP or FGLP in a simulation procedure to estimate valid lower bounds. This procedure is a novel application of the saddle point formulation underlying the constraint violation-learning approach in LNS to obtain a valid lower bound when using constraint sampling. We discuss it in detail in Online Supplement \ref{ec:sec:A Lower Bound Estimator for Constraint-sampled ALPs}.\looseness=-1 To compute the greedy policies associated with FALP and FGLP, we need to solve the optimization problem \eqref{eqn:GreedyOpt} with their respective VFAs. We solve a discretized version of this problem with action space $[0,\bar{a}]$ approximated by $\bar{a}$ equally spaced points, similar to LNS. Then we simulate the policy starting from state $s=(5,5,5)$ to estimate the policy cost. 
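The lower-bound issue with constraint sampling can be seen on a tiny, entirely hypothetical discounted MDP. In the sketch below the basis is one-hot, so the fully constrained ALP recovers the optimal value function exactly; dropping constraints enlarges the feasible region and can push the ALP objective above the valid lower bound. This assumes scipy is available and keeps one constraint per state so the sampled LP stays bounded:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
nS, nA, gamma = 6, 3, 0.9
c = rng.uniform(0.0, 1.0, (nS, nA))              # stage costs c(s, a)
P = rng.dirichlet(np.ones(nS), size=(nS, nA))    # transition rows P(.|s, a)

def solve_alp(pairs):
    """ALP with a one-hot basis, keeping only the listed (s, a) constraints:
    maximize sum_s v(s)  s.t.  v(s) - gamma * P(.|s, a) @ v <= c(s, a)."""
    A_ub = np.array([np.eye(nS)[s] - gamma * P[s, a] for s, a in pairs])
    b_ub = np.array([c[s, a] for s, a in pairs])
    res = linprog(-np.ones(nS), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * nS)
    assert res.success
    return res.x

all_pairs = [(s, a) for s in range(nS) for a in range(nA)]
# Keep one randomly chosen constraint per state so the sampled LP is bounded
sampled = [(s, int(rng.integers(nA))) for s in range(nS)]

v_full = solve_alp(all_pairs)  # equals V* here, a valid lower bound
v_samp = solve_alp(sampled)    # can exceed V*: constraints were dropped
print(float(np.sum(v_samp) - np.sum(v_full)))
```

With only the sampled constraints, the LP evaluates the cost of the randomly selected policy, which dominates the optimal value function componentwise, so the sampled objective can overshoot the valid bound.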
The maximum standard errors of FALP and FGLP lower bound estimates were $1.18\%$ and $0.59\%$, respectively. Analogous standard error maxima for policy cost estimates were 1.61\% and 1.33\%.\looseness=-1 \begin{table}[h!] \centering \caption{Optimality gaps on perishable inventory control instances.} \adjustbox{max width=\textwidth}{\renewcommand{\arraystretch}{1.1} \begin{tabular}{cccccccccccccccccc} \hline \multirow{2}{*}{Instance} & \multirow{2}{*}{LNS} & \multicolumn{3}{c}{FALP} & & \multicolumn{3}{c}{FGLP} & \multirow{2}{*}{Instance} & \multirow{2}{*}{LNS} & \multicolumn{3}{c}{FALP} & & \multicolumn{3}{c}{FGLP} \\ {} & {} & {Min} & {Median} & {Max} & & {Min} & {Median} & {Max} & {} & {} & {Min} & {Median} & {Max} & & {Min} & {Median} & {Max}\\ \cline{3-5} \cline{7-9} \cline{12-14} \cline{16-18} {1} & {4.01} & {1.22} & {2.72} & {3.95} & & {0.97} & {2.67} & {3.87} & {9} & {9.7} & {1.94} & {3.51} & {4.72} & & {3.71} & {4.75} & {4.99}\\ {2} & {6.44} & {0.87} & {1.92} & {3.72} & & {0.81} & {1.97} & {3.73} & {10}& {9.4} & {1.54} & {3.79} & {4.89} & & {2.30} & {3.21} & {4.24}\\ {3} & {7.96} & {0.79} & {2.21} & {4.86} & & {0.55} & {2.19} & {4.86} & {11} & {11.73} & {1.45} & {3.36} & {4.97} & & {1.92} & {3.88} & {4.98}\\ {4} & {2.17} & {0.11} & {0.83} & {4.13} & & {0.05} & {0.77} & {3.90} & {12}& {11.01} & {1.29} & {4.28} & {9.07} & & {1.12} & {4.06} & {4.94}\\ {5} & {2.40} & {1.94} & {2.79} & {3.46} & & {1.91} & {2.63} & {3.73} & {13}& {4.40} & {1.23} & {4.42} & {88.59} & & {0.63} & {2.33} & {4.40}\\ {6} & {3.60} & {1.32} & {3.19} & {4.86} & & {1.23} & {3.31} & {4.87} & {14}& {2.17} & {1.02} & {3.56} & {4.10} & & {0.07} & {3.06} & {4.55}\\ {7} & {6.14} & {3.29} & {3.95} & {4.87} & & {3.37} & {4.48} & {4.76} & {15}& {14.4} & {2.19} & {3.91} & {77.03}& & {0.45} & {3.26} & {4.92}\\ {8} & {9.14} & {3.26} & {4.54} & {4.97} & & {3.89} & {4.59} & {4.90} & {16}& {14.8} & {2.17} & {3.69} & {4.58} & & {0.74} & {3.93} & {4.81}\\ \hline \end{tabular}} 
\label{tab:BoundsCompareWithLNS} \end{table} As a benchmark, we use the lower bound and policy cost from LNS for instances 1--8 and 11--14. These bounds are based on an ALP model with basis functions tailored to the MDP in \S\ref{sec:PIC-MDP} that is solved using a primal-dual solution technique. We had access to the corresponding code from LNS, which we used to generate bounds on the new instances 9, 10, 15, and 16. \looseness=-1 \subsection{Results}\label{sec:PIC-Observations} Table \ref{tab:BoundsCompareWithLNS} reports three optimality gaps. The first is defined using the lower bound and policy cost from the LNS approach, that is, 100 times ([LNS policy cost] - [LNS lower bound])/[LNS policy cost]. The second and third correspond to FALP and FGLP, specifically the final optimality gaps computed in Step (vi) of Algorithm \ref{alg:sampledBasesALP} when embedding these math programs. Unlike LNS, FALP and FGLP, which are based on random basis functions, have a distribution of optimality gaps because we resolve each instance ten times to assess the impact of basis function sampling variability. We thus report the minimum, median, and maximum of these distributions. The FALP and FGLP median optimality gaps are smaller than the corresponding LNS optimality gaps, except for instances 5 and 14, where the LNS optimality gap is marginally better. The largest LNS optimality gap is roughly 12\%, whereas the maximum FALP and FGLP optimality gaps are about 89\% and 5\%, respectively. Thus, FGLP significantly improves upon LNS. Moreover, it is encouraging that this improvement can be achieved without engineering basis functions using application-specific knowledge as done in LNS. \looseness=-1 \begin{figure}[h!] 
\centering \caption{\scriptsize Comparison of the lower bounds and the policy costs from FALP and FGLP relative to their respective LNS values.} \includegraphics[width=\linewidth]{figs/compareUB_LB} \label{fig:compareUB_LB} \vspace{-18pt} \end{figure} Figure \ref{fig:compareUB_LB} explores the source of the optimality gap improvements just discussed. It reports in its top and bottom panels, respectively, the percentage ratios ([FALP (or FGLP) policy cost] - [LNS policy cost])/[LNS lower bound] and ([FALP (or FGLP) lower bound] - [LNS lower bound])/[LNS lower bound] corresponding to the terminal bounds. Specifically, the panels report the distributions of these percentages across the ten trials on each instance via box plots (which, as usual, display the minimum, maximum, median, and 25th and 75th percentiles). Both the FALP and FGLP policy costs improve on the LNS policy cost significantly on instances 2, 3, 7, 9, 10, 11, 12, and 16, with FGLP also improving on LNS on instances 13 and 15. The median policy costs of FALP and FGLP improve on that of LNS by up to 14\%. The lower bounds from FALP and FGLP also dominate LNS on instances 1, 2, 4, 5, 6, 8, 9, 10, and 16, with median improvements as large as 7\%. These results show that the optimality gap improvements in Table \ref{tab:BoundsCompareWithLNS} of FALP, and in particular FGLP, are the result of improving the LNS operating policies (i.e., policy costs) significantly on some instances and generating tighter lower bounds on others, which could be useful to obtain more accurate optimality gaps for other heuristic operating policies. Figure \ref{fig:compareUB_LB} shows that the lower bounds and policy costs from FALP and FGLP are similar on the small and medium-sized instances but differ more significantly on the large instances. \looseness=-1 Next we focus on understanding the relative performance of FGLP and FALP on the large instances. 
To this end, we display two statistics in Figure \ref{fig:picjumpbasisdist} related to the variability of the FALP and FGLP policy costs as a function of the number of iterations (unlike Figure \ref{fig:compareUB_LB}, which focuses on the terminal bound values). The first statistic, which we refer to as the policy cost fluctuation percentage, is a count of the number of iterations in Algorithm \ref{alg:sampledBasesALP} where the policy cost worsens relative to its immediately preceding iteration. We express this count as a percentage of the total number of iterations. The left panel of Figure \ref{fig:picjumpbasisdist} displays box plots of this statistic, where it is apparent that such fluctuation is significantly more frequent for FALP than FGLP. The median policy cost fluctuation percentage is less than 22\% and 6\% for FALP and FGLP, respectively. To understand the extent of this fluctuation, we consider as a second statistic the fluctuation magnitude, that is, the average of the magnitude by which the policy cost worsens over iterations that exhibit a policy cost fluctuation. The right panel of Figure \ref{fig:picjumpbasisdist} reports the distribution of the policy cost fluctuation magnitude across the ten trials. The median of this statistic is zero for FGLP across all the large instances but is nonzero for FALP on instances 12, 13, 15, and 16. The maximum of the fluctuation magnitude for FGLP is substantially smaller than the corresponding maximum for FALP. In particular, the FGLP fluctuation magnitude is negligible in most cases, while for FALP the magnitude of policy fluctuation is large on a significant number of trials across instances. In addition to shedding some light on the relative performance of FALP and FGLP, these results also provide numerical support for the ability of self-guiding constraints in FGLP to effectively mitigate policy cost fluctuation. 
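The two fluctuation statistics can be computed directly from a sequence of per-iteration policy costs, as in this sketch (here we take the percentage relative to the number of iteration-to-iteration comparisons; the cost sequence is hypothetical):

```python
import numpy as np

def fluctuation_stats(costs):
    """Policy cost fluctuation percentage and average worsening magnitude
    for a sequence of per-iteration policy costs (lower cost is better)."""
    diffs = np.diff(np.asarray(costs, dtype=float))
    worsened = diffs > 0          # iterations where the policy cost worsens
    pct = 100.0 * worsened.sum() / diffs.size
    magnitude = diffs[worsened].mean() if worsened.any() else 0.0
    return pct, magnitude

pct, mag = fluctuation_stats([10.0, 9.0, 9.5, 8.0, 8.4, 7.9])
print(pct, mag)
```

In the example, the cost worsens in 2 of 5 transitions (40%), by 0.5 and 0.4 units, giving an average fluctuation magnitude of 0.45.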
\looseness=-1 \begin{figure}[t] \centering \caption{\scriptsize Distributions of the policy cost fluctuation percentage (left panel) and magnitude (right panel).} \includegraphics[width=\linewidth]{figs/PIC_jump_basis_dist} \label{fig:picjumpbasisdist} \end{figure} \begin{figure}[t] \vspace{-30pt} \centering \caption{\scriptsize Median (left panel) and maximum (right panel) number of random basis functions sampled by Algorithm \ref{alg:sampledBasesALP} with FALP and FGLP.} \includegraphics[width=\linewidth]{figs/FALP_FGLP_median_max} \label{fig:falpfglpmedianmax} \vspace{-18pt} \end{figure} Finally, we discuss the average CPU times taken by FALP and FGLP on the large instances, which are the most time-consuming to solve. The average FALP and FGLP solve times per iteration are 8 minutes and 11 minutes, respectively. The larger per-iteration time for FGLP is expected since it has more constraints than FALP. Nevertheless, the average CPU time of 35 minutes to execute Algorithm \ref{alg:sampledBasesALP} with FGLP is smaller than the 42 minutes it takes on average when this algorithm embeds FALP. This can be explained by Algorithm \ref{alg:sampledBasesALP} requiring fewer iterations to terminate, in most cases, when embedding FGLP. Note that fewer iterations imply a smaller number of random basis functions (which equals the number of iterations multiplied by $B$). Figure \ref{fig:falpfglpmedianmax} reports the median and maximum of the number of basis functions required by Algorithm \ref{alg:sampledBasesALP}. The maximum statistic is important in determining the total CPU time as it corresponds to the largest linear programs being solved. The computational burden of LNS to solve ALP and simulate bounds is 14 minutes, which is roughly half the time taken when using FGLP. This difference is because LNS solves a single ALP, while we solve multiple ALPs in Algorithm \ref{alg:sampledBasesALP}. 
In other words, there is a computational cost of using our application-agnostic basis function generation scheme over the LNS approach based on tailored basis functions. On the other hand, it is unclear how one can engineer basis functions in the LNS framework in a principled manner to obtain the better FGLP optimality gaps seen in Table \ref{tab:BoundsCompareWithLNS}. \looseness=-1 \section{Generalized Joint Replenishment} \label{sec:Generalized Joint Replenishment} In this section, we test the effectiveness of ALPs with random basis functions for solving a generalized joint replenishment (GJR) problem. In \S\ref{sec:Semi-MDP Formulation}, we describe GJR and its average-cost semi-MDP formulation from \citet[henceforth abbreviated AK]{adelman2012GJR}. In \S\ref{sec:GJR-computational-configuration}, we summarize our methods, as well as an adaptive basis function generation approach and a set of instances, both from AK. In \S\ref{sec:GJR-Observations}, we discuss our numerical findings. \looseness=-1 \subsection{Average-cost Semi-MDP Formulation}\label{sec:Semi-MDP Formulation} The GJR problem involves the replenishment of a collection of products that are consumed at a fixed and deterministic rate and are coupled via a shared replenishment capacity \citep{adelman2012GJR}. We describe the average-cost semi-MDP formulation for this problem from AK. Consider managing the replenishment of inventories across $J$ products, indexed by the set $\{1,2,\ldots,J\}$, over a continuous time horizon. Each product $j$ is consumed at a finite and deterministic rate $\lambda_j >0$, and we denote by $\ensuremath{\boldsymbol{\lambda}} = (\lambda_1,\lambda_2,\ldots,\lambda_J)$ the vector of these rates. A state vector $s = (s_1,s_2,\dots,{s}_{J})$ encodes the inventory levels of these items, all measured in normalized units, where each component $s_j$ is non-negative for all $j \in \{1,2,\ldots,J\}$. 
A zero value for the $j$-th state component signals that the $j$-th item is stocked out. Since the replenishment time can be postponed if no item is currently stocked out, it can be assumed that at least one item has zero inventory in the state. Thus, the state space is given by $\ensuremath{\mathcal{S}} \coloneqq \{s: 0\le s \le \bar{s}, \ s_j=0 \text{ for some } j \in \{1,2,\ldots,J\} \}$, where $\bar{s}\in (0,\infty)^J$ is a vector of maximum inventory levels. The replenishment decision is specified by $a\in \mathbb{R}_+^J$. This decision at a given state $s\in\ensuremath{\mathcal{S}}$ belongs to $\ensuremath{{\mathcal{A}}_{s}} \coloneqq \big\{a\in \mathbb{R}_+^J: s+a \le \bar{s}, \ \sum_{j=1}^{J}a_j\le \bar{a}\big\}$. Here $\bar{a} \in \mathbb{R}_+$ denotes a capacity constraint on the total amount of joint replenishment. The immediate cost $c(s,a)$ of an action $a$ at state $s$ has (i) a fixed component $c_{\mathrm{supp}(a)}$ that depends on the set of items replenished $\mathrm{supp}(a) := \{j \in \{1,\ldots,J\}|a_j > 0\}$, and (ii) a variable holding cost component $\sum_{j=1}^{J} (2s_ja_j + a_j^2)h_j/(2\lambda_j)$, where $h_j$ denotes the holding cost per unit per unit of time. Since the usage rate is deterministic, the time until the next replenishment is $T(s,a) := \min_j\{(s_j + a_j)/\lambda_j\}$, and the system transitions to a new state $s' = s + a - T(s,a)\ensuremath{\boldsymbol{\lambda}}$. To find a deterministic and stationary policy $\pi:\ensuremath{\mathcal{S}}\mapsto\ensuremath{{\mathcal{A}}_{s}}$, one can in theory solve the semi-MDP optimality equations (see, e.g., Theorem 10.3.6 in \citealt{hernandez1999further}) \begin{equation}\label{eqn:AvgCostOpt} u(s) = \inf_{a\in\ensuremath{{\mathcal{A}}_{s}}}\{c(s,a) - \eta T(s,a) +u(s^\prime) \}, \qquad \forall s\in\ensuremath{\mathcal{S}}, \end{equation} where $\eta \in \mathbb{R}$ denotes the long-run optimal average cost and $u(\cdot)$ is a bias function that captures state-dependent transient costs. 
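The deterministic GJR dynamics are a direct transcription of $T(s,a)$ and the transition expression; the sketch below (names are ours, with a hypothetical three-product example) also confirms that the next state lies back in the state space, i.e., some component is zero:

```python
import numpy as np

def gjr_transition(s, a, lam):
    """Time T(s, a) to the next replenishment epoch and the resulting
    state s' = s + a - T(s, a) * lam under deterministic consumption."""
    s, a, lam = (np.asarray(x, dtype=float) for x in (s, a, lam))
    T = np.min((s + a) / lam)      # first item to stock out
    s_next = s + a - T * lam
    return T, s_next

# Hypothetical 3-product example
T, s_next = gjr_transition(s=[0.0, 4.0, 6.0], a=[5.0, 0.0, 0.0],
                           lam=[1.0, 2.0, 1.0])
print(T, s_next)
```

Here product 2 stocks out first after $T = 2$ time units, and the next state $(3, 0, 4)$ again has a zero component, as required by the definition of $\mathcal{S}$.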
The action prescribed at a state $s \in \ensuremath{\mathcal{S}}$ by an optimal deterministic and stationary policy can be found by solving the infimum on the right-hand side of \eqref{eqn:AvgCostOpt}. Moreover, the optimality equations \eqref{eqn:AvgCostOpt} have the following linear programming representation \begin{align} \sup_{(\eta',u') \in \mathbb{R} \times \mathcal{C}} \ \ & \eta' \label{AvgELPObj} \\ & \eta' T(s,a) + u'(s) - u'(s^\prime) \le c(s,a), && \forall (s,a) \in\ensuremath{\sSpace\times\mathcal{A}_s}.\label{AvgELPConst} \end{align} This infinite linear program is the average-cost analogue of ELP (see \S\ref{section:Optimality Equation and an Exact Linear Program}). However, note that there is no need to specify a state-relevance distribution such as $\nu$ in this case. \subsection{Methods and Instances}\label{sec:GJR-computational-configuration} Solving the infinite linear program \eqref{AvgELPObj}-\eqref{AvgELPConst} is intractable for the same reasons as ELP. AK thus replace the bias function $u(s)$ by an approximation to obtain an ALP. Their approximation has a (static) affine component $\beta_0 - \sum_{j=1}^{J} \beta_{1,j} s_j$ and an adaptive component $\sum_{i=1}^I\beta_{2,i}f^i(r^i s)$ with $I$ terms, where $f^i:\mathbb{R}\mapsto\mathbb{R}$ is a piecewise linear ridge function and $r^i \in \mathbb{R}^J$ is a ridge vector. Putting these two components together gives the bias function approximation \[ u(s;\ensuremath{\boldsymbol{\beta}}) \coloneqq \beta_0 - \sum_{j=1}^{J} \beta_{1,j} s_j - \sum_{i=1}^I \beta_{2,i}f^i(r^is). \] They also approximate $\eta$ in \eqref{AvgELPObj}-\eqref{AvgELPConst}, which is not needed for tractability but facilitates managerial interpretation. 
This approximation is $\eta(\ensuremath{\boldsymbol{\lambda}}) = \hat{\eta} + \sum_{j=1}^{J}\beta_{1,j}\lambda_j$, where $\hat{\eta}$ is an intercept, the $\beta_{1,j}$ can be interpreted as marginal values associated with each item, and $\ensuremath{\boldsymbol{\lambda}} := (\lambda_1,\lambda_2,\ldots,\lambda_J)$. We refer to the resulting approximation of \eqref{AvgELPObj}-\eqref{AvgELPConst} as the ridge linear program (RLP). AK approach the solution of RLP using constraint generation, which involves solving mixed integer linear programs. In addition, they dynamically generate the ridge basis functions in the bias approximation via an approximation algorithm that exploits the policy structure in the GJR application. We implemented RLP as a benchmark following the details in AK. To study the effectiveness of random basis functions in this context, we derive an average-cost FGLP analogue starting from the exact linear program \eqref{AvgELPObj}-\eqref{AvgELPConst}. To be consistent with AK, we use the same approximation $\eta(\ensuremath{\boldsymbol{\lambda}})$ for $\eta$ and replace the bias function $u(s)$ by \begin{equation}\label{eq:biasApproximation} u(s;\ensuremath{\boldsymbol{\beta}}) := \beta_0 - \sum_{j=1}^{J} \beta_{1,j} s_j - \sum_{i=1}^{N}\beta_{2,i}\varphi(s;\theta_{i}), \end{equation} where the adaptive basis function component in the RLP bias function approximation has been substituted with random basis functions. We select $\varphi(s;\theta)$ to be random stumps defined using the $\mathrm{sgn}(\cdot)$ function, which returns $-1$, $0$, or $1$ according to whether its argument is negative, zero, or positive. Specifically, $\varphi(s;\theta) = \mathrm{sgn}(s_q - \omega)$, where $\theta=(q,\omega)$, $q$ is a random index uniformly distributed over the set $\{1,2,\ldots,J\}$, and $\omega$ is uniformly distributed in the interval $[-\sigma,\sigma]$. We uniformly randomize the choice of $\sigma$ over the interval $[1,\max_{j} \bar s_j]$.
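The random-stump sampling just described is inexpensive to implement; the sketch below is our own minimal rendering of it (0-based indices, helper names ours), not code from the paper.

```python
import numpy as np

def sample_stump_theta(J, s_bar_max, rng):
    """Sample theta = (q, omega) for one random stump basis function."""
    sigma = rng.uniform(1.0, s_bar_max)   # randomized threshold range [1, max_j s_bar_j]
    q = int(rng.integers(J))              # coordinate index, uniform over {0, ..., J-1}
    omega = rng.uniform(-sigma, sigma)    # threshold, uniform on [-sigma, sigma]
    return q, omega

def stump(s, theta):
    """Evaluate phi(s; theta) = sgn(s_q - omega), taking values in {-1, 0, 1}."""
    q, omega = theta
    return int(np.sign(s[q] - omega))
```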
For this class of random basis functions, we show in Online Supplement \ref{ec:sec:FALP and FGLP Setup for Generalized Joint Replenishment} that FGLP can be solved using constraint generation, where the separation problem to identify a violated constraint is a mixed integer linear program. Our setups of RLP and FGLP thus differ mainly in how adaptive basis functions are generated. In the former approach, ridge basis functions are generated via an application-specific approximation algorithm, whereas in the latter case we sample random stump basis functions. The difficulty of generating lower bounds and policy costs using the approximations from RLP or FGLP is similar. We provide details in Online Supplement \ref{ec:sec:FALP and FGLP Setup for Generalized Joint Replenishment} and summarize the key ideas here. Since RLP and FGLP are solved via constraint generation, the approximation $\eta(\ensuremath{\boldsymbol{\lambda}})$ can be shown to provide a lower bound on the optimal policy cost. To obtain a policy cost estimate, we can simulate the policy whose action at state $s$ is obtained by (i) replacing $\eta$ and $u(\cdot)$ in the right hand side of \eqref{eqn:AvgCostOpt} by $\eta(\ensuremath{\boldsymbol{\lambda}})$ and $u(\cdot;\ensuremath{\boldsymbol{\beta}})$, respectively, and (ii) solving the resulting optimization problem.
\begin{table}[t] \centering \caption{Parameters of the GJR instances.} \begin{tiny} \adjustbox{width=\textwidth}{\renewcommand{\arraystretch}{1.1} \begin{tabular}{ccccc@{\hskip 15pt}ccccc@{\hskip 15pt}cccc} \hline {\begin{tabular}{@{}c@{}}AK Instance \\ Index\end{tabular}} & {$J$} & {$\bar{s}$} & {$z$} & {}& {\begin{tabular}{@{}c@{}}AK Instance \\ Index\end{tabular}} & {$J$} & {$\bar{s}$} & {$z$} & {}& {\begin{tabular}{@{}c@{}}AK Instance \\ Index\end{tabular}} & {$J$} & {$\bar{s}$} & {$z$}\\ \cline{1-4}\cline{6-9} \cline{11-14} {2} & {$4$} & {Random} & {100} & {} &{6} & {$4$} & {Discrete} & {100} & {} & {9} & {$6$} & {Random} & {100}\\ {14} & {$6$} & {Discrete} & {67} & {} &{15} & {$6$} & {Discrete} & {100} & {} & {18} & {$8$} & {Random} & {75}\\ {19} & {$8$} & {Random} & {100} & {} &{22} & {$8$} & {Constant} & {75} & {} & {23} & {$8$} & {Constant} & {100}\\ {25} & {$8$} & {Discrete} & {50} & {} &{26} & {$8$} & {Discrete} & {75} & {} & {27} & {$8$} & {Discrete} & {100}\\ {32} & {$10$} & {Random} & {100} & {} &{35} & {$10$} & {Constant} & {60} & {} & {36} & {$10$} & {Constant} & {80}\\ {37} & {$10$} & {Constant} & {100} & {} &{41} & {$10$} & {Discrete} & {80} & {} & {42} & {$10$} & {Discrete} & {100}\\ \hline \end{tabular} }\end{tiny} \label{table:GJR Instances} \end{table} For testing, we compare the optimality gaps from RLP and FGLP on the GJR instances in AK, whose test bed contains instances with and without holding costs. AK find that the instances without holding costs are the ones where adaptively adding basis functions on top of the affine bias function approximation has a significant impact. We thus focus on these instances, and in particular on a subset of 18 instances where the lower bound improves by at least $2\%$ as a result of ridge basis function generation in RLP. In Table \ref{table:GJR Instances}, we summarize the considered GJR instances, also indicating the index of each instance in Table 2 of AK.
The number of items ($J$) in these instances is $4, 6, 8,$ or $10$. The usage rate $\lambda_j$ is distributed uniformly in the interval $[0,10]$. The vector of maximum inventory levels $\bar{s}$ is chosen based on two random variables $u_j$ and $\alpha_j$ associated with each item $j \in \{1,2,\ldots,J\}$ that are distributed uniformly over $[0,1]$ and $\{2,4,8\}$, respectively. These random variables are independent across items. The $j$-th bound $\bar{s}_j$ on the inventory level is defined in three ways, labeled ``random'', ``constant'', and ``discrete'', as $\bar{s}_j = 10\lambda_j u_j + \lambda_j$, $\bar{s}_j = \sum_{k=1}^{J}\lambda_k(u_k + \nicefrac{1}{J})$, and $\bar{s}_j=\alpha_j \sum_{k=1}^{J} \lambda_k(u_k + \nicefrac{1}{J})$, respectively. The joint replenishment capacity $\bar{a}$ is set equal to the sum of the smallest $z\%$ of the storage limits $\bar{s}_j$, $j=1,2,\dots,J$, where $z$ varies in the set $\{50,60,67,75,80,100\}$ across instances. The immediate cost takes the form $c(s,a) = c_{\mathrm{supp}(a)} = c^{\prime} +\sum_{j\in\mathrm{supp}(a)} c^{\prime\prime}_j$, where $c^{\prime}\ge0$ and $c^{\prime\prime}_j\ge0$ are constant and item-specific fixed costs, respectively. AK set $c^{\prime} = 100$ and sample $c^{\prime\prime}_j$ from a uniform distribution over the range $[0,60]$. \subsection{Results}\label{sec:GJR-Observations} We implemented adaptive basis function generation in RLP following the procedure described in AK, and for FGLP we used the framework of Algorithm \ref{alg:sampledBasesALP} with ten new basis functions added at every iteration (i.e., $B = 10$). As a termination criterion for Algorithm \ref{alg:sampledBasesALP}, we set an optimality gap tolerance of 2\% (i.e., $\tau = 0.02$) and chose run time limits of 1, 2, 3, and 4 hours for instances with $J$ equal to 4, 6, 8, and 10 items, respectively.
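As a concrete illustration, the snippet below samples one instance following our reading of this generation scheme (in particular, we interpret the capacity rule as summing the $\mathrm{round}(zJ/100)$ smallest storage limits); the function name and structure are ours.

```python
import numpy as np

def make_instance(J, rule, z, rng):
    """Sample one GJR instance (illustrative sketch; 'rule' is one of
    'random', 'constant', 'discrete')."""
    lam = rng.uniform(0, 10, J)             # usage rates
    u = rng.uniform(0, 1, J)
    alpha = rng.choice([2, 4, 8], J)
    base = np.sum(lam * (u + 1.0 / J))
    if rule == "random":
        s_bar = 10 * lam * u + lam
    elif rule == "constant":
        s_bar = np.full(J, base)
    else:                                   # "discrete"
        s_bar = alpha * base
    k = int(round(z * J / 100))             # number of limits entering the capacity
    a_bar = np.sum(np.sort(s_bar)[:k])      # sum of the k smallest storage limits
    c_fixed, c_item = 100.0, rng.uniform(0, 60, J)
    return lam, s_bar, a_bar, c_fixed, c_item
```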
For each instance specification in Table \ref{table:GJR Instances}, we generated five realizations of the corresponding random variables and computed the (average) optimality gap and (average) run time. We find that RLP and FGLP are able to obtain policies with optimality gaps less than 5\% across all 18 instances. The lower bounds from RLP and FGLP improve on average the lower bounds based on an affine bias function approximation (i.e., no adaptive basis function generation) by 4.7\% and 4.5\%, respectively. Their corresponding maximum lower bound improvements are 13.7\% and 12.1\% (across the eighteen instances and the five realizations for each instance). In contrast, the policy costs from an affine approximation on these instances do not improve significantly as a result of adaptive basis function generation in either RLP or FGLP. These observations are consistent with those reported in AK. \begin{figure}[t] \centering \caption{Comparison of RLP and FGLP on the GJR instances.} \includegraphics[width=.8\linewidth]{figs/GJR_comparison} \label{fig:gjrcomparison} \vspace{-15pt} \end{figure} To understand the relative performance of RLP and FGLP, we represent the instances in Figure \ref{fig:gjrcomparison} in terms of their run time (x-axis) and optimality gap (y-axis). There are thus 18 points for each method. We use squares and triangles to represent results from FGLP and RLP, respectively. We also divide each axis into two halves, which leads to four quadrants. The lower-left quadrant contains the instances that are solved the fastest (less than 135 minutes) and have the smallest optimality gaps (less than 2.5\%), while the upper-right quadrant includes the ones that take the most time and have the highest optimality gaps. The number of instances in each quadrant is also shown and is largely the same for both RLP and FGLP, with one notable exception in the upper-right quadrant, where FGLP has two fewer instances than RLP.
Overall, these results show that FGLP is competitive with RLP on the GJR instances. This is encouraging because the random basis function generation approach used in FGLP does not exploit any application-specific structure. \section{Conclusions}\label{sec:Concluding Remarks} We propose a procedure for basis function generation in approximate linear programming, which is an established approach to obtain value function approximations (VFAs) for high-dimensional Markov decision processes (MDPs). Our application-agnostic procedure embeds random basis functions generated via inexpensive sampling in an approximate linear program (ALP), which we refer to as the random feature based ALP (FALP). FALP side-steps the implementation task of basis function engineering when using ALP, which is typically both ad hoc and based on application knowledge. We provide a sampling guarantee for the VFA generated by FALP to be arbitrarily close to the MDP value function. Despite this worst-case sampling guarantee, the FALP policy performance can fluctuate significantly in practice as random basis functions are iteratively added to FALP. We introduce a modification of FALP, dubbed the feature based guided linear program (FGLP), to circumvent this issue. FGLP adds constraints to FALP requiring its VFA to be a pointwise upper bound on that of a previously constructed FGLP with fewer random basis functions. We also analyze the sampling requirement of FGLP and compare it to FALP's. We test FALP and FGLP on challenging applications that give rise to discounted-cost MDPs and average-cost semi-MDPs. FGLP outperforms FALP and is either competitive with or outperforms application-specific benchmarks, including an existing adaptive basis function generation method for ALP.
Our findings showcase the potential of our procedure to (i) significantly reduce the implementation burden of using ALP and (ii) provide an application-agnostic policy and lower bound for MDPs that can be used to benchmark other methods.\looseness=-1 \bibliographystyle{informs2014}
\section{Introduction} High temperature superconductivity was discovered in cuprates in 1986.\cite{BM8689} The rapid rise of the transition temperature to well above the boiling point of nitrogen \cite{WAT8708} shattered the old record of 23~K. Furthermore, the fact that high $T_c$ superconductivity was discovered in a rather unexpected material, a transition metal oxide, made it clear that some novel mechanism must be at work. The intervening years have seen great strides in high $T_c$ research. The growth and characterization of cuprate single crystals and thin films have advanced to the point where the sample quality and reproducibility problems which plagued the field in the early days are no longer issues. At the same time, basically all conceivable experimental tools have been applied to the cuprates. Indeed, the need for more and more refined data has spurred the development of experimental techniques such as angle resolved photoemission spectroscopy (ARPES) and low temperature scanning tunneling microscopy (STM). Today the cuprates are arguably the best studied materials outside of the semiconductor family, and a great many facts are known. It is also clear that many of the physical properties are unusual, particularly in the metallic state above the superconducting transition. Superconductivity is only one aspect of a rich phase diagram which must be understood in its totality. It is often remarked that there is no consensus on the mechanism of high $T_c$ superconductivity. This may be true, but I must emphasize that a lack of consensus is not synonymous with a lack of understanding or a lack of progress. In fact, I will argue that the basic physics of the cuprate family is well understood. While there are hundreds of high $T_c$ compounds, they all share a layered structure which contains one or more copper-oxygen planes.
The low energy physics of these planes can further be simplified to a model of electrons hopping on a square lattice called the one band Hubbard model.\cite{A8796,ZR8859} (Many of the details left out in this paper can be found in a more comprehensive review.\cite{LNW0617}) \begin{equation} H=-\sum_{<i,j>\sigma}t_{ij} c^\dagger_{i\sigma}c_{j\sigma} + U\sum_{i}n_{i\uparrow}n_{i\downarrow} \end{equation} where $c^\dagger_{i\sigma}$ is the creation operator of an electron with spin $\sigma$ on a square lattice, $n_{i\sigma} = c^\dagger_{i\sigma}c_{i\sigma}$, $t_{ij}$ is the hopping matrix element between sites $i$ and $j$ (we shall denote nearest neighbor hopping by $t$ and further neighbor hopping by $t^\prime$, $t^{\prime\prime}$, etc.), and $U$ is the repulsive energy cost due to the screened Coulomb interaction to put two electrons with opposite spin on the same site. At half-filling (one electron per site), there is a metal-to-insulator transition as the ratio $U/t$ is increased. The insulator is called a Mott insulator,\cite{M4916} as opposed to a band insulator, because its existence is driven by strong repulsion and is not described by band theory. The latter would require the state to be metallic at half-filling. The term ``strong correlation physics'' is now used to describe the state of affairs when the interaction energy dominates the kinetic energy in controlling the electron motion. As seen in Fig.~1, for large enough $U/t$ the electrons prefer to be localized on the lattice sites because any hopping to reduce the kinetic energy $t$ requires double occupation of some site, which costs $U$. This insulator is also predicted to be antiferromagnetic (AF), because AF alignment permits virtual hopping to gain an energy $J=4t^2/U$ by second order perturbation theory, whereas hopping is strictly forbidden by Pauli exclusion for parallel spins. It is now generally agreed that the parent compound of the high $T_c$ cuprates is a Mott insulator.
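The exchange scale $J=4t^2/U$ follows from the standard two-site, second order calculation, which we spell out for completeness. Only the spin-singlet configuration can reach a doubly occupied intermediate state (two hopping processes, each of amplitude $t$), so

```latex
\begin{equation*}
E^{(2)}_{\text{singlet}}
  = -\sum_{m}\frac{|\langle m|H_t|\text{singlet}\rangle|^2}{U}
  = -\frac{4t^2}{U},
\qquad
E^{(2)}_{\text{triplet}} = 0 ,
\end{equation*}
```

since Pauli exclusion blocks the virtual hop for the triplet. Matching this splitting to the Heisenberg form $J\,\bm{S}_i\cdot\bm{S}_j$, whose singlet-triplet splitting is exactly $J$, yields $J = 4t^2/U$.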
\begin{figure}[t] \centerline{ \includegraphics[width=0.5\textwidth]{NewFigs/fig1.eps} } \caption{ Structure of the Cu-O layer in high $T_c$ materials. Copper atoms sit on a square lattice with oxygen atoms in between. The electronic structure is simplified to a one band model shown on the right, with electrons hopping with matrix element $t$. There is an antiferromagnetic exchange $J$ between spins on neighboring sites.} \end{figure} Things get interesting when electron vacancies (called holes) are introduced into the copper-oxygen layers in a process called hole doping, i.e. a charge reservoir away from the copper-oxygen plane is introduced which removes electrons from the plane. We denote the concentration of holes by $x$. The resulting phase diagram is shown schematically in Fig.~2. The AF order is rapidly destroyed by a few percent of holes, beyond which superconductivity appears. The transition temperature reaches a maximum around 15\% doping, which is called optimal doping. The dome-shaped $T_c$ is characteristic of all hole doped cuprates, even though the maximum $T_c$ seems to be clustered into two groups. It is about 40~K in the La$_{2-x}$Sr$_x$CuO$_4$ (LSCO) family and 93~K and higher in a second family which includes YBa$_2$Cu$_3$O$_{7-\delta}$ (YBCO) and Bi$_2$Sr$_2$CaCu$_2$O$_{8+\delta}$ (Bi-2212). The highest $T_c$ at ambient pressure, 135~K, was reached in HgBa$_2$Ca$_2$Cu$_3$O$_{8+\delta}$ in 1993.\cite{SCG9356} Before high $T_c$ superconductors were discovered in 1986, almost all superconductors were believed to be $s$-wave BCS superconductors. The requisite attractive interaction to form pairs is provided by electrons exchanging a phonon. There were a few potential exceptions in a class of strongly correlated metals called the heavy fermions, but $T_c$ was very low, typically less than 1~K.\cite{LRS8699}
This is why the discovery of superconductivity in a system where the repulsion is strong enough to create a Mott insulator was such a surprise. However, in the intervening years, we have come to know of many examples of non-$s$-wave superconductors. These all occur in strongly correlated materials and are clearly not driven by electron-phonon coupling. In the heavy fermion systems, experimental progress has removed any doubt about the non-$s$-wave nature of a large number of compounds.\cite{C0600} Other new examples include Sr$_2$RuO$_4$, exhibiting triplet pairing which breaks time-reversal symmetry;\cite{MM0357} hydrated cobaltates;\cite{TAK0353} and several systems near the quantum critical point of a zero temperature ferromagnetic (UGe$_2$) or antiferromagnetic (CePd$_2$Si$_2$, CeIn$_3$) phase transition.\cite{SAX0087,MAT9839} The transition temperatures of these systems remain low, less than 5~K. Of particular interest to the present discussion is the superconductivity discovered in a series of layered organic molecular solids.\cite{SM0263,B0700} This is because these compounds also live in the vicinity of the Mott transition. Unlike the cuprates, they are not doped and remain at half filling. In these materials the effective hopping parameter $t$ is sensitive to pressure and to the choice of anion molecules. The ratio $U/t$ can be tuned right through the Mott transition.\cite{KUR0501} Amazingly, it was discovered that when the Mott insulator is destroyed, the system immediately becomes a superconductor, before becoming a metal at even higher pressure. Furthermore, the transition temperature reaches 11.6~K, the highest known among the organics. There is also strong evidence that these superconductors have $d$-wave pairing symmetry.\cite{ARA0118} \begin{figure}[t] \centerline{ \includegraphics[width=0.4\textwidth]{NewFigs/fig2.eps} } \caption{ Schematic phase diagram of high $T_c$ materials. The antiferromagnet (AF) is rapidly destroyed by doped holes.
The $d$-wave superconductor is subject to strong phase fluctuations below the dotted line, where the proliferation of vortices has been detected by the Nernst effect. A pseudogap region extends up to high temperatures in the underdoped region.} \end{figure} I would argue that 11.6~K for an organic metal qualifies it as an example of a high $T_c$ superconductor! The reason is that the electronic energy scale for organic solids is much smaller than that for ordinary solids. For example, the hopping matrix element $t$ is about 0.05~eV, compared with 0.4~eV for the cuprates. Thus the ratio $k_BT_c/t \approx {1\over 40}$ is about the same for both systems. To emphasize this point, in Fig.~3 I have put both materials on the same phase diagram in the parameter space $U/t$ and $x$ of the Hubbard model. Is the $d$-wave superconductor that appears with doping connected with the one that appears under pressure? We do not have the answer at present. My point is that with so many ``unconventional'' examples, our mindset today should be different from that of 20 years ago, and we should be more receptive to the idea that superconductivity may be a highly competitive ground state in a purely repulsive model such as the Hubbard model. \begin{figure}[t] \centerline{ \includegraphics[width=0.45\textwidth]{NewFigs/fig3.eps} } \caption{ Location of high $T_c$ cuprates and organic superconductors in the Hubbard model phase diagram. At half filling, the antiferromagnetic insulator onsets when $U/t$ exceeds a critical value $U_c/t$, where the Mott transition occurs. High $T_c$ superconductivity occurs when holes are doped into the Mott insulator over a concentration range between 6\% and 25\%. In certain organic compounds, 12~K superconductivity lives on the boundary between the Mott insulator and the metal. The ratio $k_BT_c/t$ is about ${1\over 40}$ for both systems.
Whether the two superconducting regions are connected is not known, as indicated by the question mark.} \end{figure} With that remark let us return to the cuprates and examine the phase diagram in more detail. The region between the disappearance of AF and the onset of superconductivity is complicated by disorder effects, even though heroic efforts to make pure samples of YBCO have yielded interesting new information.\cite{DL0601} We shall not discuss this region further. The regions of the phase diagram with doping to the left and right of optimal are called underdoped and overdoped, respectively. The metallic state above $T_c$ in the underdoped region has been under intense study and exhibits many unusual properties not encountered before in any other metal. This region is shown below the dashed line in Fig.~2 and has been called the pseudogap phase. It is not a well-defined phase in that a definite finite temperature phase boundary has never been found, so the dashed line should be regarded as a cross-over. There is now broad agreement that the high $T_c$ problem is synonymous with that of doping a Mott insulator. It then makes sense to focus on the underdoped region, where the battle line between the Mott insulator and superconductivity is drawn. Since we are interested in the case where $U$ is sufficiently large compared with $t$ for the Hubbard model to be in the Mott insulator phase, it is useful to expand in $t/U$. The leading order result is the $t$-$J$ model \begin{equation} H=P \left[ \sum_{<ij>,\sigma} t_{ij} c^\dagger_{i\sigma}c_{j\sigma} + J \sum_{<ij>} \left( \bm{S}_i \cdot \bm{S}_j - {1\over 4} n_in_j \right) \right] P . \end{equation} The second term is the AF Heisenberg exchange between local spins $ \bm{S}_i = \frac{1}{2} c^\dagger_{i\alpha}\bm{\sigma}_{\alpha\beta}c_{i\beta} $ discussed earlier. The nontrivial part of the $t$-$J$ model resides in the projection operator $P$, which restricts the Hilbert space to exclude the doubly occupied states.
The strong Coulomb repulsion now becomes a constraint of no double occupation. Compared with the Hubbard model, the Hilbert space is reduced from four states per site to three, namely spin up, spin down, or empty. The parameters of the $t$-$J$ model appropriate for the cuprates are also well established: $J \sim 0.13$~eV $\sim 1500$~K, $t/J \sim 3$, and $t^\prime/t$ is negative, of order $-0.2$, and is believed to vary somewhat from compound to compound.\cite{PDS0103} Equations (1) and (2) are deceptively simple-looking Hamiltonians which have defied accurate numerical or analytic solution. Nevertheless, the belief among many workers in the field is that they contain the rich physics of the high $T_c$ phase diagram. The situation is not unlike QCD, where the Lagrangian is known, but precise understanding of confinement and the mass spectrum has only just begun to emerge from quantum Monte Carlo after decades of hard work. To make matters worse, the high $T_c$ problem at finite doping is analogous to the QCD problem at finite quark density,\cite{MR0600} where accurate numerical solution is so far not possible due to the fermion sign problem. On the other hand, unlike the quark-gluon problem, the high $T_c$ problem has far more experimental constraints and input. As a result we know a lot about the high $T_c$ phenomenology, which severely limits the theoretical options. \section{Simple Physical Picture and the Pseudogap Phenomenology} Let us start with some simple common sense arguments to gain some insight into the nature of the problem of a doped Mott insulator. Consider a single hole hopping in an AF background as shown in Fig.~1. After one hop we find a spin surrounded by ferromagnetic neighbors, costing an energy of ${3\over 2} J$ from the 3 ferromagnetic bonds if the spins are treated as classical $S = {1\over 2}$. There is a competition between the exchange energy $J$ and the desire of the hole to hop in order to gain the kinetic energy $t$ per hole.
For large enough doping the kinetic energy wins and we expect a metallic state with some short range AF correlation. By comparing $xt$ and $J$, we expect this to onset at $x \sim {J\over t} \sim {1\over 3}$, in good agreement with the experimental finding. This state should be a Fermi liquid state. There is a powerful theorem in Landau Fermi liquid theory, commonly called the Luttinger theorem,\cite{AGD6500} which states that the area of the Fermi surface is the same as that of free fermions, i.e., it is determined by the total density of electrons in the unit cell. In our case the area is ${1\over 2}(1-x)A_{BZ}$ where $A_{BZ} = (2\pi/a)^2$ is the area of the Brillouin zone (BZ). This is exactly what is found experimentally. In Fig.~4(d) we show an example of the measured Fermi surface. The precise shape can be fitted with a hopping model with further neighbor hopping. The opposite limit of a few holes $(x \ll 1)$ hopping in an AF background is less trivial, but by now reasonably well understood. The competition with the AF exchange causes the effective hopping matrix element to be renormalized downward from $t$ to $J$.\cite{KLR8980,SRV8893,LM9225} The quasiparticles nevertheless manage to form coherent bands. The bands have minima at $\left( \pm{\pi\over 2a}, \pm{\pi \over 2a}\right)$.\cite{SS8867} With finite doping the Fermi surfaces are ellipses centered at $\left( \pm{\pi\over 2a}, \pm{\pi \over 2a}\right)$ as shown in Fig.~4(a). Note that the unit cell is doubled because of AF ordering and the BZ is reduced to the diamond in Fig.~4(a). Applying the Luttinger theorem to the doubled unit cell, the total area of the Fermi surface in the reduced BZ is now $(1-x)A_{RBZ}$ where $A_{RBZ} = {1\over 2}A_{BZ}$. Therefore we conclude that the area of each ellipse (hole pocket) is ${x\over 4}A_{BZ}$. Physically it makes sense that transport properties are determined only by $x$ carriers occupying small Fermi pockets.
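The pocket-area bookkeeping behind this conclusion is worth spelling out (our restatement of the counting above). Per spin, the occupied area in the reduced zone and the corresponding hole area are

```latex
\begin{align*}
A_{\text{occ}}   &= (1-x)\,A_{RBZ} = \tfrac{1}{2}(1-x)\,A_{BZ},\\
A_{\text{holes}} &= A_{RBZ} - A_{\text{occ}} = \tfrac{x}{2}\,A_{BZ}.
\end{align*}
```

Since the four ellipse centers $\left( \pm{\pi\over 2a}, \pm{\pi \over 2a}\right)$ are pairwise connected by the magnetic reciprocal lattice vector $\left({\pi\over a},{\pi\over a}\right)$, only two pockets are inequivalent, and each full ellipse therefore carries area ${x\over 4}A_{BZ}$.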
The theory of a few holes in an AF background is quite well developed, and recent papers applying the effective field theory approach borrowed from the particle physics literature are particularly notable.\cite{MS0701,BRU0652} We have good understanding of $x \ll 1$ and $x \gtrsim {1\over 3}$. What happens in between? Here we run into a dilemma. We know that AF order is destroyed for $x \gtrsim 0.03$, beyond which point we have no indication of unit cell doubling. If Fermi liquid theory were to hold, what would happen to the Luttinger theorem? Recall that the nice physical picture of small hole pockets relies on the unit cell doubling. Once that is absent, the Luttinger theorem forces us to have a ``large'' Fermi surface, i.e. one with area proportional to $1-x$. In that case it is difficult to see how transport properties can continue to look as if they are given by $x$ holes. We note that while the original derivation of the Luttinger theorem was perturbative in the interaction strength, the modern derivation by Oshikawa\cite{O0070,PV0418} is a topological one and relies on very few assumptions, not much beyond the statement that well defined quasiparticles exist. In principle, the Fermi liquid could develop a heavy mass $\approx {1\over x}$ so that the conductivity spectral weight $n/m^\ast \approx x$, but experimentally there is no evidence of such heavy mass formation. Parenthetically, we point out that the three dimensional example of a doped Mott insulator, La$_{2-x}$Sr$_x$TiO$_3$, appears to take the heavy mass route.\cite{TOK9326} It turns out that Nature solves this problem in an extremely clever and unexpected way. As far as the ground state is concerned, the question is moot because it appears that once AF is destroyed the system becomes superconducting, and the Luttinger theorem cannot be applied. What about the normal state above the superconducting $T_c$?
The extensive work using angle resolved photoemission spectroscopy (ARPES) has shown that the gapless excitations lie on an arc.\cite{CNR03,DHS0373} Anywhere apart from the arc, the excitations are gapped. \begin{figure}[t] \centerline{ \includegraphics[width=0.5\textwidth]{NewFigs/fig4.eps} } \caption{ (a)~Fermi pockets in a doped AF. Dashed lines indicate the reduced Brillouin zone due to the unit cell doubling of the AF. (b)~Fermi surface of a tight binding model with first and second nearest neighbor hopping. (c)~Schematic picture of the Fermi arcs. The excitations are gapless when path A crosses the arc but are gapped everywhere along path B. (d)~Experimental data showing the Fermi surface in overdoped Tl-2201 ($x=0.25$). Colors indicate the intensity of low energy excitations. Data from Plat\'{e} {\em et al.}\cite{PLA0501} (e)~Experimental data showing the Fermi arc in one quadrant of Figure 4(c) in underdoped Ca$_{2-x}$Na$_x$CuO$_2$Cl$_2$ $(x=0.1)$. Data from K. Shen {\em et al.}\cite{SHE0501}} \end{figure} This situation is sufficiently strange that it requires a bit more explanation in terms of the experimental observation. ARPES measures the spectrum of occupied electron states which can be removed by excitation with a photon, i.e. it measures the hole spectral function. A spectrum is measured at every $\bm{k}$ point. In a Fermi liquid the spectrum consists of a quasiparticle peak at energy $\varepsilon_{\bm{k}}$. As one moves along line A in Fig.~4(c), the peak approaches the Fermi energy and disappears as $\varepsilon_{\bm{k}}$ crosses the Fermi surface, thus locating its position. This is how the Fermi surface in Fig.~4(b) is mapped out. In the underdoped case, what happens is that along line B, the quasiparticle peak (now quite broad) approaches the Fermi energy but stops before crossing it. Instead it loses weight and disappears. You might say that this also happens in Fig.~4(a), if path B misses the hole pocket.
The important difference is that in the case of the hole pocket, where there is unit cell doubling, in the extended zone scheme we expect the occupied band to live also outside the reduced BZ. Thus if we follow path A we should see a quasiparticle peak appearing at the second crossing of the ellipse. Further along path A this peak will then move down in energy away from the Fermi energy. In other words, the back side of the hole pocket should be visible in ARPES as an occupied quasiparticle state rising up to meet the Fermi surface. The surprise is that in Fig.~4(c), there is no back side to the ellipse, and one is left with what is called a Fermi arc. The spectrum is gapped everywhere except on the arc, and the gap reaches a maximum near $(\pi,0)$. In the superconducting state the arc also becomes gapped, leaving a single gapless point along $(\pi,\pi)$ called the nodal point. In this way the quasiparticle spectrum of a $d$-wave superconductor is smoothly formed out of the pseudogap normal state. The size of the maximum gap has been measured as a function of doping, and is found to increase with decreasing $x$, in a way which tracks the onset of the pseudogap phase shown in Fig.~2. (A word of caution: the electron spectral function near $(\pi,0)$, the so-called anti-nodal direction, is extremely broad and, unlike in the nodal directions, does not show a quasiparticle peak even in the superconducting state in strongly underdoped samples. So the gap is measured as the pull-back of the leading edge of the spectrum from the Fermi energy.) This picture is corroborated by extensive scanning tunneling microscopy (STM) work, which has excellent energy and spatial resolution, but no momentum space information. The Fermi arc and the anti-nodal gap are one part of a rich phenomenology associated with the pseudogap phase.
The first evidence of an energy gap came from NMR data, which found that in underdoped samples the Knight shift, which measures the spin susceptibility, does not follow the Pauli behavior of being temperature independent, but starts dropping around room temperature and has lost about 80\% of its value by the time $T_c$ is reached.\cite{CJS9777} The gap also shows up in the $c$-axis frequency dependent conductivity,\cite{HTL9310} where electrons move from one layer to the next, but the transport within the plane remains metallic, being dominated by the quasiparticles near the Fermi arc. What is the origin of the large gap at $(\pi,0)$? One suggestion is that it is simply a $d$-wave superconducting gap and the pseudogap phase should be understood as a superconductor destroyed by strong phase fluctuations. Phase fluctuations are controlled by the superfluid stiffness, which is set by the superfluid density. Thus $T_c$ should be proportional to the superfluid density, i.e. it should decrease with decreasing $x$.\cite{EK9534} This picture is made more quantitatively accurate if the reduction of the superfluid density due to thermal excitations of nodal quasiparticles is taken into account.\cite{LNW0617,LW9711} The opposite trend of the energy gap and $T_c$ is explained, but the origin of such a large superconducting gap close to an insulator becomes an even greater puzzle. Experimentally there is now ample evidence that fluctuating superconductivity indeed survives up to a much higher temperature than in a conventional superconductor: perhaps 2 or 3 times $T_c$. However, that scale decreases with decreasing doping and does not reach up to the pseudogap temperature. The key experiment that mapped out this region is the Nernst effect, which is sensitive to mobile superconducting vortices.\cite{WLO0610} For this reason we refer to this region as the Nernst region in Fig.~2. 
If the pseudogap is not a pairing gap, it presents a great challenge for theory, because while the Fermi arc scenario interpolates between the small hole pocket and the large Fermi surface beautifully, it is not allowed by conventional band theory or Fermi liquid theory. Fermi surfaces do not simply terminate. This is a situation not encountered before in solid state physics. This is why the pseudogap phenomenon is considered one of the central mysteries of the high $T_c$ story. We shall return to this issue in section V. \section{The RVB Picture} While the pseudogap phenomenon is really strange, to a large extent it was anticipated by theory. Here I am referring to the concept of the resonating valence bond (RVB) introduced by P.W. Anderson\cite{A8796} and the slave boson mean field theory and elaborations which followed. Here we provide a brief review. We explained in the last section that N\'{e}el spin order is incompatible with hole hopping. The question is whether there is another arrangement of the spins which achieves a better compromise between the exchange energy and the kinetic energy of the hole. For $S={1\over 2}$ it appears possible to take advantage of the special stability of the singlet state. The ground state of two spins $S$ coupled with antiferromagnetic Heisenberg exchange is a spin singlet with energy $-S(S+1)J$. Compared with the classical large spin limit, we see that quantum mechanics provides an additional stability through the term unity in $(S+1)$, and this relative gain is largest for $S={1\over 2}$. Let us consider a one-dimensional spin chain. A N\'{e}el ground state with $S_z = \pm {1\over 2}$ gives an energy of $-{1\over 4}J$ per site. On the other hand, a simple trial wavefunction of singlet dimers already gives a lower energy of $-{3\over 8}J$ per site. 
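These numbers follow directly from the identity
\[
\bm{S}_{\bm i} \cdot \bm{S}_{\bm j} = {1\over 2}\left[ S_{\rm tot}(S_{\rm tot}+1) - 2S(S+1) \right] ,
\]
where $\bm{S}_{\rm tot} = \bm{S}_{\bm i} + \bm{S}_{\bm j}$. For two spins $S$ in a singlet ($S_{\rm tot}=0$) the bond energy is $-S(S+1)J$, which for $S={1\over 2}$ gives $-{3\over 4}J$ per bond, compared with only $\langle S^z_{\bm i} S^z_{\bm j}\rangle J = -{1\over 4}J$ for a classical N\'{e}el bond. On the chain there is one bond per site, so the N\'{e}el configuration gives $-{1\over 4}J$ per site, while in the dimer covering each singlet bond is shared by two sites, giving $-{3\over 8}J$ per site.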
This trial wavefunction breaks translational symmetry, and the exact ground state can be considered to be a linear superposition of singlet pairs which are not limited to nearest neighbors, resulting in a ground state energy of $-0.443J$ per site. On the square and cubic lattices the N\'{e}el energy is $-{1\over 2}J$ and $-\frac34J$ per site, respectively, while the dimer variational energy stays at $-{3\over 8}J$. It is clear that in a 3D cubic lattice the N\'{e}el state is a far superior starting point, while in two dimensions the singlet state may present serious competition. Historically, the notion of a linear superposition of spin singlet pairs spanning different ranges, called the resonating valence bond (RVB), was introduced by Anderson\cite{A735} and by Fazekas and Anderson\cite{FA7432} as a possible ground state for the $S={1\over 2}$ antiferromagnetic Heisenberg model on a triangular lattice. The triangular lattice is of special interest because an Ising-like ordering of the spins is frustrated. Subsequently, it was concluded that the ground state forms a $\sqrt{3} \times \sqrt{3}$ superlattice where the moments lie in a common plane and form $120^\circ$ angles between neighboring sites.\cite{HE8831} Soon after the discovery of high $T_c$ superconductors, Anderson \cite{A8796} revived the RVB idea and proposed that with the introduction of holes the N\'{e}el state is destroyed and the spins form a superposition of singlets. The vacancy can hop in the background of what he envisioned as a liquid of singlets, and a better compromise between the hole kinetic energy and the spin exchange energy may be achieved. Many elaborations of this idea followed, but here we argue that the basic physical picture described above gives a simple account of the pseudogap phenomenon. The singlet formation explains the decrease of the uniform spin susceptibility. The vacancies are responsible for transport in the plane. 
The conductivity spectral weight in the $ab$ plane is given by the hole concentration $x$ and is unaffected by the singlet formation. On the other hand, for $c$-axis conductivity an electron is transported between planes. Since an electron carries spin ${1\over 2}$, it is necessary to break a singlet. This explains the gap formation in $\sigma_c(\omega)$, and the energy scale of this gap should be correlated with that of the uniform susceptibility. In photoemission, where an electron leaves the solid and reaches the detector, the pull-back of the leading edge simply reflects the energy cost to break a singlet. A second concept associated with the RVB idea is the notion of spinons and holons, and spin-charge separation. Anderson postulated that the spin excitations in an RVB state are $S={1\over 2}$ fermions which he called spinons. This is in contrast with the excitations in a N\'{e}el state, which are $S = 1$ magnons or $S = 0$ gapped singlet excitations. \begin{figure}[t] \centerline{ \includegraphics[width=0.5\textwidth]{NewFigs/fig5.eps} } \caption{ A cartoon representation of the RVB liquid of singlets. Solid bonds represent spin singlet configurations and circles represent vacancies. In (b) an electron is removed from the plane in a photoemission or $c$-axis conductivity experiment. This necessitates the breaking of a singlet. } \label{RVB} \end{figure} Initially the spinons were suggested to form a Fermi surface, with Fermi volume equal to that of $1-x$ fermions.\cite{BZA8773} Later it was proposed that the Fermi surface is gapped to form a $d$-wave-type structure, with maximum gap near $(0,\pi)$.\cite{KL8842} This $\bm{k}$ dependence of the energy gap is needed to explain the momentum dependence observed in photoemission. The concept of spinons is a familiar one in one-dimensional spin chains, where they are well understood to be domain walls. In two dimensions the concept is a novel one which does not involve domain walls. Instead, a rough physical picture is as follows. 
If we assume a background of short range singlet bonds, forming the so-called short-range RVB state, a cartoon of the spinon is shown in Fig.~\ref{RVB}. If the singlet bonds are ``liquid,'' two $S={1\over 2}$ moments formed by breaking a single bond can drift apart, with the liquid of singlet bonds filling in the space between them. They behave as free particles and are called spinons. The concept of holons follows naturally \cite{KRS8765} as the vacancy left over by removing a spinon. A holon carries charge $e$ but no spin. \section{Projected Wavefunction, Slave Boson and the Gauge Theory Formulation of the RVB Picture} Is there any calculational tool or mathematical formalism to put some meat into the physical picture of RVB described in the last section? As far as computation is concerned, the use of the projected wavefunction has enjoyed considerable success. The idea is to write down a trial wavefunction of the type \begin{equation} \Psi = P_G \phi \end{equation} where $P_G = \prod_i(1-n_{i\uparrow} n_{i\downarrow})$ is called the Gutzwiller projection and $\phi$ is any Hartree-Fock or BCS wavefunction, usually suggested by the mean field theory described below. The role of the Gutzwiller projection is to remove all doubly occupied states in $\phi$. Equation~(3) is a suitable variational wavefunction for the $t$-$J$ model because the constraint is satisfied by definition, and its expectation values and correlation functions can be computed by efficient Monte Carlo algorithms.\cite{G8953,Edegger} The mean field parameters can be treated as variational parameters. The projected wavefunction gives excellent ground state energy and sublattice magnetization at half-filling, capturing the important quantum fluctuations of the N\'{e}el ordered state. With doping it correctly predicted the $d$-wave pairing ground state,\cite{G8831} even though the prediction of the co-existence of superconductivity with AF up to $x\approx 0.11$ is not in agreement with experiment. 
Putting aside the question of whether it is the ground state, a comparison of the physical properties of the projected $d$-wave pairing states with a variety of experiments was successfully made.\cite{PRT0404} The trial wavefunction can be further improved by the Lanczos method of repeatedly hitting it with the Hamiltonian. There is some controversy as to whether the ground state of the $t$-$J$ model with nearest-neighbor hopping only is a $d$-wave superconductor,\cite{SMB0202,SCL9894} but it is clear that superconductivity is a highly competitive state, as found by other numerical methods such as the density matrix renormalization group,\cite{WS9953} cluster dynamical mean field theory \cite{Maier} and the variational cluster approximation.\cite{Senechal,Tremblay} Recently it was found that the introduction of $t^\prime$ considerably stabilizes the $d$-wave superconducting state.\cite{SLE0402} At present, I would say that there is strong numerical evidence that the $d$-wave superconductor is a strong contender for the ground state of the $t$-$J$ model. What about analytic theory, and where does the mean field $\phi$ come from? A useful method is called the Gutzwiller approximation, which imposes the constraint approximately by treating the available configurations for hopping and exchange on a statistical basis.\cite{ZGR8836} This is closely related to the slave-boson method which we discuss below. 
The slave-boson method was developed for the Kondo problem.\cite{B7675,C8435} It has enjoyed great success as the best way to understand the properties of a remarkable class of materials called the heavy fermion compounds, where Fermi liquid theory has been stretched to the extreme, with effective masses as large as several thousand times the free electron mass.\cite{H9300} The idea is to write the electron operator as a product of a boson and a fermion, with the fermion carrying the spin index, \begin{equation} c_{\bm i\sigma}^\dagger = f_{\bm i\sigma}^\dagger b_{\bm i} \end{equation} with the condition \begin{equation} f_{\bm i\uparrow}^\dagger f_{\bm i\uparrow} + f_{\bm i\downarrow}^\dagger f_{\bm i\downarrow} + b^\dagger_{\bm i} b_{\bm i} = 1. \end{equation} This constraint can be enforced with a Lagrange multiplier $\lambda_{\bm i}$. Note that Eq.~(4) is not an operator identity and the right-hand side does not satisfy the fermion commutation relations. Rather, the requirement is that both sides have the correct matrix elements in the reduced Hilbert space with no doubly occupied states. For example, the Heisenberg exchange term is written in terms of $f^\dagger_{\bm i\sigma}$, $f_{\bm i\sigma}$ only:\cite{BZA8773} \begin{eqnarray} {\bm{S}}_{\bm i}\cdot {\bm{S}}_{\bm j} &=& -{1\over 4} f_{\bm i\sigma}^\dagger f_{\bm j\sigma} f_{\bm j\beta}^\dagger f_{\bm i\beta} \nonumber \\ &-& {1\over 4} \left( f_{\bm i\uparrow}^\dagger f_{\bm j\downarrow}^\dagger - f_{\bm i\downarrow}^\dagger f_{\bm j\uparrow}^\dagger \right) \left( f_{\bm j\downarrow} f_{\bm i\uparrow} - f_{\bm j\uparrow} f_{\bm i\downarrow} \right) \nonumber \\ &+& {1\over 4} \left( f_{\bm i\alpha}^\dagger f_{\bm i\alpha} \right) . \end{eqnarray} We then decouple the exchange term in both the particle-hole and particle-particle channels via the Hubbard-Stratonovich (HS) transformation. 
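As an aside, the matrix elements behind Eq.~(4) are easy to check. The three physical states per site are the vacancy $|0\rangle_{\bm i} = b^\dagger_{\bm i}|{\rm vac}\rangle$ and the singly occupied states $|\sigma\rangle_{\bm i} = f^\dagger_{\bm i\sigma}|{\rm vac}\rangle$, all of which satisfy the constraint of Eq.~(5). Then
\[
c_{\bm i\sigma}^\dagger |0\rangle_{\bm i} = f_{\bm i\sigma}^\dagger b_{\bm i} b^\dagger_{\bm i}|{\rm vac}\rangle = |\sigma\rangle_{\bm i} ,
\qquad
c_{\bm i\sigma}^\dagger |\sigma'\rangle_{\bm i} = f_{\bm i\sigma}^\dagger b_{\bm i} f^\dagger_{\bm i\sigma'}|{\rm vac}\rangle = 0 ,
\]
so adding an electron to an occupied site automatically gives zero: the no-double-occupancy constraint is built into the representation, even though the right-hand side of Eq.~(4) does not obey the electron anticommutation relations as an operator identity.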
Then the partition function is written in the form \begin{equation} Z = \int{D f D f^\dagger Db D\lambda D\chi D\Delta} \exp \left( -\int^\beta _0 d\tau L_1 \right) \end{equation} where \begin{eqnarray} L_1 &=& \tilde{J} \sum_{\langle\bm i\bm j\rangle} \left( |\chi_{\bm i\bm j}|^2 + | \Delta_{\bm i\bm j} |^2 \right) +\sum_{\bm i\sigma}f_{\bm i\sigma}^\dagger (\partial_\tau - i\lambda_{\bm i}) f_{\bm i\sigma} \nonumber\\ &-& \tilde{J} \left[ \sum_{\langle\bm i\bm j \rangle} \chi_{\bm i\bm j}^\ast \left( \sum_{\sigma} f_{\bm i\sigma}^\dagger f_{\bm j\sigma} \right) + c.c. \right] \\ &+& \tilde{J} \left[ \sum_{\langle\bm i\bm j\rangle} \Delta_{\bm i\bm j} \left( f^\dagger_{\bm i\uparrow}f^\dagger_{\bm j\downarrow} - f^\dagger_{\bm i\downarrow} f^\dagger_{\bm j\uparrow} \right) + c.c. \right] \nonumber \\ &+&\sum_{\bm i} b_{\bm i}^\ast(\partial_\tau - i\lambda_{\bm i} + \mu_B ) b_{\bm i} - \sum_{\bm i\bm j\sigma}{t}_{\bm i\bm j}b_{\bm i}b_{\bm j}^\ast f_{\bm i\sigma}^\dagger f_{\bm j\sigma} , \nonumber \end{eqnarray} with $\chi_{\bm i\bm j}$ representing fermion hopping and $\Delta_{\bm i\bm j}$ representing fermion pairing, corresponding to the two ways of representing the exchange interaction in terms of the fermion operators. $\tilde{J} = 3J/8$ is chosen to reproduce the mean field self-consistent equation which is obtained by the Feynman variational principle. Mean field theory corresponds to the saddle point solution of the functional integral. The mean field conditions are \begin{eqnarray} \chi_{\bm i\bm j} &=& \sum_\sigma \langle f^\dagger_{\bm i\sigma} f_{\bm j\sigma} \rangle \\ \Delta_{\bm i\bm j} &=& \langle f_{\bm i\uparrow}f_{\bm j\downarrow} - f_{\bm i\downarrow}f_{\bm j\uparrow} \rangle . \end{eqnarray} Let us write $\chi_{ij} = |\chi_{ij}|e^{ia_{ij}}$ and ignore the fluctuations of the amplitude. Furthermore, in the last term in Eq.~(8) we replace $f_{i\sigma}^\dagger f_{j\sigma}$ by $\chi_{ij}$. 
Then the rather complicated Lagrangian of Eq.~(8) has a rather simple interpretation. It describes fermions and bosons hopping on a lattice with hopping matrix element $\chi_{ij}$. The phase $a_{ij}$ lives on the links and plays the role of the spatial components of a lattice gauge field, while the $\lambda_i$ fields introduced to enforce the constraint become the time components. Note that both fermions and bosons are coupled to the same gauge field. In addition, the fermions may have a singlet pairing amplitude given by $\Delta_{ij}$. Thus we come to the conclusion that the $t$-$J$ model is equivalent to a lattice gauge theory with non-relativistic fermions and bosons coupled to a {\em compact} $U(1)$ gauge field (compactness simply refers to the fact that the gauge field $a_{ij}$ is a phase defined modulo $2\pi$). At this point the gauge field has no dynamics. It fluctuates freely and is in the infinite coupling limit. Interesting dynamics emerge upon integrating out some of the matter fields, but one is left with a lattice gauge theory with strong coupling. The mapping is basically exact, but the question remains as to how to deal with such a model. Before discussing the importance of gauge fluctuations, let us examine some examples of mean field solutions. \begin{enumerate} \item $d$-wave pairing states. \\ Here $\chi_{ij} = \chi$ is constant and $\Delta_{ij} = \Delta$ for $(ij)$ bonds along $x$ and $\Delta_{ij} = -\Delta$ for $(ij)$ along $y$, i.e. it has $d$-wave pairing symmetry. Without pairing, the fermions hop on a tight binding band with dispersion \begin{equation} \varepsilon_f(\bm k) = -2 \tilde{J}\chi (\cos k_xa + \cos k_ya) . \end{equation} With pairing we have the classic $d$-wave dispersion \begin{equation} E(k) = \sqrt { \left( \varepsilon_f (\bm k) - \mu_f \right)^2 + |\Delta_k|^2 } \end{equation} where $\mu_f$ is the fermion chemical potential and $\Delta_k = 2\tilde{J}\Delta (\cos k_xa -\cos k_ya)$. 
The bosons see the same band dispersion and condense at the band minimum at low temperatures. In mean field theory the Bose condensation temperature is proportional to the boson density $x$. Below this temperature we have electron pairing, because the BCS order parameter $\langle c_{k\uparrow}c_{-k\downarrow}\rangle = b_0^2\langle f_{k\uparrow}f_{-k\downarrow}\rangle \neq 0$ where $\langle b \rangle = b_0$. The mean field phase diagram is shown in Fig.~6 and captures some key features of the high $T_c$ phase diagram shown in Fig.~2. In particular, the $d$-wave superconducting state appears at intermediate doping, and a spin gap state (region II), where a $d$-wave-like gap exists for spin excitations but not for charge excitations, anticipates many of the properties of the pseudogap phase. \begin{figure}[t] \centerline{ \includegraphics[width=0.4\textwidth]{NewFigs/fig6.eps} } \caption{ Schematic phase diagram of the $U(1)$ mean field theory. The solid line denotes the onset of the uniform RVB state $(\chi \neq 0)$. The dashed line denotes the onset of fermion pairing $(\Delta \neq 0)$ and the dotted line denotes mean field Bose condensation $(b \neq 0)$. The four regions are (I)~Fermi liquid $\chi \neq 0$, $b \neq 0$; (II)~spin gap $\chi \neq 0$, $\Delta \neq 0$; (III)~$d$-wave superconductor $\chi \neq 0$, $\Delta \neq 0$, $b \neq 0$; and (IV)~strange metal $\chi \neq 0$. From Lee and Nagaosa.\cite{LN9221}} \end{figure} \item The staggered flux state.\\ Early in the development of the mean field theory, a variety of mean field states were discovered which give identical dispersion. Notable among these is the staggered flux state.\cite{AM8874} In this state the hopping $\chi_{\bm i\bm j}$ is complex, $\chi_{\bm i\bm j} = \chi_0 \exp \left( i (-1)^{i_x+j_y} \Phi_0 \right) $, and the phase is arranged in such a way that it describes free fermion hopping on a lattice with a fictitious flux $\pm 4 \Phi_0$ threading alternate plaquettes. 
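Diagonalizing this hopping problem in its two-site unit cell gives the two bands
\[
E(k) = \pm 2\tilde{J}\chi_0 \sqrt{ \cos^2\!\Phi_0 \left( \cos k_xa + \cos k_ya \right)^2 + \sin^2\!\Phi_0 \left( \cos k_xa - \cos k_ya \right)^2 } ,
\]
in which $\chi_0\cos\Phi_0$ enters like a uniform hopping amplitude and $\chi_0\sin\Phi_0$ like a $d$-wave pairing amplitude.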
Remarkably, the eigenvalues of this problem are identical to those of the $d$-wave superconductor given by Eq.~(12), with $\mu_f = 0$ and \begin{equation} \tan \Phi_0 = {\Delta \over \chi} . \end{equation} The case $\Phi_0 = \pi/4$, called the $\pi$ flux phase, is special in that it does not break the lattice translation symmetry. As we can see from Eq.~(13), the corresponding $d$-wave problem has a very large energy gap, and its dispersion is shown in Fig.~7. The key feature is that the energy gap vanishes at the nodal points located at $\left( \pm{\pi\over 2}, \pm{\pi\over 2} \right)$. Around the nodal points the dispersion rises linearly, forming a cone which resembles the massless Dirac spectrum. For the $\pi$ flux state the dispersion around the node is isotropic. For $\Phi_0$ less than $\pi /4$ the gap is smaller and the Dirac cone becomes progressively anisotropic. The anisotropy can be characterized by two velocities, $v_F$ in the direction towards $(\pi,\pi)$ and $v_\Delta$ in the direction towards the maximum gap at $(0,\pi)$. \begin{figure}[t] \centerline{ \includegraphics[width=0.4\textwidth]{NewFigs/fig7.eps} } \caption{ The energy dispersion of the staggered flux phase. Note the massless Dirac spectrum at the nodal points $\left( \pm{\pi\over 2}, \pm{\pi\over 2} \right)$. The figure shown is for the special case of $\pi$ flux. In general, the nodal spectrum becomes anisotropic. With doping, Fermi pockets are formed when the Fermi energy crosses the energy spectrum.} \end{figure} The reason various mean-field theories have the same energy at half-filling was explained by Affleck {\em et al}.\cite{AZH8845} and Dagotto {\em et al}.\cite{DFM8826} as being due to a certain $SU(2)$ symmetry. 
It corresponds to the following particle-hole transformation \begin{eqnarray} f_{\bm i\uparrow}^\dagger &\rightarrow& \alpha_{\bm i} f_{\bm i\uparrow}^\dagger + \beta_{\bm i} f_{\bm i\downarrow} \\ \nonumber f_{\bm i\downarrow} &\rightarrow& -\beta_{\bm i}^\ast f_{\bm i\uparrow}^\dagger + \alpha_{\bm i}^\ast f_{\bm i\downarrow} . \end{eqnarray} Note that the spin quantum number is conserved. It describes the physical idea that adding a spin-up fermion or removing a spin-down fermion are the same state after projection to the subspace of singly occupied fermions. It is then not a surprise to learn that the Gutzwiller projection of the $d$-wave superconductor and that of the staggered flux state give the same trial wavefunction, up to a trivial overall phase factor, provided $\mu_f = 0$ and Eq.~(13) is satisfied. A simple proof of this is given by Zhang {\em et al}.\cite{ZGR8836} The energy of this state is quite good. The best estimate for the ground state energy of the square lattice Heisenberg antiferromagnet, whose ground state is N\'{e}el ordered, is $\langle \bm{S}_{\bm i} \cdot \bm{S}_{\bm j} \rangle = -0.3346J$ per bond.\cite{TC8937,R9292} The projected $\pi$ flux state \cite{G8831} gives $-0.319J$, which is excellent considering that there is no variational parameter. In the presence of holes, the dispersion of the staggered flux phase is still given by Eq.~(12) with $\mu_f = 0$, but the Fermi level now lies at a negative energy below the node. This is shown in Fig.~7. If the bosons are condensed, this becomes a Fermi liquid state with small hole pockets just like what is shown in Fig.~4(a). This state has higher energy than the $d$-wave superconductor because in a superconductor the node is shifted in $\bm k$ space away from $(\pi/2a, \pi/2a)$ towards the origin, but its energy is always tied to the Fermi level. The staggered flux state was proposed by Hsu, Marston and Affleck\cite{HMA9166} in 1991 to be the origin of the pseudogap state. 
They pointed out that with doping, the staggered flux state has the remarkable property that orbital currents flow around each square plaquette in a staggered way, so that the physical order parameter of this state is the staggered orbital current. This proposal did not receive much attention because the appearance of an ordered state with an Ising order parameter requires a finite temperature phase transition, which has never been seen experimentally. Furthermore, the model requires hole pockets instead of the arcs which were later observed. While the matrix element effect can reduce the spectral weight of the back side,\cite{CNT0304} the model requires sharp quasiparticle peaks which should be observable, especially near the end of the arc, where the weight reduction is only a factor of 2. In 2002, Chakravarty {\em et al.}\cite{CLM0203} revived this proposal, arguing that disorder effects may round the transition. They named this state the $d$-density wave (DDW) state. Since this state is in fact identical to the staggered flux state of Hsu {\em et al.},\cite{HMA9166} the two names are sometimes used interchangeably in the literature. However, the philosophy of their approach is very different. Chakravarty {\em et al.} take a phenomenological Landau theory approach. For them the superconducting state is an entirely different order parameter and the two orders compete with each other. I believe this view misses the key fact that all this is happening close to the Mott insulator. While the $d$-wave superconductor and the staggered flux (DDW) states are very different states at the mean field level, the mean field states do not respect the no-double-occupancy constraint. The story changes if we enforce the constraint by applying the Gutzwiller projection. As mentioned earlier, the projected states become identical at half filling and by continuity must share a lot of similarity slightly away from half filling. 
For example, we found that at $x = 0.1$ the projected $d$-wave superconductor has short-range orbital current order,\cite{ILW0058} and presumably the projected staggered flux state has short-range $d$-wave superconducting order. Both have significant short-range AF order.\cite{PRT0521} This kind of consideration motivated Wen and me to introduce the $SU(2)$ gauge theory in 1996 as an improvement over the Hsu {\em et al.} proposal.\cite{WL9603} Instead of an ordered state, the pseudogap phase is considered to be a fluctuating state which includes the staggered flux state and $d$-wave superconductivity on an equal footing. This will be described in more detail in the next section. \begin{figure}[t] \centerline{ \includegraphics[width=0.4\textwidth]{NewFigs/fig8.eps} } \caption{ Energy per site of projected wavefunctions in units of $J$. SF stands for staggered flux. From Ivanov.\cite{I0403}} \end{figure} It is interesting to examine the energetics of various projected states, as shown in Fig.~8. It was found that the best state is a projected $d$-wave superconductor, and the sublattice magnetization is nonzero for $x < x_c$, where $x_c = 0.11$ for $t/J = 3$.\cite{I0403} The projected staggered flux state always lies above the projected $d$-wave superconductor, but the energy difference is small and vanishes as $x$ goes to zero, as expected. The staggered flux state also prefers antiferromagnetic order for small $x$, and the critical $x_c^{SF}$ is now 0.08, less than that for the projected $d$-wave superconductor. The projected staggered flux state is the lowest energy nonsuperconducting state that has been constructed so far. \end{enumerate} \section{SU(2), vortex core and theory of the pseudogap phase} The lesson from examining projected wavefunctions is that the close relationship between different mean field states must be taken seriously. In fact, the $d$-wave state and the staggered flux state are not the only states which have similar energies after projection. 
There is a continuous family of states related by the $SU(2)$ symmetry given by Eq.~(14). Wen and I developed a formulation to take this into account by introducing an $SU(2)$ doublet of bosons $(b_1, b_2)$ instead of the single boson in the $U(1)$ gauge theory.\cite{WL9603} For our purposes this is just a technical way of generating different mean field states, which are parametrized by rotating the quantization axis $\bm I$ in the $SU(2)$ space.\cite{LNN9803} This way different states can be visualized and smoothly connected to each other in space and time. In Fig.~9 we show such a representation. The north and south poles correspond to the two degenerate staggered flux (DDW) states which break translational symmetry. The equator is the $d$-wave superconducting state. In between are many states which share both kinds of order. One advantage of this approach is that we can construct a model of the core of an $hc/2e$ vortex.\cite{LW0117} This takes the form of a meron, or half a skyrmion. As shown in Fig.~10, the center of the vortex is occupied by the staggered flux state. This solves a serious problem with the original $U(1)$ formulation, which favors the $hc/e$ vortex\cite{S9289,NL9266} because it is energetically favorable to retain the fermion pairing and make the boson wind by $2\pi$ around the vortex. That approach ignored the fact that another state, the staggered flux state, is available to take the place of the normal state inside the core, allowing us to construct an $hc/2e$ vortex that costs very little energy. Our model of the vortex core also explains a puzzling experimental observation, i.e. 
STM tunneling found that an energy gap remains when tunneling into the core region.\cite{PHG0036} This is opposite to what was found in a conventional superconductor, where bound states are formed inside the core which fill in the gap.\cite{WM9576} \begin{figure}[t] \centerline{ \includegraphics[width=0.4\textwidth]{NewFigs/fig9.eps} } \caption{The quantization axis $\bm{I}$ in the $SU(2)$ gauge theory. The north and south poles correspond to the staggered flux phases with shifted orbital current patterns. All points on the equator are equivalent and correspond to the $d$-wave superconductor. } \end{figure} The $SU(2)$ construction allows us to build a simple picture of the finite temperature phase diagram which accounts for the Nernst region and the pseudogap phase.\cite{LW0117} This is shown in Fig.~11. At low temperature the $SU(2)$ quantization vector $\bm I$ is in the $x$-$y$ plane and we have the $d$-wave superconductor ground state. As the temperature is raised, vortices are created with staggered flux cores, and the proliferation of these vortices drives the Berezinskii-Kosterlitz-Thouless transition in the standard way. Above this transition (which is the true superconducting $T_c$), vortices and anti-vortices proliferate, giving rise to the Nernst phase. At even higher temperature the $\bm I$ vector is completely disordered, and this is our picture of the pseudogap phase. We emphasize that in this theory the pseudogap phenomenon and superconductivity are intimately connected. There is no separate pairing mechanism for superconductivity. What drives superconductivity is the coherence of the bosons, which selects the true ground state out of a myriad of fluctuating possibilities. This is very different from the competing order scenario, which does require a separate pairing mechanism and a completely separate energy gap scale. This dichotomy has spurred a debate concerning one gap vs. two gaps, i.e. 
whether a smaller energy gap appears which scales with $T_c$.\cite{HHD0700} The debate is often couched in black and white language, with one gap favoring a superconducting gap destroyed by phase fluctuations, and two gaps implying the need for some kind of competing order. In my view, the truth is likely to be more complicated. In the mean field RVB picture, a sharp quasiparticle peak appears below $T_c$ with weight $x$ which follows the $d$-wave dispersion with only one gap. The mean field picture is probably too simplistic. For example, it is possible to interpret the large gap at $(0,\pi)$ as a spin gap, which can remain broad while the low energy quasiparticles near the nodes become coherent below $T_c$. To a first approximation, the coherent nodal quasiparticle has a dispersion which extrapolates to the large pseudogap at $(0,\pi)$. There could be a coherence energy scale which scales with $T_c$, but exactly how this coherence scale affects the density of states and how it develops as a function of temperature is an open question. The issue of one gap vs. two gaps is not settled experimentally. For moderately underdoped Bi2212 ($T_c > 50$~K) the evidence from ARPES is that the quasiparticle in the superconducting state is reasonably peaked even near the antinodes and obeys the $d$-wave dispersion with a single gap which increases with decreasing doping. This is supported by low temperature thermal conductivity data which measure the ratio $v_F/v_\Delta$, where $v_\Delta$ is the quasiparticle velocity in the direction of $(0,\pi)$.\cite{SHH0320} It is found that $v_\Delta$ increases with decreasing $x$ and extrapolates to the antinodal gap measured by ARPES. 
On the other hand, for severely underdoped samples and one-layer cuprates with low $T_c$'s, there are claims based on ARPES that the energy gap in the Fermi arc region near the nodal point does not scale with the pseudogap at $(0,\pi)$, which increases with decreasing doping as mentioned before.\cite{TAN0610} Instead, it seems to stay constant or even decrease with decreasing doping. It is argued that this reveals a new gap scale associated with superconductivity. I should caution that deeply underdoped samples are known to be strongly disordered, and the disorder increases with reduced doping. Furthermore, the lineshape remains very broad in the antinodal direction even in the superconducting state. Thus it is risky to draw strong conclusions from these data without an understanding of disorder effects and of the lineshape. \begin{figure}[t] \centerline{ \includegraphics[width=0.44\textwidth]{NewFigs/fig10.eps} } \caption{ Model for an $hc/2e$ vortex. The $SU(2)$ quantization axis points to the north pole at the center (forming a staggered flux vortex core) and rotates smoothly towards the equatorial plane as one moves out radially.} \end{figure} Other support for two gaps comes from Andreev reflection studies\cite{D9910} and Raman scattering.\cite{LET0637} In a superconductor-normal metal junction in conventional superconductors, normal electrons incident on the junction have an extra channel for transport: an electron can tunnel into the superconductor as part of a Cooper pair while being Andreev reflected as a hole. This leads to additional conductance below an energy scale set by the energy gap. Such extra conductance was observed in underdoped cuprates, but the energy scale observed is much lower than the pseudogap and is more related to $T_c$. I note that in contrast to conventional tunnelling, Andreev reflection does not simply measure the density of states, but requires coherence of the quasiparticle in its interaction with the condensate. 
What is seen in the Andreev data may be this coherence scale. \begin{figure}[t] \centerline{ \includegraphics[width=0.5\textwidth,height=0.33\textwidth]{NewFigs/fig11.eps} } \caption{ Schematic picture of the quantization axis $\bm{I}$ in different parts of the phase diagram shown in Fig.~2. (a)~In the superconducting phase $\bm{I}$ is ordered in the $x$-$y$ plane. (b)~In the Nernst phase, $\bm{I}$ points to the north or south pole inside the vortex core. (c)~The pseudogap corresponds to a completely disordered arrangement of $\bm{I}$. ($\bm{I}$ is a three dimensional vector and only a two dimensional projection is shown.)} \end{figure} I must emphasize that the simple cartoon shown in Fig.~11 is only an approximate picture. We have assumed that the bosons are locally condensed and can be treated as a $c$-number which varies in space and time. However, even at $T=0$, the vortex configurations shown in Fig.~10 can tunnel between each other and destroy the staggered flux order at some time scale. I think the correct answer requires a quantum mechanical treatment of the boson strongly coupled to gauge fields, which is not available at present. In particular, we have not yet been able to compute the ARPES spectrum and make a satisfactory comparison with experiment. We make crude approximations such as assuming a binding of the bosons with fermions via gauge fluctuations.\cite{LNN9803} As an example of an alternative approach, Ribeiro and Wen\cite{RW0501} introduced a new formulation which hybridizes the physical hole with the spin-carrying fermions and have had success in understanding the higher energy spectra. Their theory seems to favor the two gap scenario. The truth is that the theory of the spectral function is not under control at present: the problem of a fermion and boson strongly interacting with a gauge field at finite temperatures is too daunting with currently known techniques. 
Rather than trying to explain data in detail, it may be more fruitful to step back and attempt a classification of the pseudogap state. We have developed a view that the pseudogap phase belongs to the deconfined side of the lattice gauge theory, so that discussion in terms of fermions, bosons and {\em noncompact} gauge fields makes sense. More explicitly, we propose that the pseudogap is best understood as the doping of a particular spin liquid called the algebraic spin liquid. This is explained in the next section. \section{Gauge theory, de-confinement and spin liquids} Let us return to the gauge theory formulation described in section III. As it stands, Eq.~(8) is pretty much an exact reformulation of the $t$-$J$ model. The constraint is enforced exactly upon integrating out the gauge fields. However, the gauge field is fluctuating strongly because there is no restoring force, and critics of the gauge theory approach have pointed to this as evidence that this approach cannot be trusted to give meaningful results. At a deeper level, I think what lies behind this objection is the implicit assumption that a phenomenon called confinement, familiar in QCD, necessarily takes place when the gauge coupling is large. In this case, fermions are tightly bound to bosons by the gauge field, and one just recovers the original intractable electron problem with constraint. In contrast, our way of thinking assumes that confinement does not happen, in which case the fermions, bosons and gauge fields retain their identity, albeit with strong interaction. The ultimate strongly coupled fixed point is nontrivial and different from that of free particles, but as is usual in the strong coupling problem, it may be possible to access their behavior using artificial expansion parameters such as ${1\over N}$ where $N$ is the number of copies of the matter field. 
Even though the physical problem may correspond to $N \sim$ 2 or 4, as long as confinement does not take place the behavior of the physical system is smoothly connected to the large $N$ system and the physical behavior of the system can be understood, even though the critical exponents cannot be computed quantitatively. In the case of a compact $U(1)$ gauge field, it is well known that in 2+1 dimensions the theory is confining even for arbitrarily small coupling, due to the proliferation of instantons. The point is that in a compact gauge field, a $2\pi$ flux can pop up through an elementary plaquette. The space-time point where this happens is called an instanton event, which is also called a magnetic monopole in Euclidean space-time. If these monopoles proliferate, the total flux through the sample is not conserved. The strongly fluctuating gauge flux leads to confinement between external charges. At first sight, this bodes poorly for the gauge theory approach. However, the existence of matter fields can change the story completely. There is considerable experience in the study of a gauge field coupled to relativistic fermions and bosons, but little is known in the non-relativistic case. In the past few years there has been significant progress in the case of half-filling where there are no bosons. Consider the saddle point with $\pi$ flux through a plaquette. The fermionic spectrum consists of massless Dirac fermions with nodes at $\left( \pm {\pi \over 2a}, \pm {\pi \over 2a} \right)$. Due to the additional $SU(2)$ symmetry, these fermions are minimally coupled to a set of $SU(2)$ gauge fields. If the coupling to the gauge field is confining, this leads to what is known in QCD as mass generation and chiral symmetry breaking, which translates to gap formation and AF order in our language.\cite{KL9930} Thus the $\pi$ flux phase is our route to AF order. On the other hand, consider the staggered flux state. 
This mean field ansatz breaks the $SU(2)$ symmetry and the gauge field is broken down from $SU(2)$ to $U(1)$. This motivates the study of the problem of $N$ 2-component massless Dirac fermions coupled to a $U(1)$ gauge field in 2+1 dimensions. This model is often called QED$_3$ in the literature. Since the staggered flux state has two spins and two nodes, the physical problem corresponds to $N=4$. (It was shown that the velocity anisotropy of the Dirac cone is an irrelevant variable and the low energy physics scales to the isotropic fixed point of the QED$_3$ model.\cite{VTF0211,Lee/Herbut}) Assuming deconfinement, this class of states has been studied by ${1\over N}$ expansion and is called the algebraic spin liquid.\cite{RW0201} Hermele {\em et al.}\cite{HSF0437} showed, using results borrowed from the field theory literature, that for sufficiently large $N$, the deconfined state can be stable. In other words, the large $N$ fixed point has no relevant perturbation, including the appearance of monopoles. This is at least a proof of the principle that deconfinement is possible in the presence of matter fields. It is not known whether the critical $N$ is greater than or smaller than 4, and here is where the QCD Monte Carlo community can help.\cite{Fiore} Recently we received some encouraging news from experiments. It has long been thought that the Kagome lattice with $S = {1\over 2}$ has sufficient frustration to support a spin liquid ground state. 
This seems to be realized by recent experiments on ZnCu$_3$(OH)$_6$Cl$_2$ where the Cu ions occupy Kagome sites and the system shows no AF order down to 30~mK, despite an AF exchange of $\sim 200$~K.\cite{HEL0704} On the theoretical side, it is found that the projected wavefunction of a certain flux state reproduces the ground state energy obtained by exact diagonalization to within the error.\cite{RHL0705} This is remarkable for a trial wavefunction with no variational parameter, and is much better than what projected flux states did for the square lattice. The low energy physics of this state is that of two 2-component massless Dirac fermions with spins, coupled to a $U(1)$ gauge field. The confirmation of this picture for the Kagome lattice will give us reason to believe that the critical $N$ is less than 4 in this case and perhaps in the case of the algebraic spin liquid as well. Prior to the Kagome example, there was strong evidence that a spin liquid state exists in the organic compound $\kappa$-(ET)$_2$Cu$_2$(CN)$_3$.\cite{SMK0301} Here the active sites are organic dimers which form a triangular lattice to a good approximation. As mentioned in section III, the Heisenberg Hamiltonian on a triangular lattice is expected to order. Nevertheless, experimentally no AF order is found down to 32~mK. The explanation lies in the fact that this system sits just on the insulating side of the Mott transition, so that charge fluctuations and ring exchange terms lead to a more complicated effective Hamiltonian which may favor the spin liquid state.\cite{M0505,LL0503} Numerical work has given an adequate account of the phase diagram, including the appearance of superconductivity under pressure as shown in Fig.~3.\cite{Kyung,Watanabe} On the triangular lattice the low energy Lagrangian is expected to be an almost circular Fermi sea of spinons (fermions which carry $S={1\over 2}$ but no charge) coupled to $U(1)$ gauge fields. 
This model has even more low lying fermionic excitations than the Dirac sea, and there is reason to believe that deconfinement occurs. Experimentally the observations of a constant spin susceptibility and linear specific heat at low temperatures are highly unusual for an insulator, and support the notion of a spinon Fermi surface. The appearance of particles, such as spinons which carry different quantum numbers compared with the original electron, is a phenomenon called ``fractionalization.'' It is remarkable that the fermions originally introduced as a formal device in Eq.~(4) take on a life of their own at the end of the day. Many people find this a hard concept to swallow and here is where explicit examples of a spin liquid will be a great help in convincing skeptics. I must emphasize that the new structure in the theory (spinons, gauge fields, etc.) emerges in the low energy physics and what emerges is independent of the way the problem was formulated. The constraint of no double occupation could have been enforced using $Z(2)$, $U(1)$ or $SU(2)$ gauge fields, but the low energy structure will be the same. One particular formulation may simply be the convenient way to expose this emergent structure. The discovery of experimental examples of spin liquids is a very important development because for the first time, the deconfined (fractionalized) states are low temperature states which in principle can be studied in great detail. The success of the gauge theory method in these materials will give us confidence in its application to the more complex problem with holes. It is fortunate that the two promising examples of spin liquids are closely related to those discussed in the high $T_c$ context for underdoped and optimal doping, respectively. Finally, I would like to mention an approach to the pseudogap problem from a more general perspective. 
Senthil and I \cite{SL0515} proposed that the underdoped cuprate should be considered as proximate to a spin liquid state which is then doped. In this view the pseudogap is the finite temperature region controlled by the quantum critical point of doping a spin liquid by varying the chemical potential. The most promising spin liquid is in fact the algebraic spin liquid with its low energy massless Dirac fermions and $U(1)$ gauge field. This point of view does not help solve the doped spin liquid problem, but does raise and answer the question: what is the signature of a deconfined state? The answer is that in the presence of a matter field, the only signature left is the irrelevance of instantons, i.e. the total gauge flux is conserved, much like the conservation of magnetic flux in our world. The experimental implication is subtle, but can be probed at least in principle in a suggested experiment.\cite{SL0515} \section{Quantum oscillations in high magnetic field} Is there a region of the phase diagram where the picture is less complicated and precise predictions can be made and tested? One natural idea is to apply a large magnetic field perpendicular to the layer to kill the superconductivity and ask what is the next most stable ground state which emerges. When $H$ reaches $H_{c2}$ the vortex cores overlap and the core state becomes the homogeneous ground state. Recent experiments indicate that we may be probing this regime and have caused a lot of excitement.\cite{DN0765,YEL0700} It is believed that $H_{c2}$ for underdoped cuprates is approximately 100~T, well beyond what is available in the laboratory. The resistive transition, on the other hand, is controlled by flux flow and is much more accessible to experiment. Recently, Shubnikov-de Haas oscillations were reported in an underdoped YBCO compound ($T_c = 55$~K) in the flux flow region, in a magnetic field range of 40~T to 62~T. From the period of oscillation of the resistivity and Hall resistivity vs. 
${1\over B}$ the area of the Fermi surface pocket is extracted and found to contain 0.038 holes per pocket. If we assume the doubled unit cell scenario, there are two pockets in the reduced BZ and this corresponds to $x = 0.076$. The effective mass can also be measured from the temperature dependence, and $m^\ast = 1.9$ $m_e$. The doping concentration of YBCO is difficult to estimate, because the doping is from the oxygen chains and its charge state is not independently measurable. The best estimate by the experimentalists places $x$ at 10\%. More recently, similar quantum oscillations were reported in YBa$_2$Cu$_4$O$_8$, the so-called Y124 compound ($T_c \sim 80$~K) with magnetic field from 55~T to 85~T \cite{YEL0700} and Shubnikov-de Haas oscillations were reported in the range 45~T to 61~T.\cite{Bangura} The hole concentration per pocket is now 0.05 and the inferred $x$ of 0.1 is again lower than the value of 0.125 that is believed to characterize this material. The effective mass is now $m^\ast = $ 3 $m_e$. Quantum oscillations are considered the best way to measure the Fermi surface area, provided the sample is sufficiently free of defects that the electron can complete a cyclotron orbit without being scattered. Even at 50~T, the cyclotron orbit circumference is more than a thousand $\rm{\AA}$ in size (see Fig.~12); the very observation of oscillations thus makes it clear that disorder is really not an issue in the YBCO family. \begin{figure}[t] \centerline{ \includegraphics[width=0.45\textwidth]{NewFigs/fig12.eps} } \caption{ Vortex cores are represented by discs of radius $R_v$. Figure is drawn almost to scale at $H={1\over 2}H_{c2}$, when the discs occupy half of the total area ($R_v\approx 25\rm{\AA}$). A cyclotron orbit is shown, also roughly to scale. Its area encloses about 10 vortices for the experiment on YBCO. 
The long dimension of the orbit is about $800 \rm{\AA}$.} \end{figure} I think there is an excellent chance that the experiment is accessing the normal state that lies beyond $H_{c2}$. It is well known experimentally that quantum oscillations persist for $H < H_{c2}$.\cite{JAN9898} The frequency of the oscillation is not changed upon crossing $H_{c2}$, only its magnitude is diminished. This phenomenon is particularly striking in layered materials where the flux flow regime can be very wide. For example, a recent experiment on organics observed quantum oscillations down to ${1\over 2} H_{c2}$, and, even more strikingly, the expected reduction of the amplitude was not observed.\cite{WOS0073} A rough physical explanation may be as follows. Let us take a cartoon picture of the vortex core as discs of radius $R_v$. At $H_{c2}$ the discs overlap and we define $H_{c2} = \tilde{\phi}_0/(\pi R_v^2)$ where $\tilde{\phi}_0 = hc/2e$ is the superconducting flux quantum. Note $R_v = \sqrt{2}\xi$ where $\xi$ is the coherence length. For $H_{c2} = 100$~T, $R_v $ = 25~$\rm{\AA}$. For $H = {1\over 2} H_{c2}$ the disc density is rather high, as drawn in Fig.~12, almost to scale. Let us estimate the size of the semiclassical orbit. Very generally, the real space area $\tilde{A}$ is related to the momentum space pocket size $A_k$ by \begin{equation} \tilde{A} = A_k \left( {hc\over eH} \right)^2 . \end{equation} It is useful to introduce the Landau length \begin{equation} \ell_H = \sqrt{{hc\over eH}} . \end{equation} Note that the flux through an area $\pi \ell^2_H$ is $\tilde{\phi}_0$, so that $R_v = \ell_{H=H_{c2}}$. The Landau level number $\nu$ at the Fermi level is given by the number of full flux quanta $\phi_0 = 2\tilde{\phi}_0$ which penetrate $\tilde{A}$, i.e. \begin{equation} \nu = {\tilde{A} \over 2 \pi \ell_H^2} = {A_k \tilde{\phi}_0 \over 2\pi^2 H} . 
\end{equation} We find $\nu = 10$ for $H = 50$~T, using the experimentally measured $A_k = 5.1 \times10^{14}$~cm$^{-2}$ for YBCO. Since 50~T corresponds to $\approx {1\over 2} H_{c2}$, this also means that on average, 10 vortex cores are inside the spatial orbit. Since the cyclotron motion is fast compared with that of the vortices, we can assume a static picture of the vortex cores, which either form a hexagonal lattice or are somewhat distorted from it. Assuming the orbit to be an ellipse with aspect ratio $\sim 8$ (for reasons to be explained later), we sketch a snapshot of the orbit in relation to the vortex cores in Fig.~12. In order to see quantum oscillations, what is required is that the states in the cores are in phase, whether it is AF order or staggered-flux order. In the latter case, the $\bm I$ vectors in all the vortices inside the cyclotron orbit are either all pointing up or all pointing down, i.e. the coherence length of the order must exceed the orbit size. For the aspect ratio chosen, the long dimension of the orbit is about 800~$\rm{\AA}$. What is required for quantum oscillations is strong tunnelling between the vortex cores. Then the quasiparticles are extended states which carry information of the uniform normal state for $H > H_{c2}$ because the random phase coming from tunnelling through the superconducting region will cancel. Given the dense packing of vortices, this seems quite reasonable. What is the origin of the pockets that give rise to the quantum oscillations? Here we are on rather uncertain grounds. In general, we can classify the state as conventional or exotic. By conventional I mean there is some new order giving rise to unit cell doubling and a Fermi liquid ground state, and exotic means everything else. While exotic scenarios have been suggested,\cite{KKS0700} here I will concentrate on discussing conventional possibilities, knowing that we immediately encounter a problem concerning the relationship of the measured area to doping estimates. 
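The Landau level count quoted above follows directly from the last equation. The short script below (an illustrative estimate in SI units, not part of the original analysis) evaluates $\nu = A_k\tilde{\phi}_0/2\pi^2 H$ for the measured YBCO pocket:

```python
import math

h = 6.62607015e-34          # Planck constant (J s)
e = 1.602176634e-19         # elementary charge (C)
phi0_sc = h / (2 * e)       # superconducting flux quantum hc/2e (SI value h/2e), in Wb

A_k = 5.1e14 * 1e4          # measured YBCO pocket area: 5.1e14 cm^-2 -> m^-2
H = 50.0                    # magnetic field (T)

# Landau level index at the Fermi level: nu = A_k * phi0_sc / (2 pi^2 H)
nu = A_k * phi0_sc / (2 * math.pi**2 * H)
print(f"nu ~ {nu:.1f}")     # close to the value nu = 10 quoted in the text
```

The same number counts the vortex cores enclosed by the real-space orbit at $H \approx {1\over 2}H_{c2}$, which is why about 10 cores appear inside the ellipse sketched in Fig.~12.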
Either the doping estimate is incorrect, or there exists undetected electron or hole pockets. Let me forge ahead and assume that the doping concentrations are actually $x=0.075$ and 0.10 and correspond to the pocket areas measured in the two experiments and discuss several candidates for the unit cell doubling order. \begin{enumerate} \item AF order. We saw in section I that at very low doping, hole pockets appear around $(\pi , \pi)$ in the background of AF order. The question is whether this narrow region $(x < 3\%)$ will open up in a strong magnetic field and extend up to 10 or 12\% doping. This is probably what most people have in mind when they refer to AF in the core, and we shall call states of this kind ``doped AF.'' I think this scenario is unlikely. From neutron scattering we know that there is a sharp triplet resonance at $\sim 30$~meV in this doping range at $(\pi , \pi)$ and a sharp drop in spectral weight below it.\cite{Stock} AF order requires the condensation of this triplet mode. A 50~T field corresponds to an energy scale of 6~meV, hardly enough to perturb the resonance mode. Thus I think AF order is energetically highly unfavorable. Nevertheless, there are experimental ways to distinguish doped AF from the staggered flux state, which I will return to later. \item Staggered flux state. As we argued before, this state is energetically favored to appear once superconductivity is destroyed. I want to point out that in principle a small amount of AF order can co-exist with this state. As seen in Fig.~8, this is favorable for $x < 0.08$. However, the driving force and excitation spectrum of this state are completely different from those of the doped AF. In that case the holes live in the lower Hubbard band, i.e. the band is a downward parabola, separated by a gap of order the Mott gap, with energy scale 1~eV. \item Valence bond solid (VBS), or more general states which break translation symmetry without breaking time reversal as in AF. 
There has been discussion of this state living in the vortex core.\cite{SO313} Roughly speaking, the electronic spectrum may be quite similar to the staggered flux state except that it is expected to be gapped. \item Spin density wave (SDW).\cite{CYR0700} This seems very attractive phenomenologically. From NMR there are claims for enhanced antiferromagnetism in the core.\cite{MIT0303,KKM0303} In the doped La$_2$CuO$_4$ system it is known from neutron scattering that incommensurate SDW is induced in the vicinity of the vortex core.\cite{Lake,KLE0228} This is quite understandable because inelastic neutron scattering indicates soft excitations at precisely these incommensurate wave vectors, typically around ${2\pi\over 8a}$ from $\left( {\pi\over a}, {\pi\over a} \right)$. This is often connected to the notion of stripes near the concentration $x = {1\over 8}$. However, static SDW has never been seen in the YBCO family. Neutron scattering has reported dynamical fluctuations at 20~meV at the incommensurate wave vectors $\left( {\pi\over 2a} \pm \delta, {\pi\over 2a} \pm\delta \right)$. While recent data on more underdoped samples ($T_c = 50$~K) show dynamic scattering down to 12~meV, it has much less spectral weight than in the doped La$_2$CuO$_4$ system.\cite{Stock} Thus it is questionable whether a magnetic field can induce SDW order. Furthermore, the prevailing view is that the SDW is not tied to Fermi surface nesting, and it is unclear whether hole pockets should be expected. \begin{figure}[t] \centerline{ \includegraphics[width=0.4\textwidth]{NewFigs/fig13.eps} } \caption{ Schematic picture of the tunneling density of states for (a) the spin split bands in the doped staggered flux state and (b) the doped AF. If the hole pocket lies in the lower Hubbard band, a large Mott gap is expected.} \end{figure} How can we distinguish the staggered flux state from the doped AF state? 
A useful experimental test is to do a tunnelling experiment to measure the density of states in high fields. Since spatial resolution is not needed, we do not need STM, which is probably impossible under these pulsed field conditions, but we propose either break-junction tunnelling or other tunnelling geometry where electrons can be injected into the $a$-$b$ plane from the side. Schematically, the densities of states of the staggered flux state and the doped AF are sketched in Figs.~13(a) and 13(b). Of course, we expect much of the higher energy features to be smeared out, but strong particle-hole asymmetry and a dip in the density of states above $\varepsilon_F$ will support the staggered flux picture. On the other hand, if this state is the continuation of the doped AF state observed at $x < 0.03$ for zero field, we expect a large Mott gap above the Fermi energy, which has been estimated to be 30~meV in the case of Y124.\cite{YEL0700} Thus tunnelling data is a sensitive test of the notion of hole pockets in the lower Hubbard band. Next we extract some physical parameters from the data. Let us parametrize the Dirac spectrum of the staggered flux state near each node by \begin{equation} E(k) = \sqrt{(v_Fk_1)^2 + (v_\Delta k_2)^2} \end{equation} where $k_1$ ($k_2$) is the momentum component perpendicular (parallel) to the large Fermi surface. By definition \begin{equation} m^\ast = {\hbar^2 \over 2\pi} {dA_k \over dE} . \end{equation} By writing the density of states $m^\ast/2\pi$ in terms of the velocities along the Fermi surface, we find the anisotropic generalization of the relation $v_F = \hbar k_F/m$ : \begin{equation} \sqrt{v_Fv_\Delta} = {\hbar \over m^\ast}\sqrt{{A_k \over \pi}} . \end{equation} Equation~(20) applies equally well to the anisotropic Dirac spectrum and the anisotropic parabolic band. The right hand side of this equation is directly measured. Putting in the numbers for YBCO with $m^\ast$~=~1.9~$m_e$, we find $\hbar\sqrt{v_Fv_\Delta}$ to be 0.48~eV$\rm{\AA}$. 
The Fermi velocity can be measured by ARPES, even though the error bar is substantial. Taking the value $v_F$~=~1.4~eV$\rm{\AA}$ from Campuzano {\em et al.},\cite{CNR03} we find the anisotropy to be $v_F/v_\Delta = 8.4$. This compares with the value 7.9 directly extracted from thermal conductivity data \cite{SHH0320} in their $T_c = 62$~K sample. In that paper they extract a gap value $\Delta_0$ of 71~meV, using $v_F$~=~1.65~eV$\rm{\AA}$. With our numbers we expect $\Delta_0 = 57$~meV, in very good agreement with the ARPES estimate of the pseudogap for a similarly doped Bi2212 sample. A similar exercise using Y124 data \cite{YEL0700} yields $v_F/v_\Delta = 16.7$ and $\Delta_0 = 29$~meV. The gap value is on the low side. Given that these estimates are sensitive to the square of $m^\ast$ and $v_F$ and their associated errors, these are reasonable numbers. More importantly, it shows the correct trend of increasing anisotropy and reduced gap with increasing doping. It is worth noting that the ellipse drawn in Fig.~1(b) by Doiron-Leyraud {\em et al.}\cite{DN0765} has an aspect ratio of $v_F/v_\Delta \approx 4$. Noting that the length of the ellipse scales as $\sqrt{{v_F\over v_\Delta}}$, our numbers indicate a more elongated ellipse and a small departure of the inside edge of the pocket away from the Fermi arc. For Y124, the ellipse almost reaches the saddle point at $(0,\pi)$. Beyond this point the pockets connect to form large Fermi surfaces which are presumably unobservable due to disorder scattering, and we expect the quantum oscillation to disappear. There will be a transition to the uniform large Fermi surface (area $1-x$) state, but the exact nature of this transition is an open issue. We reiterate that Eq.~(20) applies to the parabolic band of the doped AF as well. The only difference is that in that case there is no reason to identify $v_F$ with the normal state Fermi velocity. 
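As a rough cross-check of Eq.~(20), one can plug in the measured YBCO pocket area and effective mass. The sketch below is an illustrative estimate, not part of the original analysis; the small differences from the quoted 0.48~eV$\rm{\AA}$ and 8.4 reflect rounding of the inputs:

```python
import math

hbar = 1.054571817e-34      # reduced Planck constant (J s)
hbar_eV = 6.582119569e-16   # hbar in eV s
m_e = 9.1093837015e-31      # electron mass (kg)

A_k = 5.1e18                # YBCO pocket area (m^-2)
m_star = 1.9 * m_e          # measured effective mass

# Eq. (20): sqrt(vF * vDelta) = (hbar / m*) * sqrt(A_k / pi)
v_geo = (hbar / m_star) * math.sqrt(A_k / math.pi)   # geometric-mean velocity (m/s)
hbar_v = hbar_eV * v_geo * 1e10                      # hbar*sqrt(vF vDelta) in eV*Angstrom
print(f"hbar*sqrt(vF vDelta) ~ {hbar_v:.2f} eV A")   # ~0.5, vs 0.48 quoted in the text

hbar_vF = 1.4                                        # ARPES Fermi velocity (eV Angstrom)
anisotropy = (hbar_vF / hbar_v) ** 2                 # vF / vDelta
print(f"vF/vDelta ~ {anisotropy:.1f}")               # ~7.5, vs 8.4 quoted in the text
```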
Quite generally, if this state is a Fermi liquid in a nonmagnetic background, we expect Zeeman splitting of the up- and down-spin pockets. For free fermions the splitting is entirely determined by $m^\ast$ which gives the density of states. Let us define $\Delta A_k = A_{k\uparrow} - A_{k\downarrow}$ to be the difference in the areas of the up- and down-spin pockets. We find \begin{equation} \Delta A_k = \Delta E {dA_k\over dE} = 2\pi m^\ast \Delta E/\hbar^2 . \end{equation} Using $\Delta E = g\mu_BH$ with $g=2$ and $H=60$~T, we find for Y124 a splitting of $\Delta A_k/A_k \approx 26\%$. Experimentally a splitting of $\approx 10\%$ was observed. If we attribute this to spin splitting, we will need a reduction of the predicted magnetization due to residual interactions. In Fermi liquid theory, this is given by the Landau parameter $F_0^a$, so that $\chi = \chi_0/\left( 1 + F_0^a \right)$. Thus we need $F_0^a = 1.6$ which is quite reasonable for a strongly interacting system. In contrast, for the doped AF, we do not expect a similar spin splitting. This was pointed out to me by T. Senthil. This is because the AF will cant with sublattice magnetization perpendicular to the applied field and spin along the field is no longer a good quantum number. If splitting due to interlayer tunnelling can be ruled out, this is another argument against the doped AF. Finally, we discuss the cyclotron resonance experiment. In principle, this can help distinguish between the Dirac spectrum of the staggered flux state and the parabolic spectrum expected for the doped AF.\cite{LW0117} As made famous by recent experiments on graphene,\cite{NOV0597,Zha0501} the Landau level in a Dirac spectrum is given by $E_\nu = \sqrt{2\nu}\, \hbar\sqrt{v_Fv_\Delta}/\ell_B$, where $\ell_B = \sqrt{\hbar c/eB}$ is the magnetic length, which scales as $\sqrt{B}$, instead of the standard Landau level spacing $\hbar eB/m^\ast c$ which is linear in $B$ in conventional parabolic bands. 
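The 26\% Zeeman splitting estimate for Y124 can be reproduced numerically. In the sketch below (illustrative only), the Y124 pocket area is scaled from the YBCO value using the quoted 0.05 vs.\ 0.038 holes per pocket, since the pocket area is proportional to the hole count per pocket:

```python
import math

hbar = 1.054571817e-34      # reduced Planck constant (J s)
m_e = 9.1093837015e-31      # electron mass (kg)
mu_B = 9.2740100783e-24     # Bohr magneton (J/T)

m_star = 3.0 * m_e          # Y124 effective mass
H = 60.0                    # field (T)
g = 2.0

# Y124 pocket area, scaled from the measured YBCO value (area ~ holes per pocket)
A_k = 5.1e18 * (0.05 / 0.038)                 # m^-2

# Eq. (21): Delta A_k = Delta E * dA_k/dE with Delta E = g mu_B H
dA = g * mu_B * H * 2 * math.pi * m_star / hbar**2
print(f"Delta A_k / A_k ~ {dA / A_k:.0%}")    # ~26%, vs the ~10% splitting observed
```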
Using measured parameters for YBCO we find the splitting between the $\nu = 9$ and $\nu = 10$ Landau level to be 3.1~meV, compared with 3.3~meV for a parabolic band, an unfortunate coincidence. In principle, one can check the $\sqrt{B}$ vs. $B$ dependence, but this may be a challenging experiment to carry out in pulsed fields. \begin{figure}[t] \centerline{ \includegraphics[width=0.5\textwidth]{NewFigs/fig14.eps} } \caption{ Schematic phase diagram predicted for an underdoped cuprate in a magnetic field. The solid lines are true phase transitions while the dashed lines are cross-overs to the pseudogap phase and the vortex liquid phase. The vortex liquid at $H=0$ corresponds to the Nernst region in Fig.~2.} \end{figure} To summarize, with insight gained from Nernst effect measurements, we propose in Fig.~14 a phase diagram for the underdoped cuprate in a field. The staggered flux state appears as a new phase in high field and low temperature. Since this state requires the phase coherence of the bosons just as the superconducting state does, we expect its transition temperature to be comparable to the superconducting $T_c$. We expect a signature in transport properties at this transition temperature. The $H_{c2}$ line is a cross-over line which characterizes the onset of a vortex liquid state. This is the extension of the Nernst region shown in Fig.~2 to finite $H$. The true superconducting transition is to a vortex solid or glass state. Whether the staggered flux state terminates in the vortex liquid or solid phase is not known. Below a large energy scale $\Delta_0$ lies the spin gap phase which is the extension of the pseudogap phase in Fig.~2 to finite $H$. We emphasize that while the model of Hsu {\em et al.}\cite{HMA9166} and the more phenomenological approach of Chakravarty {\em et al.}\cite{CLM0203} also predict a staggered flux (DDW) state at high field, the key point they made is that this state is smoothly connected to the pseudogap phase, i.e. 
the solid phase boundary in Fig.~14 is pushed up to an energy scale $\Delta_0$. Great effort has been made to look for the moment generated by the orbital currents by neutron scattering and nothing convincing has been found. There is no sign of a phase transition even in the highest purity materials. It is difficult to see how the Fermi arc can be consistent with a Fermi liquid picture of sharp quasiparticles. In my opinion, existing data at low field is sufficient to rule out the DDW scenario. In contrast, in our picture the pseudogap phase is separated from the high field staggered flux Fermi liquid state by a phase boundary. While a detailed description of the Fermi arc is not available, it is at least possible to imagine that different physics is at play and the Fermi arc picture can be consistent with the experimental observation of quantum oscillations in high fields. Finally, the proposed phase diagram is heavily based on mean field theory and the notion of boson condensation and should be viewed simply as a starting point for discussion of the experiment. It is possible that the orbital current can survive fluctuations, but it is also possible that tunnelling events wipe out the ordering pattern, leaving us with an exotic ground state. Other order, such as weak AF, could also emerge to take its place. The overall picture, I think, is more general than the survival of the orbital currents. The large spin gap at $(0,\pi)$ and features in the tunnelling density of states sketched in Fig.~13(a) may survive. Clearly more experiments in high fields will be needed to shed light on these important issues. \end{enumerate} \section{Questions and answers} Instead of a conclusion, I would like to pose a number of questions and attempt to answer them. \begin{enumerate} \item What is unique about the cuprates? What is the pairing mechanism? What controls $T_c$? Can $T_c$ be raised? The cuprate is the prime example of a two-dimensional Mott insulator which can be doped. 
The copper $2+$ ion is in a $d^9$ configuration, making it a spin ${1\over 2}$ system. The layer symmetry breaks the degeneracy of the $d$ orbitals down to a single $d_{x^2-y^2}$ orbital, so that orbital issues which cause complications in most other transition metal oxides do not play a role here. In most other examples, strong coupling of the charge state to distortion of the oxygen cage causes doped carriers to localize. For example, if we replace Cu by Ni, La$_2$NiO$_4$ requires 50\% hole doping before it becomes metallic. In many ways, the cuprate is the ideal model system for a doped Hubbard model. The parent compound is one of the few layered $S = {1\over 2}$ antiferromagnets with no orbital degeneracy that are known. It is hard to find a material that is so simple! Searching for a pairing mechanism may be the wrong question, because implicitly the assumption is made that pairing occurs via the conventional route of well defined quasiparticles exchanging some bosonic glue.\cite{P0705} We know that underdoped cuprates do not fit this description. Instead superconductivity is the ground state which emerges as the best compromise between the kinetic energy of hole motion and the antiferromagnetic exchange. The energy gap is set by the exchange energy $J$ while $T_c$ is controlled by phase coherence and is of order $xt$. Thus the optimal $T_c$ is set by $J$. The cuprate family already has the highest $J$ among transition metals. (The ladder compound made up of the same copper-oxygen bond is the only example I know which has a higher $J$ than the planar cuprate.) The hopping matrix element $t$ is also large because of the strong covalent bonding between the $d_{x^2-y^2}$ orbital and the oxygen $p$ orbital. Thus the cuprates are blessed with a large energy scale.
The variation of $T_c$ by a few tens of degrees between YBCO and the Hg compound seems to be attributable to changes in the further neighbor hopping matrix elements.\cite{PDS0103} If the superconductivity is due to the electronic energy scale such as $J$ and $t$, why isn't $T_c$ higher? As we saw in section I, it is only $\sim {t\over 40}$ or ${J\over 10}$ for both cuprates and organics. What accounts for the small numerical factor? One consequence of doping a Mott insulator is that the carrier density and therefore the superfluid density is small. Thus phase fluctuation is important and limits $T_c$. In a $d$-wave superconductor, the thermal excitation of nodal quasiparticles further suppresses the superfluid stiffness on a short distance scale, as seen by experiment.\cite{CMO9921} This reduces $T_c$ further. This is why $T_c$ is such a small fraction of $J$. Besides increasing the overall energy scale, one possible avenue for enhancing $T_c$ is to try to stabilize superconductors with a full gap, such as $d+id$ states. For example, on a triangular lattice, the $d+id$ state is believed to be favored. In this case, the quasiparticle suppression of the superfluid density is negligible and $T_c$ will be higher, everything else being equal. \item What about other approaches? Why is there no consensus after all these years? In a review of this kind which adopts a particular viewpoint (going under the names strong correlation, RVB, gauge theory, etc.) I feel obligated to give my honest opinion on other approaches. This may not be the best way to win friends, but let me proceed. First let me point out that within the RVB approach there are many variants that have not been covered here. I emphasized the formulation based on a fermionic representation of the spins and a bosonic representation of the holes. The opposite formulation with bosonic spinons \cite{SO313} and a more complicated formulation with flux attachment \cite{W0773} have also been pursued.
Below let me focus on approaches outside the RVB ``big tent'' by separating the most prominent work into four major categories and discussing them in turn. \begin{enumerate} \item Spin fluctuation models. Historically the possibility of $d$-wave superconductivity based on the exchange of AF spin fluctuations was discussed prior to the discovery of high $T_c$, and a number of workers have applied these ideas to the cuprates.\cite{E8621,MSV8654,SLH8690,MP9369} These discussions are either based on a random phase approximation treatment of the Hubbard model, or on a phenomenological coupling between spins and electrons treated as separate entities. The latter may be correct in heavy fermion systems, where there indeed exist separate itinerant electrons and local moments, but is more questionable in the Hubbard model where there is only one kind of electronic state. The best chance for this model to work is in the overdoped region, where well defined quasiparticles exist in the normal state and the onset of superconductivity appears quite conventional. However, from neutron scattering it is known that spin fluctuations become very weak, with a spin correlation length of 10~$\rm{\AA}$ or less beyond optimal doping. A variant of this idea ascribes the boson being exchanged to the sharp triplet resonance discovered by neutron scattering. However, this resonance is very narrow in momentum space and carries very little spectral weight, making it unlikely to be the powerful glue needed.\cite{KKA0202} It is clear that in order for the spin fluctuation picture to work, one has to deal with the strong coupling problem and include spin fluctuations at all energy scales up to $J$. Numerical methods may be the only option.
Recent numerical studies based on cluster dynamical mean field theory show promise in this direction in that a pairing kernel was identified in the triplet channel which peaks at $\left({\pi\over 2}, {\pi\over 2}\right)$ and resembles the spin fluctuation channel.\cite{Maier} I think the most serious limitation of this line of ideas is that it completely fails to address the pseudogap issue. As we have seen, there are simply no well defined quasiparticles above $T_c$ to pair, by whatever mechanism. By avoiding the issue of proximity to a Mott insulator, this approach misses the key physics and the most interesting aspect of high $T_c$. \item Microscopic inhomogeneity, stripes, etc. The basic idea is that one way the doped Mott insulator can resolve the competition between kinetic energy and AF exchange is for the holes to phase segregate. In particular, the holes may concentrate into one dimensional regions where AF order is suppressed, separated by AF regions where the hole density is suppressed.\cite{CEK03} Experimentally, it was discovered that a particularly stable configuration, where the holes form a quarter filled chain (one hole per two sites along the chain) separating AF regions as an anti-phase boundary, indeed exists in La$_{2-x}$Ba$_x$CuO$_4$ where $x = {1\over 8}$.\cite{TSA9561} This configuration and variants thereof have been called stripes. The stripe picture explains the suppression of $T_c$ in the doped La$_2$CuO$_4$ system near $x = {1\over 8}$. It also receives theoretical support from numerical solution of the $t$-$J$ model using the density matrix renormalization group method.\cite{WS9953} Thus there is good reason to believe that stripes form a strongly competitive ground state, especially near $x = {1\over 8}$. However, the ordering temperature for the charge order which precedes the spin order is low, of order 40~K, and that for the spin order (incommensurate SDW) is even lower.
Furthermore, static stripes are seen only in the doped La$_2$CuO$_4$ family and not in the YBCO family, and in the latter case, even dynamical stripe fluctuations have weak intensities, as discussed earlier. Thus it is my belief that stripes represent low energy physics and emerge as the ground state only as a result of delicate competition. Recently there was a remarkable report that ARPES in La$_{1.875}$Ba$_{0.125}$CuO$_4$ reveals quasiparticles with a $d$-wave dispersion and a maximum gap of 20~meV, as if the existence of stripes has no effect on the pairing or the pseudogap except at very low energy.\cite{VFL0614} This reinforces my view that stripe order is a secondary effect. Static stripes are clearly detrimental to superconductivity. Nevertheless, there has been a strong push, notably by S. Kivelson and co-workers, for the idea that fluctuating stripes may be responsible for superconductivity and the pseudogap phenomenon. I have always found the motivation of this idea puzzling. Fortunately a recent article by Kivelson and Fradkin\cite{KF07} lays out the physical ideas and motivation in a very clear way. They share our basic assumption that doping a Mott insulator is the key question and they share our notion that superconductivity comes from strong repulsion. However, they reject the possibility that this can happen in a homogeneous phase. As far as I can tell, the reason offered is that ``the enormous effort has been devoted to numerical searches for superconductivity in various uniform Hubbard and $t$-$J$ related models, with results that are, at least, ambiguous'' and the feeling that if that were the right route, ``unambiguous evidence of it would have been found by now.'' Instead they focused on models where superconductivity from a repulsive Hubbard model can be demonstrated, namely the ladder system.
The ladder system can be considered an example of a valence bond solid, where spins on the rung form singlets and doped holes tend to form pairs on the rung. Then pair hopping between ladders provides bulk superconductivity. This is all well and good, but Kivelson and Fradkin are the first to admit (in fact insist) that this toy model has little to do with cuprates. For example, it is hard to imagine how quasiparticles can traverse the ladder system (or any kind of stripes) at a $45^\circ$ angle and become the gapless nodal particle. As long as this problem is not addressed adequately, I fail to see how this approach can be a viable route to superconductivity in the cuprates. \item Phonons. Electron phonon coupling is typically strong in transition metal oxides involving the $e_g$ orbitals, because of strong coupling to the distortion of the oxygen cage. As mentioned earlier, in most cases this leads to localization of the doped hole. While holes in the cuprate escape this fate, the electron phonon coupling must be strong. Nevertheless, we have argued that the dominant energy in this problem is the Coulomb repulsion. Hence the discussion of electron-phonon coupling must be made in this context. A combination of numerical and analytic work has shown that short range repulsion has the effect of reducing electron-phonon scattering at large momentum transfer because charge fluctuations at short distance are suppressed.\cite{MN0402,BRC0620} It is also claimed that under certain circumstances, $d$-wave pairing is possible with electron-phonon coupling, contrary to conventional wisdom. Experimentally, phonon sidebands appear visible in STM tunnelling spectra. All this says that electron-phonon coupling is there (no great surprise) but does not provide any evidence that it is the essential driving force behind pairing. The isotope effect is often quoted as evidence for the phonon mechanism for high $T_c$.
It turns out that isotope effects affect $T_c$ via the superfluid density, and not the order parameter itself.\cite{KSC0317} Thus the isotope effect in fact supports the view that $T_c$ is controlled by phase fluctuations in the underdoped region. There is no isotope effect at optimal doping and beyond. \item Three band model. Since the copper oxygen layer involves one copper and two oxygens per unit cell (excluding the apical oxygen), the minimal microscopic electronic model requires a $d_{x^2-y^2}$ Cu orbital and two oxygen $p$ orbitals. This is called the three band model.\cite{E8759,VSA8781} Various cluster calculations indicate that the low energy physics (below a scale of $t \approx 0.4$~eV) can be adequately described by a one band Hubbard model.\cite{ZR8859} Over the years this has become the majority view, but there are still workers who believe that the three band model is required for superconductivity. I think the original motivation for this view came from a period of nearly ten years when much of the community believed that the pairing symmetry is $s$-wave. It certainly is true that a repulsive Hubbard model cannot have an $s$-wave superconducting ground state. Three band models with further neighbor repulsion were introduced to generate the requisite effective attraction. The $s$-wave story was overturned once and for all by phase sensitive experiments in 1994, but some of the three band model proponents persevered. In particular, Varma introduced the idea of intra-cell orbital currents, i.e. a current pattern flowing between the Cu-O and O-O bonds, as a model for the pseudogap.\cite{V9754} This has the virtue of leaving the unit cell intact, and this kind of order is very difficult to detect. With such a complicated model it is difficult to make a convincing case based on theory alone, and a lot of attention has been focused on experimental detection of time reversal symmetry breaking or spontaneous moments due to orbital currents.
Unfortunately, the expected signals are very small and can easily be contaminated by a small amount of minority phase with AF order. Up to now there are no reliable results in support of this kind of orbital currents. \end{enumerate} \item Why is the high $T_c$ problem hard? Why is the high $T_c$ problem important? One of the ironies about the high $T_c$ problem is that the ground states are rather conventional. They range from AF to $d$-wave superconductor to Fermi liquid as the doping is increased. The exciting new physics happens at finite temperatures where Fermi arcs, pseudogaps and other novel phenomena appear. This region is not characterized by order in the traditional Landau sense, and it is not possible to make precise statements as one could if one had an exotic ground state. From the theoretical side, what we can do so far is to classify the pseudogap as the deconfined phase of a gauge theory, where new particles, spinons and holons, and $U(1)$ gauge fields emerge in the low energy physics. These particles and gauge fields are still strongly coupled and we are not able to compute and compare with experiments in a quantitative way. We can make various caricatures (such as local boson condensation and fluctuations between different phases) which mimic the observations in a qualitative manner. Nevertheless, I believe the problem is so strongly constrained by data that this path is the only viable option at the moment. I must emphasize that the gauge field is an integral part of this story, and not a technical artifact of our particular formulation. In principle it is possible to formulate the problem with spin carrying bosons and fermions and holes, and reach the same low energy state described here. In this sense this proposed solution of the high $T_c$ problem represents the emergence of new physics. Two recent developments are encouraging. First, it appears possible to probe the ``normal state'' at zero temperature by applying a large magnetic field.
While the state again appears to be a Fermi liquid, it may expose a new kind of order and reveal what is fluctuating in the pseudogap phase. Secondly, the elusive spin liquid ground state may finally have been realized. Even though this is an early stage in its development, the study of the low energy excitations of these ground states may finally allow us to access spinons and gauge fields. The effective low energy theory of the spin liquid problem is also more amenable to treatment because bosonic holes are not involved. It is our view that the study of spin liquids and high $T_c$ opens a new chapter in condensed matter, one that defies the dual Landau paradigm (local order parameter and Fermi liquid theory) that has been the backbone of our Science for three quarters of a century. To me, this is why the high $T_c$ problem is important and exciting. \end{enumerate} \acknowledgements I thank Naoto Nagaosa and Xiao-Gang Wen for their collaboration on many of the topics discussed here and I am particularly grateful to T. Senthil for sharing his insights and for helping shape many of the thoughts that have gone into this review. I acknowledge support by NSF grant number DMR--0517222.
\section{I. Introduction} \noindent The promise of quantum computation is to enable new algorithms that solve problems which would require exorbitant physical resources on a classical computer. There are two broad classes of such algorithms. The first class is built upon \textit{Shor's quantum Fourier transform} \cite{shor} and includes remarkable algorithms for solving the factoring and discrete logarithm problems, providing a striking exponential speedup over the best known classical algorithms. The second class is based upon Grover's algorithm for performing \textit{quantum searching} \cite{grover-search}. Apart from these two broad divisions, the Deutsch algorithm, based on \textit{quantum parallelism/interference}\cite{Deutsch}, is another example with no classical analogue; it too provides a remarkable speedup over the best possible classical algorithms. With the introduction of quantum algorithms, questions were raised about proving the complexity superiority of the quantum model over the classical model \cite{feynman}.\\ \noindent Grover's search algorithm was one of the first to open up a class of problems solvable by quantum computation \cite{grover-framework} with a quadratic speedup over classical systems. Classical unstructured search is essentially linear, since each item must be processed in turn; even randomized search strategies require $N/2$ queries on average. In 1996, L.~K.~Grover gave an algorithm to search through an unstructured search space in $\mathcal{O}(\sqrt{N})$ steps \cite{grover-search}. The algorithm leverages the computational power of superposed quantum states. In its initialization step an equi-probable superposition over the entire search space is prepared. In each iteration of the algorithm the coefficients of the selected states, determined by a selection function, are increased and those of the unselected states are decreased by inversion about the mean.
This method amplifies the coefficients of the selected states so that in $\mathcal{O}(\sqrt{N})$ steps we obtain the selected states with high probability. The unstructured search approach can be used to attack any NP problem by iterating over its search space.\\ \noindent From an application perspective, quantum search algorithms have several applications: they can be used to extract statistics, such as the minimal element from an unordered data set, more quickly than is possible on a classical computer \cite{min}. They have been extended to find other measures of central tendency such as the mean \cite{mean} and median \cite{median}. They can be used to speed up algorithms for some problems in NP, specifically those for which a straightforward search for a solution is the best algorithm known. Finally, they can be used to speed up the search for keys to cryptographic systems such as the widely used Data Encryption Standard (DES).\\ \noindent In the field of e-commerce, recommendation systems collect information on the preferences of users for a set of items. The information can be acquired explicitly (by collecting users' ratings) or implicitly (by monitoring users' behavior) \cite{lee,nun,choi}. Such systems make use of different sources of information to provide users with predictions and recommended items, and they try to balance factors like accuracy, novelty, diversity and stability of the recommendations. Collaborative filtering plays an important role in the recommendations, although it is often combined with other filtering techniques such as content-based and knowledge-based filtering. Another important approach in the recommendation process is the k-nearest-neighbor approach, in which we find the $k$ nearest neighbors of the search item.
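For reference, the classical k-nearest-neighbor step just mentioned can be sketched as follows; the cosine-similarity metric and the toy catalog of rating vectors are illustrative assumptions, not part of any particular recommendation system:

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two rating vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def k_nearest_neighbors(target, items, k):
    """Return the k items most similar to the target vector."""
    ranked = sorted(items, key=lambda item: cosine_similarity(target, item),
                    reverse=True)
    return ranked[:k]

# Hypothetical catalog: each tuple is an item's feature/rating vector.
catalog = [(5, 1, 0), (4, 2, 1), (0, 5, 4), (1, 4, 5)]
print(k_nearest_neighbors((5, 0, 0), catalog, k=2))  # -> [(5, 1, 0), (4, 2, 1)]
```

A quantum recommendation procedure would replace this exhaustive ranking with an amplitude-amplification step over the same similarity scores.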
Recently, recommendation system implementations have increased and have been applied in diverse areas \cite{park}, recommending items such as music, television, books and documents; in e-learning and e-commerce; and in markets and web search \cite{car,serr,zai,huang,castro,costa,Mcnally}. Mostly these recommendations are computed over structured classical databases.\\ \noindent NP problems \cite{NPProblems} have been explored in general to be solved using Grover search \cite{grover-framework}, and this has been extended to optimization problems in various specific applications. The class of NP optimization problems (NPO) \cite{NPO} covers combinatorial optimization problems under specific conditions. \noindent In this work we replace the static selection function of the Grover search with a dynamic selection function. This allows us to extend the application of Grover search to the field of randomized search algorithms. One such application is in recommendation systems. We define the goals for a recommendation system and then give an algorithm for a recommendation system over a binomial similarity distribution space, yielding a quadratic speedup over traditional unstructured recommendation systems. Another application is in finding an optimal search state for a given NPO problem. Durr and Hoyer's work \cite{min} performs such optimization in $O(\log(N)\sqrt{N})$ steps, whereas dynamic Grover search can achieve the same in $O(\sqrt{N})$.\\ \noindent In section II we give a brief introduction to Grover search using the standard static selection function. In section III we introduce our model of dynamic Grover search, defining the algorithm over the binomial distribution space and comparing it with traditional unstructured recommendation systems.
In the last section, IV, we provide applications of this dynamic Grover search in recommendation systems and optimization algorithms.\\ \section{II. Grover Search Algorithm} \noindent In this section we briefly describe the Grover search algorithm as a standard searching procedure and elaborate on how it expedites the searching process as compared to a classical search in an unstructured database\cite{nielsen}.\\ \noindent \textbf{Oracle:} Suppose we wish to search for a given element through an unstructured search space consisting of $N$ elements. For the sake of simplicity, instead of directly searching for a given element we assign an index to each of these elements, which is just a number in the range $0$ to $N-1$. Without loss of generality we assume $N=2^n$ and we also assume that there are exactly $M$ solutions ($1\leq M \leq N$) to this search problem. Further, we define a selection function $f$ which takes an input state $\ket{x}$, where the index $x$ lies in the range $0$ to $N-1$. It assigns the value $1$ when the state is a solution to the search problem and the value $0$ otherwise, \begin{eqnarray} f = \begin{cases} 0 & \text{if $\ket{x}$ is not selected}, \\ 1 & \text{if $\ket{x}$ is selected}. \end{cases} \end{eqnarray} \noindent Here we are provided with a quantum oracle (a black box), which is a unitary operator $O$ whose action on the computational basis is given by, \begin{equation} \ket{x}\ket{q} \rightarrow \ket{x} \ket{q \oplus f(x)}. \end{equation} \noindent In the above equation $\ket{x}$ is the index register. The symbol $\oplus$ denotes addition modulo $2$, and the oracle qubit $\ket{q}$ gets flipped if $f(x)=1$. It remains unchanged otherwise.
This helps us to check whether $\ket{x}$ is a solution to the search problem or not, as this is equivalent to checking whether the oracle qubit is flipped or not.\\ \noindent \textbf{Algorithm:} The algorithm starts by creating a superposition of $N$ quantum states by applying a Hadamard transformation on $\ket{0}^{\otimes n}$. \begin{equation} \ket{\psi} =\frac{1}{\sqrt{N}}\sum _{x=0}^{N-1}\ket{x} \end{equation} \noindent The algorithm then proceeds by repeated application of a quantum subroutine known as the Grover iteration or Grover operator, denoted by $G$. The Grover subroutine consists of the following steps:\\ \textbf{Procedure: Grover Subroutine} \begin{itemize} \item Apply the oracle $O$. \item Perform inversion about the mean. \end{itemize} \textbf{Algorithm: Grover Search} \begin{itemize} \item Initialize the system such that there is the same amplitude for all $N$ states. \item Apply the Grover iteration $O(\sqrt{N})$ times. \item Sample the resulting state; we obtain the expected state with probability greater than $1/2$. \end{itemize} \noindent \textbf{Geometry:} The entire process of the Grover iteration can be considered as a rotation in a two dimensional space, where one dimension represents the solution space and the other represents the remaining search space. These normalized states are written as, \begin{eqnarray} \ket{ \alpha} = \frac{1}{\sqrt{N-M}}\sum_x^{''} \ket{ x} \nonumber\\ \ket{ \beta} = \frac{1}{\sqrt{M}}\sum_x^{'} \ket{ x} , \end{eqnarray} \noindent where the primed sum runs over the $M$ solution states and the double-primed sum over the remaining $N-M$ states. The initial state $\ket{ \psi} $ can be re-expressed as \begin{equation} \ket{ \psi} =\sqrt{\frac{N-M}{N}}\ket{ \alpha} +\sqrt{\frac{M}{N}}\ket{ \beta} . \end{equation} \noindent The geometric visualization of the Grover iteration can be described in two parts, the oracle function and the inversion about the mean.
The oracle function can be considered as a reflection of the state $\ket{\psi}$ about $\ket{\alpha}$, and the inversion about the mean then reflects this state about the new mean ($\approx \ket{\psi} $). \noindent In short, $G$ can be understood as a rotation in the two dimensional space spanned by $\ket{ \alpha} $ and $\ket{ \beta} $: it rotates the state $\ket{\psi}$ by an angle $\theta$ radians per application of $G$. Applying the Grover rotation operator multiple times brings the state vector very close to $\ket{ \beta} $.\\ \begin{figure} \begin{tikzpicture} \coordinate (Origin) at (0,0); \coordinate (XAxisMax) at (5,0); \coordinate (YAxisMax) at (0,3); \draw [thin, gray,-latex] (Origin) -- (XAxisMax) node [right] {$\ket{ \alpha }$}; \draw [thin, gray,-latex] (Origin) -- (YAxisMax) node [right] {$\ket{ \beta }$}; \draw [thick, black,-latex] (Origin) -- (15: 4) node [right] {$\ket{ \psi }$}; \draw [thick, gray,-latex] (Origin) -- (-15: 4) node [right] {$O \ket{ \psi }$}; \draw [thick, black,-latex] (Origin) -- (45: 4) node [right] {$G \ket{ \psi }$}; \draw [dotted, gray,-latex] (15: 4) -- (-15: 4) -- (45: 4); \draw [<->] (0:2.1) arc (0:15:2.1) node [ right ] {$\theta /2$}; \draw [<->] (0:2.1) arc (0:-15:2.1) node [above right] {$-\theta /2$}; \draw [->] (15:2.1) arc (15:45:2.1) node [below right] {$\theta$}; \end{tikzpicture} \caption{The geometrical representation of a single Grover iteration} \label{fig:gover} \end{figure} \section{III. Dynamic Grover's Search} \noindent In this section we introduce a dynamic selection function $f_{s}$ which selects a state $\ket{ x} $ with a certain probability $P_{s}(\ket{ x} )$. We use $f_{s}$ instead of the static selection function $f$ used in Grover's search in order to introduce randomness into the search algorithm itself. This selection criterion can be based on different properties, such as similarity to a given state or the number of satisfied clauses, for applications in recommendation systems, MAX-SAT optimization systems, etc.
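As a minimal sketch of how such a dynamic selection function might be realized classically (the similarity scores and the threshold-free sampling rule below are hypothetical toy assumptions):

```python
import random

def make_dynamic_selector(p_s, rng=None):
    """Build a dynamic selection function f_s from a probability map P_s.

    f_s(x) returns 1 with probability P_s(x) and 0 otherwise, so repeated
    calls on the same state need not give the same answer.
    """
    rng = rng or random.Random(0)
    return lambda x: 1 if rng.random() < p_s(x) else 0

# Toy P_s: selection probability proportional to similarity with a target item.
similarity = {0: 0.95, 1: 0.40, 2: 0.10, 3: 0.02}  # hypothetical scores
f_s = make_dynamic_selector(lambda x: similarity[x])
picks = [x for x in similarity if f_s(x) == 1]      # one sampled selection
```

In the quantum algorithm, one such sampled outcome of $f_s$ plays the role of the oracle's marked set for a given iteration.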
\\ \noindent We consider $N$ items in a search space which are represented by (computational) basis vectors in a Hilbert space. Here our goal is to select $N_{s}$ states out of these $N$ states using the dynamic selection function. We define the dynamic selection function as: \[ f_{s} = \begin{cases} 1 & \text{$\ket{x}$ is selected with probability $P_{s}(x)$}, \\ 0 & \text{otherwise}. \end{cases} \] The dynamic nature of this function introduces selection scenarios that are fundamentally different from the traditional Grover search.\\ \noindent For the analysis, let the selected states be represented by $\ket{ x_s} $ with coefficients $a_s$ and the unselected states by $\ket{ x_{us}} $ with coefficients $a_{us}$. The state of the system at a given time can be represented as \[ \ket{ \psi } = \sum_{s} a_s\ket{ x_s} + \sum_{us} a_{us}\ket{ x_{us}} . \] The probability of sampling from the selected states is given by $P_s(=\sum_{s}|a_s|^2)$. Similarly for the unselected states we have the corresponding probability $P_{us}(=\sum_{us}|a_{us}|^2)$. We define the gain $G(=\frac{P_s}{P_{us}})$ as an indicator of the achievability of the desired result. \subsection{Analysis of Grover's Search in a different scenario} \noindent In this subsection we discuss the impact of the dynamic selection function on the execution of Grover's search. We also analyze the conditions required for a Grover step to complete successfully.\\ \noindent \textbf{Corollary 1:} For proper execution of Grover search the following conditions must be satisfied. \\ \begin{enumerate} \item The mean $\mu$ calculated in the inversion step should be positive. \item The probability amplitudes $a_{us}$ of the unselected states $\ket{ x_{us}} $ must remain positive. \item The number of selected states $\ket{ x_{s}} $ for gain $G$ should satisfy \[ N_{s} < \frac{N}{2G}, \quad \text{where } G \gg 1. \] \end{enumerate} \noindent \textit{Proof 1:} The mean $\mu$ (calculated in the inversion step) should be positive.
If the mean is less than 0 then the coefficients of the selected states will decrease and the coefficients of the unselected states will become negative, as given by, \begin{eqnarray} &&a_{x_{s}} = \mu - (- a_{x_{s}} - \mu) = 2\mu + a_{x_{s}}{}\nonumber \\&& a_{x_{us}} = \mu - (a_{x_{us}} - \mu) = 2\mu - a_{x_{us}} \end{eqnarray} \noindent \textit{Proof 2:} The coefficients of the unselected states must remain positive, for if the coefficient of an unselected state is negative the mean will be negative.\\ \noindent \textit{Proof 3:} For successfully achieving a gain $G$, \[ N_{s} < \frac{N}{2G}, \quad \text{where } G \gg 1, \] as described in Appendix A1.\\ \noindent \textbf{Corollary 2:} If a state is selected in one Grover iteration and not in the next one, the coefficient of that state after the next iteration will be less than those of the states which are not selected in both rounds.\\ \noindent \textit{Proof :} Consider $\ket{ \psi_{i}} $ to be the input state to the iteration process. The first Grover step inverts the selected state to $\ket{ \psi_{iv}} $ and then calculates the mean state $\ket{ \psi_{\mu}} $. The final probability of the selected state $\ket{ \psi_{s}} $ is increased and that of the state $\ket{ \psi_{us}} $ is decreased as compared to $\ket{\psi_{i}}$, as described earlier in the introduction. \\ \noindent Let $a_s$, $a_{us}$ be the coefficients, $\mu_1$ be the mean and $a_{s1}$, $a_{us1}$ be the outputs of the first Grover iteration. Then we have,\\ \begin{eqnarray} && a_{s1} = 2 \mu_1 + a_s {}\nonumber \\&& a_{us1} = 2 \mu_1 - a_{us} \end{eqnarray} \noindent For the second iteration, when no states are selected, let $\mu_2$ be the mean and $a_{s2}$, $a_{us2}$ be the outputs of the iteration.
These coefficients are given by,\\ \begin{eqnarray} && a_{s2} = 2 \mu_2 - a_{s1} = 2\mu_2 - 2\mu_1 - a_s {}\nonumber \\&& a_{us2} = 2 \mu_2 - a_{us1} = 2\mu_2 - 2\mu_1 + a_{us} \end{eqnarray} \noindent Hence the coefficient $a_{s2}$ of the state that was selected in one iteration is less than the coefficient $a_{us2}$ of the state that was never selected. \noindent Fig.~\ref{fig:GeometricReper} shows a geometric representation of a Grover iteration with the initial state (black arrow) and the selected and unselected states (red arrows).\\ \begin{figure}[h] \begin{tikzpicture} \coordinate (Origin) at (0,0); \coordinate (XAxisMax) at (5,0); \coordinate (YAxisMax) at (0,3); \draw [thin, gray,-latex] (Origin) -- (XAxisMax) node [right] {$\ket{\alpha}$}; \draw [thin, gray,-latex] (Origin) -- (YAxisMax) node [right] {$\ket{\beta}$}; \draw [thick, black,-latex] (Origin) -- (15: 4) node [above right] {$\ket{\psi_{i}}$}; \draw [thick, gray,-latex] (Origin) -- (-15: 4) node [above right] {$ \ket{\psi_{iv}}$}; \draw [dotted, black,-latex] (Origin) -- (13: 4) node [right] {$ \ket{\psi_{\mu}}$}; \draw [thick, red,-latex] (Origin) -- (11: 4) node [below right] {$ \ket{\psi_{us}}$}; \draw [thick, red,-latex] (Origin) -- (41: 4) node [above right] {$ \ket{\psi_{s}}$}; \draw [<-] (0:1.5) arc (0:15:1.5) node [below right] {$\theta$}; \draw [->] (0:1.5) arc (0:-15:1.5) node [above right] {$-\theta$}; \draw [->] (-15:2.1) arc (-15:13:2.1) node [below right] {$\theta_{s}$}; \draw [->] (13:2.1) arc (13:41:2.1) node [right] {$\theta_{s}$}; \draw [->] (15:2.5) arc (15:11:2.5) node [above] {$2 \theta_{us}$}; \end{tikzpicture} \caption{Geometrical representation of selection in Grover search} \label{fig:GeometricReper} \end{figure} \noindent Further, let us suppose that in the next iteration the state $\ket{ \psi_{s}} $ which was selected earlier is not selected, and the earlier unselected state $\ket{ \psi_{us}} $ remains unselected.
Now consider that no state is selected, so no inversion occurs; in that case the mean $\ket{ \psi_{\mu2}} $ lies just above $\ket{ \psi_{us1}} $. The inversion about the mean will cause the state $\ket{ \psi_{s1}} $ (in black) to become the state $\ket{ \psi_{s2}} $ (in red), which has a probability lower than that of the twice-unselected state $\ket{ \psi_{us2}} $. \\ \begin{figure}[h] \begin{tikzpicture} \coordinate (Origin) at (0,0); \coordinate (XAxisMax) at (5,0); \coordinate (YAxisMax) at (0,3); \draw [thin, gray,-latex] (Origin) -- (XAxisMax) node [below right] {$\ket{ \alpha }$}; \draw [thin, gray,-latex] (Origin) -- (YAxisMax) node [right] {$\ket{ \beta }$}; \draw [thick, black,-latex] (Origin) -- (13: 4) node [below right] {$ \ket{ \psi_{us1} }$}; \draw [thick, black,-latex] (Origin) -- (27: 4) node [above right] {$ \ket{ \psi_{s1} }$}; \draw [dotted, black,-latex] (Origin) -- (15: 4) node [right] {$ \ket{ \psi_{\mu2} }$}; \draw [thick, red,-latex] (Origin) -- (17: 4) node [above right] {$\ket{ \psi_{us2} }$}; \draw [thick, red,-latex] (Origin) -- (3: 4) node [right] {$ \ket{ \psi_{s2} }$}; \draw [->] (27:1.5) arc (27:3:1.5) node [below right] {$2 \theta_{s2}$}; \draw [<-] (17:2.5) arc (17:13:2.5) node [below right] {$2 \theta_{us2}$}; \end{tikzpicture} \caption{Geometrical representation of the rejection of a selected state} \end{figure} \noindent Note that this issue is not restricted to the case where no state is selected; it is inherent in the use of the inversion about the mean for unselected states.
In order to overcome this issue, we need to always run each Grover iteration twice with a given result from the selection function $f_s$, so that the relative coefficients remain ordered by the number of times each state was selected.\\ \noindent \textbf{Corollary 3:} If no state is selected and the Grover step is repeated twice, the coefficients do not change.\\ \noindent \textbf{Corollary 4:} If all states are selected and the Grover step is repeated twice, there is no net change in the coefficients.\\ \noindent \textit{Proof :} In the first step, all the coefficients become the negatives of themselves, so the mean is negative and the inversion about the mean leaves the coefficients negative. In the second step, all the coefficients again become the negatives of themselves, so the mean is positive and the coefficients are rotated back to their original positions about the mean. \\ \subsection{Dynamic Grover Search Algorithm} \noindent In this subsection we formalize the procedure of the dynamic Grover search algorithm.\\ \noindent \textbf{Procedure: Dynamic Grover Iteration} \begin{itemize} \item Apply the oracle $f_{s}$ and store the result. \item Apply the \textit{Grover Iteration} using the stored oracle results. \item Apply the \textit{Grover Iteration} again using the stored oracle results, to nullify any negative effects of the inversion about the mean. \end{itemize} \noindent \textbf{Algorithm: Dynamic Grover Search} \begin{itemize} \item Initialize the system so that all $N$ states have the same amplitude. \item Apply the \textit{Dynamic Grover Iteration} $O(\sqrt{N})$ times. \item Sample the resulting state, obtaining the expected state with probability $> \frac{1}{2}$. \end{itemize} \section{IV. Applications of Dynamic Grover Search} \noindent In this section we explore two different applications of our dynamic Grover search algorithm. In the first subsection we show its application to the recommendation process.
In the second, we give a generic optimization problem and its solution using dynamic Grover search. \subsection{Quantum Recommendation Algorithm} \noindent Now that we have described the dynamic Grover algorithm, we consider how it can be applied to recommendation systems. \\ \noindent In essence, a recommendation algorithm on an unstructured search space is similar to Grover search, differing only in the selection function. If we know the search space well, we can construct a static selection function that selects the top $M$ states. In the case of an unknown or dynamic search space, we may not always be able to construct a static selection function. We then need to tie the selection dynamically to the similarity with a given state $\ket{x} $ (say). This will increase the probability of selection of the desired $M$ states sufficiently.\\ \subsection{Recommendation Problem} \begin{itemize} \item Consider a standard recommendation problem. \item Let $S$ be an unstructured search space. \item Let the dimensionality of the space be $n$ and the total number of states be $N (=2^n)$. \item We need to find $M$ recommended states for a given search result $\ket{x} $. \end{itemize} \noindent Let the similarity of two pure states $S(\ket{x} ,\ket{y} )$ represent a measure of how likely these two states are to be recommended for each other.\\ \textbf{Criteria for an effective Recommendation function} \\ \noindent Now we give criteria for the selection function $f_{s}$ to be effective in a dynamic system. Let the dynamic selection function be given as: \begin{eqnarray} f_{s} = \begin{cases} 1 & \text{$\ket{x}$ is selected with $P_{s}$}, \\ 0 & \text{otherwise}.
\end{cases} \end{eqnarray} \noindent For a good recommendation, $P_{s}$ should satisfy the following criteria: \begin{itemize} \item The most likely state is selected with high probability, \[ \lim_{S(x,y) \to n } P_{s}(x) \geq \Big( 1-\frac{1}{N} \Big) \] \item The least likely state is selected with low probability, \[ \lim_{S(x,y) \to 0 } P_{s}(x) \leq \Big( \frac{1}{N} \Big) \] \item In order to select $M$ states, we have from Corollary 1, \[ M < \frac{N}{2G} \] The expected number of selected states is \[ E(N_{s}) = \int_{x} P_{s}(x,y) \approx M \] \end{itemize} \subsection{Recommendation for a Binomial Distribution} \noindent Consider an example of an initial state space with equal probability for each state. Let the similarity of states with respect to a particular state be given by the Manhattan distance between them. The similarity function $S(x,y)$ then follows a binomial curve (Fig.~\ref{fig:BinomialDistribution}). The probability of selection is given by the following equation: \begin{equation} e^{-\log(\sqrt[n]{K}-1) S(\ket{x},\ket{y})} \end{equation} \noindent The expected selection is shown in Fig.~\ref{fig:BinomialProbabilty}. \pgfmathdeclarefunction{gauss}{2}{% \pgfmathparse{1/(#2*sqrt(2*pi))*exp(-((x-#1)^2)/(2*#2^2))}% } \begin{figure}[h] \begin{tikzpicture} \begin{axis}[ no markers, domain=0.8:7.5, samples=100, axis lines*=left,xlabel=$x$, ylabel=$N_{States}$, every axis y label/.style={at=(current axis.above origin),anchor=south}, every axis x label/.style={at=(current axis.right of origin),anchor=west}, height=5cm, width=8cm, xtick=\empty, ytick=\empty, enlargelimits=false, clip=false, axis on top, grid = major ] \addplot [thick,cyan!50!black] {gauss(4,1)}; \draw [yshift=-0.6cm, xshift=-1.3cm, latex-latex](axis cs:4,0) -- node [fill=white] {$S(|x\rangle, |y\rangle)$} (axis cs:7.2,0); \end{axis} \end{tikzpicture} \caption{Distribution of the states with respect to the similarity function ($S(|x\rangle, |y\rangle)$ vs $N_{States}$)}
\label{fig:BinomialDistribution} \end{figure} \pgfmathdeclarefunction{selectionProb}{2}{% \pgfmathparse{-1/(x*x*x-0.1) +#1}% } \begin{figure}[h] \begin{tikzpicture} \begin{axis}[ no markers, domain=0.8:7.5, samples=100, axis lines*=left,xlabel={$S(|x\rangle, |y\rangle)$}, ylabel={$P_s$}, height=6cm, width=8cm, xtick=\empty, ytick=\empty, enlargelimits=false, clip=false, axis on top, grid = major ] \addplot [fill=cyan!20, draw=none, domain=-0.6:-0.5] {selectionProb(0,10)} \closedcycle; \addplot [thick,cyan!50!black,domain=-2:-0.5] {selectionProb(0,10)}; \draw [yshift=-0.6cm, xshift=-3cm, latex-latex](axis cs:4,0) -- node [fill=white] {$S(|x\rangle, |y\rangle)$} (axis cs:5.96,0); \end{axis} \end{tikzpicture} \caption{Plotting probability of selection $P_s$ with the similarity function $S(|x\rangle, |y\rangle)$. The blue shaded region indicates the expected selected items.} \label{fig:BinomialProbabilty} \end{figure} \begin{figure}[h] \begin{tikzpicture}[x=1cm,y=0.2cm] \begin{axis}[ axis lines=center, axis on top=true, xmin=10, xmax=14, xlabel={Number of bits}, xticklabel style={/pgf/number format/1000 sep=}, ymin=0, ymax=70, ylabel={Number of steps} ] \addplot[color=blue,mark=o] coordinates { ( 10, 7 ) ( 11, 11 ) ( 12, 17 ) ( 13, 22 ) ( 14, 32 ) }; \addplot[color=red,mark=o] coordinates { ( 10, 7 ) ( 11, 10 ) ( 12, 13 ) ( 13, 18 ) ( 14, 25 ) }; \end{axis} \end{tikzpicture} \caption{Comparative analysis of the number of steps with the dimensionality of the search space: Dynamic Grover (blue), Grover (red)} \label{fig:ExecutionComparison} \end{figure} \begin{figure}[h] \begin{tikzpicture}[x=1cm,y=0.2cm] \begin{axis}[ axis lines=left, axis on top=true, xmin=0, xmax=15, xlabel={$S(|x\rangle, |y\rangle)$}, xticklabel style={/pgf/number format/1000 sep=}, ymin=0, ymax=1.0, ylabel={Probability} ] \addplot[color=blue,mark=o] coordinates { ( 0 , 0 ) ( 1 , 0 ) ( 2 , 0 ) ( 3 , 7.31511470083e-06 ) ( 4 , 0.000105024146776 ) ( 5 , 0.000570578946665 ) ( 6 , 0.00249286535328 ) ( 7 ,
0.00745920069926 ) ( 8 , 0.0317610356855 ) ( 9 , 0.0618686957303 ) ( 10, 0.120915738795 ) ( 11, 0.179278528288 ) ( 12, 0.240100783919 ) ( 13, 0.35544023332 ) }; \addplot[smooth, color=red,mark=o] coordinates { ( 0 , 0 ) ( 1 , 0 ) ( 2 , 0 ) ( 3 , 6.428343871e-07 ) ( 4 , 9.22926512908e-06 ) ( 5 , 5.01410821938e-05 ) ( 6 , 0.000137933892775 ) ( 7 , 0.000211768013807 ) ( 8 , 0.000193217650065 ) ( 9 , 0.000106848258484 ) ( 10, 3.51263075808e-05 ) ( 11, 6.70384432261e-06 ) ( 12, 0.932631829595 ) ( 13, 0.0666165592568 ) }; \end{axis} \end{tikzpicture} \caption{Plotting the probability of sampling against the similarity function $S(|x\rangle, |y\rangle)$. The red line indicates the standard Grover's algorithm, while the blue line indicates our dynamic Grover search algorithm.} \label{fig:accuracy} \end{figure} \noindent We see that the dynamic selection gives performance (Fig.~\ref{fig:ExecutionComparison}) and accuracy (Fig.~\ref{fig:accuracy}) similar to what a static selection function would give using the Grover search. However, this algorithm makes our recommendation system robust to changes in the search space and its distribution. Further details can be found in the Appendix. \subsection{Approximate Optimization Algorithms} \noindent The Grover search algorithm was a landmark because it provided a framework \cite{grover-framework} that can be used to solve any NP problem with a quadratic speedup over classical systems.
Durr and Hoyer's work on finding the minimum of a search space \cite{min} can be considered a tool for finding the optimal value (min or max) of a search space, but it applies the Grover search multiple times and uses the quantum probability during sampling to arrive at the optimal state.\\ \noindent \textbf{Optimization Problem:} An optimization problem can be represented in the following way:\\ $\boldsymbol{Given:}$ A function $f : A \to R$ from some set $A$ to the set of all real numbers $R$.\\ $\boldsymbol{Sought:}$ An element $\ket{x_o} $ (optimal) in $A$ such that $f(\ket{x_o}) \leq f(\ket{x} )$ for all $\ket{x} $ in $A$ (minimization) or such that $f(\ket{x_o} ) \geq f(\ket{x} )$ for all $\ket{x} $ in $A$ (maximization). \\ \subsection{Solving the Optimization Problem using Durr and Hoyer's Min Approach} \noindent To solve the optimization problem using Durr and Hoyer's approach \cite{min}, the selection function would use the following: \begin{enumerate} \item Set $\ket{x_o} = \ket{0} $ \item Set $f_{s}$ to select states with $f(\ket{x} ) \geq f(\ket{x_o} )$ \item Run Grover search using $f_{s}$ and sample out $\ket{y} $ \item \textbf{if} $ f(\ket{y} ) > f(\ket{x_o}) $ \begin{itemize} \item[] $\ket{x_o} = \ket{y} $ \item[] Repeat from 2 \end{itemize} \item \textbf{else} return $\ket{x_o} $. \end{enumerate} \noindent The algorithm runs in an expected $O(\log(N)\sqrt{N})$ Grover iterations.\\ \noindent \textbf{Solving the Optimization Problem using Dynamic Grover Search:} With the dynamic Grover system, we present a generic framework for solving optimization problems using classical probability.
Consider the distribution function $D(\ket{x} ) = f(\ket{x} )$; we use a probabilistic function $P_{s}: A \to [0,1]$ such that\\ \[ \lim_{f(\ket{x} ) \to f_{max} } P_{s}(\ket{x} ) \geq \Big( 1-\frac{1}{N} \Big) \] \[ \lim_{f(\ket{x} ) \to f_{min}} P_{s}(\ket{x} ) \leq \Big( \frac{1}{N} \Big) \] \noindent Using a good heuristic, a probabilistic function $P_{s}$ can be chosen that yields optimal results with high probability by running the Grover search algorithm once. Hence the dynamic Grover search can be applied to any optimization problem, running in $O(\sqrt{N})$ Grover iterations; the accuracy of the search depends on the probability function $P_{s}$.\\
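The two-step dynamic Grover iteration described above can be checked with a small classical simulation of the amplitude dynamics. The sketch below is illustrative rather than the authors' implementation: it uses a deterministic single-target oracle for clarity (the variable names and the chosen $N=64$ are ours), and each dynamic iteration samples the oracle once, stores the result, and applies the Grover step twice with that stored selection.

```python
import numpy as np

def grover_step(amps, mask):
    """One Grover step: phase-flip the selected coefficients, then invert about the mean."""
    a = amps.copy()
    a[mask] = -a[mask]           # oracle: invert the selected states
    return 2 * a.mean() - a      # inversion about the mean: a' = 2*mu - a

def dynamic_grover_iteration(amps, oracle):
    """Sample the (possibly probabilistic) oracle once, store the result,
    and apply the Grover step twice with the stored selection."""
    mask = oracle(len(amps))
    return grover_step(grover_step(amps, mask), mask)

N = 64
target = 7                                   # hypothetical marked state
oracle = lambda n: np.arange(n) == target    # deterministic selection for illustration

amps = np.full(N, 1 / np.sqrt(N))            # uniform initial superposition
# Each dynamic iteration contains two Grover steps, so ~(pi/8)*sqrt(N) of them suffice.
for _ in range(int(np.pi / 8 * np.sqrt(N))):
    amps = dynamic_grover_iteration(amps, oracle)

probs = amps ** 2 / np.sum(amps ** 2)
print(probs[target])                         # close to 1: sampling succeeds w.h.p.
```

With a probabilistic selection function $P_s$, `oracle` would instead draw a random mask at each dynamic iteration; applying the step twice per stored mask is what keeps the coefficient ordering consistent, per Corollary 2.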
\subsection*{1. Proof of Theorem $1$} For any three different nodes $p_i, p_j, p_k$ in $\mathbb{R}^3$, the condition $\theta_i+\theta_j+\theta_k = \pi$ must hold. The angle constraints can be rewritten as \begin{align}\label{para1} & w_{ik}d_{ik}d_{ij}\cos \theta_i + w_{ki}d_{ik}d_{jk}\cos \theta_k =0, \\ \label{para2} & w_{ij}d_{ik}d_{ij}\cos \theta_i + w_{ji}d_{ij}d_{jk}\cos \theta_j =0, \\ \label{para3} & w_{jk}d_{jk}d_{ij}\cos \theta_j + w_{kj}d_{ik}d_{jk}\cos \theta_k =0, \end{align} with $w_{ik}^2+w_{ki}^2 \neq 0$, $w_{ij}^2+w_{ji}^2 \neq 0$, and $w_{jk}^2+w_{kj}^2 \neq 0$. First, we introduce \textit{Lemma 7} and \textit{Lemma 8} below for proving \text{Theorem 1}. \textit{Lemma 7.} $p_i, p_j, p_k$ are non-colinear if the parameters in (1)-(3) satisfy $ w_{ik}w_{ij}w_{jk}w_{ki}w_{ji}w_{kj} = 0$. \begin{proof} When $ w_{ik}w_{ij}w_{jk}w_{ki}w_{ji}w_{kj} = 0$, without loss of generality, suppose $w_{ik}=0$, since $w_{ik}^2+w_{ki}^2 \neq 0$, we have $\theta_k = \frac{\pi}{2}$, $\theta_i + \theta_j= \frac{\pi}{2}$ from \eqref{para1}. Hence, $p_i, p_j, p_k$ are non-colinear. Similarly, we can prove that $p_i, p_j, p_k$ are non-colinear if the parameter $w_{ij}=0$, or $w_{jk}=0$, or $w_{ki}=0$, or $w_{ji}=0$, or $w_{kj}=0$. \end{proof} If $ w_{ik}w_{ij}w_{jk}w_{ki}w_{ji}w_{kj} \neq 0$, \eqref{para1}-\eqref{para3} can be rewritten as \begin{align}\label{para21} & \frac{\cos \theta_i}{\cos \theta_k} = - \frac{w_{ki}d_{jk}}{w_{ik}d_{ij}}, \\ \label{para22} & \frac{\cos \theta_i}{\cos \theta_j} = - \frac{w_{ji}d_{jk}}{w_{ij}d_{ik}}, \\ \label{para23} & \frac{\cos \theta_j}{\cos \theta_k} = - \frac{w_{kj}d_{ik}}{w_{jk}d_{ij}}. \end{align} From \eqref{para21} and \eqref{para22}, we have \begin{align}\label{di} & d_{ij} = -\frac{\cos \theta_k}{\cos \theta_i}\frac{w_{ki}}{w_{ik}}d_{jk},\\ \label{dj} & d_{ik} = -\frac{\cos \theta_j}{\cos \theta_i}\frac{w_{ji}}{w_{ij}}d_{jk}. 
\end{align} Note that $\cos \theta_i = \frac{d_{ij}^2+d_{ik}^2-d_{jk}^2}{2d_{ij}d_{ik}}$, $\cos \theta_j = \frac{d_{ij}^2+d_{jk}^2-d_{ik}^2}{2d_{ij}d_{jk}}$, $\cos \theta_k = \frac{d_{ik}^2+d_{jk}^2-d_{ij}^2}{2d_{ik}d_{jk}}$. Combining \eqref{di} and \eqref{dj} yields \begin{equation}\label{dd1} \frac{d_{ij}^2+d_{ik}^2-d_{jk}^2}{d_{ik}^2+d_{jk}^2-d_{ij}^2} + \frac{d_{ij}^2+d_{ik}^2-d_{jk}^2}{d_{ij}^2+d_{jk}^2-d_{ik}^2} = -(\frac{w_{ki}}{w_{ik}} +\frac{w_{ji}}{w_{ij}}). \end{equation} \textit{Lemma 8.} When the parameters in (1)-(3) satisfy $ w_{ik}w_{ij}w_{jk}w_{ki}w_{ji}w_{kj} \neq 0$, $p_i, p_j, p_k $ are colinear if and only if \begin{equation} \frac{w_{ki}}{w_{ik}} +\frac{w_{ji}}{w_{ij}} =1, \text{or} \ \frac{w_{ij}}{w_{ji}} + \frac{w_{kj}}{w_{jk}} =1, \text{or} \ \frac{w_{ik}}{w_{ki}} +\frac{w_{jk}}{w_{kj}} =1. \end{equation} \begin{proof} (Necessity) If $p_i, p_j, p_k $ are colinear, there are three cases: $\text{(\romannumeral1)}$ $\theta_i=\pi$, $\theta_j, \theta_k =0$; $\text{(\romannumeral2)}$ $\theta_j=\pi$, $\theta_i, \theta_k =0$; $\text{(\romannumeral3)}$ $\theta_k=\pi$, $\theta_i, \theta_j =0$. For the case $\text{(\romannumeral1)}$ that $\theta_i=\pi$, $\theta_j, \theta_k =0$, we have $d_{ij}+d_{ik}= d_{jk}$. Substituting $d_{ij}+d_{ik}= d_{jk}$ into \eqref{dd1}, we get \begin{equation} \frac{w_{ki}}{w_{ik}} +\frac{w_{ji}}{w_{ij}}=1. \end{equation} Similarly, the conditions can be derived for the other two cases $\text{(\romannumeral2)}$-$\text{(\romannumeral3)}$. (Sufficiency) If $\frac{w_{ki}}{w_{ik}} +\frac{w_{ji}}{w_{ij}}=1$, \eqref{dd1} becomes \begin{equation}\label{dd2} \frac{d_{ij}^2+d_{ik}^2-d_{jk}^2}{d_{ik}^2+d_{jk}^2-d_{ij}^2} + \frac{d_{ij}^2+d_{ik}^2-d_{jk}^2}{d_{ij}^2+d_{jk}^2-d_{ik}^2} = -1. \end{equation} Then, \eqref{dd2} can be rewritten as \begin{equation}\label{dd3} (d_{ij}^2+d_{ik}^2-d_{jk}^2)^2 = 4d_{ik}^2d_{ij}^2.
\end{equation} Since $\cos \theta_i = \frac{d_{ij}^2+d_{ik}^2-d_{jk}^2}{2d_{ij}d_{ik}}$, \eqref{dd3} becomes \begin{equation} 4d_{ij}^2d_{ik}^2\cos^2 \theta_i = 4d_{ij}^2d_{ik}^2, \rightarrow \cos^2 \theta_i = 1. \end{equation} Hence, $\theta_i = 0$ or $\pi$, i.e., $p_i, p_j, p_k $ must be colinear. Similarly, we can prove that $p_i, p_j, p_k $ must be colinear for the other two cases $\frac{w_{ij}}{w_{ji}} + \frac{w_{kj}}{w_{jk}} =1 \ \text{and} \ \frac{w_{ik}}{w_{ki}} +\frac{w_{jk}}{w_{kj}} =1$. \end{proof} Next, we will prove that the angles $\theta_i, \theta_j, \theta_k \in [0, \pi] $ are determined uniquely by the parameters $w_{ik}, w_{ki}, w_{ij}, w_{ji}, $ $ w_{jk}, w_{kj}$ in (1)-(3). From \textit{Lemma 7} and \textit{Lemma 8}, we know that there are only three cases for (1)-(3): \begin{enumerate}[(i)] \item $ w_{ik}w_{ij}w_{jk}w_{ki}w_{ji}w_{kj} = 0$; \item $ w_{ik}w_{ij}w_{jk}w_{ki}w_{ji}w_{kj} \neq 0$, and $\frac{w_{ki}}{w_{ik}} +\frac{w_{ji}}{w_{ij}} =1$, \text{or} \ $\frac{w_{ij}}{w_{ji}} + \frac{w_{kj}}{w_{jk}} =1$, \text{or} \ $\frac{w_{ik}}{w_{ki}} +\frac{w_{jk}}{w_{kj}} =1$; \item $ w_{ik}w_{ij}w_{jk}w_{ki}w_{ji}w_{kj} \neq 0$, and $ \frac{w_{ki}}{w_{ik}} +\frac{w_{ji}}{w_{ij}}, \frac{w_{ij}}{w_{ji}} + \frac{w_{kj}}{w_{jk}}, \frac{w_{ik}}{w_{ki}} +\frac{w_{jk}}{w_{kj}} \neq 1$. \end{enumerate} The above three cases $(\text{\romannumeral1})\!-\!(\text{\romannumeral3})$ are analyzed below. For the case $(\text{\romannumeral1})$, from \textit{Lemma 7}, we know that $p_i, p_j, p_k$ are non-colinear and form a triangle $\bigtriangleup_{ijk}(p)$. Without loss of generality, suppose $w_{ik}=0$; since $w_{ik}^2+w_{ki}^2 \neq 0$, we have $\theta_k = \frac{\pi}{2}$, $\theta_i + \theta_j= \frac{\pi}{2}$ from \eqref{para1}. Since $\theta_i + \theta_j = \frac{\pi}{2}$, we have $w_{ij}\cdot w_{ji} < 0$ from \eqref{para22}. According to the sine rule, $\frac{d_{jk}}{d_{ik}}= \frac{\sin \theta_i}{\sin \theta_j}$.
Then, \eqref{para22} becomes \begin{equation}\label{tan} \frac{\tan \theta_i}{\tan \theta_j} = - \frac{w_{ij}}{w_{ji}}. \end{equation} Since $\theta_j= \frac{\pi}{2}-\theta_i$, from \eqref{tan}, we have \begin{equation} \tan \theta_i = \sqrt{- \frac{w_{ij}}{w_{ji}}}, \rightarrow \theta_i = \arctan \sqrt{- \frac{w_{ij}}{w_{ji}}}. \end{equation} Similarly, we can prove that $\theta_i, \theta_j, \theta_k$ can be determined uniquely if any one of the parameters $w_{ij}, w_{jk}, w_{ki}, w_{ji}, w_{kj}$ equals $0$. For the case $(\text{\romannumeral2})$, from \textit{Lemma 8}, we know that $p_i, p_j, p_k$ are colinear. Two of $\theta_i, \theta_j, \theta_k$ must be $0$. If $w_{kj}w_{jk}<0$, i.e., $\frac{\cos \theta_j}{\cos \theta_k}>0$ from \eqref{para23}, we have $\theta_i=\pi$, $\theta_j, \theta_k =0$. Similarly, we have $\theta_j=\pi$, $\theta_i, \theta_k =0$ if $w_{ki}w_{ik}<0$, and $\theta_k=\pi$, $\theta_i, \theta_j =0$ if $w_{ji}w_{ij}<0$. For the case $(\text{\romannumeral3})$, from \textit{Lemma 8}, we know that $p_i, p_j, p_k$ are non-colinear and form a triangle $\bigtriangleup_{ijk}(p)$. For this triangle $\bigtriangleup_{ijk}(p)$, at most one of $\theta_i, \theta_j, \theta_k$ is an obtuse angle. Hence, there are only four possible cases: $(\text{a})$ $w_{ki}w_{ik}, w_{ji}w_{ij}, w_{kj}w_{jk} <0$; $(\text{b})$ $w_{ki}w_{ik}, w_{ji}w_{ij}>0, w_{kj}w_{jk} <0$; $(\text{c})$ $w_{ki}w_{ik}, w_{kj}w_{jk}>0, w_{ji}w_{ij}<0$; $(\text{d})$ $w_{ji}w_{ij}, w_{kj}w_{jk} >0, w_{ki}w_{ik}<0$. For the case $(\text{a})$, we have $\theta_i, \theta_j, \theta_k < \frac{\pi}{2}$. From \eqref{para21} and \eqref{para22}, we have \begin{equation}\label{trans} \tan \theta_k = -\frac{w_{ki} }{w_{ik}} \tan \theta_i, \ \ \tan \theta_j = -\frac{w_{ji} }{w_{ij}} \tan \theta_i. \end{equation} Note that $\tan \theta_i = \tan (\pi- \theta_j - \theta_k)= \frac{\tan \theta_j +\tan \theta_k}{\tan \theta_j\tan \theta_k -1}$.
Based on \eqref{trans}, we have \begin{equation} \tan \theta_i = \sqrt{\frac{1-\frac{w_{ki}}{w_{ik}}- \frac{w_{ji}}{w_{ij}}}{\frac{w_{ki}w_{ji}}{w_{ik}w_{ij}}}}. \end{equation} Then, we can obtain the angle $\theta_i$ by \begin{equation} \theta_i = \arctan \sqrt{\frac{1-\frac{w_{ki}}{w_{ik}}- \frac{w_{ji}}{w_{ij}}}{\frac{w_{ki}w_{ji}}{w_{ik}w_{ij}}}}. \end{equation} Similarly, the angles $\theta_j$ and $\theta_k$ can also be obtained. In the same way, we can prove that $\theta_i, \theta_j, \theta_k$ can be determined uniquely by the parameters $w_{ik}, w_{ki}, w_{ij}, w_{ji}, w_{jk}, w_{kj}$ for the cases $(\text{b})$-$(\text{d})$. \begin{figure}[t] \centering \includegraphics[width=0.4\linewidth]{fbearing.pdf} \caption{3-D local-relative-bearing-based network. } \label{moti1} \end{figure} \subsection*{2. Proof of \textit{Lemma} $2$} \begin{proof} Since $ \mu_{ij}e_{ij}\!+\! \mu_{ik}e_{ik}\!+\! \mu_{ih}e_{ih}\!+\! \mu_{il}e_{il}=\mathbf{0}$ and $w_{ik}e_{ik}^Te_{ij}+w_{ki}e_{ki}^Te_{kj}=0$, for the scaling space $S_s$, it is straightforward that $\eta_d^Tp =\mathbf{0}$ and $\eta_r^Tp =0$. For the translation space $S_t$, we have $\eta_d^T( \mathbf{1}_n\otimes {I}_3)=\mathbf{0}$ and $\eta_r^T( \mathbf{1}_n\otimes {I}_3)=\mathbf{0}$. For the rotation space $S_r= \{(I_n \otimes A)p, A+A^T =\mathbf{0}, A \in \mathbb{R}^{3 \times 3} \}$, it follows that $\eta_d^T (I_n \otimes A) p = A (\mu_{ij}e_{ij}\!+\! \mu_{ik}e_{ik}\!+\! \mu_{ih}e_{ih}\!+\! \mu_{il}e_{il})=\mathbf{0}$ and \begin{equation} \begin{array}{ll} \eta_r^T (I_n \otimes A) p\\ = w_{ik}p_i^T(A+A^T)p_i \!+\! (w_{ki}\!-\!w_{ik})p_j^T(A\!+\!A^T)p_i \\ -(w_{ik}\!+\!w_{ki})p_k^T(A\!+\!A^T)p_i + (w_{ik}\!-\!w_{ki})p_k^T(A\!+\!A^T)p_j \\ +w_{ki}p_k^T(A\!+\!A^T)p_k =0. \end{array} \end{equation} Then, the conclusion follows. \end{proof} \subsection*{3.
Local-relative-bearing-based Displacement Constraint} $g_{ij} = \frac{e_{ij}}{d_{ij}} \in \mathbb{R}^3$ is the relative bearing of $p_j$ with respect to $p_i$ in $\Sigma_g$. For the node $i$ and its neighbors $j, k, h, l$ in $\mathbb{R}^3$, the matrix $g_i=(g_{ij}, g_{ik}, g_{ih}, g_{il}) \in \mathbb{R}^{3 \times 4}$ is a wide matrix. By matrix theory, there must exist a non-zero vector $ \bar \mu_i=(\bar \mu_{ij}, \bar \mu_{ik}, \bar \mu_{ih}, \bar \mu_{il})^T \in \mathbb{R}^4$ such that $g_i\bar \mu_i= \mathbf{0}$, i.e., \begin{equation}\label{root1} \bar \mu_{ij}g_{ij}+\bar \mu_{ik}g_{ik}+\bar \mu_{ih}g_{ih} + \bar \mu_{il}g_{il}= \mathbf{0}, \end{equation} where $\bar \mu_{ij}^2+\bar \mu_{ik}^2+\bar \mu_{ih}^2+\bar \mu_{il}^2 \neq 0$. The equation $g_i\bar \mu_i= \mathbf{0}$ is a bearing constraint, from which a displacement constraint can be obtained as follows. The non-zero vector $(\bar \mu_{ij}, \bar \mu_{ik}, \bar \mu_{ih}, \bar \mu_{il})^T$ can be calculated from the local relative bearing measurements $g_{ij}^{i}, g_{ik}^{i}, g_{ih}^{i}, g_{il}^{i}$ by solving the following equation \begin{equation}\label{wmi1} \left[ \! \begin{array}{c c c c} g_{ij}^{i} & g_{ik}^{i} & g_{ih}^{i} & g_{il}^{i} \\ \end{array} \right] \left[ \! \begin{array}{c} \bar \mu_{ij} \\ \bar \mu_{ik} \\ \bar \mu_{ih} \\ \bar \mu_{il} \end{array} \right] = \mathbf{0}. \end{equation} Note that \eqref{root1} can be rewritten as \begin{equation}\label{loca1} \bar \mu_{ij}\frac{e_{ij}}{d_{ij}}+\bar \mu_{ik}\frac{e_{ik}}{d_{ik}}+\bar \mu_{ih}\frac{e_{ih}}{d_{ih}} + \bar \mu_{il}\frac{e_{il}}{d_{il}} = \mathbf{0}. \end{equation} \begin{assumption}\label{ad3} No two nodes are collocated in $\mathbb{R}^3$. Each anchor node has at least two neighboring anchor nodes, and each free node has at least four neighboring nodes. The free node and its neighbors are non-colinear.
\end{assumption} Under \text{Assumption \ref{ad3}}, without loss of generality, suppose node $l$ is not colinear with nodes $i,j,k,h$, as shown in Fig.~\ref{moti1}. The angles among the nodes $p_i, p_j, p_k, p_h, p_l$ are denoted by $\xi_{ilj} \!=\! \angle p_ip_lp_j, \xi_{ilk} \!=\! \angle p_ip_lp_k, \xi_{ilh} \!=\! \angle p_ip_lp_h, \xi_{ijl} \!=\! \angle p_ip_jp_l, \xi_{ikl} \!=\! \angle p_ip_kp_l, \xi_{ihl} \!=\! \angle p_ip_hp_l$. Note that these angles can be obtained by only using the local relative bearing measurements. For example, $\cos \xi_{ilj} = g_{li}^Tg_{lj}= {g^{l}_{li}}^TQ_l^TQ_lg_{lj}^{l} ={g_{li}^{l}}^Tg_{lj}^l$. According to the sine rule, $\frac{d_{il}}{d_{ij}}= \frac{\sin \xi_{ijl}}{\sin \xi_{ilj}}, \frac{d_{il}}{d_{ik}}= \frac{\sin \xi_{ikl}}{\sin \xi_{ilk}}, \frac{d_{ih}}{d_{il}}= \frac{\sin \xi_{ilh}}{\sin \xi_{ihl}}$. Then, based on \eqref{loca1}, we can obtain a displacement constraint by only using the local relative bearing measurements, shown as \begin{equation}\label{bead} \mu_{ij}e_{ij}+ \mu_{ik}e_{ik}+ \mu_{ih}e_{ih} + \mu_{il}e_{il}= \mathbf{0}, \end{equation} where \begin{equation} \begin{array}{ll} & \mu_{ij} = \bar \mu_{ij}\frac{\sin \xi_{ijl}}{\sin \xi_{ilj}}, \ \ \mu_{ik} = \bar \mu_{ik}\frac{\sin \xi_{ikl}}{\sin \xi_{ilk}}, \\ & \mu_{ih} = \bar \mu_{ih}\frac{\sin \xi_{ihl}}{\sin \xi_{ilh}}, \ \ \mu_{il} = \bar \mu_{il}. \end{array} \end{equation} In a local-relative-bearing-based network in $\mathbb{R}^3$ under \text{Assumption \ref{ad3}}, let $\mathcal{X}_{\mathcal{G}}= \{ ( i, j, k, h, l) \in \mathcal{V}^{5} : (i,j), (i,k), $ $ (i,h), (i,l), (j,k), (j,h), (j,l) \in \mathcal{E}, j \!<\! k \!<\! h \!<\! l\}$. Each element of $\mathcal{X}_{\mathcal{G}}$ can be used to construct a local-relative-bearing-based displacement constraint. \subsection*{4.
Distance-based Displacement Constraint} Since the displacement constraints are invariant to translations and rotations, a congruent network of the subnetwork consisting of the node and its neighbors has the same displacement constraint. Each displacement constraint can be regarded as a subnetwork, and multi-dimensional scaling can be used to obtain the displacement constraint, as shown in Algorithm \ref{disa} \cite{han2017barycentric}. \subsection*{5. Ratio-of-distance-based Displacement Constraint} For the free node $i$ and its neighbors $j,k,h,l$, under Assumption $1$, we can obtain the ratio-of-distance matrix $M_r$ \eqref{ratio} from the ratio-of-distance measurements. \begin{equation}\label{ratio} M_r = \frac{1}{d_{ij}^2}\left[ \! \begin{array}{c c c c c} 0 & d_{ij}^2 & d_{ik}^2 & d_{ih}^2 & d_{il}^2 \\ d_{ji}^2 & 0 & d_{jk}^2 & d_{jh}^2 & d_{jl}^2 \\ d_{ki}^2 & d_{kj}^2 & 0 & d_{kh}^2 & d_{kl}^2 \\ d_{hi}^2 & d_{hj}^2 & d_{hk}^2 & 0 & d_{hl}^2 \\ d_{li}^2 & d_{lj}^2 & d_{lk}^2 & d_{lh}^2 & 0 \end{array} \right]. \end{equation} Note that the displacement constraints are not only invariant to translations and rotations, but also to scalings. Hence, a network with ratio-of-distance measurements $\frac{1}{d_{ij}}\{d_{ij}, \cdots, d_{hl}, \cdots \}$ has the same displacement constraints as the network with distance measurements $\{d_{ij}, \cdots, d_{hl}, \cdots \}$; that is, the displacement constraint $\mu_{ij}e_{ij}+ \mu_{ik}e_{ik}+ \mu_{ih}e_{ih} + \mu_{il}e_{il}= \mathbf{0}$ can also be obtained by Algorithm $1$, where the distance matrix $M$ \eqref{dim} is replaced by the ratio-of-distance matrix $M_r$ \eqref{ratio}. \begin{algorithm} \caption{Distance-based displacement constraint} \label{disa} \begin{algorithmic}[1] \State Available information: Distance measurements among the nodes $p_i,p_j,p_k,p_h,p_l$. Denote $(\mathcal{\bar G}, \bar p)$ as a subnetwork with $\bar p=(p_i^T,p_j^T,p_k^T,p_h^T,p_l^T)^T$.
\State Construct a distance matrix $M \in \mathbb{R}^{5 \times 5}$ shown as \begin{equation}\label{dim} M = \left[ \! \begin{array}{c c c c c} 0 & d_{ij}^2 & d_{ik}^2 & d_{ih}^2 & d_{il}^2 \\ d_{ji}^2 & 0 & d_{jk}^2 & d_{jh}^2 & d_{jl}^2 \\ d_{ki}^2 & d_{kj}^2 & 0 & d_{kh}^2 & d_{kl}^2 \\ d_{hi}^2 & d_{hj}^2 & d_{hk}^2 & 0 & d_{hl}^2 \\ d_{li}^2 & d_{lj}^2 & d_{lk}^2 & d_{lh}^2 & 0 \end{array} \right]. \end{equation} \State Compute the centering matrix $J=I-\frac{1}{5}\mathbf{1}_5\mathbf{1}_5^T$; \State Compute the matrix $X=-\frac{1}{2}JMJ$; \State Perform singular value decomposition on $X$ as \begin{equation} X = V \Lambda V^T, \end{equation} where $V=(v_1,v_2,v_3,v_4,v_5) \in \mathbb{R}^{5 \times 5}$ is a unitary matrix, and $\Lambda = \text{diag}(\lambda_1,\lambda_2,\lambda_3, \lambda_4, \lambda_5)$ is a diagonal matrix whose diagonal elements $\lambda_1 \ge \lambda_2 \ge \lambda_3 \ge \lambda_4 \ge \lambda_5$ are singular values. Since $\text{Rank}(X) \le 3$, we have $\lambda_4 = \lambda_5=0$. Denote by $V_*=(v_1,v_2,v_3)$ and $\Lambda_*=\text{diag}(\lambda_1,\lambda_2,\lambda_3)$; \State Obtain a congruent network $(\mathcal{\bar G}, \bar q) \cong (\mathcal{\bar G}, \bar p)$ with $\bar q=(q_i^T,q_j^T,q_k^T,q_h^T,q_l^T)^T $, where $(q_i, q_j, q_k, $ $ q_h, q_l ) = \Lambda_* ^{\frac{1}{2}}V_*^T$; \State Based on the congruent network $\bar q=(q_i^T,q_j^T,q_k^T,q_h^T,q_l^T)^T$ of the subnetwork $\bar p=(p_i^T,p_j^T,p_k^T,p_h^T,p_l^T)^T$, the parameters $\mu_{ij}, \mu_{ik}, \mu_{ih}, \mu_{il}$ in $\mu_{ij}e_{ij}+\mu_{ik}e_{ik}+\mu_{ih}e_{ih} + \mu_{il}e_{il}= \mathbf{0}$ can be obtained by solving the following matrix equation \begin{equation}\label{wmi} \left[ \! \begin{array}{c c c c} q_j-q_i & q_k-q_i & q_h-q_i & q_l-q_i \\ \end{array} \right] \left[ \! \begin{array}{c} \mu_{ij} \\ \mu_{ik} \\ \mu_{ih} \\ \mu_{il} \end{array} \right] = \mathbf{0}. \end{equation} \end{algorithmic} \end{algorithm} \subsection*{6.
Angle-based Displacement Constraint} For a triangle $\bigtriangleup_{ijk}(p)$, according to the sine rule, the ratios of distance can be calculated from the angle measurements $\theta_i, \theta_j, \theta_k$ as \begin{equation}\label{sin} \frac{d_{ij}}{d_{ik}}=\frac{\sin \theta_k}{\sin \theta_j}, \frac{d_{ij}}{d_{jk}}=\frac{\sin \theta_k}{\sin \theta_i}. \end{equation} Under \text{Assumption \ref{ad3}}, the ratios of distance of all the edges among the nodes $i, j, k, h, l$ can be calculated from the angle measurements through the sine rule \eqref{sin}, i.e., the ratio-of-distance matrix $M_r$ \eqref{ratio} is available. Then, the displacement constraint $\mu_{ij}e_{ij}+ \mu_{ik}e_{ik}+ \mu_{ih}e_{ih} + \mu_{il}e_{il}= \mathbf{0}$ can be obtained by Algorithm $1$, where the distance matrix $M$ \eqref{dim} is replaced by the ratio-of-distance matrix $M_r$. In an angle-based network in $\mathbb{R}^3$ under \text{Assumption \ref{ad3}}, let $\mathcal{X}_{\mathcal{G}}=\{ ( i, j, k, h, l) \in \mathcal{V}^{5} : (i,j), (i,k), $ $ (i,h), (i,l), (j,k), (j,h), (j,l), (k,h), (k,l), (h,l) \in \mathcal{E}, j \!<\! k \!<\! h \!<\! l\}$. Each element of $\mathcal{X}_{\mathcal{G}}$ can be used to construct an angle-based displacement constraint. \subsection*{7. Relaxed Assumptions for Constructing Local-relative-position-based, Distance-based, Ratio-of-distance-based, Local-relative-bearing-based, and Angle-based Displacement Constraints in a Coplanar Network} \begin{assumption}\label{as31} No two nodes are collocated in $\mathbb{R}^3$. Each anchor node has at least two neighboring anchor nodes, and each free node has at least three neighboring nodes. \end{assumption} \begin{assumption}\label{ad32} No two nodes are collocated in $\mathbb{R}^3$. Each anchor node has at least two neighboring anchor nodes, and each free node has at least three neighboring nodes. The free node and its neighbors are non-colinear.
\end{assumption} \begin{enumerate} \item In a local-relative-position-based coplanar network in $\mathbb{R}^3$ with \text{Assumption \ref{as31}}, let $\mathcal{X}_{\mathcal{G}}=\{ ( i, j, k, h) \in \mathcal{V}^{4} : (i,j), (i,k), $ $ (i,h) \in \mathcal{E}, j \!<\! k \!<\! h \}$. Each element of $\mathcal{X}_{\mathcal{G}}$ can be used to construct a local-relative-position-based displacement constraint $\mu_{ij}e_{ij}+\mu_{ik}e_{ik}+\mu_{ih}e_{ih}= \mathbf{0}$. \item In a distance-based coplanar network in $\mathbb{R}^3$ with \text{Assumption \ref{as31}}, let $\mathcal{X}_{\mathcal{G}}=\{ ( i, j, k, h) \in \mathcal{V}^{4} : (i,j), (i,k), $ $ (i,h), (j,k), (j,h), (k,h) \in \mathcal{E}, j \!<\! k \!<\! h\}$. Each element of $\mathcal{X}_{\mathcal{G}}$ can be used to construct a distance-based displacement constraint $\mu_{ij}e_{ij}+\mu_{ik}e_{ik}+\mu_{ih}e_{ih}= \mathbf{0}$. \item In a ratio-of-distance-based coplanar network in $\mathbb{R}^3$ with \text{Assumption \ref{as31}}, let $\mathcal{X}_{\mathcal{G}}=\{ ( i, j, k, h) \in \mathcal{V}^{4} : (i,j), (i,k), $ $ (i,h), (j,k), (j,h), (k,h) \in \mathcal{E}, j \!<\! k \!<\! h\}$. Each element of $\mathcal{X}_{\mathcal{G}}$ can be used to construct a ratio-of-distance-based displacement constraint $\mu_{ij}e_{ij}+\mu_{ik}e_{ik}+\mu_{ih}e_{ih}= \mathbf{0}$. \item In a local-relative-bearing-based coplanar network in $\mathbb{R}^3$ with \text{Assumption \ref{ad32}}, let $\mathcal{X}_{\mathcal{G}}=\{ ( i, j, k, h) $ $ \in \mathcal{V}^{4} : (i,j), (i,k), (i,h), (j,k), (j,h) \in \mathcal{E}, j \!<\! k \!<\! h \}$. Each element of $\mathcal{X}_{\mathcal{G}}$ can be used to construct a local-relative-bearing-based displacement constraint $\mu_{ij}e_{ij}+\mu_{ik}e_{ik}+\mu_{ih}e_{ih}= \mathbf{0}$. 
\item In an angle-based coplanar network in $\mathbb{R}^3$ with \text{Assumption \ref{ad32}}, let $\mathcal{X}_{\mathcal{G}}=\{ ( i, j, k, h) \in \mathcal{V}^{4} : (i,j), (i,k), $ $ (i,h), (j,k), (j,h), (k,h) \in \mathcal{E}, j \!<\! k \!<\! h\}$. Each element of $\mathcal{X}_{\mathcal{G}}$ can be used to construct an angle-based displacement constraint $\mu_{ij}e_{ij}+\mu_{ik}e_{ik}+\mu_{ih}e_{ih}= \mathbf{0}$. \end{enumerate} \bibliographystyle{IEEEtran}
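As a numerical sanity check, Algorithm 1 above can be sketched with NumPy. The snippet below (our illustration, with synthetic node positions) builds the squared-distance matrix $M$, recovers a congruent network from the eigendecomposition of $X=-\frac{1}{2}JMJ$, and extracts the displacement-constraint coefficients from the null space of the matrix in equation~\eqref{wmi}:

```python
import numpy as np

# Synthetic positions of the nodes i, j, k, h, l in R^3 (illustrative data).
rng = np.random.default_rng(0)
P = rng.standard_normal((5, 3))

# Squared-distance matrix M (zero diagonal), as in the first step.
G = P @ P.T
sq = np.diag(G)
M = sq[:, None] + sq[None, :] - 2.0 * G

# Centering matrix J = I - (1/5) 1 1^T and double-centered X = -J M J / 2.
J = np.eye(5) - np.ones((5, 5)) / 5.0
X = -0.5 * J @ M @ J

# X is symmetric with rank <= 3; keep its three leading eigenpairs.
lam, V = np.linalg.eigh(X)                  # ascending eigenvalue order
order = np.argsort(lam)[::-1][:3]
Q = (V[:, order] * np.sqrt(np.maximum(lam[order], 0.0))).T   # 3 x 5, Lambda_*^(1/2) V_*^T

# The congruent network reproduces all pairwise distances.
diff = Q.T[:, None, :] - Q.T[None, :, :]
assert np.allclose((diff ** 2).sum(-1), M, atol=1e-8)

# Coefficients mu: a null vector of [q_j-q_i, q_k-q_i, q_h-q_i, q_l-q_i].
A = Q[:, 1:] - Q[:, [0]]                    # 3 x 4 matrix of the last step
mu = np.linalg.svd(A)[2][-1]                # right-singular vector of the zero singular value
assert np.allclose(A @ mu, 0.0, atol=1e-8)
```

Since the 3$\times$4 matrix has rank at most three, a nontrivial null vector always exists; the last right-singular vector returned by the SVD provides it.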
\section{Introduction} \begin{dfnz} A {\em Ricci Soliton} is a smooth $n$--dimensional complete Riemannian manifold $(M,g)$ for which there exists a smooth 1--form $\omega$ such that \begin{equation}\label{soliton} 2{\mathrm R}_{ij}+\nabla_i\omega_j+\nabla_j\omega_i=\frac{2\mu}{n}g_{ij} \end{equation} for some constant $\mu\in{{\mathbb R}}$. A {\em gradient Ricci Soliton} is a smooth $n$--dimensional complete Riemannian manifold $(M,g)$ for which there exists a smooth function $f:M\to{{\mathbb R}}$, sometimes called the {\em potential function}, satisfying \begin{equation}\label{gsoliton} {\mathrm R}_{ij}+\nabla^2_{ij}f=\frac{\mu}{n}g_{ij} \end{equation} for some constant $\mu\in{{\mathbb R}}$. Sometimes in the literature these manifolds are called {\em quasi--Einstein} manifolds. A soliton is said to be {\em contracting}, {\em steady} or {\em expanding} if the constant $\mu\in{{\mathbb R}}$ is respectively positive, zero or negative. We say that a soliton is {\em trivial} if the form $\omega$ can be chosen to be zero, or the function $f$ to be constant.\\ This is the same as saying that $(M,g)$ is an Einstein manifold (in dimension three, or whenever the Weyl tensor is zero, this is equivalent to constant curvature). \end{dfnz} \begin{rem} Clearly, a Ricci soliton is a gradient soliton if the form $\omega$ above is exact. \end{rem} Ricci solitons move under the Ricci flow simply by diffeomorphisms and homotheties of the initial metric; in other words, they are stationary points of the Ricci flow in the space of metrics on $M$ modulo diffeomorphism and scaling.\\ Moreover, their importance is due to the fact that they arise as blow--up limits of the Ricci flow when singularities develop.\\ In this note we analyze these manifolds setting aside the Ricci flow and the related {\em dynamical} techniques, looking only at the defining elliptic equations~\eqref{soliton} and~\eqref{gsoliton}. 
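Although this note focuses on the compact case, it may help to keep a concrete example in mind. The following standard (noncompact) example is our addition, not part of the original discussion:

```latex
On $M=\mathbb{R}^n$ with the flat metric $g_{ij}=\delta_{ij}$, take
$f(x)=\tfrac{\mu}{2n}\,\vert x\vert^2$. Then ${\mathrm R}_{ij}=0$ and
$\nabla^2_{ij}f=\tfrac{\mu}{n}\,\delta_{ij}$, hence
\[
{\mathrm R}_{ij}+\nabla^2_{ij}f=\frac{\mu}{n}\,g_{ij}\,,
\]
so equation~\eqref{gsoliton} is satisfied for every $\mu\in\mathbb{R}$:
this is the {\em Gaussian soliton}, contracting for $\mu>0$, steady for
$\mu=0$ and expanding for $\mu<0$. It is complete but noncompact, so the
triviality results for compact solitons below do not apply to it.
```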
\smallskip We suggest to the interested reader the preprint of Derdzinski~\cite{derdz2} and the survey of Cao~\cite{cao3} as very comprehensive reviews of the present literature, open problems and recent developments. \smallskip During the publication of this paper, several preprints appeared extending the results (or presenting different proofs) to the complete noncompact case~\cite{caowang,fmanzan,na1,nw1,nw2,pw}. \begin{ackn} We wish to thank Xiaodong Cao for pointing out some inaccuracies in earlier versions of the paper. \end{ackn} \section{General Computations} We recall some well--known facts from Riemannian geometry: \begin{itemize} \item Schur's Lemma, when $n>2$, $$ 2{\mathrm {div}}{\mathrm {Ric}}=d{\mathrm R}\,. $$ \item The fact that ${\mathrm R}_{ij}=g_{ij}{\mathrm R}/2$ when $n=2$. \item The formula for the interchange of covariant derivatives of a form, $$ \nabla^2_{ij}\omega_k-\nabla^2_{ji}\omega_k= {\mathrm R}_{ijks}\omega^s\,. $$ \item The decomposition of the Riemann tensor, $$ {\mathrm R}_{ijkl}=-\frac{{\mathrm R}}{(n-1)(n-2)}(g_{ik}g_{jl}-g_{il}g_{jk}) + \frac{1}{n-2}({\mathrm R}_{ik}g_{jl}-{\mathrm R}_{il}g_{jk} +{\mathrm R}_{jl}g_{ik}-{\mathrm R}_{jk}g_{il})+{\mathcal{W}}_{ijkl}\,. $$ \item The fact that the Weyl tensor ${\mathcal{W}}$ is zero when $n\leq3$. \end{itemize} Now we work out some consequences of equation~\eqref{gsoliton}. \begin{prop} Let $(M,g)$ be a gradient Ricci soliton; then the following formulas hold: \begin{equation}\label{equ1} {\mathrm R}+\Delta f=\mu \end{equation} \begin{equation}\label{equ2} \nabla_i {\mathrm R}=2{\mathrm R}_{ij}\nabla^jf \end{equation} \begin{equation}\label{commut1} \nabla_j {\mathrm R}_{ik}-\nabla_i{\mathrm R}_{jk}={\mathrm R}_{ijks}\nabla^sf\,. 
\end{equation} \begin{equation}\label{equ8} {\mathrm R}+\vert\nabla f\vert^2-\frac{2\mu}{n}f=\,\text{constant} \end{equation} \begin{equation}\label{equ4} \Delta{\mathrm R}=\langle\nabla{\mathrm R}\,\vert\,\nabla f\rangle+\frac{2\mu}{n}{\mathrm R}-2\vert{\mathrm {Ric}}\vert^2 \end{equation} \begin{equation}\label{equ3} \Delta{\mathrm R}_{ij}=\langle\nabla{\mathrm R}_{ij}\,\vert\,\nabla f\rangle+\frac{2\mu}{n}{\mathrm R}_{ij}-2{\mathrm R}_{ikjs}{\mathrm R}^{ks} \end{equation} \begin{align} \Delta{\mathrm R}_{ik} =&\,\langle\nabla{\mathrm R}_{ik}\,\vert\,\nabla f\rangle+\frac{2\mu}{n}{\mathrm R}_{ik}-2{\mathcal{W}}_{ijkl}{\mathrm R}^{jl}\label{equ6}\\ &\,+\frac{2}{(n-1)(n-2)} \bigl({\mathrm R}^2 g_{ik}-n{\mathrm R}\RRR_{ik} +2(n-1){\mathrm S}_{ik}-(n-1){\mathrm S} g_{ik}\bigr)\,.\nonumber \end{align} where ${\mathrm S}_{ik}={\mathrm R}_{ij}g^{jl}{\mathrm R}_{lk}$ and ${\mathrm S}=\operatornamewithlimits{tr}\nolimits{\mathrm S}=\vert{\mathrm {Ric}}\vert^2$. \end{prop} \begin{proof} Equation~\eqref{equ1}: we simply contract equation~\eqref{gsoliton}. \smallskip \noindent Equation~\eqref{equ2}: we take the divergence of the Ricci tensor, by using equation~\eqref{gsoliton}, \begin{align*} \operatornamewithlimits{div}\nolimits{\mathrm{Ric}}_i=&\,g^{jk}\nabla_k{\mathrm R}_{ij}\\ =&\,-g^{jk}\nabla_k\nabla_i\nabla_jf\\ =&\,-g^{jk}\nabla_i\nabla_k\nabla_jf-g^{jk}{\mathrm R}_{kijs}\nabla^sf\\ =&\,-\nabla_i\Delta f-{\mathrm R}_{is}\nabla^sf\,, \end{align*} where we used the formula for the interchange of covariant derivatives. Using then equation~\eqref{equ1} and Schur's Lemma we get \begin{equation*} \frac{1}{2}\nabla_i{\mathrm R}=-\nabla_i(\mu-{\mathrm R}) -{\mathrm R}_{is}\nabla^sf= \nabla_i {\mathrm R}-{\mathrm R}_{is}\nabla^sf\,, \end{equation*} hence, equation~\eqref{equ2} follows. \smallskip \noindent Equation~\eqref{commut1}: it follows by a computation analogous to the previous one. 
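For the reader's convenience, that ``analogous computation'' can be written out explicitly; the following line is our reconstruction from equation~\eqref{gsoliton} and the interchange formula, applied to the exact form $\omega=df$:

```latex
\nabla_j{\mathrm R}_{ik}-\nabla_i{\mathrm R}_{jk}
 =-\nabla_j\nabla_i\nabla_kf+\nabla_i\nabla_j\nabla_kf
 =\nabla^2_{ij}\omega_k-\nabla^2_{ji}\omega_k
 ={\mathrm R}_{ijks}\nabla^sf\,,
```

where $\omega_k=\nabla_kf$, so the interchange of covariant derivatives produces exactly the curvature term in equation~\eqref{commut1}.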
\smallskip \noindent Relation~\eqref{equ8}: it follows by simply differentiating the quantity on the left--hand side and using equations~\eqref{equ2} and~\eqref{gsoliton}. \smallskip \noindent Equation~\eqref{equ4}: once equation~\eqref{equ3} is obtained, it follows by contracting it with the metric $g$. \smallskip \noindent Equation~\eqref{equ3}: we compute the Laplacian of the Ricci tensor by means of equation~\eqref{commut1} and the second Bianchi identity, \begin{align*} \Delta{\mathrm R}_{ik}=\nabla^j\nabla_j{\mathrm R}_{ik} =&\,\nabla^j\nabla_i{\mathrm R}_{jk}+\nabla^j{\mathrm R}_{ijks}\nabla^sf+{\mathrm R}_{ijks}\nabla^{j}\nabla^sf\\ =&\,\nabla_i\nabla^j{\mathrm R}_{jk}+{\mathrm R}^j_{\phantom{j}ijs}{\mathrm R}^s_k+{\mathrm R}^j_{\phantom{j}iks}{\mathrm R}^s_j\\ &\,+\nabla_k{\mathrm R}^j_{\phantom{j}sij}\nabla^sf-\nabla_s{\mathrm R}^j_{\phantom{j}kij}\nabla^sf +{\mathrm R}_{ijks}\nabla^{j}\nabla^sf\\ =&\,\nabla_i\nabla^j{\mathrm R}_{jk}+{\mathrm R}_{is}{\mathrm R}^s_k+{\mathrm R}^j_{\phantom{j}iks}{\mathrm R}^s_j\\ &\,-\nabla_k{\mathrm R}_{si}\nabla^sf+\nabla_s{\mathrm R}_{ki}\nabla^sf +{\mathrm R}_{ijks}\nabla^{j}\nabla^{s}f\\ =&\,\frac{1}{2}\nabla_i\nabla_k{\mathrm R}+{\mathrm R}_{is}{\mathrm R}^s_k+{\mathrm R}^j_{\phantom{j}iks}{\mathrm R}^s_j\\ &\,-\nabla_k{\mathrm R}_{si}\nabla^sf+\langle\nabla{\mathrm R}_{ik}\,\vert\,\nabla f\rangle -{\mathrm R}_{ijks}{\mathrm R}^{js}+\frac{\mu}{n}{\mathrm R}_{ik}\,,\\ =&\,\langle\nabla{\mathrm R}_{ik}\,\vert\,\nabla f\rangle+\frac{\mu}{n}{\mathrm R}_{ik}-2{\mathrm R}_{ijks}{\mathrm R}^{js} +{\mathrm R}_{is}{\mathrm R}^s_k\\ &\,+\frac{1}{2}\nabla_k\nabla_i{\mathrm R}-\nabla_k{\mathrm R}_{is}\nabla^sf\,,\\ \end{align*} where we also used Schur's Lemma, substituted $\nabla^j\nabla^sf$ by means of equation~\eqref{gsoliton} and rearranged some terms in the last line.\\ Differentiating relation~\eqref{equ2} we obtain $$ 0=\frac{1}{2}\nabla_k\left(\nabla_i{\mathrm R}-2{\mathrm R}_{is}\nabla^sf\right) $$ hence, $$ 
\frac{1}{2}\nabla_k\nabla_i{\mathrm R}-\nabla_k{\mathrm R}_{is}\nabla^sf={\mathrm R}_{is}\nabla^s\nabla_kf\,. $$ Then, we conclude \begin{align*} \Delta{\mathrm R}_{ik} =&\,\langle\nabla{\mathrm R}_{ik}\,\vert\,\nabla f\rangle+\frac{\mu}{n}{\mathrm R}_{ik}-2{\mathrm R}_{ijks}{\mathrm R}^{js} +{\mathrm R}_{is}{\mathrm R}^s_k+{\mathrm R}_{is}\nabla^s\nabla_kf\\ =&\,\langle\nabla{\mathrm R}_{ik}\,\vert\,\nabla f\rangle+\frac{\mu}{n}{\mathrm R}_{ik}-2{\mathrm R}_{ijks}{\mathrm R}^{js} +{\mathrm R}_{is}{\mathrm R}^s_k+\frac{\mu}{n}{\mathrm R}_{ik}-{\mathrm R}_{is}{\mathrm R}^s_k\\ =&\,\langle\nabla{\mathrm R}_{ik}\,\vert\,\nabla f\rangle+\frac{2\mu}{n}{\mathrm R}_{ik}-2{\mathrm R}_{ijks}{\mathrm R}^{js}\,. \end{align*} \smallskip \noindent Equation~\eqref{equ6}: by using the decomposition of the Riemann tensor we can go on with the computation in formula~\eqref{equ3}, \begin{align*} \Delta{\mathrm R}_{ik} =&\,\langle\nabla{\mathrm R}_{ik}\,\vert\,\nabla f\rangle+\frac{2\mu}{n}{\mathrm R}_{ik} -2{\mathcal{W}}_{ijkl}{\mathrm R}^{jl}\\ &\,+\frac{2{\mathrm R}}{(n-1)(n-2)}(g_{ik}g_{jl}-g_{il}g_{jk}){\mathrm R}^{jl}\\ &\,-\frac{2}{n-2}({\mathrm R}_{ik}g_{jl}-{\mathrm R}_{il}g_{jk}+{\mathrm R}_{jl}g_{ik} -{\mathrm R}_{jk}g_{il}){\mathrm R}^{jl}\\ =&\,\langle\nabla{\mathrm R}_{ik}\,\vert\,\nabla f\rangle+\frac{2\mu}{n}{\mathrm R}_{ik} -2{\mathcal{W}}_{ijkl}{\mathrm R}^{jl}\\ &\,+\frac{2{\mathrm R}}{(n-1)(n-2)}({\mathrm R} g_{ik}-{\mathrm R}_{ik}) -\frac{2}{n-2}({\mathrm R}\RRR_{ik}-2{\mathrm S}_{ik}+{\mathrm S} g_{ik})\\ =&\,\langle\nabla{\mathrm R}_{ik}\,\vert\,\nabla f\rangle+\frac{2\mu}{n}{\mathrm R}_{ik} -2{\mathcal{W}}_{ijkl}{\mathrm R}^{jl}\\ &\,+\frac{2}{(n-1)(n-2)} \bigl({\mathrm R}^2 g_{ik}-n{\mathrm R}\RRR_{ik} +2(n-1){\mathrm S}_{ik}-(n-1){\mathrm S} g_{ik}\bigr)\,. 
\end{align*} \end{proof} \subsection{The Case $n=2$} We can use the complete description of the curvature tensor via the scalar curvature ${\mathrm R}$, that is, ${\mathrm R}_{ij}={\mathrm R} g_{ij}/2$.\\ \begin{prop} If $n=2$, the following hold: \begin{align*} \nabla^2_{ij} f=&\,\frac{\mu-{\mathrm R}}{2}g_{ij}\\ \nabla{\mathrm R}=&\,{\mathrm R}\nabla f\\ \Delta{\mathrm R}=&\,\langle\nabla{\mathrm R}\,\vert\,\nabla f\rangle+\mu{\mathrm R}-{\mathrm R}^2 \end{align*} \end{prop} \subsection{The Case $n=3$} Using equation~\eqref{equ6}, as the Weyl tensor ${\mathcal{W}}$ is identically zero for every 3--manifold, we get \begin{prop} If $n=3$, the following holds: \begin{equation*} \Delta{\mathrm R}_{ik} = \langle\nabla{\mathrm R}_{ik}\,\vert\,\nabla f\rangle+\frac{2\mu}{3}{\mathrm R}_{ik}+{\mathrm R}^2 g_{ik}-3{\mathrm R}\RRR_{ik} +4{\mathrm S}_{ik}-2{\mathrm S} g_{ik} \end{equation*} with ${\mathrm S}_{ik}={\mathrm R}_{ij}g^{jl}{\mathrm R}_{lk}$ and ${\mathrm S}=\vert{\mathrm {Ric}}\vert^2$. \end{prop} This proposition clearly generalizes to dimension $n>3$ when ${\mathcal{W}}=0$, \begin{align} \Delta{\mathrm R}_{ik} =&\,\langle\nabla{\mathrm R}_{ik}\,\vert\,\nabla f\rangle+\frac{2\mu}{n}{\mathrm R}_{ik}\label{equ5}\\ &\,+\frac{2}{(n-1)(n-2)} \bigl({\mathrm R}^2 g_{ik}-n{\mathrm R}\RRR_{ik} +2(n-1){\mathrm S}_{ik}-(n-1){\mathrm S} g_{ik}\bigr)\,.\nonumber \end{align} \section{Compact Ricci Solitons} By the work of Perelman~\cite{perel1}, building on previous results of Hamilton~\cite{hamilton5} (dimension two) and Ivey~\cite{ivey1} (dimension three), we have the following fact. \begin{teo}[Perelman]\label{p4} Every compact Ricci soliton is a gradient Ricci soliton. \end{teo} \begin{proof} Let $(M,g)$ be a Ricci Soliton, with a potential form $\omega$. 
We start with the following computation for a generic smooth function $f:M\to{{\mathbb R}}$, \begin{align*} g^{kj}\nabla_k&\,[2({\mathrm R}_{ij}+\nabla^2_{ij}f-\mu g_{ij}/n)e^{-f}]\\ =&\,(\nabla_i{\mathrm R}+2\Delta\nabla_if)e^{-f} -2[({\mathrm R}_{ij}+\nabla^2_{ij}f-\mu g_{ij}/n)g^{jk}\nabla_k f]e^{-f}\\ =&\,(\nabla_i{\mathrm R}+2\nabla_i\Delta f +2{\mathrm R}_{is}\nabla^s f)e^{-f} -2[({\mathrm R}_{ij}+\nabla^2_{ij}f-\mu g_{ij}/n)g^{jk}\nabla_k f]e^{-f}\\ =&\,(\nabla_i{\mathrm R}+2\nabla_i\Delta f-2g^{jk}\nabla^2_{ij}f\nabla_k f +2\mu/n\nabla_i f) e^{-f}\\ =&\,\nabla_i({\mathrm R}+2\Delta f-\vert\nabla f\vert^2+2\mu f/n) e^{-f}\,. \end{align*} Hence, supposing we can find a smooth function $f:M\to{{\mathbb R}}$ such that \begin{equation}\label{equ100} {\mathrm R}+2\Delta f-\vert\nabla f\vert^2+2\mu f/n \end{equation} is constant, we have $$ \operatornamewithlimits{div}\nolimits[(\mathrm{Ric}+\nabla^2 f-\mu g/n)e^{-f}]=0\,. $$ Then, as $\nabla_l\omega_k+\nabla_k\omega_l=-2{\mathrm R}_{lk}+2\mu g_{lk}/n$, \begin{align*} \operatornamewithlimits{div}\nolimits[(\nabla_k&\, f-\omega_k)g^{kj}({\mathrm R}_{ij}+\nabla^2_{ij}f-\mu g_{ij}/n)e^{-f}]\\ =&\,(\nabla^2_{lk}f-\nabla_l\omega_k)g^{kj}g^{li} ({\mathrm R}_{ij}+\nabla^2_{ij}f-\mu g_{ij}/n)e^{-f}\\ =&\,(2\nabla^2_{lk}f-\nabla_l\omega_k-\nabla_k\omega_l)g^{kj}g^{li} ({\mathrm R}_{ij}+\nabla^2_{ij}f-\mu g_{ij}/n)e^{-f}/2\\ =&\,(2\nabla^2_{lk}f+2{\mathrm R}_{lk}-2\mu g_{lk}/n)g^{kj}g^{li} ({\mathrm R}_{ij}+\nabla^2_{ij}f-\mu g_{ij}/n)e^{-f}/2\\ =&\,\vert{\mathrm R}_{ij}+\nabla^2_{ij}f-\mu g_{ij}/n\vert^2 e^{-f}\,, \end{align*} where, passing from the second to the third line, we substituted $\nabla_l\omega_k$ with $(\nabla_l\omega_k+\nabla_k\omega_l)/2$ since the skew--symmetric component of $\nabla\omega$ vanishes once we contract it with the symmetric 2--form ${\mathrm R}_{ij}+\nabla^2_{ij}f-\mu g_{ij}/n$.\\ Hence, we conclude that $$ 0\leq Q=\vert\mathrm{Ric}+\nabla^2 f-\mu g/n\vert^2 
e^{-f}=\operatornamewithlimits{div}\nolimits\mathrm{T} $$ for some 1--form $\mathrm{T}$.\\ Integrating $Q$ on $M$, we immediately get that $Q=0$, since it is nonnegative.\\ This clearly implies that $(M,g)$ is a gradient Ricci soliton with a potential $f$. The existence of a function $f$ such that relation~\eqref{equ100} holds can be proven by constrained minimization of Perelman's ${\mathcal{W}}$ functional (defined in~\cite{perel1}). A logarithmic Sobolev estimate is needed, see Appendix~\ref{appA}. \end{proof} \begin{prob} Is it possible to prove Theorem~\ref{p4} without using arguments related to Ricci flow, that is, showing directly that the form $\omega$ in equation~\eqref{soliton} is exact? \end{prob} \begin{rem} In the noncompact case, there exist {\em non--gradient} Ricci solitons, see Baird and Danielo~\cite{bairdan}, Lott~\cite{lott}. \end{rem} We can then concentrate on compact {\em gradient} Ricci solitons.\\ The key tool will be the maximum principle for elliptic equations. We start from equation~\eqref{equ4}, \begin{equation*} \Delta{\mathrm R}=\langle\nabla{\mathrm R}\,\vert\,\nabla f\rangle+\frac{2\mu}{n}{\mathrm R}-2\vert{\mathrm {Ric}}\vert^2\, \end{equation*} noticing that $\mu=\frac{1}{{\mathrm {Vol}}(M)}\int_M{\mathrm R}\geq{\mathrm R}_{\min}$, with equality if and only if ${\mathrm R}$ is constant.\\ We have, \begin{align*} \Delta{\mathrm R}=&\,\langle\nabla{\mathrm R}\,\vert\,\nabla f\rangle+\frac{2\mu}{n}{\mathrm R} -2\vert{{\overset{\circ}{\mathrm {Ric}}}}\vert^2-2{\mathrm R}^2/n\\ \leq&\,\langle\nabla{\mathrm R}\,\vert\,\nabla f\rangle+\frac{2{\mathrm R}}{n}(\mu-{\mathrm R}) \end{align*} where we denote by ${\overset{\circ}{\mathrm {Ric}}}$ the {\em trace--free} part of the Ricci tensor.\\ At a minimum point of ${\mathrm R}$, \begin{equation*} \Delta{\mathrm R}_{\min}\leq\frac{2{\mathrm R}_{\min}}{n}(\mu-{\mathrm R}_{\min})\,. 
\end{equation*} This relation, by the strong maximum principle, implies that if ${\mathrm R}$ is nonconstant, then it must be positive everywhere, hence also $\mu$ is positive. \begin{prop} Every steady or expanding compact Ricci soliton has constant scalar curvature ${\mathrm R}$ (equal to the constant $\mu$). \end{prop} Coming back to equation~\eqref{equ1}, this forces $\Delta f=0$, hence, since we are on a compact manifold, $f$ is constant and the soliton is trivial. \begin{cor}\label{p1} Every steady or expanding compact Ricci soliton is trivial. \end{cor} Now we deal with contracting compact gradient Ricci solitons.\\ The two--dimensional case is special: we saw that ${\mathrm R}>0$, hence topologically we are dealing with ${{\mathbb S}}^2$ or its ${{\mathbb Z}}_2$--quotient ${\mathbb{RP}^2}$.\\ The relevant equation is $$ \nabla^2_{ij} f=\frac{\mu-{\mathrm R}}{2}g_{ij}\,, $$ indeed, differently from the three--dimensional case, the second Bianchi identity (hence Schur's Lemma) is a void condition, making this case more difficult. The following result was first proved by Hamilton~\cite{hamilton5} with an argument using the uniformization theorem, which can be greatly simplified by means of the Kazdan--Warner identity (which relies on uniformization), see~\cite[p.~131]{chknopf} and~\cite{choyau}. Recently, Chen, Lu and Tian found a simple proof independent of the uniformization of surfaces~\cite{chenlutian}. \begin{prop}\label{p3} Every contracting, compact, two--dimensional Ricci soliton is ${\mathbb S}^2$ or ${\mathbb{RP}^2}$ with the standard metric. \end{prop} We now assume that we are in dimension three, or in higher dimension with vanishing Weyl tensor. 
As we said, the scalar curvature ${\mathrm R}$ must be positive everywhere, then by means of equation~\eqref{equ5} we have, \begin{align*} \Delta\left(\frac{{\mathrm R}_{ik}}{{\mathrm R}}\right) =&\,\frac{\Delta{\mathrm R}_{ik}}{{\mathrm R}}-\frac{{\mathrm R}_{ik}\Delta{\mathrm R}}{{\mathrm R}^2} +2\frac{\vert\nabla{\mathrm R}\vert^2{\mathrm R}_{ik}}{{\mathrm R}^3} -2\frac{\langle\nabla{\mathrm R}_{ik}\,\vert\,\nabla{\mathrm R}\rangle}{{\mathrm R}^2}\,, \end{align*} substituting the equations for $\Delta{\mathrm R}_{ik}$ and $\Delta{\mathrm R}$ we get \begin{align*} \Delta\left(\frac{{\mathrm R}_{ik}}{{\mathrm R}}\right) =&\,\frac{\Delta{\mathrm R}_{ik}}{{\mathrm R}}-\frac{{\mathrm R}_{ik}\Delta{\mathrm R}}{{\mathrm R}^2} +2\frac{\vert\nabla{\mathrm R}\vert^2{\mathrm R}_{ik}}{{\mathrm R}^3} -2\frac{\langle\nabla{\mathrm R}_{ik}\,\vert\,\nabla{\mathrm R}\rangle}{{\mathrm R}^2}\\ =&\,\frac{\langle\nabla{\mathrm R}_{ik}\,\vert\,\nabla f\rangle}{{\mathrm R}}+\frac{2\mu}{n}\frac{{\mathrm R}_{ik}}{{\mathrm R}}\\ &\,+\frac{2\bigl({\mathrm R}^2 g_{ik}-n{\mathrm R}\RRR_{ik} +2(n-1){\mathrm S}_{ik}-(n-1){\mathrm S} g_{ik}\bigr)}{(n-1)(n-2){\mathrm R}}\\ &\,-\frac{{\mathrm R}_{ik}\Delta{\mathrm R}}{{\mathrm R}^2} +2\frac{\vert\nabla{\mathrm R}\vert^2{\mathrm R}_{ik}}{{\mathrm R}^3} -2\frac{\langle\nabla{\mathrm R}_{ik}\,\vert\,\nabla{\mathrm R}\rangle}{{\mathrm R}^2}\\ =&\,\Bigl\langle\frac{\nabla{{\mathrm R}_{ik}}}{{\mathrm R}} \,\Bigr\vert\,\nabla f -2\frac{\nabla{\mathrm R}}{{\mathrm R}}\Bigr\rangle +\frac{2\mu}{n}\frac{{\mathrm R}_{ik}}{{\mathrm R}}\\ &\,+\frac{2\bigl({\mathrm R}^2 g_{ik}-n{\mathrm R}\RRR_{ik} +2(n-1){\mathrm S}_{ik}-(n-1){\mathrm S} g_{ik}\bigr)}{(n-1)(n-2){\mathrm R}}\\ &\,-\frac{{\mathrm R}_{ik}\langle\nabla{\mathrm R}\,\vert\,\nabla f\rangle+\frac{2\mu}{n}{\mathrm R}\RRR_{ik}-2\vert{\mathrm {Ric}}\vert^2{\mathrm R}_{ik}}{{\mathrm R}^2} +2\frac{\vert\nabla{\mathrm R}\vert^2{\mathrm R}_{ik}}{{\mathrm R}^3}\\ =&\,\Bigl\langle\nabla\Bigl(\frac{{\mathrm 
R}_{ik}}{{\mathrm R}}\Bigr) \,\Bigr\vert\,\nabla f -2\frac{\nabla{\mathrm R}}{{\mathrm R}}\Bigr\rangle\\ &\,+\frac{2}{(n-1)(n-2)}\left({\mathrm R} g_{ik}-n{\mathrm R}_{ik} +\frac{2(n-1){\mathrm S}_{ik}}{{\mathrm R}}-\frac{(n-1){\mathrm S} g_{ik}}{{\mathrm R}}\right) +\frac{2{\mathrm S}{\mathrm R}_{ik}}{{\mathrm R}^2}\\ =&\,\Bigl\langle\nabla\Bigl(\frac{{\mathrm R}_{ik}}{{\mathrm R}}\Bigr) \,\Bigr\vert\,\nabla f -2\frac{\nabla{\mathrm R}}{{\mathrm R}}\Bigr\rangle\\ &\,+\frac{2\left({\mathrm R}^3g_{ik} -n{\mathrm R}^2{\mathrm R}_{ik} +2(n-1){\mathrm R}{\mathrm S}_{ik} -(n-1){\mathrm S}{\mathrm R} g_{ik}+(n-1)(n-2){\mathrm S}{\mathrm R}_{ik}\right)}{(n-1)(n-2){\mathrm R}^2}\,. \end{align*} Let now $\lambda_{\min}:M\to{{\mathbb R}}$ be the minimal eigenvalue of the Ricci tensor. If $p\in M$ is the point where $\lambda_{\min}/{\mathrm R}$ gets its minimum with eigenvector $v_p$, we consider a local unit smooth tangent vector field $w=w^i$ such that $w(p)=v_p$, $\nabla w^i(p)=\Delta w^i(p)=0$. Then the smooth function ${\mathrm R}_{ij}w^iw^j/{\mathrm R}$ has a local minimum at $p$, $({\mathrm R}_{ij}w^iw^j/{\mathrm R})(p)=\lambda_{\min}(p)/{\mathrm R}(p)$, $\nabla({\mathrm R}_{ij}w^iw^j/{\mathrm R})(p)=0$ and $\Delta({\mathrm R}_{ij}w^iw^j/{\mathrm R})(p)\geq 0$.\\ By the assumptions on the derivatives of $w$ at $p$ we have $\nabla({\mathrm R}_{ij}/{\mathrm R})(p)v_p^iv_p^j=0$ and $\Delta({\mathrm R}_{ij}/{\mathrm R})(p)v_p^iv_p^j\geq 0$, hence, using the previous equation, $$ 0\leq \Delta({\mathrm R}_{ij}/{\mathrm R})(p)v_p^iv_p^j=\frac{2\left({\mathrm R}^3-n\lambda_{\min}{\mathrm R}^2+2(n-1)\lambda_{\min}^2{\mathrm R}-(n-1){\mathrm S}{\mathrm R} +(n-1)(n-2)\lambda_{\min}{\mathrm S}\right)}{{\mathrm R}^2}\,, $$ where the right--hand side is evaluated at $p\in M$.\\ This implies \begin{equation}\label{equ9} 0\leq {\mathrm R}^3-n\lambda_{\min}{\mathrm R}^2+2(n-1)\lambda_{\min}^2{\mathrm R}-(n-1){\mathrm S}{\mathrm R} +(n-1)(n-2)\lambda_{\min}{\mathrm S}\,. 
\end{equation} We work on this term, setting $\widetilde{{\mathrm R}}$ and $\widetilde{{\mathrm S}}$ to be, respectively, the sum and the sum of the squares of all the eigenvalues of the Ricci tensor except $\lambda_{\min}$. \begin{align*} 0\leq&\, {\mathrm R}^3-n\lambda_{\min}{\mathrm R}^2+2(n-1)\lambda_{\min}^2{\mathrm R}-(n-1){\mathrm S}{\mathrm R} +(n-1)(n-2)\lambda_{\min}{\mathrm S}\\ =&\,(\lambda_{\min}+\widetilde{{\mathrm R}})^3 -n\lambda_{\min}(\lambda_{\min}+\widetilde{{\mathrm R}})^2 +2(n-1)\lambda_{\min}^2(\lambda_{\min}+\widetilde{{\mathrm R}})\\ &\,-(n-1)(\lambda_{\min}^2+\widetilde{{\mathrm S}})(\lambda_{\min}+\widetilde{{\mathrm R}}) +(n-1)(n-2)\lambda_{\min}(\lambda_{\min}^2+\widetilde{{\mathrm S}})\\ =&\,\lambda_{\min}^3\bigl(1-n+2(n-1)-(n-1)+(n-1)(n-2)\bigr)\\ &\,+\lambda_{\min}^2\bigl(3\widetilde{{\mathrm R}}-2n\widetilde{{\mathrm R}} +2(n-1)\widetilde{{\mathrm R}}-(n-1)\widetilde{{\mathrm R}}\bigr)\\ &\,+\lambda_{\min}\bigl(3\widetilde{{\mathrm R}}^2-n\widetilde{{\mathrm R}}^2 -(n-1)\widetilde{{\mathrm S}}+(n-1)(n-2)\widetilde{{\mathrm S}}\bigr)\\ &\,+\bigl(\widetilde{{\mathrm R}}^3-(n-1)\widetilde{{\mathrm R}}\widetilde{{\mathrm S}}\bigr)\\ =&\,(n-1)(n-2)\lambda_{\min}^3-(n-2)\lambda_{\min}^2\widetilde{{\mathrm R}}\\ &\,+(n-3)\lambda_{\min}\bigl((n-1)\widetilde{{\mathrm S}}-\widetilde{{\mathrm R}}^2\bigr) -\widetilde{{\mathrm R}}\bigl((n-1)\widetilde{{\mathrm S}}-\widetilde{{\mathrm R}}^2\bigr)\\ =&\,(n-2)\lambda_{\min}^2\bigl(\lambda_{\min}(n-1)-\widetilde{{\mathrm R}}\bigr) +\bigl((n-3)\lambda_{\min}-\widetilde{{\mathrm R}}\bigr) \bigl((n-1)\widetilde{{\mathrm S}}-\widetilde{{\mathrm R}}^2\bigr)\,. \end{align*} Now, as ${\mathrm R}$ is positive, both terms $\bigl(\lambda_{\min}(n-1)-\widetilde{{\mathrm R}}\bigr)$ and $\bigl((n-3)\lambda_{\min}-\widetilde{{\mathrm R}}\bigr)$ are nonpositive. 
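The algebraic identity just derived can be verified symbolically. The following SymPy sketch (our addition; the symbol names mirror $\lambda_{\min}$, $\widetilde{{\mathrm R}}$, $\widetilde{{\mathrm S}}$) checks that the first and last lines of the chain agree as polynomials:

```python
import sympy as sp

n, lam, Rt, St = sp.symbols('n lambda_min R_tilde S_tilde')
R = lam + Rt        # scalar curvature: lambda_min plus the sum of the other eigenvalues
S = lam**2 + St     # |Ric|^2: lambda_min^2 plus the sum of the other squared eigenvalues

lhs = (R**3 - n*lam*R**2 + 2*(n - 1)*lam**2*R
       - (n - 1)*S*R + (n - 1)*(n - 2)*lam*S)
rhs = ((n - 2)*lam**2*((n - 1)*lam - Rt)
       + ((n - 3)*lam - Rt)*((n - 1)*St - Rt**2))

# The two sides agree identically as polynomials in n, lambda_min, R_tilde, S_tilde.
assert sp.expand(lhs - rhs) == 0
```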
Using the {\em Arithmetic--Quadratic} mean inequality we see that the term $\bigl((n-1)\widetilde{{\mathrm S}}-\widetilde{{\mathrm R}}^2\bigr)$ must be nonnegative, so we conclude that the whole expression is nonpositive. Hence, it must be zero at $p$, the point where $\lambda_{\min}/{\mathrm R}$ gets its minimum.\\ There are only two ways this can happen: either $\lambda_{\min}(p)=0$ and all the other $n-1$ eigenvalues of the Ricci tensor are equal, or all the eigenvalues are equal (and positive, as ${\mathrm R}>0$).\\ In this latter case, $\lambda_{\min}(p)={\mathrm R}/n$ and since we are at the point of minimum, ${\mathrm R}_{ij}/{\mathrm R}\geq g_{ij}/n$ or ${\mathrm R}_{ij}\geq {\mathrm R} g_{ij}/n$ on the whole manifold. But this inequality easily implies that $(M,g)$ is an Einstein manifold (taking the trace shows that the inequality is everywhere an equality), so the soliton is trivial.\\ In the other case, $\lambda_{\min}(p)=0$ and all the other $n-1$ eigenvalues of the Ricci tensor are equal to ${\mathrm R}/(n-1)$ at $p$. It can be shown that locally around $p$ the eigenvector $v(q)$ realizing the minimal eigenvalue $\lambda_{\min}(q)$ can be chosen smoothly depending on the point $q$ (locally there are no ``bifurcations'' of the minimal eigenvalue of ${\mathrm R}_{ij}$). 
Then, as ${\mathrm R}_{ij}v^i=\lambda_{\min}v_j$, differentiating this relation, we get $$ \nabla^k{\mathrm R}_{ij}v^i =\nabla^k\lambda_{\min}v_j+\lambda_{\min}\nabla^kv_j-{\mathrm R}_{ij}\nabla^kv^i\,, $$ then we compute for $\lambda_{\min}={\mathrm R}_{ij}v^iv^j$, \begin{align*} \Delta\lambda_{\min}=&\,\Delta({\mathrm R}_{ij}v^iv^j)\\ =&\,\Delta{\mathrm R}_{ij}v^iv^j+4\nabla^k{\mathrm R}_{ij}v^i\nabla_kv^j +2{\mathrm R}_{ij}\nabla^kv^i\nabla_kv^j+2{\mathrm R}_{ij}v^i\Delta v^j\\ =&\,\Delta{\mathrm R}_{ij}v^iv^j +2{\mathrm R}_{ij}\nabla^kv^i\nabla_kv^j+2{\mathrm R}_{ij}v^i\Delta v^j\\ &\,+4\nabla^k\lambda_{\min}v_j\nabla_kv^j +4\lambda_{\min}\nabla^kv_j\nabla_kv^j-4{\mathrm R}_{ij}\nabla^kv^i\nabla_kv^j\\ =&\,\Delta{\mathrm R}_{ij}v^iv^j -2{\mathrm R}_{ij}\nabla^kv^i\nabla_kv^j+2{\mathrm R}_{ij}v^i\Delta v^j +2\nabla^k\lambda_{\min}\nabla_k\vert v\vert^2 +4\lambda_{\min}\vert\nabla v\vert^2\\ \leq&\,\Delta{\mathrm R}_{ij}v^iv^j -2\lambda_{\min}\nabla^kv_j\nabla^kv^j +2{\mathrm R}_{ij}v^i\Delta v^j +4\lambda_{\min}\vert\nabla v\vert^2\\ =&\,\Delta{\mathrm R}_{ij}v^iv^j +2\lambda_{\min}\vert\nabla v\vert^2 +2\lambda_{\min}v_j\Delta v^j\\ =&\,\Delta{\mathrm R}_{ij}v^iv^j +\lambda_{\min}\Delta\vert v\vert^2\\ =&\,\Delta{\mathrm R}_{ij}v^iv^j\,. \end{align*} Working as before, we obtain the following elliptic inequality for the minimal eigenvalue, locally around $p$, \begin{align*} \Delta\lambda_{\min}\leq &\,\langle\nabla\lambda_{\min}\,\vert\,\nabla f\rangle +\frac{2\mu}{n}\lambda_{\min}\\ &\,+\frac{2}{(n-1)(n-2)} \bigl({\mathrm R}^2-n{\mathrm R}\lambda_{\min} +2(n-1)\lambda_{\min}^2-(n-1){\mathrm S}\bigr)\,. 
\end{align*} This inequality implies \begin{align*} \Delta\Bigl(\frac{\lambda_{\min}}{{\mathrm R}}\Bigr) \leq&\,\Bigl\langle\nabla\Bigl(\frac{\lambda_{\min}}{{\mathrm R}}\Bigr) \,\Bigr\vert\,\nabla f -2\frac{\nabla{\mathrm R}}{{\mathrm R}}\Bigr\rangle\\ &\,+\frac{2}{(n-1)(n-2)} \bigl({\mathrm R}-n\lambda_{\min} +2(n-1)\lambda_{\min}^2/{\mathrm R}-(n-1){\mathrm S}/{\mathrm R}\bigr)+2\lambda_{\min}{\mathrm S}/{\mathrm R}^2\,, \end{align*} holding locally around $p$, where $\lambda_{\min}/{\mathrm R}$ attains its local minimum, zero.\\ As at this local minimum $\Delta\Bigl(\frac{\lambda_{\min}}{{\mathrm R}}\Bigr)=0$, by the strong maximum principle, it follows that $\lambda_{\min}/{\mathrm R}$ is locally constant around $p$. Then it is an easy consequence that this must hold on the whole of the connected manifold $M$, and $\lambda_{\min}$ is identically zero.\\ Getting back to the initial equation~\eqref{gsoliton}, we put ourselves at the point $p\in M$ where the function $f$ attains its maximum. If $v\in T_pM$ is the unit zero eigenvector of the Ricci tensor, we take normal coordinates at $p$ such that $v=\partial_{x_1}$, hence \begin{equation*} {\mathrm R}_{ij}(p)v^iv^j+\nabla^2_{ij}f(p)v^iv^j=\frac{\mu}{n}\,, \end{equation*} that is, \begin{equation*} \nabla^2_{11}f(p)=\mu/n>0\,, \end{equation*} which is impossible as $p$ is a maximum point for $f$. \begin{prop}\label{p2} Every contracting, compact, three--dimensional Ricci soliton is a quotient of ${\mathbb S}^3$ with the standard metric. \end{prop} \begin{prop}\label{p5} When $n>3$ and the Weyl tensor is zero, every contracting, compact Ricci soliton is trivial. Then, it is a quotient of ${\mathbb S}^n$ with the standard metric. \end{prop} \begin{rem} In the recent preprint~\cite{caowang} Cao and Wang also prove this result by means of a completely different method. \end{rem} When $n>3$, we have counterexamples to the triviality of compact Ricci solitons due to Koiso~\cite{koiso1}, Cao~\cite{cao1}, Feldman, Ilmanen and Ni~\cite{ilman6}. 
Moreover, some of these examples have ${\mathrm {Ric}}>0$. See also Bryant~\cite{bry1}.\\ In general, we only know that it must be ${\mathrm R}>0$ and nonconstant. \begin{prob} Are there special conditions in dimension $n=4$ (on the Weyl tensor?) assuring that a contracting, compact, Ricci soliton is trivial? \end{prob} \begin{prob} Are there counterexamples in dimension $n=5$? \end{prob} Recently B\"ohm and Wilking~\cite{bohmwilk} proved the following conjecture of Hamilton. \begin{teo} If the Riemann {\em operator} is positive definite, every contracting, compact, Ricci soliton is trivial. \end{teo} Hence, it is a quotient of ${\mathbb S}^n$, by a theorem of Tachibana~\cite{tachib1}. Previously, by Hamilton's work~\cite{hamilton2} this result was known for $n\leq 4$ and there was a partial result by Cao~\cite{caox1} in any dimension. \begin{prob} Can the sectional curvatures of a compact, nontrivial, contracting Ricci soliton all be positive (nonnegative)? \end{prob} \begin{prob} What are, in general, the properties of compact, nontrivial, contracting Ricci solitons? \end{prob} See the paper by Derdzinski~\cite{derdz1}.\\ We are also aware of a preprint~\cite{fergarciario} of Fern\'andez--L\'opez and Garc\'{\i}a--R\'{\i}o where they show that the fundamental group has to be finite.\\ We give here a short proof of this fact. \begin{prop} The fundamental group of a compact shrinking Ricci soliton is finite. \end{prop} \begin{proof} Denoting by $\pi:\widetilde{M}\to M$ the Riemannian universal covering of $M$, it is well known that the fundamental group is in one--to--one correspondence with the discrete counterimage of a basepoint $p\in M$. Clearly $\widetilde{M}$ is again a shrinking gradient Ricci soliton (possibly noncompact) with a potential function $\widetilde{f}=f\circ\pi$, since $\pi$ is a local isometry. 
Let $a$ and $b$ be a pair of points with $\pi(a)=\pi(b)=p$ and $\gamma:[0,L]\to\widetilde{M}$ the minimal geodesic between them, parametrized by arclength.\\ Let $E_i$ be an orthonormal basis of $T_a\widetilde{M}$ such that $E_1=\gamma^\prime(0)$ extended by parallel transport along $\gamma$. If $h(t)=\sin\left({\frac{\pi t}{L}}\right)$, we define the fields $Y_i(t)=h(t)E_i$, zero at $t=0,L$.\\ As $\gamma$ is minimal, the index form is nonnegative definite, \begin{equation*} 0\leq I(Y_i,Y_i)=\int_0^L \vert Y_i^\prime\vert^2-{\mathrm R}(Y_i,\gamma',Y_i,\gamma')\,dt= \int_0^L \vert h'(t)\vert^2-h^2(t){\mathrm R}(E_i,\gamma',E_i,\gamma')\,dt\,, \end{equation*} and after summing over $i\in\{2,\dots,n\}$, we get \begin{equation*} 0\leq\frac{\pi^2(n-1)}{L^2}\int_0^L \cos^2\left({\frac{\pi t}{L}}\right)\,dt -\int_0^L \sin^2\left({\frac{\pi t}{L}}\right)\mathrm{Ric}(\gamma',\gamma')\,dt\,. \end{equation*} Substituting now the Ricci soliton equation we get \begin{equation*} \int_0^L \sin^2\left({\frac{\pi t}{L}}\right) \frac{\mu}{n}\,\vert\gamma'\vert^2\,dt -\int_0^L \sin^2\left({\frac{\pi t}{L}}\right) \nabla^2_{\gamma',\gamma'}\widetilde{f}\, dt \leq\frac{\pi^2(n-1)}{L^2}\int_0^L \cos^2{\left(\frac{\pi t}{L}\right)}\,dt \end{equation*} that is, \begin{align*} \frac{\mu}{n}\int_0^L&\,\sin^2\left({\frac{\pi t}{L}}\right)\,dt -\int_0^L \sin^2\left({\frac{\pi t}{L}}\right) \frac{d^2\,}{dt^2}[\widetilde{f}(\gamma(t))]\,dt\\ =&\,\frac{\mu}{n}\int_0^L \sin^2\left({\frac{\pi t}{L}}\right)\,dt +2\frac{\pi^2}{L^2}\int_0^L \left[\sin^2\left({\frac{\pi t}{L}}\right) -\cos^2\left({\frac{\pi t}{L}}\right)\right]\widetilde{f}(\gamma(t))\,dt\\ \leq &\,\frac{\pi^2(n-1)}{L^2}\int_0^L \cos^2\left({\frac{\pi t}{L}}\right)\,dt\,. 
\end{align*} As $\vert \widetilde{f}\vert\leq \max_M |f|\leq C$ and setting $A=\int_0^L \sin^2\left({\frac{\pi t}{L}}\right)\,dt=\int_0^L \cos^2\left({\frac{\pi t}{L}}\right)\,dt$ we obtain \begin{equation*} \frac{\mu}{n} A - 2CA\frac{\pi^2}{L^2}\leq \frac{(n-1)\pi^2}{L^2}A \end{equation*} hence, \begin{equation*} L^2\leq\frac{n\pi^2(n-1+2C)}{\mu}\,. \end{equation*} This estimate says that all the counterimages of a point $p\in M$ belong to a bounded, hence compact, subset of $\widetilde{M}$. Since such a set is discrete, it must be finite, and the conclusion follows. Notice also that the universal covering is compact. Moreover, as a byproduct of this argument, we have that if the potential function $f$ of a complete, shrinking gradient Ricci soliton is bounded then the soliton is compact. In addition, by equation~\eqref{equ8} it also follows that if ${\mathrm R}$ is bounded and $\vert\nabla f\vert$ is bounded, again the soliton is compact. \end{proof} \subsection{Another Proof of Proposition~\ref{p2}} We give now a direct proof of Propositions~\ref{p2} and~\ref{p5}, using only the defining equation~\eqref{soliton}, without passing through Theorem~\ref{p4}.
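In the computation below, $\omega$ denotes the $1$--form metrically dual to the soliton vector field, and equation~\eqref{soliton} is used repeatedly in the equivalent form
\begin{equation*}
{\mathrm R}_{ij}+\frac{1}{2}\left(\nabla_i\omega_j+\nabla_j\omega_i\right)=\frac{\mu}{n}\,g_{ij}\,,
\qquad\text{that is,}\qquad
\nabla_i\omega_j+\nabla_j\omega_i=\frac{2\mu}{n}\,g_{ij}-2{\mathrm R}_{ij}\,,
\end{equation*}
whose trace gives $\nabla_k\omega^k=\mu-{\mathrm R}$.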
We start with the following computation, \begin{align*} \Delta\nabla_i\omega_j=&\,\nabla_k\nabla^k\nabla_i\omega_j =\nabla_k(\nabla_i\nabla^k\omega_j+{\mathrm R}^k_{\phantom{k}ijs}\omega^s)\\ =&\,-\nabla_k\nabla_i\nabla_j\omega^k-2\nabla_k\nabla_i{\mathrm R}^k_j+ \nabla_k{\mathrm R}^k_{\phantom{k}ijs}\omega^s+{\mathrm R}^k_{\phantom{k}ijs}\nabla_k\omega^s\\ =&\,-\nabla_i\nabla_k\nabla_j\omega^k-{\mathrm R}_{kijs}\nabla^s\omega^k -{\mathrm R}_{ki\phantom{k}s}^{\phantom{ki}k}\nabla_j\omega^s -2\nabla_i\nabla_k{\mathrm R}^k_j-2{\mathrm R}_{ki\phantom{k}s}^{\phantom{ki}k}{\mathrm R}^s_j -2{\mathrm R}_{kij}^{\phantom{kij}s}{\mathrm R}^k_s\\ &\,-\nabla_j{\mathrm R}_{sk\phantom{k}i}^{\phantom{sk}k}\omega^s -\nabla_s{\mathrm R}_{kj\phantom{k}i}^{\phantom{sk}k}\omega^s +{\mathrm R}^k_{\phantom{k}ijs}\nabla_k\omega^s\\ =&\,-\nabla_i\nabla_j\nabla_k\omega^k-\nabla_i{\mathrm R}_{js}\omega^s -{\mathrm R}_{js}\nabla_i\omega^s+{\mathrm R}_{ikjs}\nabla^s\omega^k -{\mathrm R}_{is}\nabla_j\omega^s -\nabla_i\nabla_j{\mathrm R}\\ &\,-2{\mathrm S}_{ij} +2{\mathrm R}_{ikj}^{\phantom{ikj}s}{\mathrm R}^k_s+\nabla_j{\mathrm R}_{is}\omega^s -\nabla_s{\mathrm R}_{ij}\omega^s -{\mathrm R}_{i\phantom{k}js}^{\phantom{i}k}\nabla_k\omega^s\\ =&\,\nabla_i\nabla_j{\mathrm R} -\nabla_i{\mathrm R}_{js}\omega^s -{\mathrm R}_{js}\nabla_i\omega^s +{\mathrm R}_{ikjs}\nabla^s\omega^k -{\mathrm R}_{is}\nabla_j\omega^s -\nabla_i\nabla_j{\mathrm R}\\ &\,-2{\mathrm S}_{ij} +2{\mathrm R}_{ikj}^{\phantom{ikj}s}{\mathrm R}^k_s+\nabla_j{\mathrm R}_{is}\omega^s -\nabla_s{\mathrm R}_{ij}\omega^s -{\mathrm R}_{i\phantom{k}js}^{\phantom{i}k}\nabla_k\omega^s\\ =&\,-\nabla_i{\mathrm R}_{js}\omega^s -{\mathrm R}_{js}\nabla_i\omega^s +{\mathrm R}_{ikjs}\nabla^s\omega^k -{\mathrm R}_{is}\nabla_j\omega^s\\ &\,-2{\mathrm S}_{ij} +2{\mathrm R}_{ikj}^{\phantom{ikj}s}{\mathrm R}^k_s+\nabla_j{\mathrm R}_{is}\omega^s -\nabla_s{\mathrm R}_{ij}\omega^s -{\mathrm R}_{i\phantom{k}js}^{\phantom{i}k}\nabla_k\omega^s\\ \end{align*} 
Then, we are ready to write the Laplacian of the Ricci tensor \begin{align*} 2\Delta{\mathrm R}_{ij}=&\,-\Delta\left({\nabla_i\omega_j+\nabla_j\omega_i}\right)\\ =&\,\nabla_i{\mathrm R}_{js}\omega^s +{\mathrm R}_{js}\nabla_i\omega^s -{\mathrm R}_{ikjs}\nabla^s\omega^k +{\mathrm R}_{is}\nabla_j\omega^s\\ &\,+\nabla_j{\mathrm R}_{is}\omega^s +{\mathrm R}_{is}\nabla_j\omega^s -{\mathrm R}_{jkis}\nabla^s\omega^k +{\mathrm R}_{js}\nabla_i\omega^s\\ &\,+2{\mathrm S}_{ij} -2{\mathrm R}_{ikj}^{\phantom{ikj}s}{\mathrm R}^k_s -\nabla_j{\mathrm R}_{is}\omega^s +\nabla_s{\mathrm R}_{ij}\omega^s +{\mathrm R}_{i\phantom{k}js}^{\phantom{i}k}\nabla_k\omega^s\\ &\,+2{\mathrm S}_{ji} -2{\mathrm R}_{jki}^{\phantom{jki}s}{\mathrm R}^k_s -\nabla_i{\mathrm R}_{js}\omega^s +\nabla_s{\mathrm R}_{ji}\omega^s +{\mathrm R}_{j\phantom{k}is}^{\phantom{j}k}\nabla_k\omega^s\\ =&\,2\nabla_s{\mathrm R}_{ij}\omega^s+2{\mathrm R}_{js}\nabla_i\omega^s+2{\mathrm R}_{is}\nabla_j\omega^s\\ &\,-{\mathrm R}_{ikjs}\nabla^s\omega^k -{\mathrm R}_{jkis}\nabla^s\omega^k +{\mathrm R}_{ikjs}\nabla^k\omega^s +{\mathrm R}_{jkis}\nabla^k\omega^s\\ &\,+4{\mathrm S}_{ij}-4{\mathrm R}_{ikjs}{\mathrm R}^{ks}\,.\\ \end{align*} Now, noticing that the whole second line of the last expression cancels by the symmetries of the Riemann tensor, \begin{align*} \Delta{\mathrm R}_{ij} =&\,\nabla_s{\mathrm R}_{ij}\omega^s +{\mathrm R}_{js}\nabla_i\omega^s +{\mathrm R}_{is}\nabla_j\omega^s +2{\mathrm S}_{ij} -2{\mathrm R}_{ikjs}{\mathrm R}^{ks}\\ =&\,\left\langle\nabla{\mathrm R}_{ij}\,\vert\,\omega\right\rangle-2{\mathrm R}_{ikjs}{\mathrm R}^{ks}\\ &\,-\frac{{\mathrm R}_{is}}{2}\Bigl(\nabla_j\omega^s+\nabla^s\omega_j -\frac{2\mu}{n}g_{sj}\Bigr) -\frac{{\mathrm R}_{js}}{2}\Bigl(\nabla_i\omega^s+\nabla^s\omega_i -\frac{2\mu}{n}g_{si}\Bigr)\\ &\,+{\mathrm R}_{js}\nabla_i\omega^s +{\mathrm R}_{is}\nabla_j\omega^s\\ =&\,\left\langle\nabla{\mathrm R}_{ij}\,\vert\,\omega\right\rangle-2{\mathrm R}_{ikjs}{\mathrm R}^{ks} +\frac{2\mu}{n}{\mathrm R}_{ij}+ \frac{{\mathrm
R}_{is}}{2}\bigl(\nabla_j\omega^s-\nabla^s\omega_j\bigr) +\frac{{\mathrm R}_{js}}{2}\bigl(\nabla_i\omega^s-\nabla^s\omega_i\bigr)\,. \end{align*} Finally, contracting this equation with $g^{ij}$ we get \begin{align}\label{equ99} \Delta{\mathrm R} =&\,\left\langle\nabla{\mathrm R}\,\vert\,\omega\right\rangle-2{\mathrm R}_{ks}{\mathrm R}^{ks} +\frac{2\mu}{n}{\mathrm R} +\frac{{\mathrm R}_{is}}{2}\bigl(\nabla_j\omega^s-\nabla^s\omega_j\bigr)g^{ij} +\frac{{\mathrm R}_{js}}{2}\bigl(\nabla_i\omega^s-\nabla^s\omega_i\bigr)g^{ij}\\ =&\,\left\langle\nabla{\mathrm R}\,\vert\,\omega\right\rangle+\frac{2\mu}{n}{\mathrm R}-2{\mathrm S}\nonumber \end{align} by the skew--symmetry of the sum of the last two terms.\\ When $n=3$ or in general if the Weyl tensor is zero, as before, setting $\lambda_{\min}:M\to{{\mathbb R}}$ to be the minimal eigenvalue of the Ricci tensor, if $p\in M$ is the point where $\lambda_{\min}/{\mathrm R}$ gets its minimum with eigenvector $v_p$, we consider a local unit smooth tangent vector field $w=w^i$ such that $w(p)=v_p$, $\nabla w^i(p)=\Delta w^i(p)=0$. 
Then the smooth function ${\mathrm R}_{ij}w^iw^j/{\mathrm R}$ has a local minimum at $p$, $({\mathrm R}_{ij}w^iw^j/{\mathrm R})(p)=\lambda_{\min}(p)/{\mathrm R}(p)$, $\nabla({\mathrm R}_{ij}w^iw^j/{\mathrm R})(p)=0$ and $\Delta({\mathrm R}_{ij}w^iw^j/{\mathrm R})(p)\geq 0$.\\ By the assumptions on the derivatives of $w$ at $p$ we have $\nabla({\mathrm R}_{ij}/{\mathrm R})(p)v_p^iv_p^j=0$ and $\Delta({\mathrm R}_{ij}/{\mathrm R})(p)v_p^iv_p^j\geq 0$, hence, \begin{align*} 0\leq&\,\Delta({\mathrm R}_{ij}/{\mathrm R})(p)v_p^iv_p^j\\ =&\,\frac{2\left({\mathrm R}^3-n\lambda_{\min}{\mathrm R}^2+2(n-1)\lambda_{\min}^2{\mathrm R}-(n-1){\mathrm S}{\mathrm R} +(n-1)(n-2)\lambda_{\min}{\mathrm S}\right)}{{\mathrm R}^2}\\ &\,+\frac{{\mathrm R}_{is}}{2}\bigl(\nabla_j\omega^s-\nabla^s\omega_j\bigr)v_p^iv_p^j +\frac{{\mathrm R}_{js}}{2}\bigl(\nabla_i\omega^s-\nabla^s\omega_i\bigr)v_p^iv_p^j\\ =&\,\frac{2\left({\mathrm R}^3-n\lambda_{\min}{\mathrm R}^2+2(n-1)\lambda_{\min}^2{\mathrm R}-(n-1){\mathrm S}{\mathrm R} +(n-1)(n-2)\lambda_{\min}{\mathrm S}\right)}{{\mathrm R}^2}\\ &\,+\frac{\lambda_{\min}}{2}\bigl(\nabla_j\omega_s-\nabla_s\omega_j\bigr)v^s_pv^j_p +\frac{\lambda_{\min}}{2}\bigl(\nabla_i\omega_s-\nabla_s\omega_i\bigr)v^s_pv^i_p\,. \end{align*} Again, by skew--symmetry the last two terms cancel and we get $$ 0\leq {\mathrm R}^3-n\lambda_{\min}{\mathrm R}^2+2(n-1)\lambda_{\min}^2{\mathrm R}-(n-1){\mathrm S}{\mathrm R} +(n-1)(n-2)\lambda_{\min}{\mathrm S}\,. $$ Since this inequality and equation~\eqref{equ99} are respectively analogous to~\eqref{equ9} and~\eqref{equ4} for {\em gradient} Ricci solitons, following the proof of Proposition~\ref{p2} we arrive directly at the conclusion assuming only that $(M,g)$ is a Ricci soliton, without using Theorem~\ref{p4} to know that we are actually dealing with a {\em gradient} Ricci soliton.
\section{Introduction}\label{Intorduction} Conventionally, fixed energy supplies (e.g. batteries) are employed to power energy-constrained wireless networks, such as sensor networks. The lifetime of the network is typically limited, and is thus one of the most important considerations for designing such networks. To prolong the network's operation time, energy harvesting has recently attracted a great deal of attention since it enables scavenging energy from the environment and potentially provides unlimited power supplies for wireless networks. Among other commonly used energy sources (e.g. solar and wind), radio signals radiated by ambient transmitters have drawn an upsurge of interest as a viable new source for wireless energy harvesting. Harvesting energy from radio signals has already been successfully implemented in applications such as passive radio-frequency identification (RFID) systems and body sensor networks (BSNs) for medical implants. More interestingly, wireless energy harvesting opens an avenue for the joint investigation of simultaneous wireless information and power transfer (SWIPT) since radio signals carry energy and information at the same time. SWIPT has recently been investigated for various wireless channels, e.g., the point-to-point additive white Gaussian noise (AWGN) channel \cite{Zhou}, the fading AWGN channel \cite{Liu}-\cite{Caspers}, the multi-antenna channel \cite{Zhang}-\cite{Park}, the relay channel \cite{Gurakan}, \cite{Narir}, and the multi-carrier based broadcast channel \cite{Ng}-\cite{Zhou2}. To achieve maximal wireless energy transfer (WET) and wireless information transfer (WIT) simultaneously, one key challenge is to develop efficient and pragmatic receiver architectures to enable information decoding (ID) and energy harvesting (EH) from the same received signal at the same time \cite{Zhou}, \cite{Caspers}. 
Practically, two suboptimal receiver designs for SWIPT have been proposed in \cite{Zhang} based on the principle of orthogonalizing ID and EH, namely \emph{power splitting} and \emph{time switching}. The power splitting scheme splits the received signal into two streams of different power for ID and EH separately, while the time switching scheme switches the receiver between an ID mode and an EH mode from time to time. The optimal switching rules between ID versus EH modes for a point-to-point single-antenna fading channel subject to the co-channel interference have been derived in \cite{Liu} to maximize/minimize the information transmission rate/outage probability given an average harvested energy target. It was shown in \cite{Liu} that the time-fluctuation or fading of wireless channels is indeed beneficial for receiver mode-switching (time-switching) based SWIPT systems, where an ``opportunistic'' energy harvesting scheme is proved to be optimal, i.e., the receiver should switch to the EH mode when the channel power is larger than a certain threshold, and to the ID mode otherwise. Intuitively, this phenomenon can be explained as follows. Note that the received energy (in Joule) and amount of information (in bits) both scale linearly with time, but linearly and sub-linearly (logarithmically) with power, respectively; as a result, given the same signal energy at receiver, it is desirable to have more significant power fluctuations such that a given target energy can be harvested during shorter power peaks, thus resulting in more time (but smaller power) for receiving a higher amount of information. In this paper, we further investigate the time-switching based SWIPT system in a multicast scenario, where one multi-antenna transmitter (Tx) broadcasts both energy and common information to multiple single-antenna receivers (Rxs) simultaneously over quasi-static multiple-input single-output (MISO) flat-fading channels, as shown in Fig. \ref{Fig_MulticastSWIPT}. 
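This time-versus-power accounting can be illustrated with a small numerical sketch. All parameter values below are illustrative assumptions (unit block duration and unit noise power), not taken from any referenced system: a fixed amount of energy must be harvested within a block, and a flat received power profile is compared against a fluctuating one with the same average power.

```python
import numpy as np

# Toy sketch of the energy/rate scaling argument. Illustrative numbers:
# unit block duration, unit noise power, and an energy target E_target
# that must be harvested within the block.
E_target, p_avg = 0.8, 1.0

# Flat channel: harvest at power p_avg for t_flat, decode the rest.
t_flat = E_target / p_avg
rate_flat = (1 - t_flat) * np.log2(1 + p_avg)

# Fluctuating channel with the same average power: power p_hi for a
# fraction f of the block and p_lo otherwise.
p_hi, f = 8.0, 0.1
p_lo = (p_avg - f * p_hi) / (1 - f)

# Harvest only during the short high-power peaks; since energy scales
# linearly with power, the peaks alone meet the target (f * p_hi = E_target),
# leaving the whole low-power fraction of the block for decoding.
t_peak = E_target / p_hi
rate_fluct = (f - t_peak) * np.log2(1 + p_hi) + (1 - f) * np.log2(1 + p_lo)

print(rate_flat, rate_fluct)  # the fluctuating profile supports a higher rate
```

Because information grows only logarithmically with power, meeting the energy target during brief peaks costs little rate, while the extra decoding time it frees up more than compensates.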
We assume that Tx has an unlimited energy supply that provides constant transmit power while all Rxs have only limited energy sources (e.g., rechargeable batteries) and thus need to replenish energy from the signals broadcast by Tx. Each Rx harvests energy and decodes information from the received signal via time switching, i.e., it can either decode information \emph{or} harvest energy from the received signal at any time, but \emph{not both}. It is worth noting that the number of Rxs in the network can be arbitrarily large, and thus it may not be practically feasible for Tx to gather the instantaneous channel state information (CSI) from all Rxs via dedicated feedback since this will increase the system complexity and overhead drastically with the increasing number of Rxs. Therefore, in this paper we consider a practical setup where the MISO channels from Tx to different Rxs are only known at each respective Rx but unavailable at Tx. \begin{figure} \centering \includegraphics[width=0.45\columnwidth]{MulticastSWIPT} \caption{A MISO multicast network for SWIPT.} \label{Fig_MulticastSWIPT} \end{figure} In order to optimize the rate-energy (R-E) trade-offs achievable at each Rx, inspired by the result on the beneficial time-variation of fading channels for time-switching based SWIPT systems \cite{Liu}, in this paper we propose a new application of the celebrated ``random beamforming'' technique at the multi-antenna transmitter to generate artificial channel variations at each receiver to opportunistically harvest energy when the channel power exceeds a given threshold and decode information otherwise. This is realized by partitioning each transmission block with constant user channels into sub-blocks with equal duration in which independent random beams (RBs) are applied to generate artificial channel fading. 
Note that the use of random beamforming in this paper is motivated differently from that in the conventional setup for broadcasting with WIT only, which aims at achieving asymptotically interference-free independent information transmissions to multiple receivers in multi-antenna broadcast channels by exploiting multi-user diversity based partial channel feedback and transmission scheduling as the number of receivers increases to infinity \cite{Viswanath}, \cite{Sharif}. In contrast, for multicast SWIPT systems under our investigation, random beamforming is employed for generating artificial time-variation of channels to achieve better R-E trade-offs with time-switching receivers. The main results of this paper are summarized as follows: \begin{itemize} \item We propose a novel design with transmitter random beamforming and receiver time switching for MISO multicast SWIPT systems. We first characterize the performance trade-offs between WET and WIT by investigating the achievable rate and harvested power pair in a given transmission block with constant MISO AWGN channels, assuming Gaussian distributed random beams. Furthermore, we compare the R-E performance of our proposed scheme with that of a reference scheme with receiver periodic switching between ID and EH modes, but without random beamforming applied at Tx. \item We then extend our analysis for the MISO AWGN channel to MISO Rayleigh fading channel. We investigate the achievable average information rate and average harvested power at each Rx, and characterize their asymptotic trade-offs when the transmit power goes to infinity. It is shown that employing one single random beam for the proposed scheme achieves the best R-E trade-off asymptotically and also outperforms that of periodic switching. 
\item When Rx consumes a significant amount of power in each block and/or the capacity of its energy storage device is limited, it may suffer from power shortage unless the amount of harvested power in each block is larger than a certain requirement. We thus study the ``power outage probability'' of the proposed scheme in fading MISO channels, which is also compared to that of periodic switching in both asymptotic and finite transmit power regimes. \item In practice, the transmit power is preferably kept constant for the maximal operating efficiency of the transmitter amplifiers. However, the use of Gaussian distributed random beams for the proposed scheme can cause large transmit power fluctuations. We thus propose alternative random beam designs with constant transmit power, for which the R-E performance is characterized and compared with the case of Gaussian random beams. \end{itemize} The rest of this paper is organized as follows. Section \ref{Sec:SystemModel} introduces the proposed scheme as well as the reference scheme of periodic switching, and compares their harvested power and achievable information rate for a single block with the AWGN MISO channel. Section \ref{Sec:PerformanceAnalysis} investigates the R-E performances of the proposed and reference schemes in Rayleigh fading MISO channels. Section \ref{Sec:OtherBeams} compares the performances of the proposed scheme with different random beam designs. Finally, Section \ref{Sec:Conclusion} concludes the paper. \emph{Notations:} In this paper, matrices and vectors are denoted by bold-face upper-case letters and lower-case letters, respectively. ${{\bf{I}}_N}$ denotes an $N \times N$ identity matrix and ${{\bf{0}}}$ represents a matrix with all zero entries.
The distribution of a circularly symmetric complex Gaussian (CSCG) random vector with mean vector $\boldsymbol{\mu}$ and covariance matrix ${\boldsymbol{\Sigma }}$ is denoted by ${\mathcal{CN}}( {\boldsymbol{\mu} ,{\boldsymbol{\Sigma }}})$, and $ \sim $ stands for ``distributed as". ${{\mathbb{C}}^{a \times b}}$ and ${{\mathbb{R}}^{a \times b}}$ denote the spaces of $a \times b$ matrices with complex and real entries, respectively. $\left\| {\bf{z}} \right\|$ denotes the Euclidean norm of a complex vector $\bf{z}$. ${\mathbb{E}}\left[ \cdot \right]$ represents the statistical expectation. \section{System Model}\label{Sec:SystemModel} As shown in Fig. \ref{Fig_MulticastSWIPT}, we consider a MISO multicast SWIPT system consisting of one Tx and multiple Rxs, e.g., sensors. Since Tx broadcasts a common signal to all Rxs, in this paper we focus on one particular Tx-Rx pair as shown in Fig. \ref{Fig_SystemModel} for the purpose of exposition, while the effect of multiuser channels on the performance of the considered system will be evaluated by simulation in Section \ref{Sec:PerformanceAnalysis}. We assume that Tx is equipped with $N_t > 1$ antennas and Rx is equipped with one single antenna. It is also assumed that the MISO channel from Tx to Rx follows quasi-static flat-fading, where the channel remains constant during each block transmission time, denoted by $T$, but varies from one block to another. It is further assumed that the channel in each block is perfectly known at Rx, but unknown at Tx. \begin{figure} \centering \includegraphics[width=0.5\columnwidth]{SystemModel_PtoP} \caption{A MISO wireless system for SWIPT via receiver mode switching.} \label{Fig_SystemModel} \end{figure} The transmitted signal at the $i$th symbol interval in the $t\,$th transmission block is denoted by ${\bf{x}}_t \left( i \right) \in {{\mathbb{C}}^{{N_t} \times 1}}$. 
The covariance matrix of the transmitted signal is thus given by ${{\bf{S}}_{t,\,\bf{x}}} = \mathbb E [ {{\bf{x}}_t\left( i \right){\bf{x}}_t^{H}{{\left( i \right)}}} ] = \frac{P}{{{N_t}}}{{\bf{I}}_{{N_t}}}$, where $P$ denotes the constant transmit power, which is assumed to be equally allocated among $N_t$ transmit antennas. In addition, the MISO channel from Tx to Rx in the $t\,$th transmission block is denoted by ${\tilde{\bf{h}}}_t \in {{\mathbb{C}}^{{N_t} \times 1}}$, which is constant during each block. Without loss of generality, the MISO channel ${\bf{\tilde h}}_t$ can be modeled as ${\bf{\tilde h}}_t = \sqrt{\theta}\, {\bf{h}}_t$, where $\theta$ and ${\bf{h}}_t \in \mathbb C^{N_t \times 1}$ denote the signal power due to distance-dependent attenuation and large-scale channel fading (assumed to be constant over all $t$'s for the time being) and the MISO channel due to small-scale channel fading in the $t\,$th block, respectively. The received signal at Rx is then expressed as \begin{equation}\label{Eq_ReceivedSignal_General} {\begin{array}{l} {y_t}\left( i \right) = {\bf{\tilde h}}_t^T{{\bf{x}}_t}\left( i \right) + {z_t}\left( i \right) \\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\, = \sqrt{\theta} {\bf{h}}_t^T{{\bf{x}}_t}\left( i \right) + {z_t}\left( i \right), \\ \end{array}} \end{equation} where ${y}_t\left( i \right)$ and ${z}_t\left( i \right)$ denote the received signal and noise at Rx, respectively; it is assumed that ${z}_t\left( i \right) \sim {\mathcal{CN}}\left( {0,\sigma^2} \right)$, which is independent over both $t$ and $i$. In addition, since we can consider one block of interest without loss of generality, the block index $t$ will be omitted in the sequel for notational brevity. In each block, Tx aims at achieving SWIPT to Rx. It is assumed that Rx is equipped with a rechargeable battery to store the energy harvested from the received signal, which is used to provide power to its operating circuits. 
Specifically, Rx harvests energy from the received signals when it is in the EH mode, while it decodes information in the ID mode. We assume that Rx switches between ID mode and EH mode as in \cite{Liu} and \cite{Zhang} since it is difficult yet to use the received signal for both ID and EH at the same time due to practical circuit limitations \cite{Zhou}. As in \cite{Liu}, ID mode and EH mode are represented by defining an indicator function as \begin{equation}\label{Eq_ModeSelection} {\rho = \left\{ {\begin{array}{*{20}{c}} {1,} \\ {0,} \\ \end{array}\begin{array}{*{20}{c}} {\,\,\,{\rm{ID}}\,\,{\rm{mode}}\,\,{\rm{is}}\,\,{\rm{active}}} \\ {\,\,\,{\rm{EH}}\,\,{\rm{mode}}\,\,{\rm{is}}\,\,{\rm{active.}}} \\ \end{array}} \right.} \end{equation} We consider two time switching schemes, namely ``\emph{periodic switching} (PS)'' and ``\emph{threshold switching} (TS)'' as elaborated next. \subsection{Reference Scheme: Periodic Switching}\label{Ref_OOS} \begin{figure} \centering \includegraphics[width=0.52\columnwidth]{OOS_SystemModel} \caption{Transmitter and receiver structures for periodic switching (PS).} \label{Fig_OOS_SystemModel} \end{figure} As shown in Fig. 
\ref{Fig_OOS_SystemModel}, with PS, Rx sets $\rho = 1$ during the first $\tau T$ portion of each transmission block, with $0 \le \tau \le 1$, and $\rho = 0$ for the remaining block duration $(1-\tau)T$.\footnote{Ideally, with a given time allocation $\tau$, setting $\rho = 1$ or $0$ at the beginning of each block will not change the system performance; however, setting $\rho = 1$ initially is practically more favorable for Rx to implement block-wise time synchronization.} For given ${\bf{h}}$ and $\tau$, the amount of harvested energy normalized by $T$, i.e., \emph{average harvested power}, in a transmission block can be derived using ${{\bf{S}}_{\bf{x}}}$ as \begin{equation}\label{Eq_Energy_TimeSharing} {\begin{array}{l} {Q^{(\rm{P})}}\left( {H,\tau } \right) = \left( {1 - \tau } \right)\zeta {\mathbb{E}}\left[ {{{\left\| {\sqrt{\theta} \, {{\bf{h}}^T}{\bf{x}}\left( i \right)} \right\|}^2}} \right] \\ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, = \left( {1 - \tau } \right)\zeta \theta PH, \\ \end{array}} \end{equation} where $H = \frac{1}{{{N_t}}}\left\| {\bf{h}} \right\|^2$ is the normalized average channel power, and $0 < \zeta \le 1$ is a constant reflecting the loss in the energy transducer when the harvested energy is converted to electrical energy to be stored. In (\ref{Eq_Energy_TimeSharing}), it has been assumed that the power harvested due to the receiver noise is negligible and thus is ignored. It is further assumed that $\zeta = 1$ in the sequel for notational brevity. The structure of Tx for PS is also shown in Fig. \ref{Fig_OOS_SystemModel}. Note that with PS, Rx can adjust $\tau$ based on its energy and rate requirements, as well as the channel condition.
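The expectation step in (\ref{Eq_Energy_TimeSharing}) is easy to check numerically. The sketch below uses illustrative parameter values (and Gaussian signaling purely for the simulation): it draws i.i.d. transmit vectors with covariance $\frac{P}{N_t}{\bf{I}}_{N_t}$, harvests over the last $(1-\tau)$ portion of the block, and compares the block-averaged harvested power against $(1-\tau)\zeta\theta P H$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters: N_t antennas, power P split equally across
# antennas, path loss theta, harvesting efficiency zeta = 1 (as assumed),
# and ID time fraction tau.
N_t, P, theta, zeta, tau = 4, 2.0, 1e-3, 1.0, 0.6

# One quasi-static channel realization h and its normalized power H.
h = (rng.normal(size=N_t) + 1j * rng.normal(size=N_t)) / np.sqrt(2)
H = np.linalg.norm(h) ** 2 / N_t

# i.i.d. symbols with covariance (P / N_t) * I, as in the system model.
n_sym = 200_000
x = np.sqrt(P / (2 * N_t)) * (rng.normal(size=(n_sym, N_t))
                              + 1j * rng.normal(size=(n_sym, N_t)))
rx_power = np.abs(np.sqrt(theta) * x @ h) ** 2   # received signal power per symbol

# EH mode occupies the last (1 - tau) of the block; eq. (3) predicts the
# block-averaged harvested power (1 - tau) * zeta * theta * P * H.
n_id = int(tau * n_sym)
Q_mc = np.sum(rx_power[n_id:]) / n_sym
Q_eq3 = (1 - tau) * zeta * theta * P * H
print(Q_mc, Q_eq3)                               # Monte Carlo vs. closed form
```

The Monte Carlo average matches the closed form because $\mathbb{E}\bigl[|{\bf h}^T {\bf x}(i)|^2\bigr] = \frac{P}{N_t}\|{\bf h}\|^2 = PH$ for the assumed input covariance.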
Since Tx keeps sending information symbols while Rx determines $\tau$ for switching between ID and EH modes based on its own channel quality, Rx observes an erasure AWGN channel and thus an erasure code \cite{Alon} should be employed at Tx for channel coding.\footnote{This is especially useful for the multicast network, where receivers can set different values of $\tau$ for decoding common information sent by the transmitter, based on their individual channel conditions and energy requirements.} The bit stream to be transmitted during a transmission block is thus first encoded by an erasure code. A space-time (ST) code is then applied to modulate the output bits from the erasure-code encoder, and the modulated symbols are transmitted by $N_t$ antennas. We consider a ST code of length $L$, denoted by matrix ${\bf{X}}^{(\rm{P})} \in {{\mathbb{C}}^{L \times {N_t}}}$. It is assumed that ${\bf{X}}^{(\rm{P})}$ is a capacity-achieving ST code.\footnote{The Alamouti code \cite{Alamouti} is known to be capacity-achieving when $N_t = 2$. For $N_t > 2$, a capacity-achieving ST code has not yet been found in general. In this paper, however, a capacity-achieving ST code is assumed even when $N_t > 2$ to provide a performance upper bound for the system under consideration.} Tx transmits a sequence of ${\bf{X}}^{(\rm{P})}$'s in each transmission block. Considering ${\bf{X}}^{(\rm{P})}$ with $L$ consecutive transmitted symbols from each antenna, (\ref{Eq_ReceivedSignal_General}) is modified as \begin{equation}\label{Eq_ReceivedSignal_OOS} {{\bf{y}} = \sqrt{\theta}\,{{\bf{X}}^{({\rm{P}})}}{\bf{h}} + {\bf{z}},} \end{equation} where ${\bf{y}} \in {{\mathbb{C}}^{L \times 1}}$ and ${\bf{z}} \in {{\mathbb{C}}^{L \times 1}}$ denote the received signal vector and noise vector, respectively, and ${\bf{z}} \sim { \mathcal{CN}}\left( {{\bf{0}},\sigma^2 {{\bf{I}}_L}} \right)$.
Since ${\bf{X}}^{(\rm{P})}$ is assumed to be a capacity-achieving ST code, the achievable rate of the channel in (\ref{Eq_ReceivedSignal_OOS}) can be shown to be equivalent to that of a MISO channel $\tilde{\bf{h}} = \sqrt{\theta}\,{\bf{h}}$ with input covariance matrix ${{\bf{S}}_{\bf{x}}} = \frac{P}{{{N_t}}}{{\bf{I}}_{{N_t}}}$. Assume that the number of ST coded blocks transmitted in each block is sufficiently large such that $\tau T$ is approximately an integer multiple of the ST block duration for any value of $\tau$. For given ${\bf{h}}$ and $\tau$, the information rate for PS can thus be expressed as \begin{equation}\label{Eq_Rate_TimeSharing} {{R^{(\rm{P})}}\left( H, \tau \right) = \tau{\log _2}\left( {1 + \frac{\theta P H}{\sigma^2}} \right).} \end{equation} Note that ${R^{(\rm{P})}}\left( H, \tau \right)$ is achievable when $N_t \le 2$, but is in general an upper bound on the achievable rate when $N_t > 2$ for given $\bf{h}$ and $\tau$. \subsection{Proposed Scheme: Threshold Switching}\label{Prof_TBS} \begin{figure} \centering \includegraphics[width=0.52\columnwidth]{TBS_SystemModel} \caption{Transmitter and receiver structures for threshold switching (TS).} \label{Fig_TBS_SystemModel} \end{figure} As shown in Fig. \ref{Fig_TBS_SystemModel}, the TS scheme is designed to take advantage of the received signal power fluctuations induced by transmit random beamforming within each transmission block for opportunistic EH/ID mode switching, even with a constant MISO channel $\bf{h}$. For this purpose, each transmission block is further divided into $K$ sub-blocks, each consisting of one or more ST codewords, and artificial channel fading over different sub-blocks is generated by multi-antenna random beamforming at Tx. Furthermore, at the $k$th sub-block, $k = 1, \cdots, K$, Rx determines whether to switch to ID mode or EH mode based on $A\left( k \right)$, which denotes the channel power at the $k$th sub-block normalized by $\theta$ and $P$ (to be specified later).
According to \cite{Liu}, in the presence of received channel power fluctuations, the optimal mode switching rule that achieves the optimal trade-off between the maximum harvested energy and information rate in a transmission block is given by \begin{equation}\label{Eq_Opt_1_Solution} {\rho \left( k \right) = \left\{ {\begin{array}{*{20}{c}} {1,} \\ {0,} \\ \end{array}\begin{array}{*{20}{c}} {\,\,\,{\rm{if}}\,\,A\left( k \right) \le \bar A} \\ {\,\,\,{\rm{otherwise}},} \\ \end{array}} \right.} \end{equation} where $\bar A \ge 0$ is a pre-designed threshold on the normalized channel power $A(k)$. It is noted that choosing EH or ID mode at the $k$th sub-block is determined by the normalized channel power $A\left( k \right)$ as compared to the threshold $\bar A$, or equivalently the received signal power $\theta PA\left(k\right)$ as compared to the threshold $\theta P \bar A$; thus, ID mode is selected, i.e., $\rho \left( k \right) = 1$, if the received signal power is no greater than $\theta P \bar {A}$ and EH mode is selected, i.e., $\rho \left( k \right) = 0$, otherwise. Artificial channel fading over sub-blocks is generated at Tx by using $N$ RBs simultaneously, $1 \le N \le N_t$. Denote the $n$th RB at the $k$th sub-block as ${\boldsymbol{\phi} _n}\left( k \right) \in {{\mathbb C}^{{N_t} \times 1}}$, where ${\mathbb{E}} [ {{\boldsymbol{\phi} _n}\left( k \right){\boldsymbol{\phi} _{n}^{H}}{{\left( {k} \right)}}} ] = \frac{1}{N_t}{{\bf{I}}_{{N_t}}}$ and ${\mathbb{E}} [ {{\boldsymbol{\phi} _n}\left( k \right){\boldsymbol{\phi} _{m}^{H}}{{\left( {j} \right)}}} ] = {{\bf{0}}}$ if $k \ne j$ and/or $n \ne m$. 
Then it follows that $A\left( k \right) = \frac{1}{N}{\left\| {{{\bf{a}}}\left( k \right)} \right\|^2}$, where ${\bf{a}}(k) = {\bf{\Phi}}^T {\left( k \right)}{\bf{h}} \in \mathbb C^{N \times 1}$ is the equivalent MISO channel at the $k$th sub-block generated by ${\bf{\Phi}} \left( k \right) = [ {{\boldsymbol{\phi} _1}( k )\,\,{\boldsymbol{\phi} _2}( k )\,\, \cdots \,\,{\boldsymbol{\phi} _N}( k )} ]$, which is assumed to be a pre-designed pseudo-random sequence and known to all Rxs.\footnote{Each Rx can estimate ${\bf{a}}\left( k \right)$'s without knowledge of ${\bf{\Phi}} \left( k \right)$'s by employing conventional channel estimation over all sub-blocks. However, such an implementation incurs high training overhead. When ${\bf{\Phi}} \left( k \right)$'s are known at all Rxs, on the other hand, each Rx only needs to estimate ${\bf{h}}$ at the beginning of each block to obtain ${\bf{a}}(k)$'s and thus the overhead for channel estimation can be significantly reduced.} Similarly to PS, an erasure code should be employed in the case of TS for channel coding since the set of sub-blocks used for ID according to (\ref{Eq_Opt_1_Solution}) is in general randomly distributed within a transmission block with $\bar A > 0$, and thus the resulting channel from Tx to Rx in ID mode can be modeled by an erasure AWGN channel. In addition, the ST code is applied over $N$ RBs with TS instead of $N_t$ antennas with PS. This is because the use of $N$ RBs transforms the $N_t \times 1$ constant MISO channel ${{\bf{h}}}$ into an $N \times 1$ fading MISO channel specified by ${\bf{a}}\left( k \right)$'s in each transmission block. For all $K$ sub-blocks in TS, we consider the use of a ST code of length $L$ denoted by matrix ${{\bf{X}}^{({\rm{T}})}} \in {{\mathbb{C}}^{{L} \times N}}$.
For convenience, we express ${{\bf{X}}^{({\rm{T}})}} = [ {{\bf{x}}_1^{({\rm{T}})}\,\,{\bf{x}}_2^{({\rm{T}})}\,\, \cdots \,\,{\bf{x}}_L^{({\rm{T}})}} ]^T$, where ${{\bf{x}}_l^{({\rm{T}})}} \in {{\mathbb{C}}^{{N} \times 1}}$, $1 \le l \le L$, denotes the $l$th transmitted signal vector in each ST coded block. The covariance matrix for ${\bf{x}}_l^{({\rm{T}})}$ is given by ${\bf{S}}_{{\bf{x}},l}^{(\rm{T})} = \mathbb E [ {{\bf{x}}_l^{({\rm{T}})}{{( {{\bf{x}}_l^{({\rm{T}})}} )}^H}} ] = \frac{P}{N}{{\bf{I}}_N}$, $\forall l$, to be consistent with ${{\bf{S}}_{\bf{x}}} = \frac{P}{{{N_t}}}{{\bf{I}}_{{N_t}}}$. Similarly to ${\bf{X}}^{(\rm{P})}$ in the case of PS, ${{\bf{X}}^{(\rm{T})}}$ is assumed to be a capacity-achieving ST code for an equivalent MISO channel with $N$ transmit antennas. The received signal at each sub-block is used for either energy harvesting or information decoding according to (\ref{Eq_Opt_1_Solution}). For the $k$th sub-block, the received signal can thus be expressed by modifying (\ref{Eq_ReceivedSignal_General}) as \begin{equation}\label{Eq_RCS_Received_Signal} {\begin{array}{l} {\bf{y}}\left( k \right) = {{\bf{X}}^{(\rm{T})}} {\bf{\Phi}}^{T} \left( k \right) {\tilde{\bf{h}}} + {\bf{z}}\left( k \right) \\ \,\,\,\,\,\,\,\,\,\,\,\,\,\, = \sqrt{\theta}\,{{\bf{X}}^{(\rm{T})}} {\bf{a}}{\left( k \right)} + {\bf{z}}\left( k \right),\\ \end{array}} \end{equation} where ${\bf{y}}\left( k \right) \in {{\mathbb C}^{L \times 1}}$ and ${\bf{z}}\left( k \right) \in {{\mathbb C}^{L \times 1}}$ denote the received signal and noise vectors, respectively, with ${\bf{z}}\left( k \right) \sim {\mathcal{CN}}\left( {{\bf{0}},\sigma^2 {{\bf{I}}_L}} \right)$.
When $\rho \left( k \right) = 0$, the amount of harvested power (i.e., harvested energy normalized by sub-block duration $T/K$) at the $k$th sub-block is derived using ${\bf{S}}_{{\bf{x}},l}^{(\rm{T})}$ as \begin{equation}\label{Eq_Energy_TBS1} {{Q^{(\rm{T})}}\left( k \right) = \frac{1}{L}{\mathbb E}\left[ {{{\left\| \sqrt{\theta}\,{{\bf{X}}^{(\rm{T})}}{{\bf{a}}{{\left( k \right)}}} \right\|}^2}} \right] = \theta P{A\left( k \right)}.} \end{equation} Furthermore, by assuming a capacity-achieving ST code, the achievable rate with TS at the $k$th sub-block when $\rho \left( k \right) = 1$ can be expressed as \begin{equation}\label{Eq_Rate_TBS1} {{R^{(\rm{T})}}\left( k \right) = {\log _2}\left( {1 + \frac{\theta P A\left( k \right)}{\sigma^2}} \right).} \end{equation} \begin{figure} \centering \includegraphics[width=0.6\columnwidth]{HarvestedEnergy_vs_H} \caption{${Q^{(\rm{T})}}\left( {h,N,\bar A} \right)$ vs. $h$ with $P = 30$dBm, $N = 1, 2$, $\theta = 10^{-4}$, and $\bar A = 0.1$, $0.2$, $0.5$.} \label{Fig_HarvestedEnergy_vs_H} \end{figure} The amount of harvested energy in a transmission block is the sum of the energy harvested from all sub-blocks in the EH mode. 
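As a sanity check, the per-sub-block identity $Q^{(\rm{T})}(k) = \theta P A(k)$ in (\ref{Eq_Energy_TBS1}) can be verified by Monte Carlo simulation. The following Python sketch uses illustrative parameter values, and an i.i.d. Gaussian codeword is used only as a stand-in for a capacity-achieving ST code with the required per-row covariance $\frac{P}{N}{\bf I}_N$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, P, theta = 2, 4, 1.0, 1.0                    # illustrative parameters
a = rng.normal(size=N) + 1j * rng.normal(size=N)   # one fixed equivalent channel a(k)
A = np.linalg.norm(a) ** 2 / N                     # sub-block channel power A(k)

# Draw many ST codewords X (L x N) whose rows x_l satisfy E[x_l x_l^H] = (P/N) I_N.
trials = 100_000
X = (rng.normal(size=(trials, L, N)) + 1j * rng.normal(size=(trials, L, N))) \
    * np.sqrt(P / (2 * N))

# Q(k) = (1/L) E[ || sqrt(theta) X a ||^2 ]  vs. the closed form theta * P * A(k)
Q_mc = theta * np.mean(np.sum(np.abs(X @ a) ** 2, axis=1)) / L
Q_theory = theta * P * A
```

The empirical average converges to the closed form as the number of trials grows, independently of the particular codeword distribution, since only the per-row covariance enters.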
Assuming that $K \to \infty$, the average harvested power in a transmission block for given $N$ RBs, threshold $\bar A$, and the realization of the normalized MISO channel $\bf{h}$ with $H = h$ can be obtained from (\ref{Eq_Energy_TBS1}) as \[ {{Q^{(\rm{T})}}\left( {h,N,\bar A} \right) = \frac{1}{T}\mathop {\lim }\limits_{K \to \infty } \sum\limits_{k = 1}^K {\left( {1 - \rho \left( k \right)} \right)\frac{T \times {Q^{(\rm{T})}}\left( k \right)}{K}}} \] \begin{equation}\label{Eq_Energy_TBS_1_1} \,\,\,\,\,\, = \mathbb{E}\left[ {\left( {1 - \rho \left( k \right)} \right)\theta PA\left( k \right)} \right].\, \end{equation} In this section, Gaussian RBs\footnote{Alternative RB designs will be studied later in Section \ref{Sec:OtherBeams}.} are assumed to generate artificial channel fading, i.e., ${\boldsymbol{\phi} _n}\left( k \right) \sim {\mathcal{CN}} ( {{\bf{0}},\frac{1}{N_t}{{\bf{I}}_{{N_t}}}} )$. It can be easily verified that ${\bf{a}}\left( k \right) \sim {\mathcal{CN}}\left( {{\bf{0}},H{{\bf{I}}_N}} \right)$ for a given $H$, and $A \left( k \right)$ is thus a chi-square random variable with $2N$ degrees-of-freedom. 
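The claimed distribution of $A(k)$ under Gaussian RBs is easy to check empirically. A minimal sketch (numpy, illustrative parameters): with ${\bf a}(k) = {\bf \Phi}^T(k){\bf h}$, the power $A(k)$ should follow a Gamma distribution with shape $N$ and scale $h/N$ (i.e., chi-square with $2N$ degrees of freedom up to scaling), hence mean $h$ and variance $h^2/N$:

```python
import numpy as np

rng = np.random.default_rng(1)
Nt, N, h = 4, 2, 0.8                      # illustrative parameters
hvec = rng.normal(size=Nt) + 1j * rng.normal(size=Nt)
hvec *= np.sqrt(h * Nt) / np.linalg.norm(hvec)   # normalize so H = ||h||^2 / Nt = h

# Gaussian random beams: each column phi_n(k) ~ CN(0, I/Nt)
K = 100_000
Phi = (rng.normal(size=(K, Nt, N)) + 1j * rng.normal(size=(K, Nt, N))) / np.sqrt(2 * Nt)
a = np.einsum('ktn,t->kn', Phi, hvec)     # a(k) = Phi(k)^T h, one row per sub-block
A = np.sum(np.abs(a) ** 2, axis=1) / N    # A(k) = ||a(k)||^2 / N

# A should have mean h and variance h^2 / N
```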
With $N$ RBs and conditioned on a given normalized MISO channel realization ${\bf{h}}$ with $H = h$, the probability density function (PDF) of $A := A\left( k \right)$, $\forall k,$ and the cumulative distribution function (CDF) of $A$ are given, respectively, by \cite{Proakis} \begin{equation}\label{Eq_TBS_pdf} {{f_{A\left| H \right.}^{(N)}}\left( {a\left| h \right.} \right) = \frac{1}{{{{\left( {h/N} \right)}^N}\Gamma \left( N \right)}}{a^{N - 1}}{e^{ - \left( {N/h} \right)a}},} \end{equation} \begin{equation}\label{Eq_TBS_CDF} {{F_{A\left| H \right.}^{(N)}}\left( {a \left| h \right.} \right) = 1 - \frac{{\Gamma \left( {N,\frac{{Na}}{h}} \right)}}{{\Gamma \left( {N} \right)}} ,} \end{equation} where $\Gamma \left( x \right) = \int_0^\infty {{t^{x - 1}}{e^{ - t}}dt}$ and $\Gamma \left( {\alpha ,x} \right) = \int_x^\infty {{t^{\alpha - 1}}{e^{ - t}}dt}$ represent the Gamma function and incomplete Gamma function, respectively. From (\ref{Eq_Energy_TBS_1_1}) and (\ref{Eq_TBS_pdf}), ${Q^{(\rm{T})}}\left( {h,N,\bar A} \right)$ with Gaussian RBs can thus be obtained as \begin{equation}\label{Eq_Energy_TBS2} {{Q^{(\rm{T})}}\left( {h,N,\bar A} \right) = \int_{\bar A}^\infty {\theta Pa{f_{A\left| H \right.}^{(N)}}\left( {a\left| h \right.} \right)da}} \end{equation} \begin{equation}\label{Eq_Energy_TBS3} {\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, = \theta Ph\frac{{\Gamma \left( {N + 1,\frac{{N\bar A}}{h}} \right)}}{{\Gamma \left( {N + 1} \right)}}, } \end{equation} where (\ref{Eq_Energy_TBS3}) can be obtained by applying (\ref{Eq_TBS_pdf}) and \cite[3.351-2]{TableOfIntegral} to (\ref{Eq_Energy_TBS2}). For an illustration, Fig. 
\ref{Fig_HarvestedEnergy_vs_H} shows ${Q^{(\rm{T})}}\left( {h,N,\bar A} \right)$ versus different values of $h$ when $N = 1$, $2$ and $\bar A = 0.1$, $0.2$, $0.5$, assuming 40dB signal power attenuation due to large-scale fading, i.e., $\theta = 10^{-4}$, with the carrier frequency and the distance between Tx and Rx given by $900$MHz and $5$ meters. The transmit power at Tx is set to be $P = 30$dBm. It is observed that ${Q^{(\rm{T})}}\left( {h,N,\bar A} \right)$ decreases with increasing $\bar A$ when $N$ and $h$ are both fixed, which is in accordance with (\ref{Eq_Energy_TBS3}). Moreover, when $N$ and $\bar A$ are both fixed, ${Q^{(\rm{T})}}\left( {h,N,\bar A} \right)$ is observed to increase monotonically with $h$. This is because ${F_{A\left| H \right.}^{(N)}}\left( {\bar A \left| h \right.} \right)$ in (\ref{Eq_TBS_CDF}) decreases with increasing $h$, and thus $1 - F_{A\left| H \right.}^{(N)}( {\bar A\left| h \right.} )$, which is the percentage of the received sub-blocks allocated to EH mode in each block, increases. Thus, the amount of harvested power in each block increases with $h$ thanks to the increased number of sub-blocks assigned to EH mode, as well as the increased average channel power $h$, as can be inferred from (\ref{Eq_Energy_TBS3}). Furthermore, when $h$ and $\bar A$ are both fixed, ${Q^{(\rm{T})}}\left( {h,N,\bar A} \right)$ is observed to decrease with increasing $N$ when $h$ is small, but increase with $N$ when $h$ is sufficiently large. This is because, as inferred from (\ref{Eq_TBS_pdf}) and (\ref{Eq_TBS_CDF}), the artificial channel fading is more substantial when smaller number of RBs, $N$, is used, although the same average channel power is given as $h$. Given $1 \le N \le N_t$, it can be shown that $F_{A\left| H \right.}^{(N)}( {\bar A\left| h \right.} )$ in (\ref{Eq_TBS_CDF}) increases with $N$ when $h$ is small, and thus larger power is harvested with smaller number of RBs. 
In contrast, it can also be shown that $F_{A\left| H \right.}^{(N)}( {\bar A\left| h \right.} )$ decreases with increasing $N$ when $h$ is larger than a certain threshold, and thus more power is harvested with larger number of RBs. Similarly, we can verify that ${Q^{(\rm{T})}}\left( {h,N,\bar A} \right)$ increases with $N$ when $\bar A$ is small, but decreases with increasing $N$ when $\bar A$ is sufficiently large. Next, the achievable rate in a block for given $N$, $\bar A$, and $h$ can be derived from (\ref{Eq_Rate_TBS1}) and (\ref{Eq_TBS_pdf}) as \[ {R^{(\rm{T})}}\left( {h,N,\bar A} \right) = {\mathbb E}\left[ \rho \left( k \right) {{\log }_2}\left( 1 + \frac{\theta P A \left( k \right)}{\sigma^2} \right) \right] \] \begin{equation}\label{Eq_Rate_TBS2} {\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, = \int_0^{\bar A} {{{\log }_2}\left( 1 + \frac{\theta P}{\sigma^2} a \right){f_{A\left| H \right.}^{(N)}}\left( {a\left| h \right.} \right)da.}} \end{equation} With ${f_{A\left| H \right.}^{(N)}}\left( {a\left| h \right.} \right)$ given in (\ref{Eq_TBS_pdf}), it is in general difficult to obtain a unified closed-form expression of (\ref{Eq_Rate_TBS2}) for arbitrary values of $N$. However, it is possible to derive closed-form expressions for (\ref{Eq_Rate_TBS2}) for some special values of $N$. For example, ${R^{(\rm{T})}}\left( {h,1,\bar A} \right)$ and ${R^{(\rm{T})}}\left( {h,2,\bar A} \right)$ for $N = 1$ and $2$, respectively, can be derived in closed-form in Appendix \ref{App_Rate_N1_Derivation}. Fig. \ref{Fig_AchievableRate_vs_H} shows ${R^{({\rm{T}})}}\left( {h,N,\bar A} \right)$ versus different values of $h$ when $N = 1$, $2$ and $\bar A = 0.1$, $0.2$, $0.5$ with the same setup as for Fig. \ref{Fig_HarvestedEnergy_vs_H} with $\theta = 10^{-4}$ and $P = 30$dBm. 
It is further assumed that the bandwidth of the transmitted signal is $10$MHz, and that the receiver noise is white Gaussian with power spectral density $-110$dBm/Hz, i.e., $-40$dBm over the entire bandwidth of $10$MHz. It is observed that ${R^{(\rm{T})}}\left( {h,N,\bar A} \right)$ increases with $\bar A$ when $N$ and $h$ are both fixed, which is in accordance with (\ref{Eq_Rate_TBS2}). Moreover, by an argument opposite to that given for Fig. \ref{Fig_HarvestedEnergy_vs_H}, when $h$ and $\bar A$ are both fixed, ${R^{(\rm{T})}}\left( {h,N,\bar A} \right)$ is observed to increase with $N$ when $h$ is small or $\bar A$ is large, but decrease with increasing $N$ when $h$ is sufficiently large or $\bar A$ is sufficiently small. \begin{figure} \centering \includegraphics[width=0.6\columnwidth]{AchievableRate_vs_H} \caption{${R^{(\rm{T})}}\left( {h, N, \bar A} \right)$ vs. $h$ with $P = 30$dBm, $N = 1, 2$, $\theta = 10^{-4}$, and $\bar A = 0.1$, $0.2$, $0.5$.} \label{Fig_AchievableRate_vs_H} \end{figure} However, different from ${Q^{(\rm{T})}}\left( {h,N,\bar A} \right)$ in Fig. \ref{Fig_HarvestedEnergy_vs_H}, which is a monotonically increasing function of $h$, it is observed in Fig. \ref{Fig_AchievableRate_vs_H} that ${R^{(\rm{T})}}\left( {h,N,\bar A} \right)$ in general first increases with $h$, and then decreases with increasing $h$ for given $N$ and $\bar A$. The reason is as follows. When $h \to 0$, from (\ref{Eq_Rate_TBS2}), we have ${R^{(\rm{T})}}\left( {h,N,\bar A} \right) \to {{{\log }_2}\left( {1 + \frac{\theta P}{\sigma^2} h} \right)}$; thus, ${R^{(\rm{T})}}\left( {h,N,\bar A} \right)$ increases with $h$. However, when $h \to \infty$, ${f_{A\left| H \right.}^{(N)}}\left( {a\left| h \right.} \right) \to 0$ for any finite $0 \le a \le \bar A$, and thus ${R^{(\rm{T})}}\left( {h,N,\bar A} \right) \to 0$; therefore, ${R^{(\rm{T})}}\left( {h,N,\bar A} \right)$ decreases with increasing $h$ when $h$ is sufficiently large. 
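The closed form (\ref{Eq_Energy_TBS3}) and the rate integral (\ref{Eq_Rate_TBS2}) are straightforward to evaluate numerically. A sketch with illustrative parameters ($\theta P/\sigma^2 = 30$dB, matching the figure setup), using scipy's regularized upper incomplete gamma function for $\Gamma(N+1, \cdot)/\Gamma(N+1)$:

```python
import numpy as np
from scipy.special import gammaincc, gamma as Gamma
from scipy.integrate import quad

theta, P, sigma2 = 1e-4, 1.0, 1e-7       # 30 dBm transmit power, theta*P/sigma2 = 30 dB
N, h, Abar = 2, 1.0, 0.2                 # illustrative parameters

# Conditional pdf of A given H = h: Gamma with shape N, scale h/N
f = lambda a: a**(N - 1) * np.exp(-N * a / h) / ((h / N)**N * Gamma(N))

# Harvested power: definition (integral over [Abar, inf)) vs. closed form
Q_int, _ = quad(lambda a: theta * P * a * f(a), Abar, np.inf)
Q_cf = theta * P * h * gammaincc(N + 1, N * Abar / h)   # = Gamma(N+1, N*Abar/h)/Gamma(N+1)

# Achievable rate: numerical integral over the ID region [0, Abar]
R, _ = quad(lambda a: np.log2(1 + theta * P * a / sigma2) * f(a), 0, Abar)
```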
\subsection{Rate-Energy Performance Comparison}\label{NumericalExample1} As in \cite{Liu} and \cite{Zhang}, there exist rate-energy (R-E) trade-offs in both PS and TS schemes for information and energy transfer. The R-E trade-offs in PS and TS can be characterized by setting different values of $\tau$ and $\bar A$, respectively. Fig. \ref{Fig_R_E_Tradeoff_Nt_2} shows the R-E trade-offs in PS and TS for $N_t = 2$ and a constant MISO channel ${\bf{h}} = {\left[ {1.0\,\,\,\,0.56} \right]^T}$, with the same channel setup as for Figs. \ref{Fig_HarvestedEnergy_vs_H} and \ref{Fig_AchievableRate_vs_H}. For PS, ${\bf{X}}^{(\rm{P})}$ is generated by the Alamouti code with $L = 2$ \cite{Alamouti}. For TS, a scalar code cascaded with one single RB is applied when $N = 1$, while the Alamouti code with two RBs is applied when $N=2$. The harvested power is denoted by $Q$. It is observed that TS yields the best R-E trade-off with $N=1$ when $Q_{th}^{\left( 1 \right)} \le Q \le \theta P h$, and with $N=2$ when $Q_{th}^{\left( 2 \right)} \le Q < Q_{th}^{\left( 1 \right)}$, while PS yields the best R-E trade-off when $0 \le Q < {Q_{th}^{(2)}}$, where $Q_{th}^{\left( 1 \right)}$ and $Q_{th}^{\left( 2 \right)}$ are shown in Fig. \ref{Fig_R_E_Tradeoff_Nt_2}. Note that at $Q = 0$, i.e., when no EH is required as in the conventional MISO system with WIT only, PS achieves a higher rate than TS since the artificial channel fading induced by random beamforming degrades the AWGN channel capacity. However, when the harvested power exceeds certain thresholds, i.e., $Q_{th}^{(2)}$ and $Q_{th}^{(1)}$, TS with $N = 2$ RBs and $N = 1$ RB, respectively, achieves the best rate performance for a given power harvesting target. This demonstrates the unique usefulness of random beamforming in a multi-antenna SWIPT system even with constant AWGN channels. 
\begin{figure} \centering \includegraphics[width=0.6\columnwidth]{R_E_Tradeoff_Nt_2} \caption{Trade-offs between achievable rate and harvested power when $P=30$dBm, $N_t = 2$, $\theta = 10^{-4}$, and ${\bf{h}} = {\left[ {1.0\,\,\,\,0.56} \right]^T}$.} \label{Fig_R_E_Tradeoff_Nt_2} \end{figure} It is worth noting that for TS a larger information rate is achieved with $N = 1$ when $Q_{th}^{\left( 1 \right)} \le Q \le \theta P h$, but with $N=2$ otherwise. This can be explained as follows. For a given $h$, it can be shown from (\ref{Eq_Energy_TBS3}) that $\bar A \to 0$ when $Q \to \theta P h$. Thus, with sufficiently small $\bar A$, we have ${Q^{(\rm{T})}}\left( {h,1,\bar A} \right) \approx {Q^{(\rm{T})}}\left( {h,2,\bar A} \right)$ (note that ${Q^{(\rm{T})}}\left( {h,2,\bar A} \right)$ is slightly larger than ${Q^{(\rm{T})}}\left( {h,1,\bar A} \right)$ for small $\bar A$, as discussed for Fig. \ref{Fig_HarvestedEnergy_vs_H}; but the gap between them is negligible, as shown in Fig. \ref{Fig_HarvestedEnergy_vs_H} with $\bar A = 0.1$). On the other hand, with small $\bar A$, it can be shown from (\ref{Eq_TBS_pdf}) that ${f_{A\left| H \right.}^{(1)}}\left( {a\left| h \right.} \right) > {f_{A\left| H \right.}^{(2)}}\left( {a\left| h \right.} \right)$, $0 \le a \le \bar A$, and thus ${R^{(\rm{T})}}\left( {h,1,\bar A} \right) > {R^{(\rm{T})}}\left( {h,2,\bar A} \right)$ from (\ref{Eq_Rate_TBS2}), as discussed for Fig. \ref{Fig_AchievableRate_vs_H}. Therefore, TS with $N = 1$ achieves a larger information rate than with $N = 2$ when $Q$ is sufficiently large. In contrast, as $Q \to 0$, we have $\bar A \to \infty$ from (\ref{Eq_Energy_TBS3}). Then, it can be shown that ${R^{(\rm{T})}}\left( {h,1,\infty} \right) < {R^{(\rm{T})}}\left( {h,2,\infty} \right)$ since the ergodic capacity of a fading MISO channel increases with the number of transmit antennas. Therefore, for TS a larger information rate is achieved with $N = 2$ than with $N = 1$ when $Q$ is smaller than a certain threshold. 
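The R-E trade-off curve for TS can be traced numerically by sweeping the threshold $\bar A$. A minimal sketch (scipy, illustrative values: $h \approx \|[1.0\,\,0.56]^T\|^2/2$ and the 30dBm setup) computing one (rate, power) point per threshold; the rate should rise and the harvested power fall monotonically as $\bar A$ grows:

```python
import numpy as np
from scipy.special import gammaincc, gamma as Gamma
from scipy.integrate import quad

theta, P, sigma2, h = 1e-4, 1.0, 1e-7, 0.657   # h ~= ||[1.0, 0.56]||^2 / 2

def re_point(N, Abar):
    """One (rate, harvested power) point on the TS trade-off for threshold Abar."""
    f = lambda a: a**(N - 1) * np.exp(-N * a / h) / ((h / N)**N * Gamma(N))
    R, _ = quad(lambda a: np.log2(1 + theta * P * a / sigma2) * f(a), 0, Abar)
    Q = theta * P * h * gammaincc(N + 1, N * Abar / h)
    return R, Q

curve = [re_point(1, Abar) for Abar in np.linspace(0.01, 5.0, 20)]
rates = [r for r, q in curve]
powers = [q for r, q in curve]
```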
\section{Performance Analysis in Fading MISO Channel}\label{Sec:PerformanceAnalysis} In this section, the R-E performances of TS and PS schemes are further analyzed in fading MISO channels. It is assumed that the small-scale MISO channel from Tx to each Rx follows independent and identically distributed (i.i.d.) Rayleigh fading with ${\bf{h}} \sim \mathcal{CN}\left( {{\bf{0}},{{\bf{I}}_{{N_t}}}} \right)$, and thus $H = \frac{1}{{{N_t}}}\left\| {\bf{h}} \right\|^2$ is a chi-square random variable with $2N_t$ degrees-of-freedom, with the following PDF and CDF \cite{Proakis}: \begin{equation}\label{Eq_pdf_H} {{f_H}\left( h \right) = \frac{{{N_t}^{{N_t}}}}{{\Gamma \left( {{N_t}} \right)}}{h^{{N_t} - 1}}{e^{ - {N_t}h}},} \end{equation} \begin{equation}\label{Eq_CDF_H} {{F_H}\left( h \right) = 1 - \frac{{\Gamma \left( {{N_t},{N_t}h} \right)}}{{\Gamma \left( {{N_t}} \right)}}.} \end{equation} In practice, it is possible for Rxs to change $\bar A$ for TS or $\tau$ for PS with the fading MISO channel $\bf{h}$ for different transmission blocks; however, this incurs additional complexity at Rx. For simplicity, it is assumed in this paper that $\bar A$ and $\tau$ are set to be fixed values for all Rxs over different realizations of $\bf{h}$ for a given $\theta$. \subsection{Achievable Average Information Rate} We consider that the performance of information transfer is measured by the achievable average rate over fading channels. Given $N$ and $\bar A$, the achievable average rate of TS is denoted by ${\bar R^{\left( {\rm{T}} \right)}}\left( {N,\bar A} \right) = {{\mathbb E_H}\left[ {{R^{\left( {\rm{T}} \right)}}\left( {h,N,\bar A} \right)} \right]}$, where ${{R^{\left( {\rm{T}} \right)}}\left( {h,N,\bar A} \right)}$ is given by (\ref{Eq_Rate_TBS2}) for a given $h$. 
However, it is difficult to obtain the closed-form expressions for ${\bar R^{\left( {\rm{T}} \right)}}\left( {N,\bar A} \right)$'s using (\ref{Eq_Rate_TBS2}) and (\ref{Eq_pdf_H}) for any given $N$, $1 \le N \le N_t$. \begin{figure} \centering \includegraphics[width=0.6\columnwidth]{AchievableRate_Exact_Approx} \caption{Plot of ${R^{(\rm{T})}}\left( {h,N,\bar A} \right)$ with $N = 1$, $h = 0.1$ and $1.0$, $\bar A = 0.05$ and $1.0$.} \label{Fig_AchievableRate_Exact_Approx} \end{figure} Note that in practice, SWIPT systems usually operate with large transmit power $P$ due to the requirement of energy transfer, resulting in large $\frac{\theta P}{\sigma^2}$, (e.g., $\frac{\theta P}{\sigma^2} = 30$dB with the setup for Fig. \ref{Fig_AchievableRate_vs_H}). It is also worth noting that as $P \to \infty$, ${\log _2}\left( {1 + \frac{\theta P}{\sigma^2} a} \right) = {\log _2}\left( {\frac{\theta P a}{\sigma^2}} \right) + o\left( {{{\log }_2} P } \right)$ for given $a > 0$,\footnote{$f\left( x \right) = o\left( {g\left( x \right)} \right)$ as $x \to x_0$ represents that $\mathop {\lim }\limits_{x \to {x_0}} \frac{{f\left( x \right)}}{{g\left( x \right)}} = 0$, meaning intuitively that $f\left( x \right) \ll g\left( x \right)$ as $x \to x_0$.} resulting in $\mathop {\lim }\limits_{P \to \infty } {\log _2}(1 + \frac{\theta P a}{\sigma^2}) = {\log _2}(\frac{\theta P a}{\sigma^2})$. 
Therefore, $\mathop {\lim }\limits_{P \to \infty } {R^{(T)}}\left( {h,N,\bar A} \right) = \mathop {\lim }\limits_{P \to \infty } \int_0^{\bar A} {{{\log }_2}\left( {\frac{\theta P a}{\sigma^2}} \right)f_{A\left| H \right.}^{(N)}\left( {a\left| h \right.} \right)da}$, and as $P$ is sufficiently large, ${R^{(\rm{T})}}\left( {h,N,\bar A} \right)$ in (\ref{Eq_Rate_TBS2}) with $\bar A > 0$ can be approximated as \begin{equation}\label{Eq_Rate_TBS3} {{R^{(\rm{T})}}\left( {h,N,\bar A} \right) \approx {F_{A\left| H \right.}^{(N)}}\left( {\bar A \left| h \right.} \right){\log _2}\left( \frac{\theta P}{\sigma^2} \right) + C_0 \left( {h,N,\bar A} \right),} \end{equation} where ${F_{A\left| H \right.}}\left( {\bar A \left| h \right.} \right) = \int_0^{\bar A} {{f_{A\left| H \right.}}\left( {a \left| h \right.} \right)da}$ and $C_0 \left( {h,N,\bar A} \right) = \int_0^{\bar A} {{{\log }_2}\left( a \right){f_{A\left| H \right.}}\left( {a\left| h \right.} \right)da}$, which is a constant not related to $P$. Please refer to Appendix \ref{App_C0_Derivation} for detailed derivation of $C_0 \left( {h,N,\bar A} \right)$. Note that the right-hand side of (\ref{Eq_Rate_TBS3}) is a lower bound on ${R^{(\rm{T})}}\left( {h,N,\bar A} \right)$, but approximates ${R^{(\rm{T})}}\left( {h,N,\bar A} \right)$ tightly with sufficiently large $P$. Fig. \ref{Fig_AchievableRate_Exact_Approx} shows ${R^{(\rm{T})}}\left( {h,N,\bar A} \right)$ and its approximation by (\ref{Eq_Rate_TBS3}) versus $P$ for different values of $h$ and $\bar A$ with the same setup as for Fig. \ref{Fig_AchievableRate_vs_H} and $N = 1$. It is observed that the approximation in (\ref{Eq_Rate_TBS3}) is more accurate as $h$ and/or $\bar A$ increases. It is also observed that the gap between the achievable rate and its approximation becomes negligible when $P \ge 30$dBm even with moderate values of $h = 0.1$ and $\bar A = 0.05$. 
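The tightness of the lower-bound approximation (\ref{Eq_Rate_TBS3}) can be checked directly. A sketch (scipy) with the moderate illustrative values $N = 1$, $h = 0.1$, $\bar A = 0.05$ quoted in the text; the gap between the exact rate and the approximation is nonnegative and shrinks as $P$ grows:

```python
import numpy as np
from scipy.special import gammainc, gamma as Gamma
from scipy.integrate import quad

theta, sigma2 = 1e-4, 1e-7
N, h, Abar = 1, 0.1, 0.05                # moderate values, as in the text

f = lambda a: a**(N - 1) * np.exp(-N * a / h) / ((h / N)**N * Gamma(N))
F_Abar = gammainc(N, N * Abar / h)       # F_{A|H}(Abar | h), regularized lower gamma
C0, _ = quad(lambda a: np.log2(a) * f(a), 0, Abar)   # P-independent constant term

def gap(P):
    exact, _ = quad(lambda a: np.log2(1 + theta * P * a / sigma2) * f(a), 0, Abar)
    approx = F_Abar * np.log2(theta * P / sigma2) + C0
    return exact - approx                # nonnegative, since the RHS is a lower bound

gap30, gap50 = gap(1.0), gap(100.0)      # 30 dBm and 50 dBm transmit power
```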
With the approximation of ${R^{(\rm{T})}}\left( {h,N,\bar A} \right)$ by (\ref{Eq_Rate_TBS3}), we can characterize the asymptotic behavior of ${\bar R^{\left( {\rm{T}} \right)}}\left( {N,\bar A} \right)$ as $P$ becomes large by investigating its pre-log scaling factor, which is given by the following proposition. \begin{proposition}\label{Proposition_Avg_Scaling} Given $1 \le N \le N_t$ and $\bar A \ge 0$, the achievable average rate for TS over the i.i.d. Rayleigh fading MISO channel is obtained as ${\bar R^{\left( {\rm{T}} \right)}}\left( {N,\bar A} \right) = {\Delta^{\left( {\rm{T}} \right)}}\left( {N,\bar A} \right){\log _2}\left( P \right) + o\left( {\log _2} \, P \right)$ as $P \to \infty$, where \begin{equation}\label{Eq_TBS_AvgRate_Scaling} {{\Delta^{\left( {\rm{T}} \right)}}\left( {N,\bar A} \right) \buildrel \Delta \over = \mathop {\lim }\limits_{P \to \infty } \frac{{\bar R^{\left( {\rm{T}} \right)}}\left( {N,\bar A} \right)}{{{{\log }_2}\, P}} = {F_A^{(N)}}\left( \bar A \right),} \end{equation} with ${F_A^{(N)}}\left( a \right) = \mathbb E_H \left[{F_{A\left| H \right.}^{(N)}}\left( {a\left| h \right.} \right)\right]$ denoting the unconditional CDF of $A$ after averaging over the fading distribution, which can be further expressed as \begin{equation}\label{Eq_Marginal_CDF} {{F_A^{(N)}}\left( a \right) = 1 - \frac{2}{{\Gamma \left( {{N_t}} \right)}}\sum\limits_{k = 0}^{N - 1} {\frac{{{ \left( \beta \left( a \right) \right)^{{N_t} + k}}}}{{k!}}{K_{{N_t} - k}}\left( {2\beta \left( a \right) } \right),}} \end{equation} where $\beta \left( a \right) \buildrel \Delta \over = \sqrt{{N_t}Na}$, and ${K_\delta }\left( x \right)$ denotes the second-kind modified Bessel function \[ {{K_\delta }\left( x \right) = \frac{\pi }{2}\frac{{{I_{ - \delta }}\left( x \right) - {I_\delta }\left( x \right)}}{{\sin \left( {\delta \pi } \right)}},} \] with ${I_\delta }\left( x \right)$ denoting the first-kind modified Bessel function \[ {{I_\delta }\left( x \right) = 
\sum\limits_{m = 0}^\infty {\frac{1}{{m!\Gamma \left( {m + \delta + 1} \right)}}{{\left( {\frac{x}{2}} \right)}^{2m + \delta }}} .} \] \end{proposition} \begin{proof} Please refer to Appendix \ref{App_Proof_Proposition_Avg_Scaling}. \end{proof} \begin{remark}\label{Remark_RateScaling_vs_CDF} In the fading MISO channel, ${F_A^{(N)}}\left( \bar A \right)$ denotes the percentage of sub-blocks allocated to ID mode for TS. From Proposition \ref{Proposition_Avg_Scaling}, it is inferred that ${F_A^{(N)}}\left( \bar A \right)$ is also the pre-log rate scaling factor of the asymptotic achievable average information rate over the MISO fading channel for TS with given $\bar A$ and $N$. \end{remark} Fig. \ref{Fig_Marginal_CDF} shows ${F_A^{(N)}}\left( \bar A \right)$ versus $\bar A$ for TS with $N_t = 4$ when $\bf{h}$ follows i.i.d. Rayleigh fading. From Fig. \ref{Fig_Marginal_CDF}, it is observed that the rate scaling factor ${F_A^{(N)}}\left( \bar A \right)$ for TS decreases with increasing $N$ when $\bar A$ is small, but increases with $N$ when $\bar A$ is sufficiently large. As a result, ${\bar R^{\left( {\rm{T}} \right)}}\left( {N,\bar A} \right)$ scales faster with increasing $P$ for a smaller value of $N$ when $\bar A$ is small, but scales slower with $P$ when $\bar A$ becomes large. \begin{figure} \centering \includegraphics[width=0.65\columnwidth]{Marginal_CDF} \caption{$F_A^{(N)}\left( \bar A \right)$ vs. $\bar A$ when $N_t = 4$.} \label{Fig_Marginal_CDF} \end{figure} On the other hand, the rate scaling factor for PS in the i.i.d. 
Rayleigh fading MISO channel can be determined from (\ref{Eq_Rate_TimeSharing}) and (\ref{Eq_pdf_H}) as \begin{equation}\label{Eq_PreLog_OOS} {{\Delta^{({\rm{P}})}}\left( \tau \right) = \mathop {\lim }\limits_{P \to \infty } \frac{{\mathbb E_H\left[ {{R^{({\rm{P}})}}\left( {h,\tau } \right)} \right]}}{{{{\log }_2} P }} = \tau .} \end{equation} \subsection{Average Harvested Power} In this subsection, we study the average harvested power over the i.i.d. Rayleigh fading MISO channel by TS, defined as $\bar Q^{(\rm{T})}\left( {N,\bar A} \right) = \mathbb E_{H}\left[ {{Q^{\left( {\rm{T}} \right)}}\left( {h,N,\bar A} \right)} \right]$, where ${{Q^{\left( {\rm{T}} \right)}}\left( {h,N,\bar A} \right)}$ is given by (\ref{Eq_Energy_TBS3}). \begin{proposition}\label{Proposition_AvgEnergy} In the i.i.d. Rayleigh fading MISO channel, for given $\bar A$ and $N$, the average harvested power for TS is given by \begin{equation}\label{Eq_AvgEnergy_Slope} {\bar Q^{(\rm{T})}\left( {N,\bar A} \right) = \theta P\frac{{2}}{{\Gamma \left( {{N_t}} \right)}}\sum\limits_{k = 0}^N {\frac{{\left( \beta \left( \bar A\right) \right)^{{{N_t} + k}}}}{{k!}}\sqrt{{ {\frac{{N\bar A}}{{{N_t}}}} }}{K_{{N_t} - k + 1}}\left( {2 \beta \left( \bar A \right) } \right),}} \end{equation} where $\beta \left( a \right)$ and ${K_\delta }\left( x \right)$ are defined in Proposition \ref{Proposition_Avg_Scaling}. \end{proposition} \begin{proof} Please refer to Appendix \ref{Proof_Proposition_AvgEnergy}. \end{proof} For convenience, we refer to ${\Pi ^{({\rm{T}})}}\left( {N,\bar A} \right) = {\bar Q^{({\rm{T}})}}\left( {N,\bar A} \right)/({\theta P})$ as the \emph{power scaling factor} for TS with increasing $P$. Notice that $0 \le {\Pi ^{({\rm{T}})}}\left( {N,\bar A} \right) \le 1$. Fig. \ref{Fig_AvgEnergy_vs_A} shows ${\Pi ^{({\rm{T}})}}\left( {N,\bar A} \right)$ versus different values of $\bar A$ with $N_t = 4$. 
It is observed that the power scaling factor ${\Pi ^{({\rm{T}})}}\left( {N,\bar A} \right)$ for TS behaves in the opposite way to the rate scaling factor ${F_A^{(N)}}\left( \bar A \right)$ in Fig. \ref{Fig_Marginal_CDF}, i.e., ${\Pi ^{({\rm{T}})}}\left( {N,\bar A} \right)$ increases with $N$ when $\bar A$ is small, but decreases with increasing $N$ when $\bar A$ is sufficiently large. As a result, for given $\theta$ and $P$, ${\bar Q^{\left( {\rm{T}} \right)}}\left( {N,\bar A} \right)$ behaves the same as ${\Pi ^{({\rm{T}})}}\left( {N,\bar A} \right)$. \begin{figure} \centering \includegraphics[width=0.6\columnwidth]{AvgEnergy_vs_A} \caption{$\Pi^{(\rm{T})}\left( {N,\bar A} \right)$ vs. $\bar A$ when $N_t = 4$.} \label{Fig_AvgEnergy_vs_A} \end{figure} On the other hand, the power scaling factor for PS in the i.i.d. Rayleigh fading MISO channel can be easily obtained from (\ref{Eq_Energy_TimeSharing}) and (\ref{Eq_pdf_H}) as \begin{equation}\label{AvgEnergy_OOS} {\Pi^{(\rm{P})}\left( {\tau} \right) = {{\mathbb{E}_H\left[ {{Q}^{(\rm{P})}\left( {h,\tau} \right)} \right]}/{\left( \theta P \right)}} = 1 - \tau , \,\,\, 0 \le \tau \le 1. } \end{equation} The rate and power scaling factors characterize the asymptotic rate-energy trade-off as $P \to \infty$. Given $1 \le N \le N_t$, for TS it is easily shown from (\ref{Eq_Marginal_CDF}) and (\ref{Eq_AvgEnergy_Slope}) that the rate scaling factor ${\Delta^{\left( {\rm{T}} \right)}}\left( {N, 0} \right) = 0$ and the power scaling factor ${\Pi^{({\rm{T}})}}( {N, 0} ) = 1$ at $\bar A = 0$, while ${\Delta^{\left( {\rm{T}} \right)}}\left( {N,\infty} \right) = 1$ and ${\Pi^{({\rm{T}})}}( {N,\infty} ) = 0$ as $\bar A \to \infty$. 
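The Bessel-function closed forms in Propositions \ref{Proposition_Avg_Scaling} and \ref{Proposition_AvgEnergy} can be cross-checked against direct numerical integration over the fading distribution of $H$. A sketch (scipy, illustrative $N_t = 4$, $N = 2$); `kv` is scipy's modified Bessel function of the second kind:

```python
import math
import numpy as np
from scipy.special import gammainc, gammaincc, gamma as Gamma, kv
from scipy.integrate import quad

Nt, N, Abar = 4, 2, 0.5                  # illustrative parameters
fH = lambda h: Nt**Nt / Gamma(Nt) * h**(Nt - 1) * np.exp(-Nt * h)   # pdf of H
beta = np.sqrt(Nt * N * Abar)            # beta(Abar) = sqrt(Nt * N * Abar)

# Rate scaling factor F_A^{(N)}(Abar): closed form vs. E_H[ F_{A|H}(Abar|h) ]
F_cf = 1 - 2 / Gamma(Nt) * sum(beta**(Nt + k) / math.factorial(k) * kv(Nt - k, 2 * beta)
                               for k in range(N))
F_num, _ = quad(lambda h: gammainc(N, N * Abar / h) * fH(h), 0, np.inf)

# Power scaling factor Pi^{(T)}(N, Abar): closed form vs.
# E_H[ h * Gamma(N+1, N*Abar/h) / Gamma(N+1) ]
Pi_cf = 2 / Gamma(Nt) * sum(beta**(Nt + k) / math.factorial(k)
                            * np.sqrt(N * Abar / Nt) * kv(Nt - k + 1, 2 * beta)
                            for k in range(N + 1))
Pi_num, _ = quad(lambda h: h * gammaincc(N + 1, N * Abar / h) * fH(h), 0, np.inf)
```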
Note that for TS the distribution of the received channel power $A \left( k \right)$ at each sub-block becomes different according to $N$, and as a result different asymptotic rate-energy trade-off is achieved when $0 < {\Delta^{\left( {\rm{T}} \right)}}\left( {N,\bar A} \right) < 1$ and $0 < {\Pi^{\left( {\rm{T}} \right)}}\left( {N,\bar A} \right) < 1$. To characterize this trade-off, we have the following theorem. \begin{theorem}\label{Theorem_Optimality_N} In the i.i.d. Rayleigh fading MISO channel, given $1 \le N \le N_t$ and $0 < \bar A < \infty$ for TS scheme and $0 < \tau < 1$ for PS scheme, ${\Delta^{\left( {\rm{T}} \right)}}\left( {N,\bar A} \right) > {\Delta^{({\rm{P}})}}\left( \tau \right)$ for a given power scaling factor $0 < {\Pi^{\left( {\rm{T}} \right)}}\left( {N,\bar A} \right) = {\Pi^{({\rm{P}})}}\left( \tau \right) < 1$; furthermore, given $1 \le N < M \le N_t$ and $0 < \bar A_N, \bar A_M < \infty$ for TS schemes, ${\Delta^{\left( {\rm{T}} \right)}}\left( {N,\bar A_N} \right) > {\Delta^{\left( {\rm{T}} \right)}}\left( {M,\bar A_M} \right)$ for a given power scaling factor $0 < {\Pi^{\left( {\rm{T}} \right)}}\left( {N,\bar A_N} \right) = {\Pi^{\left( {\rm{T}} \right)}}\left( {M,\bar A_M} \right) < 1$. \end{theorem} \begin{proof} Please refer to Appendix \ref{App_Proof_Theorem_Optimality_N}. \end{proof} \begin{figure} \centering \includegraphics[width=0.6\columnwidth]{AvgEnergy_vs_PreLogFactor} \caption{Rate vs. power scaling factors with $N_t = 4$.} \label{Fig_AvgEnergy_vs_PreLogFactor} \end{figure} Fig. \ref{Fig_AvgEnergy_vs_PreLogFactor} shows the rate scaling factor (${\Delta^{\left( {\rm{T}} \right)}}\left( {N,\bar A} \right)$ for TS and ${\Delta^{({\rm{P}})}}\left( \tau \right)$ for PS) versus power scaling factor ($\Pi^{(\rm{T})}\left( {N,\bar A} \right)$ for TS and $\Pi^{(\rm{P})}\left( {\tau} \right)$ for PS) with $N_t = 4$. 
For a given $0 < \Pi^{(\rm{T})}\left( {N,\bar A} \right) = {\Pi^{({\rm{P}})}}\left( \tau \right) < 1$, the rate scaling factor of TS with $N = 1$, i.e., one single random beam, is the largest among all values of $N$. In addition, ${\Delta^{\left( {\rm{T}} \right)}}\left( {N,\bar A} \right)$ for TS decreases with increasing $N$, but is always larger than ${\Delta^{({\rm{P}})}}\left( \tau \right)$ for PS. The above observations are in accordance with Theorem \ref{Theorem_Optimality_N}. \subsection{Power Outage Probability} In this subsection, we study the power outage probability with a given harvested power target $\hat Q$ at Rx, which is defined as $p_{Q,\,out} \mathop = \limits^\Delta {\Pr \left( {Q < \hat Q} \right)}$ with $Q$ denoting the harvested power in one block. In particular, we are interested in characterizing the asymptotic behavior of $p_{Q,\,out}$ as $P \to \infty$, namely \emph{power diversity order}, which is defined as \begin{equation}\label{Eq_EnergyDiversity} {{d_Q} \buildrel \Delta \over = - \mathop {\lim }\limits_{P \to \infty } \frac{{\log {p_{Q,\,out}}}}{{{{\log }} P}}.} \end{equation} \begin{proposition}\label{Proposition_EnergyDiversity_TBS} In the i.i.d. Rayleigh fading MISO channel, for TS the power outage probability $p_{Q,out}^{({\rm{T}})}$ with $P \to \infty$ is approximated by \begin{equation}\label{Eq_EnergyDiversity_TBS} {p_{Q,out}^{({\rm{T}})} = \left\{ {\begin{array}{*{20}{c}} {{{\left( {{{\hat Q}}/ \left({\theta P}\right)} \right)}^{{N_t}}}} \\ {{{\left( {N\bar A{{\left( {\ln \left(\theta P\right)} \right)}^{ - 1}}} \right)}^{{N_t}}}} \\ \end{array}\begin{array}{*{20}{c}} {,\,\,\,\bar A = 0} \\ {,\,\,\,\bar A > 0.} \\ \end{array}} \right.} \end{equation} \end{proposition} \begin{proof} Please refer to Appendix \ref{App_Proof_Proposition_EnergyDiversity_TBS}. 
\end{proof} From (\ref{Eq_EnergyDiversity}) and (\ref{Eq_EnergyDiversity_TBS}), it can be verified that the power diversity order of TS is $d_Q^{({\rm{T}})} = N_t$ when $\bar A = 0$, i.e., no WIT is required, while $d_Q^{({\rm{T}})} = 0$ with a fixed $\bar A > 0$ when both WIT and WET are implemented, which means that although $p_{Q,out}^{({\rm{T}})}$ decreases with increasing $P$, the decrease of $p_{Q,out}^{({\rm{T}})}$ is much slower than increase of $P$ as $P \to \infty$. \begin{figure} \centering \includegraphics[width=0.58\columnwidth]{EnergyOutage_vs_P_EnergyDiversity} \caption{Power outage probability with $N_t = 2$ and $\hat Q = 1\mu$W.} \label{Fig_EnergyOutage_vs_P_EnergyDiversity} \end{figure} On the other hand, in the i.i.d. Rayleigh fading MISO channels, the power outage probability of PS with $P \to \infty$ can be obtained as $p_{Q,out}^{({\rm{P}})} = {\left( {\frac{\hat Q}{{(1 - \tau) \theta P }}} \right)^{ N_t}}$, $0 \le \tau < 1$; thus, from (\ref{Eq_Energy_TimeSharing}) and the fact that ${F_H}\left( h \right) \approx {h^{ N_t}}$ as $h \to 0$, we obtain the power diversity order as $d_Q^{({\rm{P}})} = N_t$, $0 \le \tau < 1$. Fig. \ref{Fig_EnergyOutage_vs_P_EnergyDiversity} shows the power outage probabilities of TS and PS versus the transmit power $P$ in dBm when $N_t = 2$ and $\hat Q = 1\mu$W with the same setup as for Fig. \ref{Fig_HarvestedEnergy_vs_H}, i.e., $\theta = 10^{-4}$. It is observed that the smallest power outage probabilities are achieved by TS with $\bar A = 0$ or equivalently PS with $\tau = 0$. When $\bar A >0$, $p_{Q,out}^{({\rm{T}})}$ for TS is observed to decrease slower with increasing $P$ than $p_{Q,out}^{({\rm{P}})}$ for PS, since $d_Q^{({\rm{T}})} = 0$, $\bar A > 0$ for TS while $d_Q^{({\rm{P}})} = N_t$, $0 \le \tau < 1$, for PS. Furthermore, it is also observed that $p_{Q,out}^{({\rm{T}})}$ decreases slower with increasing $P$ as $\bar A$ and/or $N$ increases, which is consistent with (\ref{Eq_EnergyDiversity_TBS}). 
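The power diversity order of PS can be read off numerically from $p_{Q,out}^{({\rm{P}})} = F_H\big( {\hat Q}/{((1-\tau)\theta P)} \big)$. A sketch (scipy, illustrative $\tau = 0.5$ and the figure's setup): increasing $P$ by 10dB should cut the outage probability by roughly a factor $10^{N_t} = 100$ at high $P$:

```python
from scipy.special import gammainc

Nt, theta, tau, Qhat = 2, 1e-4, 0.5, 1e-6     # Qhat = 1 uW harvested power target

def p_out_ps(P_watts):
    """PS power outage probability: F_H evaluated at Qhat / ((1 - tau) * theta * P)."""
    return gammainc(Nt, Nt * Qhat / ((1 - tau) * theta * P_watts))

p30 = p_out_ps(1.0)     # 30 dBm
p40 = p_out_ps(10.0)    # 40 dBm
ratio = p30 / p40       # approaches 10^Nt = 100 as P grows (diversity order Nt)
```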
\subsection{Numerical Results}\label{NumericalExample2} In this subsection, we compare the rate-energy performance of TS and PS for a practical SWIPT system setup with $N_t = 2$ and the same channel setup as for Figs. \ref{Fig_HarvestedEnergy_vs_H} and \ref{Fig_AchievableRate_vs_H}. It is further assumed that the energy conversion efficiency is set to $\zeta = 0.5$ to reflect practical power harvesting efficiency. \begin{figure} \centering \includegraphics[width=0.6\columnwidth]{AvgRate_vs_P_Different_N} \caption{Comparison of the achievable average rate with $N_t = 2$ and $\Pi = 0.1$, $0.5$, and $0.9$.} \label{Fig_AvgRate_vs_P_Different_N} \end{figure} As inferred from Theorem \ref{Theorem_Optimality_N}, the asymptotic rate scaling factor of TS with $N = 1$ is the largest among all values of $N$, and is also larger than that for PS for a given power scaling factor as $P \to \infty$. However, this does not imply that the largest achievable average rate is always attained for a given average harvested power when $P$ is finite. Therefore, it is necessary to compare the achievable average rates for PS and TS with finite values of $P$. Fig. \ref{Fig_AvgRate_vs_P_Different_N} shows the achievable average rates for PS and TS versus transmit power in dBm under the same power scaling factor $\Pi = {\Pi ^{({\rm{T}})}}\left( {N,\bar A} \right) = {\Pi ^{({\rm{P}})}}\left( {\tau} \right)$, i.e., the same average power harvesting requirement $\bar Q = \zeta \, \theta P \, \Pi$ (e.g., $\bar Q = 45 \, \mu{\rm{W}}$ with $\zeta = 0.5$, $\theta = -40$dB, $\Pi = 0.9$ and $P$ = 30dBm). When $\Pi = 0.9$, the benefit from a larger rate scaling factor is clearly observed for TS with $N = 1$, since it achieves the largest average information rate. When $\Pi = 0.5$, the achievable average rates for TS are similar for $N = 1$ and $2$, but still grow faster with the transmit power than that for PS. When $\Pi = 0.1$, the gaps between the rate scaling factors of different schemes are small (cf. Fig. 
\ref{Fig_AvgEnergy_vs_PreLogFactor}) and as a result their achievable average rates become similar. It is worth noting that one typical application scenario of SWIPT is the wireless sensor network, for which the power consumption at each sensor node is in general limited to $5$-$20\,\mu{\rm{W}}$. As observed in Fig. \ref{Fig_AvgRate_vs_P_Different_N}, with 30dBm (or 1W) transmit power, the amount of average harvested power at each receiver is $5$-$45\,\mu{\rm{W}}$ with a practical energy harvesting efficiency of $50\%$, which satisfies the power requirement of practical sensors. Furthermore, the received power can always be increased if the transmit power is increased and/or the transmission distance is decreased, to meet the higher power requirements of other wireless applications. Next, Fig. \ref{Fig_AvgRate_EnergyOutage_Region} shows the trade-offs between the achievable average rate and power non-outage probabilities, i.e., $1 - p_{Q,\,out}$, of the TS and PS schemes under the same per-block harvested power requirements $\hat Q = 25\,\mu\rm{W}$ or $45\,\mu \rm{W}$ when the transmit power is set to be $30$dBm. It is observed that the minimum power outage probability of TS is attained by $N = 1$ when the achievable average rate is small, but by $N = 2$ when the achievable average rate is larger, while TS with both $N = 1$ and $2$ outperforms PS. \begin{figure} \centering \includegraphics[width=0.6\columnwidth]{AvgRate_EnergyOutage_Region} \caption{Trade-off between achievable average rate and power non-outage probability with $N_t = 2$, $30$dBm transmit power, and per-block harvested power requirement $\hat Q = 25$ or $45$ $\mu \rm{W}$.} \label{Fig_AvgRate_EnergyOutage_Region} \end{figure} \begin{remark}\label{Remark_Rate_Performance} Employing random beamforming at the transmitter requires additional complexity. 
However, from the above results, it is inferred that the achievable average rate is maximized by using only one single RB, i.e., $N = 1$, when the transmit power is asymptotically large or at finite transmit power when more harvested power is required (which is of more practical interest). In addition, TS with one single RB also optimizes power outage performance when transmit power is finite and large harvested power is required in each transmission block. Therefore, TS with one single RB in general can achieve the optimal WET efficiency and/or reliability with a given WIT rate requirement, thus yielding an appealing low-complexity implementation for practical systems. \end{remark} Finally, we investigate the overall network throughput in the multicast SWIPT system with the proposed TS scheme, which is defined as \begin{equation}\label{Eq_NW_Throughput} {{C} \buildrel \Delta \over = \sum\limits_{i = 1}^K {\left( {1 - {p_{R,out}}\left( i \right)} \right) \bar R} ,} \end{equation} with $K$, ${{p_{R,out}}\left( i \right)}$, and $\bar R$ denoting the number of users in the network, the rate outage probability of the $i$th Rx, and the common information rate, respectively. It is worth noting that each Rx can adjust its threshold $\bar A_i$, $i = 1, \cdots, K$, according to the individual channel condition and rate requirement assuming that Rxs move slowly with a sufficiently large channel coherence time; therefore, rate outage of the $i$th user occurs when its average achievable rate cannot meet the rate target $\bar R$ even with $\bar A_i = \infty$, i.e., when all the received sub-blocks are allocated to ID mode for a given $\theta$. 
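The throughput metric in (\ref{Eq_NW_Throughput}) is a simple outage-weighted sum; a minimal sketch follows, where the user count, common rate, and per-user outage probabilities are hypothetical placeholders rather than values from this paper:

```python
# Sketch of the network throughput C = sum_i (1 - p_Rout_i) * R_bar
# from the multicast SWIPT model. All numbers below are illustrative only.

def network_throughput(p_R_out, R_bar):
    """C: sum over users of (1 - rate-outage probability) times the common rate."""
    return sum((1.0 - p) * R_bar for p in p_R_out)

# Example: K = 4 users, common rate 2 bps/Hz, assumed outage probabilities.
p_out = [0.0, 0.1, 0.25, 0.5]
C = network_throughput(p_out, 2.0)   # (1 + 0.9 + 0.75 + 0.5) * 2 = 6.3 bps/Hz
```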
Accordingly, ${p_{R,out}}\left( i \right)$ is given by \begin{equation}\label{Eq_RateOutage} {{p_{R,out}}\left( i \right) = \Pr \left( {\bar R_i^{\left( {\rm{T}} \right)}}\left( {N,\infty} \right) < \bar R \right),} \end{equation} where ${\bar R_i^{\left( {\rm{T}} \right)}}\left( {N,\bar A} \right) = {{\mathbb E_H}[ {{R_i^{\left( {\rm{T}} \right)}}\left( {h,N,\bar A} \right)} ]}$ with ${{R_i^{\left( {\rm{T}} \right)}}\left( {h,N,\bar A} \right)}$ denoting the achievable rate of the $i$th Rx for given $N$ and $\bar A_i$ in a block with the normalized channel power $h$, which is given by (\ref{Eq_Rate_TBS2}). Note that for each Rx, $\theta_i$, $i = 1, \,\, \cdots, \,\, K$, can be modeled as $\theta_i = \theta_{L,i}\theta_{S,i}$, where $\theta_{L,i}$ and $\theta_{S,i}$ denote the signal power attenuation due to distance-dependent pathloss and shadowing, respectively. Therefore, assuming fixed Rx locations, ${p_{R,out}}\left( i \right)$ should be measured according to the variation of $\theta_{S,i}$. Fig. \ref{Fig_NW_Throughput} shows the trade-off between the network throughput $C$ defined in (\ref{Eq_NW_Throughput}) and the average sum harvested power by all Rxs, denoted by $\bar Q$, under the same channel setup as for Figs. \ref{Fig_HarvestedEnergy_vs_H} and \ref{Fig_AchievableRate_vs_H}, with $K = 10$, $N = 1$, and $P = 30$dBm. The distance between the Tx and the $i$th Rx, denoted by $D_i$, is assumed to be uniformly distributed within $3{\rm{m}} \le D_i \le 10{\rm{m}}$, $i = 1, \,\, \cdots, \,\, K$. It is also assumed that $\theta_{L,i} = C_0 D_i^{-\alpha}$ with $C_0 = -20$dB denoting the pathloss at the reference distance $1$m and $\alpha = 3$ denoting the pathloss exponent. Assuming indoor shadowing, $\theta_{S,i}$ is drawn from a lognormal distribution with a standard deviation of $3.72\,{\rm{dB}}$ \cite{Liberti}. 
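The large-scale channel model used in this simulation (distance-dependent pathloss $\theta_{L,i} = C_0 D_i^{-\alpha}$ times lognormal shadowing with a $3.72$\,dB standard deviation) can be sketched as follows; the function name and sampled distances are illustrative only:

```python
import math
import random

def sample_theta(D, C0_dB=-20.0, alpha=3.0, sigma_dB=3.72, rng=random):
    """Sample theta_i = theta_L * theta_S: pathloss at distance D (meters)
    times lognormal shadowing with standard deviation sigma_dB (in dB)."""
    theta_L = 10.0 ** (C0_dB / 10.0) * D ** (-alpha)   # C_0 * D^-alpha
    theta_S = 10.0 ** (rng.gauss(0.0, sigma_dB) / 10.0)  # lognormal in dB
    return theta_L * theta_S

random.seed(1)
# Rx distances uniform in [3, 10] m, as in the simulation setup.
thetas = [sample_theta(random.uniform(3.0, 10.0)) for _ in range(10)]
```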
Furthermore, each Rx is assumed to set $\bar A$ such that ${\bar R_i^{\left( {\rm{T}} \right)}}\left( {N,\bar A} \right) = \bar R$ if ${\bar R_i^{\left( {\rm{T}} \right)}}\left( {N,\infty} \right) \ge \bar R$, but set $\bar A = 0$, i.e., all the received power is used for power harvesting, otherwise. It is observed that the maximum throughput in the network is $C^* = 46.8$Mbps with average harvested sum power $\bar Q^* = 424\mu$W. \begin{figure} \centering \includegraphics[width=0.6\columnwidth]{NW_Throughput} \caption{Trade-off between network throughput and average sum harvested power with $N_t = 2$, $N = 1$, and $30$dBm transmit power.} \label{Fig_NW_Throughput} \end{figure} In addition, the trade-offs shown in Fig. \ref{Fig_NW_Throughput} can be categorized into three regimes, as denoted by $1)$, $2)$, and $3)$ in the figure. When $\bar R$ is small, i.e., in the regime denoted by $1)$, $C$ increases with $\bar R$ since ${p_{R,out}}\left( i \right)$ is small. In this regime, each Rx sets larger $\bar A_i$ with increasing $\bar R$ to meet the rate target, and thus the harvested sum power decreases accordingly. When $\bar R$ is larger than a certain threshold, i.e., in the regime denoted by $2)$, $C$ decreases with increasing $\bar R$ since the number of Rxs in rate outage increases. In this regime, $\bar Q$ also decreases with increasing $\bar R$ since Rxs with large $\theta_i$'s still set larger $\bar A_i$ with increasing $\bar R$ and their harvested power decreases. Finally, when $\bar R$ further increases, i.e., in the regime denoted by $3)$, $C$ decreases with increasing $\bar R$ whereas $\bar Q$ increases with $\bar R$. This is because most Rxs in the network experience rate outage and thus only harvest power. When $\bar R \to \infty$, therefore, $C \to 0$ and $\bar Q$ becomes equivalent to that without WIT and with WET only, i.e., $\bar R = 0$ with $\bar A = 0$. 
Therefore, for a given throughput $C < C^*$, there are two possible values of average sum harvested power (e.g., $\bar Q_1 = 490\mu$W and $\bar Q_2 = 368\mu$W with $C = 20$Mbps), and thus we can choose the larger value of average sum harvested power for a given throughput (e.g., $\bar Q_1$ for the aforementioned example). \section{Alternative Random Beam Designs}\label{Sec:OtherBeams} It is worth noting that TS with Gaussian random beams (referred to as TS-G), as considered in the preceding sections, may not be practically favorable due to the fact that Gaussian random beams (GRBs) cause large transmit power at certain sub-blocks. Instead, artificial channel fading within each transmission block of each Rx can be generated by employing non-Gaussian random beams with constant transmit power for TS. In this section, we investigate the performance of TS with two alternative RBs other than the GRB, such that the average transmit power remains constant within each transmission block, which are given next. \subsection{Unitary Random Beams (URBs)} In this case, $N$ unitary random vectors obtained from the isotropic distribution \cite{Hassibi} are independently employed for the $N$ random beams at the $k$th sub-block, i.e., $\boldsymbol{\phi}_n\left( k \right)$, $1 \le n \le N$, $\forall k$. With URBs, it is in general difficult to obtain the closed-form expressions for the PDF and CDF of the received channel power $A \left( k \right)$ at each sub-block conditioned on $H = h$. However, if we consider the special case of $N_t = 2$ and $N = 1$, it is known that with URBs $A \left( k \right)$ is uniformly distributed within $[0, 2h]$. 
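The stated fact that, for $N_t = 2$ and $N = 1$, the URB received channel power $A(k)$ is uniform on $[0, 2h]$ can be checked with a quick Monte Carlo sketch; the fixed channel vector below is an arbitrary illustrative choice:

```python
import math
import random

random.seed(0)

def crandn():
    # Standard circularly-symmetric complex Gaussian sample (unit variance).
    return complex(random.gauss(0.0, math.sqrt(0.5)),
                   random.gauss(0.0, math.sqrt(0.5)))

# Fixed channel vector; its squared norm plays the role of 2h.
hvec = [1.2 + 0.5j, -0.3 + 0.9j]
two_h = sum(abs(x) ** 2 for x in hvec)

samples = []
for _ in range(20000):
    g = [crandn(), crandn()]
    norm = math.sqrt(sum(abs(x) ** 2 for x in g))
    phi = [x / norm for x in g]                     # isotropic unit beam (URB)
    a = abs(sum(hc.conjugate() * p for hc, p in zip(hvec, phi))) ** 2
    samples.append(a)

mean_a = sum(samples) / len(samples)                # Uniform[0, 2h] mean: h
frac_below_half = sum(s < two_h / 2 for s in samples) / len(samples)  # ~0.5
```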
Thus, given a threshold $\bar A \ge 0$, the amount of harvested power in each block with URBs can be obtained using (\ref{Eq_Energy_TBS2}) as \begin{equation}\label{Eq_URB_Energy} {Q^{(\rm{U})}\left( {h,\bar A} \right) = \left\{ {\begin{array}{*{20}{c}} {\theta P(h - \frac{{\bar A}^2}{4h})} \\ 0 \\ \end{array}} \right.\begin{array}{*{20}{c}} {,\,\,\,\, 0 \le \bar A \le 2h\,\,\,} \\ {,\,\,\,\,\,\,\,\, \bar A > 2h. \,\,\,\,\,\,\,\,} \\ \end{array}} \end{equation} In the i.i.d. fading MISO channel, the average harvested power for TS with URBs (referred to as TS-U) given a fixed threshold $\bar A \ge 0$ is obtained as ${\bar Q^{({\rm{U}})}}\left( {\bar A} \right) = \int_0^\infty {{Q^{({\rm{U}})}}\left( {h,\bar A} \right){f_H}\left( h \right)dh}$, where ${f_H}\left( h \right) = 4h{e^{ - 2h}}$ is given by (\ref{Eq_pdf_H}) for $N_t = 2$. It is worth noting that in the special case of $N_t = 2$ and $N = 1$, the unconditional distribution of $A \left( k \right)$ with URBs after averaging over the fading channels can be shown to be the exponential distribution, where the unconditional PDF is given by ${q_A^{(\rm{U})}}\left( a \right) = {e^{ - a}}$. Therefore, $\bar Q^{(\rm{U})} \left( {\bar A} \right)$ can be alternatively obtained as \[ {{\bar Q}^{({\rm{U}})}}\left( {\bar A} \right) = \int_{\bar A}^\infty {\theta P a{q_A^{(\rm{U})}}\left( a \right)da} \] \begin{equation}\label{Eq_AvgEnergy_TS_U} {\,\,\,\,\,\, = \theta P \Gamma \left( {2,\bar A} \right), } \end{equation} which is equivalent to ${Q^{(\rm{T})}}\left( {1,1,\bar A} \right)$ for TS-G given by (\ref{Eq_Energy_TBS3}) with $N = 1$ and $h = 1$. 
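As a numerical sanity check of (\ref{Eq_AvgEnergy_TS_U}), the Monte Carlo average of $\theta P\,A\,{\bf 1}(A \ge \bar A)$ with $A \sim \mathrm{Exp}(1)$ should match $\theta P\,\Gamma(2,\bar A) = \theta P (1+\bar A)e^{-\bar A}$; the lumped scale $\theta P$ and the threshold below are illustrative normalized values:

```python
import math
import random

random.seed(2)
theta_P = 1.0        # theta * P lumped as one scale factor (illustrative)
A_bar = 0.8          # ID/EH threshold (illustrative)

# Unconditional received power A ~ Exp(1) under URBs (N_t = 2, N = 1).
n = 200000
acc = 0.0
for _ in range(n):
    a = random.expovariate(1.0)
    if a >= A_bar:                 # sub-block allocated to energy harvesting
        acc += theta_P * a
mc = acc / n

# theta * P * Gamma(2, A_bar) = theta * P * (1 + A_bar) * exp(-A_bar)
closed_form = theta_P * (1.0 + A_bar) * math.exp(-A_bar)
```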
Similarly, given $\bar A \ge 0$, the achievable average transmission rate for TS-U is obtained as \[ {{\bar R}^{({\rm{U}})}}\left( {\bar A} \right) = \int_0^{\bar A} {{{\log }_2}\left( {1 + \frac{\theta P a}{\sigma^2}} \right){q_A^{(\rm{U})}}\left( a \right)da} \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \] \begin{equation}\label{Eq_AvgRate_TS_U} {\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, = \frac{{{e^{ \frac{\sigma^2}{{\theta P}}}}}}{{\ln 2}}\left( {{E_1}\left( {\frac{\sigma^2}{\theta P}} \right) - {E_1}\left( \bar A + {\frac{\sigma^2}{{\theta P}}} \right)} \right) - {e^{ - \bar A}}{\log _2}\left( {1 + \frac{\theta P \bar A}{\sigma^2}} \right),} \end{equation} with ${E_n}\left( z \right) = \int_1^\infty {{e^{ - zt}}{t^{ - n}}dt}$ denoting the exponential integral function for integer $n \ge 0$, which is also equivalent to ${R^{({\rm{T}})}}\left( {1,1,\bar A} \right) $ for TS-G given by (\ref{Eq_Rate_TBS2}) with $h = 1$ and $N = 1$. In addition, given fixed $\bar A \ge 0$ and per-block power harvesting requirement $\hat Q > 0$, the power outage probability of TS-U in the i.i.d. Rayleigh fading MISO channel with $N_t = 2$ and $N = 1$ can be obtained from (\ref{Eq_URB_Energy}) as \begin{equation}\label{Eq_URB_EnergyOutage} {p_{Q,out}^{({\rm{U}})} = {F_H}\left( {\frac{{{\hat Q} + \sqrt {{{\hat Q}^2} + {{\theta^2}{P^2}{\bar A}^2}} }}{{2\theta P}}} \right),} \end{equation} where ${F_H}\left( h \right)$ is given by (\ref{Eq_CDF_H}) for $N_t = 2$. From (\ref{Eq_EnergyDiversity}) and (\ref{Eq_URB_EnergyOutage}), it is easily verified that TS-U in the case of $N_t = 2$ and $N = 1$ has the power diversity order of $0$, the same as TS-G. 
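The TS-U power outage probability can also be cross-checked without the closed form: since $Q^{(\rm U)}(h,\bar A)$ in (\ref{Eq_URB_Energy}) is increasing in $h$, the outage probability equals $F_H(h^*)$, where $h^*$ solves $Q^{(\rm U)}(h^*,\bar A) = \hat Q$. A sketch with illustrative normalized parameters, taking $F_H$ as the $N_t = 2$ CDF implied by $f_H(h) = 4he^{-2h}$:

```python
import math
import random

theta_P = 1.0    # lumped theta * P scale (illustrative, normalized units)
A_bar = 1.0
Q_hat = 0.6      # per-block harvested-power requirement (same units)

def Q_urb(h):
    # Harvested power per block under URBs (N_t = 2, N = 1), cf. Eq_URB_Energy.
    return theta_P * (h - A_bar ** 2 / (4.0 * h)) if A_bar <= 2.0 * h else 0.0

# Root-solve Q_urb(h*) = Q_hat by bisection (Q_urb is increasing in h).
lo, hi = A_bar / 2.0, 100.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if Q_urb(mid) < Q_hat:
        lo = mid
    else:
        hi = mid
h_star = 0.5 * (lo + hi)
# F_H(h) = 1 - (1 + 2h) e^{-2h} for f_H(h) = 4 h e^{-2h} (N_t = 2).
p_out_analytic = 1.0 - (1.0 + 2.0 * h_star) * math.exp(-2.0 * h_star)

random.seed(3)
n = 200000
# H ~ Gamma(shape 2, rate 2): sum of two independent Exp(2) variables.
hits = sum(Q_urb(random.expovariate(2.0) + random.expovariate(2.0)) < Q_hat
           for _ in range(n))
p_out_mc = hits / n
```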
\subsection{Binary Random Beams (BRBs)} In this case, a random subset of $N$ out of the $N_t$ transmit antennas at the Tx, $1 \le N \le N_t$, is selected to transmit at each sub-block, which is equivalent to selecting ${\bf{\Phi}} \left( k \right) = [ {{\boldsymbol{\phi} _1}( k )\,\,{\boldsymbol{\phi} _2}( k )\,\, \cdots \,\,{\boldsymbol{\phi} _N}( k )} ] \in {{\mathbb R}^{{N_t} \times N}}$, $\forall k$, where ${\boldsymbol{\phi} _n}( k ) = [{\phi _{n,1}}\left( k \right) \,\,\, {\phi _{n,2}}\left( k \right) \,\,\, \cdots {\phi _{n,N_t}}\left( k \right)]^T$, $1 \le n \le N$, with ${\phi _{n,i}}\left( k \right) \in \left\{ {0,1} \right\}$, $1 \le i \le N_t$, such that ${\left\| {\boldsymbol{\phi} _n}( k ) \right\|^2} = 1$ and ${{\boldsymbol{\phi} _{m}^{T}}{{\left( {k} \right)}}{\boldsymbol{\phi} _n}\left( k \right)} = 0$, $n \ne m$. We assume that all the subsets of the selected antennas are equally probable. Consider the special case of $N_t = 2$ and $N = 1$. Denote ${\bf{h}} = {[{h_1}\,\,\,{h_2}]^T}$, $V = \max ( \, {{{\left| {{h_1}} \right|}^2},{{\left| {{h_2}} \right|}^2}} )$, and $W = \min ( \, {{{\left| {{h_1}} \right|}^2},{{\left| {{h_2}} \right|}^2}} )$. Note that in this case the received channel power at each sub-block is either $A \left( k \right) = V$ or $A \left( k \right) = W$, each of which occurs with a probability of $1/2$. Thus, given $V = v$, $W = w$, and a fixed threshold $\bar A \ge 0$, the amount of harvested power in each block with BRBs is obtained using (\ref{Eq_Energy_TBS2}) as \begin{equation}\label{Eq_AS_Energy} {Q^{(\rm{B})}\left( {v,w,\bar A} \right) = \left\{ {\begin{array}{*{20}{c}} \theta P\left( {v+w} \right)/2 \\ {\theta Pv / 2} \\ 0 \\ \end{array}\begin{array}{*{20}{c}} {, \,\,\,\,\,\,\, \bar A < w \,\,\,\,\,\,\,} \\ {, \,\,\, w \le \bar A \le v} \\ {, \,\,\,\,\,\,\, \bar A > v. \,\,\,\,\,\,\,} \\ \end{array}} \right.} \end{equation} Similar to TS-U, in the i.i.d. 
fading MISO channel, it can be shown that with $N_t = 2$ and $N = 1$ the unconditional distribution of $A \left( k \right)$ with BRBs after averaging over the fading channels is the exponential distribution, where the unconditional PDF is also given by ${q_A^{(\rm{B})}}\left( a \right) = {e^{ - a}}$. Therefore, given a fixed threshold $\bar A \ge 0$, $\bar Q^{(\rm{B})} \left( {\bar A} \right) = \bar Q^{(\rm{U})} \left( {\bar A} \right)$ and $\bar R^{(\rm{B})} \left( {\bar A} \right) = \bar R^{(\rm{U})} \left( {\bar A} \right)$, where $\bar Q^{(\rm{B})} \left( {\bar A} \right)$ and $\bar R^{(\rm{B})} \left( {\bar A} \right)$ denote the average harvested power and achievable average information rate for TS with BRBs (referred to as TS-B), respectively, and $\bar Q^{(\rm{U})} \left( {\bar A} \right)$ and $\bar R^{(\rm{U})} \left( {\bar A} \right)$ for TS-U are given by (\ref{Eq_AvgEnergy_TS_U}) and (\ref{Eq_AvgRate_TS_U}), respectively. In addition, given fixed $\bar A \ge 0$ and per-block power harvesting requirement $\hat Q > 0$, the power outage probability of TS-B in the i.i.d. 
Rayleigh fading MISO channel with $N_t = 2$ and $N = 1$ can be obtained from (\ref{Eq_AS_Energy}) as (see Appendix \ref{App_Derivation_P_E_out_B} for the detailed derivation) \[ p_{Q,out}^{({\rm{B}})} = {\left( {1 - {e^{ - \bar A}}} \right)^2} + {\bf{1}}\left( {\bar A < 2D} \right) \cdot 2{e^{ - 2\left( {\bar A + D} \right)}}\left( { - 1 + {e^{\bar A}}} \right)\left( { - {e^{\bar A}} + {e^{2D}}} \right) \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \] \begin{equation}\label{Eq_BRB_EnergyOutage} {\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, + {\bf{1}}\left( {\bar A < D} \right)\left( {{e^{ - 2\left( {\bar A + D} \right)}}{{\left( {{e^{\bar A}} - {e^D}} \right)}^2} + {e^{ - \bar A - 2D}}\left( {\left( { - 1 + \bar A - D} \right){e^{\bar A}} + {e^D}} \right)} \right),} \end{equation} where $D = {\hat Q}/({\theta P})$, and ${\bf{1}}\left( {x < y} \right)$ denotes the indicator function given by \[{\bf{1}}\left( {x < y} \right) = \left\{ {\begin{array}{*{20}{c}} 1 \\ 0 \\ \end{array}\begin{array}{*{20}{c}} {,\,\,\,\, {\rm{if}} \,\, x < y \,\,\,\,\,} \\ {,\,\,\,{\rm{otherwise}}.} \\ \end{array}} \right.\] From (\ref{Eq_BRB_EnergyOutage}), it can be seen that both ${\bf{1}}\left( {\bar A < 2D} \right) = 0$ and ${\bf{1}}\left( {\bar A < D} \right) = 0$ if $P > 2\hat Q / (\theta \bar A)$, and thus $p_{Q,out}^{({\rm{B}})} = { ( {1 - {e^{ - \bar A}}} )^2}$. Therefore, TS-B in the case of $N_t = 2$ and $N = 1$ also has the power diversity order of $0$, the same as TS-G and TS-U. \begin{figure} \centering \includegraphics[width=0.6\columnwidth]{AvgRate_vs_P_DifferentBeams} \caption{Comparison of the achievable average rate for different RB designs with $N_t = 2$, $N = 1$, and $\Pi = 0.9$.} \label{Fig_AvgRate_vs_P_DifferentBeams} \end{figure} Fig. \ref{Fig_AvgRate_vs_P_DifferentBeams} shows the achievable average rates of TS-G, TS-U, TS-B, and PS versus transmit power in dBm for the same setup as for Fig. 
\ref{Fig_AvgRate_vs_P_Different_N}, under the same average power harvesting requirement with $\Pi = 0.9$. It is observed that the achievable average information rates of TS-U and TS-B are the same, which is as expected for the case considered here of $N_t = 2$ and $N = 1$. It is also observed that the achievable average rates of TS-U and TS-B are larger than that of PS, but smaller than that of TS-G. This result originates from the fact that the artificial channel fading generated by URBs or BRBs in this case is less substantial over time than that generated by GRBs, due to the limitation of constant average transmit power over sub-blocks with URBs or BRBs. \begin{figure} \centering \includegraphics[width=0.6\columnwidth]{EnergyOutage_vs_P_FixedRbar} \caption{Comparison of power outage probability for different RB designs with $N_t = 2$, $N = 1$, $\bar R = 2$ bps/Hz, and $\hat Q = 25\mu \rm{W}$.} \label{Fig_EnergyOutage_vs_P_FixedRbar} \end{figure} Fig. \ref{Fig_EnergyOutage_vs_P_FixedRbar} shows the power outage probabilities of TS-U and TS-B versus transmit power in dBm for the same setup as for Fig. \ref{Fig_AchievableRate_vs_H}, when $\bar R = 2$ bps/Hz and $\hat Q = 25$ $\mu \rm{W}$, as compared to those of PS and TS-G. Among the TS schemes, it is observed that the power outage probability of TS-G is the smallest. The power outage probability of TS-U is observed to be similar to that of TS-G when the transmit power is small, but becomes larger than that of TS-G when the transmit power exceeds $30$dBm. The power outage probability of TS-B is observed to lie between those of TS-U and TS-G. It is also observed that the power outage probability of PS is larger than those of all TS schemes when the transmit power is small, but is the smallest when the transmit power exceeds $33$dBm. 
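The claim in the BRB subsection that, for $N_t = 2$ and $N = 1$, the unconditional received power is exponentially distributed (selecting one of two i.i.d. exponential antenna gains at random) is easy to verify empirically:

```python
import math
import random

random.seed(4)
n = 200000
count_tail, total = 0, 0.0
for _ in range(n):
    g1 = random.expovariate(1.0)              # |h_1|^2
    g2 = random.expovariate(1.0)              # |h_2|^2
    a = g1 if random.random() < 0.5 else g2   # BRB picks one antenna at random
    total += a
    count_tail += (a > 1.0)

mean_a = total / n                 # Exp(1) mean: 1
tail = count_tail / n              # Exp(1) tail at 1: e^{-1} ~ 0.368
```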
\section{Conclusion} \label{Sec:Conclusion} This paper has studied a novel receiver mode switching scheme for the MISO multicast SWIPT system when the channel is only known at the receiver, but unknown at the transmitter. The proposed scheme exploits the benefit of opportunistic energy harvesting over artificial channel fading induced by employing multi-antenna random beamforming at the transmitter. By investigating the achievable average information rate, average harvested power/power outage probability, and their various trade-offs, it is revealed that the proposed scheme yields better power and information transfer performance than the reference scheme of periodic switching without transmit random beamforming when the harvested power requirement is sufficiently large. In particular, employing one single random beam for the proposed scheme is proved to achieve the asymptotically optimal trade-off between the average information rate and average harvested power when the transmit power goes to infinity. Moreover, it is shown by simulations that the best trade-offs between average information rate and average harvested power/power outage probability are also achieved by the proposed scheme employing one single random beam for the large power harvesting targets of most practical interest, even with finite transmit power. 
\appendices \section{Derivations of ${R^{({\rm{T}})}}\left( {h,1,\bar A} \right)$ and ${R^{({\rm{T}})}}\left( {h,2,\bar A} \right)$}\label{App_Rate_N1_Derivation} From (\ref{Eq_TBS_pdf}) with $N = 1$ and (\ref{Eq_Rate_TBS2}), ${R^{({\rm{T}})}}\left( {h,1,\bar A} \right)$ can be expressed as \begin{equation}\label{App_Rate_N1_1} {{R^{({\rm{T}})}}\left( {h,1,\bar A} \right) = \frac{1}{{\ln 2}}\int_0^{\bar A} {\ln \left( {1 + \tilde P a} \right)\frac{1}{h}{e^{ - \frac{a}{h}}}} da} \end{equation} \begin{equation}\label{App_Rate_N1_2} {\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, = \frac{1}{{\ln 2}}\left( { - {e^{ - \frac{{\bar A}}{h}}}\ln \left( {1 + \tilde P \bar A} \right) + \int_0^{\bar A} {\frac{\tilde P}{{1 + \tilde P a}}{e^{ - \frac{a}{h}}}} da} \right),} \end{equation} where $\tilde P = \frac{\theta P}{\sigma^2}$ and (\ref{App_Rate_N1_2}) is obtained from integrating (\ref{App_Rate_N1_1}) by parts. 
By changing a variable as $x = 1 + \tilde P a$, the integral term in (\ref{App_Rate_N1_2}) can be obtained as \[ \int_0^{\bar A} {\frac{\tilde P}{{1 + \tilde P a}}{e^{ - \frac{a}{h}}}} da = \int_1^{1 + \tilde P\bar A} {\frac{1}{x}{e^{ - \frac{{x - 1}}{{\tilde Ph}}}}} dx \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \] \begin{equation}\label{App_Rate_N1_3} {\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, = {e^{ \frac{1}{{\tilde P h}}}}\left( {\int_1^\infty {\frac{1}{x}{e^{ - \frac{x}{{\tilde P h}}}}} dx - \int_{1 + \tilde P \bar A}^\infty {\frac{1}{x}{e^{ - \frac{x}{{\tilde P h}}}}} dx} \right)} \end{equation} \begin{equation}\label{App_Rate_N1_4} { \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, = {e^{ \frac{1}{{\tilde P h}}}}\left( {{E_1}\left( {\frac{1}{{\tilde P h}}} \right) - {E_1}\left( {\frac{{1 + \tilde P \bar A}}{{\tilde P h}}} \right)} \right),} \end{equation} where, for integer $n \ge 0$, ${E_n}\left( z \right) = \int_1^\infty {{e^{ - zt}}{t^{ - n}}dt}$ denotes the exponential integral function; (\ref{App_Rate_N1_4}) is obtained by the change of variable $x = \left( {1 + \tilde P \bar A} \right)y$ for the second integral term in (\ref{App_Rate_N1_3}). 
From (\ref{App_Rate_N1_2}), (\ref{App_Rate_N1_4}), and $\tilde P = \frac{\theta P}{\sigma^2}$, ${R^{({\rm{T}})}}\left( {h,1,\bar A} \right)$ is obtained as \begin{equation}\label{Eq_Rate_TBS_N_1} {{R^{({\rm{T}})}}\left( {h,1,\bar A} \right) = \frac{{{e^{ \frac{\sigma^2}{{\theta P h}}}}}}{{\ln 2}}\left( {{E_1}\left( {\frac{\sigma^2}{{\theta P h}}} \right) - {E_1}\left( {\frac{{\bar A}}{{h}} + \frac{\sigma^2}{\theta Ph}} \right)} \right) - {e^{ - \frac{{\bar A}}{h}}}{\log _2}\left( {1 + \frac{\theta P\bar A}{\sigma^2}} \right).} \end{equation} When $N = 2$, ${R^{({\rm{T}})}}\left( {h,2,\bar A} \right)$ can be derived similarly by integrating (\ref{Eq_Rate_TBS2}) by parts. In this operation, it is necessary to apply the differentiation of the incomplete Gamma function with respect to its order, given by \cite{Geddes} \begin{equation}\label{App_Meijer} {\frac{\partial }{{\partial N}}\Gamma \left( {N,x } \right) = \Gamma \left( {N,x } \right)\ln x + x G_{2,3}^{3,0}\left( {\left. {\begin{array}{*{20}{c}} {0,0} \\ {N - 1, - 1, - 1} \\ \end{array}} \right|x } \right),} \end{equation} where $G_{p,q}^{m,n}\left( {\left. {\begin{array}{*{20}{c}} {{a_1},\,\, \cdots ,\,\,{a_p}} \\ {{b_1},\,\, \cdots ,\,\,{b_q}} \\ \end{array}} \right|z} \right)$ denotes the Meijer-G function, defined as \cite[9.301]{TableOfIntegral} \begin{equation}\label{Meijer} {G_{p,q}^{m,n}\left( {\left. {\begin{array}{*{20}{c}} {{a_1},\,\, \cdots ,\,\,{a_p}} \\ {{b_1},\,\, \cdots ,\,\,{b_q}} \\ \end{array}} \right|z} \right) = \frac{1}{{2\pi i}}\int_L {\frac{{\prod\limits_{j = 1}^m {\Gamma \left( {{b_j} - s} \right)} \prod\limits_{k = 1}^n {\Gamma \left( {1 - {a_k} + s} \right)} }}{{\prod\limits_{j = m + 1}^q {\Gamma \left( {1 - {b_j} + s} \right)} \prod\limits_{k = n + 1}^p {\Gamma \left( {{a_k} - s} \right)} }}} {z^s}ds,} \end{equation} with $\int_L {}$ denoting the Barnes integral. 
By the definition of the Meijer-G function, the last term in (\ref{App_Meijer}) can be represented by \clearpage \begin{equation}\label{App_Meijer_Simplify1} {xG_{2,3}^{3,0}\left( {\left. {\begin{array}{*{20}{c}} {0,0} \\ {N - 1, - 1, - 1} \\ \end{array}} \right|x} \right) = \frac{1}{{2\pi i}}\int_L {\frac{{\Gamma \left( {N - 1 - s} \right)\Gamma \left( { - 1 - s} \right)\Gamma \left( { - 1 - s} \right)}}{{\Gamma \left( { - s} \right)\Gamma \left( { - s} \right)}}{x^{s + 1}}ds}} \end{equation} \begin{equation}\label{App_Meijer_Simplify3} { = G_{2,3}^{3,0}\left( {\left. {\begin{array}{*{20}{c}} {1,1} \\ {0,0,N} \\ \end{array}} \right|x} \right), \,\,\,\,} \end{equation} where (\ref{App_Meijer_Simplify3}) is achieved by a change of variable as $t = s+1$ in (\ref{App_Meijer_Simplify1}). By applying (\ref{App_Meijer})-(\ref{App_Meijer_Simplify3}) to the integration of (\ref{Eq_Rate_TBS2}) by part, ${R^{({\rm{T}})}}\left( {h,2,\bar A} \right)$ can be obtained as \[ {R^{({\rm{T}})}}\left( {h,2,\bar A} \right) = \left( {\frac{2 \sigma^2}{{\theta P h}}{e^{ - \frac{{2\bar A}}{h}}} - {e^{\frac{2 \sigma^2}{{\theta P h}}}}\Gamma \left( {2,2 \left( \frac{{ {\bar A} }}{{h}} + \frac{\sigma^2}{\theta Ph} \right)} \right)} \right){\log _2}\left( {1 + \frac{\theta P\bar A}{\sigma^2}} \right) \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \] \[ + \frac{2 \sigma^2}{{\theta Ph\ln 2}}{e^{\frac{2 \sigma^2}{{\theta Ph}}}}\left( {{E_1}\left( 2 \left( {\frac{{{\bar A}}}{{h}} + \frac{\sigma^2}{\theta Ph}} \right) \right) - {E_1}\left( {\frac{2 \sigma^2}{{\theta Ph}}} \right)} \right) \] \begin{equation}\label{Eq_Rate_TBS_N_2} {\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, + \frac{1}{{\ln 2}}{e^{\frac{2 \sigma^2}{{\theta Ph}}}}\left( {G_{2,3}^{3,0}\left( {\left. {\begin{array}{*{20}{c}} {1,1} \\ {0,0,2} \\ \end{array}} \right|\frac{2\sigma^2}{{\theta Ph}}} \right) - G_{2,3}^{3,0}\left( {\left. 
{\begin{array}{*{20}{c}} {1,1} \\ {0,0,2} \\ \end{array}} \right| 2 \left( {\frac{{ {\bar A} }}{{h}} + \frac{\sigma^2}{\theta Ph} } \right) }\right)} \right).} \end{equation} \section{Derivations of $C_0\left( h, N, \bar A \right)$ in (\ref{Eq_Rate_TBS3})}\label{App_C0_Derivation} From (\ref{Eq_TBS_pdf}) and (\ref{Eq_pdf_H}), $C_0\left( {h,N,\bar A} \right)$ in (\ref{Eq_Rate_TBS3}) can be expressed as \[ C_0 \left( {h,N,\bar A} \right) = \int_0^{\bar A} {{{\log }_2}\left( a \right){f_{A\left| H \right.}}\left( {a\left| h \right.} \right)da} \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \] \begin{equation}\label{App_TBS_Constant1} {\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, = \underbrace {\frac{{{{\left( {N/h} \right)}^N}}}{{\Gamma \left( N \right)\ln 2}}\int_0^\infty {{a^{N - 1}}{e^{ - \frac{N}{h}a}}\ln \left( a \right)da} }_{\alpha} - \underbrace {\frac{{{{\left( {N/h} \right)}^N}}}{{\Gamma \left( N \right)\ln 2}}\int_{\bar A}^\infty {{a^{N - 1}}{e^{ - \frac{N}{h}a}}\ln \left( a \right)da} }_{\beta}.} \end{equation} From \cite[4.352-1]{TableOfIntegral}, $\alpha$ can be derived as \begin{equation}\label{App_TBS_Constant2} {\alpha = \frac{{\Psi \left( N \right)}}{{\ln 2}} + {\log _2}\frac{h}{N}.} \end{equation} Next, by changing variable as $a = \bar A x$, $\beta$ can be modified as \begin{equation}\label{App_TBS_Constant3} {\beta = \underbrace { - {{\left( {\frac{{N\bar A}}{h}} \right)}^N}\frac{{{{\log }_2}\left( {\bar A} \right)}}{{\Gamma \left( N \right)}}\int_1^\infty {{x^{N - 1}}{e^{ - \frac{{N\bar A}}{h}x}}dx} }_{{\beta _1}} - \underbrace {{{\left( {\frac{N\bar A}{h}} \right)}^N}\frac{1}{{\Gamma \left( N \right)\ln 2}}\int_1^\infty {{x^{N - 1}}{e^{ - \frac{{N\bar A}}{h}x}}\ln xdx} }_{{\beta _2}},} \end{equation} where, by a process similar to that used to derive 
(\ref{Eq_Energy_TBS3}), $\beta_1$ is derived as \begin{equation}\label{App_TBS_Constant4} { \beta_1 = - \frac{{\Gamma \left( {N,\frac{{N\bar A}}{{h}}} \right)}}{{\Gamma \left( N \right)}}{\log _2}\left( {\bar A} \right).} \end{equation} In addition, by the change of variable $\frac{{N\bar A}}{{h}} = \vartheta$ (we use $\vartheta$ here to avoid confusion with the attenuation factor $\theta$), $\beta_2$ can be derived from \cite[4.358-1]{TableOfIntegral} as \begin{equation}\label{App_TBS_Constant5} {\beta_2 = \frac{{{\vartheta ^N}}}{{\Gamma \left( N \right)}\ln 2}\int_1^\infty {{x^{N - 1}}{e^{ - \vartheta x}}\ln xdx} = \frac{{{\vartheta ^N}}}{{\Gamma \left( N \right)}\ln 2}\frac{\partial }{{\partial N}}\left( {{\vartheta ^{ - N}}\Gamma \left( {N,\vartheta } \right)} \right) .} \end{equation} Since $\frac{\partial }{{\partial N}}{\vartheta ^{ - N}} = - {\vartheta ^{ - N}}\ln \vartheta$, $\beta_2$ in (\ref{App_TBS_Constant5}) can be obtained from (\ref{App_Meijer})-(\ref{App_Meijer_Simplify3}) as \[ {\beta_2 = \frac{1}{{\Gamma \left( N \right)\ln 2}}\vartheta G_{2,3}^{3,0}\left( {\left. {\begin{array}{*{20}{c}} {0,0} \\ {N - 1, - 1, - 1} \\ \end{array}} \right|\vartheta } \right)} \] \begin{equation}\label{App_TBS_Constant6} {= \frac{1}{{\Gamma \left( N \right)\ln 2}} G_{2,3}^{3,0}\left( {\left. {\begin{array}{*{20}{c}} {1,1} \\ {0,0,N} \\ \end{array}} \right|\vartheta } \right). \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,} \end{equation} From (\ref{App_TBS_Constant1})-(\ref{App_TBS_Constant6}) and by substituting $\vartheta = {\frac{{N\bar A}}{{h}}}$ back in (\ref{App_TBS_Constant6}), we arrive at (\ref{Eq_Rate_TBS3}). This completes the derivation of $C_0 \left( {h,N,\bar A} \right)$ in (\ref{Eq_Rate_TBS3}). 
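The closed-form rate expression (\ref{Eq_Rate_TBS_N_1}) derived in the first appendix can be validated against direct numerical integration of (\ref{App_Rate_N1_1}); the pure-Python series for $E_1$ and the chosen parameter values below are illustrative:

```python
import math

GAMMA = 0.5772156649015329   # Euler-Mascheroni constant

def E1(z):
    # E_1(z) = -gamma - ln z - sum_{k>=1} (-z)^k / (k * k!),
    # a convergent series adequate for the moderate arguments used here.
    s, term = 0.0, 1.0
    for k in range(1, 60):
        term *= -z / k        # term = (-z)^k / k!
        s += term / k
    return -GAMMA - math.log(z) - s

h, A_bar = 1.0, 1.0
P_tilde = 10.0               # theta * P / sigma^2 (illustrative SNR scale)

# Closed form (Eq_Rate_TBS_N_1), with sigma^2/(theta*P*h) = 1/(P_tilde*h):
z0 = 1.0 / (P_tilde * h)
closed = (math.exp(z0) / math.log(2.0)) * (E1(z0) - E1(A_bar / h + z0)) \
         - math.exp(-A_bar / h) * math.log2(1.0 + P_tilde * A_bar)

# Direct trapezoidal integration of (App_Rate_N1_1):
n = 200000
s = 0.0
for i in range(n + 1):
    a = A_bar * i / n
    w = 0.5 if i in (0, n) else 1.0
    s += w * math.log(1.0 + P_tilde * a) * math.exp(-a / h) / h
numeric = (A_bar / n) * s / math.log(2.0)
```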
\section{Proof of Proposition \ref{Proposition_Avg_Scaling}}\label{App_Proof_Proposition_Avg_Scaling} With $P \to \infty$, the achievable average information rate for TS is expressed from (\ref{Eq_Rate_TBS3}) as \[ {\mathbb E_H}\left[ {{R^{({\rm{T}})}}\left( {h,N,\bar A} \right)} \right] = {\mathbb E_H}\left[ {F_{A\left| H \right.}^{(N)}\left( {\bar A\left| h \right.} \right)} {\log _2}\left( \frac{\theta P}{\sigma^2} \right) + {{C_0}\left( {h,N,\bar A} \right)} \right] \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \] \[ \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, = {\mathbb E_H}\left[ {F_{A\left| H \right.}^{(N)}\left( {\bar A\left| h \right.} \right)} \right] {\log _2}P + {\mathbb E_H}\left[ {F_{A\left| H \right.}^{(N)}\left( {\bar A\left| h \right.} \right)} {\log _2}\left( \frac{\theta}{\sigma^2} \right) + {{C_0}\left( {h,N,\bar A} \right)} \right], \] where ${\mathbb E_H}\left[ {F_{A\left| H \right.}^{(N)}\left( {\bar A\left| h \right.} \right)} {\log _2}\left( \frac{\theta}{\sigma^2} \right) + {{C_0}\left( {h,N,\bar A} \right)} \right]$ is a constant not related to $P$ and is thus regarded as $o\left( {{{\log }_2}P} \right)$. 
For an integer $N \ge 1$, note that ${\Gamma \left( {N,x} \right)}$ is equivalently expressed as \cite{Proakis} \begin{equation}\label{App_IncompleteGamma} {{\Gamma \left( {N,x} \right)} = (N - 1)!\,{e^{ - x}}{\sum\limits_{m = 0}^{N-1} {\frac{x^m}{{m!}}} }.} \end{equation} From (\ref{Eq_TBS_CDF}), (\ref{Eq_pdf_H}), and (\ref{App_IncompleteGamma}), ${F_A^{(N)}}\left( a \right) = {\mathbb E_H}\left[ {F_{A\left| H \right.}^{(N)}\left( {a\left| h \right.} \right)} \right]$ is obtained as \[ {F_A^{(N)}}\left( a \right) = \int_0^\infty {{F_{A\left| H \right.}^{(N)}}\left( {a\left| h \right.} \right)} {f_H}\left( h \right)dh \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, \] \[ \,\,\,\,\,\,\,\,\,\,\,\,\, = \int_0^\infty {\left( {1 - {e^{ - \frac{{Na}}{h}}}\sum\limits_{m = 0}^{N - 1} {\frac{1}{{m!}}{{\left( {\frac{{Na}}{h}} \right)}^m}} } \right)\frac{{N_t^{{N_t}}}}{{\Gamma \left( {{N_t}} \right)}}{h^{{N_t} - 1}}{e^{ - {N_t}h}}dh} \] \begin{equation}\label{App_F_A_a1} {\,\,\,\,\,\, = 1 - \frac{{N_t^{{N_t}}}}{{\Gamma \left( {{N_t}} \right)}}\sum\limits_{m = 0}^{N - 1} {\frac{1}{{m!}}{{\left( {Na} \right)}^m}\underbrace {\int_0^\infty {{h^{{N_t} - m - 1}}{e^{ - {N_t}h - \frac{{Na}}{h}}}dh} }_{\buildrel \Delta \over = \,\, \alpha} } .} \end{equation} By applying \cite[3.471-9]{TableOfIntegral} to $\alpha$, we can obtain (\ref{Eq_Marginal_CDF}). This completes the proof of Proposition \ref{Proposition_Avg_Scaling}. 
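The finite-sum identity (\ref{App_IncompleteGamma}) for the upper incomplete Gamma function with integer order is easy to verify numerically; the truncation bounds below are illustrative:

```python
import math

def upper_gamma_numeric(N, x, n=100000, upper=60.0):
    # Gamma(N, x) = int_x^inf t^(N-1) e^(-t) dt, truncated at t = upper
    # and evaluated by the trapezoidal rule (illustrative accuracy only).
    s = 0.0
    for i in range(n + 1):
        t = x + (upper - x) * i / n
        w = 0.5 if i in (0, n) else 1.0
        s += w * t ** (N - 1) * math.exp(-t)
    return s * (upper - x) / n

def upper_gamma_sum(N, x):
    # Finite-sum form: Gamma(N, x) = (N-1)! e^(-x) sum_{m=0}^{N-1} x^m / m!
    return math.factorial(N - 1) * math.exp(-x) * \
        sum(x ** m / math.factorial(m) for m in range(N))
```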
\section{Proof of Proposition \ref{Proposition_AvgEnergy}}\label{Proof_Proposition_AvgEnergy} From (\ref{Eq_Energy_TBS3}), (\ref{Eq_pdf_H}), and (\ref{App_IncompleteGamma}), ${\bar Q^{({\rm{T}})}}\left( {N,\bar A} \right) = {\mathbb E_H}\left[ {{Q^{\left( {\rm{T}} \right)}}\left( {h,N,\bar A} \right)} \right]$ is further obtained as
\begin{align}
{\bar Q^{({\rm{T}})}}\left( {N,\bar A} \right) &= \theta P \mathbb E_H \left[ {{h}{e^{ - \frac{{N\bar A}}{{h}}}}\sum\limits_{k = 0}^N {\frac{1}{{k!}}{{\left( {N\bar A} \right)}^k}{{h}^{ - k}}} } \right] \nonumber \\
&= \theta P \mathbb E_H \left[ {{e^{ - \frac{{N\bar A}}{{h}}}}\sum\limits_{k = 0}^N {\frac{1}{{k!}}{{\left( {N\bar A} \right)}^k}{{h}^{1 - k}}} } \right] \nonumber \\
&= \theta P\frac{{N_t^{{N_t}}}}{{\Gamma \left( {{N_t}} \right)}}\sum\limits_{k = 0}^N {\frac{1}{{k!}}{{\left( {N\bar A} \right)}^k}\underbrace {\int_0^\infty {{h^{{N_t} - k}}{e^{ - {N_t}h - \frac{{N\bar A}}{h}}}dh} }_{\buildrel \Delta \over = \,\, \beta} }. \label{App_AvgEnergy3}
\end{align}
By applying \cite[3.471-9]{TableOfIntegral} to $\beta$ in (\ref{App_AvgEnergy3}), we obtain (\ref{Eq_AvgEnergy_Slope}). This completes the proof of Proposition \ref{Proposition_AvgEnergy}. \section{Proof of Theorem \ref{Theorem_Optimality_N}}\label{App_Proof_Theorem_Optimality_N} First, the former part of Theorem \ref{Theorem_Optimality_N} can be proved using the following lemma, by considering an arbitrary distribution of $A$ with PDF and CDF denoted by ${g_A}\left( a \right)$ and ${G_A}\left( a \right)$, respectively, where ${G_A}\left( a \right) > 0$ for $a > 0$.
\begin{lemma}\label{Lemma_OOS_vs_TBS} Given ${G_A}\left( {\bar A} \right) = \tau$ with $0 < \tau < 1$ and $0 < \bar A < \infty$, \begin{equation}\label{Eq_Lemma_OOS_vs_TBS} {\int_{\bar A}^\infty {a{g_A}\left( a \right)da} > \left( {1 - \tau } \right)b,} \end{equation} where $b = \int_0^\infty {a{g_A}\left( a \right)da}$ denotes the mean of $A$. \end{lemma} \begin{proof} Please refer to Appendix \ref{App_Proof_Lemma_TBS_vs_OOS}. \end{proof} For TS, we have the rate scaling factor ${{\Delta^{\left( {\rm{T}} \right)}}\left( {N,\bar A} \right) = {F_A^{(N)}}\left( \bar A \right)}$ from Proposition \ref{Proposition_Avg_Scaling}, and it can be shown from (\ref{Eq_Marginal_CDF}) that ${F_A^{(N)}}\left( a \right) > 0$ for $a > 0$. In addition, the energy scaling factor for TS can be alternatively expressed as ${{\Pi^{\left( {\rm{T}} \right)}}\left( {N,\bar A} \right) = \int_{\bar A}^{\infty} {a{{f_A^{(N)}}\left( a \right)}da}}$ with ${f_A^{(N)}}\left( a \right) = \mathbb E_H \left[{f_{A\left| H \right.}^{(N)}}\left( {a\left| h \right.} \right)\right]$ denoting the unconditional PDF of $A$ after averaging over the fading distribution. Furthermore, it can easily be verified that $\int_{0}^{\infty} {a{{f_A^{(N)}}\left( a \right)}da} = 1$. Given ${F_A^{(N)}}\left( \bar A \right) = \tau$ with $0 < \bar A < \infty$ and $0 < \tau < 1$, it can thus be verified from Lemma \ref{Lemma_OOS_vs_TBS} that ${{\Delta^{\left( {\rm{T}} \right)}}\left( {N,\bar A} \right)} + {\Pi^{\left( {\rm{T}} \right)}}\left( {N,\bar A} \right) > 1$, by substituting $b$, ${{g_A}\left( a \right)}$, and ${{G_A}\left( a \right)}$ in Lemma \ref{Lemma_OOS_vs_TBS} with $1$, ${f_A^{(N)}}\left( a \right)$, and ${F_A^{(N)}}\left( a \right)$, respectively.
Since ${\Delta^{({\rm{P}})}}\left( \tau \right) + {\Pi^{({\rm{P}})}}\left( \tau \right) = 1$ for PS from (\ref{Eq_PreLog_OOS}) and (\ref{AvgEnergy_OOS}), it follows that ${{\Delta^{\left( {\rm{T}} \right)}}\left( {N,\bar A} \right)} + {\Pi^{\left( {\rm{T}} \right)}}\left( {N,\bar A} \right) > {\Delta^{({\rm{P}})}}\left( \tau \right) + {\Pi^{({\rm{P}})}}\left( \tau \right)$. Therefore, we have ${{\Delta^{\left( {\rm{T}} \right)}}\left( {N,\bar A} \right)} > {\Delta^{({\rm{P}})}}\left( \tau \right)$ for given $0 < {\Pi^{\left( {\rm{T}} \right)}}\left( {N,\bar A} \right) = {\Pi^{({\rm{P}})}}\left( \tau \right) < 1$. This proves the former part of Theorem \ref{Theorem_Optimality_N}. It is worth remarking that Lemma \ref{Lemma_OOS_vs_TBS} implies that TS in general yields a better trade-off between the rate and energy scaling factors than PS, provided that the average received channel power for TS is the same as that for PS; the former part of Theorem \ref{Theorem_Optimality_N} for the i.i.d. Rayleigh fading MISO channel with a fixed threshold $\bar A$ is proved on this basis. As another example, even for a transmission block with $H = h$, TS with $N$ RBs, $1 \le N \le N_t$, yields a better trade-off between the rate and energy scaling factors than PS. This can be proved by substituting $b$, ${{g_A}\left( a \right)}$, and ${{G_A}\left( a \right)}$ in Lemma \ref{Lemma_OOS_vs_TBS} with $h$, $f_{A\left| H \right.}^{(N)}( {a\left| h \right.} )$ in (\ref{Eq_TBS_pdf}), and $F_{A\left| H \right.}^{(N)}( {a\left| h \right.} )$ in (\ref{Eq_TBS_CDF}), respectively. This originates from the fact that for both the TS and PS schemes the rate scaling factor is determined by the percentage of sub-blocks allocated to ID mode, whereas the energy scaling factor is determined by the percentage of sub-blocks assigned to EH mode as well as their channel power values.
Note that the TS scheme assigns the subset of sub-blocks with the largest channel power to EH mode, as inferred from (\ref{Eq_Opt_1_Solution}). Therefore, given the same percentage of sub-blocks allocated to EH mode for the TS and PS schemes, i.e., $1 - {G_{A}}\left( {\bar A} \right) = 1 - \tau$, the energy scaling factor of TS is larger than that of PS, while the rate scaling factors of the two schemes are the same, i.e., ${G_{A}}\left( {\bar A} \right) = \tau$. Next, to prove the latter part of Theorem \ref{Theorem_Optimality_N}, we consider two arbitrary distributions of $A$ with PDFs denoted by ${g_A}\left( a \right)$ and ${u_A}\left( a \right)$, and the corresponding CDFs denoted by ${G_A}\left( a \right)$ and ${U_A}\left( a \right)$, respectively. It is assumed that $\int_{0}^\infty {a{g_A}\left( a \right)da} = \int_{0}^\infty {a{u_A}\left( a \right)da} = b > 0$. It is further assumed that ${G_A}\left( a \right) > 0$ and ${U_A}\left( a \right) > 0$ for $a > 0$, and that ${G_A}\left( a \right)$ and ${U_A}\left( a \right)$ intersect at $a = \hat A$, satisfying \begin{equation}\label{Eq_Condition2} {\left\{ {\begin{array}{*{20}{l}} {{G_{A}}\left( {a} \right) > {U_{A}}\left( {a} \right),\,\,\,{\rm{if}}\,\,0 < a < \hat A} \\ {{G_{A}}\left( {a} \right) = {U_{A}}\left( {a} \right),\,\,\,{\rm{if}}\,\,a = \hat A} \\ {{G_{A}}\left( {a} \right) < {U_{A}}\left( {a} \right),\,\,\,{\rm{if}}\,\, a > \hat A. } \\ \end{array}} \right.} \end{equation} \begin{lemma}\label{Lemma_Optimality} Given $0 < G_A\left( {\bar A_g} \right) = U_A\left( {\bar A_u} \right) < 1$ with $0 < \bar A_g, \bar A_u < \infty$, \begin{equation}\label{Eq_Lemma_TBS_Optimality} {\int_{\bar A_g}^\infty {a{g_A}\left( a \right)da} > \int_{\bar A_u}^\infty {a{u_A}\left( a \right)da}.} \end{equation} \end{lemma} \begin{proof} Please refer to Appendix \ref{App_Proof_Lemma_Optimality}.
\end{proof} The latter part of Theorem \ref{Theorem_Optimality_N} can be proved using Lemma \ref{Lemma_Optimality} as follows. Given $1 \le N < M \le N_t$ for TS, it can be verified that $\int_0^\infty {af_A^{(N)}\left( a \right)da} = \int_0^\infty {af_A^{(M)}\left( a \right)da} = 1$. Furthermore, it can be shown from (\ref{Eq_Marginal_CDF}) that ${{F_A^{(N)}}\left( a \right)}$ and ${{F_A^{(M)}}\left( a \right)}$ correspond to ${G_A}\left( a \right)$ and ${U_A}\left( a \right)$ in (\ref{Eq_Condition2}), respectively (cf. Fig. \ref{Fig_Marginal_CDF}). By substituting ${{f_A^{(N)}}\left( a \right)}$, ${{f_A^{(M)}}\left( a \right)}$, ${{F_A^{(N)}}\left( a \right)}$, and ${{F_A^{(M)}}\left( a \right)}$ for ${{g_A}\left( a \right)}$, ${{u_A}\left( a \right)}$, ${{G_A}\left( a \right)}$, and ${{U_A}\left( a \right)}$ in Lemma \ref{Lemma_Optimality}, respectively, it can be verified that ${\Pi^{\left( {\rm{T}} \right)}}\left( {N,\bar A_N} \right) > {\Pi^{\left( {\rm{T}} \right)}}\left( {M,\bar A_M} \right)$ when ${{\Delta^{\left( {\rm{T}} \right)}}\left( {N,\bar A_N} \right)} = {{\Delta^{\left( {\rm{T}} \right)}}\left( {M,\bar A_M} \right)}$ with $0 < \bar A_N, \bar A_M < \infty$, since ${{\Delta^{\left( {\rm{T}} \right)}}\left( {N,\bar A_N} \right)} = {{F_A^{(N)}}\left( {\bar A_N} \right)}$ from Proposition \ref{Proposition_Avg_Scaling} and ${{\Pi^{\left( {\rm{T}} \right)}}\left( {N,\bar A_N} \right) = \int_{\bar A_N}^{\infty} {a{{f_A^{(N)}}\left( a \right)}da}}$.
This guarantees that ${{\Delta^{\left( {\rm{T}} \right)}}\left( {N,\bar A_N} \right)} > {{\Delta^{\left( {\rm{T}} \right)}}\left( {M,\bar A_M} \right)}$ for given $0 < {\Pi^{\left( {\rm{T}} \right)}}\left( {N,\bar A_N} \right) = {\Pi^{\left( {\rm{T}} \right)}}\left( {M,\bar A_M} \right) < 1$, since both ${{\Delta^{\left( {\rm{T}} \right)}}\left( {N,\bar A_N} \right)}$ and ${{\Delta^{\left( {\rm{T}} \right)}}\left( {M,\bar A_M} \right)}$ decrease monotonically with increasing ${\Pi^{\left( {\rm{T}} \right)}}\left( {N,\bar A_N} \right)$ and ${\Pi^{\left( {\rm{T}} \right)}}\left( {M,\bar A_M} \right)$, respectively. This proves the latter part of Theorem \ref{Theorem_Optimality_N}. As a remark, Lemma \ref{Lemma_Optimality} compares the trade-offs between the rate and energy scaling factors of TS schemes under two different distributions of channel power induced by different values of $N$, provided that both distributions have the same average channel power and satisfy the condition in (\ref{Eq_Condition2}). The latter part of Theorem \ref{Theorem_Optimality_N} for the i.i.d. Rayleigh fading MISO channel with a fixed $\bar A$ is one application of Lemma \ref{Lemma_Optimality}. As another example, even for a transmission block with $H = h$, a better trade-off between the rate and energy scaling factors is attained with $N$ than with $M$ RBs, $1 \le N < M \le N_t$, since $F_{A\left| H \right.}^{(N)}\left( {a\left| h \right.} \right)$ and $F_{A\left| H \right.}^{(M)}\left( {a\left| h \right.} \right)$ correspond to ${G_A}\left( a \right)$ and ${U_A}\left( a \right)$ in (\ref{Eq_Condition2}), respectively, as shown from (\ref{Eq_TBS_CDF}). This is due to the fact that the artificial channel fading is more substantial when a smaller number of RBs is employed, together with an argument similar to that for Lemma \ref{Lemma_OOS_vs_TBS}. Combining the proofs of the above two parts, Theorem \ref{Theorem_Optimality_N} is proved.
\section{Proof of Lemma \ref{Lemma_OOS_vs_TBS}}\label{App_Proof_Lemma_TBS_vs_OOS} Integrating by parts, $\int_0^{\bar A} {a{g_A}\left( a \right)da}$ can be evaluated as \begin{equation}\label{App_Integration_by_Part} {\int_0^{\bar A} {a{g_A}\left( a \right)da} = \bar A{G_A}\left( {\bar A} \right) - \int_0^{\bar A} {{G_A}\left( a \right)da}.} \end{equation} Assume that $\bar A$ is given such that ${G_{A}}\left( {\bar A} \right) = \tau$, $0 < \tau < 1$. From (\ref{Eq_Lemma_OOS_vs_TBS}) and (\ref{App_Integration_by_Part}), we have
\begin{align}
\tilde E &= \int_{\bar A}^\infty {a{g_A}\left( a \right)da} - \left( {1 - \tau } \right)b \nonumber \\
&= \int_0^\infty {a{g_A}\left( a \right)da} - \int_0^{\bar A} {a{g_A}\left( a \right)da} - b\left( {1 - {G_A}\left( {\bar A} \right)} \right) \nonumber \\
&= \left( {b - \bar A} \right){G_A}\left( {\bar A} \right) + \int_0^{\bar A} {{G_A}\left( a \right)da}, \label{App_EnergyGap_OOS}
\end{align}
and the resulting derivative \begin{equation}\label{App_Differentiation_OOS} {\mu = \frac{{d\tilde E}}{{d\bar A}} = \left( {b - \bar A} \right){g_A}\left( \bar A \right),} \end{equation} where $\mu = 0$ is achieved at $\bar A = b$. From (\ref{App_EnergyGap_OOS}) and (\ref{App_Differentiation_OOS}), it is observed that $\tilde E > 0$ for $0 < \bar A \le b$, and that $\tilde E$ increases monotonically with $\bar A$ until $\bar A = b$. For $\bar A > b$, we have $\mu < 0$, and thus $\tilde E$ decreases monotonically with increasing $\bar A$. Because both $\int_{\bar A}^\infty {a{g_A}\left( a \right)da}$ and $\left( {1 - {G_A}\left( {\bar A} \right)} \right)b$ vanish as $\bar A \to \infty$, we have $\mathop {\lim }\limits_{\bar A \to \infty } \tilde E = 0$; together with the monotonic decrease, this shows that $\tilde E > 0$ also for $\bar A > b$. Therefore, $\tilde E > 0$ in (\ref{App_EnergyGap_OOS}) for any $\bar A > 0$. This completes the proof of Lemma \ref{Lemma_OOS_vs_TBS}.
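Lemma \ref{Lemma_OOS_vs_TBS} is equivalent to the statement $\mathbb E\left[A\,{\bf 1}\{A > \bar A\}\right] > \Pr\left(A > \bar A\right)\mathbb E\left[A\right]$, i.e., the conditional mean above any threshold exceeds the overall mean. A Monte Carlo sketch of this inequality, using two illustrative distributions (not the channel-induced ones from the main text):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
checks = []
for samples in (rng.exponential(2.0, n),      # mean b = 2
                rng.lognormal(0.0, 1.0, n)):  # a heavy-tailed example
    b = samples.mean()
    for a_bar in (0.2, b, 3.0 * b):
        tail = samples > a_bar
        # E[A 1{A > a_bar}]  vs  P(A > a_bar) * E[A] = (1 - tau) * b:
        checks.append((samples[tail].sum() / n, tail.mean() * b))
holds = all(lhs > rhs for lhs, rhs in checks)
```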
\clearpage \section{Proof of Lemma \ref{Lemma_Optimality}}\label{App_Proof_Lemma_Optimality} Denote $\Delta = G_A( {\bar A_g} ) = U_A( {\bar A_u} )$, $0 < \bar A_g, \bar A_u < \infty$. From (\ref{Eq_Lemma_TBS_Optimality}) and (\ref{App_Integration_by_Part}), we have
\begin{align}
\tilde E &= \int_{{{\bar A}_g}}^\infty {a{g_A}\left( a \right)da} - \int_{{{\bar A}_u}}^\infty {a{u_A}\left( a \right)da} \nonumber \\
&= \int_0^{{{\bar A}_u}} {a{u_A}\left( a \right)da} - \int_0^{\bar A_g} {a{g_A}\left( a \right)da} \label{App_EnergyGap_Temp} \\
&= \Delta \left( {{{\bar A}_u} - {{\bar A}_g}} \right) + \int_0^{{{\bar A}_g}} {{G_{A}}\left( {a} \right)da} - \int_0^{{{\bar A}_u}} {{U_{A}}\left( {a} \right)da}. \label{App_EnergyGap_General}
\end{align}
According to (\ref{Eq_Condition2}), there are three cases to address for given $0 < \Delta < 1$. \emph{1)} ${{\bar A}_g} = {{\bar A}_u} = \hat A$: In this case, $\Delta = G_A ( {\hat A} ) = U_A ( {\hat A} ) \buildrel \Delta \over = \hat \Delta$. Since it is assumed in (\ref{Eq_Condition2}) that ${G_{A}}( {a} ) > {U_{A}}( {a} )$ for $0 < a < \hat A$, $\tilde E$ is evaluated from (\ref{App_EnergyGap_General}) as \begin{equation}\label{App_EnergyGap_Equal} {\tilde E = \int_0^{\hat A} {\left( {{G_{A}}\left( {a} \right) - {U_{A}}\left( {a} \right)} \right)da} > 0.} \end{equation} \emph{2)} $0 < {{\bar A}_g}, {{\bar A}_u} < \hat A$: It can be inferred from (\ref{Eq_Condition2}) that ${{\bar A}_g} < {{\bar A}_u} < \hat A$, which results in $0 < \Delta < \hat \Delta$.
From (\ref{App_EnergyGap_General}), we have \begin{equation}\label{App_EnergyGap_Smaller} {\tilde E = \underbrace {U_A\left( \bar A_u \right) \left( {{{\bar A}_u} - {{\bar A}_g}} \right) - \int_{{{\bar A}_g}}^{{{\bar A}_u}} {{U_{A}}\left( {a} \right)da} }_{\buildrel \Delta \over = \,\, \beta} + \underbrace {\int_0^{{{\bar A}_g}} {\left( {{G_{A}}\left( {a} \right) - {U_{A}}\left( {a} \right)} \right)da} }_{\buildrel \Delta \over = \,\, \alpha} .} \end{equation} Since ${{\bar A}_g} < {{\bar A}_u}$ and ${G_{A}}( {a} ) > {U_{A}}( {a} )$ with $0 < a < \hat A$, it can be verified that $\alpha > 0$ and $\beta > 0$, and thus $\tilde E > 0$. \emph{3)} $\hat A < {{\bar A}_g}, {{\bar A}_u} < \infty$: It can be inferred from (\ref{Eq_Condition2}) that $\hat A < {{\bar A}_u} < {{\bar A}_g}$, which results in $\hat \Delta < \Delta < 1$. From (\ref{App_EnergyGap_Temp}), we have \begin{equation}\label{App_EnergyGap_Larger} {\tilde E = \underbrace {\int_0^{\hat A} {a\left( {{u_{A}}\left( {a} \right) - {g_{A}}\left( {a} \right)} \right)da} }_{\buildrel \Delta \over = \,\, \delta} - \underbrace {\left( {\int_{\hat A}^{{{\bar A}_g}} {a{g_{A}}\left( {a} \right)da} - \int_{\hat A}^{{{\bar A}_u}} {a{u_{A}}\left( {a} \right)da} } \right)}_{\buildrel \Delta \over = \,\, \varepsilon} ,} \end{equation} with $\delta > 0$ as shown in (\ref{App_EnergyGap_Temp})-(\ref{App_EnergyGap_Equal}). 
In addition, it can be verified that \begin{equation}\label{App_Limitation1} {\mathop {\lim }\limits_{\Delta \to \hat \Delta } \varepsilon = \mathop {\lim }\limits_{{{\bar A}_g} \to \hat A} \int_{\hat A}^{{{\bar A}_g}} {a{g_{A}}\left( {a} \right)da} - \mathop {\lim }\limits_{{{\bar A}_u} \to \hat A} \int_{\hat A}^{{{\bar A}_u}} {a{u_{A}}\left( {a} \right)da} = 0,} \end{equation} \begin{equation}\label{App_Limitation2} {\mathop {\lim }\limits_{\Delta \to 1} \varepsilon = \mathop {\lim }\limits_{\scriptstyle {{\bar A}_g} \to \infty , \hfill \atop \scriptstyle {{\bar A}_u} \to \infty \hfill} \varepsilon = \int_0^{\hat A} {a{u_{A}}\left( {a} \right)da} - \int_0^{\hat A} {a{g_{A}}\left( {a} \right)da} = \delta .} \end{equation} Since $\frac{{d\Delta }}{{d{{\bar A}_g}}} = \frac{d}{{d{{\bar A}_g}}}{G_{A}}\left( {\bar A_g} \right) = {g_{A}}\left( {\bar A_g} \right)$ and $\frac{{d\Delta }}{{d{{\bar A}_u}}} = \frac{d}{{d{{\bar A}_u}}}{U_{A}}\left( {\bar A_u} \right) = {u_{A}}\left( {\bar A_u} \right)$, we have
\begin{align}
\frac{{d\varepsilon}}{{d\Delta }} &= \frac{1}{{\frac{{d\Delta }}{{d{{\bar A}_g}}}}}\frac{d}{{d{{\bar A}_g}}}\int_{\hat A}^{{{\bar A}_g}} {a{g_{A}}\left( {a} \right)da} - \frac{1}{{\frac{{d\Delta }}{{d{{\bar A}_u}}}}}\frac{d}{{d{{\bar A}_u}}}\int_{\hat A}^{{{\bar A}_u}} {a{u_{A}}\left( {a} \right)da} \nonumber \\
&= {{\bar A}_g} - {{\bar A}_u} > 0. \label{App_Differentiation}
\end{align}
From (\ref{App_EnergyGap_Larger})-(\ref{App_Differentiation}), it can be verified that ${\tilde E} = \delta - \varepsilon$ monotonically decreases from $\delta$ toward $0$ as $\Delta$ increases over $\hat \Delta < \Delta < 1$, i.e., ${\tilde E} > 0$ for $\hat A < {{\bar A}_g}, {{\bar A}_u} < \infty$. Combining the above three cases, Lemma \ref{Lemma_Optimality} is thus proved.
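Lemma \ref{Lemma_Optimality} can be illustrated numerically with two unit-mean distributions obeying the single-crossing condition (\ref{Eq_Condition2}), e.g., $G_A$ from $\mathrm{Exp}(1)$ and $U_A$ from a Gamma distribution with shape $2$ and scale $1/2$ (both stand-ins for illustration, not the channel-induced CDFs of the main text). For each common CDF level, the thresholds are matched and the tail means compared; a sketch assuming SciPy's `brentq`, `quad`, and frozen distributions:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq
from scipy.stats import expon, gamma as gamma_dist

g_dist = expon()                      # G_A: Exp(1), mean 1
u_dist = gamma_dist(a=2, scale=0.5)   # U_A: Gamma(2, 1/2), mean 1

def tail_mean(dist, a_bar):
    # int_{a_bar}^inf a * pdf(a) da
    val, _ = quad(lambda a: a * dist.pdf(a), a_bar, np.inf)
    return val

gaps = []
for tau in (0.2, 0.5, 0.8):
    a_g = brentq(lambda a: g_dist.cdf(a) - tau, 1e-9, 50.0)  # G_A(a_g) = tau
    a_u = brentq(lambda a: u_dist.cdf(a) - tau, 1e-9, 50.0)  # U_A(a_u) = tau
    gaps.append(tail_mean(g_dist, a_g) - tail_mean(u_dist, a_u))
lemma_holds = all(gap > 0 for gap in gaps)
```

Here Exp(1) is a mean-preserving spread of the Gamma(2, 1/2) distribution, so their CDFs cross exactly once as required by (\ref{Eq_Condition2}).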
\section{Proof of Proposition \ref{Proposition_EnergyDiversity_TBS}}\label{App_Proof_Proposition_EnergyDiversity_TBS} Given a transmission block with $H = h$, ${Q^{(\rm{T})}}\left( {h,N,0} \right) = \theta Ph$ from (\ref{Eq_Energy_TBS3}). In the i.i.d. Rayleigh fading MISO channel, given $\hat Q > 0$, $p_{Q,out}^{({\rm{T}})}$ for TS with $\bar A = 0$ behaves as $P \to \infty$ like $\Pr\left( {h < \frac{\hat Q}{\theta P}} \right) = F_{H}\left( {\frac{\hat Q}{\theta P}} \right) \approx {\left( {\frac{\hat Q}{{\theta P }}} \right)^{ N_t}}$, since ${F_{H}\left( {h} \right) \approx {h ^{N_t}}}$ as $h \to 0$ from (\ref{Eq_CDF_H}). This proves the first equality in (\ref{Eq_EnergyDiversity_TBS}) for $\bar A = 0$. When $\bar A > 0$, the harvested power per block for a given $h$, ${Q^{(\rm{T})}}\left( {h,N,\bar A} \right)$ in (\ref{Eq_Energy_TBS3}), is a monotonically increasing function of $h$, since ${\Gamma \left( {\alpha,x} \right)}$ is a monotonically decreasing function of $x$. For a given power requirement $\hat Q > 0$, denote by $\bar h$ the minimum value of $h$ such that ${Q^{(\rm{T})}}\left( {\bar h,N,\bar A} \right) \ge \hat Q$, i.e., \begin{equation}\label{App_g_hbar} {\vartheta\left( {\bar h} \right) \buildrel \Delta \over = \bar h\,\frac{{\Gamma \left( {N + 1,N\bar A/\bar h} \right)}}{{\Gamma \left( {N + 1} \right)}} = \frac{\hat Q}{\theta P}.} \end{equation} Thus, the power outage probability for TS with given $N$, $\bar A$, and $\hat Q$ is obtained as $p_{Q, \,out}^{({\rm{T}})}( {N, \bar A,\hat Q} ) = F_H(\bar h)$.
Since $\vartheta\left( {h} \right)$ increases with $h$, it follows from (\ref{App_g_hbar}) that $\bar h \to 0$ as $P \to \infty$, under which we have \begin{equation}\label{App_q_h_temp} {\vartheta\left( \bar h \right) = \bar h {e^{ - \left( {N\bar A/\bar h} \right)}}{\sum\limits_{k = 0}^N {\frac{1}{{k!}}\left( {\frac{{N\bar A}}{\bar h}} \right)} ^k}} \end{equation} \begin{equation}\label{App_q_h} {\,\, \approx \bar h {e^{ - \left( {N\bar A/\bar h} \right)}}\frac{1}{{N!}}{\left( {\frac{{N\bar A}}{\bar h}} \right)^N},} \end{equation} where (\ref{App_q_h_temp}) follows from (\ref{App_IncompleteGamma}) (with $N$ replaced by $N+1$) since $N \ge 1$ is an integer, and (\ref{App_q_h}) holds since the $k = N$ term dominates the sum as $\bar h \to 0$. Taking the logarithm of (\ref{App_q_h}) and using $\vartheta\left( {\bar h} \right) = \frac{\hat Q}{\theta P}$, we have \begin{equation}\label{App_EnergyOutage1} {{\left( {N - 1} \right)\ln x - N\bar Ax = \ln \left( {\frac{{\hat QN!}}{{\theta P{{\left( {N\bar A} \right)}^N}}}} \right)},} \end{equation} where $x = 1/\bar h$. With $\bar h \to 0$, i.e., $x \to \infty$, the left-hand side of (\ref{App_EnergyOutage1}) can be further approximated as $- N\bar Ax$. Therefore, $\bar h$ as $P \to \infty$ can be approximated by \begin{equation}\label{App_EnergyOutage2} {\bar h = {N\bar A{{\left( {\ln \left( {\frac{{\theta P{{\left( {N\bar A} \right)}^N}}}{{\hat QN!}}} \right)} \right)}^{ - 1}}} \approx {N\bar A {{\left( {\ln \left( \theta P \right)} \right)}^{ - 1}}}.} \end{equation} From (\ref{App_EnergyOutage2}) and the fact that ${F_H}\left( h \right) \approx {h^{ N_t}}$ as $h \to 0$, we obtain the second equality in (\ref{Eq_EnergyDiversity_TBS}) for $\bar A > 0$. Combining the proofs of the first and second equalities in (\ref{Eq_EnergyDiversity_TBS}), Proposition \ref{Proposition_EnergyDiversity_TBS} is thus proved.
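The quality of the approximation (\ref{App_EnergyOutage2}) can be checked by solving (\ref{App_g_hbar}) for $\bar h$ numerically and comparing with $N\bar A/\ln(\theta P)$ as $P$ grows. A sketch with illustrative values $N = 2$, $\bar A = 1$, $\hat Q = 1$ (assuming SciPy's `gammaincc` for the regularized $\Gamma(N+1,\cdot)/\Gamma(N+1)$ and `brentq` as the root finder); the ratio converges toward $1$, slowly because of the neglected $\ln x$ term:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import gammaincc

N, A_bar, Q_hat = 2, 1.0, 1.0   # illustrative parameters

def vartheta(h):
    # h * Gamma(N+1, N*A_bar/h) / Gamma(N+1), cf. (App_g_hbar)
    return h * gammaincc(N + 1, N * A_bar / h)

def ratio(theta_P):
    # exact root h_bar of vartheta(h) = Q_hat/(theta P), vs the approximation
    h_bar = brentq(lambda h: vartheta(h) - Q_hat / theta_P, 1e-12, 10.0)
    return h_bar / (N * A_bar / np.log(theta_P))

r1, r2 = ratio(1e8), ratio(1e24)
```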
\section{Derivation of (\ref{Eq_BRB_EnergyOutage})}\label{App_Derivation_P_E_out_B} From (\ref{Eq_AS_Energy}), the energy outage probability of TS-B is obtained as \begin{equation}\label{App_Outage_Total} {p_{Q,out}^{({\rm{B}})} = \Pr \left( {v < \bar A} \right) + \Pr \left( {w \le \bar A \le v, \, \frac{{\theta Pv}}{2} < \hat Q} \right) + \Pr \left( {w > \bar A, \, \frac{{\theta P\left( {v + w} \right)}}{2} < \hat Q} \right).} \end{equation} First, $\Pr \left( {v < \bar A} \right)$ in (\ref{App_Outage_Total}) is given by \begin{equation}\label{App_Outage_Case1} {\Pr \left( {v < \bar A} \right) = {F_V}\left( \bar A \right) = {\left( {1 - {e^{ - \bar A}}} \right)^2},} \end{equation} where ${F_V}\left( v \right)$ denotes the CDF of $V = \max ( \, {{{\left| {{h_1}} \right|}^2},{{\left| {{h_2}} \right|}^2}} )$, given by ${F_V}\left( v \right) = {\left( {1 - {e^{ - v}}} \right)^2}$, since ${{\left| {{h_1}} \right|}^2}$ and ${{\left| {{h_2}} \right|}^2}$ are independent unit-mean exponential random variables.
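The order-statistics fact used here can be checked empirically; a minimal Monte Carlo sketch, assuming unit-mean exponential channel powers as implied by $F_V$:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000
h1, h2 = rng.exponential(1.0, n), rng.exponential(1.0, n)
v = np.maximum(h1, h2)

# Empirical CDF of V = max(|h1|^2, |h2|^2) against (1 - e^{-v})^2:
max_gap = max(abs((v <= a).mean() - (1 - np.exp(-a)) ** 2) for a in (0.3, 1.0, 2.5))
```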
The second term in (\ref{App_Outage_Total}) can be obtained as
\begin{align}
\Pr \left( {w \le \bar A \le v, \, \frac{{\theta Pv}}{2} < \hat Q} \right) &= \Pr \left( {w \le \bar A, \, \bar A \le v \le 2D, \, w \le v} \right) \nonumber \\
&= {\bf{1}}\left( {\bar A < 2D} \right) \int_{\bar A}^{2D} {\int_0^{\bar A} {{f_{V,W}}\left( {v,w} \right)dwdv} } \nonumber \\
&= {\bf{1}}\left( {\bar A < 2D} \right) 2{e^{ - 2\left( {\bar A + D} \right)}}\left( { - 1 + {e^{\bar A}}} \right)\left( { - {e^{\bar A}} + {e^{2D}}} \right), \label{App_Outage_Case2}
\end{align}
where $D = \frac{\hat Q}{\theta P}$ and ${f_{V,W}}\left( {v,w} \right)$ denotes the joint PDF of $V$ and $W$, given by ${f_{V,W}}\left( {v,w} \right) = 2{e^{ - v}}{e^{ - w}}$ for $v > w$.
Finally, the last term in (\ref{App_Outage_Total}) can be obtained as
\begin{align}
\Pr \left( {w > \bar A, \, \frac{{\theta P\left( {v + w} \right)}}{2} < \hat Q} \right) &= \Pr \left( {w > \bar A, \, v + w < 2D } \right) \nonumber \\
&= {\bf{1}}\left( {\bar A < D} \right) \left( \int_{\bar A}^{D} {\int_{\bar A}^v {{f_{V,W}}\left( {v,w} \right)dwdv} } + \int_{D}^{2D - \bar A} {\int_{\bar A}^{2D - v} {{f_{V,W}}\left( {v,w} \right)dwdv} } \right) \nonumber \\
&= {\bf{1}}\left( {\bar A < D} \right)\left( {{e^{ - 2\left( {\bar A + D} \right)}}{{\left( {{e^{\bar A}} - {e^D}} \right)}^2} + 2{e^{ - \bar A - 2D}}\left( {\left( { - 1 + \bar A - D} \right){e^{\bar A}} + {e^D}} \right)} \right), \label{App_Outage_Case3}
\end{align}
where the factor of $2$ in the last term is inherited from the joint density ${f_{V,W}}\left( {v,w} \right) = 2{e^{ - v}}{e^{ - w}}$. From (\ref{App_Outage_Total})-(\ref{App_Outage_Case3}), we obtain (\ref{Eq_BRB_EnergyOutage}).
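As a cross-check, the outage probability assembled from (\ref{App_Outage_Total}) can be compared against a direct Monte Carlo simulation of the same event. A sketch for the regime $\bar A < D$ with illustrative values $\bar A = 0.5$ and $D = 1$; the closed form below carries a factor of $2$ in the last term, inherited from the joint density $f_{V,W}(v,w) = 2e^{-v}e^{-w}$ (this reduces to $1 - e^{-2D}\left(1 + 2D\right)$ at $\bar A = 0$, as it must):

```python
import numpy as np
from math import exp

A_bar, D = 0.5, 1.0   # illustrative values in the regime A_bar < D
n = 1_000_000

rng = np.random.default_rng(1)
h1, h2 = rng.exponential(1.0, n), rng.exponential(1.0, n)
v, w = np.maximum(h1, h2), np.minimum(h1, h2)

# Outage rule of (App_Outage_Total); theta*P*x/2 < Q_hat  <=>  x < 2*D.
outage = ((v < A_bar)
          | ((w <= A_bar) & (A_bar <= v) & (v < 2 * D))
          | ((w > A_bar) & (v + w < 2 * D)))
p_mc = outage.mean()

# Closed form: first, second, and third terms of the decomposition.
p1 = (1 - exp(-A_bar)) ** 2
p2 = 2 * exp(-2 * (A_bar + D)) * (exp(A_bar) - 1) * (exp(2 * D) - exp(A_bar))
p3 = (exp(-2 * (A_bar + D)) * (exp(A_bar) - exp(D)) ** 2
      + 2 * exp(-A_bar - 2 * D) * ((A_bar - D - 1) * exp(A_bar) + exp(D)))
p_cf = p1 + p2 + p3
```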